\section{Introduction}
For nearly 25 years of research on QCD, the reconciliation of chiral symmetry and the lattice regularisation was considered a formidable problem. The Nielsen-Ninomiya no-go theorem prohibits a local definition of a single chiral mode at finite cutoff which reproduces the Dirac operator in the continuum limit.
Minimally doubled fermions comply with the no-go theorem by having two chiral modes, which are separated in the Brillouin zone but degenerate in the continuum limit. Their separation explicitly violates hypercubic symmetry, which entails counterterms \cite{1} with the same reduced symmetry.
Karsten-Wilczek fermions \cite{2} are a particular class of minimally doubled fermions with two residual zero modes from the original na\"\i ve fermion action. The Wilczek parameter $ \zeta $, which must satisfy $ |\zeta|>1/2 $, is by default fixed to $ 1 $. The full action \cite{1} reads
\small
\begin{eqnarray}
S_{f,\alpha}^{KW}
&=&
\sum\limits_{x}\sum\limits_{\mu}\tfrac{1+d(g_0^2)\delta_{\mu\alpha}}{2a}
\big(\overline{\psi}_{x} \gamma_{\mu} U_{\mu}(x) \psi_{x+\hat{\mu}}
-\overline{\psi}_{x+\hat{\mu}} \gamma_{\mu} U_{\mu}^{\dagger}(x-\hat{\mu}) \psi_{x}\big)+
\big(\overline{\psi}_{x} m_{0} \psi_{x}\big) \nonumber \\
&-&
\sum\limits_{\mu\neq\alpha}i\tfrac{\zeta}{2a}
\big(\overline{\psi}_{x} \gamma_{\alpha} U_{\mu}(x) \psi_{x+\hat{\mu}}
+\overline{\psi}_{x+\hat{\mu}} \gamma_{\alpha} U_{\mu}^{\dagger}(x-\hat{\mu}) \psi_{x}\big)+
\big(\overline{\psi}_{x} \left(i\tfrac{3\zeta+c(g_0^2)}{a}\gamma_{\alpha}\right) \psi_{x}\big), \label{eq: Sf}\\
S_{g,\alpha}^{KW}
&=&
\beta \sum\limits_{x}\sum\limits_{\mu<\nu}\big(1-\tfrac{1}{N_c}\mathrm{Re} \mathrm{Tr} P_{\mu\nu}(x)\big)(1+d_P(g_0^2)\delta_{\mu\alpha}), \label{eq: Sg}
\end{eqnarray}\normalsize
which includes three counterterms.
The zero modes are aligned at $ k_\alpha=0 $ and $ k_\alpha=\pi/a $ on the $ x_\alpha $-axis, which is commonly chosen as $ x_\alpha=x_0 $. The spinor field $ \psi(x) $ simultaneously contains two tastes with degenerate continuum limit, which are treated as the light quarks.
The Karsten-Wilczek term has non-singlet taste structure \cite{3}
and explicitly breaks $ x_\alpha $-reflection and charge symmetry, but is invariant under their product \cite{3,4}.
Point-split vector and axial currents are obtained with chiral Ward-Takahashi identities. Their conservation has been verified at 1-loop level \cite{1}. Counterterms which expli\-cit\-ly break hypercubic symmetry are indispensable for restoring isotropy to the continuum limit.
One relevant and two marginal operators share the Karsten-Wilczek term's symmetry. Renormalisation of the Karsten-Wilczek action at 1-loop level is covered in great detail in \cite{1}.
Mixing with these operators reflects in anisotropies in the fermionic self-energy,
\small
\begin{equation}
\Sigma = \Sigma_{1} i\slashed{p} + \Sigma_{2}m_{0} + d_{1L}\,i(\gamma_{\alpha}\,p_{\alpha}) + c_{1L} \dfrac{i}{a}\,\gamma_{\alpha},
\end{equation}\normalsize
and in the fermionic contribution to the vacuum polarisation,
\small
\begin{equation}
\left(p_{\mu}p_{\nu}(\delta_{\alpha\mu}+\delta_{\alpha\nu})-\delta_{\mu\nu}(p^2\delta_{\alpha\mu}\delta_{\alpha\nu}+p_{\alpha}^2) \right)\times d_{P,\,1L}.
\end{equation}\normalsize
These anisotropies are removed by setting the coefficients to their $ 1 $-loop values,
\small
\begin{equation}
c=c_{1L}=-29.5320\,C_{F}\,b,\ d=d_{1L}=-0.12554\,C_{F}\,b,\ d_{P}=-12.69766\,C_{2}\,b,\ b=\dfrac{g_{0}^2}{16\pi^2}.
\end{equation}\normalsize
The coefficients inherit the taste structure of the Karsten-Wilczek term:
\small
\begin{equation}
c_{1L}(-\zeta)=-c_{1L}(\zeta),\ d_{1L}(-\zeta)=+d_{1L}(\zeta),\ d_{P,\,1L}(-\zeta)=+d_{P,\,1L}(\zeta).
\end{equation}\normalsize
If minimally doubled fermions are employed in numerical simulations, it is desirable to determine the coefficients non-perturbatively. Boosted perturbation theory \cite{5} employing Parisi's coup\-ling yields estimates (cf. table \ref{tab: boosted coefficients}), which are often close to non-perturbative values.
\begin{table}[hbt]
\center\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$ \beta $ & $ U_{0}^4 $ & $ c_{1L} $ & $ c_{BPT} $ & $ d_{1L} $ & $ d_{BPT} $ & $ d_{P,\,1L} $ & $ d_{P,\,BPT} $ \\
\hline
$ 6.0 $ & $ 0.594 $ & $ -0.249 $ & $ -0.420 $ & $ -0.00106 $ & $ -0.00179 $ & $ -0.0893 $ & $ -0.150 $ \\
\hline
$ 6.2 $ & $ 0.614 $ & $ -0.241 $ & $ -0.393 $ & $ -0.00103 $ & $ -0.00167 $ & $ -0.0865 $ & $ -0.141 $ \\
\hline
\end{tabular}
\caption{Boosted 1-loop coefficients serve as starting point for non-perturbative determinations. Non-perturbative effects are estimated with the fourth root of the average plaquette, $ U_0=\sqrt[4]{\langle \sum_{\mu<\nu}P_{\mu\nu}\rangle} $. Numerical values for $ U_0^4 $ are taken from \cite{6}.}
\label{tab: boosted coefficients}
\vspace{-8pt}
\end{table}
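The boosted values in table \ref{tab: boosted coefficients} can be reproduced by evaluating the 1-loop formulas with Parisi's boosted coupling. The following sketch assumes SU(3) conventions ($g_0^2=6/\beta$, $C_F=4/3$); the function name and interface are illustrative and not taken from the original analysis code.

```python
import math

C_F = 4.0 / 3.0  # SU(3) fundamental Casimir

def kw_coefficients(beta, u0_4, boosted=True):
    """1-loop Karsten-Wilczek counterterm coefficients
       c_1L = -29.5320*C_F*b, d_1L = -0.12554*C_F*b, b = g^2/(16 pi^2).
    With boosted=True, the bare coupling g0^2 = 6/beta is replaced by
    Parisi's boosted coupling g0^2 / U0^4 (U0^4 = average plaquette)."""
    g2 = 6.0 / beta
    if boosted:
        g2 /= u0_4
    b = g2 / (16.0 * math.pi ** 2)
    return -29.5320 * C_F * b, -0.12554 * C_F * b

c, d = kw_coefficients(6.0, 0.594)          # close to c_BPT, d_BPT at beta=6.0
c1, d1 = kw_coefficients(6.0, 1.0, boosted=False)  # close to c_1L, d_1L
```

Evaluating at $\beta=6.0$, $U_0^4=0.594$ reproduces the tabulated $c_{BPT}$ and $d_{BPT}$ to the quoted digits.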
\section{Non-perturbative renormalisation}
The violation of hypercubic symmetry in the Karsten-Wilczek action and its counter\-terms manifests itself as an anisotropy of the transfer matrix of QCD.
Never\-the\-less, fully-tuned counterterm coefficients must minimise the degree of anisotropy which is observed at finite lattice spacing.
Hence, the most straightforward strategy for non-perturbative tuning is a comparison of computations of correlation functions in different Euclidean directions. Since the strength of anisotropies due to the action is a priori unclear, additional causes of anisotropy (e.g. $ L \neq T $) must be avoided.
\subsection{Numerical procedure}
\begin{table}[hbt]
\center\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$ \beta $ & $ a\,[fm] $ & $ r_0 $ & $ L $ & $ n_{cfg} $ & $ m_0\,(\times100) $ & $ c $ & $ d\,(\times1000) $ \\
\hline
$ 6.0 $ & $ 0.093 $ & $ 5.368 $ & $ 32 $ & $ 100 $ & $ 2,3,4,5 $ &
$ [-1.2,+0.3] $ & $ 0.0 $ \\
$ 6.0 $ & $ 0.093 $ & $ 5.368 $ & $ 32 $ & $ 100 $ & $ 1,2,3,4,5 $ &
$ [-0.65,-0.20] $ & $ [-8,+2] $ \\
$ 6.0 $ & $ 0.093 $ & $ 5.368 $ & $ 48 $ & $ 40 $ & $ 2 $ &
$ [-0.65,+0.0] $ & $ 0.0 $ \\
\hline
$ 6.2 $ & $ 0.068 $ & $ 7.360 $ & $ 32 $ & $ 100 $ & $ 1,2,3,4,5 $ &
$ [-0.65,-0.20] $ & $ [-8,+4] $ \\
$ 6.2 $ & $ 0.068 $ & $ 7.360 $ & $ 48 $ & $ 40 $ & $ 2 $ &
$ [-0.65,+0.0] $ & $ 0.0 $ \\
\hline
$ 5.8 $ & $ 0.136 $ & $ 3.668 $ & $ 32 $ & $ 100 $ & $ 2 $ &
$ [-0.65,+0.0] $ & $ [0,+2] $ \\
\hline
\end{tabular}
\caption{Symmetric lattices ($ T=L $) were used for studies of the anisotropy. The parameter $ c $ is varied with smaller step size close to the estimates from boosted perturbation theory. The scale was fixed using the Sommer parameter according to \cite{7}.}
\label{tab: lattice parameters}
\vspace{-4pt}
\end{table}\normalsize
In the quenched approximation, $ d_P $ equals zero due to the absence of virtual quark loops\footnote{
In full QCD, $ d_P $ is fixed by restoring the isotropy of the plaquette at fixed $ c,d $ (cf. \cite{1}).
}
and four-dimensional parameter space is spanned by $ \{\beta,m_0,c,d\} $. Simulations are performed on symmetric lattices ($ L=T $) using the temporal Karsten-Wilczek action ($ x_\alpha=x_0 $) with default Wilczek parameter ($ \zeta=+1 $).
Gaussian smearing \cite{8} at the source is combined with local and smeared sink operators using HYP-smeared \cite{9} link variables. Here, we restrict the discussion to pseudoscalar correlation functions.
The relevant parameter $ c $ is varied at fixed coupling $ \beta $ and quark mass $ m_0 $ in order to establish a smooth relation between hadronic quantities and renormalisation coefficients. The small size of $ d $ in perturbation theory suggests that its influence is mild; hence, we set $ d=0 $ initially. The difference between the pseudoscalar fit masses of the two directions, the mass anisotropy,
\small
\begin{equation}
\Delta(M_{PS}^2)=(M_{PS}^{x_0})^2-(M_{PS}^{x_3})^2,
\label{eq: mass anisotropy}
\end{equation}\normalsize
is used as a tuning criterion for $ c $ at a fixed value of $ d $ and several values of the bare quark mass. Finally, effects due to the variation of $ d $ are studied.
\subsection{Determination of the pseudoscalar mass}
\begin{figure}[htb]
\begin{picture}(360,105)
\put( 15 , 0.0){\includegraphics[height=40mm]{b60effm_x0c0.pdf}}
\put(215.0, 0.0){\includegraphics[height=40mm]{b60effm_x0c1.pdf}}
\end{picture}
\vspace{-4pt}
\caption{ Effective mass plots ($ \beta=6.0,\ L=48,\ m_0=0.02,\ d=0 $) using the ``log'' mass exhibit isolated plateaus of forward (e.g. 8-16) and backward (e.g. 32-40) states. Local (red) and smeared sink (blue) are in good agreement. The left plot shows $ c=0.0 $ and the right plot shows $ c=-0.45 $.
}
\label{fig: x0 effective mass}
\vspace{-4pt}
\end{figure}
The Karsten-Wilczek term explicitly breaks $ T $-symmetry. Thus, it is conceivable that forward and backward propagating states in the $ x_0 $-direction are not degenerate. Hence, $ x_0 $-correlation functions must not be symmetrised. Forward and backward states are separated when the effective mass is obtained as a logarithm of the correlation function,
\small
\begin{equation}
m_{log}(t)=\log\frac{\mathcal{C}(t)}{\mathcal{C}(t+1)}.
\label{eq: log mass}
\vspace{-4pt}
\end{equation}\normalsize
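The ``log'' mass of eq. (\ref{eq: log mass}) is straightforward to evaluate; the sketch below uses a synthetic single-state correlator (illustrative values, not the simulation data) to show how it isolates the forward plateau and why a single exponential fails around $T/2$.

```python
import numpy as np

def log_effective_mass(C):
    """'Log' mass m(t) = log(C(t)/C(t+1)); unlike the 'cosh' mass it does
    not assume forward/backward symmetry, so forward and backward
    plateaus stay separated."""
    C = np.asarray(C, dtype=float)
    return np.log(C[:-1] / C[1:])

# Synthetic correlator with degenerate forward/backward states:
T, m = 48, 0.337
t = np.arange(T)
C = np.exp(-m * t) + np.exp(-m * (T - t))
meff = log_effective_mass(C)
# meff plateaus near m for t << T/2 and breaks down around T/2, where the
# backward state contaminates the single-exponential description.
```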
The single exponential does not provide a good description of the data around $ T/2 $ (cf. figure \ref{fig: x0 effective mass}). Fits to the correlation function probe forward and backward states independently:
\small
\begin{equation}
C_{PS}(t)\equiv A_{f}e^{-m_{f}t}+A_{b} e^{-m_{b}(T-t)}.
\vspace{-4pt}
\end{equation}\normalsize
The 4-parameter fit extracts forward and backward masses as independent parameters. ``Log'' mass plateaus agree within errors with definitions of the ``cosh'' mass.
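A minimal sketch of such a 4-parameter fit, here with \texttt{scipy.optimize.curve\_fit} on synthetic data (the actual fitting code and fit ranges of the analysis are not specified in the text):

```python
import numpy as np
from scipy.optimize import curve_fit

T = 48  # temporal extent

def two_state(t, A_f, m_f, A_b, m_b):
    # Forward and backward states with independent amplitudes and masses
    # (no cosh assumption, so a broken T-symmetry would be visible).
    return A_f * np.exp(-m_f * t) + A_b * np.exp(-m_b * (T - t))

# Synthetic correlator with mild multiplicative noise (illustrative only):
t = np.arange(T)
rng = np.random.default_rng(1)
C = two_state(t, 1.0, 0.337, 1.0, 0.337) * (1 + 1e-3 * rng.standard_normal(T))

popt, pcov = curve_fit(two_state, t, C, p0=(1.0, 0.3, 1.0, 0.3))
A_f, m_f, A_b, m_b = popt  # compare m_f and m_b for signs of T-breaking
```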
No numerical evidence of broken \textit{T}-symmetry was found regardless of $ c $ within $ 1\,\sigma $ (cf. table \ref{tab: forward and backward masses}).
\begin{table}[hbt]
\center\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$ c $ & $ m_f $ (SL) & $m_b $ (SL) & $m_s $ (SL) & $ m_f $ (SS) & $m_b $ (SS) & $m_s $ (SS) \\
\hline
$ +0.0 $ & $ 0.3373(19) $ & $ 0.3370(18) $ & $ 0.3372(14) $ & $ 0.3355(17) $ & $ 0.3375(18) $ & $ 0.3364(13) $ \\
$ -0.45 $ & $ 0.3036(25) $ & $ 0.3027(18) $ & $ 0.3029(15) $ & $ 0.3014(18) $ & $ 0.3008(15) $ & $ 0.3010(13) $ \\
\hline
\end{tabular}
\caption{Forward and backward fit masses in the $ x_0 $-direction ($ \beta=6.0,\ L=48,\ m_0=0.02,\ d=0 $) with local and smeared sink agree within $ 1 $-$ 2\,\sigma $. A tentative ``cosh'' mass fit is consistent within $ 1\,\sigma $.
}
\label{tab: forward and backward masses}
\vspace{-8pt}
\end{table}
\begin{table}[hbt]
\center\footnotesize
\begin{tabular}{|c|c|c|c|c|}
\hline
$ c $ & $ M $ (SL, $ [12,23] $) & $ M $ (SL, $ [20,23] $) & $ M $ (SS, $ [12,23] $) & $ M $ (SS, $ [20,23] $) \\
\hline
$ +0.0 $ & $ 0.2823(9) $ & $ 0.2831(17) $ & $ 0.2963(10) $ & $ 0.2917(20) $ \\
$ -0.45 $ & $ 0.2961(11) $ & $ 0.2973(21) $ & $ 0.3046(10) $ & $ 0.3048(23) $ \\
\hline
\end{tabular}
\caption{Fit masses of smeared-local and smeared-smeared correlation functions in the $ x_3 $-direction ($ \beta=6.0,\ L=48,\ m_0=0.02,\ d=0 $) agree within $ 3\,\sigma $.
}
\label{tab: x_3 masses}
\vspace{-4pt}
\end{table}
\begin{figure}[htb]
\begin{picture}(360,105)
\put( 15 , 0.0){\includegraphics[height=40mm]{b60effm_x3c0.pdf}}
\put(215.0, 0.0){\includegraphics[height=40mm]{b60effm_x3c1.pdf}}
\end{picture}
\vspace{-4pt}
\caption{Effective mass plots ($ \beta=6.0,\ L=48,\ m_0=0.02,\ d=0 $) in the $ x_3 $-direction using the ``cosh'' mass are calculated from symmetrised correlation functions with local (red) and smeared (blue) sink. Plateaus are considerably shorter for $ c=0 $ (left plot) than for $ c=-0.45 $ (right plot).
}
\label{fig: x3 effective mass}
\vspace{-8pt}
\end{figure}
Effective masses in the $ x_3 $-direction (cf. figure \ref{fig: x3 effective mass}) are computed from correlation functions, which were symmetrised over forward and backward propagating states. Excited state contributions persist longer than in figure \ref{fig: x0 effective mass}. The plateaus are more extended in the vicinity of $ c_{BPT} $ (cf. table \ref{tab: boosted coefficients}). Figure \ref{fig: x3 effective mass} demonstrates that effective masses of $ x_3 $-correlation functions with local and smeared sink interpolators reach $ 1 $-$ 2\,\sigma $ level agreement only after $ 16 $-$ 18 $ time slices at $ c=0.0 $ (cf. table \ref{tab: x_3 masses}). Therefore, this analysis of the mass anisotropy with $ L=32 $ (cf. section \ref{sec: minimisation of the anisotropy}) uses only local sinks.
\section{Numerical results}
\vspace{-8pt}
\subsection{Minimisation of the anisotropy}
\label{sec: minimisation of the anisotropy}
\begin{figure}[hbt]
\begin{picture}(360,105)
\put(15.0, 0.0){\includegraphics[height=40mm]{b60extra.pdf}}
\put(215.0, 0.0){\includegraphics[height=40mm]{b60dextra.pdf}}
\end{picture}
\vspace{-8pt}
\caption{Fit masses ($ \beta=6.0,\ L=32,\ m_0=0.02,\ d=0.0 $) are interpolated in $ c \in [-0.65,-0.25] $. The minimum of $ \Delta(M_{PS}^2) $ as a function of $ c $ (right plot) is shallow with respect to statistical errors.
}
\label{fig: interpolation}
\vspace{-4pt}
\end{figure}
The minimisation of eq. (\ref{eq: mass anisotropy}) as a function of $ c $ and $ d $ defines the renormalisation condition. The squared fit masses $ (M_{PS}^{x_\mu})^2 $ with local sinks are interpolated as functions of $ c $ (cf. figure \ref{fig: interpolation}). The interpolations are directly subtracted and the minimum is computed,
\small
\begin{equation}
c_{min}=-\tfrac{(a_1^{x_0}-a_1^{x_3})}{2(a_2^{x_0}-a_2^{x_3})},\quad (M_{PS}^{x_\mu})^2=a_0^{x_\mu}+a_1^{x_\mu}\,c+a_2^{x_\mu}\,c^2.
\vspace{-4pt}
\end{equation}\normalsize
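The minimisation step can be sketched as follows; \texttt{numpy.polyfit} stands in for whatever interpolation routine was actually used, and the toy data in the usage example are illustrative.

```python
import numpy as np

def c_min_from_interpolation(c_vals, M2_x0, M2_x3):
    """Fit (M_PS^{x_mu})^2 = a0 + a1*c + a2*c^2 in both directions and
    return the extremum of the difference Delta(M_PS^2):
        c_min = -(a1^{x0} - a1^{x3}) / (2 (a2^{x0} - a2^{x3}))."""
    a2_x0, a1_x0, _ = np.polyfit(c_vals, M2_x0, 2)
    a2_x3, a1_x3, _ = np.polyfit(c_vals, M2_x3, 2)
    return -(a1_x0 - a1_x3) / (2.0 * (a2_x0 - a2_x3))

# Toy example: anisotropy vanishing at c = -0.4 is recovered exactly.
c_vals = np.linspace(-0.65, -0.25, 9)
c_min = c_min_from_interpolation(c_vals, (c_vals + 0.4) ** 2 + 0.1,
                                 np.full_like(c_vals, 0.1))
```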
$ c_{min} $ is extrapolated (cf. figure \ref{fig: extrapolation}) in the quark mass $ m_0 $ with a linear and a quadratic ansatz, which agree at $ 1 $-$ 2\,\sigma $ level. The error is dominated by the lightest quark mass.
\begin{figure}[htb]
\begin{picture}(360,105)
\put(15.0, 0.0){\includegraphics[height=40mm]{b60min.pdf}}
\put(215.0, 0.0){\includegraphics[height=40mm]{b60mres.pdf}}
\end{picture}
\vspace{-8pt}
\caption{Dependence on $ d $ of $ c_{min} $ ($ \beta=6.0,\ L=32 $) cannot be resolved (left plot). The minimal fit mass anisotropy is consistent with zero on a $ 2\,\sigma $ level (right plot).
}
\label{fig: extrapolation}
\vspace{-4pt}
\end{figure}
\begin{table}[hbt]
\center\footnotesize
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$ \beta $ & $ c_{BPT} $ & $ c_{min} $ (lin.) & $ \Delta(M_{PS}^2) $ (lin.) & $ c_{min} $ (quad.) & $ \Delta(M_{PS}^2) $ (quad.) \\
\hline
$ 6.0 $ & $ -0.420 $ & $ -0.432(08)^{\text{stat}} $ & $ 0.0015(07)^{\text{stat}} $ & $ -0.418(13)^{\text{stat}} $ & $ 0.0019(08)^{\text{stat}} $ \\
\hline
$ 6.2 $ & $ -0.393 $ & $ -0.413(16)^{\text{stat}} $ & $ -0.0015(12)^{\text{stat}} $ & $ -0.414(35)^{\text{stat}} $ & $ -0.0006(13)^{\text{stat}} $ \\
\hline
\end{tabular}
\caption{The different $ c_{min} $ ($ L=32,\ d=0.0 $) from linear and quadratic extrapolations in the quark mass ($ am_0\in[0.01,0.05] $) agree on $ 1 $-$ 2\,\sigma $ level. The mass anisotropy scatters around $ 0 $ within $ 1 $-$ 2\,\sigma $.
}
\label{tab: cmin}
\vspace{-4pt}
\end{table}
We conclude that use of the parameter estimates ($ c_{BPT} $, $ d_{BPT} $) from boosted perturbation theory removes the mass anisotropy within our statistical and systematic accuracy. However, careful study of additional observables \cite{10} indicates slightly different values ($ c(\beta=6.0)=-0.45(1) $, $ c(\beta=6.2)=-0.40(1) $), which we use in studies of the tuned action.
\subsection{Simulations with the tuned Karsten-Wilczek action}
\begin{table}[hbt]
\center\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
$ \beta $ & $ c $ & $ d $ & $ m_0\,(\times1000) $ & $ (r_0 m_0) $ & $ (r_0 M_{PS}) $ & $ M_{PS}\ [\mathrm{MeV}] $ & $ \tfrac{(r_0 M_{PS})^2}{(r_0 m_0)} $ \\
\hline
$ 6.0 $ & $ -0.45 $ & $ -0.001 $ & $ 20 $ & $ 0.107 $ & $ 1.595(4) $ & $ 629(2) $ & $ 23.7(1) $ \\
$ 6.0 $ & $ -0.45 $ & $ -0.001 $ & $ 10 $ & $ 0.054 $ & $ 1.147(4) $ & $ 452(2) $ & $ 24.5(2) $ \\
$ 6.0 $ & $ -0.45 $ & $ -0.001 $ & $ 5 $ & $ 0.027 $ & $ 0.831(5) $ & $ 328(2) $ & $ 25.7(3) $ \\
$ 6.0 $ & $ -0.45 $ & $ -0.001 $ & $ 3.65 $ & $ 0.020 $ & $ 0.718(5) $ & $ 283(2) $ & $ 26.3(4) $ \\
\hline
$ 6.2 $ & $ -0.40 $ & $ -0.001 $ & $ 20^* $ & $ 0.147^* $ & $ 1.834(9)^* $ & $ 724(3)^* $ & $ 22.8(2)^* $ \\
$ 6.2 $ & $ -0.40 $ & $ -0.001 $ & $ 10 $ & $ 0.074 $ & $ 1.327(8) $ & $ 524(3) $ & $ 23.9(3) $ \\
$ 6.2 $ & $ -0.40 $ & $ -0.001 $ & $ 5 $ & $ 0.037 $ & $ 0.965(8) $ & $ 381(3) $ & $ 25.3(4) $ \\
$ 6.2 $ & $ -0.40 $ & $ -0.001 $ & $ 3.65 $ & $ 0.027 $ & $ 0.834(9) $ & $ 329(3) $ & $ 25.9(5) $ \\
$ 6.2 $ & $ -0.40 $ & $ -0.001 $ & $ 2.66 $ & $ 0.020 $ & $ 0.720(9) $ & $ 284(4) $ & $ 26.5(7) $ \\
$ 6.2 $ & $ -0.40 $ & $ -0.001 $ & $ 1.94 $ & $ 0.014 $ & $ 0.622(10) $ & $ 245(4) $ & $ 27.1(9) $ \\
$ 6.2 $ & $ -0.40 $ & $ -0.001 $ & $ 1.41^* $ & $ 0.010^* $ & $ 0.557(18)^* $ & $ 219(7)^* $ & $ 29.9(19)^* $ \\
\hline
\end{tabular}
\caption{The tuned action is simulated on lattices with $ T=48 $. The spatial extent is $ L=24 $ for $ \beta=6.0 $ and $ L=32 $ for $ \beta=6.2 $. Two parameter sets (marked with ``$ ^* $'') have only $ L=24 $.
}
\label{tab: simulation parameters}
\vspace{-4pt}
\end{table}
The action with tuned parameters is applied to a study of the spectrum of light pseudoscalar mesons (cf. table \ref{tab: simulation parameters}). Ground state masses below $ 250\,MeV $ are achieved without encountering exceptional configurations. Since the squared ground state mass is approximately linear in the quark mass (cf. figure \ref{fig: chiral limit}), it is tentatively extrapolated like a Goldstone boson including quenched chiral logarithms \cite{10},
\small
\begin{equation}
(r_{0}\,M_{PS})^2 = (r_{0}\,B_{0})(r_{0}\, m_{0})\left((1-\delta) - \delta \log (m_{0}/r_{0})\right).
\end{equation}\normalsize
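A hedged sketch of this extrapolation with \texttt{scipy.optimize.curve\_fit}: to keep the example self-contained and dimensionless the argument of the logarithm is taken as $r_0 m_0$ (the text writes $m_0/r_0$), and the data are synthetic, generated from the ansatz itself rather than taken from table \ref{tab: simulation parameters}.

```python
import numpy as np
from scipy.optimize import curve_fit

def ansatz(r0m0, r0B0, delta):
    # (r0 M_PS)^2 = (r0 B0)(r0 m0) ((1 - delta) - delta * log(r0 m0));
    # the dimensionless log argument here is a simplifying assumption.
    return r0B0 * r0m0 * ((1.0 - delta) - delta * np.log(r0m0))

# Synthetic data generated from the ansatz with delta = 0.12:
r0m0 = np.array([0.010, 0.014, 0.020, 0.027, 0.037, 0.074, 0.147])
y = ansatz(r0m0, 20.0, 0.12)

popt, _ = curve_fit(ansatz, r0m0, y, p0=(15.0, 0.1))
r0B0_fit, delta_fit = popt  # recovers the input parameters
```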
\begin{figure}[hbt]
\begin{picture}(360,105)
\put(020.0, 0.0){\includegraphics[height=40mm]{m2ps.pdf}}
\put(245.0, 0.0){\includegraphics[height=40mm]{cond.pdf}}
\end{picture}
\vspace{-4pt}
\caption{The pseudoscalar mass at $ \beta=6.0 $ and $ \beta=6.2 $ agrees well (left plot). The ratio {\small $ {(r_0 M_{PS})^2}/{(r_0 m_0)} $} shows finite volume effects and quenched chiral logarithms.
}
\label{fig: chiral limit}
\vspace{-4pt}
\end{figure}
However, the separation of chiral logarithms from effects due to finite volume or higher chiral orders is difficult. With the enlarged volume, the statistical error of $ \delta $ decreases and agreement between different lattice spacings is improved considerably.
We obtain the estimate $ 0.10(1) \leq \delta \leq 0.16(5) $ and find consistency of $ \delta $ between different lattice spacings and different volumes within $ 2\,\sigma $.
\section{Conclusions}
The first simulations with minimally doubled fermions in the quenched approximation have been performed with various volumes and different lattice spacings (cf. table \ref{tab: lattice parameters}). Pseudoscalar correlation functions do not show any numerical evidence of \textit{T}-parity violation. This surprising result is currently under scrutiny \cite{10}. Anisotropies of the pseudoscalar masses are used to determine $ c $ non-perturbatively. Results are largely insensitive to $ d $ and agree well with estimates from boosted perturbation theory (cf. table \ref{tab: cmin}). However, separate methods for obtaining $ c $ and $ d $ with reduced errors are still desirable \cite{10}.
The tuned action (cf. table \ref{tab: simulation parameters}) is used in studies of light pseudoscalar mesons ($ M_{PS}\lesssim250\,MeV $) without exceptional configurations. After taking quenched chiral logarithms into account,
the ground state is consistent with a Goldstone boson. This remarkable result requires a detailed study of the nature of the pseudoscalar ground state \cite{10}.
\vskip1em
\textbf{Acknowledgements}: {The speaker thanks Sinya Aoki and Michael Creutz for invaluable discussions. This work was supported by Deutsche Forschungsgemeinschaft (SFB 1044), Gesellschaft f\"ur Schwerionenforschung GSI, the Research Center ``Elementary Forces \& Mathematical Foundations'' (EMG), Helmholtz Institute Mainz (HIM), Deutscher Akademischer Austauschdienst (DAAD) and the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT). Simulations have been performed on the cluster ``Lilly'' at the Institute for Nuclear Physics, University of Mainz. We thank C. Seiwerth for technical support.}
\section{Introduction}
Schwarz domain decomposition methods are the oldest domain
decomposition methods. They were invented by Hermann Amandus Schwarz
(see Figure \ref{SchwarzOriginalFig} on the left) in 1869 with a
publication in the following year \cite{Schwarz:1870:UGA}.
\begin{figure}
\centering
\tabcolsep2em
\begin{tabular}{cc}
\includegraphics[width=0.3\textwidth]{Figures/SCHWARZ}&
\parbox[b]{0.4\textwidth}{\includegraphics[width=0.4\textwidth]{Figures/SchwarzDDFigOrg}\vspace{1em}\\}
\end{tabular}
\caption{Left: Hermann Amandus Schwarz (25.1.1843-30.11.1921). Right: original
figure for the decomposition in the alternating Schwarz method.}
\label{SchwarzOriginalFig}
\end{figure}
Schwarz invented what is now called the {\em alternating Schwarz method} in
order to close a gap in the proof of the Riemann mapping theorem at
the end of Riemann's PhD thesis \cite{riemann1851grundlagen}, an
English translation of which is available
\cite{riemann1851foundations}. In his proof, Riemann had assumed that
the Dirichlet problem
\begin{equation}\label{eq:DirichletProblem}
\Delta u=0\ \mbox{in $\Omega$},\quad \hbox{$u=g$ on $\partial \Omega$},
\end{equation}
always had a solution, just by taking the function $u(x,y)$ which
solves the minimization problem\footnote{Indeed, taking a variational
derivative of ${\cal J}(u)$, we find for an arbitrary variation
$v$ that vanishes on $\partial \Omega$ that $\frac{\D}{\D \epsilon}{\cal
J}(u+\epsilon v)=2\iint _{\Omega}(\frac{\partial (u+\epsilon
v)}{\partial x})\frac{\partial v}{\partial x}+(\frac{\partial
(u+\epsilon v)}{\partial y})\frac{\partial v}{\partial y}\D x\D y$.
Therefore at $\epsilon=0$, using integration by parts and that the
variation $v$ vanishes on $\partial \Omega$, we get $\iint
_{\Omega}\Delta u v\D x\D y=0$. Since this must hold for all variations
$v$, we must have $\Delta u=0$, {\it i.e.}\ equation
\R{eq:DirichletProblem} holds at a stationary point.}
\begin{equation}\label{eq:DirichletIntegral}
{\cal J}(u):=\iint _{\Omega} \bigl(\frac{\partial u}{\partial x}\bigr)^2+ \bigl(\frac{\partial u}{\partial y}\bigr)^2\D x\D y\longrightarrow \min,\quad
\hbox{$u=g$ on $\partial \Omega$},
\end{equation}
and satisfies the boundary condition $u=g$ on $\partial \Omega$. When asked why this minimization
problem always had a solution, Riemann replied that he had learned this in his analysis course
taught by Dirichlet, the functional ${\cal J}(u)$ being bounded from below, thus coining the term
Dirichlet principle. \citeasnoun{weierstrass} then presented a counterexample to this way of arguing: the
functional to minimize, $\int_{-1}^{+1} (x\cdot u')^2\D x\longrightarrow \min$, is also bounded from
below. But when one tries to find a function that minimizes this integral under the boundary
conditions $u(-1)=a$, $u(1)=b$, $a\ne b$, the derivative $u'$ should vanish away from $x=0$, so
that nothing is contributed to the integral, while at $x=0$, $u'$ can be large, since it is
multiplied by $0$ and thus also does not contribute. The minimizing ``solution'' is thus piecewise
constant and discontinuous at $x=0$, and the gap in Riemann's proof remained. However, the Dirichlet
problem \R{eq:DirichletProblem} had a well known solution on rectangular and circular domains, using
Fourier techniques from 1822. H.A. Schwarz invented his alternating method by choosing as domain
$\Omega$ the union of a disk and an overlapping rectangle (see Figure \ref{SchwarzOriginalFig}
(right)). If we call the overlapping subdomains $\Omega_1$ and $\Omega_2$ ($T_1$ and $T_2$ in the
original drawing of Schwarz), and the interfaces $\Gamma_1$ and $\Gamma_2$ ($L_2$ and $L_1$ in the
original drawing of Schwarz), the alternating Schwarz method computes alternatingly on the disk and
on the rectangle the Dirichlet problem and carries the newly computed solution from the interface
curves $\Gamma_1$ and $\Gamma_2$ over to the other subdomain as new boundary condition for the next
solve,
\begin{equation}\label{AlternatingSchwarzMethod}
\begin{array}{rcllrcll}
\Delta u_1^n & = & 0 & \mbox{in $\Omega_1$},&
\Delta u_2^n & = & 0 & \mbox{in $\Omega_2$},\\
u_1^n & = & g & \mbox{on $\partial \Omega \cap \overline{\Omega}_1$},\quad &
u_2^n & = & g & \mbox{on $\partial \Omega \cap
\overline{\Omega}_2$},\\
u_1^n &=& u_2^{n-1}\ & \mbox{on $\Gamma_1$}, &
u_2^n &=& u_1^n\ & \mbox{on $\Gamma_2$}.
\end{array}
\end{equation}
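In one dimension the subdomain solves of \R{AlternatingSchwarzMethod} reduce to linear interpolation, so the alternating method can be sketched in a few lines. The example below, a toy model problem ($u''=0$ on $(0,1)$, $u(0)=0$, $u(1)=1$) rather than anything from Schwarz's original setting, also exhibits the geometric convergence factor $\frac{a}{b}\cdot\frac{1-b}{1-a}$ set by the overlap.

```python
def alternating_schwarz_1d(a=0.4, b=0.6, iters=20):
    """u'' = 0 on (0,1), u(0)=0, u(1)=1, overlapping subdomains
    Omega_1 = (0,b), Omega_2 = (a,1) with a < b; subdomain solutions are
    linear, so only the interface traces need to be tracked.
    Returns |u_2(b) - b| per iteration (the exact solution is u(x)=x)."""
    u2_at_b = 0.0  # initial guess for u_2 on Gamma_1 = {b}
    errs = []
    for _ in range(iters):
        # Solve on Omega_1: u1(0)=0, u1(b)=u2_at_b  ->  u1 is linear.
        u1_at_a = u2_at_b * a / b
        # Solve on Omega_2: u2(a)=u1_at_a, u2(1)=1  ->  u2 is linear.
        u2_at_b = u1_at_a + (b - a) / (1 - a) * (1 - u1_at_a)
        errs.append(abs(u2_at_b - b))
    return errs
```

With $a=0.4$, $b=0.6$ the error contracts by $\frac{a}{b}\cdot\frac{1-b}{1-a}=\frac{4}{9}$ per iteration; shrinking the overlap drives this factor towards $1$.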
Schwarz proved convergence of this alternating method using the
maximum principle, see \citeasnoun{gander2014origins} for more information
on the origins of the alternating Schwarz method. All this happened
during the fascinating time of the development of variational calculus
and functional analysis, which led eventually to the finite element
method, see the historical review \citeasnoun{gander2012euler}.
It took many decades before the alternating Schwarz method became a computational
tool. \citeasnoun{miller1965numerical} was the first to point out its usefulness for computations, but it
was with the seminal work of Lions \cite{Lions:1988:SAM,lions1989schwarzII,lions1990schwarz}, and
the additive Schwarz method of \citeasnoun{dryja1987additive}, that Schwarz methods became powerful and
mainstream parallel computational tools for solving discretized partial differential equations.
\citeasnoun{Lions:1988:SAM} also proposed a parallel variant of the method, by simply not using the newest
available value along $\Gamma_2$ but the previous one in \R{AlternatingSchwarzMethod},
\begin{equation}
u_2^n = u_1^{n-1}\ \mbox{on $\Gamma_2$},
\end{equation}
so that the subdomain solves can be performed in parallel. This is analogous to the Jacobi
stationary iterative method, compared to the Gauss-Seidel stationary iterative method in linear
algebra. Note that the additive Schwarz method is quite different from the parallel Schwarz method:
it is a preconditioner for the conjugate gradient method, and does not converge when used as a
stationary iteration without relaxation, in contrast to the parallel Schwarz method, see
\citeasnoun[Section 3.2]{gander2008schwarz} for more details.
\citeasnoun{lions1990schwarz} also introduced a non-overlapping variant of the Schwarz method\footnote{He
therefore considered decompositions where $\Gamma_1=\Gamma_2$, but the method is equally
interesting with overlap as well, since it converges faster than the classical Schwarz method, as
we will see.} using Robin transmission conditions in \R{AlternatingSchwarzMethod},
$$
\arraycolsep0.2em
\begin{array}{rcllrcll}
(\partial_{n_1}\!+\!p_1)u_1^n &=& (\partial_{n_1}\!+\!p_1) u_2^{n-1}\ & \mbox{on $\Gamma_1$}, &
(\partial_{n_2}\!+\!p_2)u_2^n &=& (\partial_{n_2}\!+\!p_2)u_1^n\ & \mbox{on $\Gamma_2$},
\end{array}
$$
where $\partial_{n_j}$, $j=1,2$, denotes the unit outward normal derivative for $\Omega_j$ along the
interfaces $\Gamma_j$, and, following Lions, the $p_j$ can be constants, or functions along the
interface, or even operators. In contrast to the classical Schwarz method, where it was only
important to obtain convergence for closing the gap in the proof of the Riemann mapping theorem, and
convergence speed was not an issue, a good choice of the Robin parameters $p_j$ can greatly improve
the convergence of Schwarz methods. \citeasnoun{hagstrom1988numerical} worked on Schwarz methods for
non-linear problems, and advocated to use Robin transmission conditions involving non-local
operators. The work of \citeasnoun{tang1992generalized} on generalized Schwarz splittings also points into
this direction, for a more detailed review, see \citeasnoun{gander2008schwarz}.
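The effect of the Robin parameters can already be seen in one dimension. In the sketch below (toy model problem $u''=0$ on $(0,1)$, $u(0)=0$, $u(1)=1$, non-overlapping split at $x=\alpha$, subdomain solutions linear; parameter names are illustrative), generic constants $p_j$ give geometric convergence, while $p_1=1/(1-\alpha)$, $p_2=1/\alpha$, which realize the transparent (DtN) conditions of this model problem, converge in a single sweep.

```python
def robin_schwarz_1d(p1, p2, alpha=0.5, iters=50):
    """u'' = 0 on (0,1), u(0)=0, u(1)=1, split at alpha.
    Subdomain solutions are linear: u1 = s1*x, u2 = 1 + s2*(x-1), and the
    exact solution u(x)=x corresponds to s1 = s2 = 1.
    Robin transmission at x=alpha (outward normals: +x for Omega_1,
    -x for Omega_2):
        u1' + p1*u1 = u2' + p1*u2,    -u2' + p2*u2 = -u1' + p2*u1.
    Returns |s1-1| + |s2-1| per sweep."""
    s1, s2 = 0.0, 0.0
    errs = []
    for _ in range(iters):
        s1 = (s2 * (1 + p1 * (alpha - 1)) + p1) / (1 + p1 * alpha)
        s2 = (s1 * (p2 * alpha - 1) - p2) / (p2 * (alpha - 1) - 1)
        errs.append(abs(s1 - 1.0) + abs(s2 - 1.0))
    return errs
```

With $\alpha=1/2$, the generic choice $p_1=p_2=1$ contracts the error by $1/9$ per sweep, while the transparent choice $p_1=p_2=2$ yields the exact solution after one sweep.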
What is the underlying idea of changing the transmission conditions
from Dirichlet to Robin conditions, even involving non-local
operators? In Schwarz methods for parallel computing, one decomposes a
domain into many subdomains, see Figure \ref{1Dand2DDecompositionFig}
for two typical examples.
\begin{figure}
\centering
\mbox{\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm,scale=0.9]
\clip(-.6,-0.6) rectangle (7,7);
\draw [->] (0,0) -- (6.5,0);
\draw [->] (0,0) -- (0,6.5);
\draw (-0.2,7) node[anchor=north west] {$y$};
\draw (6.5,0.2) node[anchor=north west] {$x$};
\draw (-0.5,0.15) node[anchor=north west] {$0$};
\draw (-0.5,6.3) node[anchor=north west] {$Y$};
\draw (0,6)-- (6,6);
\draw (6,6)-- (6,0);
\draw (6,0)-- (0,0);
\draw (0,0)-- (0,6);
\draw (2,6)-- (2,0);
\draw (4,6)-- (4,0);
\draw (0,4)-- (6,4);
\draw (0,2)-- (6,2);
\draw (1.7,1.7)-- (1.7,4.3);
\draw (1.7,1.7)-- (4.3,1.7);
\draw (4.3,1.7)-- (4.3,4.3);
\draw (4.3,4.3)-- (1.7,4.3);
\draw [->] (2.7,2.3) -- (1.7,2.3);
\draw [->] (3.3,2.3) -- (4.3,2.3);
\draw (2.56,2.6) node[anchor=north west] {$\Omega_{ij}$};
\draw [->] (3.3,3.5) -- (4,3.5);
\draw [->] (2.7,3.5) -- (2,3.5);
\draw (2.56,3.9) node[anchor=north west] {$\tilde{\Omega}_{ij}$};
\draw (1.3,0.07) node[anchor=north west] {$X_i^l$};
\draw [dashed](1.7,0)-- (1.7,1.7);
\draw [dashed](4.3,0)-- (4.3,1.7);
\draw (3.9,0) node[anchor=north west] {$X_i^r$};
\draw (5.67,0) node[anchor=north west] {$X$};
\draw [dashed](0,1.7)-- (1.7,1.7);
\draw [dashed](0,4.3)-- (1.7,4.3);
\draw (-0.7,2.1) node[anchor=north west] {$Y_j^b$};
\draw (-0.7,4.7) node[anchor=north west] {$Y_j^t$};
\end{tikzpicture}\hspace{-1em}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm,scale=0.9]
\clip(-.4,-2.6) rectangle (7,3.3);
\draw [->] (0,0) -- (6.5,0);
\draw [->] (0,0) -- (0,2.8);
\draw (-0.2,3.3) node[anchor=north west] {$y$};
\draw (6.5,0.2) node[anchor=north west] {$x$};
\draw (-0.5,0.15) node[anchor=north west] {$0$};
\draw (-0.5,2.3) node[anchor=north west] {$Y$};
\draw (0,2)-- (6,2);
\draw (6,2)-- (6,0);
\draw (6,0)-- (0,0);
\draw (0,0)-- (0,2);
\draw (2,2)-- (2,0);
\draw (4,2)-- (4,0);
\draw (1.7,2)-- (1.7,0);
\draw (2.3,2)-- (2.3,0);
\draw (3.7,2)-- (3.7,0);
\draw (4.3,2)-- (4.3,0);
\draw [->] (1.3,0.4) -- (2.3,0.4);
\draw [->] (0.7,0.4) -- (0,0.4);
\draw [->] (2.7,0.2) -- (1.7,0.2);
\draw [->] (3.3,0.2) -- (4.3,0.2);
\draw [->] (5.3,0.4) -- (6,0.4);
\draw [->] (4.7,0.4) -- (3.7,0.4);
\draw (0.66,0.7) node[anchor=north west] {$\Omega_1$};
\draw (2.65,0.53) node[anchor=north west] {$\Omega_2$};
\draw (4.66,0.7) node[anchor=north west] {$\Omega_3$};
\draw [->] (1.3,1.5) -- (2,1.5);
\draw [->] (0.7,1.5) -- (0,1.5);
\draw [->] (3.3,1.5) -- (4,1.5);
\draw [->] (2.7,1.5) -- (2,1.5);
\draw [->] (5.3,1.5) -- (6,1.5);
\draw [->] (4.7,1.5) -- (4,1.5);
\draw (0.66,1.9) node[anchor=north west] {$\tilde{\Omega}_1$};
\draw (2.65,1.9) node[anchor=north west] {$\tilde{\Omega}_2$};
\draw (4.66,1.9) node[anchor=north west] {$\tilde{\Omega}_3$};
\draw (1.3,0.07) node[anchor=north west] {$X_2^l$};
\draw (3.9,0) node[anchor=north west] {$X_2^r$};
\draw (5.67,0) node[anchor=north west] {$X$};
\end{tikzpicture}
}
\caption{Two typical domain decompositions: a two dimensional
decomposition with cross points (left), and a one dimensional or
sequential domain decomposition (right).}
\label{1Dand2DDecompositionFig}
\end{figure}
To obtain an overlapping decomposition, it is convenient to first
define a decomposition into non-overlapping subdomains
$\tilde{\Omega}_{ij}$, and then to enlarge these subdomains by a thin
layer to get overlapping subdomains $\Omega_{ij}$, as indicated in
Figure \ref{1Dand2DDecompositionFig}. One then wants to compute only
subdomain solutions on the $\Omega_{ij}$ to approximate the global, so
called mono-domain solution on the entire domain $\Omega$. Let us
imagine for the moment that the source term of the partial
differential equation to be solved has support only in one subdomain
somewhere in the middle of the global domain, like for example in
subdomain $\Omega_{ij}$ in Figure \ref{1Dand2DDecompositionFig} on the
left, and assume that the global domain is infinitely large. Then it
would be best to impose transparent boundary conditions on the
boundary of the subdomain containing the source, since then by solving the
subdomain problem we obtain by definition\footnote{Transparent
boundary conditions are exactly defined by truncating the global
domain such that the solution on the truncated domain coincides with
the solution on the global domain.} the restriction of the global,
mono-domain solution to that subdomain. Robin boundary conditions are
approximations of the transparent boundary conditions, and one can
thus expect that with Robin boundary conditions subdomain solvers
compute better approximations to the overall mono-domain solution than
with Dirichlet boundary conditions.
To illustrate this, we solve the Poisson problem $\Delta u=f$ on the
unit square with zero Dirichlet boundary conditions and the right hand
side source function
$f(x,y):=100e^{-100((x-\frac{1}{2})^2+(y-\frac{1}{2})^2)}$. We show in
Figure \ref{PoissonExampleFig} on the left this source term, and on
the right the corresponding solution of the Poisson problem, using a
centered finite difference scheme with mesh size $h=1/40$.
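The mono-domain discretization just described is straightforward to reproduce; the following minimal NumPy sketch (an independent illustration, not the code used to produce the figures) sets up the centered five-point scheme with $h=1/40$ and computes the mono-domain solution:

```python
import numpy as np

# Centered finite differences for Delta u = f on the unit square with
# zero Dirichlet boundary conditions and the Gaussian source from the
# text, mesh size h = 1/40 (only interior points are unknowns).
h = 1/40
n = round(1/h) - 1                       # interior points per direction
xi = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(xi, xi, indexing='ij')
f = 100*np.exp(-100*((X - 0.5)**2 + (Y - 0.5)**2))

# 1D second-difference matrix; the 2D Laplacian is its Kronecker sum
D = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
I = np.eye(n)
A = np.kron(D, I) + np.kron(I, D)        # discrete Delta

u = np.linalg.solve(A, f.ravel()).reshape(n, n)
print(float(u.min()), float(u.max()))    # u <= 0 since Delta u = f >= 0
```

Since $f\ge 0$ and $u$ vanishes on the boundary, the discrete maximum principle forces $u\le 0$, a smooth dip centered at the source.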
\begin{figure}
\centering
\mbox{\includegraphics[width=0.4\textwidth]{Figures/SourceTerm}\quad
\includegraphics[width=0.4\textwidth]{Figures/RASResult}}
\caption{Gaussian source term (left) and solution of the
  corresponding Poisson problem (right).}
\label{PoissonExampleFig}
\end{figure}
If we decompose the unit square domain into $3 \times 3$ subdomains as
indicated in Figure \ref{1Dand2DDecompositionFig} (left), and solve
the subdomain problem in the center with Dirichlet boundary conditions
$u=0$, we obtain the approximation shown in Figure
\ref{PoissonRASORASFig} (top left).
\begin{figure}
\centering
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/RAS9n41d1Iter1}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/RAS9n41d1Iter2}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/RAS9n41d1Iter3}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/ORAS9n41d1Iter1}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/ORAS9n41d1Iter2}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/ORAS9n41d1Iter3}}
\caption{First few iterations of classical (top) and optimized
parallel Schwarz (bottom) for the Poisson problem with Gaussian
source term from Figure \ref{PoissonExampleFig}.}
\label{PoissonRASORASFig}
\end{figure}
If we perform the solve with Robin conditions $(\partial_n+p)u=0$ with
$p=4$, we get the approximation shown in Figure
\ref{PoissonRASORASFig} (bottom left). We clearly see that the result
when truncating with Dirichlet conditions is much further away from
the desired solution shown in Figure \ref{PoissonExampleFig} (right)
than when truncating with Robin conditions. This is however precisely
the first iteration of a classical Schwarz method with Dirichlet
transmission conditions, compared to an optimized Schwarz method with
Robin transmission conditions, when starting the iteration with a zero
initial guess. We show in Figure \ref{PoissonRASORASFig} also the next
two iterations of the classical (top) and optimized Schwarz method
(bottom) with algebraic overlap of two mesh layers\footnote{This
corresponds for the classical Schwarz method to a physical overlap
of $3h$ ($h$ the mesh size), see \citeasnoun[Figure 3.1]{gander2008schwarz}
for the relation between algebraic and physical overlap, and the
discussion in \citeasnoun[Section 4.1]{St-Cyr07} for the algebraic
overlap when Robin conditions are used, and algebraic overlap of
two mesh layers corresponds to physical overlap $h$ only.}, and we see the
great convergence enhancement due to the Robin transmission
conditions, which are much better approximations of the transparent
boundary conditions than the Dirichlet transmission conditions. This
illustrates well that Schwarz methods (and domain decomposition
methods in general) are methods which approximate solutions by domain
truncation, and we see that transmission conditions that are better at
truncating domains lead to better convergence.
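The effect of the transmission conditions is easiest to quantify in one dimension. The following self-contained NumPy sketch (our own toy analogue, not the two-dimensional computation shown in the figures) runs the alternating Schwarz method for the screened equation $(\eta-\partial_{xx})u=f$ on $(0,1)$ with two overlapping subdomains, once with Dirichlet and once with Robin transmission conditions; the Robin parameter $p=\sqrt{\eta}$ is the symbol of the transparent boundary condition on an unbounded domain (see Section \ref{2SubSec}):

```python
import numpy as np

# Our own 1D toy analogue: alternating Schwarz for
# (eta - d^2/dx^2) u = f on (0,1), u(0) = u(1) = 0, with two
# overlapping subdomains, Dirichlet vs Robin transmission conditions.
eta, h = 100.0, 1/200
N = round(1/h) + 1
x = np.linspace(0.0, 1.0, N)
f = 100*np.exp(-100*(x - 0.5)**2)

A = (np.diag((eta + 2/h**2)*np.ones(N))
     + np.diag(-np.ones(N-1)/h**2, 1) + np.diag(-np.ones(N-1)/h**2, -1))
A[0, :] = 0; A[0, 0] = 1; A[-1, :] = 0; A[-1, -1] = 1
b = f.copy(); b[0] = b[-1] = 0
u_ref = np.linalg.solve(A, b)            # mono-domain reference solution

m2, m1 = N//2 - 4, N//2 + 4              # interfaces, overlap of 8 cells

def solve_sub(lo, hi, bc_lo, bc_hi):
    """Subdomain solve on grid indices lo..hi; each bc is ('D', g) or
    ('R', p, g) for (d_n + p) u = g with a one-sided normal derivative."""
    n = hi - lo + 1
    M = np.zeros((n, n)); rhs = np.zeros(n)
    for i in range(1, n-1):
        M[i, i-1] = M[i, i+1] = -1/h**2
        M[i, i] = eta + 2/h**2
        rhs[i] = f[lo + i]
    for pos, s, bc in ((0, -1, bc_lo), (n-1, 1, bc_hi)):
        if bc[0] == 'D':
            M[pos, pos] = 1.0; rhs[pos] = bc[1]
        else:
            M[pos, pos] = 1/h + bc[1]; M[pos, pos-s] = -1/h; rhs[pos] = bc[2]
    return np.linalg.solve(M, rhs)

def schwarz(p, iters=8):
    u1, u2 = np.zeros(m1+1), np.zeros(N-m2)
    err = []
    for _ in range(iters):
        if p is None:                    # classical: Dirichlet transmission
            u1 = solve_sub(0, m1, ('D', 0.0), ('D', u2[m1-m2]))
            u2 = solve_sub(m2, N-1, ('D', u1[m2]), ('D', 0.0))
        else:                            # optimized: Robin transmission
            g1 = (u2[m1-m2] - u2[m1-m2-1])/h + p*u2[m1-m2]
            u1 = solve_sub(0, m1, ('D', 0.0), ('R', p, g1))
            g2 = (u1[m2] - u1[m2+1])/h + p*u1[m2]
            u2 = solve_sub(m2, N-1, ('R', p, g2), ('D', 0.0))
        err.append(max(np.abs(u1 - u_ref[:m1+1]).max(),
                       np.abs(u2 - u_ref[m2:]).max()))
    return err

err_d = schwarz(None)
err_r = schwarz(np.sqrt(eta))
print('Dirichlet:', [f'{e:.1e}' for e in err_d])
print('Robin    :', [f'{e:.1e}' for e in err_r])
```

The Dirichlet variant contracts the error by a modest factor per iteration governed by the overlap, while the Robin variant reaches its error floor within a few iterations.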
This became a major new viewpoint on domain decomposition methods over
the past two decades. Naturally Dirichlet (and Neumann) conditions
then appear as not very good candidates to truncate domains and be
used as transmission conditions between subdomains: it is of interest
for rapid convergence to use absorbing boundary conditions which
approximate transparent conditions, like Robin conditions or higher
order Ventcell conditions, and also perfectly matched layers (PMLs), or
integral operators in the transmission conditions between subdomains.
Optimized Schwarz methods were pioneered in the early nineties by
Nataf et al. \cite{Nataf93,NRS94}; see in particular also the early
contributions of \citeasnoun{japhet1998optimized}, \citeasnoun{Chevalier}, \citeasnoun{EZ98},
and \citeasnoun{gander2000optimized} where the name optimized Schwarz methods
was coined. Optimized Schwarz methods use Robin or higher order
transmission conditions or PMLs at the interfaces between subdomains,
and all are approximations of transparent boundary conditions, see
\citeasnoun{GanderOSM} and references therein for an
introduction\footnote{\label{footnoteoptimalSchwarz} The term optimal
Schwarz method for Schwarz methods with transparent boundary
conditions appeared already in \cite{gander1999optimal} for time
  dependent problems, and this use of optimal means that faster convergence
  is really not possible, in contrast to the other common use of optimal
  in the domain decomposition literature, where it just means scalable.}. This
conceptual change for domain decomposition methods is fundamental and
was discovered independently for solving hard wave propagation
problems by domain decomposition and iteration, since for such
problems classical domain decomposition methods are not effective, see
the seminal work by \citeasnoun{Despres90}, \citeasnoun{Despres}, and \citeasnoun{EG} for a review of why it is
hard to solve such problems by iteration. Also rational approximations
have been proposed in the transmission conditions for such problems,
see for example \citeasnoun{Boubendir}, \citeasnoun{KimZhang}, \citeasnoun{Kimsweep}. This new
idea of domain truncation led, independently of the work on optimized
Schwarz methods, to the invention of the sweeping preconditioner
\cite{EY1,EY2,Poulson,Tsuji1,Tsuji2,LiuYingRecur}, the source transfer
domain decomposition \cite{Chen13a,Chen13b,xiang2019double}, the
single layer potential method \cite{Stolk,StolkImproved}, and the
method of polarized traces \cite{Zepeda,ZD,ZepedaNested}. All these
methods are very much related, and can be understood in the context of
optimized Schwarz methods, for a review and formal proofs of
equivalence, see \citeasnoun{gander2019class}, and also the references therein
for the many followup papers by the various groups. A key ingredient
of these independently developed methods is the use of perfectly
matched layers as absorbing boundary conditions, a technique which had
only rarely been used in the domain decomposition community before,
for exceptions see \citeasnoun{Toselli}, \citeasnoun{Schadle}, \citeasnoun{SZBKS}.
While optimized Schwarz methods were developed for general
decompositions of the domain into subdomains, including cross points,
as shown on the left in Figure \ref{1Dand2DDecompositionFig} and in
the corresponding example in Figure \ref{PoissonRASORASFig},
the sweeping preconditioner, source transfer domain decomposition, the
single layer potential method and the method of polarized traces were
all formulated for one dimensional or sequential domain
decompositions, as shown in Figure \ref{1Dand2DDecompositionFig} on
the right, without cross points. This is because these methods were
developed with the physical intuition of wave propagation in one
direction, and not with domain decomposition in mind. In these
methods, the authors had in mind the transparent boundary conditions
at the interfaces, which contain the Dirichlet-to-Neumann (DtN), or more
generally the Steklov--Poincaré operator, replacing $p$ in the Robin
transmission condition, and with this choice, the method becomes a
direct solver. We illustrate this in Figure
\ref{PoissonOptimalSchwarzFig} for the strip decomposition in Figure
\ref{1Dand2DDecompositionFig} (right) with three subdomains and our
Poisson model problem with Gaussian source from Figure
\ref{PoissonExampleFig}.
\begin{figure}
\centering
\mbox{\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzOptimalIt=1forward1}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzOptimalIt=1forward2}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzOptimalIt=1forward3}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzOptimalIt=1backward2}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzOptimalIt=1backward1}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingOptimalSchwarzError}}
\caption{Optimal alternating Schwarz using DtN
transmission conditions sweeping with the iterates from left to
right and then back to left (double sweep) for a sequential
decomposition into three subdomains, solving the Poisson problem
with Gaussian source term from Figure \ref{PoissonExampleFig}. The
last figure shows the error after one double sweep, on the order of
machine precision.}
\label{PoissonOptimalSchwarzFig}
\end{figure}
We see that the method converges in one double sweep, {\it i.e.}\ it is a
direct solver. The iteration matrix (or operator at the continuous
level) is nilpotent, and one can interpret this method as an exact
block LU factorization, where in the forward sweep the lower block
triangular matrix $L$ is solved, and in the backward sweep the
upper block triangular matrix $U$ is solved, and the blocks correspond
to the subdomains. This interpretation had already led earlier to the
Analytic Incomplete LU (AILU) preconditioners, see
\citeasnoun{GanderAILU00}, \citeasnoun{GanderAILU05} and references therein. Note that
other domain decomposition methods can also be nilpotent for
sequential domain decompositions: for Neumann-Neumann and FETI this is
however only possible for two-subdomain decompositions, and for
Dirichlet-Neumann up to three subdomains, and in some specific
cases more than three subdomains, see
\citeasnoun{chaouqui2017nilpotent}. The only domain decomposition method
which can in general become nilpotent for sequential domain
decompositions is the optimal Schwarz method.
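The block LU interpretation can be made concrete on a small example. In the following sketch (our own illustration, with a symmetric one-dimensional Laplacian standing in for the subdomain blocks), the forward sweep eliminates block by block using Schur complements, the backward sweep back-substitutes, and one double sweep yields the exact solution:

```python
import numpy as np

# Our own minimal illustration of the block LU view: a symmetric 1D
# Laplacian partitioned into three "subdomain" blocks; the forward sweep
# eliminates block by block via Schur complements, the backward sweep
# back-substitutes, so one double sweep is a direct solve.
n = 30                                    # unknowns per subdomain
A = 2*np.eye(3*n) - np.eye(3*n, k=1) - np.eye(3*n, k=-1)
f = np.random.default_rng(0).standard_normal(3*n)
blk = [slice(0, n), slice(n, 2*n), slice(2*n, 3*n)]

# forward sweep: Schur complements S_j and modified right-hand sides g_j
S = [A[blk[0], blk[0]].copy()]
g = [f[blk[0]].copy()]
for j in (1, 2):
    C = A[blk[j], blk[j-1]]               # coupling to the previous block
    S.append(A[blk[j], blk[j]] - C @ np.linalg.solve(S[j-1], C.T))
    g.append(f[blk[j]] - C @ np.linalg.solve(S[j-1], g[j-1]))

# backward sweep: back-substitution block by block
u = [None, None, np.linalg.solve(S[2], g[2])]
for j in (1, 0):
    C = A[blk[j+1], blk[j]]
    u[j] = np.linalg.solve(S[j], g[j] - C.T @ u[j+1])
u = np.concatenate(u)
print(np.abs(A @ u - f).max())            # machine-precision residual
```

In the spirit of the discussion above, the Schur complements $S_j$ carry the exact influence of the already eliminated blocks, which is precisely the transparent-boundary-condition content of the optimal method.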
If we use approximations of the optimal Schwarz method, for example an
optimized one with Robin transmission conditions, the method is not
nilpotent any more, as shown in Figure \ref{OSMSweepFig},
\begin{figure}
\centering
\mbox{\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzOptimized=1forward1}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzOptimized=1forward2}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzOptimized=1forward3}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzOptimized=1backward2}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzOptimized=1backward1}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingOptimizedSchwarzError}}
\caption{Optimized alternating Schwarz using Robin transmission
conditions sweeping like optimal alternating Schwarz in Figure
\ref{PoissonOptimalSchwarzFig} (the last figure shows the error
after one double sweep).}
\label{OSMSweepFig}
\end{figure}
but convergence is still very fast, compared to the classical Schwarz
method with Dirichlet transmission conditions shown in Figure
\ref{SMSweepFig}.
\begin{figure}
\centering
\centering
\mbox{\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzClassicalIt=1forward1}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzClassicalIt=1forward2}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzClassicalIt=1forward3}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzClassicalIt=1back2}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingSchwarzClassicalIt=1back1}
\includegraphics[width=0.33\textwidth,trim=50 30 50 50,clip]{Figures/SweepingClassicalSchwarzError}}
\caption{Classical alternating Schwarz (using Dirichlet transmission
conditions) sweeping like optimized alternating Schwarz in
Figure \ref{OSMSweepFig} (the last figure shows the error after one
double sweep).}
\label{SMSweepFig}
\end{figure}
Using PML as transmission conditions in optimized Schwarz methods to
approximate the optimal DtN transmission conditions is currently a
very active field of research in the case of cross points.
\citeasnoun{leng2019additive} proposed a parallel Schwarz method for
checkerboard domain decompositions including cross points based on the
source transfer formalism which uses PML transmission conditions, and
the method still converges in a finite number of steps, see also
\citeasnoun{lengdiag} for a diagonal sweeping variant, and the earlier work
\cite{leng2015overlapping,leng2015fast}. In \citeasnoun{taus2020sweeps},
L-sweeps are proposed for the method of polarized traces
interpretation of the optimized Schwarz methods, which traverse a
square domain decomposed into little squares by subdomain solves
organized in the form of L's, starting from one corner and traversing the
domain. In these methods, storing the PML layers and using them in the
exchange of information is an essential ingredient, and it is not
clear at this stage if such a formulation using DtN transmission
conditions is possible.
It is however possible to use the block LU decomposition also in the
case of general decompositions including cross points, to obtain a
sweeping domain decomposition method which converges in one double
sweep. We show this in Figure \ref{OptimalSchwarzLUSweep}
\begin{figure}
\centering
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward1}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward2}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward3}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward4}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward5}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward6}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward7}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward8}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward9}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack8}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack7}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack6}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack5}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack4}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack3}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack2}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack1}}\hfill
\caption{Forward and backward sweep for an optimal Schwarz method
obtained by a block LU decomposition for the model problem and
$3\times 3$ subdomains. We observe convergence after one double
sweep.}
\label{OptimalSchwarzLUSweep}
\end{figure}
for our model problem and the same $3\times3$ domain decomposition
used in Figure \ref{PoissonRASORASFig}. We have chosen here simply a
lexicographic ordering of the subdomains, and we see that the method
converges in one double sweep. We show in Figure \ref{OSMLUfactors}
\begin{figure}
\centering
\mbox{\includegraphics[width=0.49\textwidth]{Figures/OSMLfactor}
\includegraphics[width=0.49\textwidth]{Figures/OSMUfactor}}
\caption{Sparsity of the block $L$ and $U$ factors of the optimal
Schwarz method for the $3\times 3$ subdomain decomposition from
Figure \ref{OptimalSchwarzLUSweep}.}
\label{OSMLUfactors}
\end{figure}
the sparsity structure of the corresponding LU factors, which indicate the structure of the
transmission conditions generated by the block LU decomposition in this case, a subject that needs
further investigation; if the domain decomposition is a strip decomposition without cross points,
{\it i.e.}\ like in Figure \ref{1Dand2DDecompositionFig} on the right, it is known that the block LU
decomposition generates transparent transmission conditions on the left and Dirichlet conditions on
the right of the subdomains, like in the design of the source transfer domain decomposition method. The
block LU decomposition is unique only once one fixes the diagonal of one of the
factors, {\it e.g.}~the identity matrices on the diagonal of $U$ as we did here. For the strip
decomposition our block LU factorization gives therefore a different algorithm from the optimal
Schwarz method shown in Figure \ref{PoissonOptimalSchwarzFig} which used transparent boundary
conditions involving the DtN operator on both sides of the subdomains. We give a simple Matlab
implementation for these block LU factorizations in Appendix A for the interested
reader to experiment with.
Note that we could have used any other ordering of the subdomains for
the sweeping before performing the block LU factorization: we show for
example in Figure \ref{OptimalSchwarzLUSweepL}
\begin{figure}
\centering
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward1L}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward2L}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward3L}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward4L}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward5L}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward6L}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward7L}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward8L}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward9L}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack8L}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack7L}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack6L}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack5L}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack4L}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack3L}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack2L}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack1L}}\hfill
\caption{Forward and backward sweep for an optimal Schwarz method
obtained by a block LU decomposition for the model problem and
$3\times 3$ subdomains using an L-sweep. We observe again
convergence after one double sweep.}
\label{OptimalSchwarzLUSweepL}
\end{figure}
the ordering for L-sweeps. We see that the algorithm also converges in
one double sweep, by construction. The corresponding block LU factors
are shown in Figure \ref{OSMLUfactorsL}.
\begin{figure}
\centering
\mbox{\includegraphics[width=0.49\textwidth]{Figures/OSMLfactorL}
\includegraphics[width=0.49\textwidth]{Figures/OSMUfactorL}}
\caption{Sparsity of the block $L$ and $U$ factors of the optimal
Schwarz method for the $3\times 3$ subdomain decomposition from
Figure \ref{OptimalSchwarzLUSweep} when using the L-sweep
ordering.}
\label{OSMLUfactorsL}
\end{figure}
We show in Figure \ref{OptimalSchwarzLUSweepD} another popular
ordering for sweeping, the diagonal ordering.
\begin{figure}
\centering
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward1D}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward2D}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward3D}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward4D}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward5D}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward6D}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward7D}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward8D}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subForward9D}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack8D}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack7D}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack6D}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack5D}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack4D}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack3D}}
\mbox{\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack2D}
\includegraphics[width=0.33\textwidth,trim=50 20 50 50,clip]{Figures/OptimalSchwarz9subBack1D}}\hfill
\caption{Forward and backward sweep for an optimal Schwarz method
obtained by a block LU decomposition for the model problem and
$3\times 3$ subdomains using a D-sweep. We observe again
convergence after one double sweep.}
\label{OptimalSchwarzLUSweepD}
\end{figure}
Again we see convergence in one double sweep. The corresponding
block LU factors are shown in Figure \ref{OSMLUfactorsD}.
\begin{figure}
\centering
\mbox{\includegraphics[width=0.49\textwidth]{Figures/OSMLfactorD}
\includegraphics[width=0.49\textwidth]{Figures/OSMUfactorD}}
\caption{Sparsity of the block $L$ and $U$ factors of the optimal
Schwarz method for the $3\times 3$ subdomain decomposition from
Figure \ref{OptimalSchwarzLUSweep} when using the D-sweep
ordering.}
\label{OSMLUfactorsD}
\end{figure}
It is currently not known how to obtain a general parallel nilpotent
Schwarz method for such general decompositions including cross
points. It was however discovered in \citeasnoun{GK1} that it is possible to
define transmission conditions at the algebraic level, including a
global communication component, such that the associated optimal
parallel Schwarz method converges in two (!) iterations, independently
of the number of subdomains, the decomposition and the partial
differential equation that is solved. Such a global
communication component is also present in the recent work
\cite{Claeys:2019:ADD,Claeys:2021:RTO} for time harmonic wave
propagation problems, which is based on earlier work of
\citeasnoun{claeys2019new}, where the multi-trace formulation was
interpreted as an optimized Schwarz method, including cross points.
We will use in what follows the two specific domain decompositions
shown in Figure \ref{1Dand2DDecompositionFig}, namely one dimensional,
sequential or strip decompositions, and two dimensional decompositions
including cross points. For sequential domain decompositions, we can
use Fourier analysis techniques to accurately study the influence of
the transmission conditions used on the convergence of the domain
decomposition iteration, whereas for two dimensional domain
decompositions, such results are not yet available.
\section{Two subdomain analysis}\label{2SubSec}
It is very instructive to understand Schwarz methods by domain truncation starting first with a
simple two subdomain decomposition, since then many detailed convergence properties of the Schwarz
methods can be obtained by direct, analytical computations. We consider therefore the strip
decomposition shown in Figure \ref{1Dand2DDecompositionFig} on the right but with only two
subdomains, $\Omega_1:=(0,X_1^r)\times(0,Y)$ and $\Omega_2:=(X_2^l,1)\times(0,Y)$. Throughout the
paper, by $y=O(x)$ we mean $C_1|x|\le |y|\le C_2|x|$ for some constants $C_1, C_2>0$ independent of
$x$, and we write the asymptotic $y\sim x$ if $\lim y/x =1$ and $x$ consists of the leading terms.
\subsection{Laplace type problems}
We study first the screened Laplace equation (sometimes also called
the Helmholtz equation with the ``good sign''),
\begin{equation}\label{ScreenedLaplaceEquation}
(\eta-\Delta)u=f \quad \mbox{in $\Omega:=(0,1)\times(0,Y)$},
\end{equation}
with $\eta\ge 0$. As boundary conditions, we impose on the left and right
a Robin boundary condition,
\begin{equation}\label{bclr}
{\cal B}^l(u):=\partial_nu+p^lu=g^l,\quad
{\cal B}^r(u):=\partial_nu+p^ru=g^r,
\end{equation}
where $\partial_n$ denotes the unit outer normal derivative
({\it i.e.}\ $-\partial_x$ on the left and $\partial_x$ on the right). On
top and bottom we impose either a Dirichlet or a Neumann condition,
\begin{equation}\label{bctb}
\arraycolsep0.2em
\begin{array}{rcll}
{\cal B}^b(u)&=&g^b &\mbox{with either ${\cal B}^b(u):=u$ or
${\cal B}^b(u):=\partial_nu$}, \\
{\cal B}^t(u)&=&g^t &\mbox{with either ${\cal B}^t(u):=u$ or
${\cal B}^t(u):=\partial_nu$}.
\end{array}
\end{equation}
An alternating Schwarz method then starts with an initial guess
$u_2^0$ in subdomain $\Omega_2$ and performs for iteration index
$n=1,2,\ldots$ alternatingly solutions on the subdomains,
\begin{equation}\label{SMDT}
\arraycolsep0.3em
\begin{array}{rcllrcll}
(\eta-\Delta)u_1^n&=&f &\mbox{in $\Omega_1$},&
(\eta-\Delta)u_2^n&=&f &\mbox{in $\Omega_2$},\\
{\cal B}_1^r(u_1^n)&=& {\cal B}_1^r(u_2^{n-1}) &\mbox{at $x=X_1^r$}, &
{\cal B}_2^l(u_2^n)&=& {\cal B}_2^l(u_1^n) &\mbox{at $x=X_2^l$},\\
{\cal B}^l(u_1^n)&=&g^l &\mbox{at $x=0$}, &
{\cal B}^r(u_2^n)&=&g^r &\mbox{at $x=1$}, \\
{\cal B}^b(u_1^n)&=&g^b &\mbox{at $y=0$}, &
{\cal B}^b(u_2^n)&=&g^b &\mbox{at $y=0$}, \\
{\cal B}^t(u_1^n)&=&g^t &\mbox{at $y=Y$}, &
{\cal B}^t(u_2^n)&=&g^t &\mbox{at $y=Y$}.
\end{array}
\end{equation}
Here the key ingredients are the transmission conditions ${\cal B}_1^r$
and ${\cal B}_2^l$, which in the classical Schwarz method are
Dirichlet, see \R{AlternatingSchwarzMethod}, but in Schwarz
methods based on domain truncation, {\it e.g.}~optimized Schwarz methods,
they are of the form
\begin{equation}\label{TC}
{\cal B}_1^r(u):=\partial_nu+{\cal S}_1^r(u),\quad
{\cal B}_2^l(u):=\partial_nu+{\cal S}_2^l(u).
\end{equation}
Here ${\cal S}_1^r$ and ${\cal S}_2^l$ can either be constants, ${\cal
S}_1^r=p_1$, ${\cal S}_2^l=p_2$, $p_j\in \mathbb{R}$, which leads
to Robin transmission conditions, or more general tangential
operators acting along the interface, {\it e.g.}\ ${\cal
S}_1^r=p_1-q_1\partial_{yy}$ with $p_1,q_1\in\mathbb{R}$, which
would be Ventcell transmission conditions, or one can also consider
more general transmission conditions involving rational functions or
PMLs, see \citeasnoun{gander2019class} and references therein.
In order to study the convergence of the Schwarz method \R{SMDT},
we introduce the error $e_j^n:=u-u_j^n$, $j=1,2$, which by linearity satisfies
the same iteration equations as the original algorithm, but with zero
data, {\it i.e.}~
\begin{equation}\label{SMDTerror}
\arraycolsep0.3em
\begin{array}{rcllrcll}
(\eta-\Delta)e_1^n&=&0 &\mbox{in $\Omega_1$},&
(\eta-\Delta)e_2^n&=&0 &\mbox{in $\Omega_2$},\\
{\cal B}_1^r(e_1^n)&=& {\cal B}_1^r(e_2^{n-1}) &\mbox{at $x=X_1^r$}, &
{\cal B}_2^l(e_2^n)&=& {\cal B}_2^l(e_1^n) &\mbox{at $x=X_2^l$},\\
{\cal B}^l(e_1^n)&=&0 &\mbox{at $x=0$}, &
{\cal B}^r(e_2^n)&=&0 &\mbox{at $x=1$}, \\
{\cal B}^b(e_1^n)&=&0 &\mbox{at $y=0$}, &
{\cal B}^b(e_2^n)&=&0 &\mbox{at $y=0$}, \\
{\cal B}^t(e_1^n)&=&0 &\mbox{at $y=Y$}, &
{\cal B}^t(e_2^n)&=&0 &\mbox{at $y=Y$},
\end{array}
\end{equation}
as one can easily verify by directly evaluating the expressions on the
left of the equal signs, {\it e.g.}
$$
(\eta-\Delta)e_1^n=(\eta-\Delta)(u-u_1^n)=f-f=0.
$$
To obtain detailed information on the functioning of such Schwarz
methods, it is best to expand the errors in a Fourier series in the
$y$ direction\footnote{Note the different symbols $\hat{e}_j^n$ for
the cosine, and $\hat{\epsilon}_j^n$ for the sine coefficients.},
\begin{equation}\label{CosSineExpansion}
e_j^n=\sum_{\tilde{k}=0}^\infty\hat{e}_j^n(\tilde{k})\cos(\frac{\tilde{k}\pi}{Y}y)+
\sum_{\tilde{k}=1}^\infty\hat{\epsilon}_j^n(\tilde{k})\sin(\frac{\tilde{k}\pi}{Y}y).
\end{equation}
Now if at the bottom and top we have Dirichlet conditions, all the
error coefficients of the cosine are zero, and if we have Neumann
conditions, all the error coefficients of the sine are zero. In either
case, inserting the Fourier series into the error equations
\R{SMDTerror}, we obtain by orthogonality of the sine and cosine
functions for each cosine error Fourier mode $\hat{e}_j^n$, $j=1,2$
(and analogously for the sine error Fourier mode $\hat{\epsilon}_j^n$)
the Schwarz iteration
\begin{equation}\label{SMDTerrorF}
\thickmuskip=0.6mu
\medmuskip=0.2mu
\begin{aligned}
(\eta-\partial_{xx}+k^2)\hat{e}_1^n&=0& &\mbox{in $(0,X_1^r)$},&
(\eta-\partial_{xx}+k^2)\hat{e}_2^n&=0& &\mbox{in $(X_2^l,1)$},\\
\beta_1^r(\hat{e}_1^n)&=\beta_1^r(\hat{e}_2^{n-1})& &\mbox{at $x=X_1^r$}, &
\beta_2^l(\hat{e}_2^n)&=\beta_2^l(\hat{e}_1^n)& &\mbox{at $x=X_2^l$},\\
\beta^l(\hat{e}_1^n)&=0& &\mbox{at $x=0$}, &
\beta^r(\hat{e}_2^n)&=0& &\mbox{at $x=1$},
\end{aligned}
\end{equation}
where we defined the frequency variable $k:=\frac{\tilde{k}\pi}{Y}$,
and $\beta_1^r$, $\beta_2^l$, $\beta^l$, $\beta^r$ denote the Fourier
transforms of the boundary operators, also called their symbols,
{\it e.g.}~$\beta_1^r=\partial_n+\sigma_1^r$ with $\sigma_1^r$ the symbol of
the tangential operator chosen in \R{TC}. If this operator were for
example ${\cal S}_1^r=p_1-q_1\partial_{yy}$, $p_1$, $q_1$ constants, then its
symbol would be $\sigma_1^r=p_1+q_1k^2$, from the two derivatives acting
on the cosine, which also leads to the sign change from minus to plus.
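Concretely, the symbol is obtained by applying the tangential operator to a Fourier mode,
$$
{\cal S}_1^r\big(\cos(ky)\big)=(p_1-q_1\partial_{yy})\cos(ky)
=p_1\cos(ky)+q_1k^2\cos(ky)=\sigma_1^r\cos(ky),
$$
so indeed $\sigma_1^r=p_1+q_1k^2$.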
Solving the ordinary differential equations in the error iteration
\R{SMDTerrorF}, we obtain using the outer Robin boundary conditions
for each Fourier mode $k$ the solutions\footnote{These computations can easily be performed in Maple, see Appendix B.} (with $\underline{x}:=1-x$)
\begin{equation}\label{ScreenedLaplaceSols}
\begin{aligned}
\hat{e}_1^n(x,k)&=A_1^n(k)\left({\sqrt{k^2+\eta}\cosh(\sqrt{k^2+\eta}x)
+ p^l\sinh(\sqrt{k^2 +\eta}x)}\right),\\
\hat{e}_2^n(x,k)&=A_2^n(k)\left({\sqrt{k^2+\eta}\cosh(\sqrt{k^2+\eta}\underline{x}) + p^r\sinh(\sqrt{k^2 +\eta}\underline{x})}\right).
\end{aligned}
\end{equation}
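These closed-form modes can be verified directly. The following small Python sketch (our own numerical check, complementary to the Maple worksheets in Appendix B; all parameter values are illustrative) confirms that $\hat{e}_1^n$ from \R{ScreenedLaplaceSols}, normalized with $A_1^n(k)=1$, satisfies both the ordinary differential equation and the outer Robin condition $\beta^l(\hat{e}_1^n)=0$ at $x=0$:

```python
import math

# illustrative parameter values (not from the text)
eta, k, pl = 1.0, 2.0, 5.0
s = math.sqrt(k*k + eta)

def e1(x):
    # left-subdomain error mode from (ScreenedLaplaceSols), with A_1^n(k) = 1
    return s*math.cosh(s*x) + pl*math.sinh(s*x)

h, x0 = 1e-4, 0.3
# residual of (eta - d_xx + k^2) e1 = 0, with a centered second difference
res_ode = (eta + k*k)*e1(x0) - (e1(x0+h) - 2*e1(x0) + e1(x0-h))/(h*h)
# residual of the outer Robin condition -d_x e1 + pl*e1 = 0 at x = 0
res_bc = -(e1(h) - e1(-h))/(2*h) + pl*e1(0.0)
```

Both residuals vanish up to the finite-difference truncation error, since $\hat{e}_1''=(k^2+\eta)\hat{e}_1$ and $\hat{e}_1'(0)=p^l\hat{e}_1(0)$ hold exactly.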
To determine the two remaining constants $A_1^n(k)$ and $A_2^n(k)$ we
insert the solutions into the transmission conditions in
\R{SMDTerrorF}, which leads to
\begin{equation}
A_1^n=\rho_1(k,\eta,p^l,p^r,\sigma_1^r)A_2^{n-1},\quad
A_2^n=\rho_2(k,\eta,p^l,p^r,\sigma_2^l)A_1^{n},
\end{equation}
with the two quantities
\begin{equation}\label{rho12}
\arraycolsep0.1em
\begin{array}{rcl}
\rho_1 &=& \frac{(k^2 - \sigma_1^rp^r + \eta)
\sinh(\sqrt{k^2 + \eta}(X_1^r - 1))
- \cosh(\sqrt{k^2 + \eta}(X_1^r - 1))
\sqrt{k^2 + \eta}(p^r - \sigma_1^r)}
{(k^2 + \sigma_1^rp^l + \eta)\sinh(\sqrt{k^2 + \eta}X_1^r)
+ \cosh(\sqrt{k^2 + \eta}X_1^r)\sqrt{k^2 + \eta}(p^l + \sigma_1^r)},\\
\rho_2&=&\frac{(k^2 - \sigma_2^lp^l + \eta)
\sinh(\sqrt{k^2 + \eta}X_2^l)
+ \cosh(\sqrt{k^2 + \eta}X_2^l)
\sqrt{k^2 + \eta}(p^l - \sigma_2^l)}
{(k^2 + \sigma_2^lp^r + \eta)\sinh(\sqrt{k^2 + \eta}(X_2^l - 1))
- \cosh(\sqrt{k^2 + \eta}(X_2^l - 1))\sqrt{k^2 + \eta}(p^r + \sigma_2^l)}.
\end{array}
\end{equation}
Their product represents the convergence factor of the Schwarz method,
\begin{equation}\label{RhoOSM}
\rho(k,\eta,p^l,p^r,\sigma_1^r,\sigma_2^l):=\rho_1(k,\eta,p^l,p^r,\sigma_1^r)\rho_2(k,\eta,p^l,p^r,\sigma_2^l),
\end{equation}
{\it i.e.}\ it determines by which coefficient the corresponding error
Fourier mode $k$ is multiplied over each alternating Schwarz
iteration.
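The two factors in \R{rho12} are straightforward to transcribe and test numerically; the following Python sketch (ours, with purely illustrative parameter values) implements them and checks two limiting cases that we will encounter below: with transparent outer and transmission parameters $p^l=p^r=\sigma_1^r=\sigma_2^l=\sqrt{k^2+\eta}$ the factor vanishes, and letting $\sigma_1^r,\sigma_2^l\to\infty$ with transparent outer conditions recovers the overlap-driven contraction $e^{-2(X_1^r-X_2^l)\sqrt{k^2+\eta}}$:

```python
import math

def rho1(k, eta, X1r, pl, pr, s1):
    """First factor of the convergence factor, transcribed from (rho12)."""
    s = math.sqrt(k*k + eta)
    num = ((k*k - s1*pr + eta)*math.sinh(s*(X1r - 1))
           - math.cosh(s*(X1r - 1))*s*(pr - s1))
    den = ((k*k + s1*pl + eta)*math.sinh(s*X1r)
           + math.cosh(s*X1r)*s*(pl + s1))
    return num/den

def rho2(k, eta, X2l, pl, pr, s2):
    """Second factor of the convergence factor, transcribed from (rho12)."""
    s = math.sqrt(k*k + eta)
    num = ((k*k - s2*pl + eta)*math.sinh(s*X2l)
           + math.cosh(s*X2l)*s*(pl - s2))
    den = ((k*k + s2*pr + eta)*math.sinh(s*(X2l - 1))
           - math.cosh(s*(X2l - 1))*s*(pr + s2))
    return num/den

def rho(k, eta, X1r, X2l, pl, pr, s1, s2):
    # convergence factor (RhoOSM) of the alternating Schwarz method
    return rho1(k, eta, X1r, pl, pr, s1)*rho2(k, eta, X2l, pl, pr, s2)
```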
Since the expression for the convergence factor $\rho$ looks rather
complicated, it is instructive to look at the special case first where
the domain, and thus the subdomains, are unbounded on the left and
right. This can be obtained from the result above by introducing for
the outer boundary conditions a transparent one, {\it i.e.}\ choosing for the
Robin parameters the symbol of the DtN
operator\footnote{\label{footnoteDtN} The symbol of the DtN operator
for the transparent boundary condition can be obtained by solving
{\it e.g.}~$(\eta-\partial_{xx}+k^2)\hat{e}_1=0$ on the outer domain
$(-\infty,0)$ with Dirichlet data $\hat{e}_1(0)=\hat{g}$, which
gives $\hat{e}_1=\hat{g}e^{\sqrt{k^2+\eta}x}$, because solutions
must remain bounded as $x\to-\infty$. Since
$\partial_x\hat{e}_1=\sqrt{k^2+\eta}\hat{e}_1$, the outer solution
$\hat{e}_1$ satisfies for any $x\in(-\infty,0]$ the identity
$-\partial_x\hat{e}_1+\sqrt{k^2+\eta}\hat{e}_1=0$. One could therefore
also solve this outer problem on any bounded domain, {\it e.g.}~$(a,0)$
imposing at $x=a$ the transparent boundary condition
$\partial_n\hat{e}_1+\sqrt{k^2+\eta}\hat{e}_1=0$, since the outward
normal $\partial_n=-\partial_x$, and get the same solution. Note that
$\sqrt{k^2+\eta}$ is the symbol of the DtN map $\hat{g}\mapsto
\partial_n \hat{e}_1$ with $\partial_n=\partial_x$, {\it i.e.}\ the operator
that takes the Dirichlet data $\hat{g}$, solves the problem on the
domain, and then computes the Neumann data for the outward normal
derivative.} $p^r:=\sqrt{k^2+\eta}$, $p^l:=\sqrt{k^2+\eta}$. This
leads after simplification to the convergence factor
\begin{equation}\label{RhoOSMUnbounded}
\rho(k,\eta,\sigma_1^r,\sigma_2^l)=
\frac{\sqrt{k^2+\eta}-\sigma_1^r}{\sqrt{k^2+\eta}+\sigma_1^r}
\frac{\sqrt{k^2+\eta}-\sigma_2^l}{\sqrt{k^2+\eta}+\sigma_2^l}
e^{-2(X_1^r-X_2^l)\sqrt{k^2+\eta}}.
\end{equation}
This convergence factor shows us a very important property of these
Schwarz methods: if we choose
$\sigma_1^r=\sigma_2^l:=\sqrt{k^2+\eta}$, the tangential symbol of the
transparent boundary condition, the convergence factor vanishes
identically for all Fourier frequencies $k$, {\it i.e.}\ after two
consecutive subdomain solves, one on the left and one on the right (or
vice versa), the error in each Fourier mode is zero, {\it i.e.}\ we have the
exact solution on the subdomain after two alternating subdomain
solves! This is called an optimal Schwarz method, see footnote
\ref{footnoteoptimalSchwarz}. If we thus used a double sweep, first
solving on the left subdomain and then on the right, followed by
another solve on the left (or a double sweep in the other direction),
we would have the solution on both subdomains, which shows for two
subdomains the double sweep result illustrated in Figure
\ref{PoissonOptimalSchwarzFig} for three subdomains. This result holds
in general for many subdomains in the strip decomposition case; it was
first proved in \citeasnoun[Result 3.1]{Nataf93} for an advection diffusion
problem. If we use the parallel Schwarz method for two subdomains,
then we need two iterations in order to have each subdomain solved one
after the other, so the optimal parallel Schwarz method for two
subdomains will converge in two iterations, a result that generalizes
to convergence in $J$ iterations for $J$ subdomains in the strip
decomposition case, first proved in \citeasnoun[Proposition 2.4]{NRS94}.
In the bounded domain case, we can still obtain these same results by
choosing in the transmission conditions the tangential symbols of the

transparent boundary conditions for the bounded subdomains, which is
equivalent to choosing $\sigma_1^r$ and $\sigma_2^l$ such that
$\rho_1$ and $\rho_2$ in \R{rho12} vanish, which leads to
\begin{equation}\label{DtNbdd}
\begin{array}{rcl}
\sigma_1^r&=&\frac{\sqrt{k^2+\eta}\left(\tanh(\sqrt{k^2+\eta}(X_1^r-1))\sqrt{k^2+\eta} - p^r\right)}{\tanh(\sqrt{k^2+\eta}(X_1^r - 1))p^r-\sqrt{k^2+\eta}},\\
\sigma_2^l&=&\frac{\sqrt{k^2+\eta}\left(\tanh(\sqrt{k^2+\eta}X_2^l)\sqrt{k^2+\eta} + p^l\right)}{\tanh(\sqrt{k^2+\eta}X_2^l)p^l+\sqrt{k^2+\eta}}.
\end{array}
\end{equation}
As in the unbounded domain case, these values correspond to the
symbols of the associated DtN operators on the bounded domain, as one
can verify by a direct computation using the solutions
\R{ScreenedLaplaceSols} on their respective domains, as done for
the unbounded domain in footnote \ref{footnoteDtN}. We see that in the
bounded domain case, the symbols of the DtN maps in \R{DtNbdd},
and hence the DtN maps also, depend on the outer boundary conditions,
since both the domain parameters ($X_1^r$ and $X_2^l$) and the Robin
parameters in the outer boundary conditions ($p^l$ and $p^r$) appear
in them. However, for large frequencies $k$, we have
$$
\sigma_1^r\sim\sqrt{k^2+\eta},\quad \sigma_2^l\sim\sqrt{k^2+\eta},
$$
since $\tanh z\to\pm1$ as $z\to\pm\infty$, and thus only low frequencies see the difference
from the bounded to the unbounded case in the screened Laplace problem.
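The vanishing of $\rho_1$ with the DtN symbol from \R{DtNbdd}, and the large-$k$ behavior just stated, can both be checked numerically. The sketch below (our own, self-contained, with illustrative parameter values) transcribes $\sigma_1^r$ from \R{DtNbdd} and $\rho_1$ from \R{rho12}:

```python
import math

def sigma1_dtn(k, eta, X1r, pr):
    # symbol of the bounded-subdomain DtN map, from (DtNbdd)
    s = math.sqrt(k*k + eta)
    t = math.tanh(s*(X1r - 1))
    return s*(t*s - pr)/(t*pr - s)

def rho1(k, eta, X1r, pl, pr, s1):
    # first factor of the convergence factor, from (rho12)
    s = math.sqrt(k*k + eta)
    num = ((k*k - s1*pr + eta)*math.sinh(s*(X1r - 1))
           - math.cosh(s*(X1r - 1))*s*(pr - s1))
    den = ((k*k + s1*pl + eta)*math.sinh(s*X1r)
           + math.cosh(s*X1r)*s*(pl + s1))
    return num/den
```

Choosing $\sigma_1^r$ from `sigma1_dtn` annihilates $\rho_1$ up to rounding, and for large $k$ the symbol approaches $\sqrt{k^2+\eta}$, since $\tanh$ saturates.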
Choosing Robin transmission conditions, ${\cal
S}_1^r:=p_1^r=\sigma_1^r$ and ${\cal S}_2^l:=p_2^l=\sigma_2^l$ with
$p_1^r,p_2^l\in\mathbb{R}$ in \R{TC}, and taking the limit in the
convergence factor \R{RhoOSMUnbounded} as the Robin parameters
$p_1^r$ and $p_2^l$ go to infinity, we find
\begin{equation}\label{RhoAltSUnbounded}
\rho(k,\eta)=e^{-2(X_1^r-X_2^l)\sqrt{k^2+\eta}},
\end{equation}
which is the convergence factor of the classical alternating Schwarz
method on the unbounded domain, since the transmission conditions
\R{TC} become Dirichlet transmission conditions in this limit of the
Robin transmission conditions. We see now explicitly the overlap
$L:=X_1^r-X_2^l$ appearing in the exponential function. The classical
Schwarz method for the screened Laplace problem therefore converges
for all Fourier modes $k$, provided there is overlap, and $\eta>0$. If
$\eta=0$, {\it i.e.}\ we consider the Laplace problem, then the Fourier mode
$k=0$ does not contract on the unbounded domain. Recall however that
$k=0$ is only present in the cosine expansion for Neumann boundary
conditions on top and bottom; it is the constant mode, and the Schwarz
method on the unbounded domain does indeed not contract for this
mode. For Dirichlet boundary conditions on top and bottom, {\it i.e.}\
considering the sine series in \eqref{CosSineExpansion}, the smallest
Fourier frequency is $k=\frac{\pi}{Y}>0$ so the Schwarz method is
contracting, even with $\eta=0$. Note also that the contraction is
faster for large Fourier frequencies $k$ than for small ones due to
the exponential function, and hence the Schwarz method is a smoother
for the screened Laplace equation. A comparison of the classical
Schwarz convergence factor \R{RhoAltSUnbounded} with the convergence
factor for the optimized Schwarz method with Robin transmission
conditions in \R{RhoOSMUnbounded} shows that the latter contains the
former, but in addition also the two fractions in front which are also
smaller than one for suitable choices of the Robin parameters. The
optimized Schwarz method therefore always converges faster than the
classical Schwarz method, and furthermore can also converge without
overlap, which was the original reason for Lions to propose Robin
transmission conditions.
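This comparison can be read off numerically from the two unbounded-domain convergence factors; the following sketch (illustrative parameter values only) evaluates \R{RhoAltSUnbounded} and \R{RhoOSMUnbounded} with $\sigma_1^r=\sigma_2^l=p$ and confirms that the Robin variant contracts faster, and still contracts without overlap:

```python
import math

def rho_classical(k, eta, L):
    # (RhoAltSUnbounded): Dirichlet transmission conditions, overlap L
    s = math.sqrt(k*k + eta)
    return math.exp(-2*L*s)

def rho_robin(k, eta, L, p):
    # (RhoOSMUnbounded) with sigma_1^r = sigma_2^l = p
    s = math.sqrt(k*k + eta)
    return ((s - p)/(s + p))**2 * math.exp(-2*L*s)
```

The front fractions are strictly smaller than one for any $p>0$ with $p\ne\sqrt{k^2+\eta}$, while the classical factor equals one for every $k$ when the overlap vanishes.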
To see how the classical Schwarz method contracts on a bounded domain,
we show in Figure \ref{SchwarzScreenedLaplaceRhos} (left) the
different cases for the outer boundary conditions ($p^l=p^r=5$ in the
Robin case), by plotting \R{RhoOSM} in the limit where the parameters
in the Robin transmission conditions go to infinity, and the model parameter
$\eta=1$, for a small overlap, $L=X_1^r-X_2^l=0.51-0.49=0.02$.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/RhoAltS}
\includegraphics[width=0.48\textwidth]{Figures/RhoOSM}
\caption{Left: classical Schwarz convergence factor for
$\eta-\Delta$ and different outer boundary conditions on the left
and right. Right: corresponding Schwarz convergence
factor with Robin transmission conditions.}
\label{SchwarzScreenedLaplaceRhos}
\end{figure}
This shows that the different outer boundary conditions only influence the convergence of the lowest
frequencies ($k$ small) in the error, for larger frequencies there is no influence. This is due to
the diffusive nature of the screened Laplace equation: high frequency components are damped rapidly
by the (screened) Laplace operator, and thus they do not see the different outer boundary conditions.
On the right in Figure \ref{SchwarzScreenedLaplaceRhos} we show the corresponding convergence
factors with Robin transmission conditions, chosen as $p_1^r=p_2^l=3$. This indicates that there is
an optimal choice which leads to the fastest possible convergence, which in our example is achieved
for the Neumann outer boundary condition with the choice $p_1^r=p_2^l\approx 3$, since the maximum
of the convergence factor is minimized by equioscillation, {\it i.e.}\ the convergence factor at $k=0$
equals the convergence factor at the interior maximum at around $k=18$. For different outer boundary
conditions, there would be a better value of $p_1^r=p_2^l$ that makes {\it e.g.}~ the blue curve
for Dirichlet outer boundary conditions equioscillating. This optimization process led to the name
optimized Schwarz methods.
Let us compute the optimized parameter values: for the simplified
situation where we choose the same Robin parameter in the transmission
condition, $p_1^r=p_2^l=p$, we need to solve the min-max problem
\begin{equation}
\min_p\max_{k\in[k_{\min},\infty)}|\rho(k,\eta,p^l,p^r,p,p)|
\end{equation}
with the convergence factor $\rho$ from \R{RhoOSM}. The solution
is given by equioscillation, as indicated in Figure
\ref{SchwarzScreenedLaplaceRhos}, {\it i.e.}\ we have to solve
\begin{equation}\label{equioscillationEq}
\rho(k_{\min},\eta,p^l,p^r,p^*,p^*)=\rho(\bar{k},\eta,p^l,p^r,p^*,p^*),
\end{equation}
for the optimal Robin parameter $p^*$, where $\bar{k}$ denotes the
location of the interior maximum visible in Figure
\ref{SchwarzScreenedLaplaceRhos} (right). We observe numerically that
$\bar{k}\sim C_k\frac{1}{L^{2/3}}$, and $p^*\sim
C_p\frac{1}{L^{1/3}}$, see also \citeasnoun{GanderOSM} for a proof of this,
and \citeasnoun{bennequin2009homographic} for a more comprehensive analysis of such
min-max problems. Inserting this Ansatz into the system of
equations formed by \R{equioscillationEq} and the derivative
condition for the local maximum
\begin{equation}
\partial_k\rho(\bar{k},\eta,p^l,p^r,p^*,p^*)=0,
\end{equation}
we find by asymptotic expansion\footnote{For the expansions involving
$\bar{k}$, it is much easier to expand the convergence factor of the
unbounded domain analysis \R{RhoOSMUnbounded}, which gives the
same result as the expansion of the convergence factor
\R{RhoOSM} with \R{rho12} from the bounded domain analysis,
since they behave the same for large $k$, see Figure
\ref{SchwarzScreenedLaplaceRhos} on the right. This approach was
discovered in \citeasnoun{gander2014optimized} under the name asymptotic
approximation of the convergence factor, see also
\citeasnoun{gander2017optimized}, \citeasnoun{chen2021optimized} where this new
technique was used.} for small overlap $L$
\begin{eqnarray}
\partial_k\rho(\bar{k},\eta,p^l,p^r,p^*,p^*)&=&
\frac{2(2C_p-C_k^2)}{C_k^2}L+O(L^{4/3}),\label{de}\\
\rho(k_{\min},\eta,p^l,p^r,p^*,p^*)&=&1-\frac{C}{C_p}L^{1/3}+O(L^{2/3}),\label{lfe}\\
\rho(\bar{k},\eta,p^l,p^r,p^*,p^*)&=&1-2\frac{C_k^2 + 2C_p}{C_k}L^{1/3}+O(L^{2/3}),\label{hfe}
\end{eqnarray}
where all the information about the geometry\footnote{See also
\citeasnoun{gander2016optimized} where it is shown that variable
coefficients also influence essentially the low frequency
behavior.} is in the constant stemming from the value of the
convergence factor at the lowest frequency \R{lfe},
\begin{equation}\label{Cgeom}
C=\frac{2s\left((k_{\min}^2 + p^lp^r + \eta)s_s + s c_s(p^l +
p^r)\right)}{\left((s_sp^r+sc_s)c_{sx}-(c_sp^r + s_ss)s_{sx}\right)\left(s_{sx}p^l + c_{sx}s\right)}.
\end{equation}
Here we let $s:=\sqrt{k_{\min}^2+\eta}$, $c_s:=\cosh(s)$, $s_s:=\sinh(s)$,
$c_{sx}:=\cosh(sX_2^l)$, $s_{sx}:=\sinh(sX_2^l)$ to shorten the
formula. Setting the leading order term of the derivative in
\R{de} to zero, and the other two leading terms from \R{lfe}
and \R{hfe} to be equal for equioscillation leads to the system
for the constants $C_k$ and $C_p$,
$$
\frac{2(2C_p-C_k^2)}{C_k^2}=0,\quad \frac{C}{C_p}=2\frac{C_k^2 +
2C_p}{C_k},
$$
whose solution is
$$
C_k=\left(\frac{C}{2}\right)^{1/3}, \quad
C_p=\frac{1}{2}\left(\frac{C}{2}\right)^{2/3}.
$$
The best choice of the Robin parameter is therefore given for small
overlap $L$ by
\begin{equation}\label{pstar}
p^*=\frac{1}{2}\left(\frac{C}{2}\right)^{2/3}L^{-1/3},
\end{equation}
with the geometry constant $C$ from \R{Cgeom}, which leads to a
convergence factor of the associated optimized Schwarz method
\begin{equation}\label{OptRho}
\rho^*\sim 1-4\left(\frac{C}{2}\right)^{1/3}L^{1/3}.
\end{equation}
We show in Figure \ref{SchwarzScreenedLaplaceOptRhos} on the left the
contraction factor for this optimized Schwarz method for the different
outer boundary conditions, and also for the unbounded domain case, for
the same parameter choice as in Figure
\ref{SchwarzScreenedLaplaceRhos}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/RhoOSMopt}
\includegraphics[width=0.48\textwidth]{Figures/RhoOSMu}
\caption{Left: optimized Schwarz convergence factor for
$\eta-\Delta$ and different outer boundary conditions. Right:
corresponding convergence factors when using the optimized
parameter from the simpler unbounded domain analysis.}
\label{SchwarzScreenedLaplaceOptRhos}
\end{figure}
We see that the contraction factor equioscillates using the asymptotic
formulas already for an overlap $L=0.02$, so the formulas are very
useful in practice. We see also that the unbounded domain analysis
which does not take into account the geometry leads to an optimized
convergence factor in between the other ones. In practice one
can also use the corresponding simpler formula from \citeasnoun[equation
(4.21)]{GanderOSM}, namely
\begin{equation}\label{pstarunbounded}
p^*=\frac{1}{2}(4(k_{\min}^2+\eta))^{1/3}L^{-1/3}
\end{equation}
in the case of the screened Laplace equation, which only deteriorates
the performance a little, see Figure
\ref{SchwarzScreenedLaplaceOptRhos} on the right.
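The quality of the asymptotic formula \R{pstarunbounded} can be tested by a brute-force discretization of the min-max problem for the unbounded-domain convergence factor \R{RhoOSMUnbounded} with $\sigma_1^r=\sigma_2^l=p$; the following sketch does this (the grids and parameter values are illustrative choices of ours, not from the text):

```python
import math

eta, kmin, L = 1.0, math.pi, 1e-3

def rho(k, p):
    # Robin convergence factor (RhoOSMUnbounded) with sigma_1^r = sigma_2^l = p
    s = math.sqrt(k*k + eta)
    return ((s - p)/(s + p))**2*math.exp(-2*L*s)

# geometric grid in k, fine enough to resolve the interior maximum
ks = [kmin*(5000.0/kmin)**(i/1500.0) for i in range(1501)]

def worst(p):
    return max(rho(k, p) for k in ks)

ps = [5.0 + 0.2*i for i in range(200)]   # search grid for the Robin parameter
p_num = min(ps, key=worst)               # discrete min-max solution

# asymptotic formula (pstarunbounded)
p_star = 0.5*(4.0*(kmin**2 + eta))**(1.0/3.0)*L**(-1.0/3.0)
```

The discrete optimum agrees with the asymptotic formula within a few percent already for $L=10^{-3}$, and the optimized parameter indeed equioscillates the convergence factor between $k_{\min}$ and the interior maximum.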
Another, easy to use choice is to use a low frequency Taylor expansion
about $k=0$ of the optimal symbol of the DtN operator
\R{DtNbdd}, since the classical Schwarz method is not working
well for low frequencies, as we have seen in Figure
\ref{SchwarzScreenedLaplaceRhos} on the left. Expanding
the optimal DtN symbols in \R{DtNbdd}, we obtain
\begin{equation}\label{pT}
\begin{array}{rcl}
p_1^r&=&\frac{\sqrt{\eta}\left(\tanh(\sqrt{\eta}(X_1^r-1))\sqrt{\eta} - p^r\right)}{\tanh(\sqrt{\eta}(X_1^r - 1))p^r-\sqrt{\eta}}+O(k^2),\\
p_2^l&=&\frac{\sqrt{\eta}\left(\tanh(\sqrt{\eta}X_2^l)\sqrt{\eta} + p^l\right)}{\tanh(\sqrt{\eta}X_2^l)p^l+\sqrt{\eta}}+O(k^2).
\end{array}
\end{equation}
We see that the Taylor parameters also take into account the
geometry. Using just the zeroth order term leads to the Taylor
convergence factors shown in Figure
\ref{SchwarzScreenedLaplaceTaylorRhos} on the left.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/RhoOSMT0}
\includegraphics[width=0.48\textwidth]{Figures/RhoOSMT0u}
\caption{Left: convergence factor for Schwarz with Taylor based
Robin transmission conditions for $\eta-\Delta$ and different
outer boundary conditions on the left. Right: corresponding Taylor
Schwarz convergence factor using the zeroth order Taylor
transmission conditions from the unbounded domain analysis,
$p_1^r=p_2^l=\sqrt{\eta}$.}
\label{SchwarzScreenedLaplaceTaylorRhos}
\end{figure}
Their maximum $\bar{k}$ is affected by the outer boundary conditions,
and can be explicitly computed when the overlap $L:=X_1^r-X_2^l$ becomes
small\footnote{We give in Appendix B the Maple commands
to show how such technical computations can be performed automatically.},
\begin{equation}\label{TaylorRho}
\bar{k}\sim \frac{\sqrt{p_1^r+p_2^l}}{\sqrt{L}},\quad
\rho\sim1-4\sqrt{p_1^r+p_2^l}\sqrt{L},
\end{equation}
where $p_1^r$ and $p_2^l$ are from \eqref{pT} without the order term,
and setting $X_1^r:=X_2^l$ in $p_1^r$ due to the expansion for $L$
small.
On the right, we show the results when using the first term in the
Robin transmission conditions from the Taylor expansion of the optimal
symbols from the unbounded domain analysis,
$\sqrt{k^2+\eta}=\sqrt{\eta}+\frac{1}{2\sqrt{\eta}}k^2+O(k^4)$,
{\it i.e.}\ $p_1^r=p_2^l=\sqrt{\eta}$. We see that this choice works even
better compared to the precise Taylor coefficients adapted to the
outer boundary conditions in the Neumann case, but slightly worse for
the Dirichlet and Robin case. Nevertheless, convergence with the
simple Taylor conditions is not as good as with the optimized ones in
Figure \ref{SchwarzScreenedLaplaceOptRhos}, and this becomes more
pronounced when the overlap $L$ becomes small as shown in
\eqref{TaylorRho} compared to \eqref{OptRho}. Higher order
transmission conditions can also be used and analyzed, see
\citeasnoun{GanderOSM}, and give even better performance with weaker
asymptotic dependence on the overlap.
\subsection{Helmholtz problems}
We now investigate what changes if the equation is the Helmholtz equation,
\begin{equation}\label{HelmholtzEquation}
(\Delta+\omega^2)u=f \quad \mbox{in $\Omega:=(0,1)\times(0,Y)$},
\end{equation}
with $\omega\in\mathbb{R}$. As boundary conditions, we impose on the left and
right again the Robin boundary condition \R{bclr}, and on top and
bottom first a Dirichlet or a Neumann condition as in \R{bctb}. The
alternating Schwarz method remains as in \R{SMDT}, but with the
differential operator $(\eta-\Delta)$ replaced by the Helmholtz
operator $(\Delta+\omega^2)$. The corresponding error equations for
the Fourier coefficients are
\begin{equation}\label{SMDTerrorFH}
\thickmuskip=0.6mu
\medmuskip=0.2mu
\thinmuskip=0.1mu
\nulldelimiterspace=-0.1pt
\scriptspace=0pt
\begin{aligned}
(\partial_{xx}+\omega^2-k^2)\hat{e}_1^n&=0& &\mbox{in $(0,X_1^r)$},&
(\partial_{xx}+\omega^2-k^2)\hat{e}_2^n&=0& &\mbox{in $(X_2^l,1)$},\\
\beta_1^r(\hat{e}_1^n)&=\beta_1^r(\hat{e}_2^{n-1})& &\mbox{at $x=X_1^r$}, &
\beta_2^l(\hat{e}_2^n)&=\beta_2^l(\hat{e}_1^n)& &\mbox{at $x=X_2^l$},\\
\beta^l(\hat{e}_1^n)&=0& &\mbox{at $x=0$}, &
\beta^r(\hat{e}_2^n)&=0& &\mbox{at $x=1$}.
\end{aligned}
\end{equation}
Solving the ordinary differential equations, we obtain using the outer
Robin boundary conditions for each Fourier mode $k\ne\pm\omega$
the solutions ($\underline{x}:=1-x$)
\begin{equation}\label{HelmholtzSols}
\thickmuskip=0.9mu
\begin{aligned}
\hat{e}_1^n(x,k)&=A_1^n(k)\left({\sqrt{\omega^2-k^2}\cos(\sqrt{\omega^2-k^2}x)
+ p^l\sin(\sqrt{\omega^2-k^2}x)}\right),\\
\hat{e}_2^n(x,k)&=A_2^n(k)\left({\sqrt{\omega^2-k^2}\cos(\sqrt{\omega^2-k^2}\underline{x}) + p^r\sin(\sqrt{\omega^2-k^2}\underline{x})}\right).
\end{aligned}
\end{equation}
Comparing with the solutions for the screened Laplace equation in
\R{ScreenedLaplaceSols}, we see that the hyperbolic sines and
cosines are simply replaced by the normal sines and cosines, which
shows that the solutions are oscillatory, instead of decaying. Note
however that the arguments of the sines and cosines are only real if
$k^2<\omega^2$, {\it i.e.}\ for Fourier modes below the frequency parameter
$\omega$. For larger Fourier frequencies, the argument becomes
imaginary, and the sines and cosines need to be replaced by their
hyperbolic variants and we obtain that solutions behave like for the
screened Laplace problem in \R{ScreenedLaplaceSols}.
The two remaining constants $A_1^n(k)$ and $A_2^n(k)$ are determined
again using the transmission conditions, and we obtain after a short
calculation again the convergence factor of the form \R{RhoOSM},
with
\begin{equation}\label{rho12H}
\arraycolsep0.1em
\begin{array}{rcl}
\rho_1 &=& \frac{(k^2 - \sigma_1^rp^r - \omega^2)
\sin(\sqrt{\omega^2-k^2}(X_1^r - 1))
- \cos(\sqrt{\omega^2-k^2}(X_1^r - 1))
\sqrt{\omega^2-k^2}(p^r - \sigma_1^r)}
{(k^2 + \sigma_1^rp^l - \omega^2)\sin(\sqrt{\omega^2-k^2}X_1^r)
+ \cos(\sqrt{\omega^2-k^2}X_1^r)\sqrt{\omega^2-k^2}(p^l + \sigma_1^r)},\\
\rho_2&=& \frac{(k^2 - \sigma_2^lp^l -\omega^2)
\sin(\sqrt{\omega^2-k^2}X_2^l)
+ \cos(\sqrt{\omega^2-k^2}X_2^l)
\sqrt{\omega^2-k^2}(p^l - \sigma_2^l)}
{(k^2 + \sigma_2^lp^r -\omega^2)\sin(\sqrt{\omega^2-k^2}(X_2^l - 1))
- \cos(\sqrt{\omega^2-k^2}(X_2^l - 1))\sqrt{\omega^2-k^2}(p^r + \sigma_2^l)}.
\end{array}
\end{equation}
It is again instructive to look at the special case first where the
domain, and thus the subdomains, are unbounded on the left and right,
which can be obtained from the result above by introducing for the
outer Robin parameters the symbol of the DtN operator,
{\it i.e.}\ $p^r:=\vartheta\I\sqrt{\omega^2-k^2}$,
$p^l:=\vartheta\I\sqrt{\omega^2-k^2}$, where
$\vartheta=\mathrm{sign}(\omega^2-k^2)$ if $\sqrt{z}$ for
$z\in\mathbb{C}$ uses the branch cut $(-\infty,0)$. These symbols can
be obtained as shown in footnote \ref{footnoteDtN} for the screened
Laplace problem, and leads after simplification to the convergence factor
\begin{equation}\label{RhoOSMUnboundedH}
\thickmuskip=0.6mu
\medmuskip=0.3mu
\rho(k,\omega,\sigma_1^r,\sigma_2^l)=
\frac{\vartheta\I\sqrt{\omega^2-k^2}-\sigma_2^l}{\vartheta\I\sqrt{\omega^2-k^2}+\sigma_2^l}\,
\frac{\vartheta\I\sqrt{\omega^2-k^2}-\sigma_1^r}{\vartheta\I\sqrt{\omega^2-k^2}+\sigma_1^r}
e^{-2(X_1^r-X_2^l)\sqrt{k^2-\omega^2}}.
\end{equation}
As in the case of the screened Laplace equation, we see again that if
we choose in the transmission conditions the symbol of the DtN
operators, $\sigma_1^r=\sigma_2^l=\vartheta\I\sqrt{\omega^2-k^2}$,
the convergence factor vanishes identically for all Fourier
frequencies $k$, and we get an optimal Schwarz method,
see footnote \ref{footnoteoptimalSchwarz}, as for the screened Laplace
problem.
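The unbounded-domain Helmholtz convergence factor \R{RhoOSMUnboundedH} can be written compactly using the complex square root with branch cut $(-\infty,0)$, for which $\sqrt{k^2-\omega^2}$ equals precisely the DtN symbol $\vartheta\I\sqrt{\omega^2-k^2}$ in both the propagative and evanescent regimes. The following sketch (illustrative parameter values) checks the optimal choice, the contraction of the Taylor choice $\I\omega$ for $k<\omega$, and the modulus-one behavior at $k=\omega$:

```python
import cmath

def rho(k, omega, L, s1, s2):
    # (RhoOSMUnboundedH): lam = i*sqrt(omega^2-k^2) for k < omega (propagative)
    # and lam = sqrt(k^2-omega^2) for k > omega (evanescent), i.e. the DtN symbol
    lam = cmath.sqrt(k*k - omega*omega)
    return (lam - s1)/(lam + s1)*(lam - s2)/(lam + s2)*cmath.exp(-2*L*lam)

omega, L = 10.0, 0.02         # illustrative Helmholtz frequency and overlap
lam5 = cmath.sqrt(5.0**2 - omega**2)   # DtN symbol at k = 5
```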
In the bounded domain case, we can still obtain an optimal Schwarz
method choosing in the transmission conditions the tangential symbols of the
transparent boundary conditions for the bounded subdomains, which is
equivalent to choosing $\sigma_1^r$ and $\sigma_2^l$ such that
$\rho_1$ and $\rho_2$ in \R{rho12H} vanish, and leads to
\begin{equation}\label{DtNbddH}
\begin{array}{rcl}
\sigma_1^r&=&\frac{\sqrt{\omega^2-k^2}\left(-\tan(\sqrt{\omega^2-k^2}(X_1^r-1))\sqrt{\omega^2-k^2} - p^r\right)}{\tan(\sqrt{\omega^2-k^2}(X_1^r - 1))p^r-\sqrt{\omega^2-k^2}},\\
\sigma_2^l&=&\frac{\sqrt{\omega^2-k^2}\left(-\tan(\sqrt{\omega^2-k^2}X_2^l)\sqrt{\omega^2-k^2} + p^l\right)}{\tan(\sqrt{\omega^2-k^2}X_2^l)p^l+\sqrt{\omega^2-k^2}}.
\end{array}
\end{equation}
As in the unbounded domain case, these values correspond to the
symbols of the associated DtN operators, as one can verify by a direct
computation using the solutions \R{HelmholtzSols} on their
respective domains, as done for the unbounded domain case of the
screened Laplace problem in footnote \ref{footnoteDtN}. We see that in
the bounded domain case, the symbols of the DtN maps in
\R{DtNbddH}, and hence the DtN maps also, depend on the outer
boundary conditions, since again both the domain parameters ($X_1^r$
and $X_2^l$) and the Robin parameters in the outer boundary conditions
($p^l$ and $p^r$) appear in them. For large frequencies $k$,
we still have
$$
\sigma_1^r\sim \sqrt{k^2-\omega^2},\quad \sigma_2^l\sim \sqrt{k^2-\omega^2},
$$
since the tangent function of an imaginary argument satisfies $\tan(\I z)=\I\tanh z\to\pm\I$ as
$z\to\pm\infty$, and thus high frequencies, also called evanescent modes in the Helmholtz
solution, still do not see the difference from the bounded to the unbounded case, as in the screened
Laplace problem.
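Both properties can again be verified numerically; the self-contained sketch below (illustrative parameter values) transcribes $\sigma_1^r$ from \R{DtNbddH} and $\rho_1$ from \R{rho12H}, using complex arithmetic so that both the propagative and the evanescent regime are covered:

```python
import cmath

def sigma1_dtn(k, omega, X1r, pr):
    # symbol of the bounded-subdomain DtN map, from (DtNbddH)
    mu = cmath.sqrt(omega*omega - k*k)
    t = cmath.tan(mu*(X1r - 1))
    return mu*(-t*mu - pr)/(t*pr - mu)

def rho1(k, omega, X1r, pl, pr, s1):
    # first factor of the convergence factor, from (rho12H)
    mu = cmath.sqrt(omega*omega - k*k)
    num = ((k*k - s1*pr - omega*omega)*cmath.sin(mu*(X1r - 1))
           - cmath.cos(mu*(X1r - 1))*mu*(pr - s1))
    den = ((k*k + s1*pl - omega*omega)*cmath.sin(mu*X1r)
           + cmath.cos(mu*X1r)*mu*(pl + s1))
    return num/den
```

Choosing $\sigma_1^r$ from `sigma1_dtn` annihilates $\rho_1$, and for an evanescent mode $k\gg\omega$ the symbol approaches $\sqrt{k^2-\omega^2}$, as stated above.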
Choosing Robin transmission conditions, ${\cal
S}_1^r:=p_1^r=\sigma_1^r$ and ${\cal S}_2^l:=p_2^l=\sigma_2^l$ with
$p_1^r,p_2^l\in\mathbb{R}$ in \R{TC}, and taking the limit in the
convergence factor \R{RhoOSMUnboundedH} as the Robin parameters
$p_1^r$ and $p_2^l$ go to infinity, we find
\begin{equation}\label{RhoAltSUnboundedH}
\rho(k,\omega)=e^{-2(X_1^r-X_2^l)\sqrt{k^2-\omega^2}},
\end{equation}
which is the convergence factor of the classical alternating Schwarz
method for the Helmholtz equation on the unbounded domain, and we see
again the overlap $L:=X_1^r-X_2^l$ appearing in the exponential
function. The classical Schwarz method therefore converges for all
Fourier modes $k^2>\omega^2$, provided there is overlap. However for
smaller Fourier frequencies, the method does not contract on the
unbounded domain.
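This dichotomy is immediate from \R{RhoAltSUnboundedH}, since the exponent is purely imaginary for $k<\omega$; a minimal sketch (illustrative values) makes it explicit:

```python
import cmath

def rho_classical(k, omega, L):
    # (RhoAltSUnboundedH): lam is purely imaginary for k < omega, so the
    # modulus is one there; for k > omega it is real and gives contraction
    lam = cmath.sqrt(k*k - omega*omega)
    return cmath.exp(-2*L*lam)
```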
To see how the Schwarz method contracts on a bounded domain, we show
in Figure \ref{SchwarzHelmholtzRhos} (left) the different cases for
the outer boundary conditions ($p^l=p^r=\I\,\omega$ in the Robin case),
by plotting \R{RhoOSM} in the limit where the parameters in the Robin
transmission conditions go to infinity, and the Helmholtz frequency
parameter $\omega=10$, again with overlap $L=0.02$.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/RhoHAltS}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSM}
\caption{Left: classical Schwarz convergence factor for Helmholtz
and different outer boundary conditions on the left and
right. Right: corresponding optimized Schwarz convergence factor
with Taylor transmission conditions from the unbounded domain
analysis.}
\label{SchwarzHelmholtzRhos}
\end{figure}
This shows that the different outer boundary conditions greatly
influence the convergence for Fourier modes $k^2<\omega^2$: the
Schwarz method diverges violently for these modes, except for the
unbounded case, where we obtain stagnation, but still no
convergence. For larger frequencies $k^2>\omega^2$, there is no
influence of the outer boundary conditions on the convergence. On the
right in Figure \ref{SchwarzHelmholtzRhos} we show the corresponding
convergence factors with Robin transmission conditions, chosen as
$p_1^r=p_2^l=\I\,\omega$, which correspond to Taylor transmission
conditions of order zero from the unbounded domain
analysis\footnote{It is interesting to note that the same result
is also obtained for the bounded domain case when the outer Robin
transmission conditions are chosen as $p^r=p^l=\I\,\omega$.}, by
expansion of the corresponding optimal symbol around $k=0$,
$\I\sqrt{\omega^2-k^2}=\I\,\omega+O(k^2)$. Here we see that again the
different outer boundary conditions greatly influence the convergence
for Fourier modes $k^2<\omega^2$: for unbounded subdomains, and also
subdomains with Robin radiation conditions at the ends, the Schwarz
method converges well for small Fourier frequencies. With Dirichlet
and Neumann outer boundary conditions however, we obtain again
divergence, although a bit less violent than in the classical Schwarz
method. For larger frequencies $k^2>\omega^2$, there is again very
little influence of the outer boundary conditions on the convergence.
We also observe an interesting phenomenon around $k^2=\omega^2$, the
so-called resonance frequency: with Robin radiation conditions, the
Schwarz method also converges in this case, while for unbounded
subdomains we obtain a convergence factor with modulus one there.
It does not help to use the corresponding Taylor transmission
conditions of order zero adapted to each outer boundary conditions, as we show
in Figure \ref{SchwarzHelmholtzRhos2} on the left.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSM2}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMOpt}
\caption{Left: optimized Schwarz convergence factor for
Helmholtz and different outer boundary conditions on the left
and right, using the correspondingly adapted Taylor transmission
conditions. Right: corresponding optimized Schwarz convergence
factor minimizing the maximum of the convergence factor
numerically with complex Robin transmission
conditions.} \label{SchwarzHelmholtzRhos2}
\end{figure}
Now the low frequencies converge well in all cases, but divergence is
even more violent for other frequencies with Dirichlet and Neumann
outer boundary conditions. We finally show the results of numerically
optimized Robin transmission conditions with
$p_1^r,p_2^l\in\mathbb{C}$ on the right in Figure
\ref{SchwarzHelmholtzRhos2}, minimizing the convergence factor in
modulus\footnote{For the unbounded case we did not optimize since the
convergence factor equals $1$ there for $k=\omega$. There are
however optimization techniques in this case as well, see
{\it e.g.}\ \citeasnoun{GMN02}, \citeasnoun{GHM07} and references therein.}. We see that there do not exist complex Robin parameters
that can make the optimized Schwarz method work well for Dirichlet and
Neumann outer boundary conditions on the left and right, low
frequencies simply converge badly or not at all. However with Robin
outer boundary conditions on the left and right, the optimization
leads to a quite fast method now, with a convergence factor bounded by
$0.4$, compared to the Taylor transmission conditions where the
convergence factor was bounded by about $0.8$. This shows that for
Helmholtz problems absorbing boundary conditions on the original
problem are very important for Schwarz methods to converge, and we
thus focus on this case in what follows, {\it i.e.}\ $p^l=p^r=\I \omega$.
We start by studying the performance with Taylor transmission
conditions, $p_1^r=p_2^l=\I \omega$, when the overlap is varying,
and the Helmholtz frequency $\omega$ is increasing. We show in
Figure \ref{SchwarzHelmholtzLsmall}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMTL1}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMTL2}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMTL3}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMTL4}
\caption{Optimized Schwarz convergence factor for Helmholtz with
Robin outer boundary conditions and Taylor transmission
conditions for different overlap sizes: from top left to bottom
right $\omega=10,20,30,40$.} \label{SchwarzHelmholtzLsmall}
\end{figure}
the convergence factors for four different overlap sizes
$L=0.2,0.1,0.02,0.002$ and Helmholtz frequencies $\omega=10,20,30,40$.
We see that convergence difficulties increase dramatically with
increasing $\omega$: while for $\omega=10$, the convergence factor is
smaller than one for all overlap sizes $L$, we see already that the
largest overlap $L=0.2$ is not leading to better convergence than the
next smaller one, $L=0.1$. For $\omega=20$ now the two larger overlaps
$L=0.2$ and $L=0.1$ lead to divergence. For $\omega=30$ the best
performance is obtained for the smallest overlap $L=0.002$, and for
$\omega=40$ the smallest overlap is the only overlap for which the
method still contracts. We thus need small overlap in this waveguide
type setting with Dirichlet or Neumann conditions on top and bottom
for the method to work with Taylor transmission conditions.
Let us investigate this analytically when the overlap $L$ goes to
zero. Computing the location of the maximum at $\bar{k}>\omega$
visible in Figure \ref{SchwarzHelmholtzLsmall} and then evaluating the
value of the convergence factor in modulus at this frequency
$\bar{k}$, we obtain for the convergence factor for small overlap $L$
after a long and technical computation
\begin{equation}\label{TaylorRhoHelmholtz}
\max_k|\rho(k,\omega,L)|= 1 - (8-4\ln 2 - 4\ln L)L + O(L^2).
\end{equation}
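This asymptotic prediction can be tabulated directly; the following small sketch (our own illustration, using nothing beyond formula \eqref{TaylorRhoHelmholtz}) shows how slowly the convergence factor departs from $1$ as $L$ decreases, with the $-\ln L$ term only mildly helping the linear rate:

```python
import math

def rho_taylor(L):
    # asymptotic convergence factor for Taylor transmission conditions,
    # max_k |rho| = 1 - (8 - 4 ln 2 - 4 ln L) L + O(L^2)
    return 1.0 - (8.0 - 4.0 * math.log(2.0) - 4.0 * math.log(L)) * L

for L in (1e-3, 1e-4, 1e-5, 1e-6):
    print(L, rho_taylor(L))
```

Note that the formula is only valid asymptotically, so for moderate $L$ the values differ from those observed numerically.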
The method therefore converges for small enough overlap also when
$\omega$ becomes large for this two subdomain decomposition and
outer Robin conditions on the left and right, $p^l=p^r=\I \omega$
with Taylor transmission conditions $p_1^r=p_2^l=\I \omega$. However
convergence is rather slow, compared to the case of the screened
Laplace equation with convergence factor $1-O(\sqrt{L})$ as shown in
\eqref{TaylorRho}, even though there is absorption for Helmholtz on
the left and right. We show in Table \ref{TabTaylorHelmholtz}
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
$L$ & $\max_{k}|\rho|$ & $1-\max_{k}|\rho|$ \\\hline
0.1000000000 &22.7898956625 &-21.7898956625 \\
0.0100000000 & 1.2309097387 & -0.2309097387 \\
0.0010000000 & 0.9962177046 & 0.0037822954 \\
0.0001000000 & 0.9987180209 & 0.0012819791 \\
0.0000100000 & 0.9998069224 & 0.0001930776 \\
0.0000010000 & 0.9999749357 & 0.0000250643 \\
0.0000001000 & 0.9999969499 & 0.0000030501 \\
0.0000000100 & 0.9999996424 & 0.0000003576 \\
0.0000000010 & 0.9999999591 & 0.0000000409 \\
0.0000000001 & 0.9999999954 & 0.0000000046 \\\hline
\end{tabular}
\caption{Maximum of the convergence factor for Taylor transmission
conditions with the same outer boundary conditions on the left
and right and $\omega=100$, for decreasing overlap size $L$.}
\label{TabTaylorHelmholtz}
\end{table}
the convergence factor for $\omega=100$ when $L$ goes to zero for a
symmetric subdomain decomposition, $X_2^l=\frac{1}{2}-L/2$,
$X_1^r=\frac{1}{2}+L/2$, and one can clearly see the logarithmic term $\ln
L$ in the last column which displays $1-\max_{k}|\rho|$, from the
slow increase in the first non-zero digit. One can also see from
Table \ref{TabTaylorHelmholtz} that for a given Helmholtz frequency
$\omega$, there is an optimal choice for the overlap, here around
$L=0.001$ for best performance.
We next investigate if an optimized choice of the complex
transmission parameters $p_1^r,p_2^l\in\mathbb{C}$ can further
improve the convergence behavior with Robin conditions with
$p^r=p^l=\I\omega$ on the left and right outer boundaries. A
numerical optimization minimizing the maximum of the convergence
factor gives the results shown in Figure
\ref{SchwarzHelmholtzOptLsmall}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMOL1}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMOL2}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMOL3}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMOL4}
\caption{Optimized Schwarz convergence factor for Helmholtz with
Robin outer boundary conditions and optimized complex Robin
transmission conditions for different overlap sizes: from top
left to bottom right
$\omega=10,20,30,40$.} \label{SchwarzHelmholtzOptLsmall}
\end{figure}
Comparing with the corresponding Figure \ref{SchwarzHelmholtzLsmall}
for the Taylor transmission conditions, we see that much better
contraction factors can be obtained using the optimized transmission
conditions, and also that the method is convergent for all overlaps
for $\omega=10,20,30$. However, for $\omega=40$, we see again that for
the largest overlap, the optimized contraction factor is above $1$,
and thus as in the Taylor transmission conditions, for large Helmholtz
frequency $\omega$, the overlap will need to be small enough for the
optimized method to converge, and there will be a best choice for the
overlap.
To investigate this further, we show in Table
\ref{TabOptHelmholtz} the best contraction factor that can be
obtained for $\omega=100$, as we did in Table
\ref{TabTaylorHelmholtz} for the Taylor transmission conditions, and
we also show the value of the optimized complex transmission
condition parameter.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$L$ & $p_1^r=p_2^l$ & $\max_{k}|\rho|$ & $1-\max_{k}|\rho|$ \\\hline
0.1000000000 & -13.7512+\I 17.6068 & 1.366317 & -0.366317 \\
0.0100000000 & -0.7183+\I 26.7434 & 0.963833 & 0.036166 \\
0.0010000000 & 0.7700+\I 7.8147 & 0.882396 & 0.117603 \\
0.0001000000 & 1.6590+\I 11.5704 & 0.929865 & 0.070134 \\
0.0000100000 & 3.6059+\I 24.6349 & 0.966615 & 0.033384 \\
0.0000010000 & 7.7838+\I 52.9876 & 0.984342 & 0.015657 \\
0.0000001000 & 16.7767+\I 114.0584 & 0.992699 & 0.007300 \\
0.0000000100 & 36.1477+\I 245.6848 & 0.996604 & 0.003395 \\
0.0000000010 & 77.8794+\I 529.2904 & 0.998422 & 0.001577 \\
0.0000000001 & 167.7869+\I 1140.3116 & 0.999267 & 0.000732 \\\hline
\end{tabular}
\caption{Best complex parameter choice and maximum of the
convergence factor for optimized complex Robin transmission
conditions with outer Robin boundary conditions
$p^l=p^r=\I \omega$ on the left and right and $\omega=100$, for
decreasing overlap size $L$.}
\label{TabOptHelmholtz}
\end{table}
We see that the optimization leads to a method which degrades much
less, compared to the Taylor transmission conditions, and there is
still a best choice for the overlap that leads to the most
advantageous contraction, around overlap $L=0.001$ as in the Taylor
case. But using the optimized parameters the method will converge
approximately $100$ times faster, since
$0.9987180209^{100}=0.879607\approx 0.882396$. There are however so
far no analytical asymptotic formulas for this optimized choice
available. Nevertheless, plotting the results from Table
\ref{TabOptHelmholtz} in Figure \ref{HelmholtzOO0AsymptPlot}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/OptHelmholtzParamsAsympt}
\includegraphics[width=0.48\textwidth]{Figures/OptHelmholtzRhoAsympt}
\caption{Asymptotic behavior of the optimized complex Robin parameter
and associated convergence factor distance from $1$.}
\label{HelmholtzOO0AsymptPlot}
\end{figure}
clearly shows that the optimized choice behaves asymptotically for
small overlap $L$ as
\begin{equation}
\thickmuskip=0.6mu
\medmuskip=0.2mu
\thinmuskip=0.1mu
\scriptspace=0pt
\nulldelimiterspace=-0.1pt
p_1^r\sim(C_1^r+\I \tilde{C}_1^r)L^{-1/3},\quad
p_2^l\sim(C_2^l+\I \tilde{C}_2^l)L^{-1/3},\quad
\max_{k}|\rho|= 1-O(L^{1/3}),
\end{equation}
which is a much better result than for the Taylor transmission
conditions that gave $\max_{k}|\rho|= 1-O(L)$ up to the logarithmic
term.
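The observed exponent can be checked against the data of Table \ref{TabOptHelmholtz}. The following sketch (our own; the values of $1-\max_k|\rho|$ are copied from the asymptotic rows of the table) fits the slope of $\log(1-\max_k|\rho|)$ versus $\log L$ by least squares:

```python
import math

# (L, 1 - max|rho|) taken from the asymptotic rows of Table TabOptHelmholtz
data = [(1e-4, 0.070134), (1e-5, 0.033384), (1e-6, 0.015657),
        (1e-7, 0.007300), (1e-8, 0.003395), (1e-9, 0.001577),
        (1e-10, 0.000732)]

xs = [math.log(L) for L, _ in data]
ys = [math.log(v) for _, v in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
# least-squares slope of log(1 - max|rho|) against log L
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
print(slope)  # close to 1/3
```

The fitted slope is about $0.33$, consistent with the observed $1-O(L^{1/3})$ behavior.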
Since we have seen how important absorption is for the Helmholtz
problem, in contrast to the screened Laplace problem, we now study
the Helmholtz problem with Robin conditions all around, {\it i.e.}\ also on
top and bottom.
This requires the solution of the corresponding Sturm-Liouville problem
(details can be found in Section 3).
We start again by studying the performance with Taylor transmission
conditions, $p_1^r=p_2^l=\I \omega$, when the overlap is varying,
and the Helmholtz frequency $\omega$ is increasing. We show in
Figure \ref{SchwarzHelmholtzRRLsmall}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMTRRL1}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMTRRL2}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMTRRL3}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMTRRL4}
\caption{Optimized Schwarz convergence factor for Helmholtz with
Robin outer boundary conditions all around and Taylor transmission
conditions for different overlap sizes: from top left to bottom
right $\omega=10,20,30,40$.} \label{SchwarzHelmholtzRRLsmall}
\end{figure}
the convergence factors for four different overlap sizes
$L=0.2,0.1,0.02,0.002$ and Helmholtz frequencies $\omega=10,20,30,40$.
We see immediately the tremendous improvement obtained by also having
Robin conditions on the top and bottom in the Helmholtz equation: the
Schwarz method converges for all overlap sizes $L$, even when the
Helmholtz frequency is increasing, and there is no requirement any more
for the overlap $L$ to be small. We do not yet have an analysis of this
behavior, but we show in Table
\ref{TabTaylorRRHelmholtz}
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
$L$ & $\max_{k}|\rho|$ & $1-\max_{k}|\rho|$ \\\hline
0.1000000000 & 0.0471851340 & 0.9528148659 \\
0.0100000000 & 0.3360508362 & 0.6639491637 \\
0.0010000000 & 0.7854583112 & 0.2145416887 \\
0.0001000000 & 0.9537799195 & 0.0462200804 \\
0.0000100000 & 0.9913446318 & 0.0086553681 \\
0.0000010000 & 0.9984392375 & 0.0015607624 \\
0.0000001000 & 0.9997213603 & 0.0002786396 \\
0.0000000100 & 0.9999503928 & 0.0000496071 \\
0.0000000010 & 0.9999911753 & 0.0000088246 \\
0.0000000001 & 0.9999984305 & 0.0000015694 \\\hline
\end{tabular}
\caption{Maximum of the convergence factor for Taylor transmission
conditions with the same Robin outer boundary conditions all around
and $\omega=100$, for decreasing overlap size $L$.}
\label{TabTaylorRRHelmholtz}
\end{table}
the convergence factor for $\omega=100$ when $L$ goes to zero for a
symmetric subdomain decomposition, $X_2^l=\frac{1}{2}-L/2$,
$X_1^r=\frac{1}{2}+L/2$, and one can clearly see the much better
asymptotic performance due to the Robin boundary conditions on top
and bottom, compared to the waveguide case shown in Table
\ref{TabTaylorHelmholtz}. We can also numerically observe that the
convergence factor in this case behaves as
\begin{equation}\label{TaylorRhoRRHelmholtz}
\max_k|\rho(k,\omega,L)|= 1 - O(L^{3/4})
\end{equation}
as we show in Figure \ref{HelmholtzTaylorRRAsymptPlot}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/TaylorHelmholtzRhoRRAsympt}
\includegraphics[width=0.48\textwidth]{Figures/TaylorHelmholtzParamsRRAsympt}
\caption{Asymptotic behavior of the convergence factor with Robin
conditions all around (left) and of the maximum location $\bar{k}$
(right).}
\label{HelmholtzTaylorRRAsymptPlot}
\end{figure}
on the left. On the right, we show how the maximum location now
increases as $\bar{k}= O(L^{-1/4})$.
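The exponent $3/4$ in \eqref{TaylorRhoRRHelmholtz} can also be extracted from Table \ref{TabTaylorRRHelmholtz} itself; the following sketch (our own, with the values of $1-\max_k|\rho|$ copied from the table's asymptotic rows) fits the slope of $\log(1-\max_k|\rho|)$ versus $\log L$ by least squares:

```python
import math

# (L, 1 - max|rho|) taken from the asymptotic rows of Table TabTaylorRRHelmholtz
data = [(1e-5, 8.6553681e-3), (1e-6, 1.5607624e-3), (1e-7, 2.786396e-4),
        (1e-8, 4.96071e-5), (1e-9, 8.8246e-6), (1e-10, 1.5694e-6)]

xs = [math.log(L) for L, _ in data]
ys = [math.log(v) for _, v in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
# least-squares slope of log(1 - max|rho|) against log L
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
print(slope)  # close to 3/4
```

The fitted slope is about $0.75$, confirming the numerically observed $1-O(L^{3/4})$ behavior.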
We next show that an optimized choice of the complex transmission
parameters $p_1^r,p_2^l\in\mathbb{C}$ can still further improve the
convergence behavior with Robin conditions all around the global
domain. A numerical optimization minimizing the maximum of the
convergence factor gives the results shown in Figure
\ref{SchwarzHelmholtzOptRRLsmall}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMRRL1}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMRRL2}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMRRL3}
\includegraphics[width=0.48\textwidth]{Figures/RhoHOSMRRL4}
\caption{Optimized Schwarz convergence factor for Helmholtz with
Robin outer boundary conditions all around and optimized complex
Robin transmission conditions for different overlap sizes: from
top left to bottom right
$\omega=10,20,30,40$.} \label{SchwarzHelmholtzOptRRLsmall}
\end{figure}
Comparing with the corresponding Figure \ref{SchwarzHelmholtzRRLsmall}
for the Taylor transmission conditions, we see that again much better
contraction factors can be obtained using the optimized transmission
conditions. To investigate this further, we show in Table
\ref{TabOptRRHelmholtz} the best contraction factor that can be
obtained for $\omega=100$, as we did in Table
\ref{TabTaylorRRHelmholtz} for the Taylor transmission conditions, and
we also show the value of the optimized complex transmission
condition parameter.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$L$ & $p_1^r=p_2^l$ & $\max_{k}|\rho|$ & $1-\max_{k}|\rho|$ \\\hline
0.1000000000 & 0.0002+\I 82.4166 & 0.057800 & 0.942200 \\
0.0100000000 & 0.0003+\I 50.8853 & 0.292866 & 0.707134 \\
0.0010000000 & 47.7606+\I 79.4621 & 0.301674 & 0.698326 \\
0.0001000000 & 120.2543+\I 142.8230 & 0.538849 & 0.461151 \\
0.0000100000 & 266.8504+\I 287.5608 & 0.746725 & 0.253275 \\
0.0000010000 & 578.4744+\I 609.4633 & 0.872807 & 0.127193 \\
0.0000001000 & 1247.9382+\I 1308.2677 & 0.938763 & 0.061237 \\
0.0000000100 & 2689.3694+\I 2816.3510 & 0.971090 & 0.028910 \\
0.0000000010 & 5794.4274+\I 6066.6093 & 0.986475 & 0.013525 \\
0.0000000001 & 12483.8801+\I 13069.6327 & 0.993699 & 0.006301 \\\hline
\end{tabular}
\caption{Best complex parameter choice and maximum of the
convergence factor for optimized complex Robin transmission
conditions with outer Robin boundary conditions
all around and $\omega=100$, for
decreasing overlap size $L$.}
\label{TabOptRRHelmholtz}
\end{table}
We see that the optimization leads to a method which again degrades much
less, compared to the Taylor transmission conditions, and the best choice
is the largest overlap when there are Robin conditions all around the
domain. There are currently no analytical asymptotic formulas for this optimized choice
either, but plotting the results from Table
\ref{TabOptRRHelmholtz} in Figure \ref{HelmholtzOO0RRAsymptPlot}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Figures/OptHelmholtzParamsRRAsympt}
\includegraphics[width=0.48\textwidth]{Figures/OptHelmholtzRhoRRAsympt}
\caption{Asymptotic behavior of the optimized complex Robin parameter
and associated convergence factor distance from $1$ when there are Robin conditions
all around the domain.}
\label{HelmholtzOO0RRAsymptPlot}
\end{figure}
clearly shows that the optimized choice behaves asymptotically for
small overlap $L$ again as
\begin{equation}
\thickmuskip=0.6mu
\medmuskip=0.2mu
\thinmuskip=0.1mu
\scriptspace=0pt
\nulldelimiterspace=-0.1pt
p_1^r\sim(C_1^r+\I \tilde{C}_1^r)L^{-1/3},\quad
p_2^l\sim(C_2^l+\I \tilde{C}_2^l)L^{-1/3},\quad
\max_{k}|\rho|= 1-O(L^{1/3}),
\end{equation}
like in the waveguide case with Dirichlet or Neumann conditions on top and bottom.
This is still much better than for the Taylor transmission
conditions that gave $\max_{k}|\rho|= 1-O(L^{3/4})$.
From the two subdomain analysis in this section, we have already
learned many things for Schwarz methods by domain truncation, or
optimized Schwarz methods: for the screened Laplace problem, outer
boundary conditions are not influencing convergence very much, and
complete analysis is available for the convergence of these methods,
also with optimized formulas for Robin transmission
conditions. Compared to classical Schwarz methods with contraction
factors $1-O(L)$ when the overlap $L$ becomes small, Taylor
transmission conditions give a contraction factor $1-O(\sqrt{L})$,
and optimized Robin transmission conditions give
$1-O(L^{1/3})$. This is very different for the Helmholtz case, where
the performance of Schwarz methods is very much dependent on the
outer boundary conditions imposed on the domain on which the problem
is considered. Radiation boundary conditions are very important for
Schwarz methods to function here: in a waveguide setting with Robin
conditions on the left and right, classical Schwarz methods still do
not work, but Taylor transmission conditions of Robin type, the
simplest absorbing boundary conditions, can then lead to convergent
Schwarz methods, provided the overlap is small enough, with
convergence factor $1-O(L)$ up to a logarithmic term. Optimized
Robin transmission conditions lead again to $1-O(L^{1/3})$, as in
the screened Laplace case. If there are radiation conditions of
Robin type all around, using these same conditions also as
transmission conditions leads to contraction factors of the form
$1-O(L^{3/4})$, but now the methods also work with larger overlap,
and optimized Robin conditions give $1-O(L^{1/3})$ with numerically
observed much better constants, asymptotically comparable to the
screened Laplace case!
\section{Many subdomain analysis}
We now investigate the performance of optimized Schwarz methods,
or equivalently Schwarz methods by domain truncation, when more than
two subdomains are used. Since we have seen in Section \ref{2SubSec}
that absorbing boundary conditions (ABCs) play an important role for
the Helmholtz case, and perfectly matched layers (PMLs) are another
technique to truncate domains, we now study the convergence of
Schwarz methods for a slightly more general problem where the
operators in the $x$ and $y$ direction are written separately, so
that PML modifications can be introduced, namely
\begin{equation}\label{eqprb}
\begin{aligned}
(\mathcal{L}_x + \mathcal{L}_y)u + \eta u &= f & &\text{ on }\Omega=[0,X]\times[0,Y],\\
\mathcal{B}^{l}u&=0 & &\text{ on } \{0\}\times [0,Y],\\
\mathcal{B}^ru&=0 & &\text{ on } \{X\}\times[0,Y],\\
\mathcal{B}^bu&=0 & &\text{ on } [0,X]\times \{0\},\\
\mathcal{B}^tu&=0 & &\text{ on } [0,X]\times \{Y\},
\end{aligned}
\end{equation}
where $\mathcal{L}_{*}$ is a linear partial differential operator
acting in the $*$ direction, $\eta$ is a constant which we will choose to handle both
the screened Laplace and the Helmholtz case, the unknown $u$ and
the data $f$ are functions on $\Omega$, and the $\mathcal{B}^{*}$ are
linear trace operators. For Schwarz methods, the domain is decomposed
first into $N$ subdomains $[X_{j}^{l},X_j^{r}]\times[0,Y]$ with
$X_j^{l}=(j-1)H$, $X_j^r=jH+2L$ and $H=(X-2L)/N$, as shown for an
example in Figure \ref{1Dand2DDecompositionFig} on the right. The
restrictions of \R{eqprb} to the subdomains $\Omega_j$ are then
solved, with the solutions $u_j$ required to satisfy the transmission
conditions
\begin{equation}\label{eqtc}
\mathcal{B}_j^{l,r}u_j=\mathcal{B}_j^{l,r}u_{j\mp1} \text{ on } \{X_{j}^{l,r}\}\times[0,Y],
\end{equation}
except on the original boundary $\{0,X\}\times[0,Y]$.
\begin{remark}
We note that if $u_j$ satisfies the restriction of \R{eqprb} onto
$\Omega_j$, by linearity of \R{eqprb}, the partial differential
equation for the error $u-u_j$ on $\Omega_j$ is homogeneous (with
$f=0$) and invariant under the transformation $(x, y,\eta) \mapsto
(\frac{x}{Y}, \frac{y}{Y}, Y^2\eta)$. We further assume that the
boundary and transmission conditions have the same invariance. Then,
we can let $Y=1$ without loss of generality for the convergence
analysis.
\end{remark}
The generalised Fourier frequencies $\pm\xi$ correspond to the
square-roots of the Sturm-Liouville eigenvalues $\xi^2$,
\begin{equation}\label{eqsturm}
\begin{aligned}
\mathcal{L}_y\phi &= \xi^2\phi & &\text{ on }[0,Y],\\
\mathcal{B}^b\phi &= 0 & &\text{ at }\{0\},\\
\mathcal{B}^t\phi &= 0 & &\text{ at }\{Y\}.
\end{aligned}
\end{equation}
We assume that the eigenfunctions $\{\phi(y,\xi_n)\}$, $\xi_n\in
\{\xi_0,\xi_1,\xi_2,\ldots\}\subset\mathbb{C}$, allow the solution of
\R{eqprb} to be represented as
$\sum_{n=0}^{\infty}u(x,\xi_n)\phi(y,\xi_n)$\footnote{We still denote
the Fourier transformed quantities for simplicity by the same
symbols $u$, $f$, $\mathcal{B}^l$ and $\mathcal{B}^r$ to avoid a
more complicated notation.}, with $u(x,\xi)$ satisfying the problem
\begin{equation}\label{eqode}
\begin{aligned}
\mathcal{L}_xu + (\xi^2+\eta)u &= f\quad & &\mbox{on }[0,X],\\
\mathcal{B}^lu&=0 \quad & &\mbox{at }x=0,\\
\mathcal{B}^ru&=0 \quad & &\mbox{at }x=X.
\end{aligned}
\end{equation}
Let $g_{j}^l:=\mathcal{B}_j^lu_j$ at $x=X_j^l$ and $g_{j}^r:=\mathcal{B}_j^ru_j$ at $x=X_j^r$. To
rewrite \R{eqtc} in terms of the interface data $g_2^l,\ldots, g_N^l, g_1^r,\ldots, g_{N-1}^r$, {\it i.e.}\ in substructured form, we
define the interface-to-interface operators (see also Figure~\ref{figi2i})
\[
\thickmuskip=0.6mu
\medmuskip=0.2mu
\begin{aligned}
a_j:&\left(\ell_j\mbox{\ at }x=X_j^l\right)\rightarrow
\left(\mathcal{B}_{j+1}^lv_j\mbox{\ at }x=X_{j+1}^l\right)\
\mbox{with $v_j$ solving}\\
&\mathcal{L}_xv_j+(\xi^2+\eta)v_j=0 \text{ in $(X_j^l,X_j^r)$},\;\,
\mathcal{B}_j^lv_j=\ell_j\text{ at $x=X_j^l$},\;\,
\mathcal{B}_j^rv_j=0\text{ at $x=X_j^r$},\\
\end{aligned}
\]
\[
\thickmuskip=0.6mu
\medmuskip=0.2mu
\begin{aligned}
b_j:&\left(\gamma_j\mbox{\ at }x=X_j^r\right)\rightarrow
\left(\mathcal{B}_{j+1}^lv_j\mbox{\ at }x=X_{j+1}^l\right)\
\mbox{with $v_j$ solving}\\
&\mathcal{L}_xv_j+(\xi^2+\eta)v_j=0\text{ in $(X_j^l,X_j^r)$},\;\,
\mathcal{B}_j^lv_j=0\text{ at $x=X_j^l$},\;\,
\mathcal{B}_j^rv_j=\gamma_j\text{ at $x=X_j^r$},\\
\end{aligned}
\]
\[
\thickmuskip=0.6mu
\medmuskip=0.2mu
\begin{aligned}
c_j:&\left(\gamma_j\mbox{\ at }x=X_j^r\right)\rightarrow
\left(\mathcal{B}_{j-1}^rv_j\mbox{\ at }x=X_{j-1}^r\right)\
\mbox{with $v_j$ solving}\\
&\mathcal{L}_xv_j+(\xi^2+\eta)v_j=0\text{ in $(X_j^l,X_j^r)$},\;\,
\mathcal{B}_j^lv_j=0\text{ at $x=X_j^l$},\;\,
\mathcal{B}_j^rv_j=\gamma_j\text{ at $x=X_j^r$},\\
\end{aligned}
\]
\[
\thickmuskip=0.6mu
\medmuskip=0.2mu
\begin{aligned}
d_j:&\left(\ell_j\mbox{\ at }x=X_j^l\right)\rightarrow
\left(\mathcal{B}_{j-1}^rv_j\mbox{\ at }x=X_{j-1}^r\right)\
\mbox{with $v_j$ solving}\\
&\mathcal{L}_xv_j+(\xi^2+\eta)v_j=0\text{ in $(X_j^l,X_j^r)$},\;\,
\mathcal{B}_j^lv_j=\ell_j\text{ at $x=X_j^l$},\;\,
\mathcal{B}_j^rv_j=0\text{ at $x=X_j^r$},
\end{aligned}
\]
where $\mathcal{B}_1^l:=\mathcal{B}^l$ and $\mathcal{B}_N^r:=\mathcal{B}^r$. Using these operators,
we can rewrite \R{eqtc} as a linear system, namely
\begin{figure}
\centering
\includegraphics[scale=.25]{Figures/abcd.pdf}
\caption{Illustration of the interface-to-interface
operators.}\label{figi2i}
\end{figure}
\begin{equation}\label{eq:gp}
{\footnotesize
\left[
\begin{array}{cccc|cccc}
1\vphantom{g_1^l} & & & & -b_1 & & &\\
-a_2 & 1\vphantom{g_2^l\ddots} & & & & -b_2 & &\\
& \ddots & \ddots\vphantom{\vdots} & & & & \ddots &\\
& & -a_{N-1} & 1\vphantom{g_N^l} & & & & -b_{N-1}\\
\hline
-d_2 & & & & 1\vphantom{g_2^r} & -c_2 & &\\
& -d_3 & & & & 1\vphantom{g_3^r} & \ddots & \\
& & \ddots & & & & \ddots & -c_{N-1}\vphantom{\vdots} \\
& & & -d_{N} & & & & 1\vphantom{g_{N-1}^r}
\end{array}
\right]
\left[
\begin{array}{c}
g_2^l\\ g_3^l\vphantom{\ddots} \\ \vdots \\ g_N^l\\\hline
g_1^r\\ g_2^r\vphantom{\ddots} \\ \vdots \\ g_{N-1}^r
\end{array}
\right]=
\left[
\begin{array}{c}
\tau_2^l\\ \tau_3^l\vphantom{\ddots} \\ \vdots \\ \tau_N^l\\\hline
\tau_1^r\\ \tau_2^r\vphantom{\ddots} \\ \vdots \\ \tau_{N-1}^r
\end{array}
\right],
}
\end{equation}
which we denote by
\begin{equation}\label{eq:bp}
\begin{bmatrix}
I-A & -B\\
-D & I-C
\end{bmatrix}
\begin{bmatrix}
g^l\\ g^r
\end{bmatrix}=
\begin{bmatrix}
\tau^l\\ \tau^r
\end{bmatrix},
\end{equation}
where $\tau_j^l:=\mathcal{B}_j^lv_{j-1}$, $\tau_j^r:=\mathcal{B}_j^rv_{j+1}$ with $v_j$ satisfying
\[
\begin{aligned}
\mathcal{L}_xv_j+(\xi^2+\eta)v_j&=f \quad & &\mbox{on }[X_j^l, X_j^r],\\
\mathcal{B}_j^lv_j&=0\quad & &\mbox{at $x=X_j^l$},\\
\mathcal{B}_j^rv_j&=0\quad & &\mbox{at $x=X_j^r$}.
\end{aligned}
\]
The double sweep Schwarz method amounts to a block Gauss-Seidel iteration for \R{eq:bp}: given an
initial guess $g^{r,0}$ of $g^r$, we compute for iteration index $m=0,1,\ldots$
\begin{equation}\label{eq:gs}
g^{r,m+1}:=(I-C)^{-1}\left[\tau^r+D(I-A)^{-1}(\tau^l+Bg^{r,m})\right].
\end{equation}
We denote by $\epsilon^{r,m}:=g^{r}-g^{r,m}$ the error, which then by \R{eq:gs}
satisfies a recurrence relation with iteration matrix $T_{\rm{DOSM}}$,
\begin{equation}\label{eq:Tdosm}
\epsilon^{r,m+1}=T_{\rm{DOSM}}\epsilon^{r,m}:=(I-C)^{-1}D(I-A)^{-1}B\epsilon^{r,m}.
\end{equation}
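The equivalence between the sweep recurrence \eqref{eq:gs} and the error iteration \eqref{eq:Tdosm} can be checked with a small sketch (our own illustration, using random matrices in place of the actual symbol blocks, with $B$, $D$ kept small so that the sweeps contract):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
I = np.eye(n)
# random stand-ins for the blocks in (eq:bp); B, D are kept small, which
# mimics nearly transparent transmission conditions
A = 0.2 * rng.standard_normal((n, n))
C = 0.2 * rng.standard_normal((n, n))
B = 0.05 * rng.standard_normal((n, n))
D = 0.05 * rng.standard_normal((n, n))

# manufactured exact interface data g^l, g^r and right-hand sides of (eq:bp)
gl = rng.standard_normal(n)
gr = rng.standard_normal(n)
tl = (I - A) @ gl - B @ gr
tr = -D @ gl + (I - C) @ gr

T = np.linalg.inv(I - C) @ D @ np.linalg.inv(I - A) @ B  # T_DOSM

g = np.zeros(n)  # initial guess g^{r,0}
for _ in range(50):  # block Gauss-Seidel sweeps (eq:gs)
    g = np.linalg.solve(I - C, tr + D @ np.linalg.solve(I - A, tl + B @ g))
print(np.linalg.norm(g - gr))  # the sweeps converge to the exact g^r
```

One can also verify directly that the error after one sweep equals $T_{\rm{DOSM}}$ applied to the initial error, as stated in \eqref{eq:Tdosm}.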
The Jacobi iteration for \R{eq:gp} gives the parallel Schwarz method with the recurrence relation
\begin{equation}\label{eq:Tposm}
\epsilon^{m+1}=T_{\rm{POSM}}\epsilon^{m}:=
\begin{bmatrix}
A & B\\
D & C
\end{bmatrix}
\begin{bmatrix}
\epsilon^{l,m}\\ \epsilon^{r,m}
\end{bmatrix}.
\end{equation}
\begin{remark}
The block Jacobi iteration for \R{eq:bp} leads to the X (cross) sweeps
\cite{StolkImproved,Zepeda,ZD} starting from the first and last subdomains simultaneously, namely,
\begin{equation}\label{eq:Tcosm}
\epsilon^{m+1}=T_{\rm{COSM}}\epsilon^{m}:=
\begin{bmatrix}
O & (I-A)^{-1}B\\
(I-C)^{-1}D & O
\end{bmatrix}
\begin{bmatrix}
\epsilon^{l,m}\\ \epsilon^{r,m}
\end{bmatrix}.
\end{equation}
It is easy to see that $T_{\rm{COSM}}^2$ has the same spectrum as $T_{\rm{DOSM}}$.
\end{remark}
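This spectral relation can be verified numerically with random blocks (a sketch of our own, not tied to the PDE symbols): $T_{\rm{COSM}}^2$ is block diagonal with blocks $(I-A)^{-1}B(I-C)^{-1}D$ and $(I-C)^{-1}D(I-A)^{-1}B$, which have identical spectra, so each eigenvalue of $T_{\rm{DOSM}}$ appears twice:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A, B, C, D = (0.3 * rng.standard_normal((n, n)) for _ in range(4))
I = np.eye(n)
iAB = np.linalg.solve(I - A, B)   # (I-A)^{-1} B
iCD = np.linalg.solve(I - C, D)   # (I-C)^{-1} D

T_cosm = np.block([[np.zeros((n, n)), iAB],
                   [iCD, np.zeros((n, n))]])
T_dosm = iCD @ iAB  # (I-C)^{-1} D (I-A)^{-1} B

# sorted eigenvalues: those of T_COSM^2 are those of T_DOSM, doubled
ev_cosm2 = np.sort_complex(np.linalg.eigvals(T_cosm @ T_cosm))
ev_dosm = np.sort_complex(np.linalg.eigvals(T_dosm))
print(ev_dosm)
```
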
\begin{remark}
For two subdomains $N=2$, \R{eq:gp} reduces to
\[
\begin{bmatrix}
1 & -b_1\\ -d_2 & 1
\end{bmatrix}
\begin{bmatrix}
g_2^l\\ g_1^r
\end{bmatrix}=
\begin{bmatrix}
\tau_2^l\\ \tau_1^r
\end{bmatrix}.
\]
Then, $T_{\rm{DOSM}}=b_1d_2$ is exactly the two-subdomain
convergence factor \R{RhoOSM}. On the one hand, the influence of the
outer left/right boundary conditions $\mathcal{B}^{l,r}u=0$ is
totally contained in $b_1$ and $d_N$. For fast convergence, the
$b_j$ and $d_j$ are the only entries that can be made arbitrarily
small by approximating the transparent transmission conditions. On
the other hand, the influence of $N$ is mainly through $(I-C)^{-1}$,
$(I-A)^{-1}$ in \R{eq:Tdosm} or the nilpotent $A$, $C$ in
\R{eq:Tposm}, not directly related to the diagonal $B$, $D$. This
explains why and how the two-subdomain analysis from Section
\ref{2SubSec} is useful.
\end{remark}
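For the one-dimensional model operator $\mathcal{L}_x=-\partial_{xx}$, these interface-to-interface symbols can be evaluated in closed form from the exponential solutions $\mathrm{e}^{\pm s x}$, $s=\sqrt{\xi^2+\eta}$. The following sketch is our own illustration for real $s>0$, assuming Robin operators $\mathcal{B}_j^{l}=-\partial_x+p$ and $\mathcal{B}_j^{r}=\partial_x+p$ and subdomain width $H+2L$; it evaluates the symbol of $b_j$ and confirms that the transparent choice $p=s$ makes it vanish identically, in line with the remark above:

```python
import math

def b_symbol(s, p, H, L):
    # symbol of b_j for L_x = -d^2/dx^2 on a subdomain of width D = H + 2L,
    # with Robin operators B^l = -d/dx + p, B^r = d/dx + p (real s > 0);
    # obtained by solving v'' = s^2 v with B^l v = 0 at the left end,
    # B^r v = gamma at the right end, and taking the trace B^l v at distance H
    D = H + 2.0 * L
    num = (p - s) * (math.exp(s * H) - math.exp(-s * H))
    den = (p + s) * math.exp(s * D) - (p - s) ** 2 / (p + s) * math.exp(-s * D)
    return num / den

s, H, L = 2.0, 0.5, 0.1
print(b_symbol(s, s, H, L))         # transparent choice p = s: exactly zero
print(abs(b_symbol(s, 1.0, H, L)))  # non-transparent Robin p: small, nonzero
```
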
Let $s:=\sqrt{\xi^2+\eta}$ be the symbol of the square-root operator $\sqrt{\mathcal{L}_y+\eta}$
(also known as the half-plane Dirichlet-to-Neumann operator, see footnote~\ref{footnoteDtN}). Assume
the boundary operator $\mathcal{B}_{*}^{l,r}$ has the symbol
$\mp q_{*}^{l,r}(\xi)\partial_x + p_{*}^{l,r}(\xi)$. Then, one can find the symbols of the operators
$a_j$, $b_j$, $c_j$ and $d_j$ in \R{eq:gp} so that the iteration matrices $T_{\rm{POSM}}$ and
$T_{\rm{DOSM}}$ become symbol matrices; see {\it e.g.}~\citeasnoun{NN97}, \citeasnoun{bootland2022analysis},
\citeasnoun{GZDD25}. Let
\[
\mathfrak{A} = \begin{bmatrix} A & O\\ O & O\end{bmatrix},
\mathfrak{B} = \begin{bmatrix} O & B\\ O & O\end{bmatrix},
\mathfrak{C} = \begin{bmatrix} O & O\\ O & C\end{bmatrix},
\mathfrak{D} = \begin{bmatrix} O & O\\ D & O\end{bmatrix}.
\]
The following results hold in norm.
\begin{theorem}[\citeasnoun{NN97}] Let $\|\cdot\|$ be an algebra norm of matrices (so that the submultiplicative property
$\|AB\|\le\|A\|\|B\|$ holds), with $\|I\|=1$. If for some $\xi$ we have
\[
{\varrho}(\xi):=\|\mathfrak{B}(\xi)\| \|\mathfrak{D}(\xi)\|
\left(\sum_{n=0}^{N-2}\|\mathfrak{A}(\xi)\|^n\right)
\left(\sum_{n=0}^{N-2}\|\mathfrak{C}(\xi)\|^n\right)<1,
\]
then the estimates
\[
\begin{aligned}
\|T_{\rm{POSM}}^n(\xi)\|&\le
C_0(\xi)\frac{1}{1-{\varrho}(\xi)}{\varrho}(\xi)^{\left[\frac{n}{2N-2}-\frac{3}{2}\right]},
& &\forall n\ge 2N,\\
\|T_{\rm{DOSM}}^{n}(\xi)\|&\le C_0(\xi)(1+{\varrho}(\xi)){\varrho}(\xi)^{n-1}, & &\forall
n\ge 2,
\end{aligned}
\]
hold with
$C_0(\xi)=\left(1+\frac{{\varrho}(\xi)}{\|\mathfrak{B}(\xi)\|}\right)\left(1+\frac{{\varrho}(\xi)}{\|\mathfrak{D}(\xi)\|}+\frac{{\varrho}(\xi)}{\|\mathfrak{B}(\xi)\|\|\mathfrak{D}(\xi)\|}\right)$.
\end{theorem}
This theorem tells us that the optimized Schwarz methods converge as soon as $|b_j|$, $|d_j|$ are
sufficiently small while $|a_j|, |c_j|$ remain bounded, which corresponds to having nearly
transparent conditions on the left and right boundaries of each subdomain. The upper bounds
suggest that one double sweep iteration $T_{\mathrm{DOSM}}$ is worth $2N-2$ parallel iterations
$T_{\mathrm{POSM}}^{2N-2}$. But since we do not know how sharp the bounds are, we still cannot
answer precisely scalability questions such as the scaling with $N\to\infty$. For that purpose,
we opt to calculate the eigenvalues of the symbol matrices numerically. In the case of $T_{\rm{DOSM}}$
with real-valued symbols, a two-sided estimate is also available \cite{GZDD25}.
Let $\rho=\rho(\xi,\eta,N,H,L,\ldots)$ be the spectral radius of the symbol iteration matrix, where
the dots represent the parameters to be introduced in the transmission operator
$\mathcal{B}^{l,r}_j$. \emph{The goal} of this section is to find the asymptotic scalings of
$\max_{\xi}\rho$ in terms of $\eta$, $N$, $H$, $L$, {\it etc.}\ One scaling is $N\to\infty$ with
$\mathcal{B}^{l,r}=\mathcal{B}_j^{l,r}$ and $\eta$, $H$, $L$ fixed (or generally all the symbol
values $a$, $b$ fixed). Another scaling is to fix $X$ and let $N\to \infty$ with $\eta$ either fixed
or growing. In this scaling, $H=(X-2L)/N\to 0$ and $L=CH^{\nu}$ ($\nu\ge 1$). To distinguish the
effects of $N\to \infty$ and $H\to 0$, it is useful to study a third scaling with $H\to 0$,
$L=CH^{\nu}$ and all the other parameters fixed. In particular, for $N=\infty$, we can see how the
subdomain size (or block size in the matrix language) influences the convergence. Also, the effect
of small overlap can be revealed with $L\to 0$ and $N$, $X$ fixed.
For the parallel Schwarz iteration with $a_j=a$, $b_j=b$, $c_j=c$, $d_j=d$ independent of $j$ and
$a=c$, a closed formula for $\rho_{\infty}:=\lim_{N\to\infty}\rho$ has been found rigorously
\cite{bootland2022analysis}, which we shall use later. Here, we give a formal derivation when the
domain becomes $(-\infty, \infty)\times(0,Y)$ and $\Omega_j=(X_j^l,X_j^r)\times(0,Y)$ for
$j=0,\pm1,\pm2,\ldots$. The interface data are collected as the bi-infinite sequence
$g:=(\ldots, g_{j}^l, g_{j-1}^r, g_{j+1}^l, g_j^r, \ldots)$ with the displayed four entries indexed
by $2j-1$, $2j$, $2j+1$ and $2j+2$. We define the split Fourier sequences (entries displayed at
the same indices as before)
\begin{align*}
\phi_{\mu,1}&:=(\ldots,\mathrm{e}^{\mathrm{i}\mu (2j-1)},0,\mathrm{e}^{\mathrm{i}\mu (2j+1)}, 0,\ldots),\\
\phi_{\mu,2}&:=(\ldots,0,\mathrm{e}^{\mathrm{i}\mu (2j)},0, \mathrm{e}^{\mathrm{i}\mu (2j+2)},\ldots),
\end{align*}
with $\mu\in [-\pi,\pi]$. Then, one can find that $\mathrm{span} \{\phi_{\mu,1}, \phi_{\mu,2}\}$ is
an invariant subspace of $T_{\rm{POSM}}$. In fact, using the definition \R{eq:Tposm}, we can
calculate
\begin{align*}
(T_{\rm{POSM}}\phi_{\mu,1})(2j-1)&= a\mathrm{e}^{\mathrm{i}\mu(2j-3)} = a\,\mathrm{e}^{-2\mathrm{i}\mu}
\cdot \phi_{\mu,1}(2j-1),\\
(T_{\rm{POSM}}\phi_{\mu,1})(2j)&= d\mathrm{e}^{\mathrm{i}\mu(2j-1)} = d\,\mathrm{e}^{-\mathrm{i}\mu}
\cdot \phi_{\mu,2}(2j),
\end{align*}
so we have
$ T_{\rm{POSM}}\phi_{\mu,1}= a\,\mathrm{e}^{-2\mathrm{i}\mu}\cdot \phi_{\mu,1} +
d\,\mathrm{e}^{-\mathrm{i}\mu} \cdot \phi_{\mu,2}.$ Therefore, we find
\begin{equation*}
T_{\rm{POSM}}(\phi_{\mu,1},\phi_{\mu,2})= (\phi_{\mu,1},\phi_{\mu,2})\left(
\begin{array}{ll}
a\,\mathrm{e}^{-2\mathrm{i}\mu} & b\,\mathrm{e}^{\mathrm{i}\mu} \\
d\,\mathrm{e}^{-\mathrm{i}\mu} & c\,\mathrm{e}^{2\mathrm{i}\mu}
\end{array}
\right).
\end{equation*}
When $a=c$, the eigenvalues of the last matrix are
\begin{equation*}
\lambda_{\mu,\pm}:=a \cos(2\mu) \pm \sqrt{bd-a^2\sin^2(2\mu)},
\end{equation*}
which for $\mu\in[-\pi,\pi]$ generate the continuum part of the limiting spectra proved by
\citeasnoun{bootland2022analysis}. But isolated points may also appear in the limiting
spectrum under some circumstances \cite{bootland2022analysis}; we did not find such points here,
and in all of our numerical observations no outlier contributed to the spectral radius.
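The eigenvalue formula for $\lambda_{\mu,\pm}$ follows from the trace $2a\cos(2\mu)$ and determinant $a^2-bd$ of the $2\times2$ symbol matrix, and can be confirmed numerically (a sketch of our own; the values of $a$, $b$, $d$, $\mu$ are arbitrary samples):

```python
import cmath
import numpy as np

a, b, d = 0.3, 0.7, 0.2   # arbitrary sample symbol values, with c = a
mu = 0.9
M = np.array([[a * cmath.exp(-2j * mu), b * cmath.exp(1j * mu)],
              [d * cmath.exp(-1j * mu), a * cmath.exp(2j * mu)]])
ev = np.linalg.eigvals(M)

# closed-form eigenvalues: a cos(2 mu) +/- sqrt(bd - a^2 sin^2(2 mu))
root = cmath.sqrt(b * d - a ** 2 * cmath.sin(2 * mu) ** 2)
lam = [a * cmath.cos(2 * mu) + root, a * cmath.cos(2 * mu) - root]

print(sorted(ev, key=lambda z: (z.real, z.imag)))
print(sorted(lam, key=lambda z: (z.real, z.imag)))
```
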
When we graph the function $\rho(\xi)$, in contrast to the
two-subdomain analysis of Section \ref{2SubSec}, we will rescale $\xi$ as
$\sqrt{\xi^2+\eta}/\sqrt{\eta}$ for $\eta>0$ and $\Re \xi/\omega$
for $\eta=-\omega^2$, $\omega>0$ so that in both cases the
evanescent modes among $\mathrm{e}^{\pm\sqrt{\xi^2+\eta}x+\I\xi y}$
always correspond to the rescaled variables greater than
one. Moreover, in the transmission conditions for domain truncation,
it is essential to approximate the square-root symbol
$\sqrt{\xi^2+\eta}$. For example, the zeroth-order Taylor
approximation gives $\sqrt{\xi^2+\eta}\approx \sqrt{\eta}$. The
reflection coefficient without overlap,
\[
R=\frac{\sqrt{\eta}-\sqrt{\xi^2+\eta}}{\sqrt{\eta}+\sqrt{\xi^2+\eta}}
=\frac{1-\frac{\sqrt{\xi^2+\eta}}{\sqrt{\eta}}}{1+\frac{\sqrt{\xi^2+\eta}}{\sqrt{\eta}}},
\]
can naturally be understood as a function of the rescaled variables.
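In the rescaled variable $t=\sqrt{\xi^2+\eta}/\sqrt{\eta}\ge 1$ (for $\eta>0$), this reflection coefficient becomes $R(t)=(1-t)/(1+t)$; a quick sketch (our own) shows that the mode $t=1$ is transmitted without reflection, while strongly evanescent modes are reflected more and more:

```python
def R(t):
    # zeroth-order Taylor reflection coefficient in the rescaled
    # variable t = sqrt(xi^2 + eta)/sqrt(eta), valid for eta > 0, t >= 1
    return (1.0 - t) / (1.0 + t)

for t in (1.0, 2.0, 10.0, 100.0):
    print(t, R(t))
```
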
Two different elliptic partial differential equations will be considered. One is the diffusion
equation with $\mathcal{L}_x+\mathcal{L}_y=-\Delta$, $\eta>0$, also known as the screened Laplace
equation, the modified Helmholtz equation, or the Helmholtz equation with the good sign. The
other is the time-harmonic wave equation with $\mathcal{L}_x+\mathcal{L}_y=-\Delta$ in $\Omega$,
$\eta=-\omega^2$, $\omega>0$, also known as the Helmholtz equation. For free space waves, the bottom
and top boundary $[0,X]\times\{0,Y\}$ arises from domain truncation of $[0,X]\times(-\infty,\infty)$.
If PMLs are used for this truncation along the bottom and top boundaries, we consider the problem
\R{eqprb} on the PML-augmented domain $[0,X]\times[-D,Y+D]$ with
$\mathcal{L}_y=-\tilde{s}(y)\partial_y(\tilde{s}(y)\partial_y)$ in the PML,
$\mathcal{B}^{b,t}=\mathcal{C}=\mathcal{I}$ or $\mp \partial_y$ on $[0,X]\times\{-D, Y+D\}$, and the complex
stretching function
\begin{equation}
\tilde{s}(y)=\left\{
\begin{array}{ll}
1 & \text{ on }[0,Y],\\
(1-\I\tilde{\sigma}(-y))^{-1} & \text{ on }[-D,0],\\
(1-\I\tilde{\sigma}(y-Y))^{-1} & \text{ on }[Y,Y+D],
\end{array}
\right.
\label{eqpml}
\end{equation}
$\tilde{\sigma}(0)=0$,
$\int_0^D\tilde{\sigma}(y)~\mathrm{d}y=\frac{1}{2}D\gamma$ (a specific
form of $\tilde{\sigma}$ has no influence on the convergence
analysis), and $\gamma>0$ is the PML strength. In this case, we will
consider the generalised Fourier frequency from the Sturm-Liouville
problem \R{eqsturm} defined on $[-D, Y+D]$ and we have
$\xi=j\pi/(Y+(2-\I\gamma)D)$, $j$ a nonnegative integer for
$\mathcal{C}=\mp\partial_y$ or positive integer for
$\mathcal{C}=\mathcal{I}$. The trace operators in the transmission
conditions \R{eqtc} are chosen from Table~\ref{tabB}, where the PML
operators need more explanation. For the diffusion problem, we will
use the following Dirichlet-to-Neumann operator:
\begin{equation}
\begin{aligned}
\mathcal{S}: g\mapsto \partial_xv(0) \text{ with $v$ satisfying}& \\
(-\hat{s}(x)\partial_x({\hat{s}(x)\partial_x}\cdot)-\partial_{yy}+\eta) v&=0 & &\text{ on }[-D, 0]\times[0,Y], \\
\mathcal{B}^{b,t}v&=0 & &\text{ on }[-D,0]\times\{0,Y\},\\
\mathcal{C}v&=0 & &\text{ on }\{-D\}\times[0,Y],\\
v&=g & &\text{ on }\{0\}\times[0,Y],
\end{aligned}
\label{eqsd}
\end{equation}
where $\hat{s}(x)=(1+\tilde{\sigma}(-x))^{-1}$ on $(-D,0)$, $\tilde{\sigma}(0)=0$,
$\int_0^D\tilde{\sigma}(x)\D{x}=\frac{1}{2}D\gamma$, $\gamma>0$, and $\mathcal{C}$ is either
the Dirichlet or the Neumann trace operator. Let $\hat{\mathcal{S}}$ be the symbol of the above
$\mathcal{S}$ and $\mathbf{n}$ be the outer normal unit vector. We have
\begin{equation}
\hat{\mathcal{S}} = \left\{
\begin{array}{ll}
\displaystyle
\sqrt{\xi^2+\eta}{{1+E}\over{1-E}},\quad & \text{if }\mathcal{C}=\mathcal{I},\\[10pt]
\displaystyle
\sqrt{\xi^2+\eta}{{1-E}\over{1+E}},\quad & \text{if }\mathcal{C}=\partial_{\mathbf{n}},
\end{array}
\right.
\label{eqhats}
\end{equation}
with $E=\exp(-(2+\gamma)D\sqrt{\xi^2+\eta})$. For the free space wave problem, we will use the
following Dirichlet-to-Neumann operator:
\begin{equation}
\thickmuskip=0.6mu
\medmuskip=0.2mu
\thinmuskip=0.1mu
\scriptspace=0pt
\nulldelimiterspace=-0.1pt
\begin{aligned}
\mathcal{S}: g\mapsto \partial_xv(0) \text{ with $v$ satisfying}& \\
\tilde{s}(x)\partial_x({\tilde{s}(x)\partial_x}v)+\tilde{s}(y)\partial_y(\tilde{s}(y)\partial_{y}v)&=\eta v& &\text{ on }[-D, 0]\times[-D,Y+D], \\
\mathcal{C}v&=0& &\text{ on }[-D,0]\times\{-D,Y+D\},\\
\mathcal{C}v&=0& &\text{ on }\{-D\}\times[-D,Y+D],\\
v&=g& &\text{ on }\{0\}\times[-D,Y+D],
\end{aligned}
\label{eqsw}
\end{equation}
where the complex stretching function $\tilde{s}$ has been defined in \R{eqpml}. The symbol of the
above $\mathcal{S}$ is \R{eqhats} but with $\displaystyle E=\exp(-(2-\I\gamma)D\sqrt{\xi^2+\eta})$.
\begin{table}
\caption{Trace operators $\mathcal{B}_j^{l,r}$ in the transmission conditions \R{eqtc}: column 2
for the diffusion problem, column 3 for the wave problem}
\begin{tabular}{c|c|c}
\hline
\hline
Problem & $(-\Delta+\eta)u=f$, $\eta>0$ & $(-\Delta-\omega^2)u=f$, $\omega>0$\\
\hline
Dirichlet & $\mathcal{I}$ & not used\\
\hline
Taylor of order 0 & $\mp\partial_{x}+\sqrt{\eta}$ & $\mp\partial_{x}+\I\omega$ \\
\hline
Continuous PML & $\mp\partial_{x}+\mathcal{S}$ with $\mathcal{S}$ in \R{eqsd}
& $\mp\partial_{x}+\mathcal{S}$ with $\mathcal{S}$ in \R{eqsw} \\
\hline
\end{tabular}
\label{tabB}
\end{table}
\begin{remark}
The signs in front of the imaginary unit $\I$, {\it e.g.}, $-$ in \R{eqpml} and $+$ in Table~\ref{tabB},
reflect that we are using the time-harmonic convention $\mathrm{e}^{\I\omega t}$; the signs
would be opposite for the other, more popular convention $\mathrm{e}^{-\I\omega t}$. Our choice
is consistent with $\sqrt{\xi^2+\eta}$ using the branch cut $(-\infty, 0)$.
\end{remark}
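As a quick consistency check of this branch choice, the principal square root (branch cut along
the negative real axis) reproduces the entries of Table~\ref{tabB}: for $\eta=-\omega^2$ one
gets $\sqrt{\eta}=\I\omega$, matching the Taylor of order zero operator under the
$\mathrm{e}^{\I\omega t}$ convention. A minimal sketch:

```python
import cmath

omega = 3.0
# principal branch: cut along the negative real axis, as stated in the remark
taylor0 = cmath.sqrt(complex(-omega**2))     # sqrt(eta) with eta = -omega^2, equals i*omega
# propagative mode (xi < omega): purely imaginary square root
prop = cmath.sqrt(complex(2.0**2 - omega**2))
# evanescent mode (xi > omega): real positive square root, hence a decaying mode
evan = cmath.sqrt(complex(5.0**2 - omega**2))
print(taylor0, prop, evan)
```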
In the following subsections, we shall explore the various specifications of the Schwarz methods for
the diffusion and free space wave problems in detail based on the numerical calculations of the
eigenvalues of the symbol iteration matrices. For the impatient reader, we first list the main
observations in Table~\ref{tabd} for diffusion and Table~\ref{tabw} for free space waves. More
information, {\it e.g.}, the dependence on the other parameters, will be revealed later.
\begin{table}
\caption{Scalings of $\displaystyle\max_{\xi\in [0,\infty)}\rho$ of the Schwarz methods for the
\emph{diffusion} problem. $N$ number of subdomains; $L$ overlap width; $X$ domain width; $H$
subdomain width}
\begin{tabular}{c|c|c}
\hline
\hline
Parallel Schwarz & $N\to\infty$, $H$, $L$ fixed & $N\to\infty$, $X$, $\frac{L}{H}$ fixed \\
\hline
Dirichlet & $O(1)\in (0,1)$ & $1-O(N^{-2})$\\
\hline
Taylor of order zero & $O(1)\in (0,1)$ & $1-O(N^{-1})$\\
\hline
PML fixed & $O(1)\in (0,1)$ & $1-O(N^{-1})$\\
\hline
\end{tabular}
\begin{tabular}{c|c|c}
\hline
\hline
Double sweep Schwarz & $N\to\infty$, $H$, $L$ fixed & $N\to\infty$, $X$, $\frac{L}{H}$ fixed \\
\hline
Dirichlet & $O(1)\in (0,1)$ & $1-O(N^{-2})$\\
\hline
Taylor of order zero & $O(1)\in (0,1)$ & $1-O(N^{-1})$\\
\hline
PML fixed & $O(1)\in (0,1)$ & $O(1)\in(0,1)$\\
\hline
\end{tabular}
\label{tabd}
\end{table}
\begin{table}
\caption{Scalings of $\max_{\xi}\rho$ of the Schwarz methods for the free space \emph{wave}
problem. $N$ number of subdomains; $L$ overlap width; $X$ domain width; $H$ subdomain width;
$\omega$ wavenumber; $D$ PML width}
\begin{tabular}{c|c|c|c|c}
\hline
\hline
$\begin{aligned}\text{Parallel}\\\text{Schwarz}\end{aligned}$ & $\begin{aligned}&N\to\infty, \omega, \\&H, L\text{ fixed}\end{aligned}$ & $\begin{aligned}&N\to\infty, \omega,\\&X, \Frac{L}{H}\text{ fixed}\end{aligned}$ & $\begin{aligned}&\omega\to\infty, N,\\&X, L\omega\text{ fixed}\end{aligned}$ & $\begin{aligned}&N\to\infty, \Frac{\omega}{N},\\&X, L\omega^{\frac{3}{2}}\text{ fixed}\end{aligned}$\\
\hline
Taylor & $1-O(N^{-1})^*$ & $1-O(N^{-\frac{5}{3}})$ & $1-O(\omega^{-\frac{9}{20}})$ & $1-O(N^{-2})$\\
\hline
$\begin{aligned}\text{PML}\\L=0\end{aligned}$ & $\begin{aligned}1-O(N^{-1})\\\text{with }D\text{ fixed}\end{aligned}$ & $\begin{aligned}1-O(N^{-1})\\\text{with }D\text{ fixed}\end{aligned}$ & $\begin{aligned}&O(1)\\\text{with }&D\text{ fixed}\end{aligned}$ & $\begin{aligned}1-O(N^{-1})\\\text{with }D\text{ fixed}\end{aligned}$\\
\hline
\end{tabular}
{\raggedright \footnotesize *~We also observed one exception, for which
$\max_{\xi}\rho=1-O(N^{-3/2})$; see Figure~\ref{figfspt0n}. \par}
\begin{tabular}{c|c|c|c|c}
\hline
\hline
$\begin{aligned}&\text{Double}\\&\text{sweep}\\&\text{Schwarz}\end{aligned}$ & $\begin{aligned}&N\to\infty, \omega, \\&H, L\text{ fixed}\end{aligned}$ & $\begin{aligned}&N\to\infty, \omega,\\&X, \Frac{L}{H}\text{ fixed}\end{aligned}$ & $\begin{aligned}&\omega\to\infty, N,\\&X, L\omega\text{ fixed}\end{aligned}$ & $\begin{aligned}&N\to\infty, \Frac{\omega}{N},\\&X, L\omega^{\frac{3}{2}}\text{ fixed}\end{aligned}$\\
\hline
Taylor & $O(1)$ & diverges & $1-O(\omega^{-\frac{9}{20}})$ & diverges \\
\hline
$\begin{aligned}\text{PML}\\L=0\end{aligned}$ & $\begin{aligned}&\to0\text{ with }D\text{ }\\&=O(\log N)\end{aligned}$ & $\begin{aligned}&\to0\text{ with }D\text{ }\\&=O(\log N)\end{aligned}$ & $\begin{aligned}&O(1)\\\text{with }&D\text{ fixed}\end{aligned}$ & $\begin{aligned}\to0\text{ with }D\\=O(\log N)\end{aligned}$\\
\hline
\end{tabular}
\label{tabw}
\end{table}
\subsection{Parallel Schwarz methods for the diffusion problem}
In this case, the original problem \R{eqprb} has $\mathcal{L}_x+\mathcal{L}_y=-\Delta$, $\eta>
0$. The Neumann boundary condition $\mathcal{B}^{b,t}=\mp\partial_y$ is imposed on the top and
bottom of $\Omega$, so that $\xi=0, \pi, 2\pi, 3\pi, \ldots$, but for simplicity the continuous
range $\xi\in [0, \infty)$ is considered for the diffusion problem. The boundary condition on the
left and right of $\Omega$ is Neumann, Dirichlet, or the same as the transmission condition:
$\mathcal{B}^{l,r}=\mp\partial_x$, $\mathcal{I}$ or $\mathcal{B}_j^{l,r}$.
\subsubsection{Parallel Schwarz methods with Dirichlet transmission for the diffusion problem}
We begin with the parallel Schwarz method with classical Dirichlet transmission
$\mathcal{B}_j^{l,r}=\mathcal{I}$ for which the overlap width $L>0$ is necessary for
convergence. For a general sequential decomposition, a variational interpretation of the convergence
was given by \citeasnoun{Lions:1988:SAM}. An estimate of the convergence rate was derived recently
\cite{ciaramella3}. A general theory for the parallel Schwarz method is far from being as complete
as the theory for the additive Schwarz method \cite{TWbook}. For example, a convergence theory of
the restricted additive Schwarz (RAS) method \cite{cai1999restricted} has been an open problem for
two decades, and RAS is equivalent to the parallel Schwarz method \cite{Efstathiou,St-Cyr07}.
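For orientation, recall the classical two-subdomain analysis on the unbounded strip, which gives
the convergence factor $\rho(\xi)=\mathrm{e}^{-2L\sqrt{\xi^2+\eta}}$ for Dirichlet transmission
with overlap $L$; this standard formula is assumed here as a baseline, not the $N$-subdomain
factor studied in this section. A minimal sketch:

```python
import numpy as np

def rho_two_subdomain(xi, eta, L):
    # classical two-subdomain convergence factor on the unbounded strip (assumed baseline)
    return np.exp(-2 * L * np.sqrt(xi**2 + eta))

eta, L = 1.0, 0.1
xi = np.linspace(0.0, 20.0, 201)
rho = rho_two_subdomain(xi, eta, L)
# the maximum over xi sits at xi = 0, and 1 - rho(0) ~ 2*L*sqrt(eta) is linear in L
print(np.argmax(rho) == 0, rho[0])
```

This baseline already reflects two observations recurring below: $\max_\xi\rho$ is attained at
$\xi=0$, and $1-\max_\xi\rho$ depends linearly on the overlap width $L$ and on $\sqrt{\eta}$.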
\begin{paragraph}{Convergence with increasing number of fixed size subdomains}
With the number of subdomains $N\to\infty$, the subdomain width $H$ fixed, and the overlap width
$L$ fixed, the convergence factor $\rho=\rho(\xi)$ is illustrated in the top half of
Figure~\ref{figpdn}. The plots of $\rho$ on the left show that $\rho_{\infty}$ from the limiting
spectrum formula is a very accurate approximation of $\rho$ already starting from $N=10$. It is
also clear that the Schwarz method is a smoother that performs better for larger cross-sectional
frequency $\xi$. The plots of the $\log$-scaled $1-\rho$ on the right display a constant slope of
the initial parts of the curves. The slope is estimated to be $2$ for $\rho_{\infty}$. The
influence of the original boundary condition is also manifested in those plots: the asymptotic
behaviour $\rho\to \rho_{\infty}$ as $N\to\infty$ sets in later for the Dirichlet problem (more precisely,
mixed with the Neumann conditions on top and bottom, similarly hereinafter) than for the Neumann
problem. In the bottom half of Figure~\ref{figpdn}, the scaling of $1-\max_{\xi}\rho$ (attained
at $\xi=0$ as seen before) is shown. The conclusion is that $\max_{\xi}\rho=O(1)<1$, uniformly
in $N$ as $N\to\infty$. But a preasymptotic deterioration with growing $N$ is visible for
$\mathcal{B}^{l,r}=\mathcal{I}$ and small $\eta$, $H$.
\begin{figure}
\centering
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPDXNBN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPDXNBN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPDXNBD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPDXNBD2.png}
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBNN.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBNN2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBDN.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBDN2.png}
\caption{Convergence of the parallel Schwarz method with Dirichlet transmission for
diffusion with increasing number of {fixed size subdomains}.}
\label{figpdn}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with increasing number of subdomains}
With the number of subdomains $N\to\infty$ and the domain width $X$ fixed, the
results are shown in Figure~\ref{figpd1} in a format similar to
Figure~\ref{figpdn}. In this scaling, the symbol values $a$, $b$ vary with $N$,
so no limiting spectral radius is available in closed form. But the plots of the
convergence factor $\rho=\rho(\xi)$ in the left column of the top half of
Figure~\ref{figpd1} indicate that the limiting curve is the constant function one, and the
plots in the right column display a constant slope in the $\log$ scale of
$1-\rho$ for small $\sqrt{\xi^2+\eta}$. The slope is estimated to be $2$ for the
Neumann problem and $0$ for the Dirichlet problem. The bottom half of
Figure~\ref{figpd1} shows the scaling $\max_{\xi}\rho = 1-O(N^{-2})$
for various values of the coefficient $\eta$ and the overlap width $L$. The hidden
constant factor in $O(N^{-2})$ can be seen depending linearly on
$\eta$ and $\frac{L}{H}$ for the Neumann problem, and linearly on
$\frac{L}{H}$ but is independent of small $\eta$ for the Dirichlet problem.
\begin{figure}
\centering
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPDX1BN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPDX1BN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPDX1BD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPDX1BD2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBN11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBN12.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBD11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBD12.png}
\caption{Convergence of the parallel Schwarz method with Dirichlet transmission for
diffusion on a {fixed domain} with {increasing number of
subdomains}.}
\label{figpd1}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence with a fixed number of subdomains of shrinking
width}
With the number of subdomains $N$ fixed and the subdomain width $H\to 0$, we try to understand the
dependence of the convergence factor $\rho$ on $H$ separately from the dependence on $N$. As we
can see from the top half of Figure~\ref{figpdh}, smaller $H$ leads to slower convergence for all
the Fourier frequencies $\xi$. Note that each graph of $\rho(\xi)$ for $N=10$ is accompanied with
a graph of the corresponding $\rho_{\infty}$ for infinitely many subdomains $N=\infty$. So by
``interpolation'' one can imagine roughly the graphs for other values of $N$ based on the
knowledge of Figure~\ref{figpdn}. In the right column of Figure~\ref{figpdh}, we see that, for the
Neumann problem, the process of the subdomain width $H\to 0$ alone causes a deterioration of
convergence, while for the Dirichlet problem $H\to 0$ must be combined with an increasing
number of subdomains $N\to\infty$ to cause a deterioration. The bottom half of Figure~\ref{figpdh}
presents the scaling of $1-\max_{\xi}\rho$ with $H\to 0$. On the left for the overlap width
$L=O(H)$, it shows that $\max_{\xi} \rho = 1-O(H^2)$ for the Neumann problem where $O(H^2)$
depends linearly on the coefficient $\eta$, and $\max_{\xi}\rho=O(1)$ for the Dirichlet problem
where $O(1)$ is independent of $\eta$. On the right, different values of $\nu$ for the overlap
width $L=O({H^\nu})$ are used, which suggests $\max_{\xi}\rho = 1-O(HL)$ for the Neumann problem
and $\max_{\xi}\rho = 1-O(H^{-1}L)$ for the Dirichlet problem.
\begin{figure}
\centering
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPDXHBN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPDXHBN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPDXHBD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPDXHBD2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBNH1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBNH2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBDH1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBDH2.png}
\caption{Convergence of the parallel Schwarz method with Dirichlet transmission for
diffusion with a {fixed number of subdomains} of {shrinking width}.}
\label{figpdh}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with a fixed number of subdomains of shrinking
overlap}
With both number of subdomains $N$ and the domain width $X$ fixed and the overlap width $L\to 0$,
we find the convergence factor $\rho(\xi)\to 1$; see the top half of Figure~\ref{figpdl}. In
particular, the right column shows a faster convergence of the Schwarz method for the Dirichlet
problem than for the Neumann problem. The rate at which $\max_{\xi}\rho =\rho(0)\to 1$ appears
to be linear in $L$; see the bottom half of Figure~\ref{figpdl}. The hidden constant factor in
$1-\rho=O(L)$ is $O(\eta H)$ for the Neumann problem and robust in the
coefficient $\eta$ and subdomain width $H$ for the Dirichlet problem.
\begin{figure}
\centering
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPDXLBN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPDXLBN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPDXLBD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPDXLBD2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBNL1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBNL2.png}
\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBDL1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPDXIBDL2.png}
\caption{Convergence of the parallel Schwarz method with Dirichlet transmission for diffusion on
a {fixed domain} with a {fixed number of subdomains} of {shrinking overlap}.}
\label{figpdl}
\end{figure}
\end{paragraph}
\subsubsection{Parallel Schwarz method with Taylor of order zero transmission for the diffusion
problem}
By using the Taylor of order zero transmission condition, see Table~\ref{tabB}, the subdomain
problem away from the boundary $\{0, X\}\times [0,Y]$ is a domain truncation of the problem on the
infinite pipe $(-\infty, \infty)\times [0,Y]$. It is interesting to check how the original boundary
condition on $\{0, X\}\times [0,Y]$ influences the convergence. At least, we expect the Schwarz
method to work when the original boundary operator is also Taylor of order zero:
$\mathcal{B}^{l,r}=\mathcal{B}^{l,r}_j$, because then the original problem is a domain truncation of
the infinite pipe problem. In the following paragraphs, we will study the convergence of parallel
Schwarz for the Dirichlet/Neumann/Taylor $\mathcal{B}^{l,r}$ separately. The literature on a general
theory of the optimized Schwarz method with Robin transmission conditions is rather
sparse. \citeasnoun{lions1990schwarz} gave the first convergence proof in the non-overlapping case, without
an estimate of the convergence rate; see also \citeasnoun{deng1997}. A convergence rate for the
non-overlapping optimized Schwarz method seems attainable only at the discrete level. \citeasnoun{qin2006}
obtained the first estimate of the convergence {factor} $1-O(h^{1/2}H^{-1/2})$ with an {optimized} choice of
the Robin parameter; see also \citeasnoun{qin2008}, \citeasnoun{xu2010}, \citeasnoun{lui2009}, \citeasnoun{Loisel},
\citeasnoun{liu2014robin}, \citeasnoun{GH15}, \citeasnoun{GH18}. In the overlapping case, the literature becomes even
sparser, and there is only the work of \citeasnoun{loisel2010} to our knowledge.
\begin{paragraph}{Convergence with increasing number of fixed size subdomains}
With the number of subdomains $N\to\infty$ and both the subdomain width $H$ and the overlap width
$L$ fixed, two regimes can be observed in the top halves of Figures~\ref{figpt0nn}, \ref{figpt0nd}
and \ref{figpt0nt0}. In one regime, see the first row of each figure, the maximum point of the
convergence factor $\rho(\xi)$ tends to $\xi=0$ as the number of subdomains $N\to\infty$. The other
regime appears when the overlap width $L$ is sufficiently small, see the second row of each
figure, in which the maximum point of $\rho(\xi)$ is almost fixed at the critical point of
$\rho_{\infty}(\xi)=\lim_{N\to\infty}\rho(\xi)$. In both regimes, $\max_{\xi}\rho=O(1)<1$ as
$N\to \infty$.
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0XNBN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0XNBN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0XNBN3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0XNBN4.png}
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBNN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBNN2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBNN3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBNN4.png}
\caption{Convergence of the parallel Schwarz method with Taylor of order zero
transmission for the Neumann problem of diffusion with increasing number of
fixed size subdomains.}
\label{figpt0nn}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0XNBD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0XNBD2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0XNBD3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0XNBD4.png}
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBDN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBDN2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBDN3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBDN4.png}
\caption{Convergence of the parallel Schwarz method with Taylor of order zero
transmission for {the Dirichlet problem} of diffusion with increasing number
of fixed size subdomains.}
\label{figpt0nd}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0XNBT01.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0XNBT02.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0XNBT03.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0XNBT04.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBT0N1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBT0N2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBT0N3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBT0N4.png}
\caption{Convergence of the parallel Schwarz method with Taylor of order zero transmission
for {the infinite pipe} diffusion with increasing number of fixed size
subdomains.}
\label{figpt0nt0}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with increasing number of subdomains}
With the number of subdomains $N\to \infty$ and the domain width $X$ fixed, it follows that the
subdomain width $H\to 0$. The convergence for the Neumann problem is studied in
Figure~\ref{figpt01n}. Recall that $\rho_{\infty}:=\lim_{N\to\infty}\rho$ for fixed $H$, $L$ but
$X=NH+2L$. From the top half of Figure~\ref{figpt01n}, we find that the convergence factor
$\rho(\xi)$ attains its maximum at $\xi=0$ when the overlap width $L=O(H)$ is not too small and
at the critical point of $\rho_{\infty}(\xi)$ when $L=O(H^2)$ is sufficiently small. In both
regimes, it holds that $\max_{\xi}\rho=1-O(N^{-1})$. The difference is in how the hidden factor
depends on $\eta$ and $L$: in the first regime $\max_{\xi}\rho=1-O(\sqrt{\eta}N^{-1})$
independent of $L$, while in the latter regime
$\max_{\xi}\rho=1-O(\eta^{1/4}\sqrt{L}N^{-1})$. Note that we have the same (up to a constant
factor) dependence on $L$ and $\eta$ as in \R{TaylorRho} from the two-subdomain analysis. The
convergence for the Dirichlet problem and the infinite pipe problem is studied in
Figure~\ref{figpt01d} and Figure~\ref{figpt01t0}, respectively. Although the graphs of $\rho$ look different, the
maximum of $\rho$ depends on $N$ in the same way as for the Neumann problem.
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0X1BN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0X1BN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0X1BN3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0X1BN4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBN11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBN12.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBN13.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBN14.png}
\caption{Convergence of the parallel Schwarz method with Taylor of order zero transmission for
{the Neumann} problem of diffusion on a {fixed domain} with {increasing number of
subdomains}.}
\label{figpt01n}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0X1BD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0X1BD2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0X1BD3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0X1BD4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBD11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBD12.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBD13.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBD14.png}
\caption{Convergence of the parallel Schwarz method with Taylor of order zero transmission for
{the Dirichlet} problem of diffusion on a fixed domain with increasing number of subdomains.}
\label{figpt01d}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0X1BT01.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0X1BT02.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPT0X1BT03.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPT0X1BT04.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBT011.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBT012.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBT013.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPT0XIBT014.png}
\caption{Convergence of the parallel Schwarz method with Taylor of order zero transmission for
{the infinite pipe} diffusion on a fixed domain with increasing number of subdomains.}
\label{figpt01t0}
\end{figure}
\end{paragraph}
\subsubsection{Parallel Schwarz method with PML transmission for the diffusion problem}
By using the PML transmission condition, the subdomain problem away from the boundary
$\{0, X\}\times[0,Y]$ can be a very good domain truncation of the problem on the infinite pipe
$(-\infty, \infty)\times [0,Y]$, because PML can make the reflection coefficient
\[
R=\frac{\hat{\mathcal{S}}-\sqrt{\xi^2+\eta}}{\hat{\mathcal{S}}+\sqrt{\xi^2+\eta}}=\pm\mathrm{e}^{-(2+\gamma)D\sqrt{\xi^2+\eta}}\qquad\text{($\hat{\mathcal{S}}$
given in \R{eqhats})}
\]
arbitrarily small, at the price of increased numerical cost, by enlarging the PML
width $D$ and the PML strength $\gamma$. If the original boundary condition is also given by PML, {\it i.e.},
$\mathcal{B}^{l,r}=\mathcal{B}^{l,r}_j$, then the original problem does approximate the infinite
pipe problem and so {we can expect that the Schwarz method will perform well}. What if
$\mathcal{B}^{l,r}$ is Dirichlet or Neumann? What condition should be used on the external boundary
of the PML? We will address these questions in the following paragraphs.
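The trade-off between accuracy and cost is explicit in the reflection coefficient formula above;
a minimal sketch evaluating $|R|$ for a few hypothetical choices of $D$ and $\gamma$:

```python
import numpy as np

def pml_reflection(xi, eta, D, gamma):
    # |R| = exp(-(2+gamma)*D*sqrt(xi^2+eta)) for the diffusion PML truncation
    return np.exp(-(2 + gamma) * D * np.sqrt(xi**2 + eta))

xi, eta = 0.0, 1.0                       # worst case: the maximum of |R| is at xi = 0
for D, gamma in [(0.5, 2.0), (1.0, 2.0), (1.0, 6.0)]:
    print(D, gamma, pml_reflection(xi, eta, D, gamma))
```

With $\eta=1$ the three choices give $\mathrm{e}^{-2}\approx 0.135$,
$\mathrm{e}^{-4}\approx 0.018$ and $\mathrm{e}^{-8}\approx 3.4\cdot10^{-4}$: doubling the PML
width squares the reflection, while increasing $\gamma$ reduces it at fixed width.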
\begin{paragraph}{Convergence with increasing number of fixed size subdomains}
The study is carried out on a growing chain of fixed size subdomains. We first consider the
Neumann problem in Figure~\ref{figppnn}. The first row is the convergence factor $\rho(\xi)$ from
using the Neumann condition on the PML external boundaries, while the second row is from using the
Dirichlet condition. We see that their asymptotics $\rho_{\infty}(\xi)=\lim_{N\to\infty}\rho(\xi)$
differ little, but the Neumann-terminated PML is better than the Dirichlet-terminated PML
for a moderate number of subdomains $N$, which is reasonable because the original domain $\Omega$ is
equipped with the Neumann condition. In the bottom half of Figure~\ref{figppnn}, we see that
$\max_{\xi}\rho=O(1)<1$ with the constant linearly dependent on the coefficient $\eta$ and the
PML width $D$. If we compare this figure with Figure~\ref{figpdn} and Figure~\ref{figpdl}, we can
find many similarities. That is, the PML for diffusion behaves like an overlap ({see also patch substructuring methods \cite{gander2007analysis}, and references therein}). But the PML can be
of arbitrary width, while the overlap width cannot exceed the subdomain width. The same scaling
is observed for the Dirichlet problem in Figure~\ref{figppnd} and for the truncated infinite pipe
problem in Figure~\ref{figppnp}. For a moderate number of subdomains, the Dirichlet-terminated PML is
preferable for the original Dirichlet problem.
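The analogy between the PML and an overlap can be made heuristically quantitative from the
formulas already at hand: matching the classical two-subdomain overlap factor
$\mathrm{e}^{-2L\sqrt{\xi^2+\eta}}$ (assumed) with the squared PML reflection
$\mathrm{e}^{-2(2+\gamma)D\sqrt{\xi^2+\eta}}$ suggests an effective overlap
$L_{\mathrm{eff}}=(2+\gamma)D$. A rough sketch of this heuristic with hypothetical values:

```python
import numpy as np

def rho_overlap(xi, eta, L):
    # classical two-subdomain factor for Dirichlet transmission with overlap L (assumed)
    return np.exp(-2 * L * np.sqrt(xi**2 + eta))

def rho_pml(xi, eta, D, gamma):
    # heuristic two-subdomain factor for non-overlapping PML transmission: |R|^2
    return np.exp(-2 * (2 + gamma) * D * np.sqrt(xi**2 + eta))

xi, eta, D, gamma = 2.0, 1.0, 0.1, 3.0   # hypothetical values
L_eff = (2 + gamma) * D                  # effective overlap attributed to the PML
print(np.isclose(rho_pml(xi, eta, D, gamma), rho_overlap(xi, eta, L_eff)))  # → True
```

This is consistent with the observation that the PML width, unlike the overlap, is not capped
by the subdomain width.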
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPXNBN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPXNBN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPXNBN3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPXNBN4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBNN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBNN2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBNN3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBNN4.png}
\caption{Convergence of the parallel Schwarz method with PML transmission for the Neumann
problem of diffusion with increasing number of fixed size subdomains.}
\label{figppnn}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPXNBD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPXNBD2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPXNBD3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPXNBD4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBDN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBDN2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBDN3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBDN4.png}
\caption{Convergence of the parallel Schwarz method with PML transmission for the
Dirichlet problem of diffusion with increasing number of fixed size subdomains.}
\label{figppnd}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPXNBP1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPXNBP2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPXNBP3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPXNBP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBPN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBPN2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBPN3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBPN4.png}
\caption{Convergence of the parallel Schwarz method with PML transmission for the infinite
pipe diffusion with increasing number of fixed size subdomains.}
\label{figppnp}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with increasing number of subdomains}
In this scaling, the subdomain width $H=(X-2L)/N\to 0$ as the number of subdomains $N\to \infty$
and the domain width $X$ is fixed. Unlike the overlap width $L$, the PML width $D$ is not
bound to $H$ and so it can be fixed. From the top half of Figure~\ref{figpp1n}, we see that
$\rho_{\infty}=\lim_{N\to\infty}\rho$ (the limit taken for each fixed $H$) tends to the
constant $1$ as $H\to0$, and $1-\rho$ changes with growing $N$ in the same way as
$1-\rho_{\infty}$. The convergence deterioration is estimated in the bottom half of
Figure~\ref{figpp1n} as $1-\max_{\xi}\rho=O(N^{-1})$ with a linear dependence on the PML width
$D$. The condition on the external boundary of the PML plays a significant role for small
$\eta>0$; see the first column of the bottom half of Figure~\ref{figpp1n}. If the PML is
terminated with the same condition as for the original domain, {\it i.e.},
$\mathcal{C}=\mathcal{B}^{l,r}$, then $1-\max_{\xi}\rho$ tends to be robust for small $\eta>0$;
otherwise, the convergence deteriorates with $\eta\to0^{+}$. A similar phenomenon appears for the
classical Dirichlet transmission, see the earlier Figure~\ref{figpd1}, in particular, the first
column of the bottom half. All the above observations apply equally to the Dirichlet problem
(Figure~\ref{figpp1d}) and the infinite pipe problem (Figure~\ref{figpp1p}).
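The scaling orders quoted here and below are read off from log-log plots; the same estimate can be automated by a least-squares fit of $\log(1-\max_{\xi}\rho)$ against $\log N$. A minimal sketch (the synthetic data mimicking $1-\max_{\xi}\rho=O(N^{-1})$ is an illustrative assumption, not the data behind the figures):

```python
import numpy as np

def fit_scaling_order(N, gap):
    """Least-squares fit of gap = 1 - max rho against c * N**p in
    log-log coordinates; returns the exponent p and the prefactor c."""
    p, logc = np.polyfit(np.log(N), np.log(gap), 1)
    return p, np.exp(logc)

# synthetic data following the observed scaling 1 - max rho = O(1/N)
N = np.array([8.0, 16.0, 32.0, 64.0, 128.0, 256.0])
gap = 0.7 / N
p, c = fit_scaling_order(N, gap)
```

On exact power-law data the fit recovers the exponent $p=-1$; on measured convergence factors it gives the slope of the log-log scaling plots.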
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPX1BN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPX1BN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPX1BN3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPX1BN4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBN11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBN12.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBN13.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBN14.png}
\caption{Convergence of the parallel Schwarz method with PML transmission for the Neumann
problem of diffusion on a fixed domain with increasing number of subdomains.}
\label{figpp1n}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPX1BD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPX1BD2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPX1BD3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPX1BD4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBD11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBD12.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBD13.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBD14.png}
\caption{Convergence of the parallel Schwarz method with PML transmission for the
Dirichlet problem of diffusion on a fixed domain with increasing number of subdomains.}
\label{figpp1d}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPX1BP1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPX1BP2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionPPX1BP3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionPPX1BP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBP11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBP12.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBP13.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionPPXIBP14.png}
\caption{Convergence of the parallel Schwarz method with PML transmission for the infinite
pipe diffusion on a fixed domain with increasing number of subdomains.}
\label{figpp1p}
\end{figure}
\end{paragraph}
\subsection{Double sweep Schwarz methods for the diffusion problem}
Double sweep Schwarz methods differ from parallel Schwarz methods in the order of updating subdomain
solutions, which is analogous to the difference between the symmetric Gauss-Seidel and Jacobi
iterations. Moreover, the block symmetric Gauss-Seidel method and the block Jacobi method are
special double sweep and parallel Schwarz methods with minimal overlap and Dirichlet
transmission condition. On {the one hand}, it is typical that the Gauss-Seidel iteration (sweep in only one
order of the unknowns) converges twice as fast as the Jacobi iteration, and the symmetric
Gauss-Seidel method is no more than twice as fast as the Gauss-Seidel method; see
{\it e.g.}~\citeasnoun{hackbusch1994iterative}. On {the other hand}, the optimal double sweep Schwarz method
converges in one iteration, much faster than the optimal parallel Schwarz method that converges in
$N$ iterations. Of course, it comes at a price: the double sweep is inherently sequential between
subdomains, while the parallel Schwarz method allows all the subdomain problems to be solved
simultaneously in one iteration. {Our goal in this subsection is to investigate the convergence speed in the general setting
between these two limits.}
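The Jacobi versus symmetric Gauss-Seidel analogy can be checked directly at the matrix level. A minimal sketch on a 1D Laplacian (the matrix size, tolerance and iteration cap are illustrative choices):

```python
import numpy as np

def run(A, b, sweep, tol=1e-8, maxit=10000):
    """Apply the stationary iteration `sweep` until the residual drops below tol."""
    x = np.zeros_like(b)
    for k in range(1, maxit + 1):
        x = sweep(A, b, x)
        if np.linalg.norm(b - A @ x) < tol:
            return k
    return maxit

def jacobi(A, b, x):
    return x + (b - A @ x) / np.diag(A)

def sym_gauss_seidel(A, b, x):
    n = len(b)
    for i in range(n):                      # forward sweep
        x[i] += (b[i] - A[i] @ x) / A[i, i]
    for i in reversed(range(n)):            # backward sweep
        x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x

n = 20
A = 2.0*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
b = np.ones(n)
it_jacobi = run(A, b, jacobi)
it_sgs = run(A, b, sym_gauss_seidel)
```

As expected from the discussion above, the symmetric Gauss-Seidel iteration needs fewer iterations than Jacobi, though each of its iterations does two sweeps.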
\subsubsection{Double sweep Schwarz method with Dirichlet transmission for the diffusion problem}
The method proposed by \citeasnoun{Schwarz:1870:UGA} uses Dirichlet transmission conditions and
overlap. It solves the subdomain problems in alternating order, now also known as double sweep
\cite{NN97}. At the matrix level, the classical Schwarz method can be viewed as an improvement of
the block symmetric Gauss-Seidel method by adding overlaps between blocks. So, yet another name for
the double sweep Schwarz method is the symmetric multiplicative Schwarz method \cite[Section
1.6]{TWbook}. It can be seen from \citeasnoun[Theorem 4.1]{bramble1991convergence} that the convergence {factor}
of the method is bounded from above by $1-O(H^2)$ for an overlap $L=O(H)$. In the classical
context with Dirichlet transmission, the symmetric multiplicative Schwarz method is rarely used
because there would be no benefit in convergence \cite{holst1997schwarz} compared to the (single
sweep) multiplicative Schwarz method.
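For intuition, the alternating solves with Dirichlet transmission can be reproduced in one dimension. A minimal sketch for $-u''+\eta u=f$ on $(0,1)$ with two overlapping subdomains and homogeneous Dirichlet boundary conditions (the grid, overlap and $\eta$ are illustrative choices, not the configuration behind the figures):

```python
import numpy as np

def solve_subdomain(eta, h, rhs, left, right):
    """Direct FD solve of -u'' + eta*u = rhs on the interior nodes,
    with Dirichlet values `left` and `right` at the two end nodes."""
    n = len(rhs)
    A = (2.0/h**2 + eta)*np.eye(n) - (1.0/h**2)*(np.eye(n, k=1) + np.eye(n, k=-1))
    b = rhs.copy()
    b[0] += left/h**2
    b[-1] += right/h**2
    return np.linalg.solve(A, b)

eta, npts = 1.0, 101                  # coefficient and grid points on [0, 1]
h = 1.0/(npts - 1)
f = np.ones(npts)
u_ref = np.zeros(npts)                # monodomain reference solution
u_ref[1:-1] = solve_subdomain(eta, h, f[1:-1], 0.0, 0.0)

a, b_idx = 40, 60                     # subdomains (0, x_60) and (x_40, 1): overlap 0.2
u = np.zeros(npts)
errors = []
for it in range(10):                  # alternating Schwarz: left solve, then right solve
    u[1:b_idx] = solve_subdomain(eta, h, f[1:b_idx], 0.0, u[b_idx])
    u[a+1:-1] = solve_subdomain(eta, h, f[a+1:-1], u[a], 0.0)
    errors.append(np.max(np.abs(u - u_ref)))
```

Each right solve reuses the freshly updated interface value $u[a]$ from the left solve, which is exactly the multiplicative (alternating) ordering; the error contracts geometrically with a factor set by the overlap.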
\begin{paragraph}{Convergence with increasing number of fixed size subdomains}
In the top half of Figure~\ref{figddn}, the convergence factor $\rho(\xi)$ of the double sweep
Schwarz method is shown for growing $N$ (number of subdomains). There seems to be a limiting
curve $\rho_{\infty}:=\lim_{N\to\infty}\rho$ independent of the original boundary condition on
$\{0, X\}\times [0, Y]$. From the bottom half of Figure~\ref{figddn}, the scaling of the double
sweep iteration turns out to be $1-\max_{\xi}\rho=O(1)\in(0,1)$ with a linear dependence on the
overlap width $L$. The dependence on the coefficient $\eta>0$ is linear for both Neumann and
Dirichlet problems. Comparing with the parallel iteration studied in Figure~\ref{figpdn}, the
double sweep iteration shown in Figure~\ref{figddn} does converge faster. More precisely,
comparing the bottom left quadrants of the two figures, we find
the double sweep iteration is about twice as fast as the parallel iteration.
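In terms of iteration counts, ``twice as fast'' corresponds to the double sweep contraction factor being roughly the square of the parallel one. A quick sketch with an illustrative contraction factor (not a value measured from the figures):

```python
import math

def iterations_to_tol(rho, tol=1e-8):
    """Number of iterations a stationary method with contraction factor
    rho needs to reduce the error by the factor tol."""
    return math.ceil(math.log(tol) / math.log(rho))

rho_parallel = 0.9                   # illustrative contraction factor
rho_double_sweep = rho_parallel**2   # "twice as fast": roughly the square
n_parallel = iterations_to_tol(rho_parallel)
n_double_sweep = iterations_to_tol(rho_double_sweep)
```

Squaring the contraction factor halves the iteration count, at the price of the sequential sweep within each iteration.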
\begin{figure}
\centering \includegraphics[width=.56\textwidth,trim=10 10 0
6,clip]{Figures/diffusionDDXNBN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDDXNBN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0
6,clip]{Figures/diffusionDDXNBD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDDXNBD2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDDXIBNN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDDXIBNN2.png}
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDDXIBDN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDDXIBDN2.png}
\caption{Convergence of the double sweep Schwarz method with Dirichlet transmission for
diffusion with increasing number of fixed size subdomains.}
\label{figddn}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with increasing number of subdomains}
We find from the top half of Figure~\ref{figdd1} that the graph of the convergence factor
$\rho(\xi)$ for the double sweep Schwarz method tends to the constant {$1$} when we refine the
decomposition on a fixed domain. For small $\eta>0$, the convergence is much faster for the
Dirichlet problem than for the Neumann problem, a phenomenon observed also with the parallel
Schwarz method in Figure~\ref{figpd1}. The scaling of $\max_{\xi}\rho(\xi)=\rho(0)$ is illustrated
in the bottom half of Figure~\ref{figdd1}. Note that a large overlap $L=O(H)$ is
considered. We find $\max_{\xi}\rho=1-O(N^{-2})$, the same as for the parallel Schwarz
method shown in Figure~\ref{figpd1}. The dependence on the coefficient $\eta>0$ and the overlap
width $L$ is also the same as before. A more {careful comparison of} the two figures shows that the double
sweep iteration is about twice as fast as the parallel iteration.
\begin{figure}
\centering \includegraphics[width=.56\textwidth,trim=10 10 0
6,clip]{Figures/diffusionDDX1BN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDDX1BN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0
6,clip]{Figures/diffusionDDX1BD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDDX1BD2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDDXIBN11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDDXIBN12.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDDXIBD11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDDXIBD12.png}
\caption{Convergence of the double sweep Schwarz method with Dirichlet transmission for
diffusion on a fixed domain with increasing number of subdomains.}
\label{figdd1}
\end{figure}
\end{paragraph}
\subsubsection{Double sweep Schwarz method with Taylor of order zero transmission for the
diffusion problem}
The Taylor of order zero condition is designed for domain truncation
of unbounded problems. It is interesting to see how such transmission
conditions work for other problems. As we did for the parallel
Schwarz method, three types of boundary value problems -- the
Dirichlet, Neumann and infinite pipe problems -- will all be
investigated. Another important goal is to find out how much is
gained by passing from the parallel iteration to the double sweep iteration.
\begin{paragraph}{Convergence with increasing number of fixed size subdomains}
For the double sweep Schwarz method, there also seems to be a
limiting curve $\rho_{\infty}=\lim_{N\to\infty}\rho$ independent of
the boundary condition defined by $\mathcal{B}^{l,r}$; see
Figure~\ref{figdt0n}. The maximum of $\rho$ is attained at an
asymptotic point away from $\xi=0$ as $N\to\infty$. Since the
scaling is found to be independent of $\mathcal{B}^{l,r}$, only one
boundary condition is illustrated; see the bottom row of
Figure~\ref{figdt0n}. The observation is $\max_{\xi}\rho = O(1)<1$ with a
square-root dependence on the overlap width $L$. Compared to the
subplot in the third row and first column of Figure~\ref{figpt0nd},
the double sweep iteration is three to four times as fast as the
parallel iteration.
\begin{figure}
\centering \includegraphics[width=.56\textwidth,trim=10 10 0
6,clip]{Figures/diffusionDT0XNBN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDT0XNBN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0
6,clip]{Figures/diffusionDT0XNBD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDT0XNBD2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0
6,clip]{Figures/diffusionDT0XNBT01.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0
6,clip]{Figures/diffusionDT0XNBT02.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDT0XIBDN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDT0XIBDN2.png}
\caption{Convergence of the double sweep Schwarz method with Taylor of order zero
transmission for diffusion with increasing number of fixed size
subdomains.}
\label{figdt0n}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with increasing number of subdomains}
In the first three rows of Figure~\ref{figdt01}, we see a deterioration of the convergence as we
refine the decomposition of a fixed domain: a peak of the convergence factor $\rho$ rises and
expands to the right to form a plateau. The shape and location of the plateau are almost the same
for the three types of boundary value problems. Clearly, the double sweep Schwarz method is not a
smoother. The scaling of $\max_{\xi}\rho$ is illustrated in the bottom row of
Figure~\ref{figdt01} for the Dirichlet problem. We find
$\max_{\xi}\rho=1-O(N^{-1})$. The hidden constant factor in $O(N^{-1})$ has a
square-root dependence on the coefficient $\eta>0$ and a mild dependence on the overlap width
$L\to 0^+$. Compared to the parallel iteration explored in Figure~\ref{figpt01d}, the double
sweep iteration scales the same way but converges $3\sim 4$ times as fast.
\begin{figure}
\centering \includegraphics[width=.56\textwidth,trim=10 10 0
6,clip]{Figures/diffusionDT0X1BN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDT0X1BN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0
6,clip]{Figures/diffusionDT0X1BD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDT0X1BD2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0
6,clip]{Figures/diffusionDT0X1BT01.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0
6,clip]{Figures/diffusionDT0X1BT02.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDT0XIBD11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0
2,clip]{Figures/diffusionDT0XIBD12.png}
\caption{Convergence of the double sweep Schwarz method with Taylor of order zero
transmission for diffusion on a fixed domain with increasing number of
subdomains.}
\label{figdt01}
\end{figure}
\end{paragraph}
\subsubsection{Double sweep Schwarz method with PML transmission for the diffusion problem}
So far, the transmission conditions that we have explored for the double sweep Schwarz methods are
the Dirichlet and Taylor of order zero conditions, which both rely on the overlap width $L>0$ to
control the high frequency error. Compared to their parallel Schwarz counterparts, these double
sweep Schwarz methods do not improve the scaling orders in $N$ (number of subdomains) but only the
constant factors. Will the comparison carry over to the PML condition? For the parallel Schwarz
methods on a fixed domain, we have seen that a fixed PML condition leads to the convergence rate
$\max_{\xi}\rho=1-O(N^{-1})$, the same as the Taylor of order zero condition does, which
is not surprising given that the optimal parallel Schwarz method converges in $N$ iterations. But the
optimal double sweep Schwarz method converges in just one iteration. So, if a double sweep Schwarz
method deteriorates with growing $N$, then we can attribute the deterioration to the non-optimal
transmission condition, {\it e.g.}, the Taylor of order zero condition. Can the PML condition as a more
accurate domain truncation technique push the double sweep Schwarz method toward the optimal one?
How good a PML in terms of the PML width $D$ and strength $\gamma$ do we need to achieve the constant
scaling {independent of} $N$? Let us find out the answers below.
\begin{paragraph}{Convergence with increasing number of fixed size subdomains}
Three types of boundary value problems -- the Neumann, Dirichlet and
infinite pipe problems -- are explored in individual figures; see
Figure~\ref{figdpnn}, Figure~\ref{figdpnd} and
Figure~\ref{figdpnp}. In each case, we test both the Dirichlet and
Neumann terminating conditions for the PML. For example, in
Figure~\ref{figdpnn}, the first row is for the Neumann terminated
PML and the second row is for the Dirichlet terminated PML. We see
as $N\to \infty$, the graph of $\rho(\xi)$ tends to some limiting
profile, independent of the PML terminating condition and the
original boundary condition. The existence of
$\lim_{N\to\infty}\rho<1$ implies the scaling
$\max_{\xi}\rho(\xi)=\rho(0)=O(1)<1$ for all $N$, which is
confirmed with scaling plots in the bottom halves of the
figures. For moderate $N$, we find it better to use the same
condition as for the original problem to terminate the PML. As
expected, the convergence rate improves for bigger PML width $D$ but
deteriorates for smaller coefficient $\eta>0$. Compared to the
parallel Schwarz method shown in Figure~\ref{figppnn},
Figure~\ref{figppnd} and Figure~\ref{figppnp}, the double sweep
method is about $10$ times as fast.
\begin{figure}
\centering \includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPXNBN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPXNBN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPXNBN3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPXNBN4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBNN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBNN2.png}
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBNN3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBNN4.png}
\caption{Convergence of the double sweep Schwarz method with PML transmission for the Neumann
problem of diffusion with increasing number of fixed size subdomains.}
\label{figdpnn}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPXNBD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPXNBD2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPXNBD3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPXNBD4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBDN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBDN2.png}
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBDN3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBDN4.png}
\caption{Convergence of the double sweep Schwarz method with PML transmission for the
Dirichlet problem of diffusion with increasing number of fixed size subdomains.}
\label{figdpnd}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPXNBP1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPXNBP2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPXNBP3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPXNBP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBPN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBPN2.png}
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBPN3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBPN4.png}
\caption{Convergence of the double sweep Schwarz method with PML transmission for the infinite
pipe diffusion with increasing number of fixed size subdomains.}
\label{figdpnp}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with increasing number of subdomains}
In this scaling, we solve a fixed problem with more and more subdomains. Since we are using the
PML transmission, the PML width $D$ can be fixed rather than shrinking -- an advantage over the
overlap. The convergence behavior of the double sweep Schwarz method is illustrated in
Figure~\ref{figdp1n}, Figure~\ref{figdp1d} and Figure~\ref{figdp1p}. Intriguingly, by fixing $D$,
the double sweep
Schwarz method becomes scalable, which happens neither for the parallel+PML combination nor for
the double sweep+Taylor combination! Not only the maximum of $\rho$ but also the whole graph of
$\rho$ is scalable. That is, a limiting profile of $\rho$ seems to exist as $N\to\infty$.
Moreover, the terminating condition of PML has a remarkable impact, {\it e.g.}, when $\eta=10^{-2}$ and
$D=1/5$, the difference can be a factor of about 200. The convergence deterioration with small
coefficient $\eta>0$ is mild when the better PML termination is used. Also, it seems that
$\max_{\xi}\rho$ decays exponentially to zero as $D$ increases. Hence, given the PML strength
$\gamma$, a fixed PML width $D$ is not only sufficient for the scalability but also necessary, for
which more evidence can be found in \citeasnoun{GZDD25}.
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPX1BN1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPX1BN2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPX1BN3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPX1BN4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBN11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBN12.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBN13.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBN14.png}
\caption{Convergence of the double sweep Schwarz method with PML transmission for the Neumann
problem of diffusion on a fixed domain with increasing number of subdomains.}
\label{figdp1n}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPX1BD1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPX1BD2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPX1BD3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPX1BD4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBD11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBD12.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBD13.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBD14.png}
\caption{Convergence of the double sweep Schwarz method with PML transmission for the
Dirichlet problem of diffusion on a fixed domain with increasing number of subdomains.}
\label{figdp1d}
\end{figure}
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPX1BP1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPX1BP2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/diffusionDPX1BP3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/diffusionDPX1BP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBP11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBP12.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBP13.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/diffusionDPXIBP14.png}
\caption{Convergence of the double sweep Schwarz method with PML transmission for the infinite
pipe diffusion on a fixed domain with increasing number of subdomains.}
\label{figdp1p}
\end{figure}
\end{paragraph}
\subsection{Parallel Schwarz method for the free space wave problem}\label{secpsfs}
The free space wave problem corresponds to \R{eqprb} with $\mathcal{L}_x=-\partial_{xx}$,
$\eta=-\omega^2$, $\mathcal{B}^{l,r}=\mathcal{B}_j^{l,r}$, and $\mathcal{L}_y$, $\mathcal{B}^{b,t}$
related to absorbing conditions on bottom and top. If it is the Taylor of order zero condition, then
$\mathcal{L}_y=-\partial_{yy}$, $\mathcal{B}^{b,t}=\mp \partial_y -\I \omega$ and the generalised
Fourier frequency $\xi$ from the Sturm-Liouville problem \R{eqsturm} is on a curve in the complex
plane; see Figure~\ref{figxit0}. For the PML condition on bottom and top, the domain
$(0,X)\times(0,1)$ is extended to $(0,X)\times (-D, 1+D)$,
$\mathcal{L}_y=-\tilde{s}\partial_y(\tilde{s}\partial_y)$, $\mathcal{B}^{b,t}=\mp \partial_y$ and
$\xi=0, \tilde{\xi}\pi, 2\tilde{\xi}\pi, 3\tilde{\xi}\pi, ...$, where $\tilde{s}=1$ on $[0,1]$,
$\tilde{s}=(1-\I\tilde{\sigma}(-x))^{-1}$ on $(-D, 0)$, $\tilde{s}=(1-\I\tilde{\sigma}(x-1))^{-1}$
on $(1,1+D)$, $\int_0^D\tilde{\sigma}(x)\D{x}=\frac{1}{2}D\gamma$ and
$\tilde{\xi}=((2-\I\gamma)D+1)^{-1}$. The generalised Fourier frequency $\xi$ is on a straight line
in the complex plane. For example, when $\gamma=5\pi/\omega$ and $D=1$, the range of $\xi$ is shown
in Figure~\ref{figxipml}.
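The frequencies $\xi=0,\tilde{\xi}\pi,2\tilde{\xi}\pi,\ldots$ can be generated directly from the formula for $\tilde{\xi}$; the following sketch checks that all nonzero frequencies share the same argument, i.e. they lie on a straight line through the origin (the values of $D$, $\gamma$ and the number of modes are the example from the text):

```python
import numpy as np

def pml_frequencies(D, gamma, nmodes):
    """Generalised Fourier frequencies xi_k = k*pi*xi_tilde with
    xi_tilde = 1/((2 - i*gamma)*D + 1), as stated in the text for the
    PML-truncated Sturm-Liouville problem on (-D, 1+D)."""
    xi_tilde = 1.0 / ((2.0 - 1j*gamma)*D + 1.0)
    return np.arange(nmodes) * np.pi * xi_tilde

omega = 10.0                                   # example from the text
xi = pml_frequencies(D=1.0, gamma=5.0*np.pi/omega, nmodes=8)
args = np.angle(xi[1:])                        # arguments of the nonzero modes
```

Since all $\xi_k$ are real multiples of the same complex number $\tilde{\xi}\pi$, their arguments coincide, matching the straight line visible in Figure~\ref{figxipml}.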
\begin{figure}
\centering
\includegraphics[scale=.385]{Figures/xit0}~
\includegraphics[scale=.385]{Figures/xit02}
\caption{Generalised Fourier frequency $\xi$ from the Sturm-Liouville problem \R{eqsturm} with
$Y=1$, $\mathcal{L}_y=-\partial_{yy}$, $\mathcal{B}^{b,t}=\mp \partial_y -\I \omega$ (Taylor of
order zero condition).}
\label{figxit0}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.385]{Figures/xipml}~
\includegraphics[scale=.385]{Figures/xipml2}
\caption{Generalised Fourier frequency $\xi$ from the Sturm-Liouville problem \R{eqsturm} on the
extended domain $(-D,1+D)$ with the PML condition \R{eqpml}.}
\label{figxipml}
\end{figure}
We note that the convergence factor $\rho$ can be viewed as a function of the rescaled Fourier
frequency $\xi_{\omega}:=\xi/\omega$, the rescaled subdomain width $H_{\omega}:=\omega H$, the
rescaled overlap width $L_{\omega}:=\omega L$ and the number of subdomains $N$. But the range of the
generalised Fourier frequency $\xi$ depends on the wavenumber $\omega$ through the original boundary
condition on top and bottom. Increasing $\omega$ not only yields more sampling points
$\frac{\Re \xi}{\omega}\in \mathbb{R}$ but also decreases
$\frac{\Im \xi}{\omega}$; see again Figure~\ref{figxit0} and Figure~\ref{figxipml}. So, the
dependence of $\rho$ on $\omega$ is of particular interest besides the dependence on $\xi$, $H$, $L$
and $N$.
There are not many theoretical results on the convergence of Schwarz methods for the Helmholtz
equation. \citeasnoun{Despres} in his thesis proposed the first Schwarz method for the Helmholtz
equation. He replaced the classical overlapping Dirichlet transmission with the non-overlapping
Taylor of order zero transmission, and he proved the convergence of the resulting Schwarz iteration
in general geometry, arbitrary decomposition and variable media as long as part of the original
boundary is equipped with the Taylor of order zero condition. That idea has been further developed
and generalised. \citeasnoun{CGJ00} showed that for a decomposition without cross points the convergence of
the relaxed Schwarz iteration can be geometric if the Taylor of order zero transmission
$\partial_{\mathbf{n}}+\I\omega$ is enhanced by a square-root operator to
$\partial_{\mathbf{n}}+\I\omega\sqrt{\alpha-\beta\omega^{-2}\partial_{yy}}$. \citeasnoun{Claeys:2019:ADD}
analyzed the discrete version and showed that the convergence rate is uniform in the mesh size. For
recent progress along the direction of nonlocal transmission conditions, see \citeasnoun{lecouvez2014quasi},
\citeasnoun{collino2020exponentially}, \citeasnoun{parolin}, \citeasnoun{Claeys:2021:RTO}, \citeasnoun{claeys2021non}. For local
transmission conditions, the convergence rate of the Schwarz preconditioned Krylov iteration was first
analyzed by \citeasnoun{GrahamSpenceZou} and then generalised by \citeasnoun{Gonghetero}, \citeasnoun{BCNT}. Besides the
above general theories, convergence for domain decomposition in a rectangle has also been
studied. \citeasnoun{GongPS} considered the high wavenumber asymptotic. A variational interpretation of the
Schwarz preconditioner was discussed in \citeasnoun{Gongvariational}, and an analysis at the discrete level
was given in \citeasnoun{GongRAS}. \citeasnoun{Chen13a} gave the first convergence estimate of the double sweep
Schwarz method with PML transmission. A recursive double sweep Schwarz method was analyzed by
\citeasnoun{du2020pure}. For the waveguide problem, we refer to \citeasnoun{KimZhang}, \citeasnoun{Kimsweep}, \citeasnoun{KimPML}.
\subsubsection{Parallel Schwarz method with Taylor of order zero transmission for the free space
wave problem}
In this case, we assume that the original boundary condition is also
the Taylor of order zero condition, {\it i.e.},
$\mathcal{B}^{b,t}=\mathcal{B}^{l,r}=\mathcal{B}^{l,r}_j$. According
to \citeasnoun{Despres}, the parallel Schwarz method with non-overlapping
Taylor of order zero transmission converges. However, the convergence
rate for the evanescent modes is very slow. To mitigate the situation,
here we shall add an overlap, though the convergence is no longer
guaranteed by any theory. On the one hand, divergence was observed if
the overlap exceeds a threshold related to the wavenumber $\omega$ and
the subdomain width $H$\footnote{The restriction to small overlap disappears when $H$ is big enough, {\it e.g.}, $H=12$, as shown for two subdomains in the corresponding figure and table. In that case, a fixed generous overlap can lead to convergence that is robust in the wavenumber.}. On the other hand, intuitively, we still
expect convergence if the overlap is sufficiently small, which we
will find to be the case in the following paragraphs; see also
the two-subdomain case in Section~\ref{2SubSec}.
Indeed, we carried out a scaling test of shrinking overlap $L\to 0$ on
a fixed domain with 10 subdomains and fixed wavenumber and found the same
scaling as in \R{TaylorRhoRRHelmholtz}.
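This scaling can also be seen at the level of the classical two half-plane formula. The following is a minimal sketch with illustrative parameter values only, not the computation behind our figures (which uses the full $N$-subdomain iteration and complex frequencies $\xi$): with $\lambda=\sqrt{\xi^2-\omega^2}$, the convergence factor of the overlapping parallel Schwarz method with Taylor of order zero transmission for two half planes is $\rho(\xi)=\big|\frac{\lambda-\I\omega}{\lambda+\I\omega}e^{-\lambda L}\big|^2$.

```python
import numpy as np

def rho_two_subdomains(xi, omega, L):
    """Convergence factor of the overlapping parallel Schwarz method with
    Taylor of order zero (Robin) transmission for two half-plane subdomains.
    lam is real for evanescent modes (xi > omega) and purely imaginary for
    propagating modes (xi < omega)."""
    lam = np.sqrt(np.asarray(xi, dtype=complex)**2 - omega**2)
    return np.abs((lam - 1j*omega) / (lam + 1j*omega) * np.exp(-lam*L))**2

omega = 20.0
# real frequencies, excluding a neighborhood of the resonance xi = omega
xi = np.concatenate([np.linspace(0.5, 0.95*omega, 200),
                     np.linspace(1.05*omega, 4*omega, 200)])
for L in [1e-1, 1e-2, 1e-3]:
    print(f"L = {L:g}:  max rho = {rho_two_subdomains(xi, omega, L).max():.4f}")
# The maximum is attained by evanescent modes near xi = omega and
# tends to 1 as the overlap L shrinks.
```

For $\xi<\omega$ (propagating) the exponential has modulus one and the Robin factor contracts; for $\xi>\omega$ (evanescent) the Robin factor has modulus one and only the overlap contracts, so $\max_{\xi}\rho\to1$ as $L\to0$.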
\begin{paragraph}{Convergence with increasing number of fixed size subdomains}
The graph of the convergence factor $\rho=\rho(\xi)$ is shown in the
top half of Figure~\ref{figfspt0n}. Note that, for a finite number
of subdomains $N$, $\rho(0)=0$ because the Taylor expansion in the
transmission condition is centered at $\xi=0$ and hence exact there, but $\rho$
grows drastically in the neighborhood of $\xi=0$. In fact, we see
that $\rho_{\infty}:=\lim_{N\to\infty}\rho$ tends to one as
$\xi\to0$. Note also that there is no singularity at $\Re\xi=\omega$
because $\Im\xi\ne0$ due to the top and bottom Taylor of order zero
conditions (see Figure~\ref{figxit0}), and the convergence of the
evanescent modes corresponding to $\Re\xi>\omega$ is good and
independent of $N$. Given the limiting graph $\rho_{\infty}$, we see
that as $N\to\infty$, the maximum point of $\rho$ moves toward
zero. Since $\xi$ is discrete and $\rho(0)=0$, it seems that for
sufficiently large $N$ the maximum of $\rho$ will be attained at the
first nonzero Fourier frequency $\xi_1$ (close to $\pi$) and the
maximum value tends to $\rho_{\infty}(\xi_1)$. However, this asymptotic
regime is hardly observed even up to $N=5200$ in the bottom half of
Figure~\ref{figfspt0n}.
Rather, we find $\max_{\xi}\rho =
1-O(N^{-1})$ in most cases. An exception occurs in the bottom left
subplot when $\omega=100$, $H=1/20$ and $L=H/40$, for which
$\max_{\xi}\rho=1-O(N^{-3/2})$ is observed.
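The exponents reported here and in the following paragraphs are obtained by fitting slopes on log-log axes. As a brief sketch of the procedure, with synthetic stand-in data rather than the measured values behind the figures:

```python
import numpy as np

# Synthetic stand-in for measured values of max_xi rho at a sequence of N,
# following max rho = 1 - c*N^(-1) as estimated in the text.
N = np.array([100.0, 200, 400, 800, 1600, 3200])
max_rho = 1.0 - 0.7 * N**-1.0

# Fit the exponent p in 1 - max_xi rho = O(N^(-p)) on log-log axes.
slope, intercept = np.polyfit(np.log(N), np.log(1.0 - max_rho), 1)
print(f"estimated exponent p = {-slope:.2f}")  # recovers p = 1.00 for this data
```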
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPT0XNBT01.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPT0XNBT02.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPT0XNBT03.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPT0XNBT04.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0N1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0N2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0N3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0N4.png}%
\caption{Convergence of the parallel Schwarz method with Taylor of order zero transmission for
the free space wave with increasing number of fixed size subdomains.}
\label{figfspt0n}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with increasing number of subdomains}
Now we increase the number of subdomains $N$ for a fixed domain width $X$ and a fixed wavenumber
$\omega$. This time all the modes converge more slowly for larger $N$; see the top half of
Figure~\ref{figfspt01}. Away from $\xi=0$, the behavior of $\rho$ can be explained by that of the
limiting curve $\rho_{\infty}$ with the corresponding subdomain width $H=(X-2L)/N\to0$. As $N$
grows, the maximum point of $\rho$ can change between the two regions $\Re\xi/\omega<1$ and
$\Re\xi/\omega>1$. This is because smaller $L$ benefits convergence of the propagating modes but
hinders convergence of the evanescent modes. Nevertheless, a scaling of $\max_{\xi}\rho$ with
$N\to\infty$ still appears; see the bottom half of Figure~\ref{figfspt01}. It is estimated that
$\max_{\xi}\rho=1-O(N^{-5/3})$ if $L=O(H)$ is sufficiently small and
$\max_{\xi}\rho=1-O(N^{-5/2})$ if $L=O(H^2)$ is sufficiently small.
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPT0X1BT01.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPT0X1BT02.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPT0X1BT03.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPT0X1BT04.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT011.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT012.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT013.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT014.png}%
\caption{Convergence of the parallel Schwarz method with Taylor of order zero transmission for
the free space wave on a fixed domain with increasing number of subdomains.}
\label{figfspt01}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with a fixed number of subdomains for increasing
wavenumber}
We show the graph of the convergence factor $\rho$ for the domain width $X=1$, the number of
subdomains $N=40$, the overlap width $L=\omega^{-1}/40$ and growing wavenumber $\omega$ in the top
half of Figure~\ref{figfspt0w}. We find that the graph of $1-\rho$ has a limiting profile in the
propagating region $\Re\xi/\omega<1$ as $\omega\to\infty$ and has a unique local minimum in the
evanescent region $\Re\xi/\omega>1$ which decreases as $\omega\to\infty$. So, the Schwarz method
is robust with respect to $\omega$ in the preasymptotic regime, until the local maximum of $\rho$ over
$\Re\xi/\omega>1$ becomes dominant, a phenomenon observed numerically in \citeasnoun{GZ16}. The
asymptotic scaling is estimated in the bottom half of Figure~\ref{figfspt0w} as
$\max_{\xi}\rho=\max_{\Re\xi>\omega}\rho=1-O(\omega^{-9/20})$ for $L=O(\omega^{-1})$ and
$1-O(\omega^{-2/3})$ for $L=O(\omega^{-3/2})$.
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPT0XwBT01.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPT0XwBT02.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XwBT03.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XwBT04.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0w1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0w2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0w3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0w4.png}%
\caption{Convergence of the parallel Schwarz method with Taylor of order zero transmission for
the free space wave on a fixed domain with a fixed number of subdomains for increasing
wavenumber.}
\label{figfspt0w}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with number of subdomains increasing with the
wavenumber}
The scaling here is more challenging because both the increasing wavenumber and the shrinking
subdomain width force the overlap width $L$ to be small, and $L=O(\omega^{-1})$ is not small
enough for convergence. We use $L=O(\omega^{-3/2})$ in the top half of Figure~\ref{figfspt0wh}
for the graph of the convergence factor $\rho=\rho(\xi)$. Both the local maxima of $\rho$ in the
two regions $\Re\xi<\omega$ and $\Re\xi>\omega$ increase with $N$, with the first one increasing
faster and the second one dominating in the preasymptotic regime. The scaling of $\max_{\xi}\rho$ is
estimated in the bottom half of Figure~\ref{figfspt0wh}. Asymptotically
$\max_{\xi}\rho=1-O(N^{-2})$ for $L=O(\omega^{-3/2})$ and $\max_{\xi}\rho=1-O(N^{-5/3})$ for
$L=O(\omega^{-2})$, while the preasymptotic scaling is better.
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPT0XwhBT01.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPT0XwhBT02.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPT0XwhBT03.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPT0XwhBT04.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0wh1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0wh2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0wh3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPT0XIBT0wh4.png}%
\caption{Convergence of the parallel Schwarz method with Taylor of order zero transmission for
the free space wave on a fixed domain with number of subdomains increasing with the
wavenumber.}
\label{figfspt0wh}
\end{figure}
\end{paragraph}
\subsubsection{Parallel Schwarz method with PML transmission for the free space wave problem}
In this case, we assume that the original boundary condition is also
the PML condition, {\it i.e.},
$\mathcal{B}^{b,t}=\mathcal{B}^{l,r}=\mathcal{B}_j^{l,r}$. For
analysis purposes, the PML on the left and right is regarded as a
boundary condition involving \R{eqsw}, while the PML on top and bottom
is treated as part of the extended domain. The use of PML transmission
conditions for parallel Schwarz methods can be traced back to
\citeasnoun{Toselli}, see also \citeasnoun{Schadle}, \citeasnoun{SZBKS}, \citeasnoun{GuddatiDD20}. It
is combined with a coarse space in \citeasnoun{astaneh}. Since the PML
condition can be made arbitrarily close to the exact transparent
condition, {\it i.e.}, the PML Dirichlet-to-Neumann operator \R{eqsw} can
approximate the exact Dirichlet-to-Neumann operator to arbitrary
accuracy, we can expect the Schwarz method to converge as soon as the
PML is sufficiently accurate; see also \citeasnoun[Theorem 5.1]{NN97}. This
was first quantified by \citeasnoun{Chen13a}, though for the double sweep
Schwarz method. An open question is how accurate the PML
needs to be to ensure robust convergence. We shall address this issue
at the continuous level for the parallel Schwarz method under
different scalings in this subsection.
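A rough sense of the required PML accuracy comes from the textbook damping estimate for a constant-absorption layer under the standard complex stretching $x\mapsto x+\frac{\I}{\omega}\int_0^x\sigma(s)\,ds$. The following is a hedged sketch with illustrative parameters, not the exact Dirichlet-to-Neumann symbol \R{eqsw} used in our analysis:

```python
import numpy as np

def pml_round_trip_damping(xi, omega, D, sigma0):
    """Round-trip damping of a Fourier mode in a constant-absorption PML
    of width D (complex stretching x -> x + (i/omega)*sigma0*x).
    Propagating modes (xi < omega) are damped by the absorption integral,
    evanescent modes (xi > omega) by the real layer width alone."""
    xi = np.asarray(xi, dtype=float)
    k = np.sqrt(np.maximum(omega**2 - xi**2, 0.0))   # horizontal wavenumber
    mu = np.sqrt(np.maximum(xi**2 - omega**2, 0.0))  # evanescent decay rate
    return np.exp(-2.0 * k * sigma0 * D / omega) * np.exp(-2.0 * mu * D)

omega, sigma0 = 20.0, 30.0
# frequencies excluding a small neighborhood of the cutoff xi = omega
xi = np.concatenate([np.linspace(0.0, 0.98*omega, 200),
                     np.linspace(1.02*omega, 2.0*omega, 200)])
for D in [0.05, 0.1, 0.2]:
    worst = pml_round_trip_damping(xi, omega, D, sigma0).max()
    print(f"D = {D}: worst round-trip damping = {worst:.3f}")
```

Propagating modes are damped through the absorption, evanescent modes through the real width, and the glancing modes near $\xi=\omega$ through neither, which is where an insufficient PML first fails.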
\begin{paragraph}{Convergence with increasing number of fixed size subdomains}
On the first row of Figure~\ref{figfsppn}, we check the influence of the terminating condition of
the PML. One effect comes from the top and bottom PMLs. The Neumann terminated PMLs admit the zero frequency
$\xi=0$, while the Dirichlet terminated PMLs do not. This alone makes a difference in the
convergence: the limiting convergence factor $\rho_{\infty}$ for the Neumann terminated PMLs is
actually greater than one at $\xi=0$, while that for the Dirichlet terminated PMLs is less than one
over the whole range of frequencies. Otherwise, quantitatively the terminating condition of the PMLs makes little
difference. The graph of the convergence factor $\rho$ in the top half of Figure~\ref{figfsppn}
looks somewhat similar to Figure~\ref{figppnp} for the diffusion problem: both are better at
higher frequencies $\xi$. Roughly speaking, the PML works like an overlap but with all the modes
decaying inside it, and the top and bottom PMLs make the decay faster for higher frequencies. As
we have encountered for the diffusion problem, the parallel Schwarz method with a fixed PML for the
wave problem also slows down with increasing number of subdomains $N$. A logarithmic growth of the
PML width $D$ with $N$ does not change the scaling; see the third row of Figure~\ref{figfsppn},
where the dependence on the wavenumber $\omega$ is also measured. Then, in the last row of the
figure, we record the dependence on $D$ and the subdomain width $H$. We find that
$1-\max_{\xi}\rho=O(N^{-1})$, with a hidden factor that improves as $D$ grows and is essentially
independent of $H$ and $\omega$.
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXNBP1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXNBP2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPPXNBP3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPPXNBP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPN2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPN3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPN4.png}%
\caption{Convergence of the parallel Schwarz method with PML transmission for
the free space wave with increasing number of fixed size subdomains.}
\label{figfsppn}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with increasing number of subdomains}
Now we consider solving a fixed problem with increasing number of subdomains $N$. In the top half
of Figure~\ref{figfspp1}, the graph of the convergence factor $\rho=\rho(\xi)$ for each $N$ is
plotted along with the limiting convergence factor $\rho_{\infty}$ with the corresponding
$H=X/N$. In this case, $\rho$ is much better than $\rho_{\infty}$, similar to that for the
diffusion problem (see Figure~\ref{figpp1p}). The scaling of $\max_{\xi}\rho$ and its dependence
on the wavenumber $\omega$ are measured on the third row of Figure~\ref{figfspp1}. On the right,
the PML width $D$ grows logarithmically with $N$, and we see that when $D$ becomes sufficiently large, $\max_{\xi}\rho$ drops to numerical
zero. However, this does not mean that the convergence is independent of the number of
subdomains $N$, because the optimal parallel Schwarz iteration is nilpotent of index $N$. The
last row of the figure checks the dependence on $D$ and the domain width $X$. We find
$\max_{\xi}\rho=1-O(N^{-1})$, with a hidden factor that improves as $D$ grows and depends very
mildly on $X$.
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPPX1BP1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPPX1BP2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPPX1BP3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPPX1BP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBP11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBP12.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBP13.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBP14.png}%
\caption{Convergence of the parallel Schwarz method with PML transmission for
the free space wave on a fixed domain with increasing number of subdomains.}
\label{figfspp1}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with a fixed number of subdomains for increasing
wavenumber}
The scaling with the wavenumber $\omega$ is studied while all the other parameters are fixed. A
limiting profile of the convergence factor $\rho$ as $\omega\to\infty$ is indicated in the top
half of Figure~\ref{figfsppw}. The scaling of $\max_{\xi}\rho$ with $\omega$ and its dependence on
the number of subdomains $N$, the PML width $D$ and the domain width $X$ are explored one by one in
the bottom half of the figure. We find $\max_{\xi}\rho=O(1)<1$ which increases with $N$
(superlinearly at high wavenumber $\omega$), decays exponentially with $D$ and depends mildly on
$X$.
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXwBP1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXwBP2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXwBP3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXwBP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPw1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPw2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPw3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPw4.png}%
\caption{Convergence of the parallel Schwarz method with PML transmission for
the free space wave on a fixed domain with a fixed number of subdomains for increasing
wavenumber.}
\label{figfsppw}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with number of subdomains increasing with the
wavenumber}
In the previous scalings, we have found that the subdomain width $H$ and the wavenumber $\omega$
have little influence on the convergence of the parallel Schwarz method with a fixed PML
transmission, and the convergence slows down mainly due to an increasing number of subdomains
$N$. While larger PML width $D$ accelerates the convergence and can even make the convergence
factor $\rho$ numerically zero, the dependence on $N$ is inherited from the nilpotency index of the
optimal parallel Schwarz iteration matrix which has the symbol entries
$a_j=c_j=\exp(-\I\sqrt{\omega^2-\xi^2}H)$ of modulus about $1$ for the propagating modes
($\Re\xi<\omega$). Now in Figure~\ref{figfsppwh}, we increase also the wavenumber $\omega$ with
$N$. It can be seen that the scaling $\max_{\xi}\rho=1-O(N^{-1})$ is the same as for fixed
$\omega$ in Figure~\ref{figfspp1}.
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPPXwhBP1.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPPXwhBP2.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsPPXwhBP3.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsPPXwhBP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPwh1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPwh2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPwh3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsPPXIBPwh4.png}%
\caption{Convergence of the parallel Schwarz method with PML transmission for the free space
wave on a fixed domain with number of subdomains increasing with the wavenumber.}
\label{figfsppwh}
\end{figure}
\end{paragraph}
\subsection{Double sweep Schwarz method for the free space wave problem}
We have already seen for the parallel Schwarz methods that the free space wave problem is
significantly more difficult than the diffusion problem. For example, with fixed size subdomains,
the free space wave problem requires the parallel Schwarz iteration number to grow linearly with
number of subdomains $N$ even with the wavenumber $\omega$ fixed, while the diffusion problem
requires only a constant iteration number\footnote{\label{fnoteN}This is best understood for the optimal parallel Schwarz method of nilpotency index $N$ which has the nonzero symbol entries $a_j=c_j=\exp(-\sqrt{\xi^2+\eta}H)$. In the diffusion problem, $\eta>0$ so the nonzero entries are less than $1$ independent of $N$. But in the wave problem, $\eta=-\omega^2<0$ so the nonzero entries are of modulus about $1$ when $\Re\xi<\omega$.}. The Taylor of order zero transmission and the PML
transmission lead to the same scalings for the diffusion problem, while for the wave problem a
sufficient PML is necessary to guarantee convergence of the parallel Schwarz method and leads to
much faster convergence than the Taylor of order zero transmission conditions do. We also note that
the parallel Schwarz method with a fixed PML on a fixed domain has the same scalings of the
convergence rate for the wave and the diffusion problems.
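This last observation is easy to check numerically from the symbol entries quoted in the footnote above, $a_j=c_j=\exp(-\sqrt{\xi^2+\eta}H)$; the values of $\eta$ and $H$ below are illustrative only:

```python
import numpy as np

H = 0.1                          # subdomain width (illustrative value)
xi = np.linspace(0.1, 40.0, 400)

def entry_modulus(eta):
    """|a_j| = |exp(-sqrt(xi^2 + eta) * H)| for the optimal parallel
    Schwarz iteration; eta > 0 for diffusion, eta = -omega^2 for the wave."""
    lam = np.sqrt(np.asarray(xi, dtype=complex)**2 + eta)
    return np.abs(np.exp(-lam * H))

diffusion = entry_modulus(eta=25.0)    # eta > 0
wave = entry_modulus(eta=-20.0**2)     # eta = -omega^2 with omega = 20

print(f"diffusion: max |a_j| = {diffusion.max():.3f}")  # bounded away from 1
print(f"wave:      max |a_j| = {wave.max():.3f}")       # = 1 for propagating modes
```

For $\eta>0$ the entries are uniformly contractive, while for $\eta=-\omega^2$ the propagating modes have modulus one, which is why the iteration count must grow with $N$ for the wave problem but not for the diffusion problem.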
Another perspective is to compare the parallel Schwarz methods with their double sweep
counterparts, as we did for the diffusion problem. We have seen that the dependence
of the convergence rate on the number of subdomains can be removed by using together a fixed PML
transmission and the double sweep Schwarz iteration.
Now we are going to study the double sweep Schwarz methods for the free space wave problem. Do they
scale with fixed size subdomains, given that the parallel Schwarz methods do for the diffusion
problem but not for the wave problem? With Taylor of order zero transmission without overlap, the
parallel Schwarz method is proved to converge \cite{Despres}. Do we still expect convergence
when we change only the parallel iteration to the double sweep iteration? How do the double sweep
Schwarz methods depend on the wavenumber? Will a fixed PML make the convergence scalable with the
number of subdomains like it does for the diffusion problem?
\subsubsection{Double sweep Schwarz method with Taylor of order zero transmission for the free
space wave problem}
For the free space problem, we assume that the original boundary condition is also the Taylor of
order zero condition, {\it i.e.}, $\mathcal{B}^{b,t}=\mathcal{B}^{l,r}=\mathcal{B}^{l,r}_j$. There is no
theoretical convergence result for the double sweep Schwarz method with Taylor of order zero
transmission. Our experience with the parallel Schwarz method is that a sufficiently small overlap
allows convergence of both evanescent and propagating modes. We shall try different overlap width
$L$ in the following study.
\begin{paragraph}{Convergence with increasing number of fixed size subdomains}
When the domain grows by adding fixed size subdomains along one direction and the wavenumber
$\omega$ is fixed, we find that a suitable overlap width is $L=O(\omega^{-1})$; see the top half
of Figure~\ref{figfsdt0n}. Although not shown in the figure, now not only can too large an overlap
cause divergence of some propagating modes, but too small an overlap can also cause divergence of some
evanescent modes! From the figure, we can see that a limiting curve
$\rho_{\infty}:=\lim_{N\to\infty}\rho$ exists for the double sweep Schwarz method, and $\rho$
grows with $N$ towards $\rho_{\infty}$. The tendency indicates quite good convergence at large
$N$, which is verified in the bottom half of the figure. We find $\max_{\xi}\rho=O(1)<1$ which
deteriorates with increasing wavenumber $\omega$, shrinking overlap width $L$ and shrinking
subdomain width $H$.
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsDT0XNBT01.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsDT0XNBT02.png}\\
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsDT0XNBT03.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsDT0XNBT04.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XIBT0N1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XIBT0N2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XIBT0N3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XIBT0N4.png}%
\caption{Convergence of the double sweep Schwarz method with Taylor of order zero transmission
for the free space wave with increasing number of fixed size subdomains.}
\label{figfsdt0n}
\end{figure}
\end{paragraph}
\begin{paragraph}{Divergence on a fixed domain with increasing number of subdomains}
As we mentioned before, the overlap width $L$ should be small for convergence of propagating modes
and also should be large for convergence of evanescent modes. The two opposite requirements become
difficult to satisfy when the subdomain width $H$ is small. Figure~\ref{figfsdt01} illustrates
this: we see that the double sweep Schwarz method cannot converge for
these examples for any overlap width. In other words, under the scaling with increasing number of
subdomains on a fixed domain, the double sweep Schwarz method with Taylor of order zero transmission
eventually diverges.
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0X1BT01.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0X1BT02.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0X1BT03.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0X1BT04.png}
\caption{Divergence of the double sweep Schwarz method with Taylor of order zero transmission
for the free space wave on a fixed domain with increasing number of subdomains.}
\label{figfsdt01}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with a fixed number of subdomains for increasing
wavenumber}
As we learned from the previous paragraphs, when the subdomain width is sufficiently large, the
double sweep Schwarz method with Taylor of order zero transmission converges with a suitable overlap
width. In the convergent regime, we can still study the scaling with the wavenumber $\omega$; see
Figure~\ref{figfsdt0w}. The top half illustrates the changing graph of $\rho=\rho(\xi)$ with
$\omega$, which shows a limiting profile away from a neighborhood of $\Re\xi=\omega$, and its
maximum attained near $\Re\xi=\omega$ increases with $\omega$. The bottom half of the figure shows
the estimate $\max_{\xi}\rho=1-O(\omega^{-9/20})$ for large $\omega$, with the hidden factor
independent of number of subdomains $N$ and the domain width $X$.
\begin{figure}
\centering%
\includegraphics[width=.56\textwidth,trim=10 10 0 6,clip]{Figures/fsDT0XwBT01.png}%
\includegraphics[width=.44\textwidth,height=13em,trim=0 10 0 6,clip]{Figures/fsDT0XwBT02.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XwBT03.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XwBT04.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XIBT0w1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XIBT0w2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XIBT0w3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XIBT0w4.png}%
\caption{Convergence of the double sweep Schwarz method with Taylor of order zero transmission
for the free space wave on a fixed domain with a fixed number of subdomains for increasing
wavenumber.}
\label{figfsdt0w}
\end{figure}
\end{paragraph}
\begin{paragraph}{Divergence on a fixed domain with number of subdomains increasing with the
wavenumber}
As we mentioned before, convergence of the double sweep Schwarz method with Taylor of order zero
transmission requires a sufficiently large subdomain width $H$. So, on a fixed domain under the
scaling $N\to\infty$, the method eventually must diverge. That the wavenumber increases here with
$N$ does not change the situation. As can be seen in Figure~\ref{figfsdt0wh}, divergence of some
propagating modes is persistent at large $N$ for any overlap width $L$ including $L=0$.
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XwhBT01.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XwhBT02.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XwhBT03.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDT0XwhBT04.png}
\caption{Divergence of the double sweep Schwarz method with Taylor of order zero transmission
for the free space wave on a fixed domain with number of subdomains increasing with the
wavenumber.}
\label{figfsdt0wh}
\end{figure}
\end{paragraph}
\subsubsection{Double sweep Schwarz method with PML transmission for the free space wave problem}
For the free space problem, the PML is used all around the domain. The top and bottom PMLs are
treated as part of the extended domain. For analysis purposes, the left and right PMLs for the subdomains
and the domain go into the Dirichlet-to-Neumann operator \R{eqsw} in the boundary condition listed
in Table~\ref{tabB}. The double sweep Schwarz method with PML transmission for the free space wave
problem has been the most prominent case in the last decade. Impressive numerical experiments have been
shown in the literature; see {\it e.g.}~\citeasnoun{EY2}, \citeasnoun{Poulson}, \citeasnoun{Stolk}, \citeasnoun{StolkImproved},
\citeasnoun{Chen13a}, \citeasnoun{Chen13b}, \citeasnoun{ZD}, \citeasnoun{ZepedaNested}, \citeasnoun{xiang2019double}. Recently, it was
applied to an inverse problem \cite{eslaminia}. An interesting attempt is to use double sweeps in
small groups of connected subdomains for accelerating the corresponding parallel Schwarz method
\cite{vion2018}. A question that has not been answered in theory is how many discrete layers of PML
are needed for scalable iterations. \citeasnoun{Chen13a} have an estimate at the continuous level indicating
that a PML width $D=O(N\log\omega)$ is sufficient. It was claimed based on numerical experiments
that a logarithmic growth of discrete layers of PML could be sufficient for increasing wavenumber
$\omega$ but fixed oscillations $\omega H$ in the subdomain width $H$; see {\it e.g.}~\citeasnoun{Poulson}. In
this subsection, we shall investigate the question at the continuous level. Note that due to the
small values of $\rho$ the vertical axes in this subsection will be $\rho$ or $\max_{\xi}\rho$
instead of $1-\rho$ or $1-\max_{\xi}\rho$!
\begin{paragraph}{Convergence with increasing number of fixed size subdomains}
We first try with a fixed PML. As we add more fixed size subdomains
along one direction to the domain, we see a rapid growth of $\rho$
in a neighborhood of $\xi=0$ from the first row of
Figure~\ref{figfsdpn}, which eventually leads to divergence. Then,
we turn to use a logarithmic growth of the PML width $D$ with the
number of subdomains $N$ in the second row of the figure. We see
that $\rho$ decreases rapidly with $N$ over the whole range of
$\xi$. In the third row and the first column, we try with a smaller
constant factor for the logarithmic growth of $D$, and we see
divergence again. The next subplot shows the exponential (or faster)
decay of $\max_{\xi}\rho$ with increasing $D$. We test the scaling
of $\max_{\xi}\rho$ with different wavenumber $\omega$ and subdomain
width $H$ in the last row of the figure. Given $D=(\log_{10}N)/10$,
the dependence of $\max_{\xi}\rho=O(N^{-2})$ on $\omega$ and $H$ is
negligible. In fact, larger $\omega$ and smaller $H$ can even give
smaller $\max_{\xi}\rho$.
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXNBP1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXNBP2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXNBP3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXNBP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXNBP5.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBPN1.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBPN2.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBPN3.png}%
\caption{Convergence and divergence of the double sweep Schwarz method with PML transmission
for the free space wave with increasing number of fixed size subdomains. (Scaling of
$\max_{\xi}\rho$ not $1-\max_{\xi}\rho$!)}
\label{figfsdpn}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with increasing number of subdomains}
Based on the previous paragraph, we choose a sufficiently large PML width $D$ to grow
logarithmically with the number of subdomains $N$, now on a fixed domain. From the top half
of Figure~\ref{figfsdp1}, we can see that the convergence factor $\rho$ decreases rapidly with
$N$. Increasing the wavenumber $\omega$ only introduces more oscillations and changes little the
envelope profile. We next show that a constant PML width $D$ still fails; see the first
subplot of the third row. In the following subplots, the tendency of $\max_{\xi}\rho\to0$ as
$N\to\infty$ is illustrated for different values of $\omega$, $D$ and the domain width $X$. Given
$D=(\log_{10}N)/10$, the speed of $\max_{\xi}\rho\to0$ is faster than quadratic for all the listed
values of $\omega$ and $X$. But the speed strongly depends on $D$ because $\max_{\xi}\rho$ decays
exponentially as $D$ increases.
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPX1BP1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPX1BP2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPX1BP3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPX1BP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPX1BP5.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBP11.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBP12.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBP13.png}
\caption{Convergence and divergence of the double sweep Schwarz method with PML transmission
for the free space wave on a fixed domain with increasing number of subdomains. (Scaling of
$\max_{\xi}\rho$ not $1-\max_{\xi}\rho$!)}
\label{figfsdp1}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with a fixed number of subdomains for increasing
wavenumber}
Now we study the scaling with the wavenumber $\omega$ alone. The PML width $D$ is taken as
constant with respect to $\omega$. In the top half of Figure~\ref{figfsdpw}, we see that the graph
of the convergence factor $\rho=\rho(\xi)$ tends to a limiting profile as $\omega\to\infty$. So,
asymptotically $\max_{\xi}\rho=O(1)$ is independent of $\omega$, as verified in the bottom half
of Figure~\ref{figfsdpw}. At the same time, we see $\max_{\xi}\rho$ grows with the number of
subdomains $N$, decays exponentially with $D$, and grows with shrinking domain width
$X$. Moreover, we find that $D$ needs to be sufficiently large for convergence at large
$N$ and small $X$; see particularly the last subplot where divergence appears at $N=80$ and
$X=1/10$.
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXwBP1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXwBP2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXwBP3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXwBP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBPw1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBPw2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBPw3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBPw4.png}%
\caption{Convergence and divergence of the double sweep Schwarz method with PML transmission
for the free space wave on a fixed domain with a fixed number of subdomains for increasing
wavenumber. (Scaling of $\max_{\xi}\rho$ not $1-\max_{\xi}\rho$!)}
\label{figfsdpw}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with number of subdomains increasing with the
wavenumber}
This scaling can be viewed as a combination of the previous two scalings. As learned before, a
logarithmic growth of the PML width $D$ with the number of subdomains $N$ is necessary for
convergence, now verified also for this scaling in the first row of Figure~\ref{figfsdpwh}. Then,
in the second row we see that a larger wavenumber $\omega$ causes more oscillations of the convergence
factor $\rho=\rho(\xi)$ with little change of the maximum of $\rho$. Given $D=(\log_{10}N)/10$,
the scaling of $\max_{\xi}\rho$ is about $O(N^{-2})$; see the bottom half of the figure. We find
that $\max_{\xi}\rho$ depends mildly on $\omega$ and $X$ but decays exponentially with
$D$. In particular, in the third row and the last column, we see that different values of $D$ give
different power law decays of $\max_{\xi}\rho$ with $N$.
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXwhBP1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXwhBP2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXwhBP3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXwhBP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBPwh1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBPwh2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBPwh3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/fsDPXIBPwh4.png}%
\caption{Convergence and divergence of the double sweep Schwarz method with PML transmission
for the free space wave on a fixed domain with number of subdomains increasing with the
wavenumber. (Scaling of $\max_{\xi}\rho$ not $1-\max_{\xi}\rho$!)}
\label{figfsdpwh}
\end{figure}
\end{paragraph}
\subsection{Parallel Schwarz methods for the layered medium wave problem}
This case differs from the free space wave problem (see section~\ref{secpsfs}) only in $\eta$. We
assume that the inhomogeneity is bounded, the medium is layer-wise constant in a box, and the
exterior medium is constant. Specifically, $\eta=-\omega^2/v^2$ with the wave velocity given as one
of the following:
\begin{itemize}
\item[vel1] or vel1($v_1$), one inclusion in the full space: $v=v_1$ when
$(x,y)\in (\frac{1}{3}X,\frac{2}{3}X)\times(0,1)$, and $v=1$ on
$\mathbb{R}^2-(\frac{1}{3}X,\frac{2}{3}X)\times(0,1)$;
\item[vel2] or vel2($v_1$), two inclusions in the full space: $v=v_1$ when
$(x,y)\in (\frac{1}{6}X,\frac{1}{3}X)\times(0,1)$ or
$(x,y)\in (\frac{2}{3}X,\frac{5}{6}X)\times(0,1)$, and $v=1$ otherwise in $\mathbb{R}^2$;
\item[vell] or vel($l$,$v_1$), $l$ inclusions in the full space: $v=v_1$ when
  $(x,y)\in ((j-1)W,jW)\times(0,1)$ for $j=2,4,\ldots,2l$, $v=1$ otherwise in $\mathbb{R}^2$, and
  $W=\frac{X}{2l+1}$.
\end{itemize}
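For concreteness, the `vell' model can be written as a small function; a minimal sketch (the
default argument values are illustrative assumptions, not values fixed above):

```python
def velocity_vell(x, y, X=1.0, l=2, v1=2.0):
    # 'vell' model: l inclusions of velocity v1 in the strip (0, X) x (0, 1),
    # background velocity 1 elsewhere in R^2.  With W = X / (2l + 1), the
    # inclusions occupy ((j - 1) * W, j * W) x (0, 1) for j = 2, 4, ..., 2l.
    W = X / (2 * l + 1)
    if 0.0 < y < 1.0:
        for j in range(2, 2 * l + 1, 2):
            if (j - 1) * W < x < j * W:
                return v1
    return 1.0
```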
So, the medium interfaces are parallel to the subdomain interfaces. In fact, it is more favorable
for convergence to choose the domain decomposition along the other direction such that the subdomain
interfaces are perpendicular to the medium interfaces. But the situation we choose to study here
can easily arise in practical applications. For example, if the top and bottom boundary conditions
are Neumann or Dirichlet, a decomposition along the other direction will struggle to
converge. Moreover, a velocity model may contain horizontal, vertical and curved interfaces
along different directions, which makes reflections traveling back and forth between subdomains
unavoidable.
Numerical experiments of the double sweep Schwarz method with a matrix Schur complement transmission
based on the constant-medium extension have shown the difficulty of convergence in layered media
\cite{gander2019class}. An analysis and approximation of the variable-medium Dirichlet-to-Neumann
operator is carried out by \citeasnoun{layer2}, and more recently they proposed a matrix optimization
approach to the infinite element method \cite{hohage2021learned,PreussThesis}. A boundary integral
transmission condition is proposed by \citeasnoun{layer1}. The idea of \citeasnoun{heikkola} consists in extending
each layer (used as subdomain) to a slab in the physical domain, which can be thought of as using a
physical PML, and approximately solving the extended subdomain problem (with zero source in the
extension) by a fast direct solver. The impact of reflected waves in the direction of decomposition
was recognized early by \citeasnoun{Schadle} for the PML transmission; see also \citeasnoun{ZepedaNested}.
Our goal in this subsection is to explore the convergence factor $\rho$ based on the Fourier
analysis, as we did in the previous part of this review. A main difference in this case is that the
interface-to-interface operators $a_j$, $b_j$, $c_j$, $d_j$ need to be solved from subdomain layered
medium problems. In this case, Taylor of order zero transmission with overlap can diverge for some
medium distributions, and without overlap it hardly gives good convergence, although it does converge
without overlap when $\mathcal{B}^r_j=\mathcal{B}_{j+1}^l$, as guaranteed by \citeasnoun{Despres}. The
situation can be improved by using relaxation and nonlocal transmission conditions
\cite{collino2020exponentially}. For simplicity, we will focus on the PML transmission which has
been numerically tested for layered media problems \cite{Vion,leng2019additive,lengdiag}.
\begin{paragraph}{Convergence with increasing number of fixed size layers as subdomains}
We use medium layers as subdomains. So, the subdomain interfaces are aligned with the medium
interfaces. It is then a question which wave velocity to take in the PML equation along an
interface. In Figure~\ref{figlmppn} (except the last subplot), the same PML Dirichlet-to-Neumann
operator is used for $\mathcal{B}_j^{r}$ and $\mathcal{B}_{j+1}^l$, and in particular, the domain averaged
velocity $\bar{v}=\frac{1}{|\Omega|}\int_{\Omega}v$ is used in all the PML equations. The choice
of $\mathcal{B}_j^{r}=\mathcal{B}_{j+1}^{l}$ here seems important for convergence, as emphasized
in general convergence proofs of parallel optimized Schwarz methods without overlap
\cite{lions1990schwarz,collino2020exponentially}. For example, if we take the wave velocity in the
right neighborhood of $\{X_j^r\}\times(0,Y)$ for $\mathcal{B}_j^r$ but the wave velocity in the
left neighborhood of $\{X_{j+1}^l\}\times(0,Y)$ for $\mathcal{B}_{j+1}^l$, we will get divergence
no matter how large or strong a PML is used, see the last subplot of Figure~\ref{figlmppn}. In the
top half of Figure~\ref{figlmppn}, the graphs of $\rho=\rho(\xi)$ tell us roughly that lower
frequency modes converge slower, that higher contrast in the wave velocity leads to slower convergence
for a modest number of subdomains $N$, and that increasing the PML width $D$ has a limited positive
impact on the convergence of low frequency modes. In the following three subplots, the scaling of
$1-\max_{\xi}\rho$ with $N$ and its dependence on $\omega$, $D$ and the contrast in the wave velocity
$v_1$ are illustrated. Specifically, $1-\max_{\xi}\rho$ decreases with $N$ almost independently of
$\omega$ in an initial stage and then plateaus at a constant; the plateau sets in later for larger $\omega$.
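The domain averaged velocity $\bar{v}$ used above has a closed form for the simplest model; a small
check (assuming the physical domain is $(0,X)\times(0,1)$, which matches the inclusion geometry of
`vel1'):

```python
def mean_velocity_vel1(v1):
    # Domain-averaged wave velocity vbar = (1/|Omega|) * integral of v for the
    # 'vel1' model on Omega = (0, X) x (0, 1): the inclusion occupies the
    # middle third of the domain, so vbar = (v1 + 2) / 3.
    return (v1 + 2.0) / 3.0
```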
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPXNBP1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPXNBP2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPXNBP3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPXNBP4.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPXIBPN1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPXIBPN2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPXIBPN3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPXNBP5.png}%
\caption{Convergence ($\mathcal{B}_j^r=\mathcal{B}_{j+1}^{l}$ except bottom right) and divergence
($\mathcal{B}_j^r\ne\mathcal{B}_{j+1}^{l}$ bottom right) of the parallel Schwarz method with PML
transmission for the full space wave problem with fixed size medium layers as subdomains.}
\label{figlmppn}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with increasing number of subdomains} In this case,
the medium property in the domain is also fixed as we increase the number of subdomains. The
subdomain interfaces are not aligned with the medium interfaces. Here, taking the domain averaged
wave velocity as in the last paragraph turns out to be a bad idea (we observed divergence).
Instead, we simply use the values of the wave velocity at the subdomain interfaces for the
PML equations. Figure~\ref{figlmpp1} compares the `vel1' and `vel2' velocity models in two
columns. It can be seen that the convergence for `vel2' is slower than for `vel1'. Their
convergence rates $\max_{\xi}\rho$ both deteriorate to $1$ linearly in $N^{-1}$ as $N^{-1}\to0$ but are
independent of $\omega$. The convergence also deteriorates with higher contrast in the wave
velocity $v_1$. Roughly, $1-\max_{\xi}\rho$ looks like $O(v_1^{-1})$ for `vel1' and
$O(v_1^{-2})$ for `vel2'.
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPX1BP1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lm2PPX1BP1.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPX1BP2.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lm2PPX1BP2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPXIBP11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lm2PPXIBP11.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmPPXIBP12.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lm2PPXIBP12.png}
\caption{Convergence of the parallel Schwarz method with PML transmission for the full space
wave problem with one inclusion (left) and two inclusions (right) on a fixed domain with
increasing number of subdomains.}
\label{figlmpp1}
\end{figure}
\end{paragraph}
\subsection{Double sweep Schwarz methods for the layered medium wave problem}
As seen in the constant medium case, Taylor of order zero transmission can hardly work with the
double sweep iteration. So, we will focus solely on the PML transmission. The reader is referred
to the beginning of the previous subsection for the velocity models used here.
\begin{paragraph}{Convergence and divergence with increasing number of fixed size layers as
subdomains}
In this case, we found it better to use the neighbor's wave velocity for the PML equation of a
subdomain. But even for this choice, divergence is observed for a large number of layers (as
subdomains); see Figure~\ref{figlmdpn}.
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmDPXNBP1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmDPXNBP2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmDPXNBP3.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmDPXNBP4.png}
\caption{Convergence and divergence of the double sweep Schwarz method with PML transmission for
the full space wave problem with fixed size medium layers as subdomains.}
\label{figlmdpn}
\end{figure}
\end{paragraph}
\begin{paragraph}{Convergence on a fixed domain with increasing number of subdomains}
Compared to the parallel iteration previously shown in Figure~\ref{figlmpp1}, the double sweep
iteration converges significantly faster (see Figure~\ref{figlmdp1}) and does not seem to
deteriorate with a larger number of subdomains $N$. For `vel1', the convergence rate
$\max_{\xi}\rho$ is independent of the wavenumber $\omega$. But for `vel2', $\max_{\xi}\rho$
oscillates with $\omega$. Higher contrast in the wave velocity $v_1$ leads to slower
convergence. In the bottom right subplot, we see a deterioration of the convergence factor
when the inclusions become smaller and more numerous, and in particular divergence with more than
three inclusions.
\begin{figure}
\centering%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmDPX1BP1.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lm2DPX1BP1.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmDPX1BP2.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lm2DPX1BP2.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmDPXIBP11.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lm2DPXIBP11.png}\\
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lmDPXIBP12.png}%
\includegraphics[width=.5\textwidth,trim=5 6 0 2,clip]{Figures/lm2DPXIBP12.png}
\caption{Convergence of the double sweep Schwarz method with PML transmission for the full space
wave problem with one inclusion (left), two inclusions (right except bottom) and $l$ inclusions
(bottom right) on a fixed domain with increasing number of subdomains. (Scaling of
$\max_{\xi}\rho$ not $1-\max_{\xi}\rho$!)}
\label{figlmdp1}
\end{figure}
\end{paragraph}
\vskip0.8em With these results and the results in the previous subsection on layered medium wave
problems, we have seen the limitations of PML transmission conditions. Is there a better way? For
example, are the nonlocal transmission conditions \cite{collino2020exponentially} or the learned
infinite elements \cite{hohage2021learned,PreussThesis} more robust for layered media? To what
extent can PML work for general variable media problems? For example, how to explain the success
seen in the existing literature for numerous geophysical applications? Can other techniques be
combined with transmission conditions as a remedy? We highlight recent progress on some
techniques for solving wave problems: for example, coarse spaces
\cite{CDKN,bonazzoli2018,bootland2020comparison} or deflation
\cite{dwarka2020scalable,dwarka2021towards,bootland2021inexact}, time domain solvers
\cite{grote2019,grote2020,appelo2020,stolk2021}, absorption or shifted-Laplace
\cite{GGS,GSV,GrahamSpenceZou,hocking2021}, $\mathcal{H}$-matrices
\cite{beams2020,lorca2021,liu2021sparse,bonev2021hierarchical}, DPG \cite{petrides2021} and
high-frequency asymptotics \cite{lu2016,fang2018,jacobs2021}. There are plenty of questions still to
be answered in the future.
\section{Schwarz methods with cross points}
In a domain decomposition, a cross point is where more than two subdomains intersect. If a cross
point is on the boundary of a subdomain, how transmission is performed there has to be decided. This is
important especially at the discrete level when the cross point is associated with a degree of
freedom. For detailed discussions, see \citeasnoun{gander2016cross}, \citeasnoun{GK2}, \citeasnoun{Loisel}. In particular,
different treatments of cross points for high-order ({\it e.g.}~PML) transmission conditions have recently
been proposed by \citeasnoun{modave2020non}, \citeasnoun{daithesis}, \citeasnoun{dai2021}, \citeasnoun{royer}, \citeasnoun{DNT2}. See also
\citeasnoun{Claeys:2021:RTO}, \citeasnoun{claeys2021non} for nonlocal transmission conditions. A related issue is to
deal with a corner point where the normal direction is not unique
\cite{chniti2006improved,chniti2009improved,DNT1}.
Our emphasis in this section is on optimal Schwarz methods that converge exactly in finite
steps{, {\it i.e.}\ methods that are nilpotent}. But before that, we mention quickly a couple of workable treatments of cross points in a {\it
general} decomposition that are practically useful albeit non-optimal.
From the viewpoint of the subdomain boundary, a cross point is where the interfaces of the subdomain
with two different neighbors join. In a non-overlapping decomposition, a simple approach is to
include the cross point for transmission on both the interfaces with the two neighbors. For example,
in the non-overlapping case ($\Omega_{ij}=\tilde{\Omega}_{ij}$) of the decomposition in Figure~1.2
(left), the method of \citeasnoun{Despres} for the time-harmonic wave problem $(-\Delta+\omega^2)u=f$ in
$\Omega$ uses $(-\partial_{x}+\I\,\omega)u_{ij}=(-\partial_{x}+\I\,\omega)u_{i-1,j}$ on
$\{X_i^l\}\times[Y_j^b,Y_j^t]$ and
$(-\partial_{y}+\I\,\omega)u_{ij}=(-\partial_{y}+\I\,\omega)u_{i,j-1}$ on
$[X_i^l,X_i^r]\times\{Y_j^b\}$, and any degrees of freedom associated with the cross point at
$(X_i^l,Y_j^b)$ will be taken into account for both of the transmission conditions. In the
substructured form of the Schwarz method that reduces the unknowns to the interfaces, this
cross point treatment means introducing an unknown (data for the transmission condition, also called
dual variable or Lagrange multiplier) at the cross point for each incident interface of the
subdomain. Another approach is to keep the original degrees of freedom (also called primal
variables) and the corresponding equations at cross points unique (unsplit), then separate them out
by a Schur complement from the other primal and the dual variables \cite{Bendali}.
In an overlapping decomposition, the interfaces of a subdomain with its different neighbors can
overlap. To avoid redundant transmission, a partition of unity of the subdomain boundary is used. A
very natural choice is from a non-overlapping decomposition of the domain. For example, in
Figure~1.2 we can take $\partial\Omega_{ij}\cap\tilde{\Omega}_{i-1,j}$ as the interface for
transmission to $\Omega_{ij}$ from $\Omega_{i-1,j}$. But cross points where two interfaces of a
subdomain join still exist. For example, in Figure~1.2, we have the cross point where
$\partial\Omega_{ij}\cap\overline{\tilde{\Omega}}_{i-1,j}$ joins with
$\partial\Omega_{ij}\cap\overline{\tilde{\Omega}}_{i-1,j-1}$. At the discrete level, one can still
assign the cross point uniquely to one of the incident interfaces, and there is no problem for the
implementation of Schwarz methods based on subdomain iterates or interface substructuring. The
deferred correction form provides another approach, which starts from an initial guess (or the
previous iterate) of the solution on the original domain, takes the residual to subdomains for local
corrections, and makes a global correction by gluing the local corrections. To minimise communication
between subdomains, it is practically common to use a non-overlapping decomposition of the domain
for gluing the local corrections, which gives the Restricted Additive Schwarz (RAS) method proposed
by \citeasnoun{cai1999restricted} with Dirichlet transmission conditions, and the Optimized Restricted
Additive Schwarz (ORAS) method proposed by \citeasnoun{St-Cyr07}, with general Robin-like interface
conditions. Moreover, it was shown in \citeasnoun{St-Cyr07} that, under some algebraic assumptions on the
gluing scheme, ORAS is equivalent to the parallel Schwarz method with the corresponding transmission
conditions. In the case of non-equivalence, the transmission data coming from the glued global
iterate can be a mix of multiple subdomain solutions. However, there is no convergence theory
(except using the maximum principle) for RAS and ORAS as of today. \citeasnoun{haferssas2017additive}
analyzed a symmetrised version of RAS and ORAS using the same non-overlapping partition-of-unity for
both the restriction/prolongation operators. \citeasnoun{GrahamSpenceZou} analyzed a symmetrised optimized
Schwarz method for the Helmholtz equation. See \citeasnoun{Gonghetero}, \citeasnoun{BCNT} for more analysis of
symmetrised Schwarz preconditioners for indefinite problems.
Now we turn to the main question: {is there an optimal Schwarz method beyond the sequential
decomposition?} For example, Figure~1.2 (right) is a sequential decomposition and Figure~1.2
(left) is not. A difficulty arises even in Figure~1.2 (right) when the domain is $X$-periodic in
$x$: the first subdomain is then coupled to the last subdomain by identifying the boundary
$\{0\}\times[0,Y]$ with $\{X\}\times[0,Y]$. The difficulty is that an interface
$\{X_i^l\}\times[0,Y]$ can not separate the original problem into two regions because the region on
the left is coupled to the region on the right not only through the interface but also through the
periodic condition. Similarly, an annular domain decomposed into annular sectors has the same
problem. The issue from a loop of subdomains was recognized early by \citeasnoun{nier1998remarques}. It
arises also in a decomposition with cross points, as posed by \citeasnoun{NRS94}. For example, in Figure~1.2
(left) the interface $\{X_i^l\}\times[Y_j^b,Y_j^t]$ can not separate the domain into two regions
because there is also a loop in the subdomain adjacency relation. Why do we need an interface
separating the domain into two regions? Because that is what domain truncation and
transmission need in order to work as in the sequential decomposition.
The key idea of domain truncation and transmission can be understood by analogy with Gaussian
elimination. If there is no source term in the complementary domain of a subdomain, one needs only
to approximate the Schur complement which corresponds to a transparent boundary condition along the
interface separating the subdomain and its complementary domain, such that the subdomain solution
coincides with the solution of the original problem. If there is a source in the complementary
domain, one needs only to know the solution corresponding to the source in an arbitrarily small
outer (viewed from the subdomain) neighborhood of the interface, and to take the outer source into
the subdomain through a transmission condition (conveniently using the same boundary operator as in
the transparent boundary condition), which is like a back substitution in Gaussian elimination.
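Schematically, on a $2\times2$ block system with block $1$ the complementary domain and block $2$
the subdomain (an illustration of the structure only, not the actual discretization), eliminating
$u_1$ gives
\begin{equation*}
\begin{pmatrix} A_{11} & A_{12}\\ A_{21} & A_{22}\end{pmatrix}
\begin{pmatrix} u_1\\ u_2\end{pmatrix}
=\begin{pmatrix} f_1\\ f_2\end{pmatrix}
\quad\Longrightarrow\quad
\left(A_{22}-A_{21}A_{11}^{-1}A_{12}\right)u_2=f_2-A_{21}A_{11}^{-1}f_1,
\end{equation*}
where the Schur complement $A_{22}-A_{21}A_{11}^{-1}A_{12}$ plays the role of the transparent
boundary condition, and the back-substituted term $A_{21}A_{11}^{-1}f_1$ plays the role of the
outer source transmitted through the interface.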
So, how did the recent work of \citeasnoun{leng2019additive}, \citeasnoun{taus2020sweeps} revive the domain
truncation and transmission in a checkerboard decomposition like Figure~1.2 (left)? To understand
their idea, which is truly creative and a big step beyond the previously existing optimal Schwarz
methods, it is well worth reading the original work. In this review, we present what we have
learned. First, it is good to have conceptually a domain truncation of the original problem into a
patch including the target subdomain, and use the truncated problem on the patch for transmission
into the subdomain. For example, in Figure~1.2 (left), a patch including $\Omega_{ij}$ can be
$\Xi_{ij}^l:=\bar{\Omega}_{1j}\cup\cdots\cup\bar{\Omega}_{ij}$. A PML surrounding $\Xi_{ij}^l$ is
used for domain truncation of the original problem, and we denote the PML augmented patch by
$\hat{\Xi}_{ij}^l$. Note that $\Omega_{ij}$ is also augmented with PML for domain truncation of the
original problem, and the PML augmented subdomain is denoted by $\hat{\Omega}_{ij}$. A transmission
will take place through the interface
$\hat{\Gamma}_{ij}^l:=\hat{\Omega}_{ij}\cap\{(x,y): x=X_i^l\}$. Let
$\hat{\Omega}_{ij}^l:=\hat{\Omega}_{ij}\cap\{(x,y): x\ge X_i^l\}$.
We assume the PMLs on top (and bottom) of $\Xi_{1j}$, $\Xi_{2j}$, ..., $\Omega_{1j}$, $\Omega_{2j}$,
... are conforming ({\it i.e.}, the same) along their interfaces and in their overlaps, and that the PMLs on the
right of $\Xi_{ij}^l$ and $\Omega_{ij}$ are also the same. In the following lemma, we show
conceptually how the sources on the left of $\Omega_{ij}$ can be transmitted into $\Omega_{ij}$.
\begin{lemma}\label{lem4.1}
Suppose the source $f$ of the original problem \R{eqprb} vanishes on $(X_i^l,X]\times[0,Y]$ and
$[0,X]\times([0,Y]-[Y_j^b,Y_j^t])$; see Figure~\ref{lem41}. Let $\hat{v}_{ij}^l$ be the solution
of the truncated problem on $\hat{\Xi}_{ij}^l$. Let $\hat{u}_{ij}^{l}$ be the solution of
\begin{equation}
\begin{aligned}
(\mathcal{L}_x+\mathcal{L}_y+\eta)\hat{u}_{ij}^{l}&=0 & &\text{ on }\hat{\Omega}_{ij}^l,\\
\mathcal{C}\hat{u}_{ij}^l&=0 & &\text{ on }[X_i^l, X_i^r+D]\times\{Y_j^b-D, Y_j^t+D\},\\
\mathcal{C}\hat{u}_{ij}^l&=0 & &\text{ on }\{X_i^r+D\}\times[Y_j^b-D, Y_j^t+D],\\
\mathcal{B}\hat{u}_{ij}^l&=\mathcal{B}\hat{v}_{ij}^l & &\text{ on
}\hat{\Gamma}_{ij}^l=\{X_i^l\}\times[Y_j^b-D, Y_j^t+D],
\end{aligned}
\label{eqtl}
\end{equation}
where $\mathcal{C}$ is the terminating condition of the PML, and $\mathcal{B}$ is any boundary
operator that makes the problem well-posed. Then, $\hat{u}_{ij}^l$ equals the restriction of
$\hat{v}_{ij}^l$ onto $\hat{\Omega}_{ij}^l$.
\label{lemtl}
\end{lemma}
\begin{figure}
\centering
\includegraphics[scale=.3]{Figures/lem41}
\caption{Illustration of Lemma \ref{lem4.1} for the transmission in the patch $\hat{\Xi}_{ij}^l$.}
\label{lem41}
\end{figure}
\begin{proof}
Since \R{eqtl} has the unique solution $\hat{u}_{ij}^l$ and the restriction of $\hat{v}_{ij}^l$
onto $\hat{\Omega}_{ij}^l$ satisfies \R{eqtl}, the two must be equal.
\end{proof}
\begin{remark}
If the PML of $\hat{\Xi}_{ij}^l$ is exactly transparent, then the restriction of $\hat{v}_{ij}^l$
onto $\Xi_{ij}^l$ coincides with the solution of the original problem, and so is the restriction of
$\hat{u}_{ij}^l$ onto $\Omega_{ij}$. A transparent PML does exist at the continuous level, see
{\it e.g.}~\citeasnoun{yang2021truly}.
\end{remark}
\begin{remark}
To solve \R{eqtl}, we need only the knowledge of $\hat{v}_{ij}^l$ in a left neighborhood of
$\hat{\Gamma}_{ij}^l$ to evaluate the transmission data $\mathcal{B}\hat{v}_{ij}^l$. The truncated
problem on the patch $\hat{\Xi}_{ij}^l$ is never solved directly (like the original problem
on $\Omega$ is never solved directly). In the algorithm to be detailed later, the transmission
data is obtained from a solution defined on $\hat{\Omega}_{i-1,j}$. In a deferred correction form,
the residual instead of the transmission data is evaluated, which gives rise to the source
transfer method \cite{Chen13a}; see \citeasnoun{CGZ} for their relation.
\end{remark}
\begin{remark}
Since we also want to resolve the local source $f_{ij}$ -- the restriction of $f$ onto ${\Omega}_{ij}$,
the truncated problem on $\hat{\Omega}_{ij}$ will be used. It is then convenient to take
$\mathcal{B}$ as $-\partial_x+\mathcal{S}$ with $\mathcal{S}$ the PML Dirichlet-to-Neumann
operator \R{eqsd} or \R{eqsw} defined by the PML $\hat{\Omega}_{ij}-\hat{\Omega}_{ij}^l$. In this
case, the problem \R{eqtl} is extended to $\hat{\Omega}_{ij}$ with $\mathcal{S}$ unfolded
\cite{gander2019class}.
\end{remark}
\begin{remark}
Note that we can replace in Lemma~\ref{lemtl} the patch and the subdomain with an arbitrary domain
$\Xi$ and an arbitrary subdomain $\Omega_*\subset\Xi$ of any shape, with the transmission
condition put on $\partial\Omega_*-\partial\Xi$ and the source $f$ vanishing on $\Omega_*$, as
long as the problems on $\Xi$ and $\Omega_*$ are well-posed. See also \citeasnoun{leng2019additive} for
the source transfer version.
\end{remark}
Similarly, we can define the patches $\Xi_{ij}^r$, $\Xi_{ij}^t$, $\Xi_{ij}^b$ for the transmission
of the sources outside the right/top/bottom of $\Omega_{ij}$; see Figure~\ref{figpatch}
(left). There are still four corner regions outside $\Omega_{ij}$. For the source to the bottom left
of $\Omega_{ij}$, we have conceptually the patch
$\Xi_{ij}^{bl}:=\cup_{{c\le i;\ r\le j}}\bar{\Omega}_{cr}$ and the PML augmented patch
$\hat{\Xi}_{ij}^{bl}$; see Figure~\ref{figpatch} (right). Let
$\hat{\Omega}_{ij}^{bl}:=\hat{\Omega}_{ij}\cap\{(x,y): x\ge X_i^l, y\ge Y_j^b\}$. A transmission
will take place through the interface
$\hat{\Gamma}_{ij}^{bl}:=\hat{\Omega}_{ij}\cap\partial\hat{\Omega}_{ij}^{bl}$. Although the source
to be transmitted into $\Omega_{ij}$ is to the bottom left of $\Omega_{ij}$, {\it i.e.}, on
$\Xi_{i-1,j-1}^{bl}$, a patch containing both $\Omega_{ij}$ and $\Xi_{i-1,j-1}^{bl}$ has to be
larger for the PML to work. That is why the patch $\Xi_{ij}^{bl}$ is used. As we mentioned in the
above remarks following Lemma~\ref{lemtl}, the PML augmented problem on $\hat{\Omega}_{ij}$ can be
solved with either the transmission data on $\hat{\Gamma}_{ij}^{bl}$ or the residual in a top-right
neighborhood of $\hat{\Gamma}_{ij}^{bl}$, which in turn is acquired from a glued version of subdomain
solutions on $\Omega_{i-1,j}$, $\Omega_{i-1,j-1}$ and $\Omega_{i,j-1}$. The resulting solution
$\hat{u}_{ij}^{bl}$ on $\hat{\Omega}_{ij}$ then has the same restriction onto
$\hat{\Omega}_{ij}^{bl}$ as does the conceptual solution $\hat{v}_{ij}^{bl}$ for the source located
to the bottom left of $\Omega_{ij}$. In the same way, we can define the PML augmented patches
$\hat{\Xi}_{ij}^{br}$, $\hat{\Xi}_{ij}^{tr}$, $\hat{\Xi}_{ij}^{tl}$ for the transmission of the
sources located to the bottom-right/top-right/top-left of $\Omega_{ij}$.
\begin{figure}
\centering
\includegraphics[scale=.35]{Figures/patch}\quad\quad%
\includegraphics[scale=.3]{Figures/patchbl}
\caption{Patches containing the target subdomain $\Omega_{ij}$ (left), and the bottom left patch
augmented with PML $\hat{\Xi}_{ij}^{bl}$ for the non-overlapping decomposition (right).}
\label{figpatch}
\end{figure}
Let $\hat{u}_{ij}^c$ be the solution of the truncated problem on $\hat{\Omega}_{ij}$ corresponding
to the original problem \R{eqprb} with the source $f$ vanishing outside $\Omega_{ij}$. Clearly, if
we have the solutions $\hat{u}_{ij}^{l,r,b,t,bl,br,tl,tr,c}$ and the PML is exactly transparent,
then their sum on $\Omega_{ij}$ equals the solution of the original problem with the source $f$
distributed anywhere on $\Omega$, based on the linearity of \R{eqprb}. The remaining question is how
to get the transmission data needed for $\hat{u}_{ij}^{l,r,b,t,bl,br,tl,tr}$ on
$\hat{\Omega}_{ij}$. This is based on the following induction on the subscripts $i$, $j$. Only the
transmission data for $\hat{u}_{ij}^{l, bl}$ are illustrated.
\begin{lemma}
Let the domain decomposition be non-overlapping. Suppose the PML is exactly transparent and the
solutions $\hat{u}_{i-1,j}^{l,c}$ are available on $\hat{\Omega}_{i-1,j}$. Let
$\hat{\Omega}_{i-1,j}^{lr}:=\hat{\Omega}_{i-1,j}\cap\{(x,y): X_{i-1}^l\le x\le X_{i-1}^r\}$. Then
$\hat{u}_{i-1,j}^l+\hat{u}_{i-1,j}^c$ and $\hat{v}_{i,j}^l$ have the same restriction onto
$\hat{\Omega}_{i-1,j}^{lr}$ and in particular the same trace $\mathcal{B}\hat{v}_{i,j}^l$ on
$\hat{\Gamma}_{ij}^l$ for any boundary operator $\mathcal{B}$.
\label{lemil}
\end{lemma}
\begin{proof}
By Lemma~\ref{lemtl}, $\hat{u}_{i-1,j}^l$ coincides with $\hat{v}_{i-1,j}^l$ on
$\hat{\Omega}_{i-1,j}^{l}\supset \hat{\Omega}_{i-1,j}^{lr}$. Let $\hat{V}_{ij}^l$ be the solution
operator taking a source in $\Xi_{ij}^l$ as input for the truncated problem on
$\hat{\Xi}_{ij}^l$. Let $\chi_{i-1,j}$ be the indicator function of $\Omega_{i-1,j}$. Let $f$ be
the source in $\Xi_{i-1,j}^l$ (the domain on the left side of $\Omega_{ij}$) for obtaining
$\hat{v}_{ij}^l$. Then, $f=\chi_{i-1,j}f+(1-\chi_{i-1,j})f$ and
$\hat{v}_{ij}^l=\hat{V}_{ij}^l(\chi_{i-1,j}f) + \hat{V}_{ij}^l((1-\chi_{i-1,j})f)$. Since the PML
is exact, $\hat{u}_{i-1,j}^c=\hat{V}_{ij}^l(\chi_{i-1,j}f)$ and
$\hat{v}_{i-1,j}^l=\hat{V}_{ij}^l((1-\chi_{i-1,j})f)$ on $\hat{\Omega}_{i-1,j}^{lr}$. Hence,
$\hat{u}_{i-1,j}^c+\hat{u}_{i-1,j}^l=\hat{v}_{ij}^l$ on $\hat{\Omega}_{i-1,j}^{lr}$.
\end{proof}
\begin{remark}
Since we rely on the linearity or superposition principle, we need a partition of unity to
restrict the source to the non-overlapping subdomains. So the source across the interfaces needs
to be treated carefully. For a square-integrable source, the partition is naturally done by the
variational form. For a singular source, {\it e.g.}\ a Dirac line source or a point source on the interface,
one needs to split the source, {\it e.g.}\ regard the source as belonging to the subdomain on the left of
the interface. Similarly, the source at a cross point needs a careful partition of unity.
\end{remark}
\begin{lemma}
Let the domain decomposition be non-overlapping. Suppose the PML is exactly transparent and the
solutions $\hat{u}_{i-1,j}^{b,bl}$ on $\hat{\Omega}_{i-1,j}$, $\hat{u}_{i,j-1}^{l,bl}$ on
$\hat{\Omega}_{i,j-1}$ and $\hat{u}_{i-1,j-1}^{l,b,bl,c}$ on $\hat{\Omega}_{i-1,j-1}$ are
given. Let
$\hat{\Omega}_{i-1,j}^{lrb}:=\hat{\Omega}_{i-1,j}\cap\{(x,y): X_{i-1}^l\le x\le X_{i-1}^r, y\ge
Y_j^b\}$ and
$\hat{\Omega}_{i,j-1}^{lbt}:=\hat{\Omega}_{i,j-1}\cap\{(x,y): x\ge X_{i}^l, Y_{j-1}^b\le y\le
Y_{j-1}^t\}$. Then $\hat{u}_{i-1,j}^{b}+\hat{u}_{i-1,j}^{bl}=\hat{v}_{i,j}^{bl}$ on
$\hat{\Omega}_{i-1,j}^{lrb}$, $\hat{u}_{i,j-1}^{l}+\hat{u}_{i,j-1}^{bl}=\hat{v}_{i,j}^{bl}$ on
$\hat{\Omega}_{i,j-1}^{lbt}$, and
$\hat{u}_{i-1,j-1}^{l}+\hat{u}_{i-1,j-1}^{b}+\hat{u}_{i-1,j-1}^{bl}+\hat{u}_{i-1,j-1}^{c}=\hat{v}_{i,j}^{bl}$
on $\Omega_{i-1,j-1}$.
\label{lemibl}
\end{lemma}
\begin{proof}
Similar to the proof of Lemma~\ref{lemil}, so the details are omitted.
\end{proof}
\begin{remark}
Based on Lemma~\ref{lemibl}, the solutions $\hat{u}_{i-1,j}^{b}+\hat{u}_{i-1,j}^{bl}$,
$\hat{u}_{i,j-1}^{l}+\hat{u}_{i,j-1}^{bl}$ and
$\hat{u}_{i-1,j-1}^{l}+\hat{u}_{i-1,j-1}^{b}+\hat{u}_{i-1,j-1}^{bl}+\hat{u}_{i-1,j-1}^{c}$ can be
glued along the interfaces by a partition of unity to a function on the L-shaped block
$\overline{\hat{\Omega}_{i-1,j}^{lrb}}\cup \overline{\hat{\Omega}_{i,j-1}^{lbt}}\cup
\overline{\Omega_{i-1,j-1}}$ which equals $\hat{v}_{ij}^{bl}$. Then, the function can be used for
evaluation of the transmission data or residual.
\end{remark}
Now we can present the L-(diagonal) sweep preconditioner of \citeasnoun{taus2020sweeps}, \citeasnoun{lengdiag},
\citeasnoun{leng2020trace}. For simplicity, let us consider a non-overlapping decomposition with $3\times 3$
subdomains. In the following we first list the steps corresponding to a sweep from bottom left to
top right.
\begin{itemize}
\item[1$^\circ$] For each $(i,j)$, solve the truncated problem on $\hat{\Omega}_{ij}$ with the local
source $f$ restricted onto $\Omega_{ij}$, and denote the solution by $\hat{u}_{ij}^c$.
\item[2$^\circ$] Solve for $\hat{u}_{21}^l$ on $\hat{\Omega}_{21}$ and $\hat{u}_{12}^b$ on
$\hat{\Omega}_{12}$ with the transmission data provided by $\hat{u}_{11}^c$.
\item[3$^\circ$] Solve for $\hat{u}_{31}^l$ on $\hat{\Omega}_{31}$, $\hat{u}_{13}^b$ on
$\hat{\Omega}_{13}$ and $\hat{u}_{22}^{l,b,bl}$ on $\hat{\Omega}_{22}$ with the transmission data
provided by $\hat{u}_{21}^l+\hat{u}_{21}^c$, $\hat{u}_{12}^b+\hat{u}_{12}^c$, $\hat{u}_{12}^c$,
$\hat{u}_{21}^c$ and a glued function from $\{\hat{u}_{11}^c$, $\hat{u}_{12}^b$,
$\hat{u}_{21}^l\}$.
\item[4$^\circ$] Solve for $\hat{u}_{32}^{l,b,bl}$ and $\hat{u}_{23}^{l,b,bl}$ (details similar to
$3^\circ$).
\item[5$^\circ$] Solve for $\hat{u}_{33}^{l,b,bl}$.
\end{itemize}
In parallel to the above steps $2^\circ$-$5^\circ$, the sweep from top right to bottom left, the
sweep from bottom right to top left and the sweep from top left to bottom right can be performed
simultaneously. Note that the latter two sweeps also produce the solutions with the superscripts
$l, r, b, t$, which can be shared with the former two sweeps so that any already available solutions
will not be computed again. After finishing the four parallel sweeps, we just need to add the nine
solutions on each subdomain.
\begin{itemize}
\item[6$^\circ$] Define the subdomain approximate solutions $\tilde{u}_{ij}:=\sum_{*}\hat{u}_{ij}^*$
on $\Omega_{ij}$ with $*\in\{l, r, b, t, bl, br, tl, tr, c\}$. Define the approximate solution
$\tilde{u}$ on $\Omega$ by gluing $\tilde{u}_{ij}$ with a partition of unity.
\end{itemize}
Note that the solutions inside each of the steps $2^\circ$-$5^\circ$ can be computed in parallel,
albeit from step to step the execution is sequential. For a pipeline parallel implementation of the
preconditioned Krylov iteration for multiple right hand sides, see \citeasnoun{leng2020trace}.
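The anti-diagonal scheduling of the bottom-left-to-top-right sweep can be sketched in a few lines of Python (our illustration, not code from the cited works; the function name is ours): counting the initial, fully parallel local solves as step $1$, subdomain $(i,j)$ is reached in sweep step $i+j-1$, so one sweep takes $2N-1$ sequential steps on an $N\times N$ checkerboard, and the subdomains on one anti-diagonal can be solved in parallel.

```python
# Sketch (ours, not code from the cited papers) of the anti-diagonal
# schedule: counting the initial, fully parallel local solves as step 1,
# the bottom-left-to-top-right sweep reaches subdomain (i, j) in step
# i + j - 1, so one sweep takes 2N - 1 sequential steps on an N x N
# checkerboard; subdomains on one anti-diagonal run in parallel.

def sweep_steps(N):
    """Map subdomain (i, j) -> step of the bottom-left-to-top-right sweep."""
    return {(i, j): i + j - 1
            for i in range(1, N + 1) for j in range(1, N + 1)
            if (i, j) != (1, 1)}          # (1,1) only needs its local solve

steps = sweep_steps(3)
n_steps = max(steps.values())             # sequential depth: 2N - 1 = 5
diag3 = sorted(ij for ij, s in steps.items() if s == 3)
```

For $N=3$ this reproduces the grouping listed above: step $2^\circ$ treats $(2,1),(1,2)$, step $3^\circ$ treats $(3,1),(2,2),(1,3)$, step $4^\circ$ treats $(3,2),(2,3)$, and step $5^\circ$ treats $(3,3)$.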
It can be seen that the L-(diagonal) sweep preconditioner is an exact solver if the PML is exactly
transparent. In this sense, we say it is an optimal Schwarz method. In the non-exact setting, it is
not clear to us whether a subdomain iterative method in the spirit of the classical Schwarz method
can be formulated. That is, can we explain the preconditioned Richardson iteration with the
L-(diagonal) sweep preconditioner as some subdomain iteration, like the relation between the ORAS
preconditioner and the parallel Schwarz method \cite{St-Cyr07}?
The above idea of transmission in PML truncated patches was first proposed by \citeasnoun{leng2019additive}
for a \emph{parallel} Schwarz preconditioner which can be described as follows. Let the checkerboard
domain decomposition be non-overlapping. Let $\hat{u}_{ij}^{i'j'}$ be a solution on
$\hat{\Omega}_{ij}$ that approximates on $\Omega_{ij}$ the solution of the original problem with the
source $f_{i'j'}$, the restriction of the original source to
$\bar{\Omega}_{i'j'}$ by a partition of unity. The preconditioner constructs on $\Omega_{ij}$, step by step, the solutions for
sources $f_{i'j'}$ farther and farther away. The computation of $\hat{u}_{ij}^{i'j'}$ is fully parallel
between different $(i,j)$'s. In each of the following steps, the computation is for all $(i,j)$. It
is sufficient to describe the first few steps for understanding.
\begin{itemize}
\item[0$^\circ$] Compute $\hat{u}_{ij}^{ij}$.
\item[1$^\circ$] Compute $\hat{u}_{ij}^{i'j'}$ using the transmission data from
$\hat{u}_{i'j'}^{i'j'}$ for $(i',j')\in\{(i\pm1,j), (i,j\pm 1)\}$.
\item[2$^{\circ}$] Compute $\hat{u}_{ij}^{i\pm2,j}$ using the transmission data from
$\hat{u}_{i\pm1,j}^{i\pm2,j}$. Also, compute $\hat{u}_{ij}^{i,j\pm2}$ using
$\hat{u}_{i,j\pm1}^{i,j\pm2}$. Moreover, compute $\hat{u}_{ij}^{i+1,j+1}$ using a glued function
from $\hat{u}_{i+1,j+1}^{i+1,j+1}$, $\hat{u}_{i+1,j}^{i+1,j+1}$ and
$\hat{u}_{i,j+1}^{i+1,j+1}$. Similarly, compute $\hat{u}_{ij}^{i-1,j+1}$, $\hat{u}_{ij}^{i-1,j-1}$
and $\hat{u}_{ij}^{i+1,j-1}$.
\item[3$^{\circ}$] Compute $\hat{u}_{ij}^{i'j'}$ for $(i',j'): |i'-i|+|j'-j|=3$. If $i'<i, j'=j$,
using the transmission data from $\hat{u}_{i-1,j}^{i'j}$. If $i'>i$, $j'=j$, using
$\hat{u}_{i+1,j}^{i'j}$. Similarly, using $\hat{u}_{i,j\pm 1}^{ij'}$ if $i'=i$. If $i'>i, j'>j$,
using a glued function from $\hat{u}_{i+1,j+1}^{i'j'}$, $\hat{u}_{i+1,j}^{i'j'}$ and
$\hat{u}_{i,j+1}^{i'j'}$. Similarly, using appropriate solutions from the preceding two steps in
the other cases of $(i', j')$ compared to $(i, j)$.
\end{itemize}
Since each step only looks back for the subdomain solutions obtained in the preceding two steps and
our goal is to get the sum $u_{ij}:=\sum_{(i',j')}\hat{u}_{ij}^{i'j'}$ on $\Omega_{ij}$, in each
step we can add the solutions obtained into $u_{ij}$ and discard any solutions that are no longer
needed. Moreover, in step $3^{\circ}$, since the transmission data for $\hat{u}_{ij}^{i+1,j+2}$ and
$\hat{u}_{ij}^{i+2,j+1}$ are taken from the same neighbors in the same patch
$\hat{\Xi}_{ij}^{tr}$, we can first add the two transmission data and then compute the
sum $\hat{u}_{ij}^{i+1,j+2}+\hat{u}_{ij}^{i+2,j+1}$ directly, rather than computing $\hat{u}_{ij}^{i+1,j+2}$
and $\hat{u}_{ij}^{i+2,j+1}$ individually.
Another approach to an optimal Schwarz method for a checkerboard decomposition is based on a
recursive application of an optimal Schwarz method for a sequential decomposition. For example, with
the decomposition in Figure~1.2 (left), we can combine the subdomains in each column to a block
$\Omega_{i}:=\cup_{j}\Omega_{ij}$. For the sequential decomposition
$\bar{\Omega}=\cup_i\bar{\Omega}_i$, an optimal Schwarz method by exact transparent PML transmission
can be applied. Then, the problem on each block augmented with the PML is solved again by an optimal
Schwarz method using the decomposition of the PML augmented $\Omega_i$ induced from the physical
decomposition $\Omega_{i}=\cup_{j}\Omega_{ij}$. Such recursive Schwarz methods were proposed by
\citeasnoun{LiuYingRecur}, \citeasnoun{ZepedaNested} and \citeasnoun{du2020pure}, who also gave a convergence analysis. A
recursive optimal parallel Schwarz method converges in $N^2$ steps on a checkerboard of $N\times N$
subdomains, with each step costing one subdomain solve in wall-clock time. A recursive optimal
double sweep Schwarz method converges in one sequential double-double sweep costing $4N^2$ subdomain
solves in wall-clock time. For comparison, the new optimal parallel Schwarz method
\cite{leng2019additive} converges in $2N-1$ steps with each step costing one subdomain solve in
wall-clock time. The optimal L-(diagonal) sweep Schwarz method converges in four sweeps, with each
sweep parallel to the others and costing $2N-1$ subdomain solves in wall-clock time if the
subdomains on the same diagonal are treated in parallel. Of course, if only one worker is available
for computing, the recursive double sweep Schwarz method is competitive as a sequential solver.
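The wall-clock costs quoted in this paragraph can be tabulated with a small sketch (ours; the dictionary labels are informal, not terminology from the cited papers), assuming one idealized subdomain solve per unit of wall-clock time and enough workers for all concurrently solvable subdomains.

```python
# Tabulation (ours) of the wall-clock costs quoted above, in units of
# one subdomain solve, assuming enough workers so that all concurrently
# solvable subdomains are treated in parallel; the labels are informal.

def wallclock_solves(N):
    return {
        "recursive parallel":          N * N,      # N^2 steps, one solve each
        "recursive double sweep":      4 * N * N,  # one double-double sweep
        "optimal parallel":            2 * N - 1,  # 2N-1 steps, one solve each
        "L-sweep (4 parallel sweeps)": 2 * N - 1,  # 2N-1 diagonals per sweep
    }

costs = wallclock_solves(8)
for name, cost in costs.items():
    print(f"{name:30s} {cost:4d}")
```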
\section{Conclusions}
While working on this review over the last two years, we discovered
many new results on optimized Schwarz methods based on domain
truncation, and each time realized that there are more further open
questions that it would be very interesting to research. In
particular, we think the following research questions would be of
great interest for Schwarz methods based on domain truncation and
optimized Schwarz methods:
\begin{itemize}
\item What is the best overlap to be used with the zeroth order Taylor transmission condition for
the Helmholtz equation? We have seen that for thin subdomains the overlap must be small enough for
the method to converge, but it then deteriorates when the overlap goes to zero, so there must be
an optimal overlap size for best performance. How this best choice depends on the Helmholtz
frequency and decomposition is an open problem. We have however also seen that for large enough
subdomains and generous overlap, the method can become robust in the wavenumber.
\item What is the asymptotic optimized parameter of the Robin
condition for the free space wave problem on a bounded domain? Based
on Fourier analysis, we discovered that with enough absorption in
the original boundary conditions put on the domain on which the
problem is posed, the asymptotic dependence of the convergence
factor on the overlap is comparable to the much simpler screened
Laplace problem, and the optimized transmission parameters have a
clear asymptotic behavior, but their dependence on the wave number
and geometry is not yet known.
\item What are the limiting spectra of double sweep Schwarz methods as
the number of subdomains goes to infinity? The limiting spectra provide
a very interesting new and accurate technique to study the
convergence of Schwarz methods when the number of subdomains becomes
large, and permit one to obtain rather sharp estimates in several of
the situations we have studied here, but not all of them.
\item What are the optimized parameters for many subdomains in a strip
or checkerboard decomposition? For the strip decomposition case,
there are first results for a complex diffusion problem in
\citeasnoun{KyriakisDD26} and \citeasnoun{ThesisAlex}, which indicate that the two
subdomain asymptotic results also hold in the case of many
subdomains, but for Helmholtz type problems this is still a largely
open field.
\item Can we rigorously prove the many detailed convergence results
and parameter dependence of the convergence factors we obtained from
the many subdomain Fourier analysis? To this end one needs to be
able to estimate accurately the spectral radii or norms of the
substructured iteration matrices from Fourier analysis we studied
here numerically.
\item When does the parallel or double sweep Schwarz method with PML
converge for layered media? We have seen that PML cannot capture
the behavior of solutions in layered media, and often even the
excellent damping properties do not suffice for the Schwarz methods
based on domain truncation to work. Our concrete iteration matrices
based on Fourier analysis provide however an excellent tool to get
more fundamental theoretical insight into this.
\item Is there a better transmission condition for layered media? A
first fundamental contribution into this direction is the recent PhD
thesis \citeasnoun{PreussThesis} on learned infinite elements, which construct
transmission conditions based on approximating the symbol of the Dirichlet
to Neumann operator for the layered outer medium. Strong
assumptions on the separability are however currently needed for
this approach to be successful, and further research should be very
fruitful into this direction.
\item Finally, the research on more general decompositions including
cross points is today largely open: is it possible to get a precise
convergence factor in a checkerboard decomposition, and to optimize
transmission conditions in this case, or estimate the PML depth
needed for good performance? And what would be good coarse space
components for these problems and methods?
\end{itemize}
We hope that our present snapshot of the state of the art of Schwarz
methods based on domain truncation and optimized Schwarz methods, and
the above list of challenging open research questions will lead
to further progress in this fascinating and challenging field of
powerful domain decomposition methods, which cannot be studied using
classical abstract Schwarz framework techniques.
\section{Appendix A}\label{AppendixA}
The following simple Matlab code allows the user to experiment with
nilpotent Schwarz methods based on block LU decompositions. It is
currently not known in the presence of cross points what kind of
transmission operators this approach generates, but it works for
arbitrary discretized partial differential equations.
{\small
\verbatiminput{example9dom.m}
}
\section{Appendix B}\label{AppendixB}
The following Maple commands can be used to compute many of the
formulas in the two subdomain analysis for the screened Laplace
problem, and also easily be modified to compute the corresponding
Helmholtz results. There are many useful tricks in these Maple
commands, as indicated by the comments on the right. Note also that we
absorbed the term in the denominator of the solutions for {\tt E1} and
{\tt E2} obtained by Maple in the constants $A_1$ and $A_2$ in our
expressions used in \eqref{ScreenedLaplaceSols} to simplify the
expressions, without affecting the resulting convergence factor.
{\small
\verbatiminput{mapleTaylorEta.txt}
}
\bibliographystyle{actaagsm}
\section{Introduction}
Lattice gauge theory is up to now the only successful nonperturbative numerical approach
for solving physical problems related to the strong interaction. Among the most renowned
recent results, the prediction of a critical endpoint of the phase transition in QCD
has come to the forefront of research\cite{FODOR1,FODOR2,PETREC1,PETREC2}.
A large-scale experimental program, FAIR at GSI, has also been initiated, with the study
of the interface between quark and hadronic matter in the CBM experiment \cite{CBM} among its goals.
Accelerator experiments, however, do not control thermodynamically
relevant parameters, such as the temperature and pressure, to such a degree that these
could be regarded as having sharp and constant values during the evolution of the strongly
interacting matter. Lattice simulations, on the other hand, assume a fixed
value of the temperature.
Our aim with the study presented in this paper is to move towards a more flexible scheme:
we treat temperature as a random variable, defined not only by its expectation value, but also
by a width. In fact, a thermodynamically consistent approach to this problem requires
that the inverse temperature, $\beta=1/k_BT$, which also occurs as the Lagrange multiplier
for the fixed-energy constraint when maximizing the entropy, is fixed on average and then
randomized. Such a superstatistical method \cite{SUPER1,SUPER2,SUPER3,SUPER4,SUPER5,SUPER6}
is in accord with recent findings on non-extensive thermodynamics,
where the canonical energy distribution is non-exponential, but rather shows an experimentally
observed power-law tail \cite{POWER1,POWER2,POWER3,POWER4,POWER5,POWER6}.
In this paper we review basic thermodynamic arguments to relate the temperature to the parameters
of a power-law tailed canonical energy distribution. Following this, the superstatistical
method is presented, in particular its realization strategy for lattice Monte Carlo simulations.
We choose to randomize the timelike to spacelike lattice spacing ratio, $\theta=a_t/a_s$.
The most important first task is to check the deconfinement phase transition by observing
the Polyakov loop expectation value. These results are presented and discussed.
As a main consequence we predict that the deconfinement transition temperature is likely to be higher
than that determined so far by fixed-$T$ lattice calculations.
\section{Thermodynamical Background}
Based on arguments regarding the compatibility of general composition rules
for the total entropy and energy of composed thermodynamical systems \cite{POWER6},
in an extended canonical thermal equilibrium problem the absolute temperature
is given by
\begin{equation}
\beta = 1/T = \partial \hat{L}(S)/\partial L(E),
\ee{ABSOLUT_THERM_TEM}
with $\hat{L}(S)$ and $L(E)$ being the additive formal logarithms of the
respective composition formulas. The formal logarithm maps a general composition law,
say $S_{12}=S_1\oplus S_2$, to the addition by $\hat{L}(S_{12})=\hat{L}(S_1)+\hat{L}(S_2)$.
This construction leads us to maximize
$\hat{L}(S)-\beta L(E)$ when looking for canonical energy distributions \cite{POWER5}.
The probability distribution, $w_i$, of states with energy $E_i$ in equilibrium
maximizes the formal logarithm of the non-extensive entropy formula with constraints
on the average value of the also non-additive energy and the probability normalization:
\begin{equation}
\hat{L}(S)\left[ w_i \right] - \beta \sum_i w_i L(E_i) -\alpha \sum_i w_i = {\rm max}.
\ee{GENERAL_CANONICAL}
Here $\beta$ and $\alpha$ are Lagrange multipliers and it can be proven that
$\beta=1/T$ is related to the thermodynamically valid temperature according to
the zeroth law of thermodynamics.
Choosing the simplest composition formula beyond pure addition, namely addition supplemented
with a leading second-order correction,
\begin{equation}
S_{12}=S_1+S_2+\hat{a}S_1S_2,
\ee{ENTROPY_TSALLIS_COMPO}
the additive formal logarithm function is given by
\begin{equation}
\hat{L}(S)=\frac{1}{\hat{a}} \ln(1+\hat{a}S).
\ee{TSA_FORM_LOG}
This way $\hat{L}(S_{12})=\hat{L}(S_1)+\hat{L}(S_2)$, indeed.
By using the Tsallis entropy formula \cite{NEXT1,NEXT2,NEXT3,NEXT4,NEXT5},
\begin{equation}
S = \frac{1}{\hat{a}} \sum_i \left( w_i^{1-\hat{a}} - w_i \right),
\ee{TSALLIS_ENTROPY}
this formal logarithm turns out to be the R\'enyi entropy \cite{RENYI1,RENYI2}
\begin{equation}
\hat{L}(S) = \frac{1}{\hat{a}} \ln \sum_i w_i^{1-\hat{a}}.
\ee{RENYI_ENTROPY}
It is customary to use the parameter, $q=1-\hat{a}$.
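The identity behind eq.(\ref{RENYI_ENTROPY}), namely that $1+\hat{a}S$ equals $\sum_i w_i^{1-\hat{a}}$ for any normalized distribution, is easy to verify numerically. The following Python sketch (our illustration; the variable names are ours) checks that the formal logarithm of the Tsallis entropy coincides with the R\'enyi entropy for a random distribution:

```python
import math
import random

# Numerical check (ours) that the formal logarithm of the Tsallis
# entropy equals the Renyi entropy: for a normalized distribution w,
# 1 + a*S_Tsallis = sum_i w_i**(1-a), so (1/a)*ln(1 + a*S_Tsallis)
# equals (1/a)*ln(sum_i w_i**(1-a)) exactly.

random.seed(1)
a_hat = 0.2                                   # nonextensivity parameter a-hat
raw = [random.random() for _ in range(6)]
w = [r / sum(raw) for r in raw]               # normalized probabilities

S_tsallis = sum(wi**(1 - a_hat) - wi for wi in w) / a_hat
formal_log = math.log(1 + a_hat * S_tsallis) / a_hat
S_renyi = math.log(sum(wi**(1 - a_hat) for wi in w)) / a_hat
```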
The above power-law tailed form of energy distribution can be fitted to
experimentally observed particle spectra, and this way a numerical value
for the parameter $\hat{a}$ can be obtained.
The $\hat{a}=0$ ($q=1$) case recovers the classical Boltzmann-Gibbs-Shannon (BGS)
formula \cite{GE1,GE2,GE3,GE4}
\begin{equation}
S_{BG} = \sum_i - w_i \ln w_i.
\ee{BGS_ENTROPY}
According to this, the quantity $\hat{L}(S)$ is to be maximized with constraints.
Identifying the analogous formal logarithm for leading order non-additive energy composition,
$E_{12}=E_1+E_2+aE_1E_2$, as
\begin{equation}
L(E) = \frac{1}{a} \ln(1+aE),
\ee{E_FROM_LOG}
one considers
\begin{equation}
\frac{1}{\hat{a}} \ln \sum_i w_i^{1-\hat{a}} - \beta \sum_i w_i \frac{1}{a} \ln(1+aE_i)
- \alpha \sum_i w_i = {\rm max}.
\ee{RENYI_CANONICAL}
The maximum is achieved by the canonical probability distribution
\begin{equation}
w_i =A\left(b(\alpha + \beta L_i)\right)^{-\frac{1}{\hat a}}
\ee{edist}
with
\begin{equation}
L_i =\frac{1}{a} \ln(1+aE_i), \qquad A = e^{-\hat L(S)}, \qquad b=\frac{\hat a}{1-\hat a}.
\ee{LI}
Then the normalization, the average and the definition of the entropy
lead to the condition
\begin{equation}
1=b\alpha+b\beta\left<L\right>.
\ee{maincond}
Finally the equilibrium distribution simplifies to
\begin{equation}
w_i
= \frac{1}{Z} \left(1+\hat{a}\hat\beta L_i \right)^{-1/\hat{a}}
\ee{RENYI_DISTRIB}
with $L_i$ given in eq.(\ref{LI}).
Here we have introduced the following shorthand notations:
\begin{equation}
Z = \frac{1}{A } \, (1-b \beta \left<L\right> )^\frac{1}{\hat a}, \qquad
\hat{\beta} = \frac{\beta}{1-\hat{a}(1+\beta \left<L\right>)}.
\ee{cons1}
We should keep in mind that the reciprocal temperature, distinguished by the Zeroth Law,
is the Lagrange multiplier \(\beta\). This is reflected well by the whole formalism,
because the usual thermodynamic relations are valid.
It is particularly interesting to consider now cases, when only one of the two quantities is
composed by non-additive rules.
In the limit of additive entropy but non-additive energy
(\(\hat a \rightarrow 0 \)) the canonical distribution approaches
\begin{equation}
w_i = \frac{1}{Z_{0}} \left( 1+aE_i\right)^{-\beta/a},\qquad
\text{where} \qquad \ln Z_0 = S_{BG} - \beta \left<E\right>.
\ee{ENTR_ADD_ENERG_NON}
Here \(S_{BG}\) is the Boltzmann-Gibbs-Shannon entropy (cf. eq.\ref{BGS_ENTROPY}).
For non-additive entropy and additive energy on the other hand a similar, but differently parametrized
power-law tailed distribution emerges:
\begin{equation}
w_i = \frac{1}{Z} \left( 1+\hat{\beta} \hat{a} E_i\right)^{-1/\hat{a}},
\ee{ENERG_ADD_ENT_NON}
with
\begin{equation}
\hat{\beta} = \frac{\beta}{1-\hat{a}(1+\beta\exv{E}) }.
\ee{HAT_BETA}
The latter relation can be transformed into a more suggestive form by using $q=1-\hat{a}$ and the
temperature parameters $T=1/\beta$ and $\hat{T}=1/\hat{\beta}$:
\begin{equation}
T = \frac{1}{q} \hat{T} + \left(\frac{1}{q}-1 \right) \exv{E}.
\ee{HAT_T_T}
By using the distribution given in eq.(\ref{ENERG_ADD_ENT_NON}), the expectation value of the
energy, $\exv{E}$, is directly given as a function of $\hat{T}$ and $\hat{a}=1-q$.
\section{Superstatistical Monte Carlo Method}
In either case discussed in the previous section, the generalized canonical
distribution of the different energy states in a system in thermal equilibrium
with non-additive composition rules is given by a formula
\begin{equation}
w_i = \frac{1}{Z_{TS}} \: \left( 1 + \frac{\beta E_i}{c} \right)^{-c}.
\ee{POWER_WEIGHT}
In the $c \rightarrow\infty$ limit this formula coincides with the
familiar Gibbs factor:
\begin{equation}
\lim_{c\rightarrow\infty} w_i = \frac{1}{Z_G} \exp(-\beta E_i).
\ee{GIBBS_LIMIT}
The quantity $q=1-1/c$ is called the Tsallis index.
Here $c=\beta/a$ and $\beta$ is in fact the inverse absolute temperature in the energy non-additivity case;
in the entropy non-additivity case, on the other hand, $\beta$ has to be replaced by $\hat{\beta}$ and $c$ by $1/\hat{a}$,
as explained in the previous section.
The thermodynamic temperature in the latter case, according to the Zeroth Law,
can be obtained by using eq.(\ref{HAT_T_T}).
\vspace{0mm}
The Tsallis distribution weight factor, $w_i$,
on the other hand can be obtained as an integral
of Gibbs factors over the Gamma distribution \cite{NEXLAT1,NEXLAT2},
\begin{equation}
w_i = \frac{1}{Z_{TS}} \, \int_0^{\infty}\!d\theta \,
w_c(\theta) \exp(- \theta \beta E_i),
\ee{SUPER_WEIGHT}
with
\begin{equation}
w_c(\theta) = \frac{c^c}{\Gamma(c)} \, \theta^{c-1} \, e^{-c\theta}.
\ee{NORM-GAMMA}
Here $\Gamma(c)$ is Euler's Gamma function, with $\Gamma(c)=(c-1)!$ for integer $c$. By its definition the integral of
$w_c(\theta)$ is normalized to one. This approach is a particular case of the so-called {\em superstatistics}
\cite{SUPER1,SUPER3}.
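The mixture identity in eq.(\ref{SUPER_WEIGHT}) can be checked numerically: integrating the Gibbs factor against the Gamma weight must reproduce the power-law factor, since $\int_0^\infty \theta^{c-1}e^{-(c+\beta E)\theta}\,d\theta=\Gamma(c)/(c+\beta E)^c$. A minimal Python sketch (ours; a plain trapezoidal rule stands in for a proper quadrature):

```python
import math

# Numerical check (ours) of the superstatistics integral: integrating
# the Gibbs factor exp(-theta*beta*E) against the Euler-Gamma weight
# w_c(theta) reproduces the Tsallis factor (1 + beta*E/c)**(-c).
# A plain trapezoidal rule stands in for a proper quadrature.

def gamma_mixture_weight(beta_E, c, n=200000, theta_max=40.0):
    h = theta_max / n
    norm = c**c / math.gamma(c)
    total = 0.0
    for k in range(1, n):        # integrand vanishes at both endpoints
        th = k * h
        total += norm * th**(c - 1) * math.exp(-(c + beta_E) * th)
    return total * h

c, beta_E = 5.5, 2.0
numeric = gamma_mixture_weight(beta_E, c)
closed_form = (1.0 + beta_E / c)**(-c)   # power-law weight of eq. (POWER_WEIGHT)
```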
\vspace{0mm}
Based on this, any canonical Gibbs expectation value, if known as a function
of $\beta$, can be converted into the corresponding expectation values
with the power-law tailed canonical energy distribution.
The respective partition functions, $Z_G$ and $Z_{TS}$ ensure the normalization
of the $w_i$ probabilities, $\sum_i w_i = 1$. They are related to each other:
\begin{equation}
Z_{TS}(\beta) = \sum_i \int_0^{\infty}\!d\theta \, w_c(\theta)
\exp(-\theta \beta E_i) =
\int_0^{\infty}\!d\theta \, w_c(\theta) Z_G(\theta \beta).
\ee{ZTS}
The above formula can be interpreted as averaging over different $\theta \beta$-valued
Gibbs simulations. The averaging is understood in the partition sum, meaning
that the weighting `Boltzmann' factor is also fluctuating. This assumes that the underlying process of
mixing different inverse temperatures is much faster than the averaging itself.
\vspace{0mm}
The question arises which strategy is best to follow in order
to perform lattice field theory
simulations with power-law tailed statistics instead of Gibbs statistics.
Neither the ensemble of different $\beta$ values (Euclidean timelike lattice
sizes), nor the re-sampling of the traditional, Gibbs distributed
configurations is practicable in a naive way. The $N_t$ lattice sizes are
limited to a small number of integer values -- hence the good coverage
of a Gamma distribution with an arbitrary real $c$ value is questionable.
The already produced configuration ensembles were selected by a Monte
Carlo process according to the Gibbs distribution with the original
lattice action; there is no guarantee that the re-weighting procedure
(which includes part of the weight factors in the operator expressions
for observables) is really convergent (i.e. does not contain
parts growing exponentially or worse).
We choose another strategy: we use $\theta$ values selected as random deviates
from an Euler-Gamma distribution in the course of collecting the Monte Carlo statistics.
\vspace{0mm}
The lattice simulation incorporates the physical temperature by the period length in the
Euclidean time direction: $\beta = N_ta_t$. Due to the restriction to a
few integer values of $N_t$,
we simulate the Gamma distribution of the physical $\beta=1/T$ values by
a Gamma distribution of the timelike link lengths, $a_t$.
We assume that its mean value is equal to the spacelike lattice spacing, $a_s$.
Then the ratio $\theta=a_t/a_s$ follows
a normalized Gamma distribution with the mean value $1$ and a width of
$1/\sqrt{c}$. (In view of ZEUS $e^+e^-$ data $c \approx 5.8 \pm 0.5$, so
the width is about $40$ per cent.) In our numerical calculations we
use the value $c=5.5$.
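In practice the $\theta$ deviates can be drawn with any standard Gamma generator. A minimal Python sketch (ours, using the standard library rather than the generator of the actual simulation code) with shape $c$ and scale $1/c$, so that $\langle\theta\rangle=1$ and the relative width is $1/\sqrt{c}$:

```python
import math
import random

# Sketch (ours) of the theta randomization described above: theta =
# a_t/a_s is drawn from a Gamma distribution with shape c and scale
# 1/c, so that <theta> = 1 and the relative width is 1/sqrt(c).

random.seed(42)
c = 5.5
draws = [random.gammavariate(c, 1.0 / c) for _ in range(200000)]

mean_theta = sum(draws) / len(draws)                        # close to 1
var_theta = sum((t - mean_theta)**2 for t in draws) / len(draws)
width = math.sqrt(var_theta)            # close to 1/sqrt(c), about 0.43
```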
\vspace{0mm}
For calculating expectation values in field theory a generating functional
based on the Legendre transform of $Z$ is used.
Our starting assumption is the formula (\ref{ZTS})
with
\begin{equation}
Z_G\left[\theta \beta\right] = \int {\cal D}U \, e^{-S\left[U,\theta \right]}.
\ee{STARTING}
Since we simulate the
canonical power-law distribution by a lattice with fluctuating
asymmetry ratio, there are two limiting strategies to execute the
Legendre transformation: i) in the {\em annealing} scenario the lattice
fluctuates slowly and one first performs the summation over field configurations;
ii) in the {\em quenched} scenario, on the contrary, the lattice fluctuations
are fast and form an effective action (virtually re-weighting the occurrence
probability of a field configuration), while the summation over possible
field configurations is the slower process, performing the second
(i.e.\ the path) integral. In this paper we investigate numerically the general case
in which one may choose when a new value of $\theta$ is drawn. The frequency of
these fluctuations may range from once per Metropolis step for the field
configurations to once in the whole Monte Carlo process (the latter being the
traditional method). Our results presented in the next section correspond to
a choice of $5$ field updates for the whole lattice before choosing a new $\theta$.
This particular value was checked in a series of simulations and proved to be
sufficient for close equilibration at a given momentary temperature \cite{ACTADEB}.
\vspace{0mm}
The effect of $\theta$ fluctuation is an effective weight
for field configurations, which may depend on a scaling power according to
the time (or energy) dimension of the operator under study. In general we
consider
the Tsallis expectation value of an observable $\hat{A}[U]$ over lattice field
configurations $U$. $\hat{A}$ may include
the timelike link length, say with the power $v$:
$\hat{A}=\theta^{\:v}A$.
The Tsallis expectation value then is an average over all possible $a_t$
link lengths according to a Gamma distribution of $\theta=a_t/a_s$.
We obtain:
\begin{equation}
\langle A \rangle_{TS} \, = \, \frac{1}{Z_{TS}} \frac{c^c}{\Gamma(c)}
\int\!d\theta\: \theta^{\: c-1} e^{-c\:\theta} \int {\cal D}U A\left[U\right]
\theta^{\:v} e^{-S\left[\theta,U\right]}
\ee{TS-EXP}
with
\begin{equation}
Z_{TS} \, = \, \frac{c^c}{\Gamma(c)}
\int\!d\theta\: \theta^{\: c-1} e^{-c\:\theta} \int {\cal D}U
e^{-S\left[\theta,U\right]}.
\ee{Z-TS}
The $\theta$ dependence of the lattice gauge action has long been known:
due to the time derivatives of the vector potential in the expression of the electric
fields, the ``kinetic'' part scales like $a_ta_s^3/(a_t^2a_s^2)=a_s/a_t$,
and the magnetic (``potential'') part like $a_ta_s^3/(a_s^2a_s^2)=a_t/a_s$
\footnote{This generalizes to all lattice field actions: kinetic and mass
terms scale like $1/\theta$, potential terms like $\theta$.}.
This leads to the following expression for the general lattice action:
\begin{equation}
S\left[\theta,U\right] = a \: \theta + b / \theta,
\ee{SLAT}
where
$a=S_{ss}[U]$ contains space-space oriented plaquettes and
$b=S_{ts}[U]$ contains time-space oriented plaquettes. The simulation
runs in lattice units anyway, so actually the $U$ configurations are
selected according to weights containing $a$ and $b$. In the
$c \rightarrow \infty$ limit the scaled Gamma distribution approximates
$\delta(\theta-1)$ (its width shrinks to zero while its integral
remains normalized to one), and one recovers the traditional lattice
action $S=a+b$ and the traditional averages.
For finite $c$, one can exchange
the $\theta$ integration and the configuration sum (path integral) and
obtains exactly the power-law-weighted expression.
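This exchange of the $\theta$ integration and the configuration sum can be checked numerically for a single configuration with fixed plaquette sums $a$ and $b$. The Python sketch below (with illustrative values for $a$ and $b$, not taken from any simulation) integrates the Gamma-weighted Boltzmann factor and confirms that the $c=1024$ weight is already indistinguishable from the $\delta(\theta-1)$ limit $e^{-(a+b)}$, while $c=5.5$ yields a finite reweighting:

```python
import numpy as np
from math import lgamma

def gamma_density(theta, c):
    """Euler-Gamma density with unit mean (shape c, scale 1/c), evaluated
    in log space to avoid overflow at large c."""
    return np.exp(c * np.log(c) + (c - 1) * np.log(theta) - c * theta - lgamma(c))

def tsallis_weight(a, b, c, n=400000):
    """Effective weight of one configuration: the Gamma-average over theta
    of exp(-(a*theta + b/theta)), cf. Eqs. (TS-EXP) and (SLAT)."""
    theta = np.linspace(1e-4, 20.0, n)
    w = gamma_density(theta, c) * np.exp(-(a * theta + b / theta))
    return w.sum() * (theta[1] - theta[0])   # simple quadrature

a, b = 0.7, 0.4                      # illustrative plaquette sums
w_boltzmann = np.exp(-(a + b))       # the delta(theta - 1) limit
w_large_c = tsallis_weight(a, b, c=1024.0)
w_small_c = tsallis_weight(a, b, c=5.5)
assert abs(w_large_c - w_boltzmann) / w_boltzmann < 0.01
assert 0.0 < w_small_c < 1.0
```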
\section{Statistics of Polyakov Loops}
Before discussing our results for the SU(2) pure gauge lattice field simulation
using Euler-Gamma distributed timelike lattice spacing (and simulating this way
a fluctuating inverse temperature to leading order in non-extensive thermodynamics),
let us present a figure about the numerical quality of this randomization.
In Fig.\ref{Fig_tasym} the evolution process and the frequency distribution of
the $\theta$ values are shown for the reference run with $c=1024.0$ and for
the investigated case with $c=5.5$. We chose a new value for the asymmetry ratio
$\theta$ at every fifth Monte Carlo update -- in order to leave some time for
the relaxation of the field to its thermal state at each instantaneous $\beta\theta$
inverse temperature. In the figure only every fifth value is shown. The Monte Carlo
simulations for this statistics were done at the coupling $4/g^2=2.40$
with the Metropolis method.
Our reference case, intended to be close to the $c=\infty$ traditional system,
is specified by $c=1024$. After 20000 draws from the Euler-Gamma distribution by a
numerical subroutine, the distribution of effectively used values was re-fitted
with the statistics tool ``gretl''. For our random weighting one expects an
Euler-Gamma distribution with reciprocal parameters $\alpha=c$ and $\beta=1/c$.
On the basis of a sample of 20000 $\theta$ values we reconstructed
$\alpha=1009.8$ and $1/\beta=1010.1$.
Similarly, for $c=5.5$ we obtained $\alpha=5.5179$ and $1/\beta=5.5255$.
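The draw-and-refit procedure can be sketched as follows. Here the re-fit uses a simple method-of-moments estimator rather than gretl's fitting routine (an assumption for illustration); the sample size of 20000 matches the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def refit_gamma(samples):
    """Method-of-moments estimates of the Gamma shape alpha and scale beta
    (mean = alpha*beta, variance = alpha*beta**2)."""
    m, v = samples.mean(), samples.var()
    return m * m / v, v / m

# 20000 draws per case, as in the text; expect alpha ~ c and 1/beta ~ c
for c in (1024.0, 5.5):
    theta = rng.gamma(shape=c, scale=1.0 / c, size=20000)
    alpha, beta = refit_gamma(theta)
    assert abs(alpha - c) / c < 0.05
    assert abs(1.0 / beta - c) / c < 0.05
```

The reconstructed values deviate from the input parameters at the percent level, consistent with the fitted numbers quoted above.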
\begin{figure}
\begin{center}
\includegraphics[width=0.44\textwidth]{Fig1a.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig1b.eps}
\includegraphics[width=0.44\textwidth]{Fig1c.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig1d.eps}
\end{center}
\caption{ \label{Fig_tasym}
The Monte Carlo evolution and distribution of $t_{asym}=a_t/a_s$ for the
coupling $4/g^2 = 2.40$.
The random deviates for the results shown in the upper row are thrown with the parameters
$\alpha = c = 1024.0$ and $\beta = 1/c = 0.000977$. The re-fit by gretl gave
$\alpha = 1009.8$ and $\beta = 0.000990$.
The same parameters in the lower row are
$\alpha = c = 5.5$ and $\beta = 1/c = 0.181818$. The re-fit by gretl gave
$\alpha = 5.5179$ and $\beta = 0.18098$.
}
\end{figure}
Now let us turn to the discussion of the behavior of the order parameter of
the confinement -- deconfinement phase transition. The Polyakov Loop
is calculated by taking the trace of the product of gauge group elements on timelike
links closing a loop due to the periodic boundary condition:
\begin{equation}
P(x) = {\rm Tr} \prod\limits_{t=1}^{N_t} U_{t}(t,x).
\ee{LOCAL_POLYAKOV_LOOP}
The traditional order parameter of the phase transition is the expectation value
of the volume averages computed for each lattice field configuration during the Monte Carlo
process. For the gauge group $SU(2)$ this quantity is real:
\begin{equation}
{\Re e \, P} = {\Re e \,} \sum_x P(x).
\ee{RE_POLYAKOV}
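The definition in Eq. (LOCAL\_POLYAKOV\_LOOP) can be illustrated with a minimal sketch. The links below are Haar-random SU(2) matrices, i.e. a completely disordered configuration rather than a thermalized one, so only the definition and the trace bound $|P(x)|\le 2$ are demonstrated:

```python
import numpy as np

def random_su2(rng):
    """Haar-random SU(2) matrix: a0*I + i*(a1,a2,a3).sigma with the
    4-vector (a0,...,a3) uniform on the unit 3-sphere."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

def polyakov_loop(links):
    """Trace of the ordered product of the N_t timelike links at one
    spatial site, closed by the periodic boundary condition."""
    m = np.eye(2, dtype=complex)
    for u in links:
        m = m @ u
    return np.trace(m)

rng = np.random.default_rng(0)
links = [random_su2(rng) for _ in range(2)]   # N_t = 2 as in the simulations
p = polyakov_loop(links)
assert abs(p.imag) < 1e-9            # SU(2) traces are real
assert abs(p.real) <= 2.0 + 1e-9     # bounded by the trace normalization
```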
In our present investigations the characteristic width parameter of $1/T$-fluctuations
is $c=5.5$, corresponding to a relative
width of $1/\sqrt{c} \approx 0.43$. As a reference the $c=1024.0$ case
is taken -- here the relative width is about $1/\sqrt{c} =1/32 \approx 0.03$.
The plots in Fig.\ref{FigReP1024_1.80_1.95} show the fluctuations of the order parameter
$ {\Re e \, P} $ for the reference runs with $c=1024.0$.
The fluctuating values as a function of the Monte Carlo step are plotted on the left hand side,
while their probability distributions on the right hand side. The values for
the inverse coupling include both the confinement and deconfinement phases.
By producing these results we took five consecutive Metropolis sweeps
over the whole 4-dimensional $10^3\times 2$ lattice while keeping the asymmetry value $\theta=a_t/a_s$ constant.
Then a new $\theta$ was chosen as a random deviate from an Euler-Gamma distribution.
Only these 5-th values are plotted and counted for obtaining expectation values.
The probability distributions of these values were determined by using the statistics
software tool ``gretl''. In some cases the first 5000 configurations were discarded
from the samples of 100000 lattice configurations each; this did not
change the expectation values appreciably.
For the statistical evaluation only every fifth configuration was selected; these are fairly independent
of each other in the evolution governed by the Metropolis algorithm and certainly belong to different
$\theta$ values.
The frequency distributions cleanly reveal, through several maxima, when several $ {\Re e \, P} $ expectation
values occur during the Monte Carlo evolution.
This is the case near the phase transition point.
\begin{figure}
\begin{center}
\includegraphics[width=0.44\textwidth]{Fig2a.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig2b.eps}
\includegraphics[width=0.44\textwidth]{Fig2c.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig2d.eps}
\includegraphics[width=0.44\textwidth]{Fig2e.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig2f.eps}
\includegraphics[width=0.44\textwidth]{Fig2g.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig2h.eps}
\end{center}
\caption{ \label{FigReP1024_1.80_1.95}
The Monte Carlo evolution and distribution of $ {\Re e \, P} $ for the
couplings $4/g^2 = 1.80, 1.85, 1.90$ and $1.95$ using $c=1024.0$ from the top to the bottom.
These reference pictures show a nearly-traditional confinement -- deconfinement phase transition
for the SU(2) Yang-Mills system. Note the small width of the order parameter distribution.
}
\end{figure}
Similar pictures from Monte Carlo simulations with fluctuating inverse temperature using the parameter
$c=5.5$ are plotted in the figures \ref{FigReP_1.80_1.95} -- \ref{FigReP_2.40_2.55}. Here the effect of
the width in the possible temperature values is clearly seen in the larger fluctuations of the
order parameter compared to the reference case $c=1024.0$ at the same coupling.
Also the critical inverse coupling strength moves towards higher values for $c=5.5$.
In Fig.\ref{FigReP_2.10_2.15} we zoom in on the neighborhood of the critical coupling:
the distribution of the $ {\Re e \, P} $ values is characteristically wide. In the third row, at
$4/g^2=2.14$, the distribution of possible values is almost flat between $-1$ and $1$.
(Due to the $SU(2)$ trace normalization, as we use it, the maximal absolute value of the order parameter is $2$.)
The intermittent behavior between positive and negative values of $ {\Re e \, P} $, a sure sign of the
restoration of the center symmetry $Z_2$, can be observed up to the value $4/g^2=2.20$, as
can be inspected in Fig.\ref{FigReP_2.16_2.25}. For even higher inverse coupling strength the observational
sample is too short to observe this effect.
\begin{figure}
\begin{center}
\includegraphics[width=0.44\textwidth]{Fig3a.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig3b.eps}
\includegraphics[width=0.44\textwidth]{Fig3c.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig3d.eps}
\includegraphics[width=0.44\textwidth]{Fig3e.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig3f.eps}
\includegraphics[width=0.44\textwidth]{Fig3g.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig3h.eps}
\end{center}
\caption{ \label{FigReP_1.80_1.95}
The Monte Carlo evolution and distribution of $ {\Re e \, P} $ for the
couplings $4/g^2 = 1.80, 1.85, 1.90$ and $1.95$ using $c=5.5$ from the top to the bottom. Confinement phase.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.44\textwidth]{Fig4a.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig4b.eps}
\includegraphics[width=0.44\textwidth]{Fig4c.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig4d.eps}
\includegraphics[width=0.44\textwidth]{Fig4e.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig4f.eps}
\includegraphics[width=0.44\textwidth]{Fig4g.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig4h.eps}
\end{center}
\caption{ \label{FigReP_2.00_2.08}
The Monte Carlo evolution and distribution of $ {\Re e \, P} $ for the
couplings $4/g^2 = 2.00, 2.05, 2.06$ and $2.08$ using $c=5.5$ from the top to the bottom.
These couplings are nearly critical.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.44\textwidth]{Fig5a.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig5b.eps}
\includegraphics[width=0.44\textwidth]{Fig5c.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig5d.eps}
\includegraphics[width=0.44\textwidth]{Fig5e.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig5f.eps}
\includegraphics[width=0.44\textwidth]{Fig5g.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig5h.eps}
\end{center}
\caption{ \label{FigReP_2.10_2.15}
The Monte Carlo evolution and distribution of $ {\Re e \, P} $ for the
couplings $4/g^2 = 2.10, 2.12, 2.14$ and $2.15$ using $c=5.5$ from the top to the bottom.
Here the two-peak distribution develops, the deconfinement sets in.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.44\textwidth]{Fig6a.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig6b.eps}
\includegraphics[width=0.44\textwidth]{Fig6c.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig6d.eps}
\includegraphics[width=0.44\textwidth]{Fig6e.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig6f.eps}
\includegraphics[width=0.44\textwidth]{Fig6g.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig6h.eps}
\end{center}
\caption{ \label{FigReP_2.16_2.25}
The Monte Carlo evolution and distribution of $ {\Re e \, P} $ for the
couplings $4/g^2 = 2.16, 2.18, 2.20$ and $2.25$ using $c=5.5$ from the top to the bottom.
At these couplings we are well inside the deconfinement regime.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.44\textwidth]{Fig7a.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig7b.eps}
\includegraphics[width=0.44\textwidth]{Fig7c.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig7d.eps}
\includegraphics[width=0.44\textwidth]{Fig7e.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig7f.eps}
\includegraphics[width=0.44\textwidth]{Fig7g.eps} \hspace{4mm} \includegraphics[width=0.44\textwidth]{Fig7h.eps}
\end{center}
\caption{ \label{FigReP_2.40_2.55}
The Monte Carlo evolution and distribution of $ {\Re e \, P} $ for the
couplings $4/g^2 = 2.40, 2.45, 2.50$ and $2.55$ using $c=5.5$ from the top to the bottom.
For these couplings only one symmetry breaking maximum occurs representing a well-developed
deconfinement phase.
}
\end{figure}
How can the critical coupling for the appearance of a nonzero order parameter be estimated?
The method closest to the traditional one\cite{KUTIETAL} is to take the average value over the statistics.
In Fig.\ref{FIG11} we plot $\exv{ {\Re e \, P} }$ over the longer Monte Carlo runs presented above
with their distribution. There is a characteristic difference between the $c=5.5$ and the $c=1024.0$
cases. A possible fit to the average values is given by a fractional power; a $1/3$
power law seems to describe the critical scaling well. Of course, on the basis of the present data
a square-root behavior cannot be excluded either. The obtained positions of the critical couplings
differ: $4/g_c^2 \approx 1.85$ for $c=1024.0$, while $4/g_c^2 \approx 2.12$ for $c=5.5$.
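A minimal version of such a fit is sketched below. Since the measured averages are not tabulated here, the sketch runs on synthetic data generated from the model $\Re e\, P = A\,(x-x_c)^{1/3}$ with $x_c=2.12$ (an assumed, illustrative data set), and recovers $x_c$ by a grid scan with a linear solve for the amplitude at each candidate value:

```python
import numpy as np

def fit_critical_coupling(x, y, xc_grid):
    """Least-squares fit of Re P = A*(x - xc)^(1/3) (zero below xc):
    scan xc over a grid, solve for the amplitude A linearly at each value,
    and keep the candidate with the smallest residual."""
    best = (np.inf, None, None)
    for xc in xc_grid:
        f = np.clip(x - xc, 0.0, None) ** (1.0 / 3.0)
        norm = (f * f).sum()
        A = (f * y).sum() / norm if norm > 0 else 0.0
        resid = ((y - A * f) ** 2).sum()
        if resid < best[0]:
            best = (resid, xc, A)
    return best[1], best[2]

# synthetic scan mimicking the c = 5.5 case: x = 4/g^2, xc ~ 2.12
x = np.linspace(2.0, 2.6, 13)
y = 1.1 * np.clip(x - 2.12, 0.0, None) ** (1.0 / 3.0)
xc_hat, A_hat = fit_critical_coupling(x, y, np.linspace(2.0, 2.3, 301))
assert abs(xc_hat - 2.12) < 0.005
assert abs(A_hat - 1.1) < 0.01
```

The grid scan avoids the diverging derivative of the cube root at the transition point, which can destabilize gradient-based fitters.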
\begin{figure}
\includegraphics[width=0.7\textwidth,angle=-90]{Fig8.eps}
\caption{ \label{FIG11}
Results on Polyakov Loop spatial average
expectation values in long runs (100000 Monte Carlo steps, every fifth kept)
on $10^3\times 2$ lattices at $c=5.5$
(red squares) and at $c=1024.0$ (green circles). The Gaussian widths are indicated
by error bars. The transition point, i.e. the critical coupling strength, $x=4/g^2_c$,
is estimated by a functional fit, $ {\Re e \, P} \sim (x-x_c)^{1/3}$.
}
\end{figure}
\begin{figure}
\includegraphics[width=0.7\textwidth,angle=-90]{Fig9.eps}
\caption{ \label{FIGCumul}
Fourth order cumulants of the Polyakov Loop spatial average
in long runs (100000 Monte Carlo steps, every fifth kept)
on $10^3\times 2$ lattices at $c=5.5$
(full circles) and at $c=1024.0$ (open circles).
The critical coupling strength, $x=4/g^2_c$,
is obtained by a linear fit to the smaller nonzero values.
}
\end{figure}
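The text does not spell out the cumulant definition used in Fig.\ref{FIGCumul}; a common choice for a $Z_2$ order parameter is the Binder cumulant $U_4 = 1 - \langle P^4\rangle/(3\langle P^2\rangle^2)$, which vanishes for Gaussian (symmetric-phase) fluctuations and approaches $2/3$ deep in the broken phase. A sketch under this assumed definition, on synthetic order-parameter samples:

```python
import numpy as np

def binder_cumulant(p):
    """U4 = 1 - <P^4> / (3 <P^2>^2): ~0 for Gaussian fluctuations around
    zero (symmetric phase), -> 2/3 for a sharp peak at a nonzero value
    (broken phase)."""
    p = np.asarray(p)
    return 1.0 - (p ** 4).mean() / (3.0 * ((p ** 2).mean()) ** 2)

rng = np.random.default_rng(2)
sym = rng.normal(0.0, 0.3, 200000)         # symmetric phase: Gaussian around 0
brk = 1.5 + rng.normal(0.0, 0.05, 200000)  # broken phase: peak at nonzero value
assert abs(binder_cumulant(sym)) < 0.02    # Gaussian: <x^4> = 3 <x^2>^2
assert binder_cumulant(brk) > 0.6
```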
For drawing conclusions relevant to the physics, the inverse lattice couplings
have to be related to temperatures. Figure \ref{FIG12} presents $T/T_c$ ratios
versus the inverse coupling, $4/g^2$ for $N_t=2$ lattices, based
on data for critical couplings on different $N_t$-sized lattices \cite{Velytsky}.
Although those simulations were carried out without
temperature fluctuations, i.e. taking $c=\infty$, we use them as a first estimate
for the temperature -- coupling correspondence. The critical coupling in our calculation
for $c=1024.0$ is close to the result obtained previously on lattices of the same size ($N_t=2$).
The critical coupling at $c=5.5$ -- following the $c=\infty$ line of constant physics --
corresponds on the other hand to a temperature which is $1.3$ times higher than the usual value.
\begin{figure}
\includegraphics[width=0.7\textwidth,angle=-90]{Fig10.eps}
\caption{ \label{FIG12}
Deconfinement temperatures based on \cite{Velytsky} vs inverse coupling strength obtained from
different size lattice simulations ($N_t$ values are indicated on the plot).
The arrows point to our findings of critical temperatures with $N_t=2$ for $c=1024.0$ and $c=5.5$, respectively.
The corresponding horizontal lines are drawn at $1.00$ and $1.30$ with respect to the $c=\infty$ case.
}
\end{figure}
\section{Conclusion}
\begin{enumerate}
\item For $c=5.5$ (a realistic value from $p_T$ spectra) the critical coupling at the deconfinement phase transition
shifts towards higher values. Correspondingly, an increase of the deconfinement temperature is
obtained: $T_c(5.5)\approx 1.3T_c(1024) \approx 1.3T_c(c=\infty)$.
\item Aiming at the same $1/T$ value for the simulation, i.e. $\exv{\theta}=1$, the temperature
is expected to increase by about $20$ per cent due to $\exv{1/\theta}=c/(c-1)\approx 1.22$.
This shows the same trend as obtained by the Monte Carlo simulations, but not its whole magnitude.
\item We obtained, assuming the traditional scaling dependence between coupling and physical temperature,
an increase of $15$ per cent in $4/g^2_c$ leading to about an increase of $30$ per cent in $T_c$.
The dynamical effect is definitely larger than the trivial statistical factor of $1.22$.
\item Therefore experiments aiming at producing quark matter under circumstances characteristic
of high energy collisions should consider the possibility of an about $30$ per cent higher $T_c$
than predicted by traditional Monte Carlo lattice calculations. A possible measurement of the
value of the width parameter $c$ can be achieved by analyzing event-by-event spectra.
\end{enumerate}
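The statistical factor quoted in item 2 follows from the inverse moment of the Gamma distribution: for shape $c$ and scale $1/c$ one has $\exv{1/\theta}=c/(c-1)$. A quick Monte Carlo check:

```python
import numpy as np

c = 5.5
rng = np.random.default_rng(3)
theta = rng.gamma(shape=c, scale=1.0 / c, size=2_000_000)
mc = (1.0 / theta).mean()          # Monte Carlo estimate of <1/theta>
exact = c / (c - 1.0)              # = 1.2222..., the ~22% factor in the text
assert abs(exact - 5.5 / 4.5) < 1e-12
assert abs(mc - exact) / exact < 0.01
```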
These preliminary conclusions are based on a comparison with the $c=\infty$ traditional results.
In future works we aim to explore the $T/T_c - 4/g^2$ curve and possibly the renormalization
of physical quantities under the condition of fluctuating temperature with finite $c$ values.
\section*{Acknowledgment}
This work has been supported by the Hungarian National Science Fund,
OTKA (K68108) and by the T\'AMOP 4.2.1/B-09/1/KONV-2010-0007 project co-financed
by the European Union and the European Social Fund.
Partial support from the Helmholtz International Center (HIC) for FAIR
within the framework of the Landes-Offensive zur Entwicklung Wirtschaftlich-\"Okonomischer
Exzellenz (LOEWE) launched by the State of Hesse, Germany, is also acknowledged.
Discussions with Prof. B. M\"uller and A. Jakov\'ac are gratefully acknowledged.
\section{Introduction}
Establishing long distance quantum networks relies on the efficient exchange of quantum information, conveniently encoded in photonic qubits \cite{Kimble.2008}. Quantum light sources emitting at telecom wavelengths are fundamental to this endeavor, due to the minimal absorption window of standard fibre networks at these wavelengths. As a consequence, there has been a lot of interest recently in developing novel quantum systems with direct emission at these wavelengths, where III-V semiconductor systems ranging from InP to wide bandgap materials such as GaP and SiN \cite{Ward.2014, Dibos.2018, Zhou.2018, Merkel.2020, Wolfowicz.2020, Durand.2021} have shown promise as quantum light sources. For emission in the telecom C-band, quantum dot (QD) technology has been most prominent so far \cite{Vajner.2022}, where two different material systems, modified InAs/GaAs and InAs/InP based, respectively, have been pursued \cite{Portalupi.2019, Anderson.2021}. These systems have made leaps in their development recently, maturing from showing evidence of single photon emission \cite{Kim.2005, Cade.2006} to demonstrations of entangled photon emission \cite{Olbrich.2017, Muller.2018} and the development of a spin-photon interface \cite{Dusanowski.2022}.
A key component for any interference-based quantum network applications is a source of coherent photons. The coherence of the photons is ultimately limited by the radiative linewidth of the underlying transition, where the coherence time $T_2$ cannot exceed twice the radiative lifetime $T_1$ (Fourier limit). However, for solid state emitters, reaching this limit is very challenging due to the inevitable coupling of the emitter to its host matrix and the associated decoherence processes. While resonant excitation is key to minimizing such noise processes \cite{Kuhlmann.2013}, previous demonstrations of Fourier-limited emission from QDs have been limited to lower-wavelength regions around 900 nm and have further relied on cavity enhancement, reducing $T_1$ below the timescale of the dephasing processes \cite{XingDing.2016, Wang.2016, Somaschi.2016}, or on manipulation of the noise processes in the environment \cite{Kuhlmann.2015}.
To produce photons with coherence times even beyond the Fourier limit, it is possible to take advantage of an elastic scattering process often termed Resonant Rayleigh Scattering (RRS), whereby resonant laser light can be elastically scattered from quantum emitters even at excitation powers approaching saturation \cite{Bennett.2016}. First demonstrated in 900-nm QDs a decade ago \cite{Nguyen.2011, Matthiesen.2012}, this phenomenon continues to be of fundamental interest, and recent work has led to a much improved understanding of the underlying processes \cite{Hanschke.2020, Phillips.2020}. However, the phenomenon has never been observed for telecom wavelength emitters, where arguably it has the highest impact for practical quantum networking applications.
Here, we use the InAs/InP QD platform to demonstrate photons with coherence times much longer than the Fourier limit at telecom wavelength. We first establish resonance fluorescence on a neutral exciton ($X$) transition and characterize the purity of the emission as well as the signal to background ratio achievable in our system. Measuring the indistinguishability of consecutively emitted photons, we observe a distinct signature in the two-photon correlation traces indicating the presence of photons with coherence times much longer than the Fourier limit. We show that these originate from Resonant Rayleigh scattering rather than residual excitation laser light, and model the observed signatures analytically. The long coherence times are then directly measured in a Michelson interferometer setup. This allows us to show further that even the inelastically scattered part of the photons has coherence times around the Fourier limit. Finally we investigate the indistinguishability of photons emitted $\sim$ 100 000 emission cycles apart by sending one of the two photons through a 25-km fibre spool, enabled by the minimal absorption loss experienced by the QD emission near the telecom C-band.
\section{Resonance fluorescence and single photon purity}
\label{sec:g2}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{Fig1.pdf}
\caption{\textbf{Resonant excitation of a telecom wavelength quantum dot.} (a) Absorption spectrum of a neutrally charged exciton, showing two transitions separated by the fine structure splitting (FSS). The full emission spectrum of this QD, recorded under non-resonant excitation, is shown in the inset. (b) Power-dependent excitation parameters. (c) Autocorrelation measurements.}
\label{fig:g2}
\end{figure*}
Our sample consists of InAs/InP quantum dots in a weak planar cavity formed by two Bragg mirrors. For enhanced extraction efficiency, it is further topped by a Zirconia hemisphere. Further details about the sample are given in Appendix \ref{app:setup}. Under above-band, non-resonant excitation at 850 nm, we record the typical spectrum from such quantum dots shown in the inset of Fig. \ref{fig:g2} (a). To resonantly excite the neutral exciton ($X$) studied here, we tune a narrowband cw laser across the resonance energy and remove any backscattered laser light using polarization suppression, as further described in Appendix \ref{app:setup}. Guiding the emission from the quantum dot to superconducting single photon detectors, we observe two transitions separated by the $X$ fine structure splitting of $22.77\pm0.27$ $\upmu$eV, as shown in Fig. \ref{fig:g2} (a). The relative intensities of the transitions are given by their respective overlap with the excitation and detection polarization in our system, which was set to allow both transitions to be visible for maximum efficiency.
For the remaining measurements described here, we focus on the higher energy transition. Repeating the excitation laser scans as a function of power, we can extract the maximum emission intensity at each power. This results in the count rate saturation behavior clearly seen in Fig. \ref{fig:g2} (b) (blue data points), with a saturation count rate of 491$\pm$9 kcts/s extracted from a fit to the theoretically expected behavior.
Fitting the absorption spectra to a Lorentzian lineshape further allows us to extract the linewidth as a function of power, shown by the orange data points in Fig. \ref{fig:g2} (b). The linewidths can be fitted using the square root dependence on laser intensity expected from pure power broadening, which indicates a natural linewidth of 2.7$\pm$0.12 $\upmu$eV. Further, background emission at the relevant transition energy (consisting of residual laser light as well as detector dark counts and ambient background contributions) can be extrapolated from an empirical polynomial fit to background data only and shows a linear dependence in laser power (Fig. \ref{fig:g2} (b) open circles and cyan curve).
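The two fitted quantities follow the standard two-level-system forms: a square-root power broadening of the linewidth and an $s/(1+s)$ saturation of the count rate, with $s=P/P_{sat}$. The sketch below evaluates these forms with the fitted values quoted in the text ($\Gamma_0 = 2.7$ $\upmu$eV, $I_{sat}=491$ kcts/s) and an illustrative, assumed saturation power:

```python
import numpy as np

GAMMA0 = 2.7    # natural linewidth in ueV (fitted value from the text)
I_SAT = 491.0   # saturation count rate in kcts/s (fitted value from the text)

def linewidth(p, p_sat):
    """Power-broadened linewidth of a two-level system: Gamma0*sqrt(1 + P/Psat)."""
    return GAMMA0 * np.sqrt(1.0 + p / p_sat)

def count_rate(p, p_sat):
    """Saturation of the emission rate: I_sat * s/(1+s) with s = P/Psat."""
    s = p / p_sat
    return I_SAT * s / (1.0 + s)

p_sat = 1.0  # illustrative saturation power in arbitrary units (assumption)
assert np.isclose(linewidth(0.0, p_sat), GAMMA0)                # natural linewidth as P -> 0
assert np.isclose(count_rate(p_sat, p_sat), I_SAT / 2.0)        # half saturation at P = Psat
assert np.isclose(linewidth(3.0 * p_sat, p_sat), 2.0 * GAMMA0)  # linewidth doubles at P = 3 Psat
```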
Next, emission is guided to a Hanbury-Brown and Twiss setup for autocorrelation measurements. These are repeated for excitation powers spanning more than two orders of magnitude, as seen in Fig. \ref{fig:g2} (c). All measurements show a pronounced antibunching dip at zero delay. For higher excitation powers, we further observe the onset of Rabi oscillations, which can be fitted using the expected theoretical description
\begin{equation}
g^{(2)}(\tau) = 1-\exp(-\eta |\tau|)\left[\cos{\mu|\tau|} +\frac{\eta}{\mu}\sin{\mu |\tau|}\right],
\label{eq:g2}
\end{equation}
where $\eta = (1/T_1 + 1/T_2)/2$ and $\mu = \sqrt{\Omega^2+\left(\frac{1}{T_1} - \frac{1}{T_2}\right)^2}$. The dependence of these fitting parameters on the excitation power is given in Fig. \ref{fig:g2} (d), and shows that $\eta$, which gives the decay of the oscillations, is similar across the different powers, whereas the effective oscillation frequency $\mu$ is reduced with decreasing power as expected. They encompass the three physical quantities $T_1$, the excited state lifetime, $T_2$, the coherence time, and $\Omega$, the Rabi frequency. To determine any two of these, the third one has to be measured independently. In our case, we will measure $T_2$ to extract $T_1$ and $\Omega$ further below.
From the autocorrelation data at zero delay, we can further extract the background/single photon signal ratio under resonant excitation, as shown in Fig. \ref{fig:g2} (e). Values down to 0.01$\pm$0.001 are reached when exciting at 23 dB attenuation. This is about an order of magnitude lower than under non-resonant excitation for these QD \cite{Anderson.2020} and comparable to other work resonantly exciting telecom wavelength QDs \cite{Takemoto.2015, Nawrath.2021}, but does not yet quite reach values reported at lower wavelengths. To investigate where this emission background resulting in non-zero $g^{(2)}(0)$ values comes from, we compare the $g^{(2)}(0)$ values to the background to signal ratio determined via the Lorentzian fit to the absorption spectrum, which is shown in Fig. \ref{fig:g2} (e) as well. This ratio is dominated by laser breakthrough at high excitation powers, when the QD transition is saturated, and reaches a minimum around 19 dB attenuation. For lower powers, detector dark counts and ambient background become significant. At the high as well as the low end of excitation power, the $g^{(2)}(0)$ values agree very well with the independently determined background ratio, letting us conclude that ambient background and detector dark counts are the biggest contributors to $g^{(2)}(0)$ values at low excitation powers, while at higher powers laser breakthrough constitutes a more significant fraction of the total emission.
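Equation \ref{eq:g2} can be evaluated directly to reproduce the qualitative features seen in the autocorrelation data: perfect antibunching at zero delay and a Rabi-oscillation overshoot at strong driving. The parameter values below are illustrative (in units of the lifetime), not fitted ones:

```python
import numpy as np

def g2(tau, T1, T2, omega):
    """Autocorrelation of resonance fluorescence, Eq. (eq:g2) in the text:
    g2(tau) = 1 - exp(-eta|tau|) [cos(mu|tau|) + (eta/mu) sin(mu|tau|)]."""
    eta = 0.5 * (1.0 / T1 + 1.0 / T2)
    mu = np.sqrt(omega ** 2 + (1.0 / T1 - 1.0 / T2) ** 2)
    t = np.abs(tau)
    return 1.0 - np.exp(-eta * t) * (np.cos(mu * t) + eta / mu * np.sin(mu * t))

tau = np.linspace(-5.0, 5.0, 2001)          # delay in units of T1
vals = g2(tau, T1=1.0, T2=2.0, omega=5.0)   # strong driving: Rabi regime
assert abs(g2(0.0, 1.0, 2.0, 5.0)) < 1e-12  # perfect antibunching at tau = 0
assert vals.max() > 1.0                     # Rabi oscillations overshoot 1
assert abs(vals[0] - 1.0) < 0.05            # Poissonian at large delays
```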
\section{Signatures of coherent scattering in two-photon interference measurements}
We now investigate the two-photon interference of our QD emission. The collected light is guided to the setup shown in Figure \ref{fig:HOM} (a) for Hong-Ou-Mandel (HOM) type measurements \cite{Hong.1987}. The photons are first separated into two separate arms by a 50:50 beam splitter, and are recombined on a second beamsplitter with an extra delay $\sim$ 20 ns introduced in one of the arms. Performing correlation measurements on the outputs on the superconducting single photon detectors SSPD1 and SSPD2 let us measure the degree of two-photon interference on the second beamsplitter. Electronic polarization controllers (EPCs) are placed in each arm of the setup to control the polarization of the photons, and are set so that the photon polarization is either fully distinguishable (cross-polarized) or fully indistinguishable (co-polarized).
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{Fig2_TM_2.pdf}
\caption{\textbf{Photon indistinguishability of a resonantly excited telecom wavelength quantum dot.} (a) HOM experimental setup. (b) Schematic showing the individual contributions from the four ways in which coincidences can occur. U (L) indicates the upper (lower) arm was taken for a coincidence event. Blue solid curves stand for cross-polarized coincidences, whereas red solid (dashed) curves stand for co-polarized coincidences where the coherence time was set equal to four times the transition lifetime (the transition lifetime). (c) Schematic showing the total coincidences for cross and co polarized light, with curve colors as above. (d)--(i) Cross (blue) and co (red) polarized normalized autocorrelation measurements for HOM with laser attenuation (d) 20 dB, (e) 9 dB and (f) 0 dB. Corresponding visibility measurements (red) with fits (blue) are also shown for (g) 20 dB, (h) 9 dB and (i) 0 dB.}
\label{fig:HOM}
\end{figure*}
There are four ways in which coincidences can occur, depending on which of the arms the two detected photons traveled through. The individual contributions are illustrated in Figure \ref{fig:HOM} (b), while the combined contributions are shown in Figure \ref{fig:HOM} (c), for the case of a balanced interferometer with equal intensity in both arms. If the two photons detected on SSPD 1 and SSPD 2 both traveled through the same arm (either the upper U or lower L arm in Fig. \ref{fig:HOM} (a)), an antibunching dip is observed irrespective of the indistinguishability (polarization) of the photons, with the width of the dip determined by the natural lifetime of the transition. This is shown by the lower two curves in \ref{fig:HOM} (b). If the photons take different paths at the first beamsplitter, the single photon nature of the emission now manifests itself in antibunching dips at $\pm\Delta \tau$. For co-polarized photons, we measure an additional interference dip at zero delay due to the HOM effect, guiding both incoming photons to the same detector and resulting in an absence of coincidences. The width of this additional dip is determined by the coherence time of the arriving photons, which in general differs from the width of the antibunching dip. The overall coincidence pattern, shown in Fig. \ref{fig:HOM} (c), is expected to show a main dip at zero delay, whose depth depends on the indistinguishability of the photons, and two side dips reaching 75\% of the total coincidences for a balanced setup.
Expressing this intuition mathematically, the co-polarized ($\|, \phi=0$) and cross-polarized ($\perp, \phi=\pi/2$) correlations are given by
\begin{equation}\label{eq:corrs}
g_{\phi}^{(2)}(\tau)=\frac{1}{2}g^{(2)}(\tau)+\frac{1}{2}\left(g^{(2)}(\tau+\Delta\tau)+g^{(2)}(\tau-\Delta\tau)\right)P_{HOM}(\tau, \phi),
\end{equation}
where $\Delta\tau$ is the delay between photons determined by the HOM setup. Here, $P_{HOM}(\tau, \phi)$ describes the two-photon interference probability. For cross-polarized light, $P_{HOM}(\tau, \perp) = 0.5$, and for co-polarized light $P_{HOM}(\tau, \|)$ can be calculated from the known photon mode functions at the second beam splitter in the setup \cite{Legero.2003, Patel.2010, Anderson.2021}, as further detailed in Appendix \ref{app:HOMnorm}. The resulting visibility is then defined as
\begin{equation} \label{eq:vis}
V_{HOM}(\tau)=1-\frac{g_{\parallel}^{(2)}(\tau)}{g_{\perp}^{(2)}(\tau)}.
\end{equation}
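The structure of Eqs. \ref{eq:corrs} and \ref{eq:vis} can be sketched numerically. The sketch below uses the low-power form $g^{(2)}(\tau)=1-e^{-|\tau|/T_1}$ for the source correlation and replaces the co-polarized $P_{HOM}$ by a simplified stand-in that suppresses coincidences within the coherence time $T_2$; the full expression would follow from the photon mode functions of Refs. \cite{Legero.2003, Patel.2010}. All parameters are illustrative:

```python
import numpy as np

def g2_source(tau, T1=1.0):
    """Single-emitter antibunching, low-power limit: 1 - exp(-|tau|/T1)."""
    return 1.0 - np.exp(-np.abs(tau) / T1)

def p_hom(tau, copol, T2=2.0):
    """Two-photon interference probability at the second beamsplitter.
    Cross-polarized: constant 1/2. Co-polarized: simplified model in which
    coincidences are suppressed within the photon coherence time T2."""
    if not copol:
        return np.full_like(np.asarray(tau, dtype=float), 0.5)
    return 0.5 * (1.0 - np.exp(-2.0 * np.abs(tau) / T2))

def g2_hom(tau, dt, copol):
    """Eq. (eq:corrs) for a balanced interferometer with path delay dt."""
    return 0.5 * g2_source(tau) + 0.5 * (
        g2_source(tau + dt) + g2_source(tau - dt)) * p_hom(tau, copol)

tau = np.linspace(-60.0, 60.0, 6001)
dt = 20.0                                 # path delay in units of T1
g_cross = g2_hom(tau, dt, copol=False)
g_co = g2_hom(tau, dt, copol=True)
vis = 1.0 - g_co / g_cross                # Eq. (eq:vis)
i0 = np.argmin(np.abs(tau))               # tau = 0
i1 = np.argmin(np.abs(tau - dt))          # tau = +dt (side dip)
assert abs(g_cross[i0] - 0.5) < 1e-6      # cross-polarized dip to 1/2 at tau = 0
assert g_co[i0] < 1e-6                    # co-polarized HOM dip to 0
assert abs(g_cross[i1] - 0.75) < 1e-6     # side dips reach 75% for a balanced setup
assert abs(vis[i0] - 1.0) < 1e-6          # ideal visibility at zero delay
```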
The measured correlations for co- and cross-polarized photons are given in Fig. \ref{fig:HOM} (d), (e), and (f) for three different excitation powers. Concentrating on the lowest excitation power in the top panel, the blue data points show a normalized correlation measurement for cross-polarized photons after the HOM setup, where normalization to Poissonian statistics was performed by determining average correlation intensities at large delays. We observe the expected signature with a centre dip just below 50\% and side dips close to the 75\% indicated by the blue dashed line. Any reduction of this side dip depth, after deconvolution with detector resolution and correction for imbalanced power in the two arms, is due to residual excitation laser light. Using the measured dip depth to determine the laser background, we find it to be on the order of a few percent for these measurements, in agreement with the direct estimate presented in Fig. \ref{fig:g2} (e).
The same measurement for co-polarized photons is shown in red. Here, interestingly, the side dips do not reach 75\% of the local $g^{(2)}(\tau)$ maxima, indicated with the red dashed line. As modeled in detail below, the reason for this reduction in dip depth, which only occurs in the co-polarized data, is two-photon interference of scattered photons with coherence times much longer than $\Delta \tau$. These interference events prevent some of the coincidence events surrounding the side dips, effectively making them appear shallower. The origin of these photons is the resonant Rayleigh scattering process, where a power-dependent fraction of the scattered photons inherit the coherence properties of the excitation laser \cite{Nguyen.2011, Matthiesen.2012, Bennett.2016b}, which has a 10-kHz bandwidth in our case. The presence of these ultralong-coherence-time photons affects the normalization of the co-polarized $g^{(2)}(\tau)$, which is defined as the two-photon intensity normalized to the product of the time averaged single photon intensities, typically estimated from $g^{(2)}$ at $|\tau| \gg 0$. However, in our measurement, this time average of single photon intensities is underestimated due to interference of the coherently scattered fraction of photons over the entire time delay $\tau$ of our measurement. As discussed below and in Appendix \ref{app:HOMnorm}, by estimating this coherently scattered fraction of the emission from the co-polarized side dip depth, we can compensate for this interference effect to correctly normalize our data and determine $g^{(2)}(0)$ as well as the interference visibility.
To confirm the expected power dependence, Fig. \ref{fig:HOM} (d), (e) and (f) show cross- and co-polarized autocorrelation measurements for three different powers. The co-polarized side dips for low powers are decidedly shallower than their cross-polarized counterparts, but increase in depth with increasing power, until they match the cross-polarized side dips at high driving powers. At these powers, we further observe oscillations on either side of the antibunching dips. As in Section \ref{sec:g2}, these are a manifestation of Rabi oscillations. The data are fitted using Eqs. \ref{eq:g2} and \ref{eq:corrs}. We calculate the resulting visibility using Eq. \ref{eq:vis} and present the results in Figures \ref{fig:HOM} (g), (h) and (i). Maximum visibility values are 0.64$\pm$0.09, 0.56$\pm$0.05, and 0.53$\pm$0.14, respectively. These values are lower than previously reported results at the same wavelength under resonant excitation \cite{Nawrath.2021}, and the result may be surprising given the long coherence times and the good indistinguishability results achieved previously under non-resonant excitation \cite{Anderson.2021}. It is important to note, however, that because of the continuous-wave excitation of our source, these maximum visibility values are not a meaningful measurement of the indistinguishability of the emitted photons \cite{Proux.2015, Baudin.2019, Schofield.2022}, but rather also include the limits of the measurement setup used. The experimental imperfections here are a combination of finite detector resolution, non-ideal branching ratio in the interferometer and imperfect mode overlap in the fibre beamsplitter. For a true measurement of photon indistinguishability, a measurement under pulsed excitation, or an adaptation of Ref. \cite{Schofield.2022} to allow for the coherent fraction of the emission, would be needed.
To develop an intuitive understanding of the described signature of photons with ultralong coherence times in correlation measurements, we analytically evaluate Eq. \ref{eq:corrs}, modeling the incoming fields at the second beamsplitter in the HOM setup as the superposition of two traveling waves with coherence times $T_1$ and $T_2$, respectively:
\begin{equation}
\xi_{1,2}=
\begin{cases}
\frac{1}{N}\left(\alpha \sqrt{\frac{2}{T_1}}e^{-\frac{1}{T_1}t-i\omega t}+\beta e^{i\Phi_{1,2}}\sqrt{\frac{2}{T_2}}e^{-\frac{1}{T_2}t-i\omega t}\right), & t\geq0 \\
0, & t<0
\end{cases}
\label{eq:HOMmodel}
\end{equation}
The subscripts refer to the mode function at input 1 and 2 of the second beamsplitter, respectively, and the phases $\Phi_{1,2}$ are random phases included to denote a statistical mixture of the two parts rather than a coherent superposition. The first term refers to spontaneous emission from the QD and has a relatively short coherence time on a nanosecond time scale. This part of the emission has been inelastically scattered during the resonance fluorescence process. The second term denotes photons where the coherence time is inherited from the laser coherence. $|\alpha|^2$ gives the fraction of inelastically scattered photons in the mode, while $|\beta|^2=1-|\alpha|^2$ gives the fraction of elastically scattered photons.
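As a numerical consistency check on the mode functions of Eq. \ref{eq:HOMmodel}, the following sketch (with illustrative, not experimental, parameter values) verifies that for a fixed relative phase the cross term shifts the norm away from unity, while averaging over the random phase $\Phi$ restores $N=1$ for the statistical mixture described above:

```python
import numpy as np

# Illustrative parameters, NOT the experimental values:
T1, T2 = 1.0, 10.0            # short (inelastic) and long (elastic) coherence times, ns
alpha = np.sqrt(0.7)          # inelastic amplitude, |alpha|^2 = 0.7
beta = np.sqrt(1.0 - 0.7)     # elastic amplitude, |beta|^2 = 0.3

t = np.linspace(0.0, 20.0 * T2, 200001)  # time grid covering both decay scales
dt = t[1] - t[0]

def norm_sq(phi):
    """Integral of |xi(t)|^2 with N set to 1, for a given relative phase phi.
    The common carrier exp(-i*omega*t) cancels in |xi|^2 and is omitted.
    Trapezoidal rule is used for the integration."""
    xi = (alpha * np.sqrt(2.0 / T1) * np.exp(-t / T1)
          + beta * np.exp(1j * phi) * np.sqrt(2.0 / T2) * np.exp(-t / T2))
    y = np.abs(xi) ** 2
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))

# For a fixed phase the interference cross term shifts the norm away from 1 ...
single = norm_sq(0.0)
# ... but averaging over a uniform random phase restores unit norm, so N = 1
# on average for the statistical mixture of Eq. (HOMmodel).
avg = np.mean([norm_sq(p) for p in np.linspace(0, 2 * np.pi, 64, endpoint=False)])
print(round(single, 3), round(avg, 3))  # -> 1.527 1.0
```

For a pure state the cross term would make the normalization phase dependent; the phase average is what justifies treating the two components as an incoherent mixture.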
\begin{figure*}[h]
\centering
\includegraphics[width=0.45\textwidth]{HOMmodel.pdf}
\caption{\textbf{Model of the Hong-Ou-Mandel effect for photons with two coherence-time components.} (a) HOM interference on the time scale of the elastically scattered photon coherence time. (b) HOM interference on the time scale of the inelastically scattered photon coherence time.}
\label{fig:HOMmodel}
\end{figure*}
The results from our model are shown in Figure \ref{fig:HOMmodel} (a) for timescales on the order of the laser coherence time. A broad HOM dip is visible due to the elastically scattered fraction of the emitted photons, with the dip depth decreasing with increasing $\alpha$. Looking at the same curves on a ns timescale more typical of QD dynamics (Figure \ref{fig:HOMmodel} (b)), we see that the model reproduces the experimental results shown in Figure \ref{fig:HOM}: if a fraction of the photons have a very long coherence time, the dips at $\pm\Delta\tau$ appear shallower. Furthermore, the apparent side dip depth depends on the fraction of photons emitted via spontaneous emission, $|\alpha|^2$. This is the dependency we use to extract the fraction of elastically scattered photons and normalize our co-polarized data (see Appendix \ref{app:HOMnorm}).
\section{Direct measurement of coherence time beyond the Fourier limit}
To obtain direct evidence of the resonant Rayleigh scattering described above, we measure the coherence time of the photon emission from our QD using a fibre-based Michelson interferometer, as described in earlier work \cite{Anderson.2021}. To establish a benchmark and compare this QD to our previous results \cite{Anderson.2020, Anderson.2021}, we first measure the emission coherence time under non-resonant cw excitation at 850 nm. The resulting visibility as a function of time delay between the two arms of the interferometer can be seen in Fig. \ref{fig:coherence} (a). It is fitted with the Fourier transform of a double Lorentzian, accounting for the interference between the two fine structure split $X$ states. From the fit, we can extract a coherence time of 447$\pm$15 ps at saturation power, and a fine structure splitting of 26.3$\pm$0.1 $\upmu$eV, which is close to the value extracted from the resonant scan in Fig. \ref{fig:g2} (a).
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{Fig3b_TM.pdf}
\caption{\textbf{Coherence of emission.} (a) Coherence measurement under non-resonant excitation. The visibilities at each delay are determined from sinusoidal fits to interference fringes recorded while changing the relative path lengths over the scale of a few wavelengths using a fibre stretcher, and are normalized to the visibility recorded using a narrowband laser. (b) Coherence measurements as a function of resonant excitation power. Visibilities are determined as in (a). (c) Extracted inelastic coherence times, lifetimes and Fourier limit (see text for details). (d) Coherent fraction of the emission and fit according to text. Note that for the coherent fraction recorded from HOM measurements (blue data points), there are two measurements with identical dip depth for the co- and cross-polarized cases. These measurements give a coherent fraction of zero, and, as becomes clear from the slope of $|\beta|^2(D)$ in Fig. \ref{fig:HOMmaths}, infinite error bars. These error bars were omitted here for clarity.}
\label{fig:coherence}
\end{figure*}
Next, we record the interference visibility under resonant excitation for resonant laser intensities spanning more than two orders of magnitude. To fully capture the long coherence times expected from the resonant Rayleigh scattering process, we extend the delay in our Michelson interferometer up to $\sim$ 15 ns. Fig. \ref{fig:coherence} (b) shows the resulting visibilities. We fit the data assuming that the total visibility recorded is the sum of the visibilities from elastically and inelastically scattered photons:
\begin{equation}
V_{tot}(\tau) = Ae^{-\frac{\tau}{\tau_1}}+(1-A)e^{-\frac{\tau}{\tau_2}},
\label{eq:cohfrac}
\end{equation}
where $A$ represents the coherently scattered fraction of the light, $\tau_1$ is the coherence time of the elastically scattered photons and $\tau_2$ is the coherence time of the inelastically scattered photons, corresponding to the natural linewidth of the QD $X$ transition. For the purpose of our fits, $\tau_1$ was set to infinity and the first exponential term therefore set to 1. Looking at Fig. \ref{fig:coherence} (b), we can easily identify the two parts of the visibility in the data: an initial exponential decay in coherence on the timescale given by $\tau_2$, followed by a constant section determined by the coherently scattered fraction of the emission. We note that for the highest powers, the data at $\tau \sim$ 7 ns show markedly higher visibility than the data at $\tau \sim$ 15 ns, even though both delays are well beyond the expected Fourier limit and should give similar visibilities originating from only the coherent fraction of the emission. We attribute this to drifts in the experimental setup resulting in a lower effective laser intensity experienced by the quantum dot. To extract the coherent fraction in these cases, we consider the lower of the two values by focusing on the data at $\tau \sim$ 15 ns. Further, as discussed above, there is non-negligible laser breakthrough in our measurements, especially for higher excitation powers. This breakthrough contribution is estimated from the background in the autocorrelation measurements, and the data shown in Fig. \ref{fig:coherence} (b) are corrected accordingly before fitting with Eq. \ref{eq:cohfrac}.
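The fitting procedure around Eq. \ref{eq:cohfrac} with $\tau_1 \to \infty$ can be sketched on synthetic data; all parameter values below are illustrative, not the measured ones:

```python
import numpy as np
from scipy.optimize import curve_fit

def vis(tau, A, tau2):
    # Eq. (cohfrac) with tau_1 -> infinity: the elastic term becomes a constant A.
    return A + (1.0 - A) * np.exp(-tau / tau2)

# Synthetic visibility data; A = 0.4 and tau2 = 1.2 ns are illustrative values.
rng = np.random.default_rng(0)
tau = np.linspace(0.05, 15.0, 40)                    # delays in ns
data = vis(tau, 0.4, 1.2) + rng.normal(0.0, 0.01, tau.size)

popt, _ = curve_fit(vis, tau, data, p0=(0.5, 1.0))
A_fit, tau2_fit = popt                               # recovers A ~ 0.4, tau2 ~ 1.2 ns
```

The long-delay plateau pins $A$ while the initial decay pins $\tau_2$, which is why the two parameters separate cleanly in the fit.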
The QD coherence time values $\tau_2$ extracted from the fits are plotted in Fig. \ref{fig:coherence} (c) (green data points). These values now allow us to extract the transition lifetime $T_1$ from the fitting parameter $\eta$ shown in Fig. \ref{fig:g2} (d). The resulting values are given in Fig. \ref{fig:coherence} (c) as well. Averaging over the obtained $T_1$ values, we obtain the Fourier limit $2T_1$ (grey dashed line in Fig. \ref{fig:coherence} (c), with the error (grey shaded region) given by the standard deviation of $T_1$ values. For all but the highest excitation power, the coherence times are within the error bars of the Fourier limit, meaning that for resonantly generated single photons from this source, the Fourier limit is actually the lower bound of observed coherence times. Such Fourier limited emission has previously only been reported around 900 nm \cite{Kuhlmann.2015, XingDing.2016, Wang.2016, Somaschi.2016}.
Finally, we plot the values for $A$ as a function of laser intensity, and compare them to the values obtained from the HOM measurements described above. As seen in Fig. \ref{fig:coherence} (d), the two methods give largely agreeing values. Theory predicts that the coherent fraction $F_{CS}$ depends on the driving intensity as follows \cite{Phillips.2020}:
\begin{equation}
F_{CS} = \frac{1}{1+\frac{\Omega^2}{\gamma^2}},
\end{equation}
where $\gamma$ is the natural linewidth of the quantum dot. This model fits our data reasonably well [Fig. \ref{fig:coherence} (d)]. Residual deviations can be attributed to imperfect calibration of the HOM interferometer as well as drifts in the setup leading to differing effective excitation intensities, or also to phonon sideband contributions, which ultimately limit the coherently scattered fraction of photons \cite{Koong.2019, Brash.2019}. While a precise determination of this contribution is left for a future study, the high degree of elastic scattering as well as the $T_2$ times near the Fourier limit suggest that this process is of limited importance in our InAs/InP QDs.
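The predicted drop of the coherent fraction with drive strength can be illustrated numerically; in this sketch the saturation parameter is written as the dimensionless ratio $(\Omega/\gamma)^2$, which is our assumption about the intended normalization of the expression above:

```python
def coherent_fraction(omega, gamma):
    """Coherent (elastically scattered) fraction vs. Rabi frequency.
    The saturation parameter (omega/gamma)**2 is an assumed dimensionless
    normalization of the expression in the text."""
    return 1.0 / (1.0 + (omega / gamma) ** 2)

for omega in (0.1, 1.0, 10.0):   # weak, saturating, and strong drive
    # near unity at weak drive, 0.5 at omega = gamma, vanishing at strong drive
    print(omega, round(coherent_fraction(omega, 1.0), 3))
```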
\section{Indistinguishability of photons separated by 25 km of fibre}
Next, we make direct use of the minimal attenuation in fibre at the emission wavelength of our QDs and measure the indistinguishability by adding a 25-km fibre delay to one of the arms of the interferometer shown in Fig. \ref{fig:HOM} (a). At 0.173 dB/km for the standard SMF-28 Ultra fibre used, this attenuates the signal in the long arm by 4.37 dB, or a factor of 2.74, compared to the short arm. For comparison, the $\sim$4 dB/km attenuation of specialized fibre at 900 nm would still result in a signal attenuation by 10 orders of magnitude, making an indistinguishability measurement all but impossible.
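The attenuation comparison above follows from the standard dB-to-linear conversion; a quick check of the quoted figures:

```python
def db_to_factor(db):
    """Convert an attenuation in dB to a linear power ratio."""
    return 10.0 ** (db / 10.0)

print(round(db_to_factor(4.37), 2))   # -> 2.74: quoted loss of the 25-km arm
print(db_to_factor(4.0 * 25.0))       # ~4 dB/km at 900 nm over 25 km: 10 orders of magnitude
```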
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{Fig4_TM.pdf}
\caption{\textbf{HOM measurements after 25 km of fibre.} (a) HOM measurement for dot 2 without the long fibre delay. (b) Visibility calculated from (a). (c) Autocorrelation measurements passing one arm of the HOM interferometer through 25 km of fibre. (d) Visibility resulting from (c).}
\label{fig:25kmHOM}
\end{figure*}
For the subsequent measurement, a separate QD was used. To establish a baseline, we first measure the indistinguishability without the extra delay, in the same configuration as above. Fig. \ref{fig:25kmHOM} (a) shows the measured correlations for the co- and cross-polarized cases, where the co-polarized data have been normalized taking into account the coherently scattered fraction of the light as above. The extracted visibility is shown in Fig. \ref{fig:25kmHOM} (b). At 0.77$\pm$0.06, the maximum is slightly higher than the values measured for QD 1, likely due to better experimental overlap of the mode functions.
Next, we insert a 25-km fibre spool into the long arm of the interferometer. The measured relative attenuation of this arm is 5.8 dB and includes extra connectors as well as a switch for polarization calibration. The correlation measurements resulting from this configuration are shown in Fig. \ref{fig:25kmHOM} (c). It is immediately obvious that the side dips seen in the short-delay configuration have disappeared, as they have shifted out of our measurement window to a time delay of $\sim 95$ $\upmu$s, corresponding to the extra distance traveled in the fibre. We can therefore no longer use the measured depth of the side dips to extract the coherent fraction, and have to rely instead on the short-delay measurement performed at a similar excitation power. We further note that the cross-polarized correlation curve at zero delay dips well below the 50\% mark expected for a balanced interferometer, as seen in the inset to Fig. \ref{fig:25kmHOM} (c), which shows a zoom around the zero-delay dips. This is due to the extra attenuation in one of the two arms, which makes correlations arising from two photons having both traveled the short arm dominant compared to the other contributions in Fig. \ref{fig:HOM} (b). For the measured attenuation values, we expect $g^{(2)}_{cross}(0)=0.33 \pm 0.03$, in reasonable agreement with the fitted value of 0.27$\pm$0.02. The resulting visibility is shown in Figure \ref{fig:25kmHOM} (d), where we obtain a value of 0.72$\pm$0.15. The extra fibre therefore results in a drop of visibility by about 6.5\%, meaning that even after $\sim$100 000 excitation cycles, the transition remains little affected by additional spectral wandering and other slow dephasing processes, demonstrating the stability of our QD transition on the timescale of $95$ $\upmu$s.
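The quoted expectation $g^{(2)}_{cross}(0)=0.33$ can be reproduced by weighting the four path combinations by the relative arm intensities; the following sketch is our reconstruction of that bookkeeping, not necessarily the exact procedure used:

```python
def g2_cross_zero(rel_atten_db):
    """Zero-delay cross-polarized correlation when the long interferometer arm
    is attenuated by rel_atten_db relative to the short arm (sketch: same-arm
    photon pairs are fully antibunched at zero delay, different-arm pairs are
    offset by the arm delay and contribute flat background there)."""
    r = 10.0 ** (-rel_atten_db / 10.0)      # long-arm intensity relative to short arm
    # Pair weights: short-short = 1 and long-long = r^2 (both antibunched at tau = 0),
    # short-long + long-short = 2r (uncorrelated at tau = 0).
    return 2.0 * r / (1.0 + r) ** 2

print(round(g2_cross_zero(0.0), 2))   # -> 0.5 : balanced interferometer
print(round(g2_cross_zero(5.8), 2))   # -> 0.33: measured 5.8 dB imbalance
```

With the measured 5.8 dB imbalance this gives 0.33, matching the quoted expectation, and for a balanced interferometer it recovers the 50\% level.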
\section{Conclusion}
In conclusion, we have used HOM measurements as well as direct measurements of emission coherence to show that, except for very high driving powers, the coherence time of scattered photons is at least equal to the Fourier limit, and exceeds it considerably for low driving powers due to a large fraction of coherently scattered photons. For this particular QD, the Fourier limit for inelastically scattered photons is reached even without any additional Purcell enhancement or active feedback on the quantum dot. We further measure the indistinguishability of photons emitted $\sim95$ $\upmu$s apart by guiding one of the photons through a 25-km fibre spool, a measurement infeasible with quantum emitters at lower wavelengths. While spectral wandering processes affecting the QDs on the timescale of seconds currently limit the achieved visibilities, we expect that placing the QDs in doped structures similar to our earlier measurements \citep{Anderson.2021} will readily improve the indistinguishability.
An outstanding challenge hampering network integration of C-band quantum dots is further the improvement of the limited extraction efficiency. Recent proposals integrating QD sources into circular Bragg gratings or micropillar structures in the telecom C-band show that coupling efficiencies into single-mode fibres of up to around 80\% are possible, while at the same time providing Purcell enhancement factors of 10--43 \cite{Barbiero.2022, Bremer.2022}. Seeing that under non-resonant excitation the investigated QD has a coherence time only about a factor of three higher than the average in InAs/InP QDs \cite{Anderson.2021}, we expect that combining resonant excitation with appropriate photonic engineering will enable the majority of the QDs to perform at the Fourier level with high efficiency. Such a device will be a desirable hardware component for quantum network applications ranging from simple point-to-point quantum key distribution to distributed quantum computation tasks based on interference of entangled photons linking remote locations.
\begin{acknowledgments}
The authors gratefully acknowledge the usage of wafer material developed during earlier projects in partnership with Andrey Krysa and Jon Heffernan at the National Epitaxy Facility and at the University of Sheffield. They further acknowledge funding from the Ministry of Internal Affairs and Communications, Japan, via the project ‘Research and Development for Building a Global Quantum Cryptography Communication Network’. L. W. gratefully acknowledges funding from the EPSRC and financial support from Toshiba Europe Limited. \\
\end{acknowledgments}
\section{Introduction}
Establishing long distance quantum networks relies on the efficient exchange of quantum information, conveniently encoded in photonic qubits \cite{Kimble.2008}. Quantum light sources emitting at telecom wavelengths are fundamental to this endeavor, due to the minimal absorption window of standard fibre networks at these wavelengths. As a consequence, there has recently been considerable interest in developing novel quantum systems with direct emission at these wavelengths, where III-V semiconductor systems ranging from InP to wide bandgap materials such as GaP and SiN \cite{Ward.2014, Dibos.2018, Zhou.2018, Merkel.2020, Wolfowicz.2020, Durand.2021} have shown promise as quantum light sources. For emission in the telecom C-band, quantum dot (QD) technology has been most prominent so far \cite{Vajner.2022}, where two different material systems, based on modified InAs/GaAs and on InAs/InP, respectively, have been pursued \cite{Portalupi.2019, Anderson.2021}. These systems have made leaps in their development recently, maturing from showing evidence of single photon emission \cite{Kim.2005, Cade.2006} to demonstrations of entangled photon emission \cite{Olbrich.2017, Muller.2018} and the development of a spin-photon interface \cite{Dusanowski.2022}.
A key component for any interference-based quantum network applications is a source of coherent photons. The coherence of the photons is ultimately limited by the radiative linewidth of the underlying transition, where the coherence time $T_2$ cannot exceed twice the radiative lifetime $T_1$ (Fourier limit). However, for solid state emitters, reaching this limit is very challenging due to the inevitable coupling of the emitter to its host matrix and the associated decoherence processes. While resonant excitation is key to minimizing such noise processes \cite{Kuhlmann.2013}, previous demonstrations of Fourier-limited emission from QDs have been limited to lower-wavelength regions around 900 nm and have further relied on cavity enhancement, reducing $T_1$ below the timescale of the dephasing processes \cite{XingDing.2016, Wang.2016, Somaschi.2016}, or on manipulation of the noise processes in the environment \cite{Kuhlmann.2015}.
To produce photons with coherence times even beyond the Fourier limit, it is possible to take advantage of an elastic scattering process often termed resonant Rayleigh scattering (RRS), whereby resonant laser light can be elastically scattered from quantum emitters even at excitation powers approaching saturation \cite{Bennett.2016}. First demonstrated in 900-nm QDs a decade ago \cite{Nguyen.2011, Matthiesen.2012}, this phenomenon continues to be of fundamental interest, and recent work has led to a much improved understanding of the underlying processes \cite{Hanschke.2020, Phillips.2020}. However, the phenomenon has never been observed for telecom wavelength emitters, where arguably it has the highest impact for practical quantum networking applications.
Here, we use the InAs/InP QD platform to demonstrate photons with coherence times much longer than the Fourier limit at telecom wavelengths. We first establish resonance fluorescence on a neutral exciton ($X$) transition and characterize the purity of the emission as well as the signal-to-background ratio achievable in our system. Measuring the indistinguishability of consecutively emitted photons, we observe a distinct signature in two-photon correlation traces indicating the presence of photons with coherence times much longer than the Fourier limit. We show that these originate from resonant Rayleigh scattering rather than residual excitation laser light, and model the observed signatures analytically. The long coherence times are then directly measured in a Michelson interferometer setup. This further allows us to show that even the inelastically scattered photons have coherence times around the Fourier limit. Finally, we investigate the indistinguishability of photons emitted $\sim$ 100 000 emission cycles apart by sending one of the two photons through a 25-km fibre spool, enabled by the minimal absorption loss experienced by the QD emission near the telecom C-band.
\section{Resonance fluorescence and single photon purity}
\label{sec:g2}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{Fig1.pdf}
\caption{\textbf{Resonant excitation of a telecom wavelength quantum dot.} (a) Absorption spectrum of a neutral exciton, showing two transitions separated by the fine structure splitting (FSS). The full emission spectrum of this QD, recorded under non-resonant excitation, is shown in the inset. (b) Power-dependent excitation parameters. (c) Autocorrelation measurements. (d) Power dependence of the fitting parameters $\eta$ and $\mu$ extracted from the autocorrelation measurements. (e) Background-to-signal ratio under resonant excitation.}
\label{fig:g2}
\end{figure*}
Our sample consists of InAs/InP quantum dots in a weak planar cavity formed by two Bragg mirrors. For enhanced extraction efficiency, it is further topped by a zirconia hemisphere. Further details about the sample are given in Appendix \ref{app:setup}. Under above-band, non-resonant excitation at 850 nm, we record the typical spectrum from such quantum dots, shown in the inset of Fig. \ref{fig:g2} (a). To resonantly excite the neutral exciton ($X$) studied here, we tune a narrowband cw laser across the resonance energy and remove any backscattered laser light using polarization suppression, as further described in Appendix \ref{app:setup}. Guiding the emission from the quantum dot to superconducting single photon detectors, we observe two transitions separated by the $X$ fine structure splitting of $22.77\pm0.27$ $\upmu$eV, as shown in Fig. \ref{fig:g2} (a). The relative intensities of the transitions are given by their respective overlap with the excitation and detection polarization in our system, which was set to allow both transitions to be visible for maximum efficiency.
For the remaining measurements described here, we focus on the higher energy transition. Repeating the excitation laser scans as a function of power, we can extract the maximum emission intensity at each power. This results in the count rate saturation behavior clearly seen in Fig. \ref{fig:g2} (b) (blue data points), with a saturation count rate of 491$\pm$9 kcts/s extracted from a fit to the theoretically expected behavior.
Fitting the absorption spectra to a Lorentzian lineshape further allows us to extract the linewidth as a function of power, shown by the orange data points in Fig. \ref{fig:g2} (b). The linewidths can be fitted using the square-root dependence on laser intensity expected from pure power broadening, which indicates a natural linewidth of 2.7$\pm$0.12 $\upmu$eV. Further, the background emission at the relevant transition energy (consisting of residual laser light as well as detector dark counts and ambient background contributions) can be extrapolated from an empirical polynomial fit to background data only, and shows a linear dependence on laser power (Fig. \ref{fig:g2} (b), open circles and cyan curve).
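The square-root power broadening used for the linewidth fit takes the common two-level form $\Gamma(P)=\Gamma_0\sqrt{1+P/P_{sat}}$; a minimal sketch, where the saturation power is an illustrative placeholder, not the measured value:

```python
from math import sqrt

def linewidth(P, gamma0=2.7, P_sat=1.0):
    """Power-broadened linewidth in ueV (two-level sketch).
    gamma0 = 2.7 ueV is the natural linewidth quoted in the text;
    P_sat is an illustrative placeholder, not the measured saturation power."""
    return gamma0 * sqrt(1.0 + P / P_sat)

print(round(linewidth(0.0), 2))   # -> 2.7: extrapolated natural linewidth
print(round(linewidth(3.0), 2))   # -> 5.4: broadened to twice the natural width
```

Extrapolating the fit to zero power is what yields the natural linewidth quoted above.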
Next, emission is guided to a Hanbury-Brown and Twiss setup for autocorrelation measurements. These are repeated for excitation powers spanning more than two orders of magnitude, as seen in Fig. \ref{fig:g2} (c). All measurements show a pronounced antibunching dip at zero delay. For higher excitation powers, we further observe the onset of Rabi oscillations, which can be fitted using the expected theoretical description
\begin{equation}
g^{(2)}(\tau) = 1-\exp(-\eta |\tau|)\left[\cos{\mu|\tau|} +\frac{\eta}{\mu}\sin{\mu |\tau|}\right],
\label{eq:g2}
\end{equation}
where $\eta = (1/T_1 + 1/T_2)/2$ and $\mu = \sqrt{\Omega^2+\left(\frac{1}{T_1} - \frac{1}{T_2}\right)^2}$. The dependence of these fitting parameters on the excitation power is given in Fig. \ref{fig:g2} (d), and shows that $\eta$, which gives the decay of the oscillations, is similar across the different powers, whereas the effective oscillation frequency $\mu$ is reduced with decreasing power, as expected. The parameters encompass the three physical quantities $T_1$, the excited state lifetime, $T_2$, the coherence time, and $\Omega$, the Rabi frequency. To determine any two of these, the third one has to be measured independently. In our case, we will measure $T_2$ to extract $T_1$ and $\Omega$ further below.
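The limiting behavior of Eq. \ref{eq:g2} can be checked numerically with illustrative parameters: perfect antibunching at $\tau=0$, a bunching shoulder from the Rabi oscillations, and a return to Poissonian statistics at long delays:

```python
import numpy as np

def g2(tau, eta, mu):
    """Second-order correlation of Eq. (g2) under cw resonant driving."""
    return 1.0 - np.exp(-eta * np.abs(tau)) * (
        np.cos(mu * np.abs(tau)) + (eta / mu) * np.sin(mu * np.abs(tau)))

eta, mu = 0.5, 2.0                     # illustrative values in 1/ns
print(g2(0.0, eta, mu))                # -> 0.0 : perfect antibunching at zero delay
print(g2(np.pi / mu, eta, mu) > 1.0)   # -> True: bunching shoulder from Rabi oscillations
print(round(g2(50.0, eta, mu), 6))     # -> 1.0 : Poissonian statistics at long delays
```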
From the autocorrelation data at zero delay, we can further extract the background/single photon signal ratio under resonant excitation, as shown in Fig. \ref{fig:g2} (e). Values down to 0.01$\pm$0.001 are reached when exciting at 23 dB attenuation. This is about an order of magnitude lower than under non-resonant excitation for these QDs \cite{Anderson.2020} and comparable to other work resonantly exciting telecom wavelength QDs \cite{Takemoto.2015, Nawrath.2021}, but does not yet quite reach values reported at lower wavelengths. To investigate where the emission background resulting in non-zero $g^{(2)}(0)$ values comes from, we compare the $g^{(2)}(0)$ values to the background-to-signal ratio determined via the Lorentzian fit to the absorption spectrum, which is also shown in Fig. \ref{fig:g2} (e). This ratio is dominated by laser breakthrough at high excitation powers, when the QD transition is saturated, and reaches a minimum around 19 dB attenuation. For lower powers, detector dark counts and ambient background become significant. At the high as well as the low end of excitation power, the $g^{(2)}(0)$ values agree very well with the independently determined background ratio, letting us conclude that ambient background and detector dark counts are the biggest contributors to $g^{(2)}(0)$ values at low excitation powers, while at higher powers laser breakthrough constitutes a more significant fraction of the total emission.
\section{Signatures of coherent scattering in two-photon interference measurements}
We now investigate the two-photon interference of our QD emission. The collected light is guided to the setup shown in Figure \ref{fig:HOM} (a) for Hong-Ou-Mandel (HOM) type measurements \cite{Hong.1987}. The photons are first separated into two arms by a 50:50 beam splitter, and are recombined on a second beamsplitter, with an extra delay of $\sim$ 20 ns introduced in one of the arms. Performing correlation measurements on the outputs with the superconducting single photon detectors SSPD1 and SSPD2 lets us measure the degree of two-photon interference on the second beamsplitter. Electronic polarization controllers (EPCs) are placed in each arm of the setup to control the polarization of the photons, and are set so that the photon polarization is either fully distinguishable (cross-polarized) or fully indistinguishable (co-polarized).
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{Fig2_TM_2.pdf}
\caption{\textbf{Photon indistinguishability under resonant excitation of a telecom wavelength quantum dot.} (a) HOM experimental setup. (b) Schematic showing the individual contributions from the four ways in which coincidences can occur. U (L) indicates the upper (lower) arm was taken for a coincidence event. Blue solid curves stand for cross-polarized coincidences, whereas red solid (dashed) curves stand for co-polarized coincidences where the coherence time was set equal to four times the transition lifetime (the transition lifetime). (c) Schematic showing the total coincidences for cross- and co-polarized light, with curve colors as above. (d)-(i) Cross- (blue) and co-polarized (red) normalized autocorrelation measurements for HOM with laser attenuation (d) 20 dB, (e) 9 dB and (f) 0 dB. Corresponding visibility measurements (red) with fits (blue) are also shown for (g) 20 dB, (h) 9 dB and (i) 0 dB.}
\label{fig:HOM}
\end{figure*}
There are four ways in which coincidences can occur, depending on which of the arms the two detected photons traveled through. The individual contributions are illustrated in Figure \ref{fig:HOM} (b), while the combined contributions are shown in Figure \ref{fig:HOM} (c), for the case of a balanced interferometer with equal intensity in both arms. If the two photons detected on SSPD 1 and SSPD 2 both traveled through the same arm (either the upper U or lower L arm in Fig. \ref{fig:HOM} (a)), an antibunching dip is observed irrespective of the indistinguishability (polarization) of the photons, with the width of the dip determined by the natural lifetime of the transition. This is shown by the lower two curves in Fig. \ref{fig:HOM} (b). If the photons take different paths at the first beamsplitter, the single photon nature of the emission instead manifests itself in antibunching dips at $\pm\Delta \tau$. For co-polarized photons, we measure an additional interference dip at zero delay due to the HOM effect, which guides both incoming photons to the same detector and results in an absence of coincidences. The width of this additional dip is determined by the coherence time of the arriving photons, which in general differs from the width of the antibunching dip. The overall coincidence pattern, shown in Fig. \ref{fig:HOM} (c), is expected to show a main dip at zero delay, whose depth depends on the indistinguishability of the photons, and two side dips reaching 75\% of the total coincidences for a balanced setup.
Expressing this intuition mathematically, the co-polarized ($\|, \phi=0$) and cross-polarized ($\perp, \phi=\pi/2$) correlations are given by
\begin{equation}\label{eq:corrs}
g_{\phi}^{(2)}(\tau)=\frac{1}{2}g^{(2)}(\tau)+\frac{1}{2}\left(g^{(2)}(\tau+\Delta\tau)+g^{(2)}(\tau-\Delta\tau)\right)P_{HOM}(\tau, \phi),
\end{equation}
where $\Delta\tau$ is the delay between photons determined by the HOM setup. Here, $P_{HOM}(\tau, \phi)$ describes the two-photon interference probability. For cross-polarized light, $P_{HOM}(\tau, \perp) = 0.5$, and for co-polarized light $P_{HOM}(\tau, \|)$ can be calculated from the known photon mode functions at the second beam splitter in the setup \cite{Legero.2003, Patel.2010, Anderson.2021}, as further detailed in Appendix \ref{app:HOMnorm}. The resulting visibility is then defined as
\begin{equation} \label{eq:vis}
V_{HOM}(\tau)=1-\frac{g_{\parallel}^{(2)}(\tau)}{g_{\perp}^{(2)}(\tau)}.
\end{equation}
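The 50\% centre dip and 75\% side dips described above follow directly from Eq. \ref{eq:corrs} with $P_{HOM}=0.5$; a numerical sketch using the antibunching function of Eq. \ref{eq:g2} with illustrative parameters:

```python
import numpy as np

def g2(tau, eta=0.5, mu=2.0):
    # Single-emitter antibunching of Eq. (g2); eta, mu in 1/ns are illustrative.
    return 1.0 - np.exp(-eta * np.abs(tau)) * (
        np.cos(mu * np.abs(tau)) + (eta / mu) * np.sin(mu * np.abs(tau)))

def g2_cross(tau, dtau=20.0):
    # Eq. (corrs) with P_HOM = 0.5 (fully distinguishable photons).
    return 0.5 * g2(tau) + 0.25 * (g2(tau + dtau) + g2(tau - dtau))

print(round(g2_cross(0.0), 3))    # -> 0.5 : centre dip, 0.5*0 + 0.25*(1 + 1)
print(round(g2_cross(20.0), 3))   # -> 0.75: side dip, 0.5*1 + 0.25*(1 + 0)
```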
The measured correlations for co-and cross-polarized photons are given in Fig. \ref{fig:HOM} (d), (e), and (f) for three different excitation powers. Concentrating on the lowest excitation power in the top panel, the blue data points show a normalized correlation measurement for cross-polarized photons after the HOM setup, where normalization to Poissonian statistics was performed by determining average correlation intensities at large delays. We observe the expected signature with a centre dip just below 50\% and side dips close to the 75\% indicated by the blue dashed line. Any reduction of this side dip depth, after deconvolution with detector resolution and correction for imbalanced power in the two arms, is due residual excitation laser light. Using the measured dip depth to determine the laser background, we find it to be the order of a few percent for these measurements, in agreement with the direct estimate presented in Fig. \ref{fig:g2} (e).
The same measurement for co-polarized photons is shown in red. Here, interestingly, the side dips do not reach 75\% of the local $g^{(2)}(\tau)$ maxima, indicated with the red dashed line. As modeled in detail below, the reason for this reduction in dip depth, which only occurs in the co-polarized data, is two-photon interference of scattered photons with coherence times much longer than $\Delta \tau$. These interference events prevent some of the coincidence events surrounding the side dips, effectively making them appear shallower. The origin of these photons is the resonant Rayleigh scattering process, where a power-dependent fraction of the scattered photons inherits the coherence properties of the excitation laser \cite{Nguyen.2011, Matthiesen.2012, Bennett.2016b}, which has a 10-kHz bandwidth in our case. The presence of these ultralong-coherence-time photons affects the normalization of the co-polarized $g^{(2)}(\tau)$, which is defined as the two-photon intensity normalized to the product of the time-averaged single-photon intensities, typically estimated from $g^{(2)}$ at $|\tau| \gg 0$. However, in our measurement, this time average of single-photon intensities is underestimated due to interference of the coherently scattered fraction of photons over the entire time delay $\tau$ of our measurement. As discussed below and in Appendix \ref{app:HOMnorm}, by estimating this coherently scattered fraction of the emission from the co-polarized side dip depth, we can compensate for this interference effect to correctly normalize our data and determine $g^{(2)}(0)$ as well as the interference visibility.
To confirm the expected power dependence, Fig. \ref{fig:HOM} (d), (e) and (f) show cross- and co-polarized autocorrelation measurements for three different powers. The co-polarized side dips for low powers are decidedly shallower than their cross-polarized counterparts, but increase in depth with increasing power, until they match the cross-polarized side dips at high driving powers. At these powers, we further observe oscillations on either side of the antibunching dips. As in Section \ref{sec:g2}, these are a manifestation of Rabi oscillations. The data are fitted using Eqs. \ref{eq:g2} and \ref{eq:corrs}. We calculate the resulting visibility using Equation \ref{eq:vis} and present the results in Figures \ref{fig:HOM} (g), (h) and (i). Maximum visibility values are 0.64$\pm$0.09, 0.56$\pm$0.05, and 0.53$\pm$0.14, respectively. These values are lower than previously reported results at the same wavelength under resonant excitation \cite{Nawrath.2021}, and this may be surprising given the long coherence times and the good indistinguishability achieved previously under non-resonant excitation \cite{Anderson.2021}. It is important to note, however, that because of the continuous-wave excitation of our source, these maximum visibility values are not a meaningful measurement of the indistinguishability of the emitted photons \cite{Proux.2015, Baudin.2019, Schofield.2022}, but rather also include the limits of the measurement setup used. The experimental imperfections here are a combination of finite detector resolution, non-ideal branching ratio in the interferometer and imperfect mode overlap in the fibre beamsplitter. For a true measurement of photon indistinguishability, a measurement under pulsed excitation, or an adaptation of Ref. \cite{Schofield.2022} allowing for the coherent fraction of the emission, would be needed.
To develop an intuitive understanding of the described signature of photons with ultralong coherence times in correlation measurements, we analytically calculate Equation \ref{eq:corrs}, modeling the incoming field on the second beamsplitter in the HOM setup as the superposition of two traveling waves with coherence times $T_1$ and $T_2$ respectively:
\begin{equation}
\xi_{1,2}=
\begin{cases}
\frac{1}{N}\left(\alpha \sqrt{\frac{2}{T_1}}e^{-\frac{1}{T_1}t-i\omega t}+\beta e^{i\Phi_{1,2}}\sqrt{\frac{2}{T_2}}e^{-\frac{1}{T_2}t-i\omega t}\right), & t\geq0 \\
0, & t<0
\end{cases}
\label{eq:HOMmodel}
\end{equation}
The subscripts refer to the mode function at input 1 and 2 of the second beamsplitter, respectively, and the phases $\Phi_{1,2}$ are random phases included to denote a statistical mixture of the two parts rather than a coherent superposition. The first term refers to spontaneous emission from the QD and has a relatively short coherence time on a nanosecond time scale. This part of the emission has been inelastically scattered during the resonance fluorescence process. The second term denotes photons where the coherence time is inherited from the laser coherence. $|\alpha|^2$ gives the fraction of inelastically scattered photons in the mode, while $|\beta|^2=1-|\alpha|^2$ gives the fraction of elastically scattered photons.
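The coherence properties implied by Eq. \ref{eq:HOMmodel} can be sketched numerically. Assuming the random phases $\Phi_{1,2}$ average the cross terms to zero, the first-order coherence of the field reduces to a weighted sum of two exponential decays; the time constants below (in ns) are illustrative, not measured values:

```python
from math import exp

# Phase-averaged first-order coherence of the two-component field:
# each exponential mode function contributes exp(-|tau|/T) weighted by
# its intensity fraction (cross terms assumed to average to zero).
def g1_avg(tau, alpha2, t_inelastic=0.35, t_elastic=3.2e4):
    beta2 = 1.0 - alpha2   # elastically scattered fraction |beta|^2
    return alpha2 * exp(-abs(tau) / t_inelastic) + beta2 * exp(-abs(tau) / t_elastic)

# For delays much longer than the inelastic coherence time but much
# shorter than the laser coherence time, the coherence plateaus at |beta|^2:
plateau = g1_avg(15.0, alpha2=0.6)   # ~ 0.4
```

This plateau at $|\beta|^2$ is exactly the long-delay behaviour exploited in the Michelson measurements of the following section.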
\begin{figure*}[h]
\centering
\includegraphics[width=0.45\textwidth]{HOMmodel.pdf}
\caption{\textbf{Model of the Hong-Ou-Mandel effect for photons with two coherence-time components.} (a) HOM interference on the time scale of the elastically scattered photon coherence time. (b) HOM interference on the time scale of the inelastically scattered photon coherence time.}
\label{fig:HOMmodel}
\end{figure*}
The results from our model are shown in Figure \ref{fig:HOMmodel} (a) for timescales on the order of the laser coherence time. A broad HOM dip is visible due to the elastically scattered fraction of the emitted photons, with the dip depth decreasing with increasing $\alpha$. Looking at the same curves on a ns timescale more typical of QD dynamics (Fig. \ref{fig:HOMmodel} (b)), we see that the model reproduces the experimental results shown in Figure \ref{fig:HOM}. If a fraction of the photons has a very long coherence time, the dips at $\pm\Delta\tau$ appear shallow. Furthermore, the apparent side dip depth depends on the fraction of photons emitted via spontaneous emission, $|\alpha|^2$. This is the dependency we use to extract the fraction of elastically scattered photons and normalize our co-polarized data (see Appendix \ref{app:HOMnorm}).
\section{Direct measurement of coherence time beyond the Fourier limit}
To obtain direct evidence of the resonant Rayleigh scattering described above, we measure the coherence time of the photon emission from our QD using a fibre-based Michelson interferometer as described in earlier work \cite{Anderson.2021}. To establish a benchmark and compare this QD to our previous results \cite{Anderson.2020, Anderson.2021}, we first measure the emission coherence time under non-resonant cw excitation at 850 nm. The resulting visibility as a function of time delay between the two arms of the interferometer can be seen in Fig. \ref{fig:coherence} (a). It is fitted with the Fourier transform of a double Lorentzian, accounting for the interference between the two fine structure split $X$ states. From the fit, we can extract a coherence time of 447$\pm$15 ps at saturation power, and a fine structure splitting of 26.3$\pm$0.1 $\upmu$eV, which is close to the value extracted from the resonant scan in Fig. \ref{fig:g2} (a).
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{Fig3b_TM.pdf}
\caption{\textbf{Coherence of emission.} (a) Coherence measurement under non-resonant excitation. The visibilities at each delay are determined from sinusoidal fits to interference fringes recorded while changing the relative path lengths over the scale of a few wavelengths using a fibre stretcher, and are normalized to the visibility recorded using a narrowband laser. (b) Coherence measurements as a function of resonant excitation power. Visibilities are determined as in (a). (c) Extracted inelastic coherence times, lifetimes and Fourier limit (see text for details). (d) Coherent fraction of the emission and fit according to text. Note that for the coherent fraction recorded from HOM measurements (blue data points), there are two measurements with identical dip depth for the co- and cross-polarized cases. These measurements give a coherent fraction of zero, and, as becomes clear from the slope of $|\beta|^2(D)$ in Fig. \ref{fig:HOMmaths}, infinite error bars. These error bars were omitted here for clarity.}
\label{fig:coherence}
\end{figure*}
Next, we record the interference visibility under resonant excitation for resonant laser intensities spanning more than two orders of magnitude. To fully capture the long coherence times expected from the resonant Rayleigh scattering process, we extend the delay in our Michelson interferometer up to $\sim$ 15 ns. Fig. \ref{fig:coherence} (b) shows the resulting visibilities. We fit the data assuming that the total visibility recorded is the sum of the visibilities from elastically and inelastically scattered photons:
\begin{equation}
V_{tot}(\tau) = Ae^{-\frac{\tau}{\tau_1}}+(1-A)e^{-\frac{\tau}{\tau_2}},
\label{eq:cohfrac}
\end{equation}
where $A$ represents the coherently scattered fraction of the light, $\tau_1$ is the coherence time of the elastically scattered photons and $\tau_2$ is the coherence time of the inelastically scattered photons corresponding to the natural linewidth of the QD $X$ transition. For the purpose of our fits, $\tau_1$ was set to infinity and the first exponential term therefore set to 1. Looking at Fig. \ref{fig:coherence} (b), we can easily identify the two parts of the visibility in the data: there is an initial exponential decay in coherence on the timescale given by $\tau_2$, followed by a constant section determined by the coherently scattered fraction of the emission. We note that for the highest powers, the data at $\tau \sim$ 7 ns shows markedly higher visibility than the data at $\tau \sim$ 15 ns, even though both delays are well beyond the expected Fourier limit and should give similar visibilities originating from only the coherent fraction of the emission. We attribute this to drifts in the experimental setup resulting in a lower effective laser intensity experienced by the quantum dot. To extract the coherent fraction in these cases, we consider the lower of the possible values by focusing on the data at $\tau \sim$ 15 ns. Further, as discussed above, there is non-negligible laser breakthrough in our measurements, especially for higher excitation powers. This breakthrough contribution is estimated from the background in the autocorrelation measurements, and the data shown in Fig. \ref{fig:coherence} (b) are corrected accordingly before fitting with Eq. \ref{eq:cohfrac}.
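A minimal sketch of this fitting model, with $\tau_1 \rightarrow \infty$ so that the elastic term is a constant offset (parameter values illustrative):

```python
from math import exp

# Eq. (cohfrac) with tau_1 -> infinity: the elastic term becomes the
# constant offset A, the inelastic term decays with tau_2.
def v_tot(tau, A, tau2):
    return A + (1.0 - A) * exp(-tau / tau2)

v_zero = v_tot(0.0, A=0.3, tau2=0.9)    # full visibility at zero delay
v_late = v_tot(15.0, A=0.3, tau2=0.9)   # plateau set by the coherent fraction

# Inverting the model at a single long delay recovers A exactly:
x = exp(-15.0 / 0.9)
A_est = (v_late - x) / (1.0 - x)
```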
The QD coherence time values $\tau_2$ extracted from the fits are plotted in Fig. \ref{fig:coherence} (c) (green data points). These values now allow us to extract the transition lifetime $T_1$ from the fitting parameter $\eta$ shown in Fig. \ref{fig:g2} (d). The resulting values are given in Fig. \ref{fig:coherence} (c) as well. Averaging over the obtained $T_1$ values, we obtain the Fourier limit $2T_1$ (grey dashed line in Fig. \ref{fig:coherence} (c)), with the error (grey shaded region) given by the standard deviation of $T_1$ values. For all but the highest excitation power, the coherence times are within the error bars of the Fourier limit, meaning that for resonantly generated single photons from this source, the Fourier limit is actually the lower bound of observed coherence times. Such Fourier-limited emission has previously only been reported around 900 nm \cite{Kuhlmann.2015, XingDing.2016, Wang.2016, Somaschi.2016}.
Finally, we plot the values for $A$ as a function of laser intensity, and compare them to the values obtained from the HOM measurements described above. As seen in Fig. \ref{fig:coherence} (d), the two methods give values that largely agree. Theory predicts that the coherent fraction $F_{CS}$ depends on the driving intensity as follows \cite{Phillips.2020}:
\begin{equation}
F_{CS} = \frac{1}{1+\frac{\Omega^2}{\gamma}},
\end{equation}
where $\gamma$ is the natural linewidth of the quantum dot. This model fits our data reasonably well [Fig. \ref{fig:coherence} (d)]. Residual deviations can be attributed to imperfect calibration of the HOM interferometer as well as drifts in the setup leading to differing effective excitation intensities, or to phonon sideband contributions, which ultimately limit the coherently scattered fraction of photons \cite{Koong.2019, Brash.2019}. While a precise determination of this contribution is left for a future study, the high degree of elastic scattering as well as the $T_2$ times near the Fourier limit suggest that this process is of limited importance in our InAs/InP QDs.
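Evaluating the quoted expression, treating $\Omega^2/\gamma$ as a single dimensionless driving parameter (hypothetical helper, illustrative values):

```python
# Coherent fraction as quoted above: F_CS = 1 / (1 + Omega^2 / gamma),
# with `drive` standing in for the dimensionless ratio Omega^2 / gamma.
def coherent_fraction(drive):
    return 1.0 / (1.0 + drive)

weak = coherent_fraction(0.01)     # weak driving: emission almost fully coherent
half = coherent_fraction(1.0)      # F_CS = 1/2 when Omega^2 = gamma
strong = coherent_fraction(100.0)  # strong driving: mostly inelastic scattering
```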
\section{Indistinguishability of photons separated by 25 km of fibre}
Next, we make direct use of the minimal attenuation in fibre at the emission wavelength of our QDs and measure the indistinguishability by adding a 25-km fibre delay to one of the arms of the interferometer shown in Fig. \ref{fig:HOM} (a). At 0.173 dB/km for the standard SMF-28 Ultra used, this attenuates the signal in the long arm by 4.37 dB, or a factor of 2.74, compared to the short arm. For comparison, the $\sim$4 dB/km attenuation in specialized fibre at 900 nm would still result in a signal attenuation by 10 orders of magnitude, making an indistinguishability measurement all but impossible.
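A quick check of the quoted link-budget figures (the conversion helper below is hypothetical; the 4.37 dB total loss and 4 dB/km comparison value are taken from the text):

```python
# Convert a power loss in dB to a linear attenuation factor.
def db_to_factor(db):
    return 10.0 ** (db / 10.0)

factor_c_band = db_to_factor(4.37)        # quoted 25-km loss at C-band -> ~2.74
factor_900nm = db_to_factor(4.0 * 25.0)   # ~4 dB/km at 900 nm over 25 km
orders_of_magnitude = 4.0 * 25.0 / 10.0   # 100 dB -> 10 orders of magnitude
```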
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{Fig4_TM.pdf}
\caption{\textbf{HOM measurements after 25 km of fibre.} (a) HOM measurement for dot 2 without the long fibre delay. (b) Visibility calculated from (a). (c) Autocorrelation measurements passing one arm of the HOM interferometer through 25 km of fibre. (d) Visibility resulting from (c).}
\label{fig:25kmHOM}
\end{figure*}
For the subsequent measurement, a separate QD was used. To establish a baseline, we first measure the indistinguishability without the extra delay, in the same configuration as above. Fig. \ref{fig:25kmHOM} (a) shows the measured correlations for the co- and cross-polarized cases, where the co-polarized data has been normalized taking into account the coherently scattered fraction of the light as above. The extracted visibility is shown in \ref{fig:25kmHOM} (b). At 0.77$\pm$0.06, the maximum is slightly higher than the values measured for QD 1, likely due to better experimental overlap of the mode functions.
Next, we insert a 25-km fibre spool into the long arm of the interferometer. The measured relative attenuation of this arm is 5.8 dB and includes extra connectors as well as a switch for polarization calibration. The correlation measurements resulting from this configuration are shown in Fig. \ref{fig:25kmHOM} (c). It is immediately obvious that the side dips seen in the short-delay configuration have disappeared, as they have shifted out of our measurement window to a time delay of $\sim 95$ $\upmu$s, corresponding to the extra distance traveled in the fibre. We can therefore no longer use the measured depth of the side dips to extract the coherent fraction, and have to rely instead on the short-delay measurement performed at a similar excitation power. We further note that the cross-polarized correlation curve at zero delay dips well below the 50\% mark expected for a balanced interferometer, as seen in the inset to Fig. \ref{fig:25kmHOM} (c) showing a zoom around the zero-delay dips. This is due to the extra attenuation in one of the two arms, which makes correlations arising from two photons that both traveled the short arm dominant compared to the other contributions in Fig. \ref{fig:HOM} (b). For the measured attenuation values, we expect $g^{(2)}_{cross}(0)=0.33 \pm 0.03$, in reasonable agreement with the fitted value of 0.27$\pm$0.02. The resulting visibility is shown in Figure \ref{fig:25kmHOM} (d), and we obtain a value of 0.72$\pm$0.15. The extra fibre therefore results in a drop of visibility by about 6.5\%, meaning that even after $\sim$100 000 excitation cycles, the transition remains little affected by additional spectral wandering and other slow dephasing processes, demonstrating the stability of our QD transition on the timescale of $95$ $\upmu$s.
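One way to reproduce the quoted zero-delay estimate is to weight the four path combinations by the arm transmissions: at zero delay the same-arm pairs are fully antibunched, leaving only the two cross-arm contributions. A sketch under this assumption (not the full fit used for the data):

```python
# Cross-polarized zero-delay coincidences for an imbalanced interferometer:
# same-arm pairs (weights 1 and r^2) are antibunched at tau = 0, so only the
# two cross-arm combinations (weight r each) remain, normalized to the
# large-delay baseline (1 + r)^2.
def g2_cross_zero(extra_loss_db):
    r = 10.0 ** (-extra_loss_db / 10.0)   # long-arm/short-arm transmission ratio
    return 2.0 * r / (1.0 + r) ** 2

balanced = g2_cross_zero(0.0)   # -> 0.5, the balanced-interferometer value
spool = g2_cross_zero(5.8)      # -> ~0.33 for the measured 5.8 dB imbalance
```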
\section{Conclusion}
In conclusion, we have used HOM measurements as well as direct measurements of emission coherence to show that, except for very high driving powers, the coherence time of scattered photons is at least equal to the Fourier limit, and exceeds it considerably for low driving powers due to a large fraction of coherently scattered photons. For this particular QD, the Fourier limit for inelastically scattered photons is reached even without any additional Purcell enhancement or active feedback on the quantum dot. We further measure indistinguishability of photons emitted $\sim95$ $\upmu$s apart by guiding one of the photons through a 25-km fibre spool, a measurement infeasible with quantum emitters at shorter wavelengths. While spectral wandering processes affecting the QDs on the timescale of seconds currently limit the achieved visibilities, we expect that placing the QDs in doped structures similar to our earlier measurements \cite{Anderson.2021} will readily improve indistinguishability.
A further outstanding challenge hampering network integration of C-band quantum dots is the limited extraction efficiency. Recent proposals integrating QD sources into circular Bragg gratings or micropillar structures in the telecom C-band show that coupling efficiencies into single-mode fibres of up to around 80\% are possible, while at the same time providing Purcell enhancement by factors of 10-43 \cite{Barbiero.2022, Bremer.2022}. Seeing that under non-resonant excitation the investigated QD has a coherence time only about a factor of three higher than the average in InAs/InP QDs \cite{Anderson.2021}, we expect that combining resonant excitation with appropriate photonic engineering will enable the majority of the QDs to perform at the Fourier limit with high efficiency. Such a device would be a desirable hardware component for quantum network applications ranging from simple point-to-point quantum key distribution to distributed quantum computation tasks based on interference of entangled photons linking remote locations.
\begin{acknowledgments}
The authors gratefully acknowledge the usage of wafer material developed during earlier projects in partnership with Andrey Krysa and Jon Heffernan at the National Epitaxy Facility and at the University of Sheffield. They further acknowledge funding from the Ministry of Internal Affairs and Communications, Japan, via the project ‘Research and Development for Building a Global Quantum Cryptography Communication Network’. L. W. gratefully acknowledges funding from the EPSRC and financial support from Toshiba Europe Limited. \\
\end{acknowledgments}
\section{Introduction}
It is well known that anti-de Sitter (AdS) gravity coupled to a scalar with mass at or slightly above the
Breitenlohner-Freedman (BF) bound \cite{Breitenlohner82} admits a large class of boundary conditions, defined by an
essentially arbitrary real function $W$, for which the conserved charges are well defined and finite
\cite{Henneaux02,Henneaux04,Hertog04,Hertog05,Henneaux06,Amsel06}.
Theories of this type have been called designer gravity theories \cite{Hertog05}, because their dynamical properties depend
significantly on the choice of $W$. For example, one can essentially preorder the number and masses of solitons or of black holes with scalar hair in these theories, simply by choosing the appropriate boundary condition function $W$. Designer gravity
theories also have interesting cosmological applications, because certain $W$ admit solutions where smooth
asymptotically AdS initial data evolve to a big crunch singularity \cite{Hertog03c,Hertog04b}. In supergravity theories
with a dual conformal field theory (CFT) description one can study the quantum nature of this singularity\footnote{See \cite{Hertog05b,Elitzur06} for recent work on this.} using the AdS/CFT correspondence \cite{Aharony00}.
The AdS/CFT duality relates $W$ to a potential term $\int W({\cal O})$ in the dual CFT action, where ${\cal O}$ is the field
theory operator that is dual to the bulk scalar \cite{Witten02, Berkooz02}. This led \cite{Hertog05} to conjecture that (a) there is a lower bound on the gravitational energy in those designer gravity theories where $W$ is bounded from below, and that (b) the solutions locally minimizing the energy are given by the spherically symmetric, static soliton configurations found in
\cite{Hertog05}.
Following these conjectures, the stability of designer gravity theories has been studied using purely gravitational
arguments. A lower bound on the conserved energy in terms of the global minimum of $W$ was established in \cite{Hertog05c} for a consistent truncation of ${\cal N}=8$ $D=4$ gauged supergravity that has several $m^2=-2$ scalars.
This bound was obtained by relating the Hamiltonian charges to the spinor
charges, which were shown to be positive for all $W$. It was further conjectured in \cite{Hertog05c} that this result should generalize to all designer gravity theories in $d$ dimensions where the scalar potential $V$ arises from a superpotential $P$, and the scalar reaches an extremum of $P$ at infinity. A more detailed derivation that seemed to confirm this was subsequently given in \cite{Amsel06}.
In this paper, however, we present negative mass solutions in theories where $V$ can be written in terms of a
superpotential $P$ for boundary conditions specified by a positive function $W$. These solutions are constructed by
conformally rescaling spherical static solitons that obey designer gravity boundary conditions which preserve the full
AdS symmetry group. Since the energy can be made arbitrarily small, these findings suggest that only a subclass of
superpotentials have a stable (purely bosonic) ground state when $W$ is bounded from below.
\section{Designer Gravity}
\subsection{Tachyonic Scalars in AdS}
We consider gravity in $d \geq 4$ spacetime dimensions minimally coupled to a scalar
field with potential $V(\phi)$. So the action is
\begin{equation} \label{act}
S=\int d^d x \sqrt{-g} \left [{1\over 2} R - {1\over 2}(\nabla\phi)^2 -V(\phi)\right ]
\end{equation}
where we have set $8\pi G=1$. We require that the potential can be written as
\begin{equation} \label{superpot}
V(\phi) = (d-2) P'^2 - (d-1)P^2
\end{equation}
for some function $P(\phi)$. Scalar potentials of this form arise in the context of $N=1$ supergravity
coupled to $N=1$ matter, in which case $P(\phi)$ is the superpotential. We are interested in configurations
where $\phi$ asymptotically approaches a positive local minimum of $P$ at $\phi=\phi_0$. An extremum
of $P$ is always an extremum of $V$, and (\ref{superpot}) implies that $V_0=V(\phi_0) < 0$.
Hence $\phi=\phi_0$ corresponds to an anti-de Sitter solution, with metric
\begin{equation} \label{adsmetric}
ds^2_0 = \bar g_{\mu \nu} dx^{\mu} dx^{\nu}=
-(1+{ r^2 \over l^2})dt^2 + {dr^2\over 1+{ r^2 \over l^2}} + r^2 d\Omega_{d-2}
\end{equation}
where the AdS radius is given by
\begin{equation}
l^2 = -{(d-1)(d-2) \over 2 V_0}
\end{equation}
At an extremum of $P$ one has
\begin{equation}\label{ddsuperpot}
V'' = 2P'' \left[ (d-2)P'' -(d-1)P\right]
\end{equation}
so a positive local minimum of $P$ corresponds to a minimum of $V$ only when $(d-2)P_0'' > (d-1)P_0$.
This is a quadratic equation for $P''$, which has a real solution if and only if
\begin{equation}
V'' \geq - {(d-1)^2 P^2 \over 2(d-2)} = {(d-1)V_0 \over 2(d-2)} = - {(d-1)^2 \over 4l^2}
\end{equation}
Hence we recover the BF bound $m^2_{BF}= - {(d-1)^2 \over 4l^2}$ on the scalar mass, which is required for the AdS
solution to be perturbatively stable.
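Spelling out the intermediate step: setting $V''(\phi_0)=m^2$ in (\ref{ddsuperpot}), with $P=P_0$, gives a quadratic equation for $P_0''$,
\begin{equation}
2(d-2)P_0''^2 - 2(d-1)P_0 P_0'' - m^2 = 0
\quad\Longrightarrow\quad
P_0'' = \frac{(d-1)P_0 \pm \sqrt{(d-1)^2 P_0^2 + 2(d-2)\, m^2}}{2(d-2)},
\end{equation}
and the reality of the square root is precisely the condition $m^2 \geq -(d-1)^2 P_0^2/2(d-2)$.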
We will focus on positive extrema of superpotentials where
\begin{equation} \label{Wrange}
(d-1)P_0< 2(d-2)P_0'' < (d+1)P_0
\end{equation}
These correspond to tachyonic scalars in AdS with mass $m^2$ in the range
\begin{equation} \label{range}
m^2_{BF} < m^2 < m^2_{BF}+{ 1 \over l^2}<0.
\end{equation}
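The correspondence between the window (\ref{Wrange}) and the mass range (\ref{range}) can be checked numerically from (\ref{ddsuperpot}), a sketch using $l^2=(d-2)/2P_0^2$, which follows from $V_0=-(d-1)P_0^2$:

```python
# Check that the endpoints of the superpotential window (Wrange) map onto
# the mass range (range): evaluate V'' from Eq. (ddsuperpot) at
# 2(d-2)P0'' = (d-1)P0 and 2(d-2)P0'' = (d+1)P0, and compare with
# m^2_BF and m^2_BF + 1/l^2 respectively.
def check_endpoints(d, P0=1.0):
    l2 = (d - 2) / (2.0 * P0 ** 2)          # from V0 = -(d-1) P0^2
    m2_bf = -(d - 1) ** 2 / (4.0 * l2)      # BF bound

    def Vpp(Ppp):
        return 2.0 * Ppp * ((d - 2) * Ppp - (d - 1) * P0)

    lower = Vpp((d - 1) * P0 / (2.0 * (d - 2)))
    upper = Vpp((d + 1) * P0 / (2.0 * (d - 2)))
    return lower - m2_bf, upper - (m2_bf + 1.0 / l2)

# Both differences vanish in any dimension, e.g.:
diffs4 = check_endpoints(4)
diffs5 = check_endpoints(5)
```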
Solutions to the linearized wave equation $\nabla^2 \phi - m^2 \phi=0$ for tachyonic scalars in an AdS background,
with harmonic time dependence $e^{-i\omega t}$, all fall off asymptotically like\footnote{For fields that saturate the
BF bound, $\lambda_{+}=\lambda_{-} \equiv \lambda$ and $\phi = {\alpha \over r^{\lambda}}\ln r + {\beta \over r^{\lambda}}$.}
\begin{equation}\label{genfall}
\phi -\phi_0= {\alpha \over r^{\lambda_{-}}} + {\beta \over r^{\lambda_{+}}}
\end{equation}
where $\alpha$ and $\beta$ are functions of $t$ and the angles and
\begin{equation}\label{fallofftest}
\lambda_\pm = {d-1 \over 2} \pm {1 \over 2} \sqrt{(d-1)^2 + 4l^2 m^2}.
\end{equation}
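For instance, the $m^2=-2$ scalars of the $D=4$ gauged supergravity truncation mentioned in the introduction have (with $l=1$) $\lambda_-=1$ and $\lambda_+=2$; a quick evaluation of (\ref{fallofftest}):

```python
from math import sqrt

# Fall-off exponents lambda_pm of Eq. (fallofftest).
def fall_offs(d, m2, l=1.0):
    disc = sqrt((d - 1) ** 2 + 4.0 * l ** 2 * m2)
    return ((d - 1) / 2.0 - disc / 2.0, (d - 1) / 2.0 + disc / 2.0)

lam_minus, lam_plus = fall_offs(4, -2.0)   # -> (1.0, 2.0)
```

Note that $\lambda_-+\lambda_+=d-1$ for any mass in the allowed range.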
When the scalar mass lies in the range (\ref{range}) both modes are normalizable and hence
should a priori represent physically acceptable fluctuations. To have a well defined theory, however,
one must specify boundary conditions at spacelike infinity.
In general this amounts to choosing a functional relation between $\alpha$ and $\beta$.
The standard choice of boundary condition corresponds to taking $\alpha=0$ in (\ref{genfall}).
Taking in account the self-interaction of the scalar field, as well as its backreaction on the geometry,
one finds this is consistent with the usual set of asymptotic conditions
on the metric components that is left invariant under $SO(d-1,2)$ \cite{Henneaux85}. In particular, writing
the metric as $g_{\mu \nu} = \bar g_{\mu \nu} +h_{\mu \nu}$, the asymptotic behavior of the gravitational
fields is given by
\begin{equation} \label{asmetric}
h_{rr}={\cal O}(r^{-(d+1)}), \qquad h_{rm}={\cal O}(r^{-d}), \qquad h_{mn}={\cal O}(r^{-d+3})
\end{equation}
where the indices $m,n$ label the time coordinate $t$ and the $d-2$ angles.
Furthermore, the charges that generate the asymptotic symmetries $\xi^{\mu}$
involve only the metric and its derivatives\footnote{There is a finite contribution to the conserved charges from the scalar field
if this saturates the BF bound \cite{Hertog03c}.}. They are given by \cite{Henneaux85}
\begin{equation}\label{gravch}
{\cal H}_{G} [\xi] = \frac{1}{2}\oint d^{d-2} S_i
\bar G^{ijkl}(\xi^\perp \bar{D}_j h_{kl}-h_{kl}\bar{D}_j\xi^\perp)
\end{equation}
where $G^{ijkl}={1 \over 2} g^{1/2} (g^{ik}g^{jl}+g^{il}g^{jk}-2g^{ij}g^{kl})$,
$h_{ij}=g_{ij}-\bar{g}_{ij}$ is the deviation from the spatial $AdS$ metric $\bar{g}_{ij}$,
$\bar{D}_i$ denotes covariant differentiation with respect to $\bar{g}_{ij}$ and
$\xi^\perp = \xi \cdot n$ with $n$ the unit normal to the surface.
One can, however, choose different scalar boundary conditions, defined by $\alpha \neq 0$ and $\beta = \beta(\alpha)$
\cite{Henneaux02,Henneaux04,Hertog04,Hertog05,Henneaux06,Amsel06}. Such asymptotic conditions in general break
the
AdS symmetry to $\Re \times SO(d-1)$ - the asymptotic scalar profile changes under the action of $\xi^r$ - but the
conserved charges associated with the remaining asymptotic symmetries are well defined and finite.
The dynamical properties of designer gravity theories, however, depend significantly on $\beta(\alpha)$.
Here we are primarily concerned with the positivity properties of the conserved energy, or more generally with the
conditions that the superpotential $P$ and the boundary condition function $\beta(\alpha)$ must satisfy for the theory to have a stable
ground state. But first we briefly review the asymptotics and the construction of conserved charges in designer gravity.
\subsection{Asymptotics and Conserved Charges}
The backreaction of the $\alpha$-branch of the scalar field, as well as its self-interaction, causes the metric
components $h_{rm}$ to fall off slower than usual. A complete analysis of the asymptotics is given in
\cite{Henneaux06}, where it is shown that the asymptotic behavior of the scalar and the gravitational fields
in designer gravity depends not only on the scalar mass $m^2$, but in general also on the cubic, quartic and
even quintic terms in the scalar potential. It is also found that the asymptotic fields generally develop
logarithmic branches for `resonant' scalars\footnote{See \cite{Banados06} for recent work in the context of the AdS/CFT
correspondence on resonant scalars with $\alpha \neq 0$ boundary conditions.},
i.e. at integer values of the ratio $\lambda_{+}/\lambda_{-}$.
For our purposes, however, it will be sufficient to restrict attention to even scalar potentials of the form
\begin{equation} \label{potgen}
V(\phi) = \Lambda + {1 \over 2} m^2 \phi^2 +{1 \over 4} C\phi^4 +{\cal O}(\phi^6) + ...
\end{equation}
with $m^2$ in the range (\ref{range}) and $\Lambda= -{(d-1)(d-2) \over 2}$ so that $l^2 =1$. $C$ is taken to
be a free parameter for $m^2 < -{3(d-1)^2 \over 16 l^2}$, but $C={(d-1) \lambda_{-}^2 \over 8 (d-2)}$ otherwise.
For these scalar potentials the analysis of the asymptotics given in \cite{Hertog04} applies.
In particular, the asymptotic scalar profile, when one takes in account its backreaction on the geometry,
is given by (\ref{genfall}), with $\beta(\alpha)$. The corresponding asymptotic behavior of the metric
components that allows the construction of well defined and finite Hamiltonian generators is given by
\begin{equation} \label{4-grr}
h_{rr}= -\frac{\alpha^2 \lambda_{-}}{(d-2)} l^2r^{-2-2\lambda_{-}}+ {\cal O} (r^{-(d+1)}), \quad
h_{rm} = {\cal O}(r^{-d+2})
\end{equation}
The expression for the conserved charges depends on the asymptotic behavior of the fields, and is defined
as follows. Let $\xi^{\mu}$ be an asymptotic Killing vector field. The Hamiltonian takes the form
\begin{equation}
{\cal H}[\xi] = \int_{\Sigma} \xi^{\mu} C_{\mu} + {\rm surface} \ {\rm terms}
\end{equation}
where $\Sigma$ is a spacelike surface, $C_{\mu}$ are the usual constraints, and the surface terms should be
chosen so that the variation of the Hamiltonian is well defined. The variation of the usual gravitational
surface term (\ref{gravch}) diverges as $\sim r^{d-(2\lambda_{-}+1)}$ if $\alpha \neq 0$,
but there is an additional contribution to the variation of the surface terms that involves the scalar field
\begin{equation}\label{scalch}
\delta {\cal H}_\phi[\xi] =-\oint \xi^\perp \delta \phi D_i \phi dS^i
\end{equation}
which exactly cancels the divergence of the gravitational term \cite{Hertog04,Hertog05,Henneaux06,Amsel06}.
The total charge can therefore be integrated, which yields
\begin{equation}\label{ch}
{\cal H} [\xi] = {\cal H}_{G}[\xi]+ { \lambda_{-} \over (d-2)} r^{d-(2\lambda_{-}+1)} \oint \alpha^2 d\Omega
+(\lambda_{+} - \lambda_{-})\oint \left[ {\lambda_{-} \over (\lambda_{+} - \lambda_{-})} \alpha\beta
+ W(\alpha) \right]d\Omega
\end{equation}
where we have defined a smooth function
\begin{equation} \label{bc}
W(\alpha) = \int_0^\alpha \beta(\tilde \alpha) d \tilde \alpha
\end{equation}
which specifies the choice of boundary conditions in designer gravity.
For spherically symmetric solutions, therefore, a manifestly finite expression for the mass ${\cal H} [\partial_t] $ is given by
\begin{equation}\label{mass}
M = {\rm Vol}(S^{d-2}) \left[{(d-2) \over 2} M_0 + \lambda_{-} \alpha\beta +(\lambda_{+} - \lambda_{-}) W \right]
\end{equation}
where $M_0$ is the coefficient of the $1/r^{d+1}$ term in the asymptotic expansion (\ref{4-grr})
of $h_{rr}$.
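A toy evaluation of the mass formula (hypothetical helper; illustrative numbers for $d=4$, $m^2=-2$, so $\lambda_-=1$, $\lambda_+=2$ and ${\rm Vol}(S^2)=4\pi$):

```python
from math import pi

# Eq. (mass) for spherically symmetric solutions:
# M = Vol(S^{d-2}) [ (d-2)/2 M0 + lambda_- alpha beta + (lambda_+ - lambda_-) W(alpha) ]
def soliton_mass(d, lam_m, lam_p, M0, alpha, beta, W_alpha, vol):
    return vol * ((d - 2) / 2.0 * M0 + lam_m * alpha * beta + (lam_p - lam_m) * W_alpha)

# With standard alpha = 0 boundary conditions, W = 0 and only the purely
# gravitational term survives (illustrative M0 value):
m_standard = soliton_mass(4, 1.0, 2.0, M0=0.5, alpha=0.0, beta=0.7, W_alpha=0.0, vol=4 * pi)
```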
\subsection{Stability and Ground State}
By generalizing Witten's spinorial proof of the Positive Energy Theorem (PET) for asymptotically flat
spacetimes \cite{Witten81}, it has been shown \cite{Townsend84} (see also \cite{Gibbons83}) that for
the case of a single scalar field and with standard $\alpha=0$
scalar boundary conditions, a potential $V(\phi)$ admits a PET {\it if and only if} $V$
can be written in terms of a 'superpotential' $P(\phi)$, and $\phi$ approaches an extremum of $P$ at infinity.
This result obviously concerns the positivity properties of the energy as defined by the spinor charge
\begin{equation} \label{spinorcharge}
{\mathcal Q}_\xi = \oint *{\bf B}
\end{equation}
where the integrand is the dual of the Nester 2-form, with components
\begin{equation}\label{2form}
{B}_{ab} =
\frac{1}{2}(\overline \psi \gamma^{[c} \gamma^d \gamma^{e]} \widehat \nabla_e \psi
+ {\rm h.c.})\epsilon_{abcd} \,
\end{equation}
$\psi$ is taken to be an asymptotically supercovariantly constant spinor field and
\begin{equation}\label{covder}
\widehat \nabla_a \psi
= \left[ \nabla_a - {1 \over \sqrt{2(d-2)}} \gamma_a P(\phi) \right]\psi
\end{equation}
with $P(\phi)$ given by (\ref{superpot}). This definition of the covariant derivative enabled \cite{Townsend84} to
express the spinor charge (\ref{spinorcharge}) as a manifestly non-negative quantity, provided $\psi$
satisfies the spatial Dirac equation $\gamma^i \widehat D_i \, \psi = 0$. In the context of $N=1$ supergravity $P$ is the superpotential, but the argument of \cite{Townsend84} applies to any gravity plus scalar theory, irrespective of whether it is
a sector of a supergravity theory.
Townsend's Positive Energy Theorem \cite{Townsend84} establishes the positivity of the Hamiltonian generator
(\ref{gravch}), because this equals the spinor charge for $\alpha=0$. This is not the case in designer gravity, however,
where the $\alpha$ branch of the scalar modifies the expression of the charges. Indeed it follows from the asymptotic expansion of the spinor field, and the asymptotic expansions of the metric and the scalar field, that the Hamiltonian charges (\ref{ch}) are related to the spinor charges (\ref{spinorcharge}) as \cite{Hertog05c}
\begin{equation}
\label{hq}
{\cal H}_\xi = {\mathcal Q}_\xi +(\lambda_{+} - \lambda_{-}) \oint W (\alpha) d \Omega
\end{equation}
where $ W(\alpha)$ is defined in (\ref{bc}).
The spinor charge, therefore, need not be conserved in designer gravity. Instead it depends on the choice of
the cross section $S^{d-2}$ at infinity, because $\alpha$ is, in general, time dependent. On the other hand, the above calculation that leads from the Witten condition to the positivity of the spinor charge still applies. Taking
$\xi=\partial_t +\omega \partial_{\phi}$, with $\vert \omega \vert <1$, this yields \cite{Hertog05c,Amsel06}
\begin{equation}
E + \omega J \ge (\lambda_{+} - \lambda_{-})\oint W(\alpha) d \Omega
\end{equation}
and therefore
\begin{equation}\label{bound}
E \ge {\rm Vol}(S^{d-2}) (\lambda_{+} - \lambda_{-})\, {\rm inf} \, W + |J|.
\end{equation}
where $J$ is the angular momentum. Hence it would seem to follow that the energy is bounded from below in designer gravity theories (for scalar potentials that arise from a superpotential $P$ and with $m^2$ in the range (\ref{range})) for all asymptotic
conditions (\ref{bc}) that are defined by a function $ W(\alpha)$ that has a global minimum. Furthermore, the inequality (\ref{bound}) suggests that theories where $ W$ is unbounded from below admit smooth solutions with arbitrary negative energy. Such solutions have indeed been shown to exist in certain theories \cite{Hertog04b}. The only subtlety in the derivation of (\ref{bound}) is showing that with $\alpha \neq 0$ boundary conditions, asymptotically supercovariantly constant solutions to $\gamma^i \widehat D_i \, \psi = 0$ exist. This was shown for the consistent truncation of ${\cal N}=8$ $d=4$ gauged supergravity studied in \cite{Hertog05c}, but this has not been demonstrated in general. We return to this point in the conclusion.
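For completeness we spell out the elementary step from the $\omega$-dependent inequality to (\ref{bound}): since the former holds for every $|\omega|<1$, and since $\oint W(\alpha)\, d\Omega \ge {\rm Vol}(S^{d-2})\, {\rm inf}\, W$,
\begin{equation}
E - |J| = \inf_{|\omega| < 1} \left( E + \omega J \right) \ge (\lambda_{+}-\lambda_{-}) \oint W(\alpha)\, d\Omega \ge {\rm Vol}(S^{d-2})\, (\lambda_{+}-\lambda_{-})\, {\rm inf}\, W .
\end{equation}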
Finally we mention that in the case of supergravity theories with a dual field theory description, the AdS/CFT correspondence indicates \cite{Hertog05} that the true ground state of the theory (when $ W$ is bounded from below) is given by the lowest energy spherical soliton. The nature of the ground state has not been established yet, however, using purely gravitational arguments\footnote{The lowest energy soliton does not saturate the lower bound (\ref{bound}), because the actual soliton mass has an additional positive contribution coming from the spinor charge.}.
\subsection{AdS-invariant boundary conditions}
Designer gravity boundary conditions generally break the asymptotic AdS symmetry to $\Re \times SO(d-1)$.
The full AdS symmetry group is preserved, however, for asymptotic conditions defined by
\begin{equation}
W(\alpha)=k \alpha^{(d-1)/\lambda_{-}}
\end{equation}
where $k$ is an arbitrary constant without variation\footnote{We note that this defines AdS-invariant boundary conditions
only for scalar potentials of the form (\ref{potgen}). The expression of conformally invariant asymptotics for
other potentials is given in \cite{Henneaux06,Amsel06}.}. In this case the total charge (\ref{ch}) becomes
\begin{equation}\label{adsch}
{\cal H} [\xi] = {\cal H}_{G}[\xi]+ { \lambda_{-} \over (d-2)} r^{d-(2\lambda_{-}+1)} \oint \alpha^2 d\Omega
+2k\lambda_{+} \oint \alpha^{(d-1)/\lambda_{-}} d\Omega
\end{equation}
which yields the following expression for the mass of spherically symmetric solutions,
\begin{equation}\label{adsmass}
M = {\rm Vol}(S^{d-2}) \left[{(d-2) \over 2} M_0 + 2 k \lambda_{+} \alpha^{(d-1)/\lambda_{-}} \right]
\end{equation}
In the next section we use the conformal rescaling symmetry of this class of boundary conditions to
show there are superpotentials for which the energy bounds (\ref{bound}) do not hold.
\section{Violation of Energy Bounds}
\subsection{Asymptotically AdS Solitons}
Consider the following class of superpotentials in $d=4$ dimensions,
\begin{equation}\label{superpot2}
P(\phi)= (1+{1 \over 2} \phi^2 ) e^{-{A \over 4} \phi^4}
\end{equation}
where $A >0 $ is a free parameter. These yield scalar potentials with a negative maximum at $\phi=0$, and with two global minima at $\phi = \pm \phi_m$. The potential corresponding to (\ref{superpot2}) with $A=1/4$ is plotted in Figure 1. Small fluctuations around $\phi=0$ have $m^2 =-2$, which is above the BF bound and within the range (\ref{range}). Hence asymptotically the scalar generically decays as
\begin{equation}\label{genfall2}
\phi = {\alpha \over r } + {\beta \over r^2}
\end{equation}
and the asymptotic behavior of the $g_{rr}$ metric component reads
\begin{equation}\label{asmetric2}
g_{rr} = {1 \over r^2} - { (1+\alpha^2/2 )\over r^4} +{\cal O} (r^{-5})
\end{equation}
\begin{figure}
\begin{picture}(0,0)
\put(225,200){$V$}
\put(390,136){$\phi$}
\end{picture}
\centerline{\epsfig{file=superpot.eps,width=4.5in}}
\caption{Scalar potential $V$ that can be written in terms of a superpotential $P$ with $P'(0)=0$.}
\label{1}
\end{figure}
We adopt AdS-invariant boundary conditions defined by $ W(\alpha)=0$ everywhere. The conserved mass, therefore, is simply given by the surface integral of the coefficient of the $1/r^5$ term in (\ref{asmetric2}). According to the lower bound (\ref{bound}) this should be positive for all solutions where $\phi$ asymptotically decays as $\phi \sim \alpha/r +{\cal O}(1/r^3)$.
We now show, however, that for a wide range of values of $A$ there are negative mass solutions.
We begin by looking for static spherical soliton solutions of the theory (\ref{superpot2}). Writing the metric as
\begin{equation}
ds^2=-h(r)e^{-2\chi(r)}dt^2+h^{-1}(r)dr^2+r^2d\Omega_2
\end{equation}
the field equations read
\begin{equation}\label{hairy14d}
h\phi_{,rr}+\left(\frac{2h}{r}+\frac{r}{2}\phi_{,r}^2h+h_{,r} \right)\phi_{,r} = V_{,\phi}
\end{equation}
\begin{equation}\label{hairy24d}
1-h-rh_{,r}-\frac{r^2}{2}\phi_{,r}^2h = r^2V(\phi)
\end{equation}
\begin{equation} \label{hairy34d}
\chi_{,r} = -{1 \over 2}r \phi_{,r}^2
\end{equation}
Regularity at the origin requires $h=1$ and $h_{,r}=\phi_{,r}=\chi_{,r}=0$ at $r=0$.
Rescaling $t$ shifts $\chi$ by a constant, so its value at the origin is arbitrary.
Thus solutions can be labeled by the value of $\phi$ at the origin.
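The shooting procedure is straightforward to implement. The Python sketch below (our illustration, not the authors' code) integrates (\ref{hairy14d})--(\ref{hairy24d}) for the $A=1/4$ superpotential (\ref{superpot2}), assuming the $d=4$ relation $V=2P'^2-3P^2$ implicit in (\ref{Weq}), and reads off $\alpha$ and $\beta$ from the falloff (\ref{genfall2}):

```python
import numpy as np
from scipy.integrate import solve_ivp

A = 0.25  # the A = 1/4 superpotential parameter used in the text

def P(p):  return (1 + 0.5*p**2) * np.exp(-A*p**4/4)
def Pp(p): return (p - A*p**3*(1 + 0.5*p**2)) * np.exp(-A*p**4/4)
def V(p):  return 2*Pp(p)**2 - 3*P(p)**2   # assumed d=4 relation V = 2 P'^2 - 3 P^2

def dV(p, eps=1e-6):                        # numerical V_{,phi}
    return (V(p + eps) - V(p - eps)) / (2*eps)

def rhs(r, y):
    phi, psi, h = y                         # psi = phi_{,r}
    hp = (1 - h - 0.5*r**2*psi**2*h - r**2*V(phi)) / r
    phipp = (dV(phi) - (2*h/r + 0.5*r*psi**2*h + hp)*psi) / h
    return [psi, phipp, hp]

def integrate(phi0, rmax=50.0):
    r0 = 1e-3                               # start just off r = 0 with the regular series
    y0 = [phi0 + dV(phi0)*r0**2/6, dV(phi0)*r0/3, 1 - V(phi0)*r0**2/3]
    return solve_ivp(rhs, (r0, rmax), y0, rtol=1e-9, atol=1e-11, dense_output=True)

def alpha_beta(sol, r1=30.0, r2=50.0):
    # fit phi ~ alpha/r + beta/r^2 at two large radii
    p1, p2 = sol.sol(r1)[0], sol.sol(r2)[0]
    M = np.array([[1/r1, 1/r1**2], [1/r2, 1/r2**2]])
    return np.linalg.solve(M, [p1, p2])
```

Scanning $\phi(0)$ and locating the zero of $\beta$ then yields the critical value $\phi_c(0)$ of the soliton shown in Figure 2.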
\begin{figure}
\begin{picture}(0,0)
\put(36,205){$\phi$}
\put(335,20){$r$}
\end{picture}
\centering{\psfig{file=soliton.eps,width=4.5in}}
\caption{Soliton solution $\phi (r)$ with boundary conditions specified by $\beta=0$.}
\label{2}
\end{figure}
One can numerically integrate the field equations. For every nonzero $\phi(0)$ at the origin in the range
$-\phi_m < \phi (0) < \phi_m$, the solution to (\ref{hairy14d}) is asymptotically of the form (\ref{genfall2}).
The staticity and spherical symmetry of the soliton mean $\alpha(t,\Omega)$ and $\beta(t,\Omega)$ are simply
constants. For $A \sim {\cal O}(1)$ we find there is a `critical' value $\phi_c (0)$ for which $\beta=0$, and hence
$\phi \sim \alpha/r +{\cal O}(1/r^3)$ asymptotically. We plot this soliton solution $\phi_{s}(r)$
in Figure 2 for the $A=1/4$ potential. We have found a class of scalar potentials, therefore, that can be derived from a
superpotential and admit regular static spherical soliton solutions for AdS-invariant boundary conditions.
\subsection{AdS solitons imply negative energy}
The existence of scalar solitons with AdS-invariant boundary conditions implies there are negative mass
solutions in these theories. This was shown, using scaling arguments, in \cite{Heusler92} for
non-negative potentials and then generalized to potentials with a negative local maximum in
\cite{Hertog04b}. We emphasize the claim is not that the soliton itself must have negative energy (in general
it has positive mass), but only that negative energy solutions must exist.
To apply the scaling arguments of \cite{Heusler92,Hertog04b} to our case we first need an explicit formula
for the mass of spherically symmetric (and time symmetric) initial data when the scalar field has a profile
$\phi(r)$. In this case, the constraint equations reduce to
\begin{equation}\label{constr}
\ ^{3}{\cal R} = g^{ij}\phi_{,i}\phi_{,j} + 2 V(\phi)
\end{equation}
Writing the spatial metric as
\begin{equation} \label{metric}
ds^2 = \left(1-{m(r)\over r}+r^2 \right)^{-1} dr^2 + r^2 d\Omega_{2}
\end{equation}
the constraint (\ref{constr}) yields the following equation for $m(r)$
\begin{equation} \label{mscalar}
m_{,r} +\frac{1}{2}m(r)r\phi_{,r}^2 = r^2
\left[(V(\phi)-\Lambda)+{1 \over 2} \left(1+ r^2\right)\phi_{,r}^2 \right]
\end{equation}
The general solution for arbitrary $\phi (r)$ is
\begin{equation}\label{gensoln}
m(r) = \int_{0}^{r} e^{-{1\over 2}\int_{\tilde r}^r d\hat r \ \hat r\phi_{,\hat r}^2}
\left[(V(\phi)-\Lambda) +{1 \over 2} \left(1+ \tilde r^2 \right)
\phi_{,\tilde r}^2 \right] \tilde r^{2} d\tilde r.
\end{equation}
Hence the total mass (\ref{ch}) is given by
\begin{equation} \label{totm}
M = 4\pi \lim_{r\to\infty} \left[ m(r) +{ \alpha^2 \over 2} r\right]
\end{equation}
Now suppose $\phi_s(r)$ is a static soliton and consider the one parameter family of configurations
$\phi_\lambda(r) = \phi_s(\lambda r)$. Because of the conformal rescaling symmetry these obey the same boundary
conditions as the soliton. Then from (\ref{gensoln}) and (\ref{totm}), it is easy to see that the total
mass of the rescaled configurations takes the form
\begin{equation}
M_\lambda = \lambda^{-3} M_1 + \lambda^{-1} M_2
\end{equation}
where $M_2$ is independent of the potential and is manifestly positive, and
both $M_i$ are finite and independent of $\lambda$.
Furthermore, because the static soliton extremizes the energy \cite{Sudarsky92} one has
\begin{equation}
0={d M_\lambda\over d\lambda}|_{\lambda=1} = -3 M_1 -M_2
\end{equation}
and hence $M_1=-{1 \over 3} M_2 <0$.
Therefore the contribution to the mass that scales as the volume, which includes the potential and scalar
terms, is negative. This means that rescaled configurations $\phi_\lambda(r)$ with $\lambda < 1/\sqrt{3}$ must
have negative total mass\footnote{We have verified that the rescaled configurations $\phi_\lambda (r)$ are regular initial data
for small $\lambda$, i.e. $h_\lambda(r)= r^2 +1 - {m_\lambda(r) \over r}$ is strictly positive everywhere.},
and hence violate the energy bound
(\ref{bound}). For the soliton solution shown in Figure 2 we find $M_1=-{1 \over 3} M_2=-1/4$, and hence $M=1/2$.
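The quoted numbers amount to a one-line check on the scaling form of $M_\lambda$, with $M_1=-1/4$ and $M_2=3/4$ taken from the text:

```python
import numpy as np

# Trial masses of the rescaled data phi_lambda(r) = phi_s(lambda r):
# M_lambda = lambda^-3 M_1 + lambda^-1 M_2, with M_1 = -1/3 M_2 = -1/4 for the
# A = 1/4 soliton quoted in the text.
M1, M2 = -0.25, 0.75

def M(lam):
    return lam**-3 * M1 + lam**-1 * M2

def dM(lam):                      # dM_lambda / dlambda
    return -3 * lam**-4 * M1 - lam**-2 * M2
```

The soliton ($\lambda=1$) is stationary, its mass is $M_1+M_2=1/2$, and the trial mass changes sign exactly at $\lambda=1/\sqrt{3}$.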
The rescaled configurations are initial data for time dependent solutions.
For sufficiently small $\lambda$ one has a large central region where $\phi$ is essentially constant and away
from an extremum of the potential. Hence one expects the field to evolve to a spacelike singularity.
This singularity cannot be hidden behind an event horizon, because the mass of all spherically
symmetric black holes is larger than the soliton mass. Instead, one expects initial data of this type to
produce a big crunch\footnote{Initial data for which this can be shown rigorously can be constructed from Euclidean
$O(4)$-invariant instanton solutions of the form $ds^2 = { d\rho^2 \over b^2 (\rho)} +\rho^2 d\Omega_3$. The slice through
the instanton obtained by restricting to the equator of the $S^3$ defines time symmetric initial data for a zero mass
Lorentzian solution. With conformally invariant boundary conditions, the evolution of these initial data is simply obtained
from analytic continuation of the instanton geometry. One finds the spacetime evolves like a collapsing FRW universe
\cite{Hertog04b,Hertog05b}. For the $A=1/4$ potential of the form (\ref{superpot2}) we have considered here, the instanton
that obeys $W=0$ boundary conditions has $\phi(0)=0.652$.} \cite{Hertog04b,Hertog05b}.
\subsection{Further Examples}
Finally we show that $W=0$ is not an isolated example of boundary conditions for which the bounds (\ref{bound})
do not hold. Consider AdS gravity coupled to a scalar with $m^2 =-27/16$ in four dimensions, with AdS-invariant
boundary conditions defined by
\begin{equation} \label{genadsbc}
W(\alpha)={ k \over 4} \alpha^4
\end{equation}
where $k$ is an arbitrary constant. According to (\ref{bound}) the theory should satisfy the PET
when $k \geq 0$. We find below, however, that negative mass solutions exist for all $k$.
We concentrate on the following class of potentials,
\begin{equation}\label{pot3}
V(\phi)=-3-{27 \over 32} \phi^2 - {27 \over 256} \phi^4 -{3 \over 4} \phi^6 +B \phi^8
\end{equation}
where $B$ is a free parameter. For positive $B$ these are qualitatively similar to the potentials
we considered above, with a negative maximum at $\phi=0$ and global minima at $\phi = \pm \phi_m$. But
scalar fluctuations around $\phi=0$ now have mass $m^2 =-27/16$, so the scalar generically decays as
\begin{equation}\label{genfall3}
\phi = {\alpha \over r^{3/4} } + k{\alpha^3 \over r^{9/4}}.
\end{equation}
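The stated properties of this family are easy to verify numerically. The sketch below (ours, for illustration) checks the curvature of (\ref{pot3}) at the origin and locates the global minimum; for $B=0.125$ it should reproduce $\phi_m \approx 2.16$, the value quoted below for the soliton curve.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def V(phi, B):
    return -3 - 27/32*phi**2 - 27/256*phi**4 - 3/4*phi**6 + B*phi**8

def m2_numeric(B=0.125, eps=1e-4):
    # curvature of V at the local maximum phi = 0 (should equal -27/16)
    return (V(eps, B) - 2*V(0.0, B) + V(-eps, B)) / eps**2

def phi_min(B):
    # location of the global minimum on phi > 0
    return minimize_scalar(lambda p: V(p, B), bounds=(1.0, 3.0), method="bounded").x
```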
Townsend's result \cite{Townsend84} says that potentials of this form admit the PET for solutions that
asymptotically behave as $\phi \sim 1/r^{9/4}$, if (and only if) $V$ can be derived from a superpotential
$P$ with $P'(0)=0$. To construct the corresponding superpotential one needs to solve
\begin{equation}\label{Weq}
P'(\phi) =\frac{1}{\sqrt{2}}\sqrt{V+3 P^2}
\end{equation}
starting with $P(0)=1$.
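Note that the right hand side of (\ref{Weq}) vanishes at $\phi=0$, so the integration must be started from the series solution. Writing $P=1+c\phi^2+{\cal O}(\phi^4)$ and using $V=-3-{27 \over 32}\phi^2+{\cal O}(\phi^4)$, equation (\ref{Weq}) requires
\begin{equation}
4c^2 - 3c + {27 \over 64} = 0\,, \qquad c = {12 \pm \sqrt{117} \over 32}\,,
\end{equation}
and either root is consistent with the mass of small fluctuations, since $V''(0)=16c^2-12c=4\left(4c^2-3c\right)=-{27 \over 16}$.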
\begin{figure}
\begin{picture}(0,0)
\put(225,200){$V$}
\put(392,159){$\phi$}
\end{picture}
\centerline{\epsfig{file=pot.eps,width=4.5in}}
\caption{The dashed line corresponds to a critical scalar potential $V$ that is on the verge of violating
the Positive Energy Theorem for standard scalar AdS boundary conditions. The full line gives a potential that
arises from a superpotential, yet violates the PET with $W>0$ designer gravity boundary conditions.}
\label{3}
\end{figure}
A solution to (\ref{Weq}) exists unless the quantity inside the square root becomes negative. As we integrate
out from $\phi=0$, $P$ is increasing and the square root remains real because the scalar satisfies the BF bound.
For sufficiently large values of $B$ the global minima at $\pm \phi_m$ will not be very much lower than the local
maximum at $\phi=0$, so a global solution for $P$ will exist and $P'(\phi_m)>0$. This is expected,
since the PET (for $\alpha=0$) holds for potentials of this form.
If the global minima are too deep, however, the quantity under the square root will become negative before
the global minimum is reached, and a real solution will not exist. Clearly the critical potential corresponds
to one where $V+3 P^2$ just vanishes as the global minimum is reached. In other words, the condition for a
potential $V$ to be on the verge of violating the PET is simply $P'(\phi_m)=0$.
We find that the critical potential of the form (\ref{pot3}) has $B_c=0.1138$. For $B < B_c$ the PET does
not hold for solutions where $\phi \rightarrow 0$ at infinity, whereas scalar potentials (\ref{pot3}) with
$B \geq B_c$ can be written in terms of a superpotential, and hence admit the PET for solutions where
$\phi \sim 1/r^{9/4}$ asymptotically. We plot the critical potential in Figure 3
(dashed curve), as well as the $B=0.125$ potential whose properties we discuss in more detail below.
In the regime where a superpotential exists, the lower bounds (\ref{bound}) would imply that the
theory should satisfy the PET not only for $\alpha=0$, but also for generalized
AdS-invariant boundary conditions (\ref{genadsbc}) with $k \geq 0$. We now show, however, there are
$B > B_c$ for which (\ref{pot3}) admits exactly one regular static spherical soliton solution for
all $k \geq 0$. This means the bounds (\ref{bound}) cannot hold, because one can again
conformally rescale the asymptotically AdS solitons to construct negative mass initial data.
The set of soliton solutions of a particular potential with a negative maximum is found by integrating the
field equations (\ref{hairy14d})-(\ref{hairy34d}) for different values of $\phi$ at the origin.
For $\phi(0)$ in the range $-\phi_m < \phi (0) < \phi_m$ the scalar asymptotically behaves as (\ref{genfall3})
so we get a point in the $(\alpha,\beta)$ plane. Repeating for all $\phi (0)$ yields a curve $\beta_{s}(\alpha)$. Given a
choice of boundary condition $\beta(\alpha)$, the allowed solitons are simply given by the points where the soliton
curve intersects the boundary condition curve: $\beta_{s}(\alpha)=\beta(\alpha)$.
\begin{figure}
\begin{picture}(0,0)
\put(90,201){$\beta$}
\put(388,22){$\alpha$}
\end{picture}
\centerline{\epsfig{file=pot-alphabeta.eps,width=4.5in}}
\caption{The function $\beta_{s}(\alpha)$ obtained from the solitons.}
\label{4}
\end{figure}
A section of the soliton curve $\beta_{s}(\alpha)$ for the $B=0.125$ potential is plotted in Figure 4.
Along the curve $\phi (0)$ increases from $\phi(0) \approx 1.32$, which corresponds to $\beta_{s}=0$, to the global
minimum at $\phi_m=2.16$ where $\beta_{s} \rightarrow \infty$. One sees that for all $k \geq 0$ the soliton curve
$\beta_{s}(\alpha)$ has precisely one intersection point with the boundary condition function $\beta=k\alpha^3$.
Hence the conformally rescaled configurations $\phi_{\lambda}( r)=\phi_{s}(\lambda r)$ provide, for $\lambda < 1/\sqrt{3}$,
examples of negative mass initial data.
When $\phi \rightarrow \phi_m$ one has $\alpha \rightarrow 0.2$ in the $B=0.125$ theory. This limiting value of $\alpha$
decreases towards zero, however, for $B \rightarrow B_c$. Furthermore, when $B < B_c$ the soliton curve intersects the
$\alpha=0$ axis at finite
$\beta$, yielding a regular asymptotically AdS soliton solution for standard $\alpha=0$ boundary conditions.
This is not surprising, because the potential cannot be derived from a superpotential when $B <B_c$, and hence the
PET cannot hold \cite{Townsend84}.
\section{Conclusion}
We have studied the stability of designer gravity theories, where one considers AdS gravity coupled to a scalar
field with mass at or slightly above the BF bound and with boundary conditions specified by an essentially arbitrary
function $W$.
By conformally rescaling spherical static solitons that obey AdS-invariant boundary conditions specified by a
non-negative function $W$, we have constructed solutions with arbitrary negative mass in a class of theories
where the scalar potential $V$ arises from a superpotential $P$, and $\phi$ reaches an extremum of $P$ at infinity.
These solutions violate the lower bounds (\ref{bound}) on the conserved energy that were obtained in \cite{Amsel06}, and
they indicate that this class of theories does not have a stable ground state. We expect that similar instabilities can
be found in designer gravity theories in $d>4$ dimensions, and for boundary conditions $W$ that break the asymptotic
AdS symmetry to $\Re \times SO(d-1)$.
The derivation of the lower bounds (\ref{bound}) relies crucially on the positivity of the spinor charge
in designer gravity. Our findings suggest, therefore, that superpotentials for which these bounds do not hold, do not
admit asymptotically supercovariantly constant spinor solutions to the spatial Dirac equation, at least for some
designer gravity boundary conditions. This argument has been advanced long ago in \cite{Hawking83}.
It would be interesting to clarify this point, and to identify the precise criteria that $P$ and $W$
must satisfy in order for these spinor solutions to exist.
In this context we should mention that we have found no examples of supergravity theories that violate the energy bounds
and that have a dual description in terms of a field theory which is supersymmetric for $W=0$. Hence the
positive energy conjectures of \cite{Hertog05} appear to be correct when restricted to this class of theories
with an AdS/CFT dual. In fact, the
lower bounds (\ref{bound}) seem rather natural from the point of view of the dual field theory. Remember that
imposing $ W \neq 0$ boundary conditions on one (or several) tachyonic bulk scalars corresponds to adding a
potential term $\int W({\cal O})$ to the dual CFT action, where ${\cal O}$ is the field theory operator that is
dual to the bulk scalar \cite{Witten02, Berkooz02}.
The change in the energy under this deformation is $\oint \langle W({\cal O}(x))\rangle d \Omega$, which in the
large $N$ limit - which corresponds to the supergravity approximation - reduces to $\oint W(\langle {\cal O} \rangle) d \Omega$.
This clearly leads to (\ref{bound}) provided all configurations in the dual CFT with $ W=0$ satisfy $E \geq |J|$.
The AdS/CFT correspondence even suggests one should be able to generalize the bounds (\ref{bound}) to
certain classes of $W$ that are unbounded from below. Indeed, the precise correspondence between solitons and field theory
vacua is captured by the following function \cite{Hertog05},
\begin{equation} \label{effpot}
{\cal V}(\alpha) = -\int_{0}^{\alpha} \beta_{s} (\tilde \alpha) d\tilde \alpha + W(\alpha)
\end{equation}
where $\beta_{s}(\alpha)$ is the function obtained from the set of soliton solutions. It can be shown \cite{Hertog05} that for any
$W$ the location of the extrema of ${\cal V}$ yield the vacuum expectation values $ \langle {\cal O} \rangle = \alpha$, and that
the value of ${\cal V}$ at each extremum yields the energy of the corresponding soliton.
This suggests there should be a lower bound on the energy in all designer gravity theories where
${\cal V}(\alpha)$ has a global minimum. For this it is sufficient that $\beta_{s} <W' $ at large $\alpha$. This includes a class of
boundary condition functions $W$ that are unbounded from below, since $\beta_{s}(\alpha) <0$ for $\alpha >0$ in theories where
(\ref{bound}) holds.
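This variational characterization can be illustrated with a toy soliton curve (the linear $\beta_s$ below is hypothetical, not the curve of Figure 4): the extrema of ${\cal V}$ are precisely the intersections $\beta_{s}(\alpha)=W'(\alpha)$ that define the allowed solitons.

```python
import numpy as np
from scipy.optimize import brentq

k = 2.0                               # illustrative boundary-condition constant
beta_s = lambda a: a                  # hypothetical soliton curve (toy model)
Wp = lambda a: k * a**3               # W'(alpha) for W = k alpha^4 / 4

# calV'(alpha) = W'(alpha) - beta_s(alpha), so extrema of calV solve the
# soliton condition beta_s(alpha) = W'(alpha)
alpha_star = brentq(lambda a: Wp(a) - beta_s(a), 0.1, 5.0)
```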
\section{\label{sec:level1}First-level heading}
The $SU(2)_L\times U(1)_Y$ structure of the standard model (SM) Lagrangian
requires that the massive electroweak gauge bosons, the $W$ and $Z$ bosons,
interact with one another at trilinear and quadrilinear vertices.
In the SM, the production cross section for $p\bar{p}\rightarrow WZ+X$,
$\sigma(WZ)$, depends on the strength of the $WWZ$ coupling,
$g_{WWZ} = -e \cot \theta_{W}$,
where $e$ is the positron charge and $\theta_{W}$ is the weak mixing angle.
At $\sqrt{s}=1.96$ TeV, the SM predicts $\sigma_{WZ}=3.68\pm 0.25$
pb~\cite{ref:RunIITheorySigma}.
Any significant deviation from this prediction would be evidence for new
physics.
The $WWZ$ interaction can be parameterized by a generalized effective
Lagrangian~\cite{ref:HPZH,ref:HISZ} with $CP$-conserving
trilinear gauge coupling parameters
(TGCs) $g^Z_1$, $\kappa_Z$, and $\lambda_Z$ that
describe the coupling strengths of the vector bosons to the weak field.
The TGCs are commonly presented as deviations from their SM values,
i.e. as $\Delta g^Z_1 = g^Z_1 - 1$, $\Delta \kappa_Z = \kappa_Z - 1$, and
$\lambda_Z$, where $\lambda_Z = 0$ in the SM.
Since tree-level unitarity restricts the anomalous couplings
to their SM values at asymptotically high energies, each of the
couplings must be parameterized as a form factor, e.g.
$\lambda_Z(\hat{s})=\lambda_Z/(1+\hat{s}/\Lambda^2)^2$, where
$\Lambda$ is the form factor scale and $\hat{s}$ is the
square of the invariant mass of the $WZ$ system.
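As an illustration (the coupling value and scale below are arbitrary, not measured values), the form-factor suppression can be coded directly:

```python
def form_factor(coupling, s_hat, Lam):
    """Dipole form factor, e.g. lambda_Z(s_hat) = lambda_Z / (1 + s_hat/Lambda^2)^2.
    s_hat and Lam**2 must be in the same (energy squared) units."""
    return coupling / (1 + s_hat / Lam**2)**2
```

At $\hat{s}\ll\Lambda^2$ the low-energy coupling is recovered, while at $\hat{s}=\Lambda^2$ it is reduced by a factor of four, restoring unitarity at high energy.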
New physics will result in anomalous TGCs and an enhancement in the production
cross section as well as modifications to the shapes of
kinematic distributions, such as the $W$ and $Z$ boson transverse momenta.
Because the Fermilab Tevatron is the only particle accelerator that can
produce the charged state $WZ+X$, this measurement provides a unique
opportunity to study the $WWZ$ TGCs without any assumption on the
values of the $WW\gamma$ couplings.
Measurements of TGCs using the $WW$ final
state~\cite{CDF1, run1wz, LEPTGC, D0RunIIWW, cdfwwwz2007} are sensitive to
both the $WW\gamma$ and $WWZ$ couplings at the same time and
must make some assumption as to how they are related to each
other.
$WZ$ production measurements and studies of $WWZ$ couplings have been
presented previously. The D0 Collaboration
measured $\sigma_{WZ} = 4.5 ^{+3.8}_{-2.6}$
pb, with a 95\% C.L. upper limit
of 13.3 pb, using 0.3 fb$^{-1}$ of $p\bar{p}$ collisions at $\sqrt{s}=1.96$
TeV~\cite{ref:RunIIWZ300pb}. The observed number of candidates was used to
derive the most restrictive available limits on anomalous $WWZ$ couplings.
More recently, the CDF Collaboration measured
$\sigma_{WZ} = 5.0 ^{+1.8}_{-1.6}$ pb using 1.1 fb$^{-1}$ of $p\bar{p}$
collisions at $\sqrt{s}=1.96$ TeV~\cite{ref:CDFRunIIWZPub},
but did not present any results on $WWZ$ couplings.
This communication
describes a significant improvement to the previous D0 analysis.
Not only is the data sample more than three times larger, but an
improved technique is used to constrain the $WWZ$ couplings.
Instead of merely the total number of observed events,
the number and the $p_T$ distribution of the $Z$ bosons $(p_T^Z)$
produced in the collisions are compared to the expectations of
non-SM $WWZ$ couplings,
significantly increasing the power of the $WWZ$ coupling measurement
over previous measurements~\cite{run1wz,ref:RunIIWZ300pb}.
We search for $WZ$ candidate events in final states with three charged leptons,
referred to as trileptons, produced when $Z\rightarrow\ell^+\ell^-$
and $W\rightarrow\ell'\nu$, where $\ell$ and $\ell '$ are
$e^{\pm}$ or $\mu^{\pm}$.
SM backgrounds can be suppressed by requiring three isolated high-$p_T$
leptons and large missing transverse energy
(\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$})
from the neutrino.
The combined branching fraction for these four possible final
states ($eee,~ee\mu,~\mu\mu e$ and $\mu\mu\mu$) is $1.5\%$ ~\cite{ref:pdg}.
D0 is a multipurpose detector~\cite{ref:run2det}
composed of several subdetectors and a fast triggering system.
At the center of the detector is a central tracking system, consisting of a
silicon microstrip tracker (SMT) and a central fiber tracker (CFT),
both located within a 2~T superconducting solenoidal
magnet. These detectors are optimized for tracking and
vertexing at pseudorapidities~\cite{ref:pseudo}
$|\eta|<3$ and $|\eta|<2.5$, respectively.
The liquid-argon and uranium calorimeter has a
central section (CC) covering $|\eta| < 1.1$,
and two end calorimeters (EC) that extend coverage
to $|\eta|\approx 4.2$, with all three housed in separate
cryostats~\cite{ref:run1det}. An outer muon system, covering $|\eta|<2$,
consists of a layer of tracking detectors and scintillation trigger
counters in front of 1.8~T iron toroids, followed by two similar layers
after the toroids~\cite{ref:run2muon}.
Electrons are identified by their distinctive pattern of energy deposits in
the calorimeter and by the presence of a track in the central tracker that
can be extrapolated from the interaction vertex to a cluster of energy in the
calorimeter.
Electrons measured in the CC (EC) must have $|\eta|<1.1$ $(1.5<|\eta|<2.5)$.
Electrons must have transverse energy $E_T > 15$ GeV and
be isolated from other energy clusters.
A likelihood variable, formed from the quality of the electron track,
its spatial and momentum match to the calorimeter cluster,
and the calorimeter cluster information, is used to discriminate
electron candidates from instrumental backgrounds.
Muon tracks are reconstructed using information from the muon drift chambers and
scintillation detectors and must have a matching central track with
$p_T >15$ GeV/$c$. Candidate muons are
required to be isolated in the calorimeter and tracker to
minimize the contribution of muons originating from jets~\cite{topprd}.
Events collected from 2002--2006 using single muon, single electron,
di-electron, and jet triggers were used for signal and background studies.
The integrated luminosities~\cite{lum} for the $eee$, $ee\mu$,
$\mu\mu e$, and $\mu\mu\mu$ final states are $1070$ pb$^{-1}$,
$1020$ pb$^{-1}$, $944$ pb$^{-1}$, and $944$ pb$^{-1}$, respectively.
There is a common $6.1\%$ systematic uncertainty on the integrated
luminosities.
The $WZ$ event selection requires three reconstructed, well-isolated
leptons with $p_T > 15$ GeV/$c$. All three leptons must be associated
with isolated tracks that originate from the same collision point
and must satisfy the electron or muon identification criteria outlined
above. To select $Z$ bosons, and further suppress background, the
invariant mass of a like-flavor lepton pair must fall within the range $71$ to
$111$ GeV/$c^2$ for $Z\rightarrow ee$ events, and $50$ to $130$ GeV/$c^2$
for $Z\rightarrow \mu\mu$ events, with the mass ranges set by the
mass resolution. For the $eee$ and $\mu\mu\mu$ decay channels, the
lepton pair with invariant mass closest to the $Z$ boson mass
is chosen to define the $Z$ boson daughter particles. The
\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$}
is required to be greater than 20 GeV, consistent
with the decay of a $W$ boson.
The transverse recoil of the $WZ$ system, calculated using the
vector sum of the transverse momenta of the charged leptons
and missing transverse energy, is required to be less than 50~GeV/$c$.
This selection reduces the background contribution from $t\bar{t}$
production to a negligible level.
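The $Z$ candidate logic above can be summarized in a few lines; this is a simplified sketch (ours) that works from precomputed like-flavor pair masses rather than reconstructed four-vectors:

```python
M_Z = 91.19  # GeV/c^2

# mass windows from the text: 71-111 GeV/c^2 for Z -> ee, 50-130 GeV/c^2 for Z -> mumu
WINDOW = {"e": (71.0, 111.0), "mu": (50.0, 130.0)}

def pick_z(pair_masses, flavor):
    """Return the like-flavor pair mass closest to M_Z, if any pair
    falls inside the flavor's mass window; otherwise None."""
    lo, hi = WINDOW[flavor]
    in_window = [m for m in pair_masses if lo <= m <= hi]
    return min(in_window, key=lambda m: abs(m - M_Z)) if in_window else None
```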
\begin{figure}
\includegraphics[scale=0.4]
{Fig_1.eps}
\caption{\label{Fig:massvsmet}
\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$}
versus dilepton invariant mass of $WZ$ candidate events.
The open boxes represent the expected $WZ$ signal.
The grey boxes represent the sum of the
estimated backgrounds. The black stars are the data that survive
all selection criteria. The open circles are data that fail either
the dilepton invariant mass criterion or have
\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$}
$ < 20$ GeV. }
\end{figure}
\begin{figure}
\includegraphics[angle=0,scale=0.4]{Fig_2.eps}
\caption{\label{Fig:Zpt}The reconstructed $Z$ boson $p_T$ of the
$WZ$ candidate events used in the $WWZ$ coupling parameter limit setting
procedure. The solid histogram is the expected sum of signal and background for
the case of the $WWZ$ coupling parameters set to their SM values. The dotted and
double dotted histograms are the expected sums of signal and background for two
different cases of anomalous $WWZ$ coupling parameter values. The black
dots are the data. The final bin is the overflow bin.}
\end{figure}
$WZ$ event detection efficiencies are determined for each
of the four final states. Monte Carlo (MC) events are generated
using {\sc pythia}~\cite{ref:pythia} and a {\sc geant}~\cite{ref:geant}
detector simulation and then processed using the same reconstruction
chain as the data.
Lepton identification efficiencies are determined
from a study of $Z$ bosons in the D0 data.
The average efficiencies for detecting an electron or muon with $E_T$
$(p_T) >$ 15 GeV are $(91 \pm 2)\%$ and $(90 \pm 2) \%$,
respectively. The trigger efficiency for events with two (or more)
electrons is estimated to be $(99 \pm 1)\%$.
For events with two or three muons, the trigger efficiencies
are estimated to be $(91\pm5) \%$ and $(98\pm 2)\%$, respectively.
The kinematic and geometric acceptances range from 29\%
for the $eee$ decay mode to 45\% for the $\mu\mu\mu$ decay mode.
It is also necessary to account for $\tau \rightarrow e,\mu$
final states of $WZ$ that contribute to the signal.
The number of $\tau$ events expected to satisfy the selection criteria is
$0.67\pm 0.11$ events. These are treated as signal in the cross section
analysis, but are treated as background in the TGC analysis.
Table~\ref{tab:eventsum} summarizes the efficiency determinations.
A total of 13 $WZ$ candidate events is found.
Figure~\ref{Fig:massvsmet} shows
\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$} versus
the dilepton invariant mass for the
background, the expected $WZ$ signal, and the data, including
the candidates. Table~\ref{tab:eventsum}
also details the number of candidates in each channel.
The main background for $WZ\rightarrow \ell^\prime \nu \ell \ell$ consists of $Z+X$
events, where $X$ is a jet that has been
misidentified as an electron or muon.
We assess the background from $Z$+jets production by using an
inclusive jet data sample that is selected with an independent jet
trigger. Events characteristic of QCD two-jet production are used
to measure the probability, as a function of jet $E_T$ and $\eta$,
that a single jet will be misidentified as a muon or electron. Next,
sub-samples of $ee$+jets, $e\mu$+jets, and $\mu\mu$+jets events are
selected using the same criteria as for
the $WZ$ signal except that the requirements for a third lepton in the
event are dropped. The single jet-lepton misidentification
probabilities are then convoluted with the measured jet distributions
in the dilepton+jets sub-samples to provide an estimate of the
background from $Z$+jets events. The contribution for all four
decay modes totals $1.3\pm0.1$ events.
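The weighting procedure described above can be sketched as follows; the fake-rate parametrization and the event list below are illustrative placeholders of ours, not measured D0 values.

```python
# Sketch of the jet-misidentification weighting (illustrative, not D0's
# actual fake rates): each jet in the dilepton+jets sub-samples is
# weighted by a per-jet probability f(ET, eta) to fake a lepton.

def fake_rate(et, eta):
    # hypothetical parametrization, as would be measured in a QCD
    # dijet sample as a function of jet ET and eta
    return 1e-4 * (1.0 + 0.01 * et) * (1.0 + 0.1 * abs(eta))

# dilepton+jets sub-sample: each event is a list of (ET, eta) jets
events = [
    [(25.0, 0.4), (40.0, -1.1)],
    [(18.0, 2.0)],
    [(33.0, 0.0), (22.0, 1.5), (55.0, -0.3)],
]

# expected number of events in which a jet fakes the third lepton
n_fake = sum(fake_rate(et, eta) for jets in events for et, eta in jets)
print(f"estimated Z+jets background: {n_fake:.2e} events")
```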
All other backgrounds are determined using MC. Non-negligible backgrounds
include SM $ZZ$ production, $Z\gamma$ production, and $W^*Z$, $WZ^*$,
or $W\gamma^*$ production. We define these
processes as three-lepton final states produced through the decay
of one on-mass-shell and one off-shell vector boson.
These backgrounds and their determination are described as follows.
$ZZ$ production becomes a background when
both $Z$ bosons decay to charged leptons and one of the final state leptons
escapes detection, thus mimicking a neutrino.
The total contribution from $ZZ$ production is $0.70\pm 0.08$ events.
$Z\gamma$ final states can be misidentified as $WZ$ events
if the photon is mis-reconstructed as an electron and there is
sufficient \mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$}.
We estimate the $\ell \bar{\ell} + \gamma$ contribution using
$Z+\gamma$ MC~\cite{ref:baur} combined with the probability for a
photon to be misidentified as an electron, $(4.2\pm1.5)\%$, determined
from studies of events with photons. This process is a background
only to the $eee$ and $\mu \mu e$ final states. The total contribution
is $1.4\pm 0.5$ events.
The background contribution from off-shell bosons is expected to be
nearly the same as in similar processes; its fraction
relative to the expected signal is determined from $ZZ$ MC events
generated using {\sc pythia}.
It depends on the decay channel and varies from $8\%$ for the $ee\mu$
mode to $15\%$ for the $\mu\mu\mu$ mode. The uncertainties
include all of those used for the signal plus an additional
16\% systematic component to account for uncertainties in the
off-shell component of the MC.
The total contribution of this background is $0.99\pm 0.19$ events.
To cross check the background estimates, we compare the number
of observed events with that expected when we do not apply the
dilepton invariant mass selection and the
\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T$}
selection.
We expect to observe $12.5 \pm 1.4$ events from signal and
$62.9 \pm 8.4$ events from backgrounds. We observe the
$78$ events shown in Fig.~\ref{Fig:massvsmet}.
\begin{table}
\caption{\label{tab:eventsum}
The numbers of candidate events, expected signal events,
and estimated background events, and the overall detection
efficiency for the four final states.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
Final & Number of & Expected & Estimated & Overall \\
State & Candidate & Signal & Background & Efficiency \\
& Events & Events & Events & \\ \hline
$eee$ & 2 & $2.3\pm 0.2$
& $1.2\pm 0.1$& $0.16\pm 0.02$ \\
$ee\mu$
& 1 & $2.2\pm 0.2$
& $0.46\pm 0.03$
& $0.17\pm 0.02$ \\
$\mu\mu e$
& 8 & $2.2\pm 0.3$
& $2.0\pm 0.4$& $0.17\pm 0.03$ \\
$\mu\mu\mu$
& 2 & $2.5\pm 0.4$
& $0.86\pm 0.06$
& $0.21\pm0.03$ \\ \hline
Total & 13 & $9.2\pm 1.0$
& $4.5\pm 0.6$& -- \\
\end{tabular}
\end{ruledtabular}
\end{table}
The SM prediction is that $9.2\pm 1.0$ $WZ$ events should be
observed in the
data sample. The probability for the background,
$4.5\pm 0.6$ events, to
fluctuate to 13 or more events is $1.2\times 10^{-3}$,
which translates to a one-sided Gaussian significance of $3.0 \sigma$,
determined by using a Poisson distribution for the number of observed
events in each channel convoluted with a Gaussian to model the
systematic uncertainty on the background.
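The quoted significance can be reproduced with a short numerical sketch (ours, not the collaboration's statistical code), convolving the Poisson tail probability with a Gaussian smearing of the background mean:

```python
import numpy as np
from scipy import stats

# Sketch (ours): probability that a background of 4.5 +/- 0.6 events
# fluctuates to >= 13 observed events, smearing the Poisson mean with
# a Gaussian to model the systematic uncertainty on the background.
n_obs = 13
b, sigma_b = 4.5, 0.6

mu = np.linspace(max(b - 5 * sigma_b, 1e-6), b + 5 * sigma_b, 4001)
weights = stats.norm.pdf(mu, loc=b, scale=sigma_b)
tail = stats.poisson.sf(n_obs - 1, mu)             # P(N >= 13 | mu)
p_value = float((weights * tail).sum() / weights.sum())

z = stats.norm.isf(p_value)                        # one-sided significance
print(f"p-value ~ {p_value:.1e}, significance ~ {z:.1f} sigma")
```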
A likelihood method~\cite{ref:stats} taking into account correlations
among systematic uncertainties is used to determine the
most probable $WZ$ cross section. The cross section
$\sigma(WZ)$ is
$2.7^{+1.7}_{-1.3}$~pb, where the $\pm 1 \sigma$ uncertainties
are the $68\%$ C.L. limits from the minimum of the negative log likelihood.
The uncertainty is
dominated by the statistics of the number of observed events.
By comparing the measured cross section and $p_T^Z$ distribution
to models with anomalous
TGCs, we set one- and two-dimensional limits on the three
$CP$-conserving coupling parameters. A comparison of the observed $Z$ boson
$p_T$ distribution with MC predictions is shown in Fig.~\ref{Fig:Zpt}.
We use the Hagiwara-Woodside-Zeppenfeld (HWZ)~\cite{ref:HWZ}
leading-order event generator
processed with a fast detector and event reconstruction simulation to
produce events with anomalous $WWZ$ couplings and simulate their efficiencies
and acceptances.
The HWZ event generator does not account for $\tau$ final
states, and as a result, we treat the
$0.7$ event $\tau$ contribution as background
for the $WWZ$ coupling limit setting procedure.
The method used to determine the
coupling limits is described in Ref.~\cite{ref:Cooke&Illinworth}.
Limits are set on the coupling parameters
$\lambda_Z$, $\Delta g^Z_1,\text{ and } \Delta \kappa_Z$.
Two-dimensional grids are constructed in which the parameters $\lambda_Z$
and $\Delta g^Z_1$ are allowed to vary simultaneously.
Table \ref{Tab:1DLimits} presents the one-dimensional 95\% C.L.
limits on $\lambda_Z$, $\Delta g^Z_1$ and $\Delta \kappa_Z$.
Figure~\ref{fig:twoDcontour} presents the two-dimensional 95\% C.L. limits
under the assumption $\Delta g^Z_1 = \Delta \kappa_Z$~\cite{ref:HISZ}
for $\Lambda=2$ TeV.
The form factor scale, $\Lambda$~\cite{ref:formfactor},
associated with each grid, is chosen such that the limits are within
the unitarity bound.
\begin{table}
\caption{\label{Tab:1DLimits}One-dimensional 95\% C.L. intervals
on $\lambda_Z$, $\Delta g^Z_1$, and $\Delta \kappa_Z$ for two sets
of form factor scale, $\Lambda$.}
\begin{ruledtabular}
\begin{tabular}{cc}
$ \Lambda = 1.5 \text{~TeV} $ & $ \Lambda = 2.0 \text{~TeV} $ \\ \hline
$ -0.18<\lambda_Z<0.22$ & $ -0.17<\lambda_Z<0.21$ \\
$ -0.15<\Delta g^Z_1<0.35 $ & $ -0.14< \Delta g^Z_1<0.34 $ \\
$ -0.14<\Delta \kappa_Z = \Delta g^Z_1 <0.31$
& $-0.12<\Delta \kappa_Z =
\Delta g^Z_1 <0.29$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}
\includegraphics[scale=0.4]{Fig_3.eps}
\caption{\label{fig:twoDcontour} Two-dimensional 95\% C.L. contour limit in
$\Delta g_1^Z = \Delta \kappa_Z$ versus $\lambda_Z$ space
(inner contour).
The form factor scale for this contour is $\Lambda = 2$ TeV.
The physically
allowed region (unitarity limit) is bounded by the outer contour.
The cross hairs are the 95\% C.L. one-dimensional limits.}
\end{figure}
In summary, we present the results of a search for
$WZ$ production in $1.0$ fb$^{-1}$ of
$p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV.
We observe 13 trilepton candidate events
with an expected $9.2\pm 1.0$
signal events and $4.5\pm 0.6$ events from
background. This gives an observed significance of 3.0$\sigma$.
We measure the $WZ$ production cross section
to be $2.7^{+1.7}_{-1.3}$~pb, in agreement with the SM prediction.
We use the measured cross section and $p_T^Z$
distribution to improve constraints on $WWZ$ trilinear gauge couplings
by a factor of two over the previous best results.
We thank the staffs at Fermilab and collaborating institutions,
and acknowledge support from the
DOE and NSF (USA);
CEA and CNRS/IN2P3 (France);
FASI, Rosatom and RFBR (Russia);
CAPES, CNPq, FAPERJ, FAPESP and FUNDUNESP (Brazil);
DAE and DST (India);
Colciencias (Colombia);
CONACyT (Mexico);
KRF and KOSEF (Korea);
CONICET and UBACyT (Argentina);
FOM (The Netherlands);
Science and Technology Facilities Council (United Kingdom);
MSMT and GACR (Czech Republic);
CRC Program, CFI, NSERC and WestGrid Project (Canada);
BMBF and DFG (Germany);
SFI (Ireland);
The Swedish Research Council (Sweden);
CAS and CNSF (China);
Alexander von Humboldt Foundation;
and the Marie Curie Program.
\section{Introduction}
Electric circuit models representing a quantum particle in
the one-dimensional potential
\begin{equation}\label{schreq}
-\frac
{\hbar^2}{2m}\frac{\partial^2\psi(x)}{\partial
x^2}+V(x)\psi(x)=E\psi(x)
\end{equation}
were considered by Kron in 1945
\cite{kron}. Three types of equivalent circuits were established.
The first one contains positive and negative resistors and in each
state the currents and voltages are constant in time. The second
and third models are similar and consist of inductors and
capacitors and the currents and voltages are sinusoidal in time.
Here we consider the stationary Schr\"odinger equation in
two-dimensional billiards in hard wall approximation
\begin{equation}\label{laplace}
-\nabla^2 \psi(x,y)=\epsilon \psi(x,y),
\end{equation}
where the Dirichlet boundary condition is implied at the boundary
$C$ of the billiard:
\begin{equation}\label{DB}
\psi_{|{_C}}=0.
\end{equation}
Here we use Cartesian coordinates $x, y$, made dimensionless
by a characteristic size $L$ of the billiard; correspondingly,
$\epsilon=\frac{E}{E_0}$ with $E_0=\frac{\hbar^2}{2mL^2}$.
There is a complete equivalence between the two-dimensional
Schr\"odinger equation for a particle in a hard wall box and
microwave billiards \cite{stockmann}. The wave function corresponds
exactly to the electric field component of the TM mode of the
electromagnetic field, $\psi(x,y)\leftrightarrow E_z(x,y)$, with
the same Dirichlet boundary conditions. This equivalence has turned
out to be very fruitful and has allowed a host of predictions of the
quantum mechanics of billiards to be tested \cite{stockmann}. On the
other hand, models for the equivalent RLC circuit of a resonant
microwave cavity exist which establish the analogy near an
eigenfrequency \cite{sucher}. Manolache and Sandu \cite{manolache}
proposed a model of a resonant cavity associated with an equivalent
circuit consisting of an infinite set of coupled RLC oscillators.
Therefore, there is a bridge between quantum billiards and
sets of coupled RLC oscillators \cite{berggren1}. In fact, we show
here that at least two models of electric resonance circuits
(ERC) can be proposed. In the first model, shown in Fig. \ref{fig1},
the eigen wave functions correspond to voltages and the
eigenenergies to the squared eigenfrequencies of the ERC. In the
second model, shown in Fig. \ref{fig2}, the eigenenergies of the
quantum billiard correspond to the inverse squared
eigenfrequencies of the electric network. The electric network analogue
systems allow one to measure not only typically quantum quantities such
as the probability and probability current distributions but also the
distribution of heat power in chaotic billiards. Moreover, the
intrinsic resistances of the RLC circuit make it possible to model
processes of decoherence.
\section{Electric resonance circuits equivalent to quantum
billiards}
Mapping the two-dimensional Schr\"odinger equation onto the numerical grid
$(x,y)=a_0(i,j), i=1,2,\ldots N_x, j=1,2,\ldots N_y$, one
easily obtains the equation in the finite element approximation
\begin{equation}\label{finelem}
\psi_{i,j+1}+\psi_{i,j-1}+\psi_{i+1,j}+\psi_{i-1,j}+(a_0^2E-4)\psi_{i,j}=0.
\end{equation}
The equivalent Hamiltonian is the tight-binding one
\begin{equation}\label{tight}
H=-\sum_{i,j}\sum_{{\bf b}}|ij\rangle\langle ij+{\bf b}|,
\end{equation}
where the vector ${\bf b}$, with $|{\bf b}|=1$, runs over the nearest
neighbors.
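As an illustration (ours, with an arbitrary grid size), the spectrum of the discretization (\ref{finelem}) for a square box can be checked against the exact lattice result $a_0^2E_{p,q}=4\sin^2\frac{p\pi}{2(N+1)}+4\sin^2\frac{q\pi}{2(N+1)}$:

```python
import numpy as np

# Build the matrix of the finite-element equation on an N x N Dirichlet
# grid as a Kronecker sum of 1D second-difference matrices, and compare
# its eigenvalues with the exact lattice spectrum. N is illustrative.
N = 20
one_d = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
lap2d = np.kron(one_d, np.eye(N)) + np.kron(np.eye(N), one_d)

numeric = np.sort(np.linalg.eigvalsh(lap2d))      # values of a0^2 E

p = np.arange(1, N + 1)
e1d = 4.0 * np.sin(p * np.pi / (2 * (N + 1))) ** 2
exact = np.sort((e1d[:, None] + e1d[None, :]).ravel())

assert np.allclose(numeric, exact)                # spectra agree
print(numeric[:3])                                # lowest a0^2 E values
```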
Let us consider the electric resonance circuit shown in Fig.
\ref{fig1}. Each link of the two-dimensional network is formed by
an inductor $L$ with the impedance
\begin{equation}\label{zL}
z_L=i\omega L+R
\end{equation}
where $R$ is the resistance of the inductor and $\omega$ is the
frequency. Each site of the network is grounded via the capacitor
$C$ with the impedance
\begin{equation}\label{zC}
z_C=\frac{1}{i\omega C}.
\end{equation}
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth]{fig1.eps}
\caption{The first model of resonance RLC circuits.} \label{fig1}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=.55\textwidth]{fig2.eps}
\caption{The second model of resonance RLC circuits.} \label{fig2}
\end{figure}
Kirchhoff's current law at each site of the network gives
\begin{equation}\label{kirchoff}
\frac{1}{z_L}[V_{i,j+1}-V_{i,j}+V_{i,j-1}-V_{i,j}+V_{i+1,j}-V_{i,j}
+V_{i-1,j}-V_{i,j}]-\frac{1}{z_C}V_{i,j}=0,
\end{equation}
where $V_{i,j}$ is the voltage at site $(i,j)$. One can
see that this equation coincides with the discretized version of
the Schr\"odinger equation (\ref{finelem}) with the eigenenergies
given by
\begin{equation}\label{eigenenergy}
a_0^2k^2=-\frac{z_L}{z_C}=LC\omega^2-iRC\omega=
\frac{\omega^2}{\omega_0^2}-i\frac{\gamma\omega}{\omega_0^2},
\end{equation}
where $\omega_0=1/\sqrt{LC}$ and $\gamma=R/L$ are the eigenfrequency
and the linewidth of each resonance circuit.
For the second network of electric resonance circuits shown in
Fig. \ref{fig2} we obtain
\begin{equation}\label{kirchoff2}
\frac{1}{z_C}[V_{i,j+1}-V_{i,j}+V_{i,j-1}-V_{i,j}+V_{i+1,j}-V_{i,j}
+V_{i-1,j}-V_{i,j}]-\frac{1}{z_L}V_{i,j}=0.
\end{equation}
Therefore, comparing with (\ref{finelem}) we have
\begin{equation}\label{eigenenergy2}
a_0^2k^2=-\frac{z_C}{z_L}=\frac{1}{LC\omega^2}+\frac{iR}{L\omega}=
\frac{\omega_0^2}{\omega^2}+i\frac{\gamma\omega_0^2}{\omega}.
\end{equation}
where $\gamma=RC$.
This network is interesting in that
its eigenfrequencies are inversely related to the eigenenergies of the
quantum billiard.
There are many ways to define the boundary conditions (BC). Let
$(i_B, j_B)$ denote the sites which belong to the boundary of the
network. If these sites are grounded, we obviously obtain the
Dirichlet BC (\ref{DB}), $V_{\vert_B}=0$. If they are shunted
through capacitors, we obtain the free BC (the Neumann BC).
Finally, if the boundary sites are shunted through resistive
inductors, we obtain mixed BC.
\section{Analog of the chaotic Bunimovich billiard}
A real electric circuit network has three features which can make
a difference in comparison with quantum billiards. These are 1)
the discreteness of the resonance circuits, 2) the tolerance of the
electric elements, and 3) the resistance of the inductors. In practice the
discreteness has no effect for $\lambda \geq 10 a_0$, where
$\lambda$ is a characteristic wavelength of the wave function and
$a_0$ is the elementary unit of the network. Numerically we
consider an electric network shaped as a quarter of the
Bunimovich billiard. The distribution of the real part of the wave
function of the billiard mapped on the electric circuit network
with $a_0=1/100$ is shown in Fig. \ref{fig3} (a). The wavelength is
$\lambda=2\pi a_0\omega_0/\omega=0.115$ for the parameters given in the
caption of Fig. \ref{fig3}. We take the width of the billiard as
the unit of length. One can see a distinctive deviation from the Gaussian
distribution, which is the result of multiple interference on the discrete
elements of the network.
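As a numerical cross-check (ours), the quoted wavelength follows directly from the circuit parameters in the caption of Fig. \ref{fig3}; only the ratio $\omega_0/\omega$ enters, so the frequency units cancel:

```python
import math

# Cross-check of the numbers quoted for Fig. 3 (our sketch).
L_ind = 0.1e-3          # inductance, H
C_cap = 1.0e-9          # capacitance, F
a0 = 0.01               # lattice unit, in units of the billiard width
omega = 1.722e6         # driving frequency quoted in the caption

omega0 = 1.0 / math.sqrt(L_ind * C_cap)   # single-circuit eigenfrequency
lam = 2 * math.pi * a0 * omega0 / omega   # wavelength of the state
print(f"omega0/omega = {omega0 / omega:.3f}, lambda = {lam:.3f}")
```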
It is known that noise, for example thermal noise, smooths the
conductance fluctuations for transmission through quantum billiards
\cite{iida,prigodin}. In the present case the tolerance of the circuit
elements, capacitance and inductance, plays the role of noise. Therefore we
can expect that increasing the tolerance suppresses fluctuations
of the distribution of the wave function of the discrete electric
circuit network. In fact, even a $1\%$ tolerance substantially
smooths the distribution of the wave function, as shown in Fig.
\ref{fig3} (b)-(d). We assume that the fluctuations of the
capacitors and inductors at different sites are uncorrelated.
\begin{figure}[ht]
\includegraphics[width=.6\textwidth]{fig3.eps}
\caption{The distribution of the real part of the wave function of the
quarter Bunimovich billiard mapped on the resonance RLC circuit with
elementary unit $a_0=0.01$, $\omega=1.722$~MHz, $L=0.1$~mH, $C=1$~nF, $R=0$.
(a) There is no tolerance of the
electric circuit elements. (b) The tolerance equals $1\%$.
(c) The tolerance equals $3\%$.
(d) The tolerance equals $5\%$. Each distribution in (b) - (d)
is averaged over 100 realizations of the electric network.}
\label{fig3}
\end{figure}
Finally, we consider how the damping caused by the resistance of the
inductors affects the distribution of the wave function in the
electric RLC resonance circuit network. In order to excite the
network we apply an external ac current at a single site of the
network. Fig. \ref{fig4} shows the probability density in the
quarter of the Bunimovich billiard for two values of the
resistance $R$. One can see from Fig. \ref{fig4} (right) a
localization effect due to damping of the probability
density flowing from the ac source (see also Fig. \ref{streamline}).
The characteristic spatial damping length can easily be estimated
from Eq. (\ref{eigenenergy}), which gives us
\begin{equation}
\lambda_R \approx \frac{4\pi a_0}{R}\sqrt{\frac{L}{C}}.
\end{equation}
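As an illustrative evaluation (ours), this damping length can be computed for the circuit parameters quoted in the caption of Fig. \ref{fig4}; doubling $R$ halves $\lambda_R$, consistent with the stronger localization in the right panel:

```python
import math

# Evaluate lambda_R = 4*pi*a0*sqrt(L/C)/R, as given in the text, for
# the two resistances of Fig. 4 (our sketch, parameters from the caption).
a0 = 0.005
L_ind, C_cap = 0.1e-3, 1.0e-9

for R in (0.5, 1.0):    # resistances in Ohm
    lam_R = 4 * math.pi * a0 * math.sqrt(L_ind / C_cap) / R
    print(f"R = {R} Ohm: lambda_R = {lam_R:.1f} billiard widths")
```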
The distributions of the probability density $\rho=|V|^2$ for open
quantum chaotic billiards were considered in many articles
\cite{lenz,lenz1,kanzieper,pnini,ishio} for the case of zero
damping. Here we follow \cite{saichev,ishio} and perform the phase
transformation $V\rightarrow V\exp(i\theta)=p+iq$, which makes the
real and imaginary parts of the wave function $V$ independent.
Introducing a parameter of openness of the billiard \cite{saichev}
\begin{equation}\label{epsilon}
\epsilon^2=\frac{\sigma_q^2}{\sigma_p^2}.
\end{equation}
where $\sigma_p^2=\langle p^2\rangle$ and $\sigma_q^2=\langle
q^2\rangle$, we can write the distribution of the probability density
as \cite{ishio}
\begin{equation}\label{probability}
f(\rho)=\mu\exp(-\mu^2\rho)I_0(\mu\nu\rho),
\end{equation}
with the following notations
\begin{equation}\label{munu}
\mu=\frac{1}{2}\left(\frac{1}{\epsilon}+\epsilon\right),
\nu=\frac{1}{2}\left(\frac{1}{\epsilon}-\epsilon\right),
\end{equation}
and $I_0(x)$ is the modified Bessel function of zeroth order. This
distribution is shown in Fig. \ref{fig5} by solid lines, while the
Rayleigh distribution $f(\rho)=\exp(-\rho)$ is shown by dashed
lines. The Rayleigh distribution corresponds to a
completely open system. One can see from Fig. \ref{fig5} (a, b)
that the statistics of the probability density follows the
distribution (\ref{probability}) irrespective of the resistance $R$.
However, with growing resistance the distribution
(\ref{probability}) tends to the Rayleigh distribution (Fig.
\ref{fig5} (c, d)). Since a larger resistance makes the quantum
system more open, this tendency of the probability density
statistics is natural.
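The normalization and unit mean of the distribution (\ref{probability}) can be verified numerically; the sketch below (ours) uses the openness value quoted for Fig. \ref{fig5} (b):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0

# Check that f(rho) = mu * exp(-mu^2 rho) * I0(mu*nu*rho), with
# mu = (1/eps + eps)/2 and nu = (1/eps - eps)/2, is normalized and
# has unit mean (our numerical check).
eps = 0.5308                       # openness quoted for Fig. 5(b)
mu = 0.5 * (1.0 / eps + eps)
nu = 0.5 * (1.0 / eps - eps)

f = lambda rho: mu * np.exp(-mu**2 * rho) * i0(mu * nu * rho)
norm, _ = quad(f, 0.0, 50.0)
mean, _ = quad(lambda r: r * f(r), 0.0, 50.0)
print(f"norm = {norm:.4f}, mean = {mean:.4f}")
```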
\begin{figure}[ht]
\includegraphics[width=.34\textwidth]{fig4a.eps}
\includegraphics[width=.34\textwidth]{fig4b.eps}
\caption{Views of probability density of the quarter
Bunimovich billiard
mapped on resonance RLC circuit with elementary unit $a_0=0.005$,
$\omega=0.8611$~MHz, $L=0.1$~mH, $C=1$~nF. Left: $R=0.5~\Omega$; right: $R=1~\Omega$.
The point of connection
of the external ac current is at the maximum of the probability density.} \label{fig4}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=.6\textwidth]{fig5.eps}
\caption{(Color online) Distribution of probability density of
the quarter Bunimovich billiard mapped on resonance RLC circuit
with the same parameters as given in Fig. \ref{fig4}. (a)
$R=0.1~\Omega, Q=3162, \epsilon=0.2488$, (b) $R=0.3~\Omega,
Q=1054, \epsilon=0.5308$, (c) $R=0.5~\Omega, Q=632,
\epsilon=0.6996$ and (d) $R=1~\Omega, Q=316, \epsilon=0.9164$. The
distribution (\ref{probability}) is shown by solid red line, the
Rayleigh distribution $f(\rho)=\exp(-\rho)$ is shown by dashed
green line. } \label{fig5}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=.6\textwidth]{fig6.eps}
\caption{(Color online) (a) Statistics of the real part of the
x-component of electric current $I_x$ compared to the Gaussian
distribution shown by solid red line. (b) Statistics of the heat
power compared to the distribution \ref{fPeps} shown by solid red
line. Here the quarter Bunimovich billiard is taken with
$\omega=1.163~ MHz, R=0.1~ \Omega, L=0.1~ mH , C=1~ nF$.}
\label{fig6}
\end{figure}
\section{The heat power}
In open systems the probability current density corresponds to the
Poynting vector. This equivalence has allowed, in
particular, the universal current statistics in chaotic billiards
to be tested \cite{barth,saichev}. However, in the electric resonance circuit
there are heat losses because of the resistance. The local power of the
heat losses is given by \cite{heat}
\begin{equation}\label{heat}
P=\frac{R}{2}[Re(I_x)^2+Im(I_x)^2+Re(I_y)^2+Im(I_y)^2]=
\frac{R}{2}[|I_x|^2+|I_y|^2],
\end{equation}
where $I_x, I_y$ are the local components of the electric current
flowing between the sites of the electric network:
\begin{equation}\label{heat power}
RI_x(i,j)=V_{i+1,j}-V_{i,j},\quad RI_y(i,j)=V_{i,j+1}-V_{i,j}.
\end{equation}
If one approximates the true state by the Berry conjecture
\begin{equation}\label{berry}
V(x,y)=\sum_j a_j\exp[i~({\bf k_j r}+\phi_j)]
\end{equation}
where $a_j$ and $\phi_j$ are independent random real variables and
$k_j$ are randomly oriented wave vectors of equal length, then $V$
is a complex random Gaussian field (RGF) in the chaotic
Bunimovich billiard. The derivatives of $V$ are also independent
complex RGFs. The components $I_x, ~I_y$ form two complex RGFs
with the joint probability density
\begin{equation}
\label{fIxIy}
f(I_x',I_y',I_x'',I_y'')=
\frac{1}{4\pi^2\sigma_r^2\sigma_i^2}\exp\left\{-\frac{1}{2}
\left(\frac{I_x'^2+I_y'^2}{\sigma_r^2}+\frac{I_x''^2+I_y''^2}{\sigma_i^2}\right)\right\}
\end{equation}
where $I_x'=Re(I_x), ~I_y'=Re(I_y), ~I_x''=Im(I_x),
~I_y''=Im(I_y), ~\sigma_r^2=\langle I_x'^2\rangle=\langle
I_y'^2\rangle, ~\sigma_i^2=\langle I_x''^2\rangle=\langle
I_y''^2\rangle$. In numerical computations we use the fact that the average over
the billiard area
\begin{equation}\label{averdef}
\langle \ldots \rangle = \frac{1}{A}\int d^2{\bf x} \ldots,
\end{equation}
is equivalent to the average over three complex RGFs
\begin{equation}\label{aver}
\langle \ldots\rangle=\int d^2V d^2I_x d^2I_y f(Re(V),Im(V))
f(I_x',I_y',I_x'',I_y'')\ldots.
\end{equation}
An example of the distribution of the real part of $I_x$ is
presented in Fig. \ref{fig6} (a), which shows that numerically this
quantity is, in fact, an RGF. The probability
distribution (\ref{fIxIy}) relies on the Berry function
(\ref{berry}) being isotropic in space: $\langle
I_x'^2\rangle=\langle I_y'^2\rangle, ~\langle
I_x''^2\rangle=\langle I_y''^2\rangle$. In fact, an anisotropy of
the shape of the billiard induces an anisotropy of these averages.
However, this is a boundary effect of order
$L_P\lambda/A\sim \lambda $, where $L_P$ is the length of the
billiard perimeter and $\lambda$ is a characteristic wavelength
of the wave function in terms of the width of the billiard. Therefore,
for the excitation of an eigenfunction with sufficiently high
frequency we can use the distribution function (\ref{fIxIy}).
The numerically computed mean values in Table 1 confirm this
conclusion.\\ \ \\ Table 1. Numerically computed mean values. \\ \
\\
\begin{tabular}{|c|c|c|c|c|c|} \hline
$\omega$, MHz & the wavelength $\lambda $
in terms of the billiard's width& $\frac{\langle I_x'^2\rangle-\langle
I_y'^2\rangle}{\langle I_x'^2\rangle+\langle
I_y'^2\rangle}$ & $\frac{\langle I_x''^2\rangle-\langle
I_y''^2\rangle}{\langle I_x''^2\rangle+\langle
I_y''^2\rangle}$ & $\epsilon$ \cr \hline
0.8611 & 0.1154 & 0.095 & -0.128 &0.2488 \cr \hline
1.1623 & 0.0854 & 0.056 &0.050 & $0.6103$ \cr \hline
\end{tabular}\\ \ \\
To find the distribution of the heat power (\ref{heat}) it is
convenient to begin with a characteristic function
\begin{equation}\label{char0}
\Theta(a)=\langle\exp(iaP)\rangle=\int d^2I_x d^2I_y
f(I_x',I_y',I_x'',I_y'')\exp(iaR[|I_x|^2+|I_y|^2]/2).
\end{equation}
Substituting (\ref{fIxIy}) we obtain
\begin{equation}\label{char}
\Theta(a)=-\frac{(\sigma_r^2+\sigma_i^2)^2}{\sigma_r^2\sigma_i^2}
\frac{1}{\left(a+i\frac{\sigma_r^2+\sigma_i^2}{\sigma_r^2}\right)
\left(a+i\frac{\sigma_r^2+\sigma_i^2}{\sigma_i^2}\right)}.
\end{equation}
Knowledge of the characteristic function allows us to find the heat
power distribution function
\begin{equation}\label{fP}
f(P)=\frac{1}{2\pi}\int_{-\infty}^{\infty}da\Theta(a)\exp(-iaP)=
\frac{2\mu }{\nu\langle P\rangle}\exp(-\mu P/\langle P\rangle)\sinh(\nu P/\langle
P\rangle),
\end{equation}
where the formulas (\ref{munu}) take the following form
\begin{equation}\label{munu1}
\mu=\frac{(\sigma_r^2+\sigma_i^2)^2}{2\sigma_r^2\sigma_i^2},\quad
\nu=\frac{\sigma_r^4-\sigma_i^4}{2\sigma_r^2\sigma_i^2}.
\end{equation}
For $\sigma_r^2\approx \sigma_i^2$ the distribution takes the very
simple form
\begin{equation}\label{fPsimple}
f(P)=\frac{4P}{\langle P\rangle^2}\exp(-2P/\langle P\rangle).
\end{equation}
Even for this case the distribution of heat power differs from the
distribution of the probability current \cite{saichev}. The
parameter of openness of the billiard (\ref{epsilon}) can be
approximated as
\begin{equation}\label{epsilon1}
\epsilon^2=\frac{\sigma_i^2}{\sigma_r^2}.
\end{equation}
It is easy to obtain from the Schr\"odinger equation that
$2\sigma_r^2=E\sigma_p^2$ and $2\sigma_i^2=E\sigma_q^2$, from which
Eq. (\ref{epsilon1}) follows.
Then the heat power distribution function
(\ref{fP}) can be written as follows
\begin{equation}\label{fPeps}
f(P)=\frac{1+\epsilon^2}{(1-\epsilon^2)\langle P\rangle}\left\{\exp\left(-\frac{(1+\epsilon^2)P}
{\langle P\rangle}\right)-\exp\left(-\frac{(1+\epsilon^2)P}
{\epsilon^2\langle P\rangle}\right)\right\}.
\end{equation}
This distribution is shown in Fig. \ref{fig6} (b) which as one can
see nicely describes numerically computed statistics of the heat
power.
If we introduce the quantity
\begin{equation}\label{sigmaP}
\sigma_P^2=\frac{\langle (P-\langle P\rangle)^2\rangle}{\langle
P\rangle^2},
\end{equation}
then one can derive the relation between this parameter and the
parameter of openness (\ref{epsilon1})
\begin{equation}\label{relation}
\sigma_P^2=\frac{\epsilon^4+1}{(\epsilon^2+1)^2}.
\end{equation}
If the quantum system is fully open, $\epsilon=1$, and we have
from (\ref{relation}) that $\sigma_P^2=1/2$. In the limit of a
closed quantum system we obtain correspondingly that
$\sigma_P^2=1$.
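The relation (\ref{relation}) is easily checked by a Monte Carlo sketch (ours, with illustrative variances), sampling Gaussian current components and forming the heat power directly:

```python
import numpy as np

# Monte Carlo check of sigma_P^2 = (eps^4 + 1)/(eps^2 + 1)^2 for the
# heat power P = (R/2)(|Ix|^2 + |Iy|^2) built from Gaussian currents.
rng = np.random.default_rng(1)
R, sigma_r, sigma_i = 1.0, 1.0, 0.7        # illustrative values
eps2 = sigma_i**2 / sigma_r**2

n = 400_000
Ix = rng.normal(0, sigma_r, n) + 1j * rng.normal(0, sigma_i, n)
Iy = rng.normal(0, sigma_r, n) + 1j * rng.normal(0, sigma_i, n)
P = 0.5 * R * (np.abs(Ix)**2 + np.abs(Iy)**2)

mc = P.var() / P.mean()**2
theory = (eps2**2 + 1.0) / (eps2 + 1.0)**2
print(f"sigma_P^2: MC = {mc:.3f}, theory = {theory:.3f}")
```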
\section{Summary and conclusions}
\begin{figure}[ht]
\includegraphics[width=.7\textwidth]{figstream.eps}\\ \ \\
\includegraphics[width=.5\textwidth]{figstreamfine.eps}
\caption{Top: quantum streamlines in the quarter of the Bunimovich
billiard flowing from the point, shown by a star, at which the external ac
current is applied. (Color online) Bottom: zoomed part of the top
figure. Blue lines show the streamlines; red and green lines are
the nodal lines of the real and imaginary parts of the wave function,
respectively. The points at which the nodal lines intersect
are the centers of the vortices \cite{berggren}. The wave function
corresponds to Fig. \ref{fig4} (right) with the same parameters.}
\label{streamline}
\end{figure}
We have established two types of electric circuit networks of
RLC resonant oscillators in which voltages play the role of the quantum
wave function. Specifically, we considered electric networks
with Dirichlet boundary conditions which are equivalent to the
quarter of the Bunimovich quantum billiard. In fact, the electric
circuit network has three features which can make a difference in
comparison with quantum billiards: the discreteness of the
resonance circuits, the tolerance of the electric elements, and the
resistance of the inductors. We showed numerically that the first two
features conceal each other. The resistance of the electric
network gives rise to heat, which can be described locally by the
heat currents. Assuming that the wave function in the billiard can
be given as a complex random Gaussian field, we derived the
distribution of the heat power, which describes the numerical
statistics well.
The third feature of the electric network, the resistance, is of
principal importance. The resistance of the electric network
originates from inelastic interactions of electrons with phonons
and other electrons, which give rise to irreversible processes of
decoherence. With growth of the resistance the wave function
becomes localized. We studied how the probability density and the
probability currents evolve with increasing resistance.
Therefore we can conclude that the resistance violates the
equation $\nabla {\bf j}=0$. In fact, Fig. \ref{streamline}
demonstrates the unusual behavior of the quantum
streamlines \cite{berggren} with growth of the resistance. One can
see that the quantum streamlines terminate at the vortex
cores. The vortices serve as sinks for the probability density,
shown in Fig. \ref{streamline} (top) as spots. Therefore the
resistance of the inductors in the equivalent electric networks provides
a simple mechanism for the deterioration of ballistic transport, similar
to the B\"uttiker mechanism \cite{buttiker}.
\acknowledgments AS is grateful to K.-F. Berggren for numerous
fruitful discussions. This work was supported by the Russian Foundation
for Basic Research (RFBR Grants 05-02-97713, 05-02-17248). AS acknowledges
support from the Swedish Royal Academy of Sciences.
\section{Introduction}
Labeled transition systems constitute a widely used model of
concurrent computation. They model processes by explicitly
describing their states and their transitions from state to state,
together with the actions that produce these transitions. Several
notions of behavioral semantics have been proposed, with the aim
of identifying those states that afford the same observations
\cite{Gla01,Gla93}. For equational reasoning about processes, one
needs to find an axiomatization that is sound and
\emph{ground-complete} modulo the semantics under consideration,
meaning that all equivalent closed terms can be equated. Ideally,
such an axiomatization is also \emph{$\omega$-complete}, meaning
that all equivalent \emph{open} terms can be equated. If such a
finite axiomatization exists, it is said that there is a
\emph{finite basis} for the equational theory.
For concrete semantics, so in the absence of the silent action
$\tau$, the existence of finite bases is well-studied
\cite{Gro90,Gla01,CFLN07}, in the context of the process algebra
BCCSP, containing the basic process algebraic operators from CCS
and CSP\@. However, for weak semantics, that take into account the
$\tau$, hardly anything is known on finite bases. In \cite{Gla93},
Van Glabbeek presented a spectrum of weak semantics. For several
of the semantics in this spectrum, a sound and ground-complete
axiomatization has been given, in the setting of the process
algebra BCCS (BCCSP extended by $\tau$), see, e.g., \cite{Gla97}.
But a finite basis has been given only for \emph{weak}, \emph{delay},
$\eta$- and \emph{branching bisimulation} semantics \cite{Mi89a,vG93a},
and in case of an infinite alphabet of actions also for \emph{weak
impossible futures} semantics \cite{VM01}. The reason for this lack of
results on finite bases, apart from the inherent difficulties arising
with weak semantics, may be that it is usually not so straightforward
to define a notion of unique normal form for \emph{open} terms in a
\emph{weak} semantics. Here we will employ a saturation technique,
in which normal forms are saturated with subterms.
In this paper, we focus on two closely related weak semantics,
based on failures and impossible futures. A \emph{weak failure}
consists of a trace $a_1\cdots a_n$ and a set $A$, both of
concrete actions. A state exhibits this weak failure pair if it
can perform the trace $a_1\cdots a_n$ (possibly intertwined with
$\tau$'s) to a state that cannot perform any action in $A$ (even
after performing $\tau$'s). In a \emph{weak impossible future},
$A$ can be a set of traces. Weak failures semantics plays an
essential role for the process algebra CSP \cite{BHR84}. For
convergent processes, it coincides with testing semantics
\cite{DeNHen84,RenVog07}, and thus is the coarsest congruence for
the CCS parallel composition that respects deadlock behavior.
Weak impossible futures semantics \cite{Vog92} is a natural
variant of possible futures semantics \cite{RB81}. In \cite{GV06}
it is shown that weak impossible futures semantics, with an
additional root condition, is the coarsest congruence containing
weak bisimilarity with explicit divergence that respects
deadlock/livelock traces (or fair testing, or any liveness
property under a global fairness assumption) and assigns unique
solutions to recursive equations.
The heart of our paper is a finite basis for the inequational theory
of BCCS modulo the weak failures \emph{preorder}. The axiomatization
consists of the standard axioms A1-4 for bisimulation, three extra
axioms WF1-3 for failures semantics, and in case of a finite alphabet
$A$, an extra axiom WF$_A$. The proof that A1-4 and WF1-3 are a finite
basis in case of an infinite alphabet is a sub-proof of the proof that
A1-4, \mbox{WF1-3} and WF$_A$ are a finite basis in case of a finite
alphabet. Our proof has the same general structure as the beautiful
proof for testing equivalences given in \cite{DeNHen84} and further
developed in \cite{Hen88}. Pivotal to this is the construction of
``saturated'' sets of actions within a term \cite{DeNHen84}. Since
here we want to obtain an $\omega$-completeness result, we extend this
notion to variables. Moreover, to deal with $\omega$-completeness, we
adopt the same general proof structure as in the strong case
\cite{FN05}. In this sense, our proof strategy can be viewed as a
combination of the strategies proposed in \cite{DeNHen84} and
\cite{FN05}. Furthermore, we apply an algorithm from
\cite{AFI07,FG07,CFG08b} to obtain a finite basis for BCCS modulo weak
failures \emph{equivalence} for free.
At the end, we investigate the equational theory of BCCS modulo
weak impossible futures semantics. This shows a remarkable
difference with weak failures semantics, in spite of the strong
similarity between the definitions of these semantics (and between
their ground-complete axiomatizations). As said, in case of an
infinite alphabet, BCCS modulo the weak impossible futures
preorder has a finite basis \cite{VM01}. However, we show that in
case of a finite alphabet, such a finite basis does not exist.
Moreover, in case of weak impossible futures \emph{equivalence},
there is no finite ground-complete axiomatization, regardless of the
cardinality of the alphabet.
A finite basis for the equational theory of BCCSP modulo
(concrete) failures semantics was given in \cite{FN05}. The
equational theory of BCCSP modulo (concrete) impossible futures
semantics is studied in \cite{CF08}. It is interesting to see that
our results for weak semantics agree with their concrete
counterparts, with very similar proofs. This raises a challenging open
question: can one establish a general theorem to link the
axiomatizability (or nonaxiomatizability) of concrete and weak semantics?
An extended abstract of this paper appears as \cite{CFG08}.
\newpage
\section{Preliminaries} \label{sec2}
${\rm BCCS}(A)$ is a basic process algebra for expressing finite
process behavior. Its signature consists of the constant $\mathbf{0}$,
the binary operator $\_+\_$\,, and unary prefix operators $\tau\_$
and $a\_$\,, where $a$ is taken from a nonempty set $A$ of visible
actions, called the \emph{alphabet}, ranged over by $a,b,c$. We
assume that $\tau\notin A$ and write $A_\tau$ for $A\cup\{\tau\}$,
ranged over by $\alpha,\beta$.
BCCS terms are built according to the following grammar:
\[ t::= \mathbf{0} \mid at \mid \tau t \mid t+t \mid x\]
Closed ${\rm BCCS}(A)$ terms, ranged over by $p,q$, represent
finite process behaviors, where $\mathbf{0}$ does not exhibit any
behavior, $p+q$ offers a choice between the behaviors of $p$ and
$q$, and $\alpha p$ executes action $\alpha$ to transform into
$p$. This intuition is captured by the transition rules below.
They give rise to $A_\tau$-labeled transitions between closed BCCS terms.
\[
\frac{~}{\alpha x\mv{\alpha}x}
\qquad
\frac{x\mv{\alpha}x'}{x+y\mv \alpha x'}
\qquad
\frac{y\mv{\alpha}y'}{x+y\mv \alpha y'}
\]
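For readers who like to experiment, the transition rules above can be animated directly. The following sketch (plain Python, not part of the formal development) encodes closed BCCS terms as frozensets of (label, subterm) pairs, so that $+$ becomes set union and all three SOS rules collapse into set membership; the names `pre`, `plus`, `steps` and the label `'tau'` are our own conventions.

```python
# Closed BCCS terms as frozensets of (label, subterm) pairs: 0 is the
# empty set, alpha.p a singleton, and p + q the union.  This identifies
# terms up to the axioms A1-4, which is harmless for the illustration.
ZERO = frozenset()

def pre(alpha, p):
    """Prefix alpha.p, with alpha a visible action or 'tau'."""
    return frozenset({(alpha, p)})

def plus(p, q):
    """Choice p + q."""
    return p | q

def steps(p):
    """All transitions p --alpha--> p'.  The three SOS rules collapse
    into membership: alpha.p moves to p, and a sum inherits the moves
    of its summands."""
    return set(p)

# Example: a.0 + tau.b.0 has exactly two outgoing transitions.
t = plus(pre('a', ZERO), pre('tau', pre('b', ZERO)))
print(sorted(alpha for alpha, _ in steps(t)))  # ['a', 'tau']
```

The encoding deliberately ignores syntactic distinctions such as $t+t$ versus $t$; for exploring the operational semantics this is sound, since A1-4 are valid in every semantics considered here.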
We assume a countably infinite set $V$ of variables; $x,y,z$
denote elements of $V$. Open BCCS terms, denoted by $t,u,v,w$, may
contain variables from $V$. Write $\var(t)$ for the set of variables
occurring in $t$.
The operational semantics is extended verbatim to open terms;
variables generate no transition.
We write $t\Rightarrow u$ if there is a sequence of
$\tau$-transitions $t\mv{\tau}\cdots\mv{\tau}u$; furthermore
$t\mv{\alpha}$ denotes that there is a term $u$ with
$t\mv{\alpha}u$, and likewise $t\Rightarrow\mv{\alpha}$ denotes that
there are terms $u,v$ with $t\Rightarrow u \mv{\alpha} v$.
The \emph{depth} of a term $t$, denoted by $|t|$, is the length of
the \emph{longest} trace of $t$, not counting $\tau$-transitions.
It is defined inductively as follows: $|\mathbf{0}|=|x|=0$;
$|at|=1+|t|$; $|\tau t| = |t|$; $|t+u|=\max\{|t|, |u|\}$.
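The inductive clauses for the depth translate directly into the same toy encoding; a minimal sketch (closed terms only, so the clause $|x|=0$ does not arise):

```python
ZERO = frozenset()
def pre(alpha, p): return frozenset({(alpha, p)})
def plus(p, q): return p | q

def depth(p):
    """|p|, the length of the longest trace, not counting tau-steps:
    |0| = 0, |a.p| = 1 + |p|, |tau.p| = |p|, |p + q| = max(|p|, |q|)."""
    return max((depth(q) + (1 if alpha != 'tau' else 0) for alpha, q in p),
               default=0)

# a.tau.b.0 + tau.c.0 has depth 2: the longest trace is ab.
p = plus(pre('a', pre('tau', pre('b', ZERO))), pre('tau', pre('c', ZERO)))
print(depth(p))  # 2
```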
A (closed) substitution, ranged over by $\sigma,\rho$, maps variables
in $V$ to (closed) terms. For open terms $t$ and $u$, and a
preorder $\sqsubseteq$ (or equivalence $\equiv$) on closed terms, we
define $t\sqsubseteq u$ (or $t\equiv u$) if $\sigma(t)\sqsubseteq\sigma(u)$
(resp.\ $\sigma(t)\equiv\sigma(u)$) for all closed substitutions
$\sigma$. Clearly, $t\mv{a}t'$ implies that
$\sigma(t)\mv{a}\sigma(t')$ for all substitutions $\sigma$.
An \emph{axiomatization} is a collection of equations $t \approx
u$ or of inequations $t \preccurlyeq u$. The (in)equations in an
axiomatization $E$ are referred to as \emph{axioms}. If $E$ is an
equational axiomatization, we write $E\vdash t\approx u$ if the
equation $t\approx u$ is derivable from the axioms in $E$ using
the rules of equational logic (reflexivity, symmetry,
transitivity, substitution, and closure under BCCS contexts). For
the derivation of an inequation $t\preccurlyeq u$ from an
inequational axiomatization $E$, denoted by $E\vdash t\preccurlyeq
u$, the rule for symmetry is omitted. We will also allow equations
$t\approx u$ in inequational axiomatizations, as an abbreviation
of $t\preccurlyeq u\land u\preccurlyeq t$.
An axiomatization $E$ is \emph{sound} modulo a preorder $\sqsubseteq$
(or equivalence $\equiv$) if for all terms $t,u$, from $E\vdash
t\preccurlyeq u$ (or $E\vdash t\approx u$) it follows that
$t\sqsubseteq u$ (or $t\equiv u$). $E$ is \emph{ground-complete} for
$\sqsubseteq$ (or $\equiv$) if $p\sqsubseteq q$ (or $p\equiv q$)
implies $E\vdash p\preccurlyeq q$ (or $E\vdash p\approx q$) for all
closed terms $p,q$. Moreover, $E$ is \emph{$\omega$-complete} if for
all terms $t,u$ with $E\vdash\sigma(t)\preccurlyeq\sigma(u)$ (or
$E\vdash\sigma(t)\approx\sigma(u)$) for all closed substitutions
$\sigma$, we have $E\vdash t\preccurlyeq u$ (or $E\vdash t\approx u$).
When $E$ is $\omega$-complete as well as ground-complete, it is
\emph{complete} for $\sqsubseteq$ (or $\equiv$) in the sense that
$t\sqsubseteq u$ (or $t\equiv u$) implies $E\vdash t\preccurlyeq u$
(or $E\vdash t\approx u$) for all terms $t,u$. The equational
theory of BCCS modulo a preorder $\sqsubseteq$ (or equivalence
$\equiv$) is said to be \emph{finitely based} if there exists a
finite, $\omega$-complete axiomatization that is sound and
ground-complete for BCCS modulo $\sqsubseteq$ (or $\equiv$).
A1-4 below are the core axioms for BCCS modulo bisimulation
semantics. We write $t=u$ if $\mbox{A1-4}\vdash t\approx u$.
{\small\[
\begin{array}{l@{\qquad}rcl}
{\rm A}1&x+y &~\approx~& y+x\\
{\rm A}2&(x+y)+z &~\approx~& x+(y+z)\\
{\rm A}3&x+x &~\approx~& x\\
{\rm A}4&x+\mathbf{0} &~\approx~& x\\
\end{array}
\]}%
Summation $\sum_{i\in\{1,\ldots,n\}}t_i$ denotes $t_1+\cdots+t_n$,
where summation over the empty set denotes $\mathbf{0}$. As binding
convention, $\_+\_$ and summation bind weaker than $\alpha\_$\,.
For every term $t$ there exists a finite set $\{\alpha_i t_i\mid
i\in I\}$ of terms and a finite set $Y$ of variables such that
$t=\sum_{i\in I} \alpha_i t_i + \sum_{y\in Y}y$. The $\alpha_i
t_i$ for $i\in I$ and the $y\in Y$ are called the \emph{summands}
of $t$. For a set of variables $Y$, we will often denote the term
$\sum_{y\in Y}y$ by $Y$.
\begin{definition}[Initial actions]\rm
For any term $t$, the set $\mc{I}(t)$ of initial actions is
defined as $\mc{I}(t)=\{a\in A\mid t\Rightarrow \mv{a}\}$.
\end{definition}
\begin{definition}[Weak failures]\rm \label{def:weak-failures}
\vspace{-1ex}
\begin{itemize}
\item A pair $(a_1\cdots a_k,B)$, with $k\geq 0$ and $B \subseteq A$,
is a \emph{weak failure pair} of a process $p_0$ if there is a path
$p_0\Rightarrow \mv{a_1} \Rightarrow \cdots \Rightarrow \mv{a_k} \Rightarrow p_k$
with $\mc{I}(p_k) \cap B = \emptyset$.
\item Write $p\leq_{\rm WF} q$ if the weak failure pairs of $p$ are
also weak failure pairs of $q$.
\item The \emph{weak failures preorder} $\sqsubseteq_{\rm WF}$ is
given by\\ $p\sqsubseteq_{\rm WF} q$ iff (1) $p\leq_{\rm WF} q$ and
(2) $p\mv{\tau}$ implies that $q\mv{\tau}$.
\item \emph{Weak failures equivalence} $\equiv_{\rm WF}$ is defined
as $\sqsubseteq_{\rm WF} \cap \sqsubseteq_{\rm WF}^{-1}$.
\end{itemize}
\end{definition}
It is well-known that $p\leq_{\rm WF} q$ is \emph{not} a
\emph{precongruence} for BCCS: e.g., $\tau\mathbf{0}\leq_{\rm WF}\mathbf{0}$
but $\tau\mathbf{0}+a\mathbf{0} \not\leq_{\rm WF} \mathbf{0}+a\mathbf{0}$. However,
$\sqsubseteq_{\rm WF}$ is, meaning that $p_1\sqsubseteq_{\rm WF} q_1$
and $p_2\sqsubseteq_{\rm WF} q_2$ implies $p_1+p_2\sqsubseteq_{\rm WF}
q_1+q_2$ and $\alpha p_1\sqsubseteq_{\rm WF} \alpha q_1$ for
$\alpha\in A_\tau$. In fact, $\sqsubseteq_{\rm WF}$ is the coarsest
precongruence contained in $\leq_{\rm WF}$.
Likewise, $\equiv_{\rm WF}$ is a \emph{congruence} for BCCS.
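Both the preorder and the counterexample above can be checked mechanically on small closed terms. The self-contained Python sketch below (our own illustration; `leq_WF` and `sq_WF` are hypothetical names) exploits the observation that a weak failure pair $(\sigma,B)$ is witnessed by an end state whose weak initial actions are disjoint from $B$, so $p\leq_{\rm WF} q$ amounts to matching every end state of $p$ by an end state of $q$, via the same trace, with no more weak initial actions.

```python
ZERO = frozenset()
def pre(alpha, p): return frozenset({(alpha, p)})
def plus(p, q): return p | q

def eps(p):
    """All q with p => q, i.e. reachable by zero or more tau-steps."""
    seen, todo = {p}, [p]
    while todo:
        q = todo.pop()
        for alpha, r in q:
            if alpha == 'tau' and r not in seen:
                seen.add(r)
                todo.append(r)
    return seen

def initials(p):
    """I(p) = {a in A | p => --a-->}."""
    return {a for q in eps(p) for a, _ in q if a != 'tau'}

def ends(p, trace):
    """All states p_k with p => --a1--> => ... --ak--> => p_k."""
    states = eps(p)
    for a in trace:
        states = {s for q in states for b, r in q if b == a for s in eps(r)}
    return states

def traces(p, prefix=()):
    yield prefix
    for q in eps(p):
        for a, r in q:
            if a != 'tau':
                yield from traces(r, prefix + (a,))

def leq_WF(p, q):
    """p <=_WF q: every weak failure pair (sigma, B) of p, witnessed by
    an end state s1 with I(s1) disjoint from B, must be matched in q by
    an end state s2 via sigma with I(s2) a subset of I(s1)."""
    return all(any(initials(s2) <= initials(s1) for s2 in ends(q, t))
               for t in set(traces(p)) for s1 in ends(p, t))

def sq_WF(p, q):
    """The weak failures preorder: leq_WF plus the root condition (2)."""
    has_tau = lambda r: any(alpha == 'tau' for alpha, _ in r)
    return leq_WF(p, q) and (not has_tau(p) or has_tau(q))

a0, t0 = pre('a', ZERO), pre('tau', ZERO)
print(leq_WF(t0, ZERO))                      # True
print(leq_WF(plus(t0, a0), plus(ZERO, a0)))  # False: no precongruence
print(sq_WF(t0, ZERO))                       # False: root condition fails
```

The last two lines reproduce exactly the counterexample in the text: $\tau\mathbf{0}\leq_{\rm WF}\mathbf{0}$ while $\tau\mathbf{0}+a\mathbf{0}\not\leq_{\rm WF}\mathbf{0}+a\mathbf{0}$, and the root condition repairs this.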
\section{A Finite Basis for Weak Failures Semantics} \label{sec3}
\subsection{Axioms for the Weak Failures Preorder}
On BCCS processes, the weak failures preorder as defined above
coincides with the inverse of the must-testing preorder of \cite{DeNHen84}.
A sound and ground-complete axiomatization of the must-testing
preorder has been given in \cite{DeNHen84}, in terms of a language
richer than BCCS\@. After restriction to BCCS processes, and reversing
the axioms, it consists of A1-4 together with the axioms: {\small
$$\begin{array}{l@{\qquad}rcl}
\mbox{N1} & \alpha x + \alpha y &\approx& \alpha (\tau x + \tau y)\\
\mbox{N2} & \tau(x+y) &\preccurlyeq& x + \tau y\\
\mbox{N3} & \alpha x + \tau(\alpha y + z) &\approx& \tau(\alpha x + \alpha y + z)\\
\mbox{E1} & x &\preccurlyeq& \tau x + \tau y
\end{array}$$}%
Here we simplify this axiomatization to A1-4 and WF1-3 from Tab.~\ref{tab1}.
In fact it is an easy exercise to derive WF1-3 from N1, N2 and E1,
and N1, N2 and E1 from WF1-3. It is a little harder to check that N3
is derivable from the other three axioms (cf.~Lem.~\ref{lemma5}).
\begin{table}[t] \center
\begin{tabular}{l@{\qquad}rcl}
WF1 & $a x+ay$ & $\approx$ & $a(\tau x+\tau y)$\\
WF2 & $\tau(x+y)$ & $\preccurlyeq$ & $\tau x+ y$\\
WF3 & $x$ & $\preccurlyeq $ & $ \tau x+ y$\medskip\\
\end{tabular}\caption{Axiomatization for the weak failures preorder} \label{tab1}
\vspace{-5mm}
\end{table}
\begin{theorem} \label{theo:ground-complete}
\mbox{\rm A1-4+WF1-3} is sound and ground-complete for
$\mathrm{BCCS}(A)$ modulo $\sqsubseteq_{\rm WF}$.
\end{theorem}
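The soundness direction of WF1-3 can be spot-checked semantically on closed instances, reusing the same toy encoding. This is only a finite sanity check on one instance per axiom (our own choice $x=b\mathbf{0}$, $y=c\mathbf{0}$), not a substitute for the soundness proof.

```python
ZERO = frozenset()
def pre(alpha, p): return frozenset({(alpha, p)})
def plus(p, q): return p | q
def tau(p): return pre('tau', p)

def eps(p):
    seen, todo = {p}, [p]
    while todo:
        q = todo.pop()
        for alpha, r in q:
            if alpha == 'tau' and r not in seen:
                seen.add(r)
                todo.append(r)
    return seen

def initials(p):
    return {a for q in eps(p) for a, _ in q if a != 'tau'}

def ends(p, trace):
    states = eps(p)
    for a in trace:
        states = {s for q in states for b, r in q if b == a for s in eps(r)}
    return states

def traces(p, prefix=()):
    yield prefix
    for q in eps(p):
        for a, r in q:
            if a != 'tau':
                yield from traces(r, prefix + (a,))

def sq_WF(p, q):
    """Weak failures preorder on closed terms."""
    leq = all(any(initials(s2) <= initials(s1) for s2 in ends(q, t))
              for t in set(traces(p)) for s1 in ends(p, t))
    has_tau = lambda r: any(alpha == 'tau' for alpha, _ in r)
    return leq and (not has_tau(p) or has_tau(q))

b0, c0 = pre('b', ZERO), pre('c', ZERO)
# WF1 at x = b.0, y = c.0:  a.x + a.y  ==  a.(tau x + tau y)
lhs, rhs = plus(pre('a', b0), pre('a', c0)), pre('a', plus(tau(b0), tau(c0)))
print(sq_WF(lhs, rhs) and sq_WF(rhs, lhs))          # True
# WF2:  tau(x + y)  <=  tau x + y
print(sq_WF(tau(plus(b0, c0)), plus(tau(b0), c0)))  # True
# WF3:  x  <=  tau x + y
print(sq_WF(b0, plus(tau(b0), c0)))                 # True
```

Note that the converse of WF2 fails on this instance, as expected: $\tau b\mathbf{0}+c\mathbf{0}$ can refuse $\{c\}$ after the empty trace, whereas $\tau(b\mathbf{0}+c\mathbf{0})$ cannot.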
In this section, we extend this ground-completeness result with
two $\omega$-completeness results. The first one says, in
combination with Theo.~\ref{theo:ground-complete}, that as long
as our alphabet of actions is infinite, the axioms A1-4+WF1-3
constitute a finite basis for the inequational theory of BCCS$(A)$
modulo $\sqsubseteq_{\rm WF}$.
\begin{theorem} \label{theo:A-infinite}
If $|A|\mathbin=\infty$, then \mbox{\rm A1-4+WF1-3} is $\omega$-complete for
$\mathrm{BCCS}(A)$ modulo $\sqsubseteq_{\rm WF}$.
\end{theorem}
To get a finite basis for the inequational theory of BCCS modulo
$\sqsubseteq_{\rm WF}$ in case $|A|<\infty$, we need to add the
following axiom:
\[\mathrm{WF}_A\qquad \sum_{a\in A} ax_a \preccurlyeq \sum_{a\in A}a x_a +y\]
where the $x_a$ for $a\in A$ and $y$ are distinct variables.
\begin{theorem} \label{theo:A-finite}
If $|A|<\infty$, then \mbox{\rm A1-4+WF1-3+WF}$_A$ is $\omega$-complete for
$\mathrm{BCCS}(A)$ modulo $\sqsubseteq_{\rm WF}$.
\end{theorem}
The rest of this section up to Sec.~\ref{sec:wfe} is devoted to the
proofs of Theorems~\ref{theo:ground-complete}--\ref{theo:A-finite}.
For a start, the inequations in Tab.~\ref{tab:D-eqs} can be derived
from A1-4+WF1-3:
\vspace{-3mm}
\begin{table}
\begin{center}
\begin{tabular}{l@{\qquad}rcl}
D1 & $ \tau (x+y)+x$ & $\approx$ & $\tau (x+y)$\\
D2 & $\tau(\tau x+y)$ & $\approx$ & $\tau x+ y$\\
D3 & $ ax + \tau(ay+z)$ & $\approx$ & $\tau(a x+ a y+z)$\\
D4 & $ \tau x $ & $\preccurlyeq$ & $\tau x +y$\\
D5 & $ \sum_{i\in I} a x_i $ & $\approx$ & $a (\sum_{i\in I}\tau
x_i)\mbox{ for finite nonempty index sets }I $\\
D6 & $ \tau x + y$ & $\approx$ & $\tau x + \tau (x+y) $\\
D7 & $ \tau x + \tau y$ & $\approx$ & $\tau x + \tau (x+y) + \tau y $\\
D8 & $ \tau x + \tau (x+y+z)$ & $\approx$ & $\tau x + \tau (x+y) + \tau(x+y+z) $\\
D9 & $ \sum_{i\in I}\tau (at_i+y_i)$ & $\approx$ & $\sum_{i\in I}\tau (at + y_i)$
for finite $I$, where $t = \sum_{i\in I} \tau t_i$.
\end{tabular}
\end{center}
\caption{Derived inequations} \label{tab:D-eqs}
\end{table}
\vspace{-7mm}
\begin{lemma} \label{lemma5}
\mbox{\rm D1-9} are derivable from \mbox{\rm A1-4+WF1-3}.
\end{lemma}
\begin{proof}
We shorten ``$\mbox{A1-4+WF1-3} \vdash$'' to ``$\vdash$''.\vspace{-1ex}
\begin{enumerate}
\item
By WF3, $\vdash x \preccurlyeq \tau x$, and thus $\vdash\tau x+x\preccurlyeq
\tau x$. Moreover, by WF2,\\ $\vdash\tau (x+x) \preccurlyeq \tau x+x$,
hence $\vdash\tau x\preccurlyeq \tau x+x$. In summary, $\vdash\tau x\approx
\tau x+x$.
So $\vdash \tau(x+y)\approx \tau(x+y)+x+y+x \approx \tau(x+y)+x$.
\medskip
\item
By WF2, $\vdash \tau(x+\tau x)\preccurlyeq \tau x+\tau x = \tau
x$, so by D1, $\vdash \tau\tau x\preccurlyeq \tau x$.
Hence, by WF2, $\vdash\tau (\tau x+y) \preccurlyeq \tau \tau x+y \preccurlyeq \tau x+y$.
Moreover, by WF3, $\vdash\tau x+y \preccurlyeq \tau(\tau x+y)$.
\medskip
\item By WF3, $\vdash y \preccurlyeq \tau y+\tau x$. So by WF1, $\vdash ay\preccurlyeq a(\tau x+\tau y)\approx ax+ay$. This implies $\vdash \tau(ay+z)\preccurlyeq \tau(ax+ay+z)$. Hence, by D1, $\vdash ax+\tau(ay+z)\preccurlyeq ax+ \tau(ax+ay+z) \approx \tau(ax+ay+z)$.
Moreover, by WF2, $ \vdash \tau(a x+ a y+z) \preccurlyeq ax +
\tau(ay+z)$.
\medskip
\item By WF3 and D2, $\vdash \tau x\preccurlyeq \tau\tau x+y \approx \tau x+y$.
\medskip
\item By induction on $|I|$, using WF1 and D2.
\medskip
\item By D4 and D1, $\vdash \tau x+y \preccurlyeq \tau x+\tau(x+y)+y \approx \tau x+\tau(x+y)$.
Moreover, by WF2, $\vdash \tau x+\tau(x+y)\preccurlyeq \tau x+\tau x+y=\tau x+y$.
\medskip
\item By D4 in one direction; by D6 and D1 in the other.
\medskip
\item By D4 in one direction; by D6 and D1 in the other.
\medskip
\item By D1, $\vdash \sum_{i\in I}\tau (at_i+y_i) \approx
\sum_{i\in I}\tau (at_i+y_i) +u$, where $u=\sum_{i\in I}at_i$.
Thus, by repeated application of D3, $\vdash \sum_{i\in I}\tau (at_i+y_i) \approx
\sum_{i\in I}\tau (at_i+u+y_i) = \sum_{i\in I}\tau (u+y_i)$.
By D5 we have $u=at$.
\qed
\end{enumerate}
\end{proof}
\subsection{Normal Forms}
The notion of a normal form, which is formulated in the following
two definitions, will play a key role in the forthcoming proofs.
For any set $L\subseteq A\cup V$ of actions and variables let $A_L
= L \cap A$, the set of actions in $L$, and $V_L = L \cap V$, the
set of variables in $L$.
\begin{definition}[Saturated family]\rm Suppose $\mc{L}$ is a finite family of
finite sets of {actions} and {variables}. We say $\mc{L}$ is
\emph{saturated} if it is nonempty and
\begin{itemize}
\item $L_1, L_2\in \mc{L}$ implies that $L_1\cup L_2 \in
\mc{L}$; and
\item $L_1, L_2\in \mc{L}$ and $L_1\subseteq L_3 \subseteq L_2$
imply that $L_3\in \mc{L}$.
\end{itemize}
\end{definition}
\begin{definition}[Normal form]\rm
\begin{list}{$\bullet$}{\leftmargin 18pt
\labelwidth\leftmargini\advance\labelwidth-\labelsep}
\item[(i)] A term $t$ is in $\tau$ normal form if\vspace{-6pt}
\[t = \sum_{L\in \mc{L}}\tau \left(\sum_{a \in A_L} at_a+ V_L\right)\vspace{-3pt}\]
where the $t_a$ are in normal form and $\mc{L}$ is a saturated
family of sets of actions and variables. We write $L(t)$ for
$\bigcup_{L\in\mc{L}}L$; note that $L(t) \in \mc{L}$.
\medskip
\item[(ii)] $t$ is in action normal form if
\[t= \sum_{a\in A_L} at_a + V_L \]
where the $t_a$ are in normal form and $L\subseteq A \cup V$. We
write $L(t)$ for $L$.
\medskip
\item[(iii)] $t$ is in normal form if it is either in $\tau$
normal form or in action normal form.\pagebreak[3]
\end{list}
\end{definition}
Note that the definition of a normal form requires that for any $a\in A$, if
$t\Rightarrow\mv{a} t_1$ and $t\Rightarrow\mv{a} t_2$, then $t_1$ and $t_2$ are
syntactically identical.
We prove that every term can be equated to a normal form. We start
with an example.
\begin{example}
Suppose $t= \tau(at_1+\tau(bt_2+ct_3)+x) +\tau(at_4+\tau x+\tau y)+z$.
Then $t$ can be equated to a $\tau$ normal form with
$\mc{L}=\{${\small$\{a,b,c, x\},\{a, x,y\}, \{a,b,c, x,y,z\}$,
$\{a,b,c, x,y\},\hspace{-.6pt}\{a,b,c, x,z\},\hspace{-.6pt}
\{a,b, x,y\},\hspace{-.5pt} \{a,c, x,y\},\hspace{-.5pt}
\{a, x,y,z\},\hspace{-.6pt} \{a,b, x,y,z\},\hspace{-.6pt}
\{a,c, x,y,z\}$}$\}$.
We give a detailed derivation. By D2,
\[
\vdash t \approx \tau(at_1+bt_2+ct_3+x) +\tau(at_4+ x+ y)+z
\]
By D6,
\[
\vdash t \approx \tau(at_1+bt_2+ct_3+x) +\tau(at_4+x+y)+\tau(at_4+x+y+z)
\]
Let $u_a = \tau t_1+\tau t_4$, $u_b=t_2$ and $u_c=t_3$. By D9,
\[
\vdash t \approx \tau(au_a+bu_b+cu_c+x)+\tau(au_a+x+y)+\tau(au_a+x+y+z)
\]
By induction, $u_a$ can be brought into a normal form $t_a$, and
likewise for $u_b$ and $u_c$. So
\[
\vdash t \approx \tau(at_a+bt_b+ct_c+x)+\tau(at_a+x+y)+\tau(at_a+x+y+z)
\]
\]
By D7,
\[
\vdash t \approx \begin{array}[t]{l}
\tau(at_a+bt_b+ct_c+x) +\tau(at_a+x+y)\\
+\tau(at_a+x+y+z) + \tau(at_a+bt_b+ct_c+x+y+z)
\end{array}
\]
Finally, by D8,
\[
\vdash t \begin{array}[t]{l@{}}
\approx \begin{array}[t]{@{}l@{}} \tau(at_a+bt_b+ct_c+x)
+\tau(at_a+x+y) +\tau(at_a+x+y+z)\\
+ \tau(at_a+bt_b+ct_c+x+y+z)
+ \tau(at_a+bt_b+ct_c+x+y)\\
+ \tau(at_a+bt_b+ct_c+x+z)
+ \tau(at_a+bt_b+x+y)
+ \tau(at_a+ct_c+x+y)\\
+ \tau(at_a+bt_b+x+y+z)
+ \tau(at_a+ct_c+x+y+z)
\end{array}\\
= \sum_{L\in \mc{L}}\tau(\sum_{a\in A_L} a t_a + V_L)
\end{array}
\]
\end{example}
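The saturation step carried out by D7 and D8 at the end of the example is a straightforward fixpoint computation: close the family under binary unions and under filling in all sets between two members. The sketch below (with the hypothetical helper `saturate`) reproduces the ten-set family of the example from its three generating sets $\{a,b,c,x\}$, $\{a,x,y\}$ and $\{a,x,y,z\}$.

```python
def saturate(family):
    """Least saturated family containing `family`: closed under binary
    union, and containing every L3 with L1 <= L3 <= L2 for members
    L1, L2."""
    fam = {frozenset(L) for L in family}
    while True:
        new = {L1 | L2 for L1 in fam for L2 in fam} - fam
        for L1 in fam:
            for L2 in fam:
                if L1 < L2:
                    gap = sorted(L2 - L1)
                    # every set strictly between L1 and L2
                    for mask in range(1 << len(gap)):
                        L3 = L1 | {e for i, e in enumerate(gap)
                                   if mask >> i & 1}
                        if L3 not in fam:
                            new.add(L3)
        if not new:
            return fam
        fam |= new

# The three sets read off from t before saturation ...
gens = [{'a', 'b', 'c', 'x'}, {'a', 'x', 'y'}, {'a', 'x', 'y', 'z'}]
# ... generate exactly the ten-set family listed in the example.
print(len(saturate(gens)))  # 10
```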
\begin{lemma} \label{nf}
For any term $t$, $\mbox{\rm A1-4+WF1-3} \vdash t\approx t'$ for some normal form $t'$.
\end{lemma}
\begin{proof}
By induction on $|t|$. We distinguish two cases.
\medskip
\begin{itemize}
\item $t\,\not\!\mv{\tau}$. Let $t= \sum_{i\in I}a_i t_i+Y$. By D5,
\[\vdash t\approx \sum_{a\in \mc{I}(t)} a(\!\!\!\!\sum_{i\in I, a_i=a}\!\!\!\!\tau t_i)+Y~.\]
By induction, for each $a\in\mc{I}(t)$,
\[\vdash \!\!\!\!\sum_{i\in I,a_i=a}\!\!\!\! \tau t_i\approx t_a\]
for some normal form $t_a$. So we are done.
\medskip
\item $t \mv{\tau}$. By D6, $t$ can be brought in the form
$\sum_{i\in I}\tau t_i$ with $I\neq\emptyset$, and using D2 one can
even make sure that $t_i\,\not\!\mv{\tau}$ for $i\in I$.
Using the first case in this proof, we obtain,
for each $i\in I$,
\[
\vdash t_i \approx \sum_{a\in A_{L(i)}}a t_{a,i} + V_{L(i)}
\]
for some $L(i) \subseteq A \cup V$.
Thus\vspace{-3pt}
\[
\vdash t \approx \sum_{i\in I}
\tau \left(\sum_{a\in A_{L(i)}}a t_{a,i} + V_{L(i)}\right) ~.
\vspace{-3pt}\]
For each $a\in \mc{I}(t)$, we define \qquad
$\displaystyle u_a=\!\!\!\!\sum_{i\in I,~ a \in A_{L(i)}}\!\!\!\! \tau t_{a,i}~.$\\[1ex]
Then $|u_a| < |t|$.
By induction, $\vdash u_a \approx t_a$ for some normal form $t_a$.\\
Define $\mc{L} = \{L(i) \mid i \in I\}$.
By repeated application of D9 we obtain\vspace{-3pt}
\[
\vdash t \approx \sum_{i\in I} \tau \left( \sum_{a\in
A_{L(i)}}\!\!a u_{a} + V_{L(i)}\right)
\approx \sum_{L\in \mc{L}}\tau\left( \sum_{a\in A_L}\ at_a +V_L\right)
~.\vspace{-3pt}\]
The latter term has the required form, except that the family $\mc{L}$
need not be saturated. However, it is straightforward to saturate
$\mc{L}$ by application of D7 and D8.
\qed
\end{itemize}
\end{proof}
\begin{lemma} \label{essential}
Suppose $t$ and $u$ are both in normal form and $t\sqsubseteq_{\rm WF} u$.
If $t\Rightarrow \mv{a}t_a$, then there exists a term $u_a$ such that $u\Rightarrow\mv{a} u_a$ and $t_a\leq_{\rm WF} u_a$.
\end{lemma}
\begin{proof}
Suppose $t\sqsubseteq_{\rm WF} u$ and $t\Rightarrow \mv{a} t_a$.
Let $\sigma$ be the closed substitution given by $\sigma(x)=\mathbf{0}$ for
all $x\in V$. As $(a,\emptyset)$ is a weak failure
pair of $\sigma(t)$ and $\sigma(t)\sqsubseteq_{\rm WF} \sigma(u)$, it
is also a weak failure pair of $u$. Thus there
exists a term $u_a$ such that
\raisebox{0pt}[0pt][0pt]{$u\Rightarrow\mv{a} u_a$}.
By the definition of a normal form, this term is unique.\hfill(*)
We now show that $t_a \leq_{\rm WF} u_a$.
Let $\rho$ be a closed substitution.
Consider a weak failure pair $(a_1\cdots a_k,B)$ of $\rho(t_a)$.
Then $(aa_1\cdots a_k,B)$ is a weak failure pair of
$\rho(t)$, and hence also of $\rho(u)$. It suffices to conclude that
$(a_1\cdots a_k,B)$ is a weak failure pair of $\rho(u_a)$.
However, we can \emph{not} conclude this directly, as
possibly $u\Rightarrow x+u'$ where
$(a a_1\cdots a_k,B)$ is a weak failure pair of $\rho(x)$.
To ascertain that nevertheless $(a_1\cdots a_k,B)$ is a weak
failure pair of $\rho(u_a)$,
we define a modification $\rho'$ of $\rho$ such that
for all $\ell\leq k$ and for all terms $v$, $\rho(v)$ and $\rho'(v)$
have the same weak failure pairs $(c_1\cdots c_\ell,B)$, while for all
$x\in V$, $(a a_1\cdots a_k,B)$ is not a weak failure pair of $\rho'(x)$.
We obtain $\rho'(x)$ from $\rho(x)$ by replacing subterms $bp$ at
depth $k$ by $\mathbf{0}$ if $b\not\in B$ and by $bb\mathbf{0}$ if $b\in B$.
That is,
\newcommand{\chop}[1][]{\mathit{chop}_{#1}}
\[
\rho'(x)=\chop[k](\rho(x))
\]
with ${\it chop}_m$ for all $m\geq 0$ inductively defined by
\[ \begin{array}{lcl}
\chop[m](\mathbf{0}) &=& \mathbf{0} \\
\chop[m](p+q) &=& \chop[m](p)+\chop[m](q) \\
\chop[m](\tau p) &=& \tau\,\chop[m](p)\\
\chop[0](bp) &=& \left \{ \begin{array}{ll}
\mathbf{0} & \mbox{if $b\not\in B$} \\
bb\mathbf{0} & \mbox{if $b\in B$}
\end{array} \right. \\
\chop[m+1](bp) &=& b\,\chop[m](p)
\end{array}
\]
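Property (\ref{rhomodB}) below can also be tested by brute force on small closed terms. In the following sketch `chop` and `refusable` are our own illustrative names, and the refusal set $B$ is passed as an explicit parameter rather than taken from the context; in the frozenset encoding of closed terms, $\mathit{chop}_m$ distributes over summands exactly as in the inductive definition.

```python
ZERO = frozenset()
def pre(alpha, p): return frozenset({(alpha, p)})
def plus(p, q): return p | q

def eps(p):
    seen, todo = {p}, [p]
    while todo:
        q = todo.pop()
        for alpha, r in q:
            if alpha == 'tau' and r not in seen:
                seen.add(r)
                todo.append(r)
    return seen

def initials(p):
    return {a for q in eps(p) for a, _ in q if a != 'tau'}

def chop(m, p, B):
    """chop_m with the refusal set B explicit.  It distributes over
    summands, commutes with tau-prefixing, and at depth m replaces a
    summand b.q by 0 (b not in B) or by b.b.0 (b in B)."""
    out = set()
    for alpha, q in p:
        if alpha == 'tau':
            out.add(('tau', chop(m, q, B)))
        elif m > 0:
            out.add((alpha, chop(m - 1, q, B)))
        elif alpha in B:
            out.add((alpha, pre(alpha, ZERO)))  # b.q becomes b.b.0
        # m == 0 and alpha not in B: the summand b.q becomes 0
    return frozenset(out)

def refusable(p, trace, B):
    """Does p have the weak failure pair (trace, B)?"""
    states = eps(p)
    for a in trace:
        states = {s for q in states for b, r in q if b == a for s in eps(r)}
    return any(not (initials(s) & B) for s in states)

# Property (B) at m = 1: chop_1(p) has no weak failure pair (c0 c1, B),
# although p itself has the weak failure pair (ac, B).
B = {'b'}
p = pre('a', plus(pre('b', ZERO), pre('c', pre('a', ZERO))))
print(any(refusable(chop(1, p, B), (c0, c1), B)
          for c0 in 'abc' for c1 in 'abc'))  # False
```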
We proceed to prove that $\rho'$ has the desired properties
mentioned above.
\begin{enumerate}
\renewcommand{\theenumi}{\Alph{enumi}}
\item \label{rhomodA}
For all $\ell\leq k$ and $c_1,\ldots,c_\ell\in A$ and for all terms
$v$, $\rho(v)$ and
$\rho'(v)$ have the same weak failure pairs $(c_1\cdots c_\ell,B)$.
The difference between $\rho(v)$ and $\rho'(v)$ only appears within
subterms of depth $k$, that is for terms $p$ such that
$\rho(v)\Rightarrow \mv{c_1} \Rightarrow \cdots \Rightarrow
\mv{c_k} \Rightarrow p$ for certain $c_1,\ldots,c_k \in A$. Such a
subterm $p$ of $\rho(v)$ corresponds to a subterm $p'$ of
$\rho'(v)$---still satisfying $\rho'(v)\Rightarrow \mv{c_1}
\Rightarrow \cdots \Rightarrow \mv{c_k} \Rightarrow p'$---in which
certain subterms $bq$ are replaced by $\mathbf{0}$ if $b\not\in B$ and by
$bb\mathbf{0}$ if $b\in B$. For such corresponding subterms $p$ and $p'$
we have $\mc{I}(p)\cap B=\emptyset$ if and only if $\mc{I}(p')\cap
B=\emptyset$. From this the claim follows immediately.
\medskip
\item \label{rhomodB}
For all $x\in V$,
$(a a_1\cdots a_k,B)$ is not a weak failure pair of $\rho'(x)$.
\medskip
To this end we show that for all closed terms $p$, $\chop[m](p)$
does not have any weak failure pair $(c_0\cdots c_{m},B)$
with $c_0,\ldots,c_m\in A$.
We apply induction on $m$.
\smallskip
\noindent
\textit{Base case:} Since the summands of $\chop[0](p)$, when
skipping over initial $\tau$-steps, are
$bb\mathbf{0}$ with $b\in \mc{I}(p)\cap B$, $\chop[0](p)$ does not
have a weak failure pair $(c_0,B)$.
\smallskip
\noindent
\textit{Induction step:} Let $m>0$. By induction, for closed terms $q$,
$\chop[m-1](q)$ does not have weak failure pairs $(c_1\cdots c_{m},B)$.
Since the transitions of $\chop[m](p)$ are
$\chop[m](p)\mv{c}\chop[m-1](q)$ for $p\mv{c}q$, it
follows that $\chop[m](p)$ does not have weak failure pairs
$(c_0\cdots c_{m},B)$.
\end{enumerate}
\noindent
Now, since $(a_1\cdots a_k,B)$ is a weak failure pair of $\rho(t_a)$,
by property (\ref{rhomodA}) it is also a weak failure pair of $\rho'(t_a)$.
Therefore $(a a_1\cdots a_k,B)$ is a weak failure pair of $\rho'(t)$,
and hence also of $\rho'(u)$.
Since according to property (\ref{rhomodB}) it is \emph{not} the case that
$u\Rightarrow x+u'$ with $(a a_1\cdots a_k,B)$ a weak failure pair
of $\rho'(x)$, it must be the case that $u\Rightarrow \mv{a}u''$ such
that $(a_1\cdots a_k,B)$ is a weak failure pair of $\rho'(u'')$.
By (*), $u''=u_a$.
Again by property (\ref{rhomodA}), $(a_1\cdots a_k,B)$ is a
weak failure pair of $\rho(u_a)$.
\qed
\end{proof}
\subsection{$\omega$-Completeness Proof}
We are now in a position to prove Theo.~\ref{theo:A-infinite}
($\omega$-completeness in case of an infinite alphabet) and
Theo.~\ref{theo:A-finite} ($\omega$-completeness in case of a
finite alphabet), along with Theo.~\ref{theo:ground-complete}
(ground completeness). We will prove these three theorems in one
go. Namely, in the proof, two cases are distinguished; only in the
second case ($\mc{I}(t)=A$), in which $A$ is guaranteed to be
finite, will the axiom WF$_A$ play a role.
\begin{proof}
Let $t\sqsubseteq_{\rm WF} u$. We need to show that $\vdash
t\preccurlyeq u$. We apply induction on $|t|+|u|$. By
Lem.~\ref{nf}, we can write $t$ and $u$ in normal form.
We first prove that $L(t)\subseteq L(u)$. Suppose this is not the
case. Then there exists some $a\in A_{L(t)}\setminus A_{L(u)}$ or
some $x\in V_{L(t)}\setminus V_{L(u)}$. In the first case, let
$\sigma$ be the closed substitution with $\sigma(z)=\mathbf{0}$ for all
$z\in V$; we find that $(a,\emptyset)$ is a weak failure pair of
$\sigma(t)$ but not of $\sigma(u)$, which contradicts the fact that
$\sigma(t)\sqsubseteq_{\rm WF} \sigma(u)$. In the second case, pick some
$d>\max\{|t|,|u|\}$, and consider the closed substitution
$\sigma(x)=a^d\mathbf{0}$ and $\sigma(z)=\mathbf{0}$ for $z\neq x$. Then
$(a^d,\emptyset)$ is a weak failure pair of $\sigma(t)$. However, it
can \emph{not} be a weak failure pair of $\sigma(u)$, again
contradicting $\sigma(t)\sqsubseteq_{\rm WF} \sigma(u)$.
We distinguish two cases, depending on whether $\mc{I}(t)=A$ or
not.
\begin{enumerate}
\item $\mc{I}(t) \neq A$. We distinguish three cases.
Due to the condition that $t\mv{\tau}$ implies $u\mv{\tau}$, it
cannot be the case that $t$ is an action normal form and $u$ a
$\tau$ normal form.
\medskip
\begin{enumerate}
\item $t$ and $u$ are both action normal forms.
So $t= \sum_{a\in A_L} at_a + V_L$ and $u= \sum_{a\in A_M}
au_a+V_M$. We show that $L(t)=L(u)$. Namely, pick $b\in A\setminus
A_L$, and let $\sigma$ be the closed substitution with
$\sigma(z)=\mathbf{0}$ for any $z\in V_L$, and $\sigma(z)=b\mathbf{0}$ for
$z\not\in V_L$. As $(\varepsilon,A\setminus\mc{I}(t))$ is a weak
failure pair of $\sigma(t)$, and
hence of $\sigma(u)$, it must be that $L(u)\subseteq L(t)$.
Together with $L(t)\subseteq L(u)$ this gives $L(t)=L(u)$. By
Lem.~\ref{essential}, for each $a\in \mc{I}(t)$, $t_a \leq_{\rm WF}
u_a$, and thus clearly $t_a\sqsubseteq_{\rm WF} \tau u_a$. By
induction, $\vdash t_a\preccurlyeq \tau u_a$ and hence $\vdash
at_a\preccurlyeq au_a$. It follows that
\[\vdash t = \sum_{a\in A_L} at_a +V_L \preccurlyeq \sum_{a\in A_L}
au_a +V_L = \sum_{a\in A_M} au_a +V_M = u\]
\item Both $t$ and $u$ are $\tau$ normal forms:
\[ t= \sum_{L\in \mc{L}}\tau (\sum_{a \in A_L} at_a+ V_L)\]
and
\[ u = \sum_{M\in \mc{M}}\tau (\sum_{a \in A_M} au_a+ V_M)\]
By Lem.~\ref{essential}, for each $a\in \mc{I}(t)$, $t_a \leq_{\rm WF}
u_a$, and thus clearly $t_a\sqsubseteq_{\rm WF} \tau u_a$. By
induction, $\vdash t_a\preccurlyeq \tau u_a$. By these
inequalities, together with D4,
\begin{equation} \label{eqnX}
\vdash t \preccurlyeq \sum_{L\in \mc{L}} \tau(\sum_{a\in A_L} au_a
+ V_L) + u
\end{equation}
\medskip
We now show that $\mc{L}\subseteq \mc{M}$. Take any $L\in \mc{L}$,
pick $b\in A \setminus A_L$, and consider the closed substitution
$\sigma(z)=\mathbf{0}$ for any $z\in V_L$, and $\sigma(z)=b\mathbf{0}$ for
$z\not\in V_L$. Since $\sigma(t)\mv{\tau} \sigma(\sum_{a\in A_L}at_a + V_L)$
and $\sigma(t)\sqsubseteq_{\rm WF} \sigma(u)$, there exists an $M\in
\mc{M}$ with $A_M\subseteq A_L$ and $V_M \subseteq V_L$. Since
also $L\subseteq L(t) \subseteq L(u)$, and $\mc{M}$ is saturated,
it follows that $L\in\mc{M}$. Hence, $\mc{L} \subseteq \mc{M}$.
\medskip
Since $\mc{L} \subseteq \mc{M}$,
\begin{equation} \label{eqnY}
\sum_{L\in \mc{L}}\tau(\sum_{a\in A_L} au_a + V_L) + u = u
\end{equation}
By (\ref{eqnX}) and (\ref{eqnY}), $ \vdash t\preccurlyeq u$.
\medskip
\item $t$ is an action normal form and $u$ is a $\tau$ normal
form. Then $\tau t \sqsubseteq_{\rm WF} u$. Note that $\tau t$ is a
$\tau$ normal form, so according to the previous case,
\[\vdash \tau t \preccurlyeq u\]
By WF3,
\[\vdash t\preccurlyeq \tau t \preccurlyeq u \]
\end{enumerate}
\medskip
\item $\mc{I}(t)=A$. Note that in this case, $|A|<\infty$. So, according to
Theo.~\ref{theo:A-finite}, axiom WF$_A$ is at our disposal. As
before, we distinguish three cases.
\medskip
\begin{enumerate}
\item Both $t$ and $u$ are action normal forms. Since
$L(t)\subseteq L(u)$ we have
$t= \sum_{a\in A} at_a +W$ and $u= \sum_{a\in A} au_a+X$ with $W
\subseteq X$. By WF$_A$,
\[ \vdash \sum_{a\in A}a t_a \preccurlyeq \sum_{a\in A}a t_a + u\]
By Lem.~\ref{essential}, for each $a\in A$, $t_a \leq_{\rm WF} u_a$, and
thus clearly $t_a\sqsubseteq_{\rm WF} \tau u_a$. By induction,
$\vdash t_a\preccurlyeq \tau u_a$. It follows, using $W\subseteq X$, that
\[ \vdash t=\sum_{a\in A}a t_a +W \preccurlyeq \sum_{a\in A}a u_a + u + W=u\]
\medskip
\item Both $t$ and $u$ are $\tau$ normal forms.
\[t= \sum_{L\in \mc{L}}\tau (\sum_{a \in A_L} at_a+V_L)\]
and
\[ u = \sum_{M\in \mc{M}}\tau (\sum_{a \in A_M} au_a+V_M)\]
By D1 and WF$_A$ (clearly, in this case $A_{L(t)}=A$),
\begin{equation} \label{eqPP}
\vdash t\approx t+\sum_{a\in A}a t_a \preccurlyeq t+\sum_{a\in A}a t_a+u
\end{equation}
By Lem.~\ref{essential}, for each $a\in A$, $t_a \leq_{\rm WF} u_a$, and thus clearly
$t_a\sqsubseteq_{\rm WF} \tau u_a$. By induction, $\vdash
t_a\preccurlyeq \tau u_a$. By these inequalities, together with (\ref{eqPP}),
\[
\vdash t \preccurlyeq \sum_{L\in \mc{L}}\tau (\sum_{a \in A_L} au_a+V_L)
+ \sum_{a\in A}a u_a + u
\]
So by D1,
\begin{equation} \label{eqnQQ} \vdash t \preccurlyeq \sum_{L\in \mc{L}}
\tau (\sum_{a \in A_L} au_a+V_L) + u\end{equation}
Now for $L\in \mc{L}$ with $A_L\neq A$ we have $L \in \mc{M}$ using
the same reasoning as in 1(b). For $L\in \mc{L}$ with $A_L=A$ we
have $V_L \subseteq V_{L(t)} \subseteq V_{L(u)}$. By WF$_A$ we have
\begin{equation} \label{eqnSS}
\vdash \tau(\sum_{a \in A_L} au_a+V_L) \preccurlyeq
\tau(\sum_{a \in A} au_a+V_{L(u)})
\end{equation}
As the latter is a summand of $u$, we obtain $\vdash t\preccurlyeq u$.
\medskip
\item $t$ is an action normal form and $u$ is a $\tau$ normal
form. This can be dealt with as in case 1(c).
\end{enumerate}
\end{enumerate}
This completes the proof. \qed
\end{proof}
\subsection{Weak Failures Equivalence} \label{sec:wfe}
In \cite{AFI07,FG07} an algorithm is presented which takes as
input a sound and ground-complete inequational axiomatization $E$
for BCCSP modulo a preorder $\sqsubseteq$ which \emph{includes the
ready simulation preorder} and is \emph{initials
preserving},\footnote{meaning that $p\sqsubseteq q$ implies that
$I(p)\subseteq I(q)$, where the set $I(p)$ of \emph{strongly}
initial actions is $I(p) = \{\alpha \in A_{\tau} \mid
p\mv{\alpha}\}$} and generates as output an equational
axiomatization $\mc{A}(E)$ which is sound and ground-complete for
BCCSP modulo the corresponding equivalence---its kernel: $\sqsubseteq
\cap \sqsubseteq^{-1}$. Moreover, if the
original axiomatization $E$ is $\omega$-complete, so is the
resulting axiomatization. The axiomatization $\mc{A}(E)$ generated
by the algorithm from $E$ contains the axioms A1-4 for
bisimulation equivalence and the axioms $\beta(\alpha x + z) +
\beta (\alpha x + \alpha y + z) \approx
\beta (\alpha x + \alpha y + z)$ for $\alpha,\beta\in A_\tau$
that are valid in ready simulation semantics, together with the
following equations, for each inequational axiom $t \preccurlyeq
u$ in $E$:
\begin{itemize}
\item $ t + u \approx u$; and
\item $\alpha(t + x) + \alpha(u + x) \approx \alpha(u + x)$ (for
each $\alpha\in A_\tau$, and some variable $x$ that does not occur
in $t + u$).
\end{itemize}
Moreover, if $E$ contains an equation (formally abbreviating two
inequations), this equation is logically equivalent to the four
equations in $\mc{A}(E)$ that are derived from it, and hence can be
incorporated in the equational axiomatization unmodified.
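To illustrate the construction: assuming that axiom WF3 of Tab.~\ref{tab1} reads
$x \preccurlyeq \tau x + y$ (a shape consistent with the generated equations
WF3$^a$ and WF3$^b$ shown below in Tab.~\ref{tab-aux}), the two schemes
instantiate to
\[ x + \tau x + y \approx \tau x + y \]
and, for each $\alpha\in A_\tau$ and a fresh variable $z$,
\[ \alpha(x+z) + \alpha(\tau x + y + z) \approx \alpha(\tau x + y + z). \]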
Recently, we lifted this result to weak semantics
\cite{CFG08b}, which makes the aforementioned algorithm applicable
to all 87 preorders surveyed in \cite{Gla93} that are at least as
coarse as the ready simulation preorder. In particular, among other
results, we showed:
\begin{theorem}\label{alg}
Let $\sqsubseteq$ be a weak initials preserving precongruence%
\footnote{meaning that $p\sqsubseteq q$ implies that
$\mc{I}_{\tau}(p)\subseteq \mc{I}_\tau(q)$, where the set
$\mc{I}_\tau(p)$ of \emph{weak} initial actions is $\mc{I}_\tau(p)
= \{\alpha \in A_{\tau} \mid p\Rightarrow\mv{\alpha}\}$} that
contains the strong ready simulation preorder $\sqsubseteq_{\rm RS}$
and satisfies T2 (the second $\tau$-law of CCS: $\tau x \approx
\tau x + x$), and let $E$ be a sound and ground-complete
axiomatization of $\sqsubseteq$. Then $\mc{A}(E)$ is a sound and
ground-complete axiomatization of the kernel of $\sqsubseteq$.
Moreover, if $E$ is $\omega$-complete, then so is $\mc{A}(E)$.
\end{theorem}
It is straightforward to check that the weak failures preorder meets the
prerequisites of Theo.~\ref{alg}, and thus we can run the
algorithm and obtain the axiomatization in Tab.~\ref{tab-aux} for
weak failures equivalence.
\begin{table}
\vspace{-5pt}
\begin{center}
\begin{tabular}{l@{\qquad}rcl}
WF1 & $ax+ay$ & $\approx$ & $a(\tau x+\tau y)$\\
WF2$^a$ & $\tau(x+y)+\tau x +y$ & $\approx$ & $\tau x+ y$\\
WF2$^b$ & $\alpha(\tau(x+y)+z) + \alpha (\tau x+y +z) $ & $\approx$ & $\alpha(\tau x+ y+z)$\\
WF3$^a$ & $x+\tau x+y$ & $\approx $ & $ \tau x+ y$\\
WF3$^b$ & $\alpha(x+z)+\alpha(\tau x+y+z)$ & $\approx $ & $ \alpha(\tau x+ y+z)$\\
RS & $\beta(\alpha x + z) + \beta(\alpha x + \alpha y + z)$ & $\approx $ &
$\beta(\alpha x + \alpha y + z)$\\
WF$_A^{~~a}$ & $\sum_{a\in A} ax_a + \sum_{a\in A} ax_a + y$ & $\approx $ &
$\sum_{a\in A} ax_a + y$\\
WF$_A^{~~b}$ & $\beta(\sum_{a\in A} ax_a + z) + \beta(\sum_{a\in A} ax_a + y+z)$ & $\approx $ & $\beta(\sum_{a\in A} ax_a + y+z)$
\end{tabular}
\end{center}\caption{Axiomatization generated from the algorithm}\label{tab-aux}
\vspace{-5mm}
\end{table}
\noindent After simplification and omission of redundant axioms,
we obtain the axiomatization in Tab.~\ref{tab5}.
\begin{table}
\vspace{-5pt}
\begin{center}
\begin{tabular}{l@{\qquad}rcl}
WF1 & $ax+ay$ & $\approx$ & $a(\tau x+\tau y)$\\
WFE2 & $\tau(x+y)+\tau x$ & $\approx$ & $\tau x+ y$\\
WFE3 & $ax+\tau(ay+z)$ & $\approx$ & $\tau(a x+ a y+z)$\\
WFE$_A$ & $\tau(\sum_{a\in A} ax_a + z) + \tau(\sum_{a\in A} ax_a + y+z)$
& $\approx $ & $\tau(\sum_{a\in A} ax_a + y+z)$
\end{tabular}
\end{center}
\caption{Axiomatization for weak failures equivalence} \label{tab5}
\vspace{-5mm}
\end{table}
\begin{lemma} \label{lem:four}
The axioms in \emph{Tab.~\ref{tab-aux}} are
derivable from the axioms in
\emph{Tab.~\ref{tab5}} together with A1-4.
\end{lemma}
\begin{proof}
WF1 is unmodified. WF2$^a$ and WF3$^a$ can be trivially
derived from WFE2. WF$_A^{~~a}$ is derivable using A3.
To proceed, we have that WFE2$\vdash \tau\tau x \approx \tau x$ (namely by
substituting $\tau x$ for $y$ and invoking D1) and hence also WFE2$\vdash$D2
(namely by substituting $\tau x $ for $x$ in WFE2 and invoking D1);
using D2, the instances of WF2$^b$ and WF3$^b$ with $\alpha=\tau$, as well as the
instance of RS with $\beta=\alpha=\tau$,
are derivable from WFE2.
The instances of WF2$^b$ and WF3$^b$ with $\alpha\neq\tau$,
are derivable from WF1 and the instances with $\alpha=\tau$;
the same holds for the instances of RS and WF$_A^{~~b}$ with
$\beta\neq\tau$.
Finally, for the remaining instances of RS
(with $\beta=\tau$ and $\alpha=a\in A$), we have WFE2$\vdash \tau(ax+z)+\tau(ax+ay+z)\approx
\tau(ax+z)+ay$,
so these instances can be derived from WFE3. The instance of
WF$_A^{~~b}$ with $\beta=\tau$ is exactly WFE$_A$. \qed
\end{proof}
The axioms WF1, \mbox{WFE2-3} already appeared in \cite{Gla97}.
A1-4+WF1+WFE2-3 is sound and ground-complete for BCCS modulo
$\equiv_{\rm WF}$ (see also \cite{Gla97,CFG08b}). By
Theo.~\ref{theo:A-infinite} and Theo.~\ref{theo:A-finite}
(together with Lem.~\ref{lem:four}), we have:
\begin{corollary}
If $|A|=\infty$, then the axiomatization \mbox{\rm A1-4+WF1+WFE2-3} is
$\omega$-complete for $\mathrm{BCCS}(A)$ modulo $\equiv_{\rm WF}$.
\end{corollary}
\begin{corollary}
If $|A| <\infty$, then the axiomatization \mbox{\rm A1-4+WF1+WFE2-3+WFE$_A$} is
$\omega$-complete for
$\mathrm{BCCS}(A)$ modulo $\equiv_{\rm WF}$.
\end{corollary}
\section{Weak Impossible Futures Semantics} \label{sec6}
\emph{Weak impossible futures} semantics is closely related to
weak failures semantics. Only, instead of the set of actions in
the second argument of a weak failure pair (see
Def.~\ref{def:weak-failures}), an impossible future pair contains
a set of \emph{traces}.
\begin{definition}[Weak impossible futures]\rm\label{wif}
\vspace{-1ex}
\begin{itemize}
\item A sequence $a_1\cdots a_k \in A^*$, with $k\geq 0$, is a
\emph{trace} of a process $p_0$ if there is a path
$p_0\Rightarrow \mv{a_1} \Rightarrow \cdots \Rightarrow \mv{a_k} \Rightarrow p_k$;
it is a \emph{completed trace} of $p_0$ if moreover $\mathcal{I}(p_k)=\emptyset$.
Let $\mc{T}(p)$ denote the set of traces of process $p$, and
$\mc{CT}(p)$ its set of completed traces.
\item A pair $(a_1\cdots a_k,B)$, with $k\geq 0$ and $B \subseteq A^*$,
is a \emph{weak impossible future} of a process $p_0$ if there is a
path $p_0\Rightarrow \mv{a_1} \Rightarrow \cdots \Rightarrow \mv{a_k} \Rightarrow p_k$
with $\mc{T}(p_k) \cap B = \emptyset$.
\item The \emph{weak impossible futures preorder} $\sqsubseteq_{\rm WIF}$ is
given by $p\sqsubseteq_{\rm WIF} q$ iff (1) the weak impossible
futures of $p$ are also weak impossible futures of $q$,
(2) $\mc{T}(p)=\mc{T}(q)$ and
(3) $p\mv{\tau}$ implies that $q\mv{\tau}$.
\item \emph{Weak impossible futures equivalence} $\equiv_{\rm WIF}$ is defined
as $\sqsubseteq_{\rm WIF} \cap \sqsubseteq_{\rm WIF}^{-1}$.
\end{itemize}
\end{definition}
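As an illustration (a routine check): consider
\[ p = a(bc\mathbf{0}+bd\mathbf{0}) \qquad\text{and}\qquad q = abc\mathbf{0}+abd\mathbf{0}. \]
The pair $(a,\{bd\})$ is a weak impossible future of $q$, since
$q\mv{a}bc\mathbf{0}$ and $\mc{T}(bc\mathbf{0})=\{\varepsilon,b,bc\}$ is disjoint from
$\{bd\}$; it is not a weak impossible future of $p$, since the only state
reachable from $p$ after $a$ is $bc\mathbf{0}+bd\mathbf{0}$, whose trace set contains
$bd$. One can check that $p$ and $q$ have exactly the same weak failure
pairs, so this example separates the two semantics.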
$\sqsubseteq_{\rm WIF}$ is a precongruence, and $\equiv_{\rm WIF}$ a
congruence, for BCCS \cite{VM01}.
The requirement (2) $\mc{T}(p)=\mc{T}(q)$ is necessary for this
precongruence property. Without it we would have $\tau a\mathbf{0} \sqsubseteq
\tau a\mathbf{0} + b\mathbf{0}$ but $c(\tau a\mathbf{0}) \not\sqsubseteq c(\tau a\mathbf{0} + b\mathbf{0})$.
A sound and ground-complete axiomatization for $\sqsubseteq_{\rm WIF}$
is obtained by replacing axiom WF3 in Tab.~\ref{tab1} by the
following axiom (cf.\ \cite{VM01}, where a slightly more
complicated, but equivalent, axiomatization is given):
\[
\mbox{WIF3}~~~~ x ~\preccurlyeq~ \tau x
\]
However, surprisingly, there is no finite sound and
ground-complete axiomatization for $\equiv_{\rm WIF}$.
We will show this in Sec.~\ref{nonax}. A similar
difference between the impossible futures preorder and equivalence
in the concrete case (so in the absence of $\tau$) was found
earlier in \cite{CF08}. We note that, since weak impossible
futures semantics is not coarser than ready simulation semantics,
the algorithm from \cite{AFI07,FG07,CFG08b} to generate an
axiomatization for the equivalence from the one for the preorder
does not work in this case.
In Sec.~\ref{sec:42} we establish that the sound and ground-complete
axiomatization for BCCS modulo $\sqsubseteq_{\rm WIF}$ is
$\omega$-complete in case $|A|=\infty$, and in Sec.~\ref{sec:43} that
there is no such finite basis for the inequational theory of BCCS
modulo $\sqsubseteq_{\rm WIF}$ in case $|A|<\infty$. Again, these
results correspond to (in)axiomatizability results for the impossible
futures preorder in the concrete case \cite{CF08}, with very similar
proofs.
\subsection{Nonexistence of an Axiomatization for Equivalence}\label{nonax}
We now prove that for any (nonempty) $A$ there does \emph{not} exist
any finite, sound, ground-complete axiomatization for
$\mathrm{BCCS}(A)$ modulo $\equiv_{\rm WIF}$. The cornerstone for this
negative result is the following infinite family of closed equations,
for $m\geq 0$:
\[ \tau a^{2m}\mathbf{0}+\tau(a^m\mathbf{0}+a^{2m}\mathbf{0}) ~\approx~ \tau(a^m\mathbf{0}+a^{2m}\mathbf{0}) \]
It is not hard to see that they are sound modulo $\equiv_{\rm WIF}$.
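Concretely (a routine verification, spelling out this claim): the only summand
of the left-hand side missing on the right is $\tau a^{2m}\mathbf{0}$. Its initial
$\tau$-step is matched by
$\tau(a^m\mathbf{0}+a^{2m}\mathbf{0})\mv{\tau}a^m\mathbf{0}+a^{2m}\mathbf{0}$, where
\[ \mc{T}(a^{2m}\mathbf{0}) = \{a^k \mid k\leq 2m\} = \mc{T}(a^m\mathbf{0}+a^{2m}\mathbf{0}), \]
and after $k\geq 1$ further $a$-steps the right-hand side can reach
$a^{2m-k}\mathbf{0}$ via its $a^{2m}$-branch. Hence both sides have the same
traces and the same weak impossible futures.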
We start with a few lemmas.
\begin{lemma}\label{res}
If $p\sqsubseteq_{\rm WIF} q$ then $\mc{CT}(p) \subseteq \mc{CT}(q)$.
\end{lemma}
\begin{proof}
A process $p$ has a completed trace $a_1 \cdots a_k$ iff it has a weak
impossible future $(a_1 \cdots a_k,A)$.
\qed
\end{proof}
\begin{lemma} \label{newvariable}
Suppose $t\sqsubseteq_{\rm WIF} u$. Then for any $t'$ with
$t\Rightarrow\mv{\tau} t'$ there is some $u'$ with
$u\Rightarrow\mv{\tau} u'$ such that $\var(u')\subseteq \var(t')$.
\end{lemma}
\begin{proof}
Let $t\Rightarrow\mv{\tau} t'$. Fix some $m>|t|$, and consider
the closed substitution $\rho$ defined by $\rho(x)=\mathbf{0}$ if
$x\in\var(t')$ and $\rho(x)=a^m\mathbf{0}$ if $x\not\in\var(t')$. Since
$\rho(t)\Rightarrow\rho(t')$ with $|\rho(t')|=|t'|<m$, and
$\rho(t)\sqsubseteq_{\rm WIF}\rho(u)$, clearly $\rho(u)\Rightarrow q$
for some $q$ with $|q|<m$. From the definition of $\rho$ it then
follows that there must exist a term $u'$ with $u\Rightarrow u'$ and
$\var(u')\subseteq \var(t')$. In case $u \Rightarrow\mv{\tau} u'$ we
are done, so assume $u'=u$. Let $\sigma$ be the substitution with
$\sigma(x)=\mathbf{0}$ for all $x\in V$. Since $\sigma(t) \mv{\tau}$ and $t
\sqsubseteq_{\rm WIF} u$ we have $\sigma(u)\mv{\tau}$, so $u \mv{\tau}
u''$ for some $u''$. Now $\var(u'') \subseteq \var(u) = \var(u')
\subseteq \var(t')$.\qed
\end{proof}
\pagebreak[3]
\begin{lemma} \label{key2}
Assume that, for terms $t,u$, closed substitution $\sigma$, action $a$
and integer $m$:
\begin{enumerate}\vspace{-1ex}
\item $t \equiv_{\rm WIF} u$;
\item $m > |u|$;
\item $\mc{CT}(\sigma(u)) \subseteq \{a^{m}, a^{2m}\}$; and
\item there is a closed term $p'$ such that
$\sigma(t)\Rightarrow\mv{\tau} p'$ and $\mc{CT}(p') = \{a^{2m}\}$.
\vspace{-1ex}
\end{enumerate}
Then there is a closed term $q'$ such that $\sigma(u)
\Rightarrow\mv{\tau} q'$ and $\mc{CT}(q') = \{a^{2m}\}$.
\end{lemma}
\begin{proof}
According to proviso (4) of the lemma, we can distinguish two
cases.
\begin{itemize}
\item There exists some $x\in V$ such that $t \Rightarrow t'$ with
$t'=t''+x$ and $\sigma(x)\Rightarrow\mv{\tau} p'$ where $\mc{CT}(p')=\{a^{2m}\}$.
Consider the closed substitution $\rho$ defined by $\rho(x)=a^m\mathbf{0}$
and $\rho(y)=\mathbf{0}$ for any $y\neq x$. Then $a^m \in \mc{CT}(\rho(t))
= \mc{CT}(\rho(u))$, using Lem.~\ref{res}, and this is only
possible if $u \Rightarrow u'$ for some $u'=u''+x$. Hence
$\sigma(u)\Rightarrow\mv{\tau} p'$.
\medskip
\item $t\Rightarrow\mv{\tau} t'$ with $\mc{CT}(\sigma(t'))=\{a^{2m}\}$.
Since $|t'|\leq|t|=|u|<m$, clearly, for any \mbox{$x\mathbin\in \var(t')$},
either $|\sigma(x)|\mathbin=0$ or $\ensuremath{\mathit{norm}}(\sigma(x))\mathbin>m$,
where $\ensuremath{\mathit{norm}}(p)$ denotes the length of the shortest completed trace of $p$.
Since $t\equiv_{\rm WIF} u$, by Lem.~\ref{newvariable},
$u\mathbin\Rightarrow\mv{\tau} u'$ with $\var(u')\subseteq \var(t')$. Hence,
for any $x\mathbin\in \var(u')$, either $|\sigma(x)|\mathbin=0$
or $\ensuremath{\mathit{norm}}(\sigma(x))\mathbin>m$. Since $|u'|\mathbin<m$,
$a^m\mathbin{\notin} \mc{CT}(\sigma(u'))$. It follows from
$\mc{CT}(\sigma(u))\subseteq\{a^{m}, a^{2m}\}$ that
$\mc{CT}(\sigma(u'))= \{a^{2m}\}$. And $u\Rightarrow\mv{\tau} u'$
implies $\sigma(u)\Rightarrow\mv{\tau} \sigma(u')$.\hfill$\Box$
\end{itemize}
\end{proof}
\begin{lemma} \label{key2cont}
Assume that, for $E$ an axiomatization sound for
$\sqsubseteq_{\rm WIF}$, closed terms $p,q$, action $a$ and
integer $m$:
\begin{enumerate}\vspace{-1ex}
\item $E\vdash p \approx q$;
\item $m > \max \{ |u|\mid t \approx u \in E\}$;
\item $\mc{CT}(q) \subseteq \{a^{m},a^{2m}\}$; and
\item there is a closed term $p'$ such that $p\Rightarrow\mv{\tau} p'$ and
$\mc{CT}(p') = \{a^{2m}\}$.
\vspace{-1ex}
\end{enumerate}
Then there is a closed term $q'$ such that $q \Rightarrow\mv{\tau} q'$ and
$\mc{CT}(q') = \{a^{2m}\}$.
\end{lemma}
\begin{proof}
By induction on the derivation of $E\vdash p \approx q$.
\begin{itemize}
\item Suppose $E\vdash p\approx q$ because $\sigma(t)=p$ and
$\sigma(u)=q$ for some $t\approx u\in E$ or $u\approx t\in E$ and
closed substitution $\sigma$. The claim then follows by Lem.~\ref{key2}.
\medskip
\item Suppose $E\vdash p\approx q$ because $E\vdash p\approx r$ and
$E\vdash r\approx q$ for some $r$. Since $r\equiv_{\rm WIF} q$, by
proviso (3) of the lemma and Lem.~\ref{res},
$\mc{CT}(r)\subseteq \{a^{m},a^{2m}\}$. Since there is a $p'$
such that $p\Rightarrow\mv{\tau} p'$ with $\mc{CT}(p') = \{a^{2m}\}$, by
induction, there is an $r'$ such that $r\Rightarrow\mv{\tau} r'$ and
$\mc{CT}(r') = \{a^{2m}\}$. Hence, again by induction, there is a
$q'$ such that $q\Rightarrow\mv{\tau} q'$ and $\mc{CT}(q') = \{a^{2m}\}$.
\medskip
\item Suppose $E\vdash p\approx q$ because $p=p_1+p_2$ and
$q=q_1+q_2$ with $E\vdash p_1\approx q_1$ and $E\vdash p_2\approx
q_2$. Since there is a $p'$ such that $p\Rightarrow\mv{\tau} p'$ and
$\mc{CT}(p') = \{a^{2m}\}$, either $p_1\Rightarrow \mv{\tau} p'$ or
$p_2\Rightarrow\mv{\tau} p'$. Assume, without loss of generality, that
$p_1\Rightarrow\mv{\tau} p'$. By induction, there is a $q'$ such that
$q_1\Rightarrow\mv{\tau} q'$ and $\mc{CT}(q') = \{a^{2m}\}$.
Now $q\Rightarrow\mv{\tau} q'$.
\medskip
\item Suppose $E\vdash p \approx q$ because $p=cp_1$ and $q=cq_1$
with $c\in A$ and $E\vdash p_1\approx q_1$. In this case, proviso (4)
of the lemma cannot be met.
\medskip
\item Suppose $E\vdash p \approx q$ because $p=\tau p_1$ and $q=\tau
q_1$ with $E\vdash p_1\approx q_1$. By proviso (4) of the lemma,
either $\mc{CT}(p_1)=\{a^{2m}\}$ or there is a $p'$ such that
$p_1\Rightarrow\mv{\tau} p'$ and $\mc{CT}(p') = \{a^{2m}\}$.
In the first case, $q \Rightarrow\mv{\tau} q_1$ and $\mc{CT}(q_1) = \{a^{2m}\}$
by Lem.~\ref{res}. In the second, by induction, there is a $q'$
such that $q_1\Rightarrow\mv{\tau} q'$ and $\mc{CT}(q') = \{a^{2m}\}$.
Again $q\Rightarrow\mv{\tau} q'$.
\qed
\end{itemize}
\end{proof}
\begin{theorem}
There is no finite, sound, ground-complete axiomatization for
$\mathrm{BCCS}(A)$ modulo $\equiv_{\rm WIF}$.
\end{theorem}
\begin{proof}
Let $E$ be a finite axiomatization over $\mathrm{BCCS}(A)$ that is
sound modulo $\equiv_{\rm WIF}$. Let $m$ be greater than the depth
of any term in $E$. Clearly, there is no term $r$ such that
$\tau(a^m\mathbf{0}+a^{2m}\mathbf{0}) \Rightarrow\mv{\tau} r$ and
$\mc{CT}(r)=\{a^{2m}\}$. So according to Lem.~\ref{key2cont}, the
closed equation $\tau a^{2m}\mathbf{0}+\tau(a^m\mathbf{0}+a^{2m}\mathbf{0}) \approx
\tau(a^m\mathbf{0}+a^{2m}\mathbf{0})$ cannot be derived from $E$. Nevertheless, it is
valid modulo $\equiv_{\rm WIF}$. \qed
\end{proof}
In the same way as above, one can establish the nonderivability of
the equations $a^{2m+1}\mathbf{0}+a(a^m\mathbf{0}+a^{2m}\mathbf{0}) ~\approx~
a(a^m\mathbf{0}+a^{2m}\mathbf{0})$ from any given finite equational
axiomatization sound for $\equiv_{\rm WIF}$. As these equations
are valid modulo (strong) 2-nested simulation equivalence, this
negative result applies to all BCCS-congruences that are at least
as fine as weak impossible futures equivalence and at least as
coarse as strong 2-nested simulation equivalence. Note that
the corresponding result of \cite{AFGI04} can be inferred from ours.
\subsection{A Finite Basis for Preorder if $|A|=\infty$} \label{sec:42}
In this section, we show that A1-4+WF1-2+WIF3 is $\omega$-complete
in case $|A|=\infty$. Note that this result was originally
obtained in \cite{VM01}. However, our proof is much simpler.
First, let us note that A1-4+WF1-2+WIF3 $\vdash$ D1, D2, D5.
\begin{lemma}
For any closed terms $p, q$, if $p\sqsubseteq_{\rm WIF} q$, then
\mbox{\rm A1-4+WF1-2+WIF3} $\vdash p\preccurlyeq q$.
\end{lemma}
\begin{proof} Let $p\sqsubseteq_{\rm WIF} q$. We prove $\vdash
p\preccurlyeq q$ by induction on $|p|+|q|$. We distinguish two
cases:
\begin{itemize}
\item $q\not\!\mv{\tau}$. Then $p\not\!\mv{\tau}$ since $p\sqsubseteq_{\rm WIF} q$. Suppose
$p=\sum_{i\in I}a_ip_i$ and $q=\sum_{j\in J}b_j q_j$. Clearly,
we have $\mc{I}(p)=\mc{I}(q)$.
By D5, we have
\[ \vdash p\approx \sum_{a\in \mc{I}(p)} a(\sum_{a_i=a, i\in I}\tau p_i)\]
and
\[ \vdash q\approx \sum_{a\in \mc{I}(p)} a(\sum_{b_j=a, j\in J}\tau q_j)\]
Since $p\sqsubseteq_{\rm WIF} q$, for each $a\in \mc{I}(p)$, the following relation holds:
\[ \sum_{a_i=a, i\in I}\tau p_i \sqsubseteq_{\rm WIF} \sum_{b_j=a, j\in J} \tau q_j\]
By induction,
\[ \vdash \sum_{a_i=a, i\in I}\tau p_i \preccurlyeq \sum_{b_j=a, j\in J}\tau q_j\]
and thus
\[ \vdash a(\sum_{a_i=a, i\in I}\tau p_i) \preccurlyeq a(\sum_{b_j=a, j\in J}\tau q_j)\]
Summing these up for $a\in \mc{I}(p)$, we obtain that
\[ \vdash p\preccurlyeq q\]
\item $q\mv{\tau}$. By D2, we can write $ p\approx
\sum_{i\in I}\alpha_ip_i$ and $q\approx \sum_{j\in J} \beta_jq_j$ such that
for each $\alpha_i=\tau$ (resp. $\beta_j=\tau$), $p_i\not\!\mv{\tau}$
(resp. $q_j\not\!\mv{\tau}$).
Applying D1, for each $i\in I$ with $\alpha_i=\tau$, the summands of
$p_i$ are also made summands of $p$, and likewise for $q$.\hfill (*)
\medskip
For each $i\in I$ with $\alpha_i=\tau$ we have $p\mv{\tau} p_i$. Since
$p\sqsubseteq_{\rm WIF}q$ and no $q_j$ with $\beta_j=\tau$ contains a
$\tau$-summand, either $\mc{T}(q)\subseteq\mc{T}(p_i)$ or there exists
\raisebox{0pt}[0pt][0pt]{$q\mv{\tau}q_j$} such that $\mc{T}(q_j)\subseteq \mc{T}(p_i)$.
Since \raisebox{0pt}[0pt][0pt]{$q\mv{\tau}$}, in either case there exists some $j_i\in J$ such
that $\beta_{j_i}=\tau$ and $\mc{T}(q_{j_i})\subseteq \mc{T}(p_i)$. It follows that
\[ p_i \sqsubseteq_{\rm WIF} p_i + q_{j_i}\]
Since $p_i \not\!\mv{\tau}$ and $q_{j_i} \not\!\mv{\tau}$, by the previous
case,
\[ \vdash p_i \preccurlyeq p_i + q_{j_i}\]
Hence by WF2,
\[ \vdash \tau p_i \preccurlyeq \tau (p_i + q_{j_i}) \preccurlyeq p_i
+\tau q_{j_i} \]
and thus
\[ \vdash p= \sum_{\alpha_i=\tau} \tau p_i + \sum_{a\in \mc{I}(p)}\sum_{\alpha_i=a, i\in I} ap_i
\preccurlyeq \sum_{\alpha_i=\tau} (p_i +\tau q_{j_i}) + \sum_{a\in
\mc{I}(p)}\sum_{\alpha_i=a, i\in I} ap_i\]
By (*),
\[\vdash \sum_{\alpha_i=\tau} p_i + \sum_{a\in
\mc{I}(p)}\sum_{\alpha_i=a, i\in I} ap_i \approx \sum_{a\in
\mc{I}(p)}\sum_{\alpha_i=a, i\in I} ap_i \]
Since $p\sqsubseteq_{\rm WIF} q$, $\mc{I}(p)=\mc{I}(q)$.
Using (*), it is easy to see that for each $a\in \mc{I}(p)$,
\[ \sum_{\alpha_i=a, i\in I} ap_i \sqsubseteq_{\rm WIF} \sum_{\beta_j=a,j\in J} aq_j\]
So by the previous case,
\[ \vdash \sum_{\alpha_i=a, i\in I} ap_i \preccurlyeq \sum_{\beta_j=a,j\in J} aq_j\]
It follows that
\[ \vdash p \preccurlyeq \sum_{\alpha_i=\tau} \tau q_{j_i} + \sum_{a\in
\mc{I}(p)}\sum_{\alpha_i=a, i\in I} ap_i \preccurlyeq \sum_{\alpha_i=\tau}
\tau q_{j_i} + \sum_{a\in \mc{I}(p)}\sum_{\beta_j=a,j\in J} aq_j\]
By WIF3,
\[\vdash \sum_{\alpha_i=\tau} \tau q_{j_i} + \sum_{a\in
\mc{I}(p)}\sum_{\beta_j=a,j\in J} aq_j \preccurlyeq q \]
Hence \hfill $\vdash p \preccurlyeq q$\hfill\mbox{} \qed
\end{itemize}
\end{proof}
With this ground-completeness result at hand, it is
straightforward to apply the \emph{inverted substitution}
technique of Groote \cite{Gro90} to derive (see also \cite{CF08}):
\begin{theorem}
If $|A|=\infty$, then \mbox{\rm A1-4+WF1-2+WIF3} is $\omega$-complete
for $\mathrm{BCCS}(A)$ modulo $\sqsubseteq_{\rm WIF}$.
\end{theorem}
\begin{proof}
Given an inequational axiomatization $E$ and open terms $t,u$ such
that $E \vdash \sigma(t) \preccurlyeq \sigma(u)$ for all closed
substitutions $\sigma$, the technique of inverted substitutions is a
method to prove $E \vdash t \preccurlyeq u$. It does so by means of a
closed substitution $\rho$ encoding open terms into closed terms, and
a decoding operation $R$ that turns closed terms back into
open terms. By assumption we have $E \vdash \rho(t) \preccurlyeq
\rho(u)$. The pair $(\rho,R)$ should be chosen in such a way that, in
essence, applying $R$ to all terms occurring in a proof of $\rho(t)
\preccurlyeq \rho(u)$ yields a proof of $t \preccurlyeq u$. As
observed in \cite{Gro90}, this technique is applicable when three
conditions are met, one of which being that $R(\rho(t))=t$ and
$R(\rho(u))=u$. In fact, \cite{Gro90} dealt with equational logic
only, but the very same reasoning applies to inequational logic.
Here we use the same pair $(\rho,R)$ that was used by Groote to obtain
most of the applications of the technique in \cite{Gro90}---it could
be called the \emph{default} (inverted) substitution. It is obtained
by selecting for each variable $x\in V$ an action $a_x\in A$, not
occurring in $t$ or $u$. This is possible because $|A|=\infty$. Now
the default substitution $\rho$ is given by $\rho(x)=a_x\mathbf{0}$ and the
default inverted substitution $R$ replaces any maximal subterm of the
form $a_x p$ by the variable $x$. Groote showed that with this
particular (inverted) substitution, 2 out of his 3 conditions are
always met, and the third one simply says that for each axiom $t
\preccurlyeq u$ in $E$ we should have that $E \vdash R(t) \preccurlyeq
R(u)$. This condition is clearly met for the axioms A1-4+WF1-2+WIF3,
and hence this axiomatization is $\omega$-complete.\qed
\end{proof}
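To make this concrete (our own small illustration): for $t=ax+ay$, pick fresh
actions $a_x,a_y\in A$; then
\[ \rho(t) = a(a_x\mathbf{0}) + a(a_y\mathbf{0}) \qquad\text{and}\qquad R(\rho(t)) = ax+ay = t, \]
since $R$ replaces the maximal subterms of the form $a_x p$, here
$a_x\mathbf{0}$ and $a_y\mathbf{0}$, by the variables $x$ and $y$.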
Note that we could have used the same method to obtain
Theo.~\ref{theo:A-infinite}, but not Theo.~\ref{theo:A-finite}.
\subsection{Nonexistence of a Finite Basis for Preorder if
$|A|<\infty$} \label{sec:43}
\subsubsection{$1< |A| <\infty$.}
We prove that the inequational theory of $\mathrm{BCCS}(A)$ modulo
$\sqsubseteq_{\rm WIF}$ does \emph{not} have a finite basis in case of
a finite alphabet with at least two elements.
The cornerstone for this negative result is the following
infinite family of inequations, for $m\geq 0$:
\[\tau(a^mx) + \Phi_m ~\preccurlyeq~ \Phi_m \]
with
\[\Phi_m ~=~ \tau(a^mx+x) + \sum_{b\in A} \tau(a^mx+a^mb\mathbf{0})\]
It is not hard to see that these inequations are sound modulo
$\sqsubseteq_{\rm WIF}$. Namely, given a closed substitution $\rho$,
we have $\mc{T}(\rho(\tau(a^mx))) \subseteq \mc{T}(\rho(\Phi_m))$ and
$\rho(\Phi_m)\mv{\tau}$.
To argue that $\rho(\tau(a^mx) + \Phi_m)$ and $\rho(\Phi_m)$ have the
same impossible futures, we only need to consider the
transition $\rho(\tau(a^mx)+\Phi_m)\mv{\tau}a^m\rho(x)$ (all other
cases being trivial). If
$\rho(x)=\mathbf{0}$, then $\rho(\Phi_m)\mv{\tau}a^m\mathbf{0} + \mathbf{0}$ generates
the same impossible futures $(\varepsilon,B)$. If, on the other hand,
$b\mathbin\in\mc{I}(\rho(x))$ for some $b\mathbin\in A$, then
$\mc{T}(a^m\rho(x)+a^mb\mathbf{0})=\mc{T}(a^m\rho(x))$, so
$\rho(\Phi_m)\mv{\tau} a^m\rho(x)+a^mb\mathbf{0}$ generates the same
impossible futures $(\varepsilon,B)$.
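For concreteness, instantiating $m=1$ and $A=\{a,b\}$, the inequation reads
\[ \tau(ax) + \tau(ax+x) + \tau(ax+aa\mathbf{0}) + \tau(ax+ab\mathbf{0})
~\preccurlyeq~ \tau(ax+x) + \tau(ax+aa\mathbf{0}) + \tau(ax+ab\mathbf{0}). \]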
We have already defined the traces and completed traces of closed
terms. Now we extend these definitions to open terms by allowing
(completed) traces of the form $a_1 \cdots a_k x \in A^\ast V$.
We do this by treating each variable occurrence $x$ in a term as if it
were a subterm $x\mathbf{0}$ with $x$ a visible action, and then apply
Def.~\ref{wif}. Under this convention,
$\mc{CT}(\Phi_m) =\{a^{m}x, x, a^{m}b \mid b\mathbin\in A\}$.
We write $\mc{T}_V(t)$ for the set of traces of $t$ that end in a
variable, and $\mc{T}_A(t)$ for ones that end in an action.
\begin{observation}\label{traces-substitution}
Let $m>|t|$ or $a_m \in V$.
Then $a_1\cdots a_m \in \mc{T}(\sigma(t))$ iff there is a $k < m$
and $y\mathbin\in V$ such that $a_1\cdots a_k y \in \mc{T}_V(t)$ and
$a_{k+1} \cdots a_m \in \mc{T}(\sigma(y))$.
\end{observation}
\begin{lemma}\label{trace-preservation}
If $|A|>1$ and $t \sqsubseteq_{\rm WIF} u$ then
$\mc{T}_A(t)= \mc{T}_A(u)$ and $\mc{T}_V(t)= \mc{T}_V(u)$.
\end{lemma}
\begin{proof}
Let $\sigma$ be the closed substitution defined by
$\sigma(x)\mathbin=\mathbf{0}$ for all $x\mathbin\in V$. Then
$t\sqsubseteq_{\rm WIF} u$ implies $\sigma(t)\sqsubseteq_{\rm WIF}
\sigma(u)$ and hence $\mc{T}_A(t)=\mc{T}(\sigma(t))=
\mc{T}(\sigma(u))=\mc{T}_A(u)$ by Def.~\ref{wif}.
For the second statement fix distinct actions $a,b\mathbin\in A$ and
an injection $\ulcorner \cdot \urcorner: V \rightarrow
\mathbb{Z}_{>0}$ (which exists because $V$ is countable). Let
$m=|u|+1=|t|+1$. Define the closed substitution $\rho$ by
$\rho(z)=a^{\ulcorner\! z\!\urcorner{\cdot}m}b\mathbf{0}$ for all $z\in V$.
Again, by Def.~\ref{wif}, $t\sqsubseteq_{\rm WIF} u$ implies
$\mc{T}(\rho(t))= \mc{T}(\rho(u))$. By Obs.~\ref{traces-substitution},
for all terms $v$ we have $a_1\cdots a_k y \in \mc{T}_V(v)$ iff $a_1\cdots a_k
a^{\ulcorner\! y\!\urcorner{\cdot}m}b \in \mc{T}(\rho(v))$ with
$k\mathbin<m$. Hence $\mc{T}_V(v)$ is completely determined by
$\mc{T}(\rho(v))$ and thus $\mc{T}_V(t) \mathbin= \mc{T}_V(u)$.\qed
\end{proof}
\begin{lemma} \label{newvariable3}
Let $|A|>1$. Suppose $t\sqsubseteq_{\rm WIF} u$ and $t\Rightarrow\mv{\tau} t'$.
Then there is a term $u'$ such that $u\Rightarrow\mv{\tau} u'$ and
$\mc{T}_V(u')\subseteq \mc{T}_V(t')$.
\end{lemma}
\begin{proof}
Define $\rho$ exactly as in the previous proof. Since
$\rho(t)\Rightarrow\rho(t')$ and $t\sqsubseteq_{\rm WIF} u$ there must
be a $u'$ with $\rho(u) \Rightarrow q$ and $\mc{T}(q)\subseteq
\mc{T}(\rho(t'))$. Since $\rho(x)$ is $\tau$-free for $x\in V$ it must
be that $q=\rho(u')$ for some term $u'$ with $u \Rightarrow u'$.
Given the relationship between $\mc{T}_V(v)$ and $\mc{T}(\rho(v))$ for
terms $v$ observed in the previous proof, it follows that
$\mc{T}_V(u')\subseteq\mc{T}_V(t')$. In case $u \Rightarrow\mv{\tau}
u'$ we are done, so assume $u'=u$. Let $\sigma$ be the substitution
with $\sigma(x)=\mathbf{0}$ for all $x\in V$. Since $\sigma(t) \mv{\tau}$
and $t \sqsubseteq_{\rm WIF} u$ we have $\sigma(u)\mv{\tau}$, so $u
\mv{\tau} u''$ for some $u''$. Now $\mc{T}_V(u'') \subseteq \mc{T}_V(u) =
\mc{T}_V(u') \subseteq \mc{T}_V(t')$.
\qed
\end{proof}
\begin{lemma} \label{no-traces1}
Let $|A|>1$. Assume that, for some terms $t,u$, substitution
$\sigma$, action $a$ and integer $m$:
\begin{enumerate}\vspace{-1ex}
\item $t \sqsubseteq_{\rm WIF} u$;
\item $m \geq |u|$; and
\item $\sigma(t)\Rightarrow\mv{\tau} \hat t$ for a term $\hat t$ without
traces $ax$ for $x\mathbin\in V$ or $a^mb$ for $b\mathbin\in A$.
\vspace{-1ex}\end{enumerate}
Then $\sigma(u)\Rightarrow\mv{\tau} \hat u$ for a term $\hat u$ without
traces $ax$ for $x\mathbin\in V$ or $a^mb$ for $b\mathbin\in A$.
\end{lemma}
\begin{proof}
Based on proviso (3) there are two cases to consider.\vspace{-1ex}
\begin{itemize}
\item $y \in \mc{T}_V(t)$ for some $y\mathbin\in V$ and
$\sigma(y)\Rightarrow\mv{\tau} \hat t$.
In that case $y \mathbin\in \mc{T}_V(u)$ by
Lem.~\ref{trace-preservation}, so $\sigma(u)\Rightarrow\mv{\tau} \hat t$.
\item $t \Rightarrow\mv{\tau} t'$ for some term $t'$ such that $\hat t
= \sigma(t')$. By Lem.~\ref{newvariable3} there is a term $u'$ with
$u\Rightarrow\mv{\tau} u'$ and $\mc{T}_V(u')\subseteq \mc{T}_V(t')$.
Take $\hat u = \sigma(u')$. Clearly $\sigma(u)\Rightarrow\mv{\tau}
\sigma(u')$. Suppose $\sigma(u')$ would have a trace $a^mb$. Then,
by Obs.~\ref{traces-substitution}, there is a $k \leq m$ and
$y\mathbin\in V$ such that $a^k y \in \mc{T}_V(u')$ and
$a^{m-k}b \in \mc{T}(\sigma(y))$. Since $\mc{T}_V(u')\subseteq
\mc{T}_V(t')$ we have $a^mb \in \mc{T}(\sigma(t'))$, which is a
contradiction. The case $ax \in \mc{T}(\sigma(u'))$ is dealt with in
the same way.\qed
\end{itemize}
\end{proof}
\begin{lemma} \label{no-traces2}
Let $|A|>1$ and let $E$ be an axiomatization sound for
$\sqsubseteq_{\rm WIF}$. Assume that, for some terms $v,w$, action
$a$ and integer $m$:
\begin{enumerate}\vspace{-1ex}
\item $E\vdash v \preccurlyeq w$;
\item $m \geq \max \{|u| \mid t \preccurlyeq u \in E\}$; and
\item $v\Rightarrow\mv{\tau} \hat v$ for a term $\hat v$ without
traces $ax$ for $x\mathbin\in V$ or $a^mb$ for $b\mathbin\in A$.
\vspace{-1ex}\end{enumerate}
Then $w\Rightarrow\mv{\tau} \hat w$ for a term $\hat w$ without
traces $ax$ for $x\mathbin\in V$ or $a^mb$ for $b\mathbin\in A$.
\end{lemma}
\begin{proof}
By induction on the derivation of $E\vdash v \preccurlyeq w$.
\begin{itemize}
\item Suppose $E\vdash v\preccurlyeq w$ because $\sigma(t)=v$ and
$\sigma(u)=w$ for some $t\preccurlyeq u\in E$ and substitution
$\sigma$. The claim then follows by Lem.~\ref{no-traces1}.
\medskip
\item Suppose $E\vdash v\preccurlyeq w$ because $E\vdash v\preccurlyeq u$
and $E\vdash u\preccurlyeq w$ for some $u$. By induction, $u
\Rightarrow\mv{\tau} \hat u$ for a term $\hat u$ without traces $ax$ or
$a^mb$. Hence, again by induction, \mbox{$w\Rightarrow\mv{\tau} \hat w$}
for a term $\hat w$ without traces $ax$ or $a^mb$.
\medskip
\item Suppose $E\vdash v\preccurlyeq w$ because $v=v_1+v_2$ and
$w=w_1+w_2$ with $E\vdash v_1 \preccurlyeq w_1$ and $E\vdash
v_2\preccurlyeq w_2$. Since $v \Rightarrow\mv{\tau} \hat v$, either
$v_1 \Rightarrow\mv{\tau} \hat v$ or $v_2 \Rightarrow\mv{\tau} \hat v$.
Assume, without loss of generality, that $v_1 \Rightarrow\mv{\tau} \hat v$.
By induction, $w_1 \Rightarrow\mv{\tau} \hat w$ for a term $\hat w$ without
traces $ax$ or $a^mb$. Now $w \Rightarrow\mv{\tau} \hat w$.
\medskip
\item Suppose $E\vdash v \preccurlyeq w$ because $v=cv_1$ and $w=cw_1$ with
$c\in A$ and $E\vdash v_1\preccurlyeq w_1$. In this case, proviso (3) of
the lemma cannot be met.
\medskip
\item Suppose $E\vdash v \preccurlyeq w$ because $v=\tau v_1$ and
$w=\tau w_1$ with $E\vdash v_1\preccurlyeq w_1$. Then either $v_1=\hat
v$ or $v_1 \Rightarrow\mv{\tau} \hat v$. In the first case, $w_1$
has no traces $ax$ or $a^{m}b$ by Lem.~\ref{trace-preservation}
and proviso (3) of the lemma; hence $w$ has no such traces either.
In the second case, by induction, $w_1\Rightarrow\mv{\tau} \hat w$
for a term $\hat w$ without traces $ax$ or $a^mb$. Again
$w\Rightarrow\mv{\tau} \hat w$. \qed
\end{itemize}
\end{proof}
\begin{theorem} \label{thm:alphabetn}
If $1<|A|<\infty$, then the inequational theory of $\mathrm{BCCS}(A)$
modulo $\sqsubseteq_{\rm WIF}$ does not have a finite basis.
\end{theorem}
\begin{proof}
Let $E$ be a finite axiomatization over $\mathrm{BCCS}(A)$ that is
sound modulo $\sqsubseteq_{\rm WIF}$. Let $m$ be greater than the
depth of any term in $E$. According to Lem.~\ref{no-traces2}, the
inequation $\tau(a^mx) + \Phi_m \preccurlyeq \Phi_m$ cannot be derived
from $E$. Yet it is sound modulo $\sqsubseteq_{\rm WIF}$.
\qed
\end{proof}
\subsubsection{$|A|=1$.} We prove that the inequational theory of
$\mathrm{BCCS}(A)$ modulo $\sqsubseteq_{\rm WIF}$ does \emph{not} have
a finite basis in case of a singleton alphabet. The cornerstone
for this negative result is the following infinite family of
inequations, for $m\geq 0$:
\[ a^m x ~\preccurlyeq~ a^m x + x\]
If $|A|=1$, then these inequations are clearly sound modulo
$\sqsubseteq_{\rm WIF}$. Note that given a closed substitution
$\rho$, $\mc{T}(\rho(x))\subseteq \mc{T}(\rho(a^m x))$.
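The inclusion noted above can be checked mechanically on closed terms. The following Python sketch is purely illustrative and not part of the formal development: it encodes closed BCCS terms over the singleton alphabet $\{a\}$ as nested tuples (an encoding of ours; $\tau$-prefixes are omitted for simplicity) and computes prefix-closed trace sets.

```python
# Illustrative sketch: closed BCCS terms over A = {'a'}, as nested tuples.
#   ('0',)         inaction
#   ('a', t)       action prefix a.t
#   ('+', t, u)    choice t + u
def traces(t):
    """Prefix-closed set of traces of a closed term, as strings over 'a'."""
    if t[0] == '0':
        return {''}
    if t[0] == 'a':
        return {''} | {'a' + w for w in traces(t[1])}
    if t[0] == '+':
        return traces(t[1]) | traces(t[2])
    raise ValueError(t)

def prefix_a(m, t):
    """Build the term a^m t."""
    for _ in range(m):
        t = ('a', t)
    return t

# With |A| = 1 every trace is a power of 'a' and trace sets are
# prefix-closed, hence T(p) is contained in T(a^m p) for any m >= 0.
p = ('+', prefix_a(2, ('0',)), prefix_a(5, ('0',)))   # a^2 0 + a^5 0
for m in range(4):
    assert traces(p) <= traces(prefix_a(m, p))
```

This makes the soundness intuition behind the family $a^mx \preccurlyeq a^mx + x$ concrete: over a singleton alphabet, prefixing by $a^m$ never removes traces.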
\begin{lemma}\label{trace-preservation2}
If $t \sqsubseteq_{\rm WIF} u$ then
$\mc{T}_V(t) \subseteq \mc{T}_V(u)$.
\end{lemma}
\begin{proof}
Fix $a\in A$ and an injection $\ulcorner \cdot \urcorner: V
\rightarrow \mathbb{Z}_{>0}$. Let $m=|u|+1$. Define the closed
substitution $\rho$ by $\rho(z)=a^{\ulcorner\!
z\!\urcorner{\cdot}m}\mathbf{0}$ for all $z\in V$. By Lem.~\ref{res},
$\mc{CT}(\rho(t)) \subseteq \mc{CT}(\rho(u))$. Now suppose
$a_1\cdots a_k y \in \mc{T}_V(t)$. Then $a_1\cdots a_k a^{\ulcorner\!
y\!\urcorner{\cdot}m} \in \mc{CT}(\rho(t)) \subseteq
\mc{CT}(\rho(u))$ and $k<m$. This is only possible if $a_1\cdots a_k y
\in \mc{T}_V(u)$. \qed
\end{proof}
\begin{lemma} \label{1}
Assume that, for terms $t,u$, substitution $\sigma$, action $a$,
variable $x$, integer $m$:
\begin{enumerate}\vspace{-1ex}
\item $t \sqsubseteq_{\rm WIF} u$;
\item $m>|u|$; and
\item $x \in \mc{T}_V(\sigma(u))$ and
$a^k x \not\in \mc{T}_V(\sigma(u))$ for $1\leq k <m$.
\vspace{-1ex}\end{enumerate}
Then $x \in \mc{T}_V(\sigma(t))$ and
$a^k x \not\in \mc{T}_V(\sigma(t))$ for $1\leq k <m$.
\end{lemma}
\begin{proof}
Since $x\in \mc{T}_V(\sigma(u))$, by
Obs.~\ref{traces-substitution} there is a variable $y$ with $y\in
\mc{T}_V(u)$ and $x \in \mc{T}_V(\sigma(y))$. Consider the closed
substitution $\rho$ given by $\rho(y)=a^m\mathbf{0}$ and $\rho(z)=\mathbf{0}$
for $z\neq y$. Then $m>|u|=|t|$, and $y\in \mc{T}_V(u)$ implies
$a^m \in \mc{T}(\rho(u)) = \mc{T}(\rho(t))$, so by
Obs.~\ref{traces-substitution} there is some $k < m$ and
$z\mathbin\in V$ such that $a^k z \in \mc{T}_V(t)$ and $a^{m-k}
\in \mc{T}(\rho(z))$. As $k<m$ it must be that $z=y$. Since
$a^ky\in \mc{T}_V(t)$ and $x\in \mc{T}_V(\sigma(y))$,
Obs.~\ref{traces-substitution} implies that $a^k x \in
\mc{T}_V(\sigma(t))$. By Lem.~\ref{trace-preservation2}, $a^k x
\not\in \mc{T}_V(\sigma(t))$ for $1\leq k < m$. Hence we obtain
$k=0$.\qed
\end{proof}
\begin{lemma} \label{lemmaalphabet1}
Assume that, for $E$ an axiomatization sound for $\sqsubseteq_{\rm
WIF}$ and for terms $v,w$, action $a$, variable $x$ and integer $m$:
\begin{enumerate}\vspace{-1ex}
\item $E\vdash v \preccurlyeq w$;
\item $m > \max \{ |u| \mid t \preccurlyeq u \in E\}$; and
\item $x \in \mc{T}_V(w)$ and
$a^k x \not\in \mc{T}_V(w)$ for $1\leq k < m$.
\vspace{-1ex}\end{enumerate}
Then $x \in \mc{T}_V(v)$ and
$a^k x \not\in \mc{T}_V(v)$ for $1\leq k < m$.
\end{lemma}
\begin{proof}
By induction on the derivation of $E\vdash v \preccurlyeq w$.
\begin{itemize}
\item Suppose $E\vdash v\preccurlyeq w$ because $\sigma(t)=v$ and
$\sigma(u)=w$ for some $t\preccurlyeq u\in E$ and substitution
$\sigma$. The claim then follows by Lem.~\ref{1}.
\medskip
\item Suppose $E\vdash v\preccurlyeq w$ because $E\vdash v\preccurlyeq
u$ and $E\vdash u\preccurlyeq w$ for some $u$. By induction, $x \in
\mc{T}_V(u)$ and $a^k x \not\in \mc{T}_V(u)$ for $1\leq k < m$.
Hence, again by induction, $x \in \mc{T}_V(v)$ and
$a^k x \not\in \mc{T}_V(v)$ for $1\leq k < m$.
\medskip
\item Suppose $E\vdash v\preccurlyeq w$ because $v=v_1+v_2$ and
$w=w_1+w_2$ with $E\vdash v_1 \preccurlyeq w_1$ and $E\vdash
v_2\preccurlyeq w_2$. Since $x \in \mc{T}_V(w)$, either $x \in
\mc{T}_V(w_1)$ or $x \in \mc{T}_V(w_2)$. Assume, without loss of
generality, that $x \in \mc{T}_V(w_1)$. Since $a^k x \not\in
\mc{T}_V(w)$ for $1\leq k < m$, surely $a^k x \not\in \mc{T}_V(w_1)$
for $1\leq k < m$. By induction, $x \in \mc{T}_V(v_1)$, and hence $x
\in \mc{T}_V(v)$. For $1\leq k < m$ we have $a^k x \not\in
\mc{T}_V(w)$ and hence $a^k x \not\in \mc{T}_V(v)$, by
Lem.~\ref{trace-preservation2}.
\medskip
\item Suppose $E\vdash v \preccurlyeq w$ because $v=cv_1$ and $w=cw_1$ with
$c\in A$ and $E\vdash v_1\approx w_1$. In this case, proviso (3) of
the lemma cannot be met.
\medskip
\item Suppose $E\vdash v \preccurlyeq w$ because $v=\tau v_1$ and
$w=\tau w_1$ with $E\vdash v_1\approx w_1$. Then, by proviso (3) of
the lemma, $x \in \mc{T}_V(w_1)$ and $a^k x \not\in \mc{T}_V(w_1)$ for
$1\leq k < m$. By induction, $x \in \mc{T}_V(v_1)$ and $a^k x \not\in
\mc{T}_V(v_1)$ for $1\leq k < m$. Hence $x \in \mc{T}_V(v)$ and $a^k
x \not\in \mc{T}_V(v)$ for $1\leq k < m$. \qed
\end{itemize}
\end{proof}
\begin{theorem}\label{thm:alphabet1}
If $|A|=1$, then the inequational theory of $\mathrm{BCCS}(A)$
modulo $\sqsubseteq_{\rm WIF}$ does not have a finite basis.
\end{theorem}
\begin{proof}
Let $E$ be a finite axiomatization over $\mathrm{BCCS}(A)$ that is
sound modulo $\sqsubseteq_{\rm WIF}$. Let $m$ be greater than the
depth of any term in $E$. According to Lem.~\ref{lemmaalphabet1}, the
inequation $a^mx \preccurlyeq a^m x + x$ cannot be derived
from $E$. Yet, since $|A|=1$, it is sound modulo $\sqsubseteq_{\rm WIF}$.
\qed
\end{proof}
To conclude this subsection, we have
\begin{theorem}
If $|A|<\infty$, then the inequational theory of $\mathrm{BCCS}(A)$
modulo $\sqsubseteq_{\rm WIF}$ does not have a finite basis.
\end{theorem}
In conclusion, in spite of the close resemblance between weak
failures and weak impossible futures semantics, there is a
striking difference between their axiomatizability properties.
\section{Introduction}
Weakly Interacting Massive Particles (WIMPs), hypothetical particles able to account for most observations pointing at a cosmological dark matter, are expected to interact via elastic scattering off nuclei in detecting media. Detector signals would arise from the energy loss of the recoiling nucleus as it slows down. The interpretation of WIMP searches crucially depends on a correct understanding of the mechanisms governing the stopping of low-energy ions in the target material. This concern can be extended to experimental efforts aiming to measure coherent elastic neutrino-nucleus scattering \cite{coherent}, where the mode of interaction and energy regime are the same.
At the few-keV energies expected from WIMP or low-energy neutrino interactions, nuclear recoils typically induce a smaller response than electron recoils of the same energy. Depending on detector type, this response is often measured through the scintillation or ionization yield. In the case of standard germanium diodes operated at liquid nitrogen temperature, it is the second mechanism that is exploited to extract signals. An energy-dependent quenching factor can then be defined as the ratio between the ionization generated by the recoil of a germanium nucleus, and that from an electron recoil of the same energy.
We report on a new measurement of the germanium quenching factor at $\sim$77 K, using a P-type Point Contact (PPC) detector \cite{barbeau-01}, and a calibration technique recently described in \cite{collar-01}. This approach employs a photoneutron radioactive source, exploiting its monochromatic low-energy neutron emission to create nuclear recoils having a well-defined maximum recoil energy of just a few keV$_\text{nr}$ (the suffix stands for ``nuclear recoil'', as opposed to the smaller ``electron equivalent'' (ee) ionization energy that is actually measured post-quenching). The modest electronic noise characteristic of a PPC makes it possible to include the contribution from sub-keV$_\text{nr}$ nuclear recoils. This technique has been used thus far in the characterization of the quenching factor of sodium recoils in NaI(Tl) scintillators \cite{collar-01}, and carbon and fluorine recoils in superheated fluids \cite{piconim,alanthesis,pico60}.
\section{Experimental Setup}
The experimental arrangement is illustrated in Fig. \ref{fig:daq}. All measurements took place in a shallow underground laboratory (6 m.w.e.) at the University of Chicago. A $\SI{50.7}{\mm}$ (diameter) $\times~ \SI{43.0}{\mm}$ (length) PPC germanium detector manufactured by Canberra Industries with an original active mass of \SI{0.475}{\kg} was surrounded by \SI{20}{\cm} of lead. This shielding reduces the intense gamma emissions from the source to a manageable level, avoiding pile-up and data throughput limitations, while causing only minimal changes to neutron energies \cite{collar-01}. The detector was previously used by the \cogent/ collaboration \cite{aalseth-01, aalseth-02}. An \isotope{Y}{88} gamma source was encapsulated by a $\SI{1}{cm}$-thick gamma-to-neutron BeO converter, and placed \SI{23}{\cm} away from the front of the PPC detector. The dominant neutron energy emitted by the source is $E_{n}\,=\,$\SI{152}{\keV} with an additional small ($\SI{0.5}{\percent}$) component of $E_{n}\,=\,\SI{963}{\keV}$ \cite{collar-01}. The maximum nuclear recoil energy transferred within a single scatter event in Ge for these neutron energies is $E_\text{nr}^{max}=(4MmE_{n})/(M+m)^2=\SI{8.5}{\keVnr}$ and $E_\text{nr}^{max}=\SI{51}{\keVnr}$, respectively, where $M$ and $m$ stand for Ge nucleus and neutron masses.
\begin{figure}[tbp]
\input{./Plots/daq.tex}
\caption{Experimental setup: the preamplifier output is digitized using a NI 5734 16-bit ADC, and shaped with a digital trapezoidal pulse shaper implemented on a NI 7966R Field-Programmable Gate Array (FPGA). The preamplifier trace is stored on the host PC if the corresponding shaped signal triggers on a rising edge threshold set at $\sim$\SI{0.8}{\keVee}, also implemented in the FPGA.}
\label{fig:daq}
\end{figure}
A \isotope{He}{3} neutron counter surrounded by HDPE moderator was employed to measure the isotropic neutron yield of the source, found to be in the range 574-580 neutrons/s, depending on the orientation of the source with respect to the counter. Prior experience with this \isotope{He}{3} counter and other neutron sources (\isotope{Am}{241}/Be, \isotope{Pu}{239}/Be, \isotope{Cf}{252}) of known activity points to an ability to characterize their yield within a few percent of its nominal value. More specifically, seven previous measurements involving four different commercial neutron sources displayed a systematic trend to underestimate their nominal neutron yield by $\sim$12\% \cite{drew}. The activity of the source was separately assessed via a gamma emission measurement employing a dedicated coaxial germanium detector. This gamma yield was used as an input to a \mcnp/ \cite{mcnpx} simulation employing a revised cross-section \cite{alan2} for the $^{9}$Be($\gamma$,n)$^{8}$Be reaction. The neutron yield obtained via this simulation is compatible with \isotope{He}{3} counter measurements, at $\sim573$ neutrons/s. Combining all measurements and accounting for statistical, simulation, and cross-section uncertainties, we estimate a source activity of 0.640$\pm$4\% mCi, corresponding to an emission of 574$\pm$5\% neutrons/s.
Preamplifier power and detector high voltage to the PPC were provided by a Polaris XIA DGF. The preamplifier signal output was fed into a 16-bit National Instruments (NI) 5734 ADC, connected to an NI PXIe-7966R FPGA module. The host PC was a NI PXIe 8133. A trapezoidal, digital pulse shaper was implemented on the FPGA using the recursive algorithm in \cite{jordanov-01}. The total shaping time was set to \SI{16}{\us} with a peaking time of $\SI{8}{\us}$ and a zero length flat top. A rising edge threshold trigger set to approximately \SI{0.8}{\keVee} was used for real-time detection of digitally-shaped pulses. The trigger position was set to \SI{80}{\percent} of the \SI{400}{\us}-long waveforms, with a sampling rate set to \SI{40}{\mega\sample\per\second}. The \SI{320}{\us}-long pre-trigger trace allowed monitoring of detector noise and baseline stability. An electron-equivalent energy scale was established using the \SI{59.5}{\keV} $\gamma$-emission from \isotope{Am}{241}, as well as the four main emission lines from \isotope{Ba}{133}.
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{./Plots/experimental-spectra.pdf}
\caption{Normalized energy spectra recorded for the two different source configurations. Their difference (residual) is shown in blue. The digitizer gain setting limited usable data to $>1$ keV$_{ee}$. The low-energy residual excess arises from neutron-induced nuclear recoils. Additional neutron-induced signals are visible at \SI{13.3}{\keV}, \SI{53.3}{\keV}, and \SI{68.8}{\keV}. These peaks are the result of $^{72}$Ge$(n,\gamma)$ and $^{73}$Ge$(n,n^\prime\gamma)$ interactions \cite{jones-02,clover}. The cancellation of Pb fluorescence lines in the range \SI{72}{\keV_{ee}} to \SI{87}{\keV_{ee}} illustrates the absence of isolated x/$\gamma$-ray contributions to the residual spectrum.}
\label{fig:spectra}
\end{figure}
In order to separate neutron-induced signals from those generated by gamma interactions from the source, a second measurement was performed where the BeO converter was replaced by an aluminum cap of identical geometry. Aluminum has a total attenuation for dominant (898 keV) \isotope{Y}{88} gamma-rays of $\lambda_\text{Al}(\SI{1}{\MeV}) = \SI{0.06146}{\cm\squared\per\gram}$, which closely matches that from BeO, $\lambda_\text{BeO}(\SI{1}{\MeV}) = \SI{0.06112}{\cm\squared\per\gram}$ \cite{berger-01}. A total \SI{19.3}{\hour} of exposure with the \isotope{Y}{88}/BeO source configuration and \SI{20.0}{\hour} with \isotope{Y}{88}/Al were collected. The energy spectra are normalized to account for the difference in run times, and the decay of the source (T$_{\nicefrac{1}{2}}$ = \SI{106.65}{\day}). The residual spectrum, i.e. the difference between the \isotope{Y}{88}/BeO (gammas and neutrons) and \isotope{Y}{88}/Al (gammas) spectra contains neutron-induced signals only \cite{collar-01}. Fig. \ref{fig:spectra} shows both normalized spectra, and the resulting residual spectrum. The low-energy excess in the residual is caused by neutron-induced germanium recoils. As expected, the residual rapidly converges to zero above few keV$_{ee}$, except for discrete peaks arising from inelastic scattering and neutron capture in $^{72,73}$Ge \cite{jones-02,clover}. These peaks can display a characteristic asymmetry towards high energies, due to the addition of gamma and nuclear recoil energy depositions \cite{skoro,jova}.
In addition to these measurements, a total of $10^8$ neutrons emitted by the BeO converter was simulated using \mcnp/ \cite{mcnpx}. The geometry included fine details such as the known internal structure of the PPC, chemical impurity content of lead, and source encapsulation. It also involved new improved cross-section libraries specifically developed for dark matter detector simulations \cite{alan}. Approximately \SI{0.4}{\percent} of these simulated neutrons produce at least one recoil within the detector. The interaction depth, measured from the nearest surface of the germanium crystal, and recoil energy from each nuclear elastic scattering event were recorded. The unquenched energy distribution of these individual recoils is shown in Fig.
\ref{fig:unquenched-recoil-spectrum}. Approximately \SI{50}{\percent} of neutrons interacting with the germanium crystal do so only once, a fraction large enough to expect a readily visible endpoint energy in the ionization spectrum, corresponding to the expected maximum recoil energy transfer of \SI{8.5}{\keV_{nr}}. Multi-scatter events make it possible to study the contribution from nuclear recoils individually depositing energies below the 0.8 keV$_\text{ee}$ triggering threshold (Fig. \ref{fig:ss-ms-vs-res}). More precisely, 30(15)\% of simulated neutrons interacting with the detector produce at least one recoil depositing less than 1(0.5) keV$_\text{nr}$.
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{./Plots/unquenched-recoil-spectrum.pdf}
\caption{Simulated, unquenched distribution of nuclear recoil energies deposited for each individual neutron scatter event. As expected, primary $E_{n}\,=\,$\SI{152}{\keV} neutrons produce recoil energies of up to \SI{8.5}{\keV_{nr}}. The 0.5\% $E_{n}\,=\,\SI{963}{\keV}$ branch contributes a small fraction of higher recoil energies up to \SI{51}{\keV_{nr}}. The inset shows the multiplicity of interactions in the PPC for all simulated neutron histories. }
\label{fig:unquenched-recoil-spectrum}
\end{figure}
\section{Analysis}
To extract the quenching factor we compare the simulated data to the experimental residual spectrum. In a first step, the energy deposition of each simulated nuclear recoil is converted into an electron-equivalent energy via an energy-dependent quenching model $Q(E_\text{nr})$. Previous measurements of the quenching factor in germanium suggest that the Lindhard theory \cite{lindhard-01} provides an adequate description of $Q$ down to very low energies. This formalism can be written as \cite{barker-01,benoit-01}
\begin{align}
Q&= \frac{k\,g(\epsilon)}{1+k\,g(\epsilon)}\label{eq:lindhard-model-1}\\
g(\epsilon) &= 3\,\epsilon^{0.15} + 0.7\,\epsilon^{0.6}+\epsilon\label{eq:lindhard-model-2}\\
\epsilon &= 11.5\,Z^{-\nicefrac{7}{3}}\,E_\text{nr}\label{eq:lindhard-model-3}.
\end{align}
Here $Z$ is the atomic number of the recoiling nucleus, $\epsilon$ a dimensionless energy, $E_\text{nr}$ is the recoil energy in keV$_\text{nr}$, and $k$ describes the electronic energy loss. In the original description by Lindhard, a value $k\,=\,0.133 Z^{\nicefrac{2}{3}} A^{-\nicefrac{1}{3}}\,(=\,0.157$ for Ge) was adopted, with $A$ the mass number of the nucleus. Lindhard-like models have been fitted to previous quenching factor measurements using comparable $k$ values \cite{barker-01,hooper-01}. Accordingly, we treat $k$ as the free parameter of prime interest in our analysis.
In a second step, we acknowledge that the charge collection efficiency $\eta$ within a PPC detector varies with interaction depth into the crystal. This is due to the effect of a lithium-diffused external contact covering most of the outer surface of the diode \cite{aalseth-04}. Following \cite{aalseth-03} we adopt a sigmoid-shaped charge collection efficiency profile
\begin{align}
\eta(x,\delta,\tau)\;=\;1-\frac{1}{\exp\left[{\frac{x-(\delta+0.5\,\tau)}{0.17\,\tau}}\right]+1},
\end{align}
where $\delta$ is the thickness of an outermost dead layer for which $\eta$ is negligible, $\tau$ is the thickness of an underlying transition layer over which the charge collection efficiency rises from $\eta=0.05$ to $0.95$, and $x$ is the interaction depth.
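By construction, $\eta\approx0.05$ at the end of the dead layer ($x=\delta$) and $\eta\approx0.95$ once the transition layer is crossed ($x=\delta+\tau$), which a quick numerical check confirms (the layer thicknesses used below are arbitrary illustration values):

```python
import math

def eta(x_mm, delta_mm, tau_mm):
    """Sigmoid charge-collection efficiency vs. interaction depth x."""
    arg = (x_mm - (delta_mm + 0.5 * tau_mm)) / (0.17 * tau_mm)
    return 1.0 - 1.0 / (math.exp(arg) + 1.0)

# eta ~ 0.05 at x = delta and eta ~ 0.95 at x = delta + tau:
print(eta(1.2, 1.2, 1.2), eta(2.4, 1.2, 1.2))
```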
In a third step, we account for the possibility of a reduced ionization
efficiency for slow-moving nuclear recoils, by introducing a smooth
adiabatic correction factor $F_\text{AC}$ to the Lindhard stopping. The concept of a
"kinematic threshold" below which
the minimum excitation energy of the detector system is larger than
the maximum possible energy transfer to an electron by a
slow-moving ion, can be traced back to Fermi and Teller \cite{fermi}.
We adopt the same correction factor model previously employed in
\cite{ahlen-01,ahlen-02},
\begin{align}
F_\text{AC}\left(E_\text{nr},\xi\right) = 1 -
\text{exp}\left[-\nicefrac{E_\text{nr}}{\xi}\right],
\end{align}
where the adiabatic energy scale factor $\xi$ corresponds to the threshold energy below which a rapid drop in ionization efficiency can be expected.
The total simulated electron equivalent energy measured for a neutron interacting $n$ times with the crystal can now be written as
\begin{align}
E_\text{ee} = \sum\limits_{i=1}^n
E_\text{nr}^{(i)}Q\left(E_\text{nr}^{(i)},k\right)\eta\left(x^{(i)},\delta,\tau\right)F_\text{AC}\left(E_\text{nr}^{(i)},\xi\right)\label{eq:electron-equivalent-energy},
\end{align}
where $E_\text{nr}^{(i)}$ is the recoil energy deposited at the $i^{th}$ interaction site. The resulting nuclear recoil energy spectrum in units of electron equivalent energy is convolved with a resolution $\sigma^2(E_\text{ee}) = (\SI{69.7}{eV})^2 + 0.98$ eV $E_\text{ee}$(eV), specific for this detector \cite{aalseth-01,aalseth-02}.
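A minimal sketch of Eq.~(\ref{eq:electron-equivalent-energy}) followed by the resolution convolution, for a single simulated neutron history. The two-scatter event at the end is a made-up example, and the parameter values are merely representative; in the analysis these quantities are taken per event from the \mcnp/ output.

```python
import math, random

def q_lindhard(e, k):
    """Lindhard ionization efficiency of Eqs. (1)-(3) for Ge (Z = 32)."""
    eps = 11.5 * 32 ** (-7.0 / 3.0) * e
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
    return k * g / (1.0 + k * g)

def eta(x, delta, tau):
    """Sigmoid charge-collection efficiency of Eq. (4)."""
    return 1.0 - 1.0 / (math.exp((x - (delta + 0.5 * tau)) / (0.17 * tau)) + 1.0)

def f_ac(e, xi):
    """Adiabatic correction factor of Eq. (6)."""
    return 1.0 - math.exp(-e / xi)

def e_ee(scatters, k=0.1789, delta=3.60, tau=3.44, xi=0.16):
    """Eq. (7): electron-equivalent energy (keV_ee) of one neutron history.
    `scatters` is a list of (E_nr in keV_nr, depth in mm) pairs."""
    return sum(e * q_lindhard(e, k) * eta(x, delta, tau) * f_ac(e, xi)
               for e, x in scatters)

def smear(e_kev, rng=random):
    """Apply the resolution sigma^2 = (69.7 eV)^2 + 0.98 eV * E_ee(eV)."""
    e_ev = 1000.0 * e_kev
    sigma_ev = math.sqrt(69.7 ** 2 + 0.98 * e_ev)
    return (e_ev + rng.gauss(0.0, sigma_ev)) / 1000.0

# Made-up double scatter deep in the bulk: two 3 keV_nr recoils at 20 mm.
print(smear(e_ee([(3.0, 20.0), (3.0, 20.0)])))
```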
In a final step, the simulated spectrum is normalized to match the integrated neutron yield over the time span of the measurements. To account for the mentioned significant uncertainty in source neutron yield we introduce an additional free global scaling parameter $\gamma$. Our full analysis therefore involves a total of five free parameters, three of which ($\delta,\tau,\gamma$) are treated as nuisance parameters as they are not of immediate interest to our measurement of the quenching factor, even if they must be accounted for.
We employ a Monte Carlo Markov Chain (MCMC) to find the parameter set $\vec{\pi} = \left(k,\delta,\tau,\xi,\gamma\right)$ that provides the best fit of the simulated data to the experimental residual spectrum. Assuming an underlying Poisson distribution for each bin of the simulated residual spectrum, the probability to count $N_i$ events in bin $i$ given $\mu_i$ simulated counts in the same bin can simply be written as
\begin{align}
P(N_i|\mu_i)\;=\;\frac{\mu_i^{N_i}\,\text{e}^{-\mu_i}}{N_i!},
\end{align}
where $\mu_i$ solely depends on our choice of fit parameters $\vec{\pi}$. The corresponding log-likelihood function is given by
\begin{align}
\ln \text{L}(\vec{N}|\vec{\pi}) = & \sum\limits_i N_i\ln(\mu_i(\vec{\pi})) - \sum\limits_i \mu_i(\vec{\pi})\label{eq:log-likelihood}\\
& - \sum\limits_i \ln(N_i!).\nonumber
\end{align}
The last sum is constant for all choices of $\vec{\pi}$. We will therefore not include it in the final posterior probability sampling process. From Bayes' theorem we know that
\begin{align}
P(\vec{\pi}|\vec{N}) \propto P(\vec{N}|\vec{\pi})P(\vec{\pi}), \label{eq:bayes-theorem}
\end{align}
with
\begin{align}
P(\vec{\pi}) = P(k)P(\delta)P(\tau)P(\xi)P(\gamma), \label{eq:independent-parameters}
\end{align}
where we assume that all parameters are independent. For our analysis we choose a flat prior for each parameter, bounded by the respective limits listed in Table \ref{tab:fit-parameters}. Neglecting the normalization constant of Eq. (\ref{eq:bayes-theorem}), the final logarithmic posterior probability distribution can be written as
\begin{align}
\ln P(k,\delta,\tau,\xi,\gamma|\vec{N}) = &\ln L(\vec{N}|k,\delta,\tau,\xi,\gamma)\label{eq:posterior}\\
& + \ln P(k,\delta,\tau,\xi,\gamma).\nonumber
\end{align}
The last logarithm is either 0 or $-\infty$, depending on whether all parameters are within their respective bounds or not. We use \textbf{emcee} \cite{goodman-02}, a pure Python implementation of Goodman and Weare's affine-invariant ensemble sampler \cite{goodman-01}, to sample Eq. (\ref{eq:posterior}).
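Up to the constant $\sum_i\ln(N_i!)$ term, the sampled log-posterior of Eqs.~(\ref{eq:log-likelihood})--(\ref{eq:posterior}) reduces to a few lines. In the sketch below, the binned counts and the toy model (a pure $\gamma$ rescaling of a fixed simulated spectrum) are made-up stand-ins; in the actual analysis $\mu_i(\vec{\pi})$ is rebuilt from the \mcnp/ events for each parameter set.

```python
import math

BOUNDS = {                      # flat priors of Table 1
    'k': (0.1, 0.3), 'delta': (0.5, 6.0), 'tau': (0.5, 6.0),
    'xi': (0.0, 2.0), 'gamma': (0.5, 2.5),
}

def log_posterior(pi, observed, model):
    """pi: parameter dict; observed: residual-bin counts N_i; model: callable
    pi -> expected counts mu_i. Returns ln L + ln prior, dropping the
    constant -sum(ln N_i!) term."""
    for name, (lo, hi) in BOUNDS.items():
        if not lo <= pi[name] <= hi:
            return -math.inf         # outside the flat prior
    mu = model(pi)
    return sum(n * math.log(m) - m for n, m in zip(observed, mu))

# Toy stand-in: a fixed simulated spectrum rescaled by gamma only.
sim = [40.0, 25.0, 10.0, 4.0]
obs = [55, 33, 15, 6]
model = lambda pi: [pi['gamma'] * s for s in sim]
best = {'k': 0.18, 'delta': 3.6, 'tau': 3.4, 'xi': 0.16, 'gamma': 1.37}
print(log_posterior(best, obs, model))
# This callable is what would be handed to emcee's EnsembleSampler.
```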
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{./Plots/walker-trajectories.png}
\caption{Full MCMC chain consisting of 320 walkers with $10^5$
iterations each. The walkers were initialized uniformly within the
allowed parameter limits (Table \ref{tab:fit-parameters}). Most
walkers converge onto their final probability distribution after
$\sim500$ steps. The right-side plots show the kernel density
estimation using a bandwidth chosen according to Silverman's rule
\cite{silverman-01}. The dashed red line highlights the most probable
value of the resulting marginalized posterior probability
distribution function. The shaded red area shows the $1\sigma$
credible region.}
\label{fig:mcmc-parameter-distribution}
\end{figure}
\section{Results}
The first MCMC run performed consists of 320 walkers with $10^5$
steps each. The walkers are initialized uniformly within the allowed
parameter space. The full chain is shown in Fig.\
\ref{fig:mcmc-parameter-distribution}. Most walkers are observed to
converge onto the target distribution after $\sim500$ steps. The
adiabatic energy scale factor $\xi$ exhibits the longest auto-correlation time
with $\tau_\text{acor}\approx 87$ steps. The full chain therefore
covers a total of approximately 1150 auto-correlation lengths,
whereas the burn-in time is limited to the first six. Following
\cite{sokal-01} we choose to discard the first twenty
$\tau_\text{acor}$ to eliminate any remaining initialization bias.
The mean acceptance probability for the remaining chain is
P$_\text{acc}=0.46$. All parameters show a monotonically decreasing
Gelman-Rubin potential scale reduction factor R$_\text{GR}$
\cite{gelman-01}, the largest of which is R$_\text{GR}(\xi)$ = 1.073
after $10^5$ steps. An additional visual inspection of all walker
trajectories suggests proper mixing within each chain. The marginalized best-fit values including
their $1\sigma$ credible region are provided in Table
\ref{tab:fit-parameters}. To further investigate the presence of any
possible meta-stable states, we run three additional, shorter MCMC
chains of 320 walkers and $2\times10^4$ steps with differing starting
conditions. For the first two additional runs all
parameters are set below, or above, their respective best-fit values (Table \ref{tab:fit-parameters}). The third run probes
a possibly meta-stable state visible at $\xi\approx\SI{1.65}{\keVnr}$ in Fig.\
\ref{fig:mcmc-parameter-distribution} by initializing all walkers within the
vicinity of $\xi=\SI{1.65}{\keVnr}$, whereas all other parameters are
uniformly distributed within their respective bounds. All three runs
converge onto the same posterior distribution as the initial MCMC
run. The burn-in times, mean acceptance fractions and
auto-correlation lengths are generally identical. We conclude that the investigated possibly meta-stable state
bears no significance, and that all walkers have properly explored the
phase space and fully stabilized on the final posterior probability
distribution.
\renewcommand{\arraystretch}{1.5}
\begin{table}[tbp]
\begin{tabular}{lcc}
\\
\hline
\hline
Parameter & Boundaries & Best Fit\\
\hline
$k$ & $\left[0.1,0.3\right]$ &$0.1789^{+0.0014}_{-0.0010}$\\[5pt]
$\delta$ [mm] & $\left[0.5,6.0\right]$
&$3.60^{+0.22}_{-0.31}\,$\\[5pt]
$\tau$ [mm] & $\left[0.5,6.0\right]$ & $3.44^{+0.53}_{-0.43}\,$\\[5pt]
$\xi$ [keV$_{nr}$] & $\left[0.0,2.0\right]$ &
$0.16^{+0.10}_{-0.13}\,$\\[5pt]
$\gamma$ & $\left[0.5,2.5\right]$ & $1.367^{+0.015}_{-0.014}$\\
\hline
\hline
\end{tabular}
\caption{Parameter space and marginalized best-fit values for all
free parameters. The errors provided represent the $1\sigma$ credible
region obtained from the MCMC analysis. The upper boundary on the
explored adiabatic energy scale ($\xi$) space has been chosen
arbitrarily, but large enough such that it does not affect walker
movement.}
\label{tab:fit-parameters}
\end{table}
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{./Plots/single-multiple-scatter-spectrum-vs-residual.pdf}
\caption{Contributions from single and multiple neutron scattering interactions to the measured ionization energy spectrum. Below \SI{2}{\keV_{ee}} single, double, and multi-scatter ($n>$2) events contribute approximately the same to the overall spectrum. The endpoint of the single scatter spectrum corresponds to an energy of approximately \SI{2}{\keVee}, as expected from previous measurements of the germanium quenching factor at 77 K. This endpoint is readily visible as an inflection in the experimental residual. The shaded red band in the inset shows the one-sigma credible band for the fit. The quality of the fit is $\chi^2/\text{d.o.f}$ = \nicefrac{19.3}{13}. }
\label{fig:ss-ms-vs-res}
\end{figure}
The most probable value of $k=0.1789$ is close to the semi-empirical prediction by Lindhard of $k=0.157$, previous modeling and fits \cite{barker-01,hooper-01}, and in good agreement with existing experimental data at discrete energies. Below \SI{0.8}{\keVnr} our
quenching model starts to deviate from a pure Lindhard model due to
the adiabatic correction factor $F_\text{AC}$. The
corresponding best-fit value of the adiabatic energy scale factor
$\xi=\SI{0.16}{\keVnr}$ is seen to be in good agreement with kinematic threshold predictions recently made for germanium
\cite{sorensen-01}. As discussed above, $\xi$ lies well below our triggering threshold of $\sim\SI{0.8}{\keVee}$. However, our simulations show that approximately one third of the triggering events between 1 and \SI{2}{\keVee} involve three or more interactions with the detector (Fig. \ref{fig:ss-ms-vs-res}). The cumulative ionization energy from events involving multiple scatters can surpass the triggering threshold, contributing to the experimental residual. The energy range for which
our analysis provides a valid description of the quenching factor is
limited from above by the maximum recoil energy from a single
(dominant branch) neutron scatter,
$E_\text{nr}^\text{max}=\SI{8.52}{\keVnr}(\approx\SI{2.15}{\keVee})$.\\
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{./Plots/equivalent-energies.pdf}
\caption{Best-fit germanium quenching factor obtained from this work. Data points correspond to previous measurements from \cite{barbeau-01,chasman-01,chasman-02,jones-01,messous-01,texono} in this recoil energy region at 77 K. The solid line shows the modified
Lindhard model for our best-fit $k=0.1789$ and $\xi=\SI{0.16}{\keVnr}$,
over the energy region probed by this calibration. Below
$\sim$\SI{0.8}{\keVnr} the quenching factor is affected by the adiabatic correction factor $F_\text{AC}$. The
maximum recoil energy probed is given by the maximum energy transfer
of a single (dominant branch) neutron scatter, i.e.
$\SI{8.5}{\keVnr}$. Grayed lines represent the combined $1\sigma$
credible region for $k$ and $\xi$. Additional data points at 50 mK are shown \cite{CDMS}. See text for a discussion on a possible temperature dependence for this quenching factor.}
\label{fig:equivalent-energies}
\end{figure}
The best-fit overall scaling $\gamma=1.367$ would suggest a neutron yield
from the source \SI{36.7}{\percent} larger than measured with the
\isotope{He}{3} counter. This best-fit value was found
to be robust ($\pm^{2.3}_{2.1}\%$) against small ($\pm 7\%$)
variations in the magnitude of the neutron cross-section in lead,
representative of its known uncertainty. We performed a similar study of the dependence of $\gamma$ on the $\pm 5\%$ estimated uncertainty in germanium cross-sections, and $\pm 20\%$ uncertainty in the strength function (a measure of resonance contribution) for this element. These result in an additional variation in $\gamma$ by $\pm^{3.8}_{5.0}\%$. The obtained best-fit value for $\gamma$ is deemed satisfactory, in
view of the uncertainties involved, and in particular the mentioned tendency for our \isotope{He}{3} measurements to underestimate the nominal neutron yield from commercial sources. In addition, there is an anti-correlation between $\gamma$ and the active volume of the detector (i.e., the bulk unaffected by dead or transition layers). This active volume changes rapidly with the adopted values of $\delta$ and $\tau$, e.g., already by $\sim$15\% over the uncertainty in their best-fit values (Table \ref{tab:fit-parameters}). While this correlation is unavoidable, the best-fit values of $\delta$ and $\tau$ can be contrasted with expectations, as follows. The thickness of these layers was measured soon after detector acquisition in 2005, using an uncollimated \isotope{Am}{241} source, finding them similar at $\sim$1.2 mm each \cite{aalseth-03}. This was in line with the deep lithium diffusion requested from the manufacturer. Lithium diffusion in the external n+ contact in P-type germanium detectors is known to progress in time, especially for crystals stored at room temperature, as has been the case for most of this detector's history. Based on the few available measurements for this evolution (an increase in thickness by factors 3.3 (4.2) over 9 (13) years \cite{huy-01,huy-02}) we allowed a large parameter space for $\delta,\tau \in \left[0.5\,\text{mm},6\,\text{mm}\right]$. The obtained best-fit values for $\delta$ and $\tau$ correspond to an increase in the sum of dead and transition layer thicknesses in our PPC by a factor of 2.9 over a decade, compatible with the observations in \cite{huy-01,huy-02}.
The quenching factor corresponding to our best-fit $k=0.1789$ and
$\xi=\SI{0.16}{\keVnr}$ is shown in Fig. \ref{fig:equivalent-energies}. A good agreement with previous measurements at 77 K is evident. Fig. \ref{fig:ss-ms-vs-res} shows a comparison of best-fit simulated recoil spectrum and experimental residual over the 1-8 keV$_{ee}$ fitting range.
\section{Conclusions}
We have demonstrated a new calibration method described in \cite{collar-01}, expanding its use to germanium targets at 77 K, finding an excellent agreement with previous quenching factor measurements at discrete recoil energies. The simplicity of the experimental setup, combined with a straightforward data analysis, invites to apply this method to other WIMP and neutrino detector technologies. The emitted neutron energy can be adjusted by replacing the \isotope{Y}{88} source with other suitable isotopes such as \isotope{Sb}{124} ($E_n=\SI{24}{\keV}$) or \isotope{Ba}{207} ($E_n=\SI{94}{\keV}$). In upcoming publications we will report on results already obtained for silicon recoils in CCDs \cite{alvaro}, and xenon recoils in a single-phase liquid xenon detector \cite{luca}.
Recent work \cite{dm,rom} points at a possible dependence of the low-energy quenching factor in germanium on detector temperature and internal electric field, potentially related to the disagreement between all present results at 77 K, and those obtained at 50 mK \cite{benoit-01,CDMS,shutt} (Fig. \ref{fig:equivalent-energies}). This disagreement must be understood, as it might impact the physics reach of competing detector technologies. Use of the presently described technique on cryogenic germanium detectors \cite{cdmstalk} should help clarify the origin and extent of these discrepancies.
\begin{acknowledgements}
This work was supported in part by the Kavli Institute for Cosmological Physics at the University of Chicago through grant NSF PHY-1125897 and an endowment from the Kavli Foundation and its founder Fred Kavli. It was also completed in part with resources provided by the University of Chicago Research Computing Center.
\end{acknowledgements}
\bibliographystyle{plain}
\section{Introduction}
Weakly Interacting Massive Particles (WIMPs), hypothetical particles able to account for most observations pointing at a cosmological dark matter, are expected to interact via elastic scattering off nuclei in detecting media. Detector signals would arise from the energy loss of the recoiling nucleus as it slows down. The interpretation of WIMP searches crucially depends on a correct understanding of the mechanisms governing the stopping of low-energy ions in the target material. This concern can be extended to experimental efforts aiming to measure coherent elastic neutrino-nucleus scattering \cite{coherent}, where the mode of interaction and energy regime are the same.
At the few-keV energies expected from WIMP or low-energy neutrino interactions, nuclear recoils typically induce a smaller response than electron recoils of the same energy. Depending on detector type, this response is often measured through the scintillation or ionization yield. In the case of standard germanium diodes operated at liquid nitrogen temperature, it is the second mechanism that is exploited to extract signals. An energy-dependent quenching factor can then be defined as the ratio between the ionization generated by the recoil of a germanium nucleus, and that from an electron recoil of the same energy.
We report on a new measurement of the germanium quenching factor at $\sim$77 K, using a P-type Point Contact (PPC) detector \cite{barbeau-01}, and a calibration technique recently described in \cite{collar-01}. This approach employs a photoneutron radioactive source, exploiting its monochromatic low-energy neutron emission to create nuclear recoils having a well-defined maximum recoil energy of just a few keV$_\text{nr}$ (the suffix stands for ``nuclear recoil'', as opposed to the smaller ``electron equivalent'' (ee) ionization energy that is actually measured post-quenching). The modest electronic noise characteristic of a PPC makes it possible to include the contribution from sub-keV$_\text{nr}$ nuclear recoils. This technique has been used thus far in the characterization of the quenching factor of sodium recoils in NaI(Tl) scintillators \cite{collar-01}, and carbon and fluorine recoils in superheated fluids \cite{piconim,alanthesis,pico60}.
\section{Experimental Setup}
The experimental arrangement is illustrated in Fig. \ref{fig:daq}. All measurements took place in a shallow underground laboratory (6 m.w.e.) at the University of Chicago. A $\SI{50.7}{\mm}$ (diameter) $\times~ \SI{43.0}{\mm}$ (length) PPC germanium detector manufactured by Canberra Industries with an original active mass of \SI{0.475}{\kg} was surrounded by \SI{20}{\cm} of lead. This shielding reduces the intense gamma emissions from the source to a manageable level, avoiding pile-up and data throughput limitations, while causing only minimal changes to neutron energies \cite{collar-01}. The detector was previously used by the \cogent/ collaboration \cite{aalseth-01, aalseth-02}. An \isotope{Y}{88} gamma source was encapsulated by a $\SI{1}{cm}$-thick gamma-to-neutron BeO converter, and placed \SI{23}{\cm} away from the front of the PPC detector. The dominant neutron energy emitted by the source is $E_{n}\,=\,$\SI{152}{\keV} with an additional small ($\SI{0.5}{\percent}$) component of $E_{n}\,=\,\SI{963}{\keV}$ \cite{collar-01}. The maximum nuclear recoil energy transferred within a single scatter event in Ge for these neutron energies is $E_\text{nr}^{max}=(4MmE_{n})/(M+m)^2=\SI{8.5}{\keVnr}$ and $E_\text{nr}^{max}=\SI{51}{\keVnr}$, respectively, where $M$ and $m$ stand for Ge nucleus and neutron masses.
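The quoted endpoints follow directly from two-body elastic kinematics. As a quick numerical check (a sketch, assuming the $^{70}$Ge and neutron masses in atomic mass units; the exact endpoint shifts slightly with the germanium isotope assumed):

```python
# Maximum recoil energy transferred in a single elastic scatter (head-on):
#   E_nr_max = 4 M m E_n / (M + m)^2
M_GE70 = 69.9242  # Ge-70 mass in atomic mass units (assumed isotope choice)
M_N = 1.00866     # neutron mass in atomic mass units

def max_recoil_energy(e_n_kev, m_nucleus=M_GE70, m_neutron=M_N):
    """Maximum nuclear recoil energy (keV_nr) for a neutron of energy e_n_kev (keV)."""
    return 4.0 * m_nucleus * m_neutron * e_n_kev / (m_nucleus + m_neutron) ** 2

print(max_recoil_energy(152.0))  # ~8.5 keV_nr (dominant branch)
print(max_recoil_energy(963.0))  # ~54 keV_nr with this isotope choice
```

The isotope mass matters only at the few-percent level, which is why the endpoint for the weak 963 keV branch comes out near, but not exactly at, the quoted \SI{51}{\keVnr}.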
\begin{figure}[tbp]
\input{./Plots/daq.tex}
\caption{Experimental setup: the preamplifier output is digitized using a NI 5734 16-bit ADC, and shaped with a digital trapezoidal pulse shaper implemented on a NI 7966R Field-Programmable Gate Array (FPGA). The preamplifier trace is stored on the host PC if the corresponding shaped signal triggers on a rising edge threshold set at $\sim$\SI{0.8}{\keVee}, also implemented in the FPGA.}
\label{fig:daq}
\end{figure}
A \isotope{He}{3} neutron counter surrounded by HDPE moderator was employed to measure the isotropic neutron yield of the source, found to be in the range 574-580 neutrons/s, depending on the orientation of the source with respect to the counter. Prior experience with this \isotope{He}{3} counter and other neutron sources (\isotope{Am}{241}/Be, \isotope{Pu}{239}/Be, \isotope{Cf}{252}) of known activity points to an ability to characterize their yield within a few percent of its nominal value. More specifically, seven previous measurements involving four different commercial neutron sources displayed a systematic trend to underestimate their nominal neutron yield by $\sim$12\% \cite{drew}. The activity of the source was separately assessed via a gamma emission measurement employing a dedicated coaxial germanium detector. This gamma yield was used as an input to a \mcnp/ \cite{mcnpx} simulation employing a revised cross-section \cite{alan2} for the $^{9}$Be($\gamma$,n)$^{8}$Be reaction. The neutron yield obtained via this simulation is compatible with \isotope{He}{3} counter measurements, at $\sim573$ neutrons/s. Combining all measurements and accounting for statistical, simulation, and cross-section uncertainties, we estimate a source activity of 0.640$\pm$4\% mCi, corresponding to an emission of 574$\pm$5\% neutrons/s.
Preamplifier power and detector high voltage to the PPC were provided by a Polaris XIA DGF. The preamplifier signal output was fed into a 16-bit National Instruments (NI) 5734 ADC, connected to an NI PXIe-7966R FPGA module. The host PC was a NI PXIe 8133. A trapezoidal, digital pulse shaper was implemented on the FPGA using the recursive algorithm in \cite{jordanov-01}. The total shaping time was set to \SI{16}{\us} with a peaking time of $\SI{8}{\us}$ and a zero length flat top. A rising edge threshold trigger set to approximately \SI{0.8}{\keVee} was used for real-time detection of digitally-shaped pulses. The trigger position was set to \SI{80}{\percent} of the \SI{400}{\us}-long waveforms, with a sampling rate set to \SI{40}{\mega\sample\per\second}. The \SI{320}{\us}-long pre-trigger trace allowed monitoring of detector noise and baseline stability. An electron-equivalent energy scale was established using the \SI{59.5}{\keV} $\gamma$-emission from \isotope{Am}{241}, as well as the four main emission lines from \isotope{Ba}{133}.
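The shaping stage follows the standard recursive trapezoidal algorithm of \cite{jordanov-01}. A minimal software sketch of that recursion (the sample counts and decay constant below are illustrative, not the actual firmware settings) might look like:

```python
import numpy as np

def trapezoidal_shaper(v, k, l, M):
    """Recursive trapezoidal filter in the style of Jordanov's algorithm.

    v : preamplifier trace (1-D array)
    k : peaking time in samples
    l : k + flat-top length in samples (l >= k)
    M : pole-zero constant ~ tau_decay / T_sample of the preamplifier decay
    """
    v = np.asarray(v, dtype=float)
    n = len(v)
    vpad = np.concatenate([np.zeros(k + l), v])
    # d(n) = v(n) - v(n-k) - v(n-l) + v(n-k-l)
    d = vpad[k + l:] - vpad[l:l + n] - vpad[k:k + n] + vpad[:n]
    p = np.cumsum(d)             # accumulator p(n) = p(n-1) + d(n)
    s = np.cumsum(p + M * d)     # shaped output s(n) = s(n-1) + p(n) + M*d(n)
    return s / (k * (M + 1.0))   # normalize so the flat top equals the pulse height

# Illustrative settings: 8-sample peaking time, 8-sample flat top,
# exponential preamplifier decay constant of 20 samples
tau = 20.0
r = np.exp(-1.0 / tau)
pulse = 3.0 * r ** np.arange(300)  # decaying step of height 3 at sample 0
shaped = trapezoidal_shaper(pulse, k=8, l=16, M=r / (1.0 - r))
print(shaped.max())  # ~3.0: flat top recovers the pulse height
```

With the pole-zero constant $M$ matched to the preamplifier decay time, an exponential step is turned into a trapezoid whose flat top is insensitive to the exact pulse timing.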
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{./Plots/experimental-spectra.pdf}
\caption{Normalized energy spectra recorded for the two different source configurations. Their difference (residual) is shown in blue. The digitizer gain setting limited usable data to $>1$ keV$_{ee}$. The low-energy residual excess arises from neutron-induced nuclear recoils. Additional neutron-induced signals are visible at \SI{13.3}{\keV}, \SI{53.3}{\keV}, and \SI{68.8}{\keV}. These peaks are the result of $^{72}$Ge$(n,\gamma)$ and $^{73}$Ge$(n,n^\prime\gamma)$ interactions \cite{jones-02,clover}. The cancellation of Pb fluorescence lines in the range \SI{72}{\keV_{ee}} to \SI{87}{\keV_{ee}} illustrates the absence of isolated x/$\gamma$-ray contributions to the residual spectrum.}
\label{fig:spectra}
\end{figure}
In order to separate neutron-induced signals from those generated by gamma interactions from the source, a second measurement was performed where the BeO converter was replaced by an aluminum cap of identical geometry. Aluminum has a total attenuation for dominant (898 keV) \isotope{Y}{88} gamma-rays of $\lambda_\text{Al}(\SI{1}{\MeV}) = \SI{0.06146}{\cm\squared\per\gram}$, which closely matches that from BeO, $\lambda_\text{BeO}(\SI{1}{\MeV}) = \SI{0.06112}{\cm\squared\per\gram}$ \cite{berger-01}. A total \SI{19.3}{\hour} of exposure with the \isotope{Y}{88}/BeO source configuration and \SI{20.0}{\hour} with \isotope{Y}{88}/Al were collected. The energy spectra are normalized to account for the difference in run times, and the decay of the source (T$_{\nicefrac{1}{2}}$ = \SI{106.65}{\day}). The residual spectrum, i.e. the difference between the \isotope{Y}{88}/BeO (gammas and neutrons) and \isotope{Y}{88}/Al (gammas) spectra, contains neutron-induced signals only \cite{collar-01}. Fig. \ref{fig:spectra} shows both normalized spectra, and the resulting residual spectrum. The low-energy excess in the residual is caused by neutron-induced germanium recoils. As expected, the residual rapidly converges to zero above a few keV$_{ee}$, except for discrete peaks arising from inelastic scattering and neutron capture in $^{72,73}$Ge \cite{jones-02,clover}. These peaks can display a characteristic asymmetry towards high energies, due to the addition of gamma and nuclear recoil energy depositions \cite{skoro,jova}.
In addition to these measurements, a total of $10^8$ neutrons emitted by the BeO converter was simulated using \mcnp/ \cite{mcnpx}. The geometry included fine details such as the known internal structure of the PPC, chemical impurity content of lead, and source encapsulation. It also involved new improved cross-section libraries specifically developed for dark matter detector simulations \cite{alan}. Approximately \SI{0.4}{\percent} of these simulated neutrons produce at least one recoil within the detector. The interaction depth, measured from the nearest surface of the germanium crystal, and recoil energy from each nuclear elastic scattering event were recorded. The unquenched energy distribution of these individual recoils is shown in Fig.
\ref{fig:unquenched-recoil-spectrum}. Approximately \SI{50}{\percent} of neutrons interacting with the germanium crystal do so only once, a fraction large enough to expect a readily visible endpoint energy in the ionization spectrum, corresponding to the expected maximum recoil energy transfer of \SI{8.5}{\keV_{nr}}. Multi-scatter events allow us to study the contribution from nuclear recoils individually depositing energies below the 0.8 keV$_\text{ee}$ triggering threshold (Fig. \ref{fig:ss-ms-vs-res}). More precisely, 30(15)\% of simulated neutrons interacting with the detector produce at least one recoil depositing less than 1(0.5) keV$_\text{nr}$.
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{./Plots/unquenched-recoil-spectrum.pdf}
\caption{Simulated, unquenched distribution of nuclear recoil energies deposited for each individual neutron scatter event. As expected, primary $E_{n}\,=\,$\SI{152}{\keV} neutrons produce recoil energies of up to \SI{8.5}{\keV_{nr}}. The 0.5\% $E_{n}\,=\,\SI{963}{\keV}$ branch contributes a small fraction of higher recoil energies up to \SI{51}{\keV_{nr}}. The inset shows the multiplicity of interactions in the PPC for all simulated neutron histories. }
\label{fig:unquenched-recoil-spectrum}
\end{figure}
\section{Analysis}
To extract the quenching factor we compare the simulated data to the experimental residual spectrum. In a first step, the energy deposition of each simulated nuclear recoil is converted into an electron-equivalent energy via an energy-dependent quenching model $Q(E_\text{nr})$. Previous measurements of the quenching factor in germanium suggest that the Lindhard theory \cite{lindhard-01} provides an adequate description of $Q$ down to very low energies. This formalism can be written as \cite{barker-01,benoit-01}
\begin{align}
Q&= \frac{k\,g(\epsilon)}{1+k\,g(\epsilon)}\label{eq:lindhard-model-1}\\
g(\epsilon) &= 3\,\epsilon^{0.15} + 0.7\,\epsilon^{0.6}+\epsilon\label{eq:lindhard-model-2}\\
\epsilon &= 11.5\,Z^{-\nicefrac{7}{3}}\,E_\text{nr}\label{eq:lindhard-model-3}.
\end{align}
Here $Z$ is the atomic number of the recoiling nucleus, $\epsilon$ a dimensionless energy, $E_\text{nr}$ is the recoil energy in keV$_\text{nr}$, and $k$ describes the electronic energy loss. In the original description by Lindhard, a value $k\,=\,0.133 Z^{\nicefrac{2}{3}} A^{-\nicefrac{1}{2}}\,(=\,0.157$ for Ge) was adopted, with $A$ the mass number of the nucleus. Lindhard-like models have been fitted to previous quenching factor measurements using comparable $k$ values \cite{barker-01,hooper-01}. Accordingly, we treat $k$ as the free parameter of prime interest in our analysis.
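Eqs. (\ref{eq:lindhard-model-1})-(\ref{eq:lindhard-model-3}) transcribe directly into code. The endpoint check below reproduces the $\approx\SI{2.15}{\keVee}$ electron-equivalent endpoint quoted further on in the text:

```python
import math

Z_GE = 32  # atomic number of germanium

def lindhard_quenching(e_nr_kev, k=0.1789):
    """Lindhard quenching factor Q(E_nr) of Eqs. (1)-(3) for germanium."""
    eps = 11.5 * Z_GE ** (-7.0 / 3.0) * e_nr_kev   # dimensionless reduced energy
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
    return k * g / (1.0 + k * g)

# Endpoint check: the maximum single-scatter recoil of 8.52 keV_nr quenches
# to roughly the 2.15 keV_ee endpoint quoted in the analysis
print(8.52 * lindhard_quenching(8.52))  # ~2.15
```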
In a second step, we acknowledge that the charge collection efficiency $\eta$ within a PPC detector varies with interaction depth into the crystal. This is due to the effect of a lithium-diffused external contact covering most of the outer surface of the diode \cite{aalseth-04}. Following \cite{aalseth-03} we adopt a sigmoid-shaped charge collection efficiency profile
\begin{align}
\eta(x,\delta,\tau)\;=\;1-\frac{1}{\exp\left[{\frac{x-(\delta+0.5\,\tau)}{0.17\,\tau}}\right]+1},
\end{align}
where $\delta$ is the outermost dead layer thickness, for which $\eta$ is negligible, $\tau$ is an underlying transition layer thickness over which the charge collection efficiency rises from $\eta=0.05$ to $0.95$, and $x$ is the interaction depth.
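The numerical factor 0.17 in the exponent is what pins the 5\%-to-95\% rise of $\eta$ to the transition-layer thickness $\tau$; this can be verified directly (here using the best-fit $\delta$ and $\tau$ of Table \ref{tab:fit-parameters}):

```python
import math

def collection_efficiency(x, delta, tau):
    """Sigmoid charge-collection efficiency vs. interaction depth x (all in mm)."""
    return 1.0 - 1.0 / (math.exp((x - (delta + 0.5 * tau)) / (0.17 * tau)) + 1.0)

delta, tau = 3.60, 3.44  # best-fit values from Table 1
print(collection_efficiency(delta, delta, tau))        # ~0.05 at end of dead layer
print(collection_efficiency(delta + tau, delta, tau))  # ~0.95 at end of transition layer
```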
In a third step, we account for the possibility of a reduced ionization
efficiency for slow-moving nuclear recoils, by introducing a smooth
adiabatic correction factor $F_\text{AC}$ to the Lindhard stopping. The concept of a
``kinematic threshold'' below which
the minimum excitation energy of the detector system is larger than
the maximum possible energy transfer to an electron by a
slow-moving ion, can be traced back to Fermi and Teller \cite{fermi}.
We adopt the same correction factor model previously employed in
\cite{ahlen-01,ahlen-02},
\begin{align}
F_\text{AC}\left(E_\text{nr},\xi\right) = 1 -
\exp\left[-\nicefrac{E_\text{nr}}{\xi}\right],
\end{align}
where the adiabatic energy scale factor $\xi$ corresponds to the threshold energy below which a rapid drop in ionization efficiency can be expected.
The total simulated electron equivalent energy measured for a neutron interacting $n$ times with the crystal can now be written as
\begin{align}
E_\text{ee} = \sum\limits_{i=1}^n
E_\text{nr}^{(i)}Q\left(E_\text{nr}^{(i)},k\right)\eta\left(x^{(i)},\delta,\tau\right)F_\text{AC}\left(E_\text{nr}^{(i)},\xi\right)\label{eq:electron-equivalent-energy},
\end{align}
where $E_\text{nr}^{(i)}$ is the recoil energy deposited at the $i^{th}$ interaction site. The resulting nuclear recoil energy spectrum in units of electron equivalent energy is convolved with a resolution $\sigma^2(E_\text{ee}) = (\SI{69.7}{eV})^2 + (0.98\,\text{eV})\,E_\text{ee}$, specific to this detector \cite{aalseth-01,aalseth-02}.
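Putting the three steps together, Eq. (\ref{eq:electron-equivalent-energy}) for a single simulated neutron might be sketched as follows (the three-scatter hit list is hypothetical; parameter defaults are the best-fit values of Table \ref{tab:fit-parameters}). It illustrates how several individually sub-threshold recoils can sum to a triggering signal:

```python
import math

def electron_equivalent_energy(hits, k=0.1789, delta=3.60, tau=3.44, xi=0.16):
    """Eq. (5): total ionization energy (keV_ee) for one simulated neutron.

    hits : list of (E_nr in keV_nr, interaction depth in mm) pairs,
           one per elastic scatter.
    """
    total = 0.0
    for e_nr, x in hits:
        eps = 11.5 * 32 ** (-7.0 / 3.0) * e_nr
        g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
        q = k * g / (1.0 + k * g)                       # Lindhard quenching Q
        eta = 1.0 - 1.0 / (math.exp((x - (delta + 0.5 * tau)) / (0.17 * tau)) + 1.0)
        f_ac = 1.0 - math.exp(-e_nr / xi)               # adiabatic correction F_AC
        total += e_nr * q * eta * f_ac
    return total

# Hypothetical triple scatter deep in the crystal: each recoil alone quenches
# to below the ~0.8 keV_ee threshold, but their sum triggers (~1.4 keV_ee)
print(electron_equivalent_energy([(2.0, 20.0), (1.5, 18.0), (3.0, 15.0)]))
```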
In a final step, the simulated spectrum is normalized to match the integrated neutron yield over the time span of the measurements. To account for the mentioned significant uncertainty in source neutron yield we introduce an additional free global scaling parameter $\gamma$. Our full analysis therefore involves a total of five free parameters, three of which ($\delta,\tau,\gamma$) are treated as nuisance parameters as they are not of immediate interest to our measurement of the quenching factor, even if they must be accounted for.
We employ a Markov Chain Monte Carlo (MCMC) to find the parameter set $\vec{\pi} = \left(k,\delta,\tau,\xi,\gamma\right)$ that provides the best fit of the simulated data to the experimental residual spectrum. Assuming an underlying Poisson distribution for each bin of the simulated residual spectrum, the probability to count $N_i$ events in bin $i$ given $\mu_i$ simulated counts in the same bin can simply be written as
\begin{align}
P(N_i|\mu_i)\;=\;\frac{\mu_i^{N_i}\,\text{e}^{-\mu_i}}{N_i!},
\end{align}
where $\mu_i$ solely depends on our choice of fit parameters $\vec{\pi}$. The corresponding log-likelihood function is given by
\begin{align}
\ln \text{L}(\vec{N}|\vec{\pi}) = & \sum\limits_i N_i\ln(\mu_i(\vec{\pi})) - \sum\limits_i \mu_i(\vec{\pi})\label{eq:log-likelihood}\\
& - \sum\limits_i \ln(N_i!).\nonumber
\end{align}
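The last term, $\sum_i \ln(N_i!)$, does not depend on $\vec{\pi}$, so omitting it shifts $\ln \text{L}$ by the same constant for every parameter choice and cannot move the maximum. A small numerical illustration (with made-up counts):

```python
import math

def log_likelihood(N, mu, include_constant=False):
    """Poisson log-likelihood of Eq. (8) for observed counts N and model mu."""
    ll = sum(n * math.log(m) - m for n, m in zip(N, mu))
    if include_constant:
        ll -= sum(math.lgamma(n + 1) for n in N)  # the parameter-independent ln(N_i!) sum
    return ll

N = [12, 7, 30, 4]  # made-up bin counts
shift_a = log_likelihood(N, [11.0, 8.0, 28.0, 5.0], True) - log_likelihood(N, [11.0, 8.0, 28.0, 5.0])
shift_b = log_likelihood(N, [14.0, 6.0, 33.0, 3.0], True) - log_likelihood(N, [14.0, 6.0, 33.0, 3.0])
print(abs(shift_a - shift_b) < 1e-9)  # True: same constant shift for any model mu
```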
The last sum is constant for all choices of $\vec{\pi}$. We will therefore not include it in the final posterior probability sampling process. From Bayes' theorem we know that
\begin{align}
P(\vec{\pi}|\vec{N}) \propto P(\vec{N}|\vec{\pi})P(\vec{\pi}), \label{eq:bayes-theorem}
\end{align}
with
\begin{align}
P(\vec{\pi}) = P(k)P(\delta)P(\tau)P(\xi)P(\gamma), \label{eq:independent-parameters}
\end{align}
where we assume that all parameters are independent. For our analysis we choose a bound, flat prior for each parameter, with the respective limits listed in Table \ref{tab:fit-parameters}. Neglecting the normalization constant of Eq. (\ref{eq:bayes-theorem}), the final logarithmic posterior probability distribution can be written as
\begin{align}
\ln P(k,\delta,\tau,\xi,\gamma|\vec{N}) = &\ln L(\vec{N}|k,\delta,\tau,\xi,\gamma)\label{eq:posterior}\\
& + \ln P(k,\delta,\tau,\xi,\gamma).\nonumber
\end{align}
The last logarithm is either 0 or $-\infty$, depending on whether all parameters are within their respective bounds or not. We use \textbf{emcee} \cite{goodman-02}, a pure Python implementation of Goodman and Weare's affine-invariant ensemble sampler \cite{goodman-01}, to sample Eq. (\ref{eq:posterior}).
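The structure of Eq. (\ref{eq:posterior}) — a log-likelihood plus a bounded flat prior contributing 0 inside the box and $-\infty$ outside — is easy to prototype. The sketch below uses a plain random-walk Metropolis sampler in place of emcee's affine-invariant ensemble, with a toy one-dimensional Gaussian likelihood standing in for the real one:

```python
import numpy as np

def log_posterior(theta, lo, hi, log_like):
    """Eq. (11): log-likelihood plus a bounded flat prior (0 inside, -inf outside)."""
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf
    return log_like(theta)

def metropolis(log_prob, x0, step, n_steps, rng):
    """Random-walk Metropolis chain (a simple stand-in for emcee's ensemble)."""
    chain = np.empty((n_steps, len(x0)))
    x = np.asarray(x0, dtype=float)
    lp = log_prob(x)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(len(x))
        lp_prop = log_prob(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy example: Gaussian likelihood centered at 1.3 inside prior bounds [0.5, 2.5]
rng = np.random.default_rng(1)
lp = lambda t: log_posterior(t, 0.5, 2.5, lambda th: -0.5 * ((th[0] - 1.3) / 0.1) ** 2)
chain = metropolis(lp, [1.0], 0.1, 20000, rng)
print(chain[2000:, 0].mean())  # ~1.3 after discarding burn-in
```

Proposals landing outside the prior box have log-probability $-\infty$ and are always rejected, so the chain automatically respects the parameter bounds of Table \ref{tab:fit-parameters}.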
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{./Plots/walker-trajectories.png}
\caption{Full MCMC chain consisting of 320 walkers with $10^5$
iterations each. The walkers were initialized uniformly within the
allowed parameter limits (Table \ref{tab:fit-parameters}). Most
walkers converge onto their final probability distribution after
$\sim500$ steps. The right-side plots show the kernel density
estimation using a bandwidth chosen according to Silverman's rule
\cite{silverman-01}. The dashed red line highlights the most probable
value of the resulting marginalized posterior probability
distribution function. The shaded red area shows the $1\sigma$
credible region.}
\label{fig:mcmc-parameter-distribution}
\end{figure}
\section{Results}
The first MCMC run performed consists of 320 walkers with $10^5$
steps each. The walkers are initialized uniformly within the allowed
parameter space. The full chain is shown in Fig.\
\ref{fig:mcmc-parameter-distribution}. Most walkers are observed to
converge onto the target distribution after $\sim500$ steps. The
adiabatic energy scale factor $\xi$ exhibits the longest auto-correlation time
with $\tau_\text{acor}\approx 87$ steps. The full chain therefore
covers a total of approximately 1150 auto-correlation lengths,
whereas the burn-in time is limited to the first six. Following
\cite{sokal-01} we choose to discard the first twenty
$\tau_\text{acor}$ to eliminate any remaining initialization bias.
The mean acceptance probability for the remaining chain is
P$_\text{acc}=0.46$. All parameters show a monotonically decreasing
Gelman-Rubin potential scale reduction factor R$_\text{GR}$
\cite{gelman-01}, the largest of which is R$_\text{GR}(\xi)$ = 1.073
after $10^5$ steps. An additional visual inspection of all walker
trajectories suggests proper mixing within each chain. The marginalized best-fit values including
their $1\sigma$ credible region are provided in Table
\ref{tab:fit-parameters}. To further investigate the presence of any
possible meta-stable states, we run three additional, shorter MCMC
chains of 320 walkers and $2\times10^4$ steps with differing starting
conditions. For the first two additional runs all
parameters are set below, or above, their respective best-fit values (Table \ref{tab:fit-parameters}). The third run probes
a possibly meta-stable state visible at $\xi\approx\SI{1.65}{\keVnr}$ in Fig.\
\ref{fig:mcmc-parameter-distribution} by initializing all walkers within the
vicinity of $\xi=\SI{1.65}{\keVnr}$, whereas all other parameters are
uniformly distributed within their respective bounds. All three runs
converge onto the same posterior distribution as the initial MCMC
run. The burn-in times, mean acceptance fractions and
auto-correlation lengths are generally identical. We conclude that the investigated possibly meta-stable state
bears no significance, and that all walkers have properly explored the
phase space and fully stabilized on the final posterior probability
distribution.
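The Gelman-Rubin diagnostic quoted above compares within-chain and between-chain variance. A compact implementation (demonstrated with synthetic chains: four well-mixed ones, then one deliberately displaced) behaves as expected:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R_GR for an array of shape (m chains, n steps)."""
    m, n = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    B = n * means.var(ddof=1)                  # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
mixed = rng.standard_normal((4, 5000))                    # four well-mixed chains
split = mixed + np.array([[0.0], [0.0], [0.0], [3.0]])    # one chain stuck elsewhere
print(gelman_rubin(mixed))   # ~1.0 for converged chains
print(gelman_rubin(split))   # >> 1 flags poor mixing
```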
\renewcommand{\arraystretch}{1.5}
\begin{table}[tbp]
\begin{tabular}{lcc}
\hline
\hline
Parameter & Boundaries & Best Fit\\
\hline
$k$ & $\left[0.1,0.3\right]$ &$0.1789^{+0.0014}_{-0.0010}$\\[5pt]
$\delta$ [mm] & $\left[0.5,6.0\right]$
&$3.60^{+0.22}_{-0.31}\,$\\[5pt]
$\tau$ [mm] & $\left[0.5,6.0\right]$ & $3.44^{+0.53}_{-0.43}\,$\\[5pt]
$\xi$ [keV$_{nr}$] & $\left[0.0,2.0\right]$ &
$0.16^{+0.10}_{-0.13}\,$\\[5pt]
$\gamma$ & $\left[0.5,2.5\right]$ & $1.367^{+0.015}_{-0.014}$\\
\hline
\hline
\end{tabular}
\caption{Parameter space and marginalized best-fit values for all
free parameters. The errors provided represent the $1\sigma$ credible
region obtained from the MCMC analysis. The upper boundary on the
explored adiabatic energy scale ($\xi$) space has been chosen
arbitrarily, but large enough such that it does not affect walker
movement.}
\label{tab:fit-parameters}
\end{table}
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{./Plots/single-multiple-scatter-spectrum-vs-residual.pdf}
\caption{Contributions from single and multiple neutron scattering interactions to the measured ionization energy spectrum. Below \SI{2}{\keV_{ee}} single, double, and multi-scatter ($n>$2) events contribute approximately equally to the overall spectrum. The endpoint of the single scatter spectrum corresponds to an energy of approximately \SI{2}{\keVee}, as expected from previous measurements of the germanium quenching factor at 77 K. This endpoint is readily visible as an inflection in the experimental residual. The shaded red band in the inset shows the one-sigma credible band for the fit. The quality of the fit is $\chi^2/\text{d.o.f}$ = \nicefrac{19.3}{13}. }
\label{fig:ss-ms-vs-res}
\end{figure}
The most probable value of $k=0.1789$ is close to the semi-empirical prediction by Lindhard of $k=0.157$, previous modeling and fits \cite{barker-01,hooper-01}, and in good agreement with existing experimental data at discrete energies. Below \SI{0.8}{\keVnr} our
quenching model starts to deviate from a pure Lindhard model due to
the adiabatic correction factor $F_\text{AC}$. The
corresponding best-fit value of the adiabatic energy scale factor
$\xi=\SI{0.16}{\keVnr}$ is seen to be in good agreement with kinematic threshold predictions recently made for germanium
\cite{sorensen-01}. As discussed above, $\xi$ lies well below our triggering threshold of $\sim\SI{0.8}{\keVee}$. However, our simulations show that approximately one third of the triggering events between 1-\SI{2}{\keVee} involve three or more interactions with the detector (Fig. \ref{fig:ss-ms-vs-res}). The cumulative ionization energy from events involving multiple scatters can surpass the triggering threshold, contributing to the experimental residual. The energy range for which
our analysis provides a valid description of the quenching factor is
limited from above by the maximum recoil energy from a single
(dominant branch) neutron scatter,
$E_\text{nr}^\text{max}=\SI{8.52}{\keVnr}(\approx\SI{2.15}{\keVee})$.\\
\begin{figure}[tbp]
\includegraphics[width=\linewidth]{./Plots/equivalent-energies.pdf}
\caption{Best-fit germanium quenching factor obtained from this work. Data points correspond to previous measurements from \cite{barbeau-01,chasman-01,chasman-02,jones-01,messous-01,texono} in this recoil energy region at 77 K. The solid line shows the modified
Lindhard model for our best-fit $k=0.1789$ and $\xi=\SI{0.16}{\keVnr}$,
over the energy region probed by this calibration. Below
$\sim$\SI{0.8}{\keVnr} the quenching factor is affected by the adiabatic correction factor $F_\text{AC}$. The
maximum recoil energy probed is given by the maximum energy transfer
of a single (dominant branch) neutron scatter, i.e.
$\SI{8.5}{\keVnr}$. Grayed lines represent the combined $1\sigma$
credible region for $k$ and $\xi$. Additional data points at 50 mK are shown \cite{CDMS}. See text for a discussion on a possible temperature dependence for this quenching factor.}
\label{fig:equivalent-energies}
\end{figure}
The best-fit overall scaling $\gamma=1.367$ would suggest a neutron yield
from the source \SI{36.7}{\percent} larger than measured with the
\isotope{He}{3} counter. This best-fit value was found
to be robust ($\pm^{2.3}_{2.1}\%$) against small ($\pm 7\%$)
variations in the magnitude of the neutron cross-section in lead,
representative of its known uncertainty. We performed a similar study of the dependence of $\gamma$ on the $\pm 5\%$ estimated uncertainty in germanium cross-sections, and $\pm 20\%$ uncertainty in the strength function (a measure of resonance contribution) for this element. These result in an additional variation in $\gamma$ by $\pm^{3.8}_{5.0}\%$. The obtained best-fit value for $\gamma$ is deemed satisfactory, in
view of the uncertainties involved, and in particular the mentioned tendency for our \isotope{He}{3} measurements to underestimate the nominal neutron yield from commercial sources. In addition to this, an anti-correlation between the active volume of the detector (i.e., the bulk unaffected by dead or transition layer) and $\gamma$ exists. This active volume changes rapidly with the adopted value of $\delta$ and $\tau$, e.g., already by $\sim$15\% over the uncertainty in their best-fit values (Table \ref{tab:fit-parameters}). While this correlation is unavoidable, the best-fit values of $\delta$ and $\tau$ can be contrasted with expectations, as follows. The thickness of these layers was measured soon after detector acquisition in 2005, using an uncollimated \isotope{Am}{241} source, finding them similar at $\sim$1.2 mm each \cite{aalseth-03}. This was in line with the deep lithium diffusion requested from the manufacturer. Lithium diffusion in the external n+ contact in P-type germanium detectors is known to progress in time, especially for crystals stored at room temperature, as has been the case for most of this detector's history. Based on the few available measurements for this evolution (an increase in thickness by factors 3.3 (4.2) over 9 (13) years \cite{huy-01,huy-02}) we allowed a large parameter space for $\delta,\tau \in \left[0.5\,\text{mm},6\,\text{mm}\right]$. The obtained best-fit values for $\delta$ and $\tau$ correspond to an increase in the sum of dead and transition layer thicknesses in our PPC by a factor of 2.9 over a decade, compatible with the observations in \cite{huy-01,huy-02}.
The quenching factor corresponding to our best-fit $k=0.1789$ and
$\xi=\SI{0.16}{\keVnr}$ is shown in Fig. \ref{fig:equivalent-energies}. A good agreement with previous measurements at 77 K is evident. Fig. \ref{fig:ss-ms-vs-res} shows a comparison of best-fit simulated recoil spectrum and experimental residual over the 1-8 keV$_{ee}$ fitting range.
\section{Conclusions}
We have demonstrated a new calibration method described in \cite{collar-01}, expanding its use to germanium targets at 77 K, finding an excellent agreement with previous quenching factor measurements at discrete recoil energies. The simplicity of the experimental setup, combined with a straightforward data analysis, invites the application of this method to other WIMP and neutrino detector technologies. The emitted neutron energy can be adjusted by replacing the \isotope{Y}{88} source with other suitable isotopes such as \isotope{Sb}{124} ($E_n=\SI{24}{\keV}$) or \isotope{Ba}{207} ($E_n=\SI{94}{\keV}$). In upcoming publications we will report on results already obtained for silicon recoils in CCDs \cite{alvaro}, and xenon recoils in a single-phase liquid xenon detector \cite{luca}.
Recent work \cite{dm,rom} points at a possible dependence of the low-energy quenching factor in germanium on detector temperature and internal electric field, potentially related to the disagreement between all present results at 77 K, and those obtained at 50 mK \cite{benoit-01,CDMS,shutt} (Fig. \ref{fig:equivalent-energies}). This disagreement must be understood, as it might impact the physics reach of competing detector technologies. Use of the presently described technique on cryogenic germanium detectors \cite{cdmstalk} should help clarify the origin and extent of these discrepancies.
\begin{acknowledgements}
This work was supported in part by the Kavli Institute for Cosmological Physics at the University of Chicago through grant NSF PHY-1125897 and an endowment from the Kavli Foundation and its founder Fred Kavli. It was also completed in part with resources provided by the University of Chicago Research Computing Center.
\end{acknowledgements}
\bibliographystyle{plain}
More than 30 years ago \cite{HV76} the linear mixing rule for
multicomponent strongly coupled mixtures was shown to be highly
accurate. However, only recent studies \cite{PCR09,Mixt_New} have
achieved enough accuracy to describe the corrections to the linear
mixing rule for a wide range of plasma parameters; previous
attempts, e.g.\ \cite{DWSC96,DWS03}, were restricted at least by a
limited number of data points. We discuss the corrections to the
linear mixing rule in application to the plasma screening of nuclear
reactions in strongly coupled mixtures. Following Ref.\ \cite{DWS99}
we apply two approaches to calculate the screening enhancement: one
is based on the thermodynamic relations and the other on fitting the
mean-field potentials.
The main
advance of the present work is the use of a much wider set of
numerical data and the most
precise thermodynamic results available.
\section{Plasma screening enhancement of nuclear reaction rates}\label{Sec:TwoApproaches}
Let us study a multicomponent mixture of ions $j=1, 2, \ldots$ with
atomic mass numbers $A_j$ and charge numbers $Z_j$. The ions are
supposed to be fully ionized.
Their total number density is the sum of partial densities,
$n_\mathrm{i}=\sum_j\, n_j$. It is useful to introduce the
fractional number $x_j=n_j/n_\mathrm{i}$ of ions $j$. Let us also
define the average charge number
$\langle Z \rangle=\sum_j\, x_j Z_j$
and mass number
$\langle A \rangle=\sum_j\, x_j A_j$
of the ions. The charge neutrality implies that the electron number
density is $n_\mathrm{e}=\langle Z \rangle n_\mathrm{i}$.
The electron plasma
screening is typically weak and will be neglected.
Thermonuclear reactions in stars take place after the atomic nuclei
collide and penetrate through the Coulomb barrier. For a not too
cold and dense stellar matter the tunneling length $r_\mathrm{t}$ is
much smaller than interionic distances (for recent results on nuclear
fusion with large tunneling distances see
\cite{OCP_react,BIM_react}). The interaction of the reacting ions
with neighboring plasma particles creates a potential well which
enlarges the number of close encounters and enhances the reaction
rate. Before the tunneling event the reactants $j$ and $k$ behave as
classical particles. Their correlations can be described by the
classical radial pair distribution function $g_{jk}(r)$. It can be
calculated by the classical Monte Carlo technique and written as
$ g_{jk}(r)=\exp\left[-\Gamma_{jk}\,a_{jk}/r+H_{jk}(r)/T\right]$,
where $\Gamma_{jk}=Z_j Z_ke^2/(a_{jk}T)$ is the corresponding Coulomb
coupling parameter, and $T$ is the temperature.
The ion sphere radius $a_{jk}$ can be defined as \cite{ikm90}
$a_{jk}=(a_j+a_k)/2$ and $a_j=Z_j^{1/3} a_\mathrm{e}$, where
$a_\mathrm{e}=(4\pi n_\mathrm{e}/3)^{-1/3}$. The function
$H_{jk}(r)$ is the mean-field plasma potential. The plasma
enhancement factor is then given by
$
F_{jk}(r_\mathrm{t})=g_{jk}(r_\mathrm{t})/g^\mathrm{id}_{jk}(r_\mathrm{t})
=\exp\left[H_{jk}(r_\mathrm{t})/T\right]\approx \exp\left[H_{jk}(0)/T\right].
$
Here,
$g^\mathrm{id}_{jk}(r_\mathrm{t})=\exp\left(-\Gamma_{jk}\,a_{jk}/r_\mathrm{t}\right)$
is the pair distribution function in the absence of screening. In
the last equality we neglect variations of $H_{jk}(r)$ over scales
$\sim r_\mathrm{t}$ which are much lower than scales $\sim a_{jk}$
of $H_{jk}(r)$.
\paragraph{Widom expansion.}
The enhancement factor of nuclear reaction rates can be determined
in the following way: one can calculate $g_{jk}(r)$ by classical
Monte Carlo, extract $H_{jk}(r)$
and extrapolate the results to $H_{jk}(0)$. The extrapolation is
delicate \cite{rosenfeld96} because of poor Monte Carlo statistics
at small separations. We expect that the expansion of $H_{jk}(r)$
contains only even powers of $r/a_{jk}$ (the Widom expansion,
\cite{widom63}); its quadratic term is known \cite{oii91}:
\begin{equation}
H_{jk}(r)=H_0-\frac{Z_j Z_k e^2
}{2a^\mathrm{comp}_{jk}}
\left(\frac{r}{a^\mathrm{comp}_{jk}}\right)^2
+H_4\,\left(\frac{r}{a_{jk}}\right)^4
-H_6\,\left(\frac{r}{a_{jk}}\right)^6
+\ldots
\label{widom}
\end{equation}
Here, $H_0=H_{jk}(0)$ and $a^\mathrm{comp}_{jk}=(Z_j+Z_k)^{1/3}
a_\mathrm{e}$ is the ion-sphere radius of the compound nuclei. Let
us also introduce the dimensionless parameter
$h^0_{jk}=H_{jk}(0)/T$. We have performed a large number of Monte
Carlo simulations of mean field potentials in binary ionic mixtures.
For each simulation, we fit $H_{jk}(r)$ by Eq.\ (\ref{widom}) taking
$H_0$, $H_4$ and $H_6$ as free parameters.
To estimate error bars we have varied $H_0$ and made additional fits
with two free parameters, $H_4$ and $H_6$.
Fig.\ \ref{Fig_Compare} shows the normalized enhancement parameter
$h_{jk}^0/\Gamma_{jk}$ (dots with error bars)
calculated in this way.
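Since Eq.\ (\ref{widom}) is linear in the free parameters once the quadratic coefficient is fixed at its exact value, the fit reduces to linear least squares. The following Python sketch illustrates this procedure (the function and variable names are ours, not the authors' code; units are left schematic):

```python
import numpy as np

def fit_widom(r, H, a_jk, a_comp, ZjZk_e2):
    """Least-squares fit of a mean-field potential H_jk(r) to the Widom
    expansion, Eq. (1): the quadratic coefficient is fixed at its exact
    value -Z_j Z_k e^2 / (2 a_comp), while H0, H4 and H6 are free."""
    quad = -0.5 * ZjZk_e2 / a_comp * (r / a_comp) ** 2   # known exact term
    # Design matrix for the free parameters (note the sign convention of H6).
    X = np.column_stack([np.ones_like(r),
                         (r / a_jk) ** 4,
                         -(r / a_jk) ** 6])
    H0, H4, H6 = np.linalg.lstsq(X, H - quad, rcond=None)[0]
    return H0, H4, H6
```

With error bars on $H(r)$ one would instead use a weighted fit, but the structure of the problem is the same.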
\paragraph{Thermodynamic enhancement factors.}
The second approach to calculate the enhancement factors comes from
thermodynamics.
One can estimate
$H_{jk}(0)$ as a difference of the Helmholtz Coulomb free energies
$F$ of the system before and after the reaction event (e.g.,
\cite{ys89}):
\begin{equation}
h_{jk}^0=\left[F(\ldots,N_j,N_k,N_{jk}^\mathrm{comp},\ldots)
-F(\ldots,N_j-1,N_k-1,N_{jk}^\mathrm{comp}+1,\ldots)\right]/T,
\label{h0}
\end{equation}
where $N_j$, $N_k$, $N_{jk}^\mathrm{comp}$ are the numbers of the
reacting nuclei and the compound nuclei $(Z_j+Z_k,A_j+A_k)$.
Usually (see, e.g.\ \cite{BIM_react}) one assumes the linear mixing model
and presents the free energy of the Coulomb mixture $F$ as
$ F^\mathrm{lin}\left(\left\{N_j\right\}\right)=T\sum_j N_j
f_0\left(\Gamma_{jj}\right),
$
where $f_0(\Gamma)$ is the Coulomb free energy (normalized to the temperature
$T$) per nucleus in a one-component plasma.
We use the well known approximation of
$f_0(\Gamma)$
suggested by Potekhin \& Chabrier
\cite{pc00}.
In the linear mixing model Eq.\ (\ref{h0}) can be written in
the convenient form:
\begin{equation}
h_{jk}^\mathrm{lin}=f_0(\Gamma_{jj})+f_0(\Gamma_{kk})-f_0(\Gamma_{jk}^\mathrm{comp}),
\label{h0lin}
\end{equation}
where
$\Gamma_{jk}^\mathrm{comp}=\left(Z_j+Z_k\right)^{5/3}\Gamma_\mathrm{e}$
is the Coulomb coupling parameter for the compound nucleus. The values
of $h_{jk}^\mathrm{lin}$ are shown by the solid line in
Fig.\ \ref{Fig_Compare}.
Our aim is to check the accuracy of the linear mixing and analyze
deviations from this model.
To do this we apply the best available results for the
thermodynamics of multicomponent mixtures \cite{PCR09,Mixt_New},
which are valid for any value of the coupling parameter. The values
of the corresponding enhancement parameter $h_{jk}^0/\Gamma_{jk}$
are shown by the long-dash line in Fig.\ \ref{Fig_Compare}.
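Eq.\ (\ref{h0lin}) is straightforward to evaluate numerically. The sketch below illustrates it with a crude strong-coupling ion-sphere stand-in $f_0(\Gamma)\approx -0.9\,\Gamma$; in practice the Potekhin--Chabrier fit \cite{pc00} should be substituted. All names here are ours:

```python
def h_lin(Z_j, Z_k, Gamma_e, f0):
    """Linear-mixing enhancement parameter, Eq. (3):
    h_jk^lin = f0(Gamma_jj) + f0(Gamma_kk) - f0(Gamma_jk^comp),
    with Gamma_jj = Z_j^{5/3} Gamma_e and
    Gamma_jk^comp = (Z_j + Z_k)^{5/3} Gamma_e."""
    G = lambda Z: Z ** (5.0 / 3.0) * Gamma_e
    return f0(G(Z_j)) + f0(G(Z_k)) - f0(G(Z_j + Z_k))

# Crude ion-sphere stand-in for the one-component Coulomb free energy per
# ion (valid only at strong coupling); replace by the fit of Ref. [pc00].
f0_ion_sphere = lambda Gamma: -0.9 * Gamma
```

Since $(Z_j+Z_k)^{5/3}>Z_j^{5/3}+Z_k^{5/3}$, this stand-in already shows that $h_{jk}^\mathrm{lin}>0$, i.e.\ plasma screening enhances the rate.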
\section{Comparison of different approaches}\label{Sec:compare}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=155mm \epsfbox{ChugunovAI_fig1.eps}
\end{center}
\caption{(Color online)
Histogram of the enhancement factors
extracted from:
1) Widom fitting
(dots with error bars);
2) linear mixing [Eq.\ (\ref{h0lin}); solid line];
3) thermodynamics
(long-dash lines);
4) our approximation [Eq.\ (\ref{appr}); short-dash lines].
}
\label{Fig_Compare}
\end{figure}
In Fig.\ \ref{Fig_Compare} we compare the plasma screening function at zero
separation calculated by different methods.
Each of the six panels shows a histogram of the normalized
screening functions $h_{jk}^0/\Gamma_{jk}$ versus simulation number.
Three left panels show simulations with numbers from 1 to 50, and
three right panels show simulations from 51 to 100. For each 3-panel
block, the lower panel presents $h^0_{11}/\Gamma_{11}$, the middle
panel shows $h_{12}^0/\Gamma_{12}$, and the upper panel gives
$h^0_{22}/\Gamma_{22}$. The parameters of simulations
$(\Gamma_{11},\ Z_2/Z_1,\ x_1)$ are also shown on each block by
vertically aligned numbers: $\Gamma_{11}$ on the lower panel,
$Z_2/Z_1$ on the middle and $x_1$ on the upper panel. For example,
simulation number 1 was performed
for $\Gamma_{11}\approx0.33$, $Z_2/Z_1=2$, and $x_1=0.7$.
Each panel contains a set of dots with error bars, which represent
the values of $h_{jk}^0/\Gamma_{jk}$ calculated by fitting
$H_{jk}(r)$ with the aid of (\ref{widom}). Each panel contains 3
lines: the solid line shows the results of the linear mixing model,
Eq.\ (\ref{h0lin}); the long-dash line is calculated with the best
available thermodynamics of the multicomponent plasma (Eq.\
(\ref{h0}) with the free energy taken from \cite{Mixt_New}); the
short-dash line is our approximation (\ref{appr}). Note that the
normalized enhancement parameter is approximately constant at large
$\Gamma_{jk}$. This property is well known \cite{salpeter54}.
Linear mixing is highly accurate as long as
$\Gamma_{jk}\gtrsim 10$. For lower $\Gamma_{jk}$ the relative
corrections can be much larger; they are well described by both
dashed lines (the accurate thermodynamics and our approximation). The
most noticeable difference between the dots and the short-dashed lines
occurs for $h_{22}/\Gamma_{22}$ in simulations 6, 7, 8, and 9,
which are done for low fractions of highly charged ions and a large
ratio $Z_2/Z_1\ge5$. Such a difference is unimportant for
applications --- it translates into a correction to the reaction rate
within a factor of two.
Also, there are three large-$\Gamma$ simulations (96, 97, and 99)
where the dots deviate. These simulations started from lattice
configurations of ions; thus the corrections to linear mixing in the
crystalline phase are larger (as noted in \cite{DWS03}).
\section{Approximation of enhancement factors and conclusions}\label{Sec:approx}
We suggest the following approximation for the enhancement
factor, valid for all $\Gamma$ and mixture compositions:
\begin{equation}
h^0_{jk}=h^\mathrm{lin}_{jk}/
\left[1
+C_{jk}\left(1-C_{jk}\right)\,
\left(h^\mathrm{lin}_{jk}/h^\mathrm{DH}_{jk}
\right)^2
\right].
\label{appr}
\end{equation}
Here, $h^\mathrm{lin}_{jk}$ is given by (\ref{h0lin}),
$ h^\mathrm{DH}_{jk}=3^{1/2}
Z_j\,Z_k\left<Z^2\right>^{1/2}\Gamma_\mathrm{e}^{3/2}/\left<Z\right>^{1/2}$ is
the well known Debye-H\"{u}ckel enhancement parameter, and
$C_{jk}=
3Z_j\,Z_k\left<Z^2\right>^{1/2}\left<Z\right>^{-1/2}/\left[\left(Z_j+Z_k\right)^{5/2}-Z_j^{5/2}-Z_k^{5/2}\right]
$.
Eq.\ (\ref{appr}) reproduces the Debye-H\"{u}ckel asymptote at low
$\Gamma$ and the linear mixing at strong coupling.
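A direct transcription of Eq.\ (\ref{appr}) reads as follows (an illustrative sketch; \texttt{f0} stands for the one-component free energy per ion, for which the fit of \cite{pc00} should be used in practice, and all names are ours):

```python
import math

def h0_approx(Z_j, Z_k, fractions, charges, Gamma_e, f0):
    """Enhancement parameter h^0_jk from Eq. (4), interpolating between
    the Debye-Hueckel asymptote at weak coupling and linear mixing at
    strong coupling.  fractions/charges describe the mixture {x_l, Z_l}."""
    Zm = sum(x * Z for x, Z in zip(fractions, charges))        # <Z>
    Z2m = sum(x * Z ** 2 for x, Z in zip(fractions, charges))  # <Z^2>
    # Debye-Hueckel enhancement parameter h^DH_jk.
    h_DH = math.sqrt(3.0) * Z_j * Z_k * math.sqrt(Z2m / Zm) * Gamma_e ** 1.5
    # Linear-mixing result, Eq. (3).
    hl = (f0(Z_j ** (5 / 3) * Gamma_e) + f0(Z_k ** (5 / 3) * Gamma_e)
          - f0((Z_j + Z_k) ** (5 / 3) * Gamma_e))
    C = (3.0 * Z_j * Z_k * math.sqrt(Z2m / Zm)
         / ((Z_j + Z_k) ** 2.5 - Z_j ** 2.5 - Z_k ** 2.5))
    return hl / (1.0 + C * (1.0 - C) * (hl / h_DH) ** 2)
```

At strong coupling $h^\mathrm{lin}_{jk}/h^\mathrm{DH}_{jk}\to 0$, so the denominator tends to unity and the linear-mixing result is recovered; the weak-coupling Debye--H\"{u}ckel limit requires an $f_0(\Gamma)$ with the correct low-$\Gamma$ behaviour.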
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=150mm \epsfbox{ChugunovAI_fig2.eps}
\end{center}
\caption{(Color online) Enhancement factors $h_{jk}^0/\Gamma_{11}^{3/2}$ vs $\Gamma_{11}$
for three binary ionic mixtures.
}
\label{Fig_hvsG}
\end{figure}
In Fig.\ \ref{Fig_hvsG} we show the dependence of the approximated
enhancement factors $h_{jk}^0/\Gamma_{11}^{3/2}$ on $\Gamma_{11}$.
The figure contains three panels; each for a specific binary ionic
mixture. Each panel shows three groups of four lines. They are (from
top to bottom) $h^0_{22}/\Gamma_1^{3/2}$, $h^0_{12}/\Gamma_1^{3/2}$
and $h_{11}^0/\Gamma_1^{3/2}$. Two of any four lines (solid and
thick dashed lines) are almost the same in the majority of cases.
This couple represents the approximation (\ref{appr}) and the
thermodynamic enhancement factor (\ref{h0}), respectively. The
dotted horizontal lines refer to the Debye-H\"{u}ckel model and the
dash-dot lines are the linear mixing results. One can see that our
approximation is in good agreement with the thermodynamic results in
most cases, especially in panel (a) (for all mixtures with not
too large $Z_2/Z_1$). If $Z_2/Z_1$ becomes too large [panel (c)],
the thermodynamic model of $h_{11}^0/\Gamma_1$, calculated in
accordance with \cite{Mixt_New}, has a specific feature
($h_{11}/\Gamma_{11}^{3/2}$ increases at $\Gamma_{11}\sim10^{-2}$),
while our approximation does not. We expect that this feature is not
real, but results from an insufficiently accurate extraction of the
enhancement factors from thermodynamic data. The free energy is
almost fully determined by the larger charges $Z_2$, which also dominate
by number ($99\%$) in panel (c). Using Eq.\ (\ref{h0}) to get
$h_{11}^0$, one should differentiate the free energy with respect to
$N_1$, which provides a vanishing contribution to the free energy.
Hence this procedure is very delicate and can strongly amplify the
errors of the original thermodynamic approximation. We expect that our
approximation can be more accurate than the original thermodynamic
result. Another, less probable, option is that we do not yet have
enough data to prove the presence of the feature of
$h^0_{11}/\Gamma_{11}^{3/2}$.
To conclude, we have calculated the enhancement factors of nuclear
reactions in binary ionic mixtures by two methods and have shown good
agreement between the results. We have proposed a simple approximation
of the enhancement factors valid for any Coulomb coupling. This
approximation agrees closely with the thermodynamic results except for
rather extreme mixtures. It does not confirm some questionable features
of the enhancement factors for mixtures with large $Z_2/Z_1$ and small
$x_1$.
\begin{acknowledgement}
We are grateful to D.G.~Yakovlev and A.Y.~Potekhin for useful remarks. Work of AIC was
partly supported by the Russian Foundation for Basic Research (grant
08-02-00837), and by the State Program ``Leading Scientific Schools
of Russian Federation'' (grant NSh 2600.2008.2). Work of HED was
performed under the auspices of the US Department of Energy by the
Lawrence Livermore National Laboratory under contract number
W-7405-ENG-48.
\end{acknowledgement}
\section{Introduction}
The present paper reports a brief synopsis of our work on an algebraic model of {\em classical} information theory based on operator algebras. Let us recall a simple model of a communication system proposed by Shannon \cite{Shannon48}. This model has essentially four components: source, channel, encoder/decoder and receiver. Some amount of noise affects every stage of the operation, and the behavior of the components is generally modeled as a stochastic process. In this work our primary focus will be on discrete processes. A discrete source can be viewed as a generator of a countable set of random variables. In a communication process the source generates a sequence of random variables. This sequence is sent through the channel (with encoding/decoding) and the output at the receiver is another sequence of random variables. Thus, the concrete objects or {\em observables}, to use the language of quantum theory, are modeled as random variables. The underlying probability space is primarily used to define probability distributions or {\em states} associated with the relevant random variables. In the algebraic approach we directly model the observables. Since random variables can be added and multiplied \footnote{We assume that they are real or complex valued.} they constitute an {\em algebra}. This is our starting point. In fact, the algebra of random variables has a richer structure called a $\cstar$ algebra. Starting with a $\cstar$ algebra of observables we can define the most important concepts in probability theory in general and information theory in particular. A natural question is: why should we adopt this algebraic approach? We discuss the reasons below.
First, it seems more appropriate to deal with the ``concrete'' quantities, {\em viz}.\ observables and their intrinsic structure. The choice of the underlying probability space is somewhat arbitrary, as a comparison of standard textbooks on information theory \cite{CoverT,Ciszar} reveals. Moreover, from the algebra of observables we can recover particular probability spaces from representations of the algebra. Second, some constraints may have to be imposed on the set of random variables. In security protocols different participants have access to different sets of observables and may assign different probability structures. In this case the algebraic approach seems more natural: we have to study different subalgebras. Third, the algebraic approach gives us new theoretical insights and computational tools. This will be justified in the following sections. Finally, and this was our original motivation, the algebraic approach provides the basic framework for a unified treatment of classical {\em and} quantum information. All quantum protocols have some classical components, e.g.\ classical communication, ``coin tosses'' etc. But the languages of the two processes, classical and quantum, seem quite different. In the former we are dealing with random variables defined on one or more probability spaces, whereas in the latter we are processing quantum states which also give complete information about the measurement statistics of {\em quantum} observables. The algebraic framework is eminently suitable for bringing together these somewhat disparate viewpoints. Classical observables are simply elements that commute with every element in the algebra.
The connection between operator algebras and information theory---classical {\em and} quantum---has appeared in the scientific literature since the beginnings of information theory and operator algebras---both classical and quantum (see e.g.\ \cite{Umegaki4, Segal60,araki75,Keyl,Beny,Kretschmann}). Most previous work focuses on particular aspects of information theory, such as noncommutative generalizations of the concept of entropy. There does not appear to be a unified and coherent approach based on intrinsically algebraic notions. The construction of such a model is one of the goals of this paper. As probabilistic concepts play such an important role in the development of information theory, we first present an algebraic approach to probability. I. E. Segal \cite{Segal54} first proposed such an algebraic model of probability theory. Later Voiculescu \cite{Voic} developed noncommutative or ``free probability'' theory. We believe several aspects of our approach are novel and yield deeper insights into information processes. In this summary we have omitted most proofs or give only brief outlines. The full proofs can be found in our \href{http://arxiv.org/abs/0910.1536}{arXiv submission} \cite{Patra09}. A brief outline of the paper follows.
In Section \ref{sec:algebra} we give the basic definitions of $\cstar$ algebras. This is followed by an account of probabilistic concepts from an algebraic perspective. In particular, we investigate the fundamental notion of independence and demonstrate how it relates to the algebraic structure. One important aspect in which our approach seems novel is the treatment of probability distribution functions. In Section \ref{sec:info} we give a precise algebraic model of an information/communication system. The fundamental concept of entropy is introduced. We also define and study the crucial notion of a channel as a (completely) positive map. In particular, the {\em channel coding theorem} is presented as an approximation result. Stated informally: {\em every channel other than the useless ones can be approximated by a lossless channel under appropriate coding}. We conclude the paper with some comments and discussion.
\section{$C^*$ Algebras and Probability} \label{sec:algebra}
A Banach algebra $A$ is a complete normed algebra \cite{Rudin,KR1}. That is, $A$ is an algebra over the real ($\real$) or complex numbers ($\comp$); for every $x\in A$ a norm $\norm{x}\geq 0$ is defined satisfying the usual properties, and every Cauchy sequence converges in the norm.
A $\cstar$ algebra $B$ is a Banach algebra\cite{KR1} with
an anti-linear involution $^*$ ($x^{**}=x$ and $(x+cy)^*=x^*+\conj{c}y^*$, $x,y\in B$ and $c\in\comp$) such that
\(\norm{xx^*}=\norm{x}^2\text{ and } (xy)^*=y^*x^*\forall x,y \in B\).
This implies that $\norm{x}=\norm{x^*}$. We often assume that the unit $I\in B$.
The fundamental Gelfand-Naimark-Segal ({\bf GNS}) theorem states that
every $\cstar$ algebra can be isometrically embedded in some $\cali{L}(H)$, the set of bounded operators on a Hilbert space $H$. The spectrum of an element $x\in B$ is defined by $\Sp(x)=\{c\in \comp: x-cI \text{ is not invertible}\}$. The spectrum is a nonempty closed and bounded set and hence compact.
An element $x$ is self-adjoint if $x=x^*$, normal if $x^*x=xx^*$ and positive (respectively strictly positive) if $x$ is self-adjoint and $\Sp(x)\subset [0,\infty)$ (respectively $(0,\infty)$). A self-adjoint
element has a real spectrum and conversely. Since $x=x_1+ix_2$ with $x_1=(x+x^*)/2$ and $x_2=(x-x^*)/2i$, any element of a $\cstar$ algebra can be decomposed into self-adjoint ``real'' and ``imaginary'' parts.
The positive elements define a partial order on $A$:
$x\leq y$ iff $y-x\geq 0$ (positive). A positive element $a$ has a unique square-root $\sqrt{a}$ such that $\sqrt{a}\geq 0\text{ and } (\sqrt{a})^2=a$. If $x$ is self-adjoint, $x^2\geq 0$ and $|x|=\sqrt{x^2}$. A self-adjoint element $x$ has a decomposition $x=x_+-x_-$ into positive and negative parts, where \(x_+=(|x|+x)/2 \text{ and } x_-=(|x|-x)/2\) are positive. An element $p\in B$ is a projection
if $p$ is self-adjoint and $p^2=p$. Given two $\cstar$-algebras $A$ and $B$ a homomorphism $F$ is a linear map preserving the product and $^*$ structures.
A homomorphism is positive if it maps positive elements to positive elements. A (linear) functional on $A$ is a linear map $A\rightarrow \comp$. A positive functional $\omega$ such that $\omega(\unit)=1$ is called a {\em state}. The set of states $G$ is convex. The extreme points
are called {\em pure states}, and $G$ is the convex closure of the pure states (Krein--Milman theorem). A set $B\subset A$ is called a subalgebra if it is a
$\cstar$ algebra with the inherited product. A subalgebra is called unital if it contains the identity of $A$.
Our primary interest will be on {\em abelian} or commutative algebras.
The basic representation theorem (Gelfand-Naimark) \cite{KR1} states that: {\em An abelian $\cstar$ algebra with unity is isomorphic to the algebra $C(X)$ of continuous complex-valued functions on a compact Hausdorff space $X$}.
Now let $X=\{a_1, \dotsc, a_n\}$ be a finite set with the discrete topology. Then $A=C(X)$ is the set of all functions $X\rightarrow \comp$. The algebra $C(X)$ can be considered as the algebra of (complex) random variables on the finite probability space $X$. Let $x_i(a_j)=\delta_{ij},\; i,j=1,\dotsc, n$. Here $\delta_{ij}=1 \text{ if }i=j \text{ and } 0$ otherwise. The functions $x_i\in A$ form a basis for $A$. Their multiplication table is particularly simple: $x_ix_j=\delta_{ij}x_i$. They also satisfy $\sum_i x_i=\unit$. These are projections in $A$. They are orthogonal in the sense that $x_ix_j=0\text{ for }i\neq j$. We call a basis {\em atomic} if it consists of norm-1 elements that are pairwise orthogonal. A set of linearly independent elements $\{y_i\}$ satisfying $\sum_i y_i=\unit$ is said to be complete. The next theorem gives us the general structure of any finite-dimensional algebra.
\begin{thm}\label{thm:structFinite}
Let $A$ be a finite-dimensional abelian $\cstar$ algebra. Then there is a unique (up to permutations) complete atomic basis $\cali{B}=\{x_1, \dotsc, x_n\}$. That is, the basis elements satisfy
\beq \label{eq:structFinite}
x_i^*=x_i,\; x_ix_j=\delta_{ij}x_i,\; \norm{x_i}=1 \text{ and }\sum_i x_i =\unit.
\eeq
Let $x=\sum_i a_ix_i\in A$. Then $\Sp(x)=\{a_i\}$ and hence $\norm{x}=\max_i\{|a_i|\}$.
\end{thm}
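The theorem makes finite-dimensional abelian algebras completely concrete: in the atomic basis an element $x=\sum_i a_ix_i$ is just its coefficient vector, multiplication is componentwise, the involution is complex conjugation of coefficients, and the norm and spectrum are read off from the coefficients. A minimal illustration (ours, not part of the paper):

```python
import numpy as np

# Element x = sum_i a_i x_i  <->  its coefficient vector a.
mult = lambda a, b: a * b             # x_i x_j = delta_ij x_i => componentwise
star = lambda a: np.conj(a)           # involution *
norm = lambda a: np.max(np.abs(a))    # ||x|| = max_i |a_i|  (Theorem above)
spectrum = lambda a: set(a.tolist())  # Sp(x) = {a_i}
```

The relations of Eq.\ (\ref{eq:structFinite}) and the $\cstar$ identity $\norm{xx^*}=\norm{x}^2$ can all be verified directly in this representation.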
We next describe an important construction for $\cstar$ algebras. Given two $\cstar$ algebras $A$ and $B$, the tensor product $A\tensor B$ is defined as follows. As a set it consists of all finite linear combinations of symbols of the form $\{x\tensor y:x\in A,y\in B\}$ subject to the conditions that the map $(x,y)\rightarrow x\tensor y$ is bilinear in each variable. Hence, if $\{x_i\} \text{ and } \{y_j\}$ are bases for $A$ and $B$ respectively then $\{x_i\tensor y_j\}$ is a basis for $A\tensor B$. The linear space $A\tensor B$ becomes an algebra by defining
$(x\tensor y)(u\tensor z)=xu\tensor yz$ and extending by bilinearity.
The $*$ is defined by $(x\tensor y)^*=x^*\tensor y^*$ and extending {\em anti-linearly}. We will define the norm in a more general setting. Our basic model will be an {\em infinite} tensor product of finite dimensional $\cstar$ algebras which we present next.
\newcommand{\inftens}[1]{\bigotimes^{\infty}{#1}}
Let $A_k,\;k=1,2,\dotsc,$ be finite dimensional abelian $\cstar$ algebras with atomic basis $B_k=\{x_{k1},\dotsc,x_{kn_k}\}$. Let $B^{\infty}$ be the set consisting of all infinite strings of the form \(z_{i_1}\tensor z_{i_2}\tensor \cdots\) where all but a finite number ($>0$) of $z_{i_k}$s are equal to $\unit$ and if some $z_{i_k}\neq \unit$ then $z_{i_k}\in B_k$.
\commentout{
Explicitly, $B^{\infty}$ consists of strings of the form $z_{i_1}\tensor z_{i_2}\tensor \cdots \tensor z_{i_k}\tensor\unit\tensor\unit\tensor\cdots,\;k=1,2,\dotsc $ and $z_i\in B$.} Let $\tilde{\mathfrak{A}}=\tensor_{i=1}^{\infty}{A}_i $ be the vector space with basis $B^{\infty}$ such that $z_{i_1}\tensor z_{i_2}\tensor \cdots \tensor z_{i_k}\tensor\cdots $ is linear in each factor separately.
\commentout{
\[
\begin{split}
&z_{1_1}\tensor\cdots \tensor (az_{i_k}+bz'_{i_k})\tensor z_{i_{k+1}}\tensor \cdots= \\
&a(z_{1_1}\tensor \cdots \tensor z_{i_k}\tensor z_{i_{k+1}}\tensor \cdots)+
b(z_{1_1}\tensor \cdots \tensor z'_{i_k}\tensor z_{i_{k+1}}\tensor \cdots).\\
\end{split}
\]
Clearly every $\alpha \in \tilde{\mathfrak{A}} $ is a finite linear combination of elements in $B^{\infty}$. }
We define a product in $\tilde{\mathfrak{A}}$ as follows. First, for elements of $B^{\infty}$:
\((z_{i_1}\tensor z_{i_2}\tensor\cdots )(z'_{i_1}\tensor z'_{i_2}\tensor\cdots )=(z_{i_1}z'_{i_1}\tensor z_{i_2}z'_{i_2}\tensor\cdots )\)
We extend the product to whole of $\tilde{\mathfrak{A}}$ by linearity. Next define a norm by:
\(\norm{\sum_{i_1,i_2,\dotsc}a_{i_1i_2\cdots}z_{i_1}\tensor z_{i_2}\tensor \cdots }=\sup\{|a_{i_1i_2\cdots}|\}\). $B^{\infty}$ is an atomic basis. It follows that $\tilde{\mathfrak{A}}$ is an abelian normed algebra. We define $*$-operation by
\(\left(\sum_{i_1,i_2,\dotsc}a_{i_1i_2\cdots}z_{i_1}\tensor z_{i_2}\tensor \cdots \right)^*=\sum_{i_1,i_2,\dotsc}\conj{a_{i_1i_2\cdots}}z_{i_1}\tensor z_{i_2}\tensor \cdots \)
It follows that for $x\in \tilde{\mathfrak{A}}$, $\norm{xx^*}=\norm{x}^2$. Finally, we complete the norm \cite{KR1} and call the resulting $\cstar$ algebra $\mathfrak{A}$. With these definitions $\mathfrak{A}$ is a $\cstar$ algebra. We say that a $\cstar$ algebra $B$ is of {\bf finite type} if it is either finite dimensional or an infinite tensor product of finite-dimensional algebras. An important special case is when all the factor algebras $A_i=A$. We then write the infinite tensor product $\cstar$ algebra as $\inftens{A}$. Intuitively, the elements of an atomic basis $B^{\infty}$ of $\inftens{A}$ correspond to strings from an alphabet (represented by the basis $B$). Of particular interest is the 2-dimensional algebra $D$ corresponding to a binary alphabet.
\commentout{
It can be shown that there are injective algebra maps---\(\cali{J}: \inftens{G}\rightarrow \inftens{A} \text{ and } \cali{J}': \inftens{A} \rightarrow \inftens{G}\). This is relevant for coding theory. }
The next step is to describe the state space. Given a $\cstar$ subalgebra $V\subset A$ the set of states of $V$ will be denoted by $\mathscr{S}(V)$. Let $\mathfrak{A}=\tensor^{\infty}_{i=1} A_i$ denote the infinite tensor product of finite-dimensional algebras $A_i$. An infinite product state of $\mathfrak{A}$ is a functional of the form
\( \Omega=\omega_1\tensor \omega_2\tensor\cdots \text{ such that }\omega_i\in \mathscr{S}(A_i)\)
This is indeed a state of $\mathfrak{A}$, for if $\alpha_k = z_1\tensor z_2 \tensor \cdots \tensor z_k\tensor\unit\tensor\unit\cdots\in \mathfrak{A}$ then
\(\Omega(\alpha_k)=\omega_1(z_1)\omega_2(z_2)\cdots \omega_k(z_k), \)
a {\em finite} product.
\commentout{
Since an arbitrary element of $\mathfrak{A}$ is the limit of sequence of finite sums of elements of the form $\alpha_k,\; k=1,2,\dotsc $ $\Omega$ is bounded by the principle of uniform boundedness. Clearly, it is positive.}
A general state on $\mathfrak{A}$ is a convex combination of product states like $\Omega$.
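Concretely, a state on a finite-dimensional abelian algebra with atomic basis $\{x_i\}$ is determined by the weights $p_i=\omega(x_i)$, which form a probability vector, and a product state evaluates factor by factor. A small sketch of this evaluation (names are ours):

```python
import numpy as np

def omega(p, a):
    """State with weights p_i = omega(x_i) applied to x = sum_i a_i x_i."""
    return complex(np.dot(p, a))

def product_state(weights, factors):
    """Evaluate Omega = omega_1 (x) omega_2 (x) ... on an element
    z_1 (x) ... (x) z_k (x) unit (x) unit (x) ... ; only the finitely
    many non-unit factors contribute, so the product is finite."""
    val = 1.0 + 0.0j
    for p, z in zip(weights, factors):
        val *= omega(p, z)
    return val
```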
Finally, we discuss another useful construction in a $\cstar$ algebra $A$. If $f(z)$ is an analytic function whose Taylor series $\sum_{n=0}^{\infty}a_n (z-c)^n$ converges in a region $|z-c|<R$, then the series $\sum_{n=0}^{\infty}a_n(x-c\unit)^n$ converges for $\norm{x-c\unit}<R$, and it makes sense to talk of analytic functions on a $\cstar$ algebra. If we have an atomic basis $\{x_1,x_2,\dotsc \}$ in an abelian $\cstar$ algebra then such functions are particularly simple in this basis. Thus if $x=\sum_i a_ix_i$ then $f(x)=\sum_i f(a_i)x_i$, provided that the $f(a_i)$ are defined in an appropriate domain.
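In an atomic basis this functional calculus is componentwise, since $x^n$ has coefficients $a_i^n$. The sketch below checks the power-series definition of $\exp$ against the componentwise rule (our illustration):

```python
import numpy as np

def fn_calc(f, a):
    """f(x) = sum_i f(a_i) x_i for x = sum_i a_i x_i: apply f coefficientwise."""
    return np.array([f(ai) for ai in a])

def exp_series(a, n_terms=30):
    """exp(x) from its power series sum_k x^k / k!, computed with the
    componentwise product of the atomic basis."""
    out = np.zeros_like(a)
    term = np.ones_like(a)            # x^0 = unit
    for k in range(n_terms):
        out = out + term
        term = term * a / (k + 1)     # next term x^{k+1}/(k+1)!
    return out
```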
We gave a brief description of $\cstar$ algebras. We now
introduce an algebraic model of probability which is used later to model communication processes.
In this model we treat random variables as elements of a $\cstar$ algebra. The probabilities are introduced via states.
\commentout{
We emphasize again that random variables often represent quantities that are actually measured or observed- the voltage across a resistor, the currents in an antenna, the position of a Brownian particle and so on. The probability distribution corresponds to the {\em state} of the devices that produce these outputs. We will take the alternative view and start with these observables as our basic objects.}
\newcommand{\scrp}[1]{{\mathscr #1}}
\newcommand{\intd}{\mathrm{d}}
A classical observable algebra is a complex abelian $\cstar$ algebra $A$. We can restrict our attention to real algebras whenever necessary. The Riesz representation theorem \cite{Rudin} makes it possible to identify a state $\omega$ with some {\em probability measure}.
A {\em probability algebra} is a pair $(A, S)$ where $A$ is an observable algebra and $S\subset \scrp{S}(A)$ is a set of states. A probability algebra is defined to be {\em fixed} if $S$ contains only one state.
\noindent
Let $\omega$ be a state on an abelian $\cstar$ algebra $A$. Call two elements $x,y\in A$ {\em uncorrelated in the state} $\omega$ if $\omega(xy)=\omega(x)\omega(y)$. This definition depends on the state: two uncorrelated elements can be correlated in some other state $\omega'$. A state $\omega$ is called multiplicative if $\omega(xy)=\omega(x)\omega(y)$ for all $x,y\in A$. The set of states, $\mathscr{S}$, is convex. The extreme points of $\mathscr{S}$ are called {\em pure} states. In the case of abelian $\cstar$ algebras a state is pure if and only if it is multiplicative \cite{KR1}. Thus, in a pure state any two observables are uncorrelated. This is not generally true in the non-abelian quantum case. Now we can introduce the important notion of {\em independence}. Given $S\subset A$ let $A(S)$ denote the subalgebra generated by $S$ (the smallest subalgebra of $A$ containing $S$). Two subsets $S_1,S_2\subset A$ are defined to be {\em independent} if all the pairs $\{(x_1,x_2): x_1\in A(S_1), x_2\in A(S_2)\}$ are uncorrelated. As independence and correlation depend on the state, we sometimes write $\omega$-independent/uncorrelated. Independence is a much stronger condition than being uncorrelated.
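These notions are easy to check concretely in $C(X)$ for a three-point space: a pure state is a point mass and hence multiplicative, so every pair is uncorrelated in it, while a generic mixed state correlates the same pair. An illustrative check (names are ours):

```python
import numpy as np

def omega(p, x):                      # state = probability vector p on X
    return float(np.dot(p, x))

def uncorrelated(p, x, y, tol=1e-12):
    """Does omega(xy) = omega(x) omega(y) hold in the state p?"""
    return abs(omega(p, x * y) - omega(p, x) * omega(p, y)) < tol

x = np.array([1.0, 2.0, 3.0])         # two observables on X = {a1, a2, a3}
y = np.array([0.0, 1.0, 1.0])
pure = np.array([0.0, 1.0, 0.0])      # point mass at a2: multiplicative
mixed = np.array([0.5, 0.25, 0.25])   # a mixed state
```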
\commentout{
However, in 2 dimensions $x\text{ and }x'$ are uncorrelated if and only if one of them is 0 or $c\unit$. Let us note that as in the quantum case two dimensions is an exceptional case.}
The next theorem states the structural implications of independence.
\begin{thm} \label{thm:structIndep}
Two sets of observables $S_1,S_2$ in a finite dimensional abelian $\cstar$ algebra $A$ are independent in a state $\omega$ if and only if for the subalgebras $A(S_1)$ and $A(S_2)$ generated by $S_1$ and $S_2$ respectively there exist states $\omega_1 \in \scrp{S}(A(S_1)),\; \omega_2\in \scrp{S}(A(S_2))$ such that $(A(S_1)\tensor A(S_2),\{\omega_1\tensor\omega_2\})$ is a cover of $(A(S_1S_2),\omega')$ where $A(S_1S_2)$ is the subalgebra generated by $\{S_1,S_2\}$ and $\omega'$ is the restriction of $\omega$ to $A(S_1S_2)$.
\end{thm}
\commentout{
The next step is to extend the notion of independence to more than two subsets. Let $S_1,\dotsc,S_k\subset A$ and $\omega$ a state of $A$. Then the subsets are defined to be $\omega$-independent if for all $x_i\in A(S_i),\; i=1,\dotsc, k$ we have
\[ \omega(x_1\cdots x_k)=\omega(x_1)\cdots \omega(x_k)\]
Here $A(S_i)$ is the subalgebra generated by $S_i$. We can then show that for states $\omega_i\in \scrp(A(S_i))$, the restriction of $\omega$ to $A(S_i)$ the pair
\((A(S_1)\tensor\cdots \tensor A(S_k), \omega_1\tensor\cdots\tensor \omega_k)\) is a cover of $A(S_1\dotsc S_k),\omega'$, where $\omega'$ is the restriction of $\omega$ to $A(S_1\dotsc S_k)$, the algebra generated by $S_1,\dotsc, S_k$.}
We thus see the relation between independence and (tensor) product states in the classical theory. Next we show how one can formulate another important concept, {\em distribution function} ({\bf d.f}) in the algebraic framework. We restrict our analysis to $\cstar$ algebras of finite type. The general case is more delicate and is defined using approximate identities in subalgebras in \cite{Patra09}. The idea is that we approximate indicator functions of sets by a sequence of elements in the algebra. In the case of finite type algebras the sequence converges to a projection operator $J_S$.
\commentout{
\noindent
{\bf Definition.} Let $S=\{x_1,x_2,\dotsc, x_n\}$ be a finite self-adjoint subset of $A$ where $(A,\omega)$ is a fixed probability algebra of finite type. For ${\tt t}=(t_1,t_2,\dotsc, t_n )\in \real$ let $S_{\tt t}\subset A$ denote the set of elements $\{(t_i\unit - x_i):i=1,\dots, n\}$. Let $J_S$ be the identity in the (annihilator) subalgebra $(S_{\tt t})_a\equiv \{ x\in A : xs=0\forall s\in S_{\tt t}\}$. Then the $\omega$-distribution of $S$ is defined to be the real function \( f_S({\tt t})= \omega(J_S) \). Using the d.f we can define the cumulative distribution function $F_S({\tt t})= \sum _{q_i} f_S({\tt t}: q_i\leq t_i)$. The sum is well defined since $f_S$ has only finitely many nonzero values.
\noindent
We explain the rationale of this definition.} Thus, if we consider a representation where the elements of $A$ are functions on some finite set $F$ then $J_S$ is precisely the indicator function of the set $S'=\{c:x_i(c)-t_i=0:c\in F\text{ and }i=1,\dotsc, n\}$. The set $S'$ corresponds to the subalgebra $(S_{\tt t})_a$ and $J_S$, a projection in $A$, acts as identity in $(S_{\tt t})_a$.
\commentout{
The following shows the existence of $J_S$ by an explicit construction. Suppose $S$ contains a single element $x$. Writing $x=\sum_ia_ix_i$ in some atomic basis we have $x-t=\sum a_jP_j$ where $a_j\neq 0$ are distinct and $P_j$ are projections. Now use Lagrange interpolation to obtain polynomials $g_j$ such that $g_j(a_k)=\delta_{jk}$ and $g_j(0)=0$. Then $Q=\sum_j P_j$ is the required projection. If there are more elements in $S$ then let $Q_j$ be the projection for $x_j-t_j$ and $Q$ the product of $Q_j$'s. Then $J_s=\unit-Q$. Note that we have an explicit formula for distribution function that we can use it to prove properties of d.f. }
From the notion of distribution functions we can now define probabilities $Pr(a\leq x\leq b)$ in the algebraic context. We can then formulate problems for any discrete stochastic process in finite dimensions. The algebraic method provides practical tools besides theoretical insights, as the example of ``waiting time'' shows \cite{Patra09}.
Now we consider the algebraic formulation of a basic limit theorem of probability theory: the {\bf weak law of large numbers}. From an information-theoretic perspective it is perhaps the most useful limit theorem. Let $X_1,X_2,\dotsc$ be independent, identically distributed (i.i.d.) bounded random variables on a probability space $\Omega$ with probability measure $P$, and let $\mu$ be the mean of $X_1$. Recall the
{\em weak law of large numbers}: given $\epsilon>0$,
\[\lim_{n\rightarrow \infty}P\Bigl(\Bigl|\frac{X_1+\cdots +X_n}{n}-\mu \Bigr|>\epsilon\Bigr)=0\]
We have an algebraic version of this important result.
\begin{thm} [Law of large numbers (weak)] \label{thm:weak-law}
If $x_1,\dotsc,x_n,\dotsc$ are $\omega$-independent self-adjoint elements in an observable algebra and $\omega(x_i^k)=\omega(x_j^k)$ for all positive integers $i,j\text{ and }k$ (identically distributed) then
\[\lim_{n\rightarrow \infty} \omega\Bigl(\Bigl|\frac{x_1+\dotsb+x_n}{n}-\mu\Bigr|^k\Bigr)=0 \text{ where } \mu=\omega(x_1) \text{ and } k>0\]
\end{thm}
\commentout{
\begin{proof}
We may assume $\mu=0$ (by reasoning with $x_i-\omega(x_i)$ instead of $x_i$). First we prove the statement for $k=2$. Then $\omega(|\frac{x_1+\dotsb+x_n}{n}|)^2=\sum_i\omega(x_i^2)/n^2=\omega(x_1^2)/n$. The first equality follows from independence ($\omega(x_ix_j)=\omega(x_i)\omega(x_j)=0\text{ for }i\neq j$) the second from the fact that they are identically distributed. The case $k=2$ is now trivial. Now let $k=2m$. Then $|x_1+\dotsb +x_n|^k=(x_1+\dotsb +x_n)^k$. Put $s_n=(x_1+\dotsb +x_n)/n$. Expanding $s_n^k$ in a multinomial series we note that independence and the fact that $\omega(x_i)=0$ implies that all terms in which at least one of the $x_i$ has power 1 do not contribute to $\omega(s_n^k)$. The total number of the remaining terms is $O(n^{m})$. Since the denominator is $n^{2m}$ we see that $\omega(s_n^k)\rightarrow 0$. Since for any $x\in A$, $|x|=(x^2)^{1/2}$ can be approximated by polynomials in $x^2$ we conclude that $\omega(|s_n|)\rightarrow 0$. Finally, using the Cauchy-Schwartz type inequality $\omega(|s_n|^{2r+1})\leq \omega(s_n^2)\omega(s_n^{2r})$ we see that the theorem is true for all $k$.
\end{proof}
}
Using the algebraic version of the Chebyshev inequality, the above result implies the following.
Let $x_1,\dotsc, x_n \text{ and } \mu$ be as in the theorem and set $s_n=(x_1+\dotsb+x_n)/n$. Then for any $\epsilon >0$ there exists $n_0$ such that for all $n>n_0$
\(
P(|s_n-\mu|>\epsilon) <\epsilon
\)
\commentout{
\begin{proof}
Using Chebysev inequality we have \( P(|s_n-\mu|>\epsilon)= P(|s_n-\omega(s_n)|>\epsilon) \leq \frac{\omega(|s_n-\mu|^2)}{\epsilon^2}\). As $\omega(|s_n-\mu|^2)\rightarrow 0$ (Theorem \ref{thm:weak-law}) there is $n_0$ such that $\omega(|s_n-\mu|^2)<\epsilon^3$ for $n>n_0$.
\end{proof}
}
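As an informal numerical check of Theorem \ref{thm:weak-law} (a hypothetical sketch, not part of the formal development), the following Python snippet represents a single observable by arbitrarily chosen spectrum values with state weights and verifies, on the exact $n$-fold product state, the identity $\omega(|s_n-\mu|^2)=\omega(|x_1-\mu|^2)/n$ that drives the proof:

```python
import itertools

# hypothetical single observable: spectrum values with state weights omega(x_i)
values = [0.0, 1.0, 3.0]
probs = [0.2, 0.5, 0.3]

mu = sum(p * v for p, v in zip(probs, values))
var = sum(p * (v - mu) ** 2 for p, v in zip(probs, values))

def second_moment_of_mean(n):
    # omega(|s_n - mu|^2) evaluated exactly on the n-fold product state
    total = 0.0
    for combo in itertools.product(range(len(values)), repeat=n):
        p = 1.0
        s = 0.0
        for i in combo:
            p *= probs[i]
            s += values[i]
        total += p * (s / n - mu) ** 2
    return total

# independence + identical distribution give omega(|s_n - mu|^2) = var / n
for n in (1, 2, 4, 6):
    assert abs(second_moment_of_mean(n) - var / n) < 1e-12
```

The exact $1/n$ decay of the second moment is what the Chebyshev argument below converts into convergence in probability.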
\section{Communication and Information}\label{sec:info}
We now come to our original theme: an algebraic framework for communication and information processes. Since our primary goal is the modeling of information processes we refer to the simple model of communication in the Introduction and model different aspects of it. In this work we will only deal with sources with a finite alphabet.
\noindent
{\bf Definition.} {\em A source is a pair $\scrp{S}=(B,\Omega)$ where $B$ is an atomic basis of a finite-dimensional abelian $\cstar$ algebra $A$ and $\Omega$ is a state in $\inftens{A}$}.
\noindent
This definition abstracts the essential properties of a source. The basis $B$ is called the {\em alphabet}. A typical output of the source is of the form $x_1\tensor x_2\tensor \dotsb\tensor x_k\tensor \unit \tensor \dotsb \in B^{\infty}$, the infinite product basis of $\inftens{A}$. We identify \(\hat{x}_k= \unit\tensor\dotsb\tensor\unit\tensor x_k\tensor\unit\tensor\dotsb\) with the $k$th signal. If these are independent
then Theorem \ref{thm:structIndep} tells us that $\Omega$ must be a product state. Further, if the state of the source does not change then $\Omega=\omega\tensor\omega\tensor\dotsb$ where $\omega$ is a state in $A$.
For such a state $\omega$ define
\(\cali{O}_{\omega}=\sum_{i=1}^n \omega(x_i)x_i\), where $\{x_1, \dotsc, x_n\}=B$.
We say that $\cali{O}_{\omega}$ is the ``instantaneous'' output of the source in state $\omega$.
\commentout{
Intuitively, $\cali{O}_{\omega}$ is a kind of mean ``point'' in the space of outputs (compare it with the notion of center of mass in mechanics). More importantly, it facilitates calculation of important quantities and has close analogy with the quantum case. The quantum analogue may be pictured as follows. The source outputs ``particles'' in definite ``states'' $x_i$ with probability $p_i=\omega(x_i)$. Note that here state corresponds to a projection operator. A measurement for $x_i$ means applying the dual operator $\omega_i\;(\omega_i(x_j)=\delta_{ij})$ giving $\omega_i(\cali{O}_\omega)= p_i$.
Let $\mathscr{Z}=(X,\omega)$ be a static discrete source. Suppose every $x\in X$ belongs to a finite-dimensional subalgebra generated by a (finite) set of $\omega$-independent elements. Then using the Theorem \ref{thm:structIndep} we may assume that $A=\inftens B$ where $B$ is finite-dimensional abelian $\cstar$ algebra and $\omega$ is an (infinite) product state. In this case, each element of $X$ is a tensor product of elements of an atomic basis of $B$. In the rest of the paper we assume that $X$ is the product basis of atomic elements. For example, if $B$ is the two dimensional algebra with atomic basis $\{y_0,y_1\}$ then $X$ is the set of elements of the form
$z_1\tensor z_2\tensor\dotsb \tensor z_k\tensor\unit\tensor\unit\tensor\dotsb$ where $z_i\in\{y_0,y_1\}$.
Let $B$ be a finite-dimensional $\cstar$ algebra and $A=\inftens B$. We consider $\tensor^n B$ as a subalgebra of $A$ via the standard embedding (all ``factors'' beyond the $n$th place equal $\unit$). Let $X_n$ be its atomic basis in some fixed ordering and let $X=\bigcup_n X_n$. We can consider $B$ as the source alphabet and $X_n$ as strings of length $n$.} Let $A'$ be another finite-dimensional $\cstar$ algebra with atomic basis $B'$.
A source coding is a linear map $f:B\rightarrow T= \sum_{k=1}^m \tensor^k A'$ such that for $x\in B$, $f(x)=x'_{i_1}\tensor x'_{i_2}\tensor \dotsb\tensor x'_{i_r},\; r\leq m$, with $x'_{i_j}\in B'$. Thus each ``letter'' in the alphabet $B$ is coded by ``words'' of maximum length $m$ from $B'$.
\commentout{
Let us consider an example to clarify these points.
Let $\{x_0,x_1,x_2,x_3\}$ be an atomic basis for $B$. Let $B'=G$ with atomic basis $\{y_0,y_1\}$. Define $f_1$ by $f_1(x_0)=y_0,f_1(x_1)= y_1, f_1(x_2)=y_0\tensor y_1 \text{ and } f_1(x_3)=y_1\tensor y_0$. Denote by $\hat{f}_1$ its extension to tensor products. Since $\hat{f}_1(x_0\tensor x_1)=y_0\tensor y_1=\hat{f}_1(x_2)$, $\hat{f}_1$ is not injective. Hence it cannot be inverted on its range. Consider next the map $f_2(x_0)=y_0,f_2(x_1)=y_0\tensor y_1,f_2(x_2)=y_0\tensor y_1\tensor y_1\text{ and }f_2(x_3)=y_1\tensor y_1\tensor y_1$. This map is invertible but one has to look at the complete product before finding the inverse. It is not {\em prefix-free}.}
A code $f:B\rightarrow T$ is defined to be prefix-free if for distinct members $x_1,x_2$ in an atomic basis of $B$, $f'(x_1)f'(x_2)=0$ where $f'$ is the map $f': B\rightarrow \inftens B'$ induced by $f$. That is, distinct elements of the atomic basis of $B$ are mapped to {\em orthogonal} elements. Thus the
``code-word'' $ z_1\tensor z_2\tensor\dotsb \tensor z_k \tensor \unit\tensor \unit\tensor\dotsb$ is not orthogonal to another $ z'_1\tensor z'_2\tensor\dotsb \tensor z'_m \tensor \unit\tensor \unit\tensor\dotsb$ with $k\leq m$ if and only if $z_1=z'_1,\dotsc, z_k=z'_k$.
\commentout{
We observe that one has to be careful about correspondence between the two approaches. For example, one might be tempted to identify the identity $\unit$ with the empty string but the $\unit$ is the sum of the members of an atomic basis! The binary operation ``+'' has a relatively lesser role in the classical formalism but it is crucial in the quantum framework (via superposition principle).}
The useful Kraft inequality can be proved using algebraic techniques.
Corresponding to a finite sequence $k_1\leq k_2\leq \dotsb \leq k_m$ of positive integers let $\alpha_1,\dotsc, \alpha_m$ be a set of prefix-free elements in $\sum_{i\geq 1} \tensor^i A'$ such that $\alpha_i\in \tensor^{k_i} A'$. Further, suppose that each $\alpha_i$ is a tensor product of elements from $B'$. Then
\beq \label{eq:Kraft}
\sum_{i=1}^m n^{k_m-k_i} \leq n^{k_m}
\eeq
This inequality, in which $n$ denotes the dimension of $A'$, is equivalent to the familiar form $\sum_{i=1}^m n^{-k_i}\leq 1$ and is proved by looking at bounds on the dimensions of a sequence of orthogonal subspaces.
\commentout{
The Kraft inequality is valid for decipherable sequences \cite{McMillan}. However, the proof is essentially combinatorial. The Kraft inequality also provides a sufficiency condition for prefix-free code \cite{Ciszar,CoverT}. Thus the existence of a decipherable code of word-lengths $(k_1,k_2,\dotsc,k_m)$ implies the existence of a prefix-free code of same word-lengths.}
In the following, we restrict ourselves to prefix-free codes.
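As a quick illustration of inequality (\ref{eq:Kraft}) — a numerical sketch with hypothetical word lengths, not part of the formal apparatus — the following Python snippet checks the Kraft condition for a list of codeword lengths over an alphabet of size $n$:

```python
def kraft_holds(lengths, n):
    # sum_i n^(k_m - k_i) <= n^(k_m), equivalently sum_i n^(-k_i) <= 1
    k_max = max(lengths)
    return sum(n ** (k_max - k) for k in lengths) <= n ** k_max

# lengths of the binary prefix-free code {0, 10, 110, 111}
assert kraft_holds([1, 2, 3, 3], 2)
# three distinct binary words of length 1 cannot form a prefix-free code
assert not kraft_holds([1, 1, 1], 2)
```

Working with integer powers rather than the fractions $n^{-k_i}$ keeps the check exact, mirroring the dimension-counting proof.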
\commentout{
\begin{lem}
Let $f$ be a continuous real function on $(0,\infty )$ such that $xf(x)$ is convex and $\lim_{x\rightarrow 0} xf(x)=0$. Let $A$ be a finite-dimensional $\cstar$ algebra with atomic basis $\{x_1,\dotsc, x_n\}$ and $\omega$ a state on $A$. Then for any set of numbers $\{a_i\; :i=1,\dotsc, n;\; a_i>0\text{ and } \sum_i a_i\leq 1\}$ we have
\(\omega(\sum_i f(\frac{\omega(x_i)}{a_i})x_i) \geq f(1)\)
\end{lem}
\begin{proof}
Let $\omega(x_i)=p_i$. We have to show that $\sum p_if(p_i/a_i) \geq f(1)$. First assume that all $p_i>0$ and $\sum_i a_i=1$. Then
\[ \sum_i p_i f(p_i/a_i) =\sum_i a_i\frac{p_i }{a_i}f(\frac{p_i }{a_i}) \geq f(\sum p_i)=f(1)\]
by convexity of $xf(x)$. The general case can be proved by starting with $a_i$ corresponding to $p_i>0$ and adding extra $a_j$'s to satisfy $\sum_i a_i=1$ if necessary. The corresponding $p_j$ is set to $0$. Now define a new function $g(x)=xf(x), \; x>0$ and $g(0)=0$. The conclusion of the lemma follows by arguing as above with $g$.
\end{proof}
}
Using the convexity of $xf(x)$ for $f(x)=-\log {x}$ and the Kraft inequality (\ref{eq:Kraft}) we deduce the following.
\begin{propn}[Noiseless coding]
Let $\mathscr{S}$ be a source with output $\cali{O}_{\omega}\in A$, a finite-dimensional $\cstar$ algebra with atomic basis $\{x_1,\dotsc, x_n\}$ (the alphabet). Let $g$ be a prefix-free code such that $g(x_i)$ is a tensor product of $k_i$ members of the code basis. Then
\(\omega(\sum_i k_ix_i+\log{\cali{O}_{\omega}})\geq 0\)
\end{propn}
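To make the proposition concrete, here is a small numerical sketch (the distribution and code are hypothetical, illustrative choices) checking that the expected codeword length $\omega(\sum_i k_ix_i)$ dominates the source entropy $-\omega(\log_2 \cali{O}_{\omega})$, which is exactly the content of the displayed inequality:

```python
import math

# hypothetical source distribution and a matching binary prefix-free code
probs = [0.5, 0.25, 0.125, 0.125]      # omega(x_i)
lengths = [1, 2, 3, 3]                 # k_i for the code {0, 10, 110, 111}

expected_len = sum(p * k for p, k in zip(probs, lengths))
entropy = -sum(p * math.log2(p) for p in probs)

# omega(sum_i k_i x_i + log O_omega) >= 0   <=>   E[length] >= H
assert expected_len >= entropy - 1e-12
```

For this dyadic distribution the code is optimal and the bound is attained with equality.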
Next we give a simple application of the law of large numbers. First define a positive functional $\tr$ on a finite dimensional abelian $\cstar$ algebra $A$ with an atomic basis $\{x_1,\dotsc, x_d\}$ by
\(\tr =\omega_1+\dotsb+\omega_d\) where $\omega_i$ are the dual functionals. It is clear that $\tr$ is independent of the choice of atomic basis.
\begin{thm}[Asymptotic Equipartition Property (AEP)]\label{thm:AEP}
Let $\scrp{S}$ be a source with output $\cali{O}_{\omega}=\sum_{i=1}^d \omega(x_i)x_i$ where $\omega$ is a state on the finite dimensional algebra with atomic basis $\{x_i\}$. Then given $\epsilon >0$ there is a positive integer $n_0$ such that for all $n>n_0$
\[ P(2^{-n(H(\omega)+\epsilon)}\leq \tensor^n \cali{O}_{\omega}\leq 2^{-n(H(\omega)-\epsilon)}) > 1- \epsilon \]
where $H=-\omega(\log_2(\cali{O}_{\omega}))$ is the {\em entropy} of the source and the probability distribution is calculated with respect to the state $\Omega_n=\omega\tensor\dotsm\tensor\omega$ ($n$ factors) of $\tensor^n A$. If $Q$ denotes the identity in the subalgebra generated by $(\epsilon I-|\log_2(\tensor^n\cali{O}_\omega)+nH|)_+$ then
\[(1-\epsilon)2^{n(H(\omega)-\epsilon)}\leq \tr(Q) \leq 2^{n(H(\omega)+\epsilon)} \]
\end{thm}
\commentout{
The function $\log{x}\equiv \log_2{x}\;(=\ln{x}/\ln{2})$ is defined for strictly positive elements of a $\cstar$ algebra. We extend the definition to all non-zero $x\geq 0$. Let $\{y_i\}$ be a atomic basis in an abelian $\cstar$ algebra. Let $y=\sum_i a_iy_i$ with $a_i\geq 0$. Then define $\log_2{y}=\sum_i b_i y_i$ where $b_i=\log{a_i}$ if $a_i>0$ and 0 otherwise. This definition implies that some standard properties of $\log$ are no longer true (e.g.\ $2^{\log{x}}\neq x$). But in the present context it gives the correct result when we take expectation values as in the formulas in the theorem.
A somewhat longer but mathematically better justified route is to ``renormalize'' the state. Thus if $\omega(x_i)=0$ for $k$ indices we define $\omega'(x_i)=\delta$ where $\delta$ is arbitrarily small but positive and $\omega'(x_j)=\omega(x_j)-k\delta$ where $\omega'(x_j)>k\delta$. If we can prove the theorem now for $\omega'$ and since the relations are valid in the limit $\delta\rightarrow 0$ then we are done. We will not take this path but implicitly assume that the probabilities are positive.} Note that the element $Q$ is a projection on the subalgebra generated by $(\epsilon I-|\log_2(\tensor^n\cali{O}_\omega)+nH|)_+$. It corresponds to the set of strings whose probabilities are between $2^{-n(H+\epsilon)}$ and $2^{-n(H-\epsilon)}$. The integer $\tr(Q)$ is simply the cardinality of this set.
\commentout{
\begin{proof}[Proof of the theorem]
First note that $\log{ab}=\log a+\log b$ for elements $a,b\geq 0$ in $A$. We can write $\tensor^n\cali{O}_{\omega}=X_1X_2\dotsb X_n$ where $X_i=\unit\tensor\unit\tensor\dotsm\tensor\cali{O}_{\omega}\tensor\unit\tensor\dotsm\tensor\unit$ with $\log{\cali{O}_{\omega}}$ in the $i$th place. The fact that $\Omega_n$ is a product state on $\tensor^n A$ (corresponding to a source whose successive outputs are independent) implies that $X_i$ are independent and identically distributed. We can now apply the corollary to Theorem \ref{thm:weak-law} yielding
\(P(|\log{(\tensor^n\cali{O}_{\omega})} -\Omega_n(\log{X_1})|>\epsilon)=P(|\log{(\tensor^n\cali{O}_{\omega})} -\omega(\log{(\cali{O}_{\omega})})|>\epsilon)\).
\end{proof}
}
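As a numerical illustration of Theorem \ref{thm:AEP} (a hypothetical sketch for a two-letter source; the weights are arbitrary), the total probability of the typical set grows toward 1 as the block length $n$ increases:

```python
import math
from itertools import product

p = [0.3, 0.7]                                   # state weights of a two-letter source
H = -sum(q * math.log2(q) for q in p)            # source entropy

def typical_prob(n, eps):
    # total probability of length-n strings whose probability lies in
    # [2^{-n(H+eps)}, 2^{-n(H-eps)}]
    total = 0.0
    for seq in product(range(2), repeat=n):
        pr = math.prod(p[s] for s in seq)
        if 2 ** (-n * (H + eps)) <= pr <= 2 ** (-n * (H - eps)):
            total += pr
    return total

# the typical set captures more probability mass as n grows
assert typical_prob(16, 0.2) > typical_prob(4, 0.2)
```

Brute-force enumeration of all $2^n$ strings keeps the sketch transparent; for large $n$ one would group strings by type class instead.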
We now come to the most important part of the communication model: the {\em channel}.
The original paper of Shannon characterized channels by a transition probability function. We will consider only (discrete) memoryless channels (DMS). A DMS channel has an input alphabet $X$ and output alphabet $Y$ and
\commentout{
a sequence of random functions $\phi_n: X^n\rightarrow Y^n$. The latter are characterized by probability distributions $p_n(y^{(n)}|x^{(n)})$, the interpretation being: $\phi_n(x^{(n)})=y^{(n)}$ with conditional probability $p_n(y^{(n)}|x^{(n)})$. Note that the distribution depends on the entire history. We say that such a channel has (infinite) memory. A channel has finite memory if there is an integer $k\geq 0$ such that if $x^{(n)}=x_nx_{n-1}\cdots x_{n-k+1}\dotsc x_1$ then $p_n(y^{(n)}|x^{(n)})= p_n(y^{(n)}|x'^{(n)})$ for any string $x_n'$ of length $n$ such that $x_n'=x_n, \dotsc, x_{n-k+1}'=x_{n-k+1}$. That is, the probability distribution depends on the most recent $k$ symbols seen by the channel. A channel is {\em memoryless} if $k=1$. Since we will be dealing mostly with discrete memoryless channels (DMS) this property will be tacitly assumed unless stated otherwise. In the memoryless case it is easy to show the simple form of transition probabilities
\beq
\begin{split}
&p_n(y^{(n)}|x^{(n)})=p_n(y_1\dotsc y_n|x_1\dotsc x_n)\\
& =p(y_1|x_1)p(y_2|x_2)\dotsb p(y_n|x_n)\\
\end{split}
\eeq
This motivates us to define the}
a {\em channel transformation matrix} $C(y_j|x_i)$ with $y_j\in Y$ and $x_i\in X$. Since the entry $C(y_j|x_i)$ represents the probability that the channel outputs $y_j$ on input $x_i$, we have $\sum_j C(y_j|x_i)=1$ for all $i$; that is, the matrix $C(ij)=C(y_j|x_i)$ is {\em row stochastic}. This is the standard formulation~\cite{Ciszar,CoverT}. We now turn to the algebraic formulation.
\noindent
{\bf Definition.} A DMS channel is a triple $\cali{C}=\{X,Y,C\}$ where $X$ and $Y$ are abelian $\cstar$ algebras of dimension $m$ and $n$ respectively and $C: Y \rightarrow X$ is a unital positive map. The algebras $X$ and $Y$ will be called the input and output algebras of the channel respectively. Given a state $\omega$ on $X$ we say that $(X,\omega)$ is the input source for the channel.
\noindent
Sometimes we write the entries of $C$ in the more suggestive form $C_{ij}=C(y_j|x_i)$ where $\{y_j\}$ and $\{x_i\}$ are atomic bases for $Y$ and $X$ respectively. Thus $C(y_j)=\sum_i C_{ij}x_i= \sum_i C(y_j|x_i)x_i$. Note that in our notation $C$ is an $m\times n$ matrix. Its transpose $C^T_{ji}=C(y_j|x_i)$ is the channel matrix in the standard formulation. We have to deal with the transpose because the channel is a map {\em from} the output alphabet to the input alphabet. This may be counterintuitive but observe that any map $Y\rightarrow X$ defines a unique dual map $\cali{S}(X)\rightarrow \cali{S}(Y)$, on the respective state spaces. Informally, a channel transforms a probability distribution on the input alphabet to a distribution on the output.
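For instance (a hypothetical numerical sketch, with the channel matrix stored row-stochastically as in our convention), the dual map sends an input state $\omega$ to the output distribution $\tilde{\omega}(y_j)=\sum_i C(y_j|x_i)\omega(x_i)$:

```python
def output_state(C, w):
    # dual map: w~(y_j) = sum_i C[i][j] * w[i], i.e. w^T C for row-stochastic C
    return [sum(C[i][j] * w[i] for i in range(len(C))) for j in range(len(C[0]))]

# hypothetical binary symmetric channel with crossover probability 0.1
C = [[0.9, 0.1], [0.1, 0.9]]
w = [0.6, 0.4]                       # input state omega
wt = output_state(C, w)
assert abs(sum(wt) - 1.0) < 1e-12    # states map to states: C is unital
assert abs(wt[0] - (0.9 * 0.6 + 0.1 * 0.4)) < 1e-12
```

The first assertion is the algebraic statement that a unital positive map has a dual carrying states to states.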
\commentout{
Let us note that in case of abelian algebras every positive map is guaranteed to be {\em completely positive} \cite{Tak1}. This is no longer true in the non-abelian case. Hence for the quantum case completely positivity has to be explicitly imposed on (quantum) channels.}
We characterize a channel by input/output algebras (of observables) and a positive map. Like the source output we now define a useful quantity called the {\em channel output}. Corresponding to the atomic basis $\{y_i\}$ of $Y$ let $\{y_{i(k)}\}$ be an atomic basis of $\tensor^k Y$. Here $i(k)=(i_1i_2\dotsc i_k)$ is a multi-index. Similarly we have an atomic basis $\{x_{j(k)}\}$ for $\tensor^k X$. The level-$k$ channel output is defined to be
\(O^k_C = \sum_{i(k)} y_{i(k)}\tensor C^{(k)}(y_{i(k)}) \).
Here $C^{(k)}$ represents the channel transition probability matrix on the $k$-fold tensor product corresponding to strings of length $k$. In the DMS case it is simply the $k$-fold tensor product of the matrix $C$. The channel output defined here encodes the most important features of the communication process. First, given the input source function \(\cali{I}_{\omega^k}=\sum_i\omega^k(x_{i(k)})x_{i(k)}\) the output source function is defined by
\(\cali{O}_{\tilde{\omega}^k} = I\tensor \tr_{\tensor^kX}((\unit \tensor \cali{I}_{\omega^k})O^k_C)
=\sum_i \sum_j C(y_{i(k)}|x_{j(k)})\omega^k(x_{j(k)})y_{i(k)}\).
Here, the state $\tilde{\omega}^k$ on the output space $\tensor^k Y$ can be obtained via the dual $\tilde{\omega}^k(y)=\tilde{C}^k(\omega^k)(y)=\omega^k(C^k(y))$. The formula above is an alternative representation which is very similar to the quantum case. The {\em joint output} of the channel can be considered as the combined output of the two terminals of the channel. Thus the joint output
\beq \label{eq:chOutputJoint}
\begin{split}
&\cali{J}_{\tilde{\Omega}^k} = (\unit \tensor \cali{I}_{\omega^k})O^k_C=\sum_{ij} \Omega^k(y_{i(k)}\tensor x_{j(k)})y_{i(k)}\tensor x_{j(k)}, \\
& \Omega^k(y_{i(k)}\tensor x_{j(k)})\equiv C(y_{i(k)}|x_{j(k)})\omega(x_{j(k)})
\end{split}
\eeq
Let us analyze the algebraic definition of channel given above. For simplicity of notation, we restrict ourselves to level 1. The explicit representation of channel output is
\(\sum_i y_i\tensor \sum_j C(y_i|x_j)x_j \)
We interpret this as follows: if $y_i$ is observed on the channel's out-terminal then the input could be $x_j$ with probability \(C(y_i|x_j)\omega(x_j)/\sum_jC(y_i|x_j)\omega(x_j)\). Now suppose that for a fixed $i$, $C(y_i|x_j)=0$ for all $j$ except one, say $j_i$. Then on observing $y_i$ at the output we are certain that the input is $x_{j_i}$. If this is true for all values of $y$ then we have an instance of a lossless channel. Given $1\leq j\leq m$ let $d_j$ be the set of integers $i$ for which $C(y_i|x_j)> 0$. If the channel is lossless then $\{d_j\}$ form a partition of the set $\{1,\dotsc, n\}$. The corresponding channel output is
\( O_C= \sum_j \Bigl(\sum_{i\in d_j} C(y_i|x_j)y_i\Bigr)\tensor x_j\).
At the other extreme is the {\em useless} channel in which there is no correlation between the input and the output.
To define it formally, consider a channel $\cali{C}=\{X,Y,C\}$ as above. The map $C$ induces a map $C': Y\tensor X\rightarrow X$ defined by $C'(y\tensor x)=xC(y)$. Given a state $\omega$ on $X$ the
dual of the map $C'$ defines a state $\Omega_C$ on $Y\tensor X$: \(\Omega_C(y\tensor x)=\omega(C'(y\tensor x))=C(y|x)\omega(x)\). We call $\Omega_C$ the joint (input-output) state of the channel. A channel is
useless if $Y$ and $X$ (identified as $Y\tensor \unit$ and $\unit \tensor X$ resp.) are $\Omega_C$-independent. It is easily shown that: {\em a channel $\cali{C}=\{X,Y,C\}$ with input source $(X,\omega)$ is useless iff the matrix $C_{ij}=C(y_j|x_i)$ is of rank 1}.
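The rank-1 criterion is easy to check numerically. The following Python sketch (with hypothetical example data) forms the joint state $\Omega_C(y_j\tensor x_i)=C(y_j|x_i)\omega(x_i)$ and tests whether it factors into the product of its marginals, i.e., whether input and output are $\Omega_C$-independent:

```python
def joint_state(C, w):
    # Omega_C(y_j (x) x_i) = C[i][j] * w[i], rows indexed by input letters
    return [[C[i][j] * w[i] for j in range(len(C[0]))] for i in range(len(C))]

def is_product(J, tol=1e-12):
    mx = [sum(row) for row in J]                                   # marginal on X
    my = [sum(J[i][j] for i in range(len(J))) for j in range(len(J[0]))]  # on Y
    return all(abs(J[i][j] - mx[i] * my[j]) < tol
               for i in range(len(J)) for j in range(len(J[0])))

w = [0.6, 0.4]                                           # input state omega
assert is_product(joint_state([[0.5, 0.5], [0.5, 0.5]], w))      # rank 1: useless
assert not is_product(joint_state([[1.0, 0.0], [0.0, 1.0]], w))  # identity: lossless
```

A rank-1 channel matrix has identical rows, so conditioning on the input does not change the output distribution — exactly the independence in the definition above.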
The algebraic version of the channel coding theorem guarantees that it is possible to approximate, in the long run, an arbitrary channel (except the useless case) by a lossless one.
\begin{thm}[Channel coding]\label{thm:ch_coding}
Let $\cali{C}$ be a channel with input algebra $X$ and output algebra $Y$. Let \(\{x_i\}_{i=1}^n\text{ and } \{y_j\}_{j=1}^m\) be atomic bases for $X$ and $Y$ resp. Given a state $\omega$ on $X$, if the channel is not useless then for each $k$ there are subalgebras \(Y_k\subset \tensor^k Y, X_k\subset \tensor^k X\), a map $C_k: Y_k\rightarrow X_k$ induced by $C$ and a lossless channel $L_k: Y_k\rightarrow X_k$ such that
\[\lim_{k\rightarrow \infty} \Omega(|O_{C_k}-O_{L_k}|) = 0 \text{ on } T_k=Y_k\tensor X_k\]
Here $\Omega=\tensor^{\infty}\Omega_C$ and on $\tensor^k Y\tensor \tensor^k X$ it acts as $\Omega^k=\tensor^k\Omega_C$ where $\Omega_C$ is the state induced by the channel and a given input state $\omega$. Moreover, if $r_k=\text{dim}(X_k)$ then $R=\frac{\log{r_k}}{k}$, called the transmission rate, is independent of $k$.
\end{thm}
Let us clarify the meaning of the above statements. The theorem simply states that on the chosen set of codewords the channel output of $C_k$ induced by the given channel can be made arbitrarily close to that of a lossless channel $L_k$. Since a lossless channel has a definite decision scheme for decoding, the choice of $L_k$ is effectively a decision scheme for decoding the original channel's output when the input is restricted to our ``code-book''. This implies that the probability of error tends to 0, so it is possible to choose a set of ``codewords'' which can be transmitted with high reliability. The proof of the theorem \cite{Patra09} uses algebraic arguments only. The theorem guarantees ``convergence in the mean'' in the appropriate subspace, which implies convergence in probability. For a lossless channel the input entropy $H(X)$ is equal to the mutual information. We may think of this as conservation of entropy or information, which justifies the term ``lossless''. Since it is always the case that $H(X)-H(X|Y)=I(X,Y)$, the quantity $H(X|Y)$ can be considered the loss due to the channel. The algebraic version of the theorem serves two primary purposes. It gives us the abelian perspective from which we will seek possible extensions to the non-commutative case. Secondly, the channel map $L$ can be used for a decoding scheme. Thus we may think of a coding-decoding scheme for a given channel as a sequence of pairs $(X_k, L_k)$ as above.
The coding theorems can be extended to more complicated scenarios like ergodic sources and channels with finite memory. We will not pursue these issues further here. But we are confident that these generalizations can be appropriately formulated and proved in the algebraic framework.
In the preceding sections we have laid the basic algebraic framework for classical information theory. Although we often confined our discussion to finite-dimensional algebras corresponding to finite sample spaces, it is possible to extend it to infinite-dimensional algebras corresponding to continuous sample spaces.
\commentout{
In this regard, a natural question is: can the algebraic formulation replace Kolmogorov axiomatics based on measure theory? Naively, the answer is no because the assumption of a norm-compete algebra imposes the restriction that the random variables that they represent must be {\em bounded}. Moreover, the GNS construction implies that the algebraic framework is essentially equivalent to (almost) bounded random variables on a locally compact space. In order to deal with the unbounded case we have to go beyond the normed algebra structures. A possible course of action is indicated in the examples given in section \ref{sec:examples}: via the use of a ``cut-off''. A more general approach would be to consider sequences which converge in a topology weaker than the norm topology to elements of a larger algebra. These and other related issues on foundations are deep and merit a separate investigation.}
These topics will be investigated in the future in the non-commutative setting. We will delve deeper into these analogies and aim to throw light on some basic issues like quantum Huffman coding \cite{Braunstein}, channel capacities and general no-go theorems among others, once we formulate the appropriate models.
\section{Introduction}\label{sec:intro}
\renewcommand{\baselinestretch}{0.98}\normalsize\noindent
{\it Private information retrieval (PIR)} protocols, first introduced by Chor, Goldreich, Kushilevitz,
and Sudan in~\cite{CKGS98}, allow a user to retrieve a data item from a database without
revealing any information about the identity of the item to any single server. The original formulation of the PIR problem considers replicating a binary string on several non-communicating servers. The objective is to optimize the communication cost, including both the upload cost and the download cost, for privately retrieving one single bit. In recent years, the information-theoretic reformulation of the PIR problem assumes the more practical scenario in which the files are of arbitrarily large size. Under this setup, the number of uploaded bits can be neglected with respect to the corresponding number of downloaded bits since the upload does not depend on the size of the file~\cite{CHY15}. This reformulation introduces the \emph{rate} of a PIR scheme to be the ratio between the size of the retrieved file and the total number of downloaded bits from all servers. The supremum of achievable rates over all PIR schemes is defined as the \emph{PIR capacity}. In their pioneering work~\cite{SJ17B} Sun and Jafar determine the exact PIR capacity of the classical PIR model of replication.
Starting from~\cite{SRR14}, the research on PIR has been combined with distributed storage systems instead of the replication-based system.
This brings in the other important parameter, i.e., the {\it storage overhead} of the distributed storage system, defined as the ratio between the total number of bits stored on all the servers and the number of bits of the database. Several papers have been studying the relation between the storage overhead and the rate of a PIR scheme.
Chan et al.~\cite{CHY15} offer a tradeoff between the storage overhead and rate for linear PIR schemes. They show that when each server stores a fraction $0<\epsilon\le 1$ of the database, then the rate of a linear PIR scheme should be at most $\frac{N-1/\epsilon}{N}$, where $N$ is the number of servers. Tajeddine et al.~\cite{TGE17} propose a PIR scheme achieving this upper bound when the storage code is an arbitrary $(N,K)$-MDS code, so $\epsilon=\frac{1}{K}$ and the PIR rate is $\frac{N-K}{N}$. Banawan and Ulukus~\cite{BU16} show that the exact PIR capacity when using an arbitrary $(N,K)$-MDS storage code is $(1+\frac{K}{N}+\cdots+\frac{K^{M-1}}{N^{M-1}})^{-1}$, a value that depends on the number of files $M$ and tends to $\frac{N-K}{N}$ as $M$ approaches infinity. However, similar to the scheme of Sun and Jafar~\cite{SJ17B}, this optimal scheme can be implemented only if the file size $L$ is an exponential function of $M$~\cite{SJ16C,XZ17}. For a more practical setting we are more interested in the case when $L$ is at most polynomial in $M$, and the scheme of Tajeddine et al.~\cite{TGE17} works for this setup.
Recall the development of the research on distributed storage systems:
Besides optimizing repair bandwidth or storage for distributed storage
systems, \emph{access complexity} is also a concern since the time of
reading data may cause a bottleneck. The research of optimal-access
MDS codes started in \cite{TWB14}. A similar idea in locally
repairable codes was introduced in~\cite{GHSY12} for the sake of reducing
the nodes to be accessed. The complexity of the computations done by
the servers for the various tasks of the distributed storage system is
an important parameter in such systems, but it has not received enough
attention in PIR schemes. The only work which took the computational
complexity of the servers into account, in the new PIR model, was
done by Lavauzelle~\cite{Lav18}. Our approach is completely different.
For practical use of PIR protocols in distributed storage systems, we
should also consider the access complexity in the scheme. However, to
the best of our knowledge, the access complexity of PIR has not been
studied in previous works so far. In fact, most known PIR schemes
require accessing almost all of the data stored on each server in the
worst case. The next example demonstrates the concepts and
improvements for the access complexity that we study in this work. We
will consider the worst case in this paper, but the average case is
also very interesting from both theoretical and practical points of view.
\begin{example}
{\it Consider the following 2-server PIR scheme where each server stores the whole database ${\boldsymbol x}=({\boldsymbol x}^1,{\boldsymbol x}^2,\dots,{\boldsymbol x}^M)$. A user chooses an arbitrary binary vector ${\boldsymbol a}=(a_1,\dots,a_M)\in \mathbb{F}_2^{M}$ and then sends ${\boldsymbol a}$ and ${\boldsymbol a}+{\boldsymbol e}_f$ to the two servers respectively. From the responses $\sum_{i=1}^Ma_i{\boldsymbol x}^i$ and $\sum_{i=1}^Ma_i{\boldsymbol x}^i+{\boldsymbol x}^f$ the user successfully retrieves the desired file ${\boldsymbol x}^f$ privately. While the main advantage of this solution is its low download complexity, it suffers from extremely large access complexity since in the worst case almost all $M$ files are accessed on each server. Hence, in this scheme the bottleneck will no longer be the upload or download time, but the access time to read all files. The access complexity can be improved at the cost of increasing the storage overhead. That is, when storing more information in the servers the computation ${\boldsymbol a} \cdot {\boldsymbol x}$ will require accessing fewer files. For example, assume we also store in each server the file ${\boldsymbol x}_\Sigma$ given by ${\boldsymbol x}_\Sigma=\sum_{i=1}^M{\boldsymbol x}^i$. Then, in the worst case, the server will read only about $M/2$ files, since either ${\boldsymbol a}$ or its complement has Hamming weight at most $M/2$. Thus we save half of the access complexity at the cost of storing one additional file on each server.}
\end{example}
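The access saving in this example is easy to check mechanically. The following Python sketch is our own illustration (not part of the scheme's formal description): files are modeled as bit strings packed into integers, and a query is answered either directly or, when more than half of its coefficients are $1$, via ${\boldsymbol x}_\Sigma$ and the complementary files.

```python
import random

def answer(db, a, x_sum):
    """Answer the GF(2) query a = (a_1..a_M), using the extra stored file
    x_sum (the XOR of all files) to cut the worst-case access roughly in half."""
    M = len(db)
    ones = [i for i in range(M) if a[i] == 1]
    if len(ones) <= M - len(ones) + 1:
        acc = [db[i] for i in ones]             # read the selected files directly
    else:
        zeros = [i for i in range(M) if a[i] == 0]
        acc = [x_sum] + [db[i] for i in zeros]  # complement trick through x_sum
    resp = 0
    for v in acc:
        resp ^= v
    return resp, len(acc)

random.seed(1)
M = 8
db = [random.getrandbits(16) for _ in range(M)]
x_sum = 0
for v in db:
    x_sum ^= v

for _ in range(100):
    a = [random.randint(0, 1) for _ in range(M)]
    resp, n_acc = answer(db, a, x_sum)
    direct = 0
    for i in range(M):
        if a[i]:
            direct ^= db[i]
    assert resp == direct            # the response is still the requested sum
    assert n_acc <= M // 2 + 1       # but at most about M/2 files were read
```

In the extreme case ${\boldsymbol a}=(1,\dots,1)$ the server reads the single file ${\boldsymbol x}_\Sigma$ instead of all $M$ files.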
Intuitively for a PIR scheme in a distributed storage system there will be a relationship among the three parameters: storage overhead, PIR rate, and access complexity. The ultimate objective is to characterize the tradeoff of any two parameters when fixing the third. In this paper, we make a first step towards solving this problem. Given the number of servers $N$, the number of files $M$, we fix the size of the storage space for each server (and thus fix the storage overhead) and analyze the rate and access complexity of several PIR schemes.
The rest of the paper is organized as follows. In Section~\ref{sec:stat}, we give a formal statement of the PIR problem studied in the paper. In Section~\ref{sec:cov}, we discuss how to improve the access complexity using covering codes. In Section~\ref{sec:main}, we analyze the rate and access complexity for several PIR schemes. Finally, Section~\ref{sec:concl} concludes the paper.
\section{Problem Statement}\label{sec:stat}
A PIR scheme for a distributed storage system consists of the following parameters:
\begin{itemize}
\item The system has $N$ servers. A database consists of $M$ files ${\boldsymbol x}^1,{\boldsymbol x}^2,\dots,{\boldsymbol x}^M$ of equal length $L$. The size of the database is then $ML$.
\item Each server stores $\epsilon ML$ bits, $\epsilon>0$. Thus the total storage is $\epsilon NML$.
\item The {\it storage overhead} is defined as the ratio between the total storage and the size of the database, i.e., $\epsilon N$.
\item The storage code of the system is an encoding mapping $({\boldsymbol x}^1,{\boldsymbol x}^2,\dots,{\boldsymbol x}^M)\in\mathbb{F}_2^{ML}\longrightarrow({\boldsymbol y}_1,\dots,{\boldsymbol y}_N),~{\boldsymbol y}_n\in\mathbb{F}_2^{\epsilon ML}.$
\item To retrieve a file, a user downloads $\rho_n$ bits from the $n$th server. The total download cost is then $\sum_{n=1}^N \rho_n$.
\item The {\it rate} $\Omega$ of a PIR scheme is defined as the ratio of the size of a desired file and the number of downloaded bits, i.e. $\Omega=\frac{L}{\sum_{n=1}^N \rho_n}$.
\item The $\rho_n$ downloaded bits from the $n$th server are functions of the data $\mathbf{y}_n$ it stores. The calculation of these downloaded bits requires the server to access $\delta_n ML$ bits in $\mathbf{y}_n$. $\delta_n$ is called the \emph{access complexity} of the $n$th server. The {\it total access complexity} is defined as $\Delta=\sum_{n=1}^N \delta_n$.
\end{itemize}
We call a 6-tuple $(N,M,L,\Omega,\Delta,\epsilon)$ {\it achievable}, if for a distributed storage system with parameters $N$, $M$ and $L$, we have a PIR scheme with rate $\Omega$, total access complexity $\Delta$, and each server stores a fraction $\epsilon>0$ of the whole database (and thus the storage overhead is $\epsilon N$). When $N$, $M$ and $L$ are clear from the context or not relevant, we abbreviate the 6-tuple as a 3-tuple $(\Omega,\Delta,\epsilon)$. The ultimate objective is to characterize the exact tradeoff between any two of the parameters $\Omega$, $\Delta$ and $\epsilon$ when fixing the third. In this paper we make a first step towards solving this problem by finding some achievable 3-tuples of $(\Omega,\Delta,\epsilon)$ with a predetermined $\epsilon$.
Intuitively the storage space can be divided into two parts. One part represents the indispensable storage for a particular PIR scheme and is referred to as {\it the storage for PIR}. This part represents the independent symbols stored on each server. The remaining part is jointly designed with the former part for improving the access complexity on each server. We illustrate this idea via the following example.
\begin{example}
{\it Consider a distributed storage system storing a database containing $M$ files ${\boldsymbol x}^1,{\boldsymbol x}^2,\dots,{\boldsymbol x}^M$ of equal size $L$. Assume we have $N=3$ servers with $\epsilon=1$, i.e., each server can store $ML$ bits. A user wants to retrieve a specific file ${\boldsymbol x}^f$.
One way is to allocate all the storage space to be used for PIR, so each server stores the whole database. Divide each file into two equal parts ${\boldsymbol x}^m=({\boldsymbol x}^m_1,{\boldsymbol x}^m_2)$. A user chooses two independent random vectors ${\boldsymbol a}$ and ${\boldsymbol b}$ in $\mathbb{F}_2^{M}$. He asks for $\sum_{i=1}^Ma_i{\boldsymbol x}^i_1+\sum_{i=1}^Mb_i{\boldsymbol x}^i_2$, $\sum_{i=1}^Ma_i{\boldsymbol x}^i_1+\sum_{i=1}^Mb_i{\boldsymbol x}^i_2+{\boldsymbol x}^f_1$ and $\sum_{i=1}^Ma_i{\boldsymbol x}^i_1+\sum_{i=1}^Mb_i{\boldsymbol x}^i_2+{\boldsymbol x}^f_2$ from the three servers respectively. Therefore he downloads $\frac{3L}{2}$ bits, so the rate of the scheme will be $\Omega=2/3$. Each server will access almost all the data in the worst case. Altogether almost $3ML$ bits should be accessed throughout the scheme. Then the total access complexity will be $\Delta=3$. So we have an achievable 3-tuple $(\Omega=2/3,\Delta=3,\epsilon=1)$.
Yet another way is to only use half of the storage for PIR and the other half for improving the access complexity. Let each server store only half of the database. Say we have $\{{\boldsymbol x}^m_1:1\le m \le M\}$ on the first server, $\{{\boldsymbol x}^m_2:1\le m \le M\}$ on the second server and a coded form $\{{\boldsymbol x}^m_1+{\boldsymbol x}^m_2:1\le m \le M\}$ on the third server. Again a user chooses two independent random vectors ${\boldsymbol a}$ and ${\boldsymbol b}$ in $\mathbb{F}_2^{M}$. He makes two queries from each server and gets the responses as follows:
\begin{center}
\begin{small}
$\begin{array}{ccc}
\text{Server I} & \text{Server II} & \text{Server III} \\\hline
\sum_{i=1}^Ma_i{\boldsymbol x}^i_1+{\boldsymbol x}^f_1 & \sum_{i=1}^Ma_i{\boldsymbol x}^i_2 & \sum_{i=1}^Ma_i({\boldsymbol x}^i_1+{\boldsymbol x}^i_2) \\
\sum_{i=1}^Mb_i{\boldsymbol x}^i_1 & \sum_{i=1}^Mb_i{\boldsymbol x}^i_2+{\boldsymbol x}^f_2 & \sum_{i=1}^Mb_i({\boldsymbol x}^i_1+{\boldsymbol x}^i_2) \\\hline
\end{array}$
\end{small}
\end{center}
This is exactly the scheme of Tajeddine et al. in~\cite{TGE17} when using a $(3,2)$-MDS storage code. In this scheme the download will be $3L$ bits so the rate will be $\Omega=1/3$. To improve the access complexity, each server stores a coded form of the data using a covering code approach instead of storing $\{{\boldsymbol x}^m_1:1\le m \le M\}$, $\{{\boldsymbol x}^m_2:1\le m \le M\}$ or $\{{\boldsymbol x}^m_1+{\boldsymbol x}^m_2:1\le m \le M\}$ in their original form. For each query a server will only need to read about $0.22ML$ bits (to be explained in Section \ref{sec:cov}). So altogether at most $1.32ML$ bits are accessed in the scheme, resulting in the total access complexity $\Delta=1.32$. So we have an achievable 3-tuple $(\Omega=1/3,\Delta=1.32,\epsilon=1)$.}
\end{example}
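The retrieval step of the second scheme can be verified with a short sketch (again our own illustration; file halves are modeled as 8-bit integers and $+$ is XOR). XORing the first responses of servers III and II isolates $\sum_i a_i{\boldsymbol x}^i_1$, which combined with server I yields ${\boldsymbol x}^f_1$; the symmetric combination yields ${\boldsymbol x}^f_2$.

```python
import random

def xor_comb(coeffs, vecs):
    """XOR of vecs[i] for which coeffs[i] == 1 (a GF(2) linear combination)."""
    r = 0
    for c, v in zip(coeffs, vecs):
        if c:
            r ^= v
    return r

random.seed(7)
M, f = 5, 2                                      # M files, retrieve file f
x1 = [random.getrandbits(8) for _ in range(M)]   # first halves x^m_1
x2 = [random.getrandbits(8) for _ in range(M)]   # second halves x^m_2
x3 = [u ^ v for u, v in zip(x1, x2)]             # server III stores x^m_1 + x^m_2

a = [random.randint(0, 1) for _ in range(M)]
b = [random.randint(0, 1) for _ in range(M)]

# the two responses of each of the three servers, as in the table above
r1 = [xor_comb(a, x1) ^ x1[f], xor_comb(b, x1)]
r2 = [xor_comb(a, x2), xor_comb(b, x2) ^ x2[f]]
r3 = [xor_comb(a, x3), xor_comb(b, x3)]

rec1 = r3[0] ^ r2[0] ^ r1[0]    # = x^f_1
rec2 = r3[1] ^ r1[1] ^ r2[1]    # = x^f_2
assert rec1 == x1[f] and rec2 == x2[f]
```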
\section{Access Complexity Using Covering Codes}
\label{sec:cov}
A (binary) {\it covering code} ${\cal C}$ of length $\ell$ with {\it covering radius} $R$ is a set of vectors in $\{0,1\}^\ell$
such that for every vector ${\boldsymbol u}\in\{0,1\}^\ell$ there exists a codeword ${\boldsymbol c}\in{\cal C}$
with Hamming distance $d_H({\boldsymbol c},{\boldsymbol u})\leq R$. Covering codes were extensively studied and comprehensive information on them can be found in~\cite{ccbook}. For linear covering codes this property can be translated as follows.
\begin{proposition}{\rm\cite{ckms85}}
Let ${\cal C}$ be a linear code of length $\ell$, dimension $k$, redundancy $r=\ell-k$, and a parity check matrix~${\cal H}$ of
size $r\times \ell$. Then, ${\cal C}$ is a covering code with covering radius~$R$ if and only if for every column vector ${\boldsymbol s}\in\{0,1\}^r$
there exists a row vector ${\boldsymbol y} \in \{0,1\}^{\ell}$ of Hamming weight at most $R$, such that ${\cal H} \cdot {\boldsymbol y}^T = {\boldsymbol s}$.
\end{proposition}
The other way to explain the covering radius of a linear code is as follows.
A column vector ${\boldsymbol s}\in\{0,1\}^r$ is actually a syndrome corresponding
to a particular coset of the code~${\cal C}$ in~$\mathbb{F}_2^{\ell}$. In this coset
one can find a vector ${\boldsymbol y}\in\mathbb{F}_2^{\ell}$
(not necessarily unique) with minimum Hamming weight. The vector ${\boldsymbol y}$ is
known as a {\it coset leader} and its weight is known
as the {\it coset weight}. Then one can get the vector ${\boldsymbol s}$ by summing up
the columns of ${\cal H}$ indexed by the support set of ${\boldsymbol y}$.
Thus the covering radius of a linear code is exactly the maximum of all its
coset weights. Linear covering codes can be used to improve the access complexity as follows.
Suppose we have a database ${\boldsymbol x}$ which can be viewed as a $t \times r$ matrix, i.e.
${\boldsymbol x}=({\boldsymbol x}_1,\dots,{\boldsymbol x}_r)$, where each~${\boldsymbol x}_{i}$, ${1 \le i \le r}$,
is a column vector of length $t$.
Let ${\cal C}$ be a linear code of length $\ell$, dimension $k$, redundancy $r=\ell-k$,
covering radius~$R$ and an $r \times \ell$ parity check matrix ${\cal H}=[{\boldsymbol h}_1,\dots,{\boldsymbol h}_{\ell}]$.
Each server stores the database ${\boldsymbol x}$ encoded by the columns of ${\cal H}$.
That is, the server stores $\ell$ column vectors ${\boldsymbol z}_i={\boldsymbol x} \cdot {\boldsymbol h}_i$
for $1\leq i\leq \ell$. In other words, ${\boldsymbol z}_i$ is a linear combination of the files (column vectors) of the database ${\boldsymbol x}$,
with coefficients taken from ${\boldsymbol h}_i$. The user chooses
an arbitrary binary column vector ${\boldsymbol s}=({\boldsymbol s}_1,\ldots,{\boldsymbol s}_r)^T$ as the query to
the $j$-th server, and wants to retrieve from the $j$-th server the vector ${\boldsymbol x} \cdot {\boldsymbol s}$.
To compute ${\boldsymbol x} \cdot {\boldsymbol s}$, the server first finds the coset
leader~${\boldsymbol y}$ such that ${{\cal H} \cdot {\boldsymbol y}^T={\boldsymbol s}}$. Then computing ${\boldsymbol x} \cdot {\boldsymbol s}$ is equivalent to
$$
{\boldsymbol x} \cdot {\boldsymbol s} = {\boldsymbol x} \cdot ({\cal H} \cdot {\boldsymbol y}^T) ={\boldsymbol x} \cdot \left( \sum_{i:y_i=1} {\boldsymbol h}_i \right)= \sum_{i:y_i=1} {\boldsymbol z}_i.
$$
Since the Hamming weight of the coset leader ${\boldsymbol y}$ is at most~$R$, it follows that
we only need to access at most $R$ columns of ${\cal H}$ to compute ${\boldsymbol x} \cdot {\boldsymbol s}$. Moreover, ${\cal H}$ can be chosen in the form ${\cal H}=[I_r~|~A_{r\times(\ell-r)}]$ and thus we can always have a systematic form of the original data.
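The procedure can be made concrete with the $[7,4]$ Hamming code, whose covering radius is $R=1$: every nonzero syndrome equals a single column of ${\cal H}$, so any query is answered by reading at most one stored substring. The following sketch (our illustration with $r=3$ stored strings modeled as integers) builds the coset-leader table and checks the identity ${\boldsymbol x}\cdot{\boldsymbol s}=\sum_{i:y_i=1}{\boldsymbol z}_i$.

```python
from itertools import product

# parity-check matrix of the [7,4] Hamming code: column j is j in binary
H = [[(j >> b) & 1 for j in range(1, 8)] for b in range(2, -1, -1)]  # 3 x 7

def syndrome(H, y):
    return tuple(sum(h * v for h, v in zip(row, y)) % 2 for row in H)

# precompute a minimum-weight coset leader for every syndrome
leader = {}
for y in product([0, 1], repeat=7):
    s = syndrome(H, y)
    if s not in leader or sum(y) < sum(leader[s]):
        leader[s] = y
assert max(sum(y) for y in leader.values()) == 1   # covering radius R = 1

# database: r = 3 column vectors (as integers); server stores z_i = x . h_i
x = [0b1011, 0b0110, 0b1101]
z = [0] * 7
for i in range(7):
    acc = 0
    for r in range(3):
        if H[r][i]:
            acc ^= x[r]
    z[i] = acc

def answer(s):
    """Compute x . s by reading at most R = 1 stored substrings."""
    y = leader[tuple(s)]
    resp, reads = 0, 0
    for i in range(7):
        if y[i]:
            resp ^= z[i]
            reads += 1
    return resp, reads

for s in product([0, 1], repeat=3):
    resp, reads = answer(s)
    direct = 0
    for r in range(3):
        if s[r]:
            direct ^= x[r]
    assert resp == direct and reads <= 1
```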
The asymptotic connection between the length $\ell$, covering radius $R$, and the
dimension $k$ of the linear covering code can be roughly estimated by the sphere-covering bound
$$2^{k}\cdot 2^{H(R/\ell)\ell} \approx 2^\ell,$$
or
$$\frac{k}{\ell}+H(R/\ell) = 1,$$
so $H(R/\ell) = 1-\frac{k}{\ell} = \frac{r}{\ell}$, where $H(\cdot)$ is the binary entropy function.
By setting the covering radius to be $R=\alpha r$ and the size of the storage $\ell=\beta r$, we have
\begin{equation} \label{coveringcode}
H\left(\frac{\alpha}{\beta}\right) = \frac{1}{\beta}.
\end{equation}
Solving this equation, the relation between $\alpha$ and $\beta$ can be represented as a function $\alpha=f(\beta)$, which is depicted in Fig.~\ref{fig:PIR}.
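Equation \eqref{coveringcode} has no closed-form solution, but $f(\beta)$ is easy to evaluate numerically, since $H$ is increasing on $[0,1/2]$. A small bisection sketch (our illustration):

```python
from math import log2

def Hbin(x):
    """Binary entropy; H(0) = H(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def f(beta):
    """Solve H(alpha/beta) = 1/beta for alpha, with alpha/beta in [0, 1/2]."""
    lo, hi = 0.0, 0.5                # bisect over the ratio u = alpha/beta
    for _ in range(60):
        mid = (lo + hi) / 2
        if Hbin(mid) < 1 / beta:
            lo = mid
        else:
            hi = mid
    return beta * lo

print(round(f(2), 4))   # approximately 0.2201
```

In particular $f(2)\approx 0.2201$, which is the source of the $0.22ML$ access figure quoted in Example 2.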
\medskip
\begin{figure}[h]
\centering
\vspace{-3ex}
\includegraphics[width=0.9\linewidth]{PIRfigupdated.pdf}%
\caption[Access vs. Storage]
{Access vs. Storage}\vspace{-3ex}
\label{fig:PIR}%
\end{figure}
\medskip
It is not an easy task to explicitly construct linear codes achieving the above sphere-covering bound.
However, the existence of such codes has been proved:
\begin{proposition}{\rm\cite{CF85}}
Let $0\le R\le \ell/2$. Then there exists an infinite sequence of linear codes $C_\ell$ of growing length $\ell$ with covering radius $R(C_\ell)\rightarrow R$ and code rate between $1-H(R/\ell)$ and $1-H(R/\ell)+O(\ell^{-1}\log_2\ell)$.
\end{proposition}
In a PIR scheme which does not take access complexity into consideration, the data
stored on a server can be usually represented as $r$ independent strings
of the same length and a query asks for a linear combination of these strings.
Using the covering code approach above, we may store a coded version of the data
instead of only storing the original form. The access complexity could be improved as follows.
\begin{theorem}
\label{thm:cov}
Suppose there exists a linear binary covering code with redundancy $r$, code length $\beta r$ and covering radius $\alpha r$. Given a set of $r$ independent strings $\{{\boldsymbol x}_1,\dots,{\boldsymbol x}_r\}$, a server can store a coded form of these strings as $\{{\boldsymbol z}_1,\dots,{\boldsymbol z}_{\beta r}\}$, such that computing any linear combination of $\{{\boldsymbol x}_1,\dots,{\boldsymbol x}_r\}$ requires only accessing at most $\alpha r$ substrings in $\{{\boldsymbol z}_1,\dots,{\boldsymbol z}_{\beta r}\}$. The asymptotic relation of $\alpha$ and $\beta$ is $H(\frac{\alpha}{\beta})=\frac{1}{\beta}$.
\end{theorem}
A further remark is that adding redundancies in the storage does not affect
the privacy of the original scheme. In essence the privacy is only related to the set of queries.
Finally, we note that the problem of reducing the access complexity when
replying to queries of the form mentioned in this section is not relevant only for PIR schemes.
The approach can be relevant for other models which require this or similar computation.
An example for such a problem was studied in~\cite{HBA98} for the partial-sum problem where the authors also used covering codes. However, since the computations involved integer numbers, the storage overhead was exponential with the number of items.
\section{PIR Rate vs. Access Complexity}\label{sec:main}
Now we begin to analyze the PIR rate and access complexity for two kinds of PIR schemes,
a scheme by Tajeddine et al.~\cite{TGE17} and a scheme
of Blackburn, Etzion and Paterson ({B-E-P} scheme)~\cite{BEP17}. Given $\epsilon$ indicating the
size of the storage space of each server, we first choose some $0 \le \pi \le \epsilon$ indicating
the size of the storage for PIR, i.e., the amount of storage of independent symbols.
Using this $\pi$ fraction of storage we implement a proper PIR scheme with high rate.
Then we analyze the total access complexity of this scheme by making use of the remaining $\epsilon-\pi$ fraction of the storage space.
\subsection{The scheme of Tajeddine et al.~\cite{TGE17}}
When $\pi=\frac{1}{K}$, $K<N$, the rate of the scheme of Tajeddine et al.~\cite{TGE17} achieves the upper bound $\Omega=\frac{N-K}{N}$ proposed by Chan et al. in~\cite{CHY15}. So we begin by analyzing how to improve the access complexity of this scheme using the covering code approach.
Recall the framework of the scheme. Let each file ${{\boldsymbol x}^m\in\mathbb{F}_2^L}$ be represented
in the form of a matrix ${\mathbf X}^m=\{{\boldsymbol x}^m_{i,j}:{1\le i \le N-K}, 1\le j \le K\}$, where
each ${\boldsymbol x}^m_{i,j}$ represents a binary substring of length $\frac{L}{K(N-K)}$. Let $\Lambda=[\lambda_1,\dots,\lambda _N]$ be a $K\times N$ generator matrix of the storage code. Then the $n$th server stores ${\mathbf X}^m\lambda_n$, which
are $N-K$ linearly independent substrings as functions of ${\boldsymbol x}^m$. The whole storage
on the $n$th server is thus a concatenation of altogether $M(N-K)$ linearly independent substrings.
Each server will receive $K$ queries, where each query asks for a certain linear combination of these $M(N-K)$ substrings.
We make use of the additional storage of size $(\epsilon-\frac{1}{K})ML$ bits on each server.
Select a covering code with redundancy $r=M(N-K)$ with code length $\beta r$ where $\beta=K\epsilon$.
Instead of storing the $r$ substrings of ${\boldsymbol y}_n$ in their original form,
the server stores $\beta r$ substrings according to the covering code approach.
Then by Theorem \ref{thm:cov}, to answer each query, the server only needs
to access at most $\alpha r=f(\beta)r$ substrings. Recall that each server
receives $K$ queries. Thus each server will access at most $\min\{f(\beta)rK,r\}$ substrings,
since accessing the $r$ linearly independent substrings is already enough for computing any linear combination.
Therefore, the total number of bits accessed by each server is $\min\{ f(K\epsilon)ML,\frac{ML}{K}\}$
and thus the total access complexity over all servers will be $\Delta=\min\{ Nf(K\epsilon), \frac{N}{K}\}$.
For example, take $N=10$ and $\epsilon=1$. We can take any $1\le K \le 9$ and apply the scheme of Tajeddine et al. The PIR rate and total access complexity are listed as follows, where $\Delta'=\frac{N}{K}$ corresponds to the total access complexity when there is no redundancy in each server.
\begin{center}
$\begin{array}{cccc}
K & \Omega=\frac{N-K}{N} & \Delta & \Delta'\\\hline
1 & 0.9 & 5.000 & 10.000\\\hline
2 & 0.8 & 2.201 & 5.000\\\hline
3 & 0.7 & 1.845 & 3.333\\\hline
4 & 0.6 & 1.668 & 2.500\\\hline
5 & 0.5 & 1.556 & 2.000\\\hline
6 & 0.4 & 1.477 & 1.667\\\hline
7 & 0.3 & 1.418 & 1.429\\\hline
8 & 0.2 & 1.250 & 1.250\\\hline
9 & 0.1 & 1.111 & 1.111\\\hline
\end{array}$
\end{center}
The table above indicates that the covering code approach indeed improves the total access complexity for $K\neq 8,9$.
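The $\Delta$ column can be reproduced numerically from $\Delta=\min\{Nf(K\epsilon),N/K\}$; the sketch below (ours, solving \eqref{coveringcode} by bisection) prints the three columns of the table.

```python
from math import log2

def Hbin(x):
    """Binary entropy; H(0) = H(1) = 0."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def f(beta):
    """Solve H(alpha/beta) = 1/beta for alpha, with alpha/beta in [0, 1/2]."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if Hbin(mid) < 1 / beta:
            lo = mid
        else:
            hi = mid
    return beta * lo

N, eps = 10, 1
for K in range(1, 10):
    delta = min(N * f(K * eps), N / K)   # total access complexity Delta
    print(K, round((N - K) / N, 1), round(delta, 3))
```

For $K=8,9$ the minimum is attained by $N/K$, i.e., reading the $r$ independent substrings directly, which matches the last two rows of the table.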
We further note that Tajeddine et al. mention that their scheme could be implemented
by cutting each file into $\frac{\text{l.c.m.}(K,N-K)}{K}\times K$ substrings instead
of $(N-K)\times K$, and correspondingly the number of subqueries for each server
is $\frac{\text{l.c.m}(K,N-K)}{N-K}$ instead of $K$. This modification allows us to further
improve the access complexity. Select a covering code with redundancy $r'=M\frac{\text{l.c.m.}(K,N-K)}{K}$ with
code length $\beta r'$ where $\beta=K\epsilon$. Then by Theorem \ref{thm:cov} each
query will access at most $\alpha r'=f(\beta)r'$ substrings. Each server
receives $\frac{\text{l.c.m}(K,N-K)}{N-K}$ queries and thus the number of bits to be
accessed on each server is at most $f(\beta)r'\frac{\text{l.c.m}(K,N-K)}{N-K}\times\frac{L}{K\times\frac{\text{l.c.m.}(K,N-K)}{K}}
=f(K\epsilon)ML\frac{\text{l.c.m.}(K,N-K)}{K(N-K)}.$
Therefore when $K$ and $N-K$ are not coprime, the total access complexity will be further improved as $\Delta=Nf(K\epsilon)\frac{\text{l.c.m.}(K,N-K)}{K(N-K)}=\frac{Nf(K\epsilon)}{\gcd(K,N-K)}$. Thus some of the results in the example above can be improved as follows.
\begin{center}
$\begin{array}{ccc}
K & \Omega=\frac{N-K}{N} & \Delta \\\hline
2 & 0.8 & 1.100 \\\hline
4 & 0.6 & 0.834 \\\hline
5 & 0.5 & 0.311 \\\hline
6 & 0.4 & 0.739 \\\hline
8 & 0.2 & 0.685 \\\hline
\end{array}$
\end{center}
In conclusion, applying the scheme of Tajeddine et al. results in several achievable 3-tuples as follows.
\begin{theorem}
In a distributed storage system consisting of $N$ servers, for every $1\le K <N$, $\epsilon\ge \frac{1}{K}$, the tuple $(\Omega=\frac{N-K}{N},\Delta=\min\{\frac{Nf(K\epsilon)}{\gcd(K,N-K)},N/K\},\epsilon)$ is achievable.
\end{theorem}
We close this subsection by discussing the possibility of further improving the access complexity for the scheme of Tajeddine et al.
Note that each server may receive multiple queries. The data accessed by a server when
responding to different queries may have certain overlap. Reconsidering Example 1, if we
have two queries, then the server may read only $\frac{3M}{4}$ files in total instead of
reading $\frac{M}{2}$ files twice. Further improving the access complexity for the scheme
of Tajeddine et al. (by taking advantage of possible overlap when reading multiple queries)
relies on a good solution to the coding theoretic problem presented in the next subsection.
\subsection{Generalized coset weights}
{Given a binary linear code, for every $\tau$ cosets of the code, choose one vector from each coset and consider the size of the union of their support sets.
The minimum of this size over all such choices is called the {\it $\tau$-coset weight} of these $\tau$ cosets. What is the maximum value of all $\tau$-coset weights? When $\tau=1$ this is the covering radius $R$ of the linear code. When $\tau \ge 2$, we would like to see weights smaller than $\tau R$.}
The $\tau$-coset weights (for covering) are akin to the generalized Hamming weights
(for distance) defined in~\cite{Wei91}, which have since been studied in hundreds of papers.
Let $[n,k,d]$ code denote a binary linear code of length $n$, dimension $k$, and minimum Hamming distance $d$.
\begin{lemma}
The $\tau$-coset weight of a code ${\cal C}$ is the minimum number $\ell$ of columns of the parity check matrix ${\cal H}$ of ${\cal C}$,
such that for every $\tau$ syndromes of ${\cal C}$ there exists a set of $\ell$ columns of ${\cal H}$
from which $\tau$ linear combinations form these $\tau$ syndromes.
\end{lemma}
\begin{theorem}
The $\tau$-coset weight of an $[n,k,d]$ code ${\cal C}$ is at most $n-k$ for each $\tau \geq 1$.
\end{theorem}
\begin{proof}
The parity check matrix ${\cal H}$ of ${\cal C}$ has $n-k$ linearly independent columns. A set of such $n-k$ columns
covers a word in each coset of ${\cal C}$.
\end{proof}
\begin{theorem}
The $\tau$-coset weight of the $[2^m-1,2^m -1 -m,3]$ Hamming code is $\tau$ for each $1 \leq \tau \leq m$.
\end{theorem}
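For small codes the $\tau$-coset weights can be computed by brute force. The sketch below (our illustration) enumerates all cosets of the $[7,4]$ Hamming code, represented as 7-bit integers, and confirms the theorem for $\tau=1,2$ (here $m=3$).

```python
from itertools import product

# parity-check matrix of the [7,4] Hamming code: column j is j in binary
H = [[(j >> b) & 1 for j in range(1, 8)] for b in range(2, -1, -1)]

def syndrome(y):
    """Syndrome of a 7-bit integer y, packed into a 3-bit integer."""
    s = 0
    for r in range(3):
        bit = 0
        for i in range(7):
            bit ^= H[r][i] & (y >> i)
        s = (s << 1) | (bit & 1)
    return s

cosets = {}
for y in range(128):
    cosets.setdefault(syndrome(y), []).append(y)

def union_weight(choice):
    u = 0
    for v in choice:
        u |= v                       # union of supports is bitwise OR
    return bin(u).count('1')

def tau_coset_weight(tau):
    """Max over tau-tuples of cosets of the min size of a union of supports."""
    worst = 0
    for synds in product(cosets, repeat=tau):
        best = min(union_weight(ch)
                   for ch in product(*(cosets[s] for s in synds)))
        worst = max(worst, best)
    return worst

assert tau_coset_weight(1) == 1      # the covering radius of the Hamming code
assert tau_coset_weight(2) == 2      # matches the theorem: tau-coset weight = tau
```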
\begin{theorem}
The $\tau$-coset weight of the $[2^m,2^m -1 -m,4]$ extended Hamming code is $\tau +1$ for each $1 \leq \tau \leq m$.
\end{theorem}
For many types of BCH codes with minimum distance $d$ and covering radius $R$ we have proved that
the 2-coset weight is smaller than $2R$. This was generalized in some cases for $\tau$-coset weights
with $\tau >2$. This and other related results will be considered in the full version of this paper.
\subsection{Using several parallel B-E-P schemes}
Consider the scheme of Tajeddine et al. when $K=1$, i.e., replicated databases. Each file ${\boldsymbol x}^m\in\mathbb{F}_2^L$ is divided into $N-1$ substrings ${\boldsymbol x}^m_1,\dots,{\boldsymbol x}^m_{N-1}$ of length $\frac{L}{N-1}$. Each server stores ${\boldsymbol y}=\{{\boldsymbol x}^1_1,\dots,{\boldsymbol x}^1_{N-1},{\boldsymbol x}^2_1,\dots,{\boldsymbol x}^2_{N-1},\dots,{\boldsymbol x}^M_1,\dots,{\boldsymbol x}^M_{N-1}\}$, altogether $(N-1)M$ substrings. A user chooses a random binary vector ${\boldsymbol v}$ of length $(N-1)M$. The $N$th server receives the query vector ${\boldsymbol v}$ and the $n$th server receives the query vector ${\boldsymbol v}+{\boldsymbol e}_{(f-1)(N-1)+n}$, $1\le n \le N-1$, where $f$ is the index of the desired file. Then from the response of the $n$th server and the $N$th server, the user retrieves the string ${\boldsymbol x}^f_n$, $1\le n \le N-1$.
The B-E-P scheme recently proposed by Blackburn, Etzion and Paterson suggests a different way, whose original motivation is to optimize the upload complexity of the query vectors. A user who wants to retrieve the file ${\boldsymbol x}^f$ chooses $M$ elements $z_1,\dots,z_M\in\mathbb{Z}_{N}$ uniformly and independently at random. The $n$th server receives $(b_{1n},\dots,b_{Mn})$ where $b_{fn}=z_f+n\pmod{N}$ and $b_{mn}=z_m$ for $m\neq f$ and then responds with $\sum_{m=1}^{M} {\boldsymbol x}^m_{b_{mn}}$, where ${\boldsymbol x}^m_{0}$ represents the all-zero vector.
The main difference is that a query in the former scheme asks for an arbitrary linear combination of all the $(N-1)M$ substrings while a query in the latter scheme asks for a linear combination with a restricted pattern, i.e., at most one substring from each file is involved in the linear combination. This restriction may allow for a better way to improve the access complexity than the covering code approach.
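The B-E-P retrieval step can be sketched as follows (our illustration over $\mathbb{F}_2$, with substrings modeled as 8-bit integers): the server $n_0$ with $z_f+n_0\equiv 0 \pmod N$ returns the baseline sum $\sum_{m\neq f}{\boldsymbol x}^m_{z_m}$, and XORing it with any other response isolates one substring of the desired file.

```python
import random

random.seed(3)
N, M, f = 4, 5, 1                    # N servers, M files, retrieve file f
# each file has N-1 substrings x[m][1..N-1]; x[m][0] is the all-zero string
x = [[0] + [random.getrandbits(8) for _ in range(N - 1)] for _ in range(M)]

z = [random.randrange(N) for _ in range(M)]    # the user's random choices

def query(n):
    """Query vector sent to server n (servers are numbered 1..N)."""
    return [(z[m] + n) % N if m == f else z[m] for m in range(M)]

def respond(q):
    r = 0
    for m in range(M):
        r ^= x[m][q[m]]
    return r

resp = {n: respond(query(n)) for n in range(1, N + 1)}

# the server n0 with (z_f + n0) % N == 0 returns the baseline sum
n0 = next(n for n in range(1, N + 1) if (z[f] + n) % N == 0)
recovered = {}
for n in range(1, N + 1):
    j = (z[f] + n) % N
    if j != 0:
        recovered[j] = resp[n] ^ resp[n0]      # isolates x^f_j
assert [recovered[j] for j in range(1, N)] == x[f][1:]
```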
\begin{example}
$N=3$, $M=3$. Consider the necessary amount of storage overhead for a PIR scheme with rate $2/3$ and total access complexity $1$.
The scheme of Tajeddine et al. gives an achievable tuple $(2/3,1,13/6)$, which
applies a covering code of length 13, redundancy 6 and covering radius 2 to store the six substrings
$\{{\boldsymbol x}^1_1,{\boldsymbol x}^1_2,{\boldsymbol x}^2_1,{\boldsymbol x}^2_2,{\boldsymbol x}^3_1,{\boldsymbol x}^3_2\}$. $\epsilon=13/6$ cannot
be improved for the scheme of Tajeddine et al. since 13 is the minimum length of a linear covering
code with redundancy 6 and covering radius 2~\cite[p. 202]{ccbook}.
However, if we use the B-E-P scheme, then each server can store the following 11 substrings:
$\{{\boldsymbol x}^1_1,{\boldsymbol x}^1_2,{\boldsymbol x}^2_1,{\boldsymbol x}^2_2,{\boldsymbol x}^3_1,{\boldsymbol x}^3_2,{\boldsymbol x}^1_1+{\boldsymbol x}^2_2,
{\boldsymbol x}^2_1+{\boldsymbol x}^3_2,{\boldsymbol x}^3_1+{\boldsymbol x}^1_2,{\boldsymbol x}^1_1+{\boldsymbol x}^2_1+{\boldsymbol x}^3_1,{\boldsymbol x}^1_2+{\boldsymbol x}^2_2+{\boldsymbol x}^3_2\}$.
This already guarantees that we only need to read at most two substrings for any
query in the B-E-P scheme. Thus the B-E-P scheme gives an achievable tuple (2/3,1,11/6).
\end{example}
In the previous subsections, on each server $\frac{1}{K}ML$ bits are allocated as the storage for PIR and the remaining ${(\epsilon-\frac{1}{K})ML}$ bits are designed for improving the access complexity. Now consider the case when $\frac{p}{q}ML$ bits are allocated for the PIR scheme and the remaining $(\epsilon-\frac{p}{q})ML$ bits are used for improving the access complexity, where $\frac{1}{N}\le\frac{p}{q}\le \epsilon$ cannot be simplified to the form $\frac{1}{K}$. Once we have a proper PIR scheme with good rate in this setup, the idea for improving the access complexity will be exactly the aforementioned approach.
As shown by~\cite{CHY15}, such a PIR scheme will have rate at most $\frac{N-\frac{q}{p}}{N}$. This model was later named storage-constrained PIR and bounds on the capacity were considered in \cite{AKT18}. Particularly, when further restricting the $\frac{p}{q}ML$ bits of storage to be uncoded, \cite{AKT18} determined the exact capacity, which is achieved by a memory sharing method combined with the capacity-achieving schemes of \cite{SJ17B}. Since the B-E-P scheme has asymptotically optimal rate, it is natural to consider the memory sharing method using several parallel B-E-P schemes.
Suppose that the file size is $L=t\ell$ and we divide each file ${\boldsymbol x}^m$ into $t$ parts of equal size $\ell$, $\{{\boldsymbol x}^m_1,\dots,{\boldsymbol x}^m_{t}\}$. We choose some $d$ parts from each file and consider them as a new subdatabase, say $\{{\boldsymbol x}^m_j:1\le m \le M, 1\le j \le d\}$. Then we may perform a B-E-P subscheme for this subdatabase on some $d+1$ servers.
This subscheme occupies $\frac{d}{t}ML$ bits on each of the $d+1$ servers involved and contributes $\frac{d+1}{t}L$ bits to the download cost. A combined PIR scheme, based on the memory sharing method, is obtained by dividing the database into several subdatabases and then implementing several parallel B-E-P subschemes, each on a certain subset of servers. Note that a sufficiently large $t$ and a proper way to allocate servers for each subscheme (say, by permutations) will guarantee that each server stores exactly $\frac{p}{q}ML$ bits. Since in this scheme each server has uncoded storage, as suggested by \cite{AKT18}, to achieve the asymptotically optimal rate, each subscheme should be implemented on either $\lceil \frac{Np}{q} \rceil$ or $\lfloor \frac{Np}{q} \rfloor$ servers.
The rate of this scheme can be calculated as follows. Altogether a proportion $\eta$ of the database is involved in subschemes on $\lceil \frac{Np}{q} \rceil$ servers and the remaining proportion $1-\eta$ is involved in subschemes on $\lfloor \frac{Np}{q} \rfloor$ servers, where $\eta \lceil \frac{Np}{q} \rceil + (1-\eta) \lfloor \frac{Np}{q} \rfloor = Np/q$. The total download is then
$$ L \cdot \big( \eta \frac{\lceil \frac{Np}{q} \rceil}{\lceil \frac{Np}{q} \rceil-1} + (1-\eta) \frac{\lfloor \frac{Np}{q} \rfloor}{\lfloor \frac{Np}{q} \rfloor-1} \big).$$
Finally, the remaining $(\epsilon-\frac{p}{q})ML$ bits on each server are used for improving the access complexity via the aforementioned covering code approach.
\begin{theorem}
In a distributed storage system consisting of $N$ servers, for every rational number $\frac{1}{N}\le\frac{p}{q}\le \epsilon$, the tuple $(\Omega,\frac{Np}{q}f(\epsilon\frac{q}{p}),\epsilon)$ is achievable, where
$$\Omega= \big( \eta \frac{\lceil \frac{Np}{q} \rceil}{\lceil \frac{Np}{q} \rceil-1} + (1-\eta) \frac{\lfloor \frac{Np}{q} \rfloor}{\lfloor \frac{Np}{q} \rfloor-1} \big)^{-1}.$$
\end{theorem}
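The rate in the theorem is straightforward to evaluate; the sketch below (ours, using exact rational arithmetic and assuming $\lfloor Np/q\rfloor\ge 2$, so that every subscheme runs on at least two servers) computes $\eta$ and $\Omega$.

```python
from fractions import Fraction
from math import floor, ceil

def rate(N, p, q):
    """Rate of the memory-sharing combination of B-E-P subschemes.
    Assumes floor(Np/q) >= 2 so every subscheme has at least two servers."""
    t = Fraction(N * p, q)               # average number of servers, Np/q
    lo, hi = floor(t), ceil(t)
    assert lo >= 2
    if lo == hi:                         # Np/q integral: a single scheme size
        return 1 - Fraction(1, lo)       # download lo/(lo-1) per file
    eta = t - lo                         # eta*hi + (1-eta)*lo = Np/q
    download = eta * Fraction(hi, hi - 1) + (1 - eta) * Fraction(lo, lo - 1)
    return 1 / download

print(rate(4, 5, 8))                     # Np/q = 5/2, eta = 1/2, rate 4/7
```

For instance, $N=10$, $p/q=1/2$ gives rate $4/5$, consistent with the rate $\frac{N-K}{N}$ of the scheme of Tajeddine et al. with $K=2$.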
\section{Conclusion}\label{sec:concl}
In this paper we took into consideration the access complexity of a PIR scheme. PIR schemes with low access complexity reduce the amount of data that must be read throughout the scheme and are therefore better suited for practical use.
A few methods were considered, especially ones which use covering codes.
Some of these codes were applied on known schemes. It should be noted that these methods
are not useful for all the known schemes, e.g. the one of Sun and Jafar~\cite{SJ17B}.
Finally, the problem of generalized coset weights, which will be helpful when there are multiple
queries on each server, has independent interest in coding theory.
\vspace{-1ex}
\section*{Acknowledgment}
E. Yaakobi and Y. Zhang were supported in part by the ISF grant 1817/18.
T. Etzion and Y. Zhang were supported in part by the BSF-NSF grant 2016692.
Y. Zhang was also supported in part by a Technion Fellowship.
\vspace{-1ex}
Let $s\geq 1$ be an integer.
For an $s$-tuple $\ul a=(a_1,\ldots,a_s)\in\Z^s_p$ of $p$-adic integers,
let
\[
F_{\ul a}(t)={}_sF_{s-1}\left({a_1,\ldots,a_s\atop 1,\ldots, 1}:t\right)
=\sum_{n=0}^\infty\frac{(a_1)_n}{n!}\cdots\frac{(a_s)_n}{n!}t^n
\]
be the hypergeometric power series where
$(\alpha)_n=\alpha(\alpha+1)\cdots(\alpha+n-1)$ denotes the Pochhammer symbol.
In his seminal paper \cite{Dwork-p-cycle}, B. Dwork discovered that
certain ratios of hypergeometric power series are uniform limits of rational functions.
We call his functions {\it Dwork's $p$-adic hypergeometric
functions}.
Let $\alpha'$ denote the Dwork prime, which is defined to be $(\alpha+l)/p$ where $l\in
\{0,1,\ldots,p-1\}$ is the unique integer such that $\alpha+l\equiv0$ mod $p$.
The $i$-th Dwork prime $a^{(i)}$ is defined by $a^{(i+1)}=(a^{(i)})'$ and $a^{(0)}=a$.
Write $\ul a'=(a'_1,\ldots,a'_s)$ and $\ul a^{(i)}=(a^{(i)}_1,\ldots,a^{(i)}_s)$.
Then Dwork's $p$-adic hypergeometric function is defined
to be
\begin{equation}\label{Dwork}
\cF^{\mathrm{Dw}}_{\ul a}(t)=F_{\ul a}(t)/F_{\ul a'}(t^p).
\end{equation}
This is a convergent function in the sense of Krasner.
More precisely,
Dwork proved the congruence relations (\cite[p.41, Theorem 3]{Dwork-p-cycle})
\begin{equation}\label{Dwork-congruence}
\cF^{\mathrm{Dw}}_{\ul a}(t)
\equiv \frac{[F_{\ul a}(t)]_{<p^n}}{[F_{\ul a'}(t^p)]_{<p^n}}\mod p^n\Z_p[[t]]
\end{equation}
where for a power series $f(t)=\sum c_nt^n$, we write
$[f(t)]_{<m}:=\sum_{n<m}c_nt^n$ the truncated polynomial.
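The congruence can be probed mechanically in a simple instance (our illustration): for $s=1$, $\ul a=(1/2)$ and $p=7$, the Dwork prime of $1/2$ is $(1/2+3)/7=1/2$ again, and since $[F_{\ul a'}(t^p)]_{<p}=1$, the case $n=1$ amounts to the coefficientwise identity $F_{\ul a}(t)\equiv[F_{\ul a}(t)]_{<p}\,F_{\ul a'}(t^p) \bmod p$.

```python
from fractions import Fraction

p = 7
a = Fraction(1, 2)   # the Dwork prime of 1/2 at p = 7 is (1/2 + 3)/7 = 1/2

def coeffs(a, n_max):
    """Coefficients (a)_n / n! of F_a(t) for s = 1."""
    c, out = Fraction(1), []
    for n in range(n_max):
        out.append(c)
        c = c * (a + n) / (n + 1)    # (a)_{n+1}/(n+1)! from (a)_n/n!
    return out

def mod_p(x):
    """Reduce a p-integral rational modulo p."""
    assert x.denominator % p != 0
    return x.numerator * pow(x.denominator, -1, p) % p

c = coeffs(a, 2 * p)
# check c_n = sum over i + p*j = n, i < p of c_i * c'_j (mod p), with a' = a
for n in range(2 * p):
    rhs = sum(mod_p(c[i]) * mod_p(c[(n - i) // p])
              for i in range(min(n, p - 1) + 1) if (n - i) % p == 0) % p
    assert mod_p(c[n]) == rhs
```

For $n<p$ the identity is trivial; for $p\le n<2p$ it reduces to $c_n\equiv c_{n-p}c_1 \bmod p$, which is a genuinely nontrivial check.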
This implies that
$\cF^{\mathrm{Dw}}_{\ul a}(t)$ is a convergent function.
More precisely, for $f(t)\in\Z_p[t]$,
let $\ol{f(t)}\in \F_p[t]$ denote the reduction modulo $p$.
Let $I\subset \Z_{\geq0}$ be a finite subset
such that $\{\ol{[F_{\ul a^{(i)}}(t)]}_{<p}\}_{i\in \Z_{\geq0}}
=\{\ol{[F_{\ul a^{(i)}}(t)]}_{<p}\}_{i\in I}$
as sets. Put $h(t)=\prod_{i\in I}[F_{\ul a^{(i)}}(t)]_{<p}$.
Then \eqref{Dwork-congruence} implies
\[
\cF^{\mathrm{Dw}}_{\ul a}(t)
\in\Z_p\langle t,h(t)^{-1}\rangle:=\varprojlim_{n\geq1}(\Z_p/p^n\Z_p[t,h(t)^{-1}]),
\]
and hence that $\cF^{\mathrm{Dw}}_{\ul a}(t)$ is a convergent function on a domain
$\{z\in\C_p\mid |h(z)|_p=1\}$.
Dwork showed a geometric aspect of his $p$-adic hypergeometric functions.
Let $E$ be the elliptic curve over $\F_p$ defined by a Weierstrass equation
$y^2=x(1-x)(1- a x)$ with $a\in\F_p\setminus\{0,1\}$.
Suppose that $E$ is ordinary, which means that $p\nmid a_p$
where
$T^2-a_pT+p$ is the characteristic polynomial of the Frobenius on $E$.
Let $\alpha_E$ be the root of $T^2-a_pT+p$ in $\Z_p$ such that $|\alpha_E|_p=1$,
which is often referred to as the unit root.
Then Dwork proved a formula
\[
\alpha_E=(-1)^{\frac{p-1}{2}}\cF^{\mathrm{Dw}}_{\frac{1}{2},\frac{1}{2}}(\wh a)
\]
where $\wh a\in \Z_p^\times$ is the Teichm\"uller lift of $a\in \F_p^\times$.
This is now called Dwork's unit root formula (cf. \cite[\S 7]{Put}).
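The unit root formula is easy to test modulo $p$ in a small example. The sketch below is illustrative only: it counts points naively for $p=7$, $a=3$ (an ordinary case), uses that the unit root is $\equiv a_p$ mod $p$, and compares with $(-1)^{(p-1)/2}[F_{\frac12,\frac12}(t)]_{<p}$ evaluated at $a$, the coefficients being $\binom{2n}{n}^2/16^n$ since $(1/2)_n/n!=\binom{2n}{n}/4^n$.

```python
from math import comb

p, a = 7, 3

# Point count of E: y^2 = x(1-x)(1-ax) over F_p (affine points + infinity).
squares = {(y * y) % p for y in range(p)}
npts = 1                                   # the point at infinity
for x in range(p):
    f = x * (1 - x) * (1 - a * x) % p
    npts += 1 if f == 0 else (2 if f in squares else 0)
ap = p + 1 - npts                          # trace of Frobenius
ordinary = ap % p != 0                     # ordinary iff p does not divide a_p

# The unit root is a p-adic unit and the other root has valuation 1,
# so the unit root is ≡ a_p (mod p).  Compare with Dwork's formula mod p.
inv16 = pow(16, -1, p)
val = sum(comb(2 * n, n) ** 2 * pow(inv16, n, p) * pow(a, n, p) for n in range(p)) % p
dwork_side = (-1) ** ((p - 1) // 2) * val % p
```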
\medskip
In this paper, we introduce new $p$-adic hypergeometric functions, which we call
the {\it $p$-adic hypergeometric functions of logarithmic type}.
We shall first introduce the $p$-adic polygamma functions $\psi^{(r)}_p(z)$
in \S \ref{poly-sect}, which are slight modifications of Diamond's $p$-adic
polygamma functions \cite{Diamond}.
Let $W=W(\ol\F_p)$ be the Witt ring of $\ol\F_p$, and $K=\Frac W$ the fraction field.
Let $\sigma$ be a $p$-th Frobenius on $W[[t]]$
given by $\sigma(t)=ct^p$
with $c\in 1+pW$. Let $\log:\C^\times_p\to\C_p$ be the Iwasawa logarithmic function.
Let
\[
G_{\ul a}(t):=\psi_p(a_1)+\cdots+\psi_p(a_s)+s\gamma_p
-p^{-1}\log(c)+
\int_0^t(F_{\underline{a}}(t)-F_{\ul a'}(t^\sigma))\frac{dt}{t}
\]
be a power series
where
$\int_0^t(-)\frac{dt}{t}$ means the
operator which sends $\sum_{n\geq 1}a_nt^n$ to $\sum_{n\geq 1}\frac{a_n}{n}t^n$.
It is not hard to show $G_{\ul a}(t)\in W[[t]]$ (Lemma \ref{G-int}).
Then our new function is defined to be a ratio
\[
\cF^{(\sigma)}_{\underline{a}}(t):=
G_{\ul a}(t)/F_{\ul a}(t).
\]
Notice that $\cF^{(\sigma)}_{\ul a}(t)$ is also $p$-adically continuous with respect to
$\ul a$.
In the case $a_1=\cdots=a_s=c=1$, one has $\cF^{(\sigma)}_{\ul a}(t)
=(1-t)\ln^{(p)}_1(t)$, the $p$-adic logarithm. In this sense, we can regard
$\cF^{(\sigma)}_{\ul a}(t)$ as a deformation of the $p$-adic logarithm.
The first main result of this paper
is the congruence relations for $\cF^{(\sigma)}_{\ul a}(t)$, which are
similar to Dwork's.
\begin{thm}[Theorem \ref{cong-thm}]
Suppose that $a_i\not\in\Z_{\leq 0}$ for all $i$. Then
\[
\cF^{(\sigma)}_{\underline{a}}(t)\equiv \frac{[G_{\ul a}(t)]_{<p^n}}{[F_{\ul a}(t)]_{<p^n}}
\mod p^nW[[t]]
\]
if $c\in 1+2pW$. If $p=2$ and $c\in 1+2W$, then the congruence holds modulo $p^{n-1}$.
\end{thm}
Thanks to this,
$\cF^{(\sigma)}_{\ul a}(t)$ is a convergent function,
\[
\cF^{(\sigma)}_{\ul a}(t)
\in W\langle t,h(t)^{-1}\rangle
\]
and then the special value at $t=\alpha$ is defined for $\alpha\in\C_p$ such that
$|h(\alpha)|_p=1$.
\medskip
The second main result
is to give a geometric aspect of our $\cF^{(\sigma)}_{\ul a}(t)$,
which concerns the {\it syntomic regulator map}.
Let $X$ be a smooth variety over $W$.
We denote by $H^\bullet_{\syn}(X,\Q_p(j))$ the rigid syntomic cohomology groups of
Besser \cite{Be1} (see also \cite[1B]{NN}), which agree with the syntomic cohomology
groups of Fontaine-Messing \cite{FM} (see also \cite{KaV}) when $X$ is projective.
Let $K_i(X)$ be Quillen's algebraic $K$-groups.
Then there is the syntomic regulator map
\[
\reg_\syn^{i,j}:K_i(X)\lra H^{2j-i}_\syn(X,\Q_p(j))
\]
for each $i,j\geq0$ (\cite[Theorem 7.5]{Be1}, \cite[Theorem A]{NN}).
We shall be concerned only with $\reg_\syn^{2,2}$, which we
abbreviate to $\reg_\syn$ in this paper.
Note that there is the natural isomorphism
$H^2_\syn(X,\Q_p(2))\cong H^1_\dR(X_K/K)$ where $K=\Frac W$ is the fraction
field and $X_K:=X\times_WK$.
Our second main result is to relate $\cF^{(\sigma)}_{a,b}(t)$ with
the syntomic regulator of a certain element of $K_2$ of a
{\it hypergeometric curve}, which is defined
in the following way.
Let $N,M\geq 2$ be integers.
Let $p$ be a prime such that $p\not\hspace{-0.5mm}|NM$.
Let $\alpha\in W$ satisfy $\alpha\not\equiv 0,1$ mod $p$.
Then we define a hypergeometric curve $X_\alpha$ to
be a projective smooth scheme over $W$ defined by
a bihomogeneous equation
\[
(X_0^N-X_1^N)(Y_0^M-Y_1^M)=\alpha X^N_0Y_0^M
\]
in $\P^1_W(X_0,X_1)\times\P^1_W(Y_0,Y_1)$ (\S \ref{fermat-sect}).
We put $x=X_1/X_0$ and $y=Y_1/Y_0$.
\begin{thm}[Corollary \ref{main-thm4}]\label{intro-thm1}
Let $p>2$ be a prime such that $p\not|NM$.
Let
\[
\xi=\left\{\frac{x-1}{x-\nu_1},\frac{y-1}{y-\nu_2}\right\}\in K_2(X_\alpha)\ot\Q
\]
for $(\nu_1,\nu_2)\in\mu_N(K)\times\mu_M(K)$ where $\mu_m(K)$ denotes
the group of $m$-th roots of unity in $K$ (cf. \eqref{m-fermat-eq1}).
Let
\[
Q: H^1_\dR(X_{\alpha,K}/K)\ot H^1_\dR(X_{\alpha,K}/K)\lra
H^2_\dR(X_{\alpha,K}/K)\cong K
\]
be the cup-product pairing.
Suppose that
$h(\alpha)\not\equiv0$ mod $p$ where $h(t)$ is as above.
For a pair of integers $(i,j)$ such that $0<i<N$ and $0<j<M$, we put
$\omega_{i,j}:=Nx^{i-1}y^{j-M}/(1-x^N)\,dx$, a regular $1$-form (cf. \eqref{fermat-form}),
and
$e_{i,j}^{\text{\rm unit}}$ the unit root vectors, which are explicitly given in Theorem \ref{uroot-thm}.
Then we have
\[
Q(\reg_\syn(\xi), e_{N-i,M-j}^{\text{\rm unit}})
=N^{-1}M^{-1}
(1-\nu^{-i}_1)(1-\nu^{-j}_2)
\cF_{a_i,b_j}^{(\sigma_\alpha)}(\alpha)
Q(\omega_{i,j}, e_{N-i,M-j}^{\text{\rm unit}}).
\]
\end{thm}
Similar results hold for other curves (see \S \ref{gauss-sect} -- \S \ref{elliptic-sect}).
A more striking application of our new function $\cF^{(\sigma)}_{\ul a}(t)$
is that one can describe
the syntomic regulators of the Ross symbols in $K_2$ of the Fermat curves.
\begin{thm}[Theorem \ref{fermat-main2}]
Let $F$ be the Fermat curve over $K$ defined by an affine equation $z^N+w^M=1$
with $N|(p-1)$ and $M|(p-1)$.
Let $\{1-z,1-w\}\in K_2(F)$ be the Ross symbol \cite{ross2}.
Let
\[
\reg_\syn(\{1-z,1-w\})=
\sum_{(i,j)\in I} A_{i,j}M^{-1}z^{i-1}w^{j-M}dz\in H^1_\dR(F/K)
\]
where the notation is as in the beginning of \S \ref{fermatcurve-sect}.
Then \[A_{i,j}=\cF_{a_i,b_j}^{(\sigma)}(1)\] for $(i,j)$ such that $i/N+j/M<1$.
\end{thm}
As far as the author knows, this is the first explicit description of the Ross symbol
in $p$-adic cohomology.
\medskip
We hope that our results will provide
a new direction in the study of the $p$-adic Beilinson conjecture
by Perrin-Riou \cite[4.2.2]{Perrin-Riou} (see also \cite[Conj.2.7]{Colmez}),
especially for $K_2$ of elliptic curves.
Let $E$ be an elliptic curve over $\Q$.
Let $p$ be a prime at which $E$
has good ordinary reduction.
Let $\alpha_{E,p}$ be the unit root of the reduction $\ol E$ at $p$,
and $e_\unit\in H^1_\dR(E/\Q_p)$ the eigenvector with eigenvalue $\alpha_{E,p}$.
Let $\omega_E\in \vg(E,\Omega^1_{E/\Q})$ be a regular $1$-form.
Let $L_p(E,\chi,s)$ be the $p$-adic $L$-function
defined by Mazur and Swinnerton-Dyer \cite{MS}.
Then as a consequence of the $p$-adic Beilinson conjecture
for elliptic curves,
one can expect
that there is an element $\xi\in K_2(E)$ which is integral in the sense of \cite{Scholl}
such that
\[
(1-p\alpha^{-1}_{E,p})\frac{Q(\reg_\syn(\xi),e_\unit)}{
Q(\omega_E,e_\unit)}\sim_{\Q^\times}L_p(E,\omega^{-1},0)
\]
where $x\sim_{\Q^\times}y$ means $xy\ne0$ and $x/y\in \Q^\times$.
We also refer to \cite[Conjecture 3.3]{ABC} for a more precise statement.
According to our main results, we can replace the left hand side with the special values
of the $p$-adic hypergeometric functions of logarithmic type.
For example, let $E_a$ be the elliptic curve defined by $y^2=x(1-x)(1-(1-a)x)$ with $a\in \Q\setminus\{0,1\}$, and let $p>3$ be a prime at which $E_a$ has good ordinary reduction.
Then one predicts
\[
(1-p\alpha^{-1}_{E_a,p})
\cF_{\frac{1}{2},\frac{1}{2}}^{(\sigma_a)}(a)\sim_{\Q^\times}
L_p(E_a,\omega^{-1},0)
\]
if $a=-1,\pm2,\pm4,\pm8,\pm16,
\pm\frac{1}{2},
\pm\frac{1}{8},\pm\frac{1}{4}, \pm\frac{1}{16}$ (Conjecture \ref{ell1-conj}).
See \S \ref{RZ-sect} for other cases.
The author has no idea how to attack the question in general, though
we have one example (the proof relies on Brunault's appendix in \cite{ABC}).
\begin{thm}[Theorem \ref{brunault}]\label{F. Brunault}
$(1-p\alpha^{-1}_{E_4,p})\cF_{\frac{1}{2},\frac{1}{2}}^{(\sigma_4)}(4)=-
L_p(E_4,\omega^{-1},0).$
\end{thm}
We note that this is a $p$-adic counterpart of
a formula of Rogers and Zudilin (\cite[Theorem 2, p.399 and (6), p.386]{RZ})
\[
2L'(E_4,0)=
\mathrm{Re}\left[\log 4-
{}_4F_3\left({\frac{3}{2},\frac{3}{2},1,1\atop 2,2,2};4\right)\right]
\left(={}_3F_2\left({\frac{1}{2},\frac{1}{2},\frac{1}{2}\atop \frac{3}{2},1};
\frac{1}{4}\right)\right).
\]
The conjectures in \S \ref{RZ-sect} give the first formulation of
the $p$-adic counterparts
of Rogers-Zudilin type formulas.
\medskip
This paper is organized as follows.
\S \ref{poly-sect} is the preliminary section on
Diamond's $p$-adic polygamma functions.
More precisely, we shall give a slight modification of Diamond's polygamma
functions (though it might be known to experts).
We give a self-contained exposition because
the author could not find a suitable reference, especially
concerning our modified functions.
In \S \ref{pHGlog-sect}, we introduce the $p$-adic hypergeometric functions of
logarithmic type, and prove the congruence relations.
In \S \ref{reg-sect}, we show that our new $p$-adic hypergeometric functions appear in
the syntomic regulators of the hypergeometric curves.
Finally we discuss the $p$-adic Beilinson conjecture for $K_2$ of elliptic curves
in \S \ref{weakB-sect}.
\medskip
\noindent{\bf Acknowledgement.}
The author would like to express his sincere gratitude
to Professor Masataka Chida for the stimulating discussion on
the $p$-adic Beilinson conjecture. The discussion with him is the origin of this work.
He is very grateful to Professor Fran\c{c}ois Brunault
for many comments on his paper \cite{regexp}, and for help with the proof
of Theorem \ref{F. Brunault}.
\medskip
\noindent{\bf Notation.}
We denote by $\mu_n(K)$ the group of $n$-th roots of unity in a field $K$. We write $\mu_\infty(K)=\cup_{n\geq 1}\mu_n(K)$.
If there is no fear of confusion, we drop ``$K$'' and simply write $\mu_n$.
For a power series $f(t)=\sum_{i=0}^\infty a_it^i\in R[[t]]$ with coefficients in a
commutative ring $R$,
we write
$f(t)_{<n}:=\sum_{i=0}^{n-1}a_it^i$ for the truncated polynomial.
\section{$p$-adic polygamma functions}\label{poly-sect}
The complex analytic polygamma functions are the $r$-th derivatives
\[
\psi^{(r)}(z):=\frac{d^r}{dz^r}\left(\frac{\Gamma'(z)}{\Gamma(z)}\right), \quad r\in\Z_{\geq0}.
\]
In his paper \cite{Diamond}, Jack Diamond gave $p$-adic counterparts $\psi^{(r)}_{D,p}(z)$ of the polygamma functions, defined in the following way:
\begin{equation}\label{diamond0}
\psi^{(0)}_{D,p}(z)
=\lim_{s\to\infty}\frac{1}{p^s}\sum_{n=0}^{p^s-1}\log(z+n),
\end{equation}
\begin{equation}\label{diamond1}
\psi^{(r)}_{D,p}(z)=(-1)^{r+1}r!\lim_{s\to\infty}\frac{1}{p^s}\sum_{n=0}^{p^s-1}\frac{1}{(z+n)^r},
\quad r\geq 1,
\end{equation}
where $\log:\C_p^\times\to\C_p$ is the Iwasawa logarithmic function, which is characterized as a continuous
homomorphism satisfying
$\log(p)=0$ and
\[
\log(z)=-\sum_{n=1}^\infty\frac{(1-z)^n}{n},\quad |z-1|_p<1.
\]
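For principal units the defining series already determines $\log$. The sketch below is purely illustrative (exact arithmetic with Python's \texttt{fractions}; function names are ours): it evaluates the series modulo $5^4$ and checks the homomorphism property $\log(uv)=\log(u)+\log(v)$.

```python
from fractions import Fraction

def iwasawa_log(z, p, s, terms=60):
    """log(z) mod p^s for a principal unit z ≡ 1 (mod p), from the series
    -sum_{n>=1} (1-z)^n / n; each term is p-integral and the tail beyond
    `terms` has valuation well above s."""
    assert (z - 1) % p == 0
    total = sum(-Fraction((1 - z) ** n_, n_) for n_ in range(1, terms))
    return total.numerator * pow(total.denominator, -1, p ** s) % p ** s

p, s = 5, 4
L6, L11, L66 = (iwasawa_log(z, p, s) for z in (6, 11, 66))
additive = (L6 + L11 - L66) % p ** s == 0      # log(6 * 11) = log(6) + log(11)
```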
It should be noticed that the series \eqref{diamond0} and \eqref{diamond1}
converge only when $z\not\in \Z_p$, and hence $\psi^{(r)}_{D,p}(z)$
turn out to be locally analytic functions on $\C_p\setminus\Z_p$.
This causes inconvenience in our discussion.
In this section we give a continuous function $\psi_p^{(r)}(z)$ on $\Z_p$
which is a slight {\it modification} of $\psi^{(r)}_{D,p}(z)$.
See \S \ref{polygamma-sect} for the definition and also \S \ref{measure-sect}
for an alternative definition in terms of $p$-adic measure.
\subsection{$p$-adic polylogarithmic functions}
Let $x$ be an indeterminate.
For an integer $r\in\Z$, the $r$-th $p$-adic polylogarithmic function $\ln_r^{(p)}(x)$
is defined as a formal power series
\[
\ln_r^{(p)}(x):=\sum_{k\geq1,p\not{\hspace{0.7mm}|}\,k}\frac{x^k}{k^r}=\lim_{s\to\infty}
\left(\frac{1}{1-x^{p^s}}\sum_{1\leq k<p^s,\, p\not{\hspace{0.7mm}|}\,k}\frac{x^k}{k^r}\right)
\in \Z_p[[x]]
\]
which belongs to the ring
\[
\Z_p\langle x,(1-x)^{-1}\rangle:=\varprojlim_{s}(\Z/p^s\Z[x,(1-x)^{-1}])
\]
of convergent power series.
If $r\leq 0$, this is a rational function, more precisely
\[
\ln_0^{(p)}(x)=\frac{1}{1-x}-\frac{1}{1-x^p},\quad
\ln_{-r}^{(p)}(x)=\left(x\frac{d}{dx}\right)^r\ln_0^{(p)}(x).
\]
If $r>0$, this is known to be an {\it overconvergent function}; more precisely,
it has a (unique) analytic continuation to the domain
$|x-1|_p>|1-\zeta_p|_p$ where $\zeta_p\in \ol\Q_p$
is a primitive $p$-th root of unity.
Let $W(\ol\F_p)$ be the Witt ring of $\ol\F_p$ and $F$ the $p$-th Frobenius endomorphism.
Define the {\it $p$-adic logarithmic function}
\begin{equation}\label{log(p)}
\log^{(p)}(z):=\frac{1}{p}\log\left(\frac{z^p}{F(z)}\right):=-\sum_{n=1}^\infty
\frac{p^{-1}}{n}\left(1-\frac{z^p}{F(z)}\right)^n
\end{equation}
on $W(\ol\F_p)^\times$.
This is different from the Iwasawa $\log(z)$ in general, but
one can show
$\log^{(p)}(1-z)=-\ln^{(p)}_1(z)$ for $z\in W(\ol\F_p)^\times$ such that $F(z)=z^p$
and $z\not\equiv 1$ mod $p$.
\begin{prop}[cf. \cite{C-dlog} IV Prop.6.1, 6.2]
Let $r\in\Z$ be an integer. Then
\begin{equation}\label{polylog-diffeq}
\ln^{(p)}_r(x)=x\frac{d}{dx}\ln^{(p)}_{r+1}(x),
\end{equation}
\begin{equation}\label{polylog-refl}
\ln^{(p)}_r(x)=(-1)^{r+1}\ln^{(p)}_r(x^{-1}),
\end{equation}
\begin{equation}\label{polylog-dstr}
\sum_{\zeta\in\mu_N}\ln_r^{(p)}(\zeta x)=\frac{1}{N^{r-1}}\ln^{(p)}_r(x^N)
\quad \mbox{\rm(distribution formula)}.
\end{equation}
\end{prop}
\begin{pf}
\eqref{polylog-diffeq} and \eqref{polylog-dstr} are immediate from the power series
expansion $\ln_r^{(p)}(x)=\sum_{k\geq 1,p\not{\hspace{0.7mm}|}\,k} x^k/k^r$.
On the other hand, \eqref{polylog-refl} follows from the fact
\[
\frac{1}{1-x^{-p^s}}\sum_{1\leq k<p^s,\, p\not{\hspace{0.7mm}|}\,k}\frac{x^{-k}}{k^r}
=
\frac{-1}{1-x^{p^s}}\sum_{1\leq k<p^s,\, p\not{\hspace{0.7mm}|}\,k}\frac{x^{p^s-k}}{k^r}
\equiv
\frac{(-1)^{r+1}}{1-x^{p^s}}\sum_{1\leq k<p^s,\, p\not{\hspace{0.7mm}|}\,k}
\frac{x^{p^s-k}}{(p^s-k)^r}
\]
modulo $p^s\Z[x,(1-x)^{-1}]$.
\end{pf}
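The congruence used in this proof can be tested numerically at Teichm\"uller units, where the finite-level expressions make sense modulo $p^s$. The sketch below is illustrative only; it checks the reflection formula at level $s$ for the Teichm\"uller lift of $3$ in $\Z_7$, using that $(p^s-k)^{-1}\equiv(-k)^{-1}$ mod $p^s$, so the level-$s$ expressions satisfy the reflection exactly mod $p^s$.

```python
p, s = 7, 3
mod = p ** s

def teich(a):
    """Teichmueller lift of a modulo p^s: iterate x -> x^p until stable."""
    x = a % mod
    while pow(x, p, mod) != x:
        x = pow(x, p, mod)
    return x

def ln_partial(x, r):
    """Level-s partial expression (1-x^{p^s})^{-1} sum_{1<=k<p^s, p∤k} x^k k^{-r},
    reduced mod p^s; defined for a Teichmueller unit x with x ≢ 1 mod p."""
    tot = sum(pow(x, k, mod) * pow(k, -r, mod) for k in range(1, p ** s) if k % p) % mod
    return tot * pow((1 - pow(x, p ** s, mod)) % mod, -1, mod) % mod

x = teich(3)
xinv = pow(x, -1, mod)
# reflection at level s:  ln_r(x^{-1}) ≡ (-1)^{r+1} ln_r(x)   (mod p^s)
refl_r1 = (ln_partial(xinv, 1) - ln_partial(x, 1)) % mod == 0
refl_r2 = (ln_partial(xinv, 2) + ln_partial(x, 2)) % mod == 0
```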
\begin{lem}\label{ln-formula1}
Let $m, N\geq 2$ be integers prime to $p$.
Let $\ve\in\mu_m\setminus\{1\}$.
Then for any $n\in \{0,1,\ldots,N-1\}$, we have
\[
N^r\sum_{\nu^N=\ve}\nu^{-n}\ln^{(p)}_{r+1}(\nu)=\lim_{s\to\infty}
\frac{1}{1-\ve^{p^s}}\sum_{\us{k+n/N\not\equiv0\text{ \rm mod }p}{0\leq k<p^s}}
\frac{\ve^k}{(k+n/N)^{r+1}}.
\]
\end{lem}
\begin{pf}
Note $\sum_{\nu^N=\ve}\nu^i=N\ve^{i/N}$ if $N|i$ and $=0$ otherwise.
We have
\begin{align*}
N^r\sum_{\nu^N=\ve}\nu^{-n}\ln^{(p)}_{r+1}(\nu x)
&=N^r\sum_{k\geq1,p\not{\hspace{0.7mm}|}\,k}\sum_{\nu^N=\ve}\frac{\nu^{k-n}x^k}{k^{r+1}}\\
&=N^{r+1}\sum_{N|(k-n),p\not{\hspace{0.7mm}|}\,k}\frac{\ve^{(k-n)/N}x^k}{k^{r+1}}\\
&=\sum_{n+\ell N\not\equiv 0\text{ mod }p,\,\ell\geq 0}\frac{(\ve x^N)^\ell x^n}{(\ell+n/N)^{r+1}}
\quad(\ell=(k-n)/N)\\
&\equiv \frac{1}{1-(\ve x^N)^{p^s}}\sum_{\us{n+\ell N\not\equiv 0
\text{ mod }p}{0\leq \ell<p^s}}\frac{(\ve x^N)^\ell x^n}{(\ell+n/N)^{r+1}}\\
\end{align*}
modulo $p^s\Z[x,(1-\ve x^N)^{-1}]$.
Since $\ve\not\equiv 1$ mod $p$, the evaluation at $x=1$ makes sense, and then we have the desired equation.
\end{pf}
\begin{lem}\label{p-zeta-def}
Let $r\ne1$ be an integer.
Then for any integer $N\geq 2$ prime to $p$,
\begin{equation}\label{p-zeta-def-eq1}
\sum_{\ve\in \mu_N\setminus\{1\}}\ln_r^{(p)}(\ve)
=-(1-N^{1-r})L_p(r,\omega^{1-r})
\end{equation}
where $L_p(s,\chi)$ is the $p$-adic $L$-function
and $\omega$ is the Teichm\"uller character.
\end{lem}
\begin{pf}
This
is well-known to experts as {\it Coleman's formula}.
We give a self-contained and straightforward proof for the convenience of the reader,
because the author could not find a suitable reference (note that \eqref{p-zeta-def-eq1}
is not included in \cite[I, (3)]{C-dlog}).
\medskip
We first show \eqref{p-zeta-def-eq1} in the case $r=-m$ with $m\in \Z_{\geq1}$.
Note that $\ln_{-m}^{(p)}(x)$ is a rational function.
More precisely let
\[
\ln_0(x):=\frac{x}{1-x},\quad
\ln_{-m}(x):=\left(x\frac{d}{dx}\right)^m\ln_0(x),
\]
then $\ln_{-m}^{(p)}(x)=\ln_{-m}(x)-p^m\ln_{-m}(x^p)$.
Therefore
\[
\sum_{\ve\in \mu_N\setminus\{1\}}\ln_{-m}^{(p)}(\ve)
=(1-p^{m})
\sum_{\ve\in \mu_N\setminus\{1\}}\ln_{-m}(\ve).
\]
Since $L_p(-m,\omega^{1+m})=-(1-p^{m})B_{m+1}/(m+1)$
where $B_n$ are the Bernoulli numbers,
\eqref{p-zeta-def-eq1} for $r=-m$ is equivalent to
\begin{equation}\label{p-zeta-def-eq4}
(1-N^{m+1})\frac{B_{m+1}}{m+1}=
\sum_{\ve\in \mu_N\setminus\{1\}}\ln_{-m}(\ve).
\end{equation}
Put $\ell_r(x):=\ln_r(x)-N^{1-r}\ln_r(x^N)$.
By the distribution property
\[
\sum_{\ve\in \mu_N}\ln_r(\ve x)=
N^{1-r}\ln_r(x^N)
\]
which can be easily shown by a computation of power series expansions,
the right hand side of \eqref{p-zeta-def-eq4} equals
the evaluation $-\ell_{-m}(x)|_{x=1}$ at $x=1$, and hence
\begin{equation}\label{p-zeta-def-eq5}
\sum_{\ve\in \mu_N\setminus\{1\}}\ln_{-m}(\ve)
=-\left(x\frac{d}{dx}\right)^m\ell_0(x)\bigg|_{x=1}.
\end{equation}
On the other hand, letting $x=e^z$, one has
\begin{align*}
\ell_0(e^z)=\frac{e^z}{1-e^z}-\frac{Ne^{Nz}}{1-e^{Nz}}
=-\sum_{n=1}^\infty \left(B_n\frac{z^{n-1}}{n!}-B_n\frac{N^nz^{n-1}}{n!}
\right)
=-\sum_{n=1}^\infty (1-N^n)B_n\frac{z^{n-1}}{n!}
\end{align*}
and hence
\begin{equation}\label{p-zeta-def-eq6}
\left(x\frac{d}{dx}\right)^m\ell_0(x)\bigg|_{x=1}=
\frac{d^m}{dz^m}\ell_0(e^z)\bigg|_{z=0}=
-(1-N^{m+1})\frac{B_{m+1}}{m+1}.
\end{equation}
Now \eqref{p-zeta-def-eq4} follows from \eqref{p-zeta-def-eq5} and
\eqref{p-zeta-def-eq6}.
\medskip
We have shown \eqref{p-zeta-def-eq1} for negative $r$.
Let $r\ne1$ be an arbitrary integer.
Since $\ln_r^{(p)}(x)=\sum_{p\not{\hspace{0.7mm}|}\, k}x^k/k^r$,
one has that
for any integers $r,r'$ such that $r\equiv r'$ mod $(p-1)p^{s-1}$,
\[\ln_r^{(p)}(x)\equiv \ln_{r'}^{(p)}(x)
\mod p^s\Z_p[[x]]
\]
and hence modulo $p^s\Z_p\langle x,(1-x)^{-1}\rangle$.
This implies
\[\ln_r^{(p)}(\ve)\equiv \ln_{r'}^{(p)}(\ve)
\mod p^sW(\ol\F_p).
\]
Take $r'=r-p^{s+a-1}(p-1)$ with $a\gg 0$.
It follows that
\[
(1-N^{1-r'})L_p(r',\omega^{1-r'})
=(1-N^{1-r'})L_p(r',\omega^{1-r})\to
(1-N^{1-r})L_p(r,\omega^{1-r})
\]
as $a\to\infty$ by the continuity of the $p$-adic $L$-functions.
Since $r'<0$, one can apply \eqref{p-zeta-def-eq1} and then
\[
-(1-N^{1-r})L_p(r,\omega^{1-r})
\equiv \sum_{\ve\in \mu_N\setminus\{1\}}\ln_r^{(p)}(\ve)\mod p^sW(\ol\F_p)
\]
for any $s>0$. This completes the proof.
\end{pf}
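The case $r=-m$ of the proof rests on the rational-function identity \eqref{p-zeta-def-eq4}, which can be verified symbolically for $N=2$: there the right hand side is just $\ln_{-m}(-1)$. The sketch below is illustrative only (exact polynomial arithmetic; helper names are ours): it applies $x\frac{d}{dx}$ repeatedly to $\ln_0(x)=x/(1-x)$ and compares with the Bernoulli side for $m=1,\ldots,5$.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_0, ..., B_n via sum_{j<m+1} C(m+1, j) B_j = 0  (convention B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1))
    return B

def pder(u):
    return [Fraction(i) * c for i, c in enumerate(u)][1:] or [Fraction(0)]

def pmul(u, v):
    w = [Fraction(0)] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            w[i + j] += ui * vj
    return w

def psub(u, v):
    L = max(len(u), len(v))
    u, v = u + [Fraction(0)] * (L - len(u)), v + [Fraction(0)] * (L - len(v))
    return [a - b for a, b in zip(u, v)]

def theta(nd):
    """Apply x d/dx to a rational function given as (numerator, denominator)."""
    nu, de = nd
    return (pmul([Fraction(0), Fraction(1)],
                 psub(pmul(pder(nu), de), pmul(nu, pder(de)))), pmul(de, de))

def peval(u, x):
    r = Fraction(0)
    for c in reversed(u):
        r = r * x + c
    return r

B = bernoulli(7)
f = ([Fraction(0), Fraction(1)], [Fraction(1), Fraction(-1)])   # ln_0(x) = x/(1-x)
checks = []
for m in range(1, 6):
    f = theta(f)                                                # now f = ln_{-m}
    lhs = (1 - Fraction(2) ** (m + 1)) * B[m + 1] / (m + 1)     # (1-2^{m+1}) B_{m+1}/(m+1)
    rhs = peval(f[0], Fraction(-1)) / peval(f[1], Fraction(-1)) # ln_{-m}(-1)
    checks.append(lhs == rhs)
```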
\subsection{$p$-adic polygamma functions}\label{polygamma-sect}
Let $r\in\Z$ be an integer. For $z\in \Z_p$, define
\begin{equation}\label{wt-polygamma-def}
\wt{\psi}_p^{(r)}(z):=\lim_{n\in\Z_{>0},n\to z}\sum_{1\leq k<n,p\not{\hspace{0.7mm}|}\, k}
\frac{1}{k^{r+1}}.
\end{equation}
The existence of the limit follows from the fact that
\begin{equation}\label{equiv}
\sum_{1\leq k<p^s,p\not{\hspace{0.7mm}|}\,k}k^m\equiv
\begin{cases}
-p^{s-1}&p\geq 3\mbox{ and }(p-1)|m\\
2^{s-1}&p=2\mbox{ and }2 |m\\
1&p=2\mbox{ and }s=1\\
0&\mbox{otherwise}
\end{cases}
\end{equation}
modulo $p^s$.
Thus $\wt\psi^{(r)}_p(z)$ is a $p$-adic continuous function on $\Z_p$.
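The congruence \eqref{equiv} is elementary to check by machine for small $p$ and $s$; the sketch below (illustrative only) exercises each branch.

```python
def unit_power_sum(p, s, m):
    """sum of k^m over 1 <= k < p^s with p not dividing k, reduced mod p^s."""
    return sum(pow(k, m, p ** s) for k in range(1, p ** s) if k % p) % p ** s

case_p_odd   = unit_power_sum(5, 2, 4) == 25 - 5   # p >= 3, (p-1) | m : ≡ -p^{s-1}
case_p_odd_0 = unit_power_sum(5, 2, 3) == 0        # otherwise        : ≡ 0
case_p3      = unit_power_sum(3, 2, 2) == 9 - 3
case_p2_even = unit_power_sum(2, 3, 2) == 4        # p = 2, 2 | m     : ≡ 2^{s-1}
case_p2_odd  = unit_power_sum(2, 3, 3) == 0
case_p2_s1   = unit_power_sum(2, 1, 5) == 1        # p = 2, s = 1     : ≡ 1
```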
More precisely
\begin{equation}\label{equiv-psi}
z\equiv z'\hspace{-0.3cm}\mod p^s\Longrightarrow
\wt\psi^{(r)}_p(z)-\wt\psi^{(r)}_p(z') \equiv
\begin{cases}
0\hspace{-0.3cm}\mod p^s&p\geq 3\mbox{ and }(p-1)\not|(r+1)\\
0\hspace{-0.3cm}\mod p^s&p=2,\, s\geq 2\mbox{ and }2\not|(r+1)\\
0\hspace{-0.3cm}\mod p^{s-1}&\mbox{otherwise.}
\end{cases}
\end{equation}
Define the {\it $p$-adic Euler constant} \footnote{This is different from
Diamond's $p$-adic Euler constant; his constant
is $\frac{p}{p-1}\gamma_p$ (\cite[\S 7]{Diamond}).} by
\[
\gamma_p:=-\lim_{s\to\infty}\frac{1}{p^s}\sum_{0\leq j<p^s,p\not{\hspace{0.7mm}|}\,j}\log(j),\quad
(\log=\mbox{Iwasawa log}),
\]
where the convergence follows from
\begin{align}
\sum_{0\leq j<p^{s+1},p\not{\hspace{0.7mm}|}\,j}\log(j)-p
\sum_{0\leq j<p^s,p\not{\hspace{0.7mm}|}\,j}\log(j)
&=\sum_{k=0}^{p-1}\sum_{0\leq j<p^s,p\not{\hspace{0.7mm}|}\,j}
\log\left(1+\frac{kp^s}{j}\right)\notag\\
&\equiv\sum_{k=0}^{p-1}\sum_{0\leq j<p^s,p\not{\hspace{0.7mm}|}\,j}
\frac{kp^s}{j}\mod p^{2s-1}\notag\\
&\os{\eqref{equiv}}{\equiv} 0\mod p^{2s-1}.\label{equiv-log}
\end{align}
We define the $r$-th {\it $p$-adic polygamma function} to be
\begin{equation}\label{polygamma-def}
\psi_p^{(r)}(z):=\begin{cases}
-\gamma_p+\wt{\psi}^{(0)}_p(z)&r=0\\
-L_p(1+r,\omega^{-r})+\wt{\psi}^{(r)}_p(z)&r\ne0.
\end{cases}
\end{equation}
If $r=0$, we also write $\psi_p(z)=\psi_p^{(0)}(z)$ and call it the {\it $p$-adic digamma function}.
\subsection{Formulas on $p$-adic polygamma functions}\label{formula-sect}
\begin{thm}\label{polygamma-thm1}
\begin{enumerate}
\item[\rm(1)] $\wt\psi^{(r)}_p(0)=\wt\psi^{(r)}_p(1)=0$, or equivalently
$\psi^{(r)}_p(0)=\psi^{(r)}_p(1)=-\gamma_p$ (for $r=0$) resp. $-L_p(1+r,\omega^{-r})$ (for $r\ne0$).
\item[\rm(2)]
$\wt\psi^{(r)}_p(z)=(-1)^r\wt\psi^{(r)}_p(1-z)$ or equivalently
$\psi^{(r)}_p(z)=(-1)^r\psi^{(r)}_p(1-z)$ (note $L_p(1+r,\omega^{-r})=0$ for odd $r$).
\item[\rm(3)]
\[
\wt\psi^{(r)}_p(z+1)-\wt\psi^{(r)}_p(z)=\psi^{(r)}_p(z+1)-\psi^{(r)}_p(z)=\begin{cases}
z^{-r-1}&z\in \Z_p^\times\\
0&z\in p\Z_p.
\end{cases}
\]
\end{enumerate}
\end{thm}
Compare the above with \cite{NIST} p.144, 5.15.2, 5.15.5 and 5.15.6.
\begin{pf}
(1) and (3) are immediate from the definition, on noting \eqref{equiv}.
We show (2).
Since $\Z_{>0}$ is a dense subset of $\Z_p$, it is enough to show it in the case where $z=n>0$ is an integer.
Let $s>0$ be arbitrary such that $p^s>n$.
Then
\begin{align*}
\wt\psi^{(r)}_p(n)&\equiv
\sum_{1\leq k<n,p\not{\hspace{0.7mm}|}\,k}\frac{1}{k^{r+1}}\equiv
(-1)^{r+1}\sum_{-n< k\leq -1,p\not{\hspace{0.7mm}|}\,k}\frac{1}{k^{r+1}}
\equiv
(-1)^{r+1}\sum_{p^s-n+1\leq k< p^s,p\not{\hspace{0.7mm}|}\,k}\frac{1}{k^{r+1}}\\
&\equiv
(-1)^{r+1}\sum_{0\leq k< p^s,p\not{\hspace{0.7mm}|}\,k}\frac{1}{k^{r+1}}-
(-1)^{r+1}\sum_{0\leq k< p^s-n+1,p\not{\hspace{0.7mm}|}\,k}\frac{1}{k^{r+1}}\\
&\equiv(-1)^{r}\sum_{0\leq k< p^s-n+1,p\not{\hspace{0.7mm}|}\,k}\frac{1}{k^{r+1}}\\
&\equiv (-1)^r\wt\psi^{(r)}_p(1-n)
\end{align*}
modulo $p^s$ or $p^{s-1}$. Since $s$ is an arbitrarily large integer, this means
$\wt\psi^{(r)}_p(n)=(-1)^r\wt\psi^{(r)}_p(1-n)$ as required.
\end{pf}
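Both the reflection formula (2) and the difference equation (3) are easy to test numerically from the limit definition \eqref{wt-polygamma-def}, using integer representatives modulo $p^s$; a sketch (illustrative only, for $p=5$, $r=1$):

```python
p, s, r = 5, 4, 1
mod = p ** s

def psi_tilde(num, den):
    """Level-s value of psi-tilde_p^{(r)} at z = num/den in Z_p: take the
    integer representative of z mod p^s and sum k^{-(r+1)} over k < rep, p∤k."""
    rep = num * pow(den, -1, mod) % mod
    return sum(pow(k, -(r + 1), mod) for k in range(1, rep) if k % p) % mod

# (2) reflection: psi-tilde(n) = (-1)^r psi-tilde(1-n)   (here r = 1)
refl = (psi_tilde(7, 1) + psi_tilde(1 - 7, 1)) % p ** (s - 1) == 0
# (3) difference equation: psi-tilde(z+1) - psi-tilde(z) = z^{-r-1} for z a unit
diff = (psi_tilde(8, 1) - psi_tilde(7, 1) - pow(7, -2, mod)) % mod == 0
```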
\begin{thm}\label{polygamma-thm2}
Let $0\leq n<N$ be integers and suppose $p\not|N$. Then
\begin{equation}\label{polygamma-thm2-eq}
\wt{\psi}_p^{(r)}\left(\frac{n}{N}\right)=N^r
\sum_{\ve\in \mu_N\setminus\{1\}}(1-\ve^{-n})\ln_{r+1}^{(p)}(\ve).
\end{equation}
\end{thm}
For example
\[
{\psi}_p^{(r)}\left(\frac{1}{2}\right)=
-L_p(1+r,\omega^{-r})+2^{r+1}\ln_{r+1}^{(p)}(-1)=(1-2^{r+1})L_p(1+r,\omega^{-r}).
\]
Compare this with \cite{NIST} p.144, 5.15.3.
\begin{pf}
We may assume $n>0$.
Let $s>0$ be an integer such that $p^s\equiv 1$ mod $N$.
Write $p^s-1=lN$.
\begin{align*}
S:=\sum_{\ve\in \mu_N\setminus\{1\}}(1-\ve^{-n})\ln_{r+1}^{(p)}(\ve)
&\equiv
\sum_{1\leq k<p^s,\, p\not{\hspace{0.7mm}|}\,k}\left(
\sum_{\ve\in \mu_N\setminus\{1\}}\frac{1-\ve^{-n}}{1-\ve^{p^s}}\frac{\ve^k}{k^{r+1}}\right)
\\
&\equiv
\sum_{1\leq k<p^s,\, p\not{\hspace{0.7mm}|}\,k}\left(
\sum_{\ve\in \mu_N\setminus\{1\}}\frac{\ve^k+\cdots+\ve^{k+N-n-1}}{k^{r+1}}\right)
\end{align*}
modulo $p^s$.
Note $\sum_{\ve\in \mu_N\setminus\{1\}}\ve^i=N-1$ if $N|i$ and $=-1$ otherwise.
By \eqref{equiv}, we have
\[
S\equiv
\sum_{k}
\frac{N}{k^{r+1}}\mod p^{s-1}
\]
where $k$ runs over the integers such that
$0\leq k<p^s$, $p\not{\hspace{-0.4mm}|}\,k$ and there is an integer $0\leq i<N-n$ such
that
$k+i\equiv 0$ mod $N$.
Hence
\begin{align*}
&N^rS
\equiv
\sum_{k}\frac{1}{(k/N)^{r+1}}=
\sum_{k\equiv 0\text{ mod } N}
+\sum_{k\equiv -1\text{ mod } N}+\cdots+\sum_{k\equiv n-N+1\text{ mod }N}\\
&=
\sum_{\us{j\not\equiv 0\text{ mod }p}{1\leq j<p^s/N}}\frac{1}{j^{r+1}}
+\sum_{\us{j-1/N\not\equiv 0\text{ mod }p}{1\leq j<(p^s+1)/N}}\frac{1}{(j-1/N)^{r+1}}
+\cdots+
\sum_{\us{j-(N-n-1)/N\not\equiv 0\text{ mod }p}{1\leq j<(p^s+N-n-1)/N}}
\frac{1}{(j-(N-n-1)/N)^{r+1}}\\
&\equiv
\sum_{\us{j\not\equiv 0\text{ mod }p}{1\leq j\leq l}}\frac{1}{j^{r+1}}
+\sum_{\us{j+l\not\equiv 0\text{ mod }p}{1\leq j\leq l}}\frac{1}{(j+l)^{r+1}}
+\cdots+
\sum_{\us{j+l(N-n-1)\not\equiv 0\text{ mod }p}{1\leq j\leq l}}\frac{1}{(j+l(N-n-1))^{r+1}}\\
&=\sum_{\us{j\not\equiv 0\text{ mod }p}{1\leq j\leq l(N-n)}}\frac{1}{j^{r+1}}
=\sum_{\us{j\not\equiv 0\text{ mod }p}{0\leq j<l(N-n)+1}}\frac{1}{j^{r+1}}.
\end{align*}
Since $l(N-n)+1\equiv n/N$ mod $p^s$, the last summation is congruent to $\wt\psi^{(r)}_p(n/N)$
modulo $p^{s-1}$ by definition.
\end{pf}
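Theorem \ref{polygamma-thm2} can likewise be tested at finite level, realizing the roots of unity as Teichm\"uller lifts. The sketch below is illustrative only; it checks the case $p=7$, $N=3$, $n=1$, $r=0$ modulo $p^{s-1}$, following the level-$s$ partial sums used in the proof.

```python
p, s, N, n, r = 7, 3, 3, 1, 0
mod = p ** s

def teich(a):
    """Teichmueller lift of a modulo p^s."""
    x = a % mod
    while pow(x, p, mod) != x:
        x = pow(x, p, mod)
    return x

def ln_partial(x, r_):
    """Level-s partial expression for ln^{(p)}_{r_}(x) mod p^s (x a unit, x ≢ 1)."""
    tot = sum(pow(x, k, mod) * pow(k, -r_, mod) for k in range(1, p ** s) if k % p) % mod
    return tot * pow((1 - pow(x, p ** s, mod)) % mod, -1, mod) % mod

def psi_tilde(num, den, r_):
    rep = num * pow(den, -1, mod) % mod
    return sum(pow(k, -(r_ + 1), mod) for k in range(1, rep) if k % p) % mod

lhs = psi_tilde(n, N, r)
mu3 = [teich(2), teich(4)]   # nontrivial cube roots of unity in Z_7 (2, 4 have order 3 mod 7)
rhs = pow(N, r, mod) * sum((1 - pow(e, -n, mod)) * ln_partial(e, r + 1) for e in mu3) % mod
thm2_ok = (lhs - rhs) % p ** (s - 1) == 0
```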
\begin{rem}\label{definition-rem}
The complex analytic analogue of
Theorem \ref{polygamma-thm2} is the following.
Let $\ln_r(z)=\ln^{an}_{r}(z)=\sum_{n=1}^\infty z^n/n^r$ be the analytic polylogarithm.
Then
\begin{align*}
N^r\sum_{k=1}^{N-1}(1-e^{-2\pi ikn/N})\ln_{r+1}(e^{2\pi ik/N})&=
\sum_{m=1}^\infty\sum_{k=1}^{N-1}\frac{N^r}{m^{r+1}}(e^{2\pi ikm/N}-e^{2\pi ik(m-n)/N})\\
&=
\sum_{k=1}^\infty\left(\frac{N^{r+1}}{(kN)^{r+1}}-\frac{N^{r+1}}{(kN-N+n)^{r+1}}\right)\\
&=
\sum_{k=1}^\infty\left(\frac{1}{k^{r+1}}-\frac{1}{(k-1+n/N)^{r+1}}\right).
\end{align*}
If $r=0$, then this is equal to $\psi(n/N)-\psi(1)$ (\cite{NIST} p.139, 5.7.6).
If $r\geq 1$, then this is equal to $\zeta(r+1)+(-1)^r/r!\psi^{(r)}(n/N)$
(\cite{NIST} p.144, 5.15.1).
\end{rem}
\begin{thm}\label{polygamma-thm4}
Let $m\geq 1$ be a positive integer prime to $p$.
\begin{enumerate}
\item[\rm(1)]
Let $\psi_p(z)=\psi_p^{(0)}(z)$ be the $p$-adic digamma function.
Then
\[
\psi_p(mz)-\log^{(p)}(m)=\frac{1}{m}\sum_{i=0}^{m-1}\psi_p(z+\frac{i}{m}),
\]
(see \eqref{log(p)} for the definition of $\log^{(p)}(z)$).
\item[\rm(2)] If $r\ne 0$, we have
\[
\psi^{(r)}_p(mz)=\frac{1}{m^{r+1}}\sum_{i=0}^{m-1}\psi^{(r)}_p(z+\frac{i}{m}).
\]
\end{enumerate}
\end{thm}
Compare the above with \cite{NIST} p.144, 5.15.7.
\begin{pf}
By Lemma \ref{p-zeta-def} for $r\ne0$, and by the formula $\sum_{\ve\in\mu_m\setminus\{1\}}\ln^{(p)}_1(\ve)=-\log^{(p)}(m)$ for $r=0$, the assertions are equivalent to
\begin{equation}\label{polygamma-thm4-eq}
\frac{1}{m^{r+1}}\sum_{i=0}^{m-1}\wt\psi^{(r)}_p(z+\frac{i}{m})
=\wt\psi^{(r)}_p(mz)
+\sum_{\ve\in\mu_m\setminus\{1\}}\ln^{(p)}_{r+1}(\ve)
\end{equation}
for all $r\in\Z$.
Since $\Z_{(p)}\cap[0,1)$ is a dense subset of $\Z_p$, it is enough to show the above
in the case $z=n/N$ with $0\leq n<N$, $p\not|N$.
By Theorem \ref{polygamma-thm2},
\begin{align*}
\frac{1}{m^{r+1}}\sum_{i=0}^{m-1}\wt\psi^{(r)}_p(z+\frac{i}{m})
&=\frac{1}{m^{r+1}}\sum_{i=0}^{m-1}\wt\psi^{(r)}_p(\frac{nm+iN}{mN})\\
&=\frac{N^r}{m}\sum_{i=0}^{m-1}\sum_{\nu\in\mu_{mN}\setminus\{1\}}
(1-\nu^{-nm-iN})\ln^{(p)}_{r+1}(\nu).
\end{align*}
The last summation is divided into the following two terms
\[
\sum_{i=0}^{m-1}
\sum_{\nu\in\mu_{N}\setminus\{1\}}
(1-\nu^{-nm})\ln^{(p)}_{r+1}(\nu)=
m\sum_{\nu\in\mu_{N}\setminus\{1\}}
(1-\nu^{-nm})\ln^{(p)}_{r+1}(\nu),
\]
\begin{align*}
\sum_{i=0}^{m-1}\sum_{\ve\in\mu_{m}\setminus\{1\}}\sum_{\nu^N=\ve}
(1-\nu^{-nm}\ve^{-i})\ln^{(p)}_{r+1}(\nu)
&=m\sum_{\ve\in\mu_{m}\setminus\{1\}}\sum_{\nu^N=\ve}
\ln^{(p)}_{r+1}(\nu)\\
&=\frac{m}{N^r}\sum_{\ve\in\mu_{m}\setminus\{1\}}
\ln^{(p)}_{r+1}(\ve)
\end{align*}
where the last equality follows from the distribution formula \eqref{polylog-dstr}.
Since the former is equal to $\wt\psi^{(r)}_p(nm/N)$ by Theorem \ref{polygamma-thm2},
the equality \eqref{polygamma-thm4-eq} follows.
\end{pf}
\subsection{$p$-adic measure}\label{measure-sect}
For a function $g:\Z_p\to \C_p$, the Volkenborn integral is defined by
\[
\int_{\Z_p}g(t)dt=\lim_{s\to\infty}\frac{1}{p^s}\sum_{0\leq j<p^s}g(j)
\]
if the limit exists.
\begin{thm}\label{meas-thm2}
Let $\log:\C_p^\times\to \C_p$ be the Iwasawa logarithmic function.
Let
\[
{\mathbf 1}_{\Z_p^\times}(z):=\begin{cases}
1&z\in\Z_p^\times\\
0&z\in p\Z_p
\end{cases}
\]
be the characteristic function.
Then
\[
\psi_p(z)=\int_{\Z_p}\log(z+t){\mathbf 1}_{\Z_p^\times}(z+t)dt.
\]
\end{thm}
\begin{pf}
Using \eqref{equiv-log}, one can easily show that
the Volkenborn integral
$Q(z):=\int_{\Z_p}{\mathbf 1}_{\Z_p^\times}(z+t)\log(z+t)dt$ is defined.
Moreover we have
\[
Q(z+1)-Q(z)\equiv\begin{cases}
p^{-s}(\log(z)-\log(z+p^s))& z\in \Z_p^\times\\
0&z\in p\Z_p
\end{cases}\mod p^{s'}
\]
where $s'=s-1$ if $p=2$ and $s'=s$ if $p\geq 3$.
For $z\in \Z_p^\times$, since
\[
p^{-s}(\log(z)-\log(z+p^s))=-p^{-s}\log(1+z^{-1}p^s)
\equiv z^{-1}\mod p^{s'},
\]
it follows from Theorem \ref{polygamma-thm1} (3) that
$Q(z)$ differs from $\psi_p(z)$ by a constant.
Since
\[
Q(0)=\lim_{s\to\infty} \frac{1}{p^s}\sum_{0\leq j<p^s,p\not{\hspace{0.7mm}|}\,j}\log(j)= -\gamma_p=\psi_p(0),
\]
we obtain $Q(z)=\psi_p(z)$.
\end{pf}
\begin{thm}\label{meas-thm1}
If $r\ne0$, then
\[
\psi^{(r)}_p(z)=-\frac{1}{r}\int_{\Z_p}(z+t)^{-r}{\mathbf 1}_{\Z_p^\times}(z+t)dt
\]
where ${\mathbf 1}_{\Z_p^\times}(z)$ denotes the characteristic function as in Theorem \ref{meas-thm2}.
\end{thm}
\begin{pf}
Using \eqref{equiv}, one sees that
the Volkenborn integral
$Q(z)=-\frac{1}{r}\int_{\Z_p}(z+t)^{-r}{\mathbf 1}_{\Z_p^\times}(z+t)dt$ is defined.
Moreover
if $z\in\Z_p^\times$, then
\[
Q(z+1)-Q(z)\equiv\frac{-1}{rp^s}\left(
\frac{1}{(z+p^s)^r}-\frac{1}{z^r}\right)\equiv z^{-1-r}\mod p^{s-\ord_p(r)},
\]
and if $z\in p\Z_p$, then $Q(z+1)\equiv Q(z)$.
This shows that $Q(z)-\psi^{(r)}_p(z)$ is a constant by Theorem \ref{polygamma-thm1} (3).
We show $Q(0)=\psi^{(r)}_p(0)$.
By definition
\[
Q(0)= \lim_{n\to\infty}\frac{-1}{rp^n}\sum_{0\leq k<p^n,p\not{\hspace{0.7mm}|}\,k}
\frac{1}{k^r}.
\]
Recall the original definition of
the $p$-adic $L$-function by Kubota-Leopoldt \cite{KL}
\[
L_p(s,\chi)=\frac{1}{s-1}\lim_{n\to\infty}\frac{1}{fp^n}
\sum_{0\leq k<fp^{n},p\not{\hspace{0.7mm}|}\,k}\chi(k)\langle k\rangle^{1-s},
\quad \langle k\rangle:=k/\omega(k)
\]
for a primitive Dirichlet character $\chi$ with conductor $f\geq1$.
This immediately implies $Q(0)=-L_p(1+r,\omega^{-r})=\psi^{(r)}_p(0)$.
\end{pf}
\section{$p$-adic hypergeometric functions of logarithmic type}\label{pHGlog-sect}
For an integer $n\geq 0$, we denote by $(a)_n$ the Pochhammer symbol,
\[
(a)_0:=1,\quad (a)_n:=a(a+1)\cdots(a+n-1),\,n\geq 1.
\]
For $a\in \Z_p$, we denote by $a':=(a+l)/p$ the {\it Dwork prime}
where $l\in \{0,1,\ldots,p-1\}$ is the unique integer such that $a+l\equiv 0$ mod $p$.
We denote the $i$-th Dwork prime by
$a^{(i)}$ which is defined to be $(a^{(i-1)})'$ with $a^{(0)}=a$.
\subsection{Definition}\label{pHGlog-defn}
Let $s\geq 1$ be a positive integer.
Let $a_i,b_j\in\Q_p$ with $b_j\not\in \Z_{\leq0}$. Let
\[
{}_sF_{s-1}\left({a_1,\ldots,a_s\atop b_1,\ldots,b_{s-1}}:t\right)
=\sum_{n=0}^\infty\frac{(a_1)_n\cdots(a_s)_n}{(b_1)_n\cdots(b_{s-1})_n}\frac{t^n}{n!}.
\]
be the {\it hypergeometric power series}.
In what follows we only consider the cases $a_i\in \Z_p$ and $b_j=1$, and then
we write
\[
F_{\ul a}(t):={}_sF_{s-1}\left({a_1,\ldots,a_s\atop 1,\ldots,1}:t\right)\in \Z_p[[t]]
\]
for
$\ul a=(a_1,\ldots,a_s)\in \Z_p^s$.
\begin{defn}[$p$-adic hypergeometric functions of logarithmic type]
Let $\ul a'=(a'_1,\ldots,a'_s)$ where $a_i'$ denotes the Dwork prime.
Let $W=W(\ol\F_p)$ be the Witt ring of $\ol\F_p$.
Let $\sigma:W[[t]]\to W[[t]]$ be the $p$-th Frobenius endomorphism given by
$\sigma(t)=ct^p$ with $c\in 1+pW$,
compatible with the Frobenius on $W$.
Put a power series
\[
G_{\ul a}(t):=\psi_p(a_1)+\cdots+\psi_p(a_s)+s\gamma_p
-p^{-1}\log(c)+
\int_0^t(F_{\underline{a}}(t)-F_{\ul a'}(t^\sigma))\frac{dt}{t}
\]
where $\psi_p(z)$ is the $p$-adic digamma function defined in \S \ref{polygamma-sect},
and $\log(z)$ is the Iwasawa logarithmic function.
Then we define
\[
\cF^{(\sigma)}_{\underline{a}}(t)=G_{\ul a}(t)/F_{\underline{a}}(t),
\]
and call it the {\rm $p$-adic hypergeometric function of logarithmic type}.
\end{defn}
\begin{lem}\label{G-int}
$G_{\ul a}(t)\in W[[t]]$. Hence it follows that $\cF^{(\sigma)}_{\underline{a}}(t)\in W[[t]]$.
\end{lem}
\begin{pf}
Let $G_{\underline{a}}(t)=\sum B_it^i$.
Let $F_{\underline{a}}(t)=\sum A_it^i$ and
$F_{\ul a'}(t)=\sum A^{(1)}_it^i$.
If $p{\not|}i$, then
$B_i=A_i/i$ is obviously a $p$-adic integer.
For $i=mp^k$ with $k\geq 1$ and $p\not| m$, one has
\[
B_i=B_{mp^k}=\frac{A_{mp^k}-c^{mp^{k-1}}A^{(1)}_{mp^{k-1}}}{mp^k}.
\]
Since $c^{mp^{k-1}}\equiv 1$ mod $p^k$, it is enough to see
$A_{mp^k}\equiv A^{(1)}_{mp^{k-1}}$ mod $p^k$.
This follows from \cite[p.36, Cor. 1]{Dwork-p-cycle}.
\end{pf}
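The key congruence $A_{mp^k}\equiv A^{(1)}_{mp^{k-1}}$ mod $p^k$, and the resulting integrality of the $B_i$, can be observed numerically. The sketch below is illustrative only; it takes $\ul a=(1/3)$ and $p=5$, so that $\ul a'=(2/3)$, and $c=1$.

```python
from fractions import Fraction

def hg_coeffs(a_list, num):
    """Coefficients of F_a(t) = sum_n prod_i (a_i)_n / n!  t^n."""
    A = [Fraction(1)]
    for k in range(1, num):
        c = A[-1]
        for a in a_list:
            c = c * (Fraction(a) + k - 1) / k
        A.append(c)
    return A

def v_p(x, p):
    """p-adic valuation of a rational; large sentinel for 0."""
    if x == 0:
        return 10 ** 9
    v, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

p, top = 5, 60
A  = hg_coeffs([Fraction(1, 3)], top)    # a  = (1/3)
A1 = hg_coeffs([Fraction(2, 3)], top)    # a' = (2/3) for p = 5

# Dwork's congruence for the coefficients: A_{m p^k} ≡ A^{(1)}_{m p^{k-1}}  (mod p^k)
dwork_cong = all(
    v_p(A[m * p ** k] - A1[m * p ** (k - 1)], p) >= k
    for k in (1, 2) for m in (1, 2) if m * p ** k < top
)
# B_i of G_a(t) with c = 1:  B_i = A_i / i  (p∤i),  B_i = (A_i - A1_{i/p}) / i  (p | i)
B_integral = all(
    v_p(A[i] / i if i % p else (A[i] - A1[i // p]) / i, p) >= 0
    for i in range(1, 50)
)
```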
\subsection{Congruence relations}\label{cong-pf-sect1}
For a power series $f(t)=\sum_{n=0}^\infty A_nt^n$, we denote $f(t)_{<m}:=\sum_{n<m}A_nt^n$
the truncated polynomial.
\begin{thm}\label{cong-thm}
Suppose that $a_i\not\in \Z_{\leq0}$ for all $i$.
Let us write $\cF^{(\sigma)}_{\underline{a}}(t)=G_{\ul a}(t)/F_{\ul a}(t)$.
If $c\in 1+2pW$, then for all $n\geq 1$
\begin{equation}\label{cong-thm-eq0}
\cF^{(\sigma)}_{\underline{a}}(t)\equiv
\frac{G_{\underline{a}}(t)_{<p^n}}{F_{\underline{a}}(t)_{<p^n}}\mod p^nW[[t]].
\end{equation}
If $p=2$ and $c\in 1+2W$ (not necessarily $c\in 1+4W$),
then the above holds modulo $p^{n-1}$.
\end{thm}
\begin{cor}\label{cong-cor}
Suppose that there exists an integer $r\geq 0$ such that $a_i^{(r+1)}=a_i$ for all $i$
where $(-)^{(r)}$ denotes the $r$-th Dwork prime.
Then
\[
\cF_{\ul a}^{(\sigma)}(t)\in
W\langle t,F_{\ul a}(t)_{<p}^{-1}, \ldots,F_{\ul a^{(r)}}(t)_{<p}^{-1}\rangle:=
\varprojlim_n (W/p^n[t,F_{\ul a}(t)_{<p}^{-1}, \ldots,F_{\ul a^{(r)}}(t)_{<p}^{-1}])
\]
is a convergent function.
For $\alpha\in W$ such that $F_{\ul a^{(i)}}(\alpha)_{<p}\not
\equiv 0$ mod $p$ for all $i$, the special value of $\cF_{\ul a}^{(\sigma)}(t)$ at $t=\alpha$
is defined, and it is explicitly given by
\[
\cF_{\ul a}^{(\sigma)}(\alpha)=\lim_{n\to\infty}
\frac{G_{\underline{a}}(\alpha)_{<p^n}}{F_{\underline{a}}(\alpha)_{<p^n}}.
\]
\end{cor}
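The limit formula lends itself to a quick numerical experiment. The sketch below is our own illustration with sample data $s=1$, $\ul a=(1/2)$ (its own Dwork prime, so $r=0$), $p=3$, $\sigma(t)=t^p$ and $\alpha=2$; the constant term $B_0=\psi_p(1/2)+\gamma_p$ of $G_{\ul a}$ is itself approximated $p$-adically by $(1-A^{(1)}_{p^{N-1}}/A_{p^N})/p^N$, a congruence established in \S \ref{cong-sect3}. All helper names are ours.

```python
from fractions import Fraction as Fr
from math import factorial

# Sample data (our choice): s = 1, a = 1/2, p = 3, c = 1, alpha = 2.
p, a, alpha = 3, Fr(1, 2), Fr(2)

def poch(x, n):
    r = Fr(1)
    for i in range(n):
        r *= x + i
    return r

def A(n):
    # coefficients of F_a(t); here a' = a, so F_{a'} has the same coefficients
    return poch(a, n) / factorial(n)

def vp(x):
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return v

# B_0 = psi_p(a) + gamma_p, approximated mod p^N
N = 4
B0 = (1 - A(p ** (N - 1)) / A(p ** N)) / p ** N

def B(i):
    if i == 0:
        return B0
    if i % p:
        return A(i) / i
    return (A(i) - A(i // p)) / i

def R(n):
    # G(t)_{<p^n} / F(t)_{<p^n} evaluated at t = alpha
    G = sum(B(i) * alpha ** i for i in range(p ** n))
    F = sum(A(i) * alpha ** i for i in range(p ** n))
    return G / F

# F(alpha)_{<p} is a p-adic unit, and successive truncations agree mod p^n
assert vp(sum(A(i) * alpha ** i for i in range(p))) == 0
d = R(2) - R(1)
assert d == 0 or vp(d) >= 1
```

The stabilization of $R(n)$ is exactly the convergence asserted in the corollary.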
\subsection{Proof of Congruence relations : Reduction to the case $c=1$}\label{cong-sect2}
Throughout the sections \ref{cong-sect2}, \ref{cong-sect3} and \ref{cong-sect4},
we use the following notation. Fix $s\geq 1$ and $\ul a=(a_1,\ldots,a_s)$ with $a_i\not\in \Z_{\leq0}$. Let $\sigma(t)=ct^p$ be the Frobenius. Put
\begin{equation}\label{cong-A}
F_{\ul a}^{(i)}(t):=\sum_{n=0}^\infty A^{(i)}_nt^n,\quad
A^{(i)}_n:=\frac{(a_1^{(i)})_n}{n!}\cdots\frac{(a_s^{(i)})_n}{n!}
\end{equation}
where $a_k^{(i)}$ denotes the $i$-th Dwork prime.
Letting $\cF^{(\sigma)}_{\underline{a}}(t)=G_{\ul a}(t)/F_{\ul a}(t)$, we put
\[
G_{\ul a}(t)=\sum_{n=0}^\infty B_nt^n
\]
or explicitly
\begin{equation}\label{cong-sect1-eq1-0}
B_0=\psi_p(a_1)+\cdots+\psi_p(a_s)+s\gamma_p-p^{-1}\log(c),
\end{equation}
\begin{equation}\label{cong-sect1-eq1}
B_n=\frac{A_n}{n},\,(p\not|n),\quad
B_{mp^k}=\frac{A_{mp^k}-c^{mp^{k-1}}A^{(1)}_{mp^{k-1}}}{mp^k},\,(m,k\geq1).
\end{equation}
\begin{lem}\label{cong-lem0}
The proof of Theorem \ref{cong-thm} is reduced to the case $\sigma(t)=t^p$ (i.e. $c=1$).
\end{lem}
\begin{pf}
Write $f(t)_{\geq m}:=f(t)-f(t)_{<m}$.
Put $n^*:=n$ if $c\in 1+2pW$ and $n^*=n-1$ if $p=2$ and $c\not
\in 1+4W$.
Theorem \ref{cong-thm}
is equivalent to saying
\[
F_{\ul a}(t)G_{\ul a}(t)_{\geq p^n}
\equiv F_{\ul a}(t)_{\geq p^n}G_{\ul a}(t)\mod p^{n^*}W[[t]],
\]
namely
\[
\sum_{i+j=m,\,i,j\geq0}A_{i+p^n}B_j-A_iB_{j+p^n}\equiv 0\mod p^{n^*}
\]
for all $m\geq 0$.
Suppose that this is true when $c=1$, namely
\begin{equation}\label{cong-lem0-eq1}
\sum_{i+j=m}A_{i+p^n}B^\circ_j-A_iB^\circ_{j+p^n}\equiv 0\mod p^{n^*}
\end{equation}
where $B^\circ_i$ are the coefficients \eqref{cong-sect1-eq1-0}
or \eqref{cong-sect1-eq1} when $c=1$.
We denote by $B_i$ the coefficients for an arbitrary $c\in 1+pW$.
We then want to show
\begin{equation}\label{cong-lem0-eq2}
\sum_{i+j=m}A_{i+p^n}(B^\circ_j-B_j)-A_i(B^\circ_{j+p^n}-B_{j+p^n})\equiv 0\mod p^{n^*}.
\end{equation}
Let $c=1+pe$ with $e\ne0$ (if $e=0$, there is nothing to prove).
Then
\begin{align*}
\sum_{i+j=m}A_{i+p^n}(B^\circ_j-B_j)
&=A_{m+p^n}p^{-1}\log(c)+\sum_{1\leq j\leq m} p^{-1}\frac{(c^{j/p}-1)A_{m+p^n-j}A^{(1)}_{j/p}}{j/p}\\
&=A_{m+p^n}\sum_{i=1}^\infty\frac{(-1)^{i+1}}{i}p^{i-1}e^i+\sum_{1\leq j\leq m}(j/p)^{-1}
\sum_{i=1}^\infty\binom{j/p}{i}p^{i-1}e^iA_{m+p^n-j}A^{(1)}_{j/p}\\
&=\sum_{i=1}^\infty\left(A_{m+p^n}\frac{(-1)^{i+1}}{i}+\sum_{1\leq j\leq m}(j/p)^{-1}
\binom{j/p}{i}A_{m+p^n-j}A^{(1)}_{j/p}\right)p^{i-1}e^i\\
&=\sum_{i=1}^\infty\left(A_{m+p^n}\frac{(-1)^{i+1}}{i}+\sum_{1\leq j\leq m}i^{-1}
\binom{j/p-1}{i-1}A_{m+p^n-j}A^{(1)}_{j/p}\right)p^{i-1}e^i\\
&=\sum_{i=1}^\infty\left(\sum_{0\leq j\leq m}i^{-1}
\binom{j/p-1}{i-1}A_{m+p^n-j}A^{(1)}_{j/p}\right)p^{i-1}e^i
\end{align*}
where $A^{(k)}_{j/p}:=0$ if $p{\not|}j$.
Similarly
\[
\sum_{i+j=m}A_i(B^\circ_{j+p^n}-B_{j+p^n})
=\sum_{i=1}^\infty\left(\sum_{0\leq j\leq m}i^{-1}
\binom{(m+p^n-j)/p-1}{i-1}A_{j}A^{(1)}_{(m+p^n-j)/p}\right)p^{i-1}e^i.
\]
Therefore it is enough to show that
\[
\frac{p^{i-1}e^i}{i}\sum_{0\leq j\leq m}
\binom{j/p-1}{i-1}A_{m+p^n-j}A^{(1)}_{j/p}
\equiv\frac{p^{i-1}e^i}{i}\sum_{0\leq j\leq m}
\binom{(m+p^n-j)/p-1}{i-1}A_{j}A^{(1)}_{(m+p^n-j)/p}\mod p^{n^*}
\]
equivalently
\begin{equation}\label{cong-lem0-eq3}
\sum_{0\leq j\leq m}
(1-j/p)_{i-1}
A_{m+p^n-j}A^{(1)}_{j/p}
\equiv\sum_{0\leq j\leq m}
(1-(m+p^n-j)/p)_{i-1}A_{j}A^{(1)}_{(m+p^n-j)/p}\mod p^{n^*-i+1}i!e^{-i}
\end{equation}
for all $i\geq 1$ and $m\geq 0$.
Recall the Dwork congruence
\[
\frac{F(t^p)}{F(t)}\equiv
\frac{[F(t^p)]_{<p^m}}{F(t)_{<p^m}}\mod p^l\Z_p[[t]],\quad m\geq l
\]
from \cite[p.37, Thm. 2, p.45]{Dwork-p-cycle}.
This immediately implies \eqref{cong-lem0-eq3} in case $i=1$.
Suppose $i\geq 2$.
To show \eqref{cong-lem0-eq3}, it is enough to show
\begin{equation}\label{cong-lem0-eq4}
\sum_{0\leq j\leq m}
(j/p)^k
A_{m+p^n-j}A^{(1)}_{j/p}
\equiv\sum_{0\leq j\leq m}
((m+p^n-j)/p)^kA_{j}A^{(1)}_{(m+p^n-j)/p}\mod p^{n^*-i+1}i!e^{-i}
\end{equation}
for each $k\geq 0$.
We write $A^*_j:=j^kA^{(1)}_j$, and put $F^*(t):=\sum_{j=0}^\infty A^*_jt^j$.
Then \eqref{cong-lem0-eq4} is equivalent to saying
\begin{equation}\label{cong-lem0-eq5}
F(t)_{<p^n}F^*(t^p)\equiv
F(t)[F^*(t^p)]_{<p^n}\mod p^{n^*-i+1}i!e^{-i}\Z_p[[t]].
\end{equation}
We show \eqref{cong-lem0-eq5}, which finishes the proof of Lemma \ref{cong-lem0}.
It follows from \cite[p.45, Lem. 3.4]{Dwork-p-cycle}
that we have
\[
\frac{F^*(t)}{F(t)}\equiv
\frac{F^*(t)_{<p^m}}{F(t)_{<p^m}}\mod p^l\Z_p[[t]],\quad m\geq l.
\]
This implies
\[
\frac{F^*(t^p)}{F(t^p)}\equiv
\frac{F^*(t^p)_{<p^n}}{[F(t^p)]_{<p^n}}\mod p^{n-1}\Z_p[[t]].
\]
Therefore we have
\[
\frac{F^*(t^p)}{F(t)}=\frac{F(t^p)}{F(t)}\frac{F^*(t^p)}{F(t^p)}\equiv
\frac{[F(t^p)]_{<p^n}}{F(t)_{<p^n}}
\frac{[F^*(t^p)]_{<p^n}}{F(t^p)_{<p^n}}=\frac{[F^*(t^p)]_{<p^n}}{F(t)_{<p^n}}
\mod p^{n-1}\Z_p[[t]].
\]
If $p\geq 3$, then $\ord_p(p^{n^*-i+1}i!)=\ord_p(p^{n-i+1}i!)\leq n-1$ for any $i\geq2$, and hence
\eqref{cong-lem0-eq5} follows.
If $p=2$, then $\ord_p(p^{n-i+1}i!)\leq n$ but not necessarily $\ord_p(p^{n-i+1}i!)\leq n-1$.
If $e\in 2W\setminus\{0\}$, then $\ord_p(p^{n^*-i+1}i!e^{-i})=\ord_p(p^{n-i+1}i!e^{-i})\leq n-i<n-1$,
and hence
\eqref{cong-lem0-eq5} follows.
If $e$ is a unit, then
$\ord_p(p^{n^*-i+1}i!e^{-i})=\ord_p(p^{n-i}i!)\leq n-1$ for any $i\geq2$, as required again.
This completes the proof.
\end{pf}
\subsection{Proof of Congruence relations : Preliminary lemmas}\label{cong-sect3}
Until the end of \S \ref{cong-sect4}, let $\sigma$ be the Frobenius given by $\sigma(t)=t^p$
(i.e. $c=1$).
Therefore
\begin{equation}\label{cong-B}
B_0=\psi_p(a_1)+\cdots+\psi_p(a_s)+s\gamma_p,\quad
B_i=\frac{A_i-A^{(1)}_{i/p}}{i},\quad i\in\Z_{\geq1}
\end{equation}
where $A^{(k)}_i$ are as in \eqref{cong-A}, and
we mean $A^{(k)}_{i/p}=0$ unless $i/p\in\Z_{\geq0}$.
\begin{lem}\label{prod-lem}
For a $p$-adic integer $\alpha\in \Z_p$ and $n\in\Z_{\geq1}$, we define
\[
\{\alpha\}_n:=\prod_{\underset{p\not{\hspace{0.7mm}|}\,(\alpha+i-1)}{1\leq i\leq n}}(\alpha+i-1),
\]
and $\{\alpha\}_0:=1$.
Let $a\in\Z_p\setminus\Z_{\leq 0}$, and
$l\in \{0,1,\ldots,p-1\}$ the integer such that $a\equiv -l$ mod $p$.
Then for any $m\in \Z_{\geq0}$, we have
\[
m\equiv 0,1,\ldots,l\text{ \rm mod }p\quad\Longrightarrow\quad
\frac{(a)_m}{m!}\left(\frac{(a')_{\lfloor m/p\rfloor}}
{\lfloor m/p\rfloor !}\right)^{-1}=\frac{\{a\}_{m}}{\{1\}_{m}}\in\Z_p^\times,
\]
\[
m\equiv l+1,\ldots,p-1\text{ \rm mod }p\quad\Longrightarrow\quad
\frac{(a)_m}{m!}\left(\frac{(a')_{\lfloor m/p\rfloor}}
{\lfloor m/p\rfloor !}\right)^{-1}=\left(a+l+p\lfloor\frac{m}{p}\rfloor\right)
\frac{\{a\}_m}{\{1\}_m}
\]
where $a'=a^{(1)}$ is the Dwork prime.
\end{lem}
\begin{pf}
Straightforward.
\end{pf}
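Though the proof is straightforward, the two case distinctions are easy to get wrong, so a numerical check is reassuring. The Python sketch below is our own illustration with sample data $a=1/2$ and $p=5$ (so $a'=a$ and $l=2$); it verifies both formulas in exact arithmetic. All helper names are ours.

```python
from fractions import Fraction as Fr
from math import factorial

# Sample data (our choice): a = 1/2, p = 5; then a' = a and a ≡ -2 mod 5, so l = 2.
p, a = 5, Fr(1, 2)
l = next(k for k in range(p) if (a + k).numerator % p == 0)

def poch(x, n):
    # Pochhammer symbol (x)_n
    r = Fr(1)
    for i in range(n):
        r *= x + i
    return r

def br(x, n):
    # {x}_n: product of x+i-1 over 1 <= i <= n, omitting factors of positive valuation
    r = Fr(1)
    for i in range(1, n + 1):
        if (x + i - 1).numerator % p != 0:
            r *= x + i - 1
    return r

def ratio(m):
    # ((a)_m / m!) divided by ((a')_{floor(m/p)} / floor(m/p)!), using a' = a
    return (poch(a, m) / factorial(m)) / (poch(a, m // p) / factorial(m // p))

for m in range(0, 26):
    if m % p <= l:
        assert ratio(m) == br(a, m) / br(Fr(1), m)
    else:
        assert ratio(m) == (a + l + p * (m // p)) * br(a, m) / br(Fr(1), m)
```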
\begin{lem}[Dwork]\label{cong-lem5}
For any $m\in\Z_{\geq0}$, $A_m/A^{(1)}_{\lfloor m/p\rfloor}$ are $p$-adic integers, and
\[
m\equiv m'\mod p^n\quad\Longrightarrow\quad
\frac{A_m}{A^{(1)}_{\lfloor m/p\rfloor}}\equiv
\frac{A_{m'}}{A^{(1)}_{\lfloor m'/p\rfloor}}\mod p^n.
\]
\end{lem}
\begin{pf}
This is \cite[p.36, Cor. 1]{Dwork-p-cycle}; alternatively, one can show it
by using Lemma \ref{prod-lem} together with the fact that
\[
\{\alpha\}_{p^n}
\equiv \prod_{i\in (\Z/p^n\Z)^\times}i
\equiv\begin{cases}
1&p=2,\,n\ne 2\\
-1&\text{otherwise}
\end{cases}
\mod p^n.
\]
\end{pf}
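Dwork's congruence is also easy to observe numerically. The following sketch is our own illustration with sample data $s=1$, $a=1/2$, $p=3$, $n=2$ (helper names ours); it verifies both the integrality and the congruence in exact rational arithmetic.

```python
from fractions import Fraction as Fr
from math import factorial

# Sample data (our choice): s = 1, a = 1/2, p = 3; then a' = a.
p, a = 3, Fr(1, 2)

def poch(x, n):
    r = Fr(1)
    for i in range(n):
        r *= x + i
    return r

def A(n):
    return poch(a, n) / factorial(n)

def vp(x):
    # p-adic valuation of a nonzero rational
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return v

def r(m):
    # A_m / A^{(1)}_{floor(m/p)}, with A^{(1)} = A since a' = a
    return A(m) / A(m // p)

n = 2
for m in range(1, 12):
    assert vp(r(m)) >= 0                    # r(m) is a p-adic integer
    d = r(m + p ** n) - r(m)
    assert d == 0 or vp(d) >= n             # m ≡ m' mod p^n gives r(m) ≡ r(m') mod p^n
```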
\begin{lem}\label{b0-lem}
Let $a\in\Z_p\setminus\Z_{\leq 0}$ and $m,n\in \Z_{\geq1}$. Then
\begin{equation}\label{b0-lem-eq1}
1-\frac{(a')_{mp^{n-1}}}{(mp^{n-1})!}
\left(\frac{(a)_{mp^n}}{(mp^n)!}\right)^{-1}\equiv mp^n
(\psi_p(a)+\gamma_p)\mod p^{2n}.
\end{equation}
Moreover $A^{(1)}_{mp^{n-1}}/A_{mp^n}$
and $B_k/A_k$
are $p$-adic integers for all $k,m\geq 0$, $n\geq 1$, and
\begin{equation}\label{b0-lem-eq2}
\frac{A^{(1)}_{mp^{n-1}}}{A_{mp^n}}\equiv1-mp^n(\psi_p(a_1)+\cdots+\psi_p(a_s)+s\gamma_p)
\mod p^{2n},
\end{equation}
\begin{equation}\label{b0-lem-eq3}
p\not|m\quad \Longrightarrow\quad\frac{B_{mp^n}}{A_{mp^n}}=
\frac{1-A^{(1)}_{mp^{n-1}}/A_{mp^n}}{mp^n}\equiv B_0\mod p^n.
\end{equation}
\end{lem}
\begin{pf}
We already see that $A^{(1)}_{mp^{n-1}}/A_{mp^n}\in \Z_p$ in Lemma \ref{prod-lem}.
It is enough to show \eqref{b0-lem-eq1}.
Indeed \eqref{b0-lem-eq2} is immediate from \eqref{b0-lem-eq1}, and
\eqref{b0-lem-eq3} is immediate from \eqref{b0-lem-eq2}.
Moreover \eqref{b0-lem-eq2} also implies that $B_k/A_k\in \Z_p$ for any $k\in\Z_{\geq0}$.
Let us show \eqref{b0-lem-eq1}.
Let $a=-l+p^nb$ with $l\in \{0,1,\ldots,p^n-1\}$.
Then
\[
\frac{(a')_{mp^{n-1}}}{(mp^{n-1})!}
\left(\frac{(a)_{mp^n}}{(mp^n)!}\right)^{-1}
=\frac{\{1\}_{mp^n}}{\{a\}_{mp^n}}=\prod_{\underset{k-l\not\equiv0\text{ mod }p}{l<k<mp^n}} \frac{k-l}{k-l+p^nb}
\times\prod_{\underset{k-l\not\equiv0\text{ mod }p}{0\leq k<l}} \frac{k-l+mp^n}{k-l+p^nb}
\]
by Lemma \ref{prod-lem}.
If $(p,n)\ne(2,1)$, we have
\begin{align*}
\frac{\{1\}_{mp^n}}{\{a\}_{mp^n}}
&\equiv
\prod_{\underset{p\not{\hspace{0.7mm}|}\,k-l}{l<k<mp^n}}\left( 1-\frac{p^nb}{k-l}\right)
\prod_{\underset{p\not{\hspace{0.7mm}|}\,k-l}{0\leq k<l}}\left( 1-\frac{p^n(b-m)}{k-l}\right)\\
&\equiv
1-p^n\left(\sum_{\underset{p\not{\hspace{0.7mm}|}\,k-l}{l<k<mp^n}}\frac{b}{k-l}
+\sum_{\underset{p\not{\hspace{0.7mm}|}\,k-l}{0\leq k<l}}\frac{b-m}{k-l}\right)\\
&\os{\eqref{equiv}}{\equiv}
1-mp^n\sum_{\underset{p\not{\hspace{0.7mm}|}\,k-l}{l<k<mp^n}}\frac{1}{k-l}\\
&=
1-mp^n\sum_{1\leq k<mp^n-l,p \not{\hspace{0.7mm}|}\,k}\frac{1}{k}\\
&\os{\eqref{equiv-psi}}{\equiv}1-mp^n(\psi_p(a)+\gamma_p)
\end{align*}
modulo $p^{2n}$, which completes the proof of \eqref{b0-lem-eq1}.
In case $(p,n)=(2,1)$, we need another observation
(since the 3rd equivalence does not hold in general).
In this case
we have
\begin{align*}
\frac{\{1\}_{2m}}{\{a\}_{2m}}
&\equiv
1-2
\left(\sum_{\underset{2\not{\hspace{0.7mm}|}\, k-l}{l<k<2m}}\frac{b}{k-l}
+\sum_{\underset{2\not{\hspace{0.7mm}|}\,k-l}{0\leq k<l}}\frac{b-m}{k-l}\right)\mod 4\\
&=
1-2
\left(\sum_{\underset{2\not{\hspace{0.7mm}|}\, k-l}{l<k<2m}}\frac{m}{k-l}
+\sum_{\underset{2\not{\hspace{0.7mm}|}\,k-l}{0\leq k<2m}}\frac{b-m}{k-l}\right)\\
&
\equiv
1-2m
\left(\sum_{0<k<2m-l,\,2\not{\hspace{0.7mm}|}\,k}\frac{1}{k}
+b-m\right)\mod 4
\end{align*}
and
\[
\psi_2(a)+\gamma_2\equiv \sum_{0<k<L,\,2 \not{\hspace{0.7mm}|}\,k}\frac{1}{k}\mod 2
\]
where $L\in\{0,1,2,3\}$ such that $a=-l+2b\equiv L$ mod $4$.
Therefore \eqref{b0-lem-eq1} is equivalent to
\[
m
\left(\sum_{0<k<2m-l,\,2 \not{\hspace{0.7mm}|}\,k}\frac{1}{k}
-\sum_{0<k<L,\,2 \not{\hspace{0.7mm}|}\,k}\frac{1}{k}
+b-m\right)\equiv 0\mod 2.
\]
We may assume that $m>0$ is odd and $b=0,1$ (hence $a=0,\pm 1,2$).
Then one can check this by a case-by-case analysis.
\end{pf}
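As a concrete check of \eqref{b0-lem-eq2} and \eqref{b0-lem-eq3}, the quantities $u(n):=(1-A^{(1)}_{p^{n-1}}/A_{p^n})/p^n$ all approximate $B_0$ and therefore agree with each other modulo increasing powers of $p$. The sketch below is our own illustration with sample data $s=1$, $a=1/2$, $p=3$ (helper names ours):

```python
from fractions import Fraction as Fr
from math import factorial

# Sample data (our choice): s = 1, a = 1/2, p = 3; then a' = a.
p, a = 3, Fr(1, 2)

def poch(x, n):
    r = Fr(1)
    for i in range(n):
        r *= x + i
    return r

def A(n):
    return poch(a, n) / factorial(n)

def vp(x):
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return v

def u(n):
    # (1 - A^{(1)}_{p^{n-1}} / A_{p^n}) / p^n, which equals B_0 mod p^n
    return (1 - A(p ** (n - 1)) / A(p ** n)) / p ** n

# u(n) ≡ u(n+1) ≡ B_0 mod p^n
for n in (1, 2):
    d = u(n + 1) - u(n)
    assert d == 0 or vp(d) >= n
```

In particular $u(n)$ gives a practical way to evaluate $\psi_p(a)+\gamma_p$ to any finite $p$-adic precision.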
\begin{lem}\label{cong-lem3}
For any $m, m'\in\Z_{\geq 0}$ and $n\in \Z_{\geq 1}$, we have \[
m\equiv m'\mod p^n\quad\Longrightarrow\quad
\frac{B_m}{A_m}\equiv
\frac{B_{m'}}{A_{m'}}\mod p^n.
\]
\end{lem}
\begin{pf}
If $p{\not|} m$, then $B_m/A_m=1/m$ and hence the assertion is obvious.
Let $m=kp^i$ with $i\geq 1$ and $p{\not|}k$.
It is enough to show the assertion in case $m'=m+p^n$.
If $n\leq i$, then
\[
\frac{B_m}{A_m}\equiv \frac{B_{m'}}{A_{m'}}\equiv B_0\mod p^n
\]
by \eqref{b0-lem-eq3}. Suppose $n\geq i$.
Notice that
\[
1-m\frac{B_m}{A_m}=\frac{A^{(1)}_{m/p}}{A_m}=\prod_{r=1}^s \frac{\{1\}_m}{\{a_r\}_m}
\]
by \eqref{cong-B} and Lemma \ref{prod-lem}.
We have
\begin{align*}
1-m'\frac{B_{m'}}{A_{m'}}
&=\prod_r \frac{\{1\}_{kp^i+p^n}}{\{a_r\}_{kp^i+p^n}}\\
&=\prod_r \frac{\{1\}_{kp^i}}{\{a_r\}_{kp^i}}\frac{\{1+kp^i\}_{p^n}}{\{a_r+kp^i\}_{p^n}}\\
&=\left(1-m\frac{B_m}{A_m}\right)\prod_r
\frac{\{1+kp^i\}_{p^n}}{\{a_r+kp^i\}_{p^n}}\\
&=\left(1-m\frac{B_m}{A_m}\right)\prod_r
\frac{\{1\}_{p^n}}{\{a_r+kp^i\}_{p^n}}
\frac{\{1+kp^i\}_{p^n}}{\{1\}_{p^n}}\\
&\os{(\ast)}{\equiv}\left(1-m\frac{B_m}{A_m}\right)\prod_r
(1-p^n(\psi_p(a_r+kp^i)-\psi_p(1+kp^i)))\mod p^{2n}\\
&\os{(\ast\ast)}{\equiv}\left(1-m\frac{B_m}{A_m}\right)
(1-p^nB_0)\mod p^{n+i}
\end{align*}
where
$(\ast)$ follows from Lemmas \ref{prod-lem} and \ref{b0-lem}.
The equivalence
$(\ast\ast)$ follows from \eqref{equiv-psi} in case $(p,i)\ne(2,1)$, and in case $(p,i)=(2,1)$,
it does from the fact that
\[
\psi_2(z+2)-\psi_2(z)\equiv1\mod 2,\quad z\in \Z_2.
\]
Therefore we have
\[
kp^i\left(\frac{B_{m'}}{A_{m'}}-\frac{B_{m}}{A_{m}}\right)\equiv -p^n\frac{B_{m'}}{A_{m'}}+p^nB_0
\mod p^{i+n}.
\]
By \eqref{b0-lem-eq3}, the right hand side vanishes.
This is the desired assertion.
\end{pf}
\begin{lem}\label{cong-lem4}
Put $S_m:=\sum_{i+j=m}A_{i+p^n}B_{j}-A_iB_{j+p^n}$ for $m\in \Z_{\geq0}$.
Then \[
S_m\equiv\sum_{i+j=m}(A_{i+p^n}A_{j}-A_iA_{j+p^n})\frac{B_{j}}{A_{j}}\mod p^n.
\]
\end{lem}
\begin{pf}
\begin{align*}
S_m
&=\sum_{i+j=m}A_{i+p^n}B_{j}-A_iA_{j+p^n}\frac{B_{j+p^n}}{A_{j+p^n}}\\
&\equiv\sum_{i+j=m}A_{i+p^n}B_{j}-A_iA_{j+p^n}\frac{B_{j}}{A_{j}}\mod p^n\quad(\mbox{Lemma \ref{cong-lem3}})\\
&=\sum_{i+j=m}(A_{i+p^n}A_{j}-A_iA_{j+p^n})\frac{B_{j}}{A_{j}}
\end{align*}
as required.
\end{pf}
\begin{lem}\label{cong-lem6}
\[
S_m\equiv\sum_{i+j=m}
(A^{(1)}_{\lfloor j/p\rfloor}A^{(1)}_{\lfloor i/p\rfloor+p^{n-1}}
-A^{(1)}_{\lfloor i/p\rfloor}A^{(1)}_{\lfloor j/p\rfloor+p^{n-1}}
)\frac{A_i}{A^{(1)}_{\lfloor i/p\rfloor}}\frac{A_j}{A^{(1)}_{\lfloor j/p\rfloor}}
\frac{B_j}{A_j}\mod p^n.
\]
\end{lem}
\begin{pf}
This follows from Lemma \ref{cong-lem4} and Lemma \ref{cong-lem5}.
\end{pf}
\begin{lem}\label{cong-lem7}
For all $m,k\in \Z_{\geq0}$ and $0\leq l\leq n$,
we have
\begin{equation}\label{cong-lem7-eq1}
\sum_{\underset{i\equiv k\text{ mod }p^{n-l}}{i+j=m}}A_iA_{j+p^{n-1}}-A_jA_{i+p^{n-1}}
\equiv0
\mod p^l.
\end{equation}
\end{lem}
\begin{pf}
There is nothing to prove in case $l=0$. If $l=n$, then \eqref{cong-lem7-eq1} is obvious as
\[
\mbox{LHS}=\sum_{i+j=m}A_iA_{j+p^{n-1}}-A_jA_{i+p^{n-1}}=0.
\]
Suppose that $1\leq l\leq n-1$.
Let $A^{(r)}_i$ be as in \eqref{cong-A}.
For $r,k\in \Z_{\geq0}$ we put
\[
F^{(r)}(t):=\sum_{i=0}^\infty A^{(r)}_it^i,
\]
\begin{equation}\label{cong-lem7-eq3}
F^{(r)}_k(t):=\sum_{i\equiv k\text{ mod }p^{n-l}} A^{(r)}_it^i
=p^{-n+l}\sum_{s=0}^{p^{n-l}-1}\zeta^{-sk}F^{(r)}(\zeta^st)
\end{equation}
where $\zeta$ is a primitive $p^{n-l}$-th root of unity.
Then \eqref{cong-lem7-eq1} is equivalent to
\begin{equation}\label{cong-lem7-eq2}
F_k(t)F_{m-k}(t)_{<p^{n-1}}\equiv F_k(t)_{<p^{n-1}}F_{m-k}(t) \mod p^l
\end{equation}
where $F_k(t)=F^{(0)}_k(t)$.
It follows from the Dwork congruence
\cite[p.37, Thm. 2]{Dwork-p-cycle} that one has
\[
\frac{F^{(i)}(t)}{F^{(i+1)}(t^p)}\equiv
\frac{F^{(i)}(t)_{<p^m}}{[F^{(i+1)}(t^p)]_{<p^m}}\mod p^n
\]
for any $m\geq n\geq1$.
This implies
\[
\frac{F^{(i)}(t^p)}{F^{(i+1)}(t^{p^2})}\equiv
\frac{F^{(i)}(t^p)_{<p^{n+1}}}{[F^{(i+1)}(t^{p^2})]_{<p^{n+1}}}\mod p^n,\quad
\frac{F^{(i)}(t^{p^2})}{F^{(i+1)}(t^{p^3})}\equiv
\frac{F^{(i)}(t^{p^2})_{<p^{n+2}}}{[F^{(i+1)}(t^{p^3})]_{<p^{n+2}}}\mod p^n,\ldots.
\]
Hence we have
\begin{align*}
\frac{F(t)}{F^{(n-l)}(t^{p^{n-l}})}
&=\frac{F(t)}{F^{(1)}(t^p)}
\frac{F^{(1)}(t^p)}{F^{(2)}(t^{p^2})}\cdots
\frac{F^{(n-l-1)}(t^{p^{n-l-1}})}{F^{(n-l)}(t^{p^{n-l}})}\\
&\equiv\frac{[F(t)]_{<p^d}}{[F^{(1)}(t^p)]_{<p^d}}\frac{[F^{(1)}(t^p)]_{<p^d}}{[F^{(2)}(t^{p^2})]_{<p^d}}\cdots
\frac{[F^{(n-l-1)}(t^{p^{n-l-1}})]_{<p^d}}{[F^{(n-l)}(t^{p^{n-l}})]_{<p^d}}
\mod p^{d-n+l+1} \Z_p[[t]]\\
&=\frac{[F(t)]_{<p^d}}{[F^{(n-l)}(t^{p^{n-l}})]_{<p^d}},
\end{align*}
namely there are $a_i\in \Z_p$ such that
\[
\frac{F(t)}{F^{(n-l)}(t^{p^{n-l}})}=
\frac{F(t)_{<p^d}}{[F^{(n-l)}(t^{p^{n-l}})]_{<p^d}}
+p^{d-n+l+1}\sum_{i}a_it^i.
\]
Substitute $\zeta^st$ for $t$ in the above and
multiply it by
\[
\left(\frac{F(t)}{F^{(n-l)}(t^{p^{n-l}})}\right)^{-1}=
\left(\frac{F(t)_{<p^d}}{[F^{(n-l)}(t^{p^{n-l}})]_{<p^d}}
+p^{d-n+l+1}\sum_{i}a_it^i\right)^{-1}.
\]
Then we have
\[
F(\zeta^st)F(t)_{<p^d}-F(\zeta^st)_{<p^d}F(t)=p^{d-n+l+1}\sum_{i=0}^\infty b_i(\zeta^s)t^i
\]
where $b_i(x)\in \Z_p[x]$ are polynomials which do not depend on $s$.
Applying $\sum_{s=0}^{p^{n-l}-1}\zeta^{-sk}(-)$ on both sides,
one has
\[
p^{n-l}F_k(t)F(t)_{<p^d}-p^{n-l}F_k(t)_{<p^d}F(t)=p^{d-n+l+1}\sum_{i=0}^\infty
\sum_{s=0}^{p^{n-l}-1}\zeta^{-sk}b_i(\zeta^s)t^i
\]
by \eqref{cong-lem7-eq3}. Since $\sum_{s=0}^{p^{n-l}-1}\zeta^{sj}=0$ or $p^{n-l}$,
the right hand side is zero modulo $p^{d+1}$.
Therefore
\[
\frac{F_k(t)}{F(t)}\equiv
\frac{F_k(t)_{<p^d}}{F(t)_{<p^d}}
\mod p^{d-n+l+1}\Z_p[[t]].
\]
This implies
\[
\frac{F_k(t)F_j(t)_{<p^d}-F_k(t)_{<p^d}F_j(t)}{F(t)}\equiv
\frac{F_k(t)_{<p^d}F_j(t)_{<p^d}-F_k(t)_{<p^d}F_j(t)_{<p^d}}{F(t)_{<p^d}}=0\mod p^{d-n+l+1}.
\]
Now \eqref{cong-lem7-eq2} is the case $(d,j)=(n-1,m-k)$.
\end{pf}
\subsection{Proof of Congruence relations : End of proof}\label{cong-sect4}
We finish the proof of Theorem \ref{cong-thm}.
Let $S_m$ be as in Lemma \ref{cong-lem4}.
The goal is to show
\[
S_m\equiv 0\mod p^n,\quad \forall\, m\geq 0.
\]
Let us put
\[
q_i:=\frac{A_i}{A^{(1)}_{\lfloor i/p\rfloor}},\quad
A(i,j):=A^{(1)}_iA^{(1)}_j,\quad A^*(i,j):=A(j,i+p^{n-1})-A(i,j+p^{n-1})
\]
\[
B(i,j):=A^*(\lfloor i/p\rfloor,\lfloor j/p\rfloor).
\]
Then
\[
S_m\equiv\sum_{i+j=m}B(i,j)q_iq_j\frac{B_j}{A_j}\mod p^n
\]
by Lemma \ref{cong-lem6}.
It follows from Lemma \ref{cong-lem3} and Lemma \ref{cong-lem5} that we have
\begin{equation}\label{pf-eq1}
k\equiv k'\mod p^i\quad\Longrightarrow\quad \frac{B_k}{A_k}\equiv\frac{B_{k'}}{A_{k'}},
\,q_k\equiv q_{k'}
\mod p^{i}.
\end{equation}
By Lemma \ref{cong-lem7}, we have
\begin{equation}\label{pf-eq2}
\sum_{\underset{i\equiv k\text{ mod }p^{n-l}}{i+j=s}}A^*(i,j)\equiv0 \mod p^l,\quad 0\leq l\leq n
\end{equation}
for all $s\geq0$.
Let
$m=l+sp$ with $l\in\{0,1,\ldots,p-1\}$.
Note
\[
B(i,m-i)=\begin{cases}
A^*(k,s-k)&kp\leq i\leq kp+l\\
A^*(k,s-k-1)&kp+l< i\leq (k+1)p-1.
\end{cases}
\]
Therefore
\begin{align*}
S_{m}&\equiv \sum_{i+j=m}B(i,j)q_iq_j\frac{B_j}{A_j}\mod p^n\\
&=
\sum_{i=0}^{p-1}\sum_{k=0}^{\lfloor(m-i)/p\rfloor}B(i+kp,m-(i+kp))q_{i+kp}q_{m-(i+kp)}
\frac{B_{m-(i+kp)}}{A_{m-(i+kp)}}
\\
&=
\sum_{k=0}^s\sum_{i=0}^{l}
B(i+kp,m-(i+kp))\,q_{i+kp}q_{m-(i+kp)}
\frac{B_{m-(i+kp)}}{A_{m-(i+kp)}}
\\
&\quad+\sum_{k=0}^{s-1}\sum_{i=l+1}^{p-1}
B(i+kp,m-(i+kp))\,q_{i+kp}q_{m-(i+kp)}
\frac{B_{m-(i+kp)}}{A_{m-(i+kp)}}
\\
&=
\sum_{k=0}^s
A^*(k,s-k)
\overbrace{\left(\sum_{i=0}^{l}q_{i+kp}q_{m-(i+kp)}
\frac{B_{m-(i+kp)}}{A_{m-(i+kp)}}\right)}^{P_k}
\\
&\quad+\sum_{k=0}^{s-1}
A^*(k,s-k-1)
\underbrace{\left(\sum_{i=l+1}^{p-1}q_{i+kp}q_{m-(i+kp)}
\frac{B_{m-(i+kp)}}{A_{m-(i+kp)}}\right)}_{Q_k}.
\end{align*}
We show that the first term vanishes modulo $p^n$.
It follows from \eqref{pf-eq1} that we have
\begin{equation}\label{pf-eq3}
k\equiv k'\mod p^i\quad\Longrightarrow\quad
P_k\equiv P_{k'}
\mod p^{i+1}.
\end{equation}
Therefore one can write
\[
\sum_{k=0}^{s}A^*(k,s-k)P_k\equiv\sum_{i=0}^{p^{n-1}-1}P_i
\overbrace{\left(\sum_{k\equiv i\text{ mod } p^{n-1}}A^*(k,s-k)
\right)}^{(\ast)}\mod p^n.
\]
It follows from \eqref{pf-eq2} that $(\ast)$ is zero modulo $p$.
Therefore, again by \eqref{pf-eq3}, one can rewrite
\[
\sum_{k=0}^{s}A^*(k,s-k)P_k\equiv\sum_{i=0}^{p^{n-2}-1}P_i
\overbrace{\left(\sum_{k\equiv i\text{ mod } p^{n-2}}A^*(k,s-k)
\right)}^{(\ast\ast)}\mod p^n.
\]
It follows from \eqref{pf-eq2} that $(\ast\ast)$ is zero modulo $p^2$, so that one has
\[
\sum_{k=0}^{s}A^*(k,s-k)P_k\equiv\sum_{i=0}^{p^{n-3}-1}P_i
\left(\sum_{k\equiv i\text{ mod } p^{n-3}}A^*(k,s-k)
\right)\mod p^n
\]
by \eqref{pf-eq3}.
Continuing the same discussion, one finally obtains
\[
\sum_{k=0}^{s}A^*(k,s-k)P_k\equiv
P_0\sum_{k=0}^sA^*(k,s-k)=0\mod p^n,
\]
which proves the vanishing of the first term.
In the same way one can show the vanishing of the second term,
\[
\sum_{k=0}^{s-1}A^*(k,s-1-k)Q_k\equiv0\mod p^n.
\]
We thus have $S_m\equiv 0$ mod $p^n$.
This completes the proof of Theorem \ref{cong-thm}.
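The congruence just proved can also be observed directly. The sketch below is our own illustration with sample data $s=1$, $\ul a=(1/2)$, $p=3$, $n=1$, $c=1$ (helper names ours); it evaluates $S_m$ in exact arithmetic, with $B_0$ replaced by its mod-$p^3$ approximation from Lemma \ref{b0-lem}, which perturbs $S_m$ only by a multiple of $p^3$.

```python
from fractions import Fraction as Fr
from math import factorial

# Sample data (our choice): s = 1, a = 1/2 (its own Dwork prime), p = 3, n = 1, c = 1.
p, a, n = 3, Fr(1, 2), 1

def poch(x, k):
    r = Fr(1)
    for i in range(k):
        r *= x + i
    return r

def A(k):
    return poch(a, k) / factorial(k)

def vp(x):
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return v

N = 3                                        # B_0 is only needed mod p^n, and n < N
B0 = (1 - A(p ** (N - 1)) / A(p ** N)) / p ** N

def B(i):
    if i == 0:
        return B0
    if i % p:
        return A(i) / i
    return (A(i) - A(i // p)) / i

q = p ** n
for m in range(0, 8):
    S = sum(A(i + q) * B(m - i) - A(i) * B(m - i + q) for i in range(m + 1))
    assert S == 0 or vp(S) >= n              # S_m ≡ 0 mod p^n
```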
\section{Geometric aspect of $p$-adic hypergeometric functions of logarithmic type}
\label{reg-sect}
We mean by a {\it fibration} over a ring $R$ a projective flat morphism
of quasi-projective smooth $R$-schemes.
Let $X$ be a smooth $R$-scheme. We mean by a relative {\it normal crossing divisor}
(abbreviated to NCD) in $X$ over $R$
a divisor in $X$ which is locally defined by an equation $x^{r_1}_1\cdots x_s^{r_s}$
where $r_i>0$ are integers and $(x_1,\ldots,x_n)$ is a local coordinate of $X/R$.
We say that a divisor $D$ is {\it simple} if $D$ is a union of $R$-smooth divisors.
\subsection{Hypergeometric Curves}
\label{fermat-sect}
Let $A$ be a commutative ring.
We denote by $\P^1_A(Z_0,Z_1)$ the projective line over $A$ with homogeneous coordinate
$(Z_0:Z_1)$.
Let $N,M\geq 2$ be integers which are invertible in $A$.
Let $t\in A$ such that $t(1-t)\in A^\times$.
Define $X$ to be a projective scheme over $A$ defined by a bihomogeneous equation
\begin{equation}\label{HGC}
(X_0^N-X_1^N)(Y_0^M-Y_1^M)=tX^N_0Y_0^M
\end{equation}
in $\P^1_A(X_0,X_1)\times\P^1_A(Y_0,Y_1)$.
We call it
a {\it hypergeometric curve} over $A$.
The morphism $X\to\Spec A$ is smooth projective
with connected fibers of relative dimension one, and
the genus of a geometric fiber is $(N-1)(M-1)$ (e.g. by the Hurwitz formula).
We put $x:=X_1/X_0$ and $y:=Y_1/Y_0$, and often refer to an affine equation
\[
(1-x^N)(1-y^M)=t.
\]
In what follows, we only consider the case $A=W[t,(t-t^2)^{-1}]$ where
$W$ is a commutative ring in which $NM$ is invertible.
Then the morphism $X\to \Spec W[t,(t-t^2)^{-1}]$ extends to a projective flat morphism
\[
f:Y\lra \P^1_W=\P^1_W(T_0,T_1)
\]
of smooth $W$-schemes with $t:=T_1/T_0$ in the following way.
Let
\[
Y_s:=\{(X_0:X_1)\times(Y_0:Y_1)\times(T_0:T_1)\mid
T_0(X_0^N-X_1^N)(Y_0^M-Y_1^M)=T_1X^N_0Y_0^M\}
\]
and let $f_s:Y_s\to \P^1_W$ be the 3rd projection, which is smooth over $S=\Spec W[t,(t-t^2)^{-1}]\subset \P^1_W$.
In a neighborhood of the fiber $f_s^{-1}(t=0)$, $Y_s$ is smooth over $W$ where
$t=0$ denotes the closed subscheme $\Spec W[t]/(t)$.
Moreover
the fiber $f_s^{-1}(t=0)$ is a simple relative NCD over $W$,
and all multiplicities of components of $f_s^{-1}(t=0)$ are one.
In a neighborhood of the fiber $f_s^{-1}(t=1)$, $Y_s$ is also smooth over $W$.
However we note that
the fiber $f_s^{-1}(t=1)$ is irreducible but not a NCD.
Finally, in a neighborhood of the fiber $f_s^{-1}(t=\infty)$ there are singular loci in $Y_s$.
Put $z=1/x$, $w=1/y$ and $s=1/t$.
Then the singular loci are $\{s=1-z^N=w=0\}$ and $\{s=z=1-w^M=0\}$, and
the neighborhood of each locus
is locally isomorphic to
\[\Spec W[[u_1,u_2,u_3]]/(u_1u_2-u_3^k),\quad
k=N,M.
\]
There is then a standard way to resolve the singularities, so that we obtain
a smooth projective $W$-scheme $Y$.
\medskip
Summing up the above construction, we have
\begin{lem}\label{int.model-lem}
Suppose that $NM$ is invertible in $W$.
Then there is a morphism
\[
f:Y\lra \P^1_W=\P^1_W(T_0,T_1)
\]
of smooth projective $W$-schemes satisfying the following.
\begin{enumerate}
\item[$(1)$]
Let $S:=\Spec W[t,(t-t^2)^{-1}]\subset \P^1_W$ with $t:=T_1/T_0$.
Then $X=f^{-1}(S)\to S$ is the hypergeometric curve \eqref{HGC}.
\item[$(2)$]
$f$ has a semistable reduction at $t=0$.
The fiber $D:=f^{-1}(t=0)$ is a relative simple NCD, and
the multiplicities of the components are one.
\end{enumerate}
\end{lem}
\begin{rem}
One can further resolve $Y$ so that we have $\wt f:\wt Y\to \P^1_W$ such that
$\wt f^{-1}(t=1)$ and $\wt f^{-1}(t=\infty)$ are relative simple NCD's, while
the multiplicities are not necessarily one.
\end{rem}
\subsection{Gauss-Manin connection}
In this section we assume that $W$ is an integral domain of characteristic zero.
Let $K=\Frac W$ be the fraction field.
For a $W$-scheme $Z$ and a $W$-algebra $R$, we write $Z_R=Z\times_W R$.
The group $\mu_N\times\mu_M=\mu_N(\ol K)\times \mu_M(\ol K)$ acts on $X_{\ol K}$ in the following way
\begin{equation}\label{fermat-ss}
[\zeta, \nu]\cdot(x,y,t)=(\zeta x,\nu y,t),\quad(\zeta ,\nu )\in \mu_N\times \mu_M.
\end{equation}
We denote by $V(i,j)$ the subspace on which $(\zeta ,\nu )$ acts by multiplication
by $\zeta^i\nu^j$ for all $(\zeta ,\nu )$.
Then one has the eigen decomposition
\[
H^1_\dR(X_{\ol K}/ S_{\ol K})=\bigoplus_{i=1}^{N-1}\bigoplus_{j=1}^{M-1}
H^1_\dR(X_{\ol K}/S_{\ol K})(i,j),
\]
and each eigenspace $H^1_\dR(X_{\ol K}/ S_{\ol K})(i,j)$ is free of rank $2$ over
$\O(S_{\ol K})$
(\cite[Lemma 2.2]{A}).
Put
\begin{equation}\label{fermat-form-ab}
a_i:=1-\frac{i}{N},\quad
b_j:=1-\frac{j}{M}.
\end{equation}
Let
\begin{equation}\label{fermat-form}
\omega_{i,j}:=N\,\frac{x^{i-1}y^{j-M}}{1-x^N}dx
=-M\,\frac{x^{i-N}y^{j-1}}{1-y^M}dy,
\end{equation}
\begin{equation}\label{fermat-form-2}
\eta_{i,j}:
=\frac{1}{x^N-1+t}\omega_{i,j}=Mt^{-1}x^{i-N}y^{j-M-1}dy
\end{equation}
for integers $i,j$ such that $1\leq i\leq N-1,\,1\leq j\leq M-1$.
Then $\omega_{i,j}$ is a differential of the first kind, and $\eta_{i,j}$ is of the second kind.
They form a $\O(S_{\ol K})$-free basis of $H^1_\dR(X_{\ol K}/S_{\ol K})(i,j)$.
According to this, we put
\[H_\dR^1(X_K/S_K)(i,j):=\O(S_K)\omega_{i,j}+\O(S_K)\eta_{i,j}\subset H^1_\dR(X_K/S_K).
\]
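The equality of the two expressions in \eqref{fermat-form} comes from differentiating the affine equation $(1-x^N)(1-y^M)=t$ along a fiber ($t$ constant), which gives $dy/dx=-Nx^{N-1}(1-y^M)/(My^{M-1}(1-x^N))$; substituting this into the second expression recovers the first. The sketch below is our own illustration with sample data $N=3$, $M=4$, $(i,j)=(2,3)$ (helper names ours), confirming the resulting rational-function identity at sample points:

```python
from fractions import Fraction as Fr

# Sample data (our choice): N = 3, M = 4, (i, j) = (2, 3).
N, M, i, j = 3, 4, 2, 3

def lhs(x, y):
    # N x^{i-1} y^{j-M} / (1 - x^N): the coefficient of dx in the first expression
    return N * x ** (i - 1) * y ** (j - M) / (1 - x ** N)

def rhs(x, y):
    # -M x^{i-N} y^{j-1} dy / (1 - y^M), rewritten as a multiple of dx
    # via dy/dx = -N x^{N-1} (1 - y^M) / (M y^{M-1} (1 - x^N))
    dydx = -N * x ** (N - 1) * (1 - y ** M) / (M * y ** (M - 1) * (1 - x ** N))
    return -M * x ** (i - N) * y ** (j - 1) / (1 - y ** M) * dydx

for x, y in [(Fr(1, 2), Fr(2, 3)), (Fr(3), Fr(5, 7)), (Fr(-2), Fr(1, 5))]:
    assert lhs(x, y) == rhs(x, y)
```

Since the factor $(1-y^M)$ cancels, the identity holds for arbitrary $(x,y)$, not only for points on the curve.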
Let
\[
F_{a,b}(t):={}_2F_1\left({a,b\atop 1};t\right)=\sum_{i=0}^\infty\frac{(a)_i}{i!}\frac{(b)_i}{i!}
t^i\in K[[t]]
\]
be the hypergeometric series.
Put
\begin{equation}\label{fermat-form-wt}
\wt\omega_{i,j}:=\frac{1}{F_{a_i,b_j}(t)}\omega_{i,j},\quad
\wt\eta_{i,j}:=-t(1-t)^{a_i+b_j}(F'_{a_i,b_j}(t)\omega_{i,j}+b_jF_{a_i,b_j}(t)\eta_{i,j}).
\end{equation}
For later use, we extend the notation $V(i,j),\omega_{i,j},\eta_{i,j},\wt\omega_{i,j},\wt\eta_{i,j}$
to $(i,j)$ not necessarily a pair of integers.
Let $(i,j)=(q/r,q'/r')\in \Q^2$ such that $\gcd(r,N)=\gcd(r',M)=1$ and
$N\not| q$ and $M\not| q'$.
Let $i_0$, $j_0$ be the unique integers such that $i_0\equiv i$ mod $N$,
$j_0\equiv j$ mod $M$ and $1\leq i_0<N$, $1\leq j_0<M$. Then we define
\begin{equation}\label{rule}
V(i,j)=V(i_0,j_0),\quad \omega_{i,j}=\omega_{i_0,j_0},\quad\ldots\quad
\wt\eta_{i,j}=\wt\eta_{i_0,j_0}.
\end{equation}
\begin{prop}\label{fermat-GM}
Let $\nabla:H^1_\dR(X_K/S_K)\to\O(S_K)dt\ot H^1_\dR(X_K/S_K)$
be the Gauss-Manin connection. It naturally extends to $K((t))\ot_{\O(S)}H^1_\dR(X_K/S_K)$,
and we also write $\nabla$ for the extension. Then
\[
\begin{pmatrix}
\nabla(\omega_{i,j})&\nabla(\eta_{i,j})
\end{pmatrix}
=dt\ot\begin{pmatrix}
\omega_{i,j}&\eta_{i,j}
\end{pmatrix}
\begin{pmatrix}
0&-a_i(t-t^2)^{-1}\\
-b_j&(-1+(1+a_i+b_j)t)(t-t^2)^{-1}
\end{pmatrix},
\]
\[
\begin{pmatrix}
\nabla(\wt\omega_{i,j})&\nabla(\wt\eta_{i,j})
\end{pmatrix}
=dt\ot\begin{pmatrix}
\wt\omega_{i,j}&\wt\eta_{i,j}
\end{pmatrix}
\begin{pmatrix}
0&0\\
t^{-1}(1-t)^{-a_i-b_j}F_{a_i,b_j}(t)^{-2}&0
\end{pmatrix}.
\]
\end{prop}
\begin{pf}
We may replace the base field with $\C$.
Since $\nabla$ is commutative with the action of $\mu_N(\C)\times\mu_M(\C)$,
$\nabla$ preserves the eigen components $H^1_\dR(X/S)(i,j)$.
We regard $X$ and $S$ as complex manifolds.
For $\alpha\in \C\setminus\{0,1\}$ we write $X_\alpha=f^{-1}(t=\alpha)$.
Then there is a homology cycle $\delta_\alpha\in H_1(X_\alpha,\Q)$
such that
\[
\int_{\delta_\alpha}\omega_{i,j}=2\pi\sqrt{-1}\,{}_2F_1\left({a_i,b_j\atop 1};\alpha\right)
\]
(\cite[Lemma 2.3]{A}).
Let $\partial_t=\nabla_{\frac{d}{dt}}$ be the differential operator on
$\O(S)^{an}\ot_{\O(S)} H^1_\dR(X/S)$.
Put $D=t\partial_t$, and let $P_{\mathrm{HG}}=D^2-t(D+a_i)(D+b_j)$ be the hypergeometric
differential operator. Since $P_{\mathrm{HG}}$ annihilates ${}_2F_1\left({a_i,b_j\atop 1};t\right)$, we have
\[
\int_{\delta_\alpha}P_{\mathrm{HG}}(\omega_{i,j})=0.
\]
Since $H_1(X_\alpha,\C)(i,j)$ is a 2-dimensional irreducible $\pi_1(S,\alpha)$-module,
we have $\int_{\gamma}P_{\mathrm{HG}}(\omega_{i,j})=0$ for all
$\gamma\in\pi_1(S,\alpha)$ which means
\begin{equation}\label{fermat-GM-eq1}
P_{\mathrm{HG}}(\omega_{i,j})=(D^2-t(D+a_i)(D+b_j))(\omega_{i,j})=0.
\end{equation}
Next we show
\begin{equation}\label{fermat-GM-eq2}
\partial_t(\omega_{i,j})=-b_j\eta_{i,j}.
\end{equation}
Write $\omega_{i,j}=y^{j-M}\phi$, and regard it as an element of
$\vg(U,\Omega^1_{X/\C})$ with $U=\{x\ne\infty,y\ne\infty\}\subset X$. Since $\phi$ is a linear combination of $dx/(x-\nu)$'s,
one has $d(\phi)=0$.
Therefore
\[
d(\omega_{i,j})=
d(y^{j-M})\wedge\phi=(j-M)y^{j-M-1}dy\wedge\phi\in \vg(U,\Omega^2_{X/\C}).
\]
Applying
\[
\frac{dt}{t}=\frac{Nx^{N-1}dx}{x^N-1}+\frac{My^{M-1}dy}{y^M-1}
\]
to the above, we have
\[
d(\omega_{i,j})=(j-M)y^{j-M-1}\frac{y^M-1}{Mty^{M-1}}dt\wedge\phi
=-(1-j/M)\frac{y^M-1}{ty^{M}}dt\wedge\omega_{i,j}
=-b_jdt\wedge\eta_{i,j},
\]
which means $\nabla(\omega_{i,j})=-b_jdt\ot\eta_{i,j}$. This completes the proof of
\eqref{fermat-GM-eq2}.
Now all the formulas on $\nabla$ follow from \eqref{fermat-GM-eq1} and
\eqref{fermat-GM-eq2}.
\end{pf}
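At the level of Taylor coefficients, \eqref{fermat-GM-eq1} is the classical statement that ${}_2F_1(a_i,b_j;1;t)$ is annihilated by $P_{\mathrm{HG}}$, i.e. $n^2A_n=(n-1+a_i)(n-1+b_j)A_{n-1}$ for its coefficients $A_n$. The sketch below is our own illustration with sample data $(N,M)=(3,2)$ and $(i,j)=(1,1)$, so $(a_i,b_j)=(2/3,1/2)$ (helper names ours); it checks this recursion exactly:

```python
from fractions import Fraction as Fr
from math import factorial

# Sample data (our choice): a = a_1 = 1 - 1/3 = 2/3, b = b_1 = 1 - 1/2 = 1/2.
a, b = Fr(2, 3), Fr(1, 2)

def poch(x, n):
    # Pochhammer symbol (x)_n
    r = Fr(1)
    for i in range(n):
        r *= x + i
    return r

def A(n):
    # n-th Taylor coefficient of 2F1(a, b; 1; t)
    return poch(a, n) * poch(b, n) / factorial(n) ** 2

# comparing coefficients of t^n in D^2 F = t (D + a)(D + b) F, with D = t d/dt
for n in range(1, 15):
    assert n ** 2 * A(n) == (n - 1 + a) * (n - 1 + b) * A(n - 1)
```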
The following is straightforward from Proposition \ref{fermat-GM}.
\begin{cor}\label{fermat-GM-cor1}
Let $\nabla_{i,j}$ be the connection on the eigen component
$H_{i,j}:=K((t))\ot_{\O(S)}H^1_\dR(X_K/S_K)(i,j)$. Then
$\ker\nabla_{i,j}=K\wt{\eta}_{i,j}$.
Moreover let $\ol\nabla_{i,j}$ be the connection on $M_{i,j}:=H_{i,j}/K((t))\wt\eta_{i,j}$
induced from $\nabla_{i,j}$.
Then $\ker\ol\nabla_{i,j}=K\wt{\omega}_{i,j}$.
\end{cor}
We mean by
a semistable family
$g:\cX\to \Spec R[[t]]$ over a commutative ring $R$
that $g$ is a proper flat morphism, smooth
over $\Spec R((t))$ and it is locally described by
\[
g:\Spec R[[x_1,\ldots,x_n]]\lra \Spec R[[t]],\quad g^*(t)=x_1\cdots x_r
\]
in each formal neighborhood.
Let $D$ be the fiber at $\Spec R[[t]]/(t)$, which is a relative NCD in $\cX$ over $R$
with no multiplicities.
Let ${\mathscr U}:=\cX\setminus D$.
We define the log de Rham complex $\omega^\bullet_{\cX/R[[t]]}$
to be the subcomplex of $\Omega^\bullet_{{\mathscr U}/R((t))}$ generated by $dx_i/x_i$
($1\leq i\leq r$) and $dx_j$ ($j>r$) over $\O_\cX$.
Equivalently,
\begin{equation}\label{log-dR-cpx}
\omega^\bullet_{\cX/R[[t]]}:=\Coker\left[
\frac{dt}{t}\ot\Omega^{\bullet-1}_{\cX/R}(\log D)
\to \Omega^{\bullet}_{\cX/R}(\log D)\right]
\end{equation}
where $\Omega^{\bullet}_{\cX/R}(\log D)$ denotes the $t$-adic completion of
the complex of the
algebraic K\"ahler differentials,
\[
\Omega^{\bullet}_{\cX/R}(\log D):=
\varprojlim_{n\geq1}\big(\Omega^{\bullet,\text{alg}}_{\cX/R}(\log D)/t^n\Omega^{\bullet,\text{alg}}_{\cX/R}(\log D)\big).
\]
\begin{cor}\label{fermat-GM-cor2}
Let $f:Y\to\P^1_W$ be the morphism of projective smooth $W$-schemes in Lemma \ref{int.model-lem}.
Let $Y_K:=Y\times_WK$.
Let $\Delta_K:=\Spec K[[t]]\hra \P^1_K$ be the formal neighborhood, and put
$\cY_K:=f^{-1}(\Delta_K)$. Let $D_K\subset\cY_K$ be the fiber at $t=0$.
We have a commutative diagram
\[
\xymatrix{
D_K\ar[r]\ar[d]&\cY_K\ar[d]\ar[r]&Y_K\ar[d]\\
\{0\}\ar[r]&\Delta_K\ar[r]&\P^1_K
}\]
Put $H_K:=H^1_\zar(\cY_K,\omega^\bullet_{\cY_K/K[[t]]})$.
It follows from \cite[(2.18)--(2.20)]{steenbrink} or
\cite[(17)]{zucker} that $H_K\to K((t))\ot_{\O(S)} H^1_\dR(X/S)$ is injective.
We identify $H_K$ with its image.
Then the eigen component
$H_K(i,j)$ is a free $K[[t]]$-module with basis
$\{\wt\omega_{i,j},\wt\eta_{i,j}\}$.
\end{cor}
\begin{pf}
$H_K$ is called {\it Deligne's canonical extension}, and
is characterized by the following conditions (\cite[(17)]{zucker}).
\begin{description}
\item[(D1)] $H_K$ is a free $K[[t]]$-module such that $K((t))\ot H_K=K((t))\ot_{\O(S)} H^1_\dR(X/S)$,
\item[(D2)]
the connection extends to one with log poles,
$\nabla:H_K\to\frac{dt}{t}\ot H_K$,
\item[(D3)]
each eigenvalue $\alpha$ of $\Res(\nabla)$ satisfies $0\leq \mathrm{Re}(\alpha)<1$,
where $\Res(\nabla)$ is the $K$-linear endomorphism defined by a commutative diagram
\[
\xymatrix{
H_K\ar[r]^\nabla\ar[d]&
\frac{dt}{t}\ot H_K\ar[d]^{\Res\ot1}\\
H_K/tH_K\ar[r]^{\Res(\nabla)}&H_K/tH_K.
}
\]
\end{description}
Put $H^0_K:=\bigoplus_{i,j}K[[t]]\wt\omega_{i,j}+K[[t]]\wt\eta_{i,j}$.
We can directly check
that $H^0_K$ satisfies {\bf(D1)}--{\bf(D3)} by Proposition \ref{fermat-GM}.
We then conclude $H_K=H^0_K$ thanks to the uniqueness of Deligne's canonical extension.
\end{pf}
\subsection{Rigid cohomology and a category $\FilFMIC(S)$}
In what follows the base ring $W$ is the Witt ring $W(\ol\F_p)$ with $p$ prime to $NM$.
Put $K:=\Frac W$, the fraction field.
We denote the $p$-th Frobenius on $W$ by $F$.
\medskip
Let $X\to S$ be the hypergeometric curve as before.
Write $X_{\ol\F_p}:=X\times_{W}\ol\F_p$ and $S_{\ol\F_p}:=S\times_{W}\ol\F_p$.
Let $\sigma$ be an $F$-linear $p$-th Frobenius on $W[t,(t-t^2)^{-1}]^\dag$, the ring of
overconvergent power series; it naturally extends to $K[t,(t-t^2)^{-1}]^\dag:=
K\ot W[t,(t-t^2)^{-1}]^\dag$.
Then the {\it rigid cohomology} groups
\[
H^\bullet_\rig(X_{\ol\F_p}/S_{\ol\F_p})
\]
are defined. We refer to the book \cite{LS} for the general theory of rigid cohomology.
The properties required below are the following.
\begin{itemize}
\item
$H^\bullet_\rig(X_{\ol\F_p}/S_{\ol\F_p})$ is a finitely generated
$\O(S)^\dag=K[t,(t-t^2)^{-1}]^\dag$-module.
\item(Frobenius)
The $p$-th Frobenius $\Phi_{X/S}$ on $H^\bullet_\rig(X_{\ol\F_p}/S_{\ol\F_p})$ (depending on $\sigma$)
is defined in a natural way. This is a $\sigma$-linear endomorphism:
\[
\Phi_{X/S}(f(t)x)=\sigma(f(t))\Phi_{X/S}(x),\quad \mbox{for }x\in
H^\bullet_\rig(X_{\ol\F_p}/S_{\ol\F_p}),\,
f(t)\in \O(S)^\dag.
\]
\item(Comparison)
There is the comparison isomorphism with the algebraic de Rham cohomology,
\[
c:H^\bullet_\rig(X_{\ol\F_p}/S_{\ol\F_p})\cong H^\bullet_\dR(X/S)
\ot_{\O(S)} \O(S)^\dag.
\]
\end{itemize}
In \cite[2.1]{AM} we introduced
a category $\FilFMIC(S)=\FilFMIC(S,\sigma)$.
It consists of data $(H_{\dR}, H_{\rig}, c, \Phi, \nabla, \Fil^{\bullet})$ such that
\begin{itemize}
\setlength{\itemsep}{0pt}
\item $H_{\dR}$ is a finitely generated $\O(S)$-module,
\item $H_{\rig}$ is a finitely generated $\O(S)^\dag$-module,
\item $c\colon H_\rig\cong H_{\dR}\otimes_{\O(S)}
\O(S)^\dag$ is the comparison isomorphism,
\item $\Phi\colon \sigma^{\ast}H_{\rig}\xrightarrow{\,\,\cong\,\,} H_{\rig}$ is an isomorphism of $\O(S)^\dag$-modules,
\item $\nabla\colon H_{\dR}\to \Omega_{S/\Q_p}^1\otimes H_{\dR}$ is an integrable connection
that satisfies $\Phi\nabla=\nabla\Phi$.
\item $\Fil^{\bullet}$ is a finite descending filtration on $H_{\dR}$ by locally free
$\O(S)$-modules (i.e. each graded piece is locally free),
satisfying $\nabla(\Fil^i)\subset \Omega^1\ot \Fil^{i-1}$.
\end{itemize}
Let $\mathrm{Fil}^\bullet$ denote the Hodge filtration on the de Rham cohomology,
and $\nabla$ the Gauss-Manin connection.
Let
\[
H^i(X/S):=(H^i_\dR(X/S),H^i_\rig(X_{\ol\F_p}/S_{\ol\F_p}),c,\Phi_{X/S},\nabla,\mathrm{Fil}^\bullet)
\]
be an object of $\FilFMIC(S)$.
For an integer $r$, the Tate object $\O_S(r)\in \FilFMIC(S)$
is defined in a customary way (loc.cit.).
We simply write
\[
M(r)=M\ot \O_S(r)
\]
for an object $M\in \FilFMIC(S)$.
\begin{lem}\label{frobenius-lem}
Suppose that $\sigma$ is given by $\sigma(t)=ct^p$ with some $c\in 1+pW$.
Then, with the notation in Corollary \ref{fermat-GM-cor2},
the Frobenius $\Phi_{X/S}$ induces the action on
$H_K$ in a natural way.
\end{lem}
\begin{pf}
Let $W((t))^\wedge$ be the $p$-adic completion and write $K((t))^\wedge:=K\ot_WW((t))^\wedge$ on which $\sigma$ extends as $\sigma(t)=ct^p$.
The Frobenius $\Phi_{X/S}$ on $H^1_\dR(X/S)\ot_{\O(S)}\O(S)^\dag$ naturally extends
to $H^1_\dR(X/S)\ot_{\O(S)}K((t))^\wedge$ via the homomorphism
$\O(S)^\dag\to K((t))^\wedge$. We show that the action of $\Phi_{X/S}$ preserves
the subspace $H_K$.
Let $f:Y\to\P^1_W$ be the morphism of projective smooth $W$-schemes in Lemma \ref{int.model-lem}.
Consider the commutative diagram
\[
\xymatrix{
D_W\ar[r]\ar[d]&\cY\ar[d]\ar[r]&Y\ar[d]\\
0\ar[r]&\Delta_W\ar[r]&\P^1_W
}\]
where $\Delta_W:=\Spec W[[t]]\hra \P^1_W$ is the formal neighborhood, and
$0=\Spec W[[t]]/(t)$.
Note that $D_W$ is reduced, namely $\cY/\Delta_W$ has a semistable reduction.
Write $\cY_{\ol\F_p}:=\cY\times_W\ol\F_p$ etc.
We employ the log-crystalline cohomology
\begin{equation}\label{log-crys}
H^\bullet_{\text{log-crys}}((\cY_{\ol\F_p},D_{\ol\F_p})/(\Delta_W,0))
\end{equation}
where $(\cX,\cD)$ denotes the log scheme with log structure
induced by the divisor $\cD$.
There is the comparison theorem by Kato \cite[Theorem 6.4]{Ka-log},
\begin{equation}\label{log-crys-isom}
H^\bullet_{\text{log-crys}}((\cY_{\ol\F_p},D_{\ol\F_p})/(\Delta_W,0))
\cong
H^\bullet(\cY,\omega^\bullet_{\cY/W[[t]]})
\end{equation}
(see \eqref{log-dR-cpx} for the complex $\omega^\bullet_{\cY/W[[t]]}$).
The log-crystalline cohomology is endowed with
the $p$-th Frobenius $\Phi_{(\cY,D_W)}$
which is compatible with $\Phi_{X/S}$ under the map
\begin{align*}
H^1(\cY,\omega^\bullet_{\cY/W[[t]]})&\to
H^1(\cY_K,\omega^\bullet_{\cY_K/K[[t]]})\\
&\hra
H^1_\dR(X/S)\ot_{\O(S)} K((t))\\
&\hra H^1_\dR(X/S)\ot_{\O(S)} K((t))^\wedge
\end{align*}
where $\cY_K:=\cY\times_{W[[t]]}K[[t]]$ and $D_K:=D_W\times_WK$.
Thus the assertion follows.
\end{pf}
\begin{prop}\label{frobenius-thm}
Let $\wt\omega_{i,j},\wt\eta_{i,j}$ be as in \eqref{fermat-form} and \eqref{fermat-form-2}.
Suppose that $\sigma$ is given by $\sigma(t)=ct^p$ with some $c\in 1+pW$.
Then
\[
\Phi_{X/S}(\wt\eta_{p^{-1}i,p^{-1}j})\in K\wt\eta_{i,j},\quad
\Phi_{X/S}(\wt\omega_{p^{-1}i,p^{-1}j})\equiv p\wt\omega_{i,j}\mod K((t))\wt\eta_{i,j}
\]
where we use the notation \eqref{rule}.
\end{prop}
\begin{pf}
Let $\nabla$ be the Gauss-Manin connection on $H^1_\dR(X/S)\ot_{\O(S)}K((t))$.
Since $\Phi_{X/S}\nabla=\nabla\Phi_{X/S}$, we have
$\Phi_{X/S}\ker(\nabla)\subset \ker(\nabla)$.
Moreover,
$\Phi_{X/S}$ sends the eigencomponents $H_{i,j}:=H^1_\dR(X/S)(i,j)\ot_{\O(S)}K((t))$
onto the component $H_{pi,pj}$ as
$\Phi_{X/S}[\zeta,\nu]=[\zeta,\nu]\Phi_{X/S}$.
Therefore we have
\[
\Phi_{X/S}(\wt\eta_{p^{-1}i,p^{-1}j})\in K\wt\eta_{i,j}
\]
by Corollary \ref{fermat-GM-cor1}.
We show the latter.
By Lemma \ref{frobenius-lem} together with Corollary \ref{fermat-GM-cor2},
there are $f_{i,j}(t), g_{i,j}(t)\in K[[t]]$ such that
$\Phi_{X/S}(\wt\omega_{p^{-1}i,p^{-1}j})=f_{i,j}(t)\wt\omega_{i,j}+g_{i,j}(t)\wt\eta_{i,j}$.
Put $M_{i,j}:=H_{i,j}/K((t))\wt\eta_{i,j}$.
Then $\Phi_{X/S}(M_{p^{-1}i,p^{-1}j})\subset M_{i,j}$ and $\Phi_{X/S}$ is
commutative with the connection $\ol\nabla$ on $M_{i,j}$.
Therefore $f_{i,j}(t)=C_{i,j}$ is a constant as $\ker(\ol\nabla)=K\wt\omega_{i,j}$
by Corollary \ref{fermat-GM-cor1},
\begin{equation}\label{frob-lem-eq1}
\Phi_{X/S}(\wt\omega_{p^{-1}i,p^{-1}j})=C_{i,j}\wt\omega_{i,j}+g_{i,j}(t)\wt\eta_{i,j}.
\end{equation}
We want to show $C_{i,j}=p$.
To do this, we recall the log-crystalline cohomology \eqref{log-crys}
\[
H^\bullet_{\text{log-crys}}((\cY_{\ol\F_p},D_{\ol\F_p})/(\Delta_W,0))
\cong H^\bullet(\cY,\omega^\bullet_{\cY/W[[t]]}).
\]
where we keep the notation in the proof of Lemma \ref{frobenius-lem}.
Let $Z_W$ be the intersection locus of $D_W$.
This is a disjoint union of $NM$-copies of $\Spec W$.
More precisely, let $P_{\zeta,\nu}$ be the point of $Z_W$ defined by $x=\zeta$ and
$y=\nu$.
Then $Z_W=\{P_{\zeta,\nu}\mid(\zeta,\nu)\in\mu_N\times\mu_M\}$.
We consider the composition of morphisms
\[
\omega^\bullet_{\cY/W[[t]]}\os{\wedge\frac{dt}{t}}{\lra}
\Omega^{\bullet+1}_{\cY/W}(\log D)\os{\Res}{\lra}
\O_{Z_W}[-1]
\]
of complexes, where $\Res$ is the Poincar\'e residue.
This gives rise to the map
\[
R:H^1(\cY_K,\Omega^\bullet_{\cY_K/K[[t]]}(\log D_K))\lra
H^0(Z_K,\O_{Z_K})=\bigoplus_{\zeta,\nu}K\cdot P_{\zeta,\nu}
\]
which is compatible with the Frobenius $\Phi_{X/S}$
on the left
and the Frobenius $\Phi_Z$ on the right, in the sense that
\begin{equation}\label{frobenius-thm-eq0}
R\circ \Phi_{X/S}=p\Phi_Z\circ R.
\end{equation}
Notice that $\Phi_Z$ is an $F$-linear map such that
$\Phi_Z(P_{\zeta,\nu})=P_{\zeta,\nu}$ where $F$ is the Frobenius
on $W$.
We claim
\begin{equation}\label{frobenius-thm-eq1}
R(\wt\eta_{i,j})=0,
\end{equation}
and
\begin{equation}\label{frobenius-thm-eq2}
R(\wt\omega_{i,j})=\sum_{\zeta,\nu}\zeta^i\nu^jP_{\zeta,\nu}.
\end{equation}
To show \eqref{frobenius-thm-eq1}, we recall the definition \eqref{fermat-form-wt}.
Since $R(t\wt\omega_{i,j})=0$ obviously,
it is enough to show $R(t\eta_{i,j})=0$.
However since $t\eta_{i,j}=Mx^{i-N}y^{j-M}dy$ by
\eqref{fermat-form-2}, we have
\[
R(t\eta_{i,j})=\Res\left(Mx^{i-N}y^{j-M}dy\frac{dt}{t}\right)=0
\]
as required. One can show \eqref{frobenius-thm-eq2} as follows.
\begin{align*}
R(\wt\omega_{i,j})=R(\omega_{i,j})
&=\Res\left(
M\,\frac{x^{i-N}y^{j-1}}{y^M-1}dy\wedge\frac{dt}{t}\right)\\
&=\Res\left(
\frac{Mx^{i-N}y^{j-1}}{y^M-1}dy\wedge\frac{Nx^{N-1}}{x^N-1}dx\right)\\
&=\sum_{\zeta,\nu}\zeta^i\nu^jP_{\zeta,\nu}.
\end{align*}
We turn to the proof of $C_{i,j}=p$ in \eqref{frob-lem-eq1}.
Apply $R$ to both sides of \eqref{frob-lem-eq1}.
By \eqref{frobenius-thm-eq1}, the right hand side is $C_{i,j}R(\wt\omega_{i,j})$,
and
the left hand side is $p\Phi_Z\circ R(\wt\omega_{p^{-1}i,p^{-1}j})$
by \eqref{frobenius-thm-eq0},
\[
C_{i,j}R(\wt\omega_{i,j})=p\Phi_Z\circ R(\wt\omega_{p^{-1}i,p^{-1}j}).
\]
Apply \eqref{frobenius-thm-eq2} to the above. We have
\[
C_{i,j}\left(\sum_{\zeta,\nu}\zeta^i\nu^jP_{\zeta,\nu}\right)=
p\Phi_Z\left(\sum_{\zeta,\nu}\zeta^{p^{-1}i}\nu^{p^{-1}j}\cdot P_{\zeta,\nu}\right)
=p\left(\sum_{\zeta,\nu}\zeta^{i}\nu^{j}P_{\zeta,\nu}\right)
\]
and hence $C_{i,j}=p$ as required.
\end{pf}
\begin{thm}[Unit root formula]\label{uroot-thm}
Suppose $\sigma(t)=t^p$.
For a pair of integers $(i,j)$ with $1\leq i<N$ and $1\leq j<M$, we put
\[
e^{\mathrm{unit}}_{i,j}:=(1-t)^{-a_i-b_j}F_{a_i,b_j}(t)^{-1}\wt\eta_{i,j}.
\]
Let $s\geq 0$ be the minimal
integer such that $a_i^{(s+1)}=a_i$ and $b_j^{(s+1)}=b_j$ and
put $h(t):=\prod_{m=0}^s F_{a_i^{(m)},b_j^{(m)}}(t)_{<p}$. Then
\begin{equation}\label{uroot-thm-1}
e^{\mathrm{unit}}_{i,j}\in
H^1_\dR(X/S)\ot_{\O(S)}K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle
\end{equation}
and
\begin{equation}\label{uroot-thm-2}
\Phi_{X/S}(e^{\mathrm{unit}}_{p^{-1}i,p^{-1}j})=
\frac{(1-t)^{a_i+b_j}}{(1-t^p)^{a^{(1)}_i+b^{(1)}_j}}
\cF^{\mathrm{Dw}}_{a_i,b_j}(t)e^{\mathrm{unit}}_{i,j}
\end{equation}
where $\cF^{\mathrm{Dw}}_{a_i,b_j}(t)$ is the Dwork $p$-adic hypergeometric function
\eqref{Dwork},
and we apply the convention \eqref{rule} to the notation
$e^{\mathrm{unit}}_{p^{-1}i,p^{-1}j}$.
In particular
$e^{\mathrm{unit}}_{i,j}$ is an eigenvector of
$\Phi_{X/S}^{s+1}=\overbrace{\Phi_{X/S}\circ\cdots\circ\Phi_{X/S}}^{s+1}$, and
\begin{equation}\label{uroot-thm-3}
\Phi_{X/S}^{s+1}(e^{\mathrm{unit}}_{i,j})
=\left(\prod_{m=0}^s
\frac{(1-t^{p^m})^{a^{(m)}_i+b^{(m)}_j}}{(1-t^{p^{m+1}})^{a^{(m+1)}_i+b^{(m+1)}_j}}
\cF^{\mathrm{Dw}}_{a^{(m)}_i,b^{(m)}_j}(t^{p^m})\right)e^{\mathrm{unit}}_{i,j}.
\end{equation}
\end{thm}
Notice that $(1-t)^{a_i+b_j}\not\in \Z_p\langle t,(1-t)^{-1}\rangle$
but $(1-t)^{a_i+b_j}/(1-t^p)^{a_i^{(1)}+b_j^{(1)}}\in \Z_p\langle t,(1-t)^{-1}\rangle$.
\begin{pf}
Since
\begin{equation}\label{uroot-thm-eq1}
\frac{F'_{a_i,b_j}(t)}{F_{a_i,b_j}(t)}\in \Z_p\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle
\end{equation}
by \cite[p.45, Lem.~3.4]{Dwork-p-cycle}, \eqref{uroot-thm-1} follows.
We show \eqref{uroot-thm-2}, which is equivalent to
\begin{equation}\label{uroot-thm-eq2}
\Phi_{X/S}(\wt\eta_{p^{-1}i,p^{-1}j})=\wt\eta_{i,j}\in H^1_\dR(X/S)\ot_{\O(S)}K((t)).
\end{equation}
Let
\[
Q:H^1_\dR(X_K/S_K)\ot H^1_\dR(X_K/S_K)\lra H^2_\dR(X_K/S_K)\cong \O(S_K)
\]
be the cup-product pairing which is anti-symmetric and non-degenerate.
This extends to $H^1_\dR(X/S)\ot_{\O(S)}K((t))$, and we also denote the extension by $Q$.
Then the following is satisfied.
\begin{description}
\item[(Q1)]
$Q(\Phi_{X/S}(x),\Phi_{X/S}(y))=pQ(x,y)^\sigma$ for $x,y\in H^1_\dR(X/S)\ot_{\O(S)}K((t))$,
\item[(Q2)]
$Q(gx,gy)=Q(x,y)$ for $g=(\zeta,\nu)\in \mu_N\times \mu_M$,
\item[(Q3)]
$Q(F^1,F^1)=0$ where $F^1=\vg(X_K,\Omega^1_{X_K/S_K})$ is the Hodge filtration,
\item[(Q4)]
$Q(\nabla(x),y)+Q(x,\nabla(y))=dQ(x,y)$.
\end{description}
Put $H_{i,j}:=H^1_\dR(X/S)(i,j)\ot_{\O(S)}K((t))$, the eigencomponents.
By {\bf(Q2)}, $Q$ induces a perfect pairing $H_{i,j}\ot H_{N-i,M-j}\to K((t))$.
Therefore $Q(\wt\omega_{i,j},\wt\eta_{N-i,M-j})\ne0$ by {\bf(Q3)}.
We claim
\begin{equation}\label{uroot-thm-eq3}
Q(\wt\eta_{i,j},\wt\eta_{N-i,M-j})=0,
\end{equation}
\begin{equation}\label{uroot-thm-eq4}
Q(\wt\omega_{i,j},\wt\eta_{N-i,M-j})\in \Q^\times.
\end{equation}
To show \eqref{uroot-thm-eq3}, we recall $H_K$ in Corollary \ref{fermat-GM-cor2}.
Since $Q$ is the cup-product pairing, this induces a pairing $H_K\ot H_K\to K[[t]]$, and hence
\[
\ol Q:H_K/tH_K\ot_K H_K/tH_K\lra K.
\]
Since $\nabla(\wt\eta_{i,j})=0$, $Q(\wt\eta_{i,j},\wt\eta_{N-i,M-j})$ is a constant by {\bf(Q4)}.
Therefore if one can show $\ol Q(\wt\eta_{i,j},\wt\eta_{N-i,M-j})=0$,
then \eqref{uroot-thm-eq3} follows. It follows from {\bf(Q4)} that
\[
\ol Q(\Res(\nabla)(x),y)+\ol Q(x,\Res(\nabla)(y))=0,\quad \forall\,x,y\in H_K/tH_K
\]
where $\Res(\nabla)$ is as in the proof of Corollary \ref{fermat-GM-cor2}.
Since $\Res(\nabla)(\wt\omega_{i,j})=\wt\eta_{i,j}$
and $\Res(\nabla)\wt\eta_{i,j}=0$ by Proposition \ref{fermat-GM},
one has
\[
\ol Q(\wt\eta_{i,j},\wt\eta_{N-i,M-j})
=\ol Q(\Res(\nabla)\wt\omega_{i,j},\wt\eta_{N-i,M-j})
=-\ol Q(\wt\omega_{i,j},\Res(\nabla)\wt\eta_{N-i,M-j})=0
\]
as required. We show \eqref{uroot-thm-eq4}.
Since $\nabla(\wt\omega_{i,j})
\in K((t))\wt\eta_{i,j}$, we have
\[
dQ(\wt\omega_{i,j},\wt\eta_{N-i,M-j})=
Q(\nabla(\wt\omega_{i,j}),\wt\eta_{N-i,M-j})=0
\]
by \eqref{uroot-thm-eq3} which means that $Q(\wt\omega_{i,j},\wt\eta_{N-i,M-j})$ is a constant.
Since $X/S$, $Q$ and $\wt\omega_{i,j},\wt\eta_{i,j}$ are defined over $\Q((t))$,
the constant
must belong to $\Q^\times$.
This completes the proof of \eqref{uroot-thm-eq4}.
\medskip
We turn to the proof of \eqref{uroot-thm-eq2}.
By Proposition \ref{frobenius-thm}, there is a constant $\alpha\in K$ such that
$\Phi_{X/S}(\wt\eta_{p^{-1}i,p^{-1}j})=\alpha\wt\eta_{i,j}$.
Put $c:=Q(\wt\omega_{i,j},\wt\eta_{N-i,M-j})$ which belongs to $\Q^\times$
by \eqref{uroot-thm-eq4}.
By {\bf(Q1)}, we have
$Q(\Phi_{X/S}(\wt\omega_{i,j}),\Phi_{X/S}(\wt\eta_{N-i,M-j}))=pQ(\wt\omega_{i,j},\wt\eta_{N-i,M-j})^\sigma=pc$,
and hence \[\alpha Q(\Phi_{X/S}(\wt\omega_{i,j}),\wt\eta_{N-i,M-j})=pc.\]
It follows from \eqref{uroot-thm-eq3} and Proposition \ref{frobenius-thm}
that the left hand side is
\[
p\alpha Q(\wt\omega_{i,j},\wt\eta_{N-i,M-j})=p\alpha c.
\]
Therefore $\alpha=1$. This completes the proof.
\end{pf}
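The Dwork primes $a^{(m)}$ and the period $s$ appearing in Theorem \ref{uroot-thm} are elementary to compute. The following short script is for illustration only (it plays no role in the proofs); we assume the usual definition $a^{(1)}=(a+l)/p$, where $l\in\{0,1,\ldots,p-1\}$ is the unique integer with $a+l\equiv0$ mod $p$, and the function names are ours.

```python
from fractions import Fraction

def dwork_prime(a: Fraction, p: int) -> Fraction:
    """Dwork prime a^{(1)} = (a + l)/p, where l in {0,...,p-1} is the
    unique integer with a + l = 0 in Z/p (denominator of a prime to p)."""
    r, s = a.numerator, a.denominator
    assert s % p != 0
    l = (-r * pow(s, -1, p)) % p
    return (a + l) / p

def dwork_period(a: Fraction, b: Fraction, p: int) -> int:
    """Minimal s >= 0 with a^{(s+1)} = a and b^{(s+1)} = b."""
    x, y, s = dwork_prime(a, p), dwork_prime(b, p), 0
    while (x, y) != (a, b):
        x, y, s = dwork_prime(x, p), dwork_prime(y, p), s + 1
    return s
```

For instance, for $p=5$ one finds $(1/2)^{(1)}=1/2$, while $1/3$ and $2/3$ are swapped; thus $s=0$ for $(a,b)=(1/2,1/2)$ and $s=1$ for $(a,b)=(1/3,1/3)$.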
\subsection{Syntomic Regulators of hypergeometric curves} \label{syn-reg-sect}
Let $W=W(\ol\F_p)$ and $K:=\Frac W$ be as before.
Let $f_R:Y_R\to \P^1_R$ be the fibration of hypergeometric curves
in Lemma \ref{int.model-lem}
over a ring $R:=\Z[\zeta_N,\zeta_M,(NM)^{-1}]\subset W$
where $\zeta_n$ is a primitive $n$-th root of unity in $W$.
Let $\ol X_R:=f_R^{-1}(\Spec R[t,t^{-1}])$
and $U_R:=\Spec R[x,y,t,t^{-1}]/((1-x^N)(1-y^M)-t)$.
Put $Z_R:=\ol X_R\setminus U_R$,
a disjoint union of copies of $\bG_{m,R}=\Spec R[t,t^{-1}]$.
\medskip
For $(\nu_1,\nu_2)\in\mu_N(R)\times \mu_M(R)$, let
\begin{equation}\label{m-fermat-eq1}
\xi=\xi(\nu_1,\nu_2)=\left\{
\frac{x-1}{x-\nu_1},\frac{y-1}{y-\nu_2}
\right\}\in K^M_2(\O(U_R))
\end{equation}
be a symbol in Milnor's $K_2$.
Let $K_2(U_R)^{(2)}\os{\partial}{\lra} K_1(Z_R)^{(1)}=(R[t,t^{-1}]^\times)^\op\ot\Q$
be the boundary map where $K_i(-)^{(j)}$ denotes the Adams weight piece,
which is explicitly described by
\[
\{f,g\}\longmapsto (-1)^{\ord_P(f)\ord_P(g)}\frac{g^{\ord_P(f)}}{f^{\ord_P(g)}}\bigg|_P,\quad
P\in Z_R.
\]
It is a simple exercise to show that
$\partial(\xi)=0$, and hence $\xi$ lies in the image of $K_2(\ol X_R)^{(2)}$.
Since $K_2(\ol X_R)^{(2)}\to K_2(U_R)^{(2)}$ is injective as $K_2(\bG_{m,R})^{(1)}=0$,
we obtain an element of $K_2(\ol X_R)^{(2)}$, which we also denote by $\xi$.
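The explicit formula for the boundary map $\partial$ (the tame symbol) is easy to experiment with numerically. The following is a minimal one-variable sketch over $\Q$, for symbols $\{f,g\}$ of polynomials at a point of the affine line; the function names are ours and this is illustration only, not the map used in the text.

```python
from fractions import Fraction

def _deflate(coeffs, alpha):
    """Synthetic division of sum(c_k x^k) (descending coeffs) by (x - alpha):
    returns (quotient coefficients, remainder = value at alpha)."""
    q, acc = [], Fraction(0)
    for c in coeffs:
        acc = acc * alpha + c
        q.append(acc)
    return q[:-1], q[-1]

def _ord_and_unit(coeffs, alpha):
    """Order of vanishing m of a nonzero polynomial f at alpha, and the
    value f0(alpha) of the cofactor in f = (x - alpha)^m f0."""
    c, m = [Fraction(x) for x in coeffs], 0
    while True:
        q, r = _deflate(c, alpha)
        if r != 0:
            return m, r
        c, m = q, m + 1

def tame_symbol(f, g, alpha):
    """((-1)^{ord(f)ord(g)} g^{ord(f)} / f^{ord(g)})(alpha) for polynomials
    f, g given by descending coefficient lists."""
    m, f0 = _ord_and_unit(f, alpha)
    n, g0 = _ord_and_unit(g, alpha)
    return Fraction(-1) ** (m * n) * g0 ** m / f0 ** n
```

For instance the Steinberg relation is visible numerically: the tame symbol of $\{x,1-x\}$ at $x=0$ is $1$, while that of $\{x,x\}$ at $x=0$ is $-1$.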
Let $\ol X:=\ol X_R\times _RW$ and $i:\ol X\to \ol X_R$ the projection.
We then have an element $i^*(\xi) \in K_2(\ol X)$, which we also denote by $\xi$.
Let $\mathrm{dlog}:K_2(\ol X)^{(2)}\to \vg(\ol X,\Omega^2_{\ol X/W})\ot\Q$
be the dlog map which is given by $\{f,g\}\mapsto\frac{df}{f}\wedge\frac{dg}{g}$.
One immediately has
\begin{equation}\label{m-fermat-eq2}
\mathrm{dlog}(\xi)=N^{-1}M^{-1}
\sum_{i=1}^{N-1}\sum_{j=1}^{M-1}(1-\nu^{-i}_1)(1-\nu^{-j}_2)\frac{dt}{t}\omega_{i,j}.
\end{equation}
Let $\sigma$ be a $p$-th Frobenius on $W[[t]]$ given by $\sigma(t)=ct^p$ with
$c\in 1+pW$.
According to \cite[\S 4.1]{AM},
one can associate a $1$-extension
\begin{equation}\label{syn-reg-ext}
0\lra H^1(X/S)(2)\lra M_\xi(X/S)\lra \O_S\lra 0
\end{equation}
in the exact category $\FilFMIC(S)$ (loc.cit. Prop.4.2).
Let $e_\xi\in \mathrm{Fil}^0M_\xi(X/S)_\dR$ be the unique lifting of $1\in \O_S(S)$.
Let $\ve_k^{(i,j)}(t)$ and $E_k^{(i,j)}(t)$ be defined by
\begin{align}
e_\xi-\Phi(e_\xi)
&=N^{-1}M^{-1}
\sum_{i=1}^{N-1}\sum_{j=1}^{M-1}(1-\nu^{-i}_1)(1-\nu^{-j}_2)[\ve^{(i,j)}_1(t)\omega_{i,j}
+\ve^{(i,j)}_2(t)\eta_{i,j}]\label{fermat-e-eq1}\\
&=N^{-1}M^{-1}
\sum_{i=1}^{N-1}\sum_{j=1}^{M-1}(1-\nu^{-i}_1)(1-\nu^{-j}_2)[E^{(i,j)}_1(t)\wt\omega_{i,j}
+E^{(i,j)}_2(t)\wt\eta_{i,j}].\label{fermat-e-eq2}
\end{align}
Notice that $\ve_k^{(i,j)}(t)$ and $E^{(i,j)}_k(t)$
depend on the choice of the Frobenius $\sigma$.
The relation between $\ve_k^{(i,j)}(t)$ and $E^{(i,j)}_k(t)$ is explicitly given by
\begin{align}
\ve^{(i,j)}_1(t)&=E^{(i,j)}_1(t)F_{a_i,b_j}(t)^{-1}-t(1-t)^{a_i+b_j}F'_{a_i,b_j}(t)E_2^{(i,j)}(t)
\label{syn-reg-eq3}
\\
\ve^{(i,j)}_2(t)&=-b_jt(1-t)^{a_i+b_j}F_{a_i,b_j}(t)E_2^{(i,j)}(t)\label{syn-reg-eq4}.
\end{align}
By definition, the $\ve^{(i,j)}_k(t)$ are automatically overconvergent functions,
\[\ve^{(i,j)}_k(t)\in K[t,(t-t^2)^{-1}]^\dag.\]
On the other hand, since $F'_{a_i,b_j}(t)/F_{a_i,b_j}(t)$ is a convergent function
(cf. \eqref{uroot-thm-eq1}), so is
$E^{(i,j)}_1(t)/F_{a_i,b_j}(t)$,
\begin{equation}\label{syn-reg-eq5}
\frac{E^{(i,j)}_1(t)}{F_{a_i,b_j}(t)}\in K\langle
t,(t-t^2)^{-1},h(t)^{-1}\rangle,\quad h(t):=\prod_{m=0}^s F_{a_i^{(m)},b_j^{(m)}}(t)_{<p}
\end{equation}
where $s\geq 0$ is the minimal
integer such that $a_i^{(s+1)}=a_i$ and $b_j^{(s+1)}=b_j$.
\medskip
The following is the main theorem in this paper, which provides
a geometric aspect of
the $p$-adic hypergeometric function of logarithmic type $\cF^{(\sigma)}_{a,b}(t)$
defined in \S \ref{pHGlog-defn}.
\begin{thm}\label{fermat-main1}
Suppose that $\sigma$ is given by $\sigma(t)=ct^p$ with $c\in 1+pW$.
Then
\begin{equation}\label{main-thm1-eq1}
\frac{E_1^{(i,j)}(t)}{F_{a_i,b_j}(t)}=-\cF^{(\sigma)}_{a_i,b_j}(t).
\end{equation}
Hence
\[
e_\xi-\Phi(e_\xi)\equiv
\sum_{i=1}^{N-1}\sum_{j=1}^{M-1}\frac{(1-\nu^{-i}_1)(1-\nu^{-j}_2)}{NM}\cF^{(\sigma)}_{a_i,b_j}(t)
\omega_{i,j}
\mod \sum_{i,j}K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle e^{\mathrm{unit}}_{i,j}.\]
\end{thm}
\begin{pf}
The Frobenius $\sigma$ extends to $K((t))$, and
$\Phi$ also extends to $K((t))\ot H^1_\dR(X/S)\cong
H^1_{\text{log-crys}}((\cY_{\ol\F_p},D_{\ol\F_p})/(\Delta_W,0))\ot_{W[[t]]}K((t))
$ in the natural way, where the isomorphism follows from \eqref{log-crys-isom}.
Apply the Gauss-Manin connection $\nabla$ to \eqref{fermat-e-eq2}.
Since $\nabla\Phi=\Phi\nabla$ and $\nabla(e_\xi)=-\mathrm{dlog}\xi$ (\cite[(2.30)]{AM}),
we have
\begin{align}\label{main-thm1-eq2-left}
&(1-\Phi)\left(-N^{-1}M^{-1}\sum_{i=1}^{N-1}\sum_{j=1}^{M-1}(1-\nu^{-i}_1)(1-\nu^{-j}_2)
\frac{dt}{t}\omega_{i,j}\right)\\
=&N^{-1}M^{-1}\sum_{i=1}^{N-1}\sum_{j=1}^{M-1}(1-\nu^{-i}_1)(1-\nu^{-j}_2)
\nabla(E^{(i,j)}_1(t)\wt\omega_{i,j}+E^{(i,j)}_2(t)\wt\eta_{i,j})\label{main-thm1-eq2-right}
\end{align}
by \eqref{m-fermat-eq2}.
Let $\Phi_{X/S}$ denote the $p$-th Frobenius on $H^1_\rig(X_{\ol\F_p}/S_{\ol\F_p})$.
Then the $\Phi$ on $H^1(X/S)(2)$ agrees with $p^{-2}\Phi_{X/S}$ by the definition of Tate twists.
It follows from Proposition \ref{frobenius-thm} that we have
\[
\Phi_{X/S}(\wt\omega_{p^{-1}i,p^{-1}j})\equiv p\wt\omega_{i,j}\mod K((t))\wt\eta_{i,j}
\]
where we use the convention \eqref{rule} for the indices $p^{-1}i,p^{-1}j$.
Therefore
\[
\eqref{main-thm1-eq2-left}\equiv-
N^{-1}M^{-1}\sum_{i=1}^{N-1}\sum_{j=1}^{M-1}(1-\nu^{-i}_1)(1-\nu^{-j}_2)
(F_{a_i,b_j}(t)-F_{a_i^{(1)},b_j^{(1)}}(t^\sigma))\frac{dt}{t}\wt\omega_{i,j}
\]
modulo $\sum_{i,j} K((t))\wt\eta_{i,j}$.
On the other hand,
\[
\eqref{main-thm1-eq2-right}\equiv
N^{-1}M^{-1}\sum_{i=1}^{N-1}\sum_{j=1}^{M-1}(1-\nu^{-i}_1)(1-\nu^{-j}_2)
t\frac{d}{dt}(E^{(i,j)}_1(t))\cdot\frac{dt}{t}\wt\omega_{i,j}
\mod \sum_{i,j}K((t))\wt\eta_{i,j}
\]
by Proposition \ref{fermat-GM}.
We thus have
\begin{equation}\label{main-thm1-eq3}
t\frac{d}{dt}E_1^{(i,j)}(t)=-F_{a_i,b_j}(t)+F_{a_i^{(1)},b_j^{(1)}}(t^\sigma),
\end{equation}
and hence
\[
E_1^{(i,j)}(t)=-\left(C+\int_0^t\big(F_{a_i,b_j}(t)-F_{a_i^{(1)},b_j^{(1)}}(t^\sigma)\big)\frac{dt}{t}\right)
\]
for some constant $C\in K$.
We determine the constant $C$ in the following way.
Firstly $E_1^{(i,j)}(t)/F_{a_i,b_j}(t)$ is a convergent function by \eqref{syn-reg-eq5}.
If $C=\psi_p(a_i)+\psi_p(b_j)+2\gamma_p$, then
$E_1^{(i,j)}(t)/F_{a_i,b_j}(t)=\cF^{(\sigma)}_{a_i,b_j}(t)$ belongs to
$K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle$ by Corollary \ref{cong-cor}.
If there is another $C'$ such that
$E_1^{(i,j)}(t)/F_{a_i,b_j}(t)\in K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle$,
then it follows
\[
\frac{C-C'}{F_{a_i,b_j}(t)}\in K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle.
\]
We show below that this is impossible, and hence
there is no possibility other than
$C=\psi_p(a_i)+\psi_p(b_j)+2\gamma_p$.
To see this,
we recall from Theorem \ref{uroot-thm}
the formula \eqref{uroot-thm-3}
\begin{align*}
\Phi_{X/S}^{s+1}(e^{\mathrm{unit}}_{i,j})
&=\left(\prod_{m=0}^s
\frac{(1-t^{p^m})^{a^{(m)}_i+b^{(m)}_j}}{(1-t^{p^{m+1}})^{a^{(m+1)}_i+b^{(m+1)}_j}}
\cF^{\mathrm{Dw}}_{a^{(m)}_i,b^{(m)}_j}(t^{p^m})\right)e^{\mathrm{unit}}_{i,j}\\
&=\left(\frac{(1-t)^{a_i+b_j}}{(1-t^{p^{s+1}})^{a_i+b_j}}\right)
\frac{F_{a_i,b_j}(t)}{F_{a_i,b_j}(t^{p^{s+1}})}e^{\mathrm{unit}}_{i,j}.
\end{align*}
Iterating $\Phi_{X/S}^{s+1}$ to the above, we have
\[
(\Phi_{X/S}^{s+1})^n(e^{\mathrm{unit}}_{i,j})
=\overbrace{\left(\frac{(1-t)^{a_i+b_j}}{(1-t^{p^{n(s+1)}})^{a_i+b_j}}\right)}^{\mu(t)}
\frac{F_{a_i,b_j}(t)}{F_{a_i,b_j}(t^{p^{n(s+1)}})}e^{\mathrm{unit}}_{i,j}.
\]
Put $q:=p^{n(s+1)}$.
Let $\alpha\in W$ satisfy $\alpha^q=\alpha$ and
$(\alpha-\alpha^2)h(\alpha)\not\equiv0$ mod $p$.
Then the evaluation $\mu(\alpha)$ is a root of unity.
Suppose $g(t):=F_{a_i,b_j}(t)^{-1}\in K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle$.
Let $g(t)=(t-\alpha)^kg_0(t)$ with $g_0(t)\in
K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle$ such that $g_0(\alpha)\ne0$.
Then we have
\[
\frac{F_{a_i,b_j}(t)}{F_{a_i,b_j}(t^q)}\bigg|_{t=\alpha}
=\frac{(t^q-\alpha)^kg_0(t^q)}{(t-\alpha)^kg_0(t)}\bigg|_{t=\alpha}
=(q\alpha^{q-1})^k=q^k.
\]
Since the first evaluation is a unit in $W$, we have $k=0$.
Thus
an eigen value of $(\Phi_{X/S}^{s+1})^n|_{t=\alpha}$ is a root of unity.
This contradicts the Weil-Riemann hypothesis.
\end{pf}
\begin{thm}[Syntomic Regulator Formula]\label{main-thm3}
Suppose that $p>2$ is prime to $NM$.
Let $\alpha\in W$ be such that $\alpha\not\equiv0,1$ mod $p$.
Let $\sigma_{\alpha}$ be the Frobenius given by $t^\sigma=F(\alpha)\alpha^{-p}t^p$
where $F$ is the Frobenius on $W$.
Let $X_\alpha$ be the fiber at $t=\alpha$ ($\Leftrightarrow$ $\l=1-\alpha$),
which is a smooth projective variety over $W$ of relative dimension one.
Let
\[
\reg_\syn:K_2(X_\alpha)\lra H^2_\syn(X_\alpha,\Q_p(2))\cong H^1_\dR(X_\alpha/K)
\]
be the syntomic regulator map.
Then
\[
\reg_\syn(\xi|_{X_\alpha})=-
N^{-1}M^{-1}
\sum_{i=1}^{N-1}\sum_{j=1}^{M-1}(1-\nu^{-i}_1)(1-\nu^{-j}_2)
[\ve^{(i,j)}_1(\alpha)\omega_{i,j}
+\ve^{(i,j)}_2(\alpha)\eta_{i,j}].
\]
\end{thm}
\begin{pf}
This follows from \cite[Theorem 4.4]{AM}.
\end{pf}
\begin{cor}\label{main-thm4}
Let the notation and assumption be as in Theorem \ref{main-thm3}. Suppose further that
$h(\alpha)\not\equiv0$ mod $p$ where $h(t)$ is as in \eqref{syn-reg-eq5}.
Let $e_{N-i,M-j}^{\text{\rm unit}}$ be as in Theorem \ref{uroot-thm}, and
$Q: H^1_\dR(X_\alpha/K)\ot H^1_\dR(X_\alpha/K)\to H^2_\dR(X_\alpha/K)\cong K$
the cup-product pairing.
Then we have
\[
Q(\reg_\syn(\xi|_{X_\alpha}), e_{N-i,M-j}^{\text{\rm unit}})
=-N^{-1}M^{-1}
(1-\nu^{-i}_1)(1-\nu^{-j}_2)
\cF_{a_i,b_j}^{(\sigma_\alpha)}(\alpha)
Q(\omega_{i,j}, e_{N-i,M-j}^{\text{\rm unit}}).
\]
\end{cor}
\begin{pf}
Noticing $Q(e_{i,j}^{\text{\rm unit}},e_{N-i,M-j}^{\text{\rm unit}})=0$ by \eqref{uroot-thm-eq3},
this is immediate from Theorem \ref{main-thm3}.
\end{pf}
\subsection{Syntomic regulator of the Ross symbols of Fermat curves}
\label{fermatcurve-sect}
We apply Theorem \ref{fermat-main1} to the study of the syntomic regulator
of the {\it Ross symbol} \cite{ross2}
\[
\{1-z,1-w\}\in K_2(F)\ot\Q
\]
of the Fermat curve
\[
F:z^N+w^M=1,\quad p{\not|}NM.
\]
The group $\mu_N\times \mu_M$ acts on $F$ by $(\ve_1,\ve_2)\cdot (z,w)=(\ve_1z,\ve_2w)$.
Let $H^1_\dR(F/K)(i,j)$ denote the subspace on which $(\ve_1,\ve_2)$ acts by multiplication
by $\ve_1^i\ve_2^j$.
Let
\[
I=\left\{(i,j)\in\Z^2\mid1\leq i\leq N-1,1\leq j\leq M-1,\,\frac{i}{N}+\frac{j}{M}\ne1\right\},
\]
then
\[
H^1_\dR(F/K)=\bigoplus_{(i,j)\in I}H^1_\dR(F/K)(i,j).
\]
Each eigen space $H^1_\dR(F/K)(i,j)$
is one-dimensional with basis
$z^{i-1}w^{j-M}dz=-N^{-1}Mz^{i-N}w^{j-1}dw$, and
\[
H^1_\dR(F/K)(i,j)\subset \vg(F,\Omega^1_{F/K})\quad\Longleftrightarrow\quad
\frac{i}{N}+\frac{j}{M}<1
\]
(e.g. \cite[\S 2]{gross}).
In particular, the genus of $F$ is $1+\frac{1}{2}(NM-N-M-\gcd(N,M))$.
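The genus formula just stated can be sanity-checked against familiar cases; the following trivial script (illustration only) does so.

```python
from math import gcd

def fermat_genus(N: int, M: int) -> int:
    """Genus 1 + (NM - N - M - gcd(N, M))/2 of the smooth projective
    curve z^N + w^M = 1."""
    num = N * M - N - M - gcd(N, M)
    assert num % 2 == 0
    return 1 + num // 2
```

For $N=M$ this recovers the classical value $(N-1)(N-2)/2$ for the plane Fermat curve of degree $N$: the Fermat cubic has genus $1$ and the Fermat quartic genus $3$.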
\begin{thm}\label{fermat-main2}
Suppose that $p>2$ is prime to $NM$.
Let $F:z^N+w^M=1$ be the Fermat curve over $W$.
Let
\[
\reg_\syn:K_2(F)\ot\Q\lra H^2_\syn(F,\Q_p(2))\cong H^1_\dR(F/K)
\]
be the syntomic regulator map and let $A^{(i,j)}\in K$ be defined by
\[
\reg_\syn(\{1-z,1-w\})=
\sum_{(i,j)\in I} A^{(i,j)}M^{-1}z^{i-1}w^{j-M}dz.
\]
Suppose that $(i,j)\in I$ satisfies
\begin{equation}\label{fermat-main2-eq1}
\mbox{\rm(i) }
\frac{i}{N}+\frac{j}{M}<1,\quad
\mbox{\rm(ii) }
F_{\frac{i}{N},\frac{j}{M}}(1)_{<p^n}\not\equiv0\mod p,\,\forall n\geq1.
\end{equation}
Then we have
\begin{equation}\label{fermat-main2-eq2}
A^{(i,j)}=\cF^{(\sigma)}_{\frac{i}{N},\frac{j}{M}}(1)
\end{equation}
where $\sigma=\sigma_1$ (i.e. $\sigma(t)=t^p$).
\end{thm}
The following lemma gives a sufficient condition for
the conditions \eqref{fermat-main2-eq1} to be satisfied.
\begin{lem}\label{fermat-main2-lem1}
\begin{enumerate}
\item[$(1)$]
Let $a,b\in \Z_p$. Then
$F_{a,b}(1)_{<p^n}\not\equiv0$ mod $p$ for all $n\geq1$ if and only if
$F_{a^{(k)},b^{(k)}}(1)_{<p}\not\equiv0$ mod $p$ for all $k\geq0$ where $a^{(k)}$ denotes the
Dwork $k$-th prime.
\item[$(2)$]
Let $a_0,b_0\in\{0,1,\ldots,p-1\}$ satisfy $a\equiv -a_0$ and $b\equiv -b_0$ mod $p$.
Then
\[
F_{a,b}(1)_{<p}\equiv \frac{\Gamma(1+a_0+b_0)}{\Gamma(1+a_0)\Gamma(1+b_0)}
=\frac{(a_0+b_0)!}{a_0!b_0!}
\mod p.
\]
In particular
\[
F_{a,b}(1)_{<p}\not\equiv 0
\quad\Longleftrightarrow\quad
a_0+b_0\leq p-1.
\]
\item[$(3)$]
Suppose that $N|(p-1)$ and $M|(p-1)$. Then for any
$(i,j)$ such that $0<i<N$ and $0<j<M$ and $i/N+j/M<1$,
the conditions \eqref{fermat-main2-eq1} hold.
\end{enumerate}
\end{lem}
\begin{pf}
(1) is a consequence of the Dwork congruence \eqref{Dwork-congruence}.
We show (2). Obviously $F_{a,b}(t)_{<p}\equiv F_{-a_0,-b_0}(t)_{<p}$ mod $p\Z_p[t]$, and
$F_{-a_0,-b_0}(t)_{<p}=F_{-a_0,-b_0}(t)$ as $-a_0$ and
$-b_0$ are non-positive integers greater than $-p$.
Then apply Gauss' formula (e.g. \cite{NIST} 15.4.20)
\[
{}_2F_1\left({a,b\atop c};1\right)=
\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)},\quad \mathrm{Re}(c-a-b)>0.
\]
To see (3), letting $a=i/N$ and $b=j/M$, we note that
$a^{(k)}=a$, $b^{(k)}=b$ and $a_0=i(p-1)/N$, $b_0=j(p-1)/M$.
Then the condition \eqref{fermat-main2-eq1} (ii) follows by (1) and (2).
\end{pf}
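Part (2) of the lemma is easy to check numerically. The sketch below (our notation; we take $F_{a,b}(t)={}_2F_1(a,b;1;t)$ as elsewhere in this section, and the function names are ours) computes the truncation $F_{a,b}(1)_{<p}$ over $\Q$ and reduces it mod $p$.

```python
from fractions import Fraction
from math import comb, factorial

def poch(a: Fraction, n: int) -> Fraction:
    """Pochhammer symbol (a)_n = a(a+1)...(a+n-1)."""
    r = Fraction(1)
    for k in range(n):
        r *= a + k
    return r

def trunc_F_at_1(a: Fraction, b: Fraction, p: int) -> Fraction:
    """F_{a,b}(1)_{<p} = sum_{n<p} (a)_n (b)_n / (n!)^2, taking
    F_{a,b}(t) = 2F1(a, b; 1; t)."""
    return sum(poch(a, n) * poch(b, n) / Fraction(factorial(n)) ** 2
               for n in range(p))

def mod_p(x: Fraction, p: int) -> int:
    """Reduce a p-integral rational mod p."""
    assert x.denominator % p != 0
    return x.numerator * pow(x.denominator, -1, p) % p
```

For example, for $p=7$ one has $1/3\equiv-2$ and $2/3\equiv-4$ mod $7$, and indeed $F_{1/3,2/3}(1)_{<7}\equiv\binom{6}{2}=15\equiv1$ mod $7$, in accordance with the lemma.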
\bigskip
\noindent{\it Proof of Theorem \ref{fermat-main2}}.
We show that the theorem is reduced to the case $M=N$. Let $L$ be the least common multiple of $M,N$,
and $F_1$ the Fermat curve defined by an affine equation $F_1:z^L_1+w_1^L=1$.
There is a finite surjective map $\rho:F_1\to F$ given by
$\rho^*(z)=z_1^A$, $\rho^*(w)=w_1^B$ where $AN=BM=L$.
There is a commutative diagram
\[
\xymatrix{
K_2(F_1)\ot\Q\ar[r]\ar[d]_{\rho_*}&H^2_\syn(F_1,\Q_p(2))\ar[d]^{\rho_*}\\
K_2(F)\ot\Q\ar[r]&H^2_\syn(F,\Q_p(2))
}
\]
with surjective vertical arrows. It is a simple exercise to show that $\rho_*\{1-z_1,
1-w_1\}=\{1-z,1-w\}$ and $\rho_*(L^{-1}z_1^{i-1}w_1^{j-L}dz_1)
=M^{-1}z^{i'-1}w^{j'-M}dz$ if $(i,j)=(i'A,j'B)$, and $=0$ otherwise.
Thus the theorem for $F$ can be deduced from the theorem for $F_1$.
\medskip
We assume $N=M$ until the end of the proof.
Let $f:Y\to\P^1_W$ be as in Lemma \ref{int.model-lem}, which
has a bad reduction at $t=1$.
Let $\l:=1-t$ be a new parameter, and let $\l_0^N=\l$.
Let $\ol S_0:=\Spec W[\l_0,(1-\l^N_0)^{-1}]\to \P^1_W$
and $S_0:=\Spec W[\l_0,\l_0^{-1},(1-\l^N_0)^{-1}]\subset \ol S_0$.
Let $\ol X_s:=Y\times_{\P^1_W}\ol S_0$. Then
$\ol X_s$ has a unique singular point $(x,y,\l_0)=(0,0,0)$ in an affine open set
\[U_s=\Spec W[x,y,\l_0,(1-\l^N_0)^{-1}]/(x^Ny^N-x^N-y^N-\l_0^N)\subset \ol X_s.\]
Let $\ol X_0\to \ol X_s$ be the blow-up at $(x,y,\l_0)=(0,0,0)$. Then $\ol X_0\to \Spec W$ is smooth, and the morphism
\begin{equation}\label{fermat-main2-lem2}
\xymatrix{
f_0:\ol X_0\ar[r]& \ol S_0
}
\end{equation}
is projective and flat,
$X_0:=f_0^{-1}(S_0)\to S_0$ is smooth, and $f_0$ has
semistable reduction at $\l_0=0$.
The fiber $Z:=f_0^{-1}(\l_0=0)$ is a reduced divisor with two irreducible components
$F$ and $E$ where $F$ is the proper transform of the curve $x^Ny^N-x^N-y^N=0$
$\Leftrightarrow$ $z^N+w^N=1$ $(z:=x^{-1},w:=y^{-1})$, and $E$ is the exceptional curve. Both curves are
isomorphic to the Fermat curve $u^N+v^N=1$.
Moreover $E$ and $F$ intersect transversally at $N$ points.
We recall the $K_2$-symbols $\xi(\nu_1,\nu_2)$ in \eqref{m-fermat-eq1}.
We regard them as elements of $K_2(\ol X_0)\ot\Q$, and
put
\[
\Xi:=\sum_{(\nu_1,\nu_2)\in \mu_N\times \mu_N}\xi(\nu_1,\nu_2)
=
\left\{\frac{(x-1)^N}{x^N-1},\frac{(y-1)^N}{y^N-1}\right\}\in K_2(\ol X_0)\ot\Q.
\]
Then the restriction of $\Xi$ to $F$ is
\begin{align*}
\Xi|_F
&=\{(1-z)^N,(1-w)^N\}-\{(1-z)^N,1-w^N\}-\{1-z^N,(1-w)^N\}+\{1-z^N,1-w^N\}\notag\\
&=\{(1-z)^N,(1-w)^N\}-\{(1-z)^N,z^N\}-\{w^N,(1-w)^N\}+\{w^N,1-w^N\}\notag\\
&=N^2\{1-z,1-w\}
\end{align*}
This is the Ross symbol.
We thus have
\begin{equation}\label{Xi-eq1}
N^2\reg_\syn(\{1-z,1-w\})
=\sum_{(\nu_1,\nu_2)\in \mu_N\times \mu_N}\reg_\syn(\xi(\nu_1,\nu_2)|_F).
\end{equation}
We simply write $\xi=\xi(\nu_1,\nu_2)$.
Let $\sigma$ be the $p$-th Frobenius on $W[\l_0,\l_0^{-1}]$ such that $\sigma(t)=t^p$
$\Leftrightarrow$ $\sigma(\l_0)=(1-(1-\l_0^N)^p)^{\frac1N}$.
Recall \eqref{fermat-e-eq1} and \eqref{fermat-e-eq2},
\begin{align*}
e_\xi-\Phi_\sigma(e_\xi)
&=N^{-2}\sum_{1\leq i,j\leq N-1}(1-\nu^{-i}_1)(1-\nu^{-j}_2)[
\ve^{(i,j)}_{1,\sigma}(t)\omega_{i,j}
+\ve^{(i,j)}_{2,\sigma}(t)\eta_{i,j}]\\
&=N^{-2}\sum_{1\leq i,j\leq N-1}
(1-\nu^{-i}_1)(1-\nu^{-j}_2)[
E^{(i,j)}_{1,\sigma}(t)\wt\omega_{i,j}
+E^{(i,j)}_{2,\sigma}(t)\wt\eta_{i,j}]
\end{align*}
where we write ``$(-)_\sigma$''
to emphasize that they depend on $\sigma$.
Let $\tau$ be the $p$-th Frobenius on $W[[\l_0]]$ such that $\tau(\l_0)=\l_0^p$.
Let
\begin{equation}\label{fermat-main2-eq5}
e_\xi-\Phi_\tau(e_\xi)
=N^{-2}\sum_{1\leq i,j\leq N-1}
(1-\nu^{-i}_1)(1-\nu^{-j}_2)[\ve^{(i,j)}_{1,\tau}(\l)\omega_{i,j}
+\ve^{(i,j)}_{2,\tau}(\l)\eta_{i,j}]
\end{equation}
be defined in the same way.
This is related to \eqref{Xi-eq1} in the following way.
Let
$\Delta:=\Spec W[[\l_0]]\to \ol S_0$, and $\cX:=f_0^{-1}(\Delta)$.
We have the syntomic regulator
\[
\reg_\syn(\xi)\in H^2_\syn(\cX,\Z_p(2))
\]
in the syntomic cohomology group.
We endow $\Delta$ (resp. $\cX$) with the log structure
defined by the divisor $O=\Spec W[[\l_0]]/(\l_0)$ (resp. $E+F$),
which we denote by the same notation $O$ (resp. $E+F$).
Let $\omega_{\cX/\Delta}$ be the log de Rham complex for $(\cX,E+F)/(\Delta,O)$.
Recall the log syntomic cohomology groups (e.g. \cite[\S 2]{T})
\[
H^i_\syn((X,M),\Z_p(j))
\]
of a log scheme $(X,M)$ satisfying several conditions
(all log schemes appearing in this proof satisfy them).
Moreover one can further define the syntomic cohomology groups
$H^i_\syn((\cX,E+F)/(\Delta,O,\tau),\Z_p(j))$ following the construction in \cite[\S 3.1]{AM},
where we note that
$\tau$ induces the $p$-th Frobenius on $(\Delta,O)$ (while $\sigma$ does not).
Let
\[
\rho_\tau:H^2_\syn(\cX,\Z_p(2))
\lra H^2_\syn((\cX,E+F)/(\Delta,O,\tau),\Z_p(2))
\os{\cong}{\longleftarrow}H^1_\zar(\cX,\omega^\bullet_{\cX/\Delta})
\]
be the composition of natural maps.
We endow $F$ with the log structure defined by the divisor $T:=E\cap F$, which we denote again by $T$. Put $U:=F\setminus T$.
Let $\omega^\bullet_F:=\Omega^\bullet_{F/W}(\log T)$ be
the log de Rham complex for $(F,T)/W$.
Let $\iota:F\hra\cX$ be the closed immersion.
Then there is a commutative diagram
\[
\xymatrix{
H^2_\syn(\cX,\Z_p(2))\ar[d]_{\rho_\tau}\ar[r]& H^2_\syn((F,T),\Z_p(2))\\
H^1_\zar(\cX,\omega^\bullet_{\cX/\Delta})\ar[r]^-{\iota^*}\ar[d]_\pi
&H^1_\zar(F,\omega^\bullet_{F})\ar[u]_\cong\ar[r]^-{\subset}&
H^1_\dR(U/K)\\
W((\l_0))\ot H^1_\dR(X_0/S_0)
}
\]
and we have
\begin{equation}\label{1-ext-thm1-eq1}
(\iota^*\circ\rho_\tau)(\reg_\syn(\xi))=\reg_\syn(\xi|_F)\in H^1_\dR(U/K).
\end{equation}
Moreover it follows from \cite[Theorem 3.8]{AM} that
\begin{equation}\label{1-ext-thm1-eq2}
(\pi\circ\rho_\tau)(\reg_\syn(\xi))=\Phi(e_\xi)-e_\xi
\in
W((\l_0))\ot H^1_\dR(X_0/S_0).
\end{equation}
Note that $H^1_\dR(F/K)\to H^1_\dR(U/K)$ is injective, and the above element belongs to
the image of $H^1_\dR(F/K)$, so that we may replace $H^1_\dR(U/K)$ with $H^1_\dR(F/K)$
in \eqref{1-ext-thm1-eq1}.
\medskip
Write $X_{0,K}:=X_0\times_WK$ etc.
Let $\Delta_K:=\Spec K[[\l_0]]$ and $\cX_K:=\ol X_0\times_{\ol S_0} \Delta_K$.
Put
\[
H_K:=H^1(\cX_K,\omega^\bullet_{\cX_K/\Delta_K})\hra K((\l_0))\ot H^1_\dR(X_0/S_0).
\]
\begin{lem}\label{fermat-main2-lem3}
Put $s:=(a_i+b_j)N$, which is a positive integer.
If $a_i+b_j<1$, then the eigencomponent $H_K(i,j)$ is a free $K[[\l_0]]$-module
of rank two
with a basis $\{\omega_{i,j},\l_0^{s}\eta_{i,j}\}$.
\end{lem}
\begin{pf}
This is proven in the same way as the proof of
Corollary \ref{fermat-GM-cor2}.
\end{pf}
\begin{lem}\label{fermat-main2-lem0}
Let $1\leq i,j\leq N-1$ be integers, and put
$a_i:=1-i/N$ and $b_j:=1-j/N$.
Put
\[
f_n(t)=f_{n,i,j}(t):=-\frac{(1-\nu_1^{-i})(1-\nu_2^{-j})}{N^2}\frac{1}{F_{a_i,b_j}(t)}
\left(\frac{d^{n-1}}{dt^{n-1}}\left(\frac{F_{a^{(1)}_i,b^{(1)}_j}(t)}{t}\right)\right)^\sigma
\]
for $n\in\Z_{\geq 1}$.
Then
\[
\ve^{(i,j)}_{1,\tau}(\l)-\cF^{(\sigma)}_{a_i,b_j}(t)
=\sum_{n=1}^\infty\frac{(t^\tau-t^\sigma)^n}{n!}
p^{-1}f_n(t)
+b_j^{-1}\frac{F'_{a_i,b_j}(t)}{F_{a_i,b_j}(t)}
\ve^{(i,j)}_{2,\tau}(\l).
\]
\end{lem}
Notice that $f_n(t)$ is a convergent function on the region
$\{F_{a_i,b_j}(t)_{<p^n}\not\equiv 0\}$ by
\cite[p.37, Thm. 2, p.45, Lem. 3.4]{Dwork-p-cycle}.
\begin{pf}
The relation between $\ve^{(i,j)}_{k,\sigma}(t)$ and $\ve^{(i,j)}_{k,\tau}(t)$ is the following (e.g. \cite[6.1]{EK},
\cite[17.3.1]{Ke})
\begin{equation}\label{fermat-main2-eq6}
\Phi_\tau(e_\xi)-\Phi_\sigma(e_\xi)=
\sum_{n=1}^\infty\frac{(t^\tau-t^\sigma)^n}{n!}
\Phi_\sigma\partial^n_t e_\xi
\end{equation}
where $\partial_t=\nabla_{\frac{d}{dt}}$ is the differential operator on $M_\xi(X/S)_\dR$.
By \eqref{m-fermat-eq2},
\[
\partial_t(e_\xi)
=-\sum_{1\leq i,j\leq N-1}\frac{(1-\nu^{-i}_1)(1-\nu^{-j}_2)}{N^2}\frac{1}{t}\omega_{i,j}
=-\sum_{1\leq i,j\leq N-1}\frac{(1-\nu^{-i}_1)(1-\nu^{-j}_2)}{N^2}\frac{F_{a_i,b_j}(t)}{t}\wt\omega_{i,j}.
\]
Let $\eta_{i,j}^*:=(1-t)^{-a_i-b_j}F_{a_i,b_j}(t)^{-1}\wt\eta_{i,j}\in H^1_\dR(X/S)\ot
K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle$ where $h(t)=\prod_{m=0}^N
F_{a^{(m)}_i,b^{(m)}_j}(t)$ with $N\gg0$.
By Proposition \ref{fermat-GM},
\[
\partial^n_t(e_\xi)
=-\sum_{1\leq i,j\leq N-1}\frac{(1-\nu^{-i}_1)(1-\nu^{-j}_2)}{N^2}
\frac{d^{n-1}}{dt^{n-1}}\left(\frac{F_{a_i,b_j}(t)}{t}\right)\wt\omega_{i,j}+(\cdots)\wt\eta_{i,j}
\]
and hence
\[
\Phi_\sigma\partial^n_t(e_\xi)
\equiv \sum_{1\leq i,j\leq N-1}
p^{-1}f_{n,i,j}(t)\mod K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle\eta^*_{i,j}\]
by Proposition \ref{frobenius-thm}.
Take the reduction of both sides of \eqref{fermat-main2-eq6} modulo
$K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle\eta^*_{i,j}$.
We then have
\[
\ve^{(i,j)}_{1,\tau}(\l)-\ve^{(i,j)}_{1,\sigma}(t)
-b_j^{-1}\frac{F'_{a_i,b_j}(t)}{F_{a_i,b_j}(t)}
(\ve^{(i,j)}_{2,\tau}(\l)-\ve^{(i,j)}_{2,\sigma}(t))
=\sum_{n=1}^\infty\frac{(t^\tau-t^\sigma)^n}{n!}
p^{-1}f_n(t).
\]
On the other hand,
\[
\ve^{(i,j)}_{1,\sigma}(t)=\cF^{(\sigma)}_{a_i,b_j}(t)+b_j^{-1}\frac{F'_{a_i,b_j}(t)}{F_{a_i,b_j}(t)}
\ve^{(i,j)}_{2,\sigma}(t)
\]
by \eqref{syn-reg-eq3}, \eqref{syn-reg-eq4} and Theorem \ref{fermat-main1}.
Hence
\[
\ve^{(i,j)}_{1,\tau}(\l)-\cF^{(\sigma)}_{a_i,b_j}(t)
=\sum_{n=1}^\infty\frac{(t^\tau-t^\sigma)^n}{n!}
p^{-1}f_n(t)
+b_j^{-1}\frac{F'_{a_i,b_j}(t)}{F_{a_i,b_j}(t)}
\ve^{(i,j)}_{2,\tau}(\l)
\]
as required.
\end{pf}
\begin{lem}\label{fermat-main2-lem4}
If $a_i+b_j<1$, then $\ord_{\l=0}(\ve_{1,\tau}^{(i,j)}(\l))\geq 0$ and
$\ord_{\l=0}(\ve_{2,\tau}^{(i,j)}(\l))\geq 1$.
\end{lem}
\begin{pf}
Since $e_\xi-\Phi_\tau(e_\xi)\in H_K$,
we have
\[
\ve_{1,\tau}^{(i,j)}(\l)\omega_{i,j}+\ve_{2,\tau}^{(i,j)}(\l)\eta_{i,j}\in H_K(i,j).
\]
If $a_i+b_j<1$, then this means
\[
\ve_{1,\tau}^{(i,j)}(\l_0^N)\, , \l_0^{-s}\ve_{2,\tau}^{(i,j)}(\l_0^N)\in K[[\l_0]]
\]
by Lemma \ref{fermat-main2-lem3}. Since $s=(a_i+b_j)N<N$, the assertion follows.
\end{pf}
\begin{lem}\label{fermat-main2-lem5}
If $a_i+b_j<1$ and $F_{a_i,b_j}(1)_{<p^n}\not\equiv 0$ mod $p$ for all $n\geq 1$, then
\[
\ve_{1,\tau}^{(i,j)}(0)=\cF^{(\sigma)}_{a_i,b_j}(1)
\]
where the left hand side denotes the evaluation at $\l=0$ ($\Leftrightarrow$ $t=1$)
and the right hand side denotes the evaluation at $t=1$.
Note that the left value is defined by Lemma \ref{fermat-main2-lem4}.
\end{lem}
\begin{pf}
This is straightforward from Lemma \ref{fermat-main2-lem0} on noticing that
$F'_{a_i,b_j}(t)/F_{a_i,b_j}(t)$ and $f_n(t)$ are convergent at $t=1$ by
\cite[p.45, Lem. 3.4 ]{Dwork-p-cycle} under the condition that
$F_{a_i,b_j}(1)_{<p^n}\not\equiv 0$ mod $p$ for all $n\geq 1$.
\end{pf}
\medskip
We finish the proof of Theorem \ref{fermat-main2}.
Let $(i,j)$ satisfy $a_i+b_j<1$.
Let
\[\rho_\tau(\reg_\syn(\xi))(i,j)\in H_K(i,j)\] be the eigencomponent of
$\rho_\tau(\reg_\syn(\xi))$, which agrees with
\[
-N^{-2}(1-\nu^{-i}_1)(1-\nu^{-j}_2)[
\ve^{(i,j)}_{1,\tau}(\l)\omega_{i,j}
+\ve^{(i,j)}_{2,\tau}(\l)\eta_{i,j}]
\]
by \eqref{fermat-main2-eq5} and \eqref{1-ext-thm1-eq2}.
It is straightforward to see $\iota^*(\omega_{i,j})=Nz^{N-i-1}w^{-j}dz.$
Then
\begin{align*}
&\iota^*[\ve^{(i,j)}_{1,\tau}(\l)\omega_{i,j}+\ve^{(i,j)}_{2,\tau}(\l)\eta_{i,j}]\\
=&
\ve_{1,\tau}^{(i,j)}(\l_0^N)|_{\l_0=0}\cdot \iota^*(\omega_{i,j})+
(\l_0^{-s}\ve_{2,\tau}^{(i,j)}(\l_0^N))|_{\l_0=0}\cdot \iota^*(\l_0^s\eta_{i,j})
&\text{(Lemma \ref{fermat-main2-lem3})}\\
=&
\ve_{1,\tau}^{(i,j)}(0)\cdot Nz^{N-i-1}w^{-j}dz
&\text{(Lemma \ref{fermat-main2-lem4} and $s<N$)}\\
=&
\cF_{a_i,b_j}^{(\sigma)}(1)\cdot Nz^{N-i-1}w^{-j}dz
& \text{(Lemma \ref{fermat-main2-lem5})}.
\end{align*}
Therefore
\[\reg_\syn(\xi|_F)
=-N^{-2}
\sum_{a_i+b_j<1}(1-\nu^{-i}_1)(1-\nu^{-j}_2)
\cF_{a_i,b_j}^{(\sigma)}(1)Nz^{N-i-1}w^{-j}dz+\sum_{a_i+b_j> 1}(-)
\]
by \eqref{1-ext-thm1-eq1}.
Taking the summation over $(\nu_1,\nu_2)\in \mu_N\times \mu_N$, we have
\[
N^2\reg_{\syn}(\{1-z,1-w\})=-\sum_{a_i+b_j< 1}\cF_{a_i,b_j}^{(\sigma)}(1)Nz^{N-i-1}w^{-j}dz
+\sum_{a_i+b_j> 1}(-)
\]
by \eqref{Xi-eq1}. This finishes the proof of Theorem \ref{fermat-main2}.
\bigskip
In \cite{ross2}, Ross showed the non-vanishing of the Beilinson regulator
\[
\reg_B\{1-z,1-w\}\in H^2_\cD(F,\R(2))\cong H^1_B(F,\R)^{F_\infty=-1}
\]
of his element in the Deligne-Beilinson cohomology group.
We expect the non-vanishing also in the $p$-adic situation.
\begin{conj}
Under the condition \eqref{fermat-main2-eq1},
$\cF_{\frac{i}{N},\frac{j}{M}}^{(\sigma)}(1)\ne0$.
\end{conj}
By the congruence relation for $\cF^{(\sigma)}_{\ul a}(t)$ (Theorem \ref{cong-thm}),
the non-vanishing $\cF_{\frac{i}{N},\frac{j}{M}}^{(\sigma)}(1)\ne0$ is equivalent to
\[
G_{\frac{i}{N},\frac{j}{M}}^{(\sigma)}(1)_{<p^n}\not\equiv 0\mod p^n
\]
for some $n\geq 1$.
A number of computations by computer indicate that this holds
(possibly $n\ne1$).
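To indicate the kind of computer experiment alluded to above, the following hedged sketch (our own illustration; it does not compute $G^{(\sigma)}$ itself) tests the simplest truncated non-vanishing of the type appearing in Lemma \ref{fermat-main2-lem5}: with the Dwork truncation $F_{a,b}(1)_{<p}=\sum_{k<p}\frac{(a)_k(b)_k}{(k!)^2}$ and $a=i/N$, $b=j/N$ read modulo $p$, it computes $F_{a,b}(1)_{<p}$ mod $p$.

```python
def trunc_hgm_at_one_mod_p(i, j, N, p):
    # F_{a,b}(1)_{<p} mod p, where a = i/N, b = j/N are read mod p
    # via the inverse of N (we assume p does not divide N)
    Ninv = pow(N, -1, p)
    a, b = (i * Ninv) % p, (j * Ninv) % p
    total, term = 0, 1          # term = (a)_k (b)_k / (k!)^2 mod p
    for k in range(p):
        total = (total + term) % p
        if k < p - 1:           # (k+1)! is invertible only while k+1 < p
            term = term * (a + k) % p * (b + k) % p \
                   * pow((k + 1) * (k + 1), -1, p) % p
    return total
```

For instance $F_{1/3,1/3}(1)_{<7}\equiv 6\not\equiv 0$ mod $7$, while $F_{1/3,1/3}(1)_{<5}\equiv 0$ mod $5$, consistent with the caveat that one may need $n\ne 1$.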
Moreover if the Fermat curve has a quotient to an elliptic curve over $\Q$, one can expect
that the syntomic regulator agrees with the special value of the $p$-adic $L$-function
according to the $p$-adic Beilinson conjecture by Perrin-Riou \cite[4.2.2]{Perrin-Riou}.
See Conjecture \ref{ell6-conj} below for details.
\subsection{Syntomic Regulators of Hypergeometric curves of Gauss type}
\label{gauss-sect}
Let $W=W(\ol\F_p)$ and $K=\Frac W$.
Let $N\geq 2$, $A,B>0$ be integers such that $0<A,B<N$ and $\gcd(N,A)=\gcd(N,B)=1$.
Let $X_{\gauss,K}\to\Spec K[\l,(\l-\l^2)^{-1}]$ be a smooth projective morphism
of relative dimension one whose generic fiber is
defined by the affine equation
\[
v^N=u^A(1-u)^B(1-\l u)^{N-B}.
\]
We call $X_{\gauss,K}$ the {\it hypergeometric curve of Gauss type} (\cite[\S 2.3]{A}).
The genus of a smooth fiber is $N-1$.
Let $X$ be the hypergeometric curve in \S \ref{fermat-sect} defined by an
affine equation $(1-x^N)(1-y^N)=t$.
Then there is a finite cyclic covering
\begin{equation}
\rho:X\times_WK\lra X_{\gauss,K},\quad
\begin{cases}
\rho^*(u)=x^{-N}\\
\rho^*(v)=x^{-A}(1-x^{-N})y^{N-B}\\
\rho^*(\l)=1-t
\end{cases}
\end{equation}
of degree $N$ whose Galois group is generated by an automorphism
$g_{A,B}:=[\zeta_N^B,\zeta_N^{-A}]\in \mu_N(K)\times \mu_N(K)$
(see \eqref{fermat-ss} for the notation) where $\zeta_N$ is
a fixed primitive $N$-th root of unity
($\rho$ is a generalization of the Fermat quotient, e.g. \cite[p.211]{gross}).
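The defining property of $\rho$ can be checked directly: substituting $u=x^{-N}$, $v=x^{-A}(1-x^{-N})y^{N-B}$ and $\l=1-t$ with $t=(1-x^N)(1-y^N)$ into $v^N=u^A(1-u)^B(1-\l u)^{N-B}$ yields an identity of rational functions. A small exact-arithmetic sketch of our own (not part of the argument) verifying this at sample rational points:

```python
from fractions import Fraction as Fr

def check_covering(N, A, B, x, y):
    # verify v^N = u^A (1-u)^B (1 - lam*u)^(N-B) under the
    # substitution rho^* described above, for x != 0
    t = (1 - x**N) * (1 - y**N)
    lam = 1 - t
    u = x**(-N)
    v = x**(-A) * (1 - x**(-N)) * y**(N - B)
    return v**N == u**A * (1 - u)**B * (1 - lam * u)**(N - B)
```

Since both sides are rational functions of $(x,y)$, agreement at a handful of generic rational points is strong evidence for (and in fact, with degree bounds, implies) the identity.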
\medskip
Suppose that $N$ is prime to $p$.
We construct an integral model $X_\gauss$ over $W$ in the following way.
The cyclic group $\langle g_{A,B}\rangle$ generated by $g_{A,B}$ acts freely on $X$.
We define
$X_\gauss$ to be the quotient
\[
f:X_\gauss\os{\text{def}}{=}X/\langle g_{A,B}\rangle\lra S=\Spec W[t,(t-t^2)^{-1}]
\]
of $X$ by the cyclic group $\langle g_{A,B}\rangle$. Since the action is free, $X_\gauss$
is smooth over $W$.
The cyclic group $(\mu_N(K)\times \mu_N(K))/\langle g_{A,B}\rangle$ acts on $X_{\gauss,K}$,
and it is generated by an automorphism $h$ given by $(u,v)\mapsto (u,\zeta^{-1}_N v)$.
For an integer $n$, we denote by $V(n)$ the eigenspace on which $h$ acts by
multiplication by $\zeta_N^n$ for all $\zeta_N\in \mu_N(K)$.
Then the pull-back $\rho^*$ satisfies
\begin{equation}\label{gauss-eq1}
\rho^*(H^1_\dR(X_{\gauss,K}/S_K)(n))=H^1_\dR(X_K/S_K)(-nA,-nB),\quad
0< n<N
\end{equation}
and the push-forward $\rho_*$ satisfies
\begin{equation}\label{gauss-eq2}
\rho_*(H^1_\dR(X_K/S_K)(i,j))=\begin{cases}
H^1_\dR(X_{\gauss,K}/S_K)(n)&(i,j)\equiv (-nA,-nB)\text{ mod }N\\
0&\text{otherwise}
\end{cases}
\end{equation}
for $0<i,j<N$.
Put
\begin{equation}\label{gauss-eq3}
\omega_n:=\rho_*(\omega_{-nA,-nB}),\quad
\eta_n:=\rho_*(\eta_{-nA,-nB})
\end{equation}
which form a basis of $H^1_\dR(X_{\gauss,K}/S_K)(n)$ (see \eqref{rule} for the notation $(-)_{-nA,-nB}$).
Recall $e^{\mathrm{unit}}_{i,j}$ from
Theorem \ref{uroot-thm}.
Put
\[
e^{\mathrm{unit}}_n:=\rho_*e^{\mathrm{unit}}_{-nA,-nB}\in
H^1_\dR(X_{\gauss,K}/S_K)(n)
\ot K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle
\]
for $0<n<N$.
Notice that
$\rho^*(\omega_n)=N\omega_{-nA,-nB}$,
$\rho^*(\eta_n)=N\eta_{-nA,-nB}$ and
$\rho^*(e^{\mathrm{unit}}_n)=Ne^{\mathrm{unit}}_{-nA,-nB}$ by \eqref{gauss-eq1} and
\eqref{gauss-eq2} together with the fact that $\rho_*\rho^*=N$.
We put
\begin{equation}\label{gauss-eq4}
\xi_\gauss=\xi_\gauss(\nu_1,\nu_2):=\rho_*\xi(\nu_1,\nu_2)\in K_2(X_\gauss)^{(2)}\subset
K_2(X_\gauss)\ot\Q
\end{equation}
where $\xi(\nu_1,\nu_2)$ is as in the beginning of \S \ref{syn-reg-sect}.
Let $\sigma_{\alpha}$ be the Frobenius given by $t^\sigma=ct^p$ with $c\in 1+pW$.
Taking the fixed part of \eqref{syn-reg-ext} by $\langle g_{A,B}\rangle$,
we have a $1$-extension
\[
0\lra H^1(X_\gauss/S)(2)\lra M_{\xi_\gauss}(X_\gauss/S)\lra \O_S\lra 0
\]
in the exact category $\FilFMIC(S)$.
Let $e_{\xi_\gauss}\in \Fil^0M_{\xi_\gauss}(X_\gauss/S)_\dR$
be the unique lifting of $1\in \O(S)$.
\begin{thm}\label{gauss-main1}
Put $a_n:=nA/N-\lfloor nA/N\rfloor$ and $b_n:=nB/N-\lfloor nB/N\rfloor$.
Let $h(t)=\prod_{m=0}^sF_{a_n^{(m)},b_n^{(m)}}(t)_{<p}$ where $s$ is the minimal
integer such that $(a_n^{(s+1)},b_n^{(s+1)})=(a_n,b_n)$ for all $n\in\{1,2,\ldots,N-1\}$.
Then
\[
e_{\xi_\gauss}-\Phi(e_{\xi_\gauss})\equiv -
\sum_{n=1}^{N-1}\frac{(1-\nu^{nA}_1)(1-\nu^{nB}_2)}{N^2}\cF^{(\sigma)}_{a_n,b_n}(t)
\omega_n\]
modulo $\sum_{n=1}^{N-1}K\langle t,(t-t^2)^{-1},h(t)^{-1}\rangle e^{\mathrm{unit}}_n.$
\end{thm}
\begin{pf}
This is immediate by applying $\rho_*$ on the formula in Theorem \ref{fermat-main1}.
\end{pf}
\begin{cor}\label{gauss-main2}
Suppose that $p>2$ is prime to $N$.
Let $a\in W$ be such that $a\not\equiv0,1$ mod $p$.
Let $\sigma_a$ be the Frobenius given by $t^\sigma=F(a)a^{-p}t^p$
where $F$ is the Frobenius on $W$.
Let $X_{\gauss,a}$ be the fiber at $t=a$ ($\Leftrightarrow$ $\l=1-a$),
which is a smooth projective variety over $W$ of relative dimension one.
Let
\[
\reg_\syn:K_2(X_{\gauss,a})\lra
H^2_\syn(X_{\gauss,a},\Q_p(2))\cong H^1_\dR(X_{\gauss,a}/K)
\]
be the syntomic regulator map.
Let $Q$ be the cup-product pairing on $H^1_\dR(X_{\gauss,a}/K)$.
Then
\[
Q(\reg_\syn(\xi_\gauss|_{X_a}),e^{\mathrm{unit}}_{N-n})=
N^{-2}
(1-\nu^{nA}_1)(1-\nu^{nB}_2)
\cF_{a_n,b_n}^{(\sigma_a)}(a)
Q(\omega_n,e^{\mathrm{unit}}_{N-n}).
\]
\end{cor}
\begin{pf}
This is immediate from Theorem \ref{gauss-main1}
on noticing $Q(e^{\mathrm{unit}}_n,e^{\mathrm{unit}}_m)=0$, cf. Corollary \ref{main-thm4}.
\end{pf}
\subsection{Syntomic Regulators of elliptic curves}\label{elliptic-sect}
The methods in \S \ref{syn-reg-sect} work not only for the hypergeometric curves but
also for the elliptic fibrations listed in \cite[\S 5]{A}.
We here give the results together with a sketch of the proof because the discussion
is similar to before.
\begin{thm}\label{elliptic-thm1}
Let $p\geq 5$ be a prime number. Let $W=W(\ol\F_p)$ be the Witt ring and $K:=\Frac(W)$
the fraction field.
Let $f:Y\to\P^1$ be the elliptic fibration defined by a Weierstrass equation
$3y^2=2x^3-3x^2+1-t$ over $W$. Put $\omega=dx/y$.
Let
\[\xi:=\left\{
\frac{y-x+1}{y+x-1},
\frac{t}{2(x-1)^3}
\right\}\in K_2(X),\quad X:=Y\setminus f^{-1}(\{0,1,\infty\}).
\]
Let $a\in W$ be such that
$a\not\equiv0,1$ mod $p$ and such that the fiber $X_a$ at
$\Spec W[t]/(t-a)$
has a good ordinary reduction.
Let $e_{\text{\rm unit}}\in H^1_\dR(X_a/K)$ be the eigenvector of
the unit root with respect to the $p$-th Frobenius $\Phi$.
Let $\sigma_a$ denote the $p$-th Frobenius given by $\sigma_a(t)=F(a)
a^{-p}t^p$.
Then, we have
\[
Q(\reg_\syn(\xi|_{X_a}), e_{\text{\rm unit}})
=\cF_{\frac{1}{6},\frac{5}{6}}^{(\sigma_a)}(a)
Q(\omega,e_{\text{\rm unit}})
\]
\end{thm}
\begin{pf} (sketch).
We first note that
\[
\dlog(\xi)=\frac{dx}{y}\wedge\frac{dt}{t}=\omega\wedge\frac{dt}{t}.
\]
Let $\cE$ be the fiber over the formal neighborhood $\Spec \Z_p[[t]]\hra \P^1_{\Z_p}$.
Let $\rho:\bG_m\to \cE$ be the uniformization, and $u$ the uniformizer of $\bG_m$.
Then we have
\[
\rho^*\omega=F(t)\frac{du}{u}
\]
with $F(t)$ a formal power series in $\Z_p[[t]]$ which
satisfies the Picard-Fuchs equation. Namely $F(t)$ is a solution
of the differential equation
\[
(t-t^2)\frac{d^2y}{dt^2}+(1-2t)\frac{dy}{dt}-\frac{5}{36}y=0.
\]
Therefore $F(t)$ coincides with the hypergeometric power series
\[
F_{\frac{1}{6},\frac{5}{6}}(t)={}_2F_1\left({\frac{1}{6},\frac{5}{6}\atop 1};t\right)
\]
up to scalar.
Looking at the residue of $\omega$ at the point $(x,y,t)=(1,0,0)$, one finds that
the constant is $1$.
Hence we have
\[
\rho^*\omega=F_{\frac{1}{6},\frac{5}{6}}(t)\frac{du}{u}.
\]
Then the rest of the proof goes in the same way as Theorem \ref{fermat-main1}.
\end{pf}
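The Picard-Fuchs property invoked in the proof can be verified on truncated power series: the coefficients $c_k$ of ${}_2F_1(a,b;1;t)$ satisfy $c_{k+1}(k+1)^2=c_k(a+k)(b+k)$, which makes every coefficient of $t(1-t)y''+(1-(a+b+1)t)y'-ab\,y$ vanish. The following sketch in exact arithmetic (our own illustration) checks this; the same computation applies to the parameters $(\frac13,\frac23)$ and $(\frac14,\frac34)$ of Theorems \ref{elliptic-thm2} and \ref{elliptic-thm3} below, all of which have $a+b=1$, so the operator reads $(t-t^2)y''+(1-2t)y'-ab\,y$ as in the proof.

```python
from fractions import Fraction as Fr

def hgm_coeffs(a, b, n):
    # coefficients of 2F1(a, b; 1; t) up to degree n
    c = [Fr(1)]
    for k in range(n):
        c.append(c[-1] * (a + k) * (b + k) / (k + 1) ** 2)
    return c

def picard_fuchs_residual(a, b, n):
    # coefficients of t(1-t) y'' + (1 - (a+b+1) t) y' - a*b*y
    # applied to the series, up to degree n - 2; all should vanish
    c = hgm_coeffs(a, b, n)
    return [(k + 1) * k * c[k + 1] - k * (k - 1) * c[k]
            + (k + 1) * c[k + 1] - (a + b + 1) * k * c[k] - a * b * c[k]
            for k in range(n - 1)]
```

For $(a,b)=(\frac16,\frac56)$ one recovers the differential equation $(t-t^2)y''+(1-2t)y'-\frac{5}{36}y=0$ displayed in the proof.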
\begin{thm}\label{elliptic-thm2}
Let $f:Y\to\P^1$ be the elliptic fibration defined by a Weierstrass equation
$y^2=x^3+(3x+4t)^2$, and
\[\xi:=\left\{
\frac{y-3x-4t}{-8t},
\frac{y+3x+4t}{8t}
\right\}.
\]
Then, under the same notation and assumption in Theorem \ref{elliptic-thm1}, we have
\[
Q(\reg_\syn(\xi|_{X_a}), e_{\text{\rm unit}})
=3\cF_{\frac{1}{3},\frac{2}{3}}^{(\sigma_a)}(a)Q(
\omega,e_{\text{\rm unit}}).
\]
\end{thm}
\begin{pf}
Let $\cE$ be the fiber over the formal neighborhood $\Spec \Z_p[[t]]\hra \P^1_{\Z_p}$,
and let $\rho:\bG_m\to \cE$ be the uniformization.
Then one finds
\[
\dlog(\xi)=-3\frac{dx}{y}\wedge\frac{dt}{t}=-3\omega\wedge\frac{dt}{t}
\]
and
\[
\rho^*\omega=\frac{1}{3}F_{\frac{1}{3},\frac{2}{3}}(t)\frac{du}{u}.
\]
The rest is the same as before.
\end{pf}
\begin{thm}\label{elliptic-thm3}
Let $f:Y\to\P^1$ be the elliptic fibration defined by a Weierstrass equation
$y^2=x^3-2x^2+(1-t)x$, and
\[\xi:=\left\{\frac{y-(x-1)}{y+(x-1)},\frac{-t x}{(x-1)^3}\right\}.
\]
Then, under the same notation and assumption in Theorem \ref{elliptic-thm1}, we have
\[
Q(\reg_\syn(\xi|_{X_a}), e_{\text{\rm unit}})
=\cF_{\frac{1}{4},\frac{3}{4}}^{(\sigma_a)}(a)
Q(\omega,e_{\text{\rm unit}}).
\]
\end{thm}
\begin{pf}
One finds
\[
\dlog(\xi)=\frac{dx}{y}\wedge\frac{dt}{t}=\omega\wedge\frac{dt}{t}
\]
and
\[
\rho^*\omega=F_{\frac{1}{4},\frac{3}{4}}(t)\frac{du}{u}.
\]
The rest is the same as before.
\end{pf}
\section{$p$-adic Beilinson conjecture for elliptic curves over $\Q$}\label{weakB-sect}
\subsection{Statement}
The Beilinson regulator is a generalization of Dirichlet's regulator of number fields
to higher $K$-groups of varieties.
Beilinson conjectured formulas relating the regulators and special values of motivic $L$-functions
which generalize the analytic class number formula.
For an elliptic curve $E$ over $\Q$, Beilinson proved
that there is an integral symbol $\xi\in K_2(E)$
such that the real regulator $\reg_\R(\xi)\in H^2_\cD(E,\R(2))\cong \R$
agrees with the special value of the $L$-function $L(E,s)$ of $E$,
\[
\reg_\R(\xi)\sim_{\Q^\times}L'(E,0)\quad
(\Longleftrightarrow\,
\reg_\R(\xi)/L'(E,0)\in \Q^\times)
\]
where
$L'(E,s)=\frac{d}{ds}L(E,s)$ (\cite[Theorem 1.3]{B-reg}).
The $p$-adic counterpart of the Beilinson conjecture
was formulated by Perrin-Riou \cite[4.2.2]{Perrin-Riou},
which we call the {\it $p$-adic Beilinson conjecture}.
See also \cite{Colmez} for a general survey.
Her conjecture is formulated in terms of the $p$-adic etale regulators (which
are compatible with the syntomic regulators thanks to Besser's theorem) and the conjectural
$p$-adic measures which provide the $p$-adic $L$-functions of motives.
However
there are only a few works in the literature,
due to the extreme difficulty of the statement.
In a joint paper \cite{ABC} with Chida,
we give a concise statement of the $p$-adic Beilinson
conjecture by restricting ourselves to the case of elliptic curves over $\Q$.
\begin{conj}[Weak $p$-adic Beilinson conjecture]\label{weakB-conj}
Let $E$ be an elliptic curve over $\Q$.
Let $p>2$ be a prime at which $E$ has a good ordinary reduction.
Let $E_{\Q_p}:=E\times_\Q\Q_p$ and
let $e_{\text{\rm unit}}\in H^1_\dR(E_{\Q_p}/\Q_p)$ be the eigenvector for
the unit root $\alpha_{E,p}$ of the $p$-th Frobenius $\Phi$.
Let $L_p(E,\chi,s)$ be the $p$-adic $L$-function by Mazur and Swinnerton-Dyer
\cite{MS}.
Let $Q:H^1_\dR(E_{\Q_p}/\Q_p)^{\ot2}\to H^2_\dR(E_{\Q_p}/\Q_p)\cong\Q_p$ be the cup-product pairing.
Let
\[
\reg_\syn:K_2(E)\to H^2_\syn(E,\Q_p(2))\cong H^1_\dR(E_{\Q_p}/\Q_p)
\]
be the syntomic regulator map.
Fix a regular 1-form $\omega_E\in \vg(E,\Omega^1_{E/\Q})$.
Let $\omega:\Z_p^\times\to\mu_{p-1}$ be the Teichm\"uller character.
Then there is a constant $C\in \Q^\times$ which does not depend on $p$ such that
\[
(1-p\alpha_{E,p}^{-1})
\frac{Q(\reg_\syn(\xi),e_\unit)}{Q(\omega_E,e_\unit)}
=CL_p(E,\omega^{-1},0).
\]
\end{conj}
In \cite[Conjecture 3.3]{ABC}, we give a precise description of the constant $C$
in terms of the real regulator.
\subsection{Conjecture on Rogers-Zudilin type formulas}\label{RZ-sect}
In their paper \cite{RZ}, Rogers and Zudilin give descriptions of special values of
$L$-functions of elliptic curves in terms of
the hypergeometric functions ${}_3F_2$ or ${}_4F_3$.
Applying the theorems in \S \ref{elliptic-sect} to Conjecture \ref{weakB-conj}, we obtain
a statement of the $p$-adic counterpart of the Rogers-Zudilin type formulas.
\medskip
The following is obtained from Conjecture \ref{weakB-conj} and
Corollary \ref{gauss-main2}
(note that the symbol \eqref{RZ-eq2} below
agrees with $\xi_\gauss|_{X_a}$ in \S \ref{gauss-sect} up to a constant).
\begin{conj}\label{ell1-conj}
Let $X\to \P^1$ be the elliptic fibration defined by a Weierstrass equation
$y^2=x(1-x)(1-(1-t)x)$ (i.e. the hypergeometric curve of Gauss type, cf. \S \ref{gauss-sect}).
Let $X_a$ be the fiber at $t=a\in\Q\setminus\{0,1\}$.
Suppose that the symbol
\begin{equation}\label{RZ-eq2}
\xi_a=\left\{\frac{y-1+x}{y+1-x},\frac{a x^2}{(1-x)^2}\right\}\in K_2(X_a)
\end{equation}
is integral in the sense of Scholl \cite{Scholl}.
Let $p>2$ be a prime such that
$X_a$ has a good ordinary reduction at $p$.
Let $\sigma_a:\Z_p[[t]]\to\Z_p[[t]]$ be the $p$-th Frobenius given by
$\sigma_a(t)=a^{1-p}t^p$.
Let $\alpha_{X_a,p}$ be the unit root.
Then there is a rational number $C_a\in \Q^\times$ not depending on $p$ such that
\[
(1-p\alpha^{-1}_{X_a,p})\cF_{\frac{1}{2},\frac{1}{2}}^{(\sigma_a)}(a)=C_a
L_p(X_a,\omega^{-1},0)
\]
where $\omega$ is the Teichm\"uller character.
\end{conj}
Here are examples of $a$ such that the symbol \eqref{RZ-eq2} is integral
(cf. \cite[5.4]{A})
\[
a=-1,\pm2,\pm4,\pm8,\pm16,
\pm\frac{1}{2},
\pm\frac{1}{8},\pm\frac{1}{4}, \pm\frac{1}{16}.
\]
F. Brunault compared
the symbol \eqref{RZ-eq2} with the Beilinson-Kato element in the case $a=4$
(\cite[Appendix B]{ABC}).
Then, thanks to the main result of his paper \cite{regexp},
it follows that $\reg_\syn(\xi_a)$ gives the $p$-adic $L$-value
of $X_4$ (see \cite[Theorem 5.2]{ABC} for the precise statement).
We thus have
\begin{thm}
\label{brunault}
When $a=4$, Conjecture \ref{ell1-conj} is true and $C_4=-1$.
\end{thm}
We obtain the following statements from
Theorems \ref{elliptic-thm1}, \ref{elliptic-thm2} and \ref{elliptic-thm3}.
\begin{conj}\label{ell2-conj}
Let $a\in \Q\setminus\{0,1\}$ and
let $X_a$ be the elliptic curve over $\Q$ defined by an affine equation
$3y^2=2x^3-3x^2+1-a$.
Suppose that the symbol
\begin{equation}\label{RZ-eq3}
\left\{
\frac{y-x+1}{y+x-1},
\frac{a}{2(x-1)^3}
\right\}\in K_2(X_a)
\end{equation}
is integral.
Let $p>3$ be a prime such that
$X_a$ has a good ordinary reduction at $p$.
Then there is a rational number $C_a\in \Q^\times$ not depending on $p$ such that
\[
(1-p\alpha^{-1}_{X_a,p})\cF_{\frac{1}{6},\frac{5}{6}}^{(\sigma_a)}(a)=C_a
L_p(X_a,\omega^{-1},0).
\]
\end{conj}
There are infinitely many $a$ such that
the symbol \eqref{RZ-eq3} is integral.
For example, if $a=1/n$ with
$n\in\Z_{\geq 2}$ and $n\equiv 0,2$ mod $6$,
then
the symbol \eqref{RZ-eq3} is integral (cf. \cite[5.4]{A}).
\begin{conj}\label{ell3-conj}
Let $a\in \Q\setminus\{0,1\}$ and
let $X_a$ be the elliptic curve over $\Q$ defined by an affine equation
$y^2=x^3+(3x+4a)^2$.
Suppose that the symbol
\begin{equation}\label{RZ-eq4}
\left\{
\frac{y-3x-4a}{-8a},
\frac{y+3x+4a}{8a}
\right\}
\in K_2(X_a)
\end{equation}
is integral.
Let $p>3$ be a prime such that
$X_a$ has a good ordinary reduction at $p$.
Then there is a rational number $C_a\in \Q^\times$ not depending on $p$ such that
\[
(1-p\alpha^{-1}_{X_a,p})\cF_{\frac{1}{3},\frac{2}{3}}^{(\sigma_a)}(a)=C_a
L_p(X_a,\omega^{-1},0).
\]
\end{conj}
If $a=\frac{1}{6n}$ with
$n\in\Z_{\geq 1}$ arbitrary,
then
the symbol \eqref{RZ-eq4} is integral (cf. \cite[5.4]{A}).
\begin{conj}\label{ell4-conj}
Let $a\in \Q\setminus\{0,1\}$ and
let $X_a$ be the elliptic curve over $\Q$ defined by an affine equation
$y^2=x^3-2x^2+(1-a)x$.
Suppose that the symbol
\begin{equation}\label{RZ-eq5}
\left\{\frac{y-(x-1)}{y+(x-1)},\frac{-a x}{(x-1)^3}\right\}
\in K_2(X_a)
\end{equation}
is integral.
Let $p>2$ be a prime such that
$X_a$ has a good ordinary reduction at $p$.
Then there is a rational number $C_a\in \Q^\times$ not depending on $p$ such that
\[
(1-p\alpha^{-1}_{X_a,p})\cF_{\frac{1}{4},\frac{3}{4}}^{(\sigma_a)}(a)=C_a
L_p(X_a,\omega^{-1},0).
\]
\end{conj}
If the denominator of $j(X_a)=64(1+3a)^3/(a(1-a)^2)$ is prime to
$a$ (e.g. $a=1/n$, $n\in \Z_{\geq 2}$), then
the symbol \eqref{RZ-eq5} is integral.
\medskip
From Corollary \ref{main-thm4} and Theorem \ref{fermat-main2},
we have the following conjectures.
\begin{conj}\label{ell6-conj}
Let $a\in \Q\setminus\{0,1\}$ and
let $X_a$ be the elliptic curve over $\Q$ defined by an affine equation
$(x^2-1)(y^2-1)=a$.
Suppose that the symbol
\begin{equation}\label{RZ-eq6}
\left\{\frac{x-1}{x+1},\frac{y-1}{y+1}\right\}\in K_2(X_a)
\end{equation}
is integral.
Let $p>2$ be a prime such that
$X_a$ has a good ordinary reduction at $p$.
Then there is a rational number $C_a\in \Q^\times$ not depending on $p$ such that
\[
(1-p\alpha^{-1}_{X_a,p})\cF_{\frac{1}{2},\frac{1}{2}}^{(\sigma_a)}(1)=C_a
L_p(X_a,\omega^{-1},0).
\]
\end{conj}
If the denominator of $j(X_a)=16(a^2-16a+16)^3/((1-a)a^4)$
is prime to $a$ (e.g. $a=\pm 2^n$, $n\in \{\pm 1,\pm 2,\pm 3\}$), then
the symbol \eqref{RZ-eq6} is integral.
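As a quick sanity check of the listed examples (our own hedged sketch; for $a=\pm2^{n}$ the condition that the denominator of $j(X_a)$ be prime to $a$ amounts to that denominator being odd), one can evaluate the $j$-invariant formula above in exact arithmetic:

```python
from fractions import Fraction as Fr

def j_ell6(a):
    # j(X_a) = 16 (a^2 - 16a + 16)^3 / ((1 - a) a^4), as above
    return Fr(16) * (a * a - 16 * a + 16) ** 3 / ((1 - a) * a ** 4)

def denominator_odd(a):
    # for a = +-2^n, "denominator of j(X_a) prime to a" means odd
    return j_ell6(a).denominator % 2 == 1
```

Running this over $a=\pm2^{n}$, $n\in\{\pm1,\pm2,\pm3\}$, confirms oddness in every case (e.g. $j(X_2)=1728$).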
\begin{conj}\label{ell7-conj}
Let $F_{N,M}$ be the Fermat curve defined by an affine equation
$z^N+w^M=1$, and $F^*_{2,4}$ the curve $z^2=w^4+1$.
Let $\sigma=\sigma_1$ (i.e. $\sigma(t)=t^p$).
Then there are rational numbers $C,C^\prime,
C^{\prime\prime}\in \Q^\times$ not depending on $p$ such that
\[
(1-p\alpha^{-1}_{F_{3,3},p})\cF_{\frac{1}{3},\frac{1}{3}}^{(\sigma)}(1)=C
L_p(F_{3,3},\omega^{-1},0),
\]
\[
(1-p\alpha^{-1}_{F_{2,4},p})\cF_{\frac{1}{2},\frac{1}{4}}^{(\sigma)}(1)=C^\prime
L_p(F_{2,4},\omega^{-1},0),
\]
\[
(1-p\alpha^{-1}_{F_{2,4}^*,p})\cF_{\frac{1}{4},\frac{1}{4}}^{(\sigma)}(1)=C^{\prime\prime}
L_p(F_{2,4}^*,\omega^{-1},0).
\]
\end{conj}
\section{Introduction}
Sasakian manifolds possess a rich geometric structure (cf.
\cite{kn:Bla}, p. 73-80) and are perhaps the closest odd
dimensional analog of K\"ahlerian manifolds. In particular the
concept of holomorphic sectional curvature admits a Sasakian
counterpart, the so called $\varphi$-sectional curvature $H(X)$
(cf. \cite{kn:Bla}, p. 94) and it is a natural problem (as well as
in K\"ahlerian geometry, cf. e.g. \cite{kn:KoNo}, p. 171, and p.
368-373) to investigate how restrictions on $H(X)$ influence upon
the topology of the manifold. An array of findings in this
direction is described in \cite{kn:Bla}, p. 77-80. For instance,
by a result of M. Harada, \cite{kn:Har1}, for any compact regular
Sasakian manifold $M$ satisfying the inequality $h > k^2$ the
fundamental group $\pi_1 (M)$ is cyclic. Here $h = \inf \{ H(X) :
X \in T_x (M), \; \| X \| = 1, \; x \in M \}$ and it is also
assumed that the least upper bound of the sectional curvature of
$M$ is $1/k^2$. Moreover, if additionally $M$ has minimal diameter
$\pi$ then $M$ is isometric to the standard sphere $S^{2n+1}$, cf.
\cite{kn:Har2}, p. 200.
\par
In the present paper we embrace a different point of view, that of
pseudohermitian geometry (cf. \cite{kn:Web}). To describe it we
need to introduce a few basic objects (cf. \cite{kn:Bla}, p.
19-28). Let $M$ be a $(2n+1)$-dimensional $C^\infty$ manifold and
$(\varphi , \xi , \eta , g)$ a {\em contact metric structure} i.e.
$\varphi$ is an endomorphism of the tangent bundle, $\xi$ is a
tangent vector field, $\eta$ is a differential $1$-form, and $g$
is a Riemannian metric on $M$ such that
\[ \varphi^2 = - I + \eta \otimes \xi , \;\;\; \varphi (\xi ) = 0,
\;\;\; \eta (\xi ) = 1, \]
\[ g(\varphi X , \varphi Y) = g(X,Y) - \eta (X) \eta (Y), \;\;\;
X,Y \in T(M), \] and $\Omega = d \eta$ (the {\em contact
condition}) where $\Omega (X,Y) = g(X , \varphi Y)$. Any contact
Riemannian manifold $(M, (\varphi , \xi , \eta , g))$ admits a
natural almost CR structure
\[ T_{1,0}(M) = \{ X - i J X : X \in {\rm Ker}(\eta )\} \]
($i = \sqrt{-1}$) i.e. it satisfies (\ref{e:int1}) below. By a
result of S. Ianu\c{s}, \cite{kn:Ian}, if $(\varphi , \xi , \eta
)$ is {\em normal} (i.e. $[\varphi , \varphi ] + 2 (d \eta )
\otimes \xi = 0$) then $T_{1,0}(M)$ is integrable, i.e. it obeys
to (\ref{e:int2}) in Section 2. Cf. \cite{kn:Bla}, p. 57-61, for
the geometric interpretation of normality, as related to the
classical embeddability theorem for real analytic CR structures
(cf. \cite{kn:AnHi}). Integrability of $T_{1,0}(M)$ is required in
the construction of the Tanaka-Webster connection of $(M , \eta
)$, cf. \cite{kn:Tan}, \cite{kn:Web} and definitions in Section 2
(although many results in pseudohermitian geometry are known to
carry over to arbitrary contact Riemannian manifolds, cf.
\cite{kn:Tann} and more recently \cite{kn:BaDr2}, \cite{kn:BlDr}).
A manifold carrying a contact metric structure $(\varphi , \xi ,
\eta , g)$ whose underlying contact structure $(\varphi , \xi ,
\eta )$ is normal is a {\em Sasakian manifold} (and $g$ is a {\em
Sasakian metric}). The main tool in the Riemannian approach to the
study of Sasakian geometry is the availability of a variational
theory of geodesics of the Levi-Civita connection of $(M , g)$
(cf. e.g. \cite{kn:Har2}, 194-197). In this paper we start the
elaboration of a similar theory regarding the geodesics of the
Tanaka-Webster connection $\nabla$ of $(M , \eta )$ and give a few
applications (cf. Theorems \ref{t:conj1}-\ref{t:conj2} and
\ref{t:14} below). Our motivation is twofold. First, we aim to
study the topology of Sasakian manifolds under restrictions on the
curvature of $\nabla$ and conjecture that Carnot-Carath\'eodory
complete Sasakian manifolds whose pseudohermitian Ricci tensor
$\rho$ satisfies $\rho (X,X) \geq (2n-1) k_0 \| X \|^2$ for some
$k_0 > 0$ and any $X \in {\rm Ker}(\eta )$ must be compact.
Second, the relationship between the sub-Riemannian geodesics of
the sub-Riemannian manifold $(M , {\rm Ker}(\eta ), g)$ and the
geodesics of $\nabla$ (emphasized by our Corollary
\ref{c:relation}) together with R.S. Strichartz's arguments (cf.
\cite{kn:Str}, p. 245 and 261-262) clearly indicates that a
variational theory of geodesics of $\nabla$ is the key requirement
in bringing results such as those in \cite{kn:Str2} or
\cite{kn:Oba} into the realm of subelliptic theory. In
\cite{kn:Ba} one obtains a pseudohermitian version of the Bochner
formula (cf. e.g. \cite{kn:BGM}, p. 131) implying a lower bound on
the first nonzero eigenvalue $\lambda_1$ of the sublaplacian
$\Delta_b$ of a compact Sasakian manifold
\begin{equation}
- \lambda_1 \geq 2nk/(2n-1) \label{e:Lic}
\end{equation}
(a CR analog to the Lichnerowicz theorem, \cite{kn:Lic}). It is
likely that a theory of geodesics of $\nabla$ may be employed to
show that equality in (\ref{e:Lic}) implies that $M$ is CR
isomorphic to a sphere $S^{2n+1}$ (the CR analog to Obata's
result, \cite{kn:Oba}).
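For readers who wish to experiment, the defining identities of a contact metric structure recalled above can be verified on the standard example $\eta=dz-y\,dx$ on $\mathbb{R}^3$, with $\xi=\partial_z$ and $(\varphi,g)$ chosen so that $\{\partial_x+y\partial_z,\,\partial_y,\,\xi\}$ is an orthonormal frame. The following is a toy sketch of our own (this particular normalization is our choice, not taken from the cited references); all tensors are written as $3\times 3$ matrices in the coordinate frame at a point whose second coordinate is $y$.

```python
from fractions import Fraction as Fr

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def check_contact_metric(y):
    # eta = dz - y dx, xi = d/dz; phi and g make the frame
    # {d/dx + y d/dz, d/dy, xi} orthonormal with phi rotating the
    # first two legs and killing xi
    eta = [-y, 0, 1]
    xi = [0, 0, 1]
    phi = [[0, 1, 0], [-1, 0, 0], [0, y, 0]]
    g = [[1 + y * y, 0, -y], [0, 1, 0], [-y, 0, 1]]
    d_eta = [[0, 1, 0], [-1, 0, 0], [0, 0, 0]]  # matrix of d(eta) = dx ^ dy
    # phi^2 = -I + eta (x) xi
    ok1 = mat_mul(phi, phi) == \
        [[-(i == j) + xi[i] * eta[j] for j in range(3)] for i in range(3)]
    # g(phi X, phi Y) = g(X, Y) - eta(X) eta(Y)
    ok2 = mat_mul(transpose(phi), mat_mul(g, phi)) == \
        [[g[i][j] - eta[i] * eta[j] for j in range(3)] for i in range(3)]
    # contact condition: Omega(X, Y) = g(X, phi Y) agrees with d(eta)
    ok3 = mat_mul(g, phi) == d_eta
    return ok1 and ok2 and ok3
```

All three identities, $\varphi^2=-I+\eta\otimes\xi$, $g(\varphi X,\varphi Y)=g(X,Y)-\eta(X)\eta(Y)$ and $\Omega=d\eta$, hold exactly in this example.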
\vskip 0.1in {\small {\bf Acknowledgements}. The Authors are
grateful to the anonymous Referee who pointed out a few errors in
the original version of the manuscript. The Authors acknowledge
support from INdAM (Italy) within the interdisciplinary project
{\em Nonlinear subelliptic equations of variational origin in
contact geometry}.}
\section{Sub-Riemannian geometry on CR manifolds}
Let $M$ be an orientable $(2n+1)$-dimensional $C^\infty$ manifold.
A {\em CR structure} on $M$ is a complex distribution $T_{1,0}(M)
\subset T(M) \otimes {\mathbb C}$, of complex rank $n$, such that
\begin{equation}
T_{1,0}(M) \cap T_{0,1}(M) = (0)
\label{e:int1}
\end{equation}
and
\begin{equation}
\label{e:int2}
Z, W \in T_{1,0}(M) \Longrightarrow [Z, W] \in T_{1,0}(M)
\end{equation}
(the {\em formal integrability property}). Here $T_{0,1}(M) =
\overline{T_{1,0}(M)}$ (overbars denote complex conjugates). The
integer $n$ is the {\em CR dimension}. The pair $(M, T_{1,0}(M))$
is a {\em CR manifold} (of {\em hypersurface type}). Let $H(M) =
{\rm Re}\{ T_{1,0}(M) \oplus T_{0,1}(M) \}$ be the {\em Levi
distribution}. It carries the complex structure $J : H(M) \to
H(M)$ given by $J (Z + \overline{Z}) = i (Z - \overline{Z})$ ($i =
\sqrt{-1}$). Let $H(M)^\bot \subset T^* (M)$ be the conormal bundle,
i.e. $H(M)^\bot_x = \{ \omega \in T_x^* (M) : {\rm Ker}(\omega )
\supseteq H(M)_x \}$, $x \in M$. A {\em pseudohermitian structure}
on $M$ is a globally defined nowhere zero cross-section $\theta$
in $H(M)^\bot$. Pseudohermitian structures exist as the
orientability assumption implies that $H(M)^\bot \approx M \times
{\mathbb R}$ (a diffeomorphism) i.e. $H(M)^\bot$ is a trivial line
bundle. For a review of the main notions of CR and pseudohermitian
geometry one may see \cite{kn:Dra}.
\par
Let $(M, T_{1,0}(M))$ be a CR manifold, of CR dimension $n$. Let
$\theta$ be a pseudohermitian structure on $M$. The {\em Levi
form} is
\[ L_\theta (Z , \overline{W}) = - i (d \theta )(Z ,
\overline{W}), \;\;\; Z,W \in T_{1,0}(M). \] $M$ is {\em
nondegenerate} if $L_\theta$ is nondegenerate for some $\theta$.
Two pseudohermitian structures $\theta$ and $\hat{\theta}$ are
related by
\begin{equation}
\hat{\theta} = f \; \theta \label{e:intro1}
\end{equation}
for some $C^\infty$ function $f : M \to {\mathbb R} \setminus \{ 0
\}$. Since $L_{\hat{\theta}} = f L_\theta$, nondegeneracy of $M$ is
a {\em CR invariant} notion, i.e. it is invariant under a
transformation (\ref{e:intro1}) of the pseudohermitian structure.
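The rule $L_{\hat{\theta}} = f L_\theta$ used above follows from a one-line computation, recorded here for convenience; only the fact that $\theta$ annihilates $T_{1,0}(M) \oplus T_{0,1}(M)$ is used.

```latex
% d\hat{\theta} = df \wedge \theta + f \, d\theta , and \theta (Z) = \theta (\overline{W}) = 0 , hence
L_{\hat{\theta}} (Z , \overline{W})
  = - i \, (d \hat{\theta})(Z , \overline{W})
  = - i \, (d f \wedge \theta + f \, d \theta )(Z , \overline{W})
  = f \, L_\theta (Z , \overline{W}) .
```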
The whole setting bears an obvious analogy to conformal geometry
(a fact already exploited by many authors, cf. e.g.
\cite{kn:DrTo}, \cite{kn:Tan}-\cite{kn:Web}). If $M$ is
nondegenerate then any pseudohermitian structure $\theta$ on $M$
is actually a {\em contact form}, i.e. $\theta \wedge (d \theta
)^n$ is a volume form on $M$. By a fundamental result of N. Tanaka
and S. Webster (cf. {\em op. cit.}) on any nondegenerate CR
manifold on which a contact form $\theta$ has been fixed there is
a canonical linear connection $\nabla$ (the {\em Tanaka-Webster
connection} of $(M , \theta )$) compatible with the Levi
distribution and its complex structure, as well as to the Levi
form. Precisely, let $T$ be the globally defined nowhere zero
tangent vector field on $M$, transverse to $H(M)$, uniquely
determined by $\theta (T) = 1$ and $T \, \rfloor \, d \theta = 0$
(the {\em characteristic direction} of $d \theta$). Let
\[ G_\theta (X,Y) = (d \theta )(X , J Y), \;\;\; X,Y \in H(M), \]
(the {\em real} Levi form) and consider the semi-Riemannian metric
$g_\theta$ on $M$ given by
\[ g_\theta (X,Y) = G_\theta (X,Y), \;\;\; g_\theta (X,T) = 0,
\;\;\; g_\theta (T,T) = 1, \] for any $X,Y \in H(M)$ (the {\em
Webster metric} of $(M , \theta )$). Let us extend $J$ to an
endomorphism of the tangent bundle by setting $J T = 0$. Then
there is a unique linear connection $\nabla$ on $M$ such that i)
$H(M)$ is parallel with respect to $\nabla$, ii) $\nabla g_\theta
= 0$, $\nabla J = 0$, and iii) the torsion $T_\nabla$ of $\nabla$
is {\em pure}, i.e.
\begin{equation}
\label{e:intro2} T_\nabla (Z,W) = T_\nabla (\overline{Z},
\overline{W}) = 0, \;\;\; T_\nabla (Z, \overline{W}) = 2 i
L_\theta (Z, \overline{W}) T,
\end{equation}
for any $Z,W \in T_{1,0}(M)$, and
\begin{equation}\label{e:intro3}
\tau \circ J + J \circ \tau = 0,
\end{equation}
where $\tau (X) = T_\nabla (T , X)$ for any $X \in T(M)$ (the {\em
pseudohermitian torsion} of $\nabla$). The Tanaka-Webster
connection is a pseudohermitian analog to both the Levi-Civita
connection in Riemannian geometry and the Chern connection in
Hermitian geometry.
\par
A CR manifold $M$ is {\em strictly pseudoconvex} if $L_\theta$ is
positive definite for some $\theta$. If this is the case then the
Webster metric $g_\theta$ is a Riemannian metric on $M$ and if we
set $\varphi = J$, $\xi = - T$, $\eta = - \theta$ and $g =
g_\theta$ then $(\varphi , \xi , \eta , g)$ is a contact metric
structure on $M$. Also $(\varphi , \xi , \eta , g)$ is normal if
and only if $\tau = 0$. If this is the case then $g_\theta$ is a
Sasakian metric and $(M , \theta )$ is a Sasakian manifold.
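A model example, recalled here without proof and with the same sphere normalization used in the discussion of pseudohermitian sectional curvature further below:

```latex
% The sphere \iota : S^{2n+1} \subset \mathbb{C}^{n+1} , with the contact form
\theta_0 = \iota^* \Big[ \tfrac{i}{2} \, \big( \overline{\partial} - \partial \big) |z|^2 \Big] ,
% is strictly pseudoconvex with vanishing pseudohermitian torsion (\tau = 0),
% hence (S^{2n+1}, \theta_0 ) is a Sasakian manifold.
```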
We proceed by recalling a few concepts from {\em sub-Riemannian
geometry} (cf. e.g. R.S. Strichartz, \cite{kn:Str}) on a strictly
pseudoconvex CR manifold. Let $(M , T_{1,0}(M))$ be a strictly
pseudoconvex CR manifold, of CR dimension $n$. Let $\theta$ be a
contact form on $M$ such that the Levi form $G_\theta$ is positive
definite. The Levi distribution $H(M)$ is {\em bracket generating}
i.e. the vector fields which are sections of $H(M)$ together with
all brackets span $T_x (M)$ at each point $x \in M$, merely as a
consequence of the nondegeneracy of the given CR structure.
Indeed, let $\nabla$ be the Tanaka-Webster connection of $(M ,
\theta )$ and let $\{ T_\alpha : 1 \leq \alpha \leq n \}$ be a
local frame of $T_{1,0}(M)$, defined on the open set $U \subseteq
M$. By the purity property (\ref{e:intro2})
\begin{equation}
\Gamma_{\alpha\overline{\beta}}^{\overline{\gamma}}
T_{\overline{\gamma}} - \Gamma_{\overline{\beta}\alpha}^\gamma
T_\gamma - [T_\alpha , T_{\overline{\beta}}] = 2 i
g_{\alpha\overline{\beta}} T, \label{e:29}
\end{equation}
where $\Gamma^A_{BC}$ are the coefficients of $\nabla$ with
respect to $\{ T_\alpha \}$
\[ \nabla_{T_B} T_C = \Gamma^A_{BC} T_A \]
and $g_{\alpha\overline{\beta}} = L_\theta (T_\alpha ,
T_{\overline{\beta}})$. Our conventions as to the range of indices
are $A,B,C, \cdots \in \{ 0, 1, \cdots , n , \overline{1}, \cdots
, \overline{n} \}$ and $\alpha , \beta , \gamma , \cdots \in \{
1, \cdots , n \}$ (where $T_0 = T$). Note that $\{ T_\alpha ,
T_{\overline{\alpha}} , T \}$ is a local frame of $T(M) \otimes
{\mathbb C}$ on $U$. If we write $T_\alpha = X_\alpha - i J X_\alpha$
in terms of real and imaginary parts then (\ref{e:29}) shows
that $\{ X_\alpha , J X_\alpha \}$ together with their brackets
span the whole of $T_x (M)$, for any $x \in U$. Actually more has
been proved. Given $x \in M$ and $v \in H(M)_x \setminus \{ 0 \}$
there is an open neighborhood $U \subseteq M$ of $x$ and a local
frame $\{ T_\alpha \}$ of $T_{1,0}(M)$ on $U$ such that $T_1 (x) =
v - i J_x v$, hence $v$ is a $2$-{\em step bracket generator} so
that $H(M)$ satisfies the {\em strong bracket generating
hypothesis} (cf. the terminology in \cite{kn:Str}, p. 224).
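The lowest dimensional case makes the bracket generating property transparent. The following is a sketch on the Heisenberg group ${\mathbb H}_1 = {\mathbb R}^3$; the factor $2$ in $\theta$ is one of several normalizations in use.

```latex
% Coordinates (x, y, t) on R^3 , contact form and horizontal frame:
\theta = d t + 2 \, ( x \, d y - y \, d x ) , \qquad
X = \partial_x + 2 y \, \partial_t \, , \quad
Y = \partial_y - 2 x \, \partial_t \, ,
% so that Ker(\theta ) = span{ X , Y } and
[ X , Y ] = - 4 \, \partial_t \, ,
% i.e. a single bracket already spans the direction transverse to the Levi distribution.
```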
\par
Let $x \in M$ and let $g(x) : T^*_x (M) \to H(M)_x$ be determined by
\[ G_{\theta , x} (v , g(x) \xi ) = \xi (v), \;\;\; v \in H(M)_x ,
\;\; \xi \in T_x^* (M). \] Note that the kernel of $g$ is
precisely the conormal bundle $H(M)^\bot$. In other words
$G_\theta$ is a {\em sub-Riemannian metric} on $H(M)$ and $g$ its
alternative description (cf. also (2.1) in \cite{kn:Str}, p. 225).
If $\hat{\theta} = e^u \theta$ is another contact form such that
$G_{\hat{\theta}}$ is positive definite ($u \in C^\infty (M)$)
then $\hat{g} = e^{-u} g$. Clearly if the Levi form $L_\theta$ is
only nondegenerate then $(M , H(M), G_\theta )$ is a {\em
sub-Lorentzian manifold}, cf. the terminology in \cite{kn:Str}, p.
224.
\par
Let $\gamma : I \to M$ be a piecewise $C^1$ curve (where $I
\subseteq {\mathbb R}$ is an interval). Then $\gamma$ is a {\em
lengthy curve} if $\dot{\gamma}(t) \in H(M)_{\gamma (t)}$ for
every $t \in I$ such that $\dot{\gamma}(t)$ is defined. For
instance, any geodesic of $\nabla$ (i.e. any $C^1$ curve $\gamma
(t)$ such that $\nabla_{\displaystyle{\dot{\gamma}}} \dot{\gamma}
= 0$) of initial data $(x, v)$, $v \in H(M)_x$, is lengthy (as a
consequence of $\nabla g_\theta = 0$ and $\nabla T = 0$). A
piecewise $C^1$ curve $\xi : I \to T^* (M)$ is a {\em cotangent
lift} of $\gamma$ if $\xi (t) \in T_{\gamma (t)}^* (M)$ and
$g(\gamma (t)) \xi (t) = \dot{\gamma}(t)$ for every $t$ (where
defined). Clearly cotangent lifts of a given lengthy curve
$\gamma$ exist (cf. also Proposition \ref{p:xi0} below). Also,
cotangent lifts of $\gamma$ are uniquely determined modulo
sections of the conormal bundle $H(M)^\bot$ along $\gamma$. That
is, if $\eta : I \to T^* (M)$ is another cotangent lift of
$\gamma$ then $\eta (t) - \xi (t) \in H(M)^\bot_{\gamma (t)}$ for
every $t$. The {\em length} of a lengthy curve $\gamma : I \to M$
is given by
\[ L(\gamma ) = \int_I \{ \xi (t) \left[ g(\gamma (t))
\xi (t) \right]\}^{1/2} \; d t . \] The definition doesn't depend
upon the choice of cotangent lift $\xi$ of $\gamma$. The {\em
Carnot-Carath\'eodory distance} $\rho (x,y)$ between $x, y \in M$ is
the infimum of the lengths of all lengthy curves joining $x$ and
$y$. That $\rho$ is indeed a distance function on $M$ follows from
a theorem of W.L. Chow, \cite{kn:Cho}, according to which any two
points $x , y \in M$ may be joined by a lengthy curve (provided
that $M$ is connected).
\par
Let $g_\theta$ be the Webster metric of $(M , \theta )$. Then
$g_\theta$ is a {\em contraction} of the sub-Riemannian metric
$G_\theta$ ($G_\theta$ is an {\em expansion} of $g_\theta$), cf.
\cite{kn:Str}, p. 230. Let $d$ be the distance function
corresponding to the Webster metric. The length $L(\gamma )$ of a
lengthy curve $\gamma$ is precisely its length with respect to
$g_\theta$ hence
\begin{equation} d(x,y) \leq \rho (x,y), \;\;\; x,y \in M.
\label{e:30}
\end{equation}
While $\rho$ and $d$ are known to be inequivalent distance
functions, they do determine the same topology. For further
details on Carnot-Ca\-ra\-th\'e\-o\-do\-ry metrics see J.
Mitchell, \cite{kn:Mic}.
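On the Heisenberg group ${\mathbb H}_1$ the inequivalence of $\rho$ and $d$ is visible along the center (a sketch; the constant $c > 0$ depends on the chosen normalization). The parabolic dilations $\delta_\lambda (x , y , t) = (\lambda x , \lambda y , \lambda^2 t)$ map lengthy curves to lengthy curves and rescale their length by $\lambda$, hence

```latex
\rho \big( 0 , (0, 0, t) \big) = c \, |t|^{1/2} , \qquad
d \big( 0 , (0, 0, t) \big) \asymp |t| \quad (t \to 0) ,
```

so no estimate of the form $\rho \leq C \, d$ can hold near the center, although (as stated above) the two distances induce the same topology.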
\par
Let $(U, x^1 , \cdots , x^{2n+1})$ be a system of local
coordinates on $M$ and let us set $G_{ij} = g_\theta (\partial_i ,
\partial_j )$ (where $\partial_i$ is short for $\partial /\partial
x^i$) and $[G^{ij}] = [G_{ij}]^{-1}$. Using
\[ G_\theta (X , g \; d x^i ) = (d x^i )(X), \;\;\; X \in H(M), \]
for $X = \partial_k - \theta_k T$ (where $\theta_i = \theta
(\partial_i )$) leads to
\begin{equation}\label{e:31} g^{ij}
(G_{jk} - \theta_j \theta_k ) = \delta^i_k - \theta_k T^i
\end{equation}
where $g \; d x^i = g^{ij} \partial_j$ and $T = T^i \partial_i$.
On the other hand $g^{ij} \theta_j = \theta (g \; d x^i ) = 0$ so
that (\ref{e:31}) yields
\begin{equation}
g^{ij} = G^{ij} - T^i T^j . \label{e:32}
\end{equation}
As an application we introduce a {\em canonical} cotangent lift of
a given lengthy curve on $M$.
\begin{proposition} Let $\gamma : I \to M$ be a lengthy curve and
let $\xi : I \to T^* (M)$ be given by $\xi (t) T_{\gamma (t)} = 1$
and $\xi (t) X = g_\theta (\dot{\gamma} , X)$, for any $X \in
H(M)_{\gamma (t)}$. Then $\xi$ is a cotangent lift of $\gamma$.
\label{p:xi0}
\end{proposition}
\noindent {\em Proof}. Let $x^i (t)$ be the components of $\gamma$
with respect to the chosen local coordinate system. By the very
definition of $\xi$
\begin{equation}
\label{e:xi0} \xi_j = G_{ij} \; \frac{d x^i}{d t} + \theta_j \, .
\end{equation}
Hence
\[ g \; \xi = \xi_j g^{ij} \partial_i = g^{ij} (G_{jk} \, \frac{d
x^k}{d t} + \theta_j ) \partial_i = g^{ij} G_{jk} \, \frac{d
x^k}{d t} \, \partial_i = \]
\[ = (G^{ij} - T^i T^j )G_{jk} \, \frac{d
x^k}{d t} \, \partial_i = (\delta^i_k - T^i \theta_k )\frac{d
x^k}{d t} \, \partial_i = \]
\[ = \dot{\gamma}(t) - \theta (\dot{\gamma}(t)) T =
\dot{\gamma}(t). \] Q.e.d.
\par
We recall (cf. \cite{kn:Str}, p. 233) that a
{\em sub-Riemannian geodesic} is a $C^2$ curve $\gamma (t)$ in $M$
satisfying the Hamilton-Jacobi equations associated to the
Hamiltonian function $H(x, \xi ) = \frac{1}{2} \; g^{ij}(x) \xi_i
\xi_j$ that is
\begin{equation}
\frac{d x^i}{d t} = g^{ij}(\gamma (t)) \xi_j (t), \label{e:33}
\end{equation}
\begin{equation}
\frac{d \xi_k}{d t} = - \frac{1}{2} \; \frac{\partial
g^{ij}}{\partial x^k}(\gamma (t)) \xi_i (t) \xi_j (t),
\label{e:34}
\end{equation}
for some cotangent lift $\xi (t) \in T^* (M)$ of $\gamma (t)$. Our
purpose is to show that
\begin{theorem}
Let $M$ be a strictly pseudoconvex CR manifold and
$\theta$ a contact form on $M$ such that $G_\theta$ is positive
definite. A $C^2$ curve $\gamma (t) \in M$, $|t| < \epsilon$, is a
sub-Riemannian geodesic of $(M , H(M), G_\theta )$ if and only if
$\gamma (t)$ is a solution to
\begin{equation}
\nabla_{\displaystyle{\dot{\gamma}}} \dot{\gamma} = - 2 b(t) J
\dot{\gamma}, \;\;\; b^\prime (t) = A (\dot{\gamma},
\dot{\gamma}), \;\;\; |t| < \epsilon , \label{e:35}
\end{equation}
with $\dot{\gamma}(0) \in H(M)_{\gamma (0)}$, for some $C^2$
function $b : (-\epsilon , \epsilon ) \to {\mathbb R}$. Here $A
(X,Y) = g_\theta (\tau X , Y)$ is the pseudohermitian torsion of
$(M , \theta )$.
\label{t:2}
\end{theorem}
According to the terminology in \cite{kn:Str}, p. 237, the
canonical cotangent lift $\xi (t)$ of a given lengthy curve
$\gamma (t)$ is the one determined by the orthogonality
requirement
\begin{equation}
V_j (\xi ) \Gamma^j (\xi , v) = 0, \label{e:36}
\end{equation}
for any $v \in H(M)^\bot_{\gamma (t)}$ and any $|t| < \epsilon$,
where
\[ V_k (\xi ) = \frac{d \xi_k}{d t} + \frac{1}{2} \frac{\partial
g^{ij}}{\partial x^k} \xi_i \xi_j \, , \]
\[ \Gamma^i (\xi , v) = \Gamma^{ijk} \xi_j v_k \, , \;\;\;
\Gamma^{ijk} = \frac{1}{2} ( g^{\ell j} \frac{\partial
g^{ik}}{\partial x^\ell} + g^{\ell k} \frac{\partial
g^{ij}}{\partial x^\ell} - g^{\ell i} \frac{\partial
g^{jk}}{\partial x^\ell} ). \] Let $\gamma (t)$ be a lengthy curve
and $\xi_0 (t)$ the cotangent lift of $\gamma (t)$ furnished by
Proposition \ref{p:xi0}. Then any other cotangent lift $\xi (t)$
is given by
\begin{equation}
\xi (t) = \xi_0 (t) + a(t) \, \theta_{\gamma (t)} \, , \;\;\; |t|
< \epsilon , \label{e:37}
\end{equation}
for some $a : (- \epsilon , \epsilon ) \to {\mathbb R}$. We shall
need the following result (a replica of Lemma 4.4 in
\cite{kn:Str}, p. 237)
\begin{lemma}
The unique cotangent lift $\xi (t)$ of $\gamma (t)$ satisfying the
orthogonality condition {\rm (\ref{e:36})} is given by {\em
(\ref{e:37})} where
\[ a(t) = - \frac{1}{2} |\dot{\gamma}(t)|^{-2} g_\theta
(\nabla_{\displaystyle{\dot{\gamma}}} \dot{\gamma} \, , \, J
\dot{\gamma}) - 1, \;\;\; |t| < \epsilon . \] \label{l:2}
\end{lemma}
\noindent {\em Proof}. By (\ref{e:xi0}) and (\ref{e:37})
\[ V_k (\xi ) = V_k (\xi_0 ) + a^\prime (t) \theta_k + a(t)
\frac{\partial \theta_k}{\partial x^\ell} \frac{d x^\ell}{d t} +
\] \[ + \frac{1}{2} \frac{\partial g^{ij}}{\partial x^k} [ a(t)
(\xi^0_i \theta_j + \xi^0_j \theta_i ) + a(t)^2 \theta_i \theta_j
] \] (where $\xi_0 = \xi^0_i \, d x^i$) and using
\[ \frac{\partial g^{ij}}{\partial x^k} \, \theta_i \theta_j = 0
\]
we obtain
\begin{equation}
V_i (\xi ) = V_i (\xi_0 ) + a^\prime (t) \theta_i + 2 a (t) (d
\theta )(\dot{\gamma} , \partial_i ). \label{e:39}
\end{equation}
Note that $\Gamma^i (\xi , v) = \Gamma^i (\xi_0 , v)$ and
$\Gamma^{ijk} \theta_j v_k = 0$, for any $v \in H(M)^\bot_{\gamma
(t)}$. Let us contract (\ref{e:39}) with $\Gamma^i (\xi , v)$ and
use (\ref{e:36}) and $\Gamma^i (\xi_0 , v) \theta_i = 0$. This
ought to determine $a(t)$. Indeed
\begin{equation}
\label{e:40} V_i (\xi_0 ) \Gamma^i (\xi_0 , v) + 2a(t) (d \theta
)(\dot{\gamma} , \Gamma (\xi_0 , v)) = 0,
\end{equation}
where $\Gamma (\xi , v) = \Gamma^i (\xi , v) \partial_i$. On the
other hand, a calculation based on (\ref{e:32})-(\ref{e:xi0})
shows that
\[ V_k (\xi_0 ) = G_{k\ell} (\frac{d^2 x^\ell}{d t^2} + \left|
\begin{array}{c} \ell \\ ij \end{array} \right| \frac{d x^i}{d t}
\frac{dx^j}{d t} ) + 2 (d \theta )(\dot{\gamma} , \partial_k ), \]
\[ \left|
\begin{array}{c} \ell \\ ij \end{array} \right| = G^{\ell k}
|ij,k| , \;\;\; |ij,k| = \frac{1}{2} (\frac{\partial
G_{ik}}{\partial x^j} + \frac{\partial G_{jk}}{\partial x^i} -
\frac{\partial G_{ij}}{\partial x^k} ), \] hence
\begin{equation}
V_i (\xi_0 ) = G_{ij} (D_{\displaystyle{\dot{\gamma}}}
\dot{\gamma} )^j + 2 (d \theta )(\dot{\gamma} , \partial_i ) ,
\label{e:41}
\end{equation}
where $D$ is the Levi-Civita connection of $(M , g_\theta )$. Then
(\ref{e:40})-(\ref{e:41}) yield
\[ g_\theta (D_{\displaystyle{\dot{\gamma}}} \dot{\gamma} \, , \,
\Gamma (\xi_0 , v)) + 2 (a(t) +1) (d \theta )(\dot{\gamma} \, , \,
\Gamma (\xi_0 , v)) = 0, \] for any $v \in H(M)^\bot_{\gamma
(t)}$. Yet $H(M)^\bot$ is the span of $\theta$ hence
\[ g_\theta (\Gamma (\xi_0 , \theta ) \, , \,
D_{\displaystyle{\dot{\gamma}}} \dot{\gamma} + 2(a(t)+1) J
\dot{\gamma}) = 0 \] and
\[ \Gamma^i (\xi_0 , \theta ) = - G^{ij} (d \theta
)(\dot{\gamma} \, , \, \partial_j ), \] (because of $T \, \rfloor
\, d \theta = 0$) yields
\begin{equation}
\label{e:42} 2(a(t) + 1) |\dot{\gamma}(t)|^2 + g_\theta
(D_{\displaystyle{\dot{\gamma}}} \dot{\gamma} \, , \, J
\dot{\gamma}) = 0.
\end{equation}
Lemma \ref{l:2} is proved. At this point we may prove Theorem
\ref{t:2}. Let $\gamma (t) \in M$ be a sub-Riemannian geodesic of
$(M , H(M), G_\theta )$. Then there is a cotangent lift $\xi (t)
\in T^* (M)$ of $\gamma (t)$ (given by (\ref{e:37}) for some $a :
(-\epsilon , \epsilon ) \to {\mathbb R}$) such that $V (\xi ) = 0$
(where $V(\xi ) = V^i (\xi )
\partial_i$). In particular the orthogonality condition
(\ref{e:36}) is identically satisfied, hence $a(t)$ is determined
according to Lemma \ref{l:2}. Using (\ref{e:39}) and (\ref{e:41})
the sub-Riemannian geodesics equations are
\[ G_{ij} (D_{\displaystyle{\dot{\gamma}}} \dot{\gamma} )^j +
a^\prime (t) \theta_i + 2 (a(t)+1) (d \theta )(\dot{\gamma} \, ,
\, \partial_i ) = 0 \] or
\begin{equation}
D_{\displaystyle{\dot{\gamma}}} \dot{\gamma} + a^\prime (t) T +
2(a(t) + 1) J \dot{\gamma} = 0. \label{e:43}
\end{equation}
We recall (cf. e.g. \cite{kn:DrTo}) that $D = \nabla - (d \theta +
A) \otimes T$ on $H(M) \otimes H(M)$ hence (by the uniqueness of
the direct sum decomposition $T(M) = H(M) \oplus {\mathbb R} T$)
the equations (\ref{e:43}) become
\[ \nabla_{\displaystyle{\dot{\gamma}}} \dot{\gamma} + 2 (a(t)+1) J
\dot{\gamma} = 0, \;\;\; a^\prime (t) = A(\dot{\gamma} ,
\dot{\gamma}), \] (and we set $b = a + 1$). Theorem \ref{t:2} is
proved.
\begin{corollary} Let $M$ be a strictly pseudoconvex CR manifold
and $\theta$ a contact form on $M$ with vanishing pseudohermitian
torsion $(\tau = 0)$. Then any lengthy geodesic of the
Tanaka-Webster connection $\nabla$ of $(M , \theta )$ is a
sub-Riemannian geodesic of $(M , H(M), G_\theta )$. Vice versa, if
every lengthy geodesic $\gamma (t)$ of $\nabla$ is a
sub-Riemannian geodesic then $\tau = 0$.
\label{c:relation}
\end{corollary}
\noindent Indeed, if $\nabla_{\displaystyle{\dot{\gamma}}}
\dot{\gamma} = 0$ then the equations (\ref{e:35}) (with $b = 0$)
are identically satisfied.
\begin{proposition} Let $\gamma (t) \in M$ be a sub-Riemannian
geodesic and $s = \phi (t)$ a $C^2$ diffeomorphism. If $\gamma (t)
= \overline{\gamma}(\phi (t))$ then $\overline{\gamma}(s)$ is a
sub-Riemannian geodesic if and only if $\phi$ is affine, i.e.
$\phi (t) = \alpha t + \beta$, for some $\alpha , \beta \in
{\mathbb R}$. In particular, every sub-Riemannian geodesic may be
reparametrized by arc length $\phi (t) = \int_0^t
|\dot{\gamma}(u)| \, d u$. \label{p:2}
\end{proposition}
\noindent {\em Proof}. Set $k = |\dot{\gamma}(0)|^2 > 0$. By
taking the inner product of the first equation in (\ref{e:35}) by
$\dot{\gamma}(t)$ it follows that $d |\dot{\gamma}(t)|^2 /d t =
0$, hence $|\dot{\gamma}(t)|^2 = k$, $|t| < \epsilon$. Throughout
the proof an overbar indicates the similar quantities associated
to $\overline{\gamma}(s)$. In particular $\overline{k} =
\phi^\prime (0)^{-2} k$. Locally
\begin{equation}
\frac{d^2 x^i}{d t^2} + \Gamma^i_{jk} \frac{d x^j}{dt} \frac{d
x^k}{dt} = - 2(a+1) J^i_j \, \frac{d x^j}{d t} . \label{e:44}
\end{equation}
On the other hand, using (\ref{e:42}) and
\[ \frac{d^2 x^i}{d t^2} + \Gamma^i_{jk} \frac{d x^j}{d t} \frac{d
x^k}{d t} = \phi^{\prime\prime}(t) \frac{d^2 \overline{x}^i}{d
s^2} + \phi^\prime (t)^2 (\frac{d^2 \overline{x}^i}{d s^2} +
\Gamma^i_{jk} \frac{d \overline{x}^j}{d s} \frac{d
\overline{x}^k}{d s} ) \] we obtain
\[ k(a+1) = \overline{k} (\overline{a} + 1) \phi^\prime (t)^3 . \]
Then (\ref{e:44}) may be written
\[ k \phi^{\prime\prime}(t) \frac{d \overline{\gamma}}{d s} + 2
(\overline{a} + 1) \phi^\prime (t)^2 [\overline{k} \phi^\prime
(t)^2 - k ] J \, \frac{d \overline{\gamma}}{d s} = 0 \] hence
$\phi^{\prime\prime}(t) = 0$. Proposition \ref{p:2} is proved.
\par
Let $S^1 \to C(M) \stackrel{\pi}{\longrightarrow} M$ be the
canonical circle bundle over $M$ (cf. e.g. \cite{kn:Dra}, p. 104).
Let $\Sigma$ be the tangent to the $S^1$-action. Next, let us
consider the $1$-form $\sigma$ on $C(M)$ given by
\[ \sigma = \frac{1}{n+2} \{ d r + \pi^* (i
\omega^\alpha_\alpha - \frac{i}{2} \, g^{\alpha\overline{\beta}} d
g_{\alpha\overline{\beta}} - \frac{R}{4(n+1)} \, \theta ) \} , \]
where $r$ is a local fibre coordinate on $C(M)$ (so that locally
$\Sigma = \partial /\partial r$) and $R =
g^{\alpha\overline{\beta}} R_{\alpha\overline{\beta}}$ is the
pseudohermitian scalar curvature of $(M , \theta )$. Then $\sigma$
is a connection $1$-form in $S^1 \to C(M) \to M$. Given a tangent
vector $v \in T_x (M)$ and a point $z \in \pi^{-1}(x)$ we denote
by $v^\uparrow$ its horizontal lift with respect to $\sigma$, i.e.
the unique tangent vector $v^\uparrow \in {\rm Ker}(\sigma_z )$
such that $(d_z \pi )v^\uparrow = v$. The {\em Fefferman metric}
of $(M , \theta )$ is the Lorentz metric on $C(M)$ given by
\[ F_\theta = \pi^* \tilde{G}_\theta + 2 (\pi^* \theta ) \odot
\sigma , \] where $\tilde{G}_\theta = G_\theta$ on $H(M) \otimes
H(M)$ and $\tilde{G}_\theta (X, T) = 0$, for any $X \in T(M)$.
Also $\odot$ is the symmetric tensor product. We close this
section by demonstrating the following geometric interpretation of
sub-Riemannian geodesics (of a strictly pseudoconvex CR manifold).
\begin{theorem}
Let $M$ be a strictly pseudoconvex CR manifold, $\theta$ a contact
form on $M$ such that $G_\theta$ is positive definite, and
$F_\theta$ the Fefferman metric of $(M , \theta )$. For any
geodesic $z : (-\epsilon , \epsilon ) \to C(M)$ of $F_\theta$ if
the projection $\gamma (t) = \pi (z(t))$ is lengthy then $\gamma :
(- \epsilon , \epsilon ) \to M$ is a sub-Riemannian geodesic of
$(M , H(M), G_\theta )$. Vice versa, let $\gamma (t) \in M$ be a
sub-Riemannian geodesic. Then any solution $z(t) \in C(M)$ to the
ODE
\begin{equation}
\dot{z}(t) = \dot{\gamma}(t)^\uparrow + ((n+2)/2) b(t)
\Sigma_{z(t)}, \label{e:45}
\end{equation}
where $b(t) = a(t) + 1$ is given by {\rm (\ref{e:42})}, is a
geodesic of $F_\theta$. \label{t:3}
\end{theorem}
\noindent Here $\dot{\gamma}(t)^\uparrow \in {\rm
Ker}(\sigma_{z(t)})$ and $(d_{z(t)} \pi ) \dot{\gamma}(t)^\uparrow
= \dot{\gamma}(t)$. To prove Theorem \ref{t:3} we shall need the
following
\begin{lemma} For any $X,Y \in H(M)$
\[ \nabla^{C(M)}_{X^\uparrow} Y^\uparrow = (\nabla_X Y)^\uparrow -
(d \theta )(X,Y) T^\uparrow - (A(X,Y) + (d \sigma )(X^\uparrow ,
Y^\uparrow )) \hat{\Sigma}, \]
\[ \nabla^{C(M)}_{X^\uparrow} T^\uparrow = (\tau X + \phi
X)^\uparrow , \]
\[ \nabla^{C(M)}_{T^\uparrow} X^\uparrow = (\nabla_T X + \phi
X)^\uparrow + 2(d \sigma )(X^\uparrow , T^\uparrow ) \hat{\Sigma},
\]
\[ \nabla^{C(M)}_{X^\uparrow} \hat{\Sigma} = \nabla^{C(M)}_{\hat{\Sigma}} X^\uparrow = (J
X)^\uparrow , \]
\[ \nabla^{C(M)}_{T^\uparrow} T^\uparrow = V^\uparrow , \;\;
\nabla^{C(M)}_{\hat{\Sigma}} \hat{\Sigma} = 0, \]
\[ \nabla^{C(M)}_{\hat{\Sigma}} T^\uparrow = \nabla^{C(M)}_{T^\uparrow} \hat{\Sigma} = 0,
\]
where $\phi : H(M) \to H(M)$ is given by $G_\theta (\phi X , Y) =
(d \sigma )(X^\uparrow , Y^\uparrow )$, and $V \in H(M)$ is given
by $G_\theta (V , Y) = 2 (d \sigma )(T^\uparrow , Y^\uparrow )$.
Also $\hat{\Sigma} = ((n+2)/2) \Sigma$. \label{l:3}
\end{lemma}
\noindent This relates the Levi-Civita connection $\nabla^{C(M)}$
of $F_\theta$ to the Tanaka-Webster connection of $(M, \theta )$.
Cf. \cite{kn:Dra2} for a proof of Lemma \ref{l:3}.
\par
{\em Proof of Theorem} \ref{t:3}. Let $z(t) \in C(M)$ be a
geodesic of $\nabla^{C(M)}$ and $\gamma (t) = \pi (z(t))$. Assume
that $\dot{\gamma}(t) \in H(M)_{\gamma (t)}$. Note that
$\dot{z}(t) - \dot{\gamma}(t)^\uparrow \in {\rm Ker}(d_{z(t)} \pi
)$ hence $\dot{z}(t)$ is given by (\ref{e:45}), for some $b : (-
\epsilon , \epsilon ) \to {\mathbb R}$. Then (by Lemma \ref{l:3})
\[ 0 = \nabla^{C(M)}_{\displaystyle{\dot{z}}} \dot{z} =
\nabla^{C(M)}_{\displaystyle{\dot{\gamma}^\uparrow}}
\dot{\gamma}^\uparrow + b^\prime (t) \hat{\Sigma} + 2 b(t) (J
\dot{\gamma})^\uparrow = \]
\[ = (\nabla_{\displaystyle{\dot{\gamma}}} \dot{\gamma} )^\uparrow
+ [b^\prime (t) - A(\dot{\gamma} \, , \, \dot{\gamma})]
\hat{\Sigma} + 2 b(t) (J \dot{\gamma})^\uparrow \] hence (by
$T(C(M)) = {\rm Ker}(\sigma ) \oplus {\mathbb R} \Sigma$) $\gamma
(t)$ satisfies the equations (\ref{e:35}), i.e. $\gamma (t)$ is a
sub-Riemannian geodesic. The converse is obvious.
\section{Jacobi fields on CR manifolds}
Let $M$ be a strictly pseudoconvex CR manifold endowed with a
contact form $\theta$ such that $G_\theta$ is positive definite.
Let $\nabla$ be the Tanaka-Webster connection of $(M , \theta)$.
Let $\gamma (t) \in M$ be a geodesic of $\nabla$, parametrized by
arc length. A {\em Jacobi field} along $\gamma$ is a vector field
$X$ along $\gamma$ satisfying the second order ODE \begin{equation}
\nabla^2_{\dot{\gamma}} X + \nabla_{\dot{\gamma}} T_\nabla (X ,
\dot{\gamma}) + R(X , \dot{\gamma}) \dot{\gamma} = 0. \label{e:J1}
\end{equation}
Let $J_\gamma$ be the real linear space of all Jacobi fields of
$(M , \nabla )$. Then $J_\gamma$ is $(4n+2)$-dimensional (cf.
Prop. 1.1 in \cite{kn:KoNo}, Vol. II, p. 63). We denote by
$\hat{\gamma}$ the vector field along $\gamma$ defined by
$\hat{\gamma}_{\gamma (t)} = t \dot{\gamma}(t)$ for every value of
the parameter $t$. Note that $\dot{\gamma}$, $\hat{\gamma} \in
J_\gamma$. We establish
\begin{theorem} Every
Jacobi field $X$ along a lengthy geodesic $\gamma$ of $\nabla$ can
be uniquely decomposed in the following form
\begin{equation}
X = a \dot{\gamma} + b \hat{\gamma} + Y \label{e:J2}
\end{equation}
where $a , b \in {\mathbb R}$ and $Y$ is a Jacobi field along
$\gamma$ such that
\begin{equation}
g_\theta (Y, \dot{\gamma})_{\gamma (t)} = - \int^t_0 \theta
(X)_{\gamma (s)} A(\dot{\gamma} , \dot{\gamma})_{\gamma (s)} d s .
\label{e:J3}
\end{equation}
In particular, if {\rm i)} $X_{\gamma (t)} \in H(M)_{\gamma (t)}$
for every $t$, or {\rm ii)} $(M , \theta )$ is a Sasakian manifold
{\rm (}i.e. $\tau = 0${\rm )}, then $Y$ is perpendicular to
$\gamma$. \label{t:J1}
\end{theorem} \noindent We need the following
\begin{lemma}
For any Jacobi field $X \in J_\gamma$ \[ \frac{d}{d t} \{ g_\theta
(X , \dot{\gamma}) \} + \theta (X)_{\gamma (t)} A(\dot{\gamma} ,
\dot{\gamma})_{\gamma (t)} = {\rm const.} \] \label{l:J1}
\end{lemma}
\noindent {\em Proof}. Let us take the inner product of the Jacobi
equation (\ref{e:J1}) by $\dot{\gamma}$ and use the skew symmetry
of $g_\theta (R(X,Y)Z , W)$ in the arguments $(Z, W)$ (a
consequence of $\nabla g_\theta = 0$) to obtain
\[ \frac{d^2}{d t^2} \{ g_\theta (X, \dot{\gamma})\} + \frac{d}{d
t} \{ g_\theta (T_\nabla (X , \dot{\gamma}) , \dot{\gamma}) \} = 0
. \] On the other hand, let us set $X_H = X - \theta (X)T$ (so
that $X_H \in H(M)$). Then
\[ g_\theta (T_\nabla (X , \dot{\gamma}), \dot{\gamma}) = - 2 \Omega (X_H ,
\dot{\gamma}) g_\theta (T , \dot{\gamma}) + \theta (X) g_\theta
(\tau (\dot{\gamma}) , \dot{\gamma}) \] or (as $\gamma$ is
lengthy)
\[ g_\theta (T_\nabla (X , \dot{\gamma}), \dot{\gamma}) = \theta (X)
A(\dot{\gamma} , \dot{\gamma}). \] Lemma \ref{l:J1} is proved.
Throughout the section we adopt the notation $X^\prime =
\nabla_{\dot{\gamma}} X$ and $X^{\prime\prime} =
\nabla^2_{\dot{\gamma}} X$.
\par
{\em Proof of Theorem} \ref{t:J1}. We set by definition
\[ a = g_\theta (X , \dot{\gamma})_{\gamma (0)} \, , \;\;\;
b = g_\theta (X^\prime , \dot{\gamma})_{\gamma (0)} + \theta
(X)_{\gamma (0)} A(\dot{\gamma} , \dot{\gamma})_{\gamma (0)} \, ,
\] and $Y = X - a \dot{\gamma} - b \hat{\gamma}$. Clearly $Y \in
J_\gamma$. Then, by Lemma \ref{l:J1}
\[ \frac{d}{d t} \{ g_\theta (Y , \dot{\gamma}) \} + \theta (Y)
A(\dot{\gamma} , \dot{\gamma}) = \alpha , \] for some $\alpha \in
{\mathbb R}$. Next we integrate from $0$ to $t$
\[ g_\theta (Y , \dot{\gamma})_{\gamma (t)} - g_\theta (Y ,
\dot{\gamma})_{\gamma (0)} + \int_0^t \theta (Y)_{\gamma (s)}
A(\dot{\gamma} , \dot{\gamma})_{\gamma (s)} d s = \alpha t \] and
substitute $Y$ from (\ref{e:J2}) (and use $\dot{\gamma}$,
$\hat{\gamma} \in H(M)$). Differentiating the resulting relation
with respect to $t$ at $t = 0$ gives $\alpha = 0$. Hence
\[ g_\theta (Y , \dot{\gamma}) + \int_0^t \theta (X)_{\gamma (s)}
A(\dot{\gamma} , \dot{\gamma})_{\gamma (s)} d s = 0. \] The
existence statement in Theorem \ref{t:J1} is proved. We need the
following terminology. Given $X \in J_\gamma$ a Jacobi field $Y
\in J_\gamma$ satisfying (\ref{e:J3}) is said to be {\em slant at
$\gamma (t)$ relative to $X$}. Also $Y$ is {\em slant} if it is
slant at every point of $\gamma$. To check the uniqueness statement let $X
= a^\prime \dot{\gamma} + b^\prime \hat{\gamma} + Z$ be another
decomposition of $X$, where $a^\prime$, $b^\prime \in {\mathbb R}$
and $Z \in J_\gamma$ is slant (relative to $X$). Then
\[ (a + b t) \dot{\gamma}(t) + Y_{\gamma (t)} = (a^\prime +
b^\prime t)\dot{\gamma}(t) + Z_{\gamma (t)} \] and taking the
inner product with $\dot{\gamma}(t)$ yields $a + b t = a^\prime +
b^\prime t$, i.e. $a = a^\prime$, $b = b^\prime$ and $Y_{\gamma
(t)} = Z_{\gamma (t)}$. Q.e.d.
\begin{corollary} Suppose a Jacobi field $X \in J_\gamma$ is slant
at $\gamma (r)$ and at $\gamma (s)$ relative to itself, for some
$r \neq s$. Then $X$ is slant. In particular, if {\rm i)}
$X_{\gamma (t)} \in H(M)_{\gamma (t)}$ for every $t$, or {\rm ii)}
$(M , \theta )$ is a Sasakian manifold, and $X$ is perpendicular
to $\gamma$ at two points, it is perpendicular to $\gamma$ at
every point of $\gamma$. \label{c:ppdue}
\end{corollary}
\noindent {\em Proof}. By Theorem \ref{t:J1} we may decompose $X =
a \dot{\gamma} + b \hat{\gamma} + Y$, where $Y \in J_\gamma$ is
slant (relative to $X$). Taking the inner product of $X_{\gamma
(r)} = (a + b r) \dot{\gamma}(r) + Y_{\gamma (r)}$ with
$\dot{\gamma}(r)$ gives $a + b r = 0$. Similarly $a + b s = 0$
hence (as $r \neq s$) $a = b = 0$, so that $X = Y$. Q.e.d.
\section{CR manifolds without conjugate points}
Two points $x$ and $y$ on a lengthy geodesic $\gamma (t)$ are {\em
horizontally conjugate} if there is a Jacobi field $X \in
J_\gamma$ such that $X_{\gamma (t)} \in H(M)_{\gamma (t)}$ for
every $t$ and $X_x = X_y = 0$. As $T_\nabla$ is pure, the Jacobi
equation (\ref{e:J1}) may also be written
\begin{equation}
\label{e:J4} X^{\prime\prime} - 2 \Omega (X^\prime , \dot{\gamma})
T + \theta (X^\prime ) \tau (\dot{\gamma}) + \theta (X)
(\nabla_{\dot{\gamma}} \tau ) \dot{\gamma} + R(X , \dot{\gamma})
\dot{\gamma} = 0.
\end{equation}
Given $X \in J_\gamma$ one has (by (\ref{e:J4}))
\[ \frac{d}{d t} \{ g_\theta (X^\prime , X) \} = g_\theta
(X^{\prime\prime} , X) + g_\theta (X^\prime , X^\prime ) = \]
\[ = |X^\prime |^2 + 2 \theta (X) \Omega (X^\prime , \dot{\gamma})
- \theta (X^\prime ) A(\dot{\gamma} , X) - \] \[ - \theta (X)
g_\theta (\nabla_{\dot{\gamma}} \tau \dot{\gamma} , X) - g_\theta
(R(X, \dot{\gamma}) \dot{\gamma} , X). \] On the other hand (again
by $\nabla g_\theta = 0$)
\[ \theta (X^\prime ) A(\dot{\gamma} , X) +
\theta (X) g_\theta (\nabla_{\dot{\gamma}} \tau \dot{\gamma} , X)
= \]
\[ = \theta (X^\prime ) A(\dot{\gamma} , X) + \theta (X)
\frac{d}{d t} \{ A(\dot{\gamma} , X)\} - \theta (X) A(\dot{\gamma}
, X^\prime ) = \]
\[ = \frac{d}{d t} \{ \theta (X) A(\dot{\gamma} , X) \} - \theta
(X) A(\dot{\gamma} , X^\prime ) \] hence
\begin{equation}
\frac{d}{d t} \{ g_\theta (X^\prime , X) + \theta (X)
A(\dot{\gamma} , X) \} = \label{e:J5}
\end{equation}
\[ = |X^\prime |^2 - g_\theta (R(X,
\dot{\gamma}) \dot{\gamma} , X) + \theta (X) [A(\dot{\gamma} ,
X^\prime ) + 2 \Omega (X^\prime , \dot{\gamma})]. \] S. Webster
(cf. \cite{kn:Web}) has introduced a notion of pseudohermitian
sectional curvature by setting
\begin{equation} k_\theta (\sigma ) = \frac{1}{4} \; G_\theta (X,X)^{-2}
\; g_{\theta , x} ( R_x (X , J_x X) J_x X , X ) , \label{e:res}
\end{equation}
for any holomorphic $2$-plane $\sigma$ (i.e. a $2$-plane $\sigma
\subset H(M)_x$ such that $J_x (\sigma ) = \sigma$), where $\{ X ,
J_x X \}$ is a basis of $\sigma$. The coefficient $1/4$ makes the
sphere $\iota : S^{2n+1} \subset {\mathbb C}^{n+1}$ (endowed with
the contact form $\theta_0 = \iota^* [\frac{i}{2}
(\overline{\partial} - \partial ) |z|^2 ]$) have constant
curvature $+1$. Clearly, this is a pseudohermitian analog to the
notion of holomorphic sectional curvature in Hermitian geometry.
On the other hand, for any $2$-plane $\sigma \subset T_x (M)$ one
may set
\[ k_\theta (\sigma ) = \frac{1}{4} \; g_{\theta , x} (R_x (X,Y)Y,
X) \] where $\{ X , Y \}$ is a $g_{\theta , x}$-orthonormal basis
of $\sigma$. As in \cite{kn:KoNo}, Vol. I, p. 200, the definition of
$k_\theta (\sigma )$ does not depend upon the choice of orthonormal
basis in $\sigma$ because the curvature $R(X,Y,Z,W) = g_\theta
(R(Z,W)Y, X)$ of the Tanaka-Webster connection is skew symmetric
in both pairs $(X,Y)$ and $(Z,W)$. We refer to $k_\theta$ as the
{\em pseudohermitian sectional curvature} of $(M , \theta )$. {\em
A posteriori} the restriction (\ref{e:res}) of $k_\theta$ to
holomorphic $2$-planes is referred to as the {\em holomorphic
pseudohermitian sectional curvature} of $(M , \theta )$. As an
application of (\ref{e:J5}) we may establish
\begin{theorem}
If $(M , \theta )$ has nonpositive pseudohermitian sectional
curvature then $(M , \theta )$ has no horizontally conjugate
points. \label{t:J2}
\end{theorem} \noindent We need
\begin{lemma} For every Jacobi field $X \in J_\gamma$
\[ \frac{d}{d t} \{ \theta (X) \} - 2 \Omega (X ,
\dot{\gamma})_{\gamma (t)} = c = {\rm const.} \] \label{l:J2}
\end{lemma} \noindent
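The computation behind Lemma \ref{l:J2} is the $g_\theta$-product of (\ref{e:J4}) with $T$; a sketch follows (it uses $\nabla \theta = 0$, $\nabla \Omega = 0$, $\nabla T = 0$, and the fact that $\tau (\dot{\gamma})$, $(\nabla_{\dot{\gamma}} \tau )\dot{\gamma}$ and $R(X , \dot{\gamma})\dot{\gamma}$ take values in $H(M)$, the latter because $\nabla$ parallelizes both $T$ and $H(M)$):

```latex
% theta-component of the Jacobi equation (e:J4):
\[ \theta (X^{\prime\prime}) - 2 \Omega (X^\prime , \dot{\gamma})
+ \theta (X^\prime ) \, \theta (\tau (\dot{\gamma}))
+ \theta (X) \, \theta ((\nabla_{\dot{\gamma}} \tau ) \dot{\gamma})
+ \theta (R(X , \dot{\gamma}) \dot{\gamma}) = 0 . \]
% The last three terms vanish (H(M)-valued arguments), while
% theta(X'') = d^2 theta(X)/dt^2 and Omega(X', gammadot) =
% (d/dt) Omega(X, gammadot) along the geodesic, so
\[ \frac{d}{d t} \Big\{ \frac{d}{d t} \{ \theta (X) \} - 2 \Omega
(X , \dot{\gamma}) \Big\} = 0 , \]
% which integrates to the constant c in Lemma l:J2.
```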
To prove Lemma \ref{l:J2} one merely takes the inner product of
(\ref{e:J4}) with $T$. \par {\em Proof of Theorem} \ref{t:J2}. The
proof is by contradiction. If there is a lengthy geodesic $\gamma
(t) \in M$ (parametrized by arc length) and a nonzero Jacobi field
$X \in J_\gamma$ such that $X_{\gamma (a)} = X_{\gamma (b)} = 0$
for two values $a$ and $b$ of the parameter then we may integrate
(\ref{e:J5}) (the boundary terms vanish, as $X_{\gamma (a)} =
X_{\gamma (b)} = 0$) to obtain
\begin{equation}
\label{e:J6} \int_a^b \{ |X^\prime |^2 - g_\theta (R(X ,
\dot{\gamma}) \dot{\gamma} , X) + \theta (X) [A(\dot{\gamma} ,
X^\prime ) + 2 \Omega (X^\prime , \dot{\gamma})]\} dt = 0.
\end{equation}
On the other hand \[ \theta (X) \Omega (X^\prime , \dot{\gamma}) =
\theta (X) \frac{d}{d t} \{ \Omega (X , \dot{\gamma})\} = \] \[ =
\frac{d}{d t} \{ \theta (X) \Omega (X , \dot{\gamma}) \} - \Omega
(X , \dot{\gamma}) \theta (X^\prime ). \] Then (by Lemma
\ref{l:J2})
\[ 2 \int^b_a \theta (X) \Omega (X^\prime , \dot{\gamma}) d t = - 2 \int_a^b
\Omega (X , \dot{\gamma}) \frac{d}{d t} \{ \theta (X) \} d t = \]
\[ = c \int_a^b \frac{d}{d t} \{ \theta (X) \} d t - \int_a^b
\theta (X^\prime )^2 d t = - \int_a^b \theta (X^\prime )^2 d t \]
hence (\ref{e:J6}) becomes
\[ \int_a^b \{ |X^\prime |^2 - g_\theta (R(X , \dot{\gamma})
\dot{\gamma} , X) + \theta (X) A(\dot{\gamma}, X^\prime ) - \theta
(X^\prime )^2 \} d t = 0. \] Finally, if $X \in H(M)$ then
$\theta (X) = 0$, hence $\theta (X^\prime ) = \frac{d}{d t} \{
\theta (X) \} = 0$ and $X^\prime \in H(M)$, so the integrand
reduces to $|X^\prime |^2 - g_\theta (R(X , \dot{\gamma})
\dot{\gamma} , X) \geq 0$ (under the assumptions of Theorem
\ref{t:J2}). Thus $X^\prime = 0$, i.e. $X$ is parallel along
$\gamma$; as $X_{\gamma (a)} = 0$ it follows that $X = 0$, a
contradiction. Q.e.d.
\section{Jacobi fields on CR manifolds of constant
pseudohermitian sectional curvature} As is well known (cf. Example
2.1 in \cite{kn:KoNo}, Vol. II, p. 71) one may determine a basis
of $J_\gamma$ for any elliptic space form (a Riemannian manifold
of positive constant sectional curvature). Similarly, we shall
prove
\begin{proposition} Let $M$ be a strictly pseudoconvex CR
manifold of CR dimension $n$, $\theta$ a contact form with
$G_\theta$ positive definite and constant pseudohermitian
sectional curvature. Let $\gamma (t) \in M$ be a lengthy geodesic
of the Tanaka-Webster connection $\nabla$ of $(M , \theta )$,
parametrized by arc length. For each $v \in T_{\gamma (0)} (M)$ we
let $E(v)$ be the space of all vector fields $X$ along $\gamma$
defined by $X_{\gamma (t)} = (a t + b)Y_{\gamma (t)}$, where $a,b
\in {\mathbb R}$, $\nabla_{\dot{\gamma}} Y = 0$, $Y_{\gamma (0)} =
v$. Assume that $(M, \theta )$ has parallel pseudohermitian
torsion, i.e. $\nabla \tau = 0$. Then $T \in J_\gamma$. Let $\{
v_1 , \cdots , v_{2n-2} \} \subset H(M)_{\gamma (0)}$ such that
$\{ \dot{\gamma}(0), J_{\gamma (0)} \dot{\gamma}(0) , v_1 , \cdots
, v_{2n-2} \}$ is a $G_{\theta , \gamma (0)}$-orthonormal basis of
$H(M)_{\gamma (0)}$. Then
\[ E(\dot{\gamma}(0)) \oplus E(v_1 ) \oplus \cdots \oplus
E(v_{2n-2}) \subseteq {\mathcal H}_\gamma := J_\gamma \cap
\Gamma^\infty (\gamma^{-1} H(M)) \] if and only if \[ A_{\gamma
(0)} (\dot{\gamma}(0), \dot{\gamma}(0)) = 0, \;\;\; A_{\gamma (0)}
(v_i , \dot{\gamma}(0)) = 0, \;\;\; 1 \leq i \leq 2n-2, \] where
$\gamma^{-1} H(M)$ is the pullback of $H(M)$ by $\gamma$. If
additionally $(M , \theta)$ has vanishing pseudohermitian torsion
{\rm (}i.e. $(M , \theta )$ is Tanaka-Webster flat{\rm )} then
$E(T_{\gamma (0)}) \subset J_\gamma$. \label{p:psh1}
\end{proposition}
\noindent The proof of Proposition \ref{p:psh1} requires the
explicit form of the curvature tensor of the Tanaka-Webster
connection of $(M , \theta )$ when $k_\theta =$ const. This is
provided by
\begin{theorem} Let $M$ be a strictly pseudoconvex CR manifold
and $\theta$ a contact form on $M$ such that $G_\theta$ is
positive definite and $k_\theta (\sigma ) = c$, for some $c \in
{\mathbb R}$ and any $2$-plane $\sigma \subset T_x (M)$, $x \in
M$. Then $c = 0$ and the curvature of the Tanaka-Webster
connection of $(M , \theta )$ is given by
\begin{equation} R(X,Y)Z = \Omega (Z,Y) \tau (X) - \Omega (Z,X) \tau (Y)
+ \label{e:J7}
\end{equation}
\[ + A(Z,Y) J X - A(Z,X) J Y, \]
for any $X,Y,Z \in T(M)$. In particular, if $(M , \theta)$ has
constant pseudohermitian sectional curvature and CR dimension $n
\geq 2$ then the Tanaka-Webster connection of $(M , \theta )$ is
flat if and only if $(M , \theta )$ has vanishing pseudohermitian
torsion {\rm (}$\tau = 0${\rm )}. \label{t:J3}
\end{theorem}
\noindent The proof of Theorem \ref{t:J3} is given in Appendix A.
By Theorem \ref{t:J3} there are no ``pseudohermitian space forms''
except for those of zero pseudohermitian sectional curvature and
these are not in general flat. As in \cite{kn:DrTo}, the term {\em
pseudohermitian space form} is reserved for manifolds of constant
{\em holomorphic} pseudohermitian sectional curvature (and then
examples with arbitrary $c \in {\mathbb R}$ abound, cf.
\cite{kn:DrTo}, Chapter 1).
\par
{\em Proof of Proposition} \ref{p:psh1}. By (\ref{e:J7})
\[ R(X, \dot{\gamma})\dot{\gamma} = \Omega (X, \dot{\gamma}) \tau
(\dot{\gamma}) + A(\dot{\gamma} , \dot{\gamma}) J X - A(X ,
\dot{\gamma}) J \dot{\gamma} \] hence the Jacobi equation
(\ref{e:J4}) becomes
\begin{equation}
X^{\prime\prime} - 2 \Omega (X^\prime , \dot{\gamma}) T + \theta
(X^\prime ) \tau (\dot{\gamma}) + \label{e:psh1}
\end{equation}
\[ + \theta (X) (\nabla_{\dot{\gamma}} \tau )\dot{\gamma} + \Omega
(X, \dot{\gamma}) \tau (\dot{\gamma}) + A(\dot{\gamma} ,
\dot{\gamma}) J X - A(X , \dot{\gamma}) J \dot{\gamma} = 0. \] We
look for solutions to (\ref{e:psh1}) of the form $X_{\gamma (t)} =
f(t) T_{\gamma (t)}$. The relevant equation is
\[ f^{\prime\prime}(t) T + f^\prime (t) \tau (\dot{\gamma}) + f(t)
(\nabla_{\dot{\gamma}} \tau )\dot{\gamma} = 0 \] (by $\nabla T =
0$) or $f^{\prime\prime}(t) = 0$ and $f^\prime (t) \tau
(\dot{\gamma}) + f(t) (\nabla_{\dot{\gamma}} \tau )\dot{\gamma} =
0$. Therefore, if $\nabla \tau = 0$ then $T \in J_\gamma$ while if
$\tau = 0$ then $T, \hat{T} \in J_\gamma$, where $\hat{T}_{\gamma
(t)} = t T_{\gamma (t)}$. Next, we look for solutions to
(\ref{e:psh1}) of the form $X_{\gamma (t)} = f(t) Y_{\gamma (t)}$
where $Y$ is a vector field along $\gamma$ such that
$\nabla_{\dot{\gamma}} Y = 0$, $Y_{\gamma (0)} =: v \in
H(M)_{\gamma (0)}$, $|v| = 1$, and $g_{\theta , \gamma (0)}(v ,
J_{\gamma (0)} \dot{\gamma}(0)) = 0$. Substitution into
(\ref{e:psh1}) gives
\[ f^{\prime\prime}(t) Y + f(t) [A(\dot{\gamma}, \dot{\gamma}) J Y
- A(Y, \dot{\gamma}) J \dot{\gamma}] = 0 \] or (by taking the
inner product with $Y$) $f^{\prime\prime}(t) = 0$, i.e. $f(t) = a
t + b$, $a,b \in {\mathbb R}$. Therefore (with the notations in
Proposition \ref{p:psh1}) $E(v_i ) \subset J_\gamma \cap
\Gamma^\infty (\gamma^{-1} H(M))$ if and only if $A_{\gamma
(0)}(\dot{\gamma}(0), \dot{\gamma}(0)) = 0$ and $A_{\gamma
(0)}(v_i , \dot{\gamma}(0)) = 0$. Note also that
$E(\dot{\gamma}(0))$ (the space spanned by $\dot{\gamma}$ and
$\hat{\gamma}$) consists of Jacobi fields lying in $H(M)$. As $\{
\dot{\gamma}(t) , J_{\gamma (t)} \dot{\gamma}(t) , Y_{1, \gamma
(t)} , \cdots , Y_{2n-2 , \gamma (t)} \}$ is an orthonormal basis
of $H(M)_{\gamma (t)}$ (where $Y_i$ is the unique solution to
$(\nabla_{\dot{\gamma}} Y)_{\gamma (t)} = 0$, $Y_{\gamma (0)} =
v_i$) it follows that the sum $E(\dot{\gamma}(0)) + E(v_1 ) +
\cdots + E(v_{2n-2})$ is direct. Q.e.d. \vskip 0.1in Let $(M,
(\varphi , \xi , \eta , g))$ be a contact Riemannian manifold. Let
$X \in T_x (M)$ be a unit tangent vector orthogonal to $\xi$ and
$\sigma \subset T_x (M)$ the $2$-plane spanned by $\{ X , \varphi
X \}$ (a $\varphi$-{\em holomorphic plane}). We recall (cf. e.g.
\cite{kn:Bla}, p. 94) that the $\varphi$-{\em sectional curvature}
is the restriction of the sectional curvature $k$ of $(M , g)$ to
the $\varphi$-holomorphic planes. Let us set $H(X) = k(\sigma )$.
A Sasakian manifold of constant $\varphi$-sectional curvature
$H(X) = c$, $c \in {\mathbb R}$, is a {\em Sasakian space form}.
Compact Sasakian space forms have been classified in
\cite{kn:Kam}. By a result in \cite{kn:Bla}, p. 97, the Riemannian
curvature $R^D$ of a Sasakian space form $M$ (of
$\varphi$-sectional curvature $c$) is given by
\begin{equation}
R^D (X,Y) Z = \frac{c+3}{4} \{ g(Y,Z) X - g(X,Z) Y \} +
\label{e:curv0}
\end{equation}
\[ + \frac{c-1}{4} \{ \eta (Z)[\eta (X) Y - \eta (Y) X ] + [g(X,Z)
\eta (Y) - g(Y,Z) \eta (X)] \xi + \]
\[ + \Omega (Z,Y) \varphi X - \Omega (Z,X) \varphi Y + 2 \Omega
(X,Y) \varphi Z \} \] for any $X,Y,Z \in T(M)$.
Given a strictly pseudoconvex CR manifold $M$ and a
contact form $\theta$ we recall (cf. e.g. (1.59) in
\cite{kn:DrTo}) that
\begin{equation} D = \nabla + (\Omega - A) \otimes T + \tau
\otimes \theta + 2 \theta \odot J. \label{e:DeNabla}
\end{equation}
A calculation based on (\ref{e:DeNabla}) leads to
\[ R^D (X,Y) Z = R(X,Y) Z + (L X \wedge L Y ) Z - 2 \Omega (X,Y) J
Z - \]
\[ - g_\theta (S(X,Y) , Z) T + \theta (Z) S(X,Y) - \]
\[ - 2 g_\theta ((\theta \wedge {\mathcal O})(X,Y), Z) T + 2
\theta (Z) (\theta \wedge {\mathcal O})(X,Y) \] for any $X,Y,Z \in
T(M)$, relating the Riemannian curvature $R^D$ of $(M , g_\theta
)$ to the curvature $R$ of the Tanaka-Webster connection. Here
\[ L = \tau + J, \;\;\; {\mathcal O} = \tau^2 + 2 J \tau - I, \]
and $(X \wedge Y)Z = g_\theta (X,Z) Y - g_\theta (Y,Z) X$. Also
$S(X,Y) = (\nabla_X \tau )Y - (\nabla_Y \tau ) X$. Let us assume
that $(M , \theta )$ is a Sasakian manifold ($\tau = 0$) whose
Tanaka-Webster connection is flat ($R = 0$). Then $S = 0$, $L = J$
and ${\mathcal O} = - I$ hence
\[ R^D (X,Y) Z = (J X \wedge J Y) Z - 2 \Omega (X,Y) J Z + \]
\[ + 2 g_\theta ((\theta \wedge I)(X,Y) , Z) T - 2 \theta (Z)
(\theta \wedge I)(X,Y) \] and a comparison to (\ref{e:curv0})
shows that
\begin{proposition} Let $(M , \theta )$ be a Sasakian manifold.
Then its Tanaka-Webster connection is flat if and only if $(M ,
(J, - T, - \theta , g_\theta ))$ is a Sa\-sa\-ki\-an space form of
$\varphi$-sectional curvature $c = - 3$. \label{p:menotre}
\end{proposition}
By Lemma \ref{l:conj6} below the dimension of ${\mathcal
H}_\gamma$ is at most $4n$. On a Sasakian space form we may
determine $4n-1$ independent vectors in ${\mathcal H}_\gamma$.
Indeed, by combining Propositions \ref{p:psh1} and \ref{p:menotre}
we obtain
\begin{corollary} $\,$ \par\noindent
Let $(M , \theta )$ be a Sasakian space form of
$\varphi$-sectional curvature $c = - 3$ and $\gamma (t) \in M$ a
lengthy geodesic of the Tanaka-Webster connection $\nabla$,
parametrized by arc length. Let $\{ v_1 , \cdots , v_{2n-2} \}
\subset H(M)_{\gamma (0)}$ such that $\{ \dot{\gamma}(0),
J_{\gamma (0)} \dot{\gamma}(0), v_1 , \cdots , v_{2n-2} \}$ is a
$G_{\theta , \gamma (0)}$-orthonormal basis of $H(M)_{\gamma
(0)}$. Let $X_i$ be the vector field along $\gamma$ determined by
\[ \nabla_{\dot{\gamma} (t)} X_i = 0, \;\;\; X_i (\gamma (0)) = v_i
, \] for $1 \leq i \leq 2n-2$. Then ${\mathcal S} = \{
\dot{\gamma}, \hat{\gamma}, J \dot{\gamma}, X_i , \hat{X}_i : 1
\leq i \leq 2n-2 \}$ is a free system in ${\mathcal H}_\gamma$
while ${\mathcal S} \cup \{ T , \hat{T} \}$ is free in $J_\gamma$.
Here if $Y$ is a vector field along $\gamma (t)$ we set
$\hat{Y}_{\gamma (t)} = t Y_{\gamma (t)}$ for every $t$.
\end{corollary}
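The system ${\mathcal S}$ falls one short of the upper bound $4n$ in Lemma \ref{l:conj6}; the natural further candidate $\widehat{J \dot{\gamma}}$ fails to be a Jacobi field. A quick check (a sketch, using $R = 0$ on a Sasakian space form of $\varphi$-sectional curvature $c = -3$, $\nabla J = 0$, and the Jacobi operator ${\mathcal J}_\gamma$ as in the proof of Lemma \ref{l:conj1} below):

```latex
% X = t J gammadot along the lengthy geodesic gamma:
\[ X^\prime = \nabla_{\dot{\gamma}} ( t \, J \dot{\gamma} ) = J
\dot{\gamma}, \;\;\; X^{\prime\prime} = 0 , \]
% hence (by R = 0)
\[ {\mathcal J}_\gamma X = X^{\prime\prime} - 2 \Omega (X^\prime ,
\dot{\gamma}) T + R(X , \dot{\gamma}) \dot{\gamma} = - 2 \Omega (J
\dot{\gamma} , \dot{\gamma}) T \neq 0 , \]
% as Omega(J gammadot, gammadot) = (+/-) |J gammadot|^2 is nonzero
% (the sign depends on the convention adopted for Omega).
```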
\section{Conjugate points on Sasakian manifolds}
Let $(M , \theta )$ be a Sasakian manifold and $\gamma : [a,b] \to
M$ a geodesic of the Tanaka-Webster connection $\nabla$,
parametrized by arc length. Given a piecewise differentiable
vector field $X$ along $\gamma$ we set
\[ I_a^b (X) = \int_a^b \{ g_\theta (\nabla_{\dot{\gamma}} X , \nabla_{\dot{\gamma}}
X) - g_\theta (R(X, \dot{\gamma})\dot{\gamma} , X) \}_{\gamma (t)}
d t \] where $R$ is the curvature of $\nabla$. We shall prove the
following
\begin{proposition} Let $(M , \theta)$ be a Sasakian manifold and
$\gamma (t) \in M$, $a \leq t \leq b$, a lengthy geodesic of
$\nabla$, parametrized by arc length, such that $\gamma (a)$ has
no conjugate point along $\gamma$. Let $Y \in {\mathcal H}_\gamma$
be a horizontal Jacobi field along $\gamma$ such that $Y_{\gamma
(a)} = 0$ and $Y$ is perpendicular to $\gamma$. Let $X$ be a
piecewise differentiable vector field along $\gamma$ such that
$X_{\gamma (a)} = 0$ and $X$ is perpendicular to $\gamma$. If
$X_{\gamma (b)} = Y_{\gamma (b)}$ then
\begin{equation}
I_a^b (X) \geq I_a^b (Y) \label{e:conj1}
\end{equation}
and the equality holds if and only if $X = Y$. \label{p:conj1}
\end{proposition}
{\em Proof}. Let $J_{\gamma , a}$ be the space of all Jacobi
fields $Z \in J_\gamma$ such that $Z_{\gamma (a)} = 0$. By Prop.
1.1 in \cite{kn:KoNo}, Vol. II, p. 63, $J_{\gamma , a}$ has
dimension $2n+1$. Moreover, let $J_{\gamma , a, \bot}$ be the
space of all $Z \in J_{\gamma , a}$ such that $g_\theta (Z ,
\dot{\gamma})_{\gamma (t)} = 0$, for every $t$. Then by Theorem
\ref{t:J1} it follows that $J_{\gamma , a , \bot}$ has dimension
$2n$. We shall need the following
\begin{lemma} For every Sasakian manifold $(M , \theta )$ the
characteristic direction $T$ of $(M , \theta )$ is a Jacobi field
along any geodesic $\gamma : [a,b] \to M$ of $\nabla$. Also, if
$T_a$ is the vector field along $\gamma$ given by $T_{a , \gamma
(t)} = (t-a) T_{\gamma (t)}$, $a \leq t \leq b$, and $\gamma$ is
lengthy then $T_a \in J_{\gamma , a , \bot}$ and $T_{a , \gamma
(t)} \neq 0$, $a < t \leq b$. \label{l:conj1}
\end{lemma}
{\em Proof}. Let ${\mathcal J}_\gamma$ be the Jacobi operator.
Then
\[ {\mathcal J}_\gamma T = T^{\prime\prime} - 2 \Omega (T^\prime ,
\dot{\gamma}) T + R(T , \dot{\gamma}) \dot{\gamma} = R(T,
\dot{\gamma})\dot{\gamma} \] as $T^\prime = \nabla_{\dot{\gamma}}
T = 0$. On the other hand, on any nondegenerate CR manifold with
$S = 0$ (i.e. $S(X,Y) \equiv (\nabla_X \tau )Y - (\nabla_Y \tau )X
= 0$, for any $X,Y \in T(M)$) the curvature of the Tanaka-Webster
connection satisfies
\begin{equation}
R(T, X) X = 0, \;\;\; X \in T(M), \label{e:conjR}
\end{equation}
hence ${\mathcal J}_\gamma T = 0$. As $R(T,T) = 0$ and $R(X,Y) T =
0$ it suffices to check (\ref{e:conjR}) for $X \in H(M)$, i.e.
locally $X = Z^\alpha T_\alpha + Z^{\overline{\alpha}}
T_{\overline{\alpha}}$. Then
\[ R(T,X)X = \{ {{R_\beta}^\gamma}_{0\alpha} Z^\alpha Z^\beta +
{{R_\beta}^\gamma}_{0\overline{\alpha}} Z^{\overline{\alpha}}
Z^\beta \} T_\gamma + \]
\[ + \{ {{R_{\overline{\beta}}}^{\overline{\gamma}}}_{0\alpha} Z^\alpha Z^{\overline{\beta}} +
{{R_{\overline{\beta}}}^{\overline{\gamma}}}_{0\overline{\alpha}}
Z^{\overline{\alpha}} Z^{\overline{\beta}} \}
T_{\overline{\gamma}} \] and (by (1.85)-(1.86) in \cite{kn:DrTo},
section 1.4)
\[ {{R_\beta}^\gamma}_{0\alpha} = g^{\gamma\overline{\lambda}}
g_{\alpha\overline{\mu}}
S^{\overline{\mu}}_{\beta\overline{\lambda}} \, , \;\;\;
{{R_\beta}^\gamma}_{0\overline{\alpha}} =
g_{\lambda\overline{\alpha}} g^{\gamma\overline{\mu}}
S^\lambda_{\beta\overline{\mu}} \, , \] and these components
vanish because $S = 0$, so that (\ref{e:conjR}) holds. To complete
the proof of
Lemma \ref{l:conj1} let $u(t) = t - a$. Then (by $T \, \rfloor \,
\Omega = 0$ and (\ref{e:conjR}))
\[ {\mathcal J}_\gamma T_a = u^{\prime\prime} T - 2 u^\prime
\Omega (T , \dot{\gamma}) T + u R(T , \dot{\gamma}) \dot{\gamma} =
0. \] Q.e.d.
\begin{lemma} Let $(M , \theta )$ be a Sasakian manifold and
$\gamma (t) \in M$ a geodesic of $\nabla$. If $X \in J_\gamma$
then $X_H \equiv X - \theta (X) T$ satisfies the second order ODE
\begin{equation}
\nabla^2_{\dot{\gamma}} X_H + R(X_H , \dot{\gamma}) \dot{\gamma} =
0. \label{e:conj3}
\end{equation}
\label{l:conj2}
\end{lemma}
{\em Proof}.
\[ 0 = {\mathcal J}_\gamma X = \nabla^2_{\dot{\gamma}} X_H +
\theta (X^{\prime\prime}) T - 2 \Omega (\nabla_{\dot{\gamma}} X_H
, \dot{\gamma}) T + R(X_H , \dot{\gamma}) \dot{\gamma} \] hence
(by the uniqueness of the direct sum decomposition $T(M) = H(M)
\oplus {\mathbb R} T$) $X_H$ satisfies (\ref{e:conj3}). Q.e.d.
\par Let
us go back to the proof of Proposition \ref{p:conj1}. Let us
complete $T_a$ to a linear basis $\{ T_a , Y_2 , \cdots , Y_{2n}
\}$ of $J_{\gamma , a , \bot}$ and set $Y_1 = T_a$ for simplicity.
Then $Y = a^i Y_i$ for some $a^i \in {\mathbb R}$, $1 \leq i \leq
2n$. Let us observe that for each $a < t \leq b$ the tangent
vectors
\[ \{ T_{a , \gamma (t)} , Y^H_{2 , \gamma (t)} , \cdots , Y^H_{2n
, \gamma (t)} \} \subset [{\mathbb R} \dot{\gamma}(t)]^\bot
\subset T_{\gamma (t)} (M) \] are linearly independent, where
$Y_j^H := Y_j - \theta (Y_j ) T$, $2 \leq j \leq 2n$. Indeed
\[ 0 = \alpha T_{a , \gamma (t)} + \sum_{j=2}^{2n} \alpha^j
Y^H_{j, \gamma (t)} = \{ \alpha - \sum_{j=2}^{2n}
\frac{\alpha^j}{t-a} \theta (Y_j )_{\gamma (t)} \} T_{a, \gamma
(t)} + \sum_{j=2}^{2n} \alpha^j Y_{j, \gamma (t)} \] implies
$\alpha^j = 0$, and then $\alpha = 0$, because $\{ Y_{i , \gamma
(t)} : 1 \leq i \leq 2n \}$ are linearly independent, for any $a <
t \leq b$. In turn, the vectors $Y_{i , \gamma (t)}$ are
independent because $\gamma (a)$ has no conjugate point along
$\gamma$. The proof is by contradiction. Assume that
\begin{equation}
\lambda^i Y_{i, \gamma (t_0 )} = 0, \label{e:conj5}
\end{equation}
for some $a < t_0 \leq b$ and some $\lambda = (\lambda^1 , \cdots
, \lambda^{2n}) \in {\mathbb R}^{2n} \setminus \{ 0 \}$. Let us
set $Z_0 = \lambda^i Y_i \in J_\gamma$. Then
\[ \lambda \neq 0 \Longrightarrow Z_0 \neq 0, \]
\[ Z_0 \in J_{\gamma , a} \Longrightarrow Z_{0, \gamma (a)} = 0,
\;\;\; {\rm (\ref{e:conj5})} \Longrightarrow Z_{0, \gamma (t_0 )}
= 0,
\]
hence $\gamma (a)$ and $\gamma (t_0 )$ are conjugate along
$\gamma$, a contradiction. Yet $[{\mathbb R}\dot{\gamma}(t)]^\bot$
has dimension $2n$ hence
\[ X_{\gamma (t)} = f(t) T_{a, \gamma (t)} + \sum_{j=2}^{2n} f^j
(t) Y^H_{j , \gamma (t)} \, , \] for some piecewise differentiable
functions $f(t)$, $f^j (t)$. We set $f^1 = f$, $Z_1 = T_a$ and
$Z_j = Y^H_j$, $2 \leq j \leq 2n$, for simplicity. Then
\begin{equation}
|X^\prime |^2 = | \frac{d f^i}{d t} Z_i |^2 + |f^i Z^\prime_i |^2
+ 2 g_\theta (\frac{d f^i}{d t} Z_i , f^j Z^\prime_j ).
\label{e:conj6}
\end{equation}
Also (by (\ref{e:conjR}) and Lemma \ref{l:conj2})
\[ - g_\theta (R(X , \dot{\gamma})\dot{\gamma} , X) = - f^i
g_\theta (R(Z_i , \dot{\gamma})\dot{\gamma} , X) = \]
\[ = - \sum_{j=2}^{2n} f^j g_\theta (R(Z_j ,
\dot{\gamma})\dot{\gamma} , X) = \sum_{j=2}^{2n} f^j g_\theta
(Z^{\prime\prime}_j , X) \] or (as $T_a^{\prime\prime} = 0$)
\begin{equation}
- g_\theta (R(X, \dot{\gamma})\dot{\gamma} , X) = g_\theta (f^i
Z_i^{\prime\prime} , f^j Z_j ). \label{e:conj7}
\end{equation}
Finally, note that
\begin{equation}
g_\theta (\frac{d f^i}{d t} Z_i , f^j Z^\prime_j ) + |f^i
Z^\prime_i |^2 + g_\theta (f^i Z_i , f^j Z_j^{\prime\prime}) =
\label{e:conj8}
\end{equation}
\[ = \frac{d}{d t} g_\theta (f^i Z_i , f^j Z^\prime_j ) - g_\theta
(f^i Z_i , \frac{d f^j}{d t} Z^\prime_j ). \] Summing up (by
(\ref{e:conj6})-(\ref{e:conj8}))
\begin{equation}
|X^\prime |^2 - g_\theta (R(X, \dot{\gamma})\dot{\gamma} , X) =
\label{e:conj9}
\end{equation}
\[ = \frac{d}{d t} g_\theta (f^i Z_i , f^j Z^\prime_j ) - g_\theta
(f^i Z_i , \frac{d f^j}{d t} Z^\prime_j ) + g_\theta (\frac{d
f^i}{d t} Z_i , f^j Z^\prime_j ) + | \frac{d f^i}{d t} Z_i |^2 \,
. \]
\begin{lemma} Let $(M , \theta )$ be a Sasakian manifold and
$\gamma (t) \in M$ a geodesic of $\nabla$. If $X$ and $Y$ are
solutions to $\nabla^2_{\dot{\gamma}} Z + R(Z,
\dot{\gamma})\dot{\gamma} = 0$ then
\begin{equation}
\label{e:adj4} \frac{d}{d t} \{ g_\theta (X,Y^\prime ) - g_\theta
(X^\prime , Y) \} = 0.
\end{equation}
In particular, if $X_{\gamma (a)} = 0$ and $Y_{\gamma (a)} = 0$ at
some point $\gamma (a)$ of $\gamma$ then
\[ g_\theta (X, Y^\prime ) - g_\theta (X^\prime , Y) = 0. \]
\label{l:conj4}
\end{lemma}
{\em Proof}. As $\tau = 0$, the $4$-tensor $R(X,Y,Z,W)$ possesses
the symmetry property $R(X,Y,Z,W) = R(Z,W,X,Y)$ (cf. (\ref{e:A8})
in Appendix A), so one may subtract the identities
\[ \frac{d}{d t} g_\theta (X, Y^\prime ) = g_\theta (X^\prime , Y^\prime )
- g_\theta (X , R(Y, \dot{\gamma}) \dot{\gamma}), \]
\[ \frac{d}{d t} g_\theta (X^\prime , Y ) = g_\theta (X^\prime ,
Y^\prime ) - g_\theta (R(X, \dot{\gamma})\dot{\gamma}, Y) \] to
obtain (\ref{e:adj4}). Q.e.d. \par By Lemma \ref{l:conj2}
the fields $Z_j$, $2 \leq j \leq 2n$ satisfy
$\nabla_{\dot{\gamma}}^2 Z_j + R(Z_j , \dot{\gamma})\dot{\gamma} =
0$. Then we may apply Lemma \ref{l:conj4} to conclude that
\[ g_\theta (\frac{df^i}{d t} Z_i , f^j Z^\prime_j ) - g_\theta
(f^i Z_i , \frac{d f^j}{d t} Z^\prime_j ) = \] \[ = f^i \frac{d
f^j}{d t} \{ g_\theta (Z_j , Z^\prime_i ) - g_\theta (Z^\prime_j ,
Z_i ) \} = 0 \] so that (\ref{e:conj9}) becomes
\[ |X^\prime |^2 - g_\theta (R(X, \dot{\gamma})\dot{\gamma} , X) =
\frac{d}{d t} g_\theta (f^i Z_i , f^j Z^\prime_j ) + |\frac{d
f^i}{d t} Z_i |^2 \] and integration gives
\begin{equation}
I_a^b (X) = g_\theta (f^i Z_i , f^j Z^\prime_j )_{\gamma (b)} +
\int_a^b |\frac{d f^i}{d t} Z_i |^2 d t . \label{e:conj10}
\end{equation}
We wish to apply (\ref{e:conj10}) to the vector field $X = Y$. In
this case the functions $f^j$ are $f^1 (t) = a^1 +
(1/(t-a)) \sum_{j=2}^{2n} a^j \theta (Y_j )_{\gamma (t)} = 0$
(because of $Y_{\gamma (t)} \in H(M)_{\gamma (t)}$) and $f^j =
a^j$ (so that $d f^j /d t = 0$) for $2 \leq j \leq 2n$. Then (by
(\ref{e:conj10}))
\begin{equation}
I_a^b (Y) = g_\theta (\sum_{i=2}^{2n} a^i Z_i , \sum_{j=2}^{2n}
a^j Z_j^\prime )_{\gamma (b)} \, . \label{e:conj11}
\end{equation}
As $X_{\gamma (b)} = Y_{\gamma (b)}$ it follows that $f^1 (b) = 0$
and $f^j (b) = a^j$, $2 \leq j \leq 2n$, so that by subtracting
(\ref{e:conj10}) and (\ref{e:conj11}) we get
\[ I_a^b (X) - I_a^b (Y) = \int_a^b |\frac{d f^i}{d t} Z_i |^2 d t
\geq 0 \] and (\ref{e:conj1}) is proved. The equality $I_a^b (X) =
I_a^b (Y)$ yields $d f^i /d t = 0$, i.e. $f^1 (t) = f^1 (b) = 0$
and $f^j (t) = f^j (b) = a^j$, $2 \leq j \leq 2n$, hence
\[ X_{\gamma (t)} = \sum_{j=2}^{2n} a^j Y^H_{j, \gamma (t)} =
\sum_{j=2}^{2n} a^j \{ Y_{j , \gamma (t)} - \theta (Y_j )_{\gamma
(t)} T_{\gamma (t)} \} = \]
\[ = \sum_{j=2}^{2n} a^j Y_{j, \gamma (t)} + (t-a) a^1 T_{\gamma
(t)} = a^i Y_{i, \gamma (t)} = Y_{\gamma (t)} \, . \] Q.e.d.
\vskip 0.1in Setting $Y = 0$ in Proposition \ref{p:conj1} leads to
\begin{corollary}
Let $(M , \theta )$ be a Sasakian manifold and $\gamma : [a,b] \to
M$ a lengthy geodesic of the Tanaka-Webster connection,
parametrized by arc length and such that $\gamma (a)$ has no
conjugate point along $\gamma$. If $X$ is a piecewise
differentiable vector field along $\gamma$ such that $X_{\gamma
(a)} = X_{\gamma (b)} = 0$ and $X$ is perpendicular to $\gamma$
then $I_a^b (X) \geq 0$ and equality holds if and only if $X = 0$.
\label{c:conj1}
\end{corollary}
Corollary \ref{c:conj1} admits the following application
\begin{theorem} Let $(M , \theta )$ be a Sasakian manifold and
$\nabla$ its Tanaka-Webster connection. Assume that the
pseudohermitian sectional curvature satisfies $k_\theta (\sigma )
\geq k_0 > 0$, for any $2$-plane $\sigma \subset T_x (M)$, $x \in
M$. Then for any lengthy geodesic $\gamma (t) \in M$ of $\nabla$,
parametrized by arc length, the distance between two consecutive
conjugate points of $\gamma$ is at most $\pi
/(2\sqrt{k_0})$. \label{t:conj1}
\end{theorem}
{\em Proof}. Let $\gamma : [a, c] \to M$ be a geodesic of
$\nabla$, parametrized by arc length, such that $\gamma (c)$ is
the first conjugate point of $\gamma (a)$ along $\gamma$. Let $b
\in (a, c)$ and let $Y$ be a unit vector field along $\gamma$ such
that $(\nabla_{\dot{\gamma}} Y)_{\gamma (t)} = 0$ and $Y$ is
perpendicular to $\gamma$. Let $f(t)$ be a nonzero smooth function
such that $f(a) = f(b) = 0$. Then we may apply Corollary
\ref{c:conj1} to the vector field $X = f Y$ so that
\[ 0 \leq I_a^b (X) = \int_a^b \{ f^\prime (t)^2 |Y|^2 - f(t)^2 g_\theta (R(Y,
\dot{\gamma})\dot{\gamma} , Y) \} d t = \] \[ = \int_a^b \{
f^\prime (t)^2 - 4 f(t)^2 k_\theta (\sigma )\} d t \leq \int_a^b
\{ f^\prime (t)^2 - 4 k_0 f(t)^2 \} d t \] where $\sigma \subset
T_{\gamma (t)} (M)$ is the $2$-plane spanned by $\{ Y_{\gamma (t)}
, \dot{\gamma}(t) \}$. Finally, we may choose $f(t) = \sin [\pi
(t-a)/(b-a)]$ and use $\int_0^\pi \cos^2 x \; d x = \int_0^\pi
\sin^2 x \; d x = \pi /2$. We get $b-a \leq \pi /(2\sqrt{k_0})$ and
let $b \to c$. Q.e.d. \vskip 0.1in We may establish the following
more general version of Theorem \ref{t:conj1}
\begin{theorem} \label{t:conj2} Let $(M, \theta )$ be a Sasakian
manifold of CR dimension $n$ such that the Ricci tensor $\rho$ of
the Tanaka-Webster connection $\nabla$ satisfies \[ \rho (X,X)
\geq (2n-1) k_0 g_\theta (X,X), \;\;\; X \in H(M), \] for some
constant $k_0 > 0$. Then for any geodesic $\gamma$ of $\nabla$,
parametrized by arc length, the distance between any two
consecutive conjugate points of $\gamma$ is less than $\pi
/\sqrt{k_0}$.
\end{theorem}
\vskip 0.1in {\bf Remark}. The assumption on $\rho$ in Theorem
\ref{t:conj2} involves only the pseudohermitian Ricci curvature.
Indeed (cf. (1.98) in \cite{kn:DrTo}, section 1.4)
\[ {\rm Ric}(T_\alpha , T_{\overline{\beta}}) =
g_{\alpha\overline{\beta}} - \frac{1}{2} \;
R_{\alpha\overline{\beta}} \, , \]
\[ R_{\alpha\beta} = i(n-1) A_{\alpha\beta} \, , \;\;\; R_{0\beta}
= S^{\overline{\alpha}}_{\overline{\alpha}\beta} \, , \;\;\;
R_{\alpha 0} = R_{00} = 0, \] hence (by $\tau = 0$) $\rho (X,X) =
2 R_{\alpha\overline{\beta}} Z^\alpha Z^{\overline{\beta}}$, for
any $X = Z^\alpha T_\alpha + Z^{\overline{\alpha}}
T_{\overline{\alpha}} \in H(M)$. Here ${\rm Ric}$ is the Ricci
tensor of the Riemannian manifold $(M , g_\theta )$ (whose
symmetry yields $R_{\alpha\overline{\beta}} =
R_{\overline{\beta}\alpha}$). Note that $S = 0$ alone implies $T
\, \rfloor \, \rho = 0$. Also, if $(M, g_\theta )$ is Ricci flat
then $(M , \theta )$ is pseudo-Einstein (of pseudohermitian scalar
curvature $R = 2$), in the sense of \cite{kn:Lee}. \vskip 0.1in
{\em Proof of Theorem} \ref{t:conj2}. Let $\gamma (t) \in M$ as in
the proof of Theorem \ref{t:conj1}. Let $\{ Y_1 , \cdots ,
Y_{2n-1} \}$ be parallel (i.e. $(\nabla_{\dot{\gamma}} Y_i
)_{\gamma (t)} = 0$) vector fields such that $Y_i \in H(M)$ and
$\{ \dot{\gamma}(t) , Y_{1 , \gamma (t)} , \cdots , Y_{2n-1,
\gamma (t)} \}$ is an orthonormal basis of $H(M)_{\gamma (t)}$ for
every $t$. Let $f(t)$ be a nonzero smooth function such that $f(a)
= f(b) = 0$ and let us set $X_i = f Y_i$. Then (by Corollary
\ref{c:conj1})
\[ 0 \leq \sum_{i=1}^{2n-1} I_a^b (X_i ) = \sum_{i=1}^{2n-1} \int_a^b \{ f^\prime
(t)^2 |Y_i |^2 - f(t)^2 g_\theta (R(Y_i ,
\dot{\gamma})\dot{\gamma} , Y_i )\} d t = \] \[ = \int_a^b \{
(2n-1) f^\prime (t)^2 - f(t)^2 \rho (\dot{\gamma} , \dot{\gamma})
\} d t \leq \] \[ \leq (2n-1) \int_a^b \{ f^\prime (t)^2 - k_0
f(t)^2 \} dt
\] and the proof may be completed as that of Theorem
\ref{t:conj1}. \vskip 0.1in {\bf Remark}. The assumption in
Theorem \ref{t:conj2} is weaker than that in Theorem
\ref{t:conj1}. Indeed, let $X \in H(M)$, $X \neq 0$, and $V =
|X|^{-1} X$. Let $\{ X_j : 1 \leq j \leq 2n \}$ be a local
orthonormal frame of $H(M)$ and $\sigma_j \subset T_x (M)$ the
$2$-plane spanned by $\{ Y_{j,x} , X_x \}$, where $Y_j := X_j -
g_\theta (V, X_j ) V$. Then $k_\theta (\sigma_j ) = \frac{1}{4}
g_{\theta} (R(V_j , V)V , V_j )_x$ where $V_j = |Y_j |^{-1} Y_j$
and $k_\theta (\sigma_j ) \geq k_0 /4$ yields
\[ \rho (X,X)_x = 4 |X|_x^2 \sum_{j=1}^{2n} k_\theta (\sigma_j )
|Y_j|^2_x \geq (2n-1) k_0 |X|^2_x \, . \]
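The last inequality rests on an elementary identity for the norms $|Y_j |$, which may be checked as follows (with $V = |X|^{-1} X$ and $\{ X_j \}$ $g_\theta$-orthonormal, as above):

```latex
% norms of the projections Y_j = X_j - g_theta(V, X_j) V :
\[ |Y_j |^2 = |X_j |^2 - g_\theta (V , X_j )^2 = 1 - g_\theta (V ,
X_j )^2 , \;\;\; \sum_{j=1}^{2n} g_\theta (V , X_j )^2 = |V|^2 = 1
, \]
% hence
\[ \sum_{j=1}^{2n} |Y_j |^2 = 2n - 1 , \]
% so that k_theta(sigma_j) >= k_0 / 4 yields
% 4 |X|^2 \sum_j k_theta(sigma_j) |Y_j|^2 >= k_0 |X|^2 (2n-1) .
```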
\vskip 0.1in As another application of Proposition \ref{p:conj1}
we establish
\begin{theorem} Let $(M , \theta )$ be a Sasakian manifold, of CR
dimension $n$. Let $\gamma : [a,b] \to M$ be a lengthy geodesic of
the Tanaka-Webster connection $\nabla$, parametrized by arc
length. Assume that {\rm i)} there is $c \in (a,b)$ such that the
points $\gamma (a)$ and $\gamma (c)$ are horizontally conjugate
along $\gamma$ and {\rm ii)} for any $\delta > 0$ such that
$[c-\delta , c + \delta ] \subset (a,b)$ one has $\dim_{\mathbb R}
{\mathcal H}_{\gamma_\delta} = 4n$, where $\gamma_\delta$ is the
restriction of $\gamma$ to $[c-\delta , c+ \delta ]$. Then there
is a piecewise differentiable horizontal vector field $X$ along
$\gamma$ such that {\rm 1)} $X$ is perpendicular to $\dot{\gamma}$
and $J \dot{\gamma}$, {\rm 2)} $X_{\gamma (a)} = X_{\gamma (b)} =
0$, and {\rm 3)} $I_a^b (X) < 0$. \label{t:conj3}
\end{theorem}
In general, we have
\begin{lemma} Let $(M, \theta )$ be a Sasakian manifold of CR
dimension $n$ and $\gamma (t) \in M$ a lengthy geodesic of
$\nabla$, parametrized by arc length. Then \[ 2n+1 \leq
\dim_{\mathbb R} {\mathcal H}_\gamma \leq 4n. \] \label{l:conj6}
\end{lemma}
Hence the hypothesis in Theorem \ref{t:conj3} is that ${\mathcal
H}_{\gamma_\delta}$ has maximal dimension. We shall prove Lemma
\ref{l:conj6} later on. As to the converse of Theorem
\ref{t:conj3}, Corollary \ref{c:conj1} guarantees only that the
existence of a piecewise differentiable vector field $X$ as above
implies that there is some point $\gamma (c)$ conjugate to $\gamma
(a)$ along $\gamma$.
\par
{\em Proof of Theorem} \ref{t:conj3}. Let $a < c < b$ such that
$\gamma (a)$ and $\gamma (c)$ are horizontally conjugate and let
$Y \in {\mathcal H}_\gamma$ such that $Y_{\gamma (a)} = Y_{\gamma
(c)} = 0$. By Corollary \ref{c:ppdue} (as $(M, \theta )$ is
Sasakian) $Y$ is perpendicular to $\gamma$. Let $(U, x^i )$ be a
normal (with respect to $\nabla$) coordinate neighborhood with
origin at $\gamma (c)$. By Theorem 8.7 in \cite{kn:KoNo}, Vol. I,
p. 149, there is $R > 0$ such that for any $0 < r < R$ the open
set
\[ U(\gamma (c); r) \equiv \{ y \in U : \sum_{i=1}^{2n+1} x^i (y)^2
< r^2 \} \] is convex\footnote{That is, any two points of $U(\gamma
(c); r)$ may be joined by a geodesic of $\nabla$ lying in
$U(\gamma (c); r)$.} and each point of $U(\gamma (c); r)$ has a
normal coordinate neighborhood containing $U(\gamma (c); r)$. By
continuity there is $\delta > 0$ such that $\gamma (t) \in
U(\gamma (c); r)$ for any $c - \delta \leq t \leq c + \delta$. Let
$\gamma_\delta$ denote the restriction of $\gamma$ to the interval
$[c-\delta , c+ \delta ]$. We need the following
\begin{lemma} The points $\gamma (c \pm \delta )$ are not
conjugate along $\gamma_\delta$. \label{l:conj5}
\end{lemma}
The proof is by contradiction. If $\gamma (c + \delta )$ is
conjugate to $\gamma (c - \delta )$ along $\gamma_\delta$ then (by
Theorem 1.4 in \cite{kn:KoNo}, Vol. II, p. 67) there is $v \in
T_{\gamma (c-\delta )}(M)$ such that $\exp_{\gamma (c-\delta )} v
= \gamma (c + \delta )$ and the linear map
\[ d_v \exp_{\gamma (c -\delta )} : T_v (T_{\gamma (c-\delta
)}(M)) \to T_{\gamma (c + \delta )} (M) \] is singular, i.e. ${\rm
Ker}(d_v \exp_{\gamma (c - \delta )} ) \neq 0$. Yet $\gamma (c -
\delta ) \in U(\gamma (c); r)$ hence there is a normal (relative
to $\nabla$) coordinate neighborhood $V$ with origin at $\gamma (c
- \delta )$ such that $V \supseteq U(\gamma (c); r)$. In
particular $\exp_{\gamma (c - \delta )} : V \to M$ is a
diffeomorphism on its image, so that $d_v \exp_{\gamma (c -\delta
)}$ is a linear isomorphism, a contradiction. Lemma \ref{l:conj5}
is proved.
\par
Let us go back to the proof of Theorem \ref{t:conj3}. The linear
map
\[ \Phi : J_{\gamma_\delta} \to T_{\gamma (c - \delta )}(M) \oplus
T_{\gamma (c + \delta )}(M), \;\;\; Z \mapsto (Z_{\gamma (c-\delta
)} \, , \, Z_{\gamma (c+ \delta )}), \] is a monomorphism. Indeed
${\rm Ker}(\Phi ) = 0$, otherwise $\gamma (c \pm \delta )$ would
be conjugate (in contradiction with Lemma \ref{l:conj5}). Both
spaces are $(4n+2)$-dimensional so that $\Phi$ is an epimorphism,
as well. By hypothesis ${\mathcal H}_{\gamma_\delta}$ is
$4n$-dimensional hence $\Phi$ descends to an isomorphism
\[ {\mathcal H}_{\gamma_\delta} \approx H(M)_{\gamma (c-\delta )}
\oplus H(M)_{\gamma (c + \delta )} \, . \] Let then $Z \in
{\mathcal H}_{\gamma_\delta}$ be a horizontal Jacobi field such
that
\[ Z_{\gamma (c - \delta )} = Y_{\gamma (c - \delta )}, \;\;\;
Z_{\gamma (c + \delta )} = 0. \] We set
\[ X = \begin{cases} Y & {\rm on} \;\; \left. \gamma \right|_{[a,
c-\delta ]}, \cr Z & {\rm on} \;\; \gamma_\delta , \cr 0 & {\rm
on} \;\; \left. \gamma \right|_{[c+\delta , b]}. \cr \end{cases}
\]
By the very definition $X$ is horizontal, i.e. $X_{\gamma (t)} \in
H(M)_{\gamma (t)}$ for every $t$. Moreover (by ${\mathcal
J}_\gamma Y = 0$ and $\theta (Y) = 0$)
\[ I^c_a (Y ) = \int_a^c \{ |\nabla_{\dot{\gamma}} Y |^2 - g_\theta (R(Y ,
\dot{\gamma})\dot{\gamma} , Y) \} d t = \]
\[ = \int_a^c \{ |\nabla_{\dot{\gamma}} Y |^2 + g_\theta
(\nabla_{\dot{\gamma}}^2 Y, Y)\} d t = \]
\[ = g_\theta (\nabla_{\dot{\gamma}} Y , Y )_{\gamma (c)} -
g_\theta (\nabla_{\dot{\gamma}} Y , Y )_{\gamma (a)} = 0 \] i.e.
$I_a^{c-\delta}(Y ) = - I_{c-\delta}^c (Y )$. Hence
\[ I_a^b (X) = I_a^{c-\delta}(Y ) + I_{c-\delta}^{c+\delta}(Z) = -
I_{c-\delta}^c (Y ) + I_{c-\delta}^{c+\delta}(Z) . \] Finally, let
us consider the vector field along $\gamma_\delta$
\[ W = \begin{cases} Y & {\rm on} \;\; \left. \gamma
\right|_{[c-\delta , c]} , \cr 0 & {\rm on} \;\; \left. \gamma
\right|_{[c , c + \delta ]} . \cr \end{cases} \] Note that
$W_{\gamma (c+\delta )} = 0$, $W_{\gamma (c-\delta )} = Z_{\gamma
(c - \delta )}$ and $W$ is perpendicular to $\gamma$. Thus we may
apply Proposition \ref{p:conj1} to $W$ and to $Z \in {\mathcal
H}_{\gamma_\delta}$ to conclude that $I_{c-\delta}^c (Y) =
I_{c-\delta}^{c+\delta}(W) \geq I_{c-\delta}^{c+\delta}(Z)$.
Consequently $I_a^b (X) < 0$. Let us show that $X$ is orthogonal
to $J \dot{\gamma}$. By Lemma \ref{l:J2} (as $Y \in J_\gamma$)
\[ \theta (Y^\prime )_{\gamma (t)} - 2 \Omega (Y ,
\dot{\gamma})_{\gamma (t)} = {\rm const}. = \theta (Y^\prime
)_{\gamma (a)} - 2 \Omega (Y, \dot{\gamma})_{\gamma (a)} \] hence
(as $Y_{\gamma (a)} = 0$ and $Y_{\gamma (t)} \in H(M)_{\gamma (t)}
\Longrightarrow Y^\prime_{\gamma (t)} \in H(M)_{\gamma (t)}$)
\[ 2 \Omega (Y, \dot{\gamma})_{\gamma (t)} = \theta (Y^\prime
)_{\gamma (t)} - \theta (Y^\prime )_{\gamma (a)} = 0 \] for any $a
\leq t \leq c-\delta$. Similarly (as $Z_{\gamma (c+\delta )} = 0$
and $Z$ is horizontal) $\Omega (Z, \dot{\gamma})_{\gamma (t)} = 0$
for any $c-\delta \leq t \leq c + \delta$. Therefore $\Omega (X,
\dot{\gamma})_{\gamma (t)} = 0$ for every $t$. Theorem
\ref{t:conj3} is proved.
\par
It remains to prove Lemma \ref{l:conj6}. Let $\gamma (t) \in
M$, $|t| < \epsilon$, be a lengthy geodesic of $\nabla$. Let $X
\in {\mathcal H}_\gamma$ and $\{ Y_j : 1 \leq j \leq 4n+2 \}$ a
linear basis in $J_\gamma$. Then $X = c^j Y_j = c^j Y^H_j + c^j
\theta (Y_j ) T$ (where $Y^H_j \equiv Y_j - \theta (Y_j ) T$) for
some $c^j \in {\mathbb R}$. As $X_{\gamma (t)} \in H(M)_{\gamma
(t)}$ one has i) $c^j \theta (Y_j )_{\gamma (t)} = 0$ on one hand,
and ii) $c^j f_j^a (\gamma (t)) = f^a (\gamma (t))$, $1 \leq a
\leq 2n$, on the other, where $X = f^a X_a$, $Y_j^H = f_j^a X_a$
and $\{ X_a : 1 \leq a \leq 2n \}$ is a local frame of $H(M)$. One
may think of (i)-(ii) as a linear system in the unknowns $c^j$.
Let $r(t)$ be its rank. Then $\dim_{\mathbb R} {\mathcal H}_\gamma
= 4n+2 - r(t) \geq 2n+1$. To prove the remaining inequality in
Lemma \ref{l:conj6} it suffices to observe that ${\mathcal
H}_\gamma$ is contained in the space of all solutions to
$X^{\prime\prime} + R(X , \dot{\gamma}) \dot{\gamma} = 0$ obeying
$X_{\gamma (0)} \in H(M)_{\gamma (0)}$ and $X^\prime_{\gamma (0)}
\in H(M)_{\gamma (0)}$, which is $4n$-dimensional.
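For the reader's convenience we spell out the final count: a solution $X$ of $X^{\prime\prime} + R(X , \dot{\gamma}) \dot{\gamma} = 0$ is uniquely determined by the initial data $(X_{\gamma (0)} \, , \, X^\prime_{\gamma (0)})$, hence the space of solutions with horizontal initial data is isomorphic to
\[ H(M)_{\gamma (0)} \oplus H(M)_{\gamma (0)} \, , \;\;\; X \mapsto (X_{\gamma (0)} \, , \, X^\prime_{\gamma (0)}) \, , \]
a space of real dimension $2 \cdot 2n = 4n$.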
\section{The first variation of the length integral}
Let $M$ be a strictly pseudoconvex CR manifold and $y, z \in M$.
Let $\Gamma$ be the set of all piecewise differentiable curves
$\gamma : [a,b] \to M$ parametrized proportionally to arc length,
such that $\gamma (a) = y$ and $\gamma (b) = z$. As usual, for
each $\gamma \in \Gamma$ we let $T_\gamma (\Gamma )$ be the space
of all piecewise differentiable vector fields along $\gamma$ such
that $X_y = X_z = 0$. Given $X \in T_\gamma (\Gamma )$ let
$\gamma^s : [a,b] \to M$, $|s| < \epsilon$, be a family of curves
such that i) $\gamma^s \in \Gamma$, $|s| < \epsilon$, ii)
$\gamma^0 = \gamma$, iii) there is a partition $a = t_0 < t_1 <
\cdots < t_k = b$ such that the map $(t,s) \mapsto \gamma^s (t)$
is differentiable on each rectangle $[t_j , t_{j+1}] \times
(-\epsilon , \epsilon )$, $0 \leq j \leq k-1$, and iv) for each
fixed $t \in [a,b]$ the tangent vector to \[ \sigma_t : (-\epsilon
, \epsilon ) \to M, \;\;\; \sigma_t (s) = \gamma^s (t), \;\;\; |s|
< \epsilon , \] at the point $\gamma (t)$ is $X_{\gamma (t)}$. We
set as usual
\[ (d_\gamma L) X = \frac{d}{d s} \left\{ L(\gamma^s ) \right\}_{s=0} \, . \]
Here $L(\gamma^s )$ is the Riemannian length of $\gamma^s$ with
respect to the Webster metric $g_\theta$ (so that $\gamma^s$ need
not be lengthy to start with). One aim of this section is to
establish the following
\begin{theorem} Let $\gamma^s : [a,b] \to M$, $|s| < \epsilon$, be a $1$-parameter
family of curves such that $(t,s) \mapsto \gamma^s (t)$ is
differentiable on $[a,b] \times (-\epsilon , \epsilon )$ and each
$\gamma^s$ is parametrized proportionally to arc length. Let us
set $\gamma = \gamma^0$. Then
\begin{equation}
\frac{d}{d s} \{ L(\gamma^s ) \}_{s=0} = \frac{1}{r} \{ g_\theta
(X , \dot{\gamma})_{\gamma (b)} - g_\theta (X ,
\dot{\gamma})_{\gamma (a)} - \label{e:V1}
\end{equation}
\[ - \int_a^b [g_\theta (X, \nabla_{\dot{\gamma}} \dot{\gamma}) -
g_\theta (T_\nabla (X , \dot{\gamma}), \dot{\gamma})]_{\gamma (t)}
\; d t \} \] where $X_{\gamma (t)} = \dot{\sigma}_t (0)$, $a \leq
t \leq b$, and $r = |\dot{\gamma}(t)|$ is the common length of all
tangent vectors along $\gamma$. \label{t:V1}
\end{theorem}
This will be shortly seen to imply
\begin{theorem} Let $\gamma \in \Gamma$ and $X \in T_\gamma
(\Gamma )$. Let $a = c_0 < c_1 < \cdots < c_h < c_{h+1} = b$ be a
partition such that $\gamma$ is differentiable on each $[c_j ,
c_{j+1}]$, $0 \leq j \leq h$. Then
\begin{equation}
(d_\gamma L) X = \frac{1}{r} \{ \sum_{j=1}^h g_{\theta , \gamma
(c_j )} (X_{\gamma (c_j )} , \dot{\gamma}(c_j^{-}) -
\dot{\gamma}(c_j^{+})) - \label{e:V2}
\end{equation}
\[ - \int_a^b [g_\theta (X , \nabla_{\dot{\gamma}} \dot{\gamma}) -
g_\theta (T_\nabla (X , \dot{\gamma}), \dot{\gamma}) ]_{\gamma
(t)} \; d t \} \] where $\dot{\gamma}(c_j^{\pm}) = \lim_{t \to
c_j^{\pm}} \dot{\gamma}(t)$. \label{t:V2}
\end{theorem}
Consequently, we shall prove
\begin{corollary} A lengthy curve $\gamma \in \Gamma$ is a geodesic
of the Tanaka-Webster connection if and only if
\begin{equation}\label{e:V3} (d_\gamma L) X = \frac{1}{r} \int_a^b \theta
(X)_{\gamma (t)} A(\dot{\gamma} , \dot{\gamma})_{\gamma (t)} \; d
t \end{equation} for all $X \in T_\gamma (\Gamma )$. In
particular, if $(M , \theta )$ is a Sasakian manifold then lengthy
geodesics belonging to $\Gamma$ are the critical points of $L$ on
$\Gamma$. \label{c:V1}
\end{corollary}
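We recall, for the reader's convenience, why the Sasakian case is special: a Sasakian manifold has vanishing pseudohermitian torsion $\tau = 0$, hence $A = 0$ (as $A(X,Y) = g_\theta (\tau X , Y)$) and the right hand side of (\ref{e:V3}) vanishes identically, i.e.
\[ (d_\gamma L) X = \frac{1}{r} \int_a^b \theta (X)_{\gamma (t)} \, A(\dot{\gamma} , \dot{\gamma})_{\gamma (t)} \; d t = 0, \;\;\; X \in T_\gamma (\Gamma ), \]
which accounts for the last statement in Corollary \ref{c:V1}.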
The remainder of this section is devoted to the proofs of the
results above. We adopt the principal bundle approach in
\cite{kn:KoNo}, Vol. II, p. 80-83. The proof is a {\em verbatim}
transcription of the arguments there, except for the presence of
torsion terms.
\par
Let $\pi : O(M , g_\theta ) \to M$ be the ${\rm O}(2n+1)$-bundle
of $g_\theta$-orthonormal frames tangent to $M$. Let $Q = [a,b]
\times (-\epsilon , \epsilon )$. Let $f : Q \to O(M, g_\theta )$
be a parametrized surface in $O(M, g_\theta )$ such that i) $\pi
(f(t,s)) = \gamma^s (t)$, $(t,s) \in Q$, and ii) $f^0 : [a,b] \to
O(M, g_\theta )$, $f^0 (t) = f(t,0)$, $a \leq t \leq b$, is a
horizontal curve. Precisely, the Tanaka-Webster connection
$\nabla$ of $(M , \theta )$ induces an infinitesimal connection in
the principal bundle ${\rm GL}(2n+1, {\mathbb R}) \to L(M) \to M$
(of all linear frames tangent to $M$) descending (because of
$\nabla g_\theta = 0$) to a connection $H$ in ${\rm O}(2n+1) \to
O(M , g_\theta ) \to M$. The requirement is that $(d f^0 /dt) (t)
\in H_{f^0 (t)}$, $a \leq t \leq b$.
\par
Let ${\mathbb S} , {\mathbb T} \in {\mathcal X}(Q)$ be given by
${\mathbb S} = \partial /\partial s$ and ${\mathbb T} = \partial
/\partial t$. Let \[ \mu \in \Gamma^\infty (T^* (O(M, g_\theta ))
\otimes {\mathbb R}^{2n+1}), \;\;\; \Theta = D \mu \, , \] \[
\omega \in \Gamma^\infty (T^* (O(M, g_\theta )) \otimes {\bf
o}(2n+1)), \;\;\; \Omega = D \omega \, , \] be respectively the
canonical $1$-form, the torsion $2$-form, the connection $1$-form,
and the curvature $2$-form of $H$ on $O(M, g_\theta )$. We denote
by
\[ \mu^* = f^* \mu , \;\;\; \Theta^* = f^* \Theta , \;\;\;
\omega^* = f^* \omega , \;\;\; \Omega^* = f^* \Omega , \] the
pullback of these forms to the rectangle $Q$. We claim that
\begin{equation}
[{\mathbb S} , {\mathbb T}] = 0, \label{e:A}
\end{equation}
\begin{equation}
\omega^* ({\mathbb T})_{(t,0)} = 0, \;\;\; a \leq t \leq b.
\label{e:B}
\end{equation}
Indeed (\ref{e:A}) is obvious. To check (\ref{e:B}) one needs to
be a bit pedantic and introduce the injections
\[ \alpha^s : [a,b] \to Q, \;\;\; \beta_t : (-\epsilon , \epsilon
) \to Q, \]
\[ \alpha^s (t) = \beta_t (s) = (t,s), \;\;\; a \leq t \leq b,
\;\;\; |s| < \epsilon , \] so that $f^0 = f \circ \alpha^0$. Then
\[ H_{f^0 (t)} \ni
\left. \frac{d f^0}{d t}(t) = (d_{(t,0)} f)(d_t \alpha^0 )
\frac{d}{d t} \right|_t = (d_{(t,0)} f) {\mathbb T}_{(t,0)} \, ,
\]
\[ \omega^* ({\mathbb T})_{(t,0)} = \omega_{f(t,0)} ((d_{(t,0)} f)
{\mathbb T}_{(t,0)}) = 0. \] Next, we claim that
\begin{equation}
{\mathbb S}(\mu^* ({\mathbb T})) = {\mathbb T}(\mu^* ({\mathbb
S})) + \omega^* ({\mathbb T}) \cdot \mu^* ({\mathbb S}) - \omega^*
({\mathbb S}) \cdot \mu^* ({\mathbb T}) + 2 \, \Theta^* ({\mathbb
S},{\mathbb T}), \label{e:C}
\end{equation}
\begin{equation}
{\mathbb S}(\omega^* ({\mathbb T})) = {\mathbb T}(\omega^*
({\mathbb S})) + \omega^* ({\mathbb T}) \omega^* ({\mathbb S}) -
\omega^* ({\mathbb S}) \omega^* ({\mathbb T}) + 2 \; \Omega^*
({\mathbb S}, {\mathbb T}). \label{e:D}
\end{equation}
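To fix conventions (cf. \cite{kn:KoNo}, Vol. I): for a $1$-form $\eta$ on $Q$ (with values in ${\mathbb R}^{2n+1}$ or in ${\bf o}(2n+1)$)
\[ 2 \, (d \eta )({\mathbb S} , {\mathbb T}) = {\mathbb S}(\eta ({\mathbb T})) - {\mathbb T}(\eta ({\mathbb S})) - \eta ([{\mathbb S} , {\mathbb T}]) \, , \]
while the first and second structure equations read
\[ d \mu = - \omega \wedge \mu + \Theta \, , \;\;\; d \omega = - \omega \wedge \omega + \Omega \, . \]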
The identities (\ref{e:C})-(\ref{e:D}) follow from Prop. 3.11 in
\cite{kn:KoNo}, Vol. I, p. 36, our identity (\ref{e:A}), and the
first and second structure equations for a linear connection (cf.
e.g. Theor. 2.4 in \cite{kn:KoNo}, Vol. I, p. 120). Let us
consider the $C^\infty$ function $F : Q \to [0, + \infty )$ given
by
\[ F(t,s) = \langle \mu^* ({\mathbb T})_{(t,s)} \, , \,
\mu^* ({\mathbb T})_{(t,s)} \rangle^{1/2} \, , \;\;\; (t,s) \in Q.
\] Here $\langle \xi , \eta \rangle$ is the Euclidean scalar
product of $\xi , \eta \in {\mathbb R}^{2n+1}$. Note that
\[ \mu^* ({\mathbb T})_{(t,s)} = \mu_{f(t,s)}((d_{(t,s)} f)
{\mathbb T}_{(t,s)}) = \] \[ = f(t,s)^{-1} (d_{f(t,s)} \pi
)(d_{(t,s)} f) {\mathbb T}_{(t,s)} = \left. f(t,s)^{-1} d_t (\pi
\circ f \circ \alpha^s ) \frac{d}{d t} \right|_t \] i.e.
\begin{equation} \mu^* ({\mathbb T})_{(t,s)} = f(t,s)^{-1} \dot{\gamma}^s
(t). \label{e:I}
\end{equation}
Yet $f(t,s) \in O(M , g_\theta )$, i.e. $f(t,s)$ is a linear
isometry of $({\mathbb R}^{2n+1} , \langle \; , \; \rangle )$ onto
$(T_{\gamma^s (t)} (M), g_{\theta , \gamma^s (t)})$, so that
\[ F(t,s) = g_{\theta , \gamma^s (t)} (\dot{\gamma}^s (t) ,
\dot{\gamma}^s (t))^{1/2} \] and then
\[ L(\gamma^s ) = \int_a^b F(t, s) \; d t. \]
As $\gamma^s$ is parametrized proportionally to arc length
$F(t,s)$ does not depend on $t$. In particular
\begin{equation} \label{e:E} F(t,0) = r.
\end{equation}
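In particular, by (\ref{e:E}), the length of the central curve is
\[ L(\gamma ) = \int_a^b F(t,0) \; d t = r \, (b-a) \, . \]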
We claim that
\begin{equation}
{\mathbb S}(F) = \frac{1}{r} \{ \langle {\mathbb T}(\mu^*
({\mathbb S})) , \mu^* ({\mathbb T}) \rangle + 2 \langle \Theta^*
({\mathbb S},{\mathbb T}), \mu^* ({\mathbb T}) \rangle \}
\label{e:F}
\end{equation}
at all points $(t,0) \in Q$. Indeed, by (\ref{e:C})
\[ 2\, F\, {\mathbb S}(F) = {\mathbb S}(F^2 ) =
{\mathbb S}(\langle \mu^* ({\mathbb T}) , \mu^* ({\mathbb
T})\rangle ) = 2 \, \langle {\mathbb S}(\mu^* ({\mathbb T})) ,
\mu^* ({\mathbb T}) \rangle = \]
\[ = 2 \langle {\mathbb T}(\mu^* ({\mathbb S})) , \mu^*
({\mathbb T})\rangle + 2 \langle \omega^* ({\mathbb T}) \cdot
\mu^* ({\mathbb S}) , \mu^* ({\mathbb T}) \rangle - \] \[ - 2
\langle \omega^* ({\mathbb S}) \cdot \mu^* ({\mathbb T}) , \mu^*
({\mathbb T}) \rangle + 4 \langle \Theta^* ({\mathbb S} ,
{\mathbb T}), \mu^* ({\mathbb T}) \rangle . \] On the other hand
$\omega$ is ${\bf o}(2n+1)$-valued (where ${\bf o}(2n+1)$ is the
Lie algebra of ${\rm O}(2n+1)$), i.e. $\omega^* ({\mathbb
S})_{(t,s)} : {\mathbb R}^{2n+1} \to {\mathbb R}^{2n+1}$ is skew
symmetric, hence the last-but-one term vanishes. Therefore
(\ref{e:F}) follows from (\ref{e:B}) and (\ref{e:E}). We may
now compute the first variation of the length integral
\[ \frac{d}{d s} \{ L(\gamma^s ) \}_{s=0} = \int_a^b {\mathbb
S}(F)_{(t,0)} \; d t = \;\;\; ({\rm by} \; {\rm (\ref{e:F})}) \]
\[ = \frac{1}{r} \int_a^b \{ \langle {\mathbb T}(\mu^*
({\mathbb S})) , \mu^* ({\mathbb T}) \rangle_{(t,0)} + 2 \langle
\Theta^* ({\mathbb S},{\mathbb T}), \mu^* ({\mathbb T})
\rangle_{(t,0)} \} \; d t. \] On the other hand
\[ \mu^* ({\mathbb S})_{(t,0)} = \mu_{f^0 (t)} ((d_{(t,0)}
f) {\mathbb S}_{(t,0)}) = \] \[ = \left. f(t,0)^{-1} d_0 (\pi
\circ f \circ \beta_t ) \frac{d}{d s} \right|_0 = f(t,0)^{-1}
\frac{d \sigma_t}{d s}(0) \] i.e.
\begin{equation} \label{e:H} \mu^* ({\mathbb S})_{(t,0)} = f^0
(t)^{-1} X_{\gamma (t)} \, .
\end{equation}
Note that given $u \in C^\infty (Q)$ one has ${\mathbb
T}(u)_{(t,0)} = (u \circ \alpha^0 )^\prime (t)$. Then
\[ {\mathbb T}(\mu^* ({\mathbb T}))_{(t,0)} = \lim_{h \to 0}
\frac{1}{h} \{ \mu^* ({\mathbb T})_{(t+h,0)} - \mu^* ({\mathbb
T})_{(t,0)} \} = \;\;\; ({\rm by} \; {\rm (\ref{e:I})})
\]
\[ = \lim_{h \to 0} \frac{1}{h} \{ f^0 (t+h)^{-1}
\dot{\gamma}(t+h) - f^0 (t)^{-1} \dot{\gamma}(t) \} . \] Yet, as
$f^0$ is a horizontal curve
\[ f^0 (t+h)^{-1} \dot{\gamma}(t+h) = f^0 (t)^{-1} \tau^{t+h}_t
\dot{\gamma}(t+h), \] where $\tau^{t+h}_t : T_{\gamma (t+h)} (M)
\to T_{\gamma (t)} (M)$ is the parallel displacement operator
along $\gamma$ from $\gamma (t+ h)$ to $\gamma (t)$. Hence
\[ {\mathbb T}(\mu^* ({\mathbb T}))_{(t,0)}
= f^0 (t)^{-1} \left( \lim_{h \to 0} \frac{1}{h} \{ \tau^{t+h}_t
\dot{\gamma}(t+h) - \dot{\gamma}(t)\} \right) \] i.e.
\begin{equation} \label{e:J}
{\mathbb T}(\mu^* ({\mathbb T}))_{(t,0)} = f^0 (t)^{-1}
(\nabla_{\dot{\gamma}} \dot{\gamma})_{\gamma (t)} \, .
\end{equation}
To compute the torsion term we recall (cf. \cite{kn:KoNo}, Vol. I,
p. 132)
\[ T_{\nabla , x} (X,Y) = 2 v (\Theta_v (X^* , Y^* )), \]
for any $X,Y \in T_x (M)$, where $v$ is a linear frame at $x$ and
$X^* , Y^* \in T_v (L(M))$ project respectively on $X, Y$. Note
that $(d_{(t,0)} f) {\mathbb S}_{(t,0)}$ and $(d_{(t,0)}
f){\mathbb T}_{(t,0)}$ project on $X_{\gamma (t)}$ and
$\dot{\gamma}(t)$, respectively. Then
\begin{equation}
2 \Theta^* ({\mathbb S}, {\mathbb T})_{(t,0)} = f^0 (t)^{-1}
T_\nabla (X , \dot{\gamma})_{\gamma (t)} \, . \label{e:P}
\end{equation}
Finally (by (\ref{e:I}) and (\ref{e:H})-(\ref{e:P}))
\[ \frac{d}{d s} \{ L(\gamma^s ) \}_{s=0} = \frac{1}{r} \int_a^b
\{ {\mathbb T}(\langle \mu^* ({\mathbb S}), \mu^* ({\mathbb T})
\rangle ) - \] \[ - \langle \mu^* ({\mathbb S}) , {\mathbb
T}(\mu^* ({\mathbb T})) \rangle + 2 \langle \Theta^* ({\mathbb S},
{\mathbb T}), \mu^* ({\mathbb T}) \rangle \}_{(t,0)} \; d t = \]
\[ = \frac{1}{r} \{ \langle \mu^* ({\mathbb S}), \mu^*
({\mathbb T}) \rangle_{(b,0)} - \langle \mu^* ({\mathbb S}), \mu^*
({\mathbb T}) \rangle_{(a,0)} \} - \]
\[ - \frac{1}{r} \int_a^b \{ g_\theta (X , \nabla_{\dot{\gamma}}
\dot{\gamma}) - g_\theta (T_\nabla (X , \dot{\gamma}),
\dot{\gamma}) \}_{\gamma (t)} \; d t \] and (\ref{e:V1}) is
proved.
\par
{\em Proof of Theorem} \ref{t:V2}. Let $c_j = t_0^{(j)} <
t_1^{(j)} < \cdots < t^{(j)}_{k_j} = c_{j+1}$ be a partition of
$[c_j , c_{j+1}]$ such that $X$ is differentiable along the
restriction of $\gamma$ at each $[t^{(j)}_i , t^{(j)}_{i+1}]$, $0
\leq i \leq k_j - 1$. Moreover, let $\{ \gamma^s \}_{|s| <
\epsilon}$ be a family of curves $\gamma^s \in \Gamma$ such that
$\gamma^0 = \gamma$, the map $(t,s) \mapsto \gamma^s (t)$ is
differentiable on $[c_j , c_{j+1}] \times (-\epsilon , \epsilon )$
for every $0 \leq j \leq h$, and $X_{\gamma (t)} = (d \sigma_t /d
s)(0)$ for every $t$ (with $\sigma_t (s) = \gamma^s (t)$). Let
$\gamma^s_j$ (respectively $\gamma^s_{ji}$) be the restriction of
$\gamma^s$ (respectively of $\gamma^s_j$) to $[c_j , c_{j+1}]$
(respectively to $[t^{(j)}_i , t^{(j)}_{i+1}]$). We may apply
Theorem \ref{t:V1} (to the interval $[t^{(j)}_i , t^{(j)}_{i+1}]$
rather than $[a,b]$) to obtain
\[ \frac{d}{ds} \{ L(\gamma^s_{ji} )\}_{s=0} = \frac{1}{r} \{
g_\theta (X , \dot{\gamma})_{\gamma (t^{(j)}_{i+1})} - g_\theta (X
, \dot{\gamma})_{\gamma (t^{(j)}_i )} -
\int_{t^{(j)}_i}^{t^{(j)}_{i+1}} F(X , \dot{\gamma}) d t \} \]
where $F(X , \dot{\gamma})$ is short for $g_\theta (X ,
\nabla_{\dot{\gamma}} \dot{\gamma})_{\gamma (t)} - g_\theta
(T_\nabla (X , \dot{\gamma}) , \dot{\gamma})_{\gamma (t)}$. Let us
take the sum over $0 \leq i \leq k_j -1$. The lengths
$L(\gamma^s_{ji})$ add up to $L(\gamma^s_j )$. Taking into account
that at the points $\gamma (c_j )$ only the one-sided limits of
$\dot{\gamma}$ are actually defined, we obtain
\[ \frac{d}{d s} \{ L(\gamma^s_j )\}_{s=0} = \frac{1}{r} \{
g_{\theta , \gamma (c_{j+1})}(X_{\gamma (c_{j+1})} ,
\dot{\gamma}(c_{j+1}^{-})) - \] \[ - g_{\theta , \gamma
(c_{j})}(X_{\gamma (c_{j})} , \dot{\gamma}(c_{j}^{+})) -
\int_{c_j}^{c_{j+1}} F(X, \dot{\gamma}) d t \} \] and taking the
sum over $0 \leq j \leq h$ leads to (\ref{e:V2}) (as $X_{\gamma
(c_0 )} = 0$ and $X_{\gamma (c_{h+1})} = 0$). Q.e.d.
\par
{\em Proof of Corollary} \ref{c:V1}. Let $\gamma (t) \in M$ be a
lengthy curve such that $\gamma \in \Gamma$. If $\gamma$ is a
geodesic of $\nabla$ then $\nabla_{\dot{\gamma}} \dot{\gamma} = 0$
implies (by Theorem \ref{t:V1}) \[ (d_\gamma L )X = \frac{1}{r}
\int_a^b g_\theta (T_\nabla (X , \dot{\gamma}),
\dot{\gamma})_{\gamma (t)} d t \] for any $X \in T_\gamma (\Gamma
)$ and then
\[ T_\nabla (X, \dot{\gamma}) = - 2 \Omega (X_H , \dot{\gamma}) T
+ \theta (X) \tau (\dot{\gamma}), \;\;\; g_\theta (T ,
\dot{\gamma}) = 0, \] yield (\ref{e:V3}). Vice versa, let $\gamma$
\in \Gamma$ be a lengthy curve such that (\ref{e:V3}) holds. There
is a partition $a = c_0 < c_1 < \cdots < c_{h+1} = b$ such that
$\gamma$ is differentiable in $[c_j , c_{j+1}]$, $0 \leq j \leq
h$. Let $f$ be a continuous function defined along $\gamma$ such
that $f(\gamma (c_j )) = 0$ for $0 \leq j \leq h+1$ and $f(\gamma
(t)) > 0$ elsewhere. We may apply (\ref{e:V2}) in Theorem
\ref{t:V2} to the vector field $X = f \; \nabla_{\dot{\gamma}}
\dot{\gamma}$ to obtain
\begin{equation}
(d_\gamma L) X = - \frac{1}{r} \int_a^b f \, \{
|\nabla_{\dot{\gamma}} \dot{\gamma}|^2 - g_\theta (T_\nabla
(\nabla_{\dot{\gamma}} \dot{\gamma} , \dot{\gamma}) ,
\dot{\gamma}) \} d t. \label{e:broken}
\end{equation}
As $\gamma$ is lengthy and $H(M)$ is parallel with respect to
$\nabla$ one has $\nabla_{\dot{\gamma}} \dot{\gamma} \in H(M)$
hence (by (\ref{e:V3})) $(d_\gamma L) (f \nabla_{\dot{\gamma}}
\dot{\gamma} ) = 0$ and
\[ g_\theta (T_\nabla (\nabla_{\dot{\gamma}} \dot{\gamma} ,
\dot{\gamma}), \dot{\gamma}) = - 2 \Omega (\nabla_{\dot{\gamma}}
\dot{\gamma} , \dot{\gamma}) g_\theta (T , \dot{\gamma}) = 0 \] so
that (by (\ref{e:broken})) it must be $\nabla_{\dot{\gamma}}
\dot{\gamma} = 0$ whenever $\nabla_{\dot{\gamma}} \dot{\gamma}$
makes sense, i.e. $\gamma$ is a broken geodesic of $\nabla$. It
remains to prove differentiability of $\gamma$ at the points
$c_j$, $1 \leq j \leq h$. Let $j \in \{ 1 , \cdots , h \}$ be a
fixed index and let us consider a vector field $X_j \in T_\gamma
(\Gamma )$ such that $X_{j , \gamma (c_j )} = \dot{\gamma}(c_j^- )
- \dot{\gamma}(c_j^+ )$ and $X_{j, \gamma (c_k )} = 0$ for any $k
\in \{ 1 , \cdots , h \} \setminus \{ j \}$. Then (by
(\ref{e:V2})-(\ref{e:V3})) one has $|\dot{\gamma}(c_j^- ) -
\dot{\gamma}(c_j^+ )|^2 = 0$. Q.e.d. \vskip 0.1in {\bf Remark}.
The following alternative proof of Theorem \ref{t:V1} is also
available. Since $(M , g_\theta )$ is a Riemannian manifold and
$L(\gamma^s )$ is the Rieman\-nian length of $\gamma^s$ we have
(cf. Theorem 5.1 in \cite{kn:KoNo}, Vol. II, p. 80)
\begin{equation}
\frac{d}{ds} \{ L(\gamma^s )\}_{s=0} = \frac{1}{r} \{ g_\theta (X
, \dot{\gamma})_{\gamma (b)} - g_\theta (X , \dot{\gamma})_{\gamma
(a)} \} - \label{e:fvf}
\end{equation}
\[ - \frac{1}{r} \; \int_a^b g_\theta (X , D_{\dot{\gamma}}
\dot{\gamma})_{\gamma (t)} \; d t \] where $D$ is the Levi-Civita
connection of $(M , g_\theta )$. On the other hand (cf. e.g.
\cite{kn:DrTo}, section 1.3) $D$ is related to the Tanaka-Webster
connection of $(M , \theta )$ by $D = \nabla + (\Omega - A)
\otimes T + \tau \otimes \theta + 2 \theta \odot J$ hence
\[ g_\theta (X , D_{\dot{\gamma}} \dot{\gamma}) = g_\theta (X ,
\nabla_{\dot{\gamma}} \dot{\gamma}) - \theta (X) A(\dot{\gamma} ,
\dot{\gamma}) + \] \[ + \theta (\dot{\gamma}) A(X , \dot{\gamma})
+ 2 \theta (\dot{\gamma}) \Omega (X , \dot{\gamma}) = g_\theta (X
, \nabla_{\dot{\gamma}} \dot{\gamma}) - g_\theta (T_\nabla (X ,
\dot{\gamma}), \dot{\gamma}) \] so that (\ref{e:fvf}) yields
(\ref{e:V1}). Q.e.d.
\section{The second variation of the length integral}
We introduce the Hessian $I$ of $L$ at a geodesic $\gamma \in
\Gamma$ as follows. Given $X \in T_\gamma (\Gamma )$ let us
consider a $1$-parameter family of curves $\{ \gamma^s \}_{|s| <
\epsilon}$ as in the definition of $(d_\gamma L) X$. Let $I(X,X)$
be given by
\[ I(X,X) = \frac{d^2}{d s^2} \left\{ L(\gamma^s ) \right\}_{s=0}
\] and define $I(X,Y)$ by polarization. By analogy with Riemannian geometry
(cf. e.g. \cite{kn:KoNo}, Vol. II, p. 81) $I(X,Y)$ is referred to
as the {\em index form}. The aim of this section is to establish
\begin{theorem} Let $(M , \theta )$ be a Sasakian manifold.
If $\gamma \in \Gamma$ is a lengthy geodesic of the Tanaka-Webster
connection $\nabla$ of $(M , \theta )$ and $X,Y \in T_\gamma
(\Gamma )$ then \begin{equation}\label{e:index} I(X,Y) =
\frac{1}{r} \int_a^b \{ g_\theta (\nabla_{\dot{\gamma}} X^\bot ,
\nabla_{\dot{\gamma}} Y^\bot ) - g_\theta (R(X^\bot ,
\dot{\gamma}) \dot{\gamma} , Y^\bot ) -
\end{equation}
\[ - 2 \Omega (X^\bot , \dot{\gamma}) \theta (\nabla_{\dot{\gamma}}
Y^\bot ) - 2 [\theta (\nabla_{\dot{\gamma}} X^\bot ) - 2
\Omega(X^\bot , \dot{\gamma})] \Omega (Y^\bot , \dot{\gamma})\} d
t
\]
where $X^\bot = X - (1/r^2 )\, g_\theta (X , \dot{\gamma}) \,
\dot{\gamma}$. \label{t:9}
\end{theorem} We shall need the following reformulation of Theorem
\ref{t:9}
\begin{theorem} Let $(M , \theta )$, $\gamma$ and $X,Y$
be as in Theorem {\rm \ref{t:9}}. Then
\begin{equation}
I(X,Y) = - \frac{1}{r} \int_a^b \{ g_\theta ({\mathcal J}_\gamma
X^\bot , Y^\bot ) + 2 [\theta (\nabla_{\dot{\gamma}} X^\bot ) -
\label{e:index2}
\end{equation}
\[ - 2 \Omega (X^\bot , \dot{\gamma})] \Omega (Y^\bot ,
\dot{\gamma})\} d t + \]
\[ + \frac{1}{r} \sum_{j=1}^h g_{\theta , \gamma (t_j )}
((\nabla_{\dot{\gamma}} X^\bot )^-_{\gamma (t_j )} -
(\nabla_{\dot{\gamma}} X^\bot )^+_{\gamma (t_j )} \, , \,
Y^\bot_{\gamma (t_j )}) \] where ${\mathcal J}_\gamma X \equiv
\nabla^2_{\dot{\gamma}} X - 2 \Omega (X^\prime , \dot{\gamma}) T +
R(X, \dot{\gamma}) \dot{\gamma}$ is the Jacobi operator and $a =
t_0 < t_1 < \cdots < t_h < t_{h+1} = b$ is a partition of $[a,b]$
such that $X$ is differentiable in each interval $[t_j ,
t_{j+1}]$, $0 \leq j \leq h$, and $(\nabla_{\dot{\gamma}} X^\bot
)_{\gamma (t_j )}^{\pm} = \lim_{t \to t_j^{\pm}}
(\nabla_{\dot{\gamma}} X^\bot )_{\gamma (t)}$. \label{t:10}
\end{theorem}
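Let us sketch how (\ref{e:index2}) is related to (\ref{e:index}): integrating by parts on each interval $[t_j , t_{j+1}]$ and using $Y^\bot_{\gamma (a)} = Y^\bot_{\gamma (b)} = 0$ one has
\[ \int_a^b g_\theta (\nabla_{\dot{\gamma}} X^\bot , \nabla_{\dot{\gamma}} Y^\bot ) \, d t = - \int_a^b g_\theta (\nabla^2_{\dot{\gamma}} X^\bot , Y^\bot ) \, d t \; + \]
\[ + \; \sum_{j=1}^h g_{\theta , \gamma (t_j )} ((\nabla_{\dot{\gamma}} X^\bot )^-_{\gamma (t_j )} - (\nabla_{\dot{\gamma}} X^\bot )^+_{\gamma (t_j )} \, , \, Y^\bot_{\gamma (t_j )}) \, , \]
while the term $- 2 \Omega (X^\bot , \dot{\gamma}) \theta (\nabla_{\dot{\gamma}} Y^\bot )$ is handled by a similar integration by parts (using $\nabla \theta = 0$ and $\nabla \Omega = 0$).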
This will be seen to imply
\begin{corollary} Let $(M , \theta )$ be a Sasakian manifold, $\gamma \in \Gamma$
a lengthy geodesic of the Tanaka-Webster connection of $(M ,
\theta )$, and $X \in T_\gamma (\Gamma )$. Then $X^\bot$ is a
Jacobi field if and only if there is $\alpha (X) \in {\mathbb R}$
such that
\begin{equation}\label{e:lJ2} \frac{d}{d t} \{ \theta (X^\bot )
\circ \gamma \} (t) - 2 \Omega (X^\bot , \dot{\gamma})_{\gamma
(t)} = \alpha (X)
\end{equation} for any $a \leq t \leq b$, and
\begin{equation}
\label{e:indlJ2}
I(X,Y) = - (2/r) \alpha (X) \int_a^b \Omega (Y^\bot ,
\dot{\gamma})_{\gamma (t)} \, d t \, ,
\end{equation}
for any $Y \in T_\gamma (\Gamma )$.
\label{c:4}
\end{corollary}
{\em Proof of Theorem} \ref{t:9}. We adopt the notations and
conventions in the proof of Theorem \ref{t:V1}. As a byproduct of
the proof of (\ref{e:F}) we have the identity
\begin{equation}
\frac{1}{2} \; {\mathbb S}(F^2 ) = \langle {\mathbb T}(\mu^*
({\mathbb S})), \mu^* ({\mathbb T}) \rangle +
\label{e:K}\end{equation}
\[ + \langle \omega^* ({\mathbb T}) \cdot \mu^* ({\mathbb S}) ,
\mu^* ({\mathbb T}) \rangle + 2 \langle \Theta^* ({\mathbb S},
{\mathbb T}), \mu^* ({\mathbb T})\rangle . \] Applying ${\mathbb
S}$ we get
\[ \frac{1}{2} \; {\mathbb S}^2 (F^2 ) = \langle {\mathbb S} {\mathbb
T}(\mu^* ({\mathbb S})), \mu^* ({\mathbb T}) \rangle + \langle
{\mathbb T}(\mu^* ({\mathbb S})), {\mathbb S}(\mu^* ({\mathbb T}))
\rangle + \]
\[ + \langle {\mathbb S}(\omega^* ({\mathbb T})) \cdot \mu^*
({\mathbb S}) , \mu^* ({\mathbb T})\rangle + \langle \omega^*
({\mathbb T}) \cdot {\mathbb S} (\mu^* ({\mathbb S})), \mu^*
({\mathbb T})\rangle + \] \[ + \langle \omega^* ({\mathbb T})
\cdot \mu^* ({\mathbb S}) , {\mathbb S}(\mu^* ({\mathbb T}))
\rangle + \]
\[ + 2 \langle {\mathbb S}(\Theta^* ({\mathbb S},
{\mathbb T})) , \mu^* ({\mathbb T})\rangle + 2 \langle \Theta^*
({\mathbb S}, {\mathbb T}), {\mathbb S}(\mu^* ({\mathbb T}))
\rangle . \] When calculated at points of the form $(t,0) \in Q$
the $4^{\rm th}$ and $5^{\rm th}$ terms vanish (by (\ref{e:B})).
We proceed by calculating the remaining terms (at $(t,0)$). By
(\ref{e:A})
\[ 1^{\rm st} \; {\rm term} = \langle {\mathbb S} {\mathbb
T}(\mu^* ({\mathbb S})), \mu^* ({\mathbb T}) \rangle = \langle
{\mathbb T} {\mathbb S}(\mu^* ({\mathbb S})), \mu^* ({\mathbb T})
\rangle = \]
\[ = {\mathbb T} \left( \langle {\mathbb S}(\mu^* ({\mathbb S})), \mu^*
({\mathbb T}) \rangle \right) - \langle {\mathbb S}(\mu^*
({\mathbb S})), {\mathbb T}(\mu^* ({\mathbb T})) \rangle . \] Yet
$\gamma \in \Gamma$ is a geodesic, hence (by (\ref{e:J})) ${\mathbb
T}(\mu^* ({\mathbb T}))_{(t,0)} = 0$. Therefore
\[ 1^{\rm st} \; {\rm term} =
{\mathbb T} \left( \langle {\mathbb S}(\mu^* ({\mathbb S})), \mu^*
({\mathbb T}) \rangle \right)_{(t,0)} . \] Next (by (\ref{e:C}))
\[ 2^{\rm nd} \; {\rm term} =
\langle {\mathbb T}(\mu^* ({\mathbb S})), {\mathbb S}(\mu^*
({\mathbb T})) \rangle = \langle {\mathbb T}(\mu^* ({\mathbb S})),
{\mathbb T}(\mu^* ({\mathbb S})) \rangle + \]
\[ + \langle {\mathbb T}(\mu^* ({\mathbb S})), \omega^* ({\mathbb T})\cdot
\mu^* ({\mathbb S}) \rangle - \langle {\mathbb T}(\mu^* ({\mathbb
S})), \omega^* ({\mathbb S})\cdot \mu^* ({\mathbb T}) \rangle +
\]
\[ + 2 \langle {\mathbb T}(\mu^* ({\mathbb S})), \Theta^* ({\mathbb
S}, {\mathbb T}) \rangle . \] Again terms are evaluated at $(t,0)$
hence $\omega^* ({\mathbb T}) = 0$ (by (\ref{e:B})). On the other
hand $\omega^* ({\mathbb S})$ is ${\bf o}(2n+1)$-valued hence
\[ 2^{\rm nd} \; {\rm term} = \langle {\mathbb T}(\mu^* ({\mathbb
S})), {\mathbb T}(\mu^* ({\mathbb S})) \rangle + \langle \omega^*
({\mathbb S})\cdot {\mathbb T}(\mu^* ({\mathbb S})), \mu^*
({\mathbb T}) \rangle + \]
\[ + 2 \langle {\mathbb T}(\mu^* ({\mathbb S})), \Theta^* ({\mathbb
S}, {\mathbb T}) \rangle \] at each $(t, 0) \in Q$. Next (by
(\ref{e:D}))
\[ 3^{\rm rd} \; {\rm term} = \langle {\mathbb S}(\omega^* ({\mathbb T})) \cdot \mu^*
({\mathbb S}) , \mu^* ({\mathbb T})\rangle = \langle {\mathbb
T}(\omega^* ({\mathbb S})) \cdot \mu^* ({\mathbb S}) , \mu^*
({\mathbb T})\rangle + \]
\[ + \langle \omega^* ({\mathbb T}) \omega^* ({\mathbb S}) \cdot \mu^*
({\mathbb S}) , \mu^* ({\mathbb T})\rangle - \langle \omega^*
({\mathbb T}) \omega^* ({\mathbb S}) \cdot \mu^* ({\mathbb S}) ,
\mu^* ({\mathbb T})\rangle + \] \[ + 2 \, \langle \Omega^*
({\mathbb S} , {\mathbb T}) \cdot \mu^* ({\mathbb S}) , \mu^*
({\mathbb T}) \rangle \] or (by (\ref{e:B}))
\[ 3^{\rm rd} \; {\rm term} = \langle {\mathbb T}(\omega^* ({\mathbb S})) \cdot \mu^*
({\mathbb S}) , \mu^* ({\mathbb T})\rangle + 2 \, \langle \Omega^*
({\mathbb S} , {\mathbb T}) \cdot \mu^* ({\mathbb S}) , \mu^*
({\mathbb T}) \rangle
\]
at each $(t,0) \in Q$. Finally (by (\ref{e:C}))
\[ 7^{\rm th} \; {\rm term} = 2 \langle \Theta^*
({\mathbb S}, {\mathbb T}), {\mathbb S}(\mu^* ({\mathbb T}))
\rangle = 2 \langle \Theta^* ({\mathbb S}, {\mathbb T}), {\mathbb
T}(\mu^* ({\mathbb S})) \rangle + \]
\[ + 2 \langle \Theta^*
({\mathbb S}, {\mathbb T}), \omega^* ({\mathbb T}) \cdot \mu^*
({\mathbb S}) \rangle - 2 \langle \Theta^* ({\mathbb S}, {\mathbb
T}), \omega^* ({\mathbb S}) \cdot \mu^* ({\mathbb T}) \rangle + 4
|\Theta^* ({\mathbb S} , {\mathbb T})|^2 \] or (by (\ref{e:B}) and
the fact that $\omega^* ({\mathbb S})$ is skew)
\[ 7^{\rm th} \; {\rm term} = 2 \langle \Theta^* ({\mathbb S}, {\mathbb T}), {\mathbb
T}(\mu^* ({\mathbb S})) \rangle + \]
\[ + 2 \langle \omega^*
({\mathbb S}) \cdot \Theta^* ({\mathbb S}, {\mathbb T}), \mu^*
({\mathbb T}) \rangle + 4 |\Theta^* ({\mathbb S} , {\mathbb T})|^2
\]
at each $(t,0) \in Q$. Summing up the various expressions and
noting that (again by (\ref{e:J}))
\[ {\mathbb T}\left( \langle {\mathbb S}(\mu^* ({\mathbb S})) ,
\mu^* ({\mathbb T})\rangle \right) + \langle \omega^* ({\mathbb
S}) \cdot {\mathbb T}(\mu^* ({\mathbb S})) , \mu^* ({\mathbb T})
\rangle + \] \[ + \langle {\mathbb T}(\omega^* ({\mathbb S}))
\cdot \mu^* ({\mathbb S}) , \mu^* ({\mathbb T}) \rangle = \]
\[ = {\mathbb T} \left( \langle {\mathbb S}(\mu^* ({\mathbb
S})) + \omega^* ({\mathbb S}) \cdot \mu^* ({\mathbb S}) , \mu^*
({\mathbb T}) \rangle \right) \] we obtain
\begin{equation}
\label{e:M} \frac{1}{2} \; {\mathbb S}^2 (F^2 ) = |{\mathbb
T}(\mu^* ({\mathbb S}))|^2 + 2 \, \langle \Omega^* ({\mathbb S} ,
{\mathbb T}) \cdot \mu^* ({\mathbb S}) , \mu^* ({\mathbb T})
\rangle +
\end{equation}
\[ + {\mathbb T} \left( \langle {\mathbb S}(\mu^* ({\mathbb
S})) + \omega^* ({\mathbb S}) \cdot \mu^* ({\mathbb S}) , \mu^*
({\mathbb T}) \rangle \right) + \] \[ + 4 \langle \Theta^*
({\mathbb S}, {\mathbb T}), {\mathbb T}(\mu^* ({\mathbb S}))
\rangle + 2 \langle {\mathbb S}(\Theta^* ({\mathbb S}, {\mathbb
T})) , \mu^* ({\mathbb T})\rangle + \]
\[ + 2 \langle \omega^*
({\mathbb S}) \cdot \Theta^* ({\mathbb S}, {\mathbb T}), \mu^*
({\mathbb T}) \rangle + 4 |\Theta^* ({\mathbb S} , {\mathbb T})|^2
\]
at each $(t,0) \in Q$. Since $F {\mathbb S}^2 (F) = \frac{1}{2}
{\mathbb S}^2 (F^2 ) - {\mathbb S}(F)^2$ we get (by (\ref{e:F})
and (\ref{e:M}))
\begin{equation}
\label{e:N} r {\mathbb S}^2 (F) = |{\mathbb T}(\mu^* ({\mathbb
S}))|^2 + 2 \, \langle \Omega^* ({\mathbb S} , {\mathbb T}) \cdot
\mu^* ({\mathbb S}) , \mu^* ({\mathbb T}) \rangle +
\end{equation}
\[ + {\mathbb T} \left( \langle {\mathbb S}(\mu^* ({\mathbb
S})) + \omega^* ({\mathbb S}) \cdot \mu^* ({\mathbb S}) , \mu^*
({\mathbb T}) \rangle \right) + \] \[ + 4 \langle \Theta^*
({\mathbb S}, {\mathbb T}), {\mathbb T}(\mu^* ({\mathbb S}))
\rangle + 2 \langle {\mathbb S}(\Theta^* ({\mathbb S}, {\mathbb
T})) , \mu^* ({\mathbb T})\rangle + \]
\[ + 2 \langle \omega^*
({\mathbb S}) \cdot \Theta^* ({\mathbb S}, {\mathbb T}), \mu^*
({\mathbb T}) \rangle + 4 |\Theta^* ({\mathbb S} , {\mathbb T})|^2
- \]
\[ - \frac{1}{r^2} \{ \langle {\mathbb T}(\mu^* ({\mathbb S}))
, \mu^* ({\mathbb T})\rangle^2 + 4 \langle \Theta^* ({\mathbb S},
{\mathbb T}), \mu^* ({\mathbb T})\rangle^2 + \]
\[ + 4 \langle {\mathbb T}(\mu^* ({\mathbb S})) , \mu^*
({\mathbb T}) \rangle \; \langle \Theta^* ({\mathbb S}, {\mathbb
T}) , \mu^* ({\mathbb T}) \rangle \} \] at any $(t,0) \in Q$.
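For completeness we note that the operator identity $F \, {\mathbb S}^2 (F) = \frac{1}{2} {\mathbb S}^2 (F^2 ) - {\mathbb S}(F)^2$ used above follows from the Leibniz rule for the derivation ${\mathbb S}$:
\[ {\mathbb S}(F^2 ) = 2 F \, {\mathbb S}(F), \;\;\; {\mathbb S}^2
(F^2 ) = 2 \, {\mathbb S}(F)^2 + 2 F \, {\mathbb S}^2 (F) . \]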
Moreover (by (\ref{e:H})) \[ {\mathbb T}(\mu^* ({\mathbb
S}))_{(t,0)} = \frac{d}{d t} \left\{ \mu^* ({\mathbb S}) \circ
\alpha^0 \right\} (t) =
\] \[ = \lim_{h \to 0} \frac{1}{h} \{ \mu^* ({\mathbb S})_{(t+h, 0)}
- \mu^* ({\mathbb S})_{(t,0)} \} = \]
\[ = \lim_{h \to 0} \frac{1}{h} \{ f^0 (t+h)^{-1} X_{\gamma (t+h)}
- f^0 (t)^{-1} X_{\gamma (t)} \} = \] (as $f^0 : [a,b] \to O(M ,
g_\theta )$ is a horizontal curve)
\[ = \lim_{h \to 0} \frac{1}{h} \{ f^0 (t)^{-1} \tau^{t+h}_t
X_{\gamma (t+h)} - f^0 (t)^{-1} X_{\gamma (t)} \} = \]
\[ = f^0 (t)^{-1} \left( \lim_{h \to 0} \frac{1}{h} \{
\tau^{t+h}_t X_{\gamma (t+h)} - X_{\gamma (t)} \} \right) \] that
is
\begin{equation}
{\mathbb T}(\mu^* ({\mathbb S}))_{(t,0)} = f^0 (t)^{-1}
(\nabla_{\dot{\gamma}} X)_{\gamma (t)} \, . \label{e:O}
\end{equation}
Consequently (by (\ref{e:I}) and (\ref{e:O}))
\begin{equation}
\label{e:TS} |{\mathbb T}(\mu^* ({\mathbb S}))|^2 + 4 \langle
\Theta^* ({\mathbb S}, {\mathbb T}), {\mathbb T}(\mu^* ({\mathbb
S})) \rangle -
\end{equation}
\[ - \frac{1}{r^2} \{ \langle {\mathbb T}(\mu^* ({\mathbb S}))
, \mu^* ({\mathbb T})\rangle^2 + 4 \langle \Theta^* ({\mathbb S},
{\mathbb T}), \mu^* ({\mathbb T})\rangle^2 + \]
\[ + 4 \langle {\mathbb T}(\mu^* ({\mathbb S})) , \mu^*
({\mathbb T}) \rangle \; \langle \Theta^* ({\mathbb S}, {\mathbb
T}) , \mu^* ({\mathbb T}) \rangle \} = \]
\[ = |\nabla_{\dot{\gamma}} X|^2 + 2 g_\theta (T_\nabla (X,
\dot{\gamma}), \nabla_{\dot{\gamma}} X) - \frac{1}{r^2} \{
g_\theta (\nabla_{\dot{\gamma}} X , \dot{\gamma})^2 + \] \[ +
g_\theta (T_\nabla (X , \dot{\gamma}) , \dot{\gamma})^2 + 2
g_\theta (\nabla_{\dot{\gamma}} X , \dot{\gamma}) g_\theta
(T_\nabla (X , \dot{\gamma}), \dot{\gamma}) \} = \]
\[ = |\nabla_{\dot{\gamma}} X^\bot |^2 + 2 g_\theta (T_\nabla (X^\bot ,
\dot{\gamma}), \nabla_{\dot{\gamma}} X) - \] \[ - \frac{1}{r^2} \{
g_\theta (T_\nabla (X^\bot , \dot{\gamma}) , \dot{\gamma})^2 + 2
g_\theta (\nabla_{\dot{\gamma}} X , \dot{\gamma}) g_\theta
(T_\nabla (X^\bot , \dot{\gamma}), \dot{\gamma}) \} \] and (by
(\ref{e:I}) and (\ref{e:H}))
\begin{equation}
{\rm the \; curvature \; term} = 2 \, \langle \Omega^* ({\mathbb
S} , {\mathbb T}) \cdot \mu^* ({\mathbb S}) , \mu^* ({\mathbb T})
\rangle = \label{e:curv}
\end{equation}
\[ = g_\theta (R(X , \dot{\gamma}) X , \dot{\gamma})_{\gamma (t)}
= - g_\theta (R(X^\bot , \dot{\gamma}) \dot{\gamma} , X^\bot ). \]
On the other hand $\pi (f(a,s)) = y$ and $\pi (f(b,s)) = z$ imply
that $(d_{(a,s)} f) {\mathbb S}_{(a,s)}$ and $(d_{(b,s)} f)
{\mathbb S}_{(b,s)}$ are vertical hence
\begin{equation}
\mu^* ({\mathbb S})_{(a,s)} = 0, \;\;\; \mu^* ({\mathbb
S})_{(b,s)} = 0. \label{e:ab}
\end{equation}
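Indeed, as $\mu_u (A) = u^{-1} (d_u \pi ) A$ for any $A \in T_u
(O(M , g_\theta ))$, one has
\[ \mu^* ({\mathbb S})_{(a,s)} = f(a,s)^{-1} (d_{f(a,s)} \pi )
(d_{(a,s)} f) {\mathbb S}_{(a,s)} = 0 \] (as $d \pi$ annihilates
vertical vectors) and similarly at $(b,s)$.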
Next, we wish to compute ${\mathbb S}(\mu^* ({\mathbb
S}))_{(t,0)}$. To do so we need to further specialize the choice
of $f(t,s)$. Precisely, let $v \in \pi^{-1} (\gamma (a))$ be a
fixed orthonormal frame and let
\begin{equation}
\label{e:lift} f(t,s) = \sigma_t^\uparrow (s), \;\;\; a \leq t
\leq b, \;\; |s| < \epsilon ,
\end{equation}
where $\sigma_t^\uparrow : (-\epsilon , \epsilon ) \to O(M ,
g_\theta )$ is the unique horizontal lift of $\sigma_t :
(-\epsilon , \epsilon ) \to M$ issuing at $\sigma_t (0) =
\gamma^\uparrow (t)$. Also $\gamma^\uparrow : [a,b] \to O(M ,
g_\theta )$ is the horizontal lift of $\gamma : [a,b] \to M$
determined by $\gamma^\uparrow (a) = v$. Therefore $f^0 =
\gamma^\uparrow$ is a horizontal curve, as required by the
previous part of the proof. In addition (\ref{e:lift}) possesses
the property that for each $t$ the curve $s \mapsto f(t,s)$ is
horizontal, as well. Then
\[ {\mathbb S}(\mu^* ({\mathbb S}))_{(t,0)} = \frac{d}{d s}
\left\{ \mu^* ({\mathbb S}) \circ \beta_t \right\} (0) = \] \[ =
\lim_{s \to 0} \frac{1}{s} \{ f(t,s)^{-1} \dot{\sigma}_t (s) -
f(t,0)^{-1} \dot{\sigma}_t (0) \} = \] (as $f_t : (-\epsilon ,
\epsilon ) \to O(M , g_\theta )$, $f_t (s) = f(t,s)$, $|s| <
\epsilon$, is horizontal)
\[ = \lim_{s \to 0} \frac{1}{s} \{ f(t,0)^{-1} \tau^s
\dot{\sigma}_t (s) - f(t,0)^{-1} \dot{\sigma}_t (0) \} = \] \[ =
f(t,0)^{-1} \left( \lim_{s \to 0} \frac{1}{s} \{ \tau^s
\dot{\sigma}_t (s) - \dot{\sigma}_t (0) \} \right) \] where
$\tau^s : T_{\sigma_t (s)} (M) \to T_{\sigma_t (0)}(M)$ is the
parallel displacement along $\sigma_t$ from $\sigma_t (s)$ to
$\sigma_t (0)$, i.e.
\begin{equation}
{\mathbb S}(\mu^* ({\mathbb S}))_{(t,0)} = f(t,0)^{-1}
(\nabla_{\dot{\sigma}_t} \dot{\sigma}_t )_{\gamma (t)} \, .
\label{e:SS}
\end{equation}
By (\ref{e:SS}), $\dot{\sigma}_a (s) = 0$ and $\dot{\sigma}_b (s)
= 0$ (as $\sigma_a (s) = \gamma^s (a) = y =$ const. and $\sigma_b
(s) = \gamma^s (b) = z =$ const.) it follows that
\begin{equation}
\label{e:ab1} {\mathbb S}(\mu^* ({\mathbb S}))_{(a,0)} = 0, \;\;\;
{\mathbb S}(\mu^* ({\mathbb S}))_{(b,0)} = 0.
\end{equation}
Using (\ref{e:ab}) and (\ref{e:ab1}) we may conclude that
\begin{equation}
\int_a^b {\mathbb T} \left( \langle {\mathbb S}(\mu^* ({\mathbb
S})) + \omega^* ({\mathbb S}) \cdot \mu^* ({\mathbb S}) , \mu^*
({\mathbb T}) \rangle \right)_{(t,0)} d t = \label{e:int}
\end{equation}
\[ = \langle {\mathbb S}(\mu^* ({\mathbb
S})) + \omega^* ({\mathbb S}) \cdot \mu^* ({\mathbb S}) , \mu^*
({\mathbb T}) \rangle_{(b,0)} - \] \[ - \langle {\mathbb S}(\mu^*
({\mathbb S})) + \omega^* ({\mathbb S}) \cdot \mu^* ({\mathbb S})
, \mu^* ({\mathbb T}) \rangle_{(a,0)} = 0. \] Similarly
\[ 2\; {\mathbb S}(\Theta^* ({\mathbb S}, {\mathbb T}))_{(t,0)} = 2 \;
\lim_{s \to 0} \frac{1}{s} \{ \Theta^* ({\mathbb S}, {\mathbb
T})_{(t,s)} - \Theta^* ({\mathbb S}, {\mathbb T})_{(t,0)} \} =
\]
\[ = \lim_{s\to 0} \frac{1}{s} \{ f(t,s)^{-1} T_\nabla
(\dot{\sigma}_t (s) , \dot{\gamma}^s (t) ) - f(t,0)^{-1} T_\nabla
(\dot{\sigma}_t (0), \dot{\gamma}^0 (t)) \} =
\]
\[ = f(t,0)^{-1} \left( \lim_{s \to 0} \frac{1}{s} \{ \tau^s \,
V_{\sigma_t (s)} - V_{\sigma_t (0)} \} \right) \] where $V$ is the
vector field defined at each $a(t,s) = \sigma_t (s)$ by
\[ V_{a(t,s)} = T_{\nabla , a(t,s)} (\dot{\sigma}_t
(s) , \dot{\gamma}^s (t)), \;\;\; a \leq t \leq b, \;\; |s| <
\epsilon .
\] Let us assume from now on that $\tau = 0$, i.e. $(M , \theta )$ is Sasakian.
Then
\[ V_{a(t,s)} = - 2 \Omega_{a(t,s)} (\dot{\sigma}_t (s) ,
\dot{\gamma}^s (t)) T_{a(t,s)} \] and $\nabla T = 0$ yields
\[ \tau^s V_{a(t,s)} = - 2 \Omega_{a(t,s)} (\dot{\sigma}_t (s) ,
\dot{\gamma}^s (t)) T_{\gamma (t)} \, . \] Finally
\begin{equation} 2 \; \langle {\mathbb S}(\Theta^* ({\mathbb S}, {\mathbb
T})), \mu^* ({\mathbb T})\rangle_{(t,0)} = \label{e:stor}
\end{equation}
\[ = \lim_{s \to 0}
\frac{1}{s} \{ g_{\theta , a(t,s)}(\tau^s V_{a(t,s)} ,
\dot{\gamma}(t)) - g_{\theta , a(t,0)} (V_{a(t,0)} ,
\dot{\gamma}(t)) \} = 0, \] as $\dot{\gamma}(t) \in H(M)_{\gamma
(t)}$. It remains to compute the term $2 \, \langle \omega^*
({\mathbb S}) \cdot \Theta^* ({\mathbb S}, {\mathbb T}), \mu^*
({\mathbb T}) \rangle$. As $f(t,s)$ is a linear frame at $a(t,s)$
\[ f(t,s) = ( a(t,s), \{ X_{i, a(t,s)} : 1 \leq i \leq 2n+1 \} ),
\]
where $X_i \in T_{a(t,s)}(M)$. Let $(U , x^i )$ be a local
coordinate system on $M$ and let us set $X_i = X^j_i \; \partial
/\partial x^j$. Let $(\Pi^{-1} (U), \tilde{x}^i , g^i_j )$ be the
naturally induced local coordinates on $L(M)$, where $\Pi : L(M)
\to M$ is the projection. Then $g^i_j (f(t,s)) = X^i_j (a(t,s))$.
As $\omega$ is the connection $1$-form of a linear connection
\[ \omega = \omega^j_i \otimes E^i_j \]
where $\omega^i_j$ are scalar $1$-forms on $L(M)$ and $\{ E^i_j :
1 \leq i , j \leq 2n+1 \}$ is the basis of the Lie algebra ${\bf
gl}(2n+1)$ given by $E^i_j = [\delta^i_\ell \; \delta_j^k ]_{1
\leq k,\ell \leq 2n+1}$. Let $\{ e_1 , \cdots , e_{2n+1} \}$ be
the canonical linear basis of ${\mathbb R}^{2n+1}$. Then
\[ \left. \mu^* ({\mathbb T})_{(t,0)} = f(t,0)^{-1} \dot{\gamma}(t) = \frac{d x^i}{d t}
f(t,0)^{-1} \frac{\partial}{\partial x^i} \right|_{\gamma (t)} =
\frac{d x^i}{d t} Y^j_i e_j \] where $[Y^i_j ] = [X^i_j ]^{-1}$.
Therefore
\[ \omega^* ({\mathbb S})_{(t,0)} \cdot \mu^* ({\mathbb
T})_{(t,0)} = \frac{d x^k}{d t} \, Y^i_k \, (f^* \omega^j_i
)({\mathbb S})_{(t,0)} \, e_j \] (because of $E^i_j \, e_k =
\delta^i_k \, e_j$). On the other hand (by Prop. 1.1 in
\cite{kn:KoNo}, Vol. I, p. 64) $\omega^* ({\mathbb S})_{(t,0)} =
A$ where the left invariant vector field $A \in {\bf gl}(2n+1)$ is
given by
\begin{equation} \label{e:definv}
A^*_{f(t,0)} = (d_{(t,0)} f) {\mathbb S}_{(t,0)} - \ell_{f(t,0)}
\dot{\sigma}_t (0)
\end{equation}
and $\ell_u : T_{\Pi (u)}(M) \to H_u$ is the inverse of $d_u \Pi :
H_u \to T_{\Pi (u)} (M)$, $u \in L(M)$ (the horizontal lift
operator with respect to $H$). Here $A^*$ is the fundamental
vector field associated to $A$, i.e.
\[ A^*_{f(t,0)} = (d_e L_{f(t,0)} ) A_e \]
where $L_u : {\rm GL}(2n+1) \to L(M)$, $u \in L(M)$, is given by
$L_u (g) = u g$ for any $g \in {\rm GL}(2n+1)$, and $e \in {\rm
GL}(2n+1)$ is the unit matrix. If $A = A^j_i E^i_j$ then $A^i_j =
(f^* \omega^i_j )({\mathbb S})_{(t,0)}$. Let $(g^i_j )$ be the
natural coordinates on ${\rm GL}(2n+1)$ so that $L_{f(t,0)}$ is
locally given by
\[ L^i (g) = \tilde{x}^i \, , \;\;\; L^i_j (g) = X^i_k g^k_j \, , \] and then
$(d_e L_{f(t,0)}) (\partial /\partial g^i_j )_e = X_i^k (\partial
/\partial g^k_j )_{f(t,0)}$. Next (cf. \cite{kn:KoNo}, Vol. I, p.
143)
\[ \ell \; \frac{\partial}{\partial x^j} = \partial_j -
(\Gamma^i_{jk} \circ \Pi ) g^k_\ell \frac{\partial}{\partial
g^i_\ell}
\] (where $\partial_i = \partial /\partial \tilde{x}^i$) and
(\ref{e:definv}) lead to
\[ \left. A^k_\ell X^i_k \frac{\partial}{\partial g^i_\ell} \right|_{f(t,0)}
= (d_{(t,0)} f) {\mathbb S}_{(t,0)} - X^j (\gamma (t)) \{
\partial_j - (\Gamma^i_{jk} \circ \Pi ) g^k_\ell \frac{\partial}{\partial
g^i_\ell} \}_{f(t,0)} \] or (by applying this identity to the
coordinate functions $g^i_\ell$)
\begin{equation}
\label{e:64} A^k_\ell X^i_k = {\mathbb S}_{(t,0)} (g^i_\ell \circ
f) + X^j (\gamma (t)) \Gamma^i_{jk} (\gamma (t)) X^k_\ell \, .
\end{equation}
If $f^i_j = g^i_j \circ f$ then
\[ {\mathbb S}_{(t,0)} (f^i_j ) = \frac{d}{d s} \{ f^i_j \circ
\beta_t \} (0) = \frac{\partial f^i_j}{\partial s}(t,0) \, . \]
Therefore (by (\ref{e:64}))
\begin{equation} \label{e:65} A^k_\ell = Y^k_i \{ \frac{\partial
f^i_\ell}{\partial s} (t,0) + X^j (\gamma (t))
\Gamma^i_{jm}(\gamma (t)) X^m_\ell \} \, .
\end{equation}
So far we have obtained (by (\ref{e:65}))
\[ \omega^* ({\mathbb S})_{(t,0)} \cdot \mu^* ({\mathbb
T})_{(t,0)} = Y^\ell_k \; \frac{d x^k}{dt} \; \{ \frac{\partial
f^i_\ell}{\partial s}(t,0) + \] \[ + X^j (\gamma (t))
\Gamma^i_{jm}(\gamma (t)) X^m_\ell \} f(t,0)^{-1} \left.
\frac{\partial}{\partial x^i}\right|_{\gamma (t)} \, . \] Let us
observe that
\[ \frac{\partial f^i_j}{\partial s}(t,0) = \frac{\partial
X^i_j}{\partial x^k}(\gamma (t)) \frac{\partial a^k}{\partial
s}(t,0) = \frac{\partial X^i_j}{\partial x^k} (\gamma (t)) X^k
(\gamma (t)) \] hence
\[ \frac{\partial f^i_j}{\partial s}(t,0) + X^k (\gamma (t))
\Gamma^i_{k\ell}(\gamma (t)) X^\ell_j = (\nabla_X X_j )^i_{\gamma
(t)} \] and we may conclude that
\begin{equation}
\omega^* ({\mathbb S})_{(t,0)} \cdot \mu^* ({\mathbb T})_{(t,0)} =
Y^j_k \; \frac{d x^k}{dt} \; f(t,0)^{-1} (\nabla_X X_j )_{\gamma
(t)} = 0 . \label{e:66}
\end{equation} Indeed
\[ (\nabla_X X_i )_{\gamma (t)} = (\nabla_{\dot{\sigma}_t} X_i
)_{\sigma_t (0)} = \lim_{s \to 0} \frac{1}{s} \{ \tau^s X_{i ,
\sigma_t (s)} - X_{i , \sigma_t (0)} \} = \]
\[ = \lim_{s \to 0} \frac{1}{s} \{ \tau^s f(t,s) e_i - f(t,0)
e_i \} = 0 \] because $f_t$ is horizontal (yielding $\tau^s f(t,s)
= f(t,0)$). By (\ref{e:TS})-(\ref{e:curv}),
(\ref{e:int})-(\ref{e:stor}) and (\ref{e:66}) the identity
(\ref{e:N}) may be written
\[
\frac{d^2}{d s^2} \{ L(\gamma^s )\}_{s=0} = \frac{1}{r} \int_a^b
\{ |\nabla_{\dot{\gamma}} X^\bot |^2 - g_\theta (R(X^\bot ,
\dot{\gamma}) \dot{\gamma} , X^\bot ) + \]
\[ + 2 g_\theta (T_\nabla (X^\bot , \dot{\gamma}) ,
\nabla_{\dot{\gamma}} X ) + |T_\nabla (X^\bot , \dot{\gamma})|^2 -
\]
\[ - \frac{1}{r^2} [ g_\theta (T_\nabla (X^\bot , \dot{\gamma}) , \dot{\gamma})^2 +
2 \, g_\theta (\nabla_{\dot{\gamma}} X , \dot{\gamma}) g_\theta
(T_\nabla (X^\bot , \dot{\gamma}), \dot{\gamma})] \} d t \] or
(by $T_\nabla (X^\bot , \dot{\gamma}) = - 2 \Omega (X^\bot ,
\dot{\gamma}) T$ and $\theta (\dot{\gamma}) = 0$)
\begin{equation} \label{e:68} I(X,X) = \frac{1}{r} \int_a^b
\{ |\nabla_{\dot{\gamma}} X^\bot |^2 - g_\theta (R(X^\bot ,
\dot{\gamma}) \dot{\gamma} , X^\bot ) + \end{equation} \[ + 4
\Omega (X^\bot , \dot{\gamma})^2 - 4 \Omega (X^\bot ,
\dot{\gamma}) \theta (\nabla_{\dot{\gamma}} X) \} \; d t . \]
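In more detail, as $\theta = g_\theta (T , \cdot )$ and $g_\theta
(T,T) = 1$, the substitution $T_\nabla (X^\bot , \dot{\gamma}) = -
2 \Omega (X^\bot , \dot{\gamma}) T$ gives
\[ |T_\nabla (X^\bot , \dot{\gamma})|^2 = 4 \, \Omega (X^\bot ,
\dot{\gamma})^2 , \;\;\; g_\theta (T_\nabla (X^\bot , \dot{\gamma})
, \nabla_{\dot{\gamma}} X ) = - 2 \, \Omega (X^\bot ,
\dot{\gamma}) \, \theta (\nabla_{\dot{\gamma}} X ), \]
\[ g_\theta (T_\nabla (X^\bot , \dot{\gamma}) , \dot{\gamma}) = -
2 \, \Omega (X^\bot , \dot{\gamma}) \, \theta (\dot{\gamma}) = 0,
\] so that the whole $r^{-2}$ bracket in the previous formula
drops out.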
Finally, by polarization $I(X,Y) = \frac{1}{2} \{ I(X+Y,X+Y) -
I(X,X) - I(Y,Y)\}$ the identity (\ref{e:68}) leads to
(\ref{e:index}).
\par
{\em Proof of Theorem} \ref{t:10}. As $\nabla g_\theta = 0$
\[ \int_{t_j}^{t_{j+1}} \{ g_\theta (\nabla_{\dot{\gamma}} X^\bot
, \nabla_{\dot{\gamma}} Y^\bot ) - 2 \Omega (X^\bot ,
\dot{\gamma}) \theta (\nabla_{\dot{\gamma}} Y^\bot ) \} d t = \]
\[ = \int_{t_j}^{t_{j+1}} \{ \frac{d}{d t}[g_\theta
(\nabla_{\dot{\gamma}} X^\bot , Y^\bot ) - 2 \Omega (X^\bot ,
\dot{\gamma}) \theta (Y^\bot )] - \] \[ - g_\theta
(\nabla^2_{\dot{\gamma}} X^\bot - 2 \Omega (\nabla_{\dot{\gamma}}
X^\bot , \dot{\gamma}) T , Y^\bot ) \} d t = \]
\[ = g_{\theta , \gamma (t_{j+1})} ((\nabla_{\dot{\gamma}} X^\bot
)^-_{\gamma (t_{j+1})} , Y^\bot_{\gamma (t_{j+1})}) - g_{\theta ,
\gamma (t_j )} ((\nabla_{\dot{\gamma}} X^\bot )^+_{\gamma (t_j )}
, Y^\bot_{\gamma (t_j )}) - \]
\[ - 2 \Omega (X^\bot , \dot{\gamma})_{\gamma (t_{j+1})} \theta
(Y^\bot )_{\gamma (t_{j+1})} + 2 \Omega (X^\bot ,
\dot{\gamma})_{\gamma (t_j )} \theta (Y^\bot )_{\gamma (t_j )} -
\]
\[ - \int_{t_j}^{t_{j+1}} \{ g_\theta
(\nabla^2_{\dot{\gamma}} X^\bot - 2 \Omega (\nabla_{\dot{\gamma}}
X^\bot , \dot{\gamma}) T , Y^\bot ) \} d t \] and (\ref{e:index})
implies (\ref{e:index2}). Q.e.d.
\par
{\em Proof of Corollary} \ref{c:4}. If $X^\bot \in J_\gamma$ then
$X^\bot$ is differentiable in $[a,b]$ hence the last term in
(\ref{e:index2}) vanishes. Also ${\mathcal J}_\gamma X^\bot = 0$
and (\ref{e:index2}) yield
\[ I(X,Y) = - \frac{2}{r} \int_a^b \{ \theta (\nabla_{\dot{\gamma}}
X^\bot ) - 2 \Omega (X^\bot , \dot{\gamma})\} \Omega (Y^\bot ,
\dot{\gamma}) \, d t \] which implies (by Lemma \ref{l:J2}) both
(\ref{e:lJ2}) and (\ref{e:indlJ2}). Vice versa, let us assume that
(\ref{e:lJ2}) holds for some $\alpha (X) \in {\mathbb R}$. Let $f$
be a smooth function on $M$ such that $f(\gamma (t_j )) = 0$ for
any $0 \leq j \leq h$ and $f(\gamma (t)) > 0$ for any $t \in [a,b]
\setminus \{ t_0 , t_1 , \cdots , t_h \}$ and let us consider the
vector field $Y = f \, {\mathcal J}_\gamma X^\bot$. As ${\mathcal
J}_\gamma X^\bot$ is orthogonal to $\dot{\gamma}$ the identity
(\ref{e:indlJ2}) implies
\[ \int_a^b f(\gamma (t)) |{\mathcal J}_\gamma X^\bot |^2 \, d t =
0 \] hence ${\mathcal J}_\gamma X^\bot = 0$ in each interval $[t_j
, t_{j+1}]$. To prove that $X^\bot \in J_\gamma$ it suffices (by
Prop. 1.1 in \cite{kn:KoNo}, Vol. II, p. 63) to check that
$X^\bot$ is of class $C^1$ at each $t_j$. To this end, for each
fixed $j$ we consider a vector field $Y_j$ along $\gamma$ such that
\[ Y_{j , \gamma (t)} = \begin{cases} (\nabla_{\dot{\gamma}}
X^\bot )^-_{\gamma (t_j )} - (\nabla_{\dot{\gamma}} X^\bot
)^+_{\gamma (t_j )} \, , & {\rm for} \;\; t = t_j \cr 0, & {\rm
for} \;\;\; t = t_k \, , \;\; k \neq j. \cr \end{cases} \] Then
(by (\ref{e:indlJ2})) $|(\nabla_{\dot{\gamma}} X^\bot )^-_{\gamma
(t_j )} - (\nabla_{\dot{\gamma}} X^\bot )^+_{\gamma (t_j )}|^2 =
0$. Q.e.d. \vskip 0.1in {\bf Remark}. Let $\gamma (t) \in M$ be a
lengthy $C^1$ curve. Then $D_{\dot{\gamma}} \dot{\gamma} =
\nabla_{\dot{\gamma}} \dot{\gamma} - A(\dot{\gamma}, \dot{\gamma})
T$ hence on a Sasakian manifold $\gamma$ is a geodesic of $\nabla$
if and only if $\gamma$ is a geodesic of the Riemannian manifold
$(M , g_\theta )$. This observation leads to the following
alternative proof of Theorem \ref{t:9}. Let $\gamma \in \Gamma$
and $X,Y \in T_\gamma (\Gamma )$ be as in Theorem \ref{t:9}. By
Theorem 5.4 in \cite{kn:KoNo}, Vol. II, p. 81, we have
\begin{equation} I(X,Y) = \frac{1}{r} \int_a^b \{ g_\theta (D_{\dot{\gamma}}
X^\bot , D_{\dot{\gamma}} Y^\bot ) - g_\theta (R^D (X^\bot ,
\dot{\gamma}) \dot{\gamma} , Y^\bot )\} dt .
\label{e:rem0}
\end{equation}
Now on one hand
\begin{equation}
D_{\dot{\gamma}} X^\bot = \nabla_{\dot{\gamma}} X^\bot +
\Omega (\dot{\gamma} , X^\bot ) T + \theta (X^\bot ) J
\dot{\gamma}
\label{e:rem1}
\end{equation}
and on the other the identity
\[ R^D (X,Y) Z = R(X,Y) Z + (J X \wedge J Y) Z - 2 \Omega (X,Y) J
Z + \] \[ + 2 g_\theta ((\theta \wedge I)(X,Y) , Z) T - 2 \theta
(Z) (\theta \wedge I)(X,Y), \;\;\; X,Y,Z \in {\mathcal X}(M), \]
yields
\begin{equation}
R^D (X^\bot , \dot{\gamma})\dot{\gamma} = R(X^\bot ,
\dot{\gamma})\dot{\gamma} - 3 \Omega (X^\bot , \dot{\gamma}) J
\dot{\gamma} + r^2 \theta(X^\bot ) T. \label{e:rem2}
\end{equation}
Let us substitute from (\ref{e:rem1})-(\ref{e:rem2}) into
(\ref{e:rem0}) and use the identity
\[ \theta (X^\bot ) \Omega (\nabla_{\dot{\gamma}} Y^\bot ,
\dot{\gamma}) + \theta (Y^\bot ) \Omega (\nabla_{\dot{\gamma}}
X^\bot , \dot{\gamma}) + \]\[ + \Omega (\dot{\gamma} , X^\bot )
\theta (\nabla_{\dot{\gamma}} Y^\bot ) + \Omega (\dot{\gamma} ,
Y^\bot ) \theta (\nabla_{\dot{\gamma}} X^\bot ) = \]
\[ = \frac{d}{d t} \{ \theta (X^\bot ) \Omega (Y^\bot ,
\dot{\gamma}) + \theta (Y^\bot ) \Omega (X^\bot , \dot{\gamma}) \}
- \]
\[ - 2 \{ \Omega (X^\bot , \dot{\gamma}) \theta
(\nabla_{\dot{\gamma}} Y^\bot ) + \Omega (Y^\bot , \dot{\gamma})
\theta (\nabla_{\dot{\gamma}} X^\bot ) \} \] (together with
$X_{\gamma (a)} = X_{\gamma (b)} = 0$) so as to derive
(\ref{e:index}). Q.e.d.
\vskip 0.1in As an application of Theorems \ref{t:conj3} and
\ref{t:9} we shall establish
\begin{theorem} Let $(M , \theta )$ be a Sasakian manifold of CR dimension $n$
and $\nabla$ its Tanaka-Webster connection. Let $\gamma : [a,b]
\to M$ be a lengthy geodesic of $\nabla$, parametrized by arc
length. If there is $c \in (a,b)$ such that the point $\gamma (c)$
is horizontally conjugate to $\gamma (a)$ and for any $\delta > 0$
with $[c-\delta , c + \delta ] \subset (a,b)$ the space ${\mathcal
H}_{\gamma_\delta}$ has maximal dimension $4n$ {\rm (}where
$\gamma_\delta$ is the geodesic $\gamma : [c-\delta , c + \delta ]
\to M${\rm )} then $\gamma$ is not a minimizing geodesic joining
$\gamma (a)$ and $\gamma (b)$, that is, the length of $\gamma$ is
greater than the Riemannian distance {\rm (}associated to $(M ,
g_\theta )${\rm )} between $\gamma (a)$ and $\gamma (b)$.
\label{t:14}
\end{theorem}
{\em Proof}. Let $\gamma : [a,b] \to M$ be a geodesic of the
Tanaka-Webster connection of the Sasakian manifold $(M , \theta
)$, satisfying the assumptions in Theorem \ref{t:14}. Then (by
Theorem \ref{t:conj3}) there is a piecewise differentiable vector
field $X$ along $\gamma$ such that 1) $X$ is orthogonal to
$\dot{\gamma}$ and $J \dot{\gamma}$, 2) $X_{\gamma (a)} =
X_{\gamma (b)} = 0$, and 3) $I_a^b (X) < 0$. Let $\{\gamma^s
\}_{|s| < \epsilon}$ be a $1$-parameter family of curves as in the
definition of $(d_\gamma L) X$ and $I(X,X)$. By Corollary
\ref{c:V1} (as $\gamma$ is a geodesic of $\nabla$) one has
\[ \frac{d}{d s} \left\{ L(\gamma^s )\right\}_{s=0} = 0. \]
On the other hand (by Theorem \ref{t:9} and $X^\bot = X$)
\[ I(X,X) = I_a^b (X) + 4 \int_a^b \Omega (X , \dot{\gamma}) \{ \Omega (X ,
\dot{\gamma}) - \theta (X^\prime )\} d t \] hence (as $X$ is
orthogonal to $J \dot{\gamma}$)
\[ \frac{d^2}{d s^2} \left\{ L(\gamma^s ) \right\}_{s=0} = I_a^b
(X) < 0 \] so that there is $0 < \delta < \epsilon$ such that
$L(\gamma^s ) < L(\gamma )$ for any $|s| < \delta$. Q.e.d. \vskip 0.1in
{\bf Remark}. If there is a $1$-parameter variation of $\gamma$
(inducing $X$) by {\em lengthy} curves then $L(\gamma )$ is
greater than the Carnot-Carath\'eodory distance between $\gamma
(a)$ and $\gamma (b)$.
\section{Final comments and open problems}
A striking feature of R. Strichartz's paper (cf. \cite{kn:Str}) is
the absence of covariant derivatives and curvature. Motivated by our
Theorem \ref{t:2} we started developing a theory of geodesics of
the Tanaka-Webster connection $\nabla$ on a Sasakian manifold $M$,
with the hope that although lengthy geodesics of $\nabla$ form
(according to Corollary \ref{c:relation}) a smaller family than
that of sub-Riemannian geodesics, the former may suffice for
establishing an analog to Theorem 7.1 in \cite{kn:Str}, under the
assumption that $\nabla$ is complete (as a linear connection on
$M$). The advantage of working within the theory of linear
connections is already quite obvious (e.g. any $C^1$ geodesic of
$\nabla$ is automatically of class $C^\infty$, as an integral
curve of some $C^\infty$ basic vector field, while sub-Riemannian
geodesics are assumed to be of class $C^2$, cf. \cite{kn:Str}, p.
233, and no further regularity is to be expected {\em a priori})
and does not contradict R. Strichartz's observation that
sub-Riemannian manifolds, and in particular strictly pseudoconvex
CR manifolds endowed with a contact form $\theta$, exhibit no
approximate Euclidean behavior (cf. \cite{kn:Str}, p. 223).
Indeed, while Riemannian curvature measures the higher order
deviation of the given Riemannian manifold from the Euclidean
model, the curvature of the Tanaka-Webster connection describes
the pseudoconvexity properties of the given CR manifold, as
understood in several complex variables analysis. The role as a
possible model space played by the {\em tangent cone} of the
metric space $(M , \rho )$ at a point $x \in M$ (such as produced
by J. Mitchell's Theorem 1 in \cite{kn:Mic}, p. 36) is unclear.
\par
Another advantage of our approach stems from the fact that the
exponential map on $M$ thought of as a sub-Riemannian manifold is
never a diffeomorphism at the origin (because all sub-Riemannian
geodesics issuing at $x \in M$ must have tangent vectors in
$H(M)_x$) in contrast with the ordinary exponential map associated
to the Tanaka-Webster connection $\nabla$. In particular {\em cut
points} (as introduced in \cite{kn:Str}, p. 260) do not possess
the properties enjoyed by conjugate points in Riemannian geometry
because (by Theorem 11.3 in \cite{kn:Str}, p. 260) given $x \in M$
cut points occur arbitrarily close to $x$. On the contrary (by
Theorem 1.4 in \cite{kn:KoNo}, Vol. II, p. 67) given $x \in M$ one
may speak about the {\em first} point conjugate to $x$ along a
geodesic of $\nabla$ emanating from $x$, therefore the concept of
conjugate locus $C(x)$ may be defined in the usual way (cf. e.g.
\cite{kn:Man}, p. 117). The systematic study of the properties of
$C(x)$ on a strictly pseudoconvex CR manifold is an open problem.
\par
Yet another concept of exponential map was introduced by D.
Jerison \& J. M. Lee, \cite{kn:JeLe} (associated to {\em parabolic
geodesics}, i.e. solutions $\gamma (t)$ to $\left(
\nabla_{\dot{\gamma}} \dot{\gamma} \right)_{\gamma (t)} = 2c
T_{\gamma (t)}$ for some $c \in {\mathbb R}$). A comparison
between the three exponential formalisms (in \cite{kn:Str},
\cite{kn:JeLe}, and the present paper) has not been carried out yet. We
conjecture that given a $2$-plane $\sigma \subset T_x (M)$ its
pseudohermitian sectional curvature $k_\theta (\sigma )$ measures
the difference between the length of a circle in $\sigma$ (with
respect to $g_{\theta , x}$) and the length of its image by
$\exp_x$ (the exponential mapping at $x$ associated to $\nabla$).
Also a useful relationship among $\exp_x$ and the exponential
mapping associated to the Fefferman metric $F_\theta$ on $C(M)$
should exist (and then an understanding of the singular points of
the latter, cf. e.g. M. A. Javaloyes \& P. Piccione,
\cite{kn:JaPi}, should shed light on the properties of singular
points of the former).
\par
Finally, the analogy between Theorem 7.3 in \cite{kn:Str}, p. 245
(producing ``approximations to unity'' on Carnot-Carath\'eodory
complete sub-Riemannian manifolds) and Lemma 2.2 in
\cite{kn:Str2}, p. 50 (itself a corrected version of a result by
S.-T. Yau, \cite{kn:Yau}) indicates that Theorem 7.3 is the proper
ingredient for proving that the sublaplacian $\Delta_b$ is
essentially self-adjoint on $C^\infty_0 (M)$ and the corresponding
heat operator is given by a positive $C^\infty$ kernel. These
matters are relegated to a further paper.
\begin{appendix}
\section{Contact forms of constant pseudohermitian sectional curvature}
The purpose of this section is to give a proof of Theorem
\ref{t:J3}. Let $M$ be a nondegenerate CR manifold and
$\theta$ a contact form on $M$. Let $\nabla$ be the Tanaka-Webster
connection of $(M , \theta )$. We recall the first Bianchi
identity
\begin{equation} \sum_{XYZ} R(X,Y)Z = \sum_{XYZ} \{ T_\nabla
(T_\nabla (X,Y), Z) + (\nabla_X T_\nabla )(Y,Z) \} \label{e:A1}
\end{equation}
for any $X,Y,Z \in T(M)$, where $\sum_{XYZ}$ denotes the cyclic
sum over $X,Y,Z$. Let $X,Y, Z \in H(M)$ and note that
\[ T_\nabla (T_\nabla (X,Y) , Z) = - 2 \Omega (X,Y) \tau (Z), \]
\[ (\nabla_X T_\nabla )(Y,Z) = - 2 (\nabla_X \Omega )(Y,Z) T = 0.
\]
Indeed $\nabla g_\theta = 0$ and $\nabla J = 0$ yield $\nabla
\Omega = 0$. Thus (\ref{e:A1}) leads to
\begin{equation}
\sum_{XYZ} R(X,Y)Z = - 2 \sum_{XYZ} \Omega (X,Y) \tau (Z),
\label{e:A2}
\end{equation}
for any $X,Y,Z \in H(M)$. Let us define a $(1,2)$-tensor field $S$
by setting $S(X,Y) = (\nabla_X \tau )Y - (\nabla_Y \tau )X$. Next,
we set $X,Y \in H(M)$ and $Z = T$ in (\ref{e:A1}) and observe that
\[ T_\nabla (T_\nabla (X,Y), T) + T_\nabla (T_\nabla (Y,T), X) +
T_\nabla (T_\nabla (T,X), Y) = \]
\[ = - T_\nabla (\tau (Y), X) + T_\nabla (\tau (X), Y) = \]
(as $\tau$ is $H(M)$-valued)
\[ = 2 \{ \Omega (\tau (Y) , X) - \Omega (\tau (X) , Y) \} T =
2 g_\theta ((\tau J + J \tau )X , Y) T = 0, \] (by the purity
axiom) and
\[ (\nabla_X T_\nabla )(Y, T) + (\nabla_Y T_\nabla )(T , X) +
(\nabla_T T_\nabla )(X,Y) = \]
\[ = - (\nabla_X \tau )Y + (\nabla_Y \tau )X - 2 (\nabla_T \Omega
)(X,Y) T = - S(X,Y). \] Finally (\ref{e:A1}) becomes
\begin{equation}
R(X,T) Y + R(T, Y) X = S(X,Y), \label{e:A3}
\end{equation}
for any $X,Y \in H(M)$. The $4$-tensor $R$ enjoys the properties
\begin{equation}
R(X,Y,Z,W) = - R(Y,X, Z, W), \label{e:A4}
\end{equation}
\begin{equation}
R(X,Y, Z,W) = - R(X,Y, W, Z), \label{e:A5}
\end{equation}
for any $X,Y,Z,W \in T(M)$. Indeed (\ref{e:A4}) follows from
$\nabla g_\theta = 0$ while (\ref{e:A5}) is obvious. We may use
the reformulation (\ref{e:A2})-(\ref{e:A3}) of the first Bianchi
identity to compute $\sum_{YZW} R(X,Y,Z,W)$ for arbitrary vector
fields. For any $X \in T(M)$ we set $X_H = X - \theta (X) T$ (so
that $X_H \in H(M)$). Then
\[ \sum_{YZW} R(X,Y,Z,W) = \sum_{YZW} g_\theta (R(Z,W) Y_H , X) =
\]
\[ = \sum_{YZW} g_\theta (R(Z_H , W_H ) Y_H + \theta (Y)
[R(W_H , T) Z_H + R(T, Z_H ) W_H ] , X) \] hence
\begin{equation}
\sum_{YZW} R(X,Y,Z,W) = \label{e:A6}
\end{equation}
\[ = - \sum_{YZW} \{ 2 \Omega (Y,Z) A(W, X) +
\theta (Y) g_\theta (X , S(Z_H , W_H )) \} \] for any $X,Y,Z,W \in
T(M)$. Next, we set \[ K(X,Y,Z,W) = \sum_{YZW} R(X,Y,Z,W) \] and
compute (by (\ref{e:A4})-(\ref{e:A5}))
\[ K(X,Y,Z,W) - K(Y,Z,W,X) - K(Z,W,X,Y) + K(W,X,Y,Z) = \]
\[ = 2 R(X,Y,Z,W) - 2 R(Z,W,X,Y) \]
hence (by (\ref{e:A6}))
\[ 2 R(X,Y,Z,W) - 2 R(Z,W,X,Y) = \] \[ = - \sum_{YZW} \{ 2 \Omega (Y,Z)
A(X,W) + \theta (Y) g_\theta (X , S(Z_H , W_H )) \} + \]
\[ + \sum_{ZWX} \{ 2 \Omega (Z,W) A(Y,X) + \theta (Z) g_\theta (Y,
S(W_H , X_H )) \} + \]
\[ + \sum_{WXY} \{ 2 \Omega (W,X) A(Y,Z) + \theta (W) g_\theta (Z
, S(X_H , Y_H )) \} - \]
\[ - \sum_{XYZ} \{ 2 \Omega (X,Y) A(Z,W) + \theta (X) g_\theta (W,
S(Y_H , Z_H )) \} \] or
\begin{equation} 2 R(X,Y,Z,W) - 2 R(Z,W,X,Y) = \label{e:A7}
\end{equation}
\[ = - 4 \Omega (Y,Z) A(X,W) + 4 \Omega (Y,W) A(X,Z) - \] \[ - 4 \Omega
(X,W) A(Y,Z) + 4 \Omega (X,Z) A(Y,W) + \]
\[ + \theta (X) [ g_\theta (Y, S(Z_H , W_H )) + g_\theta (Z, S(Y_H
, W_H )) - g_\theta (W, S(Y_H , Z_H )) ] + \]
\[ + \theta (Y) [ g_\theta (Z, S(W_H , X_H )) - g_\theta (W, S(Z_H
, X_H )) - g_\theta (X , S(Z_H , W_H )) ] + \]
\[ + \theta (Z) [g_\theta (Y, S(W_H , X_H )) - g_\theta (X, S(W_H
, Y_H )) - g_\theta (W, S(X_H , Y_H ))] + \]
\[ + \theta (W)[ g_\theta (Y, S(X_H , Z_H )) - g_\theta (X, S(Y_H
, Z_H )) + g_\theta (Z, S(X_H , Y_H ))]. \] As $\nabla_X \tau$ is
symmetric one has
\[ g_\theta (Y, S(X,Z)) - g_\theta (X, S(Y,Z)) = g_\theta
(S(X,Y),Z) \] for any $X,Y,Z \in H(M)$, so that (\ref{e:A7}) may
be written
\begin{equation} R(X,Y,Z,W) = R(Z,W,X,Y) -
\label{e:A8}
\end{equation}
\[ - 2 \Omega (Y,Z) A(X,W) + 2 \Omega (Y,W) A(X,Z) - \]
\[ - 2 \Omega (X,W) A(Y,Z) + 2 \Omega (X,Z) A(Y,W) + \]
\[ + \theta (X) g_\theta (S(Z_H , W_H ), Y) + \theta (Y) g_\theta
(S(W_H , Z_H ) , X) + \]
\[ + \theta (Z) g_\theta (S(Y_H , X_H ), W) + \theta (W) g_\theta
(S(X_H , Y_H ), Z) , \] for any $X,Y,Z,W \in T(M)$.
\par
The properties (\ref{e:A4})-(\ref{e:A6}) and (\ref{e:A8}) may be
used to compute the full curvature of a manifold of constant
pseudohermitian sectional curvature (the arguments are similar to
those in the proof of Prop. 1.2 in \cite{kn:KoNo}, Vol. I, p.
198). Assume from now on that $M$ is strictly pseudoconvex and
$G_\theta$ positive definite. Let us set
\[ R_1 (X,Y,Z,W) = g_\theta (X,Z) g_\theta (Y,W) - g_\theta (Y,Z)
g_\theta (W,X) \] so that
\begin{equation}
R_1 (X,Y,Z,W) = - R_1 (Y,X,Z,W), \label{e:A9}
\end{equation}
\begin{equation}
R_1 (X,Y,Z,W) = - R_1 (X,Y,W,Z), \label{e:A10}
\end{equation}
\begin{equation}
\sum_{YZW} R_1 (X,Y,Z,W) = 0. \label{e:A11}
\end{equation}
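Properties (\ref{e:A9})-(\ref{e:A10}) are immediate from the
definition of $R_1$, while the cyclic identity (\ref{e:A11}) may
be checked directly:
\[ \sum_{YZW} R_1 (X,Y,Z,W) = R_1 (X,Y,Z,W) + R_1 (X,Z,W,Y) + R_1
(X,W,Y,Z) = \]
\[ = g_\theta (X,Z) g_\theta (Y,W) - g_\theta (Y,Z) g_\theta (W,X)
+ g_\theta (X,W) g_\theta (Z,Y) - \]
\[ - g_\theta (Z,W) g_\theta (Y,X) + g_\theta (X,Y) g_\theta (W,Z)
- g_\theta (W,Y) g_\theta (Z,X) = 0, \] the six terms cancelling
in pairs.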
Assume from now on that $k_\theta = c =$ const. Let us set $L = R
- 4 c R_1$ and observe that
\begin{equation}
L(X,Y,X,Y) = 0 \label{e:A12}
\end{equation}
for any $X,Y \in T(M)$. Indeed, if $X, Y$ are linearly dependent
then (\ref{e:A12}) follows from the skew symmetry of $L$ in the
pairs $(X,Y)$ and $(Z,W)$, respectively. If $X , Y$ are
independent then let $\sigma \subset T_x (M)$ be the $2$-plane
spanned by $\{ X_x , Y_x \}$, $x \in M$. Then
\[ L(X,Y,X,Y)_x = R(X,Y,X,Y)_x - 4 c R_1 (X,Y,X,Y)_x = \]
\[ = 4 k_\theta (\sigma ) [|X|^2 |Y|^2 - g_\theta (X,Y)^2 ]_x - 4 c R_1
(X,Y,X,Y)_x = 0. \] Next (by (\ref{e:A12}) and the bilinearity of
$L$ in each pair of arguments)
\[ 0 = L(X,Y+W,X,Y+W) = L(X,Y,X,Y) + L(X,Y,X,W) + \]
\[ + L(X,W,X,Y) + L(X,W,X,W) = L(X,Y,X,W) + L(X,W,X,Y) \]
i.e.
\begin{equation}
L(X,Y,X,W) = - L(X,W,X,Y) \label{e:A13}
\end{equation}
for any $X,Y,W \in T(M)$. As well known (cf. e.g. Prop. 1.1 in
\cite{kn:KoNo}, Vol. I, p. 198) the properties
(\ref{e:A9})-(\ref{e:A11}) imply as well the symmetry property
\begin{equation}
R_1 (X,Y,Z,W) = R_1 (Z,W, X,Y). \label{e:A14}
\end{equation}
Therefore $L(X,Y,Z,W) - L(Z,W,X,Y) = R(X,Y,Z,W) - R(Z,W,X,Y)$
hence (by (\ref{e:A8}))
\begin{equation}
L(X,Y,Z,W) = L(Z,W,X,Y) + \label{e:A15}
\end{equation}
\[ + 2 \Omega (Y,W) A(X,Z) - 2 \Omega (Y,Z) A(X,W) + \]
\[ + 2 \Omega (X,Z) A(Y,W) - 2 \Omega (X,W) A(Y,Z) + \]
\[ + \theta (X) g_\theta (S(Z_H , W_H ), Y) + \theta (Y) g_\theta
(S(W_H , Z_H ) , X) + \]
\[ + \theta (Z) g_\theta (S(Y_H , X_H ), W) + \theta (W) g_\theta
(S(X_H , Y_H ), Z) . \] Applying (\ref{e:A15}) (to interchange the
pairs $(X,W)$ and $(X,Y)$) we get
\[ L(X,W,X,Y) = L(X,Y,X,W) + \]
\[ + 2 \Omega (W,Y) A(X,X) - 2 \Omega (W,X) A(X,Y) - 2 \Omega
(X,Y) A(W,X) + \]
\[ + \theta (X) g_\theta (S(W_H , Y_H ) , X) + \theta (Y) g_\theta
(S(X_H , W_H ), X) + \] \[ + \theta (W) g_\theta (S(Y_H , X_H ) ,
X)
\] hence (\ref{e:A13}) may be written
\begin{equation} L(X,Y,X,W) = \Omega (W,X) A(X,Y) + \label{e:A16}
\end{equation}
\[ + \Omega (X,Y)
A(W,X) - \Omega (W,Y) A(X,X) - \]
\[ - \frac{1}{2} \{ \theta (X) g_\theta (S(W_H , Y_H ) , X) +
\theta (Y) g_\theta (S(X_H , W_H ) , X) + \]
\[ + \theta (W) g_\theta (S(Y_H , X_H ) , X) \} . \]
Consequently
\[ L(X+Z,Y,X+Z,W) = \Omega (W, X+Z) A(X+Z, Y) + \]
\[ + \Omega (X+Z, Y) A(W, X+Z) - \Omega (W,Y) A(X+Z, X+Z) - \]
\[ - \frac{1}{2} \; g_\theta (X+Z \; , \; \theta (X+Z) S(W_H , Y_H ) + \]
\[ + \theta (Y) S(X_H + Z_H , W_H ) + \theta (W) S(Y_H , X_H + Z_H ))
\]
or (using (\ref{e:A16}) to calculate $L(X,Y,X,W)$ and
$L(Z,Y,Z,W)$)
\begin{equation}
L(X,Y,Z,W) + L(Z,Y,X,W) = \Omega (X,Y) A(W , Z) + \label{e:A17}
\end{equation}
\[ + \Omega (W,X) A(Z,Y) + \Omega (W,Z) A(X,Y) + \] \[ + \Omega
(Z,Y) A(W,X) - 2 \Omega (W,Y) A(X,Z) - \]
\[ - \frac{1}{2} \; g_\theta (X , \theta (Z) S(W_H , Y_H ) +
\theta (Y)S(Z_H , W_H ) + \theta (W) S(Y_H , Z_H )) - \]
\[ - \frac{1}{2} \; g_\theta (Z , \theta (X) S(W_H , Y_H ) +
\theta (Y) S(X_H , W_H ) + \theta (W) S(Y_H , X_H )). \] On the
other hand, by the skew symmetry of $L$ in the first pair of
arguments and by (\ref{e:A15}) (used to interchange the pairs
$(Y,Z)$ and $(X,W)$)
\[ L(Z,Y,X,W) = - L(Y,Z, X,W) = - L(X,W,Y,Z) + \]
\[ + 2 \Omega (Z,X) A(Y,W) - 2 \Omega (Z,W) A(Y,X) + \]
\[ + 2 \Omega (Y,W) A(Z,X) - 2 \Omega(Y,X)A(Z,W) - \]
\[ - \theta (Y) g_\theta (S(X_H , W_H ) , Z) - \theta (Z) g_\theta
(S(W_H , X_H ) , Y) - \]
\[ - \theta (X) g_\theta (S(Z_H , Y_H ) , W) - \theta (W) g_\theta
(S(Y_H , Z_H ) , X) \] so that (\ref{e:A17}) becomes
\begin{equation}
L(X,Y,Z,W) = L(X,W,Y,Z) + 2 \Omega (X,Z) A(Y,W) - \label{e:A18}
\end{equation}
\[ - \Omega (W,Z) A(X,Y) - \Omega (X,Y) A(Z,W) + \] \[ + \Omega (W,X) A(Z,Y) +
\Omega (Z,Y) A(W,X) + \]
\[ + \frac{1}{2}\; \theta (X) \{ g_\theta (S(Z_H , Y_H ), W) +
g_\theta (S(Z_H , W_H ) , Y) \} - \]
\[ - \frac{1}{2} \; \theta (Y) \{ g_\theta (S(Z_H , W_H ) , X) +
g_\theta (S(W_H , X_H ) , Z) \} + \]
\[ + \frac{1}{2} \; \theta (Z) \{ g_\theta (S(W_H , X_H ) , Y) +
g_\theta (S(Y_H , X_H ) , W) \} - \]
\[ - \frac{1}{2} \; \theta (W) \{ g_\theta (S(Y_H , X_H ) , Z) +
g_\theta (S(Z_H , Y_H ) , X) \} . \] By cyclic permutation of the
variables $Y,Z,W$ in (\ref{e:A18}) we obtain another identity of
the sort
\[ L(X,Y,Z,W) = L(X,Z,W,Y) - 2 \Omega (X,W) A(Z,Y) + \]
\[ + \Omega (Y,W) A(X,Z) + \Omega (X,Z) A(W,Y) - \]
\[ - \Omega (Y,X) A(W,Z) - \Omega (W,Z) A(Y,X) - \]
\[ - \frac{1}{2} \; \theta (X) \{ g_\theta (S(W_H , Z_H ), Y) +
g_\theta (S(W_H , Y_H ) , Z) \} + \]
\[ + \frac{1}{2} \; \theta (Y) \{ g_\theta (S(Z_H , X_H ) , W) +
g_\theta ( S(W_H , Z_H ) , X)\} + \]
\[ + \frac{1}{2} \; \theta (Z) \{ g_\theta (S(W_H , Y_H ) , X) +
g_\theta (S(Y_H , X_H ) , W) \} - \]
\[ - \frac{1}{2} \; \theta (W) \{ g_\theta (S(Y_H, X_H ) , Z) +
g_\theta (S(Z_H , X_H ) , Y) \} \] which together with
\eqref{e:A18} leads to
\[ 3 L(X,Y,Z,W) = \sum_{YZW} L(X,Y,Z,W) - 2 \Omega (W,Z) A(X,Y) + \]
\[ + 3 \Omega (X,Z) A(Y,W) - 3 \Omega (X,W) A(Y,Z) + \]
\[ + \Omega (Z,Y) A(W,X) + \Omega (Y, W) A(X,Z) + \]
\[ + \frac{3}{2} \; \theta (X) g_\theta (S(Z_H , W_H ) , Y) - \frac{1}{2} \;
\theta (Y) g_\theta (S(Z_H , W_H ) , X) + \]
\[ + \frac{1}{2} \; \theta (Z) \{ 2 g_\theta (S(Y_H , X_H ) , W) +
\]
\[ + g_\theta (S(W_H , X_H ) , Y) + g_\theta (S( W_H , Y_H ) , X)
\} - \]
\[ - \frac{1}{2} \; \theta (W) \{ 2 g_\theta (S(Y_H , X_H ) , Z) + \]
\[ + g_\theta (S(Z_H , Y_H ) , X) + g_\theta (S(Z_H , X_H ) , Y ) \} \]
or
\[ L(X,Y,Z,W) = \Omega (Y,W) A(X,Z) - \Omega (Y,Z) A(X,W) + \]
\[ + \Omega (X,Z) A(Y,W) - \Omega (X,W) A(Y,Z) + \]
\[ + \frac{1}{2} \; \{ \theta (X) g_\theta (S(Z_H , W_H ) , Y) - \theta (Y)
g_\theta (S(Z_H , W_H ) , X) + \]
\[ + \theta (Z) g_\theta (S(Y_H , X_H ) , W) -
\theta (W) g_\theta (S(Y_H , X_H ) , Z) \} \] or
\begin{equation} R(X,Y,Z,W) = 4c \{ g_\theta (X,Z) g_\theta (Y,W) -
g_\theta (Y,Z) g_\theta (X,W) \} + \label{e:A19}
\end{equation}
\[ + \Omega (Y,W) A(X,Z) - \Omega (Y,Z) A(X,W) + \]
\[ + \Omega (X,Z) A(Y,W) - \Omega (X,W) A(Y,Z) + \]
\[ + g_\theta (S(Z_H , W_H ) , (\theta \wedge I)(X,Y)) - g_\theta (S(X_H ,
Y_H ), (\theta \wedge I)(Z,W)) \] for any $X,Y,Z, W \in T(M)$,
where $I$ is the identical transformation and $(\theta \wedge
I)(X,Y) = \frac{1}{2} \{ \theta (X) Y - \theta (Y) X \}$. Using
(\ref{e:A19}) one may prove Theorem \ref{t:J3} as follows. Let $Y
= T$ in (\ref{e:A19}). As $R(Z,W) T = 0$ and $S$ is $H(M)$-valued
we get
\begin{equation} 0 = 4c \{ g_\theta (X,Z) \theta (W) -
g_\theta (X,W) \theta (Z) \} - \frac{1}{2} \; g_\theta (S(Z_H ,
W_H ) , X), \label{e:A20}
\end{equation}
for any $X,Z,W \in T(M)$. In particular for $Z,W \in H(M)$
\[ S(Z,W) = 0. \]
Hence $S(Z_H , W_H ) = 0$ and (\ref{e:A20}) becomes
\[ c \{ g_\theta (X,Z) \theta (W) - g_\theta (X,W) \theta (Z) \}
= 0, \] for any $X,Z,W \in T(M)$. In particular for $Z = X \in
H(M)$ and $W = T$ one has $c |X|^2 = 0$ hence $c = 0$ and
(\ref{e:A19}) leads to (\ref{e:J7}). Then $\tau = 0$ yields $R =
0$. To prove the last statement in Theorem \ref{t:J3} let us
assume that $M$ has CR dimension $n \geq 2$ (so that the Levi
distribution has rank $> 3$). Assume that $R = 0$ i.e.
\[ \Omega (X,Z) \tau (Y) - \Omega (Y,Z) \tau (X) = A(X,Z) J Y -
A(Y,Z) J X \] (by (\ref{e:J7})). In particular for $Z = Y$
\begin{equation}
\Omega (X,Y) \tau (Y) = A(X,Y) J Y - A(Y,Y) J X. \label{e:A21}
\end{equation}
Let $X \in H(M)$ such that $|X| = 1$, $g_\theta (X,Y) = 0$ and
$g_\theta (X, J Y) = 0$. Taking the inner product of (\ref{e:A21})
with $J X$ gives $A(Y,Y) = 0$, hence $A = 0$ (as $A$ is
symmetric). Q.e.d.
\end{appendix}
\section{Introduction}
As a fundamental module of quantum coherent control, accurate population transfer between spin states is a prerequisite for many quantum information processing tasks. To implement population transfer, one may first think of using Rabi oscillations. While Rabi oscillations are a convenient tool for population transfer, they cannot handle the more challenging situation where population transfer needs to be implemented between uncoupled or weakly coupled spin states. In such a situation, direct population transfer is forbidden and Rabi oscillations can no longer be used. To cope with this situation, one resorts to an intermediate state that connects the two uncoupled or weakly coupled spin states, resulting in population transfer schemes based on three-level systems. Compared with population transfer using Rabi oscillations, three-level population transfer is more difficult to keep accurate, and therefore particular methods need to be developed to ensure a satisfactory transfer quality.
Two well-known population transfer schemes for uncoupled or weakly coupled states are stimulated Raman transition (SRT) \cite{SRT1,SRT2,SRT3} and stimulated Raman adiabatic passage (STIRAP) \cite{STI1,STI2,STI3,STI4,STI5}. While these two schemes have proven very efficient, there is still room for improvement. SRT is technically easy to realize and is not constrained by the adiabatic condition, but it is sensitive to the frequency errors resulting from the fluctuation of the magnetic field \cite{bohm}, which is ubiquitous and dominant in spin systems such as nitrogen vacancy centers in diamond \cite{wanghl1,wanghl2,cai1,bohm,tian} and semiconductor quantum dots \cite{LZD3,LZD4,LZD5,LZD6,LZD7}. STIRAP, on the other hand, is insensitive to frequency errors, which is an attractive feature, but it requires the quantum system to evolve adiabatically. Adiabatic evolutions require a long run time \cite{tong,cp2,optim0}, which makes STIRAP vulnerable to environment-induced decoherence \cite{STI4}. Recently, shortcuts to adiabaticity (STA) \cite{NSTA1,NSTA2}, which include transitionless quantum driving, invariant-based inverse engineering and fast-forward approaches, have been used to speed up adiabatic population transfers \cite{YXX,bzhou1,Daems,chenxi20122,BZFL,FZB,song2016,dress2016,xia2015,xia2014,xiay2017,duyx2016,chenxi2016,chenxi2012,chenxi2010,inv1,optim1,DSEP1,DSEP2,DSEP3}
and to design stimulated Raman exact passage \cite{ssp2,noptim2,liubj1,liubj0,XKS,xl22}. However, when such STA based schemes are used in spin systems, frequency errors still degrade their performance.
In this paper, we propose a robust scheme for population transfer between uncoupled or weakly coupled spin states. Our scheme combines invariant-based inverse engineering of STA with the geometric formalism for robust quantum control. The geometric formalism for robust quantum control \cite{gh5,gh13,gh12,gh2,gh3,gh4,gh1} is used to suppress the frequency errors resulting from the fluctuation of the magnetic field and, for simplicity, will sometimes be referred to as geometric formalism in the following. Our scheme has two attractive features: fast implementation and robustness against frequency errors. Since fast implementation reduces the influence of decoherence and the fluctuation of the magnetic field is the dominant noise in spin systems, our scheme has the potential to transfer the population of uncoupled or weakly coupled spin states accurately. Besides these two features, our scheme is also experimentally friendly. The control parameters of our driving Hamiltonian can be designed by analyzing the curvature and torsion of a three-dimensional space curve that is derived using the geometric formalism. We demonstrate the specific realization procedure of our scheme by numerically simulating the ground-state spin transfer in the $^{15}$N nitrogen vacancy center. We also compare our scheme with SRT, STIRAP and conventional STA based schemes, respectively, and the results show that our scheme is advantageous compared with these previous ones.
\section{theoretical framework}
\begin{figure}
\includegraphics[scale=0.16]{fig1.pdf}
\caption{$V$-type and $\Lambda$-type three-level systems and the corresponding driving fields applied to them. $\Omega_{-}(t)$ and $\Omega_{+}(t)$ are the Rabi frequencies of the pump and Stokes fields, $\phi_{-}(t)$ and $\phi_{+}(t)$ are their phases, and $\omega_{-}(t)$ and $\omega_{+}(t)$ are their frequencies. $\Delta_{-}(t)$ and $\Delta_{+}(t)$ are the detunings from the resonances.}
\label{Fig1}
\end{figure}
We illustrate our theoretical framework in this section. Consider a three-level spin system with states $\ket{-1}$, $\ket{+1}$ and $\ket{0}$. While the transitions $\ket{0}\leftrightarrow\ket{\pm1}$ are dipole coupled, the transition $\ket{-1}\leftrightarrow\ket{+1}$ is dipole forbidden. The system we consider can thus have either a $V$ structure, depicted in Fig. \ref{Fig1} (a), or a $\Lambda$ structure, depicted in Fig. \ref{Fig1} (b). Population transfer between the spin states $\ket{-1}$ and $\ket{+1}$ needs to be implemented, but since the transition $\ket{-1}\leftrightarrow\ket{+1}$ is dipole forbidden, the state $\ket{0}$ is used as an intermediate state. Our theoretical framework is suitable for both the $V$-type and $\Lambda$-type spin systems, and without loss of generality, we use the $V$-type spin system for illustration. Our aim is to realize accurate population transfer from $\ket{-1}$ to $\ket{+1}$ in the presence of noise. To achieve this aim, we consider the following form of Hamiltonian
\begin{align}\label{eq2}
H(t)=\left(
\begin{array}{ccc}
\Delta(t) & \frac{1}{\sqrt{2}}\Omega(t)e^{-i\phi(t)} & 0\\
\frac{1}{\sqrt{2}}\Omega(t)e^{i\phi(t)} & 0 & \frac{1}{\sqrt{2}}\Omega(t)e^{-i\phi(t)}\\
0 & \frac{1}{\sqrt{2}}\Omega(t)e^{i\phi(t)} & -\Delta(t)\\
\end{array}
\right).
\end{align}
The Hamiltonian $H(t)$ is written in the basis $\{\ket{-1}, \ket{0}, \ket{+1}\}$, and it can be realized by applying the driving fields shown in Fig. \ref{Fig1} (a) to the system. In Fig. \ref{Fig1} (a), $\Omega_{-}(t)$ and $\Omega_{+}(t)$ are the Rabi frequencies of the pump and Stokes fields, respectively, and we assume that they have the same envelope, $\Omega_{-}(t)=\Omega_{+}(t)=\sqrt{2} \Omega(t)$, where the constant $\sqrt{2}$ is just for the convenience of subsequent calculations. $\phi_{-}(t)$ and $\phi_{+}(t)$ are the phases of these two driving fields, and we take them opposite at all times, that is, $\phi_{-}(t)=-\phi_{+}(t)=\phi(t)$. The detunings of these two driving fields are denoted by $\Delta_{-}(t)=(E_{-1}-E_{0})-\omega_{-}(t)$ and $\Delta_{+}(t)=(E_{+1}-E_{0})-\omega_{+}(t)$, respectively, and we take them opposite as well, $\Delta_{-}(t)=-\Delta_{+}(t)=\Delta(t)$, where $\omega_{-}(t)$ and $\omega_{+}(t)$ are the corresponding frequencies of the pump and Stokes fields, and $E_{-1}$, $E_{0}$ and $E_{+1}$ are the bare-basis state energies. It is worth noting that our scheme also applies to effective three-level systems such as two interacting spins \cite{inv1,DSEP1}.
Note that as $\Omega(t)$, $\Delta(t)$ and $\phi(t)$ are chosen to be different functions of time $t$, the Hamiltonian $H(t)$ in Eq.~(\ref{eq2}) changes accordingly. What we will do is give an approach to setting the functions $\Omega(t)$, $\Delta(t)$ and $\phi(t)$ such that the population transfer from $\ket{-1}$ to $\ket{+1}$ remains accurate even under the influence of noise. The proposed approach combines invariant-based inverse engineering and the geometric formalism for robust quantum control. Specifically, the inverse engineering in our approach is inspired by the geometric formalism. The procedure of our approach is as follows. We first design the evolution operator $U(t,0)$ of the quantum system with the help of dynamical invariants. Then we analyse the influence of noise on the evolution operator $U(t,0)$ with Dyson series. During this process, the geometric formalism is introduced and it turns the population transfer problem into a space curve design problem. The information about how to set the control parameters $\Omega(t)$, $\Delta(t)$ and $\phi(t)$ is obtained by inversely calculating the curvature and torsion of the space curve. In the following, we illustrate our approach in detail.
As illustrated above, we first design the evolution operator $U(t,0)$ induced by the Hamiltonian $H(t)$. Since directly solving the time-dependent Schr\"{o}dinger equation $i\partial \ket{\psi(t)}/\partial t =H(t)\ket{\psi(t)}$ is difficult, we will use the dynamical invariant $I(t)$ related to $H(t)$ to parameterize the evolution operator $U(t,0)$, i.e., to express the evolution operator $U(t,0)$ with some other parameters instead of $\Omega(t)$, $\Delta(t)$ and $\phi(t)$ in the Hamiltonian $H(t)$. Note that although we do not give the expression of $U(t,0)$ in terms of $\Omega(t)$, $\Delta(t)$ and $\phi(t)$, the parameterized form is sufficient for our subsequent discussion.
To parameterize the evolution operator $U(t,0)$, we rewrite the Hamiltonian $H(t)$ by expanding it with spin-1 angular momentum operators,
\begin{align}
H(t)=\Omega(t)\cos\phi(t)K_{x}+\Omega(t)\sin\phi(t)K_{y}+\Delta(t)K_{z},
\label{eq3}
\end{align}
where the three spin-1 angular momentum operators in the basis $\{\ket{-1}, \ket{0}, \ket{+1}\}$ read
\begin{align}
K_{x}=\frac{1}{\sqrt{2}}\left(
\begin{array}{ccc}
0 & 1 & 0\\
1 & 0 & 1\\
0 & 1 & 0\\
\end{array}
\right),
K_{y}=\frac{1}{\sqrt{2}}\left(
\begin{array}{ccc}
0 & -i & 0\\
i & 0 & -i\\
0 & i & 0\\
\end{array}
\right),
K_{z}=\left(
\begin{array}{ccc}
1 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & -1\\
\end{array}
\right).
\label{eq4}
\end{align}
One can verify that the operators $K_{x}$, $K_{y}$ and $K_{z}$ form a closed algebra and this algebra is isomorphic to the Lie algebra of SU(2), i.e., their commutation relations satisfy
\begin{align}
[K_{x},K_{y}]=iK_{z},[K_{y},K_{z}]=iK_{x},[K_{z},K_{x}]=iK_{y}.
\label{eq5}
\end{align}
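The commutation relations (\ref{eq5}), and the fact that the expansion (\ref{eq3}) reproduces the matrix form (\ref{eq2}), can be checked numerically; in the sketch below the control values for $\Omega$, $\Delta$ and $\phi$ are arbitrary sample numbers:

```python
import numpy as np

s = 1 / np.sqrt(2)
# Spin-1 angular momentum operators in the basis {|-1>, |0>, |+1>}.
Kx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ky = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Kz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def comm(A, B):
    return A @ B - B @ A

# Rebuild H(t) at a sample time from the K-expansion and from the matrix form.
Om, De, ph = 1.0, 0.3, 0.7   # arbitrary illustrative values
H_expanded = Om * np.cos(ph) * Kx + Om * np.sin(ph) * Ky + De * Kz
H_matrix = np.array([
    [De, s * Om * np.exp(-1j * ph), 0],
    [s * Om * np.exp(1j * ph), 0, s * Om * np.exp(-1j * ph)],
    [0, s * Om * np.exp(1j * ph), -De],
])
```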
From the above, one can see that the Hamiltonian $H(t)$ in Eq.~(\ref{eq2}) possesses SU(2) dynamical symmetry. Because of this feature, the relevant dynamical invariant $I(t)$, such that $dI(t)/dt\equiv\partial I(t)/\partial t-i[I(t),H(t)]=0$, can be constructed as \cite{inv1,inv2}
\begin{align}
I(t)=&\Omega_{0}[\cos\beta(t)\sin\theta(t) K_{x}+\sin\beta(t)\sin\theta(t) K_{y} \nonumber \\
&+\cos\theta(t) K_{z}],
\label{eq51}
\end{align}
where $\Omega_{0}$ is an arbitrary constant with units of frequency, guaranteeing that $I(t)$ has the same dimension of energy as $H(t)$, and $\beta(t)$ and $\theta(t)$ are time-dependent parameters related to $\Omega(t)$, $\Delta(t)$ and $\phi(t)$. By solving the equation $I(t)\ket{\varphi_{n}(t)}=\lambda_{n}\ket{\varphi_{n}(t)}$, one can readily get the eigenvalues and eigenstates of $I(t)$. The eigenvalues are $\lambda_{1}=\Omega_{0}$, $\lambda_{2}=0$, $\lambda_{3}=-\Omega_{0}$, and the corresponding eigenstates are
\begin{align}\label{eq52}
\ket{\varphi_{1}(t)}&=\left(
\begin{array}{c}
\cos^{2}\frac{\theta(t)}{2}e^{-i\beta(t)}\\
\frac{1}{\sqrt{2}}\sin\theta(t)\\
\sin^{2}\frac{\theta(t)}{2}e^{i\beta(t)}\\
\end{array}
\right),\notag\\
\ket{\varphi_{2}(t)}&=\left(
\begin{array}{c}
-\frac{1}{\sqrt{2}}\sin\theta(t) e^{-i\beta(t)}\\
\cos\theta(t)\\
\frac{1}{\sqrt{2}}\sin\theta(t) e^{i\beta(t)}\\
\end{array}
\right),\\
\ket{\varphi_{3}(t)}&=\left(
\begin{array}{c}
\sin^{2}\frac{\theta(t)}{2}e^{-i\beta(t)}\\
-\frac{1}{\sqrt{2}}\sin\theta(t)\\
\cos^{2}\frac{\theta(t)}{2}e^{i\beta(t)}\\
\end{array}
\right).\notag
\end{align}
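A short numerical check (with arbitrary sample values of $\Omega_0$, $\beta$ and $\theta$) confirms that the states in Eq.~(\ref{eq52}) are normalized eigenstates of $I(t)$ with the quoted eigenvalues:

```python
import numpy as np

s = 1 / np.sqrt(2)
Kx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ky = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Kz = np.diag([1.0, 0.0, -1.0]).astype(complex)

Omega0, beta, theta = 1.0, 0.7, 1.1   # arbitrary illustrative values
Iop = Omega0 * (np.cos(beta) * np.sin(theta) * Kx
                + np.sin(beta) * np.sin(theta) * Ky
                + np.cos(theta) * Kz)

# The eigenstates of the invariant, written in the basis {|-1>, |0>, |+1>}.
c2, s2 = np.cos(theta / 2) ** 2, np.sin(theta / 2) ** 2
sn = np.sin(theta) / np.sqrt(2)
phi1 = np.array([c2 * np.exp(-1j * beta), sn, s2 * np.exp(1j * beta)])
phi2 = np.array([-sn * np.exp(-1j * beta), np.cos(theta), sn * np.exp(1j * beta)])
phi3 = np.array([s2 * np.exp(-1j * beta), -sn, c2 * np.exp(1j * beta)])
```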
According to the Lewis-Riesenfeld theory, the solution of the time-dependent Schr\"{o}dinger equation $i\partial \ket{\psi(t)}/\partial t =H(t)\ket{\psi(t)}$ can be expanded in the orthonormal dynamical modes $e^{i\alpha_{n}(t)}\ket{\varphi_{n}(t)}$ \cite{inv2}, that is,
\begin{align}
\ket{\psi(t)}=\sum_{n=1}^3C_{n}e^{i\alpha_{n}(t)}\ket{\varphi_{n}(t)}.
\label{eq53}
\end{align}
In the above, $C_{n}$ are time-independent amplitudes, $\ket{\varphi_{n}(t)}$ are the eigenstates of the invariant $I(t)$, and the phases
\begin{align}
\alpha_{n}(t)=\int^{t}_{0}\bra{\varphi_{n}(t^{\prime})}i\frac{\partial}{\partial t^{\prime}}-H(t^{\prime})\ket{\varphi_{n}(t^{\prime})}dt^{\prime}.
\label{eq54}
\end{align}
Moreover, direct calculation shows that $\alpha_{2}(t)=0$ and that $\alpha_{1}(t)$ is always equal to $-\alpha_{3}(t)$, i.e., $\alpha_{1}(t)=-\alpha_{3}(t)=\alpha(t)$ with $\alpha(t)$ being the common value. From Eq.~(\ref{eq53}), one can see that the evolution driven by $H(t)$ can be divided into three orthonormal dynamical modes $e^{i\alpha_{n}(t)}\ket{\varphi_{n}(t)}$ with $n=1,2,3$. Since analysing a single dynamical mode is easier than analysing the interference of these modes, and since our aim is to realize the population transfer from $\ket{-1}$ to $\ket{+1}$, we set $\ket{\varphi_{1}(0)}=(1,0,0)^{\mathcal {T}}$, $\ket{\varphi_{2}(0)}=(0,1,0)^{\mathcal {T}}$ and $\ket{\varphi_{3}(0)}=(0,0,1)^{\mathcal {T}}$, planning to let the transition from $\ket{-1}$ to $\ket{+1}$ evolve along the first dynamical mode. Note that $\mathcal {T}$ denotes matrix transposition and the states $\ket{\varphi_{n}(0)}$ are written in the basis $\{\ket{-1}, \ket{0}, \ket{+1}\}$. With the calculated values of $\alpha_{n}(t)$ and the setting of $\ket{\varphi_{n}(0)}$, the evolution operator $U(t,0)$ can be written as
\begin{align}\label{eq6}
\begin{split}
U(t,0)&=\sum_{n=1}^3e^{i\alpha_{n}(t)}\ket{\varphi_{n}(t)}\bra{\varphi_{n}(0)}\\
&=\left(\begin{array}{ccc}
\cos^{2}\frac{\theta}{2}e^{-i(\beta-\alpha)}& -\frac{1}{\sqrt{2}}\sin\theta e^{-i\beta} & \sin^{2}\frac{\theta}{2}e^{-i(\beta+\alpha)}\\
\frac{1}{\sqrt{2}}\sin\theta e^{i\alpha} & \cos\theta & -\frac{1}{\sqrt{2}}\sin\theta e^{-i\alpha}\\
\sin^{2}\frac{\theta}{2}e^{i(\beta+\alpha)} & \frac{1}{\sqrt{2}}\sin\theta e^{i\beta} & \cos^{2}\frac{\theta}{2}e^{i(\beta-\alpha)}\\
\end{array}
\right),
\end{split}
\end{align}
where $\alpha$, $\beta$ and $\theta$ have been used to represent $\alpha(t)$, $\beta(t)$ and $\theta(t)$ for conciseness.
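One can verify numerically that the operator in Eq.~(\ref{eq6}) is unitary for any $\alpha$, $\beta$, $\theta$ and that, once $\theta=\pi$, it maps $\ket{-1}$ onto $\ket{+1}$ up to a phase; a minimal sketch with arbitrary sample values:

```python
import numpy as np

def U(alpha, beta, theta):
    # Parameterized evolution operator U(t,0) in the basis {|-1>, |0>, |+1>}.
    c2, s2 = np.cos(theta / 2) ** 2, np.sin(theta / 2) ** 2
    sn = np.sin(theta) / np.sqrt(2)
    return np.array([
        [c2 * np.exp(-1j * (beta - alpha)), -sn * np.exp(-1j * beta), s2 * np.exp(-1j * (beta + alpha))],
        [sn * np.exp(1j * alpha), np.cos(theta), -sn * np.exp(-1j * alpha)],
        [s2 * np.exp(1j * (beta + alpha)), sn * np.exp(1j * beta), c2 * np.exp(1j * (beta - alpha))],
    ])

Ut = U(0.4, 1.3, 0.9)                              # arbitrary sample values
out = U(0.4, 1.3, np.pi) @ np.array([1.0, 0, 0])   # theta(T) = pi acting on |-1>
```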
Until now, we have parameterized the evolution operator $U(t,0)$ by expressing it with $\alpha(t)$, $\beta(t)$ and $\theta(t)$. We will next analyse the influence of noise on the population transfer with the help of Dyson series and, based on the analysis, give an approach to realizing accurate population transfer. Specifically, we expand the practical final state to second order with the help of Dyson series and $U(t,0)$ in Eq.~(\ref{eq6}), and define a space curve which describes the evolution of the system.
For the three-level spin system, the dominant noise is the fluctuation of the magnetic field, which results from the influence of environments and the imperfection of magnetic control. Generally, the fluctuation of the magnetic field is much slower than the typical operation time, allowing one to describe it with a quasistatic noise model. Moreover, due to the linear dependence of the energies of $\ket{\pm1}$ on the magnetic field, the influence of the fluctuation of the magnetic field can be further seen as frequency errors. Taking this dominant noise into account, the Hamiltonian of the three-level spin system becomes
\begin{align}
H^{\prime}(t)=H(t)+\delta K_{z},
\label{eq 7}
\end{align}
where $\delta$ represents the strength of the frequency errors, which is small compared to the strength of the driving fields. In this case, the term $\delta K_{z}$ can be seen as a perturbation to the Hamiltonian $H(t)$. By using Dyson series, we expand the practical final state $\ket{\psi^{\prime}(T)}$ to second order,
\begin{align} \label{eq 71}
\ket{\psi^{\prime}(T)}&=\ket{\psi(T)}-i\delta\int^{T}_{0}dtU(T, t)K_{z}\ket{\psi(t)}\\
&-\delta^{2}\int^{T}_{0}dt\int^{t}_{0}dt^{\prime}U(T, t)K_{z}U(t, t^{\prime})K_{z}\ket{\psi(t^{\prime})}+\cdots,\notag
\end{align}
where $\ket{\psi(t)}$ is the unperturbed state and $U(s,t)=\sum_{n}\ket{\psi_{n}(s)}\bra{\psi_{n}(t)}$ is the unperturbed evolution operator, with $\ket{\psi_{n}(t)}=e^{i\alpha_{n}(t)}\ket{\varphi_{n}(t)}$ representing the orthonormal dynamical modes. Recall that we plan to let the transition from states $\ket{-1}$ to $\ket{+1}$ evolve along the first dynamical mode, which means the unperturbed state $\ket{\psi(t)}$ would be $\ket{\psi_{1}(t)}$. With the unperturbed final state described by $\ket{\psi_{1}(T)}$ and the practical final state $\ket{\psi^{\prime}(T)}$ described by Eq.~(\ref{eq 71}), the quality of the population transfer can be assessed by the fidelity $F=|\langle\psi_{1}(T)|\psi^{\prime}(T)\rangle|^{2}$ and it reads
\begin{align} \label{eq 72}
F\approx1-\delta^{2}\sum_{n\neq 1}\big|\int_{0}^{T}dt
\bra{\psi_{1}(t)}K_{z}\ket{\psi_{n}(t)}\big|^{2},
\end{align}
where the second term is the noise term, reflecting the influence of the frequency errors on the population transfer. From the above equation, one can see that if the noise term $\sum_{n\neq 1}\big|\int_{0}^{T}dt\bra{\psi_{1}(t)}K_{z}\ket{\psi_{n}(t)}\big|^{2}$ can be suppressed, the fidelity will approach unity and accurate population transfer in the presence of noise can be realized. For the convenience of subsequent discussions, we define an operator $m(t)$ as
\begin{align} \label{eq 75}
m(t)=\int^{t}_{0}U^{\dag}(t^{\prime},0)K_{z}U(t^{\prime},0)dt^{\prime}.
\end{align}
One can verify that if $m(T)=0$, the noise term $\sum_{n\neq 1}\big|\int_{0}^{T}dt\bra{\psi_{1}(t)}K_{z}\ket{\psi_{n}(t)}\big|^{2}$ will be suppressed. Substituting Eqs.~(\ref{eq4}) and (\ref{eq6}) into Eq.~(\ref{eq 75}), one finds that the operator $m(t)$ can also be expanded with spin-1 angular momentum operators, that is
\begin{align}
m(t)=x(t)K_{x}+y(t)K_{y}+z(t)K_{z}=\mathbf{r}(t)\cdot\hat{K}.
\label{eq10}
\end{align}
In the above, $x(t)=\frac{1}{2}\text{Tr}(K_xm(t))$, $y(t)=\frac{1}{2}\text{Tr}(K_ym(t))$, $z(t)=\frac{1}{2}\text{Tr}(K_zm(t))$ are the expansion coefficients, $K_{x}$, $K_{y}$, $K_{z}$ are spin-1 angular momentum operators described by Eq. (\ref{eq4}). Correspondingly, $\mathbf{r}(t)=(x(t), y(t), z(t))$ is the coefficient vector and $\hat{K}=(K_{x},K_{y},K_{z})$ is the vector composed of spin-1 angular momentum operators. The above equation tells us that the operator $m(t)$ can be totally described by the three-dimensional space curve defined by
\begin{align}\label{eq101}
\mathbf{r}(t)=x(t)\hat{e_x}+ y(t)\hat{e_y}+ z(t)\hat{e_z},
\end{align}
because $K_x$, $K_y$ and $K_z$ are fixed matrices, where $\hat{e_x}$, $\hat{e_y}$ and $\hat{e_z}$ are orthonormal vectors in the three-dimensional Euclidean space.
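The expansion (\ref{eq10}) and the trace formulas for the coefficients can be illustrated numerically. In the sketch below the parameter histories $\alpha(t)$, $\beta(t)$, $\theta(t)$ are arbitrary (not an optimized protocol); the Riemann sum for $m(t)$ nevertheless stays exactly in the span of $K_x$, $K_y$, $K_z$:

```python
import numpy as np

s = 1 / np.sqrt(2)
Kx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ky = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Kz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def U(alpha, beta, theta):
    # Parameterized evolution operator U(t,0).
    c2, s2 = np.cos(theta / 2) ** 2, np.sin(theta / 2) ** 2
    sn = np.sin(theta) / np.sqrt(2)
    return np.array([
        [c2 * np.exp(-1j * (beta - alpha)), -sn * np.exp(-1j * beta), s2 * np.exp(-1j * (beta + alpha))],
        [sn * np.exp(1j * alpha), np.cos(theta), -sn * np.exp(-1j * alpha)],
        [s2 * np.exp(1j * (beta + alpha)), sn * np.exp(1j * beta), c2 * np.exp(1j * (beta - alpha))],
    ])

T, N = 1.0, 400
ts = np.linspace(0.0, T, N)
dt = ts[1] - ts[0]
m = np.zeros((3, 3), dtype=complex)
for t in ts:
    # Arbitrary smooth parameter histories for illustration only.
    Ut = U(2.0 * t, 0.5 * np.sin(np.pi * t / T), np.pi * t / T)
    m += Ut.conj().T @ Kz @ Ut * dt   # Riemann sum for m(T)

# Expansion coefficients from the trace formulas.
x = 0.5 * np.trace(Kx @ m).real
y = 0.5 * np.trace(Ky @ m).real
z = 0.5 * np.trace(Kz @ m).real
```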
The space curve $\mathbf{r}(t)$ defined above induces the geometric formalism for robust population transfer, and by using this formalism we obtain the main result of our paper. To realize robust population transfer, one needs to satisfy two conditions. Condition (i): the noise term in Eq.~(\ref{eq 72}) can be suppressed, i.e., $m(T)=0$. Condition (ii): the desired population transfer is realized in the ideal case, i.e., the transfer from $\ket{-1}$ to $\ket{+1}$ can be driven by the Hamiltonian $H(t)$. It turns out that both conditions can be turned into conditions on the space curve $\mathbf{r}(t)$. More importantly, by further calculating the curvature and torsion of the space curve $\mathbf{r}(t)$, one can get $\Omega(t)$, $\Delta(t)$ and $\phi(t)$.
We first show that condition (i) can turn into a condition on the space curve $\mathbf{r}(t)$. Since $m(0)=0$, the space curve starts at the origin $\mathbf{r}(0)=0$. According to Eq.~(\ref{eq 75}), the condition $m(T)=0$ turns into $\mathbf{r}(T)=0$. This condition tells us that if the space curve $\mathbf{r}(t)$ is closed, the noise term $\sum_{n\neq 1}\big|\int_{0}^{T}dt\bra{\psi_{1}(t)}K_{z}\ket{\psi_{n}(t)}\big|^{2}$ can be suppressed. We next discuss condition (ii). To this end, we apply the parameterized evolution operator $U(t,0)$ in Eq.~(\ref{eq6}) to the initial state $\ket{-1}$. As desired, the system evolves along the first dynamical mode $\ket{\psi_{1}(t)}$, that is,
\begin{align}
\ket{\psi(t)}=\ket{\psi_{1}(t)}=e^{i\alpha(t)}\left(
\begin{array}{ccc}
\cos^{2}\frac{\theta(t)}{2}e^{-i\beta(t)}\\
\frac{1}{\sqrt{2}}\sin\theta(t) \\
\sin^{2}\frac{\theta(t)}{2}e^{i\beta(t)} \\
\end{array}
\right).
\label{eq19}
\end{align}
To fulfill the population transfer, the final state $\ket{\psi(T)}$ should be $\ket{+1}$. This gives us the condition
\begin{align}
\theta(0)=0,~~~~~\theta(T)=\pi.
\label{eq20}
\end{align}
The above condition does not impose conditions on $\mathbf{r}(t)$ itself, but it constrains $\dot{\mathbf{r}}(t)$, the derivative of $\mathbf{r}(t)$ with respect to time $t$. To see this, one can use Eqs. (\ref{eq 75}-\ref{eq101}) to calculate $\dot{\mathbf{r}}(t)$, which reads
\begin{align}
\dot{\mathbf{r}}(t)=-\sin\theta(t)\cos\alpha(t)\hat{e_x}-\sin\theta(t)\sin\alpha(t)\hat{e_y}
+\cos\theta(t)\hat{e_z}.
\label{eq11}
\end{align}
The vector $\dot{\mathbf{r}}(t)$ has unit length, implying that $\dot{\mathbf{r}}(t)$ is the unit tangent vector of the curve and that $\mathbf{r}(t)$ is the parametrization of the curve by arc length. Substituting Eq.~(\ref{eq20}) into Eq. (\ref{eq11}), one can see that the condition in Eq.~(\ref{eq20}) turns into
\begin{align}
\dot{\mathbf{r}}(0)=(0,0,1),~~~\dot{\mathbf{r}}(T)=(0,0,-1),
\label{eq21}
\end{align}
which means that the tangent vectors of the space curve $\mathbf{r}(t)$ at $t=0$ and $t=T$ are fixed.
In the previous paragraph, we have shown conditions (i) and (ii) can turn into conditions on the space curve $\mathbf{r}(t)$, that is, the space curve $\mathbf{r}(t)$ must be closed and the tangent vectors of the space curve $\mathbf{r}(t)$ at $t=0$ and $t=T$ should be $\dot{\mathbf{r}}(0)=(0,0,1)$ and $\dot{\mathbf{r}}(T)=(0,0,-1)$. In the following, we will show that by calculating the curvature and torsion of the space curve, one can get $\Omega(t)$, $\Delta(t)$ and $\phi(t)$.
Because Eq.~(\ref{eq 75}) contains the evolution operator $U(t,0)$, $m(t)$ contains all the information to describe the evolution. This point makes it possible to obtain $\Omega(t)$, $\Delta(t)$ and $\phi(t)$ by calculating the curvature and torsion of the space curve $\mathbf{r}(t)$. By calculation, the derivatives of $m(t)$ are
\begin{align}
\dot{m}(t)=\dot{\mathbf{r}}(t)\cdot\hat{K}=U^{\dag}(t,0)K_{z}U(t,0),
\label{eq12}
\end{align}
\begin{align}
\ddot{m}(t)=\ddot{\mathbf{r}}(t)\cdot\hat{K}=iU^{\dag}(t,0)[H(t),K_{z}]U(t,0),
\label{eq13}
\end{align}
\begin{align}\label{eq14}
\begin{split}
\dddot{m}(t)=\dddot{\mathbf{r}}(t)\cdot\hat{K}=-&U^{\dag}(t,0)H(t)[H(t),K_{z}]U(t,0)\\
+&iU^{\dag}(t,0)[\dot{H}(t),K_{z}]U(t,0)\\
+& U^{\dag}(t,0)[H(t),K_{z}]H(t)U(t,0).
\end{split}
\end{align}
Substituting Eq. (\ref{eq2}) into Eq. (\ref{eq13}), one can get that $\Omega(t)$ is equal to the curvature $\kappa(t)=\|\ddot{\mathbf{r}}(t)\|$ of the space curve $\mathbf{r}(t)$, i.e.,
\begin{align}\label{eq15}
\Omega(t)=\|[H(t),K_{z}]\|_{F}=\|\ddot{\mathbf{r}}(t)\|.
\end{align}
In the above, the scaled Frobenius norm of matrices is defined as $\|m\|_{F}=\sqrt{\sum^{n}_{i,j}|m_{ij}|^{2}}/\sqrt{2}$, which is invariant under unitary equivalence transformations of $m$.
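The identities $\dot{m}(t)=U^{\dagger}K_{z}U$ and $\ddot{m}(t)=iU^{\dagger}[H(t),K_{z}]U$, together with $\Omega(t)=\|[H(t),K_{z}]\|_{F}$, can be checked numerically; the sketch below uses constant, purely illustrative control values, so that $U(t,0)=e^{-iHt}$:

```python
import numpy as np

s = 1 / np.sqrt(2)
Kx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ky = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Kz = np.diag([1.0, 0.0, -1.0]).astype(complex)

Om, De, ph = 1.3, 0.4, 0.9   # constant controls, purely illustrative
H = Om * np.cos(ph) * Kx + Om * np.sin(ph) * Ky + De * Kz

def U(t):
    # exp(-i H t) for the (here time-independent) Hamiltonian, via eigendecomposition.
    e, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * e * t)) @ V.conj().T

def mdot(t):
    Ut = U(t)
    return Ut.conj().T @ Kz @ Ut     # first derivative of m(t)

t, h = 0.8, 1e-5
mddot_fd = (mdot(t + h) - mdot(t - h)) / (2 * h)    # numerical second derivative
Ut = U(t)
mddot = 1j * Ut.conj().T @ (H @ Kz - Kz @ H) @ Ut   # i U^dag [H, Kz] U

comm = H @ Kz - Kz @ H
frob = np.sqrt((np.abs(comm) ** 2).sum() / 2)       # scaled Frobenius norm of [H, Kz]
```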
Next, using the expressions of $\dot{m}(t)$, $\ddot{m}(t)$ and $\dddot{m}(t)$ in Eqs. (\ref{eq12}-\ref{eq14}), one can obtain
\begin{align}\label{eq16}
\dot{\phi}(t)-\Delta(t)=-i\frac{\textmd{Tr}\{\dot{m}(t)\ddot{m}(t)\dddot{m}(t)\}}{\|[\dot{m}(t),\ddot{m}(t)]\|^{2}_{F}}.
\end{align}
The above equation can be rewritten by using the identities of spin-1 angular momentum operators:
\begin{align}\label{eq17}
\begin{split}
\dot{\phi}(t)-\Delta(t)=&-i\frac{\textmd{Tr}\{\dot{m}(t)\ddot{m}(t)\dddot{m}(t)\}}{\|[\dot{m}(t),\ddot{m}(t)]\|^{2}_{F}}\\
=&-i\frac{\frac{1}{2}\textmd{Tr}\{[\dot{m}(t),\ddot{m}(t)]\dddot{m}(t)\}}{\|[\dot{m}(t),\ddot{m}(t)]\|^{2}_{F}}\\
=&-i\frac{\frac{1}{2}\textmd{Tr}\{[i(\dot{\mathbf{r}}(t)\times\ddot{\mathbf{r}}(t))\cdot\hat{K}](\dddot{\mathbf{r}}(t)\cdot\hat{K})\}}{\|i(\dot{\mathbf{r}}(t)\times\ddot{\mathbf{r}}(t))\cdot\hat{K}\|^{2}_{F}} \\
=&\frac{(\dot{\mathbf{r}}(t)\times\ddot{\mathbf{r}}(t))\cdot\dddot{\mathbf{r}}(t)}{\|\dot{\mathbf{r}}(t)\times\ddot{\mathbf{r}}(t)\|^{2}} \end{split}
\end{align}
which is just the torsion $\tau(t)$ of the space curve.
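As an illustration of the curvature and torsion formulas, one can evaluate them on a simple unit-speed helix (an arbitrary test curve, not the closed curve required by the transfer protocol); for $\mathbf{r}(t)=(a\cos\omega t, a\sin\omega t, bt)$ with $a^{2}\omega^{2}+b^{2}=1$ the curvature is $a\omega^{2}$ and the torsion is $b\omega$:

```python
import numpy as np

# Unit-speed helix r(t) = (a cos wt, a sin wt, b t) with a^2 w^2 + b^2 = 1.
a, w = 0.6, 1.0
b = np.sqrt(1 - (a * w) ** 2)

def derivs(t):
    r1 = np.array([-a * w * np.sin(w * t), a * w * np.cos(w * t), b])
    r2 = np.array([-a * w**2 * np.cos(w * t), -a * w**2 * np.sin(w * t), 0.0])
    r3 = np.array([a * w**3 * np.sin(w * t), -a * w**3 * np.cos(w * t), 0.0])
    return r1, r2, r3

t = 0.37
r1, r2, r3 = derivs(t)
Omega = np.linalg.norm(r2)                       # curvature gives Omega(t)
cross = np.cross(r1, r2)
tau = np.dot(cross, r3) / np.dot(cross, cross)   # torsion gives phidot - Delta
```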
\section{DISCUSSION OF ROBUSTNESS}
While the general idea of our paper was given in the preceding section, we now give a concrete example to demonstrate its feasibility. Specifically, we demonstrate the realization procedure of our geometric formalism based robust population transfer scheme with the ground-state population transfer in the $^{15}$N nitrogen vacancy center. We also numerically simulate the performance of our scheme in practical scenarios and compare it with those of SRT, STIRAP and conventional STA based schemes.
\begin{figure}[htb]
\includegraphics[scale=0.18]{fig2.pdf}
\caption{The energy diagram of the nitrogen vacancy center. Our aim is to drive the transition between states $\ket{m_{s}=-1}\leftrightarrow\ket{m_{s}=+1}$ with $\ket{m_{s}=0}$ as an intermediate state. $\delta$ describes the frequency errors resulting from the fluctuation of the magnetic field.}
\label{Fig2}
\end{figure}
\begin{figure}[htb]
\includegraphics[scale=0.5]{fig3.pdf}
\caption{The curve $\textbf{r}(d)$ in the three-dimensional Euclidean space. The color changes from light to dark with the parameter increasing.}
\label{Fig3}
\end{figure}
\begin{figure*}[htp]
\includegraphics[scale=0.4]{fig4a.pdf}
\includegraphics[scale=0.4]{fig4b.pdf}
\includegraphics[scale=0.4]{fig4c.pdf}
\caption{The control parameters of the driving Hamiltonian in our scheme. (a) The Rabi frequencies of the pump and Stokes fields $\Omega_{-}(t)=\Omega_{+}(t)=\sqrt{2} \Omega(t)$. (b) The detunings of the pump and Stokes fields $\Delta_{-}(t)=-\Delta_{+}(t)=\Delta(t)$. (c) The phases of the pump and Stokes fields $\phi_{-}(t)=-\phi_{+}(t)=\phi(t)$. The unit time $T$ is set as 2.116 $\mu$s corresponding to the arc length of the curve $\textbf{r}(t)$.}
\label{Fig4}
\end{figure*}
Consider a $^{15}$N nitrogen vacancy center in a high-purity type IIa diamond whose host $^{15}\text{N}$ nuclear spin is polarized \cite{nuclpolar}. This system has a spin-triplet ground state with sublevels $\ket{m_{s}=0}$ and $\ket{m_{s}=\pm1}$. The degeneracy between $\ket{m_{s}=\pm1}$ can be lifted by applying an external magnetic field $B_{z}$ along the symmetry axis of the nitrogen vacancy center, so the ground state of the nitrogen vacancy center can be described by a $V$-type system, as shown in Fig.~\ref{Fig2}, where the transitions $\ket{m_{s}=0}\leftrightarrow\ket{m_{s}=\pm1}$ are dipole coupled and the transition $\ket{m_{s}=-1}\leftrightarrow\ket{m_{s}=+1}$ is dipole forbidden. Our aim is to realize the population transfer from $\ket{m_{s}=-1}$ to $\ket{m_{s}=+1}$ with the help of the state $\ket{m_{s}=0}$.
The fidelity of the population transfer in the nitrogen vacancy center is limited mainly by systematic magnetic errors and dephasing; these are the dominant noise sources for the nitrogen vacancy center and can be uniformly described by Eq.~(\ref{eq 7}). Systematic magnetic errors result from the imperfect control of the magnetic field used to split the states $\ket{m_{s}=\pm1}$. Dephasing is principally caused by the hyperfine interaction with the surrounding $^{13}\text{C}$ nuclear spin bath \cite{noise1,noise2,noise3,noise44,noise31}, which can be described by a random local magnetic field (Overhauser field). Generally, the dynamical fluctuation of the local Overhauser field driven by pairwise nuclear-spin flip-flops is much slower than the typical operation time, making the intensity of the local Overhauser field a random time-independent variable \cite{noise4,noise5,noise3,noise31}. Due to the linear dependence of the states $\ket{m_{s}=\pm1}$ on the magnetic field, the resultant influence of the systematic magnetic errors and dephasing on the nitrogen vacancy center can be seen as frequency errors and can therefore be described by Eq.~(\ref{eq 7}).
As illustrated in Section II, to realize the population transfer from the initial state $\ket{\psi_{i}}=\ket{m_{s}=-1}$ to the final state $\ket{\psi_{f}}=\ket{m_{s}=+1}$ while canceling out the frequency errors to second order, we need to find a closed space curve whose tangent vectors at the starting and ending points are along the positive $z$ axis and negative $z$ axis, respectively. Here, we provide a space curve satisfying these conditions, constructed as $\textbf{r}(d)=(1-d)\textbf{r}_{1}(d)+d\textbf{r}_{2}(d)$, in which $\textbf{r}_{1}(d)=\sqrt{2}\sin(\pi d)(0,\sin^{2}(\frac{\pi d}{2}),\cos^{2}(\frac{\pi d}{2}))$, $\textbf{r}_{2}(d)=\sqrt{2}\sin(\pi d)(\cos^{2}(\frac{\pi d}{2}),0,\sin^{2}(\frac{\pi d}{2}))$ and $d\in[0,1]$. The shape of the curve $\textbf{r}(d)$ is shown in Fig.~\ref{Fig3}. It is worth noting that $\textbf{r}(d)$ is not parameterized by arc length; therefore, when using it to calculate the control parameters, one should first transform $\textbf{r}(d)$ into the form parameterized by arc length, i.e., $\textbf{r}(d)\rightarrow\textbf{r}(t)$. The control parameters $\Omega(t)$, $\Delta(t)$ and $\phi(t)$ can be obtained by calculating the curvature and torsion of the space curve $\textbf{r}(t)$. According to Eq.~(\ref{eq15}), one can obtain the common Rabi frequency $\Omega(t)$ of the driving fields, which is shown in Fig.~\ref{Fig4}(a). According to Eqs.~(\ref{eq16}) and (\ref{eq17}), one can get the information about $\Delta(t)$ and $\phi(t)$. Specifically, if one only changes the detuning $\Delta(t)$ while keeping the phase $\phi(t)$ constant during the evolution, the detuning can be obtained from the torsion of the space curve, as shown in Fig.~\ref{Fig4}(b).
On the other hand, if one only changes the phase $\phi(t)$ of the driving field while keeping the detuning $\Delta(t)=0$ during the evolution, the derivative of the phase $\dot{\phi}(t)$ can be obtained from the torsion of the space curve; correspondingly, by integrating the torsion of the space curve with respect to time $t$, one obtains the phase $\phi(t)$ of the driving fields, as shown in Fig.~\ref{Fig4}(c). One can thus choose to change either the detuning or the phase of the driving fields to realize the population transfer, which brings convenience to the experimental realization. Without loss of generality, we consider the case of changing the phase $\phi(t)$ while setting the detuning $\Delta(t)=0$ all the time.
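The stated properties of $\textbf{r}(d)$ (closure at the origin and tangent directions $(0,0,\pm1)$ at the endpoints) can be verified numerically, and the total arc length, which the text identifies with the unit time $T$, can be computed in the same pass. A minimal sketch (our own check, using a simple finite-difference approximation):

```python
import numpy as np

sq2 = np.sqrt(2.0)

def r(d):
    # Curve r(d) = (1 - d) r1(d) + d r2(d) from the text, d in [0, 1].
    s, h = np.sin(np.pi * d), np.pi * d / 2.0
    r1 = sq2 * s * np.array([0.0, np.sin(h) ** 2, np.cos(h) ** 2])
    r2 = sq2 * s * np.array([np.cos(h) ** 2, 0.0, np.sin(h) ** 2])
    return (1.0 - d) * r1 + d * r2

eps = 1e-6
tan0 = (r(eps) - r(0.0)) / eps          # ~ r'(0): should point along +z
tan1 = (r(1.0) - r(1.0 - eps)) / eps    # ~ r'(1): should point along -z
tan0, tan1 = tan0 / np.linalg.norm(tan0), tan1 / np.linalg.norm(tan1)

# Total arc length, approximated by a polygonal sum; comes out close to 2.1,
# consistent with the stated unit time T = 2.116 in these units.
ds = np.linspace(0.0, 1.0, 20001)
pts = np.array([r(d) for d in ds])
length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

print(tan0, tan1, length)
```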
\begin{figure*}[tb]
\includegraphics[scale=0.3]{fig5a.pdf}
\includegraphics[scale=0.3]{fig5b.pdf}
\includegraphics[scale=0.3]{fig5c.pdf}
\includegraphics[scale=0.3]{fig5d.pdf}
\includegraphics[scale=0.3]{fig5e.pdf}
\includegraphics[scale=0.3]{fig5f.pdf}
\includegraphics[scale=0.3]{fig5g.pdf}
\includegraphics[scale=0.3]{fig5h.pdf}
\caption{The performance of the population transfers, where the populations of states $\ket{m_{s}=-1}$ (dashed blue), $\ket{m_{s}=0}$ (dotted black), and $\ket{m_{s}=+1}$ (solid red) are presented. The top row shows the simulated results of the ideal case. The bottom row shows the simulated results under the influence of the frequency errors and the longitudinal spin relaxation process, with $\delta=0.5$ MHz and $\Gamma=2$ kHz. (a) and (e) The simulation results for SRT with $\Omega_{\textrm{srt+}}=\Omega_{\textrm{srt-}}=2\sqrt{2}\pi$ MHz and $\Delta_{\textrm{srt}}=8\pi$ MHz. (b) and (f) The simulation results for STIRAP with $\Omega_{\textrm{sti}}=5$ MHz, $\Lambda=\mu_{-}-\mu_{+}=3\mu$s and $\sigma=2\mu$s. (c) and (g) The simulation results for the conventional STA based scheme with two resonant driving fields $\Omega_{\textrm{sta+}}=\Omega_{\textrm{sta-}}=\sqrt{2}\pi/2$ MHz. (d) and (h) The simulation results for our scheme, with control parameters $\Omega(t)$ and $\phi(t)$ being obtained from Figs.~\ref{Fig4}(a) and (c), respectively, and proper scaling being implemented to make the total evolution time equal to $2\mu$s.}
\label{Fig5}
\end{figure*}
To show the efficiency of our scheme, we numerically simulate its performance and compare it with those of the SRT, STIRAP and conventional STA based schemes. To make the simulation closer to reality, we consider not only the dominant noise, which comes from systematic magnetic errors and dephasing and can be treated as frequency errors, but also the subordinate noise coming from the longitudinal spin relaxation process. To account for both the dominant and subordinate noise, we use the following quantum master equation
${d\rho}/{dt}=-i[H^{\prime}(t),\rho]+\sum_{j,k}\Gamma_{jk}(a_{jk}^{\dag}\rho a_{jk}-\frac{1}{2}\{a_{jk}a_{jk}^{\dag},\rho\})$,
where $H^{\prime}(t)$ is the total Hamiltonian including the frequency errors, and the Lindblad operators $a_{jk}=\ket{m_{s}=j}\bra{m_{s}=k}$ represent the spin relaxation process with rate $\Gamma_{jk}$ corresponding to the longitudinal spin relaxation time $T_{1}$ of the nitrogen vacancy center electron spin. Here we adopt $\Gamma_{10}=\Gamma_{01}=\Gamma_{-10}=\Gamma_{0-1}=\Gamma=2$ kHz, which is appropriate for nitrogen vacancy centers \cite{cai}.
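The master equation can be integrated with a short self-contained routine. The sketch below is our own illustration (not the code used for Figs.~\ref{Fig5} and \ref{Fig6}): it uses the standard Lindblad form $L\rho L^{\dagger}-\frac{1}{2}\{L^{\dagger}L,\rho\}$, which matches the equation above upon identifying $L$ with $a_{jk}^{\dagger}$, sets $H^{\prime}=0$ for simplicity, takes equal rates $\Gamma$ on the four relaxation channels (in units where $\Gamma=1$), and checks that an initial state $\ket{m_{s}=-1}$ relaxes toward the maximally mixed state:

```python
import numpy as np

def lindblad_rhs(rho, H, Ls, gamma):
    """d rho/dt = -i[H, rho] + sum_L gamma (L rho L+ - (1/2){L+ L, rho})."""
    drho = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        drho += gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return drho

def rk4_step(rho, H, Ls, gamma, dt):
    k1 = lindblad_rhs(rho, H, Ls, gamma)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, Ls, gamma)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, Ls, gamma)
    k4 = lindblad_rhs(rho + dt * k3, H, Ls, gamma)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Basis ordering: |m_s=-1>, |m_s=0>, |m_s=+1>.
def ket(i):
    v = np.zeros((3, 1), dtype=complex)
    v[i, 0] = 1.0
    return v

# Jump operators |j><k| for the four relaxation channels 0 <-> +-1.
pairs = [(0, 1), (1, 0), (2, 1), (1, 2)]
Ls = [ket(j) @ ket(k).conj().T for j, k in pairs]

H = np.zeros((3, 3), dtype=complex)   # H' = 0 for this illustration
rho = ket(0) @ ket(0).conj().T        # start in |m_s=-1>
gamma, dt = 1.0, 0.01                 # units where Gamma = 1
for _ in range(2000):                 # evolve to t = 20 / Gamma
    rho = rk4_step(rho, H, Ls, gamma, dt)

print(np.real(np.diag(rho)))          # approaches [1/3, 1/3, 1/3]
```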
As mentioned before, SRT, STIRAP and conventional STA based schemes can realize population transfer for uncoupled or weakly coupled spin states. SRT is usually realized by applying two highly detuned driving fields with the intermediate-level detuning $\Delta_{\textrm{srt}}$ and Rabi frequency $\Omega_{\textrm{srt+}}=\Omega_{\textrm{srt-}}$. In the limit of large detunings, $\Delta_{\textrm{srt}}\gg\Omega_{\textrm{srt+}},\Omega_{\textrm{srt-}}$, the intermediate level is scarcely populated, and therefore the system reduces to a two-level system consisting of levels $\ket{m_{s}=+1}$ and $\ket{m_{s}=-1}$ with an effective Rabi frequency $\Omega_{\textrm{srt}}=\Omega_{\textrm{srt+}}\Omega_{\textrm{srt-}}/(2|\Delta_{\textrm{srt}}|)$. Then the population transfer between states $\ket{m_{s}=-1}\leftrightarrow\ket{m_{s}=+1}$ can be implemented approximately. STIRAP uses two partially overlapping resonant Raman control pulses with the Gaussian envelopes $\Omega_{\textrm{sti}\pm}(t)=\Omega_{\textrm{sti}}e^{-(t-\mu_{\pm})^{2}/2\sigma^{2}}$. The pulse separation $\Lambda=\mu_{-}-\mu_{+}$ and pulse width $\sigma$ are set properly to satisfy the adiabatic condition. The population transfer between states $\ket{m_{s}=-1}\leftrightarrow\ket{m_{s}=+1}$ can be realized along the adiabatic eigenstate. As a typical example of conventional STA based schemes, we set the parameters in Eq.~(\ref{eq6}) as $\alpha=0, \beta=3\pi/2, \theta(t)=\pi t/2$. By the reverse calculation, we can obtain the Rabi frequencies of the STA control pulses $\Omega_{\textrm{sta+}}=\Omega_{\textrm{sta-}}=\sqrt{2}\pi/2$ MHz. As time increases, the parameter $\theta(t)$ changes from 0 to $\pi$. At the final time $t=2\mu$s, the population transfer is realized.
\begin{figure}[htb]
\includegraphics[scale=0.55]{fig6.pdf}
\caption{Comparison of the robustness of SRT (dashed blue), STIRAP (dashdotted green), the conventional STA based scheme (dotted purple) and our scheme (solid red) against the frequency errors, with the population $P_{+1}$ of state $\ket{m_{s}=+1}$ at the final time $T$ being the vertical axis, and the strength of the frequency errors being the horizontal axis.}
\label{Fig6}
\end{figure}
Our simulation results are shown in Figs.~\ref{Fig5} and \ref{Fig6}. Figure \ref{Fig5} shows the performance of the population transfers, while Fig.~\ref{Fig6} shows the robustness against the frequency errors. From Figs.~\ref{Fig5}(a), \ref{Fig5}(e) and \ref{Fig6}, one can see that the presence of the frequency errors seriously affects the performance of SRT. From Figs.~\ref{Fig5}(b), \ref{Fig5}(f) and \ref{Fig6}, one can see that STIRAP is partly robust against the frequency errors, but the longitudinal spin relaxation during the long adiabatic evolution reduces the fidelity of the population transfer. From Figs.~\ref{Fig5}(c), \ref{Fig5}(g) and \ref{Fig6}, one can see that the conventional STA based scheme is sensitive to the frequency errors. From Figs.~\ref{Fig5}(d), \ref{Fig5}(h) and \ref{Fig6}, one can see that our scheme achieves high-fidelity population transfer under the influence of both the frequency errors and the longitudinal spin relaxation process. Moreover, Fig.~\ref{Fig6} shows that our scheme is more robust to the frequency errors than the SRT, STIRAP and conventional STA based schemes. The simulation results thus show the superiority of our scheme for realizing population transfer under the influence of noise.
\section{CONCLUSION}
In conclusion, we have shown how to realize accurate population transfer between uncoupled or weakly coupled spin states, even under the influence of noise. In our scheme, the population transfer can be implemented fast and is robust against the frequency errors, the dominant noise in spin systems. Moreover, our scheme is simple to implement: one only needs to find a closed space curve $\mathbf{r}(t)$ starting and ending at the origin, with initial and final tangent vectors $\dot{\mathbf{r}}(0)=(0,0,1)$ and $\dot{\mathbf{r}}(T)=(0,0,-1)$, respectively. These conditions are not restrictive, so many such space curves can be found. One can choose a well-behaved space curve as a candidate and calculate its curvature $\kappa(t)$ and torsion $\tau(t)$ to obtain the control parameters $\Omega(t)$, $\Delta(t)$ and $\phi(t)$. Here, well-behaved means that the space curve yields easily realized $\Omega(t)$, $\Delta(t)$ and $\phi(t)$. To show the efficiency of our scheme, we numerically simulated the ground-state population transfer in the $^{15}$N nitrogen vacancy center and compared our scheme with the SRT, STIRAP and conventional STA based schemes. The results show that our scheme still achieves high fidelity under the influence of noise. We hope our scheme can shed light on accurate population transfer in spin systems.
\begin{acknowledgments}
K.Z.L. and G.F.X. acknowledge the support from the National Natural Science Foundation of China through Grant No. 11775129 and No. 12174224.
\end{acknowledgments}
\section*{Introduction}
A natural number greater than $1$ is a prime if its only factors
are $1$ and itself. By Euclid's theorem (c. 300 BC), there are
infinitely many primes. There are many patterns of primes, amongst
which the classical known primes are the Mersenne primes of the form
$2^{p}-1$, where $p$ is a prime {[}Da2011{]}, and the Fermat primes
of the form $2^{2^{n}}+1$ for a natural number $n\geq0$ {[}Da2011{]}.
We omit the other classes of primes except the factorial primes.
Indian scholars knew about factorials as early as the 12th century.
In 1677, the British mathematician Fabian Stedman described factorials
in the context of change ringing. In 1808, the French mathematician
Christian Kramp introduced the notation $!$ for factorials. The factorial
of $n$ is the product of all positive integers less than or
equal to $n$; in Kramp's notation, $n!=n(n-1)(n-2)\cdots3\cdot2\cdot1$.
The factorials of $0$ and $1$ are $0!=1$ and $1!=1$,
respectively. There are dozens of classes of factorial-related primes.
Here we recall only a few of them: the factorial primes of the form $p!\pm1$
{[}Bor1972, BCP1982{]}, the double factorial primes of the form $n!!\pm1$
for some natural number $n$ {[}Mes1948{]}, the Wilson primes, i.e.,
primes $p$ for which $p^{2}$ divides $(p-1)!+1$ {[}Bee1920{]}, and
the primorial primes of the form $p\#\pm1$, where $p\#$ denotes the
product of all primes up to and including $p$ {[}Dub1987, Dub1989{]}.
Further, a class of Smarandache primes is of the form $n!\times S_{n}(n)+1$,
where $S_{n}(n)$ is the Smarandache consecutive sequence {[}Ear2005{]}.
The purpose of this note is to report on the discovery of the primes
of the form $p=1+n!\sum n$ for some natural numbers $n>0$.
\section*{Primes of the form $p=1+n!\sum n$ for some $n\in\mathbb{N}^{+}$}
We list in Table 1 the primes of the form $p=1+n!\sum n$ for
some natural numbers $n>0$. They are verified up to $n=950$, and
primes are found for $n=1,2,3,4,5,6,7,8,9,10,12,13,14,19,24,251,374$.
The above primes can also be expressed as $p=1+\frac{(n+1)!\,n}{2}$
for some natural numbers $n>0$. The author has used Python software
to search for and verify primes of the above form up to
$n=950$. The author conjectures that there are infinitely many primes
of the form $p=1+n!\sum n$ for some $n\in\mathbb{N}^{+}$.
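The search is straightforward to reproduce. The note does not include the author's code; the following sketch (our reconstruction) uses the \texttt{sympy} primality test and recovers the hits up to $n=30$:

```python
from math import factorial

from sympy import isprime

def candidate(n):
    """p = 1 + n! * (1 + 2 + ... + n) = 1 + (n+1)! * n / 2."""
    return 1 + factorial(n) * n * (n + 1) // 2

found = [n for n in range(1, 31) if isprime(candidate(n))]
print(found)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 19, 24]
```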
\begin{table}[H]
\caption{Primes List}
~
\centering{}%
\begin{tabular}{|c|c|c|c|}
\hline
S.No. & n & Prime, $p$ & Number of \tabularnewline
& & & digits in $p$\tabularnewline
\hline
\hline
1 & 1 & 2 & 1\tabularnewline
\hline
2 & 2 & 7 & 1\tabularnewline
\hline
3 & 3 & 37 & 2\tabularnewline
\hline
4 & 4 & 241 & 3\tabularnewline
\hline
5 & 5 & 1801 & 4\tabularnewline
\hline
6 & 6 & 15121 & 5\tabularnewline
\hline
7 & 7 & 141121 & 6\tabularnewline
\hline
8 & 8 & 1451521 & 7\tabularnewline
\hline
9 & 9 & 16329601 & 8\tabularnewline
\hline
10 & 10 & 199584001 & 9\tabularnewline
\hline
11 & 12 & 37362124801 & 11\tabularnewline
\hline
12 & 13 & 566658892801 & 12\tabularnewline
\hline
13 & 14 & 9153720576001 & 13\tabularnewline
\hline
14 & 19 & 23112569077678080001 & 20\tabularnewline
\hline
15 & 24 & 186134520519971831808000001 & 27\tabularnewline
\hline
16 & 251 & 25662820338985371726..Omitted..000000000001 & 500\tabularnewline
\hline
17 & 374 & 22873802587990440054..Omitted..0000000000001 & 807\tabularnewline
\hline
\end{tabular}
\end{table}
\subsection*{Size of the prime of the form $p=1+n!\sum n$ for some $n\in\mathbb{N}^{+}$}
To compute the size of primes of the above form, we use Stirling's
formula {[}Sec 2.2, CG2001{]}: $\log n!=(n+\frac{1}{2})\log n-n+\frac{1}{2}\log2\pi+O(\frac{1}{n})$.
More simply, one can also write $\log n!\sim n(\log n-1)$, if necessary.
The number of digits of the prime $p=1+n!\sum n$ is equal to $\lfloor \log_{10}(1+n!\sum n)\rfloor+1$.
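As a quick check (our sketch, relying on Python's arbitrary-precision integers), the digit counts in Table 1 can be reproduced both by direct expansion and by the $\lfloor\log_{10}p\rfloor+1$ formula:

```python
import math
from math import factorial

def candidate(n):
    # p = 1 + n! * n(n+1)/2
    return 1 + factorial(n) * n * (n + 1) // 2

for n in (19, 24):
    p = candidate(n)
    # The digit count agrees with floor(log10(p)) + 1.
    print(n, len(str(p)), math.floor(math.log10(p)) + 1)
```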
\section*{Conclusion }
In this note, the author conjectures that there are infinitely many
primes of the form $p=1+n!\sum n$ for some natural numbers $n>0$.
The author has found such primes for $n=1,2,3,4,5,6,7,8,9,10,12,13,14,19,24,251,374$,
having searched and verified up to $n=950$. The number of
digits in a prime of the form $(1+n!\sum n)$ is
equal to $\lfloor \log_{10}(1+n!\sum n)\rfloor+1$. Further
investigation will be required to find more such primes.
\section{Introduction}
\subsection{Fast-Slow Systems}
Fast-slow systems of ordinary differential equations (ODEs) have the general form:
\begin{eqnarray}
\label{eq:fssgen}
\epsilon\dot{x}&=&\epsilon \frac{dx}{d\tau}=f(x,y,\epsilon)\\
\dot{y}&=&\frac{dy}{d\tau}=g(x,y,\epsilon)\nonumber
\end{eqnarray}
where $x\in\mathbb{R}^m$, $y\in\mathbb{R}^n$ and $0\leq \epsilon\ll 1$ represents the ratio of time scales. The functions $f$ and $g$ are assumed to be sufficiently smooth. In the singular limit $\epsilon\rightarrow 0$ the vector field (\ref{eq:fssgen}) becomes a differential-algebraic equation. The algebraic constraint $f=0$ defines the critical manifold $C_0=\{(x,y)\in\mathbb{R}^m\times\mathbb{R}^n:f(x,y,0)=0\}$. Where $D_xf(p)$ is nonsingular, the implicit function theorem implies that there exists a map $x=h(y)$ parametrizing $C_0$ as a graph. This yields the implicitly defined vector field $\dot{y}=g(h(y),y,0)$ on $C_0$ called the slow flow.\\
We can change (\ref{eq:fssgen}) to the fast time scale $t=\tau/\epsilon$ and let $\epsilon\rightarrow 0$ to obtain the second possible singular limit system
\begin{eqnarray}
\label{eq:fssfss}
x'&=&\frac{dx}{dt}=f(x,y,0)\\
y'&=&\frac{dy}{dt}=0\nonumber
\end{eqnarray}
We call the vector field (\ref{eq:fssfss}) parametrized by the slow variables $y$ the fast subsystem or the layer equations. The central idea of singular perturbation analysis is to use information about the fast subsystem and the slow flow to understand the full system (\ref{eq:fssgen}). One of the main tools is Fenichel's Theorem (see \cite{Fenichel1,Fenichel2,Fenichel3,Fenichel4}). It states that for every $\epsilon$ sufficiently small and $C_0$ normally hyperbolic there exists a family of invariant manifolds $C_\epsilon$ for the flow (\ref{eq:fssgen}). The manifolds are at a distance $O(\epsilon)$ from $C_0$ and the flows on them converge to the slow flow on $C_0$ as $\epsilon\rightarrow 0$. Points $p\in C_0$ where $D_xf(p)$ is singular are referred to as fold points\footnote{The projection of $C_0$ onto the $x$ coordinates may have more degenerate singularities than fold singularities at some of these points.}.\\
Beyond Fenichel's Theorem many other techniques have been developed. More detailed introductions and results can be found in \cite{ArnoldEncy,Jones,GuckenheimerNDC} from a geometric viewpoint. Asymptotic methods are developed in \cite{MisRoz,Grasman} whereas ideas from nonstandard analysis are introduced in \cite{DienerDiener}. While the theory is well developed for two-dimensional fast-slow systems, higher-dimensional fast-slow systems are an active area of current research. In the following we shall focus on the FitzHugh-Nagumo equation viewed as a three-dimensional fast-slow system.
\subsection{The FitzHugh-Nagumo Equation}
\label{sec:fhn}
The FitzHugh-Nagumo equation is a simplification of the Hodgkin-Huxley model for the electric potential of a nerve axon \cite{HodginHuxley}. The first version was developed by FitzHugh \cite{FitzHugh} and is a two-dimensional system of ODEs:
\begin{eqnarray}
\label{eq:fh}
\epsilon \dot{u}&=&v-\frac{u^3}{3}+u+p\\
\dot{v}&=&-\frac1s(v+\gamma u-a)\nonumber
\end{eqnarray}
A detailed summary of the bifurcations of (\ref{eq:fh}) can be found in \cite{GGR}. Nagumo et al. \cite{Nagumo} studied a related equation that adds a diffusion term for the conduction process of action potentials along nerves:
\begin{equation}
\label{eq:fhn_original}
\left\{
\begin{array}{l}
u_\tau=\delta u_{xx}+f_a(u)-w+p \\
w_\tau=\epsilon(u-\gamma w)
\end{array}
\right.
\end{equation}
where $f_a(u)=u(u-a)(1-u)$ and $p,\gamma,\delta$ and $a$ are parameters. A good introduction to the derivation and problems associated with (\ref{eq:fhn_original}) can be found in \cite{Hastings}. Suppose we assume a traveling wave solution to (\ref{eq:fhn_original}) and set $u(x,\tau)=u(x+s\tau)=u(t)$ and $w(x,\tau)=w(x+s\tau)=w(t)$, where $s$ represents the wave speed. By the chain rule we get $u_\tau=su'$, $u_{xx}=u''$ and $w_\tau=sw'$. Set $v=u'$ and substitute into (\ref{eq:fhn_original}) to obtain the system:
\begin{eqnarray}
\label{eq:fhn_temp}
u'&=&v \nonumber\\
v'&=&\frac1\delta(sv-f_a(u)+w-p)\\
w'&=&\frac{\epsilon}{s}(u-\gamma w)\nonumber
\end{eqnarray}
System~\eqref{eq:fhn_temp} is the FitzHugh-Nagumo equation studied in this paper. Observe that a homoclinic orbit of (\ref{eq:fhn_temp}) corresponds to a traveling pulse solution of (\ref{eq:fhn_original}). These solutions are of special importance in neuroscience \cite{Hastings} and have been analyzed using several different methods. For example, it has been proved that (\ref{eq:fhn_temp}) admits homoclinic orbits \cite{Hastings1,Carpenter} for small wave speeds (``slow waves'') and large wave speeds (``fast waves''). Fast waves are stable \cite{JonesFHN} and slow waves are unstable \cite{Flores}. It has been shown that double-pulse homoclinic orbits \cite{EvansFenichelFeroe} are possible. If (\ref{eq:fhn_temp}) has two equilibrium points and heteroclinic connections exist, bifurcation from a twisted double heteroclinic connection implies the existence of multi-pulse traveling front and back waves \cite{DengFHN}. These results are based on the assumption of certain parameter ranges for which we refer to the original papers. Geometric singular perturbation theory has been used successfully to analyze (\ref{eq:fhn_temp}). In \cite{JonesKopellLanger} the fast pulse is constructed using the exchange lemma \cite{JonesKaperKopell,JonesKopell,Brunovsky}. The exchange lemma has also been used to prove the existence of a codimension two connection between fast and slow waves in ($s,\epsilon,a$)-parameter space \cite{KSS1997}. An extension of Fenichel's theorem and Melnikov's method can be employed to prove the existence of heteroclinic connections for parameter regimes of (\ref{eq:fhn_temp}) with two fixed points \cite{Szmolyan1}. The general theory of relaxation oscillations in fast-slow systems applies to (\ref{eq:fhn_temp}) (see e.g. \cite{MisRoz,GuckenheimerBoRO}) as does - at least partially - the theory of canards (see e.g. \cite{Wechselberger,Dumortier1,Eckhaus,KruSzm2}).\\
The equations (\ref{eq:fhn_temp}) have been analyzed numerically by Champneys, Kirk, Knobloch, Oldeman and Sneyd \cite{Sneydetal} using the numerical bifurcation software AUTO \cite{Doedel_AUTO1997,Doedel_AUTO2000}. They considered the following parameter values:
\begin{equation*}
\gamma=1,\qquad a=\frac{1}{10},\qquad \delta=5
\end{equation*}
We shall fix those values to allow comparison of our results with theirs. Hence we also write $f_{1/10}(u)=f(u)$. Changing from the fast time $t$ to the slow time $\tau$ and relabeling variables $x_1=u$, $x_2=v$ and $y=w$ we get:
\begin{eqnarray}
\label{eq:fhn}
\epsilon\dot{x}_1&=& x_2\nonumber\\
\epsilon\dot{x}_2&=&\frac15 (sx_2-x_1(x_1-1)(\frac{1}{10}-x_1)+y-p)=\frac15 (sx_2-f(x_1)+y-p)\\
\dot{y}&=&\frac{1}{s} (x_1-y) \nonumber
\end{eqnarray}
From now on we refer to (\ref{eq:fhn}) as ``the'' FitzHugh-Nagumo equation. Investigating bifurcations in the ($p,s$) parameter space one finds C-shaped curves of homoclinic orbits and a U-shaped curve of Hopf bifurcations; see Figure \ref{fig:cusystem}. Only part of the bifurcation diagram is shown in Figure \ref{fig:cusystem}. There is another curve of homoclinic bifurcations on the right side of the U-shaped Hopf curve. Since (\ref{eq:fhn}) has the symmetry
\begin{equation}
\label{eq:fhnsym}
x_1\rightarrow\frac{11}{15}-x_1, \quad x_2\rightarrow -x_2,\quad y\rightarrow \frac{11}{15}-y,\quad p\rightarrow \frac{11}{15}\left(1-\frac{38}{225}\right)-p
\end{equation}
we shall examine only the left side of the U-curve. The homoclinic C-curve is difficult to compute numerically by continuation methods using AUTO \cite{Doedel_AUTO1997,Doedel_AUTO2000} or MatCont \cite{MatCont}. The computations seem infeasible for small values of $\epsilon\leq 10^{-3}$. Furthermore multi-pulse homoclinic orbits can exist very close to single pulse ones and distinguishing between them must necessarily encounter problems with numerical precision \cite{Sneydetal}. The Hopf curve and the bifurcations of limit cycles shown in Figure \ref{fig:cusystem} have been computed using MatCont. The curve of homoclinic bifurcations has been computed by a new method to be described in Section \ref{sec:hetfull}.\\
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\textwidth]{./bif_matcont.eps}
\caption{Bifurcation diagram of (\ref{eq:fhn}). Hopf bifurcations are shown in green, saddle-node of limit cycles (SNLC) are shown in blue and GH indicates a generalized Hopf (or Bautin) bifurcation. The arrows indicate the side on which periodic orbits are generated at the Hopf bifurcation. The red curve shows (possible) homoclinic orbits; in fact, homoclinic orbits only exist to the left of the two black dots (see Section \ref{sec:hetfull}). Only part of the parameter space is shown because of the symmetry (\ref{eq:fhnsym}). The homoclinic curve has been thickened to indicate that multipulse homoclinic orbits exist very close to single pulse ones (see \cite{EvansFenichelFeroe}).}
\label{fig:cusystem}
\end{figure}
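The symmetry can be checked by a direct numerical computation. The sketch below (our own verification code, with arbitrarily chosen positive $\epsilon$ and $s$) applies the transformation $x_1\to\frac{11}{15}-x_1$, $x_2\to-x_2$, $y\to\frac{11}{15}-y$, $p\to\frac{11}{15}(1-\frac{38}{225})-p$, whose signs and constant follow from the odd symmetry of the cubic $f$ about its inflection point $x_1=11/30$, and confirms that the vector field of (\ref{eq:fhn}) picks up exactly an overall minus sign:

```python
import numpy as np

EPS, S = 0.01, 1.3  # arbitrary positive values for epsilon and the wave speed s

def f(x1):
    return x1 * (x1 - 1.0) * (0.1 - x1)

def vector_field(z, p):
    x1, x2, y = z
    return np.array([x2 / EPS,
                     (S * x2 - f(x1) + y - p) / (5.0 * EPS),
                     (x1 - y) / S])

a = 11.0 / 15.0
const = a * (1.0 - 38.0 / 225.0)

rng = np.random.default_rng(0)
for _ in range(100):
    z = rng.uniform(-1.0, 1.0, size=3)
    p = rng.uniform(-1.0, 1.0)
    zT = np.array([a - z[0], -z[1], a - z[2]])  # transformed point
    # Under the symmetry, the vector field reverses sign componentwise.
    assert np.allclose(vector_field(zT, const - p), -vector_field(z, p))

print("symmetry verified")
```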
Since the bifurcation structure shown in Figure \ref{fig:cusystem} was also observed for other excitable systems, Champneys et al.~\cite{Sneydetal} introduced the term CU-system. Bifurcation analysis from the viewpoint of geometric singular perturbation theory has been carried out for examples with one fast and two slow variables \cite{GuckFvdP1,GuckFvdP2,GuckFT,MilikSzmolyan}. Since the FitzHugh-Nagumo equation has one slow and two fast variables, the situation is quite different and new techniques have to be developed. Our main goal is to show that many features of the complicated 2-parameter bifurcation diagram shown in Figure \ref{fig:cusystem} can be derived with a combination of techniques from singular perturbation theory, bifurcation theory and robust numerical methods. We accurately locate where the system has canards and determine the orbit structure of the homoclinic and periodic orbits associated to the C-shaped and U-shaped bifurcation curves, without computing the canards themselves. We demonstrate that the basic CU-structure of the system can be computed with elementary methods that do not use continuation methods based on collocation. The analysis of the slow and fast subsystems yields a ``singular bifurcation diagram'' to which the basic CU structure in Figure \ref{fig:cusystem} converges as $\epsilon\rightarrow 0$.\\
\textit{Remark:} We have also investigated the termination mechanism of the C-shaped homoclinic curve described in \cite{Sneydetal}. Champneys et al. observed that the homoclinic curve does not reach the U-shaped Hopf curve but turns around and folds back close to itself. We compute accurate approximations of the homoclinic orbits for smaller values $\epsilon$ than seems possible with AUTO in this region. One aspect of our analysis is a new algorithm for computing invariant slow manifolds of saddle type in the full system. This work will be described elsewhere.
\section{The Singular Limit}
The first step in our analysis is to investigate the slow and fast subsystems separately. Let $\epsilon\rightarrow 0$ in (\ref{eq:fhn}); this yields two algebraic constraints that define the critical manifold:
\begin{equation*}
C_0=\left\{(x_1,x_2,y)\in\mathbb{R}^3:x_2=0\quad y=x_1(x_1-1)(\frac{1}{10}-x_1)+p=c(x_1)\right\}
\end{equation*}
Therefore $C_0$ is a cubic curve in the coordinate plane $x_2=0$. The parameter $p$ moves the cubic up and down inside this plane. The critical points of the cubic are solutions of $c'(x_1)=0$ and are given by:
\begin{equation*}
x_{1,\pm}=\frac{1}{30}\left(11\pm\sqrt{91}\right)\qquad \text{or numerically:} \quad x_{1,+}\approx 0.6846, \quad x_{1,-}\approx 0.0487
\end{equation*}
The points $x_{1,\pm}$ are fold points with $|c''(x_{1,\pm})|\neq 0$ since $C_0$ is a cubic polynomial with distinct critical points. The fold points divide $C_0$ into three segments
\begin{equation*}
C_l=\{x_1<x_{1,-}\}\cap C_0, \quad C_m=\{x_{1,-}\leq x_1\leq x_{1,+}\}\cap C_0,\quad C_r=\{x_{1,+}<x_1\}\cap C_0
\end{equation*}
We denote the associated slow manifolds by $C_{l,\epsilon}$, $C_{m,\epsilon}$ and $C_{r,\epsilon}$. There are two possibilities to obtain the slow flow. One way is to solve $c(x_1)=y$ for $x_1$ and substitute the result into the equation $\dot{y}=\frac1s (x_1-y)$. Alternatively differentiating $y=c(x_1)$ implicitly with respect to $\tau$ yields $\dot{y}=\dot{x}_1c'(x_1)$ and therefore
\begin{equation}
\label{eq:sf}
\frac1s (x_1-y)=\dot{x}_1c'(x_1) \qquad \Rightarrow \qquad \dot{x}_1=\frac{1}{sc'(x_1)}(x_1-c(x_1))
\end{equation}
One can view this as a projection of the slow flow, which is constrained to the critical manifold in $\mathbb{R}^3$, onto the $x_1$-axis. Observe that the slow flow is singular at the fold points. Direct computation shows that the fixed point problem $x_1=c(x_1)$ has only a single real solution. This implies that the critical manifold intersects the diagonal $y=x_1$ only in a single point $x_1^*$ which is the unique equilibrium of the slow flow (\ref{eq:sf}). Observe that $q=(x_1^*,0,x_1^*)$ is also the unique equilibrium of the full system (\ref{eq:fhn}) and depends on $p$. Increasing $p$ moves the equilibrium from left to right on the critical manifold. The easiest practical way to determine the direction of the slow flow on $C_0$ is to look at the sign of $(x_1-y)$. The situation is illustrated in Figure \ref{fig:slowflow}.\\
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{./slowflow.eps}
\caption{Sketch of the slow flow on the critical manifold $C_0$}
\label{fig:slowflow}
\end{figure}
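The uniqueness claim can be confirmed numerically. The fixed point equation $x_1=c(x_1)$ is equivalent to the cubic $x_1^3-\frac{11}{10}x_1^2+\frac{11}{10}x_1-p=0$, whose derivative $3x_1^2-\frac{11}{5}x_1+\frac{11}{10}$ has negative discriminant, so the cubic is strictly increasing. A short sketch (our own check) samples $p$ and counts real roots:

```python
import numpy as np

# x1 = c(x1) is equivalent to x1^3 - 1.1 x1^2 + 1.1 x1 - p = 0.
# The derivative 3 x1^2 - 2.2 x1 + 1.1 has discriminant 4.84 - 13.2 < 0,
# so the cubic is monotone and has exactly one real root for every p.
for p in np.linspace(-2.0, 2.0, 81):
    roots = np.roots([1.0, -1.1, 1.1, -p])
    real_roots = roots[np.abs(roots.imag) < 1e-7]
    assert len(real_roots) == 1

print("unique equilibrium for all sampled p")
```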
\subsection{The Slow Flow}
We are interested in the bifurcations of the slow flow depending on the parameter $p$. The bifurcations occur when $x_1^*$ passes through the fold points. The values of $p$ can simply be found by solving the equations $c'(x_1)=0$ and $c(x_1)-x_1=0$ simultaneously. The result is:
\begin{equation*}
p_-\approx 0.0511 \qquad \text{and} \qquad p_+\approx 0.5584
\end{equation*}
where the subscripts indicate the fold point at which each equilibrium is located.\\
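These values follow from evaluating the equilibrium condition $c(x_1)=x_1$ at the fold points $x_{1,\pm}=(11\pm\sqrt{91})/30$; a two-line numerical confirmation (our sketch):

```python
import numpy as np

def c0(x1):
    """Cubic part of the critical manifold, c(x1) = c0(x1) + p."""
    return x1 * (x1 - 1.0) * (0.1 - x1)

x1_fold = (11.0 + np.array([-1.0, 1.0]) * np.sqrt(91.0)) / 30.0

# Equilibrium sitting on a fold point: c(x1) = x1  =>  p = x1 - c0(x1).
p_minus, p_plus = x1_fold - c0(x1_fold)

print(round(p_minus, 4), round(p_plus, 4))  # 0.0511 0.5584
```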
The singular time-rescaling $\bar{\tau}=\tau/(sc'(x_1))$ of the slow flow yields the desingularized slow flow
\begin{equation}
\label{eq:dssf}
\frac{dx_1}{d\bar{\tau}}=x_1-c(x_1)=x_1+\frac{x_1}{10}\left(x_1-1\right)\left(10x_1-1\right)-p
\end{equation}
Time is reversed by this rescaling on $C_l$ and $C_r$ since $s>0$ and $c'(x_1)$ is negative on these branches. The desingularized slow flow (\ref{eq:dssf}) is smooth and has no bifurcations as $p$ is varied.
\subsection{The Fast Subsystem}
\label{sec:hetsub}
The key component of the fast-slow analysis for the FitzHugh-Nagumo equation is the two-dimensional fast subsystem
\begin{eqnarray}
\label{eq:fss}
x_1'&=&x_2 \nonumber\\
x_2'&=&\frac15 (sx_2-x_1(x_1-1)(\frac{1}{10}-x_1)+y-p)
\end{eqnarray}
where $p\geq 0$, $s\geq 0$ are parameters and $y$ is fixed. Since $y$ and $p$ have the same effect as bifurcation parameters we set $p-y=\bar{p}$. We consider several fixed y-values and the effect of varying $p$ (cf. Section \ref{sec:hetfull}) in each case. There are either one, two or three equilibrium points for (\ref{eq:fss}). Equilibrium points satisfy $x_2=0$ and lie on the critical manifold, i.e. we have to solve
\begin{equation}
\label{eq:eqfss}
0=x_{1}(x_{1}-1)(\frac{1}{10}-x_{1})+\bar{p}
\end{equation}
for $x_1$. We find that there are three equilibria for approximately $\bar{p}_l=-0.1262<\bar{p}<0.0024=\bar{p}_r$, two equilibria on the boundary of this $\bar{p}$-interval and one equilibrium otherwise. The Jacobian of (\ref{eq:fss}) at an equilibrium is
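The interval endpoints follow from the fold points of the critical manifold: two roots of $f(x_1)+\bar{p}=0$ collide exactly when one of them reaches $x_{1,\pm}$, i.e. for $\bar{p}=-f(x_{1,\pm})$. A short sketch (variable names are ours):

```python
import numpy as np

# The cubic nonlinearity: f(x1) = -x1^3 + 1.1*x1^2 - 0.1*x1.
f = np.polynomial.Polynomial([0.0, -0.1, 1.1, -1.0])
x_minus, x_plus = np.sort(f.deriv().roots())  # fold points x_{1,-} < x_{1,+}

# Saddle-node values of the fast subsystem: pbar = -f(x1) at a fold point.
pbar_l = -f(x_plus)
pbar_r = -f(x_minus)
print(pbar_l, pbar_r)  # approx -0.1262 and 0.0024
```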
\begin{equation*}
A(x_1)=\left(\begin{array}{cc}
0& 1\\
\frac{1}{50}\left(1-22x_1+30x_1^2\right) &\frac{s}{5}\\
\end{array}\right)
\end{equation*}
Direct calculation yields that for $\bar{p}\not\in[\bar{p}_l,\bar{p}_r]$ the single equilibrium is a saddle. In the case of three equilibria, we have a source that lies between two saddles. Note that this also describes the stability of the three branches of the critical manifold $C_l$, $C_m$ and $C_r$. For $s>0$ the matrix $A$ is singular of rank 1 if and only if $30x_1^2-22x_1+1=0$ which occurs for the fold points $x_{1,\pm}$. Hence the equilibria of the fast subsystem undergo a fold (or saddle-node) bifurcation once they approach the fold points of the critical manifold. This happens for parameter values $\bar{p}_l$ and $\bar{p}_r$. Note that by symmetry we can reduce to studying a single fold point. In the limit $s=0$ (corresponding to the case of a ``standing wave'') the saddle-node bifurcation point becomes more degenerate with $A(x_1)$ nilpotent.\\
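The stated classification is easy to verify at sample parameter values; the choices $\bar{p}=-0.05$ and $s=1$ below are illustrative, any $\bar{p}\in(\bar{p}_l,\bar{p}_r)$ and $s>0$ behave the same way:

```python
import numpy as np

pbar, s = -0.05, 1.0  # illustrative values with pbar in (pbar_l, pbar_r)

# Equilibria: roots of f(x1) + pbar = -x1^3 + 1.1*x1^2 - 0.1*x1 + pbar = 0.
x_l, x_m, x_r = np.sort(np.roots([-1.0, 1.1, -0.1, pbar]).real)

def jac(x1):
    # Jacobian A(x1) of (eq:fss) at an equilibrium.
    return np.array([[0.0, 1.0],
                     [(1.0 - 22.0*x1 + 30.0*x1**2)/50.0, s/5.0]])

for x in (x_l, x_m, x_r):
    print(x, np.linalg.eigvals(jac(x)))
# x_l, x_r: real eigenvalues of opposite signs (saddles);
# x_m: eigenvalues with positive real part (source).
```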
Our next goal is to investigate global bifurcations of (\ref{eq:fss}); we start with homoclinic orbits. For $s=0$ it is easy to see that (\ref{eq:fss}) is a Hamiltonian system:
\begin{eqnarray}
\label{eq:fss2}
x_1'&=&\frac{\partial H}{\partial x_2}=x_2\nonumber\\
x_2'&=&-\frac{\partial H}{\partial x_1}=\frac15 (-x_1(x_1-1)(\frac{1}{10}-x_1)-\bar{p})
\end{eqnarray}
with Hamiltonian function
\begin{equation}
\label{eq:ham}
H(x_1,x_2)=\frac12 x_2^2-\frac{(x_1)^2}{100}+\frac{11(x_1)^3}{150}-\frac{(x_1)^4}{20}+\frac{x_1\bar{p}}{5}
\end{equation}
We will use this Hamiltonian formulation later on to describe the geometry of homoclinic orbits for slow wave speeds. Assume that $\bar{p}$ is chosen so that
(\ref{eq:fss2}) has a homoclinic orbit $x_0(t)$. We are interested in perturbations with $s> 0$ and note that in this case the divergence of (\ref{eq:fss}) is $s/5$. Hence the vector field is area expanding everywhere. The homoclinic orbit breaks for $s>0$ and no periodic orbits are created. Note that this scenario does not apply to the full three-dimensional system as the equilibrium $q$ has a pair of complex conjugate eigenvalues so that a Shilnikov scenario can occur. This illustrates that the singular limit can be used to help locate homoclinic orbits of the full system, but that some characteristics of these orbits change in the singular limit.\\
We are interested next in finding curves in $(\bar{p},s)$-parameter space that represent heteroclinic connections of the fast subsystem. The main motivation is the decomposition of trajectories in the full system into slow and fast segments. Concatenating fast heteroclinic segments and slow flow segments can yield homoclinic orbits of the full system \cite{Hastings,Carpenter,JonesKopellLanger,KSS1997}. We describe a numerical strategy to detect heteroclinic connections in the fast subsystem and continue them in parameter space. Suppose that $\bar{p}\in(\bar{p}_l,\bar{p}_r)$ so that (\ref{eq:fss}) has three hyperbolic equilibrium points $x_l$, $x_m$ and $x_r$. We denote by $W^u(x_{l})$ the unstable and by $W^s(x_{l})$ the stable manifold of $x_l$. The same notation is also used for $x_r$ and tangent spaces to $W^s(.)$ and $W^u(.)$ are denoted by $T^s(.)$ and $T^u(.)$. Recall that $x_m$ is a source and shall not be of interest to us for now. Define the cross section $\Sigma$ by
\begin{equation*}
\Sigma=\{(x_1,x_2)\in\mathbb{R}^2:x_1=\frac{x_l+x_r}{2}\}.
\end{equation*}
We use forward integration of initial conditions in $T^u(x_l)$ and backward integration of initial conditions in $T^s(x_r)$ to obtain trajectories $\gamma^+$ and $\gamma^-$ respectively. We calculate their intersection with $\Sigma$ and define
\begin{equation*}
\gamma_l(\bar{p},s):=\gamma^+\cap \Sigma, \qquad \gamma_r(\bar{p},s):=\gamma^-\cap \Sigma
\end{equation*}
We compute the functions $\gamma_l$ and $\gamma_r$ for different parameter values of $(\bar{p},s)$ numerically. Heteroclinic connections occur at zeros of the function
\begin{equation*}
h(\bar{p},s):=\gamma_l(\bar{p},s)-\gamma_r(\bar{p},s)
\end{equation*}
Once we find a parameter pair $(\bar{p}_0,s_0)$ such that $h(\bar{p}_0,s_0)=0$, these parameters can be continued along a curve of heteroclinic connections in $(\bar{p},s)$ parameter space by solving the root-finding problem $h(\bar{p}_0+\delta_1,s_0+\delta_2)=0$ for either $\delta_1$ or $\delta_2$ fixed and small. We use this method later for different fixed values of $y$ to compute heteroclinic connections in the fast subsystem in $(p,s)$ parameter space. The results of these computations are illustrated in Figure \ref{fig:het1}. There are two distinct branches in Figure \ref{fig:het1}. The branches are asymptotic to $\bar{p}_l$ and $\bar{p}_r$ and approximately form a ``$V$''. From Figure \ref{fig:het1} we conjecture that there exists a double heteroclinic orbit for $\bar{p}\approx-0.0622$.\\
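The shooting procedure can be sketched in a few lines. As a consistency check we evaluate $h$ at $(\bar{p},s)\approx(-0.0619,0)$, where Figure \ref{fig:het1} suggests a double heteroclinic connection, so $h$ should vanish; the offset $\delta$ and the integration tolerances are our own (illustrative) choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

pbar, s, delta = -0.0619259, 0.0, 1e-6   # near the double heteroclinic

def rhs(t, u):
    x1, x2 = u
    return [x2, 0.2*(s*x2 - x1*(x1 - 1.0)*(0.1 - x1) - pbar)]

# Saddles x_l < x_m < x_r: roots of f(x1) + pbar = 0.
x_l, x_m, x_r = np.sort(np.roots([-1.0, 1.1, -0.1, pbar]).real)
lam = lambda x: np.sqrt((1.0 - 22.0*x + 30.0*x**2)/50.0)  # saddle eigenvalue, s = 0

section = lambda t, u: u[0] - 0.5*(x_l + x_r)  # the cross section Sigma
section.terminal = True

# Shoot forward along W^u(x_l) and backward along W^s(x_r).
fwd = solve_ivp(rhs, (0.0, 500.0), [x_l + delta, delta*lam(x_l)],
                events=section, rtol=1e-10, atol=1e-12)
bwd = solve_ivp(lambda t, u: [-w for w in rhs(t, u)], (0.0, 500.0),
                [x_r - delta, delta*lam(x_r)],
                events=section, rtol=1e-10, atol=1e-12)
h = fwd.y_events[0][0][1] - bwd.y_events[0][0][1]  # gamma_l - gamma_r
print(h)  # close to zero at the double heteroclinic
```

Continuation in $(\bar{p},s)$ then amounts to repeating this evaluation of $h$ while stepping one parameter and solving $h=0$ for the other.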
\begin{figure}[htbp]
\centering
\psfrag{p}{$\bar{p}$}
\psfrag{s}{$s$}
\includegraphics[width=0.5\textwidth]{./heteroclinic-v2.eps}
\caption{Heteroclinic connections for equation (\ref{eq:fss}) in parameter space.}
\label{fig:het1}
\end{figure}
\textit{Remarks}: If we fix $p=0$ our initial change of variable becomes $-y=\bar{p}$ and our results for heteroclinic connections are for the FitzHugh-Nagumo equation without an applied current. In this situation it has been shown that the heteroclinic connections of the fast subsystem can be used to prove the existence of homoclinic orbits to the unique saddle equilibrium $(0,0,0)$ (cf. \cite{JonesKopellLanger}). Note that the existence of the heteroclinics in the fast subsystem was proved in a special case analytically \cite{AronsonWeinberger} but Figure \ref{fig:het1} is - to the best of our knowledge - the first explicit computation of where fast subsystem heteroclinics are located. The paper \cite{KrLinMethod} develops a method for finding heteroclinic connections by the same basic approach we used, i.e. defining a codimension one hyperplane $H$ that separates equilibrium points.\\
Figure \ref{fig:het1} suggests that there exists a double heteroclinic connection for $s=0$. Observe that the Hamiltonian in our case is $H(x_1,x_2)=\frac{(x_2)^2}{2}+V(x_1)$ where the function $V(x_1)$ is:
\begin{equation*}
V(x_1)=\frac{\bar{p}x_1}{5}-\frac{(x_1)^2}{100}+\frac{11(x_1)^3}{150}-\frac{(x_1)^4}{20}
\end{equation*}
The solution curves of (\ref{eq:fss2}) are given by $x_2=\pm\sqrt{2(\text{const. }-V(x_1))}$. The structure of the solution curves entails symmetry under reflection about the $x_1$-axis. Suppose $\bar{p}\in(\bar{p}_l,\bar{p}_r)$ and recall that we denoted the two saddle points of (\ref{eq:fss}) by $x_l$ and $x_r$ and that their location depends on $\bar{p}$. Therefore, we conclude that the two saddles $x_l$ and $x_r$ must have a heteroclinic connection if they lie on the same energy level, i.e. they satisfy $V(x_l)-V(x_r)=0$. This equation can be solved numerically to very high accuracy.
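Concretely, $V(x_l)-V(x_r)=0$ is a scalar root-finding problem in $\bar{p}$; a minimal sketch (the bracket $[-0.12,-0.01]$ is chosen inside $(\bar{p}_l,\bar{p}_r)$ so that three equilibria exist):

```python
import numpy as np
from scipy.optimize import brentq

def V(x, pbar):
    # Potential from (eq:ham): H(x1, x2) = x2^2/2 + V(x1).
    return pbar*x/5.0 - x**2/100.0 + 11.0*x**3/150.0 - x**4/20.0

def energy_gap(pbar):
    x_l, x_m, x_r = np.sort(np.roots([-1.0, 1.1, -0.1, pbar]).real)
    return V(x_l, pbar) - V(x_r, pbar)

pstar = brentq(energy_gap, -0.12, -0.01, xtol=1e-14)
print(pstar)  # approx -0.0619259
```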
\begin{proposition}
\label{prop:doublehet}
The fast subsystem of the FitzHugh-Nagumo equation for $s=0$ has a double heteroclinic connection for $\bar{p}=\bar{p}^*\approx -0.0619259$. Given a particular value $y=y_0$ there exists a double heteroclinic connection for $p=\bar{p}^*+y_0$ in the fast subsystem lying in the plane $y=y_0$.
\end{proposition}
\subsection{Two Slow Variables, One Fast Variable}
\label{sec:2slow1fast}
From continuation of periodic orbits in the full system - to be described in Section \ref{sec:hopf} - we observe that near the U-shaped curve of Hopf bifurcations the $x_2$-coordinate is a faster variable than $x_1$. In particular, the small periodic orbits generated in the Hopf bifurcation lie almost in the plane $x_2=0$. Hence to analyze this region we set $\bar{x}_2=x_2/\epsilon$ to transform the FitzHugh-Nagumo equation (\ref{eq:fhn}) into a system with 2 slow and 1 fast variable:
\begin{eqnarray}
\label{eq:fhnscale}
\dot{x}_1&=& \bar{x}_2\nonumber\\
\epsilon^2 \dot{\bar{x}}_2&=&\frac15 (s\epsilon\bar{x}_2-x_1(x_1-1)(\frac{1}{10}-x_1)+y-p)\\
\dot{y}&=&\frac{1}{s} (x_1-y) \nonumber
\end{eqnarray}
Note that \eqref{eq:fhnscale} corresponds to the FitzHugh-Nagumo equation in the form (cf. \eqref{eq:fhn_original}):
\begin{equation}
\label{eq:fhn_original_small_diff}
\left\{
\begin{array}{l}
u_\tau=5\epsilon^2 u_{xx}+f(u)-w+p \\
w_\tau=\epsilon(u- w)
\end{array}
\right.
\end{equation}
Therefore the transformation $\bar{x}_2=x_2/\epsilon$ can be viewed as a rescaling of the diffusion strength by $\epsilon^2$. We introduce a new independent small parameter $\bar{\delta}=\epsilon^2$ and then let $\bar{\delta}=\epsilon^2\rightarrow 0$. This assumes that $O(\epsilon)$ terms do not vanish in this limit, yielding the diffusion free system. Then the slow manifold $S_0$ of (\ref{eq:fhnscale}) is:
\begin{equation}
\label{eq:sm2d}
S_0=\left\{ (x_1,\bar{x}_2,y) \in\mathbb{R}^3 : \bar{x}_2=\frac{1}{s\epsilon}\left(f(x_1)-y+p\right)\right\}
\end{equation}
\begin{proposition}
\label{prop:2slow1fast}
Following time rescaling by $s$, the slow flow of system~\eqref{eq:fhnscale} on $S_0$ in the variables $(x_1,y)$ is given by
\begin{eqnarray}
\label{eq:red1}
\epsilon \dot{x}_1&=&f(x_1)-y+p\nonumber\\
\dot{y}&=&x_1-y
\end{eqnarray}
In the variables $(x_1,\bar{x}_2)$ the vector field~\eqref{eq:red1} becomes
\begin{eqnarray}
\label{eq:red2}
\dot{x}_1&=& \bar{x}_2\nonumber\\
\epsilon\dot{\bar{x}}_2&=&-\frac{1}{s^2}\left(x_1-f(x_1)-p\right)+\frac{\bar{x}_2}{s}\left(f'(x_1)-\epsilon \right)
\end{eqnarray}
\end{proposition}
\textit{Remark:} The reduction to equations \eqref{eq:red1}-\eqref{eq:red2} suggests that \eqref{eq:fhnscale} is a three time-scale system. Note however that \eqref{eq:fhnscale} is not given in the three time-scale form $(\epsilon^2\dot{z}_1,\epsilon\dot{z}_2,\dot{z}_3)=(h_1(z),h_2(z),h_3(z))$ for $z=(z_1,z_2,z_3)\in\mathbb{R}^3$ and $h_i:\mathbb{R}^3\rightarrow \mathbb{R}$ $(i=1,2,3)$. The time-scale separation in \eqref{eq:red1}-\eqref{eq:red2} results from the singular $1/\epsilon$ dependence of the critical manifold $S_0$; see \eqref{eq:sm2d}.
\begin{proof}(of Proposition \ref{prop:2slow1fast})
Use the defining equation for the slow manifold (\ref{eq:sm2d}) and substitute it into $\dot{x}_1=\bar{x}_2$. A rescaling of time by $t\rightarrow st$ under the assumption that $s>0$ yields the result (\ref{eq:red1}). To derive (\ref{eq:red2}) differentiate the defining equation of $S_0$ with respect to time:
\begin{equation*}
\dot{\bar{x}}_2=\frac{1}{s\epsilon}\left(\dot{x}_1f'(x_1)-\dot{y}\right)=\frac{1}{s\epsilon}\left(\bar{x}_2f'(x_1)-\dot{y}\right)
\end{equation*}
The equations $\dot{y}=\frac{1}{s}(x_1-y)$ and $y=-s\epsilon\bar{x}_2+f(x_1)+p$
yield the equations (\ref{eq:red2}).
\end{proof}
Before we begin the analysis of \eqref{eq:red1} we note that detailed bifurcation calculations for it already exist. For example, Rocsoreanu et al. \cite{GGR} give a detailed overview on the FitzHugh equation \eqref{eq:red1} and collect many relevant references. Therefore we shall only state the relevant bifurcation results and focus on the fast-slow structure and canards. Equation \eqref{eq:red1} has a critical manifold given by $y=f(x_1)+p=c(x_1)$ which coincides with the critical manifold of the full FitzHugh-Nagumo system (\ref{eq:fhn}). Formally it is located in $\mathbb{R}^2$ but we still denote it by $C_0$. Recall that the fold points are located at
\begin{equation*}
x_{1,\pm}=\frac{1}{30}\left(11\pm\sqrt{91}\right)\qquad \text{or numerically:} \quad x_{1,+}\approx 0.6846, \quad x_{1,-}\approx 0.0487
\end{equation*}
Also recall that the y-nullcline passes through the fold points at:
\begin{equation*}
p_-\approx 0.0511 \qquad \text{and} \qquad p_+\approx 0.5584
\end{equation*}
We easily find that supercritical Hopf bifurcations are located at the values
\begin{equation}
\label{eq:Hopfred}
p_{H,\pm}(\epsilon)=\frac{2057}{6750} \pm \sqrt{\frac{11728171}{182250000}-\frac{359 \epsilon }{1350}+\frac{509 \epsilon ^2}{2700}-\frac{\epsilon ^3}{27}}
\end{equation}
For the case $\epsilon=0.01$ we get $p_{H,-}(0.01)\approx 0.05632$ and $p_{H,+}(0.01)\approx 0.55316$. The periodic orbits generated in the Hopf bifurcations exist for $p\in (p_{H,-},p_{H,+})$. Observe also that $p_{H,\pm}(0)=p_\pm$; so the Hopf bifurcations of (\ref{eq:red1}) coincide in the singular limit with the fold bifurcations in the one-dimensional slow flow (\ref{eq:sf}). We are also interested in canards in the system and calculate a first order asymptotic expansion for the location of the maximal canard in (\ref{eq:red1}) following \cite{KruSzm3}; recall that trajectories lying in the intersection of attracting and repelling slow manifolds are called maximal canards. We restrict to canards near the fold point $(x_{1,-},c(x_{1,-}))$.
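Formula \eqref{eq:Hopfred} is straightforward to evaluate; the following sketch reproduces the quoted values and the singular limit:

```python
import numpy as np

def p_hopf(eps):
    # Hopf locations (eq:Hopfred) of the reduced system (eq:red1).
    rad = (11728171.0/182250000.0 - 359.0*eps/1350.0
           + 509.0*eps**2/2700.0 - eps**3/27.0)
    return 2057.0/6750.0 - np.sqrt(rad), 2057.0/6750.0 + np.sqrt(rad)

print(p_hopf(0.01))  # approx (0.05632, 0.55316)
print(p_hopf(0.0))   # approx (0.05106, 0.55842), i.e. (p_-, p_+)
```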
\begin{proposition}
Near the fold point $(x_{1,-},c(x_{1,-}))$ the maximal canard in $(p,\epsilon)$ parameter space is given by:
\begin{equation*}
p(\epsilon)=x_{1,-}-c(x_{1,-})+\frac58 \epsilon+O(\epsilon^{3/2})
\end{equation*}
\end{proposition}
\begin{proof}
Let $\bar{y}=y-p$ and consider the shifts
\begin{equation*}
x_1\rightarrow x_1+x_{1,-},\quad \bar{y}\rightarrow \bar{y}+c(x_{1,-}), \quad p\rightarrow p+x_{1,-}-c(x_{1,-})
\end{equation*}
to translate the equilibrium of (\ref{eq:red1}) to the origin when $p=0$. This gives
\begin{eqnarray}
\label{eq:red1sf}
x_1'&=&x_1^2\left(\frac{\sqrt{91}}{10}-x_1\right)-\bar{y}=\bar{f}(x_1,\bar{y})\nonumber\\
\bar{y}'&=&\epsilon(x_1-\bar{y}-p)=\epsilon (\bar{g}(x_1,\bar{y})-p)
\end{eqnarray}
Now apply Theorem 3.1 in \cite{KruSzm3} to find that the maximal canard of (\ref{eq:red1sf}) is given by:
\begin{equation*}
p(\epsilon)=\frac{5}{8}\epsilon+O(\epsilon^{3/2})
\end{equation*}
Shifting the parameter $p$ back to the original coordinates yields the result.
\end{proof}
If we substitute $\epsilon=0.01$ in the previous asymptotic result and neglect terms of order $O(\epsilon^{3/2})$ then the maximal canard is predicted to occur for $p\approx 0.05731$ which is right after the first supercritical Hopf bifurcation at $p_{H,-}\approx 0.05632$. Therefore we expect that there exist canard orbits evolving along the middle branch of the critical manifold $C_{m,0.01}$ in the full FitzHugh-Nagumo equation. Maximal canards are part of a process generally referred to as canard explosion \cite{DRvdP,KruSzm2,Diener}. In this situation the small periodic orbits generated in the Hopf bifurcation at $p=p_{H,-}$ undergo a transition to relaxation oscillations within a very small interval in parameter space. A variational integral determines whether the canards are stable \cite{KruSzm2,GuckenheimerBoRO}.
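The first-order prediction can be evaluated directly (a sketch; in the asymptotic formula $c$ is taken with $p=0$, so $c(x_{1,-})=f(x_{1,-})$):

```python
import numpy as np

# f(x1) = -x1^3 + 1.1*x1^2 - 0.1*x1 and its smaller fold point x_{1,-}.
f = np.polynomial.Polynomial([0.0, -0.1, 1.1, -1.0])
x_minus = np.sort(f.deriv().roots())[0]

eps = 0.01
p_canard = x_minus - f(x_minus) + 5.0*eps/8.0
print(p_canard)  # approx 0.05731
```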
\begin{proposition}
\label{prop:stablecanards}
The canard cycles generated near the maximal canard point in parameter space for equation (\ref{eq:red1}) are stable.
\end{proposition}
\begin{proof}
Consider the differential equation (\ref{eq:red1}) in its transformed form (\ref{eq:red1sf}). Obviously this will not affect the stability analysis of any limit cycles. Let $x_l(h)$ and $x_m(h)$ denote the two smallest $x_1$-coordinates of the intersection between
\begin{equation*}
\bar{C}_0:=\{(x_1,\bar{y})\in\mathbb{R}^2:\bar{y}=\frac{\sqrt{91}}{10}x_1^2-x_1^3=\phi(x_1)\}
\end{equation*}
and the line $\bar{y}=h$. Geometrically $x_l$ represents a point on the left branch and $x_m$ a point on the middle branch of the critical manifold $\bar{C}_0$. Theorem 3.4 in \cite{KruSzm2} tells us that the canards are stable cycles if the function
\begin{equation*}
R(h)=\int_{x_l(h)}^{x_m(h)}\frac{\partial \bar{f}}{\partial x_1}(x_1,\phi(x_1))\frac{\phi'(x_1)}{\bar{g}(x_1,\phi(x_1))}dx_1
\end{equation*}
is negative for all values $h\in(0,\phi(\frac{\sqrt{91}}{15})]$ where $x_1=\frac{\sqrt{91}}{15}$ is the second fold point of $\bar{C}_0$ besides $x_1=0$. In our case we have
\begin{equation*}
R(h)=\int_{x_l(h)}^{x_m(h)}\frac{(\frac{\sqrt{91}}{5}x_1-3x_1^2)^2}{x_1-\frac{\sqrt{91}}{10}x_1^2+x_1^3}dx_1
\end{equation*}
with $x_l(h)\in[-\frac{\sqrt{91}}{30},0)$ and $x_m(h)\in(0,\frac{\sqrt{91}}{15}]$. Figure \ref{fig:canard_stability} shows a numerical plot of the function $R(h)$ for the relevant values of $h$ which confirms the required result.\\
\begin{figure}[htbp]
\centering
\includegraphics{./canard_stability.eps}
\caption{Plot of the function $R(h)$ for $h\in(0,\phi(\frac{\sqrt{91}}{15})]$.}
\label{fig:canard_stability}
\end{figure}
\textit{Remark:} We have computed an explicit algebraic expression for $R'(h)$ with a computer algebra system. This expression yields $R'(h)<0$ for $h\in(0,\phi(\frac{\sqrt{91}}{15})]$, confirming that $R(h)$ is decreasing.
\end{proof}
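The integral $R(h)$ from the preceding proof can be sampled numerically; a sketch (the quadrature is split at $x_1=0$, where the integrand extends continuously, and the sample points are our choice):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

sq = np.sqrt(91.0)
phi = lambda x: (sq/10.0)*x**2 - x**3            # critical curve of (eq:red1sf)
integrand = lambda x: ((sq/5.0)*x - 3.0*x**2)**2 / (x - phi(x))

def R(h):
    x_l = brentq(lambda x: phi(x) - h, -sq/30.0, -1e-14)  # left branch
    x_m = brentq(lambda x: phi(x) - h, 1e-14, sq/15.0)    # middle branch
    return quad(integrand, x_l, x_m, points=[0.0])[0]

h_max = phi(sq/15.0)
vals = [R(h) for h in np.linspace(0.01*h_max, 0.99*h_max, 5)]
print(vals)  # all negative
```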
As long as we stay on the critical manifold $C_0$ of the full system, the analysis of the bifurcations and geometry of (\ref{eq:red1}) give good approximations to the dynamics of the FitzHugh-Nagumo equation because the rescaling $x_2=\epsilon \bar{x}_2$ leaves the plane $x_2=0$ invariant. Next we use the dynamics of the $\bar{x}_2$-coordinate in system (\ref{eq:red2}) to obtain better insight into the dynamics when $x_2\neq0$. The critical manifold $D_0$ of (\ref{eq:red2}) is:
\begin{equation*}
D_0=\{(x_1,\bar{x}_2)\in\mathbb{R}^2:s\bar{x}_2c'(x_1)=x_1-c(x_1)\}
\end{equation*}
We are interested in the geometry of the periodic orbits shown in Figure \ref{fig:subsysgeo} that emerge from the Hopf bifurcation at $p_{H,-}$. Observe that the amplitude of the orbits in the $x_1$ direction is much larger than in the $x_2$-direction. Therefore we predict only a single small excursion in the $x_2$ direction for $p$ slightly larger than $p_{H,-}$ as shown in Figures \ref{fig:sub1} and \ref{fig:sub3}. The wave speed changes the amplitude of this $x_2$ excursion, with a smaller wave speed implying a larger excursion. Hence equation (\ref{eq:red1}) is expected to be a very good approximation for periodic orbits in the FitzHugh-Nagumo equation with fast wave speeds. Furthermore the periodic orbits show two $x_2$ excursions in the relaxation regime after the canard explosion; see Figure \ref{fig:sub2}.
\begin{figure}[htbp]
\subfigure[Small orbit near Hopf point ($p=0.058$, $s=1.37$)]{\includegraphics[width=0.3\textwidth]{./sub1.eps} \label{fig:sub1}}
\subfigure[Orbit after canard explosion ($p=0.06$, $s=1.37$)]{\includegraphics[width=0.3\textwidth]{./sub2.eps}\label{fig:sub2}}
\subfigure[Different wave speed ($p=0.058$, $s=0.2$)]{\includegraphics[width=0.3\textwidth]{./sub3.eps}\label{fig:sub3}}
\caption{Geometry of periodic orbits in the $(x_1,x_2)$-variables of the 2-variable slow subsystem (\ref{eq:red2}). Note that here $x_2=\epsilon \bar{x}_2$ is shown. Orbits have been obtained by direct forward integration for $\epsilon=0.01$.}
\label{fig:subsysgeo}
\end{figure}
\section{The Full System}
\subsection{Hopf Bifurcation}
\label{sec:hopf}
The characteristic polynomial of the linearization of the FitzHugh-Nagumo equation (\ref{eq:fhn}) at its unique equilibrium point is
\begin{equation*}
P(\lambda)=\frac{\epsilon }{5 s}+\left(-\frac{\epsilon }{s}-\lambda \right) \left(-\frac{1}{50}+\frac{11 x_1^*}{25}-\frac{3 (x_1^*)^2}{5}-\frac{s \lambda }{5}+\lambda ^2\right)
\end{equation*}
Denoting $P(\lambda)=c_0+c_1\lambda+c_2\lambda^2+c_3\lambda^3$, a necessary condition for $P$ to have pure imaginary roots is that $c_0c_3 = c_1 c_2$. The solutions of this equation can be expressed parametrically as a curve $(p(x_1^*),s(x_1^*))$:
\begin{eqnarray}
\label{eq:solHopf}
s(x_1^*)^2 &=& \frac{50\epsilon(\epsilon - 1)}{1 + 10\epsilon -22 x_1^* +30 (x_1^*)^2} \nonumber \\
p(x_1^*) &=& (x_1^*)^3 - 1.1 (x_1^*)^2 + 1.1 x_1^*
\end{eqnarray}
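The parametrization is easy to check numerically (a sketch; $\epsilon=10^{-4}$ is an illustrative small value):

```python
import numpy as np

def p_of(x):        # p(x1*) from (eq:solHopf)
    return x**3 - 1.1*x**2 + 1.1*x

def s_squared(x, eps):
    return 50.0*eps*(eps - 1.0)/(1.0 + 10.0*eps - 22.0*x + 30.0*x**2)

# As eps -> 0 the denominator changes sign at the roots of
# 30 x^2 - 22 x + 1 = 0, i.e. at the fold points; p there gives p_+-.
x_minus, x_plus = np.sort(np.roots([30.0, -22.0, 1.0]).real)
print(p_of(x_minus), p_of(x_plus))   # approx 0.0511 and 0.5584
print(s_squared(0.36, 1e-4))         # small and positive: s -> 0 with eps
```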
\begin{proposition}
\label{prop:hopf}
In the singular limit $\epsilon\rightarrow 0$ the U-shaped bifurcation curves of the FitzHugh-Nagumo equation have vertical asymptotes given by the points $p_-\approx 0.0510636 $ and $p_+\approx 0.558418$ and a horizontal asymptote given by $\{(p,s):p\in[p_-,p_+]\quad\text{and}\quad s=0\}$. Note that at $p_\pm$ the equilibrium point passes through the two fold points.
\end{proposition}
\begin{proof}
The expression for $s(x_1^*)^2$ in (\ref{eq:solHopf}) is positive when
$1 + 10\epsilon -22 x_1^* + 30 (x_1^*)^2 < 0$. For values of $x_1^*$
between the roots of $1 -22 x_1^* + 30 (x_1^*)^2 = 0$, $s(x_1^*)^2 \to 0$ in (\ref{eq:solHopf}) as $\epsilon \to 0$. The values of $p_-$ and $p_+$ in
the proposition are approximations to the value of $p(x_1^*)$ in
(\ref{eq:solHopf}) at the roots of $1 -22 x_1^* + 30 (x_1^*)^2 = 0$.
As $\epsilon \to 0$, solutions of the equation $s(x_1^*)^2 = c > 0$
in (\ref{eq:solHopf}) yield values of $x_1^*$ that tend to one of the
two roots of $1 - 22 x_1^* + 30 (x_1^*)^2 = 0$.
The result follows.
\end{proof}
\begin{figure}[htbp]
\subfigure[Projection onto $(x_1,y)$]{\includegraphics[width=0.46\textwidth]{./porbits1.eps}\label{fig:po1}}
\subfigure[Projection onto $(x_1,x_2)$]{\includegraphics[width=0.46\textwidth]{./porbits2.eps}\label{fig:po2}}
\caption{Hopf bifurcation at $p\approx 0.083$, $s=1$ and $\epsilon=0.01$. The critical manifold $C_0$ is shown in red and periodic orbits are shown in blue. Only the first and the last critical manifold for the continuation run are shown; not all periodic orbits obtained during the continuation are displayed.}
\label{fig:porbits}
\end{figure}
The analysis of the slow subsystems (\ref{eq:red1}) and (\ref{eq:red2}) gives a conjecture about the shape of the periodic orbits in the FitzHugh-Nagumo equation. Consider the parameter regime close to a Hopf bifurcation point. From (\ref{eq:red1}) we expect one part of the small periodic orbits generated in the Hopf bifurcation to lie close to the slow manifolds $C_{l,\epsilon}$ and $C_{m,\epsilon}$. Using the results about equation (\ref{eq:red2}) we anticipate the second part to consist of an excursion in the $x_2$ direction whose length is governed by the wave speed $s$. Figure \ref{fig:porbits} shows a numerical continuation in MatCont \cite{MatCont} of the periodic orbits generated in a Hopf bifurcation and confirms the singular limit analysis for small amplitude orbits. \\
Furthermore we observe from comparison of the $x_1$ and $x_2$ coordinates of the periodic orbits in Figure \ref{fig:po2} that orbits tend to lie close to the plane defined by $x_2=0$. More precisely, the $x_2$ diameter of the periodic orbits is observed to be $O(\epsilon)$ in this case. This indicates that the rescaling of Section \ref{sec:2slow1fast} can help to describe the system close to the U-shaped Hopf curve. Note that it is difficult to check whether this observation of an $O(\epsilon)$-diameter in the $x_2$-coordinate persists for values of $\epsilon<0.01$ since numerical continuation of canard-type periodic orbits is difficult to use for smaller $\epsilon$.\\
\begin{figure}[htbp]
\subfigure[$GH^\epsilon_{1}$]{\includegraphics[width=0.46\textwidth]{./gh2.eps}\label{fig:GH2}}
\subfigure[$GH^\epsilon_{2}$]{\includegraphics[width=0.46\textwidth]{./gh1.eps}\label{fig:GH1}}
\caption{Tracking of two generalized Hopf points (GH) in $(p,s,\epsilon)$-parameter space. Each point in the figure corresponds to a different value of $\epsilon$. The point $GH^\epsilon_1$ in \ref{fig:GH2} corresponds to the point shown as a square in Figure \ref{fig:cusystem} and the point $GH^\epsilon_2$ in \ref{fig:GH1} is further up on the left branch of the U-curve and is not displayed in Figure \ref{fig:cusystem}.}
\label{fig:GH}
\end{figure}
In contrast to this, it is easily possible to compute the U-shaped Hopf curve using numerical continuation for very small values of $\epsilon$. We have used this possibility to track two generalized Hopf bifurcation points in three parameters $(p,s,\epsilon)$. The U-shaped Hopf curve has been computed by numerical continuation for a mesh of parameter values for $\epsilon$ between $10^{-2}$ and $10^{-7}$ using MatCont \cite{MatCont}. The two generalized Hopf points $GH^\epsilon_{1,2}$ on the left half of the U-curve were detected as codimension two points during each continuation run. The results of this ``three-parameter continuation'' are shown in Figure \ref{fig:GH}.\\
The two generalized Hopf points depend on $\epsilon$ and we find that their singular limits in $(p,s)$-parameter space are approximately:
\begin{equation*}
GH^0_1\approx (p=0.171,s=0)\qquad \text{and} \qquad GH^0_2\approx (p=0.051,s= 3.927)
\end{equation*}
We have not found a way to recover these special points from the fast-slow decomposition of the system. This suggests that codimension two bifurcations are generally difficult to recover from the singular limit of fast-slow systems.\\
Furthermore the Hopf bifurcations for the full system on the left half of the U-curve are subcritical between $GH^\epsilon_1$ and $GH^\epsilon_2$ and supercritical otherwise. For the transformed system \eqref{eq:fhnscale} with two slow and one fast variable we observed that in the singular limit \eqref{eq:red1} for $\epsilon^2\rightarrow 0$ the Hopf bifurcation is supercritical. In the case of $\epsilon=0.01$ the periodic orbits for \eqref{eq:fhn} and \eqref{eq:red1} exist in overlapping regions for the parameter $p$ between the $p$-values of $GH^{0.01}_1$ and $GH^{0.01}_2$. This result indicates that \eqref{eq:fhnscale} can be used to describe periodic orbits that will interact with the homoclinic C-curve.
\subsection{Homoclinic Orbits}
\label{sec:hetfull}
In the following discussion we refer to ``the'' C-shaped curve of homoclinic bifurcations of system~\eqref{eq:fhn_temp} as the parameters yielding a ``single-pulse'' homoclinic orbit. The literature as described in Section \ref{sec:fhn} shows that close to single-pulse homoclinic orbits we can expect multi-pulse homoclinic orbits that return close to the equilibrium point multiple times. Since the separation of slow manifolds $C_{\cdot,\epsilon}$ is exponentially small, homoclinic orbits of different types will always occur in exponentially thin bundles in parameter space. Values of $\epsilon< 0.005$ are small enough that the parameter region containing all the homoclinic orbits will be indistinguishable numerically from ``the'' C-curve that we locate. \\
The history of proofs of the existence of homoclinic orbits in the FitzHugh-Nagumo equation is quite extensive. The main step in their construction is the existence of a ``singular'' homoclinic orbit $\gamma_0$. We consider the case when the fast subsystem has three equilibrium points which we denote by $x_l\in C_l$, $x_m\in C_m$ and $x_r\in C_r$. Recall that $x_l$ coincides with the unique equilibrium $q=(x_1^*,0,x_1^*)$ of the full system for $p<p_-$. A singular homoclinic orbit is always constructed by first following the unstable manifold of $x_l$ in the fast subsystem given by $y=x_1^*$.\\
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{./homs0.eps}
\caption{Homoclinic orbits as level curves of $H(x_1,x_2)$ for equation (\ref{eq:fss2}) with $y=x_1^*$.}
\label{fig:homs0}
\end{figure}
First assume that $s=0$. In this case the Hamiltonian structure - see Section \ref{sec:hetsub} and equation (\ref{eq:fss2}) - can be used to show the existence of a singular homoclinic orbit. Figure \ref{fig:homs0} shows level curves $H(x_1,x_2)=H(x_1^*,0)$ for various values of $p$. The double heteroclinic connection can be calculated directly using Proposition \ref{prop:doublehet} and solving $x_1^*+\bar{p}^*=p$ for $p$.
\begin{proposition}
There exists a singular double heteroclinic connection in the FitzHugh-Nagumo equation for $s=0$ and $p\approx -0.246016=p^*$.
\end{proposition}
Techniques developed in \cite{Szmolyan1} show that the singular homoclinic orbits existing for $s=0$ and $p\in(p^*,p_-)$ must persist for perturbations of small positive wave speed and sufficiently small $\epsilon$. These orbits are associated to the lower branch of the C-curve. The expected geometry of the orbits is indicated by their shape in the singular limit shown in Figure \ref{fig:homs0}. The double heteroclinic connection is the boundary case between the upper and lower half of the C-curve. It remains to analyze the singular limit for the upper half. In this case, a singular homoclinic orbit is again formed by following the unstable manifold of $x_l$ when it coincides with the equilibrium $q=(x_1^*,0,x_1^*)$ but now we check whether it forms a heteroclinic orbit with the stable manifold of $x_r$. Then we follow the slow flow on $C_r$ and return on a heteroclinic connection to $C_l$ for a different y-coordinate with $y>x_1^*$ and $y<c(x_{1,+})=f(x_{1,+})+p$. From there we connect back via the slow flow. Using the numerical method described in Section \ref{sec:hetsub} we first set $y=x_1^*$; note that the location of $q$ depends on the value of the parameter $p$. The task is to check when the system
\begin{eqnarray}
\label{eq:hetateq}
x_1'&=& x_2\nonumber\\
x_2'&=& \frac{1}{5}\left(sx_2-f(x_1)+y-p\right)
\end{eqnarray}
has heteroclinic orbits from $C_l$ to $C_r$ with $y = x_1^*$. The result of this computation is shown in Figure \ref{fig:het_full} as the red curve. We have truncated the result at $p=-0.01$. In fact, the curve in Figure \ref{fig:het_full} can be extended to $p=p^*$. Obviously we should view this curve as an approximation to the upper part of the C-curve.\\
\begin{figure}[htbp]
\centering
\psfrag{p}{$p$}
\psfrag{s}{$s$}
\includegraphics[width=0.5\textwidth]{./het_full.eps}
\caption{\label{fig:het_full}Heteroclinic connections for equation \eqref{eq:hetateq} in parameter space. The red curve indicates left-to-right connections for $y=x_1^*$ and the blue curves indicate right-to-left connections for $y=x_1^*+v$ with $v=0.125,0.12,0.115$ (from top to bottom).}
\end{figure}
If the connection from $C_r$ back to $C_l$ occurs with vertical coordinate $x_1^*+v$, it is a trajectory of
system (\ref{eq:hetateq}) with $y=x_1^*+v$. Figure \ref{fig:het_full} shows values of $(p,s)$ at which these
heteroclinic orbits exist for $v=0.125,0.12,0.115$. An intersection between a red and a blue curve indicates a singular homoclinic orbit. Further computations show that increasing the value of $v$ slowly beyond $0.125$ yields intersections everywhere along the red curve in Figure \ref{fig:het_full}. Thus the values of $v$ on the homoclinic orbits are expected to grow as $s$ increases along the upper branch of the C-curve. Since there cannot be any singular homoclinic orbits for $p\in (p_-,p_+)$ we have to find the intersection of the red curve in Figure \ref{fig:het_full} with the vertical line $p=p_-$. Using the numerical method to detect heteroclinic connections gives:
\begin{proposition}
The singular homoclinic curve for positive wave speed terminates at $p=p_-$ and $s\approx 1.50815=s^*$ on the right and at $p=p^*$ and $s=0$ on the left.
\end{proposition}
In $(p,s)$-parameter space define the points:
\begin{equation}
\label{eq:mpts}
A=(p^*,0),\qquad B=(p_-,0),\qquad C=(p_-,s^*)
\end{equation}
In Figure \ref{fig:amazing} we have computed the homoclinic C-curve for values of $\epsilon$ between $10^{-2}$ and $5\cdot 10^{-5}$. Together with the singular limit analysis above, this yields strong numerical evidence for the following conjecture:
\begin{conjecture}
\label{cjt:hom}
The C-shaped homoclinic bifurcation curves converge to the union of the segments $AB$ and $AC$ as $\epsilon \rightarrow 0$.
\end{conjecture}
\textit{Remark 1:} Figure 4 of Krupa, Sandstede and Szmolyan \cite{KSS1997} shows a ``wedge'' that resembles the one shown in Figure \ref{fig:amazing}. The system that they study sets $p=0$ and
varies $a$ with $a\approx 1/2$. For $a=1/2$ and $p=0$, the equilibrium point $q$ is located at the origin and the fast subsystem with $y=0$ has a double heteroclinic connection at $q$ to the saddle equilibrium $(1,0,0)\in C_r$. The techniques developed in \cite{KSS1997} use this double heteroclinic connection as a starting point. Generalizations of the results in \cite{KSS1997} might provide a strategy to prove Conjecture \ref{cjt:hom} rigorously, a possibility that we have not yet considered. However, we think that 1-homoclinic orbits in the regime we study come in pairs and that the surface of 1-homoclinic orbits in $(p,s,\epsilon)$ space differs qualitatively from that described by Krupa, Sandstede and Szmolyan.\\
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{./amazing.eps}
\caption{\label{fig:amazing}Singular limit ($\epsilon=0$) of the C-curve is shown in blue and parts of several C-curves for $\epsilon>0$ have been computed (red).}
\end{figure}
\textit{Remark 2:} We have investigated the termination or turning mechanism of the C-curve at its upper end. The termination points shown in Figure \ref{fig:cusystem} have been obtained by a different geometric method. It relies on the observation that, in addition to the two fast heteroclinic connections, we have to connect near $C_l$ back to the equilibrium point $q$ to form a homoclinic orbit; the two heteroclinic connections might persist as intersections of suitable invariant manifolds, but we also have to investigate how the flow near $C_{l,\epsilon}$ interacts with the stable manifold $W^s(q)$. These results will be reported elsewhere, but we note here that $p_{turn}(\epsilon)\rightarrow p_-$ as $\epsilon\rightarrow 0$.\\
The numerical calculations of the C-curves for $\epsilon\leq 10^{-3}$ are new. Numerical continuation using the boundary value methods implemented in AUTO \cite{Doedel_AUTO2000} or MatCont \cite{MatCont} becomes very difficult for these small values of $\epsilon$ \cite{Sneydetal}. Even computing with values $\epsilon=O(10^{-2})$ using boundary value methods is a numerically challenging problem. The method we have used does not compute the homoclinic orbits themselves, but it locates the homoclinic C-curve accurately in parameter space. To motivate our approach, consider Figure \ref{fig:splitting}, which shows the unstable manifold $W^u(q)$ for different values of $s$ and fixed $p$. We observe that homoclinic orbits can only exist near two different wave speeds $s_1$ and $s_2$, which define the parameters where $W^u(q)\subset W^s(C_{l,\epsilon})$ or $W^u(q)\subset W^s(C_{r,\epsilon})$. Figure \ref{fig:splitting} displays how $W^u(q)$ changes as $s$ varies for the fixed value $p=0.05$. If $s$ differs from the points $s_1$ and $s_2$ that define the lower and upper branches of the C-curve for the given value of $p$, then $|x_1|$ increases rapidly on $W^u(q)$ away from $q$. The changes in sign of $x_1$ on $W^u(q)$ identify values of $s$ with homoclinic orbits. The two splitting points that mark these sign changes are visible in Figure \ref{fig:splitting}. Since trajectories close to the slow manifolds separate exponentially away from them, we are able to assess the sign of $x_1$ unambiguously on trajectories close to the slow manifold and find small intervals $(p,s_1\pm 10^{-15})$ and $(p,s_2\pm 10^{-15})$ that contain the values of $s$ for which there are homoclinic orbits.\\
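The sign-change detection just described can be sketched in a few lines of code. In the sketch below, the vector field is a FitzHugh-Nagumo-type traveling-wave system with an illustrative cubic nonlinearity, and the seed point, parameter values, and escape threshold are stand-in assumptions rather than the exact quantities used in our computations; the actual computation seeds on $W^u(q)$ along its unstable eigendirection and bisects down to $10^{-15}$.

```python
def fhn(state, p, s, eps, a=0.2):
    # Illustrative FitzHugh-Nagumo-type traveling-wave field (assumed form):
    # x1' = x2, x2' = s*x2 - x1(x1-1)(a-x1) + y - p, y' = (eps/s)(x1 - y)
    x1, x2, y = state
    return (x2, s*x2 - x1*(x1 - 1.0)*(a - x1) + y - p, (eps/s)*(x1 - y))

def splitting_sign(s, p=0.05, eps=0.01, h=2e-3, nmax=50000, escape=2.0):
    """Integrate from a point close to the equilibrium q (a crude proxy for a
    point on W^u(q)) and report on which side x1 escapes: the 'splitting'."""
    state = [1e-4, 1e-4, 0.0]
    for _ in range(nmax):
        # classical fourth-order Runge-Kutta step
        k1 = fhn(state, p, s, eps)
        k2 = fhn([u + 0.5*h*k for u, k in zip(state, k1)], p, s, eps)
        k3 = fhn([u + 0.5*h*k for u, k in zip(state, k2)], p, s, eps)
        k4 = fhn([u + h*k for u, k in zip(state, k3)], p, s, eps)
        state = [u + h/6.0*(a + 2*b + 2*c + d)
                 for u, a, b, c, d in zip(state, k1, k2, k3, k4)]
        if abs(state[0]) > escape:
            return 1 if state[0] > 0 else -1
    return 0   # undecided within nmax steps

def bisect_sign_change(sign_of, s_lo, s_hi, tol=1e-10):
    """Shrink [s_lo, s_hi] around a sign change of sign_of(s); this is how the
    small intervals of wave speeds containing homoclinic orbits are localized."""
    f_lo = sign_of(s_lo)
    while s_hi - s_lo > tol:
        mid = 0.5*(s_lo + s_hi)
        if sign_of(mid) == f_lo:
            s_lo = mid
        else:
            s_hi = mid
    return s_lo, s_hi
```

The exponential separation of nearby trajectories away from the slow manifolds is what makes the sign evaluation robust enough to support a bisection of this type.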
\begin{figure}[htbp]
\subfigure[$\epsilon=0.01$, $p=0.05$, \text{$s\in[0.1,0.9]$}]{\includegraphics[width=0.45\textwidth]{./split1.eps} \label{fig:split1}}
\subfigure[$\epsilon=0.01$, $p=0.05$, \text{$s\in[0.9,1.5]$}]{\includegraphics[width=0.45\textwidth]{./split2.eps} \label{fig:split2}}
\caption{\label{fig:splitting}Strong ``splitting'', marked by an arrow, of the unstable manifold $W^u(q)$ (red) used in the calculation of the homoclinic C-curve for small values of $\epsilon$. The critical manifold $C_0$ is shown in blue. The spacing in $s$ is $0.05$ for both figures.}
\end{figure}
The geometry of the orbits along the upper branch of the C-curve is obtained by approximating it with two fast singular heteroclinic connections and parts of the slow manifolds $C_{r,\epsilon}$ and $C_{l,\epsilon}$; this process has been described several times in the literature when different methods were used to prove the existence of ``fast waves'' (see e.g. \cite{Hastings1,Carpenter,JonesKopellLanger}).
\section{Conclusions}
\begin{sidewaysfigure}
\centering
\psfrag{x1}{$x_1$}
\psfrag{x2}{$x_2$}
\psfrag{y}{$y$}
\psfrag{p}{$p$}
\psfrag{s}{$s$}
\psfrag{A}{$A$}
\psfrag{B}{$B$}
\psfrag{C}{$C$}
\psfrag{canards}{Canards in equation (\ref{eq:red1}),(\ref{eq:red2})}
\psfrag{Ccurve}{C-curve}
\psfrag{Ucurve}{U-curve}
\psfrag{slowflowbif}{slow flow bifurcation $p=p_-$}
\psfrag{Hopfred1red2}{Hopf bif. $p_{H,-}$ in (\ref{eq:red1}),(\ref{eq:red2})}
\includegraphics[width=0.55\textwidth]{./singbif.eps}
\caption{Sketch of the singular bifurcation diagram for the FitzHugh-Nagumo equation (\ref{eq:fhn}). The points $A,B$ and $C$ are defined in (\ref{eq:mpts}). The part of the diagram obtained from equations (\ref{eq:red1}),(\ref{eq:red2}) corresponds to the case ``$\epsilon^2=0$ and $\epsilon\neq0$ and small''. In this scenario the canards to the right of $p=p_-$ are stable (see Proposition \ref{prop:stablecanards}). The phase portrait in the upper right for equation (\ref{eq:red1}) shows the geometry of a small periodic orbit generated in the Hopf bifurcation of (\ref{eq:red1}). The two phase portraits below it show the geometry of these periodic orbits further away from the Hopf bifurcation for (\ref{eq:red1}),(\ref{eq:red2}). Excursions of the periodic orbits/canards for $p>p_-$ decrease for larger values of $s$. Note also that we have indicated as dotted lines the C-curve and the U-curve for positive $\epsilon$ to allow a qualitative comparison with Figure \ref{fig:cusystem}.}
\label{fig:singbif}
\end{sidewaysfigure}
Our results are summarized in the singular bifurcation diagram shown in Figure \ref{fig:singbif}.
This figure shows information obtained by a combination of fast-slow decompositions, classical dynamical systems techniques and robust numerical algorithms that work for very small values of $\epsilon$. It recovers and extends to smaller values of $\epsilon$ the CU-structure described in \cite{Sneydetal} for the FitzHugh-Nagumo equation. The U-shaped Hopf curve was computed with an explicit formula, and the homoclinic C-curve was determined by locating transitions between different dynamical behaviors separated by the homoclinic orbits. All the results shown as solid lines in Figure \ref{fig:singbif} have been obtained by considering a singular limit. The lines $AB$ and $AC$ as well as the slow flow bifurcation follow from the singular limit $\epsilon\rightarrow 0$ yielding the fast and slow subsystems of the FitzHugh-Nagumo equation \eqref{eq:fhn}. The analysis of canards and periodic orbits has been obtained from equations (\ref{eq:red1}) and (\ref{eq:red2}), where the singular limit $\epsilon^2\rightarrow 0$ was used (see Section \ref{sec:2slow1fast}). We have also shown the C- and U-curves in Figure \ref{fig:singbif} as dotted lines to orient the reader as to how the results from Proposition \ref{prop:hopf} and Conjecture \ref{cjt:hom} fit in.\\
We also observed that several dynamical phenomena are difficult to recover from the singular limit fast-slow decomposition. In particular, the codimension two generalized Hopf bifurcation does not seem to be observable from the singular limit analysis. Furthermore the homoclinic orbits can be constructed from the singular limits but it cannot be determined directly from the fast and slow subsystems that they are of Shilnikov-type.\\
The type of analysis pursued here seems to be very useful for other multiple time-scale problems involving multi-parameter bifurcation problems. In future work, we shall give a geometric analysis of the folding/turning mechanism of the homoclinic C-curve, a feature of this system we have not been able to determine directly from our singular limit analysis. That work relies upon new methods for calculating $C_{l,\epsilon}$ and $C_{r,\epsilon}$ which are invariant slow manifolds of ``saddle-type'' with both stable and unstable manifolds.\\
We end with brief historical remarks. The references cited in this paper discuss mathematical challenges posed by the FitzHugh-Nagumo equation, how these challenges have been analyzed and their relationship to general questions about multiple time-scale systems. Along the line $AB$ in Figure \ref{fig:singbif} we encounter a perturbation problem regarding the persistence of homoclinic orbits that can be solved using Fenichel theory \cite{Szmolyan1}. The point $A$ marks the connection between fast and slow waves in $(p,s)$-parameter space which has been investigated in $(\epsilon,s)$-parameter space in \cite{KSS1997}. We view this codimension 2 connectivity as one of the key features of the FitzHugh-Nagumo system. The perturbation problem for homoclinic orbits close to the line $AC$ was solved using several methods and was put into the context of multiple time-scale systems in \cite{JonesKopell,JonesKopellLanger}, where the Exchange Lemma overcame difficulties in tracking $W^u(q)$ when it starts jumping away from $C_{r,\epsilon}$. This theory provides rigorous foundations that support our numerical computations and their interpretation.\\
{\bf Acknowledgment:} This research was partially supported by the National Science Foundation and Department of Energy.
\bibliographystyle{plain}
\section{\label{sec:1} Introduction}
The ability to predict the charge transport properties of semiconductors using non-empirical \textit{ab initio} methods is of paramount importance for the design of next-generation electronics, neuromorphic computing, energy-efficient lighting, and energy conversion and storage. For example, as beyond-silicon materials for next-generation field-effect transistors are being explored, such as wide-gap semiconductors like GaN~\cite{Pushpakaran:2020}, SiC~\cite{Ramkumar:2022}, and Ga$_2$O$_3$~\cite{Green:2022}, or high-mobility materials such as GaAs~\cite{Papez:2021}, \textit{ab initio} methods for calculating transport properties with predictive accuracy are acquiring an increasingly important role.
The past decade has seen numerous developments in first-principles calculations of phonon-limited charge transport coefficients such as the electrical conductivity in metals, and the drift and Hall mobilities in semiconductors~\cite{Restrepo:2009,Li:2014,Fiorentini:2016,Kim:2016,Mustafa:2016,Ponce:2018,Protik:2020}.
More recently, several groups turned their attention to \textit{ab initio} calculations of additional scattering mechanisms~\cite{Restrepo:2009,Caruso:2016,Lu:2019,Lu:2022,Xia:2021,Sanders:2021}.
Among the various mechanisms, impurity scattering is of particular interest since ionized donors and acceptors are ubiquitous in high-purity doped semiconductors, and intrinsic point defects are unavoidable in all other materials~\cite{Slavcheva:2002,Callebaut:2004,Romer:2017}. In this work we focus on ionized-impurity scattering, which is expected to provide the most significant contribution to the carrier relaxation rates beyond phonons, given the long-ranged nature of the Coulomb potential.
Ionized-impurity scattering in semiconductors was first studied via the Conwell-Weisskopf model. In this model, the scattering potential of the impurity is described using a Coulomb monopole immersed in the dielectric background of the semiconductor~\cite{Conwell:1950}. The long-range nature of this potential makes it ill-behaved at long wavelengths, and in the Conwell-Weisskopf model the resulting singularity is removed using an \textit{ad hoc} infrared cutoff. A better handling of this singularity is achieved in the Brooks-Herring model by considering free-carrier screening~\cite{Brooks:1955}. This latter model proved very successful~\cite{Long:1959}, and is still widely used owing to its simplicity, as it only requires the electronic density of states, the carrier effective mass, the high-frequency dielectric constant, and the impurity concentration.
Further improvements upon these models were subsequently introduced, e.g., carrier statistics, dispersive electronic screening, two-impurity scattering, and atomic form factors~\cite{Kosina:1997}. While this class of models enjoyed considerable success with calculations of the carrier mobility of silicon, they do not perform as well with other semiconductors~\cite{Roschke:2001,Arvanitopoulos:2017}. These and other similar empirical adjustments make it harder to quantify the role of each scattering channel, and most importantly decrease the transferability of the models and ultimately their usefulness in materials design.
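To give a sense of the magnitudes involved, the free-carrier (Debye) screening length that regularizes the Coulomb divergence in Brooks-Herring-type models can be estimated directly. The silicon parameters below are standard textbook values, quoted here purely for illustration:

```python
import math

# Physical constants in SI units
e    = 1.602176634e-19     # elementary charge (C)
kB   = 1.380649e-23        # Boltzmann constant (J/K)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)

def debye_length(eps_r, T, n):
    """Non-degenerate Debye screening length L_D = sqrt(eps_r*eps0*kB*T/(e^2 n));
    1/L_D is the screening wavevector that cuts off the 1/q^2 divergence."""
    return math.sqrt(eps_r*eps0*kB*T/(e*e*n))

# n-type Si, room temperature, n = 1e17 cm^-3 = 1e23 m^-3 (illustrative)
LD = debye_length(11.7, 300.0, 1e23)   # ~1.3e-8 m, i.e. about 13 nm
```

The scaling $L_{\rm D}\propto n^{-1/2}$ implies that free-carrier screening suppresses ionized-impurity scattering most strongly at high doping, which is one reason the Brooks-Herring model improves on the Conwell-Weisskopf cutoff.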
During the past decade, considerable progress has been achieved in \textit{ab initio} calculations of charge carrier mobilities~\cite{Fiorentini:2016,Kim:2016,Ponce:2020,Restrepo:2009,Li:2014,Lu:2022}.
These approaches are based on the use of electronic band structures from density functional theory (DFT) \cite{Hohenberg:1964,Kohn:1965}, as well as phonon dispersion relations and electron-phonon matrix elements from supercell calculations or from density-functional perturbation theory (DFPT)~\cite{Baroni:1987,Giannozzi:1991,Gonze:1997,Baroni:2001}. To achieve a numerically converged sampling of the Brillouin zone, most calculations by now employ Wannier-Fourier interpolation~\cite{Giustino:2007,Mostofi:2014,Giustino:2017}. Mobilities are then obtained by solving the \textit{ab initio} Boltzmann transport equation ($ai$\text{BTE}) \cite{Ponce:2020}. The first study of ionized-impurity scattering from first principles was reported by Restrepo and Pantelides~\cite{Restrepo:2009}, and more recent, state-of-the-art calculations have been reported by Lu and coworkers~\cite{Lu:2022}. In this latter work, the authors find good agreement between calculated mobilities and experimental data for silicon. Additional work using a semi-empirical approach combining DFT calculations and models was also reported recently~\cite{Graziosi:2020,Ganose:2021}.
In this work, we investigate from first principles the effect of ionized-impurity scattering on the carrier mobility of semiconductors. To this aim, we take into account both carrier-phonon and carrier-impurity scattering on the same footing, within the $ai$\text{BTE}\ formalism as implemented in the EPW code.~\cite{Ponce:2016} Given that the shape of the impurity potential depends on the details of the crystal structure and its evaluation would require thermodynamic calculations of defects and defect levels~\cite{Freysoldt:2014}, we limit ourselves to considering the monopole term of the scattering potential and a random distribution of impurities. This simplification allows us to achieve an elegant and compact formalism, and to compute carrier mobilities by using solely the concentration of ionized impurities as input. To validate our methodology, we perform calculations for three test systems: Si, 3C-SiC, and GaP. For Si there is an abundance of experimental data and previous calculations to compare with. 3C-SiC, which is also referred to as cubic SiC or $\beta$-SiC in the literature, is considered a promising candidate for next-generation power electronics~\cite{Bhatnagar:1993,Li:2021,Li:2021:2}. Several experimental data sets are available for carrier mobility in 3C-SiC, especially for $n$-type (N) doping and less so for $p$-type doping (Al). GaP is a standard optoelectronic semiconductor which is of interest in non-linear optical switching~\cite{Zipperian:1982,Hughes:1991,Luo:2018}; experimental mobility data for GaP are available both for $n$-type doping (Sn) and $p$-type doping (Zn).
For each of these compounds we calculate the temperature-dependent carrier mobility at variable impurity concentration. We investigate the relative importance of carrier-phonon and carrier-impurity scattering, and we examine the validity of the classic Matthiessen's rule~\cite{Reif-Acherman:2015}.
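Anticipating that discussion, the basic reason why Matthiessen's rule is only approximate is that thermal averaging does not commute with adding scattering rates. This can be seen with a minimal numerical experiment; the power-law rates below are schematic models (acoustic-phonon-like and Brooks-Herring-like energy dependences), not first-principles data:

```python
import numpy as np

# Energy grid in units of kB*T, with a non-degenerate thermal weight ~ sqrt(E) exp(-E)
E = np.linspace(0.01, 20.0, 4000)
w = np.sqrt(E)*np.exp(-E)
w /= w.sum()

rate_ph  = E**0.5    # model phonon rate,   1/tau_ph  ~ E^{1/2}
rate_imp = E**-1.5   # model impurity rate, 1/tau_imp ~ E^{-3/2}

def mob(rate):
    # In a simple Drude-like picture mu ~ <tau>; prefactors drop out of the ratio.
    return float(np.sum(w/rate))

mu_exact       = mob(rate_ph + rate_imp)                # both mechanisms at once
mu_matthiessen = 1.0/(1.0/mob(rate_ph) + 1.0/mob(rate_imp))
violation = (mu_matthiessen - mu_exact)/mu_matthiessen  # substantial for these models
```

Because the parallel sum $\tau_1\tau_2/(\tau_1+\tau_2)$ is concave, the Matthiessen combination always overestimates the true mobility; the size of the error depends on how strongly the two rates vary across the thermal window.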
The manuscript is organized as follows. In Sec.~\ref{sec:theory} we briefly summarize the $ai$\text{BTE}\ formalism, we provide a detailed derivation of the matrix elements for carrier-impurity scattering, and we discuss the key approximations involved. In this section we also discuss free-carrier screening, and we examine under which conditions the Matthiessen rule can reliably be used in transport calculations. Section~\ref{sec:methods} is devoted to the implementation details and the calculation parameters used in this work. In Sec.~\ref{sec:results} we discuss our results for Si, SiC, and GaP. In particular, in Sec.~\ref{subsec:expcomp} we present our calculated temperature- and concentration-dependent mobilities and compare our data with experiments. In Sec.~\ref{subsec:invtau} we analyze the relative importance of phonon- and impurity-mediated scattering processes in the carrier relaxation rates. In Sec.~\ref{subsec:matt} we test Matthiessen's rule by comparing full $ai$\text{BTE}\ calculations with the results of separate calculations including only phonon-limited or impurity-limited mobilities. In Sec.~\ref{subsec:corr} we investigate how the DFT dielectric screening and carrier effective masses influence calculated mobilities, and we test simple correction schemes along the lines of Ref.~\cite{Ponce:2018}. In Sec.~\ref{sec:conclusions} we summarize our findings and offer our conclusions. Additional details on the calculation procedure are discussed in the Appendices.
\section{\label{sec:2} Theoretical approach}\label{sec:theory}
\subsection{Carrier mobility from the \textit{ab initio} Boltzmann transport equation}
A detailed derivation of the $ai$\text{BTE}\ formalism is given in Ref.~\cite{Ponce:2016}. Here we limit ourselves to summarize the key equations in order to keep this manuscript self-contained. Within the linearized Boltzmann transport equation, the carrier mobility tensor is obtained as:
\begin{equation}\label{eq:BTE3}
\mu_{\alpha\beta} = -\frac{2}{\Omega_{\text{uc}} n_{\rm c}}\frac{1}{N_{\text{uc}}}\sum_{n\mathbf k}
v_{n\mathbf k}^\alpha \partial_{E_\beta} f_{n\mathbf k},
\end{equation}
where the factor of 2 is for the spin degeneracy, Greek indices indicate Cartesian directions, $E_\beta$ denotes the Cartesian component of the electric field along the direction $\beta$, and $\partial_{E_\beta} f_{n\mathbf k}$ is the linear variation of the electronic occupation of the state with band index $n$ and wavevector $\mathbf k$ in response to the applied field. $v_{n\mathbf k}^\alpha$ represents the expectation value of the velocity operator along the direction $\alpha$ for the Kohn-Sham state $n\mathbf k$. $e$, $n_{\rm c}$, $\Omega_{\text{uc}}$, and $N_{\text{uc}}$ indicate the electron charge, the carrier density, the volume of the unit cell, and the number of unit cells in the Born-von K\'arm\'an (BvK) supercell, respectively. The $n$-summation extends over all Kohn-Sham states, although in practice only those states near the chemical potential contribute to the mobility. The $\mathbf k$-summation is over a uniform Brillouin zone grid.
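In an implementation, Eq.~\eqref{eq:BTE3} is a plain weighted contraction over bands and $\mathbf k$-points. The sketch below uses synthetic arrays as stand-ins for the velocities and occupation responses produced by a first-principles calculation:

```python
import numpy as np

def mobility_tensor(v, df_dE, omega_uc, n_c, N_uc):
    """mu_{alpha beta} = -(2/(Omega_uc*n_c*N_uc)) sum_{nk} v^alpha_{nk} dE_beta f_{nk}.
    v and df_dE have shape (nbands, nkpts, 3); units are left schematic."""
    return -2.0/(omega_uc*n_c*N_uc)*np.einsum('nka,nkb->ab', v, df_dE)

# Single-state sanity check (arbitrary units)
v  = np.zeros((1, 1, 3)); v[0, 0, 0] = 2.0    # velocity along x
df = np.zeros((1, 1, 3)); df[0, 0, 0] = -3.0  # occupation response to a field along x
mu = mobility_tensor(v, df, omega_uc=1.0, n_c=1.0, N_uc=1)
# mu[0, 0] = -2*2*(-3) = 12; all other elements vanish
```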
The variation $\partial_{E_\beta} f_{n\mathbf k}$ is obtained from the self-consistent solution of the equation:
\begin{eqnarray}\label{eq:BTE1}
&& -e v^{\beta}_{n\mathbf{k}} \frac{\partial f^0_{n\mathbf{k}}}{\partial \epsilon_{n\mathbf{k}}}
= \sum_{m\mathbf q}\,
\big[ \tau^{-1}_{m\mathbf{k}+\mathbf{q} \to n\mathbf{k}} \, \partial_{E_{\beta}}f_{m\mathbf{k}+\mathbf{q}} \nonumber \\
&& - \tau^{-1}_{n\mathbf{k} \to m\mathbf{k}+\mathbf{q}} \, \partial_{E_{\beta}}f_{n\mathbf{k}} \big],
\end{eqnarray}
where $f^0_{n\mathbf{k}}$ denotes the Fermi-Dirac occupation of the state $n\mathbf k$ in the absence of electric field. The quantity $\tau^{-1}_{n\mathbf{k} \to m\mathbf{k}+\mathbf{q}}$ is the partial scattering rate from the Kohn-Sham state $n\mathbf k$ to the state $m\mathbf k+\mathbf q$. In many-body perturbation theory, this rate is derived from the imaginary parts of the electron self-energy, therefore different scattering mechanisms simply add up to the lowest order in perturbation theory. In this work, we write the scattering rate as the sum of the rates of carrier-phonon scattering (ph) and carrier-impurity (imp) scattering:
\begin{equation}\label{eq:totrate}
\displaystyle
\frac{1}{\tau_{n\mathbf{k} \to m\mathbf{k}+\mathbf{q}}} =
\frac{1}{\tau^{\rm ph}_{n\mathbf{k} \to m\mathbf{k}+\mathbf{q}}} +
\frac{1}{\tau^{\rm imp}_{n\mathbf{k} \to m\mathbf{k}+\mathbf{q}}}.
\end{equation}
The partial carrier-phonon scattering rate is given by~\cite{Ponce:2020}:
\begin{eqnarray}\label{eq.tau.partial}
&&\frac{1}{\tau^{\rm ph}_{n\mathbf{k} \to m\mathbf{k}+\mathbf{q}}} = \frac{1}{N_{\text{uc}}}\sum_{\nu} \frac{2\pi}{\hbar} \left| g_{mn\nu}(\mathbf{k},\mathbf{q}) \right|^2 \nonumber \\
&&\times \big[ (n_{\mathbf{q}\nu}+1-f^0_{m\mathbf{k}+\mathbf{q}})\delta(\epsilon_{n\mathbf{k}}\!-\!\epsilon_{m\mathbf{k}+\mathbf{q}}-\hbar\omega_{\mathbf{q}\nu}) \nonumber \\
&&+ (n_{\mathbf{q}\nu}+f^0_{m\mathbf{k}+\mathbf{q}})\delta(\epsilon_{n\mathbf{k}}\!-\!\epsilon_{m\mathbf{k}+\mathbf{q}}+\hbar\omega_{\mathbf{q}\nu}) \big],
\end{eqnarray}
where $\epsilon_{n\mathbf k}$ denotes the Kohn-Sham eigenvalue of state $n\mathbf k$, and $\omega_{\mathbf q\nu}$ stands for the frequency of a phonon with branch index $\nu$, wavevector $\mathbf q$, and Bose-Einstein occupation $n_{\mathbf q\nu}$. The matrix elements $g_{mn\nu}(\mathbf{k},\mathbf{q})$ indicate the probability amplitude for the scattering of an electron from state $n\mathbf k$ to state $m\mathbf k+\mathbf q$ via a phonon $\mathbf q\nu$~\cite{Giustino:2017}. The partial rate in Eq.~\eqref{eq.tau.partial} can be obtained either from Fermi's golden rule or from many-body perturbation theory~\cite{Giustino:2017}. The carrier-impurity scattering rate required in Eq.~\eqref{eq:totrate} is derived in the next section and is given by Eq.~\eqref{eq:tauave3}.
Together, Eqs.~\eqref{eq:BTE3}-\eqref{eq.tau.partial} and \eqref{eq:tauave3} define the $ai$\text{BTE}\ framework employed in this work. This approach consistently captures back-scattering and Umklapp processes, with a computational cost that is similar to more approximate approaches based on various relaxation-time approximations. We refer the reader to Ref.~\cite{Ponce:2020} for a comprehensive review of common approximations to the Boltzmann transport equation.
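Solving Eq.~\eqref{eq:BTE1} for $\partial_{E_\beta}f_{n\mathbf k}$ turns it into a fixed-point problem: with the total out-rate $\tau_{n\mathbf k}^{-1}=\sum_{m\mathbf q}\tau^{-1}_{n\mathbf k\to m\mathbf k+\mathbf q}$, one has $\partial f_{n} = \tau_{n}\,(b_{n} + \sum_{m} W_{nm}\,\partial f_{m})$, where $b_n$ is the field term and $W_{nm}$ collects the in-scattering rates; dropping the in-scattering sum at iteration zero gives a relaxation-time-like answer. The toy sketch below uses a synthetic symmetric scattering matrix purely to illustrate this self-consistency loop:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20                                    # toy number of electronic states |nk>
W = rng.uniform(0.1, 1.0, size=(N, N))    # W[n, m] ~ partial in-scattering rate m -> n
W = 0.5*(W + W.T)                         # symmetric toy scattering matrix
np.fill_diagonal(W, 0.0)
R = W.sum(axis=1) + 0.5                   # total out-rate per state; the extra
                                          # channel guarantees a contraction
b = rng.normal(size=N)                    # driving term, stands in for -e v df0/de

df = b/R                                  # iteration 0: relaxation-time-like answer
for _ in range(2000):                     # self-consistent in-scattering corrections
    df = (b + W @ df)/R

df_direct = np.linalg.solve(np.diag(R) - W, b)   # direct linear solve for comparison
```

The converged vector differs visibly from the zeroth iterate; the same mechanism is what distinguishes the full $ai$\text{BTE}\ solution from relaxation-time approximations.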
\subsection{Scattering of carriers by ionized impurities in the monopole approximation}
To obtain the carrier-impurity scattering rate $1/\tau^{\rm imp}_{n\mathbf k \to m\mathbf k+\mathbf q}$
we proceed as follows: (i) We derive the matrix element of the scattering potential for a single impurity in a periodic BvK supercell of the crystal unit cell; (ii) We generalize the matrix element to consider a number $N_{\rm imp}^{\rm sc}$ of impurities in the BvK supercell; (iii) From this matrix element, we obtain the scattering rate corresponding to the $N_{\rm imp}^{\rm sc}$ impurities by using the first Born approximation; (iv) We average the resulting rate over a random uniform distribution of impurity positions using a method due to Kohn and Luttinger.
\subsubsection{Scattering potential and matrix element for a single impurity}
We employ the monopole approximation to describe the potential of an impurity of charge $Ze$ located at the position $\mathbf r_0$ in the BvK supercell. A more refined choice would entail explicitly calculating the impurity potential in DFT and its matrix elements. This approach was pursued in Refs.~\cite{Restrepo:2009} and \cite{Lu:2022}, but it carries the disadvantage that one needs to compute defect energetics prior to mobility calculations, and then perform rotational averages to account for the randomness of the impurity orientation. Our simpler approach is useful for systematic transport calculations when detailed knowledge of the atomic-scale structure of impurities is lacking, and can be made more accurate by incorporating dipole and quadrupole terms along the lines of Refs.~\cite{Verdi:2015,Brunin:2020,Park:2020}.
By solving the Poisson equation in the BvK supercell and considering a background anisotropic static dielectric constant tensor $\bm\varepsilon^0 = \varepsilon^0_{\alpha\beta}$, the potential of this point charge is found to be [see Eq.~(S3) of Ref.~\cite{Verdi:2015}]:
\begin{equation}\label{eq.S3}
\phi({\bf r};\mathbf r_0) = \frac{4\pi}{\Omega_{\rm sc}} \frac{Ze}{4\pi\varepsilon_0} \sum_{\bf q}
\sum_{{\bf G}\ne -{\bf q}}\frac{e^{i({\bf q+ G})\cdot ({\bf r}-\mathbf r_0)}}
{({\bf q}+{\bf G})\!\cdot\!\bm\varepsilon^0\!\cdot({\bf q}+{\bf G})},
\end{equation}
modulo an inessential constant that reflects the compensating background charge. In this expression, $\varepsilon_0$ is the vacuum permittivity, $\bf G$ is a reciprocal lattice vector, and the wavevector $\mathbf q$ belongs to a uniform Brillouin-zone grid. Here and in the following, we consider that the BvK cell consists of $N_{\text{uc}}$ unit cells, so that its volume is $\Omega_{\rm sc} = N_{\text{uc}}\Omega_{\text{uc}}$, and that the Brillouin zone is discretized in a uniform grid of $N_{\text{uc}}$ points. The potential $\phi(\mathbf r;\mathbf r_0)$ is periodic over the BvK supercell.
The perturbation potential resulting from this impurity is $V = \mp e \phi$ for electrons and holes, respectively. For definiteness, we consider electrons in the following. The matrix elements of the perturbation $V$ between the Kohn-Sham states $\psi_{n\mathbf k}$ and $\psi_{m\mathbf k+\mathbf q}$ is given by:
\begin{equation}\label{eq.g1}
g_{mn}^{\rm imp}(\mathbf k,\mathbf q;\mathbf r_0) = \langle \psi_{m\mathbf k+\mathbf q} | V(\mathbf r;\mathbf r_0) | \psi_{n\mathbf k}\rangle_{\rm sc},
\end{equation}
where the integral is over the supercell. The states can be written as $\psi_{n\mathbf k} = N_{\text{uc}}^{-1/2} e^{i\mathbf k\cdot\mathbf r} u_{n\mathbf k}$, where $u_{n\mathbf k}$ is the Bloch-periodic part and is normalized in the unit cell. The combination of Eqs.~\eqref{eq.S3} and \eqref{eq.g1} yields:
\begin{equation}\label{eq.gi1}
g_{mn}^{\rm imp}(\mathbf k,\mathbf q;\mathbf r_0) = \frac{-e^2}{4\pi\varepsilon_0} \frac{4\pi Z}{\Omega_{\rm sc}}
\sum_{{\bf G}\ne -{\bf q}}\!
\frac{e^{-i({\bf q+ G})\cdot \mathbf r_0} B_{mn,\mathbf G}(\mathbf k,\mathbf q)}{({\bf q}+{\bf G})\!\cdot\!\bm\varepsilon^0\!\cdot({\bf q}+{\bf G})},
\end{equation}
having defined the overlap integral:
\begin{equation}
B_{mn,\mathbf G}(\mathbf k,\mathbf q) = \langle u_{m\mathbf k+\mathbf q} | e^{i{\bf G}\cdot {\bf r}}|u_{n\mathbf k} \rangle_{\rm uc},
\end{equation}
which is evaluated over the unit cell.
\subsubsection{Scattering rate from multiple impurities within the first Born approximation}
We now consider $N_{\rm imp}^{\rm sc}$ impurities located at the positions $\mathbf r_1,\mathbf r_2,\ldots,\mathbf r_{N_{\rm imp}^{\rm sc}}$ in the BvK supercell. The corresponding perturbation potential is the sum of the potentials obtained in the previous section, $V = \sum_{I=1}^{N_{\rm imp}^{\rm sc}} V(\mathbf r;\mathbf r_I)$, therefore the generalization of Eq.~\eqref{eq.gi1} to the case of multiple identical impurities reads:
\begin{eqnarray}\label{eq.gmany}
g_{mn}^{\rm imp}(\mathbf k,\mathbf q;\{\mathbf r_I\}) &=&
\frac{-e^2}{4\pi\varepsilon_0} \frac{4\pi Z}{\Omega_{\rm sc}}
\sum_{{\bf G}\ne -{\bf q}}\!
\frac{B_{mn,\mathbf G}(\mathbf k,\mathbf q)}{({\bf q}+{\bf G})\!\cdot\!\bm\varepsilon^0\!\cdot({\bf q}+{\bf G})}\nonumber \\
&\times& {\sum}_{I=1}^{N_{\rm imp}^{\rm sc}} e^{-i({\bf q+ G})\cdot \mathbf r_I}.
\end{eqnarray}
The total scattering rate out of state $n\mathbf k$ associated with this matrix element can be written using the first Born approximation for the scattering matrix~\cite{Sakurai:2010} [Eqs. (6.1.16) and (6.1.32)]:
\begin{equation}\label{eq:imprate0}
\frac{1}{\tau^{\text{imp}}_{n\mathbf k}} =
\sum_{m\mathbf q} \frac{2\pi}{\hbar} |g_{mn}^{\text{imp}}(\mathbf k,\mathbf q;\{\mathbf r_I\})|^2
\delta(\epsilon_{n\mathbf k}-\epsilon_{m\mathbf k+\mathbf q}).
\end{equation}
We note that this expression is an intensive quantity, as expected, i.e. it does not scale with the size of the BvK supercell [see the discussion after Eq.~\eqref{eq:tauave3}]. The partial scattering rate needed in Eq.~\eqref{eq:totrate} is then defined as:
\begin{equation}\label{eq:imprate}
\frac{1}{\tau^{\text{imp}}_{n\mathbf k\rightarrow m\mathbf k+\mathbf q}} =
\frac{2\pi}{\hbar} |g_{mn}^{\text{imp}}(\mathbf k,\mathbf q;\{\mathbf r_I\})|^2
\delta(\epsilon_{n\mathbf k}-\epsilon_{m\mathbf k+\mathbf q}).
\end{equation}
Unlike Eq.~\eqref{eq.tau.partial}, in this expression we do not have the Fermi-Dirac occupations. These occupations drop out in the linearized Boltzmann transport equation, as can be verified, for example, by setting $n_{\mathbf q\nu}=0$ and $\omega_{\mathbf q\nu}=0$ in Eq.~\eqref{eq.tau.partial}. In Eq.~\eqref{eq:imprate} the Dirac delta function ensures energy conservation, consistent with the fact that we are considering the scattering by a fixed potential, i.e. we are neglecting the recoil of the impurity upon collision.
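In practical Brillouin-zone sums, the Dirac delta in Eq.~\eqref{eq:imprate} is represented by a finite-width function; a common choice is a normalized Gaussian, as in the sketch below (the 10~meV broadening is an arbitrary numerical parameter, not a physical linewidth):

```python
import math

def gaussian_delta(x, sigma):
    """Normalized Gaussian of width sigma, a finite-width stand-in for delta(x)."""
    return math.exp(-0.5*(x/sigma)**2)/(math.sqrt(2.0*math.pi)*sigma)

# Check that the broadened delta integrates to ~1 on a discrete energy grid
sigma, dE = 0.010, 0.0005                      # eV
grid = [i*dE for i in range(-400, 401)]        # -0.2 eV ... +0.2 eV
norm = sum(gaussian_delta(x, sigma) for x in grid)*dE
```

Production codes often choose the broadening adaptively from the band velocities and grid spacing: too large a value smears the energy-conservation condition, while too small a value leaves states unsampled on a finite grid.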
By combining Eqs.~\eqref{eq.gmany} and \eqref{eq:imprate} we find:
\begin{eqnarray}\label{eq:imprate2} &&
\frac{1}{\tau^{\text{imp}}_{n\mathbf k\rightarrow m\mathbf k+\mathbf q}}(\{\mathbf r_I\}) =
\frac{2\pi}{\hbar} \left[\frac{e^2}{4\pi\varepsilon_0} \frac{4\pi Z}{\Omega_{\rm sc}}\right]^2
\delta(\epsilon_{n\mathbf k}-\epsilon_{m\mathbf k+\mathbf q})
\nonumber \\ && \times \!\!\!\!\!\!\sum_{\bf G,\bf G'\ne -{\bf q}}\!\!\!
\frac{B_{mn,\mathbf G}(\mathbf k,\mathbf q)B^*_{mn,\mathbf G'}(\mathbf k,\mathbf q)}{(\mathbf Q\cdot\!\bm\varepsilon^0\!\cdot\mathbf Q)
(\mathbf Q'\cdot\!\bm\varepsilon^0\!\cdot\mathbf Q')}
{\sum}_{I,J=1}^{N_{\rm imp}^{\rm sc}} e^{i(\mathbf Q'\cdot \mathbf r_J-\mathbf Q\cdot \mathbf r_I)}, \nonumber \\
\end{eqnarray}
having defined $\mathbf Q = \mathbf q+\mathbf G$ and $\mathbf Q' = \mathbf q+\mathbf G'$ for convenience.
\subsubsection{Kohn-Luttinger ensemble averaging of the scattering rate}
In order to account for the randomness in the distribution of impurities, we perform a configuration average of the scattering rate in Eq.~\eqref{eq:imprate2} by considering a uniform probability distribution, following the Kohn-Luttinger approach~\cite{Kohn:1957}:
\begin{equation}\label{eq:tauave}
\frac{1}{\tau^{\text{imp,ave}}_{n\mathbf k\rightarrow m\mathbf k+\mathbf q}} =
\int_{\rm sc} \frac{d\mathbf r_1\cdots d\mathbf r_{N_{\rm imp}^{\rm sc}}}{\Omega_{\rm sc}^{N_{\rm imp}^{\rm sc}}}
\frac{1}{\tau^{\text{imp}}_{n\mathbf k\rightarrow m\mathbf k+\mathbf q}}
(\{\mathbf r_I\}).
\end{equation}
The only term that depends on the impurity positions in Eq.~\eqref{eq:imprate2} is the sum over $I,J$ on the second line. Below we evaluate the ensemble average of this sum by separating the $I=J$ and $I\ne J$ terms:
\begin{eqnarray}\label{eq:sum}
&& \hspace{-10pt}\int_{\rm sc} \frac{d\mathbf r_1\cdots d\mathbf r_{N_{\rm imp}^{\rm sc}}}{\Omega_{\rm sc}^{N_{\rm imp}^{\rm sc}}}
{\sum}_{I,J=1}^{N_{\rm imp}^{\rm sc}} e^{i(\mathbf Q'\cdot \mathbf r_J-\mathbf Q\cdot \mathbf r_I)} \nonumber \\
&& =
\frac{N_{\rm imp}^{\rm sc}}{\Omega_{\rm sc}}\int_{\rm sc} d\mathbf r\,
e^{i(\mathbf Q'-\mathbf Q)\cdot \mathbf r} \nonumber \\
&&
+ \frac{N_{\rm imp}^{\rm sc}(N_{\rm imp}^{\rm sc}-1)}{\Omega_{\rm sc}^2} \left[
\int_{\rm sc} d\mathbf r\, e^{i\mathbf Q'\cdot \mathbf r}\right]\!\!\left[
\int_{\rm sc} d\mathbf r\, e^{-i\mathbf Q\cdot \mathbf r}\right]\!\!.
\end{eqnarray}
Both terms on the r.h.s.\ require the evaluation of an integral of the type:
\begin{equation}\label{eq:int}
\int_{\rm sc} d\mathbf r\, e^{i\mathbf Q\cdot \mathbf r}.
\end{equation}
This integral equals $\Omega_{\rm sc}$ for $\mathbf Q=0$; for finite $\mathbf Q$, we note that the integral becomes the Fourier representation of the Dirac delta when $N_{\text{uc}} \rightarrow\infty$, therefore it vanishes. In this limit, Eq.~\eqref{eq:sum} reduces to:
\begin{eqnarray}\label{eq:sum2}
&& \int_{\rm sc} \frac{d\mathbf r_1\cdots d\mathbf r_{N_{\rm imp}^{\rm sc}}}{\Omega_{\rm sc}^{N_{\rm imp}^{\rm sc}}}
{\sum}_{I,J=1}^{N_{\rm imp}^{\rm sc}} e^{i(\mathbf Q'\cdot \mathbf r_J-\mathbf Q\cdot \mathbf r_I)} \nonumber \\
&& =
N_{\rm imp}^{\rm sc} \,\delta_{\mathbf G,\mathbf G'}
+ N_{\rm imp}^{\rm sc}(N_{\rm imp}^{\rm sc}-1)\,
\delta_{\mathbf G,-\mathbf q}\delta_{\mathbf G',-\mathbf q}.
\end{eqnarray}
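The ensemble average leading to Eq.~\eqref{eq:sum2} can be verified by a quick Monte Carlo experiment (a sketch with arbitrary parameters, not part of the formalism): for $\mathbf G=\mathbf G'$ and nonzero $\mathbf Q$, averaging $|\sum_I e^{i\mathbf Q\cdot\mathbf r_I}|^2$ over uniformly distributed impurity positions yields $N_{\rm imp}^{\rm sc}$, the first term on the right-hand side.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N_imp, n_samples = 10.0, 8, 40000
Q = 2 * np.pi * 2 / L   # a nonzero reciprocal vector of the supercell

# Draw uniformly distributed impurity positions and average |sum_I exp(iQ r_I)|^2,
# i.e. the I,J double sum of Eq. (sum) for Q' = Q (G = G').
r = rng.uniform(0.0, L, size=(n_samples, N_imp))
S = np.exp(1j * Q * r).sum(axis=1)
avg = np.mean(np.abs(S) ** 2)
print(avg)   # ~ N_imp = 8, as predicted by the first term of Eq. (sum2)
```

The diagonal ($I=J$) terms contribute exactly $N_{\rm imp}^{\rm sc}$, while the cross terms average to zero for random positions.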
Using Eqs.~\eqref{eq:sum2} and \eqref{eq:imprate2} inside Eq.~\eqref{eq:tauave}, we obtain:
\begin{eqnarray}\label{eq:tauave3}
&& \frac{1}{\tau^{\text{imp,ave}}_{n\mathbf k\rightarrow m\mathbf k+\mathbf q}} =
\frac{1}{N_{\text{uc}}} N_{\rm imp}^{\rm uc}
\frac{2\pi}{\hbar} \left[\frac{e^2}{4\pi\varepsilon_0} \frac{4\pi Z}{\Omega_{\text{uc}}}\right]^2
\nonumber \\ && \times \!\!\!\sum_{\bf G \ne -{\bf q}}
\frac{|B_{mn,\mathbf G}(\mathbf k,\mathbf q)|^2}{|(\mathbf q+\mathbf G)\cdot\!\bm\varepsilon^0\!\cdot(\mathbf q+\mathbf G)|^2}
\delta(\epsilon_{n\mathbf k}-\epsilon_{m\mathbf k+\mathbf q}),
\end{eqnarray}
where we use $N_{\rm imp}^{\rm uc} = N_{\rm imp}^{\rm sc}/N_{\text{uc}}$ to denote the number of impurities per unit cell; $N_{\rm imp}^{\rm uc}$ is a dimensionless quantity.
We note that, in practical calculations, the prefactor $1/N_{\text{uc}}$ in Eq.~\eqref{eq:tauave3}, which also appears in the partial carrier-phonon scattering rate in Eq.~\eqref{eq.tau.partial}, is included as a $\mathbf k$-point weight in Brillouin zone summations, so that the sum in Eq.~\eqref{eq:BTE1} becomes $N_{\text{uc}}^{-1}\sum_\mathbf q$ and is independent of the size of the BvK supercell.
The scattering rate given in Eq.~\eqref{eq:tauave3} is similar but not identical to alternative forms used in previous work. For example, it differs from classic approaches such as the Conwell-Weisskopf formula~\cite{Conwell:1950} and the Brooks-Herring formula~\cite{Brooks:1955} in that here the details of band structures, Kohn-Sham orbital overlaps, and anisotropic dielectric screening are fully taken into account. Furthermore, it differs from more recent \textit{ab initio} approaches such as Ref.~\cite{Restrepo:2009} in that the long-range nature of the Coulomb interaction is taken into account from the start, as opposed to being included as an \textit{ad hoc} correction. Our expression is similar to the formula provided in Ref.~\cite{Lu:2022}, except that here we take into account the periodicity of the impurity potential over the BvK supercell and the anisotropy of the dielectric tensor. The fact that we reached a similar expression as in Ref.~\cite{Lu:2022} starting from a rather different viewpoint involving the Kohn-Luttinger ensemble average lends support to both approaches.
\subsection{Free-carrier screening of the impurity potential}
The carrier-impurity scattering rate given by Eq.~\eqref{eq:tauave3} contains a singular $q^{-4}$ term that is not integrable (with $q=|\mathbf q|$), and leads to incorrect results when used in the $ai$\text{BTE}\ of Eq.~\eqref{eq:BTE1}. This problem was already identified by Conwell and Weisskopf \cite{Conwell:1950}, who introduced an infrared cutoff to suppress the Coulomb singularity.
The formal way to overcome this difficulty is to observe that ionized impurities are accompanied by free carriers, which screen the impurity potentials in a metallic-like fashion. In the Thomas-Fermi model, the free carriers contribute an additional dielectric function
\begin{equation}
\varepsilon_{\rm TF}(q) = 1+\frac{q_{\rm TF}^2}{q^2},
\end{equation}
where $q_{\rm TF}$ is the Thomas-Fermi wavevector. When used in combination with the impurity potential appearing in Eq.~\eqref{eq:tauave3}, this additional screening lifts the Coulomb singularity. In fact, by temporarily ignoring the $\mathbf G$ vectors and the anisotropy of the dielectric tensor, free-carrier screening modifies the denominator of Eq.~\eqref{eq:tauave3} as follows:
\begin{equation}
\frac{1}{(\varepsilon^0 q^2)^2} \quad\xrightarrow{\hspace{10pt}}\quad
\frac{1}{[\varepsilon_{\rm TF}(q)\varepsilon^0 q^2]^2} =
\frac{1}{[\varepsilon^0 (q^2+q_{\rm TF}^2)]^2},
\end{equation}
which tends to the finite value $1/(\varepsilon^0 q_{\rm TF}^2)^2$ at long wavelength.
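The effect of screening on the integrability of the scattering rate can be illustrated numerically (a sketch in arbitrary units, with a hypothetical $q_{\rm TF}$): after including the $q^2$ phase-space factor, the bare integrand $q^2\cdot q^{-4}$ diverges at small $q$, while the screened one stays finite and integrates to $\pi/(4 q_{\rm TF})$ over $(0,\infty)$.

```python
import numpy as np

q_tf = 1.0                      # hypothetical Thomas-Fermi wavevector (arbitrary units)
q = np.linspace(1e-6, 100.0, 400001)
dq = q[1] - q[0]

bare     = q**2 / q**4                   # q^2 phase-space factor times the bare 1/q^4
screened = q**2 / (q**2 + q_tf**2)**2    # same factor with Thomas-Fermi screening

# The bare integrand diverges as 1/q^2 for q -> 0; the screened one tends to zero.
print(bare[0], screened[0])

# The screened integral converges; over (0, infinity) it equals pi/(4 q_tf).
I = dq * (screened.sum() - 0.5 * (screened[0] + screened[-1]))
print(I, np.pi / (4 * q_tf))   # agree to about 1% (finite upper cutoff)
```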
To incorporate free-carrier screening in our calculations, while taking into account all details of band structures and effective masses, we employ the Lindhard dielectric function instead of the Thomas-Fermi model, following Ref.~\cite{Ashcroft:1976}. The same approach was employed in Ref.~\cite{Lu:2022}. The Lindhard dielectric function is given by:
\begin{equation}
\varepsilon_{\rm L}(q) = 1-\frac{e^2}{4\pi\varepsilon_0}\frac{4\pi}{q^2} \frac{2}{N_{\text{uc}} \Omega_{\text{uc}}}
\sum_{n\mathbf k}\frac{f^0_{n\mathbf k+\mathbf q}-f^0_{n\mathbf k}}{\epsilon_{n\mathbf k+\mathbf q}-\epsilon_{n\mathbf k}}.
\end{equation}
Since the density of free-carriers is typically low in doped semiconductors, we only need the long wavelength limit of this expression. In this limit, $(f^0_{n\mathbf k+\mathbf q}-f^0_{n\mathbf k})/(\epsilon_{n\mathbf k+\mathbf q}-\epsilon_{n\mathbf k}) = \partial f^0_{n\mathbf k}/\partial \epsilon_{n\mathbf k}$, therefore we can write:
\begin{equation}
\varepsilon_{\rm L}(q) = 1+\frac{q_{\rm TF}^2}{q^2},
\end{equation}
having introduced the effective Thomas-Fermi vector:
\begin{equation}\label{eq:qtf}
q_{\rm TF}^2 = \frac{e^2}{4\pi\varepsilon_0}\frac{2\cdot 4\pi}{N_{\text{uc}} \Omega_{\text{uc}}}
\sum_{n\mathbf k} \left|\frac{\partial f^0_{n\mathbf k}}{\partial\epsilon_{n\mathbf k}}\right|.
\end{equation}
For parabolic bands, Eq.~\eqref{eq:qtf} reduces to the Thomas-Fermi model in the degenerate (low-temperature) limit and to the Debye model in the nondegenerate (high-temperature) limit.
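As an illustration with hypothetical but representative parameters (not taken from the calculations in this work): in the nondegenerate limit the sum in Eq.~\eqref{eq:qtf} reduces to $n_{\rm c}/(k_{\rm B}T)$ per unit volume, giving $q_{\rm TF}^2 = e^2 n_{\rm c}/(\varepsilon_0 k_{\rm B}T)$; the background dielectric constant enters separately through $\bm\varepsilon^0$.

```python
import numpy as np

# Physical constants (SI)
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m
kB = 1.380649e-23        # J/K

# Hypothetical carrier density and temperature (illustration only)
n_c = 1e23               # m^-3, i.e. 1e17 cm^-3
T = 300.0                # K

# Nondegenerate (Debye) limit of Eq. (qtf):
# (2/N_uc Omega_uc) sum |df/deps| -> n_c/(kB T), so q_TF^2 = e^2 n_c / (eps0 kB T).
q_tf = np.sqrt(e**2 * n_c / (eps0 * kB * T))
print(q_tf)          # ~ 2.6e8 m^-1
print(1e9 / q_tf)    # screening length ~ 3.8 nm
```

At this density the screening length is a few nanometers, i.e., many lattice constants, consistent with treating free-carrier screening as a long-wavelength correction.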
The free carriers thus provide a screening mechanism additional to the dielectric screening of the insulating host. This effect is included in our calculations by replacing $\bm\varepsilon^0$ in Eq.~\eqref{eq:tauave3} by the total dielectric function:
\begin{equation}
\bm\varepsilon^0 \quad\xrightarrow{\hspace{10pt}}\quad \bm\varepsilon^0 + {\bf 1}\frac{q_{\rm TF}^2}{q^2},
\end{equation}
where ${\bf 1}$ denotes the $3\times 3$ identity matrix. We note that this improved description of the screening includes temperature effects via the Fermi-Dirac occupations entering the definition of the effective Thomas-Fermi wavevector, Eq.~\eqref{eq:qtf}.
\subsection{Matthiessen's Rule}
Matthiessen's rule~\cite{Reif-Acherman:2015} is widely employed to interpret transport measurements. In the context of carrier transport in semiconductors, this rule can be stated as follows: the reciprocal of the total mobility is given by the sum of the reciprocals of the mobilities limited by each individual scattering channel. In the case of carrier-phonon and carrier-impurity scattering, we would have:
\begin{equation}\label{eq:matt}
\frac{1}{\mu} = \frac{1}{\mu_{\rm ph}} + \frac{1}{\mu_{\rm imp}}.
\end{equation}
In Sec.~\ref{sec:4} we proceed to quantify the reliability of this approximation by comparing mobility data calculated using the complete $ai$\text{BTE}\ including both phonons and impurities with the prediction of Eq.~\eqref{eq:matt} obtained by calculating the mobility with these two scattering channels taken individually. We will show that this rule does not carry predictive power for the examples considered in this work.
From a formal standpoint, the rule expressed by Eq.~\eqref{eq:matt} is obviously related to the choice of expressing the total scattering rates as the sum of the individual rates, see Eq.~\eqref{eq:totrate}. That choice was motivated by the observation that, to first order in perturbation theory, different scattering channels do not mix. However, it is easy to see that, even when Eq.~\eqref{eq:totrate} is a good approximation, the additivity of the rates does not imply the Matthiessen rule as expressed by Eq.~\eqref{eq:matt}. To appreciate this point, we observe that the $ai$\text{BTE}\ in Eq.~\eqref{eq:BTE1} can be recast as a linear system of the type:
\begin{equation}
A \times \{\partial_{E_\beta} f_{n\mathbf k}\} = b,
\end{equation}
where the matrix $A$ contains the partial scattering rates $\tau^{-1}_{n\mathbf k\rightarrow n'\mathbf k'}$, the vector $b$ contains the drift term on the left hand side of Eq.~\eqref{eq:BTE1}, and $\{\partial_{E_\beta} f_{n\mathbf k}\}$ denotes the vector of solutions. If we break down the matrix $A$ into its contributions from carrier-phonon and carrier-impurity scattering, $A_{\rm ph}$ and $A_{\rm imp}$ respectively, we see immediately that
\begin{equation}
\{\partial_{E_\beta} f_{n\mathbf k}\} = (A_{\rm ph}+A_{\rm imp})^{-1} b \ne A_{\rm ph}^{-1}b+A_{\rm imp}^{-1} b,
\end{equation}
therefore the additivity of the scattering rates does not imply the Matthiessen rule. This point can be made even more explicit by considering the self-energy relaxation time approximation to the $ai$\text{BTE}. The approximation consists of neglecting the first term on the r.h.s.\ of Eq.~\eqref{eq:BTE1}, and yields the following expression for the mobility:
\begin{eqnarray}\label{eq.serta}
\mu_{\alpha\beta} &=& -\frac{e}{\Omega_{\text{uc}} n_{\rm c}}\frac{2}{N_{\text{uc}}}
\sum_{n\mathbf k} \frac{\partial f_{n\mathbf k}^0}{\partial \epsilon_{n\mathbf k}} v_{n\mathbf k}^\alpha v_{n\mathbf k}^\beta
\nonumber \\ &\times& \frac{1}{\displaystyle\frac{1}{\tau_{n\mathbf k}^{\rm ph}}+ \frac{1}{\tau_{n\mathbf k}^{\rm imp}}}.
\end{eqnarray}
For this expression to be amenable to Matthiessen's rule, the scattering rates would need to be independent of the electronic state, say $\tau_{n\mathbf k}^{\rm ph} = \tau^{\rm ph}$ and $\tau_{n\mathbf k}^{\rm imp} = \tau^{\rm imp}$. This is typically not the case in most semiconductors. Another special case where Matthiessen's formula is meaningful occurs when one scattering mechanism dominates over the others. For example, in Eq.~\eqref{eq.serta}, when $\tau_{n\mathbf k}^{\rm ph} \gg \tau_{n\mathbf k}^{\rm imp}$, the expression reduces to the phonon-limited mobility. In this sense, Matthiessen's rule constitutes a simple interpolation formula between the limiting cases of phonon-limited and impurity-limited mobilities. We will analyze these aspects quantitatively in Sec.~\ref{sec:4}.
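Both points above can be made concrete with a small numerical sketch (hypothetical four-state and two-state models in arbitrary units, for illustration only): the solution of the summed linear system differs from the sum of the channel-wise solutions, and the SERTA-style mobility obeys Matthiessen's rule only when the rates are state independent.

```python
import numpy as np

rng = np.random.default_rng(1)

# (1) The linear-system argument: two positive-definite "scattering" matrices
# standing in for A_ph and A_imp (hypothetical numbers).
M = rng.standard_normal((4, 4)); A_ph  = M @ M.T + 4.0 * np.eye(4)
M = rng.standard_normal((4, 4)); A_imp = M @ M.T + 4.0 * np.eye(4)
b = rng.standard_normal(4)
x_full = np.linalg.solve(A_ph + A_imp, b)                      # (A_ph + A_imp)^{-1} b
x_sum  = np.linalg.solve(A_ph, b) + np.linalg.solve(A_imp, b)  # A_ph^{-1} b + A_imp^{-1} b
print(np.linalg.norm(x_full - x_sum) > 1e-6)   # True: the two solutions differ

# (2) A two-state caricature of Eq. (serta): mobility proportional to a
# weighted sum of relaxation times, with state-dependent rates.
w = np.array([0.5, 0.5])           # state weights
tau_ph  = np.array([1.0, 4.0])     # phonon lifetimes (arbitrary units)
tau_imp = np.array([3.0, 0.5])     # impurity lifetimes
mu = lambda tau: float(np.sum(w * tau))
mu_tot = mu(1.0 / (1.0 / tau_ph + 1.0 / tau_imp))              # both channels together
mu_matthiessen = 1.0 / (1.0 / mu(tau_ph) + 1.0 / mu(tau_imp))  # Eq. (matt)
print(mu_tot, mu_matthiessen)      # differ: ~0.60 vs ~1.03

# Matthiessen's rule is recovered when the rates are state independent:
tau_c = np.array([2.0, 2.0])
print(np.isclose(mu(1.0 / (2.0 / tau_c)), 1.0 / (2.0 / mu(tau_c))))  # True
```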
\section{\label{sec:3} Computational Methods}\label{sec:methods}
All calculations are performed using the Quantum ESPRESSO materials simulation suite~\cite{Giannozzi:2017}, the EPW code~\cite{Ponce:2016}, and the Wannier90 code~\cite{Pizzi:2020}. We employ the PBE exchange and correlation functional~\cite{Perdew:1996} and optimized norm-conserving Vanderbilt (ONCV) pseudopotentials from the PseudoDojo repository~\cite{Hamann:2013,Vansetten:2018}. For consistency with previous work, we use the experimental lattice constants of Si, SiC, and GaP at room temperature, and the plane-wave kinetic energy cutoffs and quadrupole tensors reported in Ref.~\cite{Ponce:2021}. We include spin-orbit coupling for the valence bands only, to capture the splitting of the valence band top. Key calculation parameters are summarized in Tab.~\ref{tab:dft_setup}.
\begin{table}[t!]
\centering
\caption{Calculation parameters used in this work: Experimental lattice constant, plane wave kinetic energy cutoff, and non-vanishing elements of the quadrupole tensor are chosen to be consistent with Ref.~\cite{Ponce:2021}.}
\begin{tabular}{l r r r}
& Si & 3C-SiC & GaP \\
\hline
Lattice constant (\AA) & 5.43 & 4.36 & 5.45 \\
Plane wave kinetic energy cutoff (eV) & 544 & 1088 & 1088 \\
$Q_{\kappa_1}$ & 11.83 & 7.41 & 13.72 \\
$Q_{\kappa_2}$ & -11.83 & -2.63 & -6.92 \\
Coarse $\mathbf k$ and $\mathbf q$ grids & 12$^3$ & 12$^3$ & 12$^3$ \\
Fine $\mathbf k$ and $\mathbf q$ electron grid & 100$^3$ & 180$^3$ & 100$^3$ \\
Fine $\mathbf k$ and $\mathbf q$ hole grid & 100$^3$ & 100$^3$ & 100$^3$ \\
\hline
\end{tabular}
\label{tab:dft_setup}
\end{table}
We calculate effective mass tensors by finite differences, using a wavevector increment of $0.01\times 2\pi/a$, where $a$ is the lattice constant reported in Tab.~\ref{tab:dft_setup}. The dynamical matrix, the variations of the self-consistent potential, and the vibrational eigenfrequencies and eigenmodes are calculated using a square convergence threshold of $10^{-16}$~Ry$^2$. This threshold refers to the change of the potential variation between two successive iterations, averaged over the unit cell. Electron energies, phonon frequencies, and electron-phonon matrix elements are initially computed on a coarse wavevector mesh using the EPW code. The electron Hamiltonian, the dynamical matrix, and the electron-phonon matrix elements are then interpolated onto fine Brillouin zone grids using Wannier-Fourier interpolation~\cite{Giustino:2007,Mostofi:2014}. Long-range dipole and quadrupole corrections are employed for improved interpolation of the electron-phonon matrix elements~\cite{Verdi:2015,Sjakste:2015,Park:2020,Brunin:2020,Ponce:2021}.
To compute carrier mobilities, only states within a narrow energy window of the band extrema are necessary. We find that, for the range of temperatures considered in this work (up to 500~K), a window of 400~meV is sufficient to obtain converged electron mobilities, and a window of 300~meV is sufficient for hole mobilities. At 300~K, converged results can be obtained by using a 200~meV window for both electrons and holes.
To evaluate the overlap matrices $B_{mn,\mathbf G}(\mathbf k,\mathbf q)$ required in Eq.~\eqref{eq:tauave3} in the fine Brillouin zone grid, we follow the procedure of Ref.~\cite{Verdi:2015} and approximate them as:
\begin{equation}
B_{mn,\mathbf G}(\mathbf k,\mathbf q) \approx \left[U(\mathbf k+\mathbf q) U^\dagger(\mathbf k)\right]_{mn},
\end{equation}
where the unitary matrix $U_{mn}(\mathbf k)$ is the diagonalizer of the interpolated Hamiltonian into the wavevector $\mathbf k$ of the fine grid. This approximation is motivated by the fact that the carrier-impurity matrix element in Eq.~\eqref{eq:tauave3} is strongly peaked at $\mathbf q+\mathbf G=0$.
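As a toy illustration of this construction (a sketch with a hypothetical four-band Hamiltonian, not the Wannier Hamiltonian used in our calculations), one can check that the approximate overlap matrix reduces to the identity at $\mathbf q=0$, where the approximation is exact, and remains unitary for any $\mathbf q$:

```python
import numpy as np

rng = np.random.default_rng(2)
herm = lambda M: (M + M.conj().T) / 2
H0 = herm(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
H1 = herm(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))

def U_of(k):
    """Diagonalizer U(k) of a toy 1D four-band Hamiltonian H(k) = H0 + H1 cos k."""
    _, V = np.linalg.eigh(H0 + H1 * np.cos(k))
    return V.conj().T   # rows are eigenvectors, so U H U^dagger is diagonal

def B_approx(k, q):
    """Overlap approximation B ~ U(k+q) U^dagger(k)."""
    return U_of(k + q) @ U_of(k).conj().T

B0 = B_approx(0.3, 0.0)   # q = 0: exact, reduces to the identity
Bq = B_approx(0.3, 0.05)  # generic q: still unitary (probability conserving)
print(np.allclose(B0, np.eye(4)))                # True
print(np.allclose(Bq @ Bq.conj().T, np.eye(4)))  # True
```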
The Dirac delta functions appearing in Eqs.~\eqref{eq.tau.partial} and \eqref{eq:tauave3} are computed using Gaussian functions with a small broadening parameter. The results are sensitive to the choice of this parameter, therefore we accelerate the convergence by employing adaptive smearing. The procedure for the adaptive smearing of the carrier-phonon scattering rate, which involves a so-called type-III integral, is discussed in Refs.~\cite{Yates:2007,Li:2014,Ponce:2021}. The calculation of the carrier-impurity scattering rates involves instead a type-II integral of the form:
\begin{equation}
I_{n\mathbf k}^{\text{II}} = \sum_{m} \int \frac{d\mathbf q}{\Omega_{\rm BZ}} f_{mn}(\mathbf k,\mathbf q) \,\delta(\epsilon_{m\mathbf k+\mathbf q}-\epsilon_{n\mathbf k}),
\end{equation}
where $\Omega_{\rm BZ}$ is the volume of the Brillouin zone.
In this case, adaptive broadening can be achieved by using a state-dependent width $\sigma_{m\mathbf k+\mathbf q}$. We follow the procedure by Ref.~\cite{Yates:2007}, which gives:
\begin{equation}\label{eq:adsmr}
\sigma_{m\mathbf k+\mathbf q} = \frac{\alpha}{3}\sum_{i=1}^{3} {\bf v}_{m\mathbf k+\mathbf q} \cdot \frac{{\bf b}_i}{N_i},
\end{equation}
where ${\bf v}_{m \mathbf k+\mathbf q}$ is the band velocity, ${\bf b}_i$ is a primitive vector of the reciprocal lattice, and $N_i$ denotes the number of $\mathbf k$-points along the direction of ${\bf b}_i$.
The coefficient $\alpha$ is a tunable parameter.
Previous work has used $\alpha=0.29$ for electron-phonon scattering rates~\cite{Li:2014,Ponce:2021}.
We have performed a detailed convergence test by comparing fixed-smearing and variable-smearing calculations, and found that values $\alpha = 0.1$--$0.3$ provide similar results. For simplicity, in this work we use $\alpha=0.29$ as in previous work.
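A minimal sketch of this prescription follows (with hypothetical velocity and grid parameters; we take absolute values of the directional terms, an implementation choice of this sketch, so that the width stays positive for any velocity direction):

```python
import numpy as np

def adaptive_sigma(v, b_vectors, N, alpha=0.29):
    """State-dependent broadening in the spirit of Eq. (adsmr):
    sigma = (alpha/3) * sum_i |v . b_i / N_i| for a state with band velocity v."""
    return alpha / 3.0 * sum(abs(np.dot(v, b / n)) for b, n in zip(b_vectors, N))

# Hypothetical cubic reciprocal lattice and band velocity (arbitrary units)
b_vectors = 2 * np.pi * np.eye(3)
v = np.array([0.3, -0.1, 0.2])
sigma = adaptive_sigma(v, b_vectors, N=(100, 100, 100))
print(sigma)   # broadening shrinks as the fine grid is refined (N_i increased)
```

The width scales with the energy spread of a state across one grid spacing, so it automatically sharpens as the fine grid is refined.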
In principle we could perform calculations of carrier mobilities by setting the impurity concentration and the carrier concentration separately. This would be required, for example, for the investigation of compensation doping of semiconductors. To keep our results as general as possible, in this work we choose to focus on the simpler scenario where each impurity creates one free carrier, therefore we set the carrier density to be equal to the impurity concentration. We do not consider carrier freeze-out at low temperature, since this would require the knowledge of defect energy levels. In our calculations, the role of the carrier concentration is mainly to modulate the effective Thomas-Fermi screening wavevector in Eq.~\eqref{eq:qtf}.
\section{\label{sec:4} Results and Discussion}\label{sec:results}
\subsection{Electronic structure}\label{sec.elecst}
\begin{table}
\centering
\caption{
Calculated band effective masses, band gaps, high-frequency and static dielectric constants of Si, 3C-SiC, and GaP. All calculations performed within DFT/PBE. Experimental data are from (a) \cite{Dresselhaus:1955} and \cite{Dexter:1954}, (b) \cite{Kono:1993}, (c) \cite{Bradley:1973}, (d) \cite{Kaplan:1985}, (e) \cite{Dean:1966}, (f) \cite{Collings:1980}, (g) \cite{Bimberg:1981}, (h) \cite{Lorenz:1968}, (i) \cite{Patrick:1970}, (j) \cite{Madelung:2022}, (k) \cite{Vurgaftman:2001}, (l) \cite{Kimoto:2014}. All masses are given in units of the electron mass. The band gaps are in eV. The lines tagged ``Dresselhaus'' refer to the effective masses obtained from the Dresselhaus model fitted to experimental cyclotron data, from Ref.~\cite{Dresselhaus:1955}. \vspace{5pt}
\begin{threeparttable}
\begin{tabular}{lllll}
\toprule\\[-8pt]
This work && Si & SiC & GaP \\
\hline\\[-8pt]
& $\Gamma$-X & 0.260 & 0.592 & 0.374 \\
$m_{\text{hh}}^*$ & $\Gamma$-K & 0.550 & 1.412 & 0.837 \\
& $\Gamma$-L & 0.655 & 1.646 & 1.091 \\[3pt]
& $\Gamma$-X & 0.189 & 0.423 & 0.143 \\
$m_{\text{lh}}^*$ & $\Gamma$-K & 0.143 & 0.328 & 0.125 \\
& $\Gamma$-L & 0.134 & 0.309 & 0.117 \\[3pt]
& $\Gamma$-X & 0.225 & 0.490 & 0.213 \\
$m_{\text{so}}^*$ & $\Gamma$-K & 0.223 & 0.472 & 0.217 \\
& $\Gamma$-L & 0.214 & 0.436 & 0.206 \\[3pt]
$m_{\text{e},\|}^{*}$ && 0.959 & 0.672 & 1.069 \\
$m_{\text{e},\perp}^{*}$ && 0.196 & 0.230 & 0.232 \\[3pt]
$E_{\text{g}}$ && 0.554 & 1.359 & 1.566 \\
$\varepsilon^\infty$ && 13.00 & \phantom{0}6.93 & 10.53 \\
$\varepsilon^0$ && 13.00 & 10.23 & 12.57 \\[2pt]
\hline\\[-8pt]
Experiment && Si & SiC & GaP \\
\hline\\[-8pt]
& $\mathbf{B}$ along [001] & 0.46$^a$ & & \\
$m^{*}_{hh}$ & $\mathbf{B}$ along [110] & 0.53$^a$ & & \\
&$\mathbf{B}$ along [111] & 0.56$^a$ & & 0.54$^c$ \\[3pt]
& Dresselhaus $\Gamma$-X & 0.40 & & \\
& Dresselhaus $\Gamma$-K & 0.56 & & \\
& Dresselhaus $\Gamma$-L & 0.62 & & \\[3pt]
& $\mathbf{B}$ along [001] & 0.171$^a$ & 0.45$^b$ & \\
$m^*_{lh}$ & $\mathbf{B}$ along [110] & 0.163$^a$ & & \\
& $\mathbf{B}$ along [111] & 0.160$^a$ & & 0.16$^c$ \\[3pt]
& Dresselhaus $\Gamma$-X & 0.18 & & \\
& Dresselhaus $\Gamma$-K & 0.16 & & \\
& Dresselhaus $\Gamma$-L & 0.15 & & \\[3pt]
$m_{{\rm e},\|}^{*}$ && 0.97$^a$ & 0.68$^d$ & 1.15$^c$, 2.0$^k$ \\
$m_{{\rm e},\perp}^{*}$ && 0.19$^a$ & 0.25$^d$ & 0.21$^c$, 0.25$^k$ \\[3pt]
$E_{\text{g}}$ && 1.13$^f$ & 2.42$^g$ & 2.26$^h$ \\
$\varepsilon^\infty$ && 11.7$^i$ & 6.52$^j$ & 9.11$^j$ \\
$\varepsilon^0$ && 11.7$^i$ & 9.72$^j$ & 11.1$^j$ \\[2pt]
\toprule
\end{tabular}
\end{threeparttable}
\label{tab:elprop}
\end{table}
Given the importance of effective masses in mobility calculations, in this section we review briefly the band structures and effective masses of Si, SiC, and GaP. Table~\ref{tab:elprop} shows our calculated directional effective masses. Hole masses are given for the heavy-hole (hh) band, light hole (lh) band, and the spin-orbit split-off (so) band. The longitudinal ($\parallel$) and transverse ($\perp$) electron masses correspond to the principal axes of the ellipsoidal conduction band extrema.
In Tab.~\ref{tab:elprop} we see that the light hole and split-off hole masses are fairly isotropic for all compounds considered in this work. For the heavy hole masses, the $\Gamma$-X direction ([100] crystallographic direction) exhibits the lightest masses, whereas considerably heavier masses are found along the $\Gamma$-K ([110]) and $\Gamma$-L ([111]) directions. Similarly, in all compounds considered here the longitudinal electron masses are considerably heavier than the corresponding transverse masses, as expected. SiC exhibits the heaviest hole masses among Si, SiC, and GaP, while GaP exhibits the heaviest electron masses.
Our calculated effective masses are in good agreement with previous calculations at both the DFT and GW levels~\cite{Ponce:2018}. When comparing to experimental data, we see from Tab.~\ref{tab:elprop} that our electron effective masses are within 10\%
of the corresponding experimental values, which is remarkable considering that we are using DFT/PBE.
In the case of the hole masses, our calculations are also in good agreement with experiments. Here we emphasize that the experimental values usually quoted are not the effective masses, but the cyclotron masses, which depend on the direction of the magnetic field and are reported in Tab.~\ref{tab:elprop}. These cyclotron masses correspond to averages of the directional masses and cannot be compared directly to DFT calculations. To extract the correct directional effective masses, in the case of silicon we used the Dresselhaus $\mathbf k\cdot{\bf p}$ model which was fitted to experimental cyclotron data. In this model the heavy hole and light hole masses are parameterized as:
\begin{align}
\epsilon_{\text{hh}}(\mathbf k) =& Ak^2 + [B^2k^4 + C^2(k_x^2k_y^2 + k_y^2k_z^2 + k_z^2k_x^2)]^{1/2},\\
\epsilon_{\text{lh}}(\mathbf k) =& Ak^2 - [B^2k^4 + C^2(k_x^2k_y^2 + k_y^2k_z^2 + k_z^2k_x^2)]^{1/2},
\end{align}
where $k = |\mathbf k|$ and the coefficients $A$, $B$, and $C$ are $-4.1\,\hbar^2/2m_{\rm e}$, $-1.6\,\hbar^2/2m_{\rm e}$,
and $3.3\,\hbar^2/2m_{\rm e}$, respectively~\cite{Dresselhaus:1955}. From this parameterization we obtained the effective masses reported in Tab.~\ref{tab:elprop} under the keyword ``Dresselhaus''. From this table we can see that, in the case of silicon, the light hole and heavy hole masses are close to our calculated results, with the exception of the $\Gamma$-X heavy-hole effective mass, which is 65\% of the experimental value~\cite{Ponce:2018}.
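The directional masses quoted under ``Dresselhaus'' in Tab.~\ref{tab:elprop} follow directly from this parameterization: along a unit vector $\hat{\mathbf k}$, the dispersions reduce to $\epsilon = [A \pm (B^2 + C^2 g)^{1/2}]\,k^2$ with $g = \hat k_x^2\hat k_y^2 + \hat k_y^2\hat k_z^2 + \hat k_z^2\hat k_x^2$, so that $m^*/m_{\rm e} = 1/|A \pm (B^2+C^2 g)^{1/2}|$ when $A$, $B$, $C$ are expressed in units of $\hbar^2/2m_{\rm e}$. A short script reproduces the tabulated values:

```python
import numpy as np

# Dresselhaus parameters for silicon, in units of hbar^2/(2 m_e) [Dresselhaus:1955]
A, B, C = -4.1, -1.6, 3.3

def hole_masses(khat):
    """Heavy- and light-hole masses (units of m_e) along the unit vector khat."""
    khat = np.asarray(khat, dtype=float)
    khat /= np.linalg.norm(khat)
    g = (khat[0] * khat[1])**2 + (khat[1] * khat[2])**2 + (khat[2] * khat[0])**2
    root = np.sqrt(B**2 + C**2 * g)
    # eps = (A +/- root) k^2 in units of hbar^2/(2 m_e), hence m* = 1/|A +/- root|
    return 1.0 / abs(A + root), 1.0 / abs(A - root)

for label, khat in [("G-X", (1, 0, 0)), ("G-K", (1, 1, 0)), ("G-L", (1, 1, 1))]:
    mhh, mlh = hole_masses(khat)
    print(f"{label}: m_hh = {mhh:.2f}, m_lh = {mlh:.2f}")
# -> G-X: 0.40 / 0.18, G-K: 0.56 / 0.16, G-L: 0.62 / 0.15
```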
Our calculated dielectric constants overestimate the experimental values by 15\% at most, as expected from the underestimation of the band gaps~\cite{Patrick:1970,Madelung:2022}.
In Sec.~\ref{subsec:corr} we discuss how one can improve the calculated mobilities by introducing \textit{a posteriori} corrections to the theoretical effective masses and dielectric constants.
\subsection{Carrier mobilities}\label{subsec:expcomp}
\subsubsection{Silicon}
Figure~\ref{fig:Si_expcomp} shows a comparison between our calculated mobilities of silicon and available experimental data, as a function of temperature and impurity concentration.
The mobilities without carrier-impurity scattering [black lines in panels (a) and (b)] decrease rapidly with temperature, as expected. We find temperature exponents (the $\beta$ in $\mu \sim T^\beta$) of $-2.1$ for electrons and $-2.4$ for holes, in agreement with previous work~\cite{Ponce:2018,Ponce:2021}.
As we include carrier-impurity scattering, the room-temperature electron mobility of silicon reduces from 1381~cm$^{2}$/Vs~to 1153~cm$^{2}$/Vs\ at 1.75$\times$10$^{16}$~cm$^{-3}$\ [blue line in panel (a)] and to 812~cm$^{2}$/Vs~at 1.3$\times$10$^{17}$~cm$^{-3}$\ [red line in panel (a)]. Similarly, the room-temperature hole mobility of silicon decreases from 600~cm$^{2}$/Vs\ in the absence of impurities to 517~cm$^{2}$/Vs\ for an impurity concentration of 2.4$\times$10$^{16}$~cm$^{-3}$ [blue line in panel (b)], and to 359~cm$^{2}$/Vs\ at the impurity concentration of 2.0$\times$10$^{17}$~cm$^{-3}$~[red line in panel (b)].
Our calculations for the temperature-dependent electron and hole mobilities show that a single power law becomes inadequate in the presence of impurity scattering. This is also seen in the experimental data from Refs.~\cite{Morin:1954,Canali:1975,Jacobini:1977,Konstantinos:1993}, which are shown as open circles in Fig.~\ref{fig:Si_expcomp}. We note that our calculations are in good agreement with the experiments over a broad temperature range. The agreement worsens slightly at low temperature, where carrier-impurity scattering dominates. This effect likely relates to the fact that in our calculations all donors and acceptors are assumed to be fully ionized at all temperatures; as a result of this approximation, we are neglecting carrier freeze-out and hence we are likely overestimating the impurity concentration at low temperature. In Appendix~\ref{app.partialion} we show that, by taking into account the effects of partial impurity ionization, the agreement with experiments improves at low temperature and high impurity concentration.
Panel (c) of Fig.~\ref{fig:Si_expcomp} shows the room temperature electron mobility of silicon, as a function of impurity concentration. The electron mobility is relatively insensitive to the impurity concentration up to 10$^{16}$~cm$^{-3}$. A steep decrease in the electron mobility is seen as we approach a doping density of 10$^{17}$~cm$^{-3}$. Up to this concentration, our calculations (blue line) are in excellent agreement with experimental data (open black circles). Above 10$^{18}$~cm$^{-3}$, while the agreement with experiment is still good, we tend to slightly overestimate the measured electron mobility. This is likely due to two effects: (i) our formalism does not take into account multiple scattering events that become important at high impurity concentration, and (ii) our calculations do not include scattering by free-carrier plasmons, which dominate the mobility at high carrier density, as shown in Refs.~\cite{Caruso:2016,Kosina:1997}. A similar overestimation was observed in Ref.~\cite{Lu:2022}.
Panel (d) of Fig.~\ref{fig:Si_expcomp} shows the room temperature hole mobility of silicon as a function of impurity concentration. As for the electrons, we find generally good agreement between calculations (blue line) and experiments (open black circles) throughout the doping range. We emphasize that the vertical scales in panels (c) and (d) are different, and that the theory/experiment deviation at high impurity concentration is similar in both panels in absolute terms. At low impurity concentration, our calculations slightly overestimate the experimental data. This effect can be ascribed to the fact that our light hole effective masses are smaller than in experiments.
\subsubsection{Silicon carbide}
In Fig.~\ref{fig:3CSiC_expcomp} we show our calculated mobilities of 3C-SiC as a function of temperature and impurity concentration, and we compare to experimental data from Refs.~\cite{Shinohara:1988,Roschke:2001,Hirano:1995,Nelson:1966,Wan:2002,Lee:2003,Schoner:2006,Nagasawa:2008}. In the case of 3C-SiC, the comparison with experiments is complicated by the high concentration of line defects that nucleate at lattice-mismatched growth substrates such as Si or 6H-SiC~\cite{Ivanov:1992,Schoner:2006}, which makes it difficult to obtain data for defect-free samples. Furthermore, most experimental data are for co-doped samples, for which the impurity and carrier concentrations are more difficult to estimate.
In the absence of impurity scattering [black line in panel (a)], the low electron effective mass of SiC leads to very high theoretical mobilities, up to 33000~cm$^{2}$/Vs\ at 100~K and up to 2000~cm$^{2}$/Vs\ at room temperature. These high mobilities are in agreement with previous theoretical results~\cite{Ponce:2021}. In this case, we calculate an electron temperature exponent $\beta=-2.9$.
In panel (a) of Fig.~\ref{fig:3CSiC_expcomp} we compare our calculations (blue line) with the data reported in Ref.~\cite{Shinohara:1988} (red open circles). In that work, the authors synthesized 3C-SiC with an $n$-type impurity density of 5.0$\times$10$^{16}$~cm$^{-3}$, and obtained electron mobilities at 100~K and 300~K of 2040~cm$^{2}$/Vs\ and 584~cm$^{2}$/Vs, respectively. In our calculations, when we consider the same impurity concentration, we find 2773~cm$^{2}$/Vs\ and 1369~cm$^{2}$/Vs\ at 100~K and 300~K, respectively; therefore we overestimate the experimental data by factors of approximately 1.4 and 2.3 at these two temperatures.
In panel (b) of Fig.~\ref{fig:3CSiC_expcomp} we show our calculated hole mobility of 3C-SiC as a function of temperature. In the absence of impurities (black line), the mobility decreases with a temperature exponent $\beta = -2.1$. In this case we could not find experimental data for uncompensated samples to compare with.
Upon including impurity scattering with an impurity concentration of 10$^{18}$~cm$^{-3}$, we find a significant reduction of the mobility at low temperature (blue line), from 1373~cm$^{2}$/Vs\ to 148~cm$^{2}$/Vs.
At 300 K, the mobility is reduced from 165~cm$^{2}$/Vs\ without impurities to 81~cm$^{2}$/Vs, in good agreement with the measured value of 50~cm$^{2}$/Vs\ reported in Ref.~\cite{Nagasawa:2008}.
Panels (c) and (d) of Fig.~\ref{fig:3CSiC_expcomp} show the room temperature electron and hole mobilities as a function of impurity concentration, respectively.
The calculated electron mobility (blue line) at low ionized-donor concentration (10$^{14}$~cm$^{-3}$) is 2048~cm$^{2}$/Vs, which significantly overestimates the value of 1000~cm$^{2}$/Vs\ measured in Ref.~\cite{Hirano:1995} (open black symbols). However, our calculations move closer to the experimental data at concentrations above 10$^{18}$~cm$^{-3}$~\cite{Kern:1997,Roschke:2001,Nelson:1966}.
The hole mobility of 3C-SiC is significantly lower than the electron mobility, as expected from the much heavier hole masses shown in Tab.~\ref{tab:elprop}. Our calculations (blue line) at low doping yield a mobility of 164~cm$^{2}$/Vs, to be compared to 220~cm$^{2}$/Vs\ measured in $p$-channel 3C-SiC devices~\cite{Lee:2003} (open black symbols). We note that the vertical scales in panels (c) and (d) differ, and that our calculated hole mobilities are in better agreement with experiment in relative terms. In particular, our data for the hole mobility fall right in the middle of the experimental trend shown in panel (d).
\subsubsection{Gallium phosphide}
Figure~\ref{fig:GaP_expcomp} shows our mobility calculations for GaP and a comparison with experimental data. Panel (a) shows the calculated electron mobility as a function of temperature. In the absence of impurities, the calculated electron mobility (black line) decreases with a temperature exponent $\beta=-2.2$; the calculated mobilities at 100~K and 300~K are 4293~cm$^{2}$/Vs\ and 328~cm$^{2}$/Vs, respectively.
Upon including the effect of impurity scattering (blue line), the mobility decreases significantly, reaching 157~cm$^{2}$/Vs\ at room temperature for an impurity concentration of 2.5$\times$10$^{18}$~cm$^{-3}$. This value is in good agreement with the mobility of 100~cm$^{2}$/Vs\ measured in Ref.~\cite{Kao:1983} (blue open circles).
We note that the electron mobility of GaP is significantly lower than in silicon, despite the electron effective masses being comparable. In Sec.~\ref{subsec:invtau} we show that this effect arises from the additional polar phonon scattering that electrons experience in GaP, which is absent in silicon.
Panel (b) of Fig.~\ref{fig:GaP_expcomp} shows the calculated phonon-limited hole mobility (black line), the mobility calculated by including impurity scattering (blue line), and experimental data (open red circles). The phonon-limited hole mobility decreases with temperature with an exponent $\beta=-2.5$.
The calculated mobilities in the absence of impurities are 5096~cm$^{2}$/Vs\ and 252~cm$^{2}$/Vs\ at 100~K and 300~K, respectively. Upon including impurity scattering with a concentration of 2$\times$10$^{18}$~cm$^{-3}$, the mobility at room temperature decreases to 124~cm$^{2}$/Vs, in good agreement with the measured value of 90~cm$^{2}$/Vs\ by Ref.~\cite{Kao:1983}.
Panel (c) of Fig.~\ref{fig:GaP_expcomp} shows the room temperature electron mobility of GaP as a function of impurity concentration. In the absence of impurity scattering, we calculate a mobility of 328~cm$^{2}$/Vs (blue line), which compares well with the maximum value of 258~cm$^{2}$/Vs\ measured in ultra-pure samples in Ref.~\cite{Miyauchi:1967} (open black symbols). In the intermediate doping regime, our calculated electron mobilities overestimate the experimental data by a factor of two~\cite{Kao:1983,Craford:1971,Hara:1968,Miyauchi:1967}, but the agreement improves at high doping levels.
Figure \ref{fig:GaP_expcomp}(d) shows the room temperature hole mobility of GaP as a function of impurity concentration. The calculated hole mobility is 269~cm$^{2}$/Vs\ at low impurity concentration, and decreases to 94~cm$^{2}$/Vs\ at a concentration of 10$^{19}$~cm$^{-3}$\ (blue line). Our calculations are within a factor of two from the highest measured hole mobilities across the same doping range~\cite{Kao:1983,Alfrey:1960,Cohen:1968} (open black symbols). We note that electron and hole mobilities in GaP are very similar across a wide range of temperatures and impurity concentrations (both in experiments and in our calculations), therefore GaP is an ambipolar semiconductor with well-balanced electron and hole transport.
\subsection{Carrier scattering rates}\label{subsec:invtau}
In this section we analyze and compare the scattering rates resulting from carrier-phonon and carrier-impurity processes in Si, SiC, and GaP. The Brooks-Herring model for carrier-impurity scattering~\cite{Brooks:1955}, which is based on the parabolic band approximation, predicts a scattering rate that scales as $\epsilon^{-3/2}$, where $\epsilon$ is the electron eigenvalue referred to the band extremum. This trend is a result of two competing effects: as the energy of the initial state
increases above the band bottom, the scattering phase space increases as $\epsilon^{1/2}$,
while at the same time the square modulus of the carrier-impurity matrix element given in Eq.~\eqref{eq:tauave3} decreases as $1/q^4$, which is of the order of $\epsilon^{-2}$. This simple trend
is opposite to what is expected from non-polar optical scattering and acoustic phonon scattering, which tend to increase with energy.
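The competition between these two factors can be illustrated with a short numerical sketch (illustrative only: the prefactor and the energies below are arbitrary and are not taken from our calculations):

```python
import math

def impurity_rate(eps, c=1.0):
    """Schematic Brooks-Herring impurity scattering rate: phase space
    grows as eps**0.5 while the squared matrix element falls as eps**-2,
    so the combined rate falls as eps**-1.5."""
    return c * math.sqrt(eps) * eps**-2

def loglog_slope(f, e1, e2):
    """Effective power-law exponent of f between energies e1 and e2."""
    return (math.log(f(e2)) - math.log(f(e1))) / (math.log(e2) - math.log(e1))

# the combined rate decreases away from the band edge with exponent -3/2
slope = loglog_slope(impurity_rate, 0.01, 0.1)  # energies in eV, illustrative
```

This reproduces the $\epsilon^{-3/2}$ trend of the Brooks-Herring model, in contrast to phonon scattering rates, which grow with energy.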
Figure~\ref{fig:invtau} shows the scattering rates $\tau^{-1}_{n\mathbf k}$ of holes and electrons in Si [panels (a) and (b)], SiC [panels (c) and (d)], and GaP [panels (e) and (f)]. For consistency, we set the impurity concentration to 10$^{17}$~cm$^{-3}$\ in all cases, which is in the middle of the range considered in Figs.~\ref{fig:Si_expcomp}-\ref{fig:GaP_expcomp}, and the temperature to 300~K. In line with the above discussion, the carrier-impurity scattering rates decrease as we move away from the band extrema, while the carrier-phonon scattering rates increase. In the two polar semiconductors that we are considering, SiC and GaP, we also see a sudden jump in the carrier-phonon scattering rates. This effect happens when the carrier energy reaches the threshold for the emission of a longitudinal optical phonon, thereby activating polar phonon scattering~\cite{Verdi:2015}.
Panels (a) and (b) of Fig.~\ref{fig:invtau} show that, in the case of silicon, the carrier-ionized-impurity scattering rates near the band edges are an order of magnitude higher than the carrier-phonon rates (for an impurity concentration of 10$^{17}$~cm$^{-3}$). The additional scattering by impurities causes a reduction of the mobility by $\sim 30$\% for both electrons and holes, indicating that impurity scattering is a significant effect at this impurity concentration.
The rise of the electron scattering rates at energies around 150~meV seen in panel (b) corresponds to interband scattering between the two lowest conduction bands.
Panels (c) and (d) of Fig.~\ref{fig:invtau} show the scattering rates in SiC. Unlike in silicon, here the electron and hole scattering rates differ considerably. In the case of holes, the carrier-phonon and carrier-impurity scattering rates are comparable in magnitude near the band edge, while in the case of electrons the carrier-impurity scattering dominates. This difference is reflected in the calculated mobilities, where carrier-impurity scattering reduces the phonon-limited mobility of holes by $\sim 20$\% and of electrons by $\sim 50$\% (for the impurity concentration 10$^{17}$~cm$^{-3}$).
Data for GaP are shown in panels (e) and (f) of Fig.~\ref{fig:invtau}. In this case the carrier-phonon scattering rates are comparable to the carrier-impurity scattering rates. Accordingly, the mobilities are reduced by $\sim$10\% from their values without impurity scattering.
\subsection{Deviations from Matthiessen's Rule}\label{subsec:matt}
As discussed earlier, Matthiessen's rule is formally justified only when the scattering rates are state-independent constants, or when one scattering mechanism dominates over all the others. To place that reasoning on a quantitative footing, in Fig.~\ref{fig:matt} we explicitly assess the predictive accuracy of Matthiessen's rule.
For this test, we compute the mobilities of Si, SiC, and GaP by considering the following four scenarios: (i) phonon-limited mobility $\mu_{\rm ph}$ (i.e., without including carrier-impurity scattering);
(ii) impurity-limited mobility $\mu_{\rm imp}$ (i.e., without including carrier-phonon scattering); (iii) the mobility according to Matthiessen's rule, as obtained by combining (i) and (ii) using $1/\mu_{\rm M} = 1/\mu_{\rm ph}+1/\mu_{\rm imp}$; (iv) the mobility $\mu$ calculated by including both carrier-phonon scattering and carrier-impurity scattering using the $ai$\text{BTE}.
In panels (a), (c), and (e) we show this comparison for Si, SiC, and GaP, respectively, as a function of temperature. As expected, in all cases the phonon-limited mobilities (black lines) decrease with temperature while the impurity-limited mobilities (red lines) increase. Their combination results in the characteristic smooth peak, which is best seen in the cases of Si and SiC. In these panels, the dashed blue lines are from Matthiessen's rule, and the solid blue lines are the complete $ai$\text{BTE}\ solutions. We see that Matthiessen's rule tends to overestimate the $ai$\text{BTE}\ mobility, and the deviation is particularly pronounced when the phonon and impurity contributions to the mobility reduction are comparable. To quantify the deviation between the $ai$\text{BTE}\ calculations and the Matthiessen results, in panels (b), (d), and (f) of Fig.~\ref{fig:matt} we show the ratio between the two values as a function of temperature.
In all cases we see that the use of Matthiessen's rule leads to an overestimation of the mobilities by up to 50\%, which is significant in the context of predictive calculations of transport properties. More importantly, for the compounds considered in this work (Si, SiC, and GaP), the use of Matthiessen's rule would worsen the agreement between calculated mobilities and experimental data.
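The origin of this overestimation can be seen in a minimal two-state toy model (the relaxation times below are arbitrary illustrative numbers, not computed values): combining the rates state by state, as the full solution effectively does, yields a smaller mobility than the Matthiessen combination of the channel-averaged mobilities whenever the rates are state-dependent.

```python
# Toy two-state model with state-dependent relaxation times (arbitrary units).
tau_ph  = [1.0, 0.1]   # phonon relaxation times of states 1 and 2
tau_imp = [0.1, 1.0]   # impurity relaxation times of the same states

# Exact treatment: combine the rates state by state, then sum over states.
mu_exact = sum(1.0 / (1.0/tp + 1.0/ti) for tp, ti in zip(tau_ph, tau_imp))

# Matthiessen's rule: average each channel separately, then combine.
mu_ph, mu_imp = sum(tau_ph), sum(tau_imp)
mu_matthiessen = 1.0 / (1.0/mu_ph + 1.0/mu_imp)

# mu_matthiessen exceeds mu_exact, i.e. Matthiessen overestimates the mobility
```

In this contrived example the Matthiessen estimate exceeds the exact result by a factor of three; the deviations found in our calculations are smaller but follow the same direction.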
Based on these findings, we caution against the use of Matthiessen's rule in future \textit{ab initio} calculations of carrier mobilities.
\subsection{Improving the predictive power of the $ai$\text{BTE}}\label{subsec:corr}
In this section we investigate simple approaches to improve the predictive accuracy of the $ai$\text{BTE}\ by overcoming two standard limitations of DFT.
The first limitation is that the DFT band gap problem typically leads to an overestimation of the dielectric screening. As a result, both carrier-phonon and carrier-impurity matrix elements tend to be underestimated in DFT~\cite{Giustino:2017,Li:2019}, and mobilities tend to be overestimated. In Ref.~\cite{Ponce:2021} it was shown that, for a set of ten semiconductors, this effect leads to mobilities which can overestimate experimental data by as much as a factor of two. To mitigate this effect, we investigate a simple scaling correction to the matrix elements, as follows:
\begin{equation}
g_{mn\nu}^{\text{corr}}(\mathbf k,\mathbf q) = \frac{\varepsilon_{\text{DFT}}}{\varepsilon_{\text{exp}}} g_{mn\nu}^{\text{DFT}}(\mathbf k,\mathbf q),
\end{equation}
where $\varepsilon_{\rm DFT}$ is our calculated value, and $\varepsilon_{\rm exp}$ is the experimental value. We use the high-frequency dielectric constant for the carrier-phonon matrix elements, as was done in Ref.~\cite{Ponce:2018}, and the static dielectric constant for the carrier-impurity matrix elements [see Eq.~\eqref{eq.gmany}]. This approach is meaningful for the systems considered in this work, because the majority of scattering processes occur near the band extrema, and therefore involve small scattering wavevectors $\mathbf q$, thus justifying the re-scaling of the screening at long wavelength only.
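Because the scattering rates scale with $|g|^2$, this rescaling translates, in a single-channel single-band sketch (not the full $ai$\text{BTE}\ treatment used in our calculations), into a mobility correction by the factor $(\varepsilon_{\rm exp}/\varepsilon_{\rm DFT})^2$. The dielectric constants below are illustrative placeholders:

```python
def corrected_mobility(mu_dft, eps_dft, eps_exp):
    """Single-channel estimate: the matrix element is rescaled by
    eps_dft/eps_exp, the rate by its square, hence the mobility
    (an inverse rate) by (eps_exp/eps_dft)**2."""
    return mu_dft * (eps_exp / eps_dft) ** 2

# Illustrative numbers only: when DFT overestimates screening
# (eps_dft > eps_exp) the corrected matrix elements are larger and
# the mobility is reduced.
mu = corrected_mobility(1381.0, eps_dft=13.0, eps_exp=11.7)
```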
The second limitation of DFT calculations lies in the inaccurate curvature of the bands, which is also linked to the band gap problem, leading to slightly inaccurate carrier effective masses. This limitation could be overcome by performing GW calculations, but in this work we investigate a simpler mass scaling.
According to Drude's formula, carrier mobilities are inversely proportional to the effective masses. Based on this observation, we consider the following scaling correction, which is directly applied to the calculated mobility:
\begin{equation}
\mu_{\text{corr}} = \frac{m^*_\text{DFT}}{m^*_\text{exp}} \mu_\text{DFT},
\end{equation}
where all masses are isotropic averages.
The three compounds considered in this work all have ellipsoidal conduction band extrema, therefore we can evaluate the average isotropic mass as follows:
\begin{equation}
m^{*} = 3 ( 1/m_\parallel^* + 2/m_\perp^* )^{-1} .
\end{equation}
Evaluating the average hole mass is more complicated owing to the band degeneracy at $\Gamma$ and the fact that experimental data usually are reported for a given magnetic field direction as opposed to a crystallographic direction (see Sec.~\ref{sec.elecst}). In the case of silicon, we evaluate the
average mass using the values extracted from Dresselhaus' model (see Sec.~\ref{sec.elecst}). After this averaging procedure, the hole mass is calculated following Ref.~\cite{Ponce:2021}:
\begin{equation}
m^{*} = \frac{m_{\text{hh}}^{*,5/2}+m_{\text{lh}}^{*,5/2}}{m_\text{hh}^{*,3/2}+m_\text{lh}^{*,3/2}},
\end{equation}
where all quantities on the r.h.s.\ are spherical averages in k-space. In the case of SiC and GaP we are not aware of a parametrization similar to Dresselhaus', therefore we do not investigate mass corrections in these cases.
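For concreteness, the two mass averages can be evaluated in a few lines. The masses below are standard textbook values for silicon, in units of $m_{\rm e}$, used purely for illustration rather than taken from our tables:

```python
def conduction_mass(m_par, m_perp):
    """Isotropic average for an ellipsoidal conduction valley:
    m* = 3 / (1/m_par + 2/m_perp)."""
    return 3.0 / (1.0 / m_par + 2.0 / m_perp)

def valence_mass(m_hh, m_lh):
    """Average over the heavy- and light-hole bands:
    m* = (m_hh^{5/2} + m_lh^{5/2}) / (m_hh^{3/2} + m_lh^{3/2})."""
    return (m_hh**2.5 + m_lh**2.5) / (m_hh**1.5 + m_lh**1.5)

# illustrative Si masses (units of the free-electron mass)
m_e_avg = conduction_mass(0.916, 0.19)   # roughly 0.26
m_h_avg = valence_mass(0.49, 0.16)       # roughly 0.44
```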
The carrier mobilities obtained by applying the above corrections are shown in Fig. \ref{fig:epscorr}. In all cases we use the experimental dielectric constants reported in Tab.~\ref{tab:elprop}.
Panels (a) and (b) show our results for silicon. The screening correction to the electron mobilities of Si reduces the calculated value at low impurity concentration from 1381~cm$^{2}$/Vs\ to 1133~cm$^{2}$/Vs. This reduction causes an underestimation of the experimental value by approximately 20\%. At higher impurity concentration, the corrected mobility again agrees well with experimental results. The corrections to the electron effective mass of Si are minor and do not affect the mobility.
In the case of holes, the screening and mass corrections considerably improve the agreement between theory and experiment (our calculated average hole mass is $0.43~m_{\rm e}$ while the experimental value is $0.48~m_{\rm e}$). In fact, we obtain a hole mobility of 463~cm$^{2}$/Vs\ at low impurity concentration, which falls within the measured range of 450--500~cm$^{2}$/Vs~\cite{Jacobini:1977,Konstantinos:1993}. The improvement is also noticeable at higher impurity concentration.
Results for SiC are shown in panels (c) and (d) of Fig.~\ref{fig:epscorr}. In this case, we find that screening and mass corrections do not significantly improve the agreement with experiments at low impurity concentration. In particular, the screening correction reduces the electron mobility from 2047~cm$^{2}$/Vs\ to 1815~cm$^{2}$/Vs, and the mass correction further reduces this value to 1688~cm$^{2}$/Vs. Despite these corrections, the calculated electron mobility remains too high by about a factor of two. It is possible that additional scattering mechanisms such as dislocations could contribute to reduce this difference.
In the case of the hole mobility, the screening correction reduces the calculated value at low impurity concentration from 164~cm$^{2}$/Vs\ to 148~cm$^{2}$/Vs, which is not significant when compared to the large spread of experimental values~\cite{Wan:2002,Lee:2003,Schoner:2006,Nagasawa:2008}.
The screening correction appears to be successful in the case of GaP, as seen in panels (e) and (f) of Fig.~\ref{fig:epscorr}. The electron mobility at low impurity concentration reduces from 326~cm$^{2}$/Vs\ to 243~cm$^{2}$/Vs\ upon applying the screening correction. This value is in better agreement with the experimental data. Improved agreement with experiments is also found at higher impurity concentration. The correction to the electron effective mass of GaP is small, and as a result the change in mobility is not significant. The screening correction for holes brings the calculated data closer to the experiments. In particular, at low impurity concentration the hole mobility is reduced from 269~cm$^{2}$/Vs\ to 226~cm$^{2}$/Vs.
The key takeaway from this analysis is that the screening correction to the scattering matrix elements improves the agreement between theory and experiment for the compounds considered in this work.
Based on the above observations, we suggest that screening and mass corrections could be used for the purpose of uncertainty quantification in future \textit{ab initio} calculations of transport properties.
\section{Conclusions}\label{sec:conclusions}
In this work we have demonstrated non-empirical calculations of carrier mobilities in semiconductors using the \textit{ab initio} Boltzmann transport equations, including carrier scattering by phonons and by ionized impurities. To this end, we developed an \textit{ab initio} formalism to incorporate ionized-impurity scattering within the transport workflow based on Wannier-Fourier interpolation and implemented in the EPW code.
We described ionized impurities by randomly distributed Coulomb scatters, and we obtained the carrier relaxation time by using the Kohn-Luttinger ensemble averaging procedure. We also incorporated the screening of the impurity potential by free-carriers, within a parameter-free effective Thomas-Fermi model.
We validated our approach by performing an extensive set of calculations of the electron and hole mobilities of three common semiconductors, namely Si, 3C-SiC, and GaP. In all cases we find a reasonably good agreement with experimental data, except possibly for the electron mobility in SiC which is probably reduced by additional scattering at line defects in real samples. Our calculations follow closely the experimental data both as a function of temperature (at fixed impurity concentration) and as a function of impurity concentration (at fixed temperature).
Impurity scattering is found to dominate over phonon scattering at high impurity concentration and at low temperature. In the former case, the thermal distribution function of the carriers is peaked near the band edges, therefore small-$\mathbf q$ elastic scattering by impurities dominates. In the latter case, the phonon population becomes negligible at low temperature, therefore impurities remain the only active scattering channel. These trends are fully consistent with the general understanding of carrier transport in semiconductors~\cite{Lundstrom:2009}.
We also found that the energy-dependent carrier scattering rates are strongly dependent on the detailed mechanisms at play in each compound, and vary significantly over the energy range of relevance for transport phenomena. This finding underlines the importance of detailed \textit{ab initio} calculations to achieve predictive accuracy in the description of transport phenomena of real materials.
In the presence of multiple scattering channels, it is common to analyze mobility data using the classic Matthiessen rule. However, by directly comparing $ai$\text{BTE}\ calculations including both phonon and impurity scattering with estimates based on Matthiessen's rule, we found that the latter lead to inaccurate results, with deviations of up to 50\% with respect to $ai$\text{BTE}\ calculations. This finding indicates that Matthiessen's rule should not be employed in predictive calculations of transport properties.
Lastly, we investigated simple corrections to DFT calculations of carrier mobilities, by scaling the calculated dielectric screening and the effective masses via their corresponding experimental values. We found that the screening correction generally improves agreement with experiments.
Overall, our present approach offers a powerful tool for calculating transport properties in a variety of semiconducting materials of immediate interest, as well as for screening new putative semiconductors in the context of materials discovery.
Several improvements upon this work are possible. For one, we do not account for neutral impurity scattering. This additional channel could be added by generalizing our monopole model to account for dipoles and quadrupoles, following similar work performed in the context of electron-phonon interactions~\cite{Verdi:2015,Sjakste:2015,Brunin:2020,Park:2020}. Generalizations to the case of two-dimensional materials should also be possible, for example by following the related generalization of the Fr\"ohlich matrix element to two-dimensional systems~\cite{Sohier:2016,Sio:2022}. At high impurity concentration one should also account for carrier-plasmon scattering, for example as discussed in Ref.~\cite{Caruso:2016}. And of course, any improvement in the DFT band structures and electron-phonon matrix elements would be highly beneficial to further enhance the predictive power of these calculations~\cite{Li:2019}. We hope that this study will stimulate further work along these and other promising directions.
\begin{acknowledgments}
This research is primarily supported by the Computational Materials Sciences Program funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0020129. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper: https://www.tacc.utexas.edu.
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
There has been much recent interest in deep learning optimizer research \citep{amsgrad, adabound, lookahead_optimizer, radam}. These works attempt to answer the question: what is the best step size to use in each step of gradient descent? With first order gradient descent being the \textit{de facto} standard in deep learning optimization, the question of the optimal step size or learning rate at each step arises naturally. The difficulty of choosing a good learning rate can be better understood by considering the two extremes: 1) when the learning rate is too small, training takes a long time; 2) when the learning rate is overly large, training diverges instead of converging to a satisfactory solution.
The two main classes of optimizers commonly used in deep learning are the momentum based Stochastic Gradient Descent (SGD) \citep{sgd} and adaptive momentum based methods \citep{adagrad, adam, amsgrad, adabound, radam}. The difference between the two lies in how the newly computed gradient is updated. In SGD with momentum, the new gradient is updated as a convex combination of the current gradient and the exponentially averaged previous gradients. For the adaptive case, the current gradient is further weighted by a term involving the sum of squares of the previous gradients. For a more detailed description and convergence analysis, please refer to \citet{amsgrad}.
In Adam \citep{adam}, the experiments conducted on the MNIST and CIFAR10 datasets showed that Adam has the fastest convergence compared to other optimizers, in particular SGD with Nesterov momentum. Adam has been popular with the deep learning community due to its speed of convergence. However, the experiments in Adabound \citep{adabound}, a proposed improvement to Adam that bounds the adaptive learning rates, showed that given enough training epochs, SGD can converge to a better quality solution than Adam. To quote from the future work of Adabound, ``why SGD usually performs well across diverse applications of machine learning remains uncertain''. The choice of optimizers is by no means straightforward or cut and dried.
Another critical aspect of training a deep learning model is the batch size. Once again, while the batch size was previously regarded as a hyperparameter, recent studies such as \citet{keskar_sharp} have shed light on the role of batch size when it comes to generalization, i.e., how the trained model performs on the test dataset. Research works \citep{keskar_sharp, flat_minima} expounded the idea of sharp vs. flat minima when it comes to generalization. From experimental results on convolutional networks, e.g., AlexNet \citep{alexnet}, VggNet \citep{vgg}, \citet{keskar_sharp} demonstrated that overly large batch size tends to lead to sharp minima while sufficiently small batch size brings about flat minima. \citet{sharp_minima_generalization}, however, argues that sharp minima can also generalize well in deep networks, provided that the notion of sharpness is taken in context.
While the aforementioned works have helped to contribute our understanding of the nature of the various optimizers, their learning rates and batch size effects, they are mainly focused on computer vision (CV) related deep learning networks and datasets. In contrast, the rich body of works in Neural Machine Translation (NMT) and other Natural Language Processing (NLP) related tasks have been largely left untouched. Recall that CV deep learning networks and NMT deep learning networks are very different. For instance, the convolutional network that forms the basis of many successful CV deep learning networks is translation invariant, e.g., in a face recognition network, the convolutional filters produce the same response even when the same face is shifted or translated. In contrast, Recurrent Neural Networks (RNN) \citep{lstm, gru} and transformer-based deep learning networks \citep{Vaswani2017, bert} for NMT are specifically looking patterns in sequences. There is no guarantee that the results from the CV based studies can be carried across to NMT. There is also a lack of awareness in the NMT community when it comes to optimizers and other related issues such as learning rate policy and batch size. It is often assumed that using the mainstream optimizer (Adam) with the default settings is good enough. As our study shows, there is significant room for improvement.
\subsection{The Contributions}
The contributions of this study are to:
\begin{itemize}
\item Raise awareness of how a judicial choice of optimizer with a good learning rate policy can help improve performance;
\item Explore the use of cyclical learning rates for NMT. As far as we know, this is the first time cyclical learning rate policy has been applied to NMT;
\item Provide guidance on how cyclical learning rate policy can be used for NMT to improve performance.
\end{itemize}
\section{Related Works}
\citet{viz_loss} proposes various visualization methods for understanding the loss landscape defined by the loss functions and how the various deep learning architectures affect the landscape. The proposed visualization techniques allow a depiction of the optimization trajectory, which is particularly helpful in understanding the behavior of the various optimizers and how they eventually reach their local minima.
Cyclical Learning Rate (CLR) \citep{clr} addresses the learning rate issue by having repeated cycles of linearly increasing and decreasing learning rates, constituting the triangle policy for each cycle. CLR draws its inspiration from curriculum learning \citep{curriculum} and simulated annealing \citep{sim_anneal}. \citet{clr} demonstrated the effectiveness of CLR on standard computer vision (CV) datasets CIFAR-10 and CIFAR-100, using well established CV architecture such as ResNet \citep{resnet} and DenseNet \citep{densenet}. As far as we know, CLR has not been applied to Neural Machine Translation (NMT). The methodology, best practices and experiments are mainly based on results from CV architecture and datasets. It is by no means apparent or straightforward that the same approach can be directly carried over to NMT.
One interesting aspect of CLR is the need to balance regularizations such as weight decay, dropout and batch size, etc., as pointed out in \citet{super_clr}. The experiments verified that various regularizations need to be toned down when using CLR to achieve good results. In particular, the generalization results using the small batch size from the above-mentioned studies no longer hold for CLR. This is interesting because the use of CLR allows training to be accelerated by using a larger batch size without the sharp minima generalization concern. A related work is \citet{empirical_batchsize}, which sets a theoretical upper limit on the speed up in training time with increasing batch size. Beyond this theoretical upper limit, there will be no speed up in training time even with increased batch size.
\section{The Proposed Approach}
\label{sec:methods}
Our NMT learning rate policy is based on the triangular learning rate policy of CLR. For CLR, some pertinent parameters need to be determined: the base/max learning rate and the cycle length. As suggested in CLR, we perform a range test to set the base/max learning rate, while the cycle length is some multiple of the number of epochs. The range test is designed to select the base/max learning rate in CLR; without it, the base/max learning rate would need to be tuned as hyperparameters, which is difficult and time-consuming. In a range test, the network is trained for several epochs with the learning rate increased linearly from an initial value. For instance, the range test for the IWSLT2014 (DE2EN) dataset was run for 35 epochs, with the initial learning rate set to some small value, e.g., $1 \times 10^{-5}$ for Adam, and increased linearly over the 35 epochs. Given the range test curve, e.g., Figure \ref{range_test_IWSLT14-de-en}, the base learning rate is set to the point where the loss starts to decrease, while the maximum learning rate is selected as the point where the loss starts to plateau or to increase. As shown in Figure \ref{range_test_IWSLT14-de-en}, the base learning rate is selected as the initial learning rate of the range test, since the loss already decreases steeply at that rate. For the step size, we follow the guideline in \citet{clr} of selecting a step size between 2-10 times the number of iterations in an epoch, and set it to 4.5 epochs.
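The linear schedule used in the range test can be sketched as follows (the end values are illustrative; only the initial Adam rate of $1 \times 10^{-5}$ is taken from our setup):

```python
def range_test_lr(step, total_steps, lr_init, lr_final):
    """Learning rate at `step` of a range test: increased linearly from
    lr_init to lr_final over the whole run; the loss recorded at each
    rate is then used to pick the CLR base/max boundaries."""
    frac = step / max(1, total_steps - 1)
    return lr_init + frac * (lr_final - lr_init)
```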
The other hyperparameter to take care of is the learning rate decay policy, shown in Figure \ref{clr_decay}. For the various optimizers, the learning rate is usually decayed to a small value to ensure convergence. There are various commonly used decay schemes, such as the piece-wise constant step function and the inverse (reciprocal) square root. This study adopts two learning rate decay policies:
\begin{itemize}
\item Fixed decay (shrinking) policy where the max learning rate is halved after each learning rate cycle;
\item No decay. This is unusual because for both SGD and adaptive momentum optimizers, a decay policy is required to ensure convergence.
\end{itemize}
Our adopted learning rate decay policies are interesting because the experiments in \citet{clr} showed that using a decay rate is detrimental to the resultant accuracy. The experiments in Section \ref{sec:experiment} reveal how CLR performs with each of the chosen decay policies.
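The triangular policy with the optional shrink can be written compactly following the formulation of \citet{clr} (a sketch with our own variable names; `shrink=0.5` reproduces the fixed halving policy, `shrink=1.0` the no-decay policy):

```python
import math

def triangular_lr(it, base_lr, max_lr, step_size, shrink=1.0):
    """Triangular cyclical learning rate: ramp linearly from base_lr to
    max_lr over `step_size` iterations, then back down.  With shrink < 1
    the peak of each successive cycle is multiplied by `shrink`."""
    cycle = math.floor(1 + it / (2 * step_size))
    x = abs(it / step_size - 2 * cycle + 1)
    amp = (max_lr - base_lr) * shrink ** (cycle - 1)
    return base_lr + amp * max(0.0, 1.0 - x)
```

For example, with `base_lr=1e-5`, `max_lr=5e-4` (the Adam boundaries of Table \ref{lrb-table}) and a step size of 100 iterations, the rate peaks at iteration 100, returns to the base at iteration 200, and, with `shrink=0.5`, peaks at half the amplitude at iteration 300.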
\begin{figure}[!h]
\centering
\includegraphics[width=0.47\textwidth]{experiment/range_test/range_test_lower_upper.PNG}
\caption{Range test curve for the IWSLT2014-de-en dataset, showing the chosen base and max learning rate for the triangular policy.}
\label{range_test_IWSLT14-de-en}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.47\textwidth]{experiment/range_test/clr_decay.png}
\caption{The learning rate decay used in our experiments.}
\label{clr_decay}
\end{figure}
\begin{table*}[h]
\centering
\begin{tabular}{lccccc}
\hline \textbf{Corpus} & \textbf{Train} & \textbf{Valid.} &\textbf{Test} & \textbf{Source Vocab.} & \textbf{Target Vocab.} \\ \hline
IWSLT2014-de-en (DE2EN) & 160,239 & 7,283 & 6,750 & 8,844 & 6,628 \\
IWSLT2014-fr-en (FR2EN) & 166,045 & 4,818 & 4,800 & 8,508 & 7,308 \\
IWSLT2017-de-en (DE2EN) & 192,347 & 4,829 & 4,822 & 13,156 & 10,108 \\
\hline
\end{tabular}
\caption{\label{data-table} Datasets used for the experiment. }
\end{table*}
The CLR decay policy should be contrasted with the standard inverse square root policy (INV) that is commonly used in deep learning platforms, e.g., in fairseq \citep{ott2019fairseq}. The inverse square root policy typically starts with a warm-up phase in which the learning rate is linearly increased to a maximum value. Thereafter, the learning rate is decayed from this maximum value in proportion to the reciprocal of the square root of the number of epochs.
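For reference, the INV schedule can be sketched as follows (the warm-up length and peak rate below are illustrative defaults, not our experimental settings):

```python
import math

def inv_sqrt_lr(step, warmup_steps, peak_lr):
    """Inverse square root (INV) schedule: linear warm-up to peak_lr
    over warmup_steps, then decay proportional to 1/sqrt(step)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * math.sqrt(warmup_steps / step)
```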
The other point of interest is how to deal with the batch size when using CLR. Our primary interest is to use a larger batch size without compromising the generalization capability on the test set. Following the lead of \citet{super_clr}, we look at how the NMT tasks perform when varying the batch size on top of the CLR policy. Compared to \citet{super_clr}, we stretch the batch size range, going from batch sizes as small as 256 to as large as 4,096. Only by examining the extreme behaviors can we better understand the effect of batch size superimposed on CLR.
\section{Experiments}
\label{sec:experiment}
\subsection{Experiment Settings}
The purpose of this section is to demonstrate the effects of applying CLR and various batch sizes to train NMT models. The experiments are performed on two translation directions (DE $\rightarrow$ EN and FR $\rightarrow$ EN) for IWSLT2014 and IWSLT2017 \citep{cettoloEtAl:EAMT2012}.
The data are pre-processed using functions from Moses \citep{moses}. The punctuation is normalized into a standard format. After tokenization, byte pair encoding (BPE) \citep{sennrich2016b} is applied to the data to mitigate the adverse effects of out-of-vocabulary (OOV) rare words. The sentences with a source-target sentence length ratio greater than 1.5 are removed to reduce potential errors from sentence misalignment. Long sentences with a length greater than 250 are also removed as a common practice. The split of the datasets produces the training, validation (valid.) and test sets presented in Table \ref{data-table}.
\begin{table}[h]
\begin{center}
\begin{tabular}{ll}
\hline \textbf{Hyperparameters} & \textbf{Values} \\ \hline
Encoder/Decoder Layers & 6 \\
Embedding Units & 512 \\
Attention Heads & 4 \\
Feed-forward Hidden Units & 1,024 \\
Batch Size (default) & 4,096 \\
Training Epoch (default) & 50 \\
\hline
\end{tabular}
\end{center}
\caption{\label{para-table} Hyperparameters for the experiments.}
\end{table}
The transformer architecture \citep{Vaswani2017} from fairseq \citep{ott2019fairseq}\footnote{https://github.com/pytorch/fairseq} is used for all the experiments. The hyperparameters are presented in Table \ref{para-table}. We compare training under CLR with the inverse square root policy (INV) for two popular optimizers used in machine translation tasks, Adam and SGD. All models are trained using one NVIDIA V100 GPU.
\begin{table*}[h]
\centering
\begin{tabular}{lcccc}
\hline
\multirow{2}{*}{\textbf{Corpus}} & \multicolumn{2}{c}{\textbf{Adam}} & \multicolumn{2}{c}{\textbf{SGD}} \\ \cline{2-5}
& \textbf{Max} & \textbf{Base} & \textbf{Max} & \textbf{Base} \\ \hline
IWSLT2014-de-en & 5.00E-04 & 1.00E-05 & 6.90E+00 & 1.00E-03 \\
IWSLT2014-fr-en & 8.00E-04 & 1.00E-05 & - & - \\
IWSLT2017-de-en & 7.60E-04 & 1.00E-05 & 8.00E+00 & 1.00E-03 \\
\hline
\end{tabular}
\caption{\label{lrb-table} Learning rate boundary for CLR. }
\end{table*}
The learning rate boundary of the CLR is selected by the range test (shown in Figure \ref{range_test_IWSLT14-de-en}). The base and maximal learning rates adopted in this study are presented in Table \ref{lrb-table}.
A shrink strategy is applied when examining the effects of CLR on NMT training. The optimizers (Adam and SGD) are trained with two options: 1) without shrink (``nshrink''); 2) with shrink at a rate of 0.5 (``yshrink''), meaning that the maximal learning rate of each cycle is reduced by a decay factor of 0.5.
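The triangular CLR schedule with the two shrink options can be sketched as follows (a minimal illustration assuming a symmetric triangular cycle; the exact implementation used in this study may differ in details):

```python
def clr_triangular(step, base_lr, max_lr, half_cycle, shrink=1.0):
    """Triangular CLR: the learning rate oscillates linearly between
    base_lr and the cycle's maximum. With shrink < 1 the maximum decays
    each cycle ('yshrink'); with shrink = 1 it stays fixed ('nshrink')."""
    cycle = step // (2 * half_cycle)            # completed full cycles
    peak = base_lr + (max_lr - base_lr) * (shrink ** cycle)
    pos = step % (2 * half_cycle)               # position inside the cycle
    if pos <= half_cycle:                       # rising edge
        frac = pos / half_cycle
    else:                                       # falling edge
        frac = (2 * half_cycle - pos) / half_cycle
    return base_lr + (peak - base_lr) * frac
```

With `shrink=0.5`, the peak of the second cycle is halfway between the base and maximal learning rates, matching the ``yshrink'' option above.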
\subsection{Effects of Applying CLR to NMT Training}
A hypothesis we hold is that NMT training under CLR may result in a better local minimum than that achieved by training with the default learning rate schedule. A comparison experiment is performed in which NMT models are trained on the ``IWSLT2014-de-en'' corpus using CLR and INV with a range of initial learning rates for the two optimizers (Adam and SGD).
It can be observed that both Adam and SGD are very sensitive to the initial learning rate under the default INV schedule (as shown in Figures \ref{fig3} and \ref{fig4}). In general, SGD prefers a larger initial learning rate when CLR is not applied, whereas the workable initial learning rates for Adam are concentrated in a narrower central range.
Applying CLR has positive impacts on NMT training for both Adam and SGD. When applied to SGD, CLR removes the need for a large initial learning rate, as it enables the optimizer to explore the local minima better. Shrinking on CLR for SGD is not desirable, as a higher learning rate is required (Figure \ref{fig4}). It is noted that applying CLR to Adam produces consistent improvements regardless of the shrink option (Figure \ref{fig3}). Furthermore, the effects of applying CLR to Adam are more significant than those for SGD, as shown in Figure \ref{fig5}. Similar results are obtained from our experiments on the ``IWSLT2017-de-en'' and ``IWSLT2014-fr-en'' corpora (Figures \ref{figapendix1} and \ref{figappendix2} in Appendix~\ref{sec:appendix}). The corresponding BLEU scores are presented in Table \ref{bleu-table}, which corroborates the above-mentioned effects of CLR on Adam: training takes fewer epochs to converge to a local minimum with better BLEU scores (i.e., bold fonts in Table \ref{bleu-table}).
\begin{figure}[h]
\centering
\includegraphics[width=0.47\textwidth]{experiment/14-de-en/valid_transformer_4_1.png}
\caption{A comparison study of training NMT models on IWSLT2014-de-en using CLR and INV with a range of initial learning rates on Adam. The learning rate policy ``adam\_cyc\_nshrink\_5e-4'' denotes that Adam is trained under CLR with the no-shrink option and a max learning rate of 5e-4.}
\label{fig3}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.47\textwidth]{experiment/14-de-en/valid_transformer_3_1.png}
\caption{A comparison study of training NMT models on IWSLT2014-de-en using CLR and INV with a range of initial learning rates on SGD. The learning rate policy ``sgd\_cyc\_yshrink\_5e-4'' denotes that SGD is trained under CLR with the shrink option and a max learning rate of 5e-4.}
\label{fig4}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.47\textwidth]{experiment/14-de-en/valid_transformer_2.png}
\caption{Overall effects of applying CLR to Adam and SGD when training NMT models on IWSLT2014-de-en.}
\label{fig5}
\end{figure}
\begin{table*}[h]
\centering
\begin{tabular}{lccc}
\hline
\textbf{Corpus} & \textbf{Learning Rate Policy} & \textbf{Best BLEU} & \textbf{Epoch} \\ \hline
\multirow{5}{*}{IWSLT2014-de-en} & adam\_cyc\_nshrink\_5e-4 & \textbf{32.65} & \textbf{18} \\
& adam\_cyc\_yshrink\_5e-4 & \textbf{31.29} & \textbf{18} \\
& adam\_inv\_5e-4 & 30.88 & 16 \\
& sgd\_inv\_30 & 30.78 & 42 \\
& adam\_inv\_3e-4 & 30.46 & 34 \\
& sgd\_cyc\_nshrink\_6.9 & 30.16 & 45\\
\hline
\multirow{5}{*}{IWSLT2017-de-en} & adam\_cyc\_nshrink\_7.6e-4 & \textbf{33.00} & \textbf{18} \\
& adam\_cyc\_yshrink\_7.6e-4 & \textbf{31.56} & \textbf{19} \\
& sgd\_inv\_30 & 30.82 & 49 \\
& adam\_inv\_3e-4 & 30.78 & 35 \\
& adam\_inv\_5e-4 & 30.70 & 19 \\
& sgd\_cyc\_nshrink\_8 & 30.40 & 49 \\
& adam\_inv\_7.6e-4 & 28.94 & 40 \\
\hline
\multirow{4}{*}{IWSLT2014-fr-en} & adam\_cyc\_nshrink\_8e-4 & \textbf{37.82} & \textbf{17} \\
& adam\_cyc\_yshrink\_8e-4 & \textbf{36.91} & \textbf{17} \\
& adam\_inv\_5e-4 & 36.43 & 17 \\
& adam\_inv\_3e-4 & 36.25 & 35 \\
& sgd\_inv\_30 & 35.51 & 45 \\
& adam\_inv\_8e-4 & 6.20 & 43 \\
\hline
\end{tabular}
\caption{\label{bleu-table} The best BLEU for various learning rate policies when training NMT models on IWSLT2014-de-en, IWSLT2017-de-en and IWSLT2014-fr-en. The total number of training epochs for all the experiments is 50. The table is sorted by the best BLEU in descending order. }
\end{table*}
\subsection{Effects of Batch Size on CLR}
Batch size is regarded as a significant factor influencing deep learning models in the various CV studies discussed in Section \ref{sec:introduction}. It is well known to CV researchers that a large batch size is often associated with poor test accuracy. However, this trend is reversed when the CLR policy is introduced by \citet{super_clr}. The critical questions are: does the trend of using a larger batch size with CLR hold for training transformers in NMT, and at what range of batch sizes does the associated regularization become significant? This has practical implications: if CLR allows a larger batch size without compromising the generalization capability, training can be sped up accordingly. From Figure \ref{fig6}, we see that CLR with a larger batch size does indeed lead to better performance for NMT training, so the phenomenon observed by \citet{super_clr} for CV tasks carries over to NMT. In fact, using a small batch size of 256 (the green curve in Figure \ref{fig6}) leads to divergence, as shown by the validation loss spiraling out of control. This is in line with the need to prevent over-regularization when using CLR: the small batch size of 256 adds a strong regularization effect and thus needs to be avoided. The larger batch size afforded by CLR is good news, because NMT typically deals with large networks and huge datasets, and training time can therefore be cut down considerably.
\begin{figure}[h]
\centering
\includegraphics[width=0.47\textwidth]{experiment/batch_size_test/valid_transformer_bs0_1.png}
\caption{Effects of various batch sizes when training the NMT on IWSLT2014-de-en corpus with CLR.}
\label{fig6}
\end{figure}
\label{sec:experiments}
\section{Further Analysis}
We observe qualitatively different range test curves for the CV and NMT datasets, as can be seen from Figures \ref{range_test_IWSLT14-de-en} and \ref{range_test_CV}. The CV range test curve is better defined in terms of choosing the max learning rate from the point where the curve starts to become ragged. For NMT, the range curve is smoother and exhibits a plateau. From Figure \ref{range_test_IWSLT14-de-en}, one may be tempted to exploit this plateau by choosing a larger learning rate at the extreme right end (just before divergence occurs) as the triangular policy's max learning rate. From our experiments and empirical observations, this often leads to the loss not converging due to an excessive learning rate. It is better to be conservative and choose the point where the loss stagnates as the max learning rate for the triangular policy.
\begin{figure}[h]
\centering
\includegraphics[width=0.47\textwidth]{experiment/range_test/range_test_CV.png}
\caption{Range test curve for the CV CIFAR-100 dataset.}
\label{range_test_CV}
\end{figure}
\subsection{How to Apply CLR to NMT Training Matters}
A range test is performed to identify the max learning rates (MLR1 and MLR2) for the triangular policy of CLR (Figure \ref{range_test_IWSLT14-de-en}). The experiments show that training is sensitive to the selection of the MLR. As the range curve for training NMT models is distinct from that obtained in a typical computer vision case, it is not obvious how to choose the MLR when applying CLR. A comparison experiment is performed with MLRs of different values. It can be observed that MLR1 is the preferable option for both SGD and Adam (Figures \ref{figureMLR1} and \ref{figureMLR2}). The ``noshrink'' option is mandatory for SGD, but this constraint can be relaxed for Adam. Adam is sensitive to an excessive learning rate (MLR2).
\begin{figure}[h]
\centering
\includegraphics[width=0.47\textwidth]{experiment/14-de-en/valid_transformer_0.png}
\caption{MLR1 with ``noshrink" is a preferable option for SGD when applying CLR to train NMT models on IWSLT2014-de-en.}
\label{figureMLR1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.47\textwidth]{experiment/14-de-en/valid_transformer_1.png}
\caption{MLR1 is a preferable option for Adam when applying CLR to train NMT models on IWSLT2014-de-en.}
\label{figureMLR2}
\end{figure}
\begin{figure*}[h]
\centering
\subfigure[adam-inv-1e-3]{\includegraphics[scale=0.35]{experiment/surface/adam-1e-3.pdf}}
\subfigure[adam-inv-5e-4]{\includegraphics[scale=0.35]{experiment/surface/adam-5e-4.pdf}}
\subfigure[adam-cyc-yshrink]{\includegraphics[scale=0.35]{experiment/surface/adam-cyc-5e-4.pdf}}
\caption{Loss surface, optimizer trajectory and learning rates visualization for training NMT models on IWSLT2014-de-en.}
\label{fig:error_surface}
\end{figure*}
\subsection{Rationale behind Applying CLR to NMT Training}
There are two reasons proposed in \citet{clr} for why CLR works. The theoretical perspective is that the increasing learning rate helps the optimizer escape from saddle point plateaus; as pointed out in \citet{clr_saddle}, the difficulty in optimizing deep learning networks is due to saddle points, not local minima. The other, more intuitive reason is that the learning rates covered by CLR are likely to include the optimal learning rate, which will be used throughout the training. Leveraging the visualization techniques proposed by \citet{viz_loss}, we take a peek at the error surface, optimizer trajectory and learning rates. The first thing to note is the smoothness of the error surface, which is perhaps not so surprising given the abundance of skip connections in transformer-based networks. Referring to Figure \ref{fig:error_surface} (c), we see the cyclical learning rate greatly amplifying Adam's learning rate in flatter regions, while nearer the local minimum the cyclical learning rate policy does not harm convergence. This is in contrast to Figure \ref{fig:error_surface} (a) and (b), where although the adaptive nature of the learning rate in Adam helps to move quickly across flatter regions, the effect is much less pronounced without the cyclical learning rate. Figure \ref{fig:error_surface} thus gives credence to the hypotheses that the cyclical learning rate helps to escape saddle point plateaus, and that the optimal learning rate is included within the cyclical learning rate range.
Some explanation of Figure \ref{fig:error_surface} is in order here. Following \citet{viz_loss}, we first assemble the network weight matrix by concatenating columns of network weights at each epoch. We then perform a Principal Component Analysis (PCA) and use the first two components for plotting the loss landscape. Even though all three plots in Figure \ref{fig:error_surface} seem to converge to the local minimum, bear in mind that this holds only for the first two components, which account for 84.84\%, 88.89\% and 89.5\% of the variance, respectively. With the first two components accounting for a large portion of the variance, it is reasonable to use Figure \ref{fig:error_surface} as a qualitative guide.
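The projection step can be sketched as follows (a minimal illustration, assuming one flattened weight snapshot per epoch; PCA is computed here via SVD of the centered snapshot matrix, which is one standard way to obtain the principal components):

```python
import numpy as np

def pca_trajectory(weight_snapshots):
    """Project per-epoch flattened weight vectors onto their first two
    principal components, as in the loss-landscape visualization above.
    Returns the 2-D trajectory and the variance explained by each axis."""
    W = np.stack([w.ravel() for w in weight_snapshots])  # (epochs, params)
    W = W - W.mean(axis=0)                               # center snapshots
    U, S, Vt = np.linalg.svd(W, full_matrices=False)     # PCA via SVD
    coords = W @ Vt[:2].T                                # (epochs, 2)
    explained = (S[:2] ** 2) / (S ** 2).sum()            # variance ratios
    return coords, explained
```

The `explained` values play the role of the 84.84\%--89.5\% variance figures reported above.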
\section{Conclusion}
From the various experiment results, we have explored the use of CLR and demonstrated its benefits for transformer-based networks unequivocally. Not only does CLR help to improve the generalization capability in terms of test set results, but it also allows using a larger batch size for training without adversely affecting the generalization capability. Instead of blindly using default optimizers and learning rate policies, we hope to raise awareness in the NMT community of the importance of choosing a useful optimizer and an associated learning rate policy.
\label{sect:conclusions}
\subsection{Speaker verification}
Speaker verification aims to verify a person’s claimed identity using voice biometrics.
With the growing popularity of deep neural network (DNN), the leading modeling approach for speaker verification has transferred from the traditional Gaussian Mixture Model-Universal Background Model (GMM-UBM)~\cite{reynolds2000speaker} and \textit{i}-vector~\cite{dehak2010front} to deep speaker embedding representation learning.
A typical speaker embedding learning approach is \textit{d}-vector~\cite{variani2014deep}, where a fully-connected DNN is used to extract frame-level deep features.
These features are then averaged to form an utterance-level speaker representation.
Another popular speaker embedding learning approach is the time-delay neural network (TDNN) based \textit{x}-vector~\cite{DBLP:conf/interspeech/SnyderGPK17}, which has been shown to achieve state-of-the-art results in multiple datasets~\cite{snyder2018x}.
Recently, more advanced neural architectures, such as ResNet34, have been shown to further improve performance by extracting \textit{r}-vectors~\cite{zeinali2019but}.
In this work, we also employed ResNet34 to learn speaker embedding vectors from the audio stream.
\subsection{Face verification}
Face verification aims to verify a person’s claimed identity using visual images.
Since the impressive success of the AlexNet~\cite{DBLP:conf/nips/KrizhevskySH12} architecture in the ImageNet competition, deep convolutional neural network (DCNN)-based approaches have become the mainstream for the face verification task.
For example, DeepFace~\cite{taigman2014deepface} achieved human-level performance on the challenging LFW benchmark~\cite{LFWTech} for the first time.
Furthermore, in recent years, researchers have explored many different DCNN-based architectures for the face verification task, where remarkable improvements have been achieved.
These architectures include, but are not limited to, DeepID~\cite{sun2015deepid3}, VGGNet~\cite{DBLP:journals/corr/SimonyanZ14a}, FaceNet~\cite{DBLP:conf/cvpr/SchroffKP15}, and ResFace~\cite{DBLP:conf/cvpr/HeZRS16}.
In our work, we will employ ResNet34 to extract embedding vectors from both visual and thermal images.
\subsection{Audio-visual person verification}
Although speaker verification and face verification have achieved remarkable progress in recent years, their performance decreases significantly under more challenging conditions.
For example, speaker verification is sensitive to changes in acoustic characteristics due to background noise, the person's mood (e.g., happy or angry), the uttered text, the recording device, distance, and so on.
Similarly, the performance of face verification is vulnerable to illumination, pose, emotion, and distance.
To alleviate the drawbacks of the two verification approaches, researchers are turning to audio-visual verification approaches utilizing both modalities.
The initial work in this direction employed score-level fusion strategies~\cite{DBLP:conf/icassp/SellDSEG18,alam2020analysis,DBLP:conf/clear/LuqueMGAFMMMVH06}, where scores obtained from separately trained unimodal models were combined.
More recent works exploit attention-based fusion~\cite{shon2019noise,qian2021audio} to judiciously combine salient features from input modalities.
Overall, compared to unimodal verification systems, multimodal systems have been shown to be more accurate and robust, especially under noisy conditions.
To facilitate research on audio-visual person verification, various multimedia datasets have been introduced, such as VAST~\cite{DBLP:conf/lrec/TraceyS18}, JANUS~\cite{DBLP:conf/icassp/SellDSEG18}, VoxCeleb 1~\cite{nagrani2017voxceleb} \& 2~\cite{chung2018voxceleb2}.
Additionally, several person verification challenges have been organized~\cite{DBLP:conf/clear/StiefelhagenBBGMS06,sadjadi20202019}.
For example, the 2019 NIST speaker recognition evaluation (SRE) challenge~\cite{sadjadi20202019} proposed to use visual information in conjunction with audio recordings to increase the robustness of verification systems.
As a result, audio-visual fusion was found to result in remarkable performance gains (more than 85\% relative) over audio-only or face-only systems.
In this paper, we investigate a further increase in the number of modalities by supplementing an audio-visual system with thermal images.
\iffalse
Person verification has been an active field of research for several decades~\cite{zhang2017end, snyder2018x, taigman2014deepface, wen2016discriminative}. Many approaches that have been already proposed are based on a single-stream framework, which is either on a person's voice or on his or her facial appearance, and many of them evolved over time. For example, the approaches for speaker recognition started from traditional probabilistic approaches~\cite{reynolds2000speaker, dehak2010front} and moved to deep speaker embedding ones~\cite{variani2014deep, snyder2017deep}. Whereas the approaches for face recognition have transferred from local descriptors based learning~\cite{cao2010face, lei2013learning} to deep convolutional neural networks~\cite{krizhevsky2012imagenet, parkhi2015deep, schroff2015facenet}.
Although single modality person recognition showed competitive performance, it was inferred that multi-stream framework could boost the performance further~\cite{qian2021audio}. This is because the two modalities might complement each other in situations when one of them makes a wrong decision or is absent. Based on this multi-stream framework has lately attracted more and more attention. At the 2019 NIST Speaker Recognition Evaluation challenge, it was proposed to use visual information in conjunction with the audio recordings to increase robustness of recognition frameworks~\cite{sadjadi20202019, alam2020analysis}. Apart from this, several versions of VoxCeleb - a large-scale audio-visual person recognition dataset - have been published~\cite{nagrani2017voxceleb, chung2018voxceleb2, nagrani2020voxceleb}. These datasets further boosted the development of audio-visual recognition methods~\cite{qian2021audio, chen2020multi, shon2019noise, shon2020multimodal}.
Up to date, a lot of research has been conducted on the utilization of audio or/and visual streams. In the field of speaker recognition, Chung et al. investigated multiple metric-based and softmax-based loss functions under identical conditions in order to identify which produce the best representation of an audio stream~\cite{chung2020defence}. Whereas, Yao et al. developed a new multi-stream Convolutional Neural Network which performs speaker verification tasks by a frequency selection technique - dividing the audio into different frequencies and selecting the one that describes the target best~\cite{yao2020multi}.
In the field of person recognition, Shon et al.~\cite{shon2019noise} proposed an attention-based approach with a feature-level fusion method. The attention mechanism at the core of the network learns to select a more salient modality, which enables the model to be resistant to situations in which either of the modalities is corrupted. Their subsequent work~\cite{shon2020multimodal} introduced fine-tuning of their pre-trained unimodal models in order to prevent overfitting. Chen et al.~\cite{chen2020multi} in their end-to-end multi-modal system explored and compared three methods of fusing voice and face embeddings: Simple Soft Attention Fusion~\cite{shon2019noise}, Compact Bilinear Pooling Fusion~\cite{fukui2016multimodal}, and Gated Multi-Modal Fusion~\cite{arevalo2017gated}. In addition, the authors explored the systems with two loss functions, namely Contrastive Loss with Aggressive Sampling Strategy and Additive Angular Margin Loss~\cite{deng2019arcface}. Qian et al.~\cite{qian2021audio} proposed three audio-visual deep neural networks (AVN) for multi-modal person verification. These are - a feature level AVN (AVN-F), which accepts audio and visual streams simultaneously, an embedding-level AVN (AVN-E), which accepts pre-trained voice and face embeddings, and an embedding level AVN with joint learning (AVN-J), which optimizes unimodal models together with a fusion module resulting in an end-to-end training process. In order to evaluate the proposed AVNs on noisy data, a couple of data augmentation techniques were designed: a feature-level multi-modal data augmentation and an embedding-level noise distribution matching data augmentation.
All of the above-mentioned works utilized two modalities, that are audio recordings and visual images. In this paper using the SpeakingFaces dataset we explore how 3 modalities — audio, visual and thermal streams — can affect the overall performance of a person verification task.
\fi
\subsection{Encoder module}
The encoder module $Encoder(\cdot)$ is based on the ResNet34~\cite{DBLP:conf/cvpr/HeZRS16} network.
Specifically, for image input (i.e., visual and thermal), we used a variation of ResNet34, in which the number of channels in each residual block is halved in order to reduce computational costs.
For audio, we used another variation of ResNet34, in which average-pooling was replaced with self-attentive pooling (SAP)~\cite{DBLP:conf/odyssey/CaiCL18} to aggregate frame-level features into an utterance-level representation.
The encoder takes in a raw feature $x_{i}$ and outputs the corresponding embedding vector representation $e_{i}\in\mathbb{R}^{512}$:
\begin{equation}
e_{i} = Encoder(x_{i})
\end{equation}
where $i\in\{a,v,t\}$ is used to represent the stream source (i.e., audio, visual, and thermal).
We trained a separate encoder module for each data stream.
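The self-attentive pooling used in the audio encoder can be sketched as follows (an illustrative simplification of SAP~\cite{DBLP:conf/odyssey/CaiCL18}; the projection parameters $W$, $b$ and the context vector $u$ would be learned in practice, and are passed in here for clarity):

```python
import numpy as np

def self_attentive_pooling(frames, W, b, u):
    """Aggregate frame-level features (T, D) into one utterance-level
    vector: score each frame, softmax over time, weighted-sum frames."""
    h = np.tanh(frames @ W + b)        # (T, D') hidden projection
    scores = h @ u                     # (T,) per-frame scalar scores
    scores = scores - scores.max()     # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ frames              # (D,) pooled representation
```

When all frames receive equal scores, SAP degenerates to simple average pooling, which it replaces in our encoder.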
\subsection{Fusion module}
As a fusion module, we tried two different approaches: 1)~attention mechanism and 2)~score averaging.
\subsubsection{Attention mechanism}
In the attention-based approach, we implemented an embedding-level fusion similar to that in~\cite{shon2019noise}, where the fusion module can pay attention to a salient modality among audio $e_a$, visual $e_v$, and thermal $e_t$ representations.
Specifically, it first computes the attention score $\hat{\alpha}_{\{a,v,t\}}\in \mathbb{R}^3$ as follows:
\begin{equation}
\hat{\alpha}_{\{a,v,t\}} = \mathbf{W}[e_a,e_v,e_t]+\mathbf{b}
\end{equation}
where the weight matrix $\mathbf{W}\in\mathbb{R}^{3\times1536}$ and the bias vector $\mathbf{b}\in\mathbb{R}^3$ are learnable parameters.
Next, the fused person embedding vector $e_p$ is produced by the weighted sum:
\begin{equation}
e_p=\sum_{i\in\{a,v,t\}}\alpha_ie_i\text{\ \ , where\ \ } \alpha_i=\frac{\exp(\hat{\alpha}_i)}{\sum_{k\in\{a,v,t\}}\exp(\hat{\alpha}_k)}
\end{equation}
The attention-based fusion for the bimodal systems (i.e., audio-visual) was designed in a similar manner, using two modalities instead of three.
Note that the attention-based fusion module is jointly trained with the encoder modules in an end-to-end fashion.
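A minimal sketch of this fusion step (illustrative only: a small embedding dimension is used instead of 512, and the learnable parameters $\mathbf{W}$ and $\mathbf{b}$ are passed in rather than trained):

```python
import numpy as np

def attention_fusion(e_a, e_v, e_t, W, b):
    """Embedding-level fusion: compute one attention score per modality
    from the concatenated embeddings, softmax over modalities, and take
    the weighted sum of the three modality embeddings."""
    concat = np.concatenate([e_a, e_v, e_t])   # (3*D,)
    scores = W @ concat + b                    # (3,) one score per modality
    scores = scores - scores.max()             # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha[0] * e_a + alpha[1] * e_v + alpha[2] * e_t
```

With zero-initialized parameters the attention weights are uniform, so the fused embedding starts as the plain average of the three modalities; training then shifts attention toward the most salient modality.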
\subsubsection{Score averaging}
In the score-level fusion, we simply averaged the scores $s_i\in[0,2]$ produced by the unimodal verification systems, where a score is the Euclidean distance between two L2-normalized embeddings.
For example, for the trimodal system, the final score $s_{\textit{final}}$ is computed as follows:
\begin{equation}
s_{\textit{final}} = \frac{\sum_{i\in\{a,v,t\}}s_i}{3}
\end{equation}
Likewise, for the bimodal system, the final score is computed by averaging the scores of two unimodal systems.
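The scoring and score-level fusion can be sketched as follows (a minimal illustration; a score here is the Euclidean distance between L2-normalized embeddings, which lies in $[0,2]$, and the fused score is the plain average over the available modalities):

```python
import numpy as np

def verification_score(e1, e2):
    """Euclidean distance between L2-normalized embeddings; in [0, 2]."""
    e1 = e1 / np.linalg.norm(e1)
    e2 = e2 / np.linalg.norm(e2)
    return np.linalg.norm(e1 - e2)

def fused_score(pairs):
    """Score-level fusion: average the unimodal distances, e.g. over the
    audio, visual, and thermal (enrollment, test) embedding pairs."""
    return sum(verification_score(a, b) for a, b in pairs) / len(pairs)
```

The same function covers the bimodal case by passing two pairs instead of three.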
\subsection{Unimodal Person Verification}
The results of the unimodal person verification experiment on the \textit{easy} test set are given in the first part of Table~\ref{tab:results}.
When train and evaluation sets are clean, the best EER performance is achieved by the visual modality (4.09\%), followed by the audio modality (9.29\%) and then the thermal modality (10.58\%).
Under the noisy condition, the performance degrades by 28\%, 7\%, and 20\% relative EER for the audio, visual, and thermal systems, respectively.
According to these results, the visual system performs the best and is more robust to corrupted data.
Interestingly, the results in Table~\ref{tab:results_samegender} show that the best EER performance on the \textit{hard} test set, under the clean condition, is achieved by the visual modality (5.23\%), followed by the thermal modality (12.34\%) and then the audio modality (14.13\%).
Under the noisy condition, the performance degrades by 27\%, 12\%, and 20\% relative EER for the audio, visual, and thermal systems, respectively.
These results indicate that the visual system remains superior, but the thermal modality outperforms the audio in differentiating subjects of the same gender\footnote{At least in our experimental settings.}.
To further examine the performance of the unimodal systems, we computed accuracy on the \textit{easy} test set under the \textit{clean} condition, separating the same- and opposite-gender pairs.
The results are presented in the first part of Table~\ref{tab:results_acc}.
Following the observations from the above, when a given pair of subjects belong to the same gender, the visual modality (95.07\%) is superior, while the thermal modality (88.06\%) is better than the audio modality (86.72\%).
However, the audio modality (98.45\%) performs the best in distinguishing subjects of opposite gender.
\iffalse
The results show that the EER performances of unimodal models on the test set deteriorate under the second condition by 60\%-82\% relative to the first condition.
Presumably, this is due to the domain mismatch between train and evaluation sets.
In the third condition, where the noise rates in the train and evaluation sets are matched, the EER performances on the test set degrades by 7\%-27\% relative to the first condition.
These results demonstrate the poor robustness of unimodal systems under the noisy conditions.
Overall, the best EER is achieved by visual modality followed by audio and then thermal in all three conditions.
\fi
We also analysed the verification errors made by the unimodal systems on the \textit{easy} test set under the \textit{clean} condition (see Figure~\ref{fig:ver_errors}).
We observed that the number of overlapping errors between different modalities is lower than the errors made by a single modality.
This indicates that these modalities possess strong complementary properties; therefore, multimodal systems that can combine them effectively have good potential.
Future work should focus on analysing these errors in great detail to identify the weaknesses and strengths of each modality.
\subsection{Unimodal Person Verification}
The results of the unimodal person verification experiment on the \textit{easy} test set are given in the first part of Table~\ref{tab:results}.
When train and evaluation sets are clean, the best EER performance is achieved by the visual modality (4.09\%), followed by the audio modality (9.29\%) and then the thermal modality (10.58\%).
Under the noisy condition, the performance degrades by 28\%, 7\%, and 20\% relative EER for the audio, visual, and thermal systems, respectively.
According to these results, the visual system performs best and is the most robust to corrupted data.
Interestingly, the results in Table~\ref{tab:results_samegender} show that the best EER performance on the \textit{hard} test set, under the clean condition, is achieved by the visual modality (5.23\%), followed by the thermal modality (12.34\%) and then the audio modality (14.13\%).
Under the noisy condition, the performance degrades by 27\%, 12\%, and 20\% relative EER for the audio, visual, and thermal systems, respectively.
These results indicate that the visual system remains superior, but the thermal modality outperforms the audio in differentiating subjects of the same gender\footnote{At least in our experimental settings.}.
To further examine the performance of the unimodal systems, we computed accuracy on the \textit{easy} test set under the \textit{clean} condition, separating the same- and opposite-gender pairs.
The results are presented in the first part of Table~\ref{tab:results_acc}.
Consistent with the observations above, when a given pair of subjects belong to the same gender, the visual modality (95.07\%) is superior, while the thermal modality (88.06\%) is better than the audio modality (86.72\%).
However, the audio modality (98.45\%) performs the best in distinguishing subjects of opposite gender.
We also analysed the verification errors made by the unimodal systems on the \textit{easy} test set under the \textit{clean} condition (see Figure~\ref{fig:ver_errors}).
We observed that the number of overlapping errors between different modalities is lower than the errors made by a single modality.
This indicates that these modalities possess strong complementary properties, and therefore, multimodal systems that can combine them effectively have good potential.
Future work should focus on analysing these errors in greater detail to identify the weaknesses and strengths of each modality.
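The overlap analysis itself reduces to set operations on the per-trial error sets of each modality; the sketch below is purely illustrative, with hypothetical trial identifiers:

```python
# Illustrative overlap analysis of verification errors between modalities.
# Trial IDs are hypothetical; in practice these are the misclassified pairs.
audio_errors   = {"t01", "t03", "t07", "t09"}
visual_errors  = {"t02", "t03", "t11"}
thermal_errors = {"t03", "t05", "t09"}

# Errors shared by all modalities: no single stream can correct them.
shared_all = audio_errors & visual_errors & thermal_errors

# Errors unique to the audio stream: the visual/thermal streams fix these,
# which is what makes the modalities complementary.
audio_only = audio_errors - (visual_errors | thermal_errors)

# Fraction of all errors that every modality makes in common.
overlap_rate = len(shared_all) / len(audio_errors | visual_errors | thermal_errors)
```

A low `overlap_rate` is exactly the signature of complementarity that motivates the multimodal systems evaluated next.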
\begin{table}[t]
\caption{Accuracy (\%) results (mean $\pm$ std) on the \textit{easy} test set. The data condition is \textit{clean} and multimodal systems are based on score fusion. Bimodal: Audio-Visual. Trimodal: Audio-Visual-Thermal.}
\label{tab:results_acc}
\vspace{2mm}
\centering
\renewcommand\arraystretch{1.1}
\begin{tabular}{l|cc|c}
\toprule
\textbf{Modality} & \textbf{\scell{Same\\gender}} & \textbf{\scell{Opposite\\gender}} & \textbf{Overall}\\
\midrule
Audio & $86.72\pm1.88$ & $98.45\pm1.35$ & $89.79\pm1.73$\\
Visual & $95.07\pm0.86$ & $98.28\pm0.90$ & $95.91\pm0.87$\\
Thermal & $88.06\pm1.75$ & $93.24\pm0.70$ & $89.41\pm1.29$\\\hline
Bimodal & $97.07\pm0.73$ & $99.87\pm0.17$ & $97.80\pm0.58$\\\hline
Trimodal & \textbf{97.61 $\pm$ 0.39} & \textbf{99.87 $\pm$ 0.08} & \textbf{98.20} $\pm$ \textbf{0.31}\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Multimodal Person Verification}
The experimental results for our bimodal (audio-visual) and trimodal (audio-visual-thermal) verification models are presented in the second and third parts of Table~\ref{tab:results} and Table~\ref{tab:results_samegender}, respectively.
The models were constructed using two fusion methods, soft attention and score averaging, as mentioned in the previous sections.
The latter approach provides superior performance for both bimodal and trimodal verification systems, which is consistent with the observations made in~\cite{qian2021audio}, but different from the findings in~\cite{shon2019noise}.
The experimental results show that the multimodal systems outperform the unimodal systems.
For the \textit{easy} test set, the best bimodal system reduced EER by 46\% (from 4.09 to 2.20) and 40\% (from 4.36 to 2.61) relative to the visual system under the clean and noisy conditions, respectively.
The best trimodal system reduced EER by 56\% (from 4.09 to 1.80) and 51\% (from 4.36 to 2.13) under the clean and noisy conditions, respectively.
Remarkably, these improvements were achieved by simply averaging the scores of the unimodal systems.
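Score averaging of this kind is trivial to implement; the following sketch (function names and toy scores are ours, purely illustrative) averages aligned per-trial scores of the unimodal systems:

```python
import numpy as np

def fuse_scores(score_lists):
    """Score-level fusion: average the per-trial scores of unimodal systems.

    score_lists: list of 1-D arrays, one per modality, aligned by trial.
    Returns the fused score per trial.
    """
    scores = np.vstack(score_lists)   # shape: (n_modalities, n_trials)
    return scores.mean(axis=0)        # simple unweighted average

# Toy example: audio, visual and thermal scores for three trials.
audio   = np.array([0.2, 0.8, 0.5])
visual  = np.array([0.4, 0.9, 0.1])
thermal = np.array([0.3, 0.7, 0.3])
fused = fuse_scores([audio, visual, thermal])
```

In practice the unimodal scores should be comparably scaled (e.g., all cosine similarities) before averaging.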
For the \textit{hard} test set, we observed similar behaviour. The best bimodal system reduced EER by 41\% (from 5.23 to 3.10) and 33\% (from 5.84 to 3.89) relative to the visual system under the clean and noisy conditions, respectively.
The best trimodal system reduced EER by 52\% (from 5.23 to 2.53) and 42\% (from 5.84 to 3.40) under the clean and noisy conditions, respectively.
The trimodal system achieved better results than the bimodal system on both evaluation sets under the clean and noisy conditions.
On the \textit{easy} test set, EERs were improved by 18\% relative for both conditions (from 2.20 to 1.80, and from 2.61 to 2.13).
On the \textit{hard} test set, the trimodal system surpassed the bimodal system by 18\% (from 3.10 to 2.53) and 13\% (from 3.89 to 3.40) relative EER, under the \textit{clean} and \textit{noisy} conditions respectively.
The analysis of the attention network parameters suggests that the mechanism learned to prioritize the visual stream when the streams were not corrupted.
In contrast, the network focused on all three streams when the data were noisy.
Therefore, augmenting the train set with noise plays an important role in learning more robust features, and future work should study different augmentation methods for the multimodal person verification task.
Unfortunately, combining three modalities using the soft attention mechanism is challenging due to the instability of the training process, and further studies should be conducted to explore other attention methods.
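For reference, a minimal NumPy sketch of soft-attention fusion over modality embeddings (a simplified illustration with a single linear scoring layer, not the exact architecture used in our systems):

```python
import numpy as np

def soft_attention_fuse(embeddings, w, b):
    """Illustrative soft-attention fusion of modality embeddings.

    embeddings: (n_modalities, emb_dim) array, one row per modality.
    w, b: parameters of a linear scoring layer (hypothetical here).
    A softmax over the modality scores gives the fusion weights.
    """
    logits = embeddings @ w + b            # one scalar score per modality
    e = np.exp(logits - logits.max())      # numerically stable softmax
    weights = e / e.sum()                  # attention weights, sum to 1
    fused = weights @ embeddings           # weighted sum of embeddings
    return fused, weights

rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 128))            # audio, visual, thermal embeddings
w = rng.normal(size=128)
fused, weights = soft_attention_fuse(emb, w, 0.0)
```

The softmax weights make the learned preference for particular streams directly inspectable, which is how the attention behaviour described above can be analysed.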
Overall, the results suggest that the addition of thermal image data does indeed enhance the EER performance of multimodal person verification systems.
The importance of supplementing the audio-visual modalities with the thermal images is also accentuated in Table~\ref{tab:results_acc}.
The bimodal system combines the strengths of the two streams and substantially boosts performance on both same- and opposite-gender pairs.
The addition of the thermal stream strengthens the performance of the multimodal system even further for the same-gender pairs, but has an insignificant effect on the opposite-gender pairs.
\section{Introduction}
\label{sec:intro}
\input{1_intro.tex}
\section{Related work}
\label{sec:related}
\input{2_related.tex}
\section{Audio-Visual-Thermal Dataset}
\label{sec:dataset}
\input{3_dataset.tex}
\section{System Architecture}
\label{sec:method}
\input{4_method.tex}
\section{Experimental Setup}
\label{sec:exp_setup}
\input{5_exp_setup.tex}
\section{Experimental Results}
\label{sec:exp_results}
\input{6_exp_results.tex}
\section{Conclusion}
\label{sec:conclusion}
\input{7_conclusion.tex}
\bibliographystyle{IEEEbib}
\subsection{Speaker verification}
Speaker verification aims to verify a person’s claimed identity using voice biometrics.
With the growing popularity of deep neural networks (DNNs), the leading modeling approach for speaker verification has shifted from the traditional Gaussian Mixture Model-Universal Background Model (GMM-UBM)~\cite{reynolds2000speaker} and \textit{i}-vector~\cite{dehak2010front} to deep speaker embedding representation learning.
A typical speaker embedding learning approach is \textit{d}-vector~\cite{variani2014deep}, where a fully-connected DNN is used to extract frame-level deep features.
These features are then averaged to form an utterance-level speaker representation.
Another popular speaker embedding learning approach is the time-delay neural network (TDNN) based \textit{x}-vector~\cite{DBLP:conf/interspeech/SnyderGPK17}, which has been shown to achieve state-of-the-art results in multiple datasets~\cite{snyder2018x}.
Recently, more advanced neural architectures, such as ResNet34, have been shown to further improve performance by extracting \textit{r}-vectors~\cite{zeinali2019but}.
In this work, we also employed ResNet34 to learn speaker embedding vectors from the audio stream.
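The pooling step behind the \textit{d}-vector approach is a simple temporal average of frame-level features, e.g.:

```python
import numpy as np

def utterance_embedding(frame_features):
    """Average frame-level features into a single utterance-level vector
    (the pooling step of the d-vector approach)."""
    return np.asarray(frame_features).mean(axis=0)

# Toy input: 3 frames of 2-dimensional frame-level features.
frames = np.array([[1.0, 2.0],
                   [3.0, 4.0],
                   [5.0, 6.0]])
emb = utterance_embedding(frames)   # -> [3.0, 4.0]
```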
\subsection{Face verification}
Face verification aims to verify a person’s claimed identity using visual images.
Since the impressive success of the AlexNet~\cite{DBLP:conf/nips/KrizhevskySH12} architecture in the ImageNet competition, deep convolutional neural network (DCNN)-based approaches have become the mainstream for the face verification task.
For example, DeepFace~\cite{taigman2014deepface} achieved human-level performance on the challenging LFW benchmark~\cite{LFWTech} for the first time.
Furthermore, in recent years, researchers have explored many different DCNN-based architectures for the face verification task, where remarkable improvements have been achieved.
These architectures include, but are not limited to, DeepID~\cite{sun2015deepid3}, VGGNet~\cite{DBLP:journals/corr/SimonyanZ14a}, FaceNet~\cite{DBLP:conf/cvpr/SchroffKP15}, and ResFace~\cite{DBLP:conf/cvpr/HeZRS16}.
In our work, we will employ ResNet34 to extract embedding vectors from both visual and thermal images.
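Given such embeddings, verification reduces to comparing a pair of vectors, typically by cosine similarity against a decision threshold; a minimal sketch (the threshold value here is arbitrary, for illustration only):

```python
import numpy as np

def cosine_score(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(a, b, threshold=0.5):
    """Accept the claimed identity if the similarity exceeds the threshold."""
    return cosine_score(a, b) >= threshold

# Toy embeddings; cosine similarity is 1/sqrt(2) ~ 0.707.
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])
```

In a deployed system the threshold is tuned on a development set, e.g., at the equal error rate operating point.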
\subsection{Audio-visual person verification}
Although speaker verification and face verification have achieved remarkable progress in recent years, their performance decreases significantly under more challenging conditions.
For example, speaker verification is sensitive to changes in acoustic characteristics due to background noise, the person's mood (e.g., happy or angry), the uttered text, the recording device, distance, and so on.
Similarly, the performance of face verification is vulnerable to variations in illumination, pose, emotion, and distance.
To alleviate the drawbacks of the two verification approaches, researchers are turning to audio-visual verification approaches utilizing both modalities.
The initial work in this direction employed score-level fusion strategies~\cite{DBLP:conf/icassp/SellDSEG18,alam2020analysis,DBLP:conf/clear/LuqueMGAFMMMVH06}, where scores obtained from separately trained unimodal models were combined.
More recent works exploit attention-based fusion~\cite{shon2019noise,qian2021audio} to judiciously combine salient features from input modalities.
Overall, compared to unimodal verification systems, multimodal systems have been shown to be more accurate and robust, especially under noisy conditions.
To facilitate research on audio-visual person verification, various multimedia datasets have been introduced, such as VAST~\cite{DBLP:conf/lrec/TraceyS18}, JANUS~\cite{DBLP:conf/icassp/SellDSEG18}, VoxCeleb 1~\cite{nagrani2017voxceleb} \& 2~\cite{chung2018voxceleb2}.
Additionally, several person verification challenges have been organized~\cite{DBLP:conf/clear/StiefelhagenBBGMS06,sadjadi20202019}.
For example, the 2019 NIST speaker recognition evaluation (SRE) challenge~\cite{sadjadi20202019} proposed to use visual information in conjunction with audio recordings to increase the robustness of verification systems.
As a result, audio-visual fusion was found to result in remarkable performance gains (more than 85\% relative) over audio-only or face-only systems.
In this paper, we investigate supplementing an audio-visual system with a further modality, namely thermal images.
\iffalse
Person verification has been an active field of research for several decades~\cite{zhang2017end, snyder2018x, taigman2014deepface, wen2016discriminative}. Many approaches that have been already proposed are based on a single-stream framework, which is either on a person's voice or on his or her facial appearance, and many of them evolved over time. For example, the approaches for speaker recognition started from traditional probabilistic approaches~\cite{reynolds2000speaker, dehak2010front} and moved to deep speaker embedding ones~\cite{variani2014deep, snyder2017deep}. Whereas the approaches for face recognition have transferred from local descriptors based learning~\cite{cao2010face, lei2013learning} to deep convolutional neural networks~\cite{krizhevsky2012imagenet, parkhi2015deep, schroff2015facenet}.
Although single modality person recognition showed competitive performance, it was inferred that multi-stream framework could boost the performance further~\cite{qian2021audio}. This is because the two modalities might complement each other in situations when one of them makes a wrong decision or is absent. Based on this multi-stream framework has lately attracted more and more attention. At the 2019 NIST Speaker Recognition Evaluation challenge, it was proposed to use visual information in conjunction with the audio recordings to increase robustness of recognition frameworks~\cite{sadjadi20202019, alam2020analysis}. Apart from this, several versions of VoxCeleb - a large-scale audio-visual person recognition dataset - have been published~\cite{nagrani2017voxceleb, chung2018voxceleb2, nagrani2020voxceleb}. These datasets further boosted the development of audio-visual recognition methods~\cite{qian2021audio, chen2020multi, shon2019noise, shon2020multimodal}.
Up to date, a lot of research has been conducted on the utilization of audio or/and visual streams. In the field of speaker recognition, Chung et al. investigated multiple metric-based and softmax-based loss functions under identical conditions in order to identify which produce the best representation of an audio stream~\cite{chung2020defence}. Whereas, Yao et al. developed a new multi-stream Convolutional Neural Network which performs speaker verification tasks by a frequency selection technique - dividing the audio into different frequencies and selecting the one that describes the target best~\cite{yao2020multi}.
In the field of person recognition, Shon et al.~\cite{shon2019noise} proposed an attention-based approach with a feature-level fusion method. The attention mechanism at the core of the network learns to select a more salient modality, which enables the model to be resistant to situations in which either of the modalities is corrupted. Their subsequent work~\cite{shon2020multimodal} introduced fine-tuning of their pre-trained unimodal models in order to prevent overfitting. Chen et al.~\cite{chen2020multi} in their end-to-end multi-modal system explored and compared three methods of fusing voice and face embeddings: Simple Soft Attention Fusion~\cite{shon2019noise}, Compact Bilinear Pooling Fusion~\cite{fukui2016multimodal}, and Gated Multi-Modal Fusion~\cite{arevalo2017gated}. In addition, the authors explored the systems with two loss functions, namely Contrastive Loss with Aggressive Sampling Strategy and Additive Angular Margin Loss~\cite{deng2019arcface}. Qian et al.~\cite{qian2021audio} proposed three audio-visual deep neural networks (AVN) for multi-modal person verification. These are - a feature level AVN (AVN-F), which accepts audio and visual streams simultaneously, an embedding-level AVN (AVN-E), which accepts pre-trained voice and face embeddings, and an embedding level AVN with joint learning (AVN-J), which optimizes unimodal models together with a fusion module resulting in an end-to-end training process. In order to evaluate the proposed AVNs on noisy data, a couple of data augmentation techniques were designed: a feature-level multi-modal data augmentation and an embedding-level noise distribution matching data augmentation.
All of the above-mentioned works utilized two modalities, that are audio recordings and visual images. In this paper using the SpeakingFaces dataset we explore how 3 modalities — audio, visual and thermal streams — can affect the overall performance of a person verification task.
\fi
\subsection{Encoder module}
The encoder module $Encoder(\cdot)$ is based on the ResNet34~\cite{DBLP:conf/cvpr/HeZRS16} network.
Specifically, for image input (i.e., visual and thermal), we used a variation of ResNet34, in which the number of channels in each residual block is halved in order to reduce computational costs.
For audio, we used another variation of ResNet34, in which average-pooling was replaced with self-attentive pooling (SAP)~\cite{DBLP:conf/odyssey/CaiCL18} to aggregate frame-level features into an utterance-level representation.
The encoder takes a raw input feature $x_{i}$ and outputs the corresponding embedding vector $e_{i}\in\mathbb{R}^{512}$:
\begin{equation}
e_{i} = Encoder(x_{i})
\end{equation}
where $i\in\{a,v,t\}$ is used to represent the stream source (i.e., audio, visual, and thermal).
We trained a separate encoder module for each data stream.
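The self-attentive pooling step used in the audio encoder can be sketched as follows. This is a minimal numpy illustration of SAP as described in the cited work, not the implementation used here; the attention dimension and parameter names are illustrative choices.

```python
import numpy as np

def self_attentive_pooling(frames, W, b, v):
    """Aggregate frame-level features (T x D) into a single
    utterance-level vector (D,) via learned attention weights."""
    # Hidden representation of each frame: (T x H)
    h = np.tanh(frames @ W + b)
    # One attention score per frame, normalized with a softmax: (T,)
    scores = h @ v
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Attention-weighted sum of frames -> utterance-level embedding
    return weights @ frames

rng = np.random.default_rng(0)
T, D, H = 50, 64, 32  # frames, feature dim, attention dim (toy sizes)
frames = rng.standard_normal((T, D))
W, b, v = rng.standard_normal((D, H)), np.zeros(H), rng.standard_normal(H)
e = self_attentive_pooling(frames, W, b, v)
print(e.shape)  # (64,)
```

In the actual system this pooling sits on top of the ResNet34 frame-level feature maps and its parameters are trained jointly with the rest of the encoder.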
\subsection{Fusion module}
As a fusion module, we tried two different approaches: 1)~attention mechanism and 2)~score averaging.
\subsubsection{Attention mechanism}
In the attention-based approach, we implemented an embedding-level fusion similar to that in~\cite{shon2019noise}, where the fusion module can pay attention to a salient modality among audio $e_a$, visual $e_v$, and thermal $e_t$ representations.
Specifically, it first computes the attention score $\hat{\alpha}_{\{a,v,t\}}\in \mathbb{R}^3$ as follows:
\begin{equation}
\hat{\alpha}_{\{a,v,t\}} = \mathbf{W}[e_a,e_v,e_t]+\mathbf{b}
\end{equation}
where the weight matrix $\mathbf{W}\in\mathbb{R}^{3\times1536}$ and the bias vector $\mathbf{b}\in\mathbb{R}^3$ are learnable parameters.
Next, the fused person embedding vector $e_p$ is produced by the weighted sum:
\begin{equation}
e_p=\sum_{i\in\{a,v,t\}}\alpha_ie_i\text{\ \ , where\ \ } \alpha_i=\frac{\exp(\hat{\alpha}_i)}{\sum_{k\in\{a,v,t\}}\exp(\hat{\alpha}_k)}
\end{equation}
The attention-based fusion for the bimodal systems (i.e., audio-visual) was designed in a similar manner, using two modalities instead of three.
Note that the attention-based fusion module is jointly trained with the encoder modules in an end-to-end fashion.
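The embedding-level fusion of Eqs. (2) and (3) can be sketched in numpy as below; the weights stand in for the learned $\mathbf{W}$ and $\mathbf{b}$ and are random here, so this only illustrates the computation, not trained behavior.

```python
import numpy as np

def attention_fusion(e_a, e_v, e_t, W, b):
    """Fuse three 512-d modality embeddings into one person embedding."""
    # Attention logits from the concatenated embeddings: (3,)
    logits = W @ np.concatenate([e_a, e_v, e_t]) + b
    # Softmax over the three modalities
    alpha = np.exp(logits - logits.max())
    alpha /= alpha.sum()
    # Weighted sum of the modality embeddings: (512,)
    return alpha[0] * e_a + alpha[1] * e_v + alpha[2] * e_t

rng = np.random.default_rng(1)
e_a, e_v, e_t = (rng.standard_normal(512) for _ in range(3))
W, b = rng.standard_normal((3, 1536)), np.zeros(3)
e_p = attention_fusion(e_a, e_v, e_t, W, b)
print(e_p.shape)  # (512,)
```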
\subsubsection{Score averaging}
In the score-level fusion, we simply averaged the scores $s_i\in[0,2]$ produced by the unimodal verification systems, where a score is the Euclidean distance between two L2-normalized embeddings (and thus lies in $[0,2]$).
For example, for the trimodal system, the final score $s_{\textit{final}}$ is computed as follows:
\begin{equation}
s_{\textit{final}} = \frac{\sum_{i\in\{a,v,t\}}s_i}{3}
\end{equation}
Likewise, for the bimodal system, the final score is computed by averaging the scores of two unimodal systems.
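The score-level fusion above can be sketched directly; this is a minimal illustration with random embeddings in place of real enrollment and test embeddings, and the dictionary keys are hypothetical names.

```python
import numpy as np

def pair_score(e1, e2):
    """Euclidean distance between two L2-normalized embeddings, in [0, 2]."""
    u = e1 / np.linalg.norm(e1)
    v = e2 / np.linalg.norm(e2)
    return np.linalg.norm(u - v)

def fused_score(enroll, test):
    """Average the unimodal scores over the available modalities."""
    scores = [pair_score(enroll[m], test[m]) for m in enroll]
    return sum(scores) / len(scores)

rng = np.random.default_rng(2)
enroll = {m: rng.standard_normal(512) for m in ("audio", "visual", "thermal")}
test = {m: rng.standard_normal(512) for m in ("audio", "visual", "thermal")}
s = fused_score(enroll, test)
print(0.0 <= s <= 2.0)  # True
```

A verification decision is then made by thresholding the fused score; dropping one modality from the dictionaries gives the bimodal variant.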
\section{Introduction}
\label{sec:intro}
\input{1_intro.tex}
\section{Related work}
\label{sec:related}
\input{2_related.tex}
\section{Audio-Visual-Thermal Dataset}
\label{sec:dataset}
\input{3_dataset.tex}
\section{System Architecture}
\label{sec:method}
\input{4_method.tex}
\section{Experimental Setup}
\label{sec:exp_setup}
\input{5_exp_setup.tex}
\section{Experimental Results}
\label{sec:exp_results}
\input{6_exp_results.tex}
\section{Conclusion}
\label{sec:conclusion}
\input{7_conclusion.tex}
\bibliographystyle{IEEEbib}
\section{Introduction}
The interaction of charged macromolecules (macroions) is essential for soft and biological materials to maintain their complex structure and distinct functioning. In many cases, charge patterns
DNA microarrays \cite{science95}, surfactant-coated surfaces \cite{Klein},
random polyelectrolytes and polyampholytes \cite{rand_polyelec}
present examples of such disordered charge distributions.
The charge pattern can be either set and quenched in the process of assembly of these surfaces,
or can exhibit various degrees of annealing when interacting with other macromolecules in aqueous solutions.
Disorder annealing in charged systems may result from different sources: finite mobility and mixing of charged units (lipids and proteins) in lipid membranes \cite{lipowsky}, conformational rearrangement of DNA chains in DNA microarrays \cite{science95}, and charge regulation of contact surfaces bearing weak acidic groups in aqueous solutions \cite{ParsegianChan}, to name a few.
In reality, one may deal with a more complex situation where the surface charge pattern
displays an intermediate character \cite{Klein} and thus may neither be considered as
purely quenched ({\em i.e.}, with fixed random spatial distribution) nor as purely annealed ({\em i.e.},
thermally equilibrated with the bulk solution).
Charge disorder appears to produce electrostatic features that are remarkably different from
those found in non-disordered systems. Mounting experimental evidence shows that like-charged
phospholipid membranes and fluid vesicles, which primarily contain mobile surface charges,
may undergo aggregation and fusion in the presence of multivalent cations \cite{lipowsky}.
A similar behavior is observed with negatively charged mica surfaces exposed to a solution of
positively charged surfactants \cite{Klein}; here formation of a random mosaic of surfactant
patches on apposing surfaces (after the surfactant is adsorbed from the bathing solution onto the surfaces)
leads to a long-range attraction and thus a spontaneous jump to a collapsed state.
Although this behavior is akin to the transition to the primary minimum
within the standard DLVO theory of weakly charged systems \cite{DLVO,Israelachvili}, the attractive forces at work here exceed
the universal van-der-Waals forces \cite{parsegian} incorporated in the DLVO theory by a few
orders of magnitude \cite{Klein}.
In fact, the emergence of an instability is not captured by the standard theories of charged systems that
incorporate static, non-disordered charge distributions for macromolecular surfaces. These theories
cover both mean-field \cite{DLVO,Israelachvili,Andelman}
or weak-coupling limit (including the Gaussian-fluctuations correction around the
mean-field solution \cite{podgornik-FI}) as well as the
strong-coupling (SC) limit \cite{SC_review,netz,shkl,levin}, where the central theme is the absence or emergence of
electrostatic correlations induced by neutralizing counterions in the system
that give rise to attractive interactions between like-charged objects.
Both uniform \cite{DLVO,Israelachvili,Andelman,netz} as well as
modulated \cite{charge-modulation} charge distributions have been considered in this context. In the mean-field regime,
like-charged objects always repel. The opposite limit of strong coupling (realized, {\em e.g.}, with high-valency counterions, highly charged macroions, a low medium dielectric constant or low temperature \cite{SC_review,netz}) is, by contrast, dominated by correlation-induced attractive forces that can bring two apposing like-charged surfaces to very small separation distances. At small separations, however, a universal repulsion due to
the confinement entropy of intervening counterions sets in and {\em stabilizes} the system in a closely packed bound state
with a finite surface-surface separation \cite{SC_review,netz}.
This is true even for surfaces of opposite (unequal) uniform charge distribution
\cite{to-be-published}. Therefore, other mechanisms have to be at work that would lead to attractions strong enough to
counteract such repulsive forces and lead to collapse or instability in a system of
charged macroions.
One such mechanism, which we propose here, is the disorder of the charge distribution along macromolecular surfaces; it turns out to be as significant as the counterionic correlations and could provide a new paradigm in the theory of charged soft matter.
Previous studies of charge disorder on macromolecular surfaces
have investigated both types of quenched \cite{rand_polyelec,naji_podgornik,Rudi-Ali,Fleck}
and annealed \cite{rand_polyelec,Fleck,Shklovskii_mobile,Wurger,vonGrunberg,Harries} disorder (including, specifically,
the classical work on charge-regulating surfaces \cite{ParsegianChan}). They mainly deal with situations
where the system is in equilibrium and, on the question of electrostatic interaction \cite{Wurger,vonGrunberg,Harries}, focus
primarily on the weak-coupling regime, where disordered surfaces of equal mean
charge always repel and no collapse or instability arises
\cite{note_instability}.
A systematic analysis of quenched disorder effects is presented in the previous works of two of the present
authors \cite{naji_podgornik,Rudi-Ali}, where it was shown that
not only can electrostatic interactions between like-charged objects turn from repulsive to attractive due to counterionic correlations,
but also the disorder of the surface charge itself can give rise to an additive long-range attraction.
This is most clearly demonstrated by attraction induced between disordered surfaces of {\em zero} mean charge
but with a finite variance of the disordered charge distribution \cite{Rudi-Ali}. In the SC limit, the quenched disorder-induced
attraction may be so strong that it can dominate the entropic repulsion at small separations and continuously shrink the SC surface-surface bound state \cite{SC_review,netz}
upon increasing the quenched disorder variance, thus predicting a {\em continuous collapse transition} between a stable and a collapsed phase beyond a threshold disorder variance \cite{naji_podgornik}. Since the experimental situation may be more
complex \cite{Klein}, and it might not allow for straightforward differentiation of the charge pattern into
purely quenched and purely annealed cases,
we next set ourselves to explore possible effects from partial annealing of the surface charge.
If there is a fingerprint of the partially annealed surface charge disorder on the nature and
magnitude of surface interactions, this
would help in assessing whether the experimentally observed interactions can be interpreted
in terms of disorder-induced interactions or not. This is the motivation with which we venture on this exploration.
In this paper, we present a general formalism for charged systems
with partially annealed disorder by invoking field-theoretic and replica methods. We
then focus on the case of two interacting {\em planar} charged surfaces as a model system and examine
explicitly the effects of disorder on the inter-surface interaction in this system.
Partially annealed disorder in general arises when a coupled motion of slow and fast variables (corresponding
here to surface charges and counterions, respectively)
is present in the system. It represents a non-equilibrium situation, whose investigation requires suitable
methods. The previously studied cases of static, non-disordered surface charge distribution
\cite{SC_review,netz} as well as quenched \cite{naji_podgornik} and annealed surface charge distribution
follow as special cases from our formalism. In the SC limit, we find that
the system of two like-charged planar surfaces with neutralizing counterions becomes
{\em globally unstable} upon annealing the quenched surface charge
and collapses into contact regardless of other system parameters due to strong attractive forces from
the annealing effects. Hence, the quenched phase behavior is not stable against
small annealing perturbations and is dramatically changed. However, stability may be restored in this system by adding a finite
amount of added salt that screens out the long-range Coulomb interactions. In this case, we recover the continuous collapse transition
between a stabilized closely packed bound state of the two surfaces and a collapsed state where the surfaces are in contact.
This is qualitatively similar to the purely quenched case \cite{naji_podgornik}.
However, the partially annealed bound state shows a significantly larger attraction and a smaller optimal surface
separation as compared to the quenched case. In other words, allowing for rearrangements of the macroion charges leads to configurations of
lower free energy. Since the present formalism is quite general, we shall also study the mean-field limit, where (in contrast to the
quenched case where no disorder effects are found \cite{naji_podgornik,Fleck,note_instability}) the disorder annealing appears to suppress the mean-field
repulsion significantly by renormalizing the surface
charge to smaller values. Hence, besides the previously established mechanisms of counterion-induced \cite{netz,shkl,levin}
and quenched disorder-induced \cite{naji_podgornik,Rudi-Ali} correlations, we find that the annealing of macroion charges
provides another mechanism enhancing the like-charge attraction.
The organization of the paper is as follows: We start with the general formalism that allows us to define and to deal with
the partially annealed disorder in terms of an ``effective partition function" that is obtained in the form of a functional
integral over a fluctuating local electric potential field. The structure of this field theory is too complicated to allow for a
general solution. We thus derive asymptotic solutions in
the mean-field limit (corresponding to the Poisson-Boltzmann theory of the classical DLVO framework) as well as the
strong-coupling limit {\em via} an application of the replica formalism. We finally evaluate and analyze the inter-surface
interactions for the specific case of planar charged surfaces in both limits and compare them. We conclude by positioning our
results in the growing framework of the weak--strong coupling formalism for charged macromolecular interactions.
\begin{figure}[t!]
\begin{center}
\includegraphics[angle=0,width=10cm]{fig1newb.eps}
\caption{Schematic view of a system of macroions with partially annealed
disordered charge distribution $\rho({\mathbf r})$ and $q$-valency
counterions at bulk temperature $T$. Surface charges may exhibit a
different effective temperature $T'$ due to their disordered nature
and slow dynamics relative to the fast-relaxing counterions.
As a model system, we study two apposed planar macroion surfaces with disordered
surface charge distributions (specified in the text) located at
$z=\pm a$ at the separation distance $d=2a$. We neglect the dielectric discontinuity
at the boundaries and the presence of added salt in the system \cite{Rudi-Ali} (see also Refs. \cite{kanduc,jho-prl,olli}). }
\label{fig:schematic}
\end{center}
\end{figure}
\section{General formalism}
Let us consider a system of {\em fixed} macroions with disordered charge distribution, $\rho({\mathbf r})$,
immersed in an aqueous medium of
dielectric constant, $\varepsilon$, along with their point-like neutralizing counterions of valency $q$.
In what follows, we shall develop our formalism for an arbitrary ensemble of fixed macroions but
for explicit calculations, we shall
restrict ourselves to a model system of two apposed charged planar surfaces with $\rho({\mathbf r})$
representing the charge distribution along both planar surfaces (see Fig. \ref{fig:schematic}).
In the quenched limit, $\rho({\mathbf r})$ is assumed to be static and only counterions are subject
to thermal fluctuations. In the annealed limit,
both counterions and macroion charges are subject to fluctuations of comparable time
scales ({\em i.e.}, $\tau_{\mathrm{ci}}\sim \tau_{\mathrm{s}}$ respectively) and thus mutually
equilibrate. The intermediate situation of partially annealed disorder by definition
occurs when there is a macroscopic separation of time scales
between the so-called fast and slow variables as frequently observed in glassy systems \cite{dotsenko1,dotsenko2}.
In the present context, counterions comprise the fast variables as they are dispersed and freely fluctuate in the bulk.
Macroion surface charges are assumed to constitute the slow variables with the time scale
$\tau_{\mathrm{s}}\gg \tau_{\mathrm{ci}}$ due to their disordered nature (as, {\em e.g.}, they
are confined typically within closely packed or
quasi-two-dimensional disordered regions such as in
lipid bilayers \cite{lipowsky}, surfactant-coated surfaces
or surface hemimicelles \cite{Klein,hemimicelles}).
Under these conditions, the mutual equilibration of fast and slow variables is hindered. Counterions
rapidly attain their equilibrium at bulk temperature $T$ and thus, because of the wide time-scale gap,
their equilibrium free energy acts as a driving force
pushing the slow dynamics of the surface charges to reach a non-equilibrium stationary state at long times.
This scheme, known generally as the adiabatic elimination of fast variables \cite{haken,Risken,Kaneko},
is investigated in a growing number of works, for instance, in the context of far-from-equilibrium
stationary states and
thermodynamics of two-temperature systems \cite{landauer,two-T}. It has been applied in particular
to study spin glasses with partially annealed disorder of the spin-spin coupling strength \cite{dotsenko1,dotsenko2,cool}. It has
been shown in general that the stationary state of such systems may be described by a Boltzmann-type probability distribution
featuring the temperature of fast degrees of freedom $T$ as well as an effective temperature $T'$
associated with the disorder.
This peculiar two-temperature representation clearly reflects the intrinsically non-equilibrium nature of partial annealing.
Obviously, the equilibrium free energy of fast variables ({\em i.e.}, counterions in our case)
will show up explicitly in the aforementioned probability distribution (here we shall not consider
the relaxational dynamics of the system and focus only on stationary-state properties).
In general, one may thus define an effective
``partition function", ${\mathcal Z}$, in analogy with
the equilibrium partition function that greatly facilitates the analysis of the
system far from equilibrium \cite{dotsenko1,dotsenko2,two-T,cool}.
This procedure is discussed in Appendix \ref{app:A} by
adopting a simple dynamical model for a charged system with surface charge disorder
and by identifying its stationary-state probability distribution. We thus find
\begin{equation}
{\mathcal Z} = \int {\mathcal D}\rho \, \exp\big(- \beta' {\mathcal W}[\rho]\big),
\label{part_fun}
\end{equation}
where $\beta' = \frac{1}{k_{\mathrm{B}}T'}$ and the density functional $ {\mathcal W}[\rho]$ can be cast into the form
\begin{equation}
\beta' {\mathcal W}[\rho] = \frac{1}{2} \int {\mathrm{d}} {\mathbf r}\, g^{-1}({\mathbf r})\,
[\rho({\mathbf r}) - \rho_0({\mathbf r}) ]^2
- n \, \ln {\mathcal Z}_{\mathrm{ci}}[\rho],
\label{eq:eff_pot}
\end{equation}
where
\begin{equation}
n = \frac{T}{T'}.
\end{equation}
Equation (\ref{eq:eff_pot}) includes statistics of both counterions and the disordered charges on
macroion surfaces. The first term is the contribution of the disorder. It can be interpreted
as a general effective disorder potential expanded to the second order around a typical value $\rho_0$
(Appendix \ref{app:A}),
which is always possible if one interprets $g({\mathbf r})$ as playing the role of
an effective disorder ``compressibility". This expansion leads to the standard Gaussian disorder weight with the mean value
$\rho_0({\mathbf r})$ and variance $g({\mathbf r})$ that can be handled most conveniently by replica techniques \cite{dotsenko1,dotsenko2}.
Namely,
\begin{equation}
{\mathcal Z} = \int {\mathcal D}\rho \, {\mathcal P}[\rho]\,\bigg( {\mathcal Z}_{\mathrm{ci}}[\rho] \bigg)^n
= \bigg\langle \!\!\!\!\! \bigg\langle \bigg( {\mathcal Z}_{\mathrm{ci}}[\rho] \bigg)^n \bigg\rangle \!\!\!\!\! \bigg\rangle,
\label{eq:Z_Zci_n}
\end{equation}
where double-brackets denote the average $ \langle \!\! \langle \cdots \rangle \!\! \rangle =
\int {\mathcal D}\rho \, {\mathcal P}[\rho]\, \big(\cdots\big)$ with respect to the Gaussian
probability distribution
\begin{equation}
{\mathcal P}[\rho] = C\, \exp\bigg(\!\! - \frac{1}{2} \int {\mathrm{d}} {\mathbf r}\, g^{-1}({\mathbf r})\,
[\rho({\mathbf r}) - \rho_0({\mathbf r}) ]^2 \bigg)
\label{eq:gaussian}
\end{equation}
with $C$ being a normalization factor.
The second term in Eq. (\ref{eq:eff_pot}) is the equilibrium free energy of a system of counterions
at a {\em fixed} realization of disordered macroion charge, $\rho = \rho({\mathbf r})$.
It follows by integrating over the counterionic degrees of freedom equilibrated
at temperature $T$.
In the grand-canonical ensemble, the fixed-$\rho$ partition function, ${\mathcal Z}_{\mathrm{ci}}[\rho]$,
can be cast into a form of a functional integral as \cite{podgornik-FI,netz}
\begin{equation}
{\mathcal Z}_{\mathrm{ci}}[\rho] =
\int \frac{{\mathcal D}\phi}{{\mathcal Z}_v}
\,\, \, e^{- \beta {\mathcal H}[\phi, \rho]},
\label{eq:Z_ci}
\end{equation}
where $\phi({\mathbf r})$ is the fluctuating electrostatic potential field, $\beta =\frac{1}{k_{\mathrm{B}}T}$ and
\begin{equation}
\label{eq:H}
{\mathcal H} = \int {\mathrm{d}} {\mathbf r}\,
\bigg[ \frac{\varepsilon\varepsilon_0}{2} \big(\nabla \phi\big)^2
+ {\mathrm{i}}\, \rho \,\phi
- \lambda k_{\mathrm{B}}T\,
\Omega({\mathbf r})\, \, e^{- {\mathrm{i}} \beta q e_0 \phi}\bigg]
\end{equation}
is the effective Hamiltonian of the system comprising Coulomb interaction $v({\mathbf x})=(4\pi \varepsilon\varepsilon_0 |{\mathbf x}|)^{-1}$ between all charged units (the first two terms) as well as the entropy of counterions (the last term). Here $\lambda$ is the fugacity, ${\mathcal Z}_v = \sqrt{\det \beta v({\mathbf r}, {\mathbf r}') }$,
and $\Omega({\mathbf r})$ is a geometry function that specifies the free volume available to counterions, {\em i.e.}, the space between the two apposed planar surfaces in the model system in Fig. \ref{fig:schematic}.
The partition function (\ref{part_fun}) can be evaluated by using the replica trick \cite{dotsenko1,dotsenko2},
{\em i.e.}, by taking $n$ to be a positive integer and then extending the results to arbitrary real values of $n=\beta'/\beta$ by analytic continuation. Thus, by using Eq. (\ref{eq:Z_ci}) and averaging over
$\rho({\mathbf r})$, we arrive at the disorder-averaged expression
\begin{equation}
\label{part_fn}
{\mathcal Z} = \int
\bigg(\prod_{a=1}^n \frac{{\mathcal D} \phi_a}{{\mathcal Z}_v} \bigg) \,\, \, e^{-{\mathcal S}[\{\phi_a\}] },
\end{equation}
where the $n$-replica effective Hamiltonian reads
\begin{eqnarray}
\label{ham}
\lefteqn{
{\mathcal S}[\{\phi_a\}] =
\frac{1}{2}\sum_{a,b}\int {\mathrm{d}}{\mathbf r}\, {\mathrm{d}}{\mathbf r}' \, \phi_a({\mathbf r}) {\mathcal D}_{ab}({\mathbf r}, {\mathbf r}') \phi_b({\mathbf r}') \, +
}\\
& & +\sum_a\int {\mathrm{d}}{\mathbf r} \, \bigg[{\mathrm i}\beta \, \rho_0({\mathbf r}) \phi_a({\mathbf r}) -{\lambda}\Omega({\mathbf r}) \, e^{-{\mathrm i}\beta q e_0 \phi_a({\mathbf r})}\bigg].
\nonumber
\end{eqnarray}
The kernel ${\mathcal D}_{ab}({\mathbf r}, {\mathbf r}')$ introduced above is defined as
\begin{equation}
{\mathcal D}_{ab}({\mathbf r}, {\mathbf r}') =
\beta v^{-1}({\mathbf r}, {\mathbf r}') \,\delta_{ab} +
\beta^2 g({\mathbf r})\, \delta({\mathbf r} - {\mathbf r}'),
\label{eq:D_ab}
\end{equation}
where $v^{-1}({\mathbf r}, {\mathbf r}') = -\varepsilon\varepsilon_0 \nabla^2\delta({\mathbf r} - {\mathbf r}')$.
Equation (\ref{part_fn}) carries complete information about the mutual coupling between counterions and the surface charge disorder.
The grand-canonical ``free energy" of the partially annealed system is then obtained as
\begin{equation}
{\mathcal F} = - k_{\mathrm B}T' \ln {\mathcal Z}.
\label{eq:free_Z}
\end{equation}
The special cases of {\em purely quenched} and {\em purely annealed} disorder follow from Eq. (\ref{eq:free_Z})
for $n\rightarrow 0$ and $n=1$, respectively (see Appendix \ref{app:B}).
Note that here the number of replicas, $n=T/T'$, has a direct physical meaning of temperature ratio \cite{dotsenko1,dotsenko2,two-T,cool}. A close examination of Eq. (\ref{ham}) indicates that the partially annealed disorder gives rise to quadratic surface terms of the form $g({\mathbf r}) \phi_a({\mathbf r}) \phi_b({\mathbf r})$. It may thus
lead to renormalization of the mean surface charge (Appendix \ref{app:eff_charge})
as can be seen most clearly by looking at the mean-field equations which we shall derive next.
\section{Mean-field limit}
\label{sec:PB}
The mean-field or Poisson-Boltzmann (PB) equation \cite{DLVO,Israelachvili,Andelman}
(which becomes exact in the limit of small coupling parameters
corresponding, for instance, to low counterion valency or low surface charge density \cite{SC_review,netz})
follows from the saddle-point equation of the functional integral (\ref{part_fn}) as
\begin{equation}
\varepsilon \varepsilon_0 \nabla^2 \bar \phi_a =
{\mathrm i}\,{\lambda}qe_0\,\Omega({\mathbf r})\, e^{-{\mathrm i}\beta q e_0 \bar \phi_a({\mathbf r})} +
{\mathrm i}\, \rho_0({\mathbf r}) + \beta g({\mathbf r})\sum_b \bar \phi_b({\mathbf r}).
\end{equation}
We shall assume no preferences among different replicas on the saddle-point level, thus
$\bar \phi_a({\mathbf r}) = \bar \phi({\mathbf r})$ for $a = 1,\ldots, n$ (replica symmetry {\em ansatz}). In this way we arrive at the PB equation for the real-valued mean-field potential $ \varphi_{\mathrm{PB}}({\mathbf r})={\mathrm i} \bar \phi({\mathbf r})$ as
\begin{equation}
\label{eq:pb_eqn}
\varepsilon\varepsilon_0 \nabla^2 \varphi_{\mathrm{PB}}({\mathbf r})
= - {\lambda}qe_0\,\Omega({\mathbf r})\,e^{-\beta q e_0 \varphi_{\mathrm{PB}}({\mathbf r})} - \rho_{\mathrm{eff}}({\mathbf r}),
\end{equation}
where
\begin{equation}
\rho_{\mathrm{eff}}({\mathbf r}) \equiv \rho_0({\mathbf r}) - \beta' g({\mathbf r})\varphi_{\mathrm{PB}}({\mathbf r})
\label{eq:PB_rho_eff}
\end{equation}
is the effective (renormalized) macroion charge distribution (Appendix \ref{app:eff_charge}). It is therefore seen that in the quenched limit ($n$ or $\beta'\rightarrow 0$), the disorder effects completely vanish on the mean-field level
and the PB theory coincides {\em exactly} with that of a non-disordered
system of bare charge distribution $\rho_0({\mathbf r})$ \cite{naji_podgornik}.
This is however not true for the partially annealed disorder ($n>0$).
To proceed with the PB theory, we shall consider the specific case of two parallel charged plates, normal to the $z$ axis and located at $z=-a$ and $z=+a$, {\em i.e.}, at separation $d=2a$ (Fig. \ref{fig:schematic}).
We take the mean charge distribution and its variance as
\begin{eqnarray}
\rho_0({\mathbf r}) &=& -\sigma e_0 \,\big[\delta(z+a) +\, \delta(z-a) \big]
\label{eq:sigma_plates} \\
g({\mathbf r}) &=& g \,e_0^2 \,\big[\delta(z+a) + \delta(z-a) \big],
\label{eq:g_plates}
\end{eqnarray}
where $g\geq 0$ and without loss of generality we assume that $\sigma\geq 0$ (and thus $q\geq 0$).
Counterions are assumed to be confined in between the plates ({\em i.e.}, $\Omega({\mathbf r}) = 1$ for $|z|<a$
and zero elsewhere), where Eq. (\ref{eq:pb_eqn}) admits the well-known solution \cite{DLVO,Israelachvili,Andelman}
\begin{equation}
\varphi_{\mathrm{PB}}(z) = \frac{1}{\beta q e_0}\ln \cos^2(K z )
\end{equation}
with $K^2=2\pi \ell_{\mathrm{B}} q^2 {\lambda}$ to be determined from the electroneutrality condition stipulating that the total charge on the two surfaces should be equal to the total charge of the counterions. This leads to the equation for $K$ of the form
\begin{equation}
K\mu\, \tan(Ka) = 1 + \gamma \ln \cos^2(K a) \equiv \frac{\sigma_{\mathrm{eff}}}{\sigma}.
\label{eq:PB_K}
\end{equation}
Here $\mu=1/(2\pi q \ell_{\mathrm{B}} \sigma)$ is the Gouy-Chapman length, $\ell_{\mathrm{B}} = e_0^2/(4\pi \varepsilon \varepsilon_0 k_{\mathrm{B}}T)$ the Bjerrum length, and
\begin{equation}
\sigma_{\mathrm{eff}} = \sigma + \beta' g e_0 \varphi_{\mathrm{PB}}(a)
\label{renormedsigma}
\end{equation}
the renormalized surface charge density (Eq. (\ref{eq:PB_rho_eff})). The latter expression
clearly reflects the mixed boundary conditions encountered here, resembling
the situation in the classical charge-regulation problems \cite{ParsegianChan}. The dimensionless parameter
\begin{equation}
\gamma = \frac{n g}{q \sigma}
\label{eq:gamma}
\end{equation}
gives a measure of the {\em disorder annealing} and is obviously proportional to the ratio $n=T/T'$ of the counterion and surface-disorder temperatures.
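The boundary-value origin of Eq. (\ref{eq:PB_K}) may be spelled out explicitly as a brief consistency check: integrating Eq. (\ref{eq:pb_eqn}) across the surface at $z=a$, with vanishing electric field outside the slab, gives
\[
-\varepsilon\varepsilon_0\, \varphi_{\mathrm{PB}}'(a^-) = \sigma_{\mathrm{eff}}\, e_0 .
\]
Inserting $\varphi_{\mathrm{PB}}'(z) = -2K\tan(K z)/(\beta q e_0)$ together with $\mu^{-1} = 2\pi q \ell_{\mathrm{B}} \sigma = \beta q e_0^2 \sigma/(2\varepsilon\varepsilon_0)$ yields the left-hand side of Eq. (\ref{eq:PB_K}), $K\mu\tan(Ka) = \sigma_{\mathrm{eff}}/\sigma$, while combining Eqs. (\ref{renormedsigma}) and (\ref{eq:gamma}) with $\varphi_{\mathrm{PB}}(a) = \ln\cos^2(Ka)/(\beta q e_0)$ reproduces the right-hand side, $\sigma_{\mathrm{eff}}/\sigma = 1 + \gamma\ln\cos^2(Ka)$.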
\begin{figure}[t!]
\begin{center}
\includegraphics[angle=0,width=7.5cm]{fig2new.eps}
\caption{Rescaled PB pressure between two charged plates as a function of their separation $d$ for $\gamma = 0$, 1, 10 and $10^2$. Inset shows the PB counterion density profile for $d/\mu=4$. For $\gamma=0$, we recover the non-disordered results with the pressure decaying as $\sim 1/d^2$ \cite{netz}.
For $\gamma\gg 1$, the pressure decays as $\sim 1/(\gamma d^2)$. }
\label{fig:pb_pressure_density}
\end{center}
\end{figure}
The PB pressure, $P_{\mathrm{PB}}$, acting between the plates is obtained from the standard definition
$\beta P_{\mathrm{PB}} = n_{\mathrm{PB}}(z_0) - \frac{1}{2}\beta\varepsilon\varepsilon_0 ({\mathrm{d}}\varphi_{\mathrm{PB}}/{\mathrm{d}} z)^2|_{z_0}$ \cite{DLVO,Israelachvili,Andelman} for an arbitrary $|z_0|<a$ as
\begin{equation}
\frac{\beta P_{\mathrm{PB}}}{2\pi \ell_{\mathrm{B}} \sigma^2} = (K\mu)^2.
\end{equation}
The counterion number density profile between the plates, $n_{\mathrm{PB}}(z) = {\lambda}\,e^{-\beta q e_0 \varphi_{\mathrm{PB}}(z)}$ \cite{DLVO,Israelachvili,Andelman}, is obtained as
\begin{equation}
\frac{n_{\mathrm{PB}}(z)}{2\pi \ell_{\mathrm{B}} \sigma^2}=\left(\frac{K\mu}{\cos Kz}\right)^2.
\end{equation}
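Equation (\ref{eq:PB_K}) is transcendental but has a unique root $Ka\in(0,\pi/2)$: the left-hand side increases monotonically from zero while the right-hand side decreases from one. A minimal numerical sketch (function names, the rescaled input $\mu/a$, and tolerances are ours, not part of the original analysis) solves it by bisection and reproduces the limiting behaviors discussed below:

```python
import math

def solve_Ka(mu_over_a, gamma):
    """Bisect for x = Ka in (0, pi/2) solving Eq. (PB_K):
    (mu/a) x tan(x) = 1 + gamma ln cos^2(x)."""
    f = lambda x: mu_over_a * x * math.tan(x) - 1.0 - gamma * math.log(math.cos(x) ** 2)
    lo, hi = 1e-12, math.pi / 2 - 1e-12   # f(lo) < 0 < f(hi): unique sign change
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def pressure(mu_over_a, gamma):
    """Rescaled pressure beta P_PB / (2 pi l_B sigma^2) = (K mu)^2."""
    return (solve_Ka(mu_over_a, gamma) * mu_over_a) ** 2

def sigma_eff_ratio(mu_over_a, gamma):
    """sigma_eff / sigma = K mu tan(Ka), cf. Eq. (PB_K)."""
    x = solve_Ka(mu_over_a, gamma)
    return mu_over_a * x * math.tan(x)
```

The same routine confirms that the pressure decreases monotonically with $\gamma$ and that $0\leq\sigma_{\mathrm{eff}}\leq\sigma$, as stated in the text.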
It follows from Eq. (\ref{eq:PB_K}) that the mean-field renormalized surface charge density is always smaller than the bare value
and tends to zero, but never changes sign, as $\gamma$ increases ($0\leq \sigma_{\mathrm{eff}} \leq \sigma$).
Therefore, the surfaces are effectively neutralized and the pressure as well as the counterion number density profile
tend to zero as $\gamma$ increases to infinity (see Fig. \ref{fig:pb_pressure_density}).
This picture relies on the assumption that the number of
surface charged units is not fixed and can respond to changes of the surface potential. Imposing
the constraint that fixes this number obviously rules out surface charge renormalization and
one observes no effects from the disorder annealing in agreement with Ref. \cite{Fleck}.
In the limit $\gamma \rightarrow 0$, we recover the non-disordered \cite{netz} or quenched \cite{naji_podgornik}
mean-field results with the following asymptotic behavior for the pressure,
\begin{equation}
\frac{\beta P_{\mathrm{PB}}}{2\pi \ell_{\mathrm{B}} \sigma^2} \simeq
\left\{
\begin{array}{ll}
2\mu/d
& {\qquad d/\mu\ll 1,}\\ \\
\pi^2\mu^2/d^2
& {\qquad d/\mu \gg 1}.
\end{array}
\right.
\end{equation}
For $\gamma\gg 1$, we find that $(Ka)^2\simeq (\mu/a+\gamma)^{-1}$ and thus
\begin{equation}
\frac{\beta P_{\mathrm{PB}}}{2\pi \ell_{\mathrm{B}} \sigma^2} \simeq
\left\{
\begin{array}{ll}
2\mu/d
& {\qquad \gamma d/\mu\ll 1,}\\ \\
4\mu^2/(\gamma d^2)
& {\qquad \gamma d/\mu \gg 1}.
\end{array}
\right.
\end{equation}
The small separation expression above is nothing but the ideal-gas osmotic pressure of counterions
that dominates over the energetic contributions. At large separations the pressure is found to
decay asymptotically as $\sim 1/(\gamma d^2)$.
The pressure always remains non-negative, and the surface-surface interaction is thus always repulsive in
the mean-field limit. Choosing the non-disordered system as the reference, however,
the decrease in the interaction pressure upon increase of the surface disorder annealing
can be interpreted as being due to an effective disorder-induced attraction whose asymptotic form
could be described by
\begin{equation}
\Delta P_{\mathrm{PB} }\sim \frac{1 - \gamma }{\gamma} \frac{1}{d^2}
\end{equation}
for large $\gamma$.
This asymptotic form again attests to the fact that the way the disorder acts on the mean-field
interaction between the two apposed surfaces is {\em via} a renormalization of the surface charge density.
\section{Strong-coupling (SC) limit}
Next we shall investigate the asymptotic strong-coupling limit which is complementary to the mean-field
limit and where counterion-induced correlations become dominant.
We employ the standard strong-coupling scheme reviewed extensively in Refs. \cite{SC_review,netz} in order
to study the partial annealing effects in the SC limit. The so-called asymptotic
SC theory is obtained from the leading order terms of a non-trivial virial expansion
(in powers of the fugacity) of the partition function (\ref{part_fn}), {\em i.e.}
\begin{equation}
{\mathcal Z} = {\mathcal Z}_0 + \lambda {\mathcal Z}_1 + {\mathcal O}({\lambda}^2).
\label{virialterms}
\end{equation}
It becomes exact in the limit of large coupling parameters corresponding,
for instance, to high counterion valency or high surface charge density \cite{SC_review,netz}.
The zeroth-order (no counterion) term, ${\mathcal Z}_0$, and the first-order (single counterion) term,
${\mathcal Z}_1$, follow from Eq. (\ref{part_fn}) as
\begin{eqnarray}
{\mathcal Z}_0 = \int\bigg(\prod_{a=1}^n \frac{{\mathcal D} \phi_a}{{\mathcal Z}_v} \bigg) \,\, e^{-{\mathcal S}_0},
\label{eq:Z_0}
\\
\label{eq:Z_1}
{\mathcal Z}_1 = \sum_{b=1}^n\int {\mathrm{d}}{\mathbf R} \, \Omega({\mathbf R})
\int\bigg(\prod_{a=1}^n \frac{{\mathcal D} \phi_a}{{\mathcal Z}_v} \bigg) \,\, e^{-{\mathcal S}_0 -
{\mathrm i}\beta q e_0 \phi_b({\mathbf R}) },
\end{eqnarray}
where
\begin{equation}
{\mathcal S}_0 = \frac{1}{2}\sum_{a,b} \int {\mathrm{d}}{{\mathbf r}\, {\mathrm{d}}{\mathbf r}'} \, \phi_a({\mathbf r}){\mathcal D}_{ab}({\mathbf r}, {\mathbf r}') \phi_b({\mathbf r}') +{\mathrm i}\beta \sum_a \int {\mathrm{d}}{\mathbf r} \, \rho_0({\mathbf r}) \phi_a({\mathbf r}).
\end{equation}
We thus need to calculate both these terms for an arbitrary number of replicas, $n$. In doing so, we shall make use of some mathematical relations that we briefly discuss below.
\subsection{Mathematical preliminaries}
\label{subsec:math}
First, it turns out that the most convenient way to carry out the calculations is to replace the
long-range Coulomb interaction $v({\mathbf x})=1/(4\pi \varepsilon\varepsilon_0 |{\mathbf x}|)$
with the exponentially screened Yukawa interaction
\begin{equation}
v_s({\mathbf x})=\frac{e^{-\kappa |{\mathbf x}|}}{4\pi \varepsilon\varepsilon_0 |{\mathbf x}|}
\end{equation}
by introducing a finite screening length $\kappa^{-1}$. In the end, we shall take the limit $\kappa\rightarrow 0$. This procedure is not only technically convenient but also physically relevant
for the present problem: it corresponds to adding a background salt to the system, leading to a screened Coulomb interaction between charged units. Note, however, that the salt effects are taken into account in this way only on the
linear Debye-H\"uckel level. (Further study of the role of added salt in the SC limit is presented elsewhere \cite{olli}.)
Generalization of Eqs. (\ref{part_fn})-(\ref{eq:D_ab}) in the presence of Yukawa interaction is immediate as $v^{-1}({\mathbf r}, {\mathbf r}')$ is simply replaced by
\begin{equation}
v_s^{-1}({\mathbf r}, {\mathbf r}') = \varepsilon\varepsilon_0 (-\nabla^2+\kappa^2)\delta({\mathbf r} - {\mathbf r}').
\end{equation}
Second, in calculating ${\mathcal Z}_0$ and ${\mathcal Z}_1$ one needs to evaluate the determinant and the inverse of the block-matrix ${\mathcal D}_{ab}({\mathbf r}, {\mathbf r}')$. These calculations are straightforward
and may be carried out most easily by employing properties of block-matrices and the operator algebra defined over the Hilbert space $\{|{\mathbf r}\rangle\}$. We shall use the compact notation by defining the operators
$\langle {\mathbf r}|\hat v_s|{\mathbf r}'\rangle=v_s({\mathbf r}, {\mathbf r}')$, $\langle {\mathbf r}|\hat g|{\mathbf r}'\rangle=g({\mathbf r})\,\delta({\mathbf r} - {\mathbf r}')$ (for the screened Coulomb interaction and disorder variance),
and $\langle {\mathbf r}|\hat {\mathbf D}_{ab}|{\mathbf r}'\rangle={\mathcal D}_{ab}({\mathbf r}, {\mathbf r}')$
{\em via} Eq. (\ref{eq:D_ab}), where the latter is defined as an element of the $n\times n$ operator matrix
\begin{equation}
\hat {\mathbf D}=\beta\, {\mathbf e}\otimes \hat v_s^{-1}+\beta^2\,{\mathbf u}\otimes \hat g
\end{equation}
with ${\mathbf e}_{ab}=\delta_{ab}$ and ${\mathbf u}_{ab}=1$.
Also, we shall use $\langle {\mathbf r}|\rho_0\rangle = \rho_0({\mathbf r})$ and the well-known notation
$\int_{{\mathbf r}, {\mathbf r}'} \rho_0({\mathbf r}) v_s({\mathbf r}, {\mathbf r}') \rho_0({\mathbf r}') =
\langle \rho_0|\hat v_s|\rho_0\rangle$, etc.
One can prove the following identities for $\hat {\mathbf D}$
\begin{eqnarray}
\det \hat {\mathbf D} = \big(\det \beta \hat v_s^{-1}\big)^{n}\, \det \big(\hat 1+n \beta \hat g \hat v_s \big),
\label{eq:det_Dab}\\
\beta \sum_a \big\langle {\mathbf r}\big|(\hat {\mathbf D}^{-1})_{ab}\big|{\mathbf r}'\big\rangle
=\big\langle {\mathbf r}\big|\hat v_s \big(\hat 1+n \beta \hat g \hat v_s\big)^{-1}\big|{\mathbf r}'\big\rangle,
\label{eq:inverse_Dab}\\
\beta \big\langle {\mathbf r}\big| (\hat {\mathbf D}^{-1})_{aa}\big|{\mathbf r}\big\rangle
=\big\langle {\mathbf r}\big|\hat v_s \big[\hat 1 - \beta \hat g \hat v_s
\big(\hat 1+n \beta \hat g \hat v_s\big)^{-1}\big]\big|{\mathbf r}\big\rangle,
\label{eq:inverse_Daa}
\end{eqnarray}
which will be used in what follows. Note that the last two quantities do not depend on the replica indices $a$ and $b$.
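These identities follow from the spectral decomposition of ${\mathbf u}$ in replica space (eigenvalue $n$ once, $0$ with multiplicity $n-1$), and they are also easy to confirm in a finite-dimensional representation. The sketch below is our own illustrative check (a random positive-definite matrix stands in for $\hat v_s$, a diagonal one for $\hat g$) and verifies all three numerically:

```python
import numpy as np

# Finite-dimensional check: V stands in for hat v_s (positive definite),
# diagonal G for hat g, e_ab = delta_ab and u_ab = 1 in replica space.
rng = np.random.default_rng(0)
N, n, beta = 4, 3, 1.3

A = rng.normal(size=(N, N))
V = A @ A.T + N * np.eye(N)            # random SPD "interaction" matrix
G = np.diag(rng.random(N))             # non-negative "disorder variance"

D = beta * np.kron(np.eye(n), np.linalg.inv(V)) + beta**2 * np.kron(np.ones((n, n)), G)
Dinv = np.linalg.inv(D)
M = np.linalg.inv(np.eye(N) + n * beta * G @ V)    # (1 + n beta g v_s)^{-1}

block = lambda a, b: Dinv[a*N:(a+1)*N, b*N:(b+1)*N]
row_sum = beta * sum(block(0, b) for b in range(n))   # beta sum_b (D^{-1})_{0b}
diag = beta * block(1, 1)                             # beta (D^{-1})_{11}
logdet_D = np.linalg.slogdet(D)[1]
logdet_rhs = (n * np.linalg.slogdet(beta * np.linalg.inv(V))[1]
              + np.linalg.slogdet(np.eye(N) + n * beta * G @ V)[1])
```

Here `row_sum` and `diag` match the right-hand sides of Eqs. (\ref{eq:inverse_Dab}) and (\ref{eq:inverse_Daa}), and the log-determinants match Eq. (\ref{eq:det_Dab}).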
\subsection{ Virial terms ${\mathcal Z}_0$ and ${\mathcal Z}_1$}
\label{subsec:virial}
Going back to the virial term ${\mathcal Z}_0$, one can perform the Gaussian integral in Eq. (\ref{eq:Z_0}) to obtain
\begin{equation}
{\mathcal Z}_0 = C_0\,e^{-\frac{n}{2}\ln \det \beta\hat v_s -\frac{1}{2}\ln \det \hat {\mathbf D}
- \frac{1}{2}\beta^2 \sum_{a,b}\langle \rho_0|(\hat {\mathbf D}^{-1})_{ab}|\rho_0\rangle}.
\label{eq:Z_0_ii}
\end{equation}
Using Eqs. (\ref{eq:det_Dab}) and (\ref{eq:inverse_Dab}), ${\mathcal Z}_0$ is completely determined.
This term represents the interaction free energy of macroion charges in the absence of counterions.
Next, the Gaussian integral in Eq. (\ref{eq:Z_1}) can be evaluated as
\begin{equation}
{\mathcal Z}_1 = n {\mathcal Z}_0 \int {\mathrm{d}}{\mathbf R} \, \Omega({\mathbf R})
\, e^{-\beta u({\mathbf R})},
\label{eq:Z_1_ii}
\end{equation}
where $u({\mathbf R})$ is nothing but the single-counterion interaction energy with the macroion charges
and reads
\begin{equation}
u({\mathbf R}) = \beta q e_0 \sum_a\langle \rho_0|(\hat {\mathbf D}^{-1})_{ab}|{\mathbf R} \rangle
+\beta\frac{(q e_0)^2}{2} \langle {\mathbf R} |(\hat {\mathbf D}^{-1})_{bb}|{\mathbf R} \rangle.
\label{eq:u_general}
\end{equation}
This is fully determined by virtue of Eqs. (\ref{eq:inverse_Dab}) and (\ref{eq:inverse_Daa}). We have thus derived the general
form of both virial terms in Eq. (\ref{virialterms}) as a function of $n$. We now proceed to the explicit evaluation
of the two virial terms for the system of two apposed charged planar surfaces as defined
by the mean charge distribution and disorder variance (\ref{eq:sigma_plates}) and (\ref{eq:g_plates}) (see Fig. \ref{fig:schematic}).
\subsection{Small-$n$ expansion}
\label{subsec:small_n}
To proceed further we focus on the small-$n$ limit of the above expressions. Our chief goal here is
to examine the stability of the system upon small annealing perturbations of a quenched charge distribution.
The annealing effects on this level are therefore expected to be additive in the free energy of the system.
By expanding Eq. (\ref{eq:Z_0_ii}) for small $n$ we arrive at
\begin{eqnarray}
{\mathcal Z}_0 \simeq C_0'\,\exp\bigg(-\frac{n\beta}{2}\bigg[{\mathrm{Tr}}(\hat g\hat v_s)
+ \langle \rho_0|\hat v_s|\rho_0\rangle\bigg] + \nonumber\\
+\frac{(n\beta)^2}{2}
\bigg[\frac{1}{2}{\mathrm{Tr}}\big(\{\hat g\hat v_s\}^2\big)
+ \langle \rho_0|\hat v_s \hat g \hat v_s|\rho_0\rangle\bigg] \bigg)
\label{eq:Z_0_approx}
\end{eqnarray}
to the lowest orders in $n$. But since we are interested in the inter-plate interaction, we shall
need to determine only the separation-dependent terms.
It easily follows that the ${\mathrm{Tr}}(\hat g\hat v_s)$
term in the above equation does not depend on the inter-surface
distance $d=2a$ and the ${\mathrm{Tr}}(\{\hat g\hat v_s\}^2)$ term
may be calculated straightforwardly (and up to an irrelevant additive term) as
\begin{equation}
\beta^2\,{\mathrm{Tr}}(\{\hat g\hat v_s\}^2) = -S\, (4\pi \ell_{\mathrm{B}}^2 g^2)\, {\mathrm{Ei}}(-2\kappa d),
\label{eq:Trgv^2}
\end{equation}
where $S$ is the total area of each surface and ${\mathrm{Ei}}(x)=\int_{-\infty}^x {\mathrm{d}}t\, e^{t}/t$
is the exponential-integral function.
The remaining two terms in the expression for ${\mathcal Z}_0$ are obtained as
\begin{eqnarray}
\label{eq:rho_v_rho}
\beta \langle \rho_0|\hat v_s|\rho_0\rangle = 2S\,\big(\sigma^2\ell_{\mathrm{B}} \big) \bigg(\frac{2\pi}{\kappa}\bigg)\big(1 + e^{- \kappa d}\big),\\
\beta^2\langle \rho_0|\hat v_s \hat g \hat v_s|\rho_0\rangle =
2S\,\big(g\sigma^2 \ell_{\mathrm{B}}^2 \big) \bigg(\frac{2\pi}{\kappa}\bigg)^2
\big(1 + e^{-\kappa d}\big)^2.
\end{eqnarray}
The small-$n$ expansion for ${\mathcal Z}_1$ requires evaluating
$e^{-\beta u({\mathbf R})}\simeq e^{-\beta u_0({\mathbf R})}\big[1+n \beta u_1({\mathbf R})\big]$, where
we have expanded $u({\mathbf R}) = u_0({\mathbf R}) - n u_1({\mathbf R}) + {\mathcal O}(n^2)$ to the lowest order in $n$, with the two terms
\begin{eqnarray}
u_0({\mathbf R}) = q e_0 \langle \rho_0|\hat v_s|{\mathbf R} \rangle
- \beta\frac{(q e_0)^2}{2} \langle {\mathbf R} |\hat v_s \hat g\hat v_s|{\mathbf R} \rangle,
\label{eq:u_0}\\
u_1({\mathbf R}) = \beta q e_0 \langle \rho_0|\hat v_s \hat g\hat v_s|{\mathbf R} \rangle
- \beta^2\frac{(q e_0)^2}{2} \langle {\mathbf R} |\hat v_s \hat g\hat v_s\hat g\hat v_s|{\mathbf R} \rangle.
\label{eq:u_1}
\end{eqnarray}
We have discarded a self-energy term $(qe_0)^2\langle {\mathbf R} |\hat v_s |{\mathbf R} \rangle$
in Eq. (\ref{eq:u_0}) whose only effect is to rescale
the fugacity, $\lambda$ \cite{naji_podgornik}.
We can then use the explicit expressions
\begin{eqnarray}
\beta e_0 \langle \rho_0|\hat v_s|{\mathbf R} \rangle &=&
-\big(\sigma\ell_{\mathrm{B}}\big)\bigg(\frac{2\pi}{\kappa}\bigg)\big(e^{-\kappa|a-R_z|}+e^{-\kappa|a+R_z|}\big) \\
\beta^2 e_0 \langle \rho_0 |\hat v_s \hat g\hat v_s|{\mathbf R} \rangle
&=& - \big(\sigma g\ell_{\mathrm{B}}^2\big)\bigg(\frac{2\pi}{\kappa}\bigg)^2\big(1 + e^{-\kappa d}\big) \big(e^{-\kappa|a-R_z|}+e^{-\kappa|a+R_z|}\big).
\end{eqnarray}
The two remaining expressions in Eqs. (\ref{eq:u_0}) and (\ref{eq:u_1}) are obtained as
\begin{eqnarray}
\beta^2 e_0^2\langle {\mathbf R} |\hat v_s \hat g\hat v_s|{\mathbf R} \rangle &=& -\big(2\pi g \ell_{\mathrm{B}}^2\big)
\bigg[{\mathrm{Ei}}(-2\kappa |a-R_z|) + {\mathrm{Ei}}(-2\kappa |a+R_z|)\bigg], \\
\beta^3 e_0^2\langle {\mathbf R} |\hat v_s \hat g\hat v_s\hat g\hat v_s|{\mathbf R} \rangle
& =& \big(2\pi g^2 \ell_{\mathrm{B}}^3 \big)
\bigg(\frac{2\pi}{\kappa}\bigg) \bigg[\big(e^{-2\kappa |a-R_z|}+e^{-2\kappa |a+R_z|}+2e^{-2\kappa d}\big)+ \nonumber\\
&& + 2\kappa |a-R_z|\,{\mathrm{Ei}}(-2\kappa |a-R_z|) + 2\kappa |a+R_z|\,{\mathrm{Ei}}(-2\kappa |a+R_z|) + \nonumber\\
&& + 4\kappa d\,\, {\mathrm{Ei}}(-2\kappa d)\bigg].
\end{eqnarray}
We shall need only the small-$\kappa$ results, which read (after discarding irrelevant constants)
\begin{eqnarray}
\beta^2 e_0^2\langle {\mathbf R} |\hat v_s \hat g\hat v_s|{\mathbf R} \rangle &\simeq& -\big(2\pi g \ell_{\mathrm{B}}^2\big)
\ln \big(a^2-R_z^2 \big), \\
\beta^3 e_0^2\langle {\mathbf R} |\hat v_s \hat g\hat v_s\hat g\hat v_s|{\mathbf R} \rangle & \simeq& \big(2\pi g^2 \ell_{\mathrm{B}}^3 \big)
\bigg(\frac{2\pi}{\kappa}\bigg) \big(e^{-2\kappa|a-R_z|}+e^{-2\kappa|a+R_z|}+2e^{-2\kappa d}\big).\nonumber\\
\label{eq:R_vgvgv_R}
\end{eqnarray}
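The small-$\kappa$ forms above rely on the standard expansion ${\mathrm{Ei}}(-x)\simeq \gamma_{\mathrm{E}}+\ln x$ for $x\rightarrow 0$ (with $\gamma_{\mathrm{E}}\simeq 0.5772$ Euler's constant), which is how the logarithm in the first line arises. A minimal quadrature sketch (the function name, cutoff and tolerances are ours) checks the Ei convention used here:

```python
import math

def Ei_neg(x, smax=60.0, nsteps=200000):
    """Ei(-x) = -int_x^infty e^{-t}/t dt for x > 0, via composite trapezoid
    quadrature after substituting t = x + s (tail truncated at s = smax)."""
    h = smax / nsteps
    total, prev = 0.0, math.exp(-x) / x       # integrand at s = 0
    for i in range(1, nsteps + 1):
        t = x + i * h
        cur = math.exp(-t) / t
        total += 0.5 * (prev + cur) * h
        prev = cur
    return -total
```

The quadrature reproduces the tabulated value ${\mathrm{Ei}}(-1)\simeq -0.2194$ and the small-argument expansion to the stated accuracy.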
This completes the explicit evaluation of both virial terms. We now proceed to the evaluation of the interaction
free energy, {\em i.e.}, the part of the free energy that explicitly depends on the separation between the two surfaces.
\subsection{SC free energy}
By using Eqs. (\ref{eq:Z_0_ii})-(\ref{eq:u_general}), one can evaluate the free energy
of a partially annealed system from ${\mathcal F}^{\mathrm{SC}}= -k_{\mathrm{B}}T' \ln {\mathcal Z} $
(Eq. (\ref{eq:free_Z})),
where ${\mathcal Z} = {\mathcal Z}_0 + \lambda {\mathcal Z}_1$ in the SC limit
(Eq. (\ref{virialterms})).
The fugacity can be fixed by the number of counterions $N$ upon transforming to the canonical ensemble {\em via}
\begin{equation}
n N = \lambda \frac{\partial \ln {\mathcal Z}}{\partial \lambda},
\label{eq:N_lambda}
\end{equation}
whereby we obtain
\begin{equation}
\lambda = \frac{N}{\int {\mathrm{d}}{\mathbf R} \, \Omega({\mathbf R})\, e^{-\beta u({\mathbf R})}}.
\label{eq:lambda}
\end{equation}
The canonical SC free energy is obtained {\em via} the Legendre transform,
${\mathcal F}_N^{\mathrm{SC}} = {\mathcal F}^{\mathrm{SC}} + Nk_{\mathrm{B}}T \ln \lambda $, as
\begin{equation}
\frac{\beta{\mathcal F}_N^{\mathrm{SC}}}{N} = -\frac{\ln {\mathcal Z}_0}{Nn}
- \ln \int {\mathrm{d}}{\mathbf R}\, \Omega({\mathbf R})\, e^{-\beta u({\mathbf R})}
\label{eq:SCfree_general}
\end{equation}
supplemented by the electroneutrality condition, which for the two-plate system reads $N q = 2 \sigma_{\mathrm{eff}} S$,
and again stipulates that the total charge on the surfaces equals the charge of the interposed counterions.
Note that here $\sigma_{\mathrm{eff}} $ is the effective surface charge density that has to be determined self-consistently
within the SC theory. It turns out, however, that in the SC limit there is no charge
renormalization due to partial annealing to leading order in $n$,
and thus $\sigma_{\mathrm{eff}} = \sigma$ up to corrections of
order ${\mathcal O}(n^2)$ (see Appendix \ref{app:eff_charge}).
This behavior is in stark contrast with the one found on the mean-field level in Section \ref{sec:PB}.
The above expressions (\ref{eq:N_lambda})-(\ref{eq:SCfree_general}) together with Eqs. (\ref{eq:Z_0_ii})-(\ref{eq:u_general})
are applicable to any general system of fixed macroions with
(Gaussian) disordered charge distribution and arbitrary degree of annealing $n$.
For small annealing perturbations, we have
\begin{equation}
\frac{\beta{\mathcal F}_N^{\mathrm{SC}}}{N} = -\frac{\ln {\mathcal Z}_0}{Nn}
- \ln \int {\mathrm{d}}{\mathbf R}\, \Omega({\mathbf R})\, e^{-\beta u_0({\mathbf R})} - n\beta
\frac{\int {\mathrm{d}}{\mathbf R} \, \Omega({\mathbf R})\, u_1({\mathbf R})\, e^{-\beta u_0({\mathbf R})} }
{\int {\mathrm{d}}{\mathbf R} \, \Omega({\mathbf R})\, e^{-\beta u_0({\mathbf R})} } + {\mathcal O}(n^2),
\end{equation}
where $ {\mathcal Z}_0$ is given by Eq. (\ref{eq:Z_0_approx}).
In what follows, we shall focus again on the two-plate model system
(Eqs. (\ref{eq:sigma_plates}) and (\ref{eq:g_plates})) and make use of the explicit
expressions (\ref{eq:Trgv^2})-(\ref{eq:R_vgvgv_R})
in order to calculate the SC free energy of this system.
We then take the limit of small inverse screening length, $\kappa\rightarrow 0$, as
noted in Section \ref{subsec:math}. We thus find that the SC free energy of this system
adopts a simple form as
\begin{eqnarray}
\frac{\beta {\mathcal F}_N^{\mathrm{SC}}}{N} &\simeq& f_{\mathrm{quenched}} +
\frac{n}{\kappa} \big(8\pi^2 q \ell_{\mathrm{B}}^2 g \sigma\big) d \nonumber\\
&= & f_{\mathrm{quenched}} + \bigg(\frac{2\gamma}{\kappa \mu}\bigg) \frac{d}{\mu},
\label{eq:SCfree}
\end{eqnarray}
where the first term on the right hand side, $f_{\mathrm{quenched}} $,
is nothing but the quenched ($n=0$) rescaled free energy \cite{naji_podgornik}
\begin{equation}
f_{\mathrm{quenched}} = \frac{d}{2\mu} + (\chi -1)\ln d,
\label{eq:SCfree_quenched}
\end{equation}
and the second term in Eq. (\ref{eq:SCfree})
represents the leading-order correction from
partial annealing of the disorder. Note that this term is
linear in $n$, as expected, and is proportional to the screening
length $\kappa^{-1}$.
The quenched free energy (\ref{eq:SCfree_quenched}) is expressed in terms of the dimensionless {\em disorder coupling parameter}
$\chi = 2\pi q^2 \ell_{\mathrm{B}}^2 g$, which gives a measure of the disorder-induced coupling {\em via}
the surface charge variance $g$. This is to be compared with the so-called {\em electrostatic
coupling parameter} $\Xi = 2\pi q^3 \ell_{\mathrm{B}}^2 \sigma$ defined originally for non-disordered
systems \cite{netz}, which measures the strength of counterion-induced correlations and, in the present
context, depends on the mean charge density $\sigma$. The information about the disorder annealing enters only {\em via}
$\gamma = n g/(q \sigma)$ as defined previously in Eq. (\ref{eq:gamma}).
These dimensionless parameters may be used to determine the phase behavior of a
partially annealed system, with the disorder effects being in general quantified by $\chi$ and $\gamma$.
\subsection{Instability and collapse transition}
The quenched free energy (\ref{eq:SCfree_quenched}) comprises the standard non-disordered SC contributions,
{\em i.e.}, the counterion-mediated attraction, $d/2\mu$, and the repulsion, $-\ln d$, due to the confinement entropy of counterions between the two surfaces \cite{SC_review,netz}. But it also includes a long-range additive logarithmic term, $\chi \ln d$, stemming from the disorder variance; this term is attractive and renormalizes the repulsive entropic term.
This peculiar form of the quenched disorder
contribution leads to the previously predicted \cite{naji_podgornik} {\em continuous collapse transition}
at the threshold $\chi_c=1$ between a stable bound state of the two surfaces and a collapsed state
where the surfaces are in contact. In other words, the optimal surface-surface
separation, $d_\ast$, behaves as
\begin{equation}
\frac{d_\ast}{\mu} =\left\{
\begin{array}{ll}
2(1-\chi)
& {\qquad \chi < 1,}\\ \\
0
& {\qquad \chi > 1}.
\end{array}
\right.
\end{equation}
If, on the other hand, the disorder is partially annealed ($n>0$), we see from Eq. (\ref{eq:SCfree}) that the annealing generates a {\em linear} attractive term in the free energy, $\sim \gamma d/\kappa$. It adds to and enhances the counterion-mediated attraction term
(the first term in Eq. (\ref{eq:SCfree_quenched})),
thus exhibiting an effect complementary to that of the quenched disorder contribution.
The threshold of the collapse transition $\chi_c=1$ remains intact up to small
corrections of the order ${\mathcal O}(n)$ \cite{note:threshold} and the
partially annealed bound-state separation is obtained in the limit of small inverse screening length, $\kappa$, as
\begin{equation}
\frac{d_\ast}{\mu} \simeq \left\{
\begin{array}{ll}
\frac{2(1-\chi)}{1 +4\gamma/\kappa\mu}
& {\qquad \chi < 1,}\\ \\
0
& {\qquad \chi > 1},
\end{array}
\right.
\label{eq:d*_partial}
\end{equation}
which is always smaller than the quenched value, reflecting again the enhanced surface-surface attraction in the case of partially annealed disorder.
Note that the optimal separation in the partially annealed case, Eq. (\ref{eq:d*_partial}),
tends to zero, $d_\ast\rightarrow 0$, as the inverse screening length tends to zero, $\kappa\rightarrow 0$.
In other words, the system goes into a collapsed state regardless of the other parameters, thus exhibiting
a {\em global attractive instability}. Hence, a quenched system of macroion charges and counterions is not stable with respect to annealing perturbations of the macroion charge
distribution in the absence of screening effects. Stability may be achieved by adding a finite amount
of salt.
Unfortunately, the effects of added salt have not yet been properly
analyzed in the context of the SC theory (see
Ref. \cite{olli} for a recent attempt) and it is presently difficult to go beyond
the linear description adopted here for the salt screening effects.
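The bound-state separation (\ref{eq:d*_partial}) can also be checked by direct numerical minimization of the free energy (\ref{eq:SCfree}). The sketch below is illustrative only: it works in rescaled units with $\mu=1$ and takes the combination $\gamma/(\kappa\mu)$ as a single input.

```python
import math

def d_star(chi, gamma_over_kappa_mu):
    """Grid-minimize the rescaled SC free energy per counterion (mu = 1),
    f(d) = d/2 + (chi - 1) ln(d) + 2 (gamma/(kappa mu)) d,
    cf. Eqs. (SCfree) and (SCfree_quenched)."""
    f = lambda d: 0.5 * d + (chi - 1.0) * math.log(d) + 2.0 * gamma_over_kappa_mu * d
    grid = (1e-4 * (i + 1) for i in range(100000))   # d/mu in (0, 10]
    return min(grid, key=f)
```

The minimization reproduces the quenched result $d_\ast/\mu = 2(1-\chi)$ at $\gamma=0$, the suppression factor $(1+4\gamma/\kappa\mu)^{-1}$ for $\gamma>0$, and the collapse at $\chi>1$.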
\section{Conclusion and discussion}
In this work we have analyzed the effects of partially annealed disorder in the distribution of macromolecular (macroion) charges on the interaction between two such macromolecular surfaces across a solution containing
mobile neutralizing counterions. Recent experiments on decorated mica surfaces \cite{Klein} covered with a random mosaic of positive and negative charged domains (stemming from the adsorption of cationic surfactants)
clearly point to the existence of a strong attractive surface-surface interaction which, upon formation of such domains,
collapses the system into a compact state with the surfaces being in contact.
This behavior resembles the transition to a primary minimum within the DLVO theory \cite{DLVO,Israelachvili}, although the
attraction mechanism here is strictly non-DLVO \cite{Klein}, producing attractive electrostatic forces that are up to a few orders of magnitude larger than the universal van der Waals forces \cite{parsegian} incorporated in the DLVO theory.
Moreover, the emergence of an attractive instability or even a collapse transition is not predicted within the standard theories
of electrostatic interactions between charged macromolecular surfaces (even between surfaces bearing
opposite charges), thus pointing
to the possible role of the surface charge disorder in the aforementioned collapse transition.
The AFM investigation of the surface texture of mica surfaces \cite{Klein} can be performed only before the measurements of the inter-surface forces and can not be monitored while the two surfaces are brought closer together. One can thus not be sure whether the surface distribution of the charged domains along the surfaces changes on approach of the surfaces or not. This leads in general to three possible scenarios:
\begin{itemize}
\item the surface charge disorder is completely set by the method of preparation of the surfaces and does not change on approach of the surfaces (quenched disorder),
\item the surface charge disorder responds to the changes in the separation of the surfaces just as fast as the mobile charged species (such as counterions) in the solution between the surfaces (annealed disorder),
\item and lastly, the surface charge disorder does respond to the changes of the separation but with a much larger time-scale than the mobile charged species between the surfaces (partially annealed disorder).
\end{itemize}
As the experiment alluded to above gives only the inter-surface interaction as a function of the separation, one is thus led to investigate the {\em fingerprint} of these different scenarios on the behavior of the interaction. Since the effects of annealed
\cite{ParsegianChan,Fleck,Shklovskii_mobile,Wurger,vonGrunberg,Harries} and quenched \cite{naji_podgornik,Rudi-Ali,Fleck} disorder have already been investigated theoretically, we follow up on these analyses by investigating the changes in the inter-surface forces wrought by the intermediate case, where
it is assumed that there is a clear separation of relaxation time scales between the dynamics of the ``external" surface charges
($\tau_{\mathrm{s}}$) and the mobile charges floating in the solvent between the surfaces ($\tau_{\mathrm{ci}}$), {\em i.e.}
\begin{equation}
\label{time_scales}
\tau_{\mathrm{s}}\gg \tau_{\mathrm{ci}}.
\end{equation}
Thus, the fluctuations in the intervening Coulomb fluid have enough time to relax to their local equilibrium state for each configuration of the surface charge disorder, which relaxes much more slowly with changes in the surface
separation. The origin of the different relaxation times for the surface and bulk dynamics could be manifold: the finite mobility and mixing of charged units on the macromolecular surfaces, as in lipid bilayers with embedded charged proteins \cite{lipowsky}; conformational rearrangement of strongly charged DNA chains, as in DNA microarrays \cite{science95}; or charge regulation of contact surfaces bearing weak acidic groups in aqueous solutions \cite{ParsegianChan}. It is in any case very {\em system specific}. This separation of time scales is closely related to the so-called ``adiabatic elimination'' of fast degrees of freedom \cite{haken,Risken,Kaneko}, which has been used frequently in the literature
for systems with widely different time scales \cite{dotsenko1,dotsenko2,landauer,two-T,cool} (see also Appendix \ref{app:A}).
In this work we thus investigate the ``interaction fingerprint" of the partially annealed disorder on two apposed planar
surfaces upon their approach. The analysis presented above points to the fact that partial annealing of the surface charges invariably leads to additional attractive interactions between the surfaces and may even result in
a global attractive instability in the system. The nature of these attractions is quite different, however, if counterions are {\em strongly} or {\em weakly} coupled to charged surfaces in the sense of Netz \cite{SC_review,netz}. The magnitude of this coupling essentially depends on the valency of counterions, $q$, the magnitude of the surface charge density,
$\sigma e_0$, and the Bjerrum length $\ell_{\mathrm{B}}=e_0^2/(4\pi \varepsilon\varepsilon_0 k_{\mathrm{B}}T)$
(incorporating the medium temperature and dielectric constant), and is measured by the electrostatic
coupling parameter
\begin{equation}
\Xi = 2\pi q^3 \ell_{\mathrm{B}}^2 \sigma.
\end{equation}
For weakly coupled counterions, {\em i.e.} specifically in the mean-field limit ($\Xi\rightarrow 0$), the
partially annealed disorder leads to smaller mean-field repulsions due to a renormalized (reduced) value of the surface charge density. The attraction in this case can thus be inferred only from a diminished repulsion with respect to the case of a
non-disordered surface charge distribution. For strongly coupled counterions, {\em i.e.},
on the SC level ($\Xi\rightarrow \infty$), we derive explicitly an additional inter-surface attraction
stemming from the partial annealing of the surface charge distribution.
Note that the qualitative difference between the mean-field and the SC results
goes back to the strong electrostatic correlation effects that are included in the
SC theory but excluded on the mean-field level.
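As a rough numerical illustration of this definition, the coupling parameter can be evaluated for typical aqueous conditions. The following sketch uses illustrative values ($\varepsilon_r=80$, $T=300\,$K, $\sigma=1\,\mathrm{nm}^{-2}$) that are assumptions for orientation only, not parameters taken from this work:

```python
import math

def bjerrum_length_nm(eps_r=80.0, T=300.0):
    """Bjerrum length l_B = e_0^2 / (4 pi eps_r eps_0 k_B T), in nm."""
    e0 = 1.602176634e-19      # elementary charge [C]
    eps0 = 8.8541878128e-12   # vacuum permittivity [F/m]
    kB = 1.380649e-23         # Boltzmann constant [J/K]
    lB_m = e0**2 / (4 * math.pi * eps_r * eps0 * kB * T)
    return lB_m * 1e9

def coupling_parameter(q, sigma_nm2, lB_nm):
    """Electrostatic coupling parameter Xi = 2 pi q^3 l_B^2 sigma."""
    return 2 * math.pi * q**3 * lB_nm**2 * sigma_nm2

lB = bjerrum_length_nm()                 # roughly 0.7 nm in water at 300 K
print(coupling_parameter(1, 1.0, lB))    # monovalent counterions: weak coupling
print(coupling_parameter(3, 1.0, lB))    # trivalent counterions: Xi grows as q^3
```

Since $\Xi \propto q^3$, going from monovalent to trivalent counterions increases the coupling by a factor of 27, which is why counterion valency is the main experimental handle for reaching the strong-coupling regime.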
The reason for this additional attraction (or reduction of the inter-surface repulsion in the mean-field limit)
compared to the purely quenched case is that any rearrangement of the macroion charges, such as is assumed in partial annealing, inevitably leads to configurations of lower (free) energy, which shows up as an effective attraction.
This means that every disordered charged system will be unstable against annealing. In both the mean-field as well as the strong-coupling limit, the effect of partial annealing is quantified by a single partial annealing parameter
\begin{equation}
\gamma = \frac{ng}{q \sigma},
\end{equation}
which depends on the temperature ratio $n=T/T'$. This is in addition to the disorder coupling parameter
\begin{equation}
\chi = 2 \pi q^2 \ell_{\mathrm{B}}^2 g,
\end{equation}
which measures the spread of the charge disorder distribution {\em via} the disorder variance $g$.
The parameter $\gamma$ and the effective surface temperature $T'$ may be estimated experimentally from the mobility ($\Gamma_s$) and the diffusion ($D_s$) coefficients of the surface charges and by applying
Einstein's relation $k_{\mathrm{B}}T' = D_s/\Gamma_s$ (Appendix \ref{app:elimination}).
The temperature ratio quantifying the effects of partial annealing can thus be cast into an equivalent form
\begin{equation}
n = (k_{\mathrm{B}}T) \frac{\Gamma_s}{ D_s}.
\end{equation}
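The three parameters above can be collected in a small helper. This is a minimal sketch; the numerical values of $g$, $\sigma$, $D_s$ and $\Gamma_s$ entering any concrete evaluation are system specific and are not given in the text:

```python
import math

kB = 1.380649e-23  # Boltzmann constant [J/K]

def annealing_parameter(n, g, q, sigma):
    """Partial annealing parameter gamma = n g / (q sigma)."""
    return n * g / (q * sigma)

def disorder_coupling(q, lB_nm, g_nm2):
    """Disorder coupling parameter chi = 2 pi q^2 l_B^2 g."""
    return 2 * math.pi * q**2 * lB_nm**2 * g_nm2

def temperature_ratio(T, D_s, Gamma_s):
    """n = T/T' with k_B T' = D_s / Gamma_s, i.e. n = k_B T Gamma_s / D_s."""
    return kB * T * Gamma_s / D_s
```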
How does one differentiate the effects of quenched and partially annealed disorder? Comparing the results derived in the quenched case \cite{naji_podgornik,Rudi-Ali} with those obtained in this work, one realizes that in the former case the effects of the disorder are limited to the strong-coupling limit, while in the partially annealed case they persist also in the mean-field limit.
This is because in the mean-field limit the surface charge density is renormalized in the presence of
partially annealed disorder. This effect is always absent in the quenched disorder case (Appendix \ref{app:eff_charge}).
Moreover, we find that in the SC regime and in the limit of low screening $\kappa\rightarrow 0$, a charged system that may
be stable in the presence of quenched charge disorder could become globally
unstable and collapse upon partial annealing of the disorder. One should also note that for small non-zero
screening and in the stable phase ($\chi<1$), the system adopts a much smaller surface-surface
distance in the partially annealed case
than in the quenched case as the ratio between the optimal distances in these two cases
is given by $(1 + 4\gamma/\kappa\mu)^{-1}$.
Another indicative feature of partially annealed disorder is the
curious property that the surfaces interact electrostatically even
if the mean surface charge density, $\sigma$, goes to zero, that is when
the surfaces become {\em net} electroneutral! This can be seen most clearly from the ${\mathrm{Tr}}(\{\hat g\hat v_s\}^2)$ term
in Eq. (\ref{eq:Trgv^2}), which in the limit $\kappa\rightarrow 0$ leads to
a logarithmic attractive contribution in the free energy as $\simeq n \,\pi\, g^2\, \ell_{\mathrm{B}}^2 S \, \ln d$ stemming only
from the variance, $g$, of the surface charge disorder (note however that when $\sigma>0$ this is
a subleading term compared to the additive $1/\kappa$ term
considered in Eq. (\ref{eq:SCfree})) \cite{note:threshold}.
This effect disappears in the quenched limit ($n\rightarrow 0$)
unless either the dielectric discontinuities at bounding surfaces or the presence of salt in between the bounding surfaces are taken into account \cite{Rudi-Ali}.
It is thus safe to say that the effects of partially annealed
disorder are in general stronger and more ubiquitous, and may be qualitative in both the mean-field and strong-coupling limits.
Nevertheless, though the above-mentioned interaction fingerprints would help in assessing the importance of disorder, or even its presence, in the charge distribution on macromolecular surfaces, the interactions by themselves would not be enough to draw this conclusion with a reasonable degree of certainty. More detailed experiments, in which one would concurrently measure the interactions as well as the surface charge distributions, would therefore be essential.
Our analysis of the effects of macromolecular charge disorder on their interactions supplements the recently acquired new wisdom of bio-colloidal interactions \cite{SC_review,netz,shkl,levin}, as opposed to its classical formulation \cite{DLVO,Israelachvili}, in quite an illuminating way. Whereas on the DLVO or mean-field level \cite{DLVO,Israelachvili,Andelman} one can formulate the salient features of macromolecular Coulomb interactions with the folk wisdom that opposites attract and likes repel, the strong-coupling paradigm \cite{SC_review,netz,shkl,levin} suggests that
likes attract too if the system is highly charged. To this we would add, in view of our previous work \cite{naji_podgornik,Rudi-Ali} and the work described above, that if the surface charge distribution is disordered, the system may become unstable and collapse
due to attractive disorder-induced forces and that even neutral macromolecular surfaces can interact {\em via}
electrostatic interactions. This is especially true if the disorder distribution is partially annealed, as we set out to show in this work.
\begin{acknowledgements}
R.P. would like to acknowledge the support of Agency for Research and Development of Slovenia
under grants P1-0055(C), Z1-7171 and L2-7080. This study was supported by the Intramural Research Program of the NIH, National Institute of Child Health and Human Development. This research was supported in part by the National Science Foundation under Grant No. PHY05-51164.
\end{acknowledgements}
\section{Introduction}
Cyberattacks cause huge damage to our society, and many of them start with phishing. Phishing tricks people into revealing their sensitive information to the attacker. In particular, phishing URLs are camouflaged as URLs that look familiar to people; careless users click them and have their private information leaked. Therefore, many detection methods have been developed, and in response, attackers started to adopt evasion techniques that camouflage phishing URLs with legitimate patterns (see Section~\ref{sec:eva} for more details)~\cite{Oliveira:2017:DSP:3025453.3025831,Lin:2019:SSE:3349608.3336141,Ho:2019:DCL:3361338.3361427,adv, Ehab}. Thus, it is of utmost importance to prevent phishing attacks that use evasion.
Many machine learning methods have been proposed to detect phishing. They can be categorized into two types: content-based and URL string-based. \textit{Content-based methods} download and analyze web page contents~\cite{8015116,DBLP:conf/icitst/MohammadTM12,2017arXiv170107179S}. However, they require non-trivial computation to process many web pages and are weak against web browser-based exploits (because we need to access their web pages). Most importantly, it is not easy to collect such training data. For all those reasons, content-based methods are not always preferred. \textit{String-based methods} mainly rely on URL string pattern analyses because it is well known that phishing URLs have very distinguishable string patterns~\cite{Ma09beyondblacklists,Blum:2010:LFB:1866423.1866434,6061361,DBLP:conf/icitst/MohammadTM12,Mohammad2014,7207281,Verma:2015:CPU:2699026.2699115,7945048,2017arXiv170107179S,hong2020phishing,anand2018phishing}. Thus, many lexical features to detect phishing URLs have been proposed (see Section~\ref{sec:rel}), and these features are known to be effective. Because string-based methods are computationally lightweight and provide high accuracy, many researchers prefer them for their high efficiency~\cite{2017arXiv170107179S}. Some researchers rely on a blacklist of IP addresses and domains, but its accuracy is known to be mediocre.
Almost all existing string-based methods hardly consider evasion~\cite{adv}. Evasion means that the attacker creates seemingly legitimate phishing URLs by manipulating their patterns to deceive defenders' detection methods. In this work, we consider two more key patterns of phishing attacks to design an advanced string-based detection method that outperforms existing methods and is strong against evasion. First, the attacker is sensitive to cost efficiency~\cite{apwg}. In many cases, they (partially) reuse phishing attack materials and prefer specific hosting companies for their looser policies (\textit{e.g.,} not requiring identification information) and relatively cheap prices. When a private server is used instead of hosting companies, the attacker prefers shared hosting, \textit{i.e.,} one server is used for multiple phishing attack campaigns and also for multiple domains --- in our data, 15.8\% of IP addresses are connected to multiple domains. Second, the attacker creates phishing URLs on top of benign servers, domains, IP addresses, and/or substrings to evade existing detection methods~\cite{apwg}.
Considering all these facts, we design a novel unified framework of natural language processing and network-based inference to detect phishing URLs --- its overall workflow is shown in Fig.~\ref{fig:approach}. We regard each URL as a sentence and segment it into substrings (words) considering the syntax and punctuation symbols of URLs --- URLs have a well-defined syntax, as English does. After that, we build one large network that consists of heterogeneous entities, such as URLs, domains, IP addresses, authoritative name servers, and substrings, and perform our \emph{customized belief propagation} to detect phishing URLs (see Section~\ref{edge_potential}). We note that the above-listed related works do not include any network-based inference schemes. On the contrary, similar network-based inference methods have been used in various other domains~\cite{Manadhata2014,chau2011polonium}. However, our method differs from them in defining \emph{edge potentials}, which decide the penalty when two neighboring entities have different predicted labels.
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{figures/evasion_CCS.pdf}
\vspace{-3em}
\caption{The overall workflow of the proposed method. In the first step, we segment collected URLs into words and remove meaningless ones that correspond to stop words that have high frequency but do not carry useful information. In the second step, we construct a heterogeneous network of URLs, Domains, IP addresses, etc. In the last step, we run the customized belief propagation method to make it robust.}
\label{fig:approach}
\vspace{-1em}
\end{figure*}
Our approach is effective to infer that seemingly unrelated phishing URLs are actually related and is robust to evasion. Because we infer on such a network of heterogeneous entities, \emph{an evasion for a phishing URL is not likely to be successful unless a majority of its neighbors in the network are evaded at the same time} (see Section~\ref{sec:rob} for more detailed discussions with theorems and proofs), which is our main contribution in comparison with existing works.
We crawled many suspicious URLs and also downloaded a couple of datasets released by other researchers~\cite{Sorio2013DetectionOH,ahmad}. In total, we have about 120K phishy and 380K benign URLs. We compare our approach with state-of-the-art baseline methods including graph convolutional networks (GCNs) and feature engineering-based methods. Our method shows the best detection performance among them. Furthermore, in additional evasion tests, our method shows better F-1 scores than other baseline methods. Because the evasion incurs non-trivial expenses for the attacker to access to benign domains, IP addresses, and so forth, our robust detection method greatly increases the attacker's financial burden to perform evasion.
Our contributions can be summarized as follows:
\begin{itemize}
\item We design a novel network-based inference method equipped with our proposed robust edge potential assignment mechanism. Our network inference on top of the edge potential assignment outperforms many baseline methods including feature engineering-based and network-based classifiers.
\item Our proposed network-based method has a theoretical ground on why it is robust to evasion (see Section~\ref{sec:rob}).
\item We conduct experiments with a large set of URLs collected by us and downloaded from other work. Our data covers a wide variety of phishy/benign URL patterns.
\end{itemize}
In the following, we first review the literature in Section~\ref{sec:rel} and describe the motivation of this work in Section~\ref{sec:eva}. Then, in Sections~\ref{sec:proposedMethod} and~\ref{sec:rob}, we design a novel network-based detection method robust to evasion and analyze its theoretical robustness. After that, we conduct extensive experiments on phishing URL detection with and without evasion in Section~\ref{sec:Experiments}. Lastly, in Sections~\ref{sec:crawl} and~\ref{sec:conclusions}, we describe our crawled data and conclude our paper.
For reference, in Appendix~\ref{sec:baseline}, we introduce a set of lexical features widely used to detect phishing URLs and sorted in descending order of the feature importance extracted from the best performing baseline method.
\section{Related Work}\label{sec:rel}
In this section, we review phishing URL detection models and attackers' behavioral pattern analyses.
\subsection{Methods to Detect Phishing URLs}Extensive work has been done to counter phishing attacks~\cite{Ma09beyondblacklists,Blum:2010:LFB:1866423.1866434,6061361,DBLP:conf/icitst/MohammadTM12,Mohammad2014,7207281,Verma:2015:CPU:2699026.2699115,7945048,8015116,2017arXiv170107179S}. Typically, researchers have explored machine learning techniques to automatically detect phishing URLs. A well-defined set of features is vital for the effectiveness of classification algorithms, so we introduce a widely used set of 19 URL features that we collected from related papers in Appendix~\ref{sec:baseline}. All these features are used by some baseline methods in our experiments. None of the mentioned works is based on network inference; all rely on feature engineering.
Mao et al. designed a phishing URL detection method robust to evasion based on web page content features~\cite{8015116}. However, it is not easy to collect such training data in many cases because phishing attacks do not last long and web pages are quickly removed, which is one common drawback of all content-based detection methods~\cite{6061361}.
In~\cite{7945048,2018arXiv180203162L,melissa-dl}, several sequence (\textit{e.g.,} URL in our context) classification models have been proposed. Some of them have an advanced architecture to combine various components such as recurrent neural networks, convolutional neural networks, word embeddings, and their multiple hierarchical layers. We use their ideas as additional baselines. The first one uses long short-term memory (LSTM) cells and the second model uses one-dimensional convolution (1DConv), and the third baseline uses both (1DConv+LSTM).
For a couple of related problems~\cite{Manadhata2014,chau2011polonium}, network-based methods have been used. In~\cite{Manadhata2014}, the authors tried to detect malicious domains (rather than URLs), and the authors in~\cite{chau2011polonium} proposed a heuristic-based belief propagation method to detect malicious code. These two works differ in how they create networks but use the same belief propagation method. Both correspond to the baseline method marked as `POL' in our experiments. Peng et al. and Khalil et al. also tried a network approach for malicious domain detection~\cite{10.1007/978-3-030-12981-1_34,Khalil:2018:DOG:3176258.3176329}. However, their methods are not directly applicable to our phishing URL data.
\subsection{Attackers' Behavioral Patterns}\label{sec:attacker}
The Phishing Activity Trends Report~\cite{apwg} by the Anti-Phishing Working Group is one of the most reputable reports, and we analyzed its quarterly editions. The two most important observations are that i) some web hosting companies are preferred by attackers due to their low prices and anonymity, and ii) many phishing URLs have similar string patterns because they are created by similar tools or reused from old phishing campaigns. There exist many other interesting observations as follows:
\begin{itemize}
\item There has been an increase in the number of phishing attacks using free hosting providers or website builders. It has been reported that 81.7\% of malicious websites are hosted on free hosting providers~\cite{de2021compromised}. These free hosts are easy to use but also allow threat actors to create subdomains spoofing a targeted brand, resulting in a more legitimate-looking phishing site. Free hosts also afford phishers additional anonymity, because these services hide registrant information.
\item The attacker prefers shared hosting which means multiple domains share the same hosting server. Therefore, seemingly unrelated domains may belong to the same host or IP address.
\item Hundreds of vendors are mostly targeted. This continues a years-long trend in which a few hundred companies are attacked regularly. Considering this fact, we crawled URLs from \url{phishtank.com} for the three most frequently attacked vendors: Bank of America, eBay, and PayPal.
\item 53\% of phishing attacks use `com' domains and `net', `org', and `br' domains are next equally preferred.
\end{itemize}
\section{Motivation}\label{sec:eva}
\begin{definition}
Evasion is an effective technique that one can adopt to disturb a machine learning task by creating a `counter-evident' sample, \textit{e.g.,} a phishing URL hosted by a benign domain or IP address. This evasion can be done in various ways. For detailed evasion techniques that we consider, refer to Section~\ref{sec:evasion}.
\end{definition}
\vspace{-6px}
Shirazi et al. showed that existing phishing URL detection methods are adversely impacted by evasion without suggesting a countermeasure~\cite{adv}.
Specifically, they conducted evasion tests that randomly select up to four features of phishing URLs and change the selected features to other benign values. In their non-evasion tests, most classifiers showed high accuracy. In their evasion tests, however, the best performing classifier's accuracy (recall) decreased from 82-97\% to 79-45\% with one feature change, and to 0\% with four feature changes.
To our knowledge, designing a non-content-based phishing URL detection method robust to evasion has not been actively studied. We consider many aspects of URLs, including domains, IP addresses, name servers, and string patterns \emph{except contents} --- because collecting phishing web page contents requires non-trivial effort. Most importantly, our method is based on a network of them. Intuitively speaking, attackers cannot disturb our network-based inference task even after evasion if many neighbors of a phishing URL in the network remain the same as before (see Section~\ref{sec:rob}). Some large-scale evasion can still neutralize our method. However, it requires non-trivial expenses, which decreases the attackers' motivation for such evasion.
While it is hard to measure the evasion cost for money, it includes various intangible efforts, such as exploiting benign web servers to implant their phishing pages, maintaining a custom domain without any phishing campaigns until D-Day to prevent it from being blacklisted, and so forth. In particular, it depends on security environments and skills how long it will take until an attacker successfully exploits an administrator's account of a benign server.
\section{Proposed Method}\label{sec:proposedMethod}
After introducing the overall workflow in our method, we describe its detailed steps with some key visualization results.
\subsection{Overall Method}Fig.~\ref{fig:approach} shows our overall workflow. The entire process can be divided into the following steps:
\begin{enumerate}
\item We crawl many URLs from \url{phishtank.com} and download other works' open datasets.
\item As mentioned earlier, we create a heterogeneous network of URLs, domains, IP addresses, name servers, and substrings (words). We use a standard natural language processing technique to segment URLs into substrings (words) and draw edges between a URL and substrings.
\item We run our customized belief propagation algorithm to infer unknown URLs' phishy/benign labels, which is our main contribution. In particular, this type of inference is called \emph{transductive}. In our case, both training and testing samples co-exist in a network and testing samples' labels are inferred from other known training samples' labels following the network architecture.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth,trim=1.8 0 0 0.71cm, clip]{figures/elbow.png}
\vspace{-1.5em}
\caption{The elbow method to find the stabilizing point of frequency. All substrings (words) before the found stabilizing point are considered as stop words.}
\label{fig:elbow}
\end{figure}
\subsection{Network Construction}\label{sec:ne}
We do a network-based classification rather than feature engineering-based classification. As mentioned earlier, phishing URLs share many common string patterns and various entities are cross-related to each other, so we create a network to represent complicated relationships among multiple entities (vertices) such as URLs, their domains, IP addresses, authoritative name servers, and substrings.
\begin{itemize}
\item We draw an edge between a URL and its domain.
\item We draw an edge between a domain and its resolved IP address. We use \url{domains.google} and \url{virustotal.com} to retrieve domain-IP address resolution history. They return not only current but also all past resolution results with timestamps which enable correct connections. Sometimes, one domain can be connected to multiple IP addresses.
\item We draw an edge between a domain and its authoritative name servers. In general, there exist multiple authoritative name servers for a domain, and one authoritative name server provides resolution services for multiple domains.
\item We draw an edge between a URL (\textit{i.e.,} sentence) and a substring (\textit{i.e.,} word) if the URL contains the substring. For these edges, it is very crucial how to segment a URL into substrings. We will shortly describe this in the following section.
\end{itemize}
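A minimal sketch of this construction in Python (the record layout and helper names are illustrative assumptions, not the implementation used in this work):

```python
from collections import defaultdict

def build_network(records):
    """Build an undirected heterogeneous graph as an adjacency map.
    records: iterable of dicts with keys 'url', 'domain', 'ips',
    'name_servers', and 'words' (pre-segmented substrings)."""
    adj = defaultdict(set)

    def edge(u, v):
        adj[u].add(v)
        adj[v].add(u)

    for r in records:
        url, dom = r["url"], r["domain"]
        edge(url, dom)                    # URL -- its domain
        for ip in r["ips"]:               # domain -- resolved IP addresses
            edge(dom, ip)
        for ns in r["name_servers"]:      # domain -- authoritative name servers
            edge(dom, ns)
        for w in r["words"]:              # URL -- its substrings (words)
            edge(url, w)
    return adj
```

Because every URL is tied to its domain, IP addresses, name servers, and substrings at once, evading the classifier would require changing many of these edges simultaneously.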
\begin{figure}
\centering
\includegraphics[width=0.99\columnwidth,trim=0 0.95 0 0.1cm, clip]{figures/network_new3.pdf}
\vspace{-0.8em}
\caption{The network constructed from our data. Red means phishing URLs and blue means benign URLs. Other colors mean non-URL entities --- name servers are not displayed due to their lesser significance than that of other entities. Note that there exist many clusters. The vertex size represents the strength (more specifically, modularity~\cite{Blondel_2008}) of the cluster that a vertex belongs to.}
\label{fig:network}
\end{figure}
\subsubsection{How to segment a URL into words}\label{sec:seg}
A URL is used to locate resources on the Internet. It consists of several parts: scheme, username, password, host, port number, path, and query string --- some of them can be missing. We use customized word segmentation policies for each part as follows:
\begin{itemize}
\vspace{0.5em}
\item \textit{Scheme} means the protocol, \textit{e.g.,} http and https. Only two words can be possible. However, since these words have very high frequencies, we do not use these two words in our network. We will describe how to remove those \emph{stop words}\footnote{Stop words do not carry meaning but have high frequency values in English, such as `a', `the', `is', and so forth. It is a standard process to remove such stop words in natural language processing algorithms. We use the elbow method based on frequency to detect the stop words of URLs.} of URLs shortly.
\item \textit{Username} and \textit{password} can be specified before host. We segment them using the punctuation symbols, \textit{i.e.,} `//', `:', and `@'. An example is `http://username:[email protected]'.
\item \textit{Hostname} can be simply segmented into words by `.'.
\item Sometimes \textit{path} can be very long, separated by `/'. We use all possible punctuation symbols, such as `/', `.', `!', `\&', `,', `\#', `\$', `\%', and `;', to segment the path part into words.
\item \textit{Query string} is able to contain multiple queries separated by `\&', and each query consists of a query name and a value, \textit{e.g.,} `term=bluebird\&source=browser-search'. We extract words using the two punctuation symbols, `=' and `\&'.
\end{itemize}
\vspace{0.5em}
Because the syntax of URLs is well defined, extracting words can be done very efficiently. However, many meaningless words are also extracted, so before drawing edges between URLs and words, those words should be removed. In the field of natural language processing, it is well known that word frequency follows Zipf's law, \textit{i.e.,} it decays as a power law of the word's frequency rank~\cite{33858}. In particular, this pattern describes stop words in English very well. For instance, the frequency of the most popular stop word `the' occupies 7\% of all word occurrences in the Brown Corpus of American English~\cite{francis79browncorpus}, and the second most popular stop word `of' has 3.5\%. We found that the words extracted from URLs show similar statistics (cf. Fig.~\ref{fig:elbow}). We then remove high-frequency words using the \textit{elbow method}~\cite{ketchen1996application}: it selects as the saturation point the point on the frequency curve whose perpendicular distance to the line segment connecting the two ends is largest, which corresponds to a frequency of 800 in our data. We remove all the words whose frequency values are larger than this threshold.
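A minimal sketch of the elbow criterion described above, returning the index of the point with the largest perpendicular distance to the chord joining the two ends of the (descending) frequency curve:

```python
def elbow_point(freqs):
    """Index of the elbow of a descending frequency curve: the point with
    the largest perpendicular distance to the line segment connecting the
    first and last points of the curve."""
    n = len(freqs)
    x1, y1 = 0.0, float(freqs[0])
    x2, y2 = float(n - 1), float(freqs[-1])
    denom = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
    best_i, best_d = 0, -1.0
    for i, f in enumerate(freqs):
        # distance from point (i, f) to the line through (x1,y1)-(x2,y2)
        d = abs((y2 - y1) * i - (x2 - x1) * f + x2 * y1 - y2 * x1) / denom
        if d > best_d:
            best_i, best_d = i, d
    return best_i
```

All words whose frequency exceeds the frequency at the returned index would then be discarded as URL stop words.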
Fig.~\ref{fig:network} shows the network created by the proposed method. Note that there exists strong correlation between the cluster constructions and the ground-truth phishy/benign labels, which justifies our network-based inference method that will be described shortly. \emph{In this regard, the main intuition in our work is that it is hard to evade our natural language processing and network-based approach unless a majority of entities in a cluster are evaded simultaneously.}
\subsection{Network-based Inference} We employ \textit{loopy belief propagation} (LBP)~\cite{Bishop:2006:PRM:1162264} for our network-based inference. Our key contribution in this step is to define a more advanced edge potential assignment mechanism than that of the state-of-the-art methods~\cite{chau2011polonium,Manadhata2014}.
Because these methods not only follow a majority vote of neighbors but also assign a fixed edge potential regardless of the similarity of the two connected vertices, a vertex is classified as benign mainly when it has many benign neighbors.
However, we want to correctly classify a phishing vertex even if it has many benign neighbors. Therefore, we define a more advanced edge potential assignment mechanism that enables more sophisticated classification and achieves evasion-robustness.
We will describe our edge potential definition in Section~\ref{edge_potential}.
LBP is a message passing algorithm to solve network-based inference problems. Let $x \in X$ be a hidden variable and $N_x$ be the set of its neighboring variables, and let $o \in O$ be an observed variable. In our context, an observed variable is a training sample and a hidden variable is a testing sample. We use $X$ and $O$ to denote the sets of all hidden and observed variables, respectively. Each variable represents the phishy/benign label of an entity in our case. $x$ sends a message to another hidden variable $y \in N_x$ after collecting all messages from $N_x \setminus \{y\}$. Note that observed variables never receive any messages; they only broadcast messages to their neighboring hidden variables. In our case, phishy and benign URLs in the training set are observed variables.
As mentioned, we need to calculate a message $msg_{x \rightarrow y}(\ell)$ from a variable $x$ to another variable $y$ regarding a phishy/benign label $\ell \in L$, where $L=\{phishy, benign\}$ is the set of all possible label options. There exist several message passing strategies: \textit{sum-product}, \textit{max-product}, and \textit{min-sum}. We use the min-sum algorithm, which has better computational stability than the other two. For some high degree vertices, message values tend to quickly decay to zero (\textit{i.e.,} floating point underflow) in the sum-product and max-product algorithms; their product operation reduces to a sum in the min-sum algorithm, which avoids this problem.
The message in the min-sum algorithm is calculated as:
\vspace{0.4em}
\begin{align}\label{eq:msg}
\begin{split}
msg_{x \rightarrow y}(\ell) = &\min_{\ell'} \Big[ \log{\left(1 - \phi_x(\ell')\right)} + \psi_{xy}(\ell, \ell') + \\
&\sum_{k \in N_x \setminus \{y\}} msg_{k \rightarrow x}(\ell') \Big],
\end{split}
\end{align}
\vspace{0.5em}
where $\phi_x(\ell')$ is a \textit{prior} that the variable $x$ has the label $\ell'$ and $\psi_{xy}(\ell, \ell')$ is an \textit{edge potential}, indicating the compatibility between the label $\ell'$ of $x$ and the label $\ell$ of $y$. Note that the log function in the message definition makes the min-sum equivalent to performing the max-product in the log space, which gives better computational stability.
After exchanging messages many times, we first calculate a \textit{cost} of each variable and label pair and then choose the label that yields the \textit{lowest cost}\footnote{The min-sum tries to minimize `cost' as the name `min' suggests whereas both the sum-product and max-product maximize `belief'.} for each variable. The cost, when $x$ has the label $\ell$, is computed as:
\vspace{0.3em}
\begin{align}\label{eq:cost}
Cost(x,\ell) = \log{\left(1 - \phi_x(\ell)\right)} + \sum_{k \in N_x} msg_{k \rightarrow x}(\ell).
\end{align}
Then, the problem that the min-sum algorithm solves can be formally defined as follows:
\vspace{0.4em}
\begin{align}\label{eq:minsum}
\operatorname{argmin}_{g} \sum_{x}Cost(x, g(x)),
\end{align}
\vspace{0.4em}
where $g: X \rightarrow L$ is a label assignment function, with $X$ the set of hidden variables and $L=\{phishy, benign\}$. It is worth mentioning that, in our setting, $x$ can be a hidden variable representing a URL, domain, IP, name server, or word. Our final target is to infer the labels of testing URLs. To this end, we need to infer the labels of the other non-URL entities as well because they connect URLs. Therefore, the min-sum algorithm can be described as a process of finding the label assignment to hidden variables that minimizes the sum of the costs.
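The min-sum updates can be sketched as follows (a minimal sketch in the standard min-sum form, where the label $\ell'$ being minimized is the sender's own label; the data structures, variable names, and toy priors are ours, not the paper's implementation):

```python
import math

L = ("phishy", "benign")

def run_min_sum(neighbors, prior, edge_pot, hidden, n_iter=5):
    """neighbors: dict var -> list of neighboring vars;
    prior[x][l] = phi_x(l); edge_pot(x, y, l, lp) = psi_xy(l, lp);
    hidden: set of hidden variables (observed ones only broadcast)."""
    msg = {(x, y, l): 0.0
           for x in neighbors for y in neighbors[x] for l in L}
    for _ in range(n_iter):
        new = {}
        for x in neighbors:
            # observed variables never receive messages
            incoming = neighbors[x] if x in hidden else []
            for y in neighbors[x]:
                for l in L:
                    new[(x, y, l)] = min(
                        math.log(1.0 - prior[x][lp] + 1e-12)
                        + edge_pot(x, y, l, lp)
                        + sum(msg[(k, x, lp)] for k in incoming if k != y)
                        for lp in L)
        msg = new
    def cost(x, l):
        return (math.log(1.0 - prior[x][l] + 1e-12)
                + sum(msg[(k, x, l)] for k in neighbors[x]))
    # pick the lowest-cost label for every hidden variable
    return {x: min(L, key=lambda l: cost(x, l)) for x in hidden}

# Toy star network: hidden u with one phishy and two benign observed
# neighbors, and a fixed potential (0 if labels agree, 1 otherwise).
graph = {"u": ["v1", "v2", "v3"], "v1": ["u"], "v2": ["u"], "v3": ["u"]}
priors = {"u": {"phishy": 0.5, "benign": 0.5},
          "v1": {"phishy": 0.99, "benign": 0.01},
          "v2": {"phishy": 0.01, "benign": 0.99},
          "v3": {"phishy": 0.01, "benign": 0.99}}
labels = run_min_sum(graph, priors,
                     lambda x, y, l, lp: 0.0 if l == lp else 1.0,
                     hidden={"u"})
# with fixed potentials, u follows its neighbors' majority label:
# labels == {"u": "benign"}
```

With fixed potentials this reduces to majority voting, which is exactly the behavior our similarity-based edge potentials refine below.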
\subsubsection{Edge Potential Assignment} \label{edge_potential}
The definition of the edge potential $\psi_{xy}(\ell, \ell')$ is the key factor in the LBP method. Polonium~\cite{chau2011polonium} used the heuristics of \textit{homophily} and \textit{heterophily}: for example, it assigns an edge potential of $0.5 - \epsilon$ (resp. $0.5 + \epsilon$) if two neighboring variables $x$ and $y$ have different (resp. the same) labels, as shown in the compatibility matrix in Table~\ref{table:polonium}, where $\epsilon$ is usually set to a very small value, \textit{e.g.,} 0.001. We use two labels, phishy and benign. At the end of the network-based inference process, one label is assigned to each entity as the prediction result. The final label assignments are greatly influenced by the edge potential definition.
In contrast to~\cite{chau2011polonium}, we incorporate more factors, such as similarity among entities and an improved compatibility matrix, to derive reliable edge potentials --- we prove shortly in Section~\ref{sec:rob} that reliable similarity definitions can lead to the evasion-robustness in our method. The similarity can be measured via various embedding approaches, such as Doc2Vec \cite{le2014distributed} and Node2Vec \cite{grover2016node2vec}. We discuss how to calculate vector representations of URLs, their domains, IP addresses, authoritative name servers, and words in Section~\ref{embedding}.
\begin{table}[t!]
\centering
\caption{The compatibility matrix proposed in Polonium~\cite{chau2011polonium} based on the homophily heuristic}\label{table:polonium}
\vspace{-0.8em}
{\renewcommand{\arraystretch}{1.3}
\begin{tabular}{c | c c}
$\psi_{xy}({\ell, \ell'})$ & Phishy & Benign \\ [0.5ex]
\hline
Phishy & $0.5 + \epsilon$ & $0.5 - \epsilon$ \\
Benign & $0.5 - \epsilon$ & $0.5 + \epsilon$ \\
\end{tabular}
\vspace{1.0em}
\caption{Our compatibility matrix $M$ for the min-sum algorithm. $\mathbf{x}$ and $\mathbf{y}$ denote vector representations, and $sim(\mathbf{x},\mathbf{y})$ is a similarity between the two vectors.}
\label{table:2}
\vspace{-0.8em}
\resizebox{\columnwidth}{!}{\begin{tabular}{c | c c}
$\psi_{xy}({\ell, \ell'})$ & Phishy & Benign \\ [0.5ex]
\hline
Phishy & $\min(ths_+, 1-sim(\mathbf{x}, \mathbf{y}))$ & $\max(ths_-, sim(\mathbf{x}, \mathbf{y}))$ \\
Benign & $\max(ths_-, sim(\mathbf{x}, \mathbf{y}))$ & $\min(ths_+, 1-sim(\mathbf{x}, \mathbf{y}))$ \\
\end{tabular}}
}
\end{table}
To calculate the similarity based on those vector representations, we adopt several different similarity measures, including the cosine similarity and various kernels. Our proposed definition of edge potential is shown in Table~\ref{table:2}. In the table, we denote vector representations of entities in boldface and $sim(\mathbf{x},\mathbf{y})$ indicates a similarity between two vectors that can be defined in various ways. Two such examples are as follows:
\vspace{0.3em}
\begin{align*}
sim(\mathbf{x},\mathbf{y}) =
\begin{cases}
cos(\mathbf{x},\mathbf{y})\textrm{ based on the cosine similarity}, \\
\exp\left(-\frac{\|\mathbf{x}-\mathbf{y}\|^2}{2\sigma^2}\right)\textrm{ based on the RBF kernel}.
\end{cases}
\end{align*}
After that, we use a concept inspired by the \emph{hinge-loss}~\cite{Rosasco:2004:LFS:996933.996940} to assign edge potential values. For two entities with the same label, $\min(ths_+, 1-sim(\mathbf{x}, \mathbf{y}))$ caps the penalty at $ths_+$\footnote{This upper bound on the same-label edge potential is set by a user, as is the lower bound $ths_-$ on the different-label potential.}: when $sim(\mathbf{x}, \mathbf{y})$ is high (resp. low), the penalty approaches 0 (resp. saturates at $ths_+$). Conversely, for two entities with different labels, $\max(ths_-, sim(\mathbf{x}, \mathbf{y}))$ imposes a penalty close to 1 when the similarity is high and at least $ths_-$ otherwise. Therefore, the proposed mechanism is able to assign much more sophisticated edge potentials than the fixed values of existing methods.
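The similarity measures and the entries of Table~\ref{table:2} can be sketched as follows (a minimal sketch; the threshold defaults of 0.7 mirror the best-performing setting in our experiments, and the function names are ours):

```python
import math

def cosine(x, y):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def rbf(x, y, sigma=1.0):
    """RBF kernel; note the minus sign, so identical vectors give 1."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2 * sigma ** 2))

def edge_potential(sim_xy, same_label, ths_pos=0.7, ths_neg=0.7):
    """Compatibility matrix entries in the min-sum convention, where
    smaller values mean more compatible label assignments."""
    if same_label:
        return min(ths_pos, 1.0 - sim_xy)  # penalty capped at ths_+
    return max(ths_neg, sim_xy)            # penalty floored at ths_-
```

For example, a highly similar pair with the same label gets a near-zero penalty, while the same pair with different labels gets a penalty close to 1.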
One should be very careful when applying our compatibility matrix to other applications. Recall that we use the min-sum algorithm so that in our compatibility matrix $M$, we assign 0 (which corresponds to 1 in the sum-product and max-product algorithms) when $\ell$ and $\ell'$ are the same. For the sum-product and max-product algorithms, $1 - M$ should be used.
\begin{figure}[t]
\vspace{-1.5em}
\centering
\subfloat[Cosine Similarity]{\includegraphics[width=0.495\linewidth,trim=0 0 0 0cm, clip]{figures/co_sim.pdf}} \hfill
\subfloat[Inference Result]{\includegraphics[width=0.495\linewidth,trim=0 0 0 0cm, clip]{figures/in_sim.pdf}}
\vspace{-0.8em}
\caption{Examples of the pairwise vertex similarity with DeepWalk and our network inference with $ths_+ = ths_- = 0.7$. (a) We choose the highest PageRank URL and other 199 URLs in its neighborhood with the breadth-first search for each of the phishy and benign classes. In total, there are 400 URLs in the similarity plot. (b) From the similarity, our network-based inference is able to infer almost correctly.}
\label{fig:emb}
\end{figure}
\subsubsection{Vector Representations of Entities}\label{embedding}
We describe how we calculate reliable vector representations of the various entities. These embedding methods are known to be effective in discovering latent relationships among entities~\cite{mikolov2013efficient,le2014distributed,2014arXiv1403.6652P,grover2016node2vec,yoo2022directed,lee2020asine,lee2020negative,DBLP:conf/iclr/0002JJH0JSP22}, which makes them a good fit for our network-based detection under the presence of evasions.
\paragraph{Word Embedding-based Methods}
In the area of natural language processing, various semantic embedding methods have been proposed, such as Word2Vec~\cite{mikolov2013efficient} and Doc2Vec~\cite{le2014distributed}. As mentioned earlier, we segment URLs into words, so we can directly apply these methods to calculate the vector representations of URLs and words. However, we cannot directly calculate vector representations of domains, IP addresses, and name servers in this approach because it considers only strings. Inspired by \textit{locally linear embedding} (LLE)~\cite{Roweis2000}, we therefore propose a heuristic that represents a domain, IP address, or name server as the mean vector of its neighbors' vectors. LLE models a vector representation of an entity as a weighted combination of its neighbors' vectors, \textit{e.g.,} equally weighted in our case. For this, given the URLs' vector representations calculated by Word2Vec or Doc2Vec, we first calculate the mean vector representations of domains, then of IP addresses, and so forth.
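The LLE-inspired heuristic can be sketched as follows (a minimal sketch; the toy URL vectors are illustrative, not real Word2Vec/Doc2Vec output, and the function name is ours):

```python
# A domain (or IP address, or name server) is embedded as the equally
# weighted mean of its neighbors' vectors.
def mean_vector(neighbor_vecs):
    """Coordinate-wise mean of a non-empty list of equal-length vectors."""
    dim = len(neighbor_vecs[0])
    return [sum(v[i] for v in neighbor_vecs) / len(neighbor_vecs)
            for i in range(dim)]

# toy 2-dimensional URL embeddings from a word embedding model
url_vecs = {"u1": [0.2, 0.8], "u2": [0.4, 0.6]}
# the domain hosting u1 and u2 gets their mean, i.e., roughly [0.3, 0.7]
domain_vec = mean_vector([url_vecs["u1"], url_vecs["u2"]])
```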
\paragraph{Network Embedding-based Methods}
Another reliable approach to finding vector representations is to use network embedding methods, many of which have been proposed by social network researchers. One advantage of this approach is that we can find vector representations of all entities simultaneously because these methods run on our network directly. We use Node2Vec~\cite{grover2016node2vec} and DeepWalk~\cite{2014arXiv1403.6652P}. In Fig.~\ref{fig:emb} (a), we show a pairwise similarity plot that intuitively justifies our embedding and similarity-based edge potential assignments. However, a small portion of phishy and benign pairs in the green circle have high similarities. This is corrected by our proposed edge potential assignment mechanism, as shown in Fig.~\ref{fig:emb} (b).
\section{Evasion-Robustness of Our Network-based Approach}\label{sec:rob}
In this section, we formally prove that a hidden variable's phishy/benign label follows the similarity-weighted majority label of its neighbors, which improves the robustness to evasion.
\begin{lemma}\label{lemma}
Suppose $ths_+ = ths_- = 0$ and a small network that consists of a hidden variable $u$ and its $m$ neighbors $N_u = \{v_1, \cdots, v_m\}$. Let $\ell_u$ be the phishy/benign label of $u$. When $\ell_u = \operatorname{argmax}_{\ell} \sum_j sim(\mathbf{u},\mathbf{v}_j) \cdot I(v_j, \ell)$, where $I(v_j, \ell) \in \{0,1\}$ is an indicator function saying whether $v_j$ has the label $\ell$, the min-sum algorithm in Eq.~\eqref{eq:minsum} is optimized.
\end{lemma}
\begin{proof}
$\ell_u$ is inferred by Eq.~\eqref{eq:cost}. In particular, the second term in the equation, $\sum_{v \in N_u} msg_{v \rightarrow u}(\ell)$, is decisive for its label, and $msg_{v \rightarrow u}(\ell)$ is dominated only by $\psi_{vu}(\ell_v, \ell_u)$ in the assumed network (cf. Eq.~\eqref{eq:msg}). With $ths_+ = ths_- = 0$, Table~\ref{table:2} gives $\psi_{vu}(\ell_v, \ell_u) = 0$ for same-label edges and $\psi_{vu}(\ell_v, \ell_u) = sim(\mathbf{v}, \mathbf{u})$ for different-label edges, so $\sum_{v \in N_u} \psi_{vu}(\ell_v, \ell_u)$ is minimized when $\ell_u$ follows the similarity-weighted majority label of $N_u$.
\end{proof}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{figures/lemma.pdf}
\vspace{-0.8em}
\caption{For ease of discussion, suppose $u$ is a hidden variable and other variables' labels are fixed. Each edge is annotated with $sim(\mathbf{u}, \mathbf{v}_i)$. Our method concludes that $u$ is phishy although $u$ has more benign neighbors.}
\vspace{2em}
\label{fig:lemma}
\centering
\includegraphics[trim={1.3cm 1.8cm 1.1cm 1.2cm},clip,width=1\columnwidth]{figures/evasion.pdf}
\vspace{-1.5em}
\caption{There are two clusters. In general, connections between phishy and benign clusters are not strong (cf. Fig.~\ref{fig:network}). `Domain1' is connected to `IP2' after evasion. However, the connection between them is weak and after embedding, $sim(\mathbf{x}, \mathbf{y})$ is low, where $x=Domain1$ and $y=IP2$. Thus, a low penalty is given to their dissimilar labels by our compatibility matrix and the belief propagation can still identify `Domain1' as phishy.}
\label{fig:evasion}
\end{figure}
\begin{example}[Example of Lemma~\ref{lemma}]
In Fig.~\ref{fig:lemma}, we show the small network used in Lemma~\ref{lemma}. For ease of discussion, suppose that only $u$ is a hidden variable and the others are observed variables. The optimal min-sum solution is $g(u)=Phishy$ because $sim(\mathbf{u}, \mathbf{v}_1) > \sum_{j > 1} sim(\mathbf{u}, \mathbf{v}_j)$, so $Cost(u, Phishy) = \sum_{j > 1} sim(\mathbf{u}, \mathbf{v}_j)$ is smaller than $Cost(u, Benign) = sim(\mathbf{u}, \mathbf{v}_1)$.
\end{example}
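This example can be checked numerically (the similarity values below are made up for illustration; they satisfy the example's assumption $sim(\mathbf{u}, \mathbf{v}_1) > \sum_{j>1} sim(\mathbf{u}, \mathbf{v}_j)$):

```python
# With ths_+ = ths_- = 0, the compatibility matrix gives a zero
# potential to same-label edges and a potential of sim(u, v) to
# different-label edges.
sims = {"v1": 0.9, "v2": 0.2, "v3": 0.2, "v4": 0.2}     # sim(u, v_j)
labels = {"v1": "phishy", "v2": "benign",
          "v3": "benign", "v4": "benign"}

def cost(label_u):
    # penalty accrues only on edges whose endpoint labels differ
    return sum(s for v, s in sims.items() if labels[v] != label_u)

# cost("phishy") is about 0.6 while cost("benign") is 0.9, so u is
# classified phishy although it has three benign neighbors and one
# phishy neighbor.
label_u = min(("phishy", "benign"), key=cost)
```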
This lemma can be generalized to the following theorem for larger general networks:
\begin{theorem}
Given a large network $G=(V,E)$, the min-sum algorithm is optimized if, for each hidden variable $u \in V$ and its neighbors $N_u$, $\ell_u = \operatorname{argmax}_{\ell} \sum_j sim(\mathbf{u},\mathbf{v}_j) \cdot I(v_j, \ell)$.
\end{theorem}
\begin{proof}
If we can achieve $\ell_u = \operatorname{argmax}_{\ell} \sum_j sim(\mathbf{u},\mathbf{v}_j) \cdot I(v_j, \ell)$ for each hidden variable $u$, it is immediate that the overall cost in Eq.~\eqref{eq:minsum} is minimized because the overall cost is defined as the sum of each hidden variable's cost.
\end{proof}
This theorem gives a sufficient condition for the optimal min-sum solution, but sometimes the condition, $\ell_u = \operatorname{argmax}_{\ell} \sum_j sim(\mathbf{u},\mathbf{v}_j) \cdot I(v_j, \ell)$ for each $u \in X$, is not achievable. In such a case, however, the min-sum algorithm strategically drops the sufficient condition for some hidden variables to better minimize the sum of costs over the majority of the other hidden variables. Therefore, we can still say that the sufficient condition is achievable in general for the majority of hidden variables in any network. In particular, our embedding and hinge-loss based edge potential assignment bring large flexibility to this process, so the cost sum can be effectively minimized with the proposed method. Fig.~\ref{fig:emb} shows one such example in which our proposed method achieves the sufficient condition in most cases by ignoring some minor edges with high similarity. Because of this property, our approach is robust to evasion unless the attacker \emph{collectively evades} for neighboring URLs/domains/IP addresses/name servers (see Fig.~\ref{fig:evasion} for an example). However, such collective evasion incurs non-trivial expenses for the attacker.
\section{Experiments}\label{sec:Experiments}
In this section, we introduce our detailed experimental environments and results. We collected many URLs from crowd-sourced repositories and other papers. After that, we conducted experiments with ten baselines, ranging from classical classifiers and graphical methods to graph convolutional networks. Our method shows the best accuracy and robustness.
The source codes, data, and reproducibility information of our method are available at \url{https://github.com/taerikkk/BPE}.
\begin{table}[t]
\centering
\caption{The number of phishy and benign URLs for each dataset. Note that Sorio's and Ahmad's datasets are already tagged with ground-truth labels, so we did not use \url{virustotal.com} for them. Some URLs overlap across datasets, so the total number of URLs is smaller than their sum.\label{tbl:data}}
\vspace{-0.5em}
{\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c|c|r|r|}
\hline
Dataset & \begin{tabular}[c]{@{}c@{}}VirusTotal\vspace{-3px}\\Threshold\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# Phishing\vspace{-3px}\\URL\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# Benign\vspace{-3px}\\ URL\end{tabular} \\ \hline \hline
Bank of America & $4/7$ & 4,610 & 9,408 \\ \hline
eBay & $4/7$ & 8,529 & 18,800 \\ \hline
PayPal & $4/7$ & 9,690 & 17,572 \\ \hline
Sorio et al.~\cite{Sorio2013DetectionOH} & N$/$A & 40,439 & 3,637 \\ \hline
Ahmad et al.~\cite{ahmad} & N$/$A & 62,231 & 344,800 \\ \hline \hline
Total & N$/$A & 119,012 & 381,734 \\ \hline
\end{tabular}}
\end{table}
\subsection{Datasets}\label{data}
Several phishing URL detection datasets have been created~\cite{Ma09beyondblacklists,35580,Mohammad2014}. However, almost all of them do not release raw URL strings, so we cannot use them. We found only two open datasets with raw URL strings~\cite{Sorio2013DetectionOH,ahmad}. In addition to them, we also crawled \url{phishtank.com} and collected three sets of URLs recently reported over a couple of months for Bank of America, eBay, and PayPal, the top-3 most popular targets on the website (see Section~\ref{sec:crawl} for more details). \url{Phishtank.com} is a crowdsourced repository of suspicious URLs that does not provide ground-truth labels --- users can upvote or downvote the reported URLs, but this voting system is not reliable because anyone (even attackers) can participate. We therefore used \url{virustotal.com} to tag the collected URLs. This website returns the prediction results of over 60 anti-virus (AV) products for a given URL. We selected the seven most reliable and popular AV products (such as McAfee, Norton, Kaspersky, Avast, and Trend Micro), and a URL is considered phishy if more than half of them indicate so, \textit{i.e.,} tagging by majority vote. In total, we have about 500K URLs, 172K domains, and 66K IP addresses. At the end, we merged these datasets into one very large URL dataset whose statistics are shown in Table~\ref{tbl:data}.
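The majority-vote tagging can be sketched as follows (a minimal sketch; the verdict lists are illustrative, and fetching them from the actual VirusTotal API is omitted):

```python
def tag_url(verdicts):
    """verdicts: one boolean per selected AV product (True means the
    product flags the URL as phishy). A URL is tagged phishy if more
    than half of the products agree."""
    return "phishy" if sum(verdicts) > len(verdicts) / 2 else "benign"

# 4 of the 7 selected AV products flag the URL, so it is tagged phishy
verdicts = [True, True, True, True, False, False, False]
label = tag_url(verdicts)
```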
We split the combined set in the standard 80:20 ratio for training and testing. Only 10\% of the URLs have timestamps; with them, we also tried a chronological split, for which our method shows good accuracy as well. However, we do not include these results because i) they are similar to those of the random split, ii) the timestamped subset is small, and iii) of space limitations.
\begin{table*}[t]
\center
\caption{Detection results of some selected baseline methods and our proposed method. The best result in each measure (\textit{i.e.,} each column) is indicated in boldface. \textsc{\textsf{BPE}} is our method.}\label{tbl:result}
\vspace{-0.5em}
{{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Type & Method &\begin{tabular}[c]{@{}c@{}}Recall (Phishy)\end{tabular} &\begin{tabular}[c]{@{}c@{}}Precision (Phishy)\end{tabular}& F-1 & Accuracy \\ \hline \hline
\multirow{6}{*}{Baseline} & AdaBoost & 0.830 & 0.830 & 0.830 & 0.831 \\ \cline{2-6}
& SGDClassifier & 0.762 & 0.720 & 0.734 & 0.720 \\ \cline{2-6}
& RandomForest & 0.840 & 0.850 & 0.840 & 0.847 \\ \cline{2-6}
& LSTM & 0.697 & 0.710 & 0.688 & 0.857 \\ \cline{2-6}
& 1DConv & 0.677 & 0.735 & 0.689 & 0.864 \\ \cline{2-6}
& 1DConv+LSTM & 0.788 & 0.806 & 0.784 & 0.902 \\ \hline \hline
\begin{tabular}[c]{@{}c@{}}Noisy\vspace{-3px}\\ Network\end{tabular} & \textsc{\textsf{BPE}} & 1.000 & 0.001 & 0.001 & 0.083 \\ \hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}Simple\vspace{-3px}\\ Network\end{tabular}} & RWR & 0.569 & 0.917 & 0.702 & 0.815 \\ \cline{2-6}
& POL & 0.901 & 0.853 & 0.876 & 0.943 \\ \cline{2-6}
& \begin{tabular}[c]{@{}c@{}}\textsc{\textsf{BPE}}\vspace{-3px}\\(Cos, Deepwalk)\end{tabular} & 0.901 & 0.864 & 0.882 & 0.945 \\ \cline{2-6}
& \begin{tabular}[c]{@{}c@{}}\textsc{\textsf{BPE}}\vspace{-3px}\\(RBF, Doc2Vec)\end{tabular} & 0.895 & 0.864 & 0.879 & 0.943 \\ \hline
\multirow{8}{*}{\begin{tabular}[c]{@{}c@{}}Extended\vspace{-3px}\\ Network\end{tabular}} & RWR & 0.648 & \textbf{0.930} & 0.764 & 0.863 \\ \cline{2-6}
& POL & 0.899 & 0.850 & 0.874 & 0.942 \\ \cline{2-6}
& LGCN & \textbf{0.999} & 0.762 & 0.865 & 0.762 \\ \cline{2-6}
& GAT & 0.995 & 0.762 & 0.863 & 0.760 \\ \cline{2-6}
& \begin{tabular}[c]{@{}c@{}}\textsc{\textsf{BPE}}\vspace{-3px}\\(Cos, Deepwalk)\end{tabular} & 0.958 & 0.831 & 0.890 & \textbf{0.969}\\ \cline{2-6}
& \begin{tabular}[c]{@{}c@{}}\textsc{\textsf{BPE}}\vspace{-3px}\\(RBF, Deepwalk)\end{tabular} & 0.958 & 0.832 & \textbf{0.891} & \textbf{0.969} \\ \hline
\end{tabular}}}
\end{table*}
\subsection{Baselines and Hyperparameters} Among the many methods proposed, we consider the following baseline methods in our experiments. First, we test many feature-based prediction models. For this, we surveyed the literature and collected 19 features (see Appendix~\ref{sec:baseline}). After that, we predict with various classifiers after under/oversampling to address the imbalanced nature of our dataset --- benign URLs far outnumber phishing URLs in the training set.
Along with synthetic minority oversampling~\cite{DBLP:journals/corr/abs-1106-1813} and adaptive synthetic sampling~\cite{He08adasyn:adaptive}, we consider five undersampling methods, six oversampling methods, and one ensemble method, listed below.
Undersampling methods:
\begin{itemize}
\item Naive random undersampling randomly chooses samples to drop.
\item Tomek's link is a representative undersampling method.
\item Clustering uses the centroids of clusters after dropping the other cluster members.
\item NearMiss is also a popular undersampling method.
\item Various nearest neighbor methods can also be used for undersampling.
\end{itemize}
Oversampling methods:
\begin{itemize}
\item Naive random oversampling randomly chooses samples to duplicate.
\item SMOTE~\cite{DBLP:journals/corr/abs-1106-1813} and its variants are a family of the most popular oversampling methods; we include five variations.
\item ADASYN~\cite{He08adasyn:adaptive} is also a popular oversampling method.
\end{itemize}
Ensemble method:
\begin{itemize}
\item The ensemble method uses both oversampling and undersampling at the same time.
\end{itemize}
We refer to a survey paper~\cite{JMLR:v18:16-365} for more detailed information.
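The two naive random sampling baselines above can be sketched in pure Python (a minimal sketch; in practice one would use a dedicated library, and the function names here are ours):

```python
import random

def _group_by_class(samples, labels):
    by_cls = {}
    for s, l in zip(samples, labels):
        by_cls.setdefault(l, []).append(s)
    return by_cls

def random_undersample(samples, labels, seed=0):
    """Drop random majority-class samples until classes are balanced."""
    rng = random.Random(seed)
    by_cls = _group_by_class(samples, labels)
    n_min = min(len(v) for v in by_cls.values())
    return [(s, l) for l, group in by_cls.items()
            for s in rng.sample(group, n_min)]

def random_oversample(samples, labels, seed=0):
    """Duplicate random minority-class samples until classes are balanced."""
    rng = random.Random(seed)
    by_cls = _group_by_class(samples, labels)
    n_max = max(len(v) for v in by_cls.values())
    out = []
    for l, group in by_cls.items():
        extra = [rng.choice(group) for _ in range(n_max - len(group))]
        out.extend((s, l) for s in group + extra)
    return out
```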
The combination of classifiers, under/oversampling methods, and their hyperparameters creates a huge number of possible options. So, first, we perform 5-fold cross validation to choose the best performing classifier, sampling method, and hyperparameters. Second, we also test three deep learning-based sequence classification methods mentioned in Section~\ref{sec:rel}. These neural networks are based on recurrent or convolutional layers. We use the hyperparameters recommended in their original publications.
Third, on the \textit{simple network} that consists only of URLs and their words, we run the following graphical methods: i) Random Walk with Restart (RWR): this method runs many random walks from training URLs and counts the number of visits to each testing URL; it is very successful for recommender systems~\cite{Cooper:2014:RWR:2567948.2579244}. ii) Polonium (POL): Polonium, based on a simple belief propagation strategy, showed big success in predicting malware and malicious domains. We run belief propagation on our network with Polonium's compatibility matrix definition in Table~\ref{table:polonium}. iii) Belief Propagation with Enhancements (\textsc{\textsf{BPE}}): this is our method, which runs belief propagation based on our improved definition of the compatibility matrix. We test various embedding techniques, $ths_+ \in \{0, 0.1, 0.3, 0.5, 0.7, 0.9, 1\}$, $ths_- \in \{0, 0.1, 0.3, 0.5, 0.7, 0.9, 1\}$, and, for calculating the vector similarity, the cosine similarity and RBF kernel. We set the dimension of the embeddings to 128.
Fourth, on the \textit{extended network} that consists of all entities (cf. Section~\ref{sec:ne}), we test the same set of graphical methods: RWR, POL, and \textsc{\textsf{BPE}}. For this, we use the blacklist of 41,881 IP addresses and 158,271 domains provided by \url{virustotal.com}. In other words, those blacklisted entities are converted into observed variables and excluded from the inference process. We also test \textsc{\textsf{BPE}} on the \textit{noisy network} where stop words are not removed. Last, we test state-of-the-art graph convolutional networks (GCNs) such as LGCN~\cite{Gao:2018:LLG:3219819.3219947} and GAT~\cite{velickovic2018graph} on the extended network. For a vertex $v$, we feed a feature vector after concatenating i) the 19 features of $v$ we use in the feature-based prediction, ii) a binary value denoting whether $v$ is blacklisted or not, and iii) a one-hot vector where only the index of the vertex $v$ is one. If some items are missing, we concatenate with zeros --- \textit{e.g.,} a domain does not have the 19 features so we zero them out. We test the hyperparameters recommended in their original papers. To prevent overfitting, we also add an L2 regularization of neural network weights. In all those graphical models, such as RWR, POL, GCNs, and our method (\textit{i.e.,} \textsc{\textsf{BPE}}), the labels of training URLs are fixed and only unknown labels of testing URLs are inferred.
We exclude other content-based detection methods from our experiments because it is hard to obtain web page contents in general --- recall that phishing attacks do not last long and attackers usually remove their traces from the Internet after accomplishing their goal. The two datasets we downloaded from~\cite{Sorio2013DetectionOH,ahmad} do not include any content information, and we also could not collect web page HTML from \url{phishtank.com} in a stable manner.
\subsection{Environments}
\paragraph{Hardware} We conducted our experiments on the machines with i9-9900K, 64GB RAM, and GTX 1070.
\paragraph{Software} As our experiments utilize many different types of baseline methods, our software environments are rather complicated. The selected list of important software/libraries are as follows:
\begin{itemize}
\item Python ver 3.8.1.
\item Scikit Learn ver 0.22.1.
\item TensorFlow ver 1.5.1.
\item CUDA ver 10.
\item NetworkX ver 2.4.
\end{itemize}
\vspace{-0.5em}
\subsection{Experimental Results} We summarize the results shown in Table~\ref{tbl:result} as follows. Among all feature-based methods, RandomForest performs the best. For all metrics, it outperforms AdaBoost, SGDClassifier, and others, \textit{e.g.,} the F-1 score of 0.840 for RandomForest vs. 0.830 for AdaBoost vs. 0.734 for SGDClassifier. However, all these feature-based baseline methods are clearly beaten by the network-based methods. This supports the efficacy of our network-based approach.
RWR's precision for the phishy class on the extended network is the best (0.930), but its recall is worse than that of the other network-based inference methods. POL shows a balanced performance between recall and precision, as in its original task of detecting malware. LGCN's recall for the phishy class is the best (0.999). We found that LGCN and GAT are sensitive to hyperparameters and that their overfitting is hard to control via regularization. Surprisingly, their best F-1 was achieved when we allowed some degree of overfitting to the phishy class; when we increase the coefficient of the L2 regularizer to prevent overfitting, their F-1 scores drastically decrease. We also found that training with subgraphs is not effective in processing our large network.
Therefore, we set the subgraph size as large as possible on our GPU --- due to the GPU memory limitation, whole-graph training is impossible for our network --- but its performance is still inferior to our method.
Our method with $ths_+ = 0.7$, $ths_- = 0.7$, the RBF kernel, and DeepWalk, which is marked as `\textsc{\textsf{BPE}} (RBF, Deepwalk)', shows the best performance in F-1 and accuracy. Although \textsc{\textsf{BPE}}'s precision for the phishy class is a little lower (0.832) than that of the best feature-based method, RandomForest (0.850), \textsc{\textsf{BPE}}'s recall for the phishy class is much higher (0.958) than that of RandomForest (0.840).
However, one may worry that our method mis-classifies benign URLs as phishy due to its relatively low precision.
To address this concern, we measure the false positive rate (\textit{i.e.,} FPR) of \textsc{\textsf{BPE}} and RandomForest, obtaining 0.031 for \textsc{\textsf{BPE}} and 0.306 for RandomForest. Therefore, we expect that \textsc{\textsf{BPE}} is the most useful for accurately detecting phishing URLs in practice.
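For completeness, the FPR is the fraction of truly benign URLs flagged as phishy (the counts in the example below are illustrative, chosen only to reproduce a rate of 0.031, and are not our actual confusion-matrix counts):

```python
def fpr(false_pos, true_neg):
    """False positive rate: FP / (FP + TN)."""
    return false_pos / (false_pos + true_neg)

# e.g., 31 misflagged URLs out of 1,000 benign URLs gives an FPR of 0.031
rate = fpr(31, 969)
```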
The same network-based method on the noisy network shows poor performance (\textit{e.g.,} 0.001 for F-1), which indicates that our network definition also plays an important role.
\begin{table*}
\centering
\caption{F-1 scores of \textsc{\textsf{BPE}}, POL and RandomForest (RF) after M1-5 evasions. The best result in each evasion method and ratio is indicated in boldface. \textsc{\textsf{BPE}} is our method.}\label{tbl:evasion}
\vspace{-0.5em}
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Evasion\\Ratio\end{tabular}} & \multicolumn{3}{c|}{M1 (domain)} & \multicolumn{3}{c|}{M2 (path)} & \multicolumn{3}{c|}{M3 (query)} & \multicolumn{3}{c|}{M4 (domain and path)} & \multicolumn{3}{c|}{M5 (domain and query)} \\ \cline{2-16}
& \textsc{\textsf{BPE}} & POL & RF & \textsc{\textsf{BPE}} & POL & RF & \textsc{\textsf{BPE}} & POL & RF & \textsc{\textsf{BPE}} & POL & RF & \textsc{\textsf{BPE}} & POL & RF \\ \hline \hline
5\% & \textbf{0.866} & 0.836 & 0.812 & \textbf{0.876} & 0.843 & 0.820 & \textbf{0.888} & 0.867 & 0.821 & \textbf{0.861} & 0.811 & 0.813 & \textbf{0.873} & 0.822 & 0.814 \\ \hline
10\% & \textbf{0.847} & 0.817 & 0.816 & \textbf{0.861} & 0.822 & 0.810 & \textbf{0.882} & 0.841 & 0.816 & \textbf{0.829} & 0.778 & 0.804 & \textbf{0.863} & 0.811 & 0.807 \\ \hline
15\% & \textbf{0.833} & 0.802 & 0.810 & \textbf{0.858} & 0.817 & 0.803 & \textbf{0.882} & 0.836 & 0.811 & \textbf{0.805} & 0.760 & 0.790 & \textbf{0.854} & 0.807 & 0.798 \\ \hline
\end{tabular}}
\end{table*}
\vspace{-0.8em}
\begin{table}
\centering
\caption{F-1 scores of \textsc{\textsf{BPE}}, POL and RandomForest (RF) after M6 and M7 evasions. The best result in each evasion method and ratio is indicated in boldface. \textsc{\textsf{BPE}} is our method.}\label{tbl:evasion2}
\vspace{-0.5em}
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Evasion\\Ratio\end{tabular}} & \multicolumn{3}{c|}{M6 (path and query)} & \multicolumn{3}{c|}{M7 (all)} \\ \cline{2-7}
& \textsc{\textsf{BPE}} & POL & RF & \textsc{\textsf{BPE}} & POL & RF \\ \hline \hline
5\% & \textbf{0.874} & 0.838 & 0.814 & \textbf{0.861} & 0.827 & 0.760 \\ \hline
10\% & \textbf{0.869} & 0.832 & 0.808 & \textbf{0.828} & 0.791 & 0.751 \\ \hline
15\% & \textbf{0.857} & 0.820 & 0.802 & \textbf{0.803} & 0.762 & 0.733 \\ \hline
\end{tabular}}
\end{table}
\paragraph{Statistical significance} For the statistical significance of our experiments, we conducted paired $t$-tests with a 95\% confidence level between \textsc{\textsf{BPE}} and each baseline, and achieved a $p$-value below 0.05 in all cases.
\paragraph{Transductive vs. Inductive}
Transductive and inductive inference are two popular paradigms of machine learning~\cite{Vapnik:1995:NSL:211359}. Among the baseline methods, RandomForest and some other classifiers are inductive, whereas most network-based methods are transductive. Inductive inference, where a generalized prediction model trained on a training set predicts labels for unknown testing samples, is the more common choice. In our work, however, we adopt a transductive method, where the class label of a specific unknown testing sample is inferred from specific related training samples in the network architecture. Fig.~\ref{fig:network} justifies our transductive approach because a cluster usually consists of vertices from the same class. However, not all transductive methods are successful in Table~\ref{tbl:result}.
\paragraph{Time performance}
\textsc{\textsf{BPE}} is an advanced LBP-based method with our novel similarity-based edge potential assignments. Therefore, the time complexity of \textsc{\textsf{BPE}} is $O(S \cdot |E| \cdot t)$, where $S$ indicates the cost of one similarity calculation, $E$ indicates the set of edges, and $t$ denotes the number of iterations required for convergence. $t$ is typically small in our setting, \textit{e.g.,} $t=5$ is enough. The time complexity of RandomForest (\textit{i.e.,} the best feature-based method) is $O(f \cdot n \cdot \log(n))$, where $f$ is the number of features and $n$ is the number of URLs. In our experiments, \textsc{\textsf{BPE}} trains 4.7 times faster than RandomForest in wall-clock time.
\subsection{Parameter Sensitivity}
\paragraph{Sensitivity to thresholds} The following threshold combinations perform very well and are comparable to each other in our experiments: ($ths_+$ = 0.7, $ths_-$ = 0.7), ($ths_+$ = 0.3, $ths_-$ = 0.9), ($ths_+$ = 0.3, $ths_-$ = 0.5), ($ths_+$ = 0.7, $ths_-$ = 0.9), ($ths_+$ = 0.5, $ths_-$ = 0.3), and so on. One common characteristic is that the two extreme values, 0 and 1, are not preferred. This supports our decision to adopt thresholds: two dissimilar neighbors do not always carry different labels. In other words, the one you are not close to is not necessarily your enemy. By limiting the penalty, we achieved the best accuracy in our experiments.
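To illustrate the penalty-limiting behavior described above, the following sketch shows one hypothetical shape a threshold-limited edge potential could take; the function name and the exact formula are illustrative assumptions, not the paper's actual implementation.

```python
def edge_potential(sim, ths_pos=0.7, ths_neg=0.7):
    """Hypothetical sketch of a threshold-limited edge potential.

    Returns a 2x2 table pot[x][y]: the compatibility of labels x and y
    (0 = benign, 1 = phishy) for two neighbors with similarity `sim`.
    High similarity rewards equal labels and low similarity penalizes
    them, but both effects are capped by the thresholds, since two
    dissimilar neighbors need not carry different labels.
    """
    # shift away from the neutral value 0.5, capped on both sides
    delta = max(min(sim - 0.5, ths_pos / 2), -ths_neg / 2)
    agree, disagree = 0.5 + delta, 0.5 - delta
    return [[agree, disagree], [disagree, agree]]
```

The cap means that, below a certain dissimilarity, the penalty stops growing, which matches the observation that extreme threshold values are not preferred.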
\paragraph{Sensitivity to embedding} It turns out that network embeddings are more effective than word or document embedding methods. All highly ranked results are produced by DeepWalk. Doc2Vec produces the best result only for the simple network and RBF kernel environment. We think this is because our network definition considers common words among URLs, and DeepWalk is able to capture the semantic meanings of words closely located in the network.
\paragraph{Cosine similarity vs. RBF kernel} The cosine similarity and the RBF kernel are comparable to each other in our experiments. When sorting all results, the highly ranked ones are evenly distributed between the two.
\begin{figure*}[t]
\centering
\subfloat[Original]{\includegraphics[width=0.248\textwidth]{figures/M_original.pdf}}\hfill
\subfloat[M1 Evasion]{\includegraphics[width=0.248\textwidth]{figures/M_M1.pdf}}\hfill
\subfloat[M2 Evasion]{\includegraphics[width=0.248\textwidth]{figures/M_M2.pdf}}\hfill
\subfloat[M3 Evasion]{\includegraphics[width=0.248\textwidth]{figures/M_M3.pdf}}\hfill
\subfloat[M4 Evasion]{\includegraphics[width=0.248\textwidth]{figures/M_M4.pdf}}\hfill
\subfloat[M5 Evasion]{\includegraphics[width=0.248\textwidth]{figures/M_M5.pdf}}\hfill
\subfloat[M6 Evasion]{\includegraphics[width=0.248\textwidth]{figures/M_M6.pdf}}\hfill
\subfloat[M7 Evasion]{\includegraphics[width=0.248\textwidth]{figures/M_M7.pdf}}\hfill
\caption{Visualization of the original network and M1-7 evasions of the target phishing URL denoted with the largest red vertex. Each edge is annotated with the similarity. The meaning of vertex color follows that in Fig.~\ref{fig:network}.}
\label{fig:evasion_case}
\end{figure*}
\subsection{Evasion Tests}\label{sec:evasion}
For our evasion testing, we consider all possible variations of the parts of phishing URLs, \textit{i.e.,} domain, path, and query. Specifically, we define, in total, seven evasion methods (\textit{i.e.,} M1-7) as follows: M1) the phishing URL's domain is changed to another random benign domain (and, as a result, the IP address changes too); M2) the phishing URL's path string (cf. Section~\ref{sec:seg}) is changed to another random benign one; M3) the phishing URL's query string (cf. Section~\ref{sec:seg}) is changed to another random benign one; M4) the phishing URL's domain and path string are changed to other random benign ones; M5) the phishing URL's domain and query string are changed to other random benign ones; M6) the phishing URL's path and query strings are changed to other random benign ones; M7) each part of the phishing URL is independently changed to a random benign one, \textit{i.e.,} the phishing URL becomes an entirely new URL that looks benign.
Note that our evasion tests embrace Shirazi et al.'s evasion settings (cf. Section~\ref{sec:eva}). Also note that M7 evasion is the most challenging situation. As mentioned earlier, the attackers' motivation for M7 evasion may be low because it requires non-trivial expenses. Nevertheless, it is worth mentioning that we take into account the case where the domain, path, and query strings are all evaded simultaneously.
For some spear phishing attacks aiming at particular targets, however, sophisticated URLs are prepared with entirely benign string patterns and web page contents, in which case more advanced techniques are required for detection. It is well known that attackers invest substantial effort in spear phishing, considering even the psychological and habitual characteristics of the targets after hijacking benign user accounts~\cite{Oliveira:2017:DSP:3025453.3025831,Lin:2019:SSE:3349608.3336141,Ho:2019:DCL:3361338.3361427}. However, this is out of the scope of this paper, and we leave it as future work.
To simulate evasions, we modify a random 5--15\% of our testing phishing URLs using one of the seven evasion methods. After the modifications, the network is also reconstructed accordingly.
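A minimal sketch of this simulation is shown below, assuming each URL has already been segmented into domain, path, and query parts; the function names and data layout are illustrative assumptions, not the actual experimental code.

```python
import random

def evade(url_parts, benign_pool, method):
    """Sketch of the M1-M7 evasions: replace selected parts of a
    phishing URL (a dict with 'domain', 'path', 'query') with parts
    drawn from random benign URLs in `benign_pool`."""
    targets = {
        'M1': ['domain'], 'M2': ['path'], 'M3': ['query'],
        'M4': ['domain', 'path'], 'M5': ['domain', 'query'],
        'M6': ['path', 'query'], 'M7': ['domain', 'path', 'query'],
    }[method]
    evaded = dict(url_parts)
    for part in targets:
        # each part is drawn independently from a random benign URL
        evaded[part] = random.choice(benign_pool)[part]
    return evaded
```

After such modifications, the heterogeneous network would be rebuilt from the evaded URL strings before re-running inference.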
We compare \textsc{\textsf{BPE}} (our method) with POL and RandomForest (RF), which represent the network-based and the best feature-based baseline methods, respectively. Because all entities are connected in our network, one evasion may, in the worst case, affect other neighboring non-evaded URLs; a simple measure counting the number of successful detections for only the evaded phishing URLs is therefore not a correct metric. Hence, we re-evaluate all testing URLs after evasion and report the results in Tables~\ref{tbl:evasion} and~\ref{tbl:evasion2}.
As shown in Tables~\ref{tbl:evasion} and~\ref{tbl:evasion2}, our \textsc{\textsf{BPE}} outperforms the other baselines by non-trivial margins. In particular, \textsc{\textsf{BPE}} outperforms RandomForest by up to 13.29\% in the most challenging situation, \textit{i.e.,} M7 with an evasion ratio of 15\%.
In addition, although M7 evasion is the most challenging situation, where we independently change every part of a phishing URL to a benign one, \textsc{\textsf{BPE}} still shows relatively high F-1 scores (0.803-0.861).
This is because each benign part newly connected to the phishing URL comes from a different benign URL and is thus unlikely to have a high similarity score, so the phishing URL has a low similarity to each newly connected vertex.
Therefore, \textsc{\textsf{BPE}} with our novel similarity-based edge potential will not predict this phishing URL as benign.
On the other hand, POL using a majority voting of neighbors shows low F-1 scores (0.762-0.827) in M7 evasion.
Furthermore, we found that \textsc{\textsf{BPE}} under various evasion settings outperforms most baselines under non-evasion settings. Specifically, except for M1 with an evasion ratio of 15\% and M4 (resp.\ M7) with evasion ratios of 10\% and 15\%, the minimum F-1 score of \textsc{\textsf{BPE}} across evasion settings is 0.847 (\textit{i.e.,} M1 with an evasion ratio of 10\%), which surpasses that of the best baseline in the non-evasion setting, \textit{i.e.,} 0.840 for RandomForest.
One more important fact is that evasion incurs additional costs for the attacker. To get a domain whitelisted, for instance, the attacker should pay hosting fees and maintain the domain for a considerable amount of time without any attack campaigns, or should compromise other benign web servers. Some attackers do this and switch to phishing web pages on D-Day to launch a phishing attack~\cite{apwg}. Even after such efforts by the attacker, our experiments show that our method detects those evasion cases well.
\begin{figure*}
\centering
\subfloat[Ground-truth]{\includegraphics[width=0.31\textwidth]{figures/vis_gt.png}}\hfill
\subfloat[Proposed Method, \textit{i.e.,} \textsc{\textsf{BPE}}]{\includegraphics[width=0.31\textwidth]{figures/vis_bp.png}}\hfill
\subfloat[RandomForest]{\includegraphics[width=0.31\textwidth]{figures/vis_rf.png}}
\caption{Visualization of phishy/benign predictions for a partial area in our network. RandomForest, the best inductive method in our experiments, does not use our network information, so, when projected onto our network, its predictions do not strictly follow the network connectivity, as shown in (c). The meaning of vertex color follows that in Fig.~\ref{fig:network}.}\label{fig:trans}
\vspace{-1em}
\end{figure*}
\paragraph{Evasion case study}
Fig.~\ref{fig:evasion_case} shows eight 2-hop ego networks for a phishing URL that is randomly selected for our evasion settings. The first one shows the original network connection in our dataset. The target URL (the largest red vertex) is connected to other phishy domains and words, in which case it is straightforward to classify the target URL as phishy. In the other seven networks, however, the target URL is connected to a benign domain and/or word(s). Even after these evasions, our method correctly infers that the target URL is still phishy, whereas POL and RandomForest fail in all of the evasion cases. Our method is equipped with a sophisticated edge potential assignment mechanism, whereas POL does not consider edge potentials. Our theoretical analyses in Section~\ref{sec:rob} also support the robust nature of our method.
We also present further visualizations with real prediction results. Fig.~\ref{fig:trans} shows three visualizations including our method's and RandomForest's predictions. To emphasize their differences, we choose some important domain/IP/word vertices from our network and show their URL neighbors (rather than showing the full network). In Fig.~\ref{fig:trans} (a), we observe a strong pattern: the ground-truth labels follow the network connectivity in many cases. Sometimes red (phishy) and blue (benign) vertices are mixed in a cluster, but this is mainly because we find the clusters in the sub-network only. Our method in Fig.~\ref{fig:trans} (b) complies with the network connectivity better than RandomForest in Fig.~\ref{fig:trans} (c). To evade our method, therefore, the majority of URLs in the same cluster would have to be evaded at the same time, which burdens the attacker with non-trivial costs (see our evasion cost discussion in Section~\ref{sec:eva}).
\section{Data Crawling}\label{sec:crawl}
To collect as many phishing URL samples as possible, we monitored \url{phishtank.com} for a couple of months while also searching for other researchers' publicly available data. There are several online datasets --- many of them were released by Ma et al., who published several papers on phishing URL detection~\cite{Ma:2009:ISU:1553374.1553462,Ma09beyondblacklists}. However, their data does not include raw string patterns. We contacted them, but they replied that they cannot share the raw data. Mohammad et al. also released their data at \url{https://archive.ics.uci.edu/ml/datasets/Phishing+Websites}, but they likewise do not release the raw data used in their research~\cite{DBLP:conf/icitst/MohammadTM12,Mohammad2014}. As mentioned earlier, we need the string patterns of phishing URLs, so we could not use any of the above-mentioned data.
Therefore, we programmed a web crawler using an automated web browser library and collected all the URLs reported for Bank of America, eBay, and PayPal. To retrieve additional information from \url{virustotal.com}, we obtained an academic license for their APIs and collected the information listed in the main paper. The academic license was active for three months, which was more than enough time to retrieve all the needed information.
\section{Conclusions \& Future Work}\label{sec:conclusions}
Although many (machine learning) methods have been proposed to detect phishing URLs, it had been overlooked that attackers can use evasion techniques to neutralize them. In this paper, we tackled the significant problem of detecting phishing URLs after evasion. After segmenting URLs into words and creating a heterogeneous network that consists of cross-related entities, we performed belief propagation equipped with our customized edge potential mechanism, which is our main contribution. Furthermore, we showed that our design is theoretically robust to evasion. We collected recent URLs and downloaded two other datasets for extensive experiments. Our experiments with about 500K URLs verify that our method is the most effective in detecting phishing URLs and is also more robust to evasion than all baselines. Besides, we expect that our method can be easily applied to any similar network-based problem (\textit{e.g.,} detecting fake accounts in social networks and email spam) if it can be represented as a classification on graphs.
In the future, we will study a robust detection method based on both strings and web page contents. Some evasion techniques are difficult to counter with string-based detection methods alone. However, collecting web page contents requires non-trivial effort. Therefore, we think that hybrid methods will be the most useful for real-world applications.
\section*{Acknowledgment}
The work of Sang-Wook Kim was supported by the Institute of Information \& Communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155586) and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2018R1A5A7059549). The work of Noseong Park was supported by the Institute of Information \& Communications Technology Planning \& Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei University)).
\bibliographystyle{ACM-Reference-Format}
All sections of this course are either labelled as `Theory' or as
`Algorithms and Implementations'.
It is possible to study only the theory parts. However, the algorithmic parts
depend heavily on the developed theory. Of course, if one is principally interested in implementations,
one need not understand each and every proof.
Accordingly, theoretical and computer exercises are provided.
The conception of this course is different from every treatment I
know, in particular, from William Stein's excellent book `Modular
Forms: A Computational Approach' (\cite{Stein}) and from~\cite{Cremona}.
We emphasize the central role of Hecke
algebras and focus on the use of group
cohomology since on the one hand it can be described in very explicit
and elementary terms and on the other hand already allows the
application of the strong machinery of homological algebra.
We shall not discuss any geometric approaches.
The treatment of the (group cohomological) modular symbols algorithm given in this course is complete.
However, we did not include any treatment of Heilbronn matrices describing Hecke operators
on Manin symbols, which allow the action of the Hecke operators to be computed more quickly.
This course was originally held at the Universit\"at Duisburg-Essen in 2008
and its notes have been slightly reworked for publication in this volume.
{\bf Acknowledgements.}
I would like to thank the anonymous referees for a huge number of helpful suggestions and corrections that surely improved
the text. Thanks are also due to the students who followed the original course, among them Maite Aranés, Adam Mohamed and Ralf Butenuth,
for their helpful feedback. I would also like to thank Mariagiulia De Maria, Daniel Berhanu Mamo, Atin Modi, Luca Notarnicola and Emiliano Torti for
useful corrections.
\section{Motivation and Survey}\label{sec:1}
This section serves as an introduction to the topics of the course. We will briefly review the theory of modular forms and Hecke operators. Then we will define the modular symbols formalism and state a theorem by Eichler and Shimura establishing a link between modular forms and modular symbols. This link is the central ingredient since the modular symbols algorithm for the computation of modular forms is entirely based on it. In this introduction, we shall also be able to give an outline of this algorithm.
\subsection{Theory: Brief review of modular forms and Hecke operators}
\subsubsection*{Congruence subgroups}
We first recall the standard congruence subgroups of $\mathrm{SL}_2(\mathbb{Z})$.
By $N$ we shall always denote a positive integer.
Consider the group homomorphism
$$ \mathrm{SL}_2(\mathbb{Z}) \to \mathrm{SL}_2(\mathbb{Z}/N\mathbb{Z}).$$
By Exercise~\ref{exsln} it is surjective.
Its kernel is called the principal congruence subgroup of level~$N$ and denoted $\Gamma(N)$.
The group $\mathrm{SL}_2(\mathbb{Z}/N\mathbb{Z})$ acts naturally on $(\mathbb{Z}/N\mathbb{Z})^2$ (by multiplying the matrix with a vector).
We look at the orbit and the stabiliser of $\vect 10$. The orbit is
$$ \mathrm{SL}_2(\mathbb{Z}/N\mathbb{Z})\vect 10 = \{ \vect ac \;|\; a,c \textnormal{ generate }\mathbb{Z}/N\mathbb{Z}\}$$
because the determinant is~$1$.
We also point out that the orbit of $\vect 10$ can and should be viewed as the set of elements in $(\mathbb{Z}/N\mathbb{Z})^2$ which are of precise (additive) order~$N$.
We now consider the stabiliser of $\vect 10$ and define the group $\Gamma_1(N)$ as the preimage of that stabiliser group in $\mathrm{SL}_2(\mathbb{Z})$.
Explicitly, this means that $\Gamma_1(N)$ consists of those matrices in $\mathrm{SL}_2(\mathbb{Z})$ whose reduction modulo $N$ is of the form $\mat 1*01$.
The group $\mathrm{SL}_2(\mathbb{Z}/N\mathbb{Z})$ also acts on $\mathbb{P}^1(\mathbb{Z}/N\mathbb{Z})$, the projective line over $\mathbb{Z}/N\mathbb{Z}$, which one can define as the tuples $(a:c)$ with $a,c \in \mathbb{Z}/N\mathbb{Z}$ such that $\langle a,c\rangle = \mathbb{Z}/N\mathbb{Z}$ modulo the equivalence relation given by multiplication by an element of $(\mathbb{Z}/N\mathbb{Z})^\times$.
The action is the natural one (we should actually view $(a:c)$ as a column vector, as above).
The orbit of $(1:0)$ for this action is $\mathbb{P}^1(\mathbb{Z}/N\mathbb{Z})$.
The preimage in $\mathrm{SL}_2(\mathbb{Z})$ of the stabiliser group of $(1:0)$ is called $\Gamma_0(N)$.
Explicitly, it consists of those matrices in $\mathrm{SL}_2(\mathbb{Z})$ whose reduction is of the form $\mat **0*$.
We also point out that the quotient of $\mathrm{SL}_2(\mathbb{Z}/N\mathbb{Z})$ modulo the stabiliser of $(1:0)$ corresponds to the set of cyclic subgroups of precise order $N$ in $(\mathbb{Z}/N\mathbb{Z})^2$.
These observations are at the base of defining level structures for elliptic curves.
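The description of the orbit of $\vect 10$ can be checked by brute force for a small modulus. The following sketch (with illustrative helper names) verifies for $N=6$ that the orbit under $\mathrm{SL}_2(\mathbb{Z}/N\mathbb{Z})$ consists exactly of the vectors of precise additive order $N$, i.e.\ the pairs $(a,c)$ with $\gcd(a,c,N)=1$.

```python
from math import gcd
from itertools import product

def sl2_orbit_of_e1(N):
    """Brute-force the orbit of (1,0) under SL2(Z/NZ): collect the
    first columns (a,c) of all matrices with a*d - b*c = 1 mod N."""
    orbit = set()
    for a, b, c, d in product(range(N), repeat=4):
        if (a * d - b * c) % N == 1 % N:
            orbit.add((a, c))
    return orbit

def order_N_vectors(N):
    """Vectors (a,c) of precise additive order N in (Z/NZ)^2,
    i.e. those with gcd(a, c, N) = 1."""
    return {(a, c) for a in range(N) for c in range(N)
            if gcd(gcd(a, c), N) == 1}
```

For $N=6$ there are $N^2\prod_{p\mid N}(1-p^{-2}) = 36\cdot\frac34\cdot\frac89 = 24$ such vectors.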
It is clear that $\Gamma_1(N)$ is a normal subgroup of $\Gamma_0(N)$ and that the map
$$ \Gamma_0(N) / \Gamma_1(N) \xrightarrow{\mat abcd \mapsto a \mod N} (\mathbb{Z}/N\mathbb{Z})^\times$$
is a group isomorphism.
The quotient $\Gamma_0(N) / \Gamma_1(N)$ will be important in the sequel because it will act on modular forms and modular symbols for $\Gamma_1(N)$.
For that purpose, we shall often consider characters (i.e.\ group homomorphisms) of the form
$$ \chi : (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{C}^\times.$$
We shall also often extend $\chi$ to a map $(\mathbb{Z}/N\mathbb{Z}) \to \mathbb{C}$ by imposing $\chi(r) = 0$ if $(r,N) \ne 1$.
On the number theory side, the group $(\mathbb{Z}/N\mathbb{Z})^\times$ enters as the Galois group of a cyclotomic extension.
More precisely, by class field theory or Exercise~\ref{qzetan} we have the isomorphism
$$ \Gal(\mathbb{Q}(\zeta_N)/\mathbb{Q}) \xrightarrow{\Frob_\ell \mapsto \ell} (\mathbb{Z}/N\mathbb{Z})^\times$$
for all primes $\ell \nmid N$.
By $\Frob_\ell$ we denote (a lift of) the Frobenius endomorphism $x \mapsto x^\ell$, and by $\zeta_N$ we denote any primitive $N$-th root of unity.
We shall, thus, later on also consider $\chi$ as a character of $\Gal(\mathbb{Q}(\zeta_N)/\mathbb{Q})$. The name
{\em Dirichlet character} (here of {\em modulus $N$}) is common usage for both.
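As a minimal concrete example, the non-trivial Dirichlet character of modulus $4$, extended by zero on residues not coprime to $4$ as above, can be written down and checked for complete multiplicativity.

```python
def chi_mod4(n):
    """The non-trivial Dirichlet character of modulus 4, extended
    by 0: chi(n) = 1 if n = 1 (mod 4), -1 if n = 3 (mod 4),
    and 0 if n is even."""
    return {1: 1, 3: -1}.get(n % 4, 0)
```

The zero extension is compatible with multiplicativity: if either factor shares a divisor with the modulus, so does the product, and both sides vanish.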
\subsubsection*{Modular forms}
We now recall the definitions of modular forms. Standard references are \cite{DS} and~\cite{CS}, but I still vividly recommend~\cite{DiamondIm},
which gives a concise and yet rather complete introduction. We denote by
$$\mathbb{H} = \{ z \in \mathbb{C} | \mathrm{im}(z) > 0\}$$
the upper half plane. The set of cusps is by definition $\mathbb{P}^1(\mathbb{Q}) = \mathbb{Q} \cup \{\infty\}$.
The group $\mathrm{PSL}_2(\mathbb{Z})$ acts on $\mathbb{H}$ by Möbius transforms; more explicitly, for $M = \mat abcd \in \mathrm{SL}_2(\mathbb{Z})$ and $z \in \mathbb{H} \cup \mathbb{P}^1(\mathbb{Q})$
one sets
\begin{equation}\label{eq:moebius}
M.z = \frac{az+b}{cz+d}.
\end{equation}
For $M = \mat abcd$ an integer matrix with non-zero determinant,
an integer $k$ and a function $f:\mathbb{H} \to \mathbb{C}$, we put
$$ (f|_k M)(z) = (f|M)(z) := f\big(M.z\big) \frac{\Det(M)^{k-1}}{(cz+d)^k}.$$
Fix integers $k\ge 1$ and $N \ge 1$. A function
$$ f: \mathbb{H} \to \mathbb{C}$$
given by a convergent power series (the $a_n(f)$ are complex numbers)
$$f(z) = \sum_{n=0}^\infty a_n(f) (e^{2\pi i z})^n = \sum_{n=0}^\infty a_n(f) q^n
\;\; \text{ with } q(z) = e^{2 \pi i z}$$
is called a {\em modular form of weight $k$ for $\Gamma_1(N)$} if
\begin{enumerate}[(i)]
\item $(f|_k \mat abcd) (z) = f(\frac{az+b}{cz+d})(cz+d)^{-k} = f(z)$ for all
$\mat abcd \in \Gamma_1(N)$, and
\item the function $(f|_k \mat abcd) (z) = f(\frac{az+b}{cz+d}) (cz+d)^{-k}$
admits a limit when $z$ tends to $i \infty$ (we often just write $\infty$) for all $\mat abcd \in \mathrm{SL}_2(\mathbb{Z})$
(this condition is called {\em $f$ is holomorphic at the cusp $a/c$}).
\end{enumerate}
We use the notation $\Mkone kN\mathbb{C}$. If we replace (ii) by
\begin{enumerate}[(i)]
\item [(ii)'] the function $(f|_k \mat abcd) (z) = f(\frac{az+b}{cz+d}) (cz+d)^{-k}$
is holomorphic and
the limit $f(\frac{az+b}{cz+d}) (cz+d)^{-k}$ is $0$
when $z$ tends to $i \infty$,
\end{enumerate}
then $f$ is called a {\em cusp form}. For these,
we introduce the notation $\Skone kN\mathbb{C}$.
Let us now suppose that we are given a Dirichlet character~$\chi$
of modulus~$N$ as above. Then we replace (i) as follows:
\begin{enumerate}[(i)]
\item [(i)'] $f(\frac{az+b}{cz+d}) (cz+d)^{-k} = \chi(d) f(z)$ for all
$\mat abcd \in \Gamma_0(N)$.
\end{enumerate}
Functions satisfying this condition are called {\em modular forms}
(respectively, {\em cusp forms} if they satisfy (ii)') {\em of weight $k$,
character $\chi$ and level $N$}. The notation
$\Mk kN\chi\mathbb{C}$ (respectively, $\Sk kN\chi\mathbb{C}$) will be used.
All these are finite dimensional $\mathbb{C}$-vector spaces. For $k \ge 2$, there are dimension formulae, which one can look up in \cite{Stein}.
We, however, point the reader to the fact that for $k=1$ nearly nothing about the dimension is known (except that it is smaller than the respective dimension for $k=2$; it is believed to be much smaller, but only very weak results are known to date).
\subsubsection*{Hecke operators}
At the base of everything that we will do with modular forms are the
Hecke operators and the diamond operators. One should really define
them more conceptually (e.g.\ geometrically), but this takes some time.
Here is a definition by formulae.
If $a$ is an integer coprime to~$N$, by Exercise~\ref{exsigmaa} we may let $\sigma_a$ be a matrix in $\Gamma_0(N)$ such that
\begin{equation}\label{sigmaa}
\sigma_a \equiv \mat {a^{-1}}00a \mod N.
\end{equation}
We define the {\em diamond operator} $\diam a$ (you see the diamond in the notation, with some imagination) by the formula
$$ \diam a f = f|_k \sigma_a.$$
If $f \in \Mk kN\chi\mathbb{C}$, then we have by definition $\diam a f = \chi(a) f$.
The diamond operators give a group action of $(\mathbb{Z}/N\mathbb{Z})^\times$ on $\Mkone kN\mathbb{C}$ and on $\Skone kN\mathbb{C}$,
and the $\Mk kN\chi\mathbb{C}$ and $\Sk kN\chi\mathbb{C}$ are the $\chi$-eigenspaces for this action.
We thus have the isomorphism
$$ \Mkone kN\mathbb{C} \cong \bigoplus_\chi \Mk kN\chi\mathbb{C}$$
for $\chi$ running through the characters of $(\mathbb{Z}/N\mathbb{Z})^\times$ (and similarly for the cuspidal spaces).
Let $\ell$ be a prime. We let
\begin{align}\label{setrp}
\mathcal{R}_\ell & := \{ \mat 1r0\ell | 0 \le r \le \ell-1 \} \cup \{\sigma_\ell \mat \ell 001\}, & \text{ if }\ell\nmid N\\
\mathcal{R}_\ell & := \{ \mat 1r0\ell | 0 \le r \le \ell-1 \}, & \text{ if }\ell\mid N
\end{align}
We use these sets to define the {\em Hecke operator $T_\ell$} acting
on $f$ as above as follows:
$$ f |_k T_\ell := T_\ell f := \sum_{\delta \in \mathcal{R}_\ell} f|_k\delta.$$
\begin{lemma}\label{lemhecke}
Suppose $f \in \Mk kN\chi\mathbb{C}$.
Recall that we have extended $\chi$ so that $\chi(\ell) = 0$ if $\ell$ divides~$N$.
We have the formula
$$ a_n(T_\ell f) = a_{\ell n}(f) + \ell^{k-1} \chi(\ell) a_{n/\ell}(f).$$
In the formula, $a_{n/\ell}(f)$ is to be read as $0$ if $\ell$ does not divide $n$.
\end{lemma}
\begin{proof}
Exercise~\ref{exhecke}.
\end{proof}
The Hecke operators for composite $n$ can be defined as follows (we put $T_1$ to be the identity):
\begin{equation}\label{eq:Tn-composite}
\begin{array}{rll}
T_{\ell^{r+1}} &= T_\ell \circ T_{\ell^r} - \ell^{k-1} \diam \ell T_{\ell^{r-1}} & \textnormal{ for all primes~$\ell$ and $r \ge 1$},\\
T_{uv} &= T_u \circ T_v & \textnormal{ for coprime positive integers $u,v$}.
\end{array}
\end{equation}
We derive the very important formula (valid for every $n$)
\begin{equation}\label{aeins}
a_1(T_n f) = a_n(f).
\end{equation}
It is the only formula that we will really need.
From Lemma~\ref{lemhecke} and the above formulae it is also evident that the Hecke operators commute among one another.
By Exercise~\ref{excommute} eigenspaces for a collection of operators (i.e.\ each element of a given set of Hecke operators acts by scalar multiplication) are respected by all Hecke operators.
Hence, it makes sense to consider modular forms which are eigenvectors for every Hecke operator. These are called {\em Hecke eigenforms}, or often just {\em eigenforms}.
Such an eigenform $f$ is called {\em normalised} if $a_1(f) = 1$.
We shall consider eigenforms in more detail in the following section.
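As a concrete numerical check of Lemma~\ref{lemhecke} and the eigenform property, one can compute the $q$-expansion of the discriminant form $\Delta = q\prod_{n\ge 1}(1-q^n)^{24}$, the normalised cusp form of weight $12$ and level $1$ (with coefficients the Ramanujan $\tau$-function), and verify coefficientwise that $T_2\Delta = \tau(2)\Delta$. The following is a short unoptimised sketch, not a general implementation.

```python
def delta_coeffs(nmax):
    """q-expansion of Delta = q * prod_{n>=1} (1 - q^n)^24, the
    normalised cusp form of weight 12 and level 1. Returns a list
    a with a[n] = a_n(Delta) = tau(n) for 0 <= n <= nmax."""
    prod = [0] * (nmax + 1)
    prod[0] = 1
    for n in range(1, nmax + 1):
        for _ in range(24):
            # multiply by (1 - q^n), truncating above q^nmax
            for i in range(nmax, n - 1, -1):
                prod[i] -= prod[i - n]
    return [0] + prod[:nmax]  # shift by the leading factor q

def hecke_Tl(a, ell, k=12):
    """a_n(T_ell f) = a_{ell*n}(f) + ell^(k-1) * a_{n/ell}(f) at
    level 1 (trivial character), valid for ell*n < len(a)."""
    nmax = (len(a) - 1) // ell
    out = [0] * (nmax + 1)
    for n in range(nmax + 1):
        out[n] = a[ell * n]
        if n > 0 and n % ell == 0:
            out[n] += ell ** (k - 1) * a[n // ell]
    return out
```

At level $1$ the diamond operators are trivial, so the relation $\ell^{k-1}\diam d = T_\ell^2 - T_{\ell^2}$ becomes $\tau(2)^2 - \tau(4) = 2^{11}$, which the computed coefficients confirm.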
Finally, let us point out the formula (for $\ell$ prime and $\ell \equiv d \mod N$)
\begin{equation}\label{diamondinhecke}
\ell^{k-1} \diam d = T_\ell^2 - T_{\ell^2}.
\end{equation}
Hence, the diamond operators can be expressed as $\mathbb{Z}$-linear combinations of Hecke operators.
Note that divisibility is no trouble since we may choose $\ell_1$, $\ell_2$, both congruent to $d$ modulo $N$ satisfying an equation $1 = \ell_1^{k-1}r + \ell_2^{k-1}s$ for appropriate $r,s \in \mathbb{Z}$.
\subsubsection*{Hecke algebras and the $q$-pairing}
We now quickly introduce the concept of Hecke algebras. It will be treated in more detail in later sections.
In fact, {\em when we claim to compute modular forms with the modular symbols algorithm, we are really computing Hecke algebras.}
In the couple of lines to follow, we show that the Hecke algebra is the dual of modular forms, and hence all knowledge about modular forms can
- in principle - be derived from the Hecke algebra.
For the moment, we define the {\em Hecke algebra} of $\Mkone kN\mathbb{C}$ as the sub-$\mathbb{C}$-algebra
inside the endomorphism ring of the $\mathbb{C}$-vector space $\Mkone kN\mathbb{C}$ generated by all Hecke operators and all diamond operators.
We make similar definitions for $\Skone kN\mathbb{C}$, $\Mk kN\chi\mathbb{C}$ and $\Sk kN\chi\mathbb{C}$.
Let us introduce the pieces of notation
$$\mathbb{T}_\mathbb{C}(\Mkone kN\mathbb{C}), \mathbb{T}_\mathbb{C}(\Skone{k}{N}{\mathbb{C}}), \mathbb{T}_\mathbb{C}(\Mk{k}{N}{\chi}{\mathbb{C}}) \text{ and } \mathbb{T}_\mathbb{C}(\Sk{k}{N}{\chi}{\mathbb{C}}),$$
respectively.
We now define a bilinear pairing, which we call the {\em (complex) $q$-pairing}, as
$$ \Mk kN\chi\mathbb{C} \times \mathbb{T}_\mathbb{C}(\Mk kN\chi\mathbb{C}) \to \mathbb{C}, \;\;(f,T) \mapsto a_1(Tf)$$
(compare with Equation~\ref{aeins}).
\begin{lemma}\label{qpair}
Suppose $k \ge 1$.
The complex $q$-pairing is perfect, as is the analogous pairing for $\Sk kN\chi\mathbb{C}$. In particular,
$$ \Mk kN\chi\mathbb{C} \cong {\rm Hom}_\mathbb{C} (\mathbb{T}_\mathbb{C}(\Mk kN\chi\mathbb{C}),\mathbb{C}), \;\; f \mapsto (T \mapsto a_1(Tf))$$
and similarly for $\Sk kN\chi\mathbb{C}$. For $\Sk kN\chi\mathbb{C}$, the inverse is given by sending $\phi$ to $\sum_{n=1}^\infty \phi(T_n) q^n$.
\end{lemma}
\begin{proof}
Let us first recall that a pairing over a field is perfect
if and only if it is non-degenerate. That is what we are going to
check. It follows from Equation~\ref{aeins} like this.
If for all $n$ we have $0 = a_1(T_n f) = a_n(f)$, then
$f = 0$ (this is immediately clear for cusp forms; for
general modular forms we can a priori only conclude
that $f$ is a constant, but since $k \ge 1$, non-zero constants
are not modular forms). Conversely, if
$a_1 (Tf) = 0$ for all $f$, then $a_1(T (T_n f)) = a_1 (T_n T f)
= a_n (Tf) = 0$ for all $f$ and all $n$, whence $Tf = 0$ for all $f$.
As the Hecke algebra is defined as a subring of the endomorphism
ring of $\Mk kN\chi\mathbb{C}$ (resp.\ of the cusp forms), we find $T=0$,
proving the non-degeneracy.
\end{proof}
The perfectness of the $q$-pairing is also called the {\em existence of a $q$-expansion principle}.
Due to its central role for this course,
we repeat and emphasize that the Hecke algebra is the linear dual of the space of modular forms.
\begin{lemma}\label{eigenf}
Let $f$ in $\Mkone kN\mathbb{C}$ be a normalised eigenform. Then
$$ T_n f = a_n(f) f \;\;\; \text{ for all } n \in \mathbb{N}.$$
Moreover, the natural map from the above duality gives a bijection
\begin{equation*}
\{ \textnormal{Normalised eigenforms in }\Mkone kN\mathbb{C}\} \leftrightarrow
{\rm Hom}_{\mathbb{C}-\textnormal{algebra}} (\mathbb{T}_\mathbb{C}(\Mkone kN\mathbb{C}),\mathbb{C}).
\end{equation*}
Similar results hold, of course, also in the presence of~$\chi$.
\end{lemma}
\begin{proof}
Exercise~\ref{exeigenf}.
\end{proof}
\subsection{Theory: The modular symbols formalism}
In this section we give a definition of formal modular symbols, as implemented in {\sc Magma} and like the one in \cite{MerelUniversal},
\cite{Cremona} and~\cite{Stein}, except that we do not factor out torsion, but intend a common treatment for all rings.
Contrary to the texts just mentioned, we prefer to work with the group
$$ \mathrm{PSL}_2(\mathbb{Z}) = \mathrm{SL}_2(\mathbb{Z}) / \langle -1 \rangle,$$
since it will make some of the algebra much simpler and since
it has a very simple description as a free product (see later).
The definitions of modular forms could have been formulated
using $\mathrm{PSL}_2(\mathbb{Z})$ instead of $\mathrm{SL}_2(\mathbb{Z})$, too (Exercise~\ref{expsl}).
We introduce some definitions and pieces of notation to be used in all the course.
\begin{definition}
Let $R$ be a ring, $\Gamma$ a group and $V$ a left $R[\Gamma]$-module.
The $\Gamma$-invariants of~$V$ are by definition
$$ V^\Gamma = \{ v \in V | g.v = v \; \forall \; g \in \Gamma \} \subseteq V.$$
The $\Gamma$-coinvariants of~$V$ are by definition
$$ V_\Gamma = V / \langle v - g.v | g \in \Gamma, v \in V \rangle.$$
If $H \le \Gamma$ is a finite subgroup, we define the norm of~$H$ as
$$ N_H = \sum_{h \in H} h \in R[\Gamma].$$
Similarly, if $g \in \Gamma$ is an element of finite order~$n$, we define the norm of~$g$ as
$$ N_g = N_{\langle g \rangle} = \sum_{i=0}^{n-1} g^i \in R[\Gamma].$$
\end{definition}
Please look at the important Exercise~\ref{exgp} for some properties of these definitions.
We shall make use of the results of this exercise in the section on group cohomology.
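Two of the properties from that exercise can be seen at once in a tiny numerical sketch (the matrix $g$ below is an illustrative choice, not tied to $\mathrm{PSL}_2(\mathbb{Z})$): the norm $N_g$ maps any vector into the invariants, and it kills elements of the form $v - g.v$, so it factors through the coinvariants.

```python
# Sketch of the norm element: for g of finite order n, N_g = sum g^i
# maps V into the invariants V^<g> and kills v - g.v.
# Illustrative choice: g = [[0,1],[1,0]] of order 2 acting on Z^2.

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

g = [[0, 1], [1, 0]]                              # swap, order 2
assert mat_vec(g, mat_vec(g, [1, 2])) == [1, 2]   # g^2 = id

def norm_apply(v):
    """Apply N_g = 1 + g to the vector v."""
    gv = mat_vec(g, v)
    return [v[i] + gv[i] for i in range(2)]

v = [3, 5]
w = norm_apply(v)                      # w = [8, 8]
assert mat_vec(g, w) == w              # N_g v lies in the invariants
assert norm_apply([1, -1]) == [0, 0]   # N_g kills v - g.v (here [1,0]-[0,1])
```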
For the rest of this section, we let $R$ be a commutative ring with unit and $\Gamma$ be a subgroup of finite index in $\mathrm{PSL}_2(\mathbb{Z})$.
For the time being we allow general modules; so we let $V$ be a left $R[\Gamma]$-module.
Recall that $\mathrm{PSL}_2(\mathbb{Z})$ acts on $\mathbb{H} \cup \mathbb{P}^1(\mathbb{Q})$ by Möbius transformations, as defined earlier.
A generalised version of the definition below appeared in~\cite{MS}.
\begin{definition}\label{defMS}
We define the $R$-modules
$$ \mathcal{M}_R := R[\{\alpha,\beta\}| \alpha,\beta \in \mathbb{P}^1(\mathbb{Q})]/
\langle \{\alpha,\alpha\}, \{\alpha,\beta\} + \{\beta,\gamma\} + \{\gamma,\alpha\}
| \alpha,\beta,\gamma \in \mathbb{P}^1(\mathbb{Q})\rangle$$
and
$$ \mathcal{B}_R := R[\mathbb{P}^1(\mathbb{Q})].$$
We equip both with the natural left $\Gamma$-action.
Furthermore, we let
$$ \mathcal{M}_R(V) := \mathcal{M}_R \otimes_R V \;\;\;\; \text{ and } \;\;\;\; \mathcal{B}_R(V) := \mathcal{B}_R \otimes_R V$$
for the left diagonal $\Gamma$-action.
\begin{enumerate}[(a)]
\item We call the $\Gamma$-coinvariants
$$ \mathcal{M}_R (\Gamma,V) := \mathcal{M}_R(V)_\Gamma =
\mathcal{M}_R(V)/ \langle (x - g x) | g \in \Gamma, x \in \mathcal{M}_R(V) \rangle$$
{\em the space of $(\Gamma,V)$-modular symbols.}
\item We call the $\Gamma$-coinvariants
$$ \mathcal{B}_R(\Gamma,V) := \mathcal{B}_R(V)_\Gamma =
\mathcal{B}_R(V)/ \langle (x - g x) | g \in \Gamma, x \in \mathcal{B}_R(V) \rangle$$
{\em the space of $(\Gamma,V)$-boundary symbols.}
\item We define the {\em boundary map} as the map
$$ \mathcal{M}_R(\Gamma,V) \to \mathcal{B}_R(\Gamma,V)$$
which is induced from the map $\mathcal{M}_R \to \mathcal{B}_R$ sending $\{\alpha, \beta\}$
to $\{\beta\} - \{\alpha\}$.
\item The kernel of the boundary map is denoted by $\mathcal{CM}_R(\Gamma,V)$ and is called
{\em the space of cuspidal $(\Gamma,V)$-modular symbols.}
\item The image of the boundary map inside $\mathcal{B}_R(\Gamma,V)$ is
denoted by $\mathcal{E}_R(\Gamma,V)$ and is called
{\em the space of $(\Gamma,V)$-Eisenstein symbols.}
\end{enumerate}
\end{definition}
The reader is now invited to prove that the definition of
$\mathcal{M}_R(\Gamma,V)$ behaves well with respect to base change (Exercise~\ref{basechange}).
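The relations $\{\alpha,\alpha\} = 0$ and $\{\alpha,\beta\} + \{\beta,\gamma\} + \{\gamma,\alpha\} = 0$ force $\{\alpha,\beta\} = \{0,\beta\} - \{0,\alpha\}$, which gives a normal form for elements of $\mathcal{M}_R$. The following Python sketch implements this normal form; it ignores the $\Gamma$-coinvariants and the module $V$, and for brevity represents cusps by rational numbers only (the cusp $\infty$ is omitted):

```python
# Normal form for M_R: every {a,b} equals {BASE,b} - {BASE,a}.
# We store a linear combination as a dict  cusp c -> coefficient of {BASE,c}.
from fractions import Fraction
from collections import defaultdict

BASE = Fraction(0)  # chosen base cusp

def symbol(a, b):
    """{a,b} in normal form."""
    d = defaultdict(Fraction)
    d[b] += 1
    d[a] -= 1
    d.pop(BASE, None)                       # {BASE,BASE} = 0
    return {c: v for c, v in d.items() if v != 0}

def add(x, y):
    d = defaultdict(Fraction, x)
    for c, v in y.items():
        d[c] += v
    return {c: v for c, v in d.items() if v != 0}

a, b, g = Fraction(1, 2), Fraction(3), Fraction(-5, 7)
assert symbol(a, a) == {}                                        # {a,a} = 0
assert add(add(symbol(a, b), symbol(b, g)), symbol(g, a)) == {}  # 3-term rel.
assert add(symbol(a, b), symbol(b, a)) == {}                     # {a,b} = -{b,a}
```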
\subsubsection*{The modules $V_n(R)$ and $V_n^\chi(R)$}
Let $R$ be a ring. We put $V_n(R) = R[X,Y]_n \cong \Sym^{n}(R^2)$ (see Exercise~\ref{exsym}).
By $R[X,Y]_n$ we mean the homogeneous polynomials of degree~$n$ in two variables with coefficients in the ring~$R$.
By $\mathrm{Mat}_2(\ZZ)_{\det \neq 0}$ we denote the monoid of integral $2\times 2$-matrices with non-zero determinant (for matrix multiplication),
{\em i.e.}, $\mathrm{Mat}_2(\ZZ)_{\det \neq 0} = \mathrm{GL}_2(\mathbb{Q}) \cap \mathbb{Z}^{2 \times 2}$.
Then $V_n(R)$ is a $\mathrm{Mat}_2(\ZZ)_{\det \neq 0}$-module in several natural ways.
One can give it the structure of a left $\mathrm{Mat}_2(\ZZ)_{\det \neq 0}$-module via the polynomials by putting
$$ (\mat abcd .f) (X,Y) = f \big( (X,Y) \mat abcd \big) = f \big( (aX+cY, bX+dY) \big).$$
Merel and Stein, however, consider a different one, and that is the one implemented in {\sc Magma}, namely
$$ (\mat abcd .f) (X,Y) = f \big( (\mat abcd)^\iota \vect XY \big)
= f \big( \mat d{-b}{-c}a \vect XY \big) = f \big( \vect {dX-bY}{-cX+aY} \big).$$
Here, $\iota$ denotes Shimura's main involution whose definition
can be read off from the line above (note that $M^\iota$ is the inverse of $M$ if $M$ has determinant~$1$).
Fortunately, both actions are isomorphic due to the fact that the transpose of $(\mat abcd)^\iota \vect XY$ is equal
to $(X,Y) \sigma^{-1} \mat abcd \sigma$, where $\sigma = \mat 01{-1}0$.
More precisely, we have the isomorphism $V_n(R) \xrightarrow{f \mapsto \sigma^{-1}.f} V_n(R)$,
where the left hand side module carries ``our'' action and the right hand side
module carries the other one. By $\sigma^{-1}.f$ we mean ``our'' $\sigma^{-1}.f$.
Of course, there is also a natural right action by $\mathrm{Mat}_2(\ZZ)_{\det \neq 0}$, namely
$$ (f.\mat abcd) (\vect XY) = f ( \mat abcd \vect XY ) = f(\vect{aX+bY}{cX+dY}).$$
By the standard inversion trick, also both left actions described above can be turned into right ones.
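The first of these actions, $(\mat abcd.f)(X,Y) = f\big((aX+cY, bX+dY)\big)$, is easy to implement on coefficient vectors; the sketch below (pure Python, not the {\sc Magma} implementation) stores a homogeneous polynomial of degree $n$ by its coefficients of $X^iY^{n-i}$ and verifies the left-action property $(AB).f = A.(B.f)$ numerically:

```python
from math import comb

def act(M, f):
    """Left action (M.f)(X,Y) = f((X,Y)M) on a homogeneous polynomial
    of degree n, stored as the list f[i] of coefficients of X^i Y^(n-i)."""
    (a, b), (c, d) = M
    n = len(f) - 1
    out = [0] * (n + 1)
    for i, ci in enumerate(f):
        # expand (aX + cY)^i * (bX + dY)^(n-i) by the binomial theorem
        for s in range(i + 1):
            for t in range(n - i + 1):
                out[s + t] += (ci * comb(i, s) * a**s * c**(i - s)
                               * comb(n - i, t) * b**t * d**(n - i - t))
    return out

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 1]]
B = [[1, 3], [0, 1]]
f = [1, -2, 0, 5]  # X^3 - 2 X^2 Y + 5 Y^3 in V_3

# left-action property: (AB).f = A.(B.f)
assert act(mat_mul(A, B), f) == act(A, act(B, f))
```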
Let now $(\mathbb{Z}/N\mathbb{Z})^\times \to R^\times$ be a Dirichlet character, which we shall also consider as a character
$\chi: \Gamma_0(N) \xrightarrow{\mat abcd \mapsto a} (\mathbb{Z}/N\mathbb{Z})^\times \xrightarrow{\chi} R^\times$.
By $R^\chi$ we denote the $R[\Gamma_0(N)]$-module which
is defined to be $R$ with the $\Gamma_0(N)$-action through~$\chi$,
i.e.\ $\mat abcd .r = \chi(a) r = \chi^{-1}(d) r$ for $\mat abcd \in \Gamma_0(N)$ and $r \in R$.
For use with Hecke operators, we extend this action to matrices $\mat abcd \in \mathbb{Z}^{2 \times 2}$ which are
congruent to an upper triangular matrix modulo~$N$ (but not necessarily of determinant~$1$). Concretely,
we also put $\mat abcd .r = \chi(a) r$ for $r \in R$ in this situation. Sometimes, however, we want to use the coefficient~$d$
in the action. In order to do so, we let $R^{\iota,\chi}$ be $R$ with the action $\mat abcd .r = \chi(d) r$
for matrices as above.
In particular, the $\Gamma_0(N)$-actions on $R^{\iota,\chi}$ and $R^{\chi^{-1}}$ coincide.
Note that due to $(\mathbb{Z}/N\mathbb{Z})^\times$ being an abelian group,
the same formulae as above make $R^\chi$ also into a right $R[\Gamma_0(N)]$-module.
Hence, putting
$$(f \otimes r).\mat abcd = (f|_k \mat abcd) \otimes \mat abcd r$$
makes $\Mkone kN\mathbb{C} \otimes_\mathbb{C} \mathbb{C}^\chi$ into a right $\Gamma_0(N)$-module and we have the
description (Exercise~\ref{exchar})
\begin{equation}\label{eqchar}
\Mk kN\chi\mathbb{C} = (\Mkone kN\mathbb{C} \otimes_\mathbb{C} \mathbb{C}^\chi)^{(\mathbb{Z}/N\mathbb{Z})^\times}
\end{equation}
and similarly for $\Sk kN\chi\mathbb{C}$.
We let
$$V_n^\chi(R) := V_n(R) \otimes_R R^\chi \textnormal{ and }V_n^{\iota,\chi}(R) := V_n(R) \otimes_R R^{\iota,\chi}$$
equipped with the natural diagonal left $\Gamma_0(N)$-actions.
Note that unfortunately these modules are in general not $\mathrm{SL}_2(\mathbb{Z})$-modules,
but we will not need that.
Note, moreover, that if $\chi(-1) = (-1)^n$, then minus the identity
acts trivially on $V_n^\chi(R)$ and $V_n^{\iota,\chi}(R)$, whence we consider these
modules also as $\Gamma_0(N)/\{\pm 1\}$-modules.
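The identity $\chi(a) = \chi^{-1}(d)$ used above comes from $ad \equiv 1 \pmod N$ for a determinant-$1$ matrix in $\Gamma_0(N)$. A quick Python check (the level $N=7$ and the quadratic character are illustrative choices):

```python
# For gamma = [[a,b],[c,d]] in Gamma_0(N) (det = 1, N | c) we have
# a*d = 1 + b*c = 1 (mod N), hence chi(a) = chi(d)^(-1).
# Sketch with N = 7 and the quadratic character chi(a) = a^((N-1)/2) mod 7.

N = 7

def chi(a):
    r = pow(a % N, (N - 1) // 2, N)
    return 1 if r == 1 else -1     # values are +-1 since r^2 = 1 mod N

def in_gamma0(m):
    (a, b), (c, d) = m
    return a * d - b * c == 1 and c % N == 0

gamma = [[2, 1], [7, 4]]
assert in_gamma0(gamma)
a, d = gamma[0][0], gamma[1][1]
assert (a * d) % N == 1
assert chi(a) * chi(d) == 1        # chi(a) = chi(d)^(-1) = chi^(-1)(d)
assert chi(-1) == -1               # this chi is odd
```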
\subsubsection*{The modular symbols formalism for standard congruence subgroups}
We now specialise the general set-up on modular symbols that we
have used so far to the precise situation needed for establishing
relations with modular forms.
So we let $N \ge 1$, $k \ge 2$ be integers and fix a character
$\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to R^\times$, which we also sometimes
view as a group homomorphism $\Gamma_0(N) \to R^\times$ as above. We
impose that $\chi(-1) = (-1)^k$.
We define
$$\cMk kN\chi R := \mathcal{M}_R(\Gamma_0(N)/\{\pm 1\},V_{k-2}^\chi(R)),$$
$$\cCMk kN\chi R := \mathcal{CM}_R(\Gamma_0(N)/\{\pm 1\},V_{k-2}^\chi(R)),$$
$$\cBk kN\chi R := \mathcal{B}_R(\Gamma_0(N)/\{\pm 1\},V_{k-2}^\chi(R))$$
and
$$\cEk kN\chi R := \mathcal{E}_R(\Gamma_0(N)/\{\pm 1\},V_{k-2}^\chi(R)).$$
We make the obvious analogous definitions for $\cMkone kNR$ etc.
Let
\begin{equation}\label{defeta}
\eta := \mat {-1}001.
\end{equation}
Because of
$$ \eta \mat abcd \eta = \mat a {-b}{-c} d$$
we have
$$ \eta \Gamma_1(N) \eta = \Gamma_1(N) \;\;\; \text{ and } \;\;\;\eta \Gamma_0(N) \eta = \Gamma_0(N).$$
We can use the matrix $\eta$ to define an involution (also denoted by $\eta$) on the various modular symbols spaces. We just use the diagonal action on $\mathcal{M}_R(V) := \mathcal{M}_R \otimes_R V$, provided, of course, that $\eta$ acts on~$V$. On $V_{k-2}(R)$ we use the usual $\mathrm{Mat}_2(\ZZ)_{\det \neq 0}$-action, and on $V_{k-2}^\chi(R) = V_{k-2}(R) \otimes R^\chi$ we let $\eta$ only act on the first factor.
We will denote by the superscript ${}^+$ the subspace invariant under this involution, and by the superscript ${}^-$ the anti-invariant one. We point out that there are other very good
definitions of $+$-spaces and $-$-spaces. For instance, in many applications it can
be of advantage to define the $+$-space as the $\eta$-coinvariants, rather than
the $\eta$-invariants. In particular, for modular symbols, where we are using quotients
and coinvariants all the time, this alternative definition is more suitable. The reader
should just think about the differences between these two definitions.
Note that here we are not following the conventions of \cite{Stein}, p.~141. Our action just seems more natural than adding an extra minus sign.
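The two facts that make $\eta$ an involution on the symbol spaces, $\eta^2 = 1$ and $\eta \mat abcd \eta = \mat a{-b}{-c}d$ (so that $\eta$ normalises $\Gamma_0(N)$ and $\Gamma_1(N)$), can be verified directly; a small Python sketch with an illustrative element of $\Gamma_0(11)$:

```python
# eta = [[-1,0],[0,1]] satisfies eta^2 = 1 and conjugates [[a,b],[c,d]]
# to [[a,-b],[-c,d]]; in particular it normalises Gamma_0(N).

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

eta = [[-1, 0], [0, 1]]
assert mat_mul(eta, eta) == [[1, 0], [0, 1]]   # eta^2 = identity

gamma = [[2, 1], [11, 6]]  # in Gamma_0(11): det = 1 and 11 | c
assert gamma[0][0] * gamma[1][1] - gamma[0][1] * gamma[1][0] == 1
conj = mat_mul(mat_mul(eta, gamma), eta)
assert conj == [[2, -1], [-11, 6]]             # [[a,-b],[-c,d]]
# the conjugate is again in Gamma_0(11):
assert conj[0][0] * conj[1][1] - conj[0][1] * conj[1][0] == 1
assert conj[1][0] % 11 == 0
```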
\subsubsection*{Hecke operators}
The aim of this part is to state the definition of Hecke operators
and diamond operators on formal modular symbols $\cMk kN\chi R$
and $\cCMk kN\chi R$. One immediately sees that it is very similar
to the one on modular forms. One can gain further insight into the
defining formulae by seeing how they are derived from a double coset formulation
in section~\ref{sec:Hecke}.
The definition given here is also explained in detail in \cite{Stein}.
We should also mention the very important fact that one can transfer Hecke
operators in an explicit way to Manin symbols using Heilbronn matrices.
We shall not do this explicitly in this course. This point is discussed in detail in \cite{Stein} and \cite{MerelUniversal}.
We now give the definition only for $T_\ell$ for a prime~$\ell$ and
the diamond operators. The $T_n$ for composite $n$ can be computed
from those by the formulae already stated in~\eqref{eq:Tn-composite}.
Notice that the $R[\Gamma_0(N)]$-action on $V_{k-2}^\chi(R)$
(for the usual conventions, in particular, $\chi(-1) = (-1)^k$)
extends naturally to an action of the semi-group generated
by $\Gamma_0(N)$ and $\mathcal{R}_\ell$ (see Equation~\ref{setrp}).
Thus, this semi-group acts on $\cMk kN\chi R$ (and the
cusp space) by the diagonal action on the tensor product.
Let $x \in \cMkone kN R$ or $x \in \cMk kN \chi R$. We put
$$ T_\ell x = \sum_{\delta \in \mathcal{R}_\ell} \delta.x.$$
If $a$ is an integer coprime to~$N$, we define the diamond operator as
$$ \diam a x = \sigma_a x $$
with $\sigma_a$ as in equation~\eqref{sigmaa}.
When $x = (m \otimes v \otimes 1)_{\Gamma_0(N)/\{\pm 1\}} \in \cMk kN \chi R$ for $m \in \mathcal{M}_R$ and $v\in V_{k-2}$, we have
$\diam a x = ((\sigma_a m \otimes \sigma_a v) \otimes \chi(a^{-1}))_{\Gamma_0(N)/\{\pm 1\}} = x$,
thus $(\sigma_a(m \otimes v) \otimes 1)_{\Gamma_0(N)/\{\pm 1\}} = \chi(a) (m \otimes v \otimes 1)_{\Gamma_0(N)/\{\pm 1\}}$.
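On a normalised eigenform the composite formulae become relations among the eigenvalues: $a_{mn} = a_m a_n$ for coprime $m,n$ and $a_{\ell^{r+1}} = a_\ell a_{\ell^r} - \ell^{k-1} a_{\ell^{r-1}}$ for trivial character. The sketch below rebuilds the eigenvalues $\tau(n)$ of $\Delta$ (weight $k=12$, level $1$) from the standard values at the first few primes:

```python
# Composite Hecke eigenvalues from prime ones (trivial character):
#   a_{mn} = a_m a_n                       for coprime m, n,
#   a_{p^{r+1}} = a_p a_{p^r} - p^{k-1} a_{p^{r-1}}.
# Tested on Ramanujan's tau, the eigenvalue system of Delta (k = 12).

K = 12
A_PRIME = {2: -24, 3: 252, 5: 4830, 7: -16744}  # tau(p), standard values
# (only n built from the primes above are supported)

def a(n):
    if n == 1:
        return 1
    p = next(p for p in range(2, n + 1) if n % p == 0)  # smallest prime factor
    r, m = 0, n
    while m % p == 0:
        r, m = r + 1, m // p
    if m > 1:
        return a(p**r) * a(m)                 # multiplicativity
    if r == 1:
        return A_PRIME[p]
    return A_PRIME[p] * a(p**(r - 1)) - p**(K - 1) * a(p**(r - 2))

assert a(4) == -1472
assert a(6) == -6048
assert a(9) == -113643
assert a(12) == -370944
```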
As in the section on Hecke operators on modular forms, we define Hecke algebras on modular symbols in a very similar way. We allow ourselves the freedom of working over arbitrary base rings (we will do the same for modular forms in the next section).
Thus for any ring $R$ we let $\mathbb{T}_R (\cMkone knR)$ be the $R$-subalgebra
of the $R$-endomorphism algebra of the $R$-module $\cMkone knR$ generated by the Hecke
operators $T_n$. For a character $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to R^\times$, we make
a similar definition. We also make a similar definition for the
cuspidal subspace and the $+$- and $-$-spaces.
The following fact will be obvious from the description of modular symbols
as Manin symbols (see Theorem~\ref{ManinSymbols}), which will be derived in a later chapter.
Here, we already want to use it.
\begin{proposition}\label{factfg}
The $R$-modules $\cMkone kN R$, $\cCMkone kNR$, $\cMk kN\chi R$, $\cCMk kN\chi R$
are finitely presented.
\end{proposition}
\begin{corollary}
Let $R$ be Noetherian.
The Hecke algebras $\mathbb{T}_R(\cMkone kN R)$, $\mathbb{T}_R(\cCMkone kNR)$,
$\mathbb{T}_R(\cMk kN\chi R)$ and $\mathbb{T}_R(\cCMk kN\chi R)$
are finitely presented $R$-modules.
\end{corollary}
\begin{proof}
This follows from Proposition~\ref{factfg} since
the endomorphism ring of a finitely generated module is finitely generated and
submodules of finitely generated modules over Noetherian rings are finitely generated.
Furthermore, over a Noetherian ring, finitely generated implies finitely presented.
\end{proof}
This innocent looking corollary, combined with the Eichler-Shimura
isomorphism, will give that the coefficient fields of normalised eigenforms are number fields.
We next prove that the formation of Hecke algebras for modular symbols
behaves well with respect to flat base change.
We should have in mind the example $R=\mathbb{Z}$ or $R=\mathbb{Z}[\chi]:=\mathbb{Z}[\chi(n) : n \in \mathbb{N}]$ (i.e., the ring of integers
of the cyclotomic extension of~$\mathbb{Q}$ generated by the values of~$\chi$ or, equivalently, $\mathbb{Z}[e^{2 \pi i/r}]$ where $r$
is the order of~$\chi$) and $S=\mathbb{C}$.
\begin{proposition}\label{hamsymbc}
Let $R$ be a Noetherian ring and $R \to S$ a flat ring
homomorphism.
\begin{enumerate}[(a)]
\item The natural map
$$ \mathbb{T}_R (\cMkone kNR) \otimes_R S \cong \mathbb{T}_S (\cMkone kNS)$$
is an isomorphism of $S$-algebras.
\item The natural map
$$ {\rm Hom}_R(\mathbb{T}_R(\cMkone kNR),R) \otimes_R S \cong {\rm Hom}_S(\mathbb{T}_S(\cMkone kNS),S)$$
is an isomorphism of $S$-modules.
\item The map
$$ {\rm Hom}_R(\mathbb{T}_R(\cMkone kNR) ,S)
\xrightarrow{\phi \mapsto (T \otimes s \mapsto \phi(T)s)} {\rm Hom}_S(\mathbb{T}_R(\cMkone kNR) \otimes_R S,S)$$
is also an isomorphism of $S$-modules.
\item Suppose in addition that $R$ is an integral domain and $S$ a field
extension of the field of fractions of~$R$. Then the natural map
$$ \mathbb{T}_R(\cMkone kNR) \otimes_R S \to \mathbb{T}_R(\cMkone kNS) \otimes_R S$$
is an isomorphism of $S$-algebras.
\end{enumerate}
For a character $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to R^\times$, similar results hold.
Similar statements also hold for the cuspidal subspace.
\end{proposition}
\begin{proof}
We only prove the proposition for $M := \cMkone kNR$. The arguments
are exactly the same in the other cases.
(a) By Exercise~\ref{basechange} it suffices to prove
$$ \mathbb{T}_R (M) \otimes_R S \cong \mathbb{T}_S (M \otimes_R S).$$
Due to flatness and the finite presentation of~$M$ the natural homomorphism
$$ {\rm End}_R(M) \otimes_R S \to {\rm End}_S(M\otimes_R S)$$
is an isomorphism (see \cite{Eisenbud}, Prop.~2.10). By definition,
the Hecke algebra $\mathbb{T}_R(M)$ is an $R$-submodule of ${\rm End}_R(M)$. As injections are preserved
by flat morphisms, we obtain the injection
$$\mathbb{T}_R(M) \otimes_R S \hookrightarrow {\rm End}_R(M) \otimes_R S \cong {\rm End}_S(M\otimes_R S).$$
The image is equal to $\mathbb{T}_S(M\otimes_R S)$, since all Hecke operators
are hit, establishing~(a).
(b) follows from the same citation from \cite{Eisenbud} as above.
(c) Suppose that under the map from Statement~(c)
$\phi \in {\rm Hom}_R(\mathbb{T}_R(M) ,S)$ is mapped to the zero map.
Then $\phi(T)s=0$ for all $T$ and all $s \in S$.
In particular with $s=1$ we get $\phi(T)=0$ for all $T$,
whence $\phi$ is the zero map, showing injectivity.
Suppose now that $\psi \in {\rm Hom}_S(\mathbb{T}_R(M) \otimes_R S,S)$ is given.
Call $\phi$ the composite
$\mathbb{T}_R(M) \to \mathbb{T}_R(M) \otimes_R S \xrightarrow{\psi} S$. Then $\psi$
is the image of~$\phi$, showing surjectivity.
(d) We first define
$$N := \ker \big( M \xrightarrow{\pi: m \mapsto m \otimes 1} M \otimes_R S\big).$$
We claim that $N$ consists only of $R$-torsion elements.
Let $x \in N$. Then $x \otimes 1 = 0$. If $rx \neq 0$ for all $r \in R - \{0\}$,
then the map $R \xrightarrow {r \mapsto rx} N$ is injective. We call $F$ the image
to indicate that it is a free $R$-module. Consider the exact sequence of $R$-modules:
$$ 0 \to F \to M \to M/F \to 0.$$
From flatness we get the exact sequence
$$ 0 \to F \otimes_R S \to M \otimes_R S \to M/F \otimes_R S \to 0.$$
But $F \otimes_R S$ is~$0$: under the injection $F \otimes_R S \hookrightarrow M \otimes_R S$
it is generated by $x \otimes 1 = 0$.
However, $F$ is free of rank~$1$, whence $F \otimes_R S \cong S \neq 0$. This contradiction shows
that there is some $r \in R-\{0\}$ with $rx = 0$.
As $N$ is finitely generated, there is some $r \in R-\{0\}$ such that $rN = 0$.
Moreover, $N$ is characterised as the set of elements $x \in M$ such that $rx=0$.
For, we already know that $x \in N$ satisfies $rx = 0$. If, conversely, $rx = 0$
with $x \in M$, then $0 = rx \otimes 1/r = x \otimes 1 \in M \otimes_R S$.
Every $R$-linear (Hecke) operator $T$ on $M$ clearly restricts to~$N$,
since $rTn = Trn = T0 = 0$ for all $n \in N$.
Suppose now that $T$ acts as~$0$ on~$M \otimes_R S$.
We claim that then $rT = 0$ on all of~$M$.
Let $m \in M$. We have $0 = T \pi m = \pi Tm$. Thus $Tm \in N$ and, so,
$rTm = 0$, as claimed. In other words, the kernel of the homomorphism
$\mathbb{T}_R(M) \to \mathbb{T}_R(M \otimes_R S)$ is killed by~$r$.
This homomorphism is surjective,
since by definition $\mathbb{T}_R(M \otimes_R S)$ is generated by all Hecke operators
acting on~$M \otimes_R S$. Tensoring with $S$ kills the torsion and the statement follows.
\end{proof}
Some words of warning are necessary.
It is essential that $R \to S$ is a flat homomorphism. A similar result for
$\mathbb{Z} \to \mathbb{F}_p$ is not true in general. I call this a ``faithfulness problem'',
since then $\cMkone kN {\mathbb{F}_p}$ is not a faithful module for
$\mathbb{T}_\mathbb{Z}(\cMkone kN\mathbb{C}) \otimes_\mathbb{Z} {\mathbb{F}_p}$. Some effort goes into finding
$k$ and $N$ for which this module is faithful. See, for instance, \cite{faithful}.
Moreover, $\cMkone kN R$ need not be a free $R$-module and can contain torsion.
Please have a look at Exercise~\ref{exhamsymbc} now to find out whether one
can use the $+$- and the $-$-space in the proposition.
\subsection{Theory: The modular symbols algorithm}
\subsubsection*{The Eichler-Shimura theorem}
At the basis of the modular symbols algorithm is the following theorem
by Eichler, which was extended by Shimura. One of our aims in this lecture
is to provide a proof for it. In this introduction, however, we only state
it and indicate how the modular symbols algorithm can be derived from it.
\begin{theorem}[Eichler-Shimura]\label{thmes}
There are isomorphisms respecting the Hecke operators
\begin{enumerate}[(a)]
\item $\Mk kN\chi\mathbb{C} \oplus \Sk kN\chi\mathbb{C}^\vee \cong \cMk kN\chi\mathbb{C},$
\item $\Sk kN\chi\mathbb{C} \oplus \Sk kN\chi\mathbb{C}^\vee \cong \cCMk kN\chi\mathbb{C},$
\item $\Sk kN\chi\mathbb{C} \cong \cCMk kN\chi\mathbb{C}^+.$
\end{enumerate}
Similar isomorphisms hold for modular forms and modular symbols on $\Gamma_1(N)$
and $\Gamma_0(N)$.
\end{theorem}
\begin{proof}
Later in this lecture (Theorems \ref{compthm} and~\ref{esgammaeins}, Corollary~\ref{esgammanull}).
\end{proof}
\begin{corollary}\label{algz1}
Let $R$ be a subring of $\mathbb{C}$ and $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to R^\times$
a character. Then there is the natural isomorphism
$$ \mathbb{T}_R(\Mk kN\chi\mathbb{C}) \cong \mathbb{T}_R(\cMk kN\chi\mathbb{C}).$$
A similar result holds for cusp forms, and also for $\Gamma_1(N)$ without a character as well as for $\Gamma_0(N)$.
\end{corollary}
\begin{proof}
We only prove this for the full space of modular forms. The arguments in the other cases are very similar.
Theorem~\ref{thmes} tells us that the $R$-algebra generated by the
Hecke operators inside the endomorphism ring of $\Mk kN\chi\mathbb{C}$ equals the $R$-algebra
generated by the Hecke operators inside the endomorphism ring of $\cMk kN\chi\mathbb{C}$, {\em i.e.}
the assertion to be proved.
To see this, one just needs to see that the algebra generated by all Hecke operators on
$\Mk kN\chi\mathbb{C} \oplus \Sk kN\chi\mathbb{C}^\vee$
is the same as the one generated by all Hecke operators on $\Mk kN\chi\mathbb{C}$, which follows
from the fact that if some Hecke operator $T$ annihilates the full space of modular forms, then
it also annihilates the dual of the cusp space.
\end{proof}
The following corollary of the Eichler-Shimura theorem is of utmost importance
for the theory of modular forms. It says that Hecke algebras of modular forms
have an integral structure (take $R=\mathbb{Z}$ or $R =\mathbb{Z} [\chi]$). We will say more
on this topic in the next section.
\begin{corollary}\label{algz}
Let $R$ be a subring of $\mathbb{C}$ and $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to R^\times$
a character. Then the natural map
$$\mathbb{T}_R(\Mk kN\chi\mathbb{C}) \otimes_R \mathbb{C} \cong \mathbb{T}_\mathbb{C} (\Mk kN\chi \mathbb{C})$$
is an isomorphism.
A similar result holds for cusp forms, and also for $\Gamma_1(N)$ without a character as well as for $\Gamma_0(N)$.
\end{corollary}
\begin{proof}
We again stick to the full space of modular forms. Tensoring the isomorphism from Corollary~\ref{algz1} with $\mathbb{C}$ we get
$$\mathbb{T}_R(\Mk kN\chi\mathbb{C}) \otimes_R \mathbb{C} \cong \mathbb{T}_R(\cMk kN\chi\mathbb{C}) \otimes_R \mathbb{C}
\cong \mathbb{T}_\mathbb{C}(\cMk kN\chi\mathbb{C}) \cong \mathbb{T}_\mathbb{C}(\Mk kN\chi\mathbb{C}),$$
using Proposition~\ref{hamsymbc}~(d) and again Theorem~\ref{thmes}.
\end{proof}
The next corollary is at the base of the modular symbols algorithm, since it describes modular forms in linear algebra
terms involving only modular symbols.
\begin{corollary}\label{cores}
Let $R$ be a subring of $\mathbb{C}$ and $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to R^\times$
a character. Then we have the isomorphisms
\begin{align*}
\Mk kN\chi\mathbb{C} & \cong {\rm Hom}_R(\mathbb{T}_R (\cMk kN\chi R), R) \otimes_R \mathbb{C} \\
& \cong {\rm Hom}_R(\mathbb{T}_R (\cMk kN\chi R), \mathbb{C}) & \textnormal{ and} \\
\Sk kN\chi\mathbb{C} & \cong {\rm Hom}_R(\mathbb{T}_R (\cCMk kN\chi R), R) \otimes_R \mathbb{C} \\
& \cong {\rm Hom}_R(\mathbb{T}_R (\cCMk kN\chi R), \mathbb{C}).
\end{align*}
Similar results hold for $\Gamma_1(N)$ without a character and also for $\Gamma_0(N)$.
\end{corollary}
\begin{proof}
This follows from Corollaries~\ref{algz1}, \ref{algz}, Proposition~\ref{hamsymbc} and Lemma~\ref{qpair}.
\end{proof}
Please look at Exercise~\ref{excores} to find out which statement
should be included into this corollary concerning the $+$-spaces.
Here is another important consequence of the Eichler-Shimura theorem.
\begin{corollary}
Let $f = \sum_{n=1}^\infty a_n(f) q^n \in \Skone kN\mathbb{C}$ be a normalised Hecke ei\-genform. Then $\mathbb{Q}_f := \mathbb{Q}(a_n(f) | n \in \mathbb{N})$ is
a number field of degree less than or equal to $\dim_\mathbb{C} \Skone kN\mathbb{C}$.
If $f$ has Dirichlet character~$\chi$, then $\mathbb{Q}_f$ is a finite field
extension of $\mathbb{Q}(\chi)$ of degree less than or equal to $\dim_\mathbb{C} \Sk kN\chi\mathbb{C}$.
Here $\mathbb{Q}(\chi)$ is the extension of $\mathbb{Q}$ generated by all the values of~$\chi$.
\end{corollary}
\begin{proof}
It suffices to apply the previous corollaries with $R = \mathbb{Q}$ or $R = \mathbb{Q}(\chi)$ and
to remember that normalised Hecke eigenforms correspond to algebra homomorphisms
from the Hecke algebra into~$\mathbb{C}$.
\end{proof}
\subsubsection*{Sketch of the modular symbols algorithm}
It may now already be quite clear how the modular symbols algorithm for computing
cusp forms proceeds. We give a very short sketch.
\begin{algorithm}\label{algmodsymsketch}
\noindent \underline{Input:} A field $K \subset \mathbb{C}$, integers $N \ge 1$, $k \ge 2$, $P$,
a character $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to K^\times$.
\noindent \underline{Output:} A basis of the space of cusp forms
$\Sk kN\chi\mathbb{C}$; each form is given by its standard $q$-expansion with
precision~$P$.
\begin{enumerate}[(1)]
\item create $M := \cCMk kN\chi K$.
\item $L \leftarrow []$ (empty list), $n \leftarrow 1$.
\item repeat
\item \hspace*{1cm} compute $T_n$ on $M$.
\item \hspace*{1cm} join $T_n$ to the list $L$.
\item \hspace*{1cm} $\mathbb{T} \leftarrow$ the $K$-algebra generated by all $T \in L$.
\item \hspace*{1cm} $n \leftarrow n+1$
\item until $\dim_K (\mathbb{T}) = \dim_\mathbb{C} \Sk kN\chi\mathbb{C}$
\item compute a $K$-basis $B$ of $\mathbb{T}$.
\item compute the basis $B^\vee$ of $\mathbb{T}^\vee$ dual to~$B$.
\item for $\phi$ in $B^\vee$ do
\item \hspace*{1cm} output $\sum_{n=1}^P \phi (T_n) q^n \in K[q]$.
\item end for.
\end{enumerate}
\end{algorithm}
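The linear algebra behind steps (2)-(12) can be sketched in Python on a synthetic stand-in: we let the $T_n$ act diagonally through the two eigenvalue systems $\tau(n)$ (of $\Delta$) and $\sigma_{11}(n)$ (of the normalised weight-$12$ Eisenstein series). This is emphatically not a modular symbols computation; it only illustrates the span-dimension loop and the dual-basis step producing $q$-expansions.

```python
# Toy run of the algorithm's linear algebra on a SYNTHETIC 2-dim space:
# T_n = diag(tau(n), sigma_11(n)), so the Hecke algebra is 2-dimensional.
from fractions import Fraction

def sigma11(n):
    return sum(d**11 for d in range(1, n + 1) if n % d == 0)

TAU = {1: 1, 2: -24, 3: 252, 4: -1472, 5: 4830, 6: -6048}
P = 6  # q-expansion precision
T = {n: (Fraction(TAU[n]), Fraction(sigma11(n))) for n in range(1, P + 1)}

def rank(vecs):
    """Rank of a list of vectors in Q^2 (tiny Gauss elimination)."""
    rows, r = [list(v) for v in vecs], 0
    for col in range(2):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

# steps (2)-(8): collect T_n until the algebra has the right dimension
L, n = [], 1
while rank(L) < 2:
    L.append(T[n])
    n += 1
B = L  # here {T_1, T_2} is already a basis

# steps (9)-(12): write T_n = c1(n) B[0] + c2(n) B[1]; the dual basis
# functionals are phi_i(T_n) = c_i(n), the q-expansion coefficients.
def coords(v):
    det = B[0][0] * B[1][1] - B[1][0] * B[0][1]
    c1 = (v[0] * B[1][1] - v[1] * B[1][0]) / det
    c2 = (B[0][0] * v[1] - B[0][1] * v[0]) / det
    return c1, c2

forms = [[coords(T[n])[i] for n in range(1, P + 1)] for i in range(2)]
# the two output q-expansions recombine to the eigenvalue systems:
for n in range(1, P + 1):
    c1, c2 = forms[0][n - 1], forms[1][n - 1]
    assert c1 * B[0][0] + c2 * B[1][0] == T[n][0]   # tau(n)
    assert c1 * B[0][1] + c2 * B[1][1] == T[n][1]   # sigma_11(n)
```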
We should make a couple of remarks concerning this algorithm.
Please remember that there are dimension formulae for $\Sk kN\chi\mathbb{C}$, which can be looked up in~\cite{Stein}.
It is clear that the repeat-until loop will stop, due to Corollary~\ref{cores}.
We can even give an upper bound as to when it stops at the latest.
That is the so-called Sturm bound, which is the content of the following proposition.
\begin{proposition}[Sturm]\label{sturm}
Let $f \in \Mk kN\chi\mathbb{C}$ such that $a_n(f) = 0$ for all $n \le \frac{k\mu}{12}$,
where $\mu = N \prod_{\ell \mid N \textnormal{ prime}} (1 + \frac{1}{\ell})$.
Then $f=0$.
\end{proposition}
\begin{proof}
Apply Corollary 9.20 of \cite{Stein} with $\mathfrak{m} = (0)$.
\end{proof}
\begin{corollary}\label{corsturm}
Let $K, N,\chi$ etc.\ as in the algorithm. Then $\mathbb{T}_K(\cCMk kN\chi K)$
can be generated as a $K$-vector space by the operators $T_n$ for $1 \le n \le \frac{k \mu}{12}$.
\end{corollary}
\begin{proof}
Exercise~\ref{excorsturm}.
\end{proof}
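The bound itself is elementary to compute. A small Python sketch (the helper name is ours):

```python
# Sturm bound: if a_n(f) = 0 for all n <= k*mu/12, where
# mu = N * prod_{p | N prime} (1 + 1/p), then f = 0; so the operators
# T_n with n <= k*mu/12 generate the Hecke algebra as a vector space.
from fractions import Fraction

def sturm_bound(k, N):
    """Largest integer n with n <= k*mu/12."""
    mu = Fraction(N)
    m, p = N, 2
    while m > 1:
        if m % p == 0:
            mu *= 1 + Fraction(1, p)
            while m % p == 0:
                m //= p
        p += 1
    return (k * mu) // 12

assert sturm_bound(12, 1) == 1   # weight 12, level 1
assert sturm_bound(2, 11) == 2   # mu = 12
assert sturm_bound(2, 23) == 4   # mu = 24
```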
We shall see later how to compute eigenforms and how to decompose the space
of modular forms in a ``sensible'' way.
\subsection{Theory: Number theoretic applications}
We close this survey and motivation section by sketching some number theoretic applications.
\subsubsection*{Galois representations attached to eigenforms}
We mention the sad fact that until 2006 only the one-dimensional representations
of $\Gal(\overline{\QQ}/\mathbb{Q})$ were well understood. In the case of finite image
one can use the Kronecker-Weber theorem, which asserts that any cyclic
extension of $\mathbb{Q}$ is contained in a cyclotomic field. This is
generalised by global class field theory to one-dimensional
representations of $\Gal(\overline{\QQ}/K)$ for each number field $K$.
Since we now have a proof of Serre's modularity conjecture \cite{Serre}
(a theorem by Khare, Wintenberger \cite{KhareWintenberger}),
we also know a little bit about $2$-dimensional representations
of $\Gal(\overline{\QQ}/\mathbb{Q})$, but, replacing $\mathbb{Q}$ by any other number field, all
one has is conjectures.
The great importance of modular forms for modern number theory is due
to the fact that one may attach a $2$-dimensional representation of
the Galois group of the rationals to each normalised cuspidal
eigenform. The following theorem is due to Shimura for $k=2$ and due
to Deligne for $k \ge 2$.
Until the end of this section, we shall use the language of Galois representations
(e.g.\ irreducible, unramified, Frobenius element, cyclotomic character) without
introducing it. It will not be used elsewhere.
The by now quite old lectures by Darmon, Diamond and Taylor
are still an excellent introduction to the subject \cite{DarmonDiamondTaylor}.
\begin{theorem}\label{deligneqp}
Let $k \ge 2$, $N \ge 1$, $p$ a prime,
and $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{C}^\times$ a character.
Then to any normalised eigenform $f \in \Sk kN\chi\mathbb{C}$
with $f = \sum_{n\ge 1} a_n(f) q^n$ one can attach a Galois
representation, i.e.\ a continuous group homomorphism,
$$\rho_f: \Gal(\overline{\QQ}/\mathbb{Q}) \to \mathrm{GL}_2(\overline{\QQ}_p)$$
such that
\begin{enumerate}[(i)]
\item $\rho_f$ is irreducible,
\item $\det(\rho_f (c)) = -1$ for any complex conjugation $c \in \Gal(\overline{\QQ}/\mathbb{Q})$
(one says that $\rho_f$ is {\em odd}),
\item for all primes $\ell \nmid Np$ the representation $\rho_f$
is unramified at~$\ell$,
$$\mathrm{tr}(\rho_f(\Frob_\ell)) = a_\ell(f) \;\; \text{ and } \;\;
\det(\rho_f(\Frob_\ell)) = \ell^{k-1} \chi(\ell).$$
In the statement, $\Frob_\ell$ denotes a Frobenius element at~$\ell$.
\end{enumerate}
\end{theorem}
By choosing a $\rho_f(\Gal(\overline{\QQ}/\mathbb{Q}))$-stable lattice in $\overline{\QQ}_p^2$
and applying reduction and semi-simplification one obtains
the following consequence.
\begin{theorem}\label{delignefp}
Let $k \ge 2$, $N \ge 1$, $p$ a prime,
and $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{C}^\times$ a character.
Then to any normalised eigenform $f \in \Sk kN\chi\mathbb{C}$
with $f = \sum_{n\ge 1} a_n(f) q^n$ and to any prime ideal $\mathfrak{P}$
of the ring of integers $\mathcal{O}_f$ of $\mathbb{Q}_f = \mathbb{Q}(a_n(f) : n \in \mathbb{N})$ with
residue characteristic~$p$ (and silently a fixed embedding $\mathcal{O}_f/\mathfrak{P} \hookrightarrow \overline{\FF}_p$), one can attach a Galois
representation, i.e.\ a continuous group homomorphism
(for the discrete topology on $\mathrm{GL}_2(\overline{\FF}_p)$),
$$\rho_f: \Gal(\overline{\QQ}/\mathbb{Q}) \to \mathrm{GL}_2(\overline{\FF}_p)$$
such that
\begin{enumerate}[(i)]
\item $\rho_f$ is semi-simple,
\item $\det(\rho_f (c)) = -1$ for any complex conjugation $c \in \Gal(\overline{\QQ}/\mathbb{Q})$
(one says that $\rho_f$ is {\em odd}),
\item for all primes $\ell \nmid Np$ the representation $\rho_f$
is unramified at~$\ell$,
$$\mathrm{tr}(\rho_f(\Frob_\ell)) \equiv a_\ell(f) \mod \mathfrak{P} \;\; \text{ and } \;\;
\det(\rho_f(\Frob_\ell)) \equiv \ell^{k-1} \overline{\chi}(\ell) \mod \mathfrak{P}.$$
\end{enumerate}
\end{theorem}
\subsubsection*{Translation to number fields}
\begin{proposition}\label{overnf}
Let $f$, $\mathbb{Q}_f$, $\mathfrak{P}$ and $\rho_f$ be as in Theorem~\ref{delignefp}.
Then the following hold:
\begin{enumerate}[(a)]
\item The image of $\rho_f$ is finite and is contained in $\mathrm{GL}_2(\mathbb{F}_{p^r})$ for some~$r$.
\item The kernel of $\rho_f$ is an open subgroup of $\Gal(\overline{\QQ}/\mathbb{Q})$ and is hence
of the form $\Gal(\overline{\QQ}/K)$ for some Galois number field~$K$. Thus, we can and do
consider $\Gal(K/\mathbb{Q})$ as a subgroup of $\mathrm{GL}_2(\mathbb{F}_{p^r})$.
\item The characteristic polynomial of $\Frob_\ell$ (more precisely, of $\Frob_{\Lambda/\ell}$
for any prime $\Lambda$ of $K$ dividing $\ell$) is equal to $X^2 - a_\ell(f) X + \chi(\ell) \ell^{k-1} \mod \mathfrak{P}$
for all primes $\ell \nmid Np$.
\end{enumerate}
\end{proposition}
\begin{proof}
Exercise~\ref{exovernf}.
\end{proof}
To appreciate the information obtained from the $a_\ell(f) \mod \mathfrak{P}$, the reader is invited to do Exercise~\ref{exinfochar} now.
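A classical illustration of what the $a_\ell(f) \bmod \mathfrak{P}$ can see: for $\Delta$ of weight $12$ and $\mathfrak{P} = (691)$ the attached mod-$\mathfrak{P}$ representation is reducible, which is reflected in Ramanujan's congruence $\tau(n) \equiv \sigma_{11}(n) \bmod 691$. The Python sketch below computes $\tau$ from the product expansion $\Delta = q\prod_{n\ge1}(1-q^n)^{24}$ and checks the congruence:

```python
# Ramanujan's congruence tau(n) = sigma_11(n) mod 691, computed from
# Delta = q * prod_{n>=1} (1 - q^n)^24 (coefficients up to q^11).

P = 12  # precision: coefficients of q^0, ..., q^11

def mult(f, g):
    """Product of two power series truncated at q^P."""
    h = [0] * P
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                if i + j < P:
                    h[i + j] += a * b
    return h

delta = [0] * P
delta[1] = 1                 # the leading factor q
for n in range(1, P):        # factors (1 - q^n) with n >= P are invisible
    factor = [0] * P
    factor[0], factor[n] = 1, -1
    for _ in range(24):
        delta = mult(delta, factor)

def tau(n):
    return delta[n]

def sigma11(n):
    return sum(d**11 for d in range(1, n + 1) if n % d == 0)

assert tau(2) == -24 and tau(3) == 252
for n in range(1, P):
    assert (tau(n) - sigma11(n)) % 691 == 0
```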
\subsubsection*{Images of Galois representations}
One can also often tell what the Galois group $\Gal(K/\mathbb{Q})$ is as an abstract group.
There are not so many possibilities, as we see from the following theorem.
\begin{theorem}[Dickson]
Let $p$ be a prime and $H$ a finite subgroup of $\mathrm{PGL}_2(\overline{\FF}_p)$.
Then a conjugate of $H$ is isomorphic to one of the following groups:
\begin{itemize}
\item finite subgroups of the upper triangular matrices,
\item $\mathrm{PSL}_2(\mathbb{F}_{p^r})$ or $\mathrm{PGL}_2(\mathbb{F}_{p^r})$ for $r \in \mathbb{N}$,
\item dihedral groups $D_r$ for $r \in \mathbb{N}$ not divisible by $p$,
\item $A_4$, $A_5$ or $S_4$.
\end{itemize}
\end{theorem}
For modular forms there are several results mostly by Ribet concerning the
groups that occur as images \cite{Ribet}.
Roughly speaking, they say that the image is
`as big as possible' for almost all $\mathfrak{P}$ (for a given~$f$). For modular forms
without CM and inner twists (we do not define these notions in this course)
this means that if $G$ is the image, then $G$ modulo scalars
is equal to $\mathrm{PSL}_2(\mathbb{F}_{p^r})$ or $\mathrm{PGL}_2(\mathbb{F}_{p^r})$, where $\mathbb{F}_{p^r}$
is the extension of $\mathbb{F}_p$ generated by the $a_n(f) \mod \mathfrak{P}$.
An interesting question is to study which groups (i.e.\ which $\mathrm{PSL}_2(\mathbb{F}_{p^r})$) actually occur.
It would be nice to prove that all of them do since, surprisingly, the simple groups
$\mathrm{PSL}_2(\mathbb{F}_{p^r})$ still resist most efforts to
realise them as Galois groups over $\mathbb{Q}$ in the context of inverse Galois theory.
\subsubsection*{Serre's modularity conjecture}
Serre's modularity conjecture is the following.
Let $p$ be a prime and $\rho: \Gal(\overline{\QQ}/\mathbb{Q}) \to \mathrm{GL}_2(\overline{\FF}_p)$
be a continuous, odd, irreducible representation.
\begin{itemize}
\item Let $N_\rho$ be the conductor of~$\rho$ away from~$p$ (defined by
a formula analogous to that for the Artin conductor, except
that the local factor at~$p$ is dropped).
\item Let $k_\rho$ be the integer defined by~\cite{Serre}.
\item Let $\chi_\rho$ be the prime-to-$p$ part of $\det \circ \rho$
considered as a character
$(\mathbb{Z}/N_\rho\mathbb{Z})^\times \times (\mathbb{Z}/p\mathbb{Z})^\times \to \overline{\FF}_p^\times$.
\end{itemize}
\begin{theorem}[Khare, Wintenberger, Kisin: Serre's Modularity Conjecture]
Let $p$ be a prime and $\rho: \Gal(\overline{\QQ}/\mathbb{Q}) \to \mathrm{GL}_2(\overline{\FF}_p)$
be a continuous, odd, irreducible representation.
Then there exists a normalised eigenform
$$f \in \Sk {k_\rho}{N_\rho}{\chi_\rho}{\mathbb{C}}$$
such that $\rho$ is isomorphic to the Galois representation
$$\rho_f: \Gal(\overline{\QQ}/\mathbb{Q}) \to \mathrm{GL}_2(\overline{\FF}_p)$$
attached to~$f$ by Theorem~\ref{delignefp}.
\end{theorem}
Serre's modularity conjecture implies that we can compute
(in principle, at least) arithmetic properties of all Galois
representations of the type in Serre's conjecture by
computing the mod~$p$ Hecke eigenforms they come from.
Conceptually, Serre's modularity conjecture gives an explicit description of
all irreducible, odd and continuous `mod $p$' representations
of $\Gal(\overline{\QQ}/\mathbb{Q})$ and, thus, in a sense generalises class field theory.
Edixhoven et al.\ \cite{Edixhoven} have succeeded in giving
an algorithm which computes the actual Galois representation
attached to a mod~$p$ modular form. Hence, with Serre's conjecture
we have a way of - in principle - obtaining all information on
$2$-dimensional irreducible, odd continuous representations of $\Gal(\overline{\QQ}/\mathbb{Q})$.
\subsection{Theory: Exercises}
\begin{exercise}\label{exsln}
\begin{enumerate}[(a)]
\item The group homomorphism
$$ \mathrm{SL}_2(\mathbb{Z}) \to \mathrm{SL}_2(\mathbb{Z}/N\mathbb{Z})$$
given by reducing the matrices modulo~$N$ is surjective.
\item Check the bijections
$$\mathrm{SL}_2(\mathbb{Z})/\Gamma_1(N) = \{ \vect a c | \langle a,c \rangle = \mathbb{Z}/N\mathbb{Z}\}$$
and
$$\mathrm{SL}_2(\mathbb{Z})/\Gamma_0(N) = \mathbb{P}^1 (\mathbb{Z}/N\mathbb{Z}),$$
which were given in the beginning.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{qzetan}
Let $N$ be an integer and $\zeta_N \in \mathbb{C}$ any primitive $N$-th root of unity.
Prove that the map
$$ \Gal(\mathbb{Q}(\zeta_N)/\mathbb{Q}) \xrightarrow{\Frob_\ell \mapsto \ell} (\mathbb{Z}/N\mathbb{Z})^\times$$
(for all primes $\ell \nmid N$) is an isomorphism.
\end{exercise}
\begin{exercise}\label{exsigmaa}
Prove that a matrix $\sigma_a$ as in Equation~\ref{sigmaa} exists.
\end{exercise}
\begin{exercise}\label{exhecke}
Prove Lemma~\ref{lemhecke}. See also \cite[Proposition~5.2.2]{DS}.
\end{exercise}
\begin{exercise}\label{excommute}
\begin{enumerate}[(a)]
\item Let $K$ be a field, $V$ a vector space and $T_1,T_2$ two
commuting endomorphisms of~$V$, i.e.\ $T_1 T_2 = T_2 T_1$.
Let $\lambda_1 \in K$ and consider the $\lambda_1$-eigenspace
of~$T_1$, i.e.\ $V_1 = \{ v | T_1 v = \lambda_1 v\}$.
Prove that $T_2 V_1 \subseteq V_1$.
\item Suppose that $\Mkone Nk\mathbb{C}$ is non-zero. Prove that it contains
a Hecke eigenform.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exeigenf}
Prove Lemma~\ref{eigenf}.
Hint: use the action of Hecke operators explicitly described on $q$-expansions.
\end{exercise}
\begin{exercise} \label{expsl}
Check that it makes sense to replace $\mathrm{SL}_2(\mathbb{Z})$ by $\mathrm{PSL}_2(\mathbb{Z})$
in the definition of modular forms.
Hint: for the transformation rule: if $-1$ is not in the congruence subgroup in question, there is nothing to show;
if $-1$ is in it, one has to verify that it acts trivially. Moreover convince yourself that the holomorphy at the cusps does not depend
on replacing a matrix by its negative.
\end{exercise}
\begin{exercise}\label{exgp}
Let $R$ be a ring, $\Gamma$ a group and $V$ a left $R[\Gamma]$-module.
\begin{enumerate}[(a)]
\item Define the augmentation ideal $I_\Gamma$ by the exact sequence
$$ 0 \to I_\Gamma \to R[\Gamma] \xrightarrow{\gamma \mapsto 1} R \to 0.$$
Prove that $I_\Gamma$ is the ideal in $R[\Gamma]$ generated by the
elements $1-g$ for $g \in \Gamma$.
\item Conclude that $V_\Gamma = V / I_\Gamma V$.
\item Conclude that $V_\Gamma \cong R \otimes_{R[\Gamma]} V$.
\item Suppose that $\Gamma = \langle T \rangle$ is a cyclic group
(either finite or infinite (isomorphic to $(\mathbb{Z},+)$)).
Prove that $I_\Gamma$ is the ideal generated by $(1-T)$.
\item Prove that $V^\Gamma \cong {\rm Hom}_{R[\Gamma]}(R,V)$.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{basechange}
Let $R$, $\Gamma$ and $V$ as in Definition~\ref{defMS} and let $R \to S$ be a ring homomorphism.
\begin{enumerate}[(a)]
\item Prove that
$$\mathcal{M}_R(\Gamma,V) \otimes_R S \cong \mathcal{M}_S(\Gamma,V \otimes_R S).$$
\item Suppose $R \to S$ is flat. Prove a similar statement for the cuspidal subspace.
\item Are similar statements true for the boundary or the Eisenstein space?
What about the $+$- and the $-$-spaces?
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exsym}
Prove that the map
$$ \Sym^{n}(R^2) \to R[X,Y]_n, \;\;\;
\vect {a_1}{b_1} \otimes \dots \otimes \vect{a_n}{b_n} \mapsto (a_1X + b_1Y) \cdots (a_nX + b_nY)$$
is an isomorphism,
where $\Sym^n(R^2)$ is the $n$-th symmetric power of~$R^2$, which is defined as
the quotient of $\underbrace{R^2 \otimes_R \dots \otimes_R R^2}_{n\textnormal{-times}}$
by the span of all elements
$v_1 \otimes \dots \otimes v_n - v_{\sigma(1)} \otimes \dots \otimes v_{\sigma(n)}$
for all $\sigma$ in the symmetric group on the letters $\{1,2,\dots,n\}$.
\end{exercise}
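A quick plausibility check for this exercise: the polynomial on the right-hand side does not depend on the order of the linear factors, matching the symmetric-group relations imposed on the left-hand side. A small Python sketch (the function name is ours):

```python
# Expand prod_i (a_i X + b_i Y) into the coefficient list
# [c_0, ..., c_n] with respect to the basis X^n, X^(n-1) Y, ..., Y^n.
def product_of_linear_forms(pairs):
    coeffs = [1]
    for a, b in pairs:
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += c * a      # contribution of the factor a*X
            new[i + 1] += c * b  # contribution of the factor b*Y
        coeffs = new
    return coeffs

# (X + 2Y)(3X + 4Y) = 3X^2 + 10XY + 8Y^2, independently of the order
print(product_of_linear_forms([(1, 2), (3, 4)]))  # -> [3, 10, 8]
```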
\begin{exercise}\label{exchar}
Prove Equation~\ref{eqchar}.
\end{exercise}
\begin{exercise}\label{exhamsymbc}
Can one use $+$- or $-$-spaces in Proposition~\ref{hamsymbc}?
What could we say if we defined the $+$-space as $M/(1-\eta) M$ with $M$
standing for some space of modular symbols?
\end{exercise}
\begin{exercise}\label{excores}
Which statements in the spirit of Corollary~\ref{cores}~(b) are true for the
$+$-spaces?
\end{exercise}
\begin{exercise}\label{excorsturm}
Prove Corollary~\ref{corsturm}.
\end{exercise}
\begin{exercise}\label{exovernf}
Prove Proposition~\ref{overnf}.
\end{exercise}
\begin{exercise}\label{exinfochar}
In how far is a conjugacy class in $\mathrm{GL}_2(\mathbb{F}_{p^r})$ determined by
its characteristic polynomial?
Same question as above for a subgroup $G \subset \mathrm{GL}_2(\mathbb{F}_{p^r})$.
\end{exercise}
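For a small prime one can explore this question by brute force. The following Python sketch (a stand-in for a {\sc Magma} computation; all names are ours) partitions $\mathrm{GL}_2(\mathbb{F}_3)$ into conjugacy classes and counts how many classes share a characteristic polynomial:

```python
from itertools import product
from collections import Counter

p = 3
# all invertible 2x2 matrices over F_p, stored as tuples (a, b, c, d)
G = [m for m in product(range(p), repeat=4)
     if (m[0] * m[3] - m[1] * m[2]) % p != 0]

def mul(A, B):
    a, b, c, d = A
    e, f, g, h = B
    return ((a*e + b*g) % p, (a*f + b*h) % p,
            (c*e + d*g) % p, (c*f + d*h) % p)

inverse = {A: B for A in G for B in G if mul(A, B) == (1, 0, 0, 1)}

# partition G into conjugacy classes
classes, seen = [], set()
for A in G:
    if A not in seen:
        orbit = {mul(mul(g, A), inverse[g]) for g in G}
        classes.append(orbit)
        seen |= orbit

# the characteristic polynomial X^2 - tX + d is determined by (trace, det)
def charpoly(A):
    a, b, c, d = A
    return ((a + d) % p, (a*d - b*c) % p)

counts = Counter(charpoly(next(iter(cl))) for cl in classes)
# the identity and a nontrivial unipotent matrix share X^2 - 2X + 1,
# so the characteristic polynomial does not determine the class
print(max(counts.values()))  # -> 2
```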
\subsection{Computer exercises}
\begin{cexercise}
\begin{enumerate}[(a)]
\item Create a list $L$ of all primes between 234325 and 3479854.
How many are there?
\item For $n=2,3,4,5,6,7,997$ compute for each $a \in \mathbb{Z}/ n\mathbb{Z}$ how often
it appears as a residue in the list $L$.
\end{enumerate}
\end{cexercise}
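A possible solution in Python rather than {\sc Magma}, run with smaller bounds so that it finishes instantly (the bounds of the exercise work in exactly the same way):

```python
from collections import Counter

def primes_in(a, b):
    # sieve of Eratosthenes up to b, then keep the primes in [a, b]
    is_prime = bytearray([1]) * (b + 1)
    is_prime[0:2] = b"\x00\x00"
    i = 2
    while i * i <= b:
        if is_prime[i]:
            # strike out all proper multiples of i
            is_prime[i * i::i] = bytearray(len(is_prime[i * i::i]))
        i += 1
    return [n for n in range(a, b + 1) if is_prime[n]]

L = primes_in(10_000, 100_000)
print(len(L))                     # number of primes in the range
print(Counter(q % 5 for q in L))  # the residues mod 5 are equidistributed
```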
\begin{cexercise}
In this exercise you verify the validity of the prime number theorem.
\begin{enumerate}[(a)]
\item Write a function {\tt NumberOfPrimes} with the following specifications.
Input: Positive integers $a,b$ with $a \le b$.
Output: The number of primes in $[a,b]$.
\item Write a function {\tt TotalNumberOfPrimes} with the following specifications.
Input: Positive integers $x, s$.
Output: A list $[n_1,n_2,n_3,\dots,n_m]$ such that $n_i$ is the number of primes
between $1$ and $i\cdot s$ and $m$ is the largest integer smaller than or equal to
$x/s$.
\item Compare the output of {\tt TotalNumberOfPrimes} with the predictions
of the prime number theorem: Make a function that returns the list $[r_1,r_2,\dots,r_m]$
with $r_i = \frac{si}{\log{si}}$. Make a function that computes the entry-wise quotient of
two lists of numbers.
\item Play with these functions. What do you observe?
\end{enumerate}
\end{cexercise}
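The four functions might be sketched in Python as follows (trial division suffices for small inputs; the names follow the exercise):

```python
import math

def number_of_primes(a, b):
    # number of primes in [a, b], by trial division
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))
    return sum(1 for n in range(a, b + 1) if is_prime(n))

def total_number_of_primes(x, s):
    # n_i = number of primes up to i*s, for i = 1, ..., floor(x/s)
    return [number_of_primes(1, i * s) for i in range(1, x // s + 1)]

def pnt_prediction(x, s):
    # the prime number theorem predicts pi(t) ~ t / log(t)
    return [i * s / math.log(i * s) for i in range(1, x // s + 1)]

def quotients(xs, ys):
    return [a / b for a, b in zip(xs, ys)]

# the quotients slowly approach 1, as the prime number theorem predicts
print(quotients(total_number_of_primes(10_000, 2_000),
                pnt_prediction(10_000, 2_000)))
```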
\begin{cexercise}
Write a function {\tt ValuesInField} with the following specifications. Input: a monic polynomial $f$ with integer
coefficients and a finite field~$K$. Output: the set of values of $f$ in~$K$.
\end{cexercise}
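A sketch in Python, restricted to prime fields $\mathbb{F}_p$ for simplicity (a general finite field would require an implementation of field arithmetic):

```python
def values_in_field(coeffs, p):
    # coeffs = [c_0, c_1, ..., c_n] are the integer coefficients of a
    # polynomial f (constant term first); returns the set of values
    # f(x) for x running through the prime field F_p
    def f(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return {f(x) for x in range(p)}

print(values_in_field([0, 0, 1], 5))  # values of x^2 on F_5: {0, 1, 4}
```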
\begin{cexercise}
\begin{enumerate}[(a)]
\item Write a function {\tt BinaryExpansion} that computes the binary expansion
of a positive integer. Input: positive integer~$n$. Output: list of $0$'s and $1$'s
representing the binary expansion.
\item Write a function {\tt Expo} with the following specifications. Input: two positive integers $a,b$.
Output: $a^b$. You must not use the built-in function $a^b$, but write a sensible
algorithm making use of the binary expansion of~$b$. The only arithmetic operations allowed
are multiplications.
\item Write similar functions using the expansion with respect to a general base~$d$.
\end{enumerate}
\end{cexercise}
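Parts (a) and (b) can be sketched in Python as follows (this is the classical square-and-multiply algorithm; the names follow the exercise):

```python
def binary_expansion(n):
    # binary digits of n, least significant bit first
    bits = []
    while n > 0:
        bits.append(n % 2)
        n //= 2
    return bits

def expo(a, b):
    # square-and-multiply: run through the binary expansion of b,
    # using only multiplications
    result, power = 1, a
    for bit in binary_expansion(b):
        if bit:
            result *= power
        power *= power
    return result

print(binary_expansion(13))  # -> [1, 0, 1, 1]
print(expo(3, 13))           # -> 1594323
```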
\begin{cexercise}
In order to contemplate recursive algorithms, the monks in Hanoi used
to play the following game. First they choose a degree of contemplation,
i.e.\ a positive integer~$n$. Then they create three lists:
$$L_1 := [n,n-1,\dots,2,1]; L_2 := []; L_3 := [];$$
The aim is to exchange $L_1$ and $L_2$. However, the monks may only perform
the following step: Remove the last element from one of the lists and
append it to one of the other lists, subject to the important condition
that in all steps all three lists must be descending.
Contemplate how the monks can achieve their goal. Write a procedure
with input~$n$ that plays the game. After each step,
print the number of the step, the three lists and test whether all
lists are still descending.
[Hint: For recursive procedures, i.e.\ procedures calling themselves,
in {\sc Magma} one must put the command {\tt forward my\_procedure} in front of the
definition of {\tt my\_procedure}.]
\end{cexercise}
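A recursive Python sketch of the game, checking after every step that all three lists are still descending:

```python
def hanoi(n):
    lists = [list(range(n, 0, -1)), [], []]
    steps = [0]

    def move(k, src, dst, aux):
        # move the top k discs of lists[src] to lists[dst],
        # using lists[aux] as intermediate storage
        if k == 0:
            return
        move(k - 1, src, aux, dst)
        lists[dst].append(lists[src].pop())
        steps[0] += 1
        # the monks' condition: every list stays descending
        assert all(l == sorted(l, reverse=True) for l in lists)
        move(k - 1, aux, dst, src)

    move(n, 0, 1, 2)
    return lists, steps[0]

print(hanoi(4))  # -> ([[], [4, 3, 2, 1], []], 15), i.e. 2^n - 1 steps
```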
\begin{cexercise}
This exercise concerns the normalised cuspidal eigenforms in weight~$2$ and level~$23$.
\begin{enumerate}[(a)]
\item What is the number field $K$ generated by the coefficients of each of the two
forms?
\item Compute the characteristic polynomials of the first 100 Fourier coefficients
of each of the two forms.
\item Write a function that for a given prime $p$ computes the reduction modulo~$p$
of the characteristic polynomials from the previous point and their factorisation.
\item Now use modular symbols over $\mathbb{F}_p$ for a given $p$. Compare the results.
\item Now do the same for weight~$2$ and level~$37$. In particular, try $p=2$.
What do you observe? What could be the reason for this behaviour?
\end{enumerate}
\end{cexercise}
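Part~(c) can be imitated in plain Python if one takes the answer to part~(a) as given: the coefficient field of the two conjugate forms is $\mathbb{Q}(\sqrt{5})$, and $a_2$ has characteristic polynomial $x^2+x-1$ of discriminant~$5$. A sketch of the reduction modulo~$p$:

```python
def factorisation_type(c1, c0, p):
    # how x^2 + c1*x + c0 factors over the prime field F_p
    roots = [x for x in range(p) if (x * x + c1 * x + c0) % p == 0]
    return {2: "two distinct roots",
            1: "double root",
            0: "irreducible"}[len(roots)]

# characteristic polynomial x^2 + x - 1 of a_2 (weight 2, level 23);
# it splits exactly when p = 5 or p % 5 in {1, 4} (discriminant 5)
for p in [2, 3, 5, 11, 19]:
    print(p, factorisation_type(1, -1, p))
```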
\begin{cexercise}\label{cexalgmodsymsketch}
Implement Algorithm~\ref{algmodsymsketch}.
\end{cexercise}
\section{Hecke algebras}
An important point made in the previous section is that for computing modular forms,
one computes Hecke algebras. This perspective puts Hecke algebras at the centre,
and the present section is written from that point of view.
Starting from Hecke algebras, we define modular forms with coefficients in arbitrary rings,
we study integrality properties and also present results on the structure of Hecke algebras,
which are very useful for studying the arithmetic of modular forms.
It is essential for studying arithmetic properties of modular forms
to have some flexibility for the coefficient rings. For instance, when
studying mod~$p$ Galois representations attached to modular forms, it is
often easier and sometimes necessary to work with modular forms whose $q$-expansions
already lie in a finite field. Moreover, the concept of congruences of
modular forms only finds its proper framework when working over
rings such as extensions of finite fields or rings like $\mathbb{Z}/p^n\mathbb{Z}$.
There is a very strong theory of modular forms over a general ring~$R$
that uses algebraic geometry over~$R$. One can, however, already get
very far if one just defines modular forms over~$R$ as the $R$-linear
dual of the $\mathbb{Z}$-Hecke algebra of the holomorphic modular forms, i.e.\
by taking $q$-expansions with coefficients in~$R$. In this course we shall only
use this. Precise definitions will be given in a moment. A priori it may
not be clear whether non-trivial modular forms with $q$-expansions in the
integers exist at all. The situation is as good as it could possibly be:
the modular forms with $q$-expansion in the integers form a lattice in
the space of all modular forms (at least for $\Gamma_1(N)$
and $\Gamma_0(N)$; if we are working with a Dirichlet character, the
situation is slightly more involved). This is an extremely useful and important
fact, which we shall derive from the corollaries of the Eichler-Shimura
isomorphism given in the previous section.
Hecke algebras of modular forms
over~$R$ are finitely generated $R$-modules. This leads us to a study,
belonging to the theory
of Commutative Algebra, of finite $R$-algebras, that is, $R$-algebras
that are finitely generated as $R$-modules. We shall prove structure
theorems when $R$ is a discrete valuation ring or a finite field.
Re-establishing the connection with modular forms, we will for example
see that the maximal ideals of Hecke algebras correspond to Galois conjugacy
classes of normalised eigenforms and that, for instance, the notion of a
congruence can be expressed as a maximal ideal containing two minimal prime ideals.
\subsection{Theory: Hecke algebras and modular forms over rings}
We start by recalling and slightly extending
the concept of Hecke algebras of modular forms.
It is of utmost importance for our treatment
of modular forms over general rings and their computation. In fact, as pointed
out a couple of times, we will compute Hecke algebras and not modular forms.
We shall assume that $k \ge 1$ and $N \ge 1$.
As in the introduction, we define the {\em Hecke algebra}
of $\Mkone kN\mathbb{C}$ as the subring (i.e.\ the $\mathbb{Z}$-algebra)
inside the endomorphism ring of the $\mathbb{C}$-vector space $\Mkone kN\mathbb{C}$
generated by all Hecke operators. Remember that due to Formula~\ref{diamondinhecke}
all diamond operators are contained in the Hecke algebra.
Of course, we make similar definitions for $\Skone kN\mathbb{C}$ and use the notations
$\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C})$ and $\mathbb{T}_\mathbb{Z}(\Skone{k}{N}{\mathbb{C}})$.
If we are working with modular forms with a character, we essentially
have two possibilities for defining the Hecke algebra, namely,
firstly as above as the $\mathbb{Z}$-algebra generated by all Hecke operators
inside the endomorphism ring of the $\mathbb{C}$-vector space $\Mk kN\chi\mathbb{C}$
(notation $\mathbb{T}_\mathbb{Z} (\Mk kN\chi\mathbb{C})$) or, secondly, as the $\mathbb{Z}[\chi]$-algebra
generated by the Hecke operators inside ${\rm End}_\mathbb{C}(\Mk kN\chi\mathbb{C})$
(notation $\mathbb{T}_{\mathbb{Z}[\chi]} (\Mk kN\chi\mathbb{C})$); similarly for the
cusp forms. Here $\mathbb{Z}[\chi]$ is the ring extension of $\mathbb{Z}$ generated
by all values of~$\chi$; it is the ring of integers of $\mathbb{Q}(\chi)$.
For two reasons we prefer the second variant. The first reason is
that we needed to work over $\mathbb{Z}[\chi]$ (or its extensions) for modular
symbols. The second reason is that on the natural $\mathbb{Z}$-structure
inside $\Mkone kN\mathbb{C}$ the decomposition into $(\mathbb{Z}/N\mathbb{Z})^\times$-eigenspaces
can only be made after a base change to $\mathbb{Z}[\chi]$. So, the $\mathbb{C}$-dimension of
$\Mk kN\chi\mathbb{C}$ equals the $\mathbb{Q}[\chi]$-dimension
of $\mathbb{T}_{\mathbb{Q}[\chi]}(\Mk kN\chi\mathbb{C})$ and not the $\mathbb{Q}$-dimension of
$\mathbb{T}_{\mathbb{Q}}(\Mk kN\chi\mathbb{C})$.
\begin{lemma}\label{freealg}
\begin{enumerate}[(a)]
\item The $\mathbb{Z}$-algebras $\mathbb{T}_\mathbb{Z} (\Mkone kN\mathbb{C})$ and $\mathbb{T}_\mathbb{Z} (\Mk kN\chi\mathbb{C})$
are free $\mathbb{Z}$-modules of finite rank; the same holds for the cuspidal Hecke algebras.
\item The $\mathbb{Z}[\chi]$-algebra $\mathbb{T}_{\mathbb{Z}[\chi]} (\Mk kN\chi\mathbb{C})$
is a torsion-free finitely generated $\mathbb{Z}[\chi]$-module;
the same holds for the cuspidal Hecke algebra.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) Due to the corollaries of the Eichler-Shimura theorem
(Corollary~\ref{algz}) we know that
these algebras are finitely generated as $\mathbb{Z}$-modules. As they lie
inside a vector space, they are free (using the structure theory
of finitely generated modules over principal ideal domains).
(b) This is like (a), except that $\mathbb{Z}[\chi]$ need not be a principal
ideal domain, so that we can only conclude torsion-freeness, but not
freeness.
\end{proof}
\subsubsection*{Modular forms over rings}
Let $k \ge 1$ and $N \ge 1$. Let $R$ be any $\mathbb{Z}$-algebra (ring).
We now use the $q$-pairing to define modular (cusp) forms over~$R$.
We let
\begin{align*}
\Mkone kN R := &{\rm Hom}_\mathbb{Z} (\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C}),R)\\
\cong&{\rm Hom}_R (\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C}) \otimes_\mathbb{Z} R,R).
\end{align*}
We stress the fact that ${\rm Hom}_R$ denotes the homomorphisms as $R$-modules (and not as $R$-algebras; those will appear later).
The isomorphism is proved precisely as in Proposition~\ref{hamsymbc}~(c),
where we did not use the flatness assumption.
Every element $f$ of $\Mkone kN R$ thus corresponds
to a $\mathbb{Z}$-linear function $\Phi: \mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C}) \to R$ and is uniquely
identified by its {\em formal $q$-expansion}
$$f = \sum_n \Phi(T_n) q^n = \sum_n a_n(f) q^n \in R[[q]].$$
We note that $\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C})$ acts naturally on
${\rm Hom}_\mathbb{Z} (\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C}),R)$, namely by
\begin{equation}\label{eq:hecke}
(T.\Phi)(S) = \Phi(TS) = \Phi(ST).
\end{equation}
This means that the action of $\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C})$ on
$\Mkone kN R$ gives the same formulae as usual on formal
$q$-expansions.
For cusp forms we make the obvious analogous definition, i.e.\
\begin{align*}
\Skone kN R := &{\rm Hom}_\mathbb{Z} (\mathbb{T}_\mathbb{Z}(\Skone kN\mathbb{C}),R)\\
\cong & {\rm Hom}_R (\mathbb{T}_\mathbb{Z}(\Skone kN\mathbb{C}) \otimes_\mathbb{Z} R,R).
\end{align*}
We caution the reader that for modular forms which are not cusp forms
there also ought to be some $0$th coefficient in the formal $q$-expansion,
for example, for recovering
the classical holomorphic $q$-expansion. Of course, for cusp forms we do not
need to worry.
Now we turn our attention to modular forms with a character.
Let $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{C}^\times$ be a
Dirichlet character and $\mathbb{Z}[\chi] \to R$ a ring homomorphism.
We now proceed analogously to the treatment of modular symbols
for a Dirichlet character. We work with $\mathbb{Z}[\chi]$ as the base ring
(and not $\mathbb{Z}$). We let
\begin{align*}
\Mk kN\chi R := &{\rm Hom}_{\mathbb{Z}[\chi]} (\mathbb{T}_{\mathbb{Z}[\chi]}(\Mk kN\chi\mathbb{C}),R) \\
\cong &{\rm Hom}_R (\mathbb{T}_{\mathbb{Z}[\chi]}(\Mk kN\chi\mathbb{C}) \otimes_{\mathbb{Z}[\chi]} R,R)
\end{align*}
and similarly for the cusp forms.
We remark that these definitions of $\Mkone kN\mathbb{C}$, $\Mk k N \chi \mathbb{C}$ etc.\
agree with those from section~\ref{sec:1}; thus, it is justified to use the same pieces of notation.
As a special case, we get that
$\Mkone k N \mathbb{Z}$ precisely consists of those holomorphic modular
forms in $\Mkone kN\mathbb{C}$ whose $q$-expansions take values in~$\mathbb{Z}$.
If $\mathbb{Z}[\chi] \xrightarrow{\pi} R=\mathbb{F}$ with $\mathbb{F}$
a finite field of characteristic~$p$ or $\overline{\FF}_p$, we call
$\Mk k N \chi \mathbb{F}$ the space of {\em mod $p$ modular forms
of weight $k$, level $N$ and character $\chi$}.
Of course, for the cuspidal space similar statements are made and we use similar notation.
We furthermore extend the notation for Hecke algebras introduced in section~\ref{sec:1} as follows.
If $S$ is an $R$-algebra and $M$ is an $S$-module admitting the action of Hecke operators $T_n$ for $n \in \mathbb{N}$,
then we let $\mathbb{T}_R(M)$ be the $R$-subalgebra of ${\rm End}_S(M)$ generated by all $T_n$ for $n \in \mathbb{N}$.
We now study base change properties of modular forms over~$R$.
\begin{proposition}
\begin{enumerate}[(a)]
\item Let $\mathbb{Z} \to R \to S$ be ring homomorphisms. Then the following
statements hold.
\begin{enumerate}[(i)]
\item The natural map
$$ \Mkone kNR \otimes_R S \to \Mkone kNS$$
is an isomorphism.
\item The evaluation pairing
$$ \Mkone kNR \times \mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C}) \otimes_\mathbb{Z} R \to R$$
is the $q$-pairing and it is perfect.
\item The Hecke algebra $\mathbb{T}_R(\Mkone kNR)$ is naturally isomorphic to\\
$\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C}) \otimes_\mathbb{Z} R$.
\end{enumerate}
\item If $\mathbb{Z}[\chi] \to R \to S$ are flat, then Statement~(i) holds
for $\Mk kN\chi R$.
\item If $\mathbb{T}_{\mathbb{Z}[\chi]} (\Mk kN\chi\mathbb{C})$ is a free $\mathbb{Z}[\chi]$-module
and $\mathbb{Z}[\chi] \to R \to S$ are ring homomorphisms,
statements (i)-(iii) hold for $\Mk kN\chi R$.
\end{enumerate}
\end{proposition}
\begin{proof}
(a) We use the following general statement, in which $M$ is assumed to
be a free finitely generated $R$-module and $N, T$ are $R$-modules:
$$ {\rm Hom}_R (M,N) \otimes_R T \cong {\rm Hom}_R (M, N\otimes_R T).$$
To see this, view $M$ as $\bigoplus R$ and pull the direct sum out
of the ${\rm Hom}$, perform the tensor product, and put the direct sum back
into the ${\rm Hom}$.
(i) Write $\mathbb{T}_\mathbb{Z}$ for $\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C})$. It is a free $\mathbb{Z}$-module by
Lemma~\ref{freealg}.
We have
$$\Mkone kNR \otimes_R S = {\rm Hom}_\mathbb{Z}(\mathbb{T}_\mathbb{Z},R) \otimes_R S,$$
which by the above is isomorphic to ${\rm Hom}_\mathbb{Z}(\mathbb{T}_\mathbb{Z},R \otimes_R S)$ and
hence to $\Mkone kNS$.
(ii) The evaluation pairing ${\rm Hom}_\mathbb{Z}(\mathbb{T}_\mathbb{Z},\mathbb{Z}) \times \mathbb{T}_\mathbb{Z} \to \mathbb{Z}$
is perfect, since $\mathbb{T}_\mathbb{Z}$ is free as a $\mathbb{Z}$-module. The result
follows from~(i) by tensoring with~$R$.
(iii) We consider the natural map
$$ \mathbb{T}_\mathbb{Z} \otimes_\mathbb{Z} R \to {\rm End}_R ({\rm Hom}_R(\mathbb{T}_\mathbb{Z} \otimes_\mathbb{Z} R, R))$$
and show that it is injective. Its image is by definition $\mathbb{T}_R(\Mkone kNR)$.
Let $T$ be in the kernel. Then $\phi(T) = 0$ for all
$\phi \in {\rm Hom}_R(\mathbb{T}_\mathbb{Z} \otimes_\mathbb{Z} R,R)$.
As the pairing in (ii) is perfect and, in particular, non-degenerate,
$T=0$ follows.
(b) Due to flatness we have
$${\rm Hom}_R(\mathbb{T}_{\mathbb{Z}[\chi]} \otimes_{\mathbb{Z}[\chi]} R,R) \otimes_R S \cong {\rm Hom}_S (\mathbb{T}_{\mathbb{Z}[\chi]} \otimes_{\mathbb{Z}[\chi]} S, S),$$
as desired.
(c) The same arguments as in~(a) work.
\end{proof}
\subsubsection*{Galois conjugacy classes}
By the definition of the Hecke action in equation~\eqref{eq:hecke}, the normalised Hecke eigenforms in
the $R$-module $\Mkone k N R$ are precisely
the $\mathbb{Z}$-algebra homomorphisms in
${\rm Hom}_\mathbb{Z} (\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C}),R)$,
where the normalisation means that the identity operator $T_1$ is sent to~$1$.
Such an algebra homomorphism $\Phi$ is often referred to as a
{\em system of eigenvalues}, since the image of each $T_n$
corresponds to an eigenvalue of~$T_n$, namely to $\Phi(T_n) = a_n(f)$
(if $f$ corresponds to $\Phi$).
Let us now consider a perfect field $K$ (if we are working with a Dirichlet character,
we also want that $K$ admits a ring homomorphism $\mathbb{Z}[\chi] \to K$).
Denote by ${\overline{K}}$ an algebraic closure, so that we have
\begin{align*}
\Mkone kN {\overline{K}}& = {\rm Hom}_\mathbb{Z} (\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C}),{\overline{K}})\\
& \cong {\rm Hom}_K (\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C}) \otimes_\mathbb{Z} K,{\overline{K}}).
\end{align*}
We can compose any $\Phi \in {\rm Hom}_\mathbb{Z} (\mathbb{T}_\mathbb{Z}(\Mkone kN\mathbb{C}),{\overline{K}})$ by
any field automorphism $\sigma: {\overline{K}} \to {\overline{K}}$ fixing~$K$.
Thus, we obtain an action of the absolute Galois group
$\Gal({\overline{K}}/K)$ on $\Mkone kN {\overline{K}}$ (on formal $q$-expansions,
we only need to apply $\sigma$ to the coefficients).
All this works similarly for the cuspidal subspace, too.
Like this, we also obtain a $\Gal({\overline{K}}/K)$-action on the
normalised eigenforms, and can hence speak about {\em Galois
conjugacy classes of eigenforms}.
\begin{proposition}\label{galoisconjugacy}
We have the following bijective correspondences:
\begin{align*}
\Spec(\mathbb{T}_K(\cdot)) &\overset{1-1}{\leftrightarrow}
{\rm Hom}_{K \textnormal{-alg}}(\mathbb{T}_K(\cdot),{\overline{K}})/\Gal({\overline{K}}/K)\\
&\overset{1-1}{\leftrightarrow} \{\textnormal{ normalised eigenf.\ in $\cdot$ }\}/\Gal({\overline{K}}/K)
\end{align*}
and with $K = {\overline{K}}$
$$ \Spec(\mathbb{T}_{\overline{K}}(\cdot)) \overset{1-1}{\leftrightarrow}
{\rm Hom}_{{\overline{K}} \textnormal{-alg}}(\mathbb{T}_{\overline{K}}(\cdot),{\overline{K}})
\overset{1-1}{\leftrightarrow} \{\textnormal{ normalised eigenforms in $\cdot$ }\}.$$
Here, $\cdot$ stands for either $\Mkone kN{\overline{K}}$, $\Skone kN{\overline{K}}$
or the respective spaces with a Dirichlet character.
\end{proposition}
We recall that $\Spec$ of a ring is the set
of prime ideals. In the next section we will see that in
$\mathbb{T}_K(\cdot)$ and $\mathbb{T}_{\overline{K}}(\cdot)$ all prime ideals are
already maximal (it is an easy consequence of the finite
dimensionality).
\begin{proof}
Exercise~\ref{exgaloisconjugacy}.
\end{proof}
We repeat that the coefficients of any eigenform~$f$
in $\Mk kN\chi {\overline{K}}$ lie in a finite extension of $K$, namely
in $\mathbb{T}_K(\Mk kN\chi K)/ \mathfrak{m}$, where $\mathfrak{m}$ is the maximal
ideal corresponding to the conjugacy class of~$f$.
Let us note that the above discussion applies to
${\overline{K}}=\mathbb{C}$, ${\overline{K}} = \overline{\QQ}$, ${\overline{K}} = \overline{\QQ}_p$, as well
as to ${\overline{K}} = \overline{\FF}_p$.
In the next sections we will also take into account the finer
structure of Hecke algebras over $\mathcal{O}$, or rather over
the completion of $\mathcal{O}$ at one prime.
\subsubsection{Some commutative algebra}
In this section we leave the special context of modular forms
for a moment and provide quite
useful results from commutative algebra that will be applied
to Hecke algebras in the sequel.
We start with a simple case which we will prove directly.
Let $\mathbb{T}$ be an {\em Artinian} algebra, i.e.\ an algebra in which
every descending chain
of ideals becomes stationary. Our main example will be finite
dimensional algebras over a field. That those are Artinian is obvious, since in every
proper inclusion of ideals the dimension diminishes.
For any ideal $\mathfrak{a}$ of $\mathbb{T}$ the sequence $\mathfrak{a}^n$ becomes stationary,
i.e.\ $\mathfrak{a}^n = \mathfrak{a}^{n+1}$ for all sufficiently large~$n$;
we write $\mathfrak{a}^\infty$ for this stable ideal.
\begin{proposition}\label{propartin}
Let $\mathbb{T}$ be an Artinian ring.
\begin{enumerate}[(a)]
\item Every prime ideal of $\mathbb{T}$ is maximal.
\item There are only finitely many maximal ideals in $\mathbb{T}$.
\item Let $\mathfrak{m}$ be a maximal ideal of~$\mathbb{T}$. It is the only maximal
ideal containing~$\mathfrak{m}^\infty$.
\item Let $\mathfrak{m} \neq \mathfrak{n}$ be two maximal ideals.
For any $k \in \mathbb{N}$ and $k=\infty$ the ideals $\mathfrak{m}^k$ and $\mathfrak{n}^k$
are coprime.
\item The Jacobson radical $\bigcap_{\mathfrak{m} \in \Spec(\mathbb{T})} \mathfrak{m}$ is equal
to the nilradical and consists of the nilpotent elements.
\item We have $\bigcap_{\mathfrak{m} \in \Spec(\mathbb{T})} \mathfrak{m}^\infty = (0)$.
\item (Chinese Remainder Theorem) The natural map
$$\mathbb{T} \xrightarrow{a \mapsto (\dots, a + \mathfrak{m}^\infty, \dots)}
\prod_{\mathfrak{m} \in \Spec(\mathbb{T})} \mathbb{T}/\mathfrak{m}^\infty$$
is an isomorphism.
\item For every maximal ideal $\mathfrak{m}$, the ring $\mathbb{T}/\mathfrak{m}^\infty$ is local with
maximal ideal~$\mathfrak{m}$ and is hence isomorphic to $\mathbb{T}_\mathfrak{m},$
the localisation of $\mathbb{T}$ at~$\mathfrak{m}$.
\end{enumerate}
\end{proposition}
\begin{proof}
(a) Let $\mathfrak{p}$ be a prime ideal of~$\mathbb{T}$. The quotient $\mathbb{T} \twoheadrightarrow \mathbb{T}/\mathfrak{p}$
is an Artinian integral domain, since ideal chains in $\mathbb{T}/\mathfrak{p}$ lift to ideal
chains in~$\mathbb{T}$. Let $0 \neq x \in \mathbb{T}/\mathfrak{p}$. We have $(x)^n = (x)^{n+1} = (x)^\infty$
for some $n$ big enough. Hence, $x^n = y x^{n+1}$ with some $y \in \mathbb{T}/\mathfrak{p}$ and
so $xy = 1$, as $\mathbb{T}/\mathfrak{p}$ is an integral domain.
(b) Assume there are infinitely many maximal ideals and label a countable
subset of them as $\mathfrak{m}_1, \mathfrak{m}_2, \dots$. Form the descending ideal chain
$$ \mathfrak{m}_1 \supset \mathfrak{m}_1 \cap \mathfrak{m}_2 \supset \mathfrak{m}_1\cap\mathfrak{m}_2\cap\mathfrak{m}_3 \supset \dots.$$
This chain becomes stationary, so that for some $n$ we have
$$ \mathfrak{m}_1\cap\dots\cap\mathfrak{m}_n = \mathfrak{m}_1\cap\dots\cap\mathfrak{m}_n\cap\mathfrak{m}_{n+1}.$$
Consequently, $\mathfrak{m}_1\cap\dots\cap\mathfrak{m}_n \subset \mathfrak{m}_{n+1}$. We claim that there
is $i \in \{1,2,\dots,n\}$ with $\mathfrak{m}_i \subset \mathfrak{m}_{n+1}$; due to the maximality
of $\mathfrak{m}_i$, this yields the desired contradiction.
To prove the claim we assume that $\mathfrak{m}_i \not\subseteq \mathfrak{m}_{n+1}$ for all~$i$.
Let $x_i \in \mathfrak{m}_i - \mathfrak{m}_{n+1}$ and $y = x_1\cdot x_2 \cdots x_n$.
Then $y \in \mathfrak{m}_1\cap\dots\cap\mathfrak{m}_n$, but $y \not\in \mathfrak{m}_{n+1}$ due to the primality
of~$\mathfrak{m}_{n+1}$, giving a contradiction.
(c) Let $\mathfrak{m} \in \Spec(\mathbb{T})$ be a maximal ideal. Assume
that $\mathfrak{n}$ is a different maximal ideal with $\mathfrak{m}^\infty \subset \mathfrak{n}$.
Choose $x \in \mathfrak{m}$. Some power $x^r \in \mathfrak{m}^\infty$ and, thus,
$x^r \in \mathfrak{n}$. As $\mathfrak{n}$ is prime, $x \in \mathfrak{n}$ follows, implying
$\mathfrak{m} \subseteq \mathfrak{n}$, contradicting the maximality of~$\mathfrak{m}$.
(d) Assume that $I := \mathfrak{m}^k + \mathfrak{n}^k \neq \mathbb{T}$. Then $I$ is contained in some
maximal ideal $\mathfrak{p}$. Hence, $\mathfrak{m}^\infty$ and $\mathfrak{n}^\infty$ are contained in~$\mathfrak{p}$,
whence by (c), $\mathfrak{m}=\mathfrak{n}=\mathfrak{p}$; contradiction.
(e) It is a standard fact from Commutative Algebra that
the nilradical (the ideal of nilpotent elements) is the intersection
of the minimal prime ideals.
(f) For $k \in \mathbb{N}$ and $k = \infty$, (d) implies
$$ \bigcap_{\mathfrak{m} \in \Spec(\mathbb{T})} \mathfrak{m}^k = \prod_{\mathfrak{m} \in \Spec(\mathbb{T})} \mathfrak{m}^k
= (\prod_{\mathfrak{m} \in \Spec(\mathbb{T})} \mathfrak{m})^k = (\bigcap_{\mathfrak{m} \in \Spec(\mathbb{T})} \mathfrak{m})^k.$$
By (e) we know that $\bigcap_{\mathfrak{m} \in \Spec(\mathbb{T})} \mathfrak{m}$ is the nilradical. It
can be generated by finitely many elements $a_1,\dots,a_n$ all of which are
nilpotent. So a high enough power of $\bigcap_{\mathfrak{m} \in \Spec(\mathbb{T})} \mathfrak{m}$ is zero.
(g) The injectivity follows from~(f). It suffices to show that the elements
$(0,\dots,0,1,0,\dots,0)$ are in the image of the map. Suppose the $1$ is at
the place belonging to~$\mathfrak{m}$. Due to coprimeness (d) for any maximal ideal $\mathfrak{n}\neq \mathfrak{m}$
we can find $a_\mathfrak{n} \in \mathfrak{n}^\infty$ and $a_\mathfrak{m} \in \mathfrak{m}^\infty$ such that $1= a_\mathfrak{m} + a_\mathfrak{n}$.
Let $x := \prod_{\mathfrak{n} \in \Spec(\mathbb{T}), \mathfrak{n} \neq \mathfrak{m}} a_\mathfrak{n}$.
We have $x \in \prod_{\mathfrak{n} \in \Spec(\mathbb{T}), \mathfrak{n} \neq \mathfrak{m}} \mathfrak{n}^\infty$ and
$x = \prod_{\mathfrak{n} \in \Spec(\mathbb{T}), \mathfrak{n} \neq \mathfrak{m}} (1-a_\mathfrak{m}) \equiv 1 \mod \mathfrak{m}$.
Hence, the map sends $x$ to $(0,\dots,0,1,0,\dots,0)$, proving the surjectivity.
(h) By (c), the only maximal ideal of~$\mathbb{T}$ containing~$\mathfrak{m}^\infty$ is~$\mathfrak{m}$.
Consequently, $\mathbb{T}/{\mathfrak{m}^\infty}$ is a local ring with maximal ideal the image of~$\mathfrak{m}$.
Let $s \in \mathbb{T} - \mathfrak{m}$. As $s + \mathfrak{m}^\infty \not\in \mathfrak{m}/\mathfrak{m}^\infty$, the element
$s + \mathfrak{m}^\infty$ is a unit in $\mathbb{T}/\mathfrak{m}^\infty$. Thus, the map
$$ \mathbb{T}_\mathfrak{m} \xrightarrow{\frac{y}{s} \mapsto y s^{-1} + \mathfrak{m}^\infty} \mathbb{T}/\mathfrak{m}^\infty$$
is well-defined. It is clearly surjective. Suppose $\frac{y}{s}$ maps to~$0$.
Since the image of~$s$ is a unit, $y \in \mathfrak{m}^\infty$ follows.
The element $x$ constructed in~(g) is in $\prod_{\mathfrak{n} \in \Spec(\mathbb{T}), \mathfrak{n} \neq \mathfrak{m}} \mathfrak{n}^\infty$, but not in~$\mathfrak{m}$.
By (f) and (d), $(0) = \prod_{\mathfrak{m} \in \Spec(\mathbb{T})} \mathfrak{m}^\infty$. Thus, $y \cdot x = 0$ and
also $\frac{y}{s} = \frac{yx}{sx} = 0$, proving the injectivity.
\end{proof}
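The structure of parts (e) and (f) can be checked directly in a toy Artinian ring, say $\mathbb{Z}/72\mathbb{Z}$ (a Python sketch; the ring and the exponent bound are illustrative choices made here, not taken from the text):

```python
# Parts (e)/(f) in the Artinian ring Z/72Z (72 = 2^3 * 3^2): the
# nilradical is the ideal (6), and a high enough power of it is zero.
n = 72
nilradical = [x for x in range(n) if any(pow(x, k, n) == 0 for k in range(1, 7))]
assert nilradical == [x for x in range(n) if x % 6 == 0]  # nilradical = (6)
# (6)^3 = (216) = (0) in Z/72Z: every triple product of nilpotents vanishes
assert all((a * b * c) % n == 0
           for a in nilradical for b in nilradical for c in nilradical)
print(len(nilradical))  # 12 elements: the multiples of 6
```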
A useful and simple way to rephrase a product decomposition as in~(g) is to
use idempotents. In concrete terms, the idempotents of $\mathbb{T}$ (as in the proposition)
are precisely the elements of the form $(\dots,x_\mathfrak{m},\dots)$
with $x_\mathfrak{m} \in \{0,1\} \subseteq \mathbb{T}/\mathfrak{m}^\infty$.
\begin{definition}
Let $\mathbb{T}$ be a ring. An {\em idempotent of~$\mathbb{T}$} is an element~$e$ that satisfies
$e^2=e$.
Two idempotents $e$, $f$ are {\em orthogonal} if $ef=0$.
An idempotent $e$ is {\em primitive} if $e\mathbb{T}$ is a local ring.
A set of idempotents $\{e_1,\dots, e_n\}$ is said to be {\em complete} if
$1 = \sum_{i=1}^n e_i$.
\end{definition}
In concrete terms for $\mathbb{T} = \prod_{\mathfrak{m} \in \Spec(\mathbb{T})} \mathbb{T}/\mathfrak{m}^\infty$, a complete
set of primitive pairwise orthogonal idempotents is given by
$$(1,0,\dots,0), (0,1,0,\dots,0), \dots, (0,\dots,0,1,0), (0,\dots,0,1).$$
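For a concrete instance of this product decomposition, take $\mathbb{T} = \mathbb{Z}/12\mathbb{Z} \cong \mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$; the idempotents can be enumerated by brute force (a Python sketch, the modulus being an illustrative choice):

```python
# Idempotents of Z/12Z = Z/4Z x Z/3Z: the solutions of e^2 = e.
n = 12
idem = [e for e in range(n) if (e * e - e) % n == 0]
print(idem)  # [0, 1, 4, 9]
# Under CRT, 4 corresponds to (0,1) and 9 to (1,0): a complete set of
# primitive pairwise orthogonal idempotents.
assert (4 + 9) % n == 1 and (4 * 9) % n == 0
```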
In Exercise~\ref{exidempotent}, you are asked (among other things)
to prove that in the above case
$\mathfrak{m}^\infty$ is a principal ideal generated by an idempotent.
Below we will present an algorithm for computing a complete set of primitive
pairwise orthogonal idempotents for an Artinian ring.
We now come to a more general setting, namely working with a finite algebra~$\mathbb{T}$
over a complete local ring instead of a field. We will lift the idempotents
of the reduction of~$\mathbb{T}$ (for the maximal ideal of the complete local ring)
to idempotents of~$\mathbb{T}$ by Hensel's lemma. This gives us a proposition
very similar to Proposition~\ref{propartin}.
\begin{proposition}[Hensel's lemma]\label{hensel}
Let $R$ be a ring that is complete with respect to the ideal~$\mathfrak{m}$ and let
$f \in R[X]$ be a polynomial. If
$$ f(a) \equiv 0 \mod (f'(a))^2 \mathfrak{m}$$
for some $a \in R$, then there is $b \in R$ such that
$$ f(b) = 0 \textnormal{ and } b \equiv a \mod f'(a)\mathfrak{m}.$$
If $f'(a)$ is not a zero-divisor, then $b$ is unique with
these properties.
\end{proposition}
\begin{proof}
\cite{Eisenbud}, Theorem~7.3.
\end{proof}
Recall that the {\em height} of a prime ideal $\mathfrak{p}$ in a ring~$R$ is the supremum among
all $n \in \mathbb{N}$ such that there are inclusions of prime ideals
$\mathfrak{p}_0 \subsetneq \mathfrak{p}_1 \subsetneq \dots \subsetneq \mathfrak{p}_{n-1} \subsetneq \mathfrak{p}$.
The {\em Krull dimension} of~$R$ is the supremum of the heights of the prime ideals of~$R$.
\begin{proposition}\label{commalg}
Let $\mathcal{O}$ be an integral domain of characteristic zero which is a finitely
generated $\mathbb{Z}$-module. Write $\widehat{\mathcal{O}}$ for the completion of $\mathcal{O}$ at a
maximal prime of~$\mathcal{O}$ and denote by $\mathbb{F}$ the residue field
and by ${\widehat{K}}$ the fraction field of~$\widehat{\mathcal{O}}$.
Let furthermore $\mathbb{T}$ be a commutative $\mathcal{O}$-algebra
which is finitely generated as an $\mathcal{O}$-module.
For any ring homomorphism $\mathcal{O}\to S$ write $\mathbb{T}_S$ for $\mathbb{T} \otimes_\mathcal{O} S$.
Then the following statements hold.
\begin{enumerate}[(a)]
\item The Krull dimension of $\mathbb{T}_{\widehat{\mathcal{O}}}$ is less than or equal to~$1$,
i.e.\ between any prime ideal and any maximal ideal $\mathfrak{p} \subset \mathfrak{m}$ there is
no other prime ideal.
The maximal ideals of $\mathbb{T}_{\widehat{\mathcal{O}}}$ correspond bijectively under
taking pre-images to the maximal ideals of $\mathbb{T}_\mathbb{F}$. Primes $\mathfrak{p}$ of
height $0$ (i.e.\ those that do not contain any other prime ideal)
which are properly contained in a prime of height~$1$ (i.e.\ a maximal prime) of
$\mathbb{T}_{\widehat{\mathcal{O}}}$ are in bijection with primes of $\mathbb{T}_{\widehat{K}}$ under extension
(i.e.\ $\mathfrak{p} \mathbb{T}_{\widehat{K}}$), for which the notation $\mathfrak{p}^e$ will be used.
Under the correspondences, one has
$$\mathbb{T}_{\mathbb{F},\mathfrak{m}} \cong \mathbb{T}_{\widehat{\mathcal{O}},\mathfrak{m}} \otimes_{\widehat{\mathcal{O}}} \mathbb{F}$$
and
$$\mathbb{T}_{\widehat{\mathcal{O}}, \mathfrak{p}} \cong \mathbb{T}_{{\widehat{K}},\mathfrak{p}^e}.$$
\item The algebra $\mathbb{T}_{\widehat{\mathcal{O}}}$ decomposes as
$$ \mathbb{T}_{\widehat{\mathcal{O}}} \cong \prod_\mathfrak{m} \mathbb{T}_{\widehat{\mathcal{O}},\mathfrak{m}},$$
where the product runs over the maximal ideals $\mathfrak{m}$ of $\mathbb{T}_{\widehat{\mathcal{O}}}$.
\item The algebra $\mathbb{T}_\mathbb{F}$ decomposes as
$$ \mathbb{T}_\mathbb{F} \cong \prod_\mathfrak{m} \mathbb{T}_{\mathbb{F},\mathfrak{m}},$$
where the product runs over the maximal ideals $\mathfrak{m}$ of $\mathbb{T}_\mathbb{F}$.
\item The algebra $\mathbb{T}_{\widehat{K}}$ decomposes as
$$ \mathbb{T}_{\widehat{K}} \cong \prod_\mathfrak{p} \mathbb{T}_{{\widehat{K}},\mathfrak{p}^e} \cong \prod_\mathfrak{p} \mathbb{T}_{\widehat{\mathcal{O}},\mathfrak{p}} ,$$
where the products run over the minimal prime ideals $\mathfrak{p}$ of
$\mathbb{T}_{\widehat{\mathcal{O}}}$ which are contained in a prime ideal of height~$1$.
\end{enumerate}
\end{proposition}
\begin{proof}
We first need that $\widehat{\mathcal{O}}$ has Krull dimension~$1$. This, however,
follows from the fact that $\mathcal{O}$ has Krull dimension~$1$, as it is an integral extension of~$\mathbb{Z}$,
and the correspondence between the prime ideals of a ring and its completion.
As $\mathbb{T}_{\widehat{\mathcal{O}}}$ is a finitely generated $\widehat{\mathcal{O}}$-module,
$\mathbb{T}_{\widehat{\mathcal{O}}}/\mathfrak{p}$ with a prime $\mathfrak{p}$ is an integral domain which is
a finitely generated $\widehat{\mathcal{O}}/(\mathfrak{p} \cap \widehat{\mathcal{O}})$-module.
Hence, it is either a finite field (when the prime ideal $\mathfrak{p} \cap \widehat{\mathcal{O}}$ is the
unique maximal ideal of~$\widehat{\mathcal{O}}$) or a finite
extension of $\widehat{\mathcal{O}}$ (when $\mathfrak{p} \cap \widehat{\mathcal{O}}=0$ so that the structure map
$\widehat{\mathcal{O}} \to \mathbb{T}_{\widehat{\mathcal{O}}}/\mathfrak{p}$ is injective). This proves that the height of $\mathfrak{p}$ is less
than or equal to~$1$. The correspondences and the isomorphisms of Part~(a) are
the subject of Exercise~\ref{excorrespondences}.
We have already seen Parts~(c) and~(d) in Proposition~\ref{propartin}. Part~(b)
follows from~(c) by applying Hensel's lemma (Proposition~\ref{hensel}) to the idempotents
of the decomposition of~(c). We follow \cite{Eisenbud}, Corollary~7.5,
for the details. Since $\widehat{\mathcal{O}}$ is complete with respect
to some ideal~$\mathfrak{p}$, so is $\mathbb{T}_{\widehat{\mathcal{O}}}$ as it is a finitely generated $\widehat{\mathcal{O}}$-module. Hence, we may use
Hensel's lemma in $\mathbb{T}_{\widehat{\mathcal{O}}}$.
Given an idempotent $\overline{e}$ of $\mathbb{T}_\mathbb{F}$, we will first show
that it lifts to a unique idempotent of~$\mathbb{T}_{\widehat{\mathcal{O}}}$.
Let $e$ be any lift of $\overline{e}$ and let $f(X) = X^2 - X$ be a polynomial
annihilating~$\overline{e}$.
We have that $f'(e) = 2e-1$ is a unit since
$(2e-1)^2 = 4e^2-4e+1\equiv 1 \mod \mathfrak{p}$.
Hensel's lemma now gives us a unique root $e_1 \in \mathbb{T}_{\widehat{\mathcal{O}}}$ of~$f$,
i.e.\ an idempotent, lifting~$\overline{e}$.
We now lift every element of a set of pairwise orthogonal idempotents
of $\mathbb{T}_\mathbb{F}$. It now suffices to show that the lifted idempotents are also
pairwise orthogonal; their sum is then automatically~$1$, since otherwise we would get a contradiction
to the correspondences in~(a): there cannot be more idempotents in $\mathbb{T}_{\widehat{\mathcal{O}}}$
than in $\mathbb{T}_\mathbb{F}$. As their reductions are orthogonal, a product $e_ie_j$
of lifted idempotents is in~$\mathfrak{p}$. Hence, $e_ie_j=e_i^de_j^d \in \mathfrak{p}^d$ for
all~$d$, whence $e_i e_j = 0$, as desired.
\end{proof}
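The idempotent-lifting step in the proof of~(b) can be watched in a toy case: the Newton-type iteration $e \mapsto 3e^2-2e^3$ agrees with $e$ modulo $e^2-e$ and squares that error, hence converges. A Python sketch (the ring $\mathbb{Z}/432\mathbb{Z}$, trivially complete with respect to its nilpotent ideal $(6)$, is an illustrative choice made here):

```python
# Lift the idempotent 4 of Z/6Z through the nilpotent ideal (6) of
# Z/432Z via e <- 3e^2 - 2e^3; the iteration fixes e mod (e^2 - e)
# and the error e^2 - e is squared at each step, so it terminates.
n, e = 432, 4
while (e * e - e) % n != 0:
    e = (3 * e * e - 2 * e * e * e) % n
print(e)  # 352: an idempotent of Z/432Z reducing to 4 mod 6
assert (e * e - e) % n == 0 and e % 6 == 4
```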
\subsubsection{Commutative algebra of Hecke algebras}
Let $k\ge 1$, $N \ge 1$ and $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{C}^\times$.
Moreover, let $p$ be a prime, $\mathcal{O} := \mathbb{Z}[\chi]$, $\mathfrak{P}$ a maximal
prime of $\mathcal{O}$ above~$p$, and let $\mathbb{F}$
be the residue field of $\mathcal{O}$ modulo~$\mathfrak{P}$.
We let $\widehat{\mathcal{O}}$ denote the completion of $\mathcal{O}$
at~$\mathfrak{P}$. Moreover, the field of fractions of $\widehat{\mathcal{O}}$
will be denoted by~${\widehat{K}}$ and an algebraic closure by ${\overline{\widehat{K}}}$.
For $\mathbb{T}_\mathcal{O}(\Mk kN\chi\mathbb{C})$ we write $\mathbb{T}_\mathcal{O}$
for short, and similarly over other rings.
We keep using the fact that $\mathbb{T}_\mathcal{O}$ is finitely generated
as an $\mathcal{O}$-module.
We shall now apply Proposition~\ref{commalg} to $\mathbb{T}_{\widehat{\mathcal{O}}}$.
\begin{proposition}\label{prop:pure-one}
The Hecke algebras $\mathbb{T}_{\mathcal{O}}$ and $\mathbb{T}_{\widehat{\mathcal{O}}}$ are pure of Krull dimension~$1$,
i.e.\ every maximal prime contains some minimal prime ideal.
\end{proposition}
\begin{proof}
It suffices to prove that $\mathbb{T}_{\widehat{\mathcal{O}}}$ is pure of Krull dimension~$1$ because
completion of $\mathbb{T}_{{\mathcal{O}}}$ at a maximal ideal of $\mathcal{O}$ does not change the Krull dimension.
First note that $\widehat{\mathcal{O}}$ is pure of Krull dimension~$1$ as it
is an integral extension of~$\mathbb{Z}_p$ (and the Krull dimension is an invariant in integral extensions).
With the same reasoning, $\mathbb{T}_{\widehat{\mathcal{O}}}$ is of Krull dimension~$1$; we have to see that it is pure.
According to Proposition~\ref{commalg}, $\mathbb{T}_{\widehat{\mathcal{O}}}$ is the direct product of finite local
$\widehat{\mathcal{O}}$-algebras~$\mathbb{T}_i$. As each $\mathbb{T}_i$ embeds into a finite dimensional matrix algebra over~${\widehat{K}}$,
it admits a simultaneous eigenvector (after possibly a finite extension of ${\widehat{K}}$) for the standard action
of the matrix algebra on the corresponding ${\widehat{K}}$-vector space. The map $\varphi$ sending
an element of $\mathbb{T}_i$ to its eigenvalue is a non-trivial ring homomorphism into an integral domain,
so its kernel is a prime ideal. It is strictly contained in the maximal ideal of~$\mathbb{T}_i$:
the eigenvalues are integral, i.e.\ they lie in the valuation ring of a finite extension of~${\widehat{K}}$,
and can hence be reduced modulo the maximal ideal of that valuation ring; the kernel of $\varphi$ followed by
the reduction map is the maximal ideal of~$\mathbb{T}_i$, which strictly contains the kernel of~$\varphi$.
This proves that the height of the maximal ideal is~$1$.
\end{proof}
By Proposition~\ref{commalg}, minimal primes of $\mathbb{T}_{\widehat{\mathcal{O}}}$
correspond to the maximal primes of $\mathbb{T}_{\widehat{K}}$ and hence
to $\Gal({\overline{\widehat{K}}}/{\widehat{K}})$-conjugacy classes of eigenforms
in $\Mk kN\chi{\overline{\widehat{K}}}$. By a brute force identification of
${\overline{\widehat{K}}} = \overline{\QQ}_p$ with $\mathbb{C}$ we may still think about
these eigenforms as the usual holomorphic ones (the Galois
conjugacy can then still be seen as conjugacy by a decomposition
group above~$p$ inside the absolute Galois group of
the field of fractions of~$\mathcal{O}$).
Again by Proposition~\ref{commalg}, maximal prime ideals of $\mathbb{T}_{\widehat{\mathcal{O}}}$
correspond to the maximal prime ideals of $\mathbb{T}_\mathbb{F}$ and hence
to $\Gal(\overline{\FF}/\mathbb{F})$-conjugacy classes of eigenforms
in $\Mk kN\chi\overline{\FF}$.
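A toy version of these correspondences, with $\mathbb{T} = \mathbb{Z}[x]/(x^2-x-1)$ standing in for a genuine Hecke algebra (an assumption made here purely for illustration): the local factors of $\mathbb{T} \otimes \mathbb{Z}_p$ match the irreducible factors of $x^2-x-1$ modulo~$p$, which a short Python sketch can detect by counting roots.

```python
# Local factors of T (x) Z_p for T = Z[x]/(x^2 - x - 1), read off from
# the factorisation of x^2 - x - 1 modulo p.
def factor_type(p):
    roots = [a for a in range(p) if (a * a - a - 1) % p == 0]
    return {2: "split: two local factors",
            1: "ramified: one local factor",
            0: "inert: one local factor"}[len(roots)]

assert factor_type(11) == "split: two local factors"   # roots 4 and 8
assert factor_type(5) == "ramified: one local factor"  # double root 3
assert factor_type(7) == "inert: one local factor"     # no roots mod 7
```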
The spectrum of $\mathbb{T}_{\widehat{\mathcal{O}}}$ allows one to phrase very
elegantly when conjugacy classes of eigenforms are congruent
modulo a prime above~$p$. Let us first explain what that means.
Normalised eigenforms~$f$ take their coefficients $a_n(f)$ in
rings of integers of number fields ($\mathbb{T}_\mathcal{O} / \mathfrak{m}$, when $\mathfrak{m}$
is the kernel of the $\mathcal{O}$-algebra homomorphism $\mathbb{T}_\mathcal{O} \to \mathbb{C}$,
given by $T_n \mapsto a_n(f)$), so they can be reduced modulo
primes above~$p$ (for which we will often just say ``reduced
modulo~$p$'').
The reduction modulo a prime above~$p$ of the $q$-expansion
of a normalised eigenform $f$ in $\Mk kN\chi\mathbb{C}$ is the
formal $q$-expansion of an eigenform in $\Mk kN\chi\overline{\FF}$.
If two normalised eigenforms $f,g$ in $\Mk kN\chi\mathbb{C}$ or
$\Mk kN\chi{\overline{\widehat{K}}}$ reduce to the same element in
$\Mk kN\chi\overline{\FF}$, we say that they are {\em congruent
modulo~$p$}.
Due to Exercise~\ref{exconj}, we may speak about {\em reductions
modulo~$p$} of $\Gal({\overline{\widehat{K}}}/{\widehat{K}})$-conjugacy classes of normalised
eigenforms to $\Gal(\overline{\FF}/\mathbb{F})$-conjugacy classes.
We hence say that two $\Gal({\overline{\widehat{K}}}/{\widehat{K}})$-con\-ju\-gacy
classes, say corresponding to normalised eigenforms $f,g$,
respectively, minimal ideals $\mathfrak{p}_1$
and $\mathfrak{p}_2$ of $\mathbb{T}_{\widehat{\mathcal{O}}}$, are {\em congruent
modulo~$p$}, if they reduce to the same
$\Gal(\overline{\FF}/\mathbb{F})$-conjugacy class.
\begin{proposition}\label{conjugacy}
The $\Gal({\overline{\widehat{K}}}/{\widehat{K}})$-conjugacy classes belonging to minimal
primes $\mathfrak{p}_1$ and $\mathfrak{p}_2$ of $\mathbb{T}_{\widehat{\mathcal{O}}}$ are congruent
modulo~$p$ if and only if they are contained in a
common maximal prime $\mathfrak{m}$ of $\mathbb{T}_{\widehat{\mathcal{O}}}$.
\end{proposition}
\begin{proof}
Exercise~\ref{exconjugacy}.
\end{proof}
We mention the fact that if $f$ is a newform belonging to the maximal ideal
$\mathfrak{m}$ of the Hecke algebra $\mathbb{T} := \mathbb{T}_\mathbb{Q}(S_k(\Gamma_1(N),\mathbb{C}))$, then $\mathbb{T}_\mathfrak{m}$ is isomorphic
to $\mathbb{Q}_f = \mathbb{Q}(a_n(f) \mid n \in \mathbb{N})$.
This follows from newform (Atkin-Lehner) theory (see \cite[\S5.6-5.8]{DS}), which implies that the Hecke
algebra on the newspace is diagonalisable, so that it is the direct product of the coefficient fields.
We include here the famous Deligne-Serre lifting lemma \cite[Lemme~6.11]{DeligneSerre}, which we can easily prove with the tools developed so far.
\begin{proposition}[Deligne-Serre lifting lemma]\label{prop:deligne-serre}
Any normalised eigenform ${\overline{f}} \in \Skone kN{\overline{\FF}_p}$ is the reduction of a normalised eigenform $f \in \Skone kN\mathbb{C}$.
\end{proposition}
\begin{proof}
Let $\mathbb{T}_\mathbb{Z} = \mathbb{T}_\mathbb{Z}(\Skone kN\mathbb{C})$.
By definition, ${\overline{f}}$ is a ring homomorphism $\mathbb{T}_\mathbb{Z} \to \overline{\FF}_p$ and its kernel is a maximal ideal $\mathfrak{m}$ of $\mathbb{T}_\mathbb{Z}$.
According to Proposition~\ref{prop:pure-one}, the Hecke algebra is pure of Krull dimension one, hence $\mathfrak{m}$
is of height~$1$, meaning that it strictly contains a minimal prime ideal~$\mathfrak{p} \subset \mathbb{T}_\mathbb{Z}$.
Let $f$ be the composition of the maps in the first line of the diagram:
$$ \xymatrix@=1cm{
\mathbb{T}_\mathbb{Z} \ar@{->>}[r]\ar@{->>}[dr]\ar@{->}[drr]^(.7){{\overline{f}}} & \mathbb{T}_\mathbb{Z}/\mathfrak{p} \ar@{^(->}[r]\ar@{->>}[d] & \overline{\ZZ} \ar@{->>}[d] \ar@{^(->}[r] &\mathbb{C}\\
& \mathbb{T}_\mathbb{Z}/\mathfrak{m} \ar@{^(->}[r] & \overline{\FF}_p}$$
where all surjections and all injections are the natural ones,
and the map $\overline{\ZZ} \twoheadrightarrow \overline{\FF}_p$ is chosen in order to make the diagram commutative.
Note that $f$ is a ring homomorphism and thus a normalised eigenform in $\Skone kN\mathbb{C}$. By the diagram, its reduction is~${\overline{f}}$.
\end{proof}
\subsection{Algorithms and Implementations: Localisation Algorithms}
Let $K$ be a perfect field, ${\overline{K}}$ an algebraic closure
and $A$ a finite dimensional commutative $K$-algebra.
In the context of Hecke algebras we would like to
compute a local decomposition of~$A$ as in Proposition~\ref{commalg}.
\subsubsection{Primary spaces}
\begin{definition}
An $A$-module $V$ which is finite dimensional as a $K$-vector space is called a {\em primary space} for $A$ if the minimal polynomial on~$V$ of every
$a \in A$ is a prime power in~$K[X]$.
\end{definition}
\begin{lemma}\label{LemDec}
\begin{enumerate}[(a)]
\item $A$ is local if and only if the minimal polynomial of~$a$ (in $K[X]$)
is a prime power for all $a \in A$.
\item Let $V$ be an $A$-module which is finite dimensional as $K$-vector space and which is a primary space for~$A$.
Then the image of $A$ in ${\rm End}_K(V)$ is a local algebra.
\item Let $V$ be an $A$-module which is finite dimensional as $K$-vector space and let $a_1, \dots, a_n$ be generators
of the algebra $A$. Suppose that for
$i \in \{1,\dots,n\}$ the minimal polynomial of $a_i$ on $V$ is a power of $(X - \lambda_i)$ in $K[X]$
for some $\lambda_i \in K$ (e.g.\ if $K = {\overline{K}}$).
Then the image of $A$ in ${\rm End}_K(V)$ is a local algebra.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) Suppose first that $A$ is local and take $a \in A$.
Let $\phi_a: K[X] \to A$ be the homomorphism of $K$-algebras defined
by sending $X$ to $a$. Let $(f)$ be the kernel with $f$ monic,
so that by definition $f$ is the minimal polynomial of~$a$.
Hence, $K[X]/(f) \hookrightarrow A$, whence $K[X]/(f)$ is local, as
it does not contain any non-trivial idempotent.
Thus, $f$ cannot have two different prime factors.
Conversely, if $A$ were not local, we would have an idempotent
$e \not \in \{0,1\}$. The minimal polynomial of $e$
is $X(X-1)$, which is not a prime power.
(b) follows directly.
For (c) one can use the following. Suppose that $(a-\lambda)^r V = 0$
and $(b - \mu)^s V = 0$. Then $((a+b) - (\lambda + \mu))^{r+s} V = 0$,
as one sees by rewriting $((a+b) - (\lambda + \mu)) = (a-\lambda)+(b-\mu)$
and expanding out. From this it also follows that $(ab - \lambda \mu)^{2(r+s)} V = 0$
by rewriting $ab - \lambda \mu = (a-\lambda)(b-\mu) + \lambda(b-\mu) + \mu(a - \lambda)$.
\end{proof}
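A matrix illustration of parts~(a) and~(b), assuming nothing beyond the lemma itself: $M = \begin{pmatrix}1&1\\0&1\end{pmatrix}$ has minimal polynomial $(X-1)^2$, a prime power, so the algebra $K[M]$ is local, and the nilpotent $M-1$ generates its maximal ideal (a Python sketch over the integers):

```python
# M has minimal polynomial (X-1)^2: N = M - 1 is non-zero with N^2 = 0,
# so K[M] is local with nilpotent maximal ideal generated by N.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[1, 1], [0, 1]]
N = [[M[i][j] - (i == j) for j in range(2)] for i in range(2)]  # M - 1
assert N != [[0, 0], [0, 0]] and mul(N, N) == [[0, 0], [0, 0]]
```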
We warn the reader that algebras such that a set of generators acts primarily
need not be local, unless they are defined over an algebraically closed
field, as we have seen in Part~(c) above. In Exercise~\ref{exnonlocal}
you are asked to find an example.
The next proposition, however, tells us that an algebra over a field
having a basis consisting of primary elements is local. I found the idea
for that proof in~\cite{Eberly}.
\begin{proposition}\label{primary}
Let $K$ be a field of characteristic~$0$ or a finite field.
Let $A$ be a finite dimensional commutative algebra over~$K$ and let
$a_1,\dots,a_n$ be a $K$-basis of~$A$ with the property that
the minimal polynomial of each~$a_i$ is a power of a prime
polynomial $p_i \in K[X]$.
Then $A$ is local.
\end{proposition}
\begin{proof}
We assume that $A$ is not local and take a decomposition
$\alpha: A \xrightarrow{\sim} \prod_{j=1}^r A_j$ with $r \ge 2$.
Let $K_j$ be the residue field of $A_j$ and consider the finite dimensional $K$-algebra $\overline{A} := \prod_{j=1}^r K_j$.
Write $\overline{a_1},\dots,\overline{a_n}$ for the images of the $a_i$ in~$\overline{A}$.
They form a $K$-basis. In order to have access to the components, also write $\overline{a_i} = (\overline{a_{i,1}},\dots,\overline{a_{i,r}})$.
Since the minimal polynomial of an element in a product is the lowest common multiple of the minimal polynomials
of the components, and since the minimal polynomial of $\overline{a_{i,j}}$ in the field $K_j$ is irreducible,
the assumption implies that, for each $i=1,\dots,n$, the minimal polynomial of
$\overline{a_{i,j}}$ is independent of~$j$ and equal to the prime polynomial $p_i \in K[X]$.
Let $N/K$ be the splitting field of the polynomials $p_1,\dots,p_n$. This means that $N$ is the normal closure of $K_j$ over~$K$
for any~$j$. As a particular case, note that $N=K_j$ for all $j$ if $K$ is a finite field since finite extensions of finite fields
are automatically normal.
Now consider the trace $\Tr_{N/K}$ and note that $\Tr_{N/K}(\overline{a_i})$ is a diagonal element in~$\overline{A}$ for all $i=1,\dots,n$
since the components $\overline{a_{i,j}}$ are roots of the same irreducible polynomial and hence have the same trace.
Consequently, $\Tr_{N/K}(\overline{a})$ is a diagonal element for all $\overline{a} \in \overline{A}$ since the $\overline{a_i}$
form a $K$-basis of~$\overline{A}$.
In order to come to a contradiction, it now suffices to produce an element the trace of which is not diagonal.
By Exercise~\ref{extrace} there is $x \in K_1$ such that $\Tr_{N/K} (x) \neq 0$.
Then the element $(x,0,\dots,0) \in \overline{A}$ clearly provides an example of an element with non-diagonal trace.
\end{proof}
\begin{lemma}\label{algfield}
Let $A$ be a local finite dimensional commutative algebra over a perfect field~$K$.
Let $a_1,\dots, a_n$ be a set of $K$-algebra generators of~$A$
such that the minimal polynomial of each $a_i$ is a prime polynomial.
Then $A$ is a field.
\end{lemma}
\begin{proof}
As the $a_i$ are simultaneously diagonalisable over a separable closure (considering the algebra as a matrix algebra)
due to their minimal polynomials being squarefree (using here the perfectness of $K$), so are sums and products of the~$a_i$.
Hence, $0$ is the only nilpotent element in~$A$. As the maximal
ideal in an Artinian local algebra is the set of nilpotent elements,
the lemma follows.
\end{proof}
\begin{proposition}\label{genmax}
Let $A$ be a local finite dimensional commutative algebra over a perfect field~$K$.
Let $a_1,\dots, a_n$ be a set of $K$-algebra generators of~$A$.
Let $p_i^{e_i}$ be the minimal polynomial of~$a_i$ (see Lemma~\ref{LemDec}).
Then the maximal ideal~$\mathfrak{m}$ of~$A$ is generated by
$\{p_1(a_1),\dots,p_n(a_n)\}$.
\end{proposition}
\begin{proof}
Let $\mathfrak{a}$ be the ideal generated by $\{p_1(a_1),\dots,p_n(a_n)\}$.
The quotient $A/\mathfrak{a}$ is generated by the images of the~$a_i$,
call them $\overline{a_i}$.
As $p_i(a_i) \in \mathfrak{a}$, it follows $p_i(\overline{a_i}) = 0$, whence
the minimal polynomial of $\overline{a_i}$ equals the prime polynomial~$p_i$.
By Lemma~\ref{algfield}, we know that $A/\mathfrak{a}$ is a field, whence $\mathfrak{a}$ is the maximal ideal.
\end{proof}
\subsubsection{Algorithm for computing common primary spaces}
It may help to think about finite dimensional commutative algebras over
a field as algebras of matrices. Then the localisation statements
of this section just mean choosing a basis such that one obtains block matrices.
By a {\em common primary space} for commuting matrices we mean a direct summand
of the underlying vector space on which the minimal polynomials of the given matrices
are prime powers. By Proposition~\ref{primary}, a common primary
space of a basis of a matrix algebra is a local factor of the algebra.
By a {\em generalised eigenspace} for commuting matrices we mean a vector
subspace of the underlying vector space on which the minimal polynomial of the given matrices
are irreducible. Allowing base changes to extension fields, the matrices
restricted to the generalised eigenspace are diagonalisable.
In this section we present a straightforward algorithm for
computing common primary spaces and common generalised eigenspaces.
\begin{algorithm}\label{algPrimary}
\underline{Input}: list {\tt ops} of commuting operators acting
on the $K$-vector space~$V$.
\underline{Output}: list of the common primary spaces inside~$V$
for all operators in {\tt ops}.
\begin{enumerate}[(1)]
\itemsep=0cm plus 0pt minus 0pt
\item List := [V];
\item for $T$ in {\tt ops} do
\item \hspace*{1cm} newList := [];
\item \hspace*{1cm} for $W$ in List do
\item \hspace*{1cm} \hspace*{1cm} compute the minimal polynomial $f \in K[X]$ of $T$
restricted to $W$.
\item \hspace*{1cm} \hspace*{1cm} factor $f$ over $K$ into its prime powers $f(X) = \prod_{i=1}^n p_i(X)^{e_i}$.
\item \hspace*{1cm} \hspace*{1cm} if $n$ equals $1$, then
\item \hspace*{1cm} \hspace*{1cm} \hspace*{1cm} append $W$ to newList,
\item \hspace*{1cm} \hspace*{1cm} else for $i := 1$ to $n$ do
\item \hspace*{1cm} \hspace*{1cm} \hspace*{1cm} compute $\widetilde{W}$ as the kernel of
$p_i(T|_W)^\alpha$ with $\alpha = {e_i}$ for common primary spaces
or $\alpha = 1$ for common generalised eigenspaces.
\item \hspace*{1cm} \hspace*{1cm} \hspace*{1cm} append $\widetilde{W}$ to newList.
\item \hspace*{1cm} \hspace*{1cm} end for; end if;
\item \hspace*{1cm} end for;
\item \hspace*{1cm} List := newList;
\item end for;
\item return List and stop.
\end{enumerate}
\end{algorithm}
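A minimal run of the algorithm for a single operator $T$ on $V=\mathbb{Q}^2$ with split squarefree minimal polynomial, so that primary spaces and generalised eigenspaces coincide (a Python sketch; the ad-hoc $2\times 2$ kernel routine stands in for general Gaussian elimination):

```python
# Algorithm for T = [[2,1],[0,3]] with minimal polynomial f = (X-2)(X-3):
# List ends up holding the kernels of T - 2 and T - 3 (here e_i = 1).
T = [[2, 1], [0, 3]]

def kernel_2x2(A):
    # kernel of a singular non-zero 2x2 matrix: spanned by one vector
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    assert a * d - b * c == 0 and (a, b, c, d) != (0, 0, 0, 0)
    return [b, -a] if (a, b) != (0, 0) else [d, -c]

List = [kernel_2x2([[T[i][j] - lam * (i == j) for j in range(2)]
                    for i in range(2)]) for lam in (2, 3)]
print(List)  # [[1, 0], [1, 1]] -- eigenvectors for 2 and 3
```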
\subsubsection{Algorithm for computing idempotents}
Using Algorithm~\ref{algPrimary} it is possible to compute a complete
set of orthogonal primitive idempotents for~$A$. We now sketch a direct algorithm.
\begin{algorithm}\label{algIdempotents}
\underline{Input}: matrix~$M$.
\underline{Output}: complete set of orthogonal primitive idempotents
for the matrix algebra generated by $M$ and $1$.
\begin{enumerate}[(1)]
\itemsep=0cm plus 0pt minus 0pt
\item compute the minimal polynomial $f$ of~$M$.
\item factor it $f = (\prod_{i=1}^n p_i^{e_i}) X^e$ over $K$ with
$p_i$ distinct irreducible polynomials different from~$X$.
\item List := [];
\item for $i = 1$ to $n$ do
\item \hspace*{1cm} $g := f / p_i^{e_i}$;
\item \hspace*{1cm} $M_1 := g(M)$. If we think about $M_1$ in block form,
then there is only one non-zero block on the diagonal; the rest is zero.
In the next steps this block is replaced by the identity.
\item \hspace*{1cm} compute the minimal polynomial $h$ of $M_1$.
\item \hspace*{1cm} strip possible factors $X$ from~$h$ and normalise $h$ so that $h(0)=1$.
\item \hspace*{1cm} append $1-h(M_1)$ to List. Note that $h(M_1)$ is the identity matrix
except at the block corresponding to $p_i$, which is zero. Thus $1-h(M_1)$
is the idempotent being zero everywhere and being the identity in the block
corresponding to~$p_i$.
\item end for;
\item if $e > 0$ then
\item \hspace*{1cm} append $1-\sum_{E \in \textnormal{List}} E$ to List.
\item end if;
\item return List and stop.
\end{enumerate}
\end{algorithm}
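One pass of the loop for $M = \mathrm{diag}(2,3)$ over $\mathbb{Q}$ and $p_1 = X-2$, carried out by hand in Python (the matrices are an illustrative choice): here $f=(X-2)(X-3)$, $g = f/p_1 = X-3$, $M_1 = M-3$, the minimal polynomial of $M_1$ is $X(X+1)$, and stripping $X$ and normalising gives $h = X+1$.

```python
# One pass of the idempotent algorithm for M = diag(2, 3) and p_1 = X - 2.
M = [[2, 0], [0, 3]]
M1 = [[M[i][j] - 3 * (i == j) for j in range(2)] for i in range(2)]   # g(M)
hM1 = [[M1[i][j] + (i == j) for j in range(2)] for i in range(2)]     # h = X+1
e1 = [[(i == j) - hM1[i][j] for j in range(2)] for i in range(2)]     # 1-h(M1)
assert e1 == [[1, 0], [0, 0]]   # the idempotent cutting out the 2-block
```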
The algorithm for computing a complete set of orthogonal primitive idempotents
for a commutative matrix algebra consists of multiplying together the
idempotents of every matrix in a basis. See Computer Exercise~\ref{cexidempotents}.
\subsection{Theoretical exercises}
\begin{exercise}
Use your knowledge on modular forms to prove that a modular
form $f = \sum_{n=0}^\infty a_n(f) q^n$ of weight $k \ge 1$ and
level $N$ (and Dirichlet character~$\chi$)
is uniquely determined by $\sum_{n=1}^\infty a_n(f) q^n$.
\end{exercise}
\begin{exercise}\label{exgaloisconjugacy}
Prove Proposition~\ref{galoisconjugacy}.
Hint: use that the kernel of a ring homomorphism into an integral domain is a prime ideal; moreover, use that
all prime ideals in the Hecke algebra in the exercise are maximal; finally, use that field
homomorphisms can be extended to separable extensions (using here that $K$ is perfect).
\end{exercise}
\begin{exercise}\label{exidempotent}
Let $\mathbb{T}$ be an Artinian ring.
\begin{enumerate}[(a)]
\item Let $\mathfrak{m}$ be a maximal ideal of~$\mathbb{T}$.
Prove that $\mathfrak{m}^\infty$ is a principal ideal generated by an
idempotent. Call it~$e_\mathfrak{m}$.
\item Prove that the idempotents $1 - e_\mathfrak{m}$ and $1 - e_\mathfrak{n}$ for different
maximal ideals $\mathfrak{m}$ and $\mathfrak{n}$ are orthogonal.
\item Prove that the set $\{1-e_\mathfrak{m} | \mathfrak{m} \in \Spec(\mathbb{T})\}$ forms a complete set
of pairwise orthogonal idempotents.
\end{enumerate}
Hint: see \cite[\S8]{AM}.
\end{exercise}
\begin{exercise}\label{excorrespondences}
Prove the correspondences and the isomorphisms from Part~(a)
of Proposition~\ref{commalg}.
Hint: you only need basic reasonings from commutative algebra.
\end{exercise}
\begin{exercise}\label{exconj}
Let $f,g \in \Mk kN\chi{\overline{\widehat{K}}}$ be normalised eigenforms that we assume to be
$\Gal({\overline{\widehat{K}}}/{\widehat{K}})$-conjugate. Prove that their reductions modulo~$p$
are $\Gal(\overline{\FF}/\mathbb{F})$-conjugate.
\end{exercise}
\begin{exercise}\label{exconjugacy}
Prove Proposition~\ref{conjugacy}.
Hint: it suffices to write out the definitions.
\end{exercise}
\begin{exercise}\label{exnonlocal}
Find a non-local algebra~$A$ over a field~$K$ (of your choice) such that~$A$
is generated as a $K$-algebra by $a_1,\dots,a_n$ having the property
that the minimal polynomial of each $a_i$ is a power of an irreducible
polynomial in~$K[X]$.
\end{exercise}
\begin{exercise}\label{extrace}
Let $K$ be a field of characteristic~$0$ or a finite field.
Let $L$ be a finite extension of~$K$ with Galois closure $N$ over~$K$.
Show that there is an element $x \in L$ with $\Tr_{N/K} (x) \neq 0$.
\end{exercise}
\begin{exercise}\label{exlowertriangular}
Let $A$ be a commutative matrix algebra over a perfect field~$K$.
Suppose that the minimal polynomial of each element of a generating
set is the power of a prime polynomial (i.e.\ it is primary).
Show that there exist base change matrices such that the base
changed algebra consists only of lower triangular matrices. You may
and you may have to extend scalars to a finite extension of~$K$.
In Computer Exercise~\ref{cexlowertriangular} you are asked to find and
implement an algorithm computing such base change matrices.
\end{exercise}
\subsection{Computer exercises}
\begin{cexercise}
Change Algorithm~\ref{algmodsymsketch}
(see Computer Exercise~\ref{cexalgmodsymsketch}) so that
it works for modular forms over a given ring~$R$.
\end{cexercise}
\begin{cexercise}\label{cexmaxid}
Let $A$ be a commutative matrix algebra over a perfect field~$K$.
\begin{enumerate}[(a)]
\item Write an algorithm to test whether $A$ is local.
\item Suppose $A$ is local. Write an algorithm to compute its
maximal ideal.
\end{enumerate}
\end{cexercise}
\begin{cexercise}
Let $A$ be a commutative algebra over a field~$K$.
The regular representation is defined as the image
of the injection
$$ A \to {\rm End}_K(A), \;\;\; a \mapsto (b \mapsto a\cdot b).$$
Write a function computing the regular representation.
\end{cexercise}
\begin{cexercise}
Implement Algorithm~\ref{algPrimary}. Also write a function that
returns the local factors as matrix algebras (possibly using
regular representations).
\end{cexercise}
\begin{cexercise}\label{cexidempotents}
\begin{enumerate}[(a)]
\item Implement Algorithm~\ref{algIdempotents}.
\item Let $S$ be a set of idempotents.
Write a function selecting a subset of~$S$ consisting of pairwise
orthogonal idempotents such that the subset spans~$S$ (all idempotents
in~$S$ can be obtained as sums of elements in the subset).
\item Write a function computing a complete set of pairwise orthogonal
idempotents for a commutative matrix algebra~$A$ over a field by multiplying
together the idempotents of the matrices in a basis and selecting a
subset as in~(b).
\item Use Computer Exercise~\ref{cexmaxid} to compute the maximal ideals
of~$A$.
\end{enumerate}
\end{cexercise}
\begin{cexercise}
Let $A$ be a commutative matrix algebra over a perfect field~$K$.
Suppose that $A$ is a field (for instance obtained as the quotient
of a local $A$ by its maximal ideal computed in Computer Exercise~\ref{cexmaxid}).
Write a function returning an irreducible polynomial~$p$
such that $A$ is $K[X]/(p)$.
If possible, the algorithm should not use factorisations of polynomials.
It is a practical realisation of Kronecker's primitive element theorem.
\end{cexercise}
\begin{cexercise}\label{cexlowertriangular}
Let $A$ be a commutative matrix algebra over a perfect field~$K$.
Suppose that the minimal polynomial of each element of a generating
set is the power of a prime polynomial (i.e.\ it is primary).
Write a function computing base change matrices such that the base
changed algebra consists only of lower triangular matrices
(cf.\ Exercise~\ref{exlowertriangular}).
\end{cexercise}
\section{Homological algebra}
In this section we provide the tools from homological algebra that will be necessary
for the modular symbols algorithm (in its group cohomological version).
A good reference is~\cite{Weibel}.
We will be sloppy about categories. When we write category below,
we really mean abelian category, since we obviously need the existence of
kernels, images, quotients etc. For what we have in mind, we should really understand the word
category not in its precise mathematical sense but as a placeholder for
$R-\mathrm{modules}$, or (co-)chain complexes of $R-\mathrm{modules}$ and other categories
from everyday life.
\subsection{Theory: Categories and Functors}
\begin{definition}
A {\em category} $\mathcal{C}$ consists of the following data:
\begin{itemize}
\item a class $\mathrm{obj}(\mathcal{C})$ of {\em objects},
\item a set ${\rm Hom}_\mathcal{C}(A,B)$ of {\em morphisms} for every ordered pair $(A,B)$
of objects,
\item an {\em identity morphism} $\mathrm{id}_A \in {\rm Hom}_\mathcal{C}(A,A)$ for every object $A$, and
\item a {\em composition function}
$$ {\rm Hom}_\mathcal{C}(A,B) \times {\rm Hom}_\mathcal{C}(B,C) \to {\rm Hom}_\mathcal{C}(A,C), \;\; (f,g) \mapsto g \circ f$$
for every ordered triple $(A,B,C)$ of objects
\end{itemize}
such that
\begin{itemize}
\item (Associativity) $(h \circ g) \circ f = h \circ (g \circ f)$ for all
$f \in {\rm Hom}_\mathcal{C}(A,B)$, $g \in {\rm Hom}_\mathcal{C}(B,C)$, $h \in {\rm Hom}_\mathcal{C} (C,D)$ and
\item (Unit Axiom) $\mathrm{id}_B \circ f = f = f \circ \mathrm{id}_A$ for $f \in {\rm Hom}_\mathcal{C}(A,B)$.
\end{itemize}
\end{definition}
\begin{example}
Examples of categories are
\begin{itemize}
\item Sets: objects are sets, morphisms are maps.
\item Let $R$ be a not necessarily commutative ring. Left $R$-modules ($R-\mathrm{modules}$): objects are $R$-modules, morphisms are $R$-module homomorphisms.
This is the category we are going to work with most of the time. Note that the category
of $\mathbb{Z}$-modules is the category of abelian groups.
\item Right $R$-modules ($\mathrm{modules}-R$): as above.
\end{itemize}
\end{example}
\begin{definition}
Let $\mathcal{C}$ and $\mathcal{D}$ be categories. A {\em covariant/contravariant functor} $F: \mathcal{C} \to \mathcal{D}$ is
\begin{itemize}
\item a rule $\mathrm{obj}(\mathcal{C}) \to \mathrm{obj}(\mathcal{D}), \; C \mapsto F(C)$ and
\item a rule
$\begin{cases}
\textnormal{covariant:} & {\rm Hom}_\mathcal{C}(C_1,C_2) \to {\rm Hom}_\mathcal{D}(F(C_1),F(C_2)), \; f \mapsto F(f)\\
\textnormal{contravariant:} & {\rm Hom}_\mathcal{C}(C_1,C_2) \to {\rm Hom}_\mathcal{D}(F(C_2),F(C_1)), \; f \mapsto F(f)\\
\end{cases}$
\end{itemize}
such that
\begin{itemize}
\item $F(\mathrm{id}_C) = \mathrm{id}_{F(C)}$ and
\item $\begin{cases}
\textnormal{covariant:} & F(g \circ f) = F(g) \circ F(f)\\
\textnormal{contravariant:} & F(g \circ f) = F(f) \circ F(g)\\
\end{cases}$
\end{itemize}
\end{definition}
\begin{example}
\begin{itemize}
\item Let $M \in \mathrm{obj}(R-\mathrm{modules})$. Define
$${\rm Hom}_R(M,\cdot): R-\mathrm{modules} \to \mathbb{Z}-\mathrm{modules}, \;\; A \mapsto {\rm Hom}_R(M,A).$$
This is a covariant functor.
\item Let $M \in \mathrm{obj}(R-\mathrm{modules})$. Define
$${\rm Hom}_R(\cdot,M): R-\mathrm{modules} \to \mathbb{Z}-\mathrm{modules}, \;\; A \mapsto {\rm Hom}_R(A,M).$$
This is a contravariant functor.
\item Let $M \in \mathrm{obj}(R-\mathrm{modules})$. Define
$$\cdot \otimes_R M: \mathrm{modules}-R \to \mathbb{Z}-\mathrm{modules}, \;\; A \mapsto A \otimes_R M.$$
This is a covariant functor.
\item Let $M \in \mathrm{obj}(\mathrm{modules}-R)$. Define
$$M \otimes_R \cdot: R-\mathrm{modules} \to \mathbb{Z}-\mathrm{modules}, \;\; A \mapsto M \otimes_R A.$$
This is a covariant functor.
\end{itemize}
\end{example}
\begin{definition}
Let $\mathcal{C}$ and $\mathcal{D}$ be categories and $F,G: \mathcal{C} \to \mathcal{D}$ be both covariant or both contra\-variant functors.
A {\em natural transformation} $\alpha: F \Rightarrow G$ is a collection of morphisms
$\alpha_C: F(C) \to G(C)$ in~$\mathcal{D}$, one for each object $C$ of~$\mathcal{C}$, such that
for all morphisms $f:C_1 \to C_2$ in~$\mathcal{C}$ the following diagram commutes:\\
\begin{tabular}{cc}
covariant: & contravariant:\\
$\xymatrix@=0.5cm{
F(C_1) \ar@{->}[r]^{F(f)} \ar@{->}[d]_{\alpha_{C_1}} & F(C_2) \ar@{->}[d]^{\alpha_{C_2}}\\
G(C_1) \ar@{->}[r]^{G(f)} & G(C_2) }$ &
$\xymatrix@=0.5cm{
F(C_1) \ar@{<-}[r]^{F(f)} \ar@{->}[d]_{\alpha_{C_1}} & F(C_2) \ar@{->}[d]_{\alpha_{C_2}}\\
G(C_1) \ar@{<-}[r]^{G(f)} & G(C_2). }$
\end{tabular}
\end{definition}
\begin{example}\label{nattrans}
Let $R$ be a not necessarily commutative ring and let $A,B \in \mathrm{obj}(R-\mathrm{modules})$
together with a morphism $A \to B$.
Then there are natural transformations
${\rm Hom}_R(B,\cdot) \Rightarrow {\rm Hom}_R(A,\cdot)$ and
${\rm Hom}_R(\cdot,A) \Rightarrow {\rm Hom}_R(\cdot,B)$ as well as
$\cdot \otimes_R A \Rightarrow \cdot \otimes_R B$ and
$A \otimes_R \cdot \Rightarrow B \otimes_R \cdot$.
\end{example}
\begin{proof}
Exercise~\ref{exnattrans}.
\end{proof}
\begin{definition}
\begin{itemize}
\item A covariant functor $F: \mathcal{C} \to \mathcal{D}$ is called {\em left-exact}, if for every exact sequence
$$ 0 \to A \to B \to C$$
the sequence
$$ 0 \to F(A) \to F(B) \to F(C)$$
is also exact.
\item A contravariant functor $F: \mathcal{C} \to \mathcal{D}$ is called {\em left-exact}, if for every exact sequence
$$ A \to B \to C \to 0$$
the sequence
$$ 0 \to F(C) \to F(B) \to F(A)$$
is also exact.
\item A covariant functor $F: \mathcal{C} \to \mathcal{D}$ is called {\em right-exact}, if for every exact sequence
$$ A \to B \to C \to 0$$
the sequence
$$ F(A) \to F(B) \to F(C) \to 0$$
is also exact.
\item A contravariant functor $F: \mathcal{C} \to \mathcal{D}$ is called {\em right-exact}, if for every exact sequence
$$ 0 \to A \to B \to C$$
the sequence
$$ F(C) \to F(B) \to F(A) \to 0$$
is also exact.
\item A covariant or contravariant functor is {\em exact} if it is both left-exact and right-exact.
\end{itemize}
\end{definition}
\begin{example}\label{HomTensor}
Both functors ${\rm Hom}_R(\cdot,M)$ and ${\rm Hom}_R(M,\cdot)$ for $M \in \mathrm{obj}(R-\mathrm{modules})$
are left-exact.
Both functors $\cdot \otimes_R M$ for $M \in \mathrm{obj}(R-\mathrm{modules})$
and $M \otimes_R \cdot$ for $M \in \mathrm{obj}(\mathrm{modules}-R)$
are right-exact.
\end{example}
\begin{proof}
Exercise~\ref{exHomTensor}.
\end{proof}
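Neither Hom functor is exact in general. For instance, applying the left-exact functor ${\rm Hom}_\mathbb{Z}(\mathbb{Z}/2\mathbb{Z},\cdot)$ to the exact sequence $0 \to \mathbb{Z} \xrightarrow{\cdot 2} \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0$ yields
$$ 0 \to 0 \to 0 \to \mathbb{Z}/2\mathbb{Z},$$
which cannot be completed to a short exact sequence by a zero on the right. This failure of exactness is precisely what the derived functor $\Ext$ (introduced below) measures.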
\begin{definition}
Let $R$ be a not necessarily commutative ring.
A left $R$-module $P$ is called {\em projective} if the functor
${\rm Hom}_R(P,\cdot)$ is exact.
A left $R$-module $I$ is called {\em injective} if the functor
${\rm Hom}_R(\cdot,I)$ is exact.
\end{definition}
\begin{lemma}\label{lemprojective}
Let $R$ be a not necessarily commutative ring and let $P$ be a left $R$-module.
Then $P$ is projective if and only if $P$ is a direct summand of some
free $R$-module.
In particular, free modules are projective.
\end{lemma}
\begin{proof}
Exercise~\ref{exprojective}.
\end{proof}
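A projective module need not be free. For example, over $R = \mathbb{Z}/6\mathbb{Z}$ the decomposition
$$ \mathbb{Z}/6\mathbb{Z} \cong \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z} $$
exhibits $\mathbb{Z}/2\mathbb{Z}$ as a direct summand of the free module~$R$, hence as projective by the lemma, although it is not free (it has $2$ elements, while any nonzero free $R$-module has at least~$6$).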
\subsection{Theory: Complexes and Cohomology}
\begin{definition}
A {\em (right) chain complex} $C_\bullet$ in the category $R-\mathrm{modules}$ is a collection of
objects $C_n \in \mathrm{obj}(R-\mathrm{modules})$ for $n \ge m$ for some $m \in \mathbb{Z}$ together with homomorphisms
$C_{n+1} \xrightarrow{\partial_{n+1}} C_n$, i.e.\
$$ \cdots \to C_{n+1} \xrightarrow{\partial_{n+1}} C_{n} \xrightarrow{\partial_{n}}
C_{n-1} \to \cdots \to C_{m+2} \xrightarrow{\partial_{m+2}} C_{m+1} \xrightarrow{\partial_{m+1}} C_m
\xrightarrow{\partial_m} 0,$$
such that
$$ \partial_{n} \circ \partial_{n+1} = 0 $$
for all $n \ge m$.
The group of {\em $n$-cycles} of this chain complex is defined as
$$ \Z_n(C_\bullet) = \ker(\partial_n).$$
The group of {\em $n$-boundaries} of this chain complex is defined as
$$ \B_n(C_\bullet) = \Image(\partial_{n+1}).$$
The {\em $n$-th homology group} of this chain complex is defined as
$$ \h_n(C_\bullet) = \ker(\partial_n)/\Image(\partial_{n+1}).$$
The chain complex $C_\bullet$ is {\em exact} if $\h_n(C_\bullet) = 0$ for all~$n$.
If $C_\bullet$ is exact and $m = -1$, one often says that $C_\bullet$ is a
{\em resolution} of~$C_{-1}$.
A {\em morphism of right chain complexes} $\phi_\bullet: C_\bullet \to D_\bullet$ is a collection of
homomorphisms $\phi_n: C_n \to D_n$ for all~$n$ such that all the diagrams
$$ \begin{CD}
C_{n+1} @>{\partial_{n+1}}>> C_n \\
@V{\phi_{n+1}}VV @V{\phi_n}VV \\
D_{n+1} @>{\partial_{n+1}}>> D_n
\end{CD} $$
are commutative.
If all $\phi_n$ are injective, we regard $C_\bullet$ as a sub-chain complex of $D_\bullet$.
If all $\phi_n$ are surjective, we regard $D_\bullet$ as a quotient complex of $C_\bullet$.
\end{definition}
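Over a field, the dimensions of the homology groups can be computed from ranks alone: by rank-nullity, $\dim_K \h_n(C_\bullet) = \dim_K C_n - \mathrm{rank}(\partial_n) - \mathrm{rank}(\partial_{n+1})$. A self-contained sketch over~$\mathbb{Q}$ (the zero map is passed as the empty matrix; the function names are ours):

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix over Q, by Gaussian elimination; M is a list of rows."""
    A = [[Fraction(x) for x in row] for row in M]
    rows = len(A)
    cols = len(A[0]) if rows else 0
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                factor = A[i][c] / A[r][c]
                A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def homology_dim(d_n, d_np1, dim_Cn):
    """dim H_n = dim ker(d_n) - rank(d_{n+1}); pass the zero map as []."""
    return dim_Cn - rank(d_n) - rank(d_np1)
```

For the simplicial circle (three vertices, three edges, $\partial_1$ the usual vertex-difference map) this gives $\dim \h_0 = \dim \h_1 = 1$, as expected.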
\begin{definition}
A {\em (right) cochain complex} $C^\bullet$ in the category $R-\mathrm{modules}$ is a collection of
objects $C^n \in \mathrm{obj}(R-\mathrm{modules})$ for $n \ge m$ for some $m \in \mathbb{Z}$ together with homomorphisms
$C^n \xrightarrow{\partial^{n+1}} C^{n+1}$, i.e.\
$$ 0 \xrightarrow{\partial^m} C^m \xrightarrow{\partial^{m+1}} C^{m+1}
\xrightarrow{\partial^{m+2}} C^{m+2} \to \cdots \to
C^{n-1} \xrightarrow{\partial^{n}} C^{n} \xrightarrow{\partial^{n+1}} C^{n+1} \to \cdots,$$
such that
$$ \partial^{n+1} \circ \partial^{n} = 0 $$
for all $n \ge m$.
The group of {\em $n$-cocycles} of this cochain complex is defined as
$$ \Z^n(C^\bullet) = \ker(\partial^{n+1}).$$
The group of {\em $n$-coboundaries} of this cochain complex is defined as
$$ \B^n(C^\bullet) = \Image(\partial^{n}).$$
The {\em $n$-th cohomology group} of this cochain complex is defined as
$$ \h^n(C^\bullet) = \ker(\partial^{n+1})/\Image(\partial^{n}).$$
The cochain complex $C^\bullet$ is {\em exact} if $\h^n(C^\bullet) = 0$ for all~$n$.
If $C^\bullet$ is exact and $m = -1$, one often says that $C^\bullet$ is a
{\em resolution} of~$C^{-1}$.
A {\em morphism of right cochain complexes} $\phi^\bullet: C^\bullet \to D^\bullet$ is a collection of
homomorphisms $\phi^n: C^n \to D^n$ for all~$n$ such that all the diagrams
$$ \begin{CD}
C^n @>{\partial^{n+1}}>> C^{n+1} \\
@V{\phi^{n}}VV @V{\phi^{n+1}}VV \\
D^n @>{\partial^{n+1}}>> D^{n+1}
\end{CD} $$
are commutative.
If all $\phi^n$ are injective, we regard $C^\bullet$ as a sub-cochain complex of $D^\bullet$.
If all $\phi^n$ are surjective, we regard $D^\bullet$ as a quotient complex of $C^\bullet$.
\end{definition}
In Exercise~\ref{excomplexes} you are asked to define kernels, cokernels and images
of morphisms of cochain complexes and to show that morphisms of cochain complexes
induce natural maps on the cohomology groups. In fact, cochain complexes of $R$-modules
form an abelian category.
\subsection*{Example: standard resolution of a group}
Let $G$ be a group and $R$ a commutative ring.
Write $G^n$ for the $n$-fold direct product $G \times \dots \times G$ and
equip $R[G^n]$ with the diagonal $R[G]$-action.
We describe the {\em standard resolution} $F(G)_\bullet$ of
$R$ by free $R[G]$-modules:
$$ \xymatrix@=1cm{
0 \ar@{<-}[r] \ar@{<-}@/^1pc/[rr]^{\partial_0}
& R \ar@{<-}[r]^(.4){\epsilon}
& F(G)_0 := R[G] \ar@{<-}[r]^{\partial_1}
& F(G)_1 := R[G^2] \ar@{<-}[r]^(.7){\partial_2}
& \cdots,}$$
where we put (the ``hat'' means that we leave out that element):
$$ \partial_n := \sum_{i=0}^{n} (-1)^i d_i \;\;\text{ and } \;\;
d_i (g_0, \dots, g_n) := (g_0, \dots, \hat{g_i}, \dots, g_n).$$
The map $\epsilon$ is the usual augmentation map defined by sending
$g \in G$ to~$1 \in R$.
By `standard resolution' we refer to the straight maps. We have included
the bent arrow $\partial_0$, which is $0$ by definition, because it will be needed
in the definition of group cohomology (Definition~\ref{defi:gp-coh}).
In Exercise~\ref{exstandardresolution} you are asked to check that
the standard resolution is indeed a resolution, i.e.\ that the above
complex is exact.
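The key step in Exercise~\ref{exstandardresolution} is the simplicial identity
$$ d_i \circ d_j = d_{j-1} \circ d_i \;\;\text{ for } i < j,$$
which one checks directly on tuples. Splitting the double sum in $\partial_n \circ \partial_{n+1} = \sum_{i,j} (-1)^{i+j}\, d_i \circ d_j$ according to $i < j$ and $i \ge j$ then makes all terms cancel in pairs, so that the standard resolution is at least a complex. For the exactness one can write down an $R$-linear contracting homotopy, for instance $(g_0,\dots,g_n) \mapsto (1,g_0,\dots,g_n)$.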
\subsection*{Example: bar resolution of a group}
We continue to treat the standard resolution of~$R$ by $R[G]$-modules, but we will write
it differently. \cite{Weibel} calls the following the
{\em unnormalised bar resolution} of~$G$. We shall simply say {\em bar resolution}.
If we let $h_r := g_{r-1}^{-1} g_r$, then we get the identity
$$(g_0, g_1, g_2, \dots, g_n) = g_0 . (1, h_1, h_1 h_2, \dots, h_1 h_2 \dots h_n ) =:
g_0 . [h_1|h_2| \dots |h_n].$$
The symbols $[h_1|h_2| \dots |h_n]$
with arbitrary $h_i \in G$ hence form an $R[G]$-basis of~$F(G)_n$, and
one has $F(G)_n = R[G] \otimes_R (\text{free $R$-module on } [h_1|h_2| \dots |h_n])$.
One computes the action of $\partial_n$ on this basis and gets
$\partial_n = \sum_{i=0}^n (-1)^i d_i$ where
$$
d_i([h_1| \dots |h_n]) =
\begin{cases}
h_1 \text{[} h_2| \dots |h_n \text{]} & i = 0 \\
\text{[} h_1| \dots |h_i h_{i+1}| \dots |h_n \text{]} & 0 < i < n \\
\text{[}h_1| \dots |h_{n-1}\text{]} & i = n.
\end{cases} $$
We will from now on, if confusion is unlikely, simply write $(h_1,\dots,h_n)$
instead of $[h_1|\dots|h_n]$.
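The face maps above can be checked mechanically. The following sketch implements the bar differential with trivial coefficients (i.e.\ after applying the augmentation, so that the $i=0$ face simply drops~$h_1$) and lets one verify $\partial \circ \partial = 0$ on examples; the function is ours and not part of any library:

```python
from collections import defaultdict

def boundary(chain, mul):
    """Bar differential with trivial coefficients.

    chain maps bar symbols, i.e. tuples (h1, ..., hn), to integers;
    mul is the group law of G."""
    out = defaultdict(int)
    for tup, coeff in chain.items():
        n = len(tup)
        for i in range(n + 1):
            if i == 0:
                face = tup[1:]          # augmented d_0: drop h1
            elif i == n:
                face = tup[:-1]         # d_n: drop hn
            else:                       # d_i: merge h_i and h_{i+1}
                face = tup[:i - 1] + (mul(tup[i - 1], tup[i]),) + tup[i + 1:]
            out[face] += (-1) ** i * coeff
    return {t: c for t, c in out.items() if c != 0}
```

For $G = \mathbb{Z}/6\mathbb{Z}$ written additively, for example, applying \texttt{boundary} twice to any symbol returns the empty sum.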
\subsection*{Example: resolution of a cyclic group}\label{seccyclic}
Let $G=\langle T \rangle$ be an infinite cyclic group (i.e.\ a group isomorphic
to $(\mathbb{Z},+)$). Here is a very simple resolution of~$R$ by free $R[G]$-modules:
\begin{equation}\label{eq:res-cyclic-infinite}
0 \to R[G] \xrightarrow{T-1} R[G] \xrightarrow{\epsilon} R \to 0.
\end{equation}
Let now $G=\langle \sigma \rangle$ be a finite cyclic group of order~$n$ and let $N_\sigma := \sum_{i=0}^{n-1} \sigma^i$.
Here is a resolution of~$R$ by free $R[G]$-modules:
\begin{equation}\label{eq:res-cyclic-finite}
\cdots \to R[G] \xrightarrow{N_\sigma} R[G] \xrightarrow{1-\sigma} R[G]
\xrightarrow{N_\sigma} R[G] \xrightarrow{1-\sigma} R[G] \xrightarrow{\epsilon} R \to 0.
\end{equation}
In Exercise~\ref{excyclic} you are asked to verify the exactness of these two sequences.
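That the sequence~\eqref{eq:res-cyclic-finite} is at least a complex amounts to the group-ring identities $(1-\sigma)N_\sigma = N_\sigma(1-\sigma) = 0$ and $\epsilon(1-\sigma) = 0$; the exactness itself is the content of the exercise. These identities can be checked numerically by representing $\mathbb{Z}[G]$, for $G$ cyclic of order~$n$, by coefficient lists with cyclic convolution as multiplication (a sketch in our own notation):

```python
def cyc_mul(a, b):
    """Product in Z[G] for G = <sigma> cyclic of order n; index i <-> sigma^i."""
    n = len(a)
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % n] += ai * bj
    return c

n = 4
N = [1] * n                                # norm element N_sigma
one_minus_sigma = [1, -1] + [0] * (n - 2)  # 1 - sigma
```

Both products $(1-\sigma)N_\sigma$ and $N_\sigma(1-\sigma)$ come out as the zero element, and the augmentation (the sum of the coefficients) kills $1-\sigma$.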
\subsection*{Group cohomology}
A standard reference for group cohomology is~\cite{Brown}.
\begin{definition}\label{defi:gp-coh}
Let $R$ be a ring, $G$ a group and $M$ a left $R[G]$-module.
Recall that $F(G)_\bullet$ denotes the standard resolution of~$R$ by
free $R[G]$-modules.
\begin{enumerate}[(a)]
\item Let $M$ be a left $R[G]$-module.
When we apply the functor ${\rm Hom}_{R[G]}(\cdot,M)$ to the standard
resolution $F(G)_\bullet$ cut off at~$0$ (i.e.\ $F(G)_1 \xrightarrow{\partial_1} F(G)_0
\xrightarrow{\partial_0} 0$), we get the cochain complex ${\rm Hom}_{R[G]}(F(G)_\bullet,M)$:
$$ \cdots \to {\rm Hom}_{R[G]}(F(G)_{n-1},M) \xrightarrow{\partial^{n}} {\rm Hom}_{R[G]}(F(G)_{n},M)
\xrightarrow{\partial^{n+1}} {\rm Hom}_{R[G]}(F(G)_{n+1},M) \to \cdots .$$
Define the $n$-th cohomology group of~$G$ with values in the $G$-module~$M$ as
$$ \h^n(G,M) := \h^n({\rm Hom}_{R[G]}(F(G)_\bullet,M)).$$
\item Let $M$ be a right $R[G]$-module.
When we apply the functor $M \otimes_{R[G]} \cdot $ to the standard
resolution $F(G)_\bullet$ cut off at~$0$ we get the chain complex
$M \otimes_{R[G]} F(G)_\bullet$ :
$$ \cdots \to M \otimes_{R[G]} F(G)_{n+1} \xrightarrow{\partial_{n+1}} M \otimes_{R[G]} F(G)_{n}
\xrightarrow{\partial_{n}} M \otimes_{R[G]} F(G)_{n-1} \to \cdots .$$
Define the $n$-th homology group of~$G$ with values in the $G$-module~$M$ as
$$ \h_n(G,M) := \h_n(M \otimes_{R[G]} F(G)_\bullet).$$
\end{enumerate}
\end{definition}
In this lecture we shall only use group cohomology.
As a motivation for looking at group cohomology in this lecture, we
can already point out that
$$ \h^1(\Gamma_1(N),V_{k-2}(R)) \cong \mathcal{M}_k(\Gamma_1(N),R),$$
provided that $6$ is invertible in~$R$ (see Theorem~\ref{compthm}).
The reader is invited to compute explicit descriptions of $\h^0$, $\h_0$ and $\h^1$
in Exercise~\ref{exsmallh}.
\subsection{Theory: Cohomological Techniques}
The cohomology of groups fits into a general machinery, namely that of derived
functor cohomology. Derived functors are universal cohomological $\delta$-functors
and many properties of them can be derived in a purely formal way from
the universality. What this means will be explained in this section. We omit all
proofs.
\begin{definition}
Let $\mathcal{C}$ and $\mathcal{D}$ be (abelian) categories (for instance, $\mathcal{C}$ the right cochain complexes
of $R-\mathrm{modules}$ and $\mathcal{D} = R-\mathrm{modules}$).
A {\em positive covariant cohomological $\delta$-functor}
between $\mathcal{C}$ and $\mathcal{D}$ is a collection
of functors $\h^n: \mathcal{C} \to \mathcal{D}$ for $n \ge 0$ together with {\em connecting morphisms}
$$\delta^n: \h^{n} (C) \to \h^{n+1} (A)$$
which are defined for every short exact sequence
$0 \to A \to B \to C \to 0$ in~$\mathcal{C}$
such that the following hold:
\begin{enumerate}[(a)]
\item (Positivity) $\h^n$ is the zero functor if $n < 0$.
\item
For every short exact sequence
$0 \to A \to B \to C \to 0$ in~$\mathcal{C}$ there is the {\em long exact sequence} in~$\mathcal{D}$:
$$ \cdots \to \h^{n-1}(C) \xrightarrow{\delta^{n-1}} \h^n (A) \to \h^n(B) \to \h^n(C)
\xrightarrow{\delta^{n}} \h^{n+1} (A) \to \cdots,$$
where the maps $\h^n (A) \to \h^n(B) \to \h^n(C)$ are those that are induced from
the homomorphisms in the exact sequence $0 \to A \to B \to C \to 0$.
\item
For every commutative diagram in~$\mathcal{C}$
$$ \begin{CD}
0 @>>> A @>>> B @>>> C @>>> 0 \\
&& @VfVV @VgVV @VhVV \\
0 @>>> A' @>>> B' @>>> C' @>>> 0
\end{CD} $$
with exact rows the following diagram in $\mathcal{D}$ commutes, too:
$$ \begin{CD}
\h^{n-1} (C) @>{\delta^{n-1}}>> \h^{n} (A) @>>> \h^{n} (B) @>>> \h^{n} (C) @>{\delta^{n}}>> \h^{n+1} (A) \\
@V{\h^{n-1} (h)}VV @V{\h^{n} (f)}VV @V{\h^{n} (g)}VV @V{\h^{n} (h)}VV @V{\h^{n+1} (f)}VV\\
\h^{n-1} (C') @>{\delta^{n-1}}>> \h^{n} (A') @>>> \h^{n} (B') @>>> \h^{n} (C') @>{\delta^{n}}>> \h^{n+1} (A')
\end{CD} $$
\end{enumerate}
\end{definition}
\begin{theorem}\label{thm:deltafun}
Let $R$ be a ring (not necessarily commutative).
Let $\mathcal{C}$ stand for the category of cochain complexes of left $R$-modules.
Then the cohomology functors
$$ \h^n: \mathcal{C} \to R-\mathrm{modules}, \;\;\; C^\bullet \mapsto \h^n(C^\bullet)$$
form a cohomological $\delta$-functor.
\end{theorem}
\begin{proof}
This theorem is proved by some `diagram chasing' starting from the
snake lemma. See Chapter~1 of~\cite{Weibel} for details.
\end{proof}
It is not difficult to conclude that group cohomology also forms
a cohomological $\delta$-functor.
\begin{proposition}\label{gpcohfunctor}
Let $R$ be a commutative ring and $G$ a group.
\begin{enumerate}[(a)]
\item The functor from $R[G]-\mathrm{modules}$ to cochain complexes of $R[G]-\mathrm{modules}$
which associates to a left $R[G]$-module~$M$ the cochain complex
${\rm Hom}_{R[G]} (F(G)_\bullet,M)$ with $F(G)_\bullet$ the bar resolution of $R$
by free $R[G]$-modules is exact, i.e.\ it takes an exact sequence
$0 \to A \to B \to C \to 0$ of $R[G]$-modules to the exact sequence
$$0 \to {\rm Hom}_{R[G]}(F(G)_\bullet,A) \to {\rm Hom}_{R[G]}(F(G)_\bullet,B)
\to {\rm Hom}_{R[G]}(F(G)_\bullet,C) \to 0$$
of cochain complexes.
\item The functors
$$ \h^n(G,\cdot): R[G]-\mathrm{modules} \to R-\mathrm{modules}, \;\;\; M \mapsto \h^n(G,M)$$
form a positive cohomological $\delta$-functor.
\end{enumerate}
\end{proposition}
\begin{proof}
Exercise~\ref{exgpcohfunctor}.
\end{proof}
We will now come to universal $\delta$-functors. Important examples of such
(among them group cohomology)
are obtained from injective resolutions. Although the following discussion
is valid in any abelian category (with enough injectives), we restrict to
$R-\mathrm{modules}$ for a not necessarily commutative ring~$R$.
\begin{definition}
Let $R$ be a not necessarily commutative ring and let $M \in \mathrm{obj}(R-\mathrm{modules})$.
A {\em projective resolution} of~$M$ is a resolution
$$ \cdots \to P_2 \xrightarrow{\partial_2} P_1 \xrightarrow{\partial_1} P_0 \to M \to 0,$$
i.e.\ an exact chain complex, in which all the $P_n$ for $n \ge 0$ are projective
$R$-modules.
An {\em injective resolution} of~$M$ is a resolution
$$ 0 \to M \to I^0 \xrightarrow{\partial^1} I^1 \xrightarrow{\partial^2} I^2 \to \cdots,$$
i.e.\ an exact cochain complex, in which all the $I^n$ for $n \ge 0$ are injective
$R$-modules.
\end{definition}
We state the following lemma as a fact. It is easy for projective resolutions
and requires work for injective ones (see e.g.\ \cite{Eisenbud}).
\begin{lemma}
Injective and projective resolutions exist in the category of $R$-modules,
where $R$ is any ring (not necessarily commutative).
\end{lemma}
Note that applying a left exact covariant functor $\mathcal{F}$ to an injective resolution
$$0 \to M \to I^0 \to I^1 \to I^2 \to \cdots$$
of~$M$ gives rise to a cochain complex
$$ 0 \to \mathcal{F}(M) \to \mathcal{F}(I^0) \to \mathcal{F}(I^1) \to \mathcal{F}(I^2) \to \cdots,$$
of which only the part $0 \to \mathcal{F}(M) \to \mathcal{F}(I^0) \to \mathcal{F}(I^1)$ need be exact.
This means that the $\h^0$ of the (cut off at~$0$) cochain complex
$\mathcal{F}(I^0) \to \mathcal{F}(I^1) \to \mathcal{F}(I^2) \to \cdots$ is equal to $\mathcal{F}(M)$.
\begin{definition}
Let $R$ be a not necessarily commutative ring.
\begin{enumerate}[(a)]
\item Let $\mathcal{F}$ be a left exact covariant functor
on the category of $R$-modules (mapping for instance to $\mathbb{Z}-\mathrm{modules}$).
The {\em right derived functors} $R^n\mathcal{F}(\cdot)$ of~$\mathcal{F}$ are the functors
on the category of $R-\mathrm{modules}$ defined as follows.
For $M \in \mathrm{obj}(R-\mathrm{modules})$ choose an injective resolution
$0 \to M \to I^0 \to I^1 \to \cdots$ and let
$$ R^n\mathcal{F}(M) := \h^n \big(\mathcal{F}(I^0) \to \mathcal{F}(I^1) \to \mathcal{F}(I^2) \to \cdots\big).$$
\item Let $\mathcal{G}$ be a left exact contravariant functor
on the category of $R$-modules.
The {\em right derived functors} $R^n\mathcal{G}(\cdot)$ of~$\mathcal{G}$ are the functors
on the category of $R-\mathrm{modules}$ defined as follows.
For $M \in \mathrm{obj}(R-\mathrm{modules})$ choose a projective resolution
$\cdots \to P_1 \to P_0 \to M \to 0$ and let
$$ R^n\mathcal{G}(M) := \h^n \big(\mathcal{G}(P_0) \to \mathcal{G}(P_1) \to \mathcal{G}(P_2) \to \cdots\big).$$
\end{enumerate}
\end{definition}
We state the following lemma without proof. It is a simple consequence of
the injectivity (respectively, projectivity) of the modules in the resolution.
\begin{lemma}
The right derived functors do not depend on the choice of the resolution
and they form a cohomological $\delta$-functor.
\end{lemma}
Of course, one can also define left derived functors of right exact functors.
An important example is the $\Tor$-functor which is obtained by deriving the
tensor product functor in a way dual to~$\Ext$ (see below).
As already mentioned, the importance of right and left derived functors comes
from their universality.
\begin{definition}\label{defi:deltafunctor}
\begin{enumerate}[(a)]
\item
Let $(\h^n)_n$ and $(T^n)_n$ be cohomological $\delta$-functors.
A {\em morphism of cohomological $\delta$-functors} is a collection
of natural transformations $\eta^n: \h^n \Rightarrow T^n$ that commute
with the connecting homomorphisms~$\delta$, i.e.\
for every short exact sequence $0 \to A \to B \to C \to 0$ and every~$n$
the diagram
$$ \begin{CD}
\h^n(C) @>{\delta}>> \h^{n+1}(A) \\
@V{\eta_C^n}VV @V{\eta_A^{n+1}}VV \\
T^n(C) @>{\delta}>> T^{n+1}(A)
\end{CD} $$
commutes.
\item The cohomological $\delta$-functor $(\h^n)_n$ is {\em universal}
if for every cohomological $\delta$-functor $(T^n)_n$
and every natural transformation $\eta^0: \h^0(\cdot) \Rightarrow T^0(\cdot)$
there is a unique natural transformation $\eta^n: \h^n(\cdot) \Rightarrow T^n(\cdot)$
for all~$n\ge 1$ such that the $\eta^n$ form a morphism of
cohomological $\delta$-functors between $(\h^n)_n$ and $(T^n)_n$.
\end{enumerate}
\end{definition}
For the proof of the following central result we refer to \cite{Weibel}, Chapter~2.
\begin{theorem}
Let $R$ be a not necessarily commutative ring and
let $\mathcal{F}$ be a left exact covariant or contravariant functor
on the category of $R$-modules (mapping for instance to $\mathbb{Z}-\mathrm{modules}$).
The {\em right derived functors} $(R^n\mathcal{F}(\cdot))_n$ of~$\mathcal{F}$
form a \underline{universal} cohomological $\delta$-functor.
\end{theorem}
\begin{example}
\begin{enumerate}[(a)]
\item Let $R$ be a commutative ring and $G$ a group.
The functor
$$(\cdot)^G: R[G]-\mathrm{modules} \to R-\mathrm{modules}, \;\;\; M \mapsto M^G$$
is left exact and covariant, hence we can form its right derived functors
$R^n(\cdot)^G$. Since we have the special case $(R^0(\cdot)^G)(M) = M^G$,
universality gives a morphism of cohomological $\delta$-functors
$R^n(\cdot)^G \Rightarrow H^n(G,\cdot)$. We shall see that this is an isomorphism.
\item Let $R$ be a not necessarily commutative ring. We have seen that
the functors ${\rm Hom}_R(\cdot,M)$ and ${\rm Hom}_R(M,\cdot)$ are left exact.
We write
$$ \Ext_R^n (\cdot,M) := R^n {\rm Hom}_R(\cdot,M) \;\;\; \text{ and }\;\;\;
\Ext_R^n (M,\cdot) := R^n {\rm Hom}_R(M,\cdot).$$
See Theorem~\ref{extbalanced} below.
\item Many cohomology theories in (algebraic) geometry are also
of a right derived functor nature. For instance, let $X$ be a topological
space and consider the category of sheaves of abelian groups on~$X$.
The global sections functor $\mathcal{F} \mapsto \mathcal{F}(X) = \h^0(X,\mathcal{F})$ is left exact
and its right derived functors $R^n(\h^0(X,\cdot))$ can be formed.
They are usually denoted by $\h^n(X,\cdot)$ and they define `sheaf cohomology'
on~$X$. Etale cohomology is an elaboration of this based on a generalisation
of topological spaces.
\end{enumerate}
\end{example}
\subsection*{Universal properties of group cohomology}
\begin{theorem}\label{extbalanced}
Let $R$ be a not necessarily commutative ring.
The $\Ext$-functor is {\em balanced}. This means that for any two
$R$-modules $M,N$ there are isomorphisms
$$ (\Ext^n_R (\cdot,N))(M) \cong (\Ext^n_R (M,\cdot))(N) =: \Ext_R^n(M,N).$$
\end{theorem}
\begin{proof}
\cite{Weibel}, Theorem~2.7.6.
\end{proof}
\begin{corollary}\label{gpcoh}
Let $R$ be a commutative ring and $G$ a group. For every $R[G]$-module~$M$
there are isomorphisms
$$ H^n(G,M) \cong \Ext_{R[G]}^n(R,M) \cong (R^n(\cdot)^G)(M)$$
and the functors $(H^n(G,\cdot))_n$ form a universal cohomological $\delta$-functor.
Moreover, apart from the standard resolution of $R$ by free $R[G]$-modules,
any resolution of $R$ by projective $R[G]$-modules may be used to
compute $H^n(G,M)$.
\end{corollary}
\begin{proof}
We may compute
$\Ext_{R[G]}^n(\cdot,M) (R)$ by any resolution of~$R$ by projective $R[G]$-modules.
Our standard resolution is such a resolution, since any free module is projective.
Hence, $H^n(G,M) \cong \Ext_{R[G]}^n(\cdot,M)(R)$.
The key is now that $\Ext$ is balanced (Theorem~\ref{extbalanced}), since it
gives $H^n(G,M) \cong \Ext_{R[G]}^n(R,\cdot)(M) \cong R^n(\cdot)^G(M) \cong \Ext^n_{R[G]}(R,M)$.
As the $\Ext$-functor is universal (being a right derived functor), also
$H^n(G,\cdot)$ is universal. For the last statement we recall that
right derived functors do not depend on the chosen projective
(respectively, injective) resolution.
\end{proof}
You are invited to look at Exercise~\ref{exfree} now.
\subsection{Theory: Generalities on Group Cohomology}
We now apply the universality of the $\delta$-functor
of group cohomology. Let $\phi: H \to G$ be a group
homomorphism and $A$ an $R[G]$-module.
Via~$\phi$ we may
consider $A$ also as an $R[H]$-module and
$\mathrm{res}^0: \h^0(G,\cdot) \to \h^0(H,\cdot)$ is a natural
transformation. By the universality of $\h^\bullet(G,\cdot)$ we get
natural transformations
$$\mathrm{res}^n: \h^n(G,\cdot) \to \h^n(H,\cdot).$$
These maps are called {\em restrictions}. See Exercise~\ref{excochains}
for a description in terms of cochains. Very often
$\phi$ is just the embedding map of a subgroup.
Assume now that $H$ is a normal subgroup of~$G$ and $A$
is an $R[G]$-module. Then we can consider
$\phi: G \to G/H$ and the restriction above gives
natural transformations
$\mathrm{res}^n: \h^n(G/H,(\cdot)^H) \to \h^n(G,(\cdot)^H)$.
We define the {\em inflation maps} to be
$$\mathrm{infl}^n: \h^n(G/H,A^H) \xrightarrow{\mathrm{res}^n} \h^n(G,A^H)
\longrightarrow \h^n(G,A)$$
where the last arrow is induced from the natural inclusion $A^H \hookrightarrow A$.
Under the same assumptions, conjugation by~$g \in G$ preserves~$H$
and we have the isomorphism $H^0(H,A) = A^H \xrightarrow{a \mapsto ga} A^H = H^0(H,A)$.
Hence by universality we obtain natural maps $H^n(H,A) \to H^n(H,A)$ for every
$g \in G$. One even gets an $R[G]$-action on $\h^n(H,A)$.
As $h \in H$ is clearly the identity on $\h^0(H,A)$, the above action
is in fact also an $R[G/H]$-action.
Let now $H \le G$ be a subgroup of finite index.
Then the norm $N_{G/H} := \sum_i g_i \in R[G]$, where $\{g_i\}$ is
a system of representatives of $G/H$, gives a natural
transformation
$\mathrm{cores}^0: \h^0(H,\cdot) \to \h^0(G,\cdot)$
where $\cdot$ is an $R[G]$-module. By universality
we obtain
$$\mathrm{cores}^n: \h^n(H,\cdot) \to \h^n(G,\cdot),$$
the {\em corestriction (transfer)} maps.
The inflation map, the $R[G/H]$-action and the corestriction
can be explicitly described in terms of cochains of the bar resolution
(see Exercise~\ref{excochains}).
It is clear that $\mathrm{cores}^0 \circ \mathrm{res}^0$ is multiplication
by the index $(G:H)$. By universality, also $\mathrm{cores}^n \circ \mathrm{res}^n$ is multiplication
by the index $(G:H)$.
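In degree~$0$ this is a one-line computation: for $m \in \h^0(G,M) = M^G$ one has $\mathrm{res}^0(m) = m$ and
$$ \mathrm{cores}^0(\mathrm{res}^0(m)) = \sum_i g_i\, m = (G:H)\, m,$$
since every representative $g_i$ acts trivially on~$M^G$.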
Hence we have proved the first part of the following proposition.
\begin{proposition}\label{corres}
\begin{enumerate}[(a)]
\item Let $H < G$ be a subgroup of finite index $(G:H)$.
For all $i$ and all $R[G]$-modules $M$ one has the equality
$$ \mathrm{cores}_H^G \circ \mathrm{res}_H^G = (G:H) $$
on all $\h^i(G,M)$.
\item Let $G$ be a finite group of order~$n$ and $R$
a ring in which $n$ is invertible. Then
$\h^i(G,M) = 0$ for all $i\ge 1$ and all $R[G]$-modules~$M$.
\end{enumerate}
\end{proposition}
\begin{proof}
Part~(b) is an easy consequence with $H=1$, since
$$ \h^i(G,M) \xrightarrow{\mathrm{res}_H^G} \h^i(1,M) \xrightarrow{\mathrm{cores}_H^G} \h^i(G,M) $$
is the zero map (as $\h^i(1,M)=0$ for $i\ge 1$), but it also is multiplication by~$n$.
\end{proof}
The following exact sequence turns out to be very important for
our purposes.
\begin{theorem}[Hochschild-Serre]\label{thm:hochschild-serre}
Let $H \le G$ be a normal subgroup and $A$ an $R[G]$-module.
There is the exact sequence:
$$0 \to \h^1(G/H,A^H) \xrightarrow{\mathrm{infl}} \h^1(G,A) \xrightarrow{\mathrm{res}}
\h^1(H,A)^{G/H} \to \h^2(G/H,A^H) \xrightarrow{\mathrm{infl}} \h^2(G,A).$$
\end{theorem}
\begin{proof}
We only sketch the proof for those who know spectral sequences.
It is, however, possible to verify the exactness on cochains
explicitly (after having defined the missing map appropriately).
Grothendieck's theorem on spectral sequences (\cite{Weibel}, 6.8.2)
associates to the composition of functors
$$ (A \mapsto A^H \mapsto (A^H)^{G/H}) = (A \mapsto A^G)$$
the spectral sequence
$$ E^{p,q}_2: H^p(G/H,H^q(H,A)) \Rightarrow H^{p+q}(G,A).$$
The statement of the theorem is then just the $5$-term sequence that
one can associate with every spectral sequence of this type.
\end{proof}
\subsection*{Coinduced modules and Shapiro's Lemma}
Let $H < G$ be a subgroup and $A$ be a left $R[H]$-module.
The $R[G]$-module
$${\rm Coind}_H^G (A) := {\rm Hom}_{R[H]} (R[G], A)$$
is called the {\em coinduction} or the {\em coinduced module}
from $H$ to $G$ of $A$.
We make ${\rm Coind}_H^G (A)$ into a left $R[G]$-module by
$$ (g.\phi)(g') = \phi(g'g) \;\; \forall\, g,g' \in G,\, \phi \in {\rm Hom}_{R[H]} (R[G], A).$$
\begin{proposition}[Shapiro's Lemma]\label{shapiro}
For all $n \ge 0$, the map
$$ \mathrm{Sh}: \h^n (G, {\rm Coind}_H^G (A)) \to \h^n (H, A),$$
given on cochains by
$$ c \mapsto \big((h_1,\dots,h_n) \mapsto (c(h_1,\dots,h_n))(1_G)\big),$$
is an isomorphism.
\end{proposition}
\begin{proof}
Exercise~\ref{exshapiro}.
\end{proof}
\subsection*{Mackey's formula and stabilisers}
If $H \le G$ are groups and $V$ is an $R[G]$-module, we denote by
${\rm Res}_H^G(V)$ the module~$V$ considered as an $R[H]$-module if we want to stress that the module is
obtained by restriction. In later sections, we will often silently restrict modules to subgroups.
\begin{proposition}\label{propmackey}
Let $R$ be a ring, $G$ be a group and $H,K$ subgroups of~$G$.
Let furthermore $V$ be an $R[H]$-module. {\em Mackey's formula} is the isomorphism
$$ {\rm Res}_K^G {\rm Coind}_H^G V \cong \prod_{g \in H\backslash G / K}
{\rm Coind}_{K\cap g^{-1}Hg}^K {}^g ({\rm Res}^H_{H\cap gKg^{-1}} V).$$
Here ${}^g ({\rm Res}^H_{H\cap gKg^{-1}} V)$ denotes the
$R[K \cap g^{-1}Hg]$-module obtained from $V$ via the
conjugated action $g^{-1}hg ._g v := h. v$ for $v \in V$
and $h \in H$ such that $g^{-1}hg \in K$.
\end{proposition}
\begin{proof}
We consider the commutative diagram
$$ \xymatrix@=0.5cm{
{\rm Res}_K^G {\rm Hom}_{R[H]}(R[G],V) \ar@{->}[r] \ar@{->}[dr] &
\prod_{g \in H\backslash G / K}
{\rm Hom}_{R[K\cap g^{-1}Hg]}(R[K], {}^g ({\rm Res}^H_{H\cap gKg^{-1}} V)) \ar@{->}^\sim[d] \\
&\prod_{g \in H\backslash G / K}
{\rm Hom}_{R[H\cap gKg^{-1}]}(R[gKg^{-1}], {\rm Res}^H_{H\cap gKg^{-1}} V).}$$
The vertical arrow is just given by conjugation and is clearly
an isomorphism.
The diagonal map is the product of the natural restrictions.
From the bijection
$$ \big(H \cap gKg^{-1}\big) \backslash gKg^{-1}
\xrightarrow{gkg^{-1} \mapsto Hgk} H \backslash HgK$$
it is clear that also the diagonal map is an isomorphism,
proving the proposition.
\end{proof}
From Shapiro's Lemma \ref{shapiro} we directly get the following.
\begin{corollary}\label{cormackey}
In the situation of Proposition~\ref{propmackey} one has
\begin{align*}
\h^i(K,{\rm Coind}_H^G V)
& \cong \prod_{g \in H\backslash G / K}
\h^i(K \cap g^{-1}Hg, {}^g ({\rm Res}^H_{H\cap gKg^{-1}} V)) \\
& \cong \prod_{g \in H\backslash G / K}
\h^i(H \cap gKg^{-1}, {\rm Res}^H_{H \cap gKg^{-1}} V)
\end{align*}
for all $i \in \mathbb{N}$.
\end{corollary}
\subsection{Theoretical exercises}
\begin{exercise}\label{exnattrans}
Check the statements made in Example~\ref{nattrans}.
\end{exercise}
\begin{exercise}\label{exHomTensor}
Verify the statements of Example~\ref{HomTensor}.
\end{exercise}
\begin{exercise}\label{exprojective}
Prove Lemma~\ref{lemprojective}.
Hint: take a free $R$-module $F$ which surjects onto~$P$, {\em i.e.} $\pi: F \twoheadrightarrow P$,
and use the definition of $P$ being projective to show that the surjection admits a split $s: P \to F$, meaning that
$\pi \circ s$ is the identity on~$P$. This is then equivalent to the assertion.
\end{exercise}
\begin{exercise}\label{excomplexes}
Let $\phi^\bullet: C^\bullet \to D^\bullet$ be a morphism of cochain complexes.
\begin{enumerate}[(a)]
\item Show that $\ker(\phi^\bullet)$ is a cochain complex and is a subcomplex of $C^\bullet$ in a
natural way.
\item Show that $\Image(\phi^\bullet)$ is a cochain complex and is a subcomplex of $D^\bullet$ in a
natural way.
\item Show that $\coker(\phi^\bullet)$ is a cochain complex and is a quotient of $D^\bullet$ in a
natural way.
\item Show that $\phi^\bullet$ induces homomorphisms
$\h^n(C^\bullet) \xrightarrow{\h^n(\phi^\bullet)} \h^n(D^\bullet)$
for all $n \in \mathbb{N}$.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exstandardresolution}
Check the exactness of the standard resolution of a group~$G$.
\end{exercise}
\begin{exercise}\label{excyclic}
Check the exactness of the resolutions \eqref{eq:res-cyclic-infinite} and \eqref{eq:res-cyclic-finite} for an infinite and a finite cyclic group,
respectively.
\end{exercise}
\begin{exercise}\label{exsmallh}
Let $R$, $G$, $M$ be as in the definition of group (co-)homology.
\begin{enumerate}[(a)]
\item Prove $\h^0(G,M) \cong M^G$, the $G$-invariants of~$M$.
\item Prove $\h_0(G,M) \cong M_G$, the $G$-coinvariants of~$M$.
\item Prove the explicit descriptions:
\begin{align*}
\Z^1(G,M) &= \{ f: G \to M \text{ map } |\; f(gh) = g.f(h) + f(g) \; \forall g,h \in G \},\\
\B^1(G,M) &= \{ f: G \to M \text{ map } |\; \exists m \in M : f(g) = (1-g)m \; \forall g \in G\},\\
\h^1(G,M) &= \Z^1(G,M) / \B^1(G,M).
\end{align*}
In particular, if the action of $G$ on~$M$ is trivial, the boundaries $\B^1(G,M)$ are zero,
and one has:
$$ \h^1(G,M) = {\rm Hom}_{\textnormal{group}}(G,M).$$
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exgpcohfunctor}
Prove Proposition~\ref{gpcohfunctor}.
Hint: for (a), use that free modules are projective.
(b) follows from (a) together with Theorem~\ref{thm:deltafun} or, alternatively, by direct calculation.
See also \cite[III.6.1]{Brown}.
\end{exercise}
\begin{exercise}\label{exfree}
Let $R$ be a commutative ring.
\begin{enumerate}[(a)]
\item
Let $G= \langle T \rangle$ be a free cyclic group and $M$ any $R[G]$-module. Prove
$$\h^0(G, M) = M^G, \;\;\;\h^1(G, M) = M / (1-T)M \;\;\;\text{ and }\;\;\; \h^i(G,M)= 0$$
for all $i \ge 2$.
\item For a finite cyclic group~$G = \langle \sigma \rangle$ of order~$n$ and any $R[G]$-module~$M$ prove that
\begin{align*}
\h^0(G,M) &\cong M^G, &\h^1(G, M) &\cong \{m \in M \;|\; N_\sigma m = 0\} / (1-\sigma)M, \\
\h^2(G, M) &\cong M^G/N_\sigma M, &\h^i(G,M) &\cong \h^{i+2}(G,M) \textnormal{ for all $i \ge 1$.}
\end{align*}
\end{enumerate}
\end{exercise}
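The explicit formulas of part~(b) are easy to check by machine in small cases. The following Python sketch (the helper name \texttt{cohomology\_orders} is mine, not from these notes) computes the orders of $\h^1$ and $\h^2$ for a cyclic group $C_n = \langle\sigma\rangle$ acting on $\mathbb{Z}/m$ through multiplication by a unit:

```python
# Toy check of the formulas of Exercise (b): for G = C_n = <sigma> acting on
# M = Z/m via x -> u*x with u^n = 1, the formulas give
#   H^1 = ker(N_sigma) / im(1 - sigma),  H^2 = ker(1 - sigma) / im(N_sigma);
# the inclusions im <= ker hold since N_sigma (1 - sigma) = 0 in Z[G].

def cohomology_orders(m, u, n):
    """Return (|H^1|, |H^2|) for C_n acting on Z/m by x -> u*x."""
    M = range(m)
    norm = sum(pow(u, k, m) for k in range(n)) % m   # N_sigma as a scalar
    ker_norm = [x for x in M if (norm * x) % m == 0]
    im_one_minus = {((1 - u) * x) % m for x in M}
    ker_one_minus = [x for x in M if ((1 - u) * x) % m == 0]
    im_norm = {(norm * x) % m for x in M}
    return len(ker_norm) // len(im_one_minus), len(ker_one_minus) // len(im_norm)

# sigma acting as -1 on Z/8: here N_sigma = 0 and 1 - sigma acts as 2,
# so both cohomology groups have order 2.
assert cohomology_orders(8, -1, 2) == (2, 2)
```

For the trivial action ($u=1$) the sketch recovers $|\h^1| = |{\rm Hom}(C_n,\mathbb{Z}/m)| = \gcd(n,m)$, in accordance with Exercise~\ref{exsmallh}.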
\begin{exercise}\label{excochains}
Let $R$ be a commutative ring.
\begin{enumerate}[(a)]
\item Let $\phi: H \to G$ be a group homomorphism and $A$ an $R[G]$-module.
Prove that the restriction maps $\mathrm{res}^n: H^n(G,A) \to H^n(H,A)$
are given in terms of cochains of the bar resolution by composing
the cochains by~$\phi$.
\item Let $H$ be a normal subgroup of~$G$.
Describe the inflation maps in terms of cochains of the bar resolution.
\item Let $H$ be a normal subgroup of~$G$ and $A$ an $R[G]$-module.
Describe the $R[G/H]$-action on $H^n(H,A)$ in terms of cochains of the bar resolution.
\item Let now $H \le G$ be a subgroup of finite index.
Describe the corestriction maps in terms of cochains of the bar resolution.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exshapiro}
Prove Shapiro's lemma, i.e.\ Proposition~\ref{shapiro}.
Hint: see \cite[(6.3.2)]{Weibel} for an abstract proof; see also \cite[III.6.2]{Brown} for the underlying map.
\end{exercise}
\section{Cohomology of $\mathrm{PSL}_2(\mathbb{Z})$}
In this section, we shall calculate the cohomology of the group $\mathrm{PSL}_2(\mathbb{Z})$ and important properties thereof.
This will be at the basis of our treatment of Manin symbols in the following section.
The key in this is the description of $\mathrm{PSL}_2(\mathbb{Z})$ as a free product of two cyclic groups.
\subsection{Theory: The standard fundamental domain for $\mathrm{PSL}_2(\mathbb{Z})$}
We define the matrices of $\mathrm{SL}_2(\mathbb{Z})$
$$\sigma := \mat 0{-1}10, \;\;\; \tau := \mat {-1}1{-1}0,\;\;\;
T := \mat 1101 = \tau \sigma.$$
By the definition of the action of $\mathrm{SL}_2(\mathbb{Z})$ on $\mathbb{H}$ in equation~\ref{eq:moebius}, we have for all $z \in \mathbb{H}$:
$$\sigma. z = \frac{-1}{z}, \;\;\; \tau. z = 1-\frac{1}{z},\;\;\;
T.z = z+1.$$
These matrices have the following conceptual meaning:
$$ \langle \pm \sigma \rangle = \Stab_{\mathrm{SL}_2(\mathbb{Z})} (i), \;
\langle \pm \tau \rangle = \Stab_{\mathrm{SL}_2(\mathbb{Z})} (\zeta_6)\; \text{ and } \;
\langle \pm T \rangle = \Stab_{\mathrm{SL}_2(\mathbb{Z})} (\infty)$$
with $\zeta_6 = e^{2\pi i/6}$.
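These identities are quickly verified by machine. The following Python sketch (the helper names \texttt{mul} and \texttt{moebius} are mine) checks $T = \tau\sigma$, $\sigma^2=-1$, $\tau^3=1$ and the stabiliser statements numerically:

```python
# Verification of the matrix identities above.  A matrix is a pair of rows
# ((a, b), (c, d)); helper names are mine, not from these notes.
import cmath

def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def moebius(M, z):
    (a, b), (c, d) = M
    return (a*z + b) / (c*z + d)

SIGMA = ((0, -1), (1, 0))
TAU = ((-1, 1), (-1, 0))
T = ((1, 1), (0, 1))

assert mul(TAU, SIGMA) == T                          # T = tau * sigma
assert mul(SIGMA, SIGMA) == ((-1, 0), (0, -1))       # sigma^2 = -1: order 2 in PSL_2(Z)
assert mul(TAU, mul(TAU, TAU)) == ((1, 0), (0, 1))   # tau^3 = 1: order 3

zeta6 = cmath.exp(2j * cmath.pi / 6)
assert abs(moebius(SIGMA, 1j) - 1j) < 1e-12          # sigma fixes i
assert abs(moebius(TAU, zeta6) - zeta6) < 1e-12      # tau fixes zeta_6
assert abs(moebius(T, 1j) - (1j + 1)) < 1e-12        # T.z = z + 1
```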
From now on we will often represent
classes of matrices in $\mathrm{PSL}_2(\mathbb{Z})$ by matrices in $\mathrm{SL}_2(\mathbb{Z})$.
The orders of $\sigma$ and $\tau$ in $\mathrm{PSL}_2(\mathbb{Z})$ are $2$ and~$3$,
respectively. These statements are checked by calculation.
Exercise~\ref{exfiniteorder} is recommended at this point.
Even though in this section our interest concerns the full group $\mathrm{SL}_2(\mathbb{Z})$,
we give the definition of fundamental domain for general subgroups of $\mathrm{SL}_2(\mathbb{Z})$ of finite index.
\begin{definition}\label{defi:fd}
Let $\Gamma \le \mathrm{SL}_2(\mathbb{Z})$ be a subgroup of finite index.
A {\em fundamental domain} for the action of~$\Gamma$ on~$\mathbb{H}$
is a subset $\mathcal{F} \subset \mathbb{H}$ such that the following hold:
\begin{enumerate}[(i)]
\item $\mathcal{F}$ is open.
\item For every $z \in \mathbb{H}$, there is $\gamma \in \Gamma$ such that $\gamma.z \in \overline{\mathcal{F}}$.
\item If $\gamma.z \in \mathcal{F}$ for $z \in \mathcal{F}$ and $\gamma \in \Gamma$, then one has $\gamma = \pm \mat 1001$.
\end{enumerate}
\end{definition}
In other words, a fundamental domain is an open set, which is small enough not to contain any two points that
are equivalent under the operation by~$\Gamma$, and which is big enough that every point in the upper half plane
is equivalent to some point in the closure of the fundamental domain.
\begin{proposition}\label{prop:fd}
The set
$$ \mathcal{F} := \{ z \in \mathbb{H} \;| \; |z| > 1 \textnormal{ and } -\frac{1}{2} < \Real(z) < \frac{1}{2} \}$$
is a fundamental domain for the action of $\mathrm{SL}_2(\mathbb{Z})$ on~$\mathbb{H}$.
\end{proposition}
It is clear that $\mathcal{F}$ is open. For~(ii), we use the following lemma.
\begin{lemma}\label{lem:fd}
Let $z \in \mathbb{H}$. The orbit $\mathrm{SL}_2(\mathbb{Z}).z$ contains a point $\gamma.z$
with maximal imaginary part (geometrically also called `height'), {\em i.e.}
$$ \Imag (\gamma.z) \ge \Imag(g.z) \;\; \forall g \in \mathrm{SL}_2(\mathbb{Z}).$$
A point $z \in \mathbb{H}$ is of maximal height
if and only if $|c z + d| \ge 1$ for all coprime $c,d \in \mathbb{Z}$.
\end{lemma}
\begin{proof}
We have the simple formula $\Imag (\gamma.z) = \frac{\Imag(z)}{|cz+d|^2}$.
It implies
$$ \Imag (z) \le \Imag(\gamma.z) \Leftrightarrow |cz + d| \le 1.$$
For fixed $z = x+iy$ with $x,y \in \mathbb{R}$, consider the inequality
$$1 \ge |cz + d|^2 = (cx+d)^2 + c^2y^2.$$
As $y > 0$, this inequality has only finitely many solutions $c,d \in \mathbb{Z}$; note that $(c,d)=(0,1)$ is always among them.
Among these finitely many solutions, we may choose a coprime pair $(c,d)$ with minimal $|cz + d|$.
Due to the coprimeness, there are $a,b \in \mathbb{Z}$ such that the matrix $M := \mat abcd$ belongs to $\mathrm{SL}_2(\mathbb{Z})$.
It is now clear that $M.z$ has maximal height.
\end{proof}
We next use a simple trick to show~(ii) in Definition~\ref{defi:fd} for $\mathcal{F}$.
Let $z \in \mathbb{H}$. By Lemma~\ref{lem:fd}, we choose $\gamma \in \mathrm{SL}_2(\mathbb{Z})$ such that $\gamma.z$ has maximal height.
We now `transport' $\gamma.z$ via an appropriate translation $T^n$ in such a way that
$-1/2 \le \Real(T^n \gamma . z) < 1/2$. The height is obviously left invariant.
Now we have $|T^n \gamma . z| \ge 1$, because otherwise the height of
$T^n \gamma . z$ would not be maximal: if $|T^n \gamma . z| < 1$ (i.e.\ $|cz+d| < 1$ for $z = T^n\gamma.z$ and $(c,d)=(1,0)$), then applying $\sigma$ (corresponding to reflection on the unit circle)
would make the height strictly bigger.
More precisely, we have the following result.
\begin{lemma}\label{lem:fd2}
Every point of maximal height in $\mathbb{H}$ can be translated into the closure of the fundamental domain~$\overline{\mathcal{F}}$.
Conversely, $\overline{\mathcal{F}}$ only contains points of maximal height.
\end{lemma}
\begin{proof}
The first part was proved in the preceding discussion. The second one follows from the calculation
\begin{multline}\label{eq:fd2}
|cz+d|^2 = (cx+d)^2+c^2y^2 = c^2 |z|^2 + 2cdx + d^2 \\
\ge c^2|z|^2 - |cd| + d^2 \ge c^2 - |cd| + d^2 \ge (|c|-|d|)^2 + |cd| \ge 1
\end{multline}
for all coprime integers $c,d$ and $z=x+iy \in \mathbb{H}$ with $x,y \in \mathbb{R}$.
\end{proof}
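The inequality $|cz+d| \ge 1$ for $z \in \overline{\mathcal{F}}$ and coprime $c,d$ can also be spot-checked numerically. The following Python sketch is my own code, with an arbitrary search bound for the pairs $(c,d)$:

```python
# Numerical spot-check of Lemma (fd2): points in the closure of F have
# maximal height, i.e. |cz + d| >= 1 for all coprime c, d.
from math import gcd, sqrt

def check_maximal_height(z, bound=10):
    for c in range(-bound, bound + 1):
        for d in range(-bound, bound + 1):
            if gcd(c, d) == 1:
                assert abs(c * z + d) >= 1 - 1e-12

for x in (-0.5, -0.25, 0.0, 0.3, 0.5):
    check_maximal_height(complex(x, sqrt(1 - x * x)))  # on the arc |z| = 1
    check_maximal_height(complex(x, 2.0))              # higher up in F-bar
```

Equality $|cz+d| = 1$ occurs only on the boundary, e.g.\ for $z = i$ with $(c,d) = (1,0)$.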
\begin{proof}[End of the proof of Proposition~\ref{prop:fd}.]
Let $z \in \mathcal{F}$ and $\gamma := \mat abcd \in \mathrm{SL}_2(\mathbb{Z})$ such that $\gamma.z \in \mathcal{F}$.
By Lemma~\ref{lem:fd2}, $z$ and $\gamma.z$ both have maximal height, whence $|cz+d| = 1$.
Hence the inequalities in equation~\ref{eq:fd2} are equalities, implying $c=0$.
Thus, $\gamma = \pm T^n$ for some $n \in \mathbb{Z}$. But only $n=0$ is compatible with the
assumption $\gamma.z \in \mathcal{F}$. This proves (iii) in Definition~\ref{defi:fd} for $\mathcal{F}$.
\end{proof}
\begin{proposition}\label{prop:gensl2z}
The group $\mathrm{SL}_2(\mathbb{Z})$ is generated by the matrices $\sigma$ and~$\tau$.
\end{proposition}
\begin{proof}
Let $\Gamma := \langle \sigma,\tau \rangle$ be the subgroup of $\mathrm{SL}_2(\mathbb{Z})$ generated by $\sigma$ and~$\tau$.
We prove that for any $z \in \mathbb{H}$ there is $\gamma \in \Gamma$ such that $\gamma.z \in \overline{\mathcal{F}}$.
For that, note that the orbit $\Gamma.z$ contains a point $\gamma.z$ with $\gamma \in \Gamma$ of maximal height: it is a subset of $\mathrm{SL}_2(\mathbb{Z}).z$,
and we have seen in the proof of Lemma~\ref{lem:fd} that only finitely many points of the latter orbit exceed any given height.
As $\Gamma$ contains $T=\tau \sigma$, we can translate $\gamma.z$ so as to have real part in between $-\frac{1}{2}$ and $\frac{1}{2}$.
As $\Gamma$ also contains $\sigma$, the absolute value of the new point has to be at least~$1$, because otherwise applying $\sigma$ would make the height bigger.
In order to conclude, choose any point $z \in \mathcal{F}$ and let $M \in \mathrm{SL}_2(\mathbb{Z})$.
We consider the point $M.z$ and `transport' it back into $\mathcal{F}$ via a matrix $\gamma \in \Gamma$.
We thus have $(\gamma M).z \in \mathcal{F}$. As $\mathcal{F}$ is a fundamental domain for $\mathrm{SL}_2(\mathbb{Z})$, it follows $\gamma M = \pm 1$,
showing $M \in \Gamma$.
\end{proof}
An alternative algorithmic proof is provided in Algorithm~\ref{algpsl} below.
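The reduction procedure used in the proofs above (translate by $T^n$ into the strip $-\tfrac12 \le \Real z < \tfrac12$, then apply $\sigma$ while $|z| < 1$) can be sketched as follows. This is my own floating-point illustration, not the Algorithm~\ref{algpsl} referenced above; it also records the generators that were applied:

```python
# Sketch of the reduction into the closure of F: each application of sigma
# strictly increases the height, so the loop terminates.
import math

def reduce_to_fundamental_domain(z, max_steps=1000):
    """Return (w, word): w in the closure of F, and the list of applied
    generators, 'S' for sigma and ('T', n) for T^n, in order of application."""
    word = []
    for _ in range(max_steps):
        n = -math.floor(z.real + 0.5)        # now -1/2 <= Re(z + n) < 1/2
        z += n
        if n != 0:
            word.append(('T', n))
        if abs(z) < 1 - 1e-12:
            z = -1 / z                        # reflection on the unit circle
            word.append('S')
        else:
            return z, word
    raise RuntimeError("no convergence (floating-point issue)")

w, word = reduce_to_fundamental_domain(0.37 + 0.01j)
assert abs(w) >= 1 - 1e-9 and -0.5 - 1e-9 <= w.real < 0.5 + 1e-9
```

Reading the recorded word backwards expresses the reducing matrix as a product in $\sigma$ and $T = \tau\sigma$, in the spirit of Proposition~\ref{prop:gensl2z}.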
\subsection{Theory: $\mathrm{PSL}_2(\mathbb{Z})$ as a free product}
We now apply the knowledge about the (existence of the) fundamental domain for $\mathrm{PSL}_2(\mathbb{Z})$
to derive that $\mathrm{PSL}_2(\mathbb{Z})$ is a free product.
\begin{definition}
Let $G$ and $H$ be two groups. The {\em free product $G * H$}
of $G$ and $H$ is the group having as elements
all the possible {\em words}, i.e.\ sequences of symbols,
$a_1 a_2 \dots a_n$ with $a_i \in G-\{1\}$ or $a_i \in H-\{1\}$ such that
elements from $G$ and $H$ alternate (i.e.\ if $a_i \in G$,
then $a_{i+1} \in H$ and vice versa) together with the empty word, which
we denote by~$1$.
The integer~$n$ is called the {\em length} of the group element (word)
$w=a_1 a_2 \dots a_n$ and denoted by~$l(w)$. We put $l(1) = 0$ for the
empty word.
The group operation in $G*H$ is concatenation of words followed by `reduction' (in order to obtain
a new word obeying the rules). The reduction rules are: for all words $v,w$, all $g_1,g_2 \in G$ and all $h_1,h_2 \in H$:
\begin{itemize}
\item $v1w=vw$,
\item $vg_1g_2w=v(g_1g_2)w$ (i.e.\ the multiplication of $g_1$ and $g_2$ in $G$ is carried out),
\item $vh_1h_2w=v(h_1h_2)w$ (i.e.\ the multiplication of $h_1$ and $h_2$ in $H$ is carried out).
\end{itemize}
\end{definition}
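The reduction rules can be implemented directly. The following Python sketch (representation and helper names are mine) models words in $C_2 * C_3 = \langle\sigma\rangle * \langle\tau\rangle$ as lists of syllables $(\text{letter}, \text{exponent})$ and reduces them:

```python
# Word reduction in the free product C_2 * C_3: letter 's' has order 2,
# letter 't' has order 3; a reduced word alternates between the two letters.
ORDER = {'s': 2, 't': 3}

def reduce_word(word):
    out = []
    for letter, exp in word:
        exp %= ORDER[letter]
        if exp == 0:
            continue                          # rule: v 1 w = v w
        if out and out[-1][0] == letter:      # rule: merge within one factor
            merged = (out[-1][1] + exp) % ORDER[letter]
            out.pop()
            if merged:
                out.append((letter, merged))
            # out always alternates between 's' and 't', so after a full
            # cancellation comparing with the new tail suffices
        else:
            out.append((letter, exp))
    return out

def concat(v, w):
    """Group operation in the free product: concatenate, then reduce."""
    return reduce_word(v + w)

# (sigma tau) * (tau^2 sigma) = sigma tau^3 sigma = sigma^2 = 1:
assert concat([('s', 1), ('t', 1)], [('t', 2), ('s', 1)]) == []
```

By Theorem~\ref{thm:freepr} below, this data structure is a faithful model of $\mathrm{PSL}_2(\mathbb{Z})$.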
In Exercise~\ref{exfreegroup} you are asked to verify that $G*H$
is indeed a group and to prove a universal property.
Alternatively, if $G$ is given by the set of generators $\mathcal{G}_G$ together with relations $\mathcal{R}_G$ and similarly for the group~$H$,
then the free product $G * H$ can be described as the group generated by $\mathcal{G}_G \cup \mathcal{G}_H$ with relations $\mathcal{R}_G \cup \mathcal{R}_H$.
\begin{theorem}\label{thm:freepr}
Let $\mathcal{P}$ be the free product $\langle \sigma \rangle * \langle \tau \rangle$ of
the cyclic groups $\langle \sigma \rangle$ of order~$2$ and $\langle \tau \rangle$ of order~$3$.
Then $\mathcal{P}$ is isomorphic to~$\mathrm{PSL}_2(\mathbb{Z})$.
In particular, as an abstract group, $\mathrm{PSL}_2(\mathbb{Z})$ can be represented by
generators and relations as $\langle \sigma, \tau \, | \, \sigma^2=\tau^3=1 \rangle$.
\end{theorem}
In the proof, we will need the following statement, which we separate because it is entirely computational.
\begin{lemma}\label{lem:freepr}
Let $\gamma \in \mathcal{P}$ be $1$ or any word starting in~$\sigma$ on the left, {\em i.e.} $\sigma \tau^{e_1} \sigma \tau^{e_2} \dots$.
Then $\Imag(\tau^2 \gamma.i) < 1$.
\end{lemma}
\begin{proof}
For $\gamma=1$, the statement is clear. Suppose $\gamma = \sigma \tau^{e_1} \sigma \tau^{e_2} \sigma \dots \tau^{e_{r-1}} \sigma \tau^{e_r}$
with $r \ge 0$, $e_i \in \{1,2\}$ for $i=1,\dots,r$. We prove more generally
$$ \Imag(\tau^2\gamma.i)=\Imag(\tau^2(\gamma\sigma).i) > \Imag(\tau^2 (\gamma \sigma) \tau^e.i) = \Imag(\tau^2 (\gamma \sigma) \tau^e \sigma.i)$$
for any $e=1,2$. This means that extending the word to the right by $\sigma \tau^e$, the imaginary part goes strictly down for both $e=1,2$.
We first do some matrix calculations.
Let us say that an integer matrix $\mat abcd$ satisfies (*) if $(c+d)^2> \max(c^2,d^2)$.
The matrix $\tau^2\sigma = \mat {-1}0{-1}{-1}$ clearly satisfies~(*).
Let us assume that $\gamma =\mat abcd$ satisfies~(*). We show that
$\gamma \tau \sigma = \mat ** c {c+d}$ and $\gamma \tau^2 \sigma = \mat ** {-c-d} {-d}$ also satisfy~(*).
The first one follows once we know $(2c+d)^2 > \max(c^2,(c+d)^2)$. This can be seen like this:
$$ (2c+d)^2 = (c^2+2cd) + 2c^2 + (c+d)^2 > 2c^2+(c+d)^2 \ge \max(c^2,(c+d)^2),$$
where we used that (*) implies $(c+d)^2>d^2$ and, thus, $c^2+2cd>0$. The second inequality is obtained by
exchanging the roles of $c$ and~$d$.
We thus see that $\tau^2\gamma = \mat abcd$ satisfies~(*) for all words $\gamma$ starting and ending in~$\sigma$.
Finally, we have for all such $\gamma$:
\begin{align*}
\Imag(\tau^2\gamma. i) &= \frac{1}{|ci+d|^2} = \frac{1}{c^2+d^2},\\
\Imag(\tau^2 \gamma \tau. i) &= \Imag(\tau^2\gamma. (i+1))
= \frac{1}{|c(i+1)+d|^2} = \frac{1}{(c+d)^2+c^2},\\
\Imag(\tau^2 \gamma \tau^2. i) &= \Imag\big(\tau^2\gamma. \tfrac{1+i}{2}\big)
= \frac{1/2}{|c(i/2+1/2)+d|^2} = \frac{2}{(c+2d)^2+c^2}.
\end{align*}
Now (*) implies the desired inequalities of the imaginary parts.
\end{proof}
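The strict decrease of the height established in the proof can be spot-checked numerically. The following Python sketch (my own code) builds random words $\sigma\tau^{e_1}\sigma\tau^{e_2}\cdots$ and verifies that $\Imag(\tau^2\gamma.i)$ stays below~$1$ and drops with every extension by $\sigma\tau^e$:

```python
# Numerical spot-check of the lemma on heights of tau^2 gamma . i.
import random

SIGMA = ((0, -1), (1, 0))
TAU = ((-1, 1), (-1, 0))

def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def act(M, z):
    (a, b), (c, d) = M
    return (a*z + b) / (c*z + d)

TAU2 = mul(TAU, TAU)
random.seed(1)
for _ in range(50):
    gamma = ((1, 0), (0, 1))
    prev = act(TAU2, 1j).imag                 # = 1/2 for gamma = 1
    for _ in range(random.randint(1, 8)):
        e = random.choice([1, 2])
        gamma = mul(gamma, mul(SIGMA, TAU if e == 1 else TAU2))
        h = act(mul(TAU2, gamma), 1j).imag
        assert h < prev < 1                   # strict decrease, always < 1
        prev = h
```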
\begin{proof}[Proof of Theorem~\ref{thm:freepr}]
As $\mathrm{SL}_2(\mathbb{Z})$ is generated by $\sigma$ and~$\tau$ due to Proposition~\ref{prop:gensl2z},
the universal property of the free product
gives us a surjection of groups $\mathcal{P} \twoheadrightarrow \mathrm{PSL}_2(\mathbb{Z})$.
Let $B$ be the geodesic path from $\zeta_6$ to~$i$, i.e.\ the arc
between $\zeta_6$ and~$i$ in positive orientation (counterclockwise)
on the circle of radius~$1$ around the origin, lying entirely in the closure $\overline{\mathcal{F}}$ of the
standard fundamental domain from Proposition~\ref{prop:fd}.
Define the map
$$ \mathrm{PSL}_2(\mathbb{Z}) \xrightarrow{\phi} \{\textnormal{Paths in }\mathbb{H}\}$$
which sends $\gamma \in \mathrm{PSL}_2(\mathbb{Z})$ to $\gamma.B$, i.e.\ the image of~$B$
under~$\gamma$.
The proof of the theorem is finished by showing that the composite
$$ \mathcal{P} \twoheadrightarrow \mathrm{PSL}_2(\mathbb{Z}) \xrightarrow{\phi} \{\textnormal{Paths in }\mathbb{H}\}$$
is injective, as then the first map must be an isomorphism.
This composition is injective because its image is a tree, that is, a graph without cycles.
By drawing the image, one quickly convinces oneself of this.
We, however, give a formal argument, which can also be nicely visualised on the geometric realisation of the graph
as going down further and further in every step.
In order to prepare for the proof, let us first suppose that $\gamma_1.B$ and $\gamma_2.B$ for some $\gamma_1,\gamma_2 \in \mathrm{PSL}_2(\mathbb{Z})$
meet in a point which is not the endpoint of either of the two paths. Then $\gamma.B$ intersects $B$ in some interior point for
$\gamma := \gamma_1^{-1}\gamma_2$. This intersection point lies on the boundary of the fundamental domain $\mathcal{F}$.
Consequently, by (iii) in Definition~\ref{defi:fd}, $\gamma = \pm 1$ and $\gamma_1.B = \gamma_2.B$.
This implies that if $\Imag(\gamma_1.i) \neq \Imag(\gamma_2.i)$ where $i=\sqrt{-1}$,
then $\gamma_1.B$ and $\gamma_2.B$ do not meet in any interior point and are thus distinct paths.
It is obvious that $B, \sigma.B, \tau.B$ are distinct paths. They share the property that their point
that is conjugate to~$i$ has imaginary part~$1$ (in fact, the points conjugate to~$i$ in the paths are $i$, $i$, $i+1$,
respectively).
By Lemma~\ref{lem:freepr}, for $\gamma$ equal to~$1$ or any word in $\mathcal{P}$ starting with~$\sigma$ on the left,
we obtain that $\tau^2\gamma.B$ is distinct from $B, \sigma.B, \tau.B$ because it lies `lower'.
In particular, $\tau^2\gamma.B \neq B$.
As $\tau^2\gamma.B \neq \tau.B$, we also find $\tau \gamma.B \neq B$.
Finally, if $\gamma.B = B$ and $\gamma = \sigma\tau^e \gamma'$ with $e\in \{1,2\}$ and $\gamma'$ starting in $\sigma$ or $\gamma'=1$,
then $\tau^e\gamma'.B = \sigma.B$, which has already been excluded.
We have thus found that for any non-trivial word $\gamma\in \mathcal{P}$, the conjugate $\gamma.B$ is distinct from~$B$.
This proves the desired injectivity.
\end{proof}
\subsection{Theory: Mayer-Vietoris for $\mathrm{PSL}_2(\mathbb{Z})$}
Motivated by the description $\mathrm{PSL}_2(\mathbb{Z}) = C_2 * C_3$, we
now consider the cohomology of a group~$G$
which is the free product of two finite groups $G_1$ and $G_2$, i.e.\ $G = G_1 * G_2$.
\begin{proposition}\label{mvbieri}
Let $R$ be a commutative ring. The sequence
$$0 \to R[G] \xrightarrow{\alpha} R[G/G_1] \oplus R[G/G_2] \xrightarrow{\epsilon} R \to 0$$
with $\alpha(g) = (gG_1,-gG_2)$ and $\epsilon(gG_1,0)=1=\epsilon(0,gG_2)$
is exact.
\end{proposition}
\begin{proof}
This proof is an even more elementary version of an elementary proof
that I found in \cite{Bieri}.
Clearly, $\epsilon$ is surjective and also $\epsilon \circ \alpha = 0$.
Next we compute exactness at the centre.
We first claim that for every element $g \in G$ we have
$$ g -1 = \sum_j \alpha_j g_j (h_j-1) \in R[G/G_1]$$
for certain $\alpha_j \in R$ and certain $g_j \in G$, $h_j \in G_2$
and analogously with the roles of $G_1$ and $G_2$ exchanged.
To see this, we write $g= a_1 a_2 \dots a_n$ with $a_i$ alternatingly in $G_1$ and $G_2$ (we do not need the
uniqueness of this expression). If $n=1$, there is nothing to do.
If $n > 1$, we have
$$a_1 a_2 \dots a_n - 1 = a_1 a_2 \dots a_{n-1} (a_n - 1) + (a_1 a_2 \dots a_{n-1} - 1)$$
and we obtain the claim by induction.
Consequently, we have for all $\lambda = \sum_i r_i g_i G_1$ and
all $\mu = \sum_k \tilde{r}_k \tilde{g}_k G_2$
with $r_i, \tilde{r}_k \in R$ and $g_i, \tilde{g}_k \in G$
$$\lambda - \sum_i r_i 1_G G_1= \sum_j \alpha_j g_j (h_j-1) \in R[G/G_1]$$
and
$$\mu - \sum_k \tilde{r}_k 1_G G_2
= \sum_l \tilde{\alpha}_l \tilde{g}_l (\tilde{h}_l-1) \in R[G/G_2]$$
for certain $\alpha_j, \tilde{\alpha}_l \in R$,
certain $g_j, \tilde{g}_l \in G$ and certain $h_j \in G_2$, $\tilde{h}_l \in G_1$.
Suppose now that with $\lambda$ and $\mu$ as above we have
$$\epsilon(\lambda, \mu)= \sum_i r_i + \sum_k \tilde{r}_k= 0.$$
Then we directly get
$$ \alpha\big(\sum_j \alpha_j g_j (h_j-1)
- \sum_l \tilde{\alpha}_l \tilde{g}_l (\tilde{h}_l-1)
+ \sum_i r_i 1_G\big)= (\lambda, \mu)$$
and hence the exactness at the centre.
It remains to prove that $\alpha$ is injective.
Now we use the freeness of the product.
Let $\lambda = \sum_w a_w w \in R[G]$ be an element in the kernel of~$\alpha$.
Hence, $\sum_w a_w wG_1 = 0$ and $\sum_w a_w wG_2=0$.
Let us assume that $\lambda \neq 0$. It is clear that $\lambda$
cannot just be a multiple of~$1 \in G$, as otherwise it would not
be in the kernel of~$\alpha$.
Now pick the $g \in G$ with $a_g \neq 0$ having maximal length $l(g)$
(among all the $l(w)$ with $a_w \neq 0$).
It follows that $l(g)> 0$.
Assume without loss of generality that the
representation of~$g$ ends in a non-zero element of~$G_1$.
Further, since $a_g \neq 0$ and $0 = \sum_w a_w wG_2$, there
must be an $h \in G$ with $g \neq h$,
$g G_2 = h G_2$ and $a_h \neq 0$. As $g$ does not end in~$G_2$,
we must have $h = gy$ for some $y \in G_2$ with $y \neq 1$.
Thus, $l(h) > l(g)$, contradicting the maximality and proving
the proposition.
\end{proof}
Recall that we usually denote the restriction of a module to a subgroup by the same symbol.
For example, in the next proposition we will write $\h^1(G_1, M)$ instead of $\h^1(G_1,{\rm Res}^G_{G_1}(M))$.
\begin{proposition}[Mayer-Vietoris]\label{mv}
Let $G = G_1 * G_2$ be a free product.
Let $M$ be a left $R[G]$-module. Then the Mayer-Vietoris
sequence gives the exact sequence
$$0 \to M^{G} \to M^{G_1} \oplus M^{G_2} \to M \to
\h^1(G,M) \xrightarrow{\mathrm{res}} \h^1(G_1, M) \oplus \h^1(G_2, M) \to 0$$
and for all $i \ge 2$ an isomorphism
$$\h^i(G,M) \cong \h^i(G_1, M) \oplus \h^i(G_2, M).$$
\end{proposition}
\begin{proof}
We see that all terms in the exact sequence of
Proposition~\ref{mvbieri} are free $R$-modules. We now apply the functor
${\rm Hom}_R(\cdot, M)$ to this exact sequence and obtain the exact
sequence of $R[G]$-modules
$$ 0 \to M \to \underbrace{{\rm Hom}_{R[G_1]}(R[G],M)}_{\cong {\rm Coind}_{G_1}^G(M)} \oplus \underbrace{{\rm Hom}_{R[G_2]}(R[G],M)}_{\cong {\rm Coind}_{G_2}^G(M)}\to
\underbrace{{\rm Hom}_R (R[G],M)}_{\cong {\rm Coind}_{1}^G(M)} \to 0.$$
The central terms, as well as the term on the
right, can be identified with coinduced modules. Hence, the
statements on cohomology follow by taking the long exact sequence of
cohomology and invoking Shapiro's Lemma~\ref{shapiro}.
\end{proof}
We now apply the Mayer-Vietoris sequence (Prop.~\ref{mv}) to $\mathrm{PSL}_2(\mathbb{Z})$
and get that for any ring~$R$ and any left
$R[\mathrm{PSL}_2(\mathbb{Z})]$-module~$M$ the sequence
\begin{multline}\label{mayervietoris}
0 \to M^{\mathrm{PSL}_2(\mathbb{Z})} \to M^{\langle \sigma \rangle} \oplus M^{\langle \tau \rangle}
\to M \\ \xrightarrow{m \mapsto f_m}
\h^1(\mathrm{PSL}_2(\mathbb{Z}),M)
\xrightarrow{\mathrm{res}} \h^1(\langle \sigma \rangle, M) \oplus \h^1(\langle \tau \rangle, M) \to 0
\end{multline}
is exact and for all $i \ge 2$ one has isomorphisms
\begin{equation}\label{mveins}
\h^i(\mathrm{PSL}_2(\mathbb{Z}),M) \cong \h^i(\langle \sigma \rangle, M) \oplus \h^i(\langle \tau \rangle, M).
\end{equation}
The $1$-cocycle $f_m$ can be explicitly described as the cocycle
given by $f_m(\sigma) = (1-\sigma)m$ and $f_m(\tau) = 0$ (see
Exercise~\ref{exmv}).
\begin{lemma}\label{mackeypsl}
Let $\Gamma \le \mathrm{PSL}_2(\mathbb{Z})$ be a subgroup of finite index
and let $x \in \mathbb{H} \cup \mathbb{P}^1(\mathbb{Q})$ be any point.
Recall that $\mathrm{PSL}_2(\mathbb{Z})_x$ denotes the stabiliser of~$x$ for
the $\mathrm{PSL}_2(\mathbb{Z})$-action.
\begin{enumerate}[(a)]
\item The map
$$ \Gamma \backslash \mathrm{PSL}_2(\mathbb{Z}) / \mathrm{PSL}_2(\mathbb{Z})_x
\xrightarrow{g \mapsto gx} \Gamma \backslash \mathrm{PSL}_2(\mathbb{Z})x$$
is a bijection.
\item For $g \in \mathrm{PSL}_2(\mathbb{Z})$ the stabiliser of~$gx$
for the $\Gamma$-action is
$$ \Gamma_{gx} = \Gamma \cap g \mathrm{PSL}_2(\mathbb{Z})_x g^{-1}.$$
\item For all $i \in \mathbb{N}$ and all $R[\Gamma]$-modules~$V$, Mackey's formula (Prop.~\ref{propmackey})
gives an isomorphism
$$\h^i(\mathrm{PSL}_2(\mathbb{Z})_x, {\rm Coind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})} V) \cong
\prod_{y \in \Gamma \backslash \mathrm{PSL}_2(\mathbb{Z})x} \h^i(\Gamma_y, V).$$
\end{enumerate}
\end{lemma}
\begin{proof}
(a) and (b) are clear and (c) follows directly from Mackey's formula.
\end{proof}
\begin{corollary}\label{corhzweinull}
Let $R$ be a ring and $\Gamma \le \mathrm{PSL}_2(\mathbb{Z})$ be a subgroup of
finite index such that all the orders of all stabiliser groups
$\Gamma_x$ for $x \in \mathbb{H}$ are invertible in~$R$. Then for all
$R[\Gamma]$-modules~$V$ one has
$$\h^1(\Gamma,V) \cong M/(M^{\langle \sigma \rangle} + M^{\langle \tau \rangle})$$
with $M = {\rm Coind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})}(V)$ and
$$\h^i(\Gamma,V) = 0$$
for all $i \ge 2$.
\end{corollary}
\begin{proof}
By Lemma~\ref{mackeypsl}~(b), all non-trivial stabiliser groups for
the action of $\Gamma$ on $\mathbb{H}$ are of the form $g \langle \sigma
\rangle g^{-1} \cap \Gamma$ or $g \langle \tau \rangle g^{-1} \cap
\Gamma$ for some $g \in \mathrm{PSL}_2(\mathbb{Z})$.
Due to the invertibility assumption we get from Prop.~\ref{corres}
that the groups on the right in the equation in Lemma~\ref{mackeypsl}~(c) are zero.
Hence, by Shapiro's lemma (Prop.~\ref{shapiro}) we have
$$ \h^i(\Gamma,V) \cong \h^i(\mathrm{PSL}_2(\mathbb{Z}),M)$$
for all $i \ge 0$, so that by Equations (\ref{mayervietoris}) and~(\ref{mveins})
we obtain the corollary.
\end{proof}
By Exercise~\ref{exfiniteorder}, the assumptions of the corollary are
for instance always satisfied if $R$ is a field of characteristic not
$2$ or $3$. Look at Exercise~\ref{exfotwo} to see for which~$N$
the assumptions hold for $\Gamma_1(N)$ and $\Gamma_0(N)$ over an
arbitrary ring (e.g.\ the integers).
\subsection{Theory: Parabolic group cohomology}
Before going on, we include a description of the cusps as $\mathrm{PSL}_2(\mathbb{Z})$-orbits that is very useful for the sequel.
\begin{lemma}
The cusps $\mathbb{P}^1(\mathbb{Q})$ lie in a single $\mathrm{PSL}_2(\mathbb{Z})$-orbit.
The stabiliser group of $\infty$ for the $\mathrm{PSL}_2(\mathbb{Z})$-action is $\langle T \rangle$ and the map
$$ \mathrm{PSL}_2(\mathbb{Z})/\langle T \rangle \xrightarrow{g \langle T \rangle \mapsto g \infty}\mathbb{P}^1(\mathbb{Q})$$
is a $\mathrm{PSL}_2(\mathbb{Z})$-equivariant bijection.
\end{lemma}
\begin{proof}
The claim on the stabiliser follows from a simple direct computation. This makes the map well-defined and injective.
The surjectivity is equivalent to the claim that the cusps lie in a single $\mathrm{PSL}_2(\mathbb{Z})$-orbit and
simply follows from the fact that any pair of coprime integers $(a,c)$ appears as the first column of a matrix in $\mathrm{SL}_2(\mathbb{Z})$.
\end{proof}
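The surjectivity argument is constructive: the extended Euclidean algorithm produces the missing second column. A Python sketch (the helper name is mine, and coprime input is assumed):

```python
# Given a cusp a/c with gcd(a, c) = 1, extended Euclid yields b, d with
# a*d - b*c = 1, so g = [[a, b], [c, d]] lies in SL_2(Z) and g.infinity = a/c.

def matrix_sending_infinity_to(a, c):
    old_r, r = a, c
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:                 # invariant: a*old_x + c*old_y = old_r
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    if old_r == -1:               # normalise the gcd to +1
        old_r, old_x, old_y = 1, -old_x, -old_y
    assert old_r == 1, "a and c must be coprime"
    b, d = -old_y, old_x          # then a*d - b*c = a*old_x + c*old_y = 1
    return ((a, b), (c, d))

(a, b), (c, d) = matrix_sending_infinity_to(3, 7)
assert a * d - b * c == 1 and (a, c) == (3, 7)
```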
Let $R$ be a ring, $\Gamma \le \mathrm{PSL}_2(\mathbb{Z})$ a subgroup of finite index.
One defines the {\em parabolic cohomology group for the left $R[\Gamma]$-module~$V$}
as the kernel of the restriction map in
\begin{equation}\label{pardef}
0 \to \h_{\mathrm{par}}^1(\Gamma, V) \to \h^1(\Gamma,V) \xrightarrow{\mathrm{res}}
\prod_{g \in \Gamma \backslash \mathrm{PSL}_2(\mathbb{Z}) / \langle T \rangle}
\h^1(\Gamma \cap \langle g T g^{-1} \rangle, V).
\end{equation}
\begin{proposition}\label{leraypar}
Let $R$ be a ring and $\Gamma \le \mathrm{PSL}_2(\mathbb{Z})$ be a subgroup of finite index such
that all the orders of all stabiliser groups $\Gamma_x$ for $x \in \mathbb{H}$ are invertible in~$R$.
Let $V$ be a left $R[\Gamma]$-module. Write for short $G = \mathrm{PSL}_2(\mathbb{Z})$
and $M = {\rm Hom}_{R[\Gamma]}(R[G],V)$.
Then the following diagram is commutative, its vertical maps are isomorphisms
and its rows are exact:
\begin{small}
$$ \xymatrix@R=.8cm@C=.4cm{
0 \ar@{->}[r]
& \h_{\mathrm{par}}^1(\Gamma, V) \ar@{->}[r]
& \h^1(\Gamma,V) \ar@{->}[r]^(.3){\mathrm{res}}
& \underset{g \in \Gamma \backslash \mathrm{PSL}_2(\mathbb{Z}) / \langle T \rangle}{\prod}
\h^1(\Gamma \cap \langle g T g^{-1} \rangle, V) \ar@{->}[r]
& V_\Gamma \ar@{->}[r]
& 0\\
0 \ar@{->}[r]
& \h_{\mathrm{par}}^1(G, M) \ar@{->}[r]\ar@{->}[u]^(.5){\textnormal{Shapiro}}
& \h^1(G,M) \ar@{->}[r]^(.3){\mathrm{res}} \ar@{->}[u]^(.5){\textnormal{Shapiro}}
& \h^1(\langle T \rangle, M) \ar@{->}[r]\ar@{->}[u]^(.5){\textnormal{Mackey}}
& V_\Gamma \ar@{->}[r]\ar@{=}[u]
& 0\\
0 \ar@{->}[r]
& \h_{\mathrm{par}}^1(G, M) \ar@{->}[r]\ar@{=}[u]
& M/(M^{\langle \sigma \rangle} + M^{\langle \tau \rangle}) \ar@{->}[r]^(.5){m \mapsto (1-\sigma)m} \ar@{->}[u]^{m \mapsto f_m}
& M/(1-T)M \ar@{->}[r]\ar@{<-}[u]^{c \mapsto c(T)}
& M_G \ar@{->}[r]\ar@{->}[u]^{\phi}
& 0
}$$%
\end{small}
The map $\phi: M_G \to V_\Gamma$ is given as $f \mapsto \sum_{g \in \Gamma \backslash G} f(g)$.
\end{proposition}
\begin{proof}
The commutativity of the diagram is checked in Exercise~\ref{exparcompat}.
By Exercise~\ref{exfree} we have $\h^1(\langle T \rangle, M) \cong M/(1-T)M$.
Due to the assumptions we may apply Corollary~\ref{corhzweinull}.
The cokernel of
$M/(M^{\langle \sigma \rangle} + M^{\langle \tau \rangle})
\xrightarrow{m \mapsto (1-\sigma)m} M/(1-T)M$
is immediately seen to be~$M/((1-\sigma)M + (1-T)M)$, which is
equal to~$M_G$, as $T$ and $\sigma$ generate~$\mathrm{PSL}_2(\mathbb{Z})$.
Hence, the lower row is an exact sequence.
We now check that the map $\phi$ is well-defined. For this we verify
that the image of $f(g)$ in $V_\Gamma$ only depends on the coset
$\Gamma \backslash G$:
$$ f(g) - f(\gamma g) = f(g) - \gamma f(g) = (1-\gamma) f(g) = 0 \in V_\Gamma.$$
Hence, for any $h \in G$ we get
$$\phi((1-h).f) = \sum_{g \in \Gamma \backslash \mathrm{PSL}_2(\mathbb{Z})} (f(g) - f(gh)) = 0,$$
as $gh$ runs over all cosets. Thus, $\phi$ is well-defined.
To show that $\phi$ is an isomorphism, we give an inverse $\psi$ to~$\phi$ by
$$ \psi: V_\Gamma \to {\rm Hom}_{R[\Gamma]}(R[G],V)_G, \;\;\; v \mapsto
e_v \textnormal{ with } e_v(g) = \begin{cases}
g v, & \textnormal{ for } g \in \Gamma\\
0, & \textnormal{ for } g \not\in\Gamma.
\end{cases}$$
It is clear that $\phi \circ \psi$ is the identity; in particular, $\psi$ is injective.
Once we know that $\psi$ is also surjective, it is bijective with inverse~$\phi$,
so that $\phi$ is an isomorphism. In order to see the surjectivity, fix a system of representatives
$\{1=g_1,g_2,\dots,g_n\}$ for $\Gamma \backslash \mathrm{PSL}_2(\mathbb{Z})$. We first have
$ f = \sum_{i=1}^n g_i^{-1}.e_{f(g_i)} $
because for all $h \in G$ we find
$$\big(\sum_{i=1}^n g_i^{-1}.e_{f(g_i)}\big)(h) = \big(g_j^{-1}.e_{f(g_j)}\big)(h) = e_{f(g_j)}(hg_j^{-1}) = hg_j^{-1}.f(g_j) = f(hg_j^{-1}g_j)=f(h),$$
where $1 \le j \le n$ is the unique index such that $h \in \Gamma g_j$
(the summands for $i \neq j$ vanish since then $hg_i^{-1} \not\in \Gamma$).
Thus
$$f= \sum_{i=1}^n e_{f(g_i)} - \sum_{i=2}^n (1-g_i^{-1}).e_{f(g_i)}\in \Image(\psi),$$
as needed.
More conceptually, one can first identify
the coinduced module ${\rm Coind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})}(V)$ with the
induced one ${\rm Ind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})}(V) = R[G] \otimes_{R[\Gamma]} V$.
We claim that the $G$-coinvariants are isomorphic
to $R \otimes_{R[\Gamma]} V \cong V_\Gamma$.
As $R$-modules we have $R[G] = I_G \oplus R1_G$ since
$r \mapsto r 1_G$ defines a splitting of the augmentation map.
Here $I_G$ is the augmentation ideal defined in Exercise~\ref{exgp}.
Consequently,
$R[G] \otimes_{R[\Gamma]} V \cong (I_G \otimes_{R[\Gamma]} V) \oplus R \otimes_{R[\Gamma]} V$.
The claim follows, since
$I_G (R[G] \otimes_{R[\Gamma]} V) \cong I_G \otimes_{R[\Gamma]} V$.
Since all the terms in the upper and the middle row are isomorphic
to the respective terms in the lower row, all rows are exact.
\end{proof}
\subsection{Theory: Dimension computations}
This seems to be a good place to compute the dimension of $\h^1(\Gamma,V_{k-2}(K))$
and $\h_{\mathrm{par}}^1(\Gamma,V_{k-2}(K))$ over a field~$K$ under certain conditions.
The results will be important for
the proof of the Eichler-Shimura theorem.
\begin{lemma}\label{lemvknull}
Let $R$ be a ring and let $n \ge 1$ be an integer, $t = \mat 1 N 0 1$ and
$t' = \mat 1 0 N 1$.
\begin{enumerate}[(a)]
\item If $n! N$ is not a zero divisor in~$R$, then
for the $t$-invariants we have
$$V_n(R)^{\langle t \rangle} = \langle X^n \rangle$$
and for the $t'$-invariants
$$V_n(R)^{\langle t' \rangle} = \langle Y^n \rangle.$$
\item If $n! N$ is invertible in~$R$, then the coinvariants are
given by
$$V_n(R)_{\langle t \rangle} = V_n(R)/\langle Y^n, XY^{n-1}, \dots, X^{n-1}Y \rangle$$
respectively
$$V_n(R)_{\langle t' \rangle} = V_n(R)/\langle X^n, X^{n-1}Y, \dots, XY^{n-1} \rangle.$$
\item If $n! N$ is not a zero divisor in~$R$,
then the $R$-module of $\Gamma(N)$-invariants $V_n(R)^{\Gamma(N)}$ is zero.
In particular, if $R$ is a field of characteristic~$0$ and $\Gamma$ is
any congruence subgroup, then $V_n(R)^\Gamma$ is zero.
\item If $n! N$ is invertible in~$R$,
then the $R$-module of $\Gamma(N)$-coinvariants $V_n(R)_{\Gamma(N)}$ is zero.
In particular, if $R$ is a field of characteristic~$0$ and $\Gamma$ is
any congruence subgroup, then $V_n(R)_\Gamma$ is zero.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) The action of~$t$ is $t.(X^{n-i} Y^i) = X^{n-i} (NX + Y)^i$ and consequently
$$(t-1). (X^{n-i} Y^i)
= (\sum_{j=0}^i \vect ij N^{i-j} X^{i-j}Y^j)X^{n-i} - X^{n-i}Y^i
= \sum_{j=0}^{i-1} r_{i,j} X^{n-j}Y^j$$
with $r_{i,j} =N^{i-j} \vect{i}{j}$, which is not a zero divisor,
respectively invertible, by assumption.
For $x = \sum_{i=0}^n a_i X^{n-i}Y^i$ we have
\begin{multline*} (t-1).x
= \sum_{i=0}^n a_i \sum_{j=0}^{i-1}r_{i,j} X^{n-j}Y^j
= \sum_{j=0}^{n-1} X^{n-j}Y^j (\sum_{i=j+1}^n a_i r_{i,j})\\
= XY^{n-1} a_nr_{n,n-1} + X^2Y^{n-2} (a_n r_{n,n-2} + a_{n-1}r_{n-1,n-2}) + \dots.
\end{multline*}
If $(t-1).x = 0$, we conclude for $j = n-1$ that $a_n = 0$.
Next, for $j = n-2$ it follows that $a_{n-1} = 0$,
and so on, until $a_1 = 0$. This proves the statement on
the $t$-invariants. The one on the $t'$-invariants follows from symmetry.
(b) The claims on the coinvariants are proved in a very similar and
straightforward way.
(c) and (d) As $\Gamma(N)$ contains the matrices $t$ and $t'$, this follows
from Parts~(a) and~(b).
\end{proof}
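The computation in the proof of part~(a) is easy to check by machine. The following Python sketch (function name mine) implements the action of $t$ on $V_n(R)$ in the basis $X^{n-i}Y^i$, $i=0,\dots,n$; it illustrates that $X^n$ is $t$-invariant, while e.g.\ $t.Y^n = (NX+Y)^n$:

```python
from math import comb

def t_action(coeffs, N):
    """Action of t = [[1, N], [0, 1]] on V_n(R): coeffs[i] is the coefficient
    of X^{n-i} Y^i, and t.(X^{n-i} Y^i) = X^{n-i} (N X + Y)^i."""
    n = len(coeffs) - 1
    out = [0] * (n + 1)
    for i, a in enumerate(coeffs):
        # X^{n-i}(NX+Y)^i = sum_{j=0}^{i} C(i,j) N^{i-j} X^{n-j} Y^j
        for j in range(i + 1):
            out[j] += a * comb(i, j) * N ** (i - j)
    return out
```

One can then solve `t_action(v, N) == v` degree by degree, recovering the staircase argument of the proof: the coefficients $a_n, a_{n-1}, \dots, a_1$ vanish in turn.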
\begin{proposition}\label{dimheins}
Let $K$ be a field of characteristic~$0$ and $\Gamma \le \mathrm{SL}_2(\mathbb{Z})$ be
a congruence subgroup of finite index~$\mu$ such that $\Gamma_y = \{1\}$ for all~$y \in \mathbb{H}$
(e.g.\ $\Gamma = \Gamma_1(N)$ with $N \ge 4$). We can and do consider $\Gamma$ as a subgroup of $\mathrm{PSL}_2(\mathbb{Z})$.
Then
$$ \dim_K \h^1(\Gamma,V_{k-2}(K)) = (k-1) \frac{\mu}{6} + \delta_{k,2}$$
and
$$ \dim_K \h_{\mathrm{par}}^1(\Gamma,V_{k-2}(K)) = (k-1) \frac{\mu}{6} - \nu_\infty + 2 \delta_{k,2},$$
where $\nu_\infty$ is the number of cusps of~$\Gamma$, i.e.\ the cardinality of $\Gamma \backslash \mathbb{P}^1(\mathbb{Q})$,
and $\delta_{k,2} = \begin{cases}1 & \textnormal{if } k=2\\0 & \textnormal{otherwise.}\end{cases}$
\end{proposition}
\begin{proof}
Let $M = {\rm Coind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})} (V_{k-2}(K))$. This module has dimension
$(k-1)\mu$. From the Mayer-Vietoris exact sequence
$$0 \to M^{\mathrm{PSL}_2(\mathbb{Z})} \to M^{\langle \sigma \rangle} \oplus M^{\langle \tau \rangle}
\to M \to \h^1(\mathrm{PSL}_2(\mathbb{Z}),M) \to 0,$$
we obtain
$$ \dim \h^1(\Gamma,V_{k-2}(K)) = \dim M + \dim M^{\mathrm{PSL}_2(\mathbb{Z})}
- \dim \h^0 (\langle \sigma \rangle,M) - \dim \h^0 (\langle \tau \rangle,M).$$
Recall the left $\mathrm{PSL}_2(\mathbb{Z})$-action on ${\rm Hom}_{K[\Gamma]}(K[\mathrm{PSL}_2(\mathbb{Z})],V_{k-2}(K))$,
which is given by $(g.\phi)(h) = \phi(hg)$. It follows directly that
every function in the $K$-vector space ${\rm Hom}_{K[\Gamma]}(K[\mathrm{PSL}_2(\mathbb{Z})],V_{k-2}(K))^{\mathrm{PSL}_2(\mathbb{Z})}$
is constant and equal to its value at~$1$.
The $R[\Gamma]$-linearity, however, additionally imposes
that this constant lies in $V_{k-2}(K)^\Gamma$.
Hence, by Lemma~\ref{lemvknull}, $\dim M^{\mathrm{PSL}_2(\mathbb{Z})} = \delta_{k,2}$.
The term $\h^0 (\langle \sigma \rangle,M)$ is handled by Mackey's formula:
$$\dim \h^0 (\langle \sigma \rangle,M)
= \sum_{x \in \Gamma \backslash \mathrm{PSL}_2(\mathbb{Z}).i} \dim V_{k-2}(K)^{\Gamma_x}
= (k-1) \#(\Gamma \backslash \mathrm{PSL}_2(\mathbb{Z}).i) = (k-1) \frac{\mu}{2},$$
since all $\Gamma_x$ are trivial by assumption and there are hence precisely
$\mu/2$ points in $Y_\Gamma$ lying over~$i$ in $Y_{\mathrm{SL}_2(\mathbb{Z})}$.
By the same argument we get
$$\dim \h^0 (\langle \tau \rangle,M) = (k-1)\frac{\mu}{3}.$$
Putting these together gives the first formula:
$$ \dim_K \h^1(\Gamma,V_{k-2}(K)) = (k-1) (\mu - \frac{\mu}{2} - \frac{\mu}{3}) + \delta_{k,2} =
(k-1)\frac{\mu}{6} + \delta_{k,2}.$$
The second formula can be read off from the diagram in Proposition~\ref{leraypar}.
It gives directly
\begin{multline*}
\dim \h_{\mathrm{par}}^1(\Gamma,V_{k-2}(K)) =
\dim \h^1(\Gamma,V_{k-2}(K)) + \dim V_{k-2}(K)_\Gamma \\
- \sum_{g \in \Gamma \backslash \mathrm{PSL}_2(\mathbb{Z}) / \langle T \rangle}
\dim \h^1(\Gamma \cap \langle g T g^{-1} \rangle, V_{k-2}(K)).
\end{multline*}
All the groups $\Gamma \cap \langle g T g^{-1} \rangle$ are of the form
$\langle g T^n g^{-1} \rangle$ for some $n \ge 1$. Since these groups are infinite cyclic
and conjugation by~$g$ identifies the respective coinvariants, we have
$$\dim \h^1(\Gamma \cap \langle g T g^{-1} \rangle, V_{k-2}(K)) =
\dim V_{k-2}(K)_{\langle T^n \rangle} = 1$$
by Lemma~\ref{lemvknull}. As the set $\Gamma \backslash \mathrm{PSL}_2(\mathbb{Z}) / \langle T \rangle$
is the set of cusps of~$\Gamma$, we conclude
$$ \sum_{g \in \Gamma \backslash \mathrm{PSL}_2(\mathbb{Z}) / \langle T \rangle}
\dim \h^1(\Gamma \cap \langle g T g^{-1} \rangle, V_{k-2}(K)) = \nu_\infty.$$
Moreover, also by Lemma~\ref{lemvknull}, $\dim V_{k-2}(K)_\Gamma = \delta_{k,2}$.
Putting everything together yields the formula
$$ \dim \h_{\mathrm{par}}^1(\Gamma,V_{k-2}(K)) = (k-1)\frac{\mu}{6} + 2 \delta_{k,2} - \nu_\infty,$$
as claimed.
\end{proof}
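As a numerical sanity check of these formulas, here is a small Python sketch. The helper names are mine; the index formula $[\mathrm{PSL}_2(\mathbb{Z}) : \overline{\Gamma_1(N)}] = \tfrac{N^2}{2}\prod_{p \mid N}(1-p^{-2})$ for $N \ge 4$ and the cusp number $\nu_\infty = 4$ for $\Gamma_1(5)$ are standard facts taken as given here:

```python
from math import prod

def index_gamma1_psl(N):
    """Index mu of the image of Gamma_1(N) in PSL_2(Z), assuming N >= 4
    (so that -1 is not in Gamma_1(N)): (N^2 / 2) * prod_{p | N} (1 - 1/p^2)."""
    primes = [p for p in range(2, N + 1)
              if N % p == 0 and all(p % q for q in range(2, p))]
    idx_sl2 = N * N * prod(p * p - 1 for p in primes) // prod(p * p for p in primes)
    return idx_sl2 // 2

def dim_h1(k, mu):
    # (k-1) * mu/6 + delta_{k,2}; mu is divisible by 6 for torsion-free Gamma
    return (k - 1) * mu // 6 + (1 if k == 2 else 0)

def dim_h1_par(k, mu, nu_inf):
    # (k-1) * mu/6 - nu_infinity + 2 * delta_{k,2}
    return (k - 1) * mu // 6 - nu_inf + (2 if k == 2 else 0)
```

For $\Gamma_1(5)$ one gets $\mu = 12$ and, for $k = 2$, parabolic dimension $12/6 - 4 + 2 = 0$, matching the fact that the modular curve $X_1(5)$ has genus~$0$.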
\begin{remark}\label{remdimcoh}
One can derive a formula for the dimension
even if $\Gamma$ is not torsion-free. One only needs to compute the dimensions
$V_{k-2}(K)^{\langle \sigma \rangle}$ and
$V_{k-2}(K)^{\langle \tau \rangle}$ and to modify the above proof slightly.
\end{remark}
\subsection{Theoretical exercises}
\begin{exercise}\label{exfreegroup}
\begin{enumerate}[(a)]
\item
Verify that $G*H$ is a group.
\item
Prove the universal property represented by the commutative diagram
$$ \xymatrix@=0.5cm{
& P \\
G\ar@{^(->}^{\eta_G}[ur]\ar@{^(->}_{\iota_G}[dr] && H \ar@{^(->}_{\eta_H}[ul]\ar@{^(->}^{\iota_H}[dl]\\
& G * H.\ar@{=>}^\phi[uu] & }$$
More precisely, let $\iota_G: G \to G*H$ and $\iota_H:H \to G*H$ be the natural inclusions.
Let $P$ be any group together with group injections
$\eta_G : G \to P$ and $\eta_H:H \to P$, then there is a unique
group homomorphism $\phi: G*H \to P$ such that
$\eta_G = \phi \circ \iota_G$ and $\eta_H = \phi \circ \iota_H$.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exfiniteorder}
\begin{enumerate}[(a)]
\item Let $M \in \mathrm{SL}_n(\mathbb{Z})$ be an element of finite order~$m$.
Determine the primes that may divide~$m$.
[Hint: Look at the characteristic polynomial of $M$.]
\item Determine all conjugacy classes of elements of finite
order in $\mathrm{PSL}_2(\mathbb{Z})$.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exfotwo}
\begin{enumerate}[(a)]
\item Determine the $N \ge 1$ for which $\Gamma_1(N)$
has no element of finite order apart from the identity.
[Hint: You should get $N \ge 4$.]
\item Determine the $N \ge 1$ for which $\Gamma_0(N)$
has no element of order~$4$. Also determine the cases in which
there is no element of order~$6$.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exmv}
\begin{enumerate}[(a)]
\item Prove that the explicit description of~$f_m$ in the Mayer-Vietoris sequence
(Equation~\ref{mayervietoris}) satisfies the properties required for the $0$-th connecting homomorphism
in Definition~\ref{defi:deltafunctor}.
Hint: Prove that if $f_m$ is a boundary, then $m \in M^{\langle \sigma \rangle} + M^{\langle \tau \rangle}$.
Moreover, prove that a $1$-cocycle in $\h^1(\mathrm{PSL}_2(\mathbb{Z}),M)$ which becomes a coboundary when restricted to either
$\langle \sigma \rangle$ or $\langle \tau \rangle$ can be changed by a coboundary to be of the form~$f_m$ for
some $m \in M$.
\item Let $0 \to A \to B \to C \to 0$ be an exact sequence of $G$-modules
for some group~$G$.
Let $c \in C^G$ and write it as a class $b+A \in B/A \cong C$. As it is $G$-invariant,
we have $0=(1-g)c=(1-g)(b+A)$, whence $(1-g)b \in A$ for all $g \in G$.
Define the $1$-cocycle $\delta^0(c)$ as the map $G \to A$ sending $g$ to $(1-g)b\in A$.
Prove that $\delta^0$ satisfies the properties required for the $0$-th connecting homomorphism
in Definition~\ref{defi:deltafunctor}.
Note that the connecting homomorphisms are not unique (one can, e.g.\ replace them by their negatives).
\item As an alternative approach to~(a), you may apply~(b) to the exact sequence from which the Mayer-Vietoris
sequence is derived as the associated long cohomology sequence in Proposition~\ref{mv}.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{exparcompat}
Verify the commutativity of the diagram in Proposition~\ref{leraypar}.
\end{exercise}
\subsection{Computer exercises}
\begin{cexercise}\label{cexpeins}
Let $N \ge 1$. Compute a list of the elements of $\mathbb{P}^1(\mathbb{Z}/N\mathbb{Z})$.
Compute a list of the cusps of $\Gamma_0(N)$ and $\Gamma_1(N)$
(cf.\ \cite{Stein}, p.~60).
I recommend using the decomposition of $\mathbb{P}^1(\mathbb{Z}/N\mathbb{Z})$ into a product of the $\mathbb{P}^1(\mathbb{Z}/p^n\mathbb{Z})$ for the prime powers $p^n$ exactly dividing~$N$.
\end{cexercise}
\begin{cexercise}\label{cexdirichlet}
Let $K$ be some field. Let $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to K^\times$ be
a Dirichlet character of modulus~$N$. For given $N$ and $K$, compute the
group of all Dirichlet characters. Every Dirichlet character should be
implemented as a map $\phi: \mathbb{Z} \to K^\times$ such that $\phi(a) = 0$
for all $a \in \mathbb{Z}$ with $(a,N) \neq 1$ and $\phi(a) = \chi(a \mod N)$
otherwise.
\end{cexercise}
\section{Modular symbols and Manin symbols}
\subsection{Theory: Manin symbols}
This section is an extended version of a specialisation of parts of my article~\cite{MS} to the group $\mathrm{PSL}_2(\mathbb{Z})$.
Manin symbols provide an alternative description of modular symbols.
See Definition~\ref{defi:manin} below.
We shall use this description for the comparison with
group cohomology and for implementing the modular symbols formalism.
We stay in the general setting over a ring~$R$.
\begin{proposition}\label{hparnull}
The sequence of $R$-modules
$$0 \to R[\mathrm{PSL}_2(\mathbb{Z})]N_\sigma + R[\mathrm{PSL}_2(\mathbb{Z})]N_\tau
\to R[\mathrm{PSL}_2(\mathbb{Z})] \xrightarrow{g \,\mapsto\, g(1-\sigma)\infty} R[\mathbb{P}^1(\mathbb{Q})]
\xrightarrow{g\infty \,\mapsto\, 1} R \to 0$$
is exact. Here we are considering $R[\mathrm{PSL}_2(\mathbb{Z})]$ as a right $R[\mathrm{PSL}_2(\mathbb{Z})]$-module.
\end{proposition}
\begin{proof}
Let $H$ be a finite subgroup of a group~$G$ and let
$H \backslash G = \{g_i \;|\; i \in I\}$ stand for a fixed system of representatives of the cosets.
We write $R[H \backslash G]$ for the free $R$-module on the set of representatives.
The map
$$ {\rm Hom}_R(R[H], R[H \backslash G]) \to R[G],\;\;\; f \mapsto \sum_{h \in H} h. f(h)$$
is an isomorphism.
Indeed, suppose that for $f \in {\rm Hom}_R(R[H], R[H \backslash G])$ we have
$$ 0 = \sum_{h \in H} h.(f(h)) = \sum_{h \in H} h.(\sum_{i \in I} a_{h,i} g_i)
= \sum_{h \in H} (\sum_{i \in I} a_{h,i} hg_i), $$
then $a_{h,i}=0$ for all $h\in H$ and all $i\in I$ (since the elements $hg_i$ are all distinct),
whence $f=0$. For the surjectivity, note that all elements in $R[G]$ can be written as
(finite) sums of the form $\sum_{h \in H} \sum_{i \in I} a_{h,i} hg_i$ because any element in $G$
is of the form $hg_i$ for a unique $h\in H$ and a unique $i \in I$.
This yields via Shapiro's lemma that
$$H^i(\langle \sigma \rangle, R[\mathrm{PSL}_2(\mathbb{Z})]) =
H^i(\langle 1 \rangle, R[\langle \sigma \rangle \backslash \mathrm{PSL}_2(\mathbb{Z})]) = 0$$
for all $i \ge 1$, and similarly for $\langle \tau \rangle$.
The resolution for a finite cyclic group \eqref{eq:res-cyclic-finite} gives
\begin{align*}
R[\mathrm{PSL}_2(\mathbb{Z})]N_\sigma &= \ker_{R[\mathrm{PSL}_2(\mathbb{Z})]} (1-\sigma)
= R[\mathrm{PSL}_2(\mathbb{Z})]^{\langle \sigma \rangle},\\
R[\mathrm{PSL}_2(\mathbb{Z})]N_\tau &= \ker_{R[\mathrm{PSL}_2(\mathbb{Z})]} (1-\tau)
= R[\mathrm{PSL}_2(\mathbb{Z})]^{\langle \tau \rangle}, \\
R[\mathrm{PSL}_2(\mathbb{Z})](1-\sigma) &= \ker_{R[\mathrm{PSL}_2(\mathbb{Z})]} N_\sigma \;\;\; \text{ and } \\
R[\mathrm{PSL}_2(\mathbb{Z})](1-\tau) &= \ker_{R[\mathrm{PSL}_2(\mathbb{Z})]} N_\tau.
\end{align*}
By Proposition~\ref{mvbieri}, we have the exact sequence
$$ 0 \to R[\mathrm{PSL}_2(\mathbb{Z})] \to R[\mathrm{PSL}_2(\mathbb{Z})]_{\langle \sigma \rangle} \oplus R[\mathrm{PSL}_2(\mathbb{Z})]_{\langle \tau \rangle}
\to R \to 0.$$
The injectivity of the first map in the exact sequence (which we recall
is a consequence of $\mathrm{PSL}_2(\mathbb{Z}) = \langle \sigma \rangle * \langle \tau \rangle$) leads to
$$ R[\mathrm{PSL}_2(\mathbb{Z})](1-\sigma) \cap R[\mathrm{PSL}_2(\mathbb{Z})](1-\tau) = 0.$$
Sending $g$ to $g\infty$ yields an isomorphism of $R$-modules between
$R[\mathrm{PSL}_2(\mathbb{Z})]/R[\mathrm{PSL}_2(\mathbb{Z})](1-T)$ and $R[\mathbb{P}^1(\mathbb{Q})]$,
since the stabiliser of~$\infty$ in $\mathrm{PSL}_2(\mathbb{Z})$ is~$\langle T \rangle$.
In order to prove the exactness at $R[\mathrm{PSL}_2(\mathbb{Z})]$,
we show that the equality
$ x(1-\sigma) = y (1-T)$ for $x,y \in R[\mathrm{PSL}_2(\mathbb{Z})]$ yields that
$x$ belongs to $R[\mathrm{PSL}_2(\mathbb{Z})]^{\langle \sigma \rangle} + R[\mathrm{PSL}_2(\mathbb{Z})]^{\langle \tau \rangle}$.
Note that $x(1-\sigma) = y(1-T) = y(1-\tau) -yT(1-\sigma)$ because of the equality $\tau = T\sigma$.
This yields $(x +yT)(1-\sigma) = y(1-\tau)$. The left hand side lies in
$R[\mathrm{PSL}_2(\mathbb{Z})](1-\sigma)$ and the right hand side in $R[\mathrm{PSL}_2(\mathbb{Z})](1-\tau)$,
so by the trivial intersection just established both sides are zero.
From $y(1-\tau) = 0$ we conclude that there exists a $z \in R[\mathrm{PSL}_2(\mathbb{Z})]$ satisfying $y = zN_\tau$.
We have $N_\tau T = N_\tau \sigma$ because of $T = \tau \sigma$.
Consequently, we get
$$ y(1-T) = z N_\tau(1-T) = z N_\tau (1-\sigma) = y(1-\sigma).$$
The equality $x(1-\sigma) = y(1-\sigma)$ implies that
$x-y$ belongs to $R[\mathrm{PSL}_2(\mathbb{Z})]^{\langle \sigma \rangle}$.
Since $y \in R[\mathrm{PSL}_2(\mathbb{Z})]^{\langle \tau \rangle}$,
we see that $x = (x-y) + y$ lies in $R[\mathrm{PSL}_2(\mathbb{Z})]^{\langle \sigma \rangle} + R[\mathrm{PSL}_2(\mathbb{Z})]^{\langle \tau \rangle}$,
as required.
It remains to prove the exactness at $R[\mathbb{P}^1(\mathbb{Q})]$.
The kernel of $R[\mathrm{PSL}_2(\mathbb{Z})] \xrightarrow{g \mapsto 1} R$ is the augmentation ideal,
which is generated by the elements $1-g$ for $g \in \mathrm{PSL}_2(\mathbb{Z})$. Noticing further that we can write
$$ 1-\alpha\beta = \alpha.(1-\beta)+(1-\alpha)$$
for $\alpha,\beta \in \mathrm{PSL}_2(\mathbb{Z})$, the fact that $\sigma$ and $T=\tau\sigma$ generate~$\mathrm{PSL}_2(\mathbb{Z})$ implies
that the kernel of $R[\mathrm{PSL}_2(\mathbb{Z})] \xrightarrow{g \mapsto 1} R$ equals
$$R[\mathrm{PSL}_2(\mathbb{Z})](1-\sigma) + R[\mathrm{PSL}_2(\mathbb{Z})](1-T)$$
inside $R[\mathrm{PSL}_2(\mathbb{Z})]$.
It suffices to take the quotient by $R[\mathrm{PSL}_2(\mathbb{Z})](1-T)$ to obtain the desired exactness.
\end{proof}
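The identities $R[\mathrm{PSL}_2(\mathbb{Z})]N_\sigma = \ker(1-\sigma)$ and $R[\mathrm{PSL}_2(\mathbb{Z})](1-\sigma) = \ker N_\sigma$ used above depend only on the cyclic-group mechanism behind the resolution \eqref{eq:res-cyclic-finite}. Here is a toy Python illustration for the group algebra $\mathbb{Q}[C_m]$ of a finite cyclic group, with coordinates indexed by powers of the generator (a sketch with names of my own choosing, not part of the proof):

```python
def right_mult_g(v):
    """Right multiplication by the generator g of C_m on Q[C_m];
    v[i] is the coefficient of g^i, so multiplying by g shifts indices."""
    m = len(v)
    return [v[(i - 1) % m] for i in range(m)]

def one_minus_g(v):
    """Right multiplication by 1 - g."""
    return [a - b for a, b in zip(v, right_mult_g(v))]

def norm(v):
    """Right multiplication by the norm element N = 1 + g + ... + g^{m-1}:
    every coefficient becomes the total coefficient sum."""
    return [sum(v)] * len(v)
```

One checks that $(1-g)N = N(1-g) = 0$, that $\ker(1-g)$ consists of the constant-coordinate vectors, i.e.\ the image of~$N$, and that the image of $1-g$ consists of vectors with coordinate sum zero, i.e.\ lies in $\ker N$.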
\begin{lemma}\label{mrlem}
The sequence of $R$-modules
$$ 0 \to \mathcal{M}_R \xrightarrow{\{\alpha,\beta\} \mapsto \beta - \alpha} R[\mathbb{P}^1(\mathbb{Q})]
\xrightarrow{\alpha \mapsto 1} R \to 0$$
is exact.
\end{lemma}
\begin{proof}
Note that, using the relations defining~$\mathcal{M}_R$, any element in $\mathcal{M}_R$ can be written as
$\sum_{\alpha \neq \infty} r_\alpha \{\infty, \alpha\}$ with $r_\alpha \in R$.
This element is mapped to $\sum_{\alpha \neq \infty} r_\alpha \alpha - (\sum_{\alpha \neq \infty} r_\alpha) \infty$.
If this expression equals zero, all coefficients $r_\alpha$ have to be zero.
This shows the injectivity of the first map.
Let $\sum_\alpha r_\alpha \alpha \in R[\mathbb{P}^1(\mathbb{Q})]$ be an element in the kernel
of the second map. Then $\sum_\alpha r_\alpha = 0$, so that we can write
$$\sum_\alpha r_\alpha \alpha = \sum_{\alpha \neq \infty} r_\alpha \alpha -
(\sum_{\alpha \neq \infty} r_\alpha) \infty$$
to obtain an element in the image of the first map.
\end{proof}
\begin{proposition}\label{propker}
The homomorphism of $R$-modules
$$ R[\mathrm{PSL}_2(\mathbb{Z})] \xrightarrow{\phi} \mathcal{M}_R,\;\;\;
g \mapsto \{g.0,g.\infty\}$$
is surjective with kernel
$R[\mathrm{PSL}_2(\mathbb{Z})]N_\sigma + R[\mathrm{PSL}_2(\mathbb{Z})]N_\tau$.
\end{proposition}
\begin{proof}
This follows from Proposition~\ref{hparnull} and Lemma~\ref{mrlem}.
\end{proof}
We have now provided all the input required to prove the description of modular symbols in terms of
Manin symbols. For this we need the notion of an induced module. In homology
it plays the role that the coinduced module plays in cohomology.
\begin{definition}
Let $R$ be a ring, $G$ a group, $H \le G$ a subgroup and $V$ a left
$R[H]$-module. The {\em induced module} of $V$ from $H$ to~$G$ is defined as
$$ {\rm Ind}_H^G (V) := R[G] \otimes_{R[H]} V,$$
where we view $R[G]$ as a right $R[H]$-module via the natural action.
The induced module is a left $R[G]$-module via the natural left
action of $G$ on~$R[G]$.
\end{definition}
In case of $H$ having a finite index in~$G$
(as in our standard example $\Gamma_1(N) \le \mathrm{PSL}_2(\mathbb{Z})$), the induced module is
isomorphic to the coinduced one:
\begin{lemma}\label{indcoind}
Let $R$ be a ring, $G$ a group, $H \le G$ a subgroup of finite index and $V$ a left
$R[H]$-module.
\begin{enumerate}[(a)]
\item ${\rm Ind}_H^G(V)$ and ${\rm Coind}_H^G(V)$ are isomorphic as left $R[G]$-modules.
\item Equip $(R[G] \otimes_R V)$ with the diagonal left $H$-action
$h.(g \otimes v) = h g \otimes h.v$ and the right
$G$-action $(g\otimes v).\tilde{g} = g\tilde{g} \otimes v$.
Consider the induced module ${\rm Ind}_H^G (V)$ as a
right $R[G]$-module by inverting the left action in the definition.
Then
$$ {\rm Ind}_H^G(V) \to (R[G] \otimes_R V)_H,\;\;\;
g\otimes v \mapsto g^{-1} \otimes v$$
is an isomorphism of right $R[G]$-modules.
\end{enumerate}
\end{lemma}
\begin{proof}
Exercise~\ref{exindcoind}.
\end{proof}
\begin{definition}\label{defi:manin}
Let $\Gamma \subseteq \mathrm{PSL}_2(\mathbb{Z})$ be a finite index subgroup, $V$ a left $R[\Gamma]$-module and consider
$M = {\rm Ind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})} (V)$, which we identify with the
right $R[\mathrm{PSL}_2(\mathbb{Z})]$-module
$(R[\mathrm{PSL}_2(\mathbb{Z})] \otimes_R V)_\Gamma$ as in Lemma~\ref{indcoind}~(b).
Elements in $M / (M N_\sigma + M N_\tau)$ are called {\em Manin symbols} over~$R$ (for the subgroup $\Gamma \subseteq \mathrm{PSL}_2(\mathbb{Z})$ and the
left $R[\Gamma]$-module~$V$).
\end{definition}
\begin{theorem}\label{ManinSymbols}
In the setting of Definition~\ref{defi:manin}, the following statements hold:
\begin{enumerate}[(a)]
\item The homomorphism $\phi$ from Proposition~\ref{propker} induces the
exact sequence of $R$-modules
$$ 0 \to M N_\sigma + M N_\tau \to M \to \mathcal{M}_R(\Gamma,V) \to 0,$$
and the homomorphism $M \to \mathcal{M}_R(\Gamma,V)$ is given by
$g \otimes v \mapsto \{g.0, g.\infty\} \otimes v$.
In other words, this map induces an isomorphism between Manin symbols over~$R$
(for the subgroup $\Gamma \subseteq \mathrm{PSL}_2(\mathbb{Z})$ and the left $R[\Gamma]$-module~$V$)
and the modular symbols module $\mathcal{M}_R(\Gamma,V)$.
\item The homomorphism $R[\mathrm{PSL}_2(\mathbb{Z})] \to R[\mathbb{P}^1(\mathbb{Q})]$ sending $g$ to $g.\infty$
induces the exact sequence of $R$-modules
$$ 0 \to M(1-T) \to M \to \mathcal{B}_R(\Gamma,V) \to 0.$$
\item The identifications of (a) and~(b) imply the isomorphism
$$ \mathcal{CM}_R(\Gamma,V) \cong \ker\big(
M/ (M N_\sigma + M N_\tau ) \xrightarrow{m \mapsto m(1-\sigma)} M/M(1-T) \big).$$
\end{enumerate}
\end{theorem}
\begin{proof}
(a) Proposition~\ref{propker} gives the exact sequence
$$ 0 \to R[\mathrm{PSL}_2(\mathbb{Z})]N_\sigma + R[\mathrm{PSL}_2(\mathbb{Z})]N_\tau \to R[\mathrm{PSL}_2(\mathbb{Z})] \to \mathcal{M}_R \to 0,$$
which we tensor with $V$ over $R$, yielding the exact sequence of left $R[\Gamma]$-modules
$$ 0 \to (R[\mathrm{PSL}_2(\mathbb{Z})] \otimes_R V) N_\sigma + (R[\mathrm{PSL}_2(\mathbb{Z})] \otimes_R V) N_\tau
\to (R[\mathrm{PSL}_2(\mathbb{Z})] \otimes_R V) \to \mathcal{M}_R(V) \to 0.$$
Passing to left $\Gamma$-coinvariants yields~(a) because $M N_\sigma$ and $M N_\tau$ are the images
of $(R[\mathrm{PSL}_2(\mathbb{Z})] \otimes_R V) N_\sigma$ and $(R[\mathrm{PSL}_2(\mathbb{Z})] \otimes_R V) N_\tau$ inside $M$,
respectively.
(b) is clear from the definition and (c) has already been observed
in the proof of Proposition~\ref{hparnull}.
\end{proof}
In the literature on Manin symbols one usually finds a more explicit
version of the induced module. This is the content of the following
proposition. It establishes the link with the main theorem
on Manin symbols in \cite{Stein}, namely Theorem~8.4.
Since in the following proposition left and right actions
are involved, we sometimes indicate left (co-)invariants by
using left subscripts (resp.\ superscripts) and right
(co-)invariants by right ones.
\begin{proposition}\label{indprop}
Let $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to R^\times$ be a character such that
$\chi(-1) = (-1)^k$.
Consider the $R$-module
$$X := R[\Gamma_1(N) \backslash \mathrm{SL}_2(\mathbb{Z})] \otimes_R V_{k-2}(R) \otimes_R R^\chi$$
equipped with the right $\mathrm{SL}_2(\mathbb{Z})$-action
$(\Gamma_1(N) h \otimes v \otimes r)g = (\Gamma_1(N) hg \otimes g^{-1}v \otimes r)$
and with the left $\Gamma_1(N) \backslash \Gamma_0(N)$-action
$g (\Gamma_1(N) h \otimes v \otimes r) = (\Gamma_1(N) g h \otimes v \otimes \chi(g)r)$.
Then
$$ X \cong {\rm Ind}_{\Gamma_1(N)}^{\mathrm{SL}_2(\mathbb{Z})}(V_k^\chi(R))$$
as a right $R[\mathrm{SL}_2(\mathbb{Z})]$-module and a left $R[\Gamma_1(N) \backslash \Gamma_0(N)]$-module.
Moreover,
$${}_{\Gamma_1(N) \backslash \Gamma_0(N)} X \cong {\rm Ind}_{\Gamma_0(N)}^{\mathrm{SL}_2(\mathbb{Z})}(V_k^\chi(R)).$$
If $N \ge 3$, then the latter module is isomorphic
to ${\rm Ind}_{\Gamma_0(N)/\{\pm 1\}}^{\mathrm{PSL}_2(\mathbb{Z})}(V_k^\chi(R))$.
\end{proposition}
\begin{proof}
Mapping
$g \otimes v \otimes r$ to $g \otimes g^{-1}v \otimes r$
defines an isomorphism of right $R[\mathrm{SL}_2(\mathbb{Z})]$-modules and of left
$R[\Gamma_1(N) \backslash \Gamma_0(N)]$-modules
$$ {}_{\Gamma_1(N)} (R[\mathrm{SL}_2(\mathbb{Z})] \otimes_R V_{k-2}(R) \otimes_R R^\chi) \to X.$$
As we have seen above, the left hand side module is naturally isomorphic
to the induced module ${\rm Ind}_{\Gamma_1(N)}^{\mathrm{SL}_2(\mathbb{Z})}(V_k^\chi(R))$ (equipped with its right
$R[\mathrm{SL}_2(\mathbb{Z})]$-action described before). This establishes the first statement.
The second one follows from
${}_{\Gamma_1(N) \backslash \Gamma_0(N)} \big( {}_{\Gamma_1(N)} M \big ) = {}_{\Gamma_0(N)} M$ for
any $\Gamma_0(N)$-module~$M$.
The third statement is due to the fact that
${}_{\langle -1 \rangle} (R[\mathrm{SL}_2(\mathbb{Z})] \otimes_R V_{k-2}^\chi(R))$
is naturally isomorphic to
$R[\mathrm{PSL}_2(\mathbb{Z})] \otimes_R V_{k-2}^\chi(R)$: the element $-1$ acts
trivially on the second factor, since it acts by $\chi(-1)(-1)^{k-2} = (-1)^{2k-2} = 1$,
and the assumption $N \ge 3$ assures that $-1 \in \Gamma_0(N)$ but $-1 \not\in \Gamma_1(N)$.
\end{proof}
For one more description of the induced module
${\rm Ind}_{\Gamma_0(N)/\{\pm 1\}}^{\mathrm{PSL}_2(\mathbb{Z})}(V_k^\chi(R))$
see Exercise~\ref{exmaninp}.
It is this description that requires the least memory in an implementation.
Now all the prerequisites have been provided for implementing
Manin symbols (say for $\Gamma_0(N)$ and a character). This is the
task of Computer Exercise~\ref{cexmanin}.
\subsection{Theory: Manin symbols and group cohomology}
Let $\Gamma \le \mathrm{PSL}_2(\mathbb{Z})$ be a subgroup of finite index,
and $V$ a left $R[\Gamma]$-module for a ring~$R$.
\begin{theorem}\label{compthm}
Suppose that the orders of all stabiliser subgroups of~$\Gamma$
for the action on~$\mathbb{H}$ are invertible in~$R$.
Then we have isomorphisms:
$$ \h^1(\Gamma,V) \cong \mathcal{M}_R(\Gamma,V)$$
and
$$ \h_{\mathrm{par}}^1(\Gamma,V) \cong \mathcal{CM}_R(\Gamma,V).$$
\end{theorem}
\begin{proof}
As before, set $M={\rm Ind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})} (V)$ and recall that this module is isomorphic to ${\rm Coind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})} (V)$.
To see the first statement, in view of Theorem~\ref{ManinSymbols}
and the corollary of the Mayer-Vietoris exact sequence (Corollary~\ref{corhzweinull}), it suffices to
show $M^{\langle \sigma \rangle} = MN_\sigma$ and $M^{\langle \tau \rangle} = MN_\tau$.
By the resolution of~$R$ for a cyclic group in~\eqref{eq:res-cyclic-finite}, the quotient $M^{\langle \sigma \rangle} / MN_\sigma$
is equal to $H^2(\langle \sigma \rangle,M)$, but this one is zero by the application of Mackey's formula done in Lemma~\ref{mackeypsl}~(c).
The same argument works with $\tau$ instead of~$\sigma$.
The passage to the parabolic/cuspidal subspaces is immediate because the boundary map with source $M$ has the same explicit description in both
cases (see Theorem~\ref{ManinSymbols}~(c) and Proposition~\ref{leraypar}).
\end{proof}
\subsection{Algorithms and Implementations: Conversion between Manin and mo\-dular symbols}
We now use the Euclidean Algorithm to represent any element
$g \in \mathrm{PSL}_2(\mathbb{Z})$ in terms of $\sigma$ and~$T$.
\begin{algorithm}\label{algpsl}
\underline{Input}: A matrix~$M = \mat abcd$ with integer entries and determinant~$1$.
\underline{Output}: A list of matrices $[A_1,A_2,\dots,A_n]$ such that $M = A_1 A_2 \cdots A_n$
in $\mathrm{PSL}_2(\mathbb{Z})$, where all $A_i \in \{T^n \;|\; n \in \mathbb{Z}\} \cup \{\sigma\}$ and the factors $\sigma$ and $T^n$ alternate.
\begin{enumerate}[(1)]
\itemsep=0cm plus 0pt minus 0pt
\item create an empty list {\tt output}.
\item if $|c| > |a|$ then
\item \hspace*{1cm} append $\sigma$ to {\tt output}.
\item \hspace*{1cm} $M := \sigma M$.
\item end if;
\item while $c \neq 0$ do
\item \hspace*{1cm} $q := a \textnormal{ div } c$.
\item \hspace*{1cm} append $T^q$ to {\tt output}.
\item \hspace*{1cm} append $\sigma$ to {\tt output}.
\item \hspace*{1cm} $M := \sigma T^{-q} M$.
\item end while;
\item if $M \not\in \{\mat 1001,\mat {-1}00{-1}\}$ then $\;\;\;$
[At this point $M \in \{ \mat 1*01, \mat {-1}*0{-1}\}$.]
\item \hspace*{1cm} append $M$ to {\tt output}.
\item end if;
\item return {\tt output}.
\end{enumerate}
\end{algorithm}
This algorithm gives a constructive proof of the fact (Proposition~\ref{prop:gensl2z}) that $\mathrm{PSL}_2(\mathbb{Z})$
is generated by $\sigma$ and~$T$, and hence also by $\sigma$ and~$\tau$.
Note, however, that the algorithm does not necessarily give the shortest
such representation. See Exercise~\ref{exctdfrac} for a relation to
continued fractions.
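Algorithm~\ref{algpsl} translates directly into code. The following Python sketch represents matrices as $2\times 2$ integer lists; Python's floor division plays the role of $\textnormal{div}$, which may change the intermediate quotients for negative entries but not the correctness of the output. The product of the returned factors is $\pm M$, i.e.\ equals $M$ in $\mathrm{PSL}_2(\mathbb{Z})$:

```python
SIGMA = [[0, -1], [1, 0]]

def mat_mul(A, B):
    """Product of 2x2 integer matrices."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def T_power(q):
    return [[1, q], [0, 1]]

def decompose(M):
    """Algorithm algpsl: factor M in SL_2(Z) into sigma's and powers of T.
    The product of the returned matrices equals M up to sign."""
    output = []
    if abs(M[1][0]) > abs(M[0][0]):      # |c| > |a|
        output.append(SIGMA)
        M = mat_mul(SIGMA, M)
    while M[1][0] != 0:                  # while c != 0
        q = M[0][0] // M[1][0]           # q = a div c
        output.append(T_power(q))
        output.append(SIGMA)
        M = mat_mul(SIGMA, mat_mul(T_power(-q), M))
    if M not in ([[1, 0], [0, 1]], [[-1, 0], [0, -1]]):
        output.append(M)                 # remaining M is +-T^b
    return output
```

For example, `decompose` applied to $\sigma$ returns $[\sigma]$, and applied to the identity returns the empty list.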
We can use the algorithm to convert between modular symbols and
Manin symbols, as follows. Suppose we are given the modular symbol
$\{\alpha,\infty\}$ (this is no loss of generality, as we can represent
$\{\alpha,\beta\} = \{\alpha,\infty\} - \{\beta,\infty\}$).
Suppose further that $\alpha$ is given as $g \infty$ for some $g \in \mathrm{SL}_2(\mathbb{Z})$:
writing the cusp as a fraction $\frac{a}{c}$ with $(a,c)=1$,
the Euclidean Algorithm yields $b,d$ such that $g=\mat abcd \in \mathrm{SL}_2(\mathbb{Z})$
satisfies the requirements.
We now use Algorithm~\ref{algpsl} to represent $g$ as $\sigma T^{a_1} \sigma T^{a_2} \sigma
\dots T^{a_n} \sigma$ (for example). Then we have
$$ \{\alpha,\infty\} = \sigma T^{a_1} \sigma T^{a_2} \sigma \dots T^{a_n} \{0,\infty\}
+ \sigma T^{a_1} \sigma T^{a_2} \sigma \dots T^{a_{n-1}} \{0,\infty\} + \dots +
\sigma T^{a_1} \{0,\infty\} + \{0,\infty\}.$$
If $g$ does not end in~$\sigma$ but in~$T^{a_n}$, then we must drop $T^{a_n}$ from the
above formula (since $T$ stabilises~$\infty$).
If $g$ starts with~$T^{a_1}$ (instead of~$\sigma$), then we must drop
the last summand.
Since we are in weight~$2$ (i.e.\ the module~$V$ is trivial), the space of Manin symbols is a quotient
of $R[\Gamma \backslash \mathrm{PSL}_2(\mathbb{Z})]$ (see Definition~\ref{defi:manin}).
The Manin symbol corresponding to the above example chosen for the modular symbol $\{\alpha,\infty\}$ is then simply represented by
the formal sum
\begin{equation}\label{eq:maninex}
\sigma T^{a_1} \sigma T^{a_2} \sigma \dots T^{a_n}
+ \sigma T^{a_1} \sigma T^{a_2} \sigma \dots T^{a_{n-1}} + \dots +
\sigma T^{a_1} + 1.
\end{equation}
If the module $V$ is not trivial, a modular symbol would typically look like $\{\alpha,\infty\} \otimes v$ for $v \in V$
and the corresponding Manin symbol would be the formal sum in~\eqref{eq:maninex} tensored with~$v$.
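The prefix structure of~\eqref{eq:maninex} is easy to extract mechanically. The following Python sketch is my own helper (not from the text); it assumes the word of~$g$ is given as a token list, e.g.\ $\sigma T^{a_1} \sigma \dots T^{a_n} \sigma$, and lists the summands: each is the prefix of the word ending just before a~$\sigma$, the empty prefix playing the role of the summand~$1$.

```python
# Hypothetical helper (my own encoding): given the word of g as a token
# list of ("sigma",) and ("T", n), return the summands of the formal sum
# in eq. (maninex).  Each summand is the prefix of the word ending just
# before a sigma; the empty prefix is the summand 1.  The summands come
# out from shortest to longest, i.e. in reverse of the displayed order.

def manin_summands(word):
    summands, prefix = [], []
    for tok in word:
        if tok[0] == "sigma":
            summands.append(list(prefix))
        prefix.append(tok)
    return summands
```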
In Computer Exercise~\ref{cexconv} you are asked to implement a conversion
between Manin and modular symbols.
\subsection{Theoretical exercises}
\begin{exercise}\label{exindcoind}
Prove Lemma~\ref{indcoind}.
\end{exercise}
\begin{exercise}\label{exmaninp}
Assume the set-up of Proposition~\ref{indprop}.
Describe a right $\mathrm{PSL}_2(\mathbb{Z})$-action on
$$ Y := R[\mathbb{P}^1(\mathbb{Z}/N\mathbb{Z})] \otimes_R V_{k-2}(R) \otimes_R R^\chi$$
and an isomorphism
$$ {}_{\Gamma_1(N)\backslash \Gamma_0(N)} X \to Y$$
of right $\mathrm{PSL}_2(\mathbb{Z})$-modules.
\end{exercise}
\begin{exercise}\label{exctdfrac}
Provide a relationship between Algorithm~\ref{algpsl} and continued fractions.
\end{exercise}
\subsection{Computer exercises}
\begin{cexercise}\label{cexmanin}
Use the description of Exercise~\ref{exmaninp} and your results from
Computer Exercises \ref{cexpeins} and~\ref{cexdirichlet} to implement Manin symbols
for $\Gamma_0(N)$ and a character over a field.
As a first approach you may use the trivial character only.
\end{cexercise}
\begin{cexercise}\label{cexconv}
\begin{enumerate}[(a)]
\item Write an algorithm to represent any element of the group $\mathrm{PSL}_2(\mathbb{Z})$ in terms
of $\sigma$ and~$T$.
\item Write an algorithm that represents any modular symbol $\{\alpha,\beta\}$
as a Manin symbol (inside the vector space created in Computer Exercise~\ref{cexmanin}).
\end{enumerate}
\end{cexercise}
\section{Eichler-Shimura}
This section is devoted to proving the theorem by Eichler and Shimura that is at the
basis of the modular symbols algorithm and its group cohomological variant.
The standard reference for the Eichler-Shimura theorem is~\cite[\S 8.2]{Shimura}.
In the entire section, let $k \ge 2$ be an integer.
\subsection{Theory: Petersson scalar product}
Recall the standard fundamental domain for $\mathrm{SL}_2(\mathbb{Z})$
$$ \mathcal{F} = \{ z=x+iy \in \mathbb{H} \,| \, |z| > 1, |x| < \frac{1}{2}\}$$
from Proposition~\ref{prop:fd}.
Every subgroup $\Gamma \le \mathrm{SL}_2(\mathbb{Z})$ of finite index has a fundamental
domain, for example, $\bigcup_{\gamma \in \overline{\Gamma} \backslash \mathrm{PSL}_2(\mathbb{Z})} \gamma \mathcal{F}$
for any choice of system of representatives of the cosets $\overline{\Gamma} \backslash \mathrm{PSL}_2(\mathbb{Z})$,
where we put $\overline{\Gamma} = \Gamma/(\langle \pm 1 \rangle \cap \Gamma)$.
\begin{lemma}\label{lmdiff}
\begin{enumerate}[(a)]
\item Let $\gamma \in \mathrm{GL}_2(\mathbb{R})^+$ be a real matrix with positive determinant,
and let $f \in \Mkg k\Gamma\mathbb{C}$ and $g \in \Skg k\Gamma\mathbb{C}$. For all
$z \in \mathbb{H}$ we have
$$ f(\gamma z) \overline{g(\gamma z)} (\gamma z - \gamma \overline{z})^k =
\det(\gamma)^{2-k} f|_\gamma (z) \overline{g|_\gamma (z)} (z - \overline{z})^k.$$
The function
$G(z) := f(z) \overline{g(z)} (z-\overline{z})^k$ is bounded on~$\mathbb{H}$.
\item We have $d\gamma z = \frac{\det(\gamma)}{(cz+d)^2} dz$ for all $\gamma \in \mathrm{GL}_2(\mathbb{R})^+$.
\item The differential form $\frac{dz \wedge d\overline{z}}{(z-\overline{z})^2}$ is
$\mathrm{GL}_2(\mathbb{R})^+$-invariant. In terms of $z=x+iy$ we have
$\frac{dz \wedge d\overline{z}}{(z-\overline{z})^2} = \frac{i}{2} \frac{dx \wedge dy}{y^2}$.
\item Let $\Gamma \le \mathrm{SL}_2(\mathbb{Z})$ be a subgroup
with finite index $\mu = (\mathrm{PSL}_2(\mathbb{Z}):\overline{\Gamma})$.
The volume of any fundamental domain $\mathcal{F}_\Gamma$ for~$\Gamma$ with
respect to the differential form $\frac{2 dz \wedge d\overline{z}}{i(z-\overline{z})^2}$, i.e.\
$$ \vol (\mathcal{F}_\Gamma) = \int_{\mathcal{F}_\Gamma} \frac{2dz \wedge d\overline{z}}{i(z-\overline{z})^2},$$
is equal to $\mu \frac{\pi}{3}$.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) The first statement is computed as follows:
\begin{align*}
& f(\gamma z) \overline{g(\gamma z)} (\gamma z - \gamma \overline{z})^k \\
=& \det(\gamma)^{2(1-k)}(f|_\gamma (z) (cz+d)^k) \overline{(g|_\gamma(z) (cz +d)^k)}
(\frac{az+b}{cz+d} - \frac{a\overline{z}+b}{c\overline{z} + d})^k\\
=&\det(\gamma)^{2-2k}f|_\gamma (z) \overline{g|_\gamma(z)} ((az+b)(c\overline{z}+d)-(a\overline{z}+b)(cz+d))^k\\
=& \det(\gamma)^{2-k} f|_\gamma (z) \overline{g|_\gamma (z)} (z - \overline{z})^k,
\end{align*}
where we write $\gamma = \mat abcd$.
By the preceding computation, the function $G(z)$ is invariant under
$\gamma \in \Gamma$. Hence, it suffices to check that
$|G(z)|$ is bounded on the closure of any fundamental domain $\mathcal{F}_\Gamma$ for~$\Gamma$.
For this, it is enough to verify for every $\gamma$ in a system of
representatives of $\Gamma \backslash \mathrm{SL}_2(\mathbb{Z})$ that any of the functions $G(\gamma z)$
is bounded on the closure of the standard fundamental domain~$\mathcal{F}$.
By the preceding computation, we also have
$G(\gamma z) = f|_\gamma (z) \overline{g|_\gamma (z)} (z - \overline{z})^k$ for
$\gamma \in \mathrm{SL}_2(\mathbb{Z})$.
Note that $f(z) g(z)$ is a cusp form in $\Skg {2k}\Gamma\mathbb{C}$, in particular,
for every $\gamma \in \mathrm{SL}_2(\mathbb{Z})$ the function
$f|_\gamma (z) g |_\gamma (z)$ has a Fourier expansion in~$\infty$ of
the form $\sum_{n=1}^\infty a_n e^{2 \pi i z n}$. This series converges
absolutely and uniformly on compact subsets of~$\mathbb{H}$, in particular, for any $C > 1$
$$ K_\gamma := \sum_{n=1}^\infty |a_n e^{2 \pi i (x+iC) n}|
= \sum_{n=1}^\infty |a_n| e^{-2 \pi C n} $$
is a positive real number, depending on $\gamma$ (in a system of representatives
$\Gamma \backslash \mathrm{SL}_2(\mathbb{Z})$).
We have with $ z = x + iy$ and $y \ge C$
\begin{align*}
|G(\gamma z)| \le (2 y)^k \sum_{n=1}^\infty |a_n| e^{-2 \pi y n}
&= (2 y)^k e^{-2 \pi y} \sum_{n=1}^\infty |a_n| e^{-2 \pi y (n-1)} \\
& \le (2 y)^k e^{-2 \pi y} \sum_{n=1}^\infty |a_n| e^{-2 \pi C (n-1)} \\
& \le (2 y)^k e^{-2 \pi y} K_\gamma e^{2 \pi C}.
\end{align*}
This tends to~$0$ if $y$ tends to~$\infty$. Consequently, the function~$G(\gamma z)$
is bounded on the closure of the standard fundamental domain, as desired.
(b) Again writing $\gamma = \mat abcd$ we have
$$\frac{d\gamma z}{dz}= \frac{d\frac{az+b}{cz+d}}{dz} = \frac{1}{(cz+d)^2} (a(cz+d)-(az+b)c)
= \frac{\det(\gamma)}{(cz+d)^2},$$
which gives the claim.
(c) This is again a simple computation:
\begin{align*}
(\gamma z - \gamma \overline{z})^{-2} d\gamma z \wedge d \gamma \overline{z}
&= \det(\gamma)^2(\frac{az+b}{cz+d} - \frac{a\overline{z}+b}{c\overline{z} + d})^{-2} (cz+d)^{-2}(c\overline{z}+d)^{-2} dz\wedge d\overline{z}\\
&= (z-\overline{z})^{-2} dz \wedge d\overline{z},
\end{align*}
using~(b). The last statement is
$$ \frac{dz \wedge d\overline{z}}{(z-\overline{z})^2} = \frac{(dx +idy)\wedge(dx - i dy)}{(2iy)^2}
= \frac{-2i dx \wedge dy}{-4y^2} = \frac{i dx \wedge dy}{2y^2}.$$
(d) Due to the $\Gamma$-invariance, it suffices to show
$$ \int_\mathcal{F} \frac{dz \wedge d\overline{z}}{(z - \overline{z})^2} = \frac{i\pi}{6}.$$
Let $\omega = - \frac{dz}{z-\overline{z}}$. The total derivative of~$\omega$ is
$$d \omega = ((z-\overline{z})^{-2} dz - (z - \overline{z})^{-2} d \overline{z}) \wedge dz = \frac{dz \wedge d \overline{z}}{(z-\overline{z})^2}.$$
Hence, Stokes' theorem yields
$$ \int_\mathcal{F} \frac{dz \wedge d\overline{z}}{(z - \overline{z})^2} = - \int_{\partial \mathcal{F}} \frac{dz}{z-\overline{z}},$$
where $\partial \mathcal{F}$ is the positively oriented border of~$\mathcal{F}$, which we describe
concretely as the path $A$ from $\infty$ to~$\zeta_3$ on the vertical line, followed by the path
$C$ from $\zeta_3$ to~$\zeta_6$ on the unit circle and finally followed by $-TA$.
Hence with $z =x+iy$ we have
$$ \int_\mathcal{F} \frac{dz \wedge d\overline{z}}{(z - \overline{z})^2}
= -\frac{1}{2i} \big(\int_A \frac{dz}{y} - \int_{TA} \frac{dz}{y} + \int_C \frac{dz}{y} \big)
= -\frac{1}{2i} \int_C \frac{dz}{y},$$
since $dz = dTz$. Using the obvious parametrisation of~$C$ we obtain
\begin{multline*}
-\frac{1}{2i} \int_C \frac{dz}{y}
= -\frac{1}{2i} \int_{2\pi/3}^{2\pi/6} \frac{1}{\Imag(e^{i\phi})} \frac{d e^{i\phi}}{d\phi}d\phi
= -\frac{1}{2} \int_{2\pi/3}^{2\pi/6} \frac{e^{i\phi}}{\Imag(e^{i\phi})} d\phi\\
= -\frac{1}{2} \int_{2\pi/3}^{2\pi/6} (\frac{\cos(\phi)}{\sin(\phi)}+ i) d\phi
= -\frac{i}{2} (\frac{2\pi}{6} - \frac{2\pi}{3}) = \frac{i\pi}{6},
\end{multline*}
since $\sin$ is symmetric around $\pi/2$ and $\cos$ is antisymmetric, so that
the integral over $\frac{\cos(\phi)}{\sin(\phi)}$ cancels.
\end{proof}
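The value $\frac{\pi}{3}$ in part~(d) can also be checked numerically for $\Gamma = \mathrm{SL}_2(\mathbb{Z})$ (so $\mu = 1$). In the coordinates of part~(c) the volume form is $\frac{dx\, dy}{y^2}$; integrating $y$ from $\sqrt{1-x^2}$ to $\infty$ over the standard fundamental domain leaves $\int_{-1/2}^{1/2} \frac{dx}{\sqrt{1-x^2}} = \frac{\pi}{3}$. The following Python snippet (my own sanity check, not part of the proof) approximates this integral with a midpoint rule:

```python
import math

# Numerical sanity check of Lemma part (d) for Gamma = SL_2(Z), mu = 1:
# vol(F) = int_{-1/2}^{1/2} dx / sqrt(1 - x^2) = pi/3.

def volume_F(n=100000):
    h = 1.0 / n
    return sum(h / math.sqrt(1 - (-0.5 + (i + 0.5) * h) ** 2) for i in range(n))
```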
\begin{definition}
Let $\Gamma \le \mathrm{SL}_2(\mathbb{Z})$ be a subgroup of finite index and
$\mu := (\mathrm{PSL}_2(\mathbb{Z}):\overline{\Gamma})$ be the index of
$\overline{\Gamma} = \Gamma/(\langle \pm 1 \rangle \cap \Gamma)$ in
$\mathrm{PSL}_2(\mathbb{Z})$.
We define the {\em Petersson pairing} as
\begin{align*}
\Mkg k\Gamma\mathbb{C} \times \Skg k\Gamma\mathbb{C} \to & \mathbb{C}\\
(f,g) \mapsto &\frac{-1}{(2i)^{k-1}\mu} \int_{\mathcal{F}_\Gamma} f(z) \overline{g(z)} (z-\overline{z})^k
\frac{dz \wedge d\overline{z}}{(z-\overline{z})^2} \\
= &\frac{1}{\mu} \int_{\mathcal{F}_\Gamma} f(z) \overline{g(z)} y^{k-2}
dx \wedge dy =: (f,g),
\end{align*}
where $\mathcal{F}_\Gamma$ is any fundamental domain for~$\Gamma$.
\end{definition}
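The equality of the two integrands in the definition is the following one-line computation, using $z - \overline{z} = 2iy$ and $dz \wedge d\overline{z} = -2i\, dx \wedge dy$ from Lemma~\ref{lmdiff}~(c):

```latex
\frac{-1}{(2i)^{k-1}}\, (z-\overline{z})^k\, \frac{dz \wedge d\overline{z}}{(z-\overline{z})^2}
= \frac{-1}{(2i)^{k-1}}\, (2iy)^{k-2}\, (-2i\, dx \wedge dy)
= y^{k-2}\, dx \wedge dy.
```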
\begin{proposition}
\begin{enumerate}[(a)]
\item The integral in the Petersson pairing converges. It does not
depend on the choice of the fundamental domain $\mathcal{F}_\Gamma$.
\item The Petersson pairing is a sesqui-linear pairing (linear in the first
and anti-linear in the second variable).
\item The restriction of the Petersson pairing to $\Skg k\Gamma\mathbb{C}$ is a positive
definite scalar product (the {\em Petersson scalar product}).
\item If $f,g$ are modular (cusp) forms for the group~$\Gamma$ and $\Gamma' \le \Gamma$
is a subgroup of finite index, then the Petersson pairing of $f$ and~$g$
with respect to~$\Gamma$ gives the same value as the one with respect to~$\Gamma'$.
\end{enumerate}
\end{proposition}
\begin{proof}
(a) By Lemma~\ref{lmdiff} the integral converges, since the function
$$G(z) := f(z) \overline{g(z)} (z-\overline{z})^k$$
is bounded on~$\mathcal{F}_\Gamma$ and the
volume of $\mathcal{F}_\Gamma$ for the measure in question is finite.
The integral does not depend on the choice of the fundamental domain
by the invariance of~$G(z)$ under~$\Gamma$.
(b) is clear.
(c) We have
$$(f,f) =\frac{1}{\mu}\int_{\mathcal{F}_\Gamma} |f(z)|^2 y^{k-2}dx \wedge dy,$$
which is clearly non-negative. It is~$0$ if and only if $f$ is the zero function,
showing that the product is positive definite.
(d) If $\mathcal{F}_\Gamma$ is a fundamental domain for~$\Gamma$, then
$\bigcup_{\gamma \in \Gamma' \backslash \Gamma} \gamma \mathcal{F}_\Gamma$ is a fundamental
domain for~$\Gamma'$ (for any choice of representatives of $\Gamma' \backslash \Gamma$).
But on every $\gamma \mathcal{F}_\Gamma$ the integral takes the same value.
\end{proof}
\begin{proposition}\label{petexplicit}
Let $f,g \in \Skg k\Gamma\mathbb{C}$. We have
$$ (f,g) = \frac{-1}{(2i)^{k-1}\mu}\sum_{\gamma \in \overline{\Gamma}\backslash \mathrm{PSL}_2(\mathbb{Z})}
\int_{\zeta_3}^i \int_\infty^0 f|_\gamma (z) \overline{g|_\gamma(z)} (z-\overline{z})^{k-2} dz d\overline{z}.$$
\end{proposition}
\begin{proof}
Let us write for short $G_\gamma (z,\overline{z}) = f|_\gamma (z) \overline{g|_\gamma(z)} (z-\overline{z})^k$
for $\gamma \in \mathrm{SL}_2(\mathbb{Z})$.
Then
$$ -(2i)^{k-1}\mu (f,g)
= \int_{\bigcup_\gamma \gamma\mathcal{F}} G(z,\overline{z}) \frac{dz \wedge d\overline{z}}{(z-\overline{z})^2}
= \sum_\gamma \int_\mathcal{F} G_\gamma(z,\overline{z}) \frac{dz \wedge d\overline{z}}{(z-\overline{z})^2}$$
by Lemma~\ref{lmdiff}, where the union resp.\ sum runs over a fixed system
of coset representatives of $\overline{\Gamma} \backslash \mathrm{PSL}_2(\mathbb{Z})$; by our
observations, everything is independent of this choice.
Consider the differential form
$$ \omega_\gamma :=
\big( \int_\infty^z f|_\gamma(u)(u-\overline{z})^{k-2}du \big) \overline{g|_\gamma(z)} d\overline{z}.$$
Note that the integral converges since $f$ is a cusp form.
The total derivative of $\omega_\gamma$ is
$d\omega_\gamma = G_\gamma (z,\overline{z}) \frac{dz \wedge d\overline{z}}{(z-\overline{z})^2}$.
Consequently, Stokes' theorem gives
$$ \sum_\gamma \int_\mathcal{F} G_\gamma(z,\overline{z}) \frac{dz \wedge d\overline{z}}{(z-\overline{z})^2} =
\sum_\gamma \int_{\partial \mathcal{F}} \big( \int_\infty^z f|_\gamma(u)(u-\overline{z})^{k-2}du \big) \overline{g|_\gamma(z)} d\overline{z},$$
where as above $\partial \mathcal{F}$ is the positively oriented border of the standard
fundamental domain~$\mathcal{F}$, which we describe as the path~$A$ along the vertical line
from $\infty$ to $\zeta_3$, followed by the path~$B$ from $\zeta_3$ to~$i$ along
the unit circle, followed by $-\sigma B$ and by~$-TA$.
We now make a small calculation. Let for this $C$ be any (piecewise
continuously differentiable) path in~$\mathbb{H}$ and $M \in \mathrm{SL}_2(\mathbb{Z})$:
\begin{align*}
&\int_{M C} \int_\infty^z f|_\gamma(u) \overline{g|_\gamma(z)} (u-\overline{z})^{k-2}du d\overline{z}\\
=& \int_C \int_\infty^{Mz} f|_\gamma(u) \overline{g|_\gamma(Mz)} (u-M\overline{z})^{k-2} du \frac{dM\overline{z}}{d\overline{z}} d\overline{z} \\
=& \int_C \int_{M^{-1}\infty}^z f|_{\gamma M}(u) \overline{g|_{\gamma M}(z)} (u-\overline{z})^{k-2} du d\overline{z}\\
=& \int_C \int_\infty^z f|_{\gamma M}(u) \overline{g|_{\gamma M}(z)} (u-\overline{z})^{k-2} du d\overline{z}
- \int_C \int_\infty^{M^{-1}\infty} f|_{\gamma M}(u) \overline{g|_{\gamma M}(z)} (u-\overline{z})^{k-2} du d\overline{z}.
\end{align*}
This gives
\begin{multline*}
\int_{C - MC} \int_{\infty}^z f|_\gamma(u) \overline{g|_\gamma(z)} (u-\overline{z})^{k-2}du d\overline{z} = \\
\int_C \int_\infty^z (G_\gamma(u,\overline{z})-G_{\gamma M}(u,\overline{z}))du d\overline{z}
+ \int_C \int_\infty^{M^{-1}\infty} G_{\gamma M}(u,\overline{z}) du d\overline{z}.
\end{multline*}
Continuing with the main calculation, we have
\begin{align*}
& -(2i)^{k-1}\mu(f,g) \\
=& \sum_\gamma \big[ \int_A \int_\infty^z (G_\gamma (u,\overline{z}) - G_{\gamma T} (u,\overline{z})) du d\overline{z}
+ \int_A \int_\infty^{T^{-1}\infty} G_{\gamma T} (u,\overline{z}) du d\overline{z} \big] \\
+& \sum_\gamma \big[ \int_B \int_\infty^z (G_\gamma (u,\overline{z}) - G_{\gamma \sigma} (u,\overline{z})) du d\overline{z}
+ \int_B \int_\infty^{\sigma^{-1}\infty} G_{\gamma \sigma} (u,\overline{z}) du d\overline{z} \big]\\
=& \sum_\gamma \int_B \int_\infty^0 G_{\gamma \sigma}(u,\overline{z}) du d\overline{z},
\end{align*}
using $T^{-1} \infty = \infty$, $\sigma^{-1} \infty = 0$ and the fact that the $\gamma T$
and $\gamma \sigma$ are just permutations of the cosets.
\end{proof}
\subsection{Theory: The Eichler-Shimura map}
Let $\Gamma \le \mathrm{SL}_2(\mathbb{Z})$ be a subgroup of finite index.
We fix some $z_0 \in \mathbb{H}$.
For $f \in \Mkg k\Gamma\mathbb{C}$ with $k \ge 2$ and $\gamma,\delta$ in $\mathbb{Z}^{2 \times2}$ with positive determinant, let
$$ I_f(\gamma z_0,\delta z_0) := \int_{\gamma z_0}^{\delta z_0} f(z) (Xz+Y)^{k-2} dz \in V_{k-2}(\mathbb{C}).$$
The integral is to be taken coefficient wise.
Note that it is independent of the chosen path since we are integrating holomorphic functions.
\begin{lemma}\label{esmaplem}
For any $z_0 \in \mathbb{H}$ and any matrices $\gamma,\delta \in \mathbb{Z}^{2 \times 2}$ with
positive determinant we have
$$ I_f(z_0,\gamma \delta z_0) = I_f(z_0,\gamma z_0) + I_f(\gamma z_0,\gamma \delta z_0)$$
and
$$ I_f(\gamma z_0,\gamma \delta z_0) = \det(\gamma)^{2-k} (\gamma.\big(I_{f|_\gamma} (z_0,\delta z_0)\big) )
= (\det(\gamma)^{-1} \gamma).\big(I_{f|_\gamma} (z_0,\delta z_0)\big). $$
\end{lemma}
\begin{proof}
The first statement is clear. Write $\gamma = \mat abcd$.
Recall that by Lemma~\ref{lmdiff}~(b), we have $d\gamma z = \frac{\det(\gamma)}{(cz+d)^2} dz.$
We compute further
\begin{align*}
I_f(\gamma z_0,\gamma\delta z_0) & = \int_{\gamma z_0}^{\gamma \delta z_0} f(z) (Xz+Y)^{k-2} dz\\
& = \int_{z_0}^{\delta z_0} f(\gamma z) (X\gamma z+Y)^{k-2} \frac{d\gamma z}{dz} dz\\
&= \det(\gamma)^{2-k}\int_{z_0}^{\delta z_0} f|_\gamma (z) (cz+d)^{k-2} (X\frac{az+b}{cz+d}+Y)^{k-2} dz\\
&= \det(\gamma)^{2-k}\int_{z_0}^{\delta z_0} f|_\gamma (z) (X(az+b)+Y(cz+d))^{k-2} dz\\
&= \det(\gamma)^{2-k}\int_{z_0}^{\delta z_0} f|_\gamma (z) ((Xa+Yc)z + (Xb+Yd))^{k-2} dz\\
&= \det(\gamma)^{2-k}\int_{z_0}^{\delta z_0} f|_\gamma (z) (\gamma.(Xz+Y))^{k-2} dz\\
&= \det(\gamma)^{2-k}\cdot \gamma .\big(\int_{z_0}^{\delta z_0} f|_\gamma (z) (Xz+Y)^{k-2} dz\big)\\
&= \det(\gamma)^{2-k}\cdot \gamma .\big(I_{f|_\gamma } (z_0,\delta z_0)\big).
\end{align*}
We recall that for a polynomial $P(X,Y)$ we have the action
$$(g.P)(X,Y) = P((X,Y) \mat abcd) = P(Xa+Yc,Xb+Yd).$$
\end{proof}
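The key identity $(cz+d)^{k-2}\,(X\gamma z+Y)^{k-2} = (\gamma.(Xz+Y))^{k-2}$ used in the computation can be checked numerically. The following Python sketch is my own encoding of $V_{k-2}(\mathbb{C})$ as coefficient lists (not fixed by the text); it implements the right action recalled at the end of the proof and verifies the identity for one sample matrix:

```python
from math import comb

# My own encoding: homogeneous P = sum_j c_j X^j Y^(n-j) of degree n = k-2
# is stored as the list [c_0, ..., c_n].  The right action is
# (g.P)(X,Y) = P(Xa+Yc, Xb+Yd), as recalled at the end of the proof.

def act(g, coeffs):
    a, b, c, d = g
    n = len(coeffs) - 1
    out = [0j] * (n + 1)
    for j, cj in enumerate(coeffs):
        # expand c_j (Xa+Yc)^j (Xb+Yd)^(n-j), collecting X^(p+q) Y^(n-p-q)
        for p in range(j + 1):
            for q in range(n - j + 1):
                out[p + q] += (cj * comb(j, p) * a**p * c**(j - p)
                                  * comb(n - j, q) * b**q * d**(n - j - q))
    return out

# Check gamma.(Xz+Y)^n = (X(az+b) + Y(cz+d))^n coefficient-wise:
z, n = 0.3 + 0.7j, 4
a, b, c, d = 2, 1, 1, 1
lhs = act((a, b, c, d), [comb(n, j) * z**j for j in range(n + 1)])
rhs = [comb(n, j) * (a*z + b)**j * (c*z + d)**(n - j) for j in range(n + 1)]
assert all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs))
```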
\begin{definition}
The space of {\em antiholomorphic cusp forms} $\overline{\Skg k\Gamma\mathbb{C}}$
consists of the functions $z \mapsto {\overline{f}}(z) := \overline{f(z)}$
with $f \in \Skg k\Gamma\mathbb{C}$.
\end{definition}
We can consider an antiholomorphic cusp form as a power series in~$\overline{z}$. For instance,
if $f(z)=\sum_{n=1}^\infty a_n e^{2 \pi i n z}$, then $\overline{f(z)} = \sum_{n=1}^\infty \overline{a_n} e^{2 \pi i n (-\overline{z})}
= \tilde{f}(-\overline{z})$, where $\tilde{f}(z)=\sum_{n=1}^\infty \overline{a_n} e^{2 \pi i n z}$.
Note that
\begin{equation}\label{eq:int-bar}
\int_\alpha \overline{F(z)} d\overline{z}
= \int_0^1 \overline{F(\alpha(t))} \frac{d\overline{\alpha}}{dt} dt
= \int_0^1 \overline{F(\alpha(t))} \overline{\frac{d\alpha}{dt}} dt
= \overline{\int_0^1 F(\alpha(t)) \frac{d\alpha}{dt} dt}
= \overline{\int_\alpha F(z) dz}
\end{equation}
for any piecewise analytic path $\alpha:[0,1] \to \mathbb{C}$ and any integrable
complex valued function~$F$.
This means for $f \in \Skg k\Gamma\mathbb{C}$:
$$ \overline{I_f(\gamma z_0, \delta z_0)} =
\int_{\gamma z_0}^{\delta z_0} \overline{f(z)} (X\overline{z}+Y)^{k-2} d\overline{z} \in V_{k-2}(\mathbb{C}).$$
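Identity~\eqref{eq:int-bar} is elementary, but it may be reassuring to see it at the level of Riemann sums, where conjugation passes through the finite sums exactly as in the displayed chain. The following Python snippet is a sanity check with arbitrarily chosen data (the function $F(z) = z^2 e^z$ and the path $\alpha(t) = t + it^2$ are my own choices, not from the text):

```python
import cmath

# Numerical illustration of eq. (int-bar) with F(z) = z^2 exp(z) along the
# path alpha(t) = t + i t^2, using a midpoint Riemann sum.

def F(z):
    return z * z * cmath.exp(z)

def alpha(t):
    return t + 1j * t * t

def alpha_prime(t):
    return 1 + 2j * t

n = 100000
h = 1.0 / n
lhs = sum(F(alpha((i + 0.5) * h)).conjugate()
          * alpha_prime((i + 0.5) * h).conjugate() * h for i in range(n))
rhs = sum(F(alpha((i + 0.5) * h))
          * alpha_prime((i + 0.5) * h) * h for i in range(n)).conjugate()
assert abs(lhs - rhs) < 1e-12
```

For this choice the integral $\int_\alpha z^2 e^z\, dz$ has the closed form $(z^2-2z+2)e^z \big|_0^{1+i} = -2$, which the sum reproduces.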
\begin{proposition}\label{esmap}
Let $k \ge 2$ and $\Gamma \le \mathrm{SL}_2(\mathbb{Z})$ be a subgroup of finite
index and fix $z_0,z_1 \in \mathbb{H}$.
\begin{enumerate}[(a)]
\item The {\em Eichler-Shimura map}
\begin{align*}
\Mkg k\Gamma\mathbb{C} \oplus \overline{\Skg k\Gamma\mathbb{C}} & \to \h^1(\Gamma,V_{k-2}(\mathbb{C})),\\
(f,{\overline{g}}) & \mapsto (\gamma \mapsto I_f(z_0,\gamma z_0) + \overline{I_g(z_1,\gamma z_1)})
\end{align*}
is a well-defined homomorphism of $\mathbb{C}$-vector spaces.
It does not depend on the choice of $z_0$ and~$z_1$.
\item The {\em induced Eichler-Shimura map}
\begin{align*}
\Mkg k\Gamma\mathbb{C} \oplus \overline{\Skg k\Gamma\mathbb{C}} & \to \h^1(\mathrm{SL}_2(\mathbb{Z}),{\rm Hom}_{\mathbb{C}[\Gamma]}(\mathbb{C}[\mathrm{SL}_2(\mathbb{Z})],V_{k-2}(\mathbb{C}))),\\
(f,{\overline{g}}) & \mapsto (a \mapsto (b \mapsto I_f(b z_0,ba z_0) + \overline{I_g(b z_1,ba z_1)}))
\end{align*}
is a well-defined homomorphism of $\mathbb{C}$-vector spaces.
It does not depend on the choice of $z_0$ and~$z_1$.
Via the map from Shapiro's lemma, this homomorphism coincides with
the one from~(a).
\end{enumerate}
\end{proposition}
\begin{proof}
(a) For checking that the map is well-defined,
it suffices to compute that $\gamma \mapsto I_f(z_0,\gamma z_0)$ is
a $1$-cocycle:
$$ I_f(z_0,\gamma \delta z_0)
= I_f(z_0,\gamma z_0) + I_f(\gamma z_0 , \gamma \delta z_0)
= I_f(z_0,\gamma z_0) + \gamma.I_f( z_0 , \delta z_0),$$
using Lemma~\ref{esmaplem} and $f|_\gamma = f$ since $\gamma \in \Gamma$.
The independence of the base point is seen as follows.
Let $\tilde{z}_0$ be any base point.
$$ I_f(\tilde{z}_0,\gamma \tilde{z}_0) = I_f (\tilde{z}_0, z_0) + I_f(z_0, \gamma z_0) + I_f(\gamma z_0, \gamma \tilde{z}_0)
= I_f(z_0,\gamma z_0) + (1-\gamma) I_f(\tilde{z}_0,z_0).$$
The difference of the cocycles $(\gamma \mapsto I_f(\tilde{z}_0,\gamma \tilde{z}_0))$
and $(\gamma \mapsto I_f(z_0,\gamma z_0))$ is hence the coboundary
$(\gamma \mapsto (1-\gamma)I_f(\tilde{z}_0,z_0))$.
(b) We first check that the map $(b \mapsto I_f(b z_0,ba z_0) + \overline{I_g(b z_1,ba z_1)})$
is indeed in the coinduced module
${\rm Hom}_{\mathbb{C}[\Gamma]}(\mathbb{C}[\mathrm{SL}_2(\mathbb{Z})],V_{k-2}(\mathbb{C}))$.
For that let $\gamma \in \Gamma$. We have
$$ I_f(\gamma b z_0,\gamma ba z_0) = \gamma.(I_f(b z_0,ba z_0))$$
by Lemma~\ref{esmaplem}, as desired.
The map $\phi(a) := (b \mapsto I_f(b z_0,ba z_0) + \overline{I_g(b z_1,ba z_1)})$
is a cocycle:
\begin{align*}
\phi(a_1 a_2)(b) &= I_f(b z_0,ba_1 a_2 z_0) + \overline{I_g(b z_1,ba_1a_2 z_1)} \\
&=I_f(b z_0,ba_1 z_0) + I_f(ba_1 z_0,ba_1 a_2 z_0)
+ \overline{I_g(b z_1,ba_1 z_1)} + \overline{I_g(ba_1 z_1,ba_1 a_2 z_1)} \\
& = \phi(a_1)(b) + \phi(a_2) (b a_1) = \phi(a_1)(b) + (a_1.(\phi(a_2)))(b),
\end{align*}
by the definition of the left action of $\mathrm{SL}_2(\mathbb{Z})$ on the coinduced module.
Note that the map in Shapiro's lemma in our situation is given by
$$ \phi \mapsto \big(\gamma \mapsto \phi(\gamma)(1) = I_f(z_0,\gamma z_0) + \overline{I_g(z_1,\gamma z_1)}\big),$$
which shows that the maps from (a) and (b) coincide.
The independence from the base point in~(b) now follows
from the independence in~(a).
\end{proof}
Next we identify the cohomology of $\mathrm{SL}_2(\mathbb{Z})$ with the one of $\mathrm{PSL}_2(\mathbb{Z})$.
\begin{proposition}
Let $\Gamma \le \mathrm{SL}_2(\mathbb{Z})$ be a subgroup of finite index and let $R$
be a ring in which $2$ is invertible. Let $V$ be a left $R[\Gamma]$-module.
Assume that either $-1 \not\in \Gamma$ or $-1 \in \Gamma$ acts trivially on~$V$.
Then the inflation map
$$\h^1(\mathrm{PSL}_2(\mathbb{Z}),{\rm Hom}_{R[\overline{\Gamma}]}(R[\mathrm{PSL}_2(\mathbb{Z})],V)) \xrightarrow{\mathrm{infl}} \h^1(\mathrm{SL}_2(\mathbb{Z}),{\rm Hom}_{R[\Gamma]}(R[\mathrm{SL}_2(\mathbb{Z})],V))$$
is an isomorphism. We shall identify these two $R$-modules from now on.
\end{proposition}
\begin{proof}
If $-1 \not\in \Gamma$, then $\Gamma \cong \overline{\Gamma}$ and
${\rm Hom}_{R[\Gamma]}(R[\mathrm{SL}_2(\mathbb{Z})],V)^{\langle -1 \rangle}$ consists of all the
functions satisfying $f(g) = f(-g)$ for all $g \in \mathrm{SL}_2(\mathbb{Z})$, which are precisely
the functions in ${\rm Hom}_{R[\overline{\Gamma}]}(R[\mathrm{PSL}_2(\mathbb{Z})],V)$.
If $-1 \in \Gamma$ and $-1$ acts trivially on~$V$, then $f(-g) = (-1).f(g) = f(g)$
and so $-1$ already acts trivially on ${\rm Hom}_{R[\Gamma]}(R[\mathrm{SL}_2(\mathbb{Z})],V)$. This
$R[\mathrm{SL}_2(\mathbb{Z})]$-module is then naturally isomorphic to ${\rm Hom}_{R[\overline{\Gamma}]}(R[\mathrm{PSL}_2(\mathbb{Z})],V)$
since any function is uniquely determined on its classes modulo $\langle -1 \rangle$.
Due to the invertibility of~$2$, the Hochschild-Serre exact sequence (Theorem~\ref{thm:hochschild-serre})
shows that inflation indeed gives the desired isomorphism because the third term
$\h^1(\langle \pm 1\rangle, {\rm Hom}_{R[\Gamma]}(R[\mathrm{SL}_2(\mathbb{Z})],V))$ in the inflation-restriction sequence is zero
(see Proposition~\ref{corres}).
\end{proof}
\begin{proposition}\label{esres}
The kernel of the Eichler-Shimura map composed with the restriction
$$\Mkg k\Gamma\mathbb{C} \oplus \overline{\Skg k\Gamma\mathbb{C}}
\to \h^1(\Gamma,V_{k-2}(\mathbb{C})) \to
\prod_{c \in \Gamma \backslash \mathbb{P}^1(\mathbb{Q})} \h^1(\Gamma_c,V_{k-2}(\mathbb{C}))$$
is equal to $\Skg k\Gamma\mathbb{C} \oplus \overline{\Skg k\Gamma\mathbb{C}}$.
In particular, the image of $\Skg k\Gamma\mathbb{C} \oplus \overline{\Skg k\Gamma\mathbb{C}}$
under the Eichler-Shimura map lies in the parabolic cohomology
$\h_{\mathrm{par}}^1(\Gamma, V_{k-2}(\mathbb{C}))$.
\end{proposition}
\begin{proof}
In order to simplify the notation of the proof, we shall only prove the case of a modular form $f \in \Mkg k\Gamma\mathbb{C}$.
The statement for anti-holomorphic forms is proved in the same way.
The composition maps the modular form~$f$ to the $1$-cocycle (for $\gamma \in \Gamma_c$)
$$ \gamma \mapsto \int_{z_0}^{\gamma z_0} f(z) (Xz+Y)^{k-2} dz$$
with a fixed base point~$z_0 \in \mathbb{H}$. The aim is now to move the
base point to the cusps. We cannot just replace $z_0$ by~$\infty$,
as then the integral might not converge any more (it converges on cusp
forms).
Let $c = M\infty$ be any cusp with $M = \mat abcd \in \mathrm{SL}_2(\mathbb{Z})$.
We then have
$\Gamma_c = \langle MTM^{-1} \rangle \cap \Gamma = \langle MT^rM^{-1} \rangle$
for some $r \ge 1$.
Since $f$ is holomorphic in the cusps, we have
$$ f|_M(z) = \sum_{n=0}^\infty a_n e^{2\pi i n z/r} = a_0 + g(z)$$
and thus
$$ f(z) = a_0|_{M^{-1}}(z) + g|_{M^{-1}}(z) = \frac{a_0}{(-cz+a)^k} + g|_{M^{-1}}(z).$$
Now we compute the cocycle evaluated at $\gamma = MT^rM^{-1}$:
$$\int_{z_0}^{\gamma z_0} f(z) (Xz+Y)^{k-2} dz
= a_0 \int_{z_0}^{\gamma z_0} \frac{(Xz+Y)^{k-2}}{(-cz+a)^k} dz +
\int_{z_0}^ {\gamma z_0} g|_{M^{-1}}(z)(Xz+Y)^{k-2} dz. $$
Before we continue by evaluating the right summand, we remark that the integral
$$ I_{g |_{M^{-1}}} (z_0, M \infty)
= \int_{z_0}^{M\infty} g|_{M^{-1}}(z) (Xz+Y)^{k-2} dz
= M. \int_{M^{-1}z_0}^{\infty} g(z) (Xz+Y)^{k-2} dz$$
converges. We have
\begin{align*}
\int_{z_0}^{\gamma z_0} g|_{M^{-1}}(z)(Xz+Y)^{k-2} dz
&= (\int_{z_0}^{M \infty} + \int_{\gamma M\infty}^{\gamma z_0}) g|_{M^{-1}}(z)(Xz+Y)^{k-2} dz\\
&= (1-\gamma). \int_{z_0}^{M \infty} g|_{M^{-1}}(z) (Xz+Y)^{k-2} dz
\end{align*}
since $g|_{M^{-1}\gamma}(z) = g|_{T^rM^{-1}}(z) = g|_{M^{-1}}(z)$.
The $1$-cocycle
$\gamma \mapsto \int_{z_0}^{\gamma z_0} g|_{M^{-1}}(z)(Xz+Y)^{k-2} dz$
is thus a $1$-coboundary. Consequently, the class of the image of~$f$
is equal to the class of the $1$-cocycle
$$ \gamma \mapsto a_0 \int_{z_0}^{\gamma z_0} \frac{(Xz+Y)^{k-2}}{(-cz+a)^k} dz.$$
We have the isomorphism (as always for cyclic groups)
$$ \h^1(\Gamma_c,V_{k-2}(\mathbb{C})) \xrightarrow{\phi \mapsto \phi(MT^rM^{-1})} V_{k-2}(\mathbb{C})_{\Gamma_c}.$$
Furthermore, we have the isomorphism
$$ V_{k-2}(\mathbb{C})_{\Gamma_c} \xrightarrow {P \mapsto M^{-1}P}
V_{k-2}(\mathbb{C})_{\langle T^r \rangle} \xrightarrow{P \mapsto P(0,1)} \mathbb{C}$$
with polynomials~$P(X,Y)$.
Note that the last map is an isomorphism by the explicit description of
$V_{k-2}(\mathbb{C})_{\langle T^r \rangle}$.
Under the composition the image of the cocycle coming from the modular form~$f$
is
\begin{multline*} a_0 M^{-1}. \int_{z_0}^{\gamma z_0} \frac{(Xz+Y)^{k-2}}{(-cz+a)^k} dz (0,1)
= a_0 \int_{z_0}^{\gamma z_0} \frac{(Xz+Y)^{k-2}}{(-cz+a)^k} dz (-c,a)\\
= a_0 \int_{z_0}^{\gamma z_0} \frac{1}{(-cz+a)^2} dz
= a_0 \int_{M^{-1}z_0}^{T^rM^{-1}z_0} dz
= a_0 (M^{-1}z_0 + r - M^{-1}z_0) = r a_0,
\end{multline*}
as $(0,1) M^{-1} = (0,1) \mat d{-b}{-c}a = (-c,a)$.
This expression is zero if and only if $a_0 = 0$, i.e.\ if and only if $f$
vanishes at the cusp~$c$.
A similar argument works for anti-holomorphic cusp forms.
\end{proof}
\subsection{Theory: Cup product and Petersson scalar product}
This part owes much to the treatment of the Petersson scalar product by Haberland in~\cite{Haberland}
(see also \cite[\S 12]{CS}).
\begin{definition}\label{def:cup}
Let $G$ be a group and $M$ and $N$ be two left $R[G]$-modules. We
equip $M \otimes_R N$ with the diagonal left $R[G]$-action. Let $m,n \ge 0$.
Then we define the {\em cup product}
$$ \cup: \h^n(G,M) \otimes_R \h^m(G,N) \to \h^{n+m}(G,M \otimes_R N)$$
by
$$ \phi \cup \psi := \big((g_1,\dots,g_n,g_{n+1},\dots,g_{n+m})
\mapsto \phi(g_1,\dots,g_n) \otimes (g_1 \cdots g_n).\psi(g_{n+1},\dots,g_{n+m})\big)$$
on cochains of the bar resolution.
\end{definition}
This description can be derived easily from the natural one on the standard resolution.
For instance, \cite[\S5.3]{Brown} gives the above formula up to a sign (which does not matter in our application anyway
because we work in fixed degree).
In Exercise~\ref{excupwd} it is checked that the cup product is well-defined.
\begin{lemma}\label{lem:swap}
Keep the notation of Definition~\ref{def:cup} and let $\phi \in \h^n(G,M)$ and $\psi \in \h^m(G,N)$.
Then
$$\phi \cup \psi = (-1)^{mn} \psi \cup \phi$$
via the natural isomorphism $M \otimes_R N \cong N \otimes_R M$.
\end{lemma}
\begin{proof}
Exercise~\ref{ex:cup}.
\end{proof}
We are now going to formulate a pairing on cohomology, which will turn out
to be a version of the Petersson scalar product. We could introduce compactly
supported cohomology for writing it in more conceptual terms, but have decided
not to do this in order not to increase the amount of new material even more.
\begin{definition}
Let $M$ be an $R[\mathrm{PSL}_2(\mathbb{Z})]$-module.
The {\em parabolic $1$-cocycles} are defined as
$$ \Z_{\mathrm{par}}^1(\Gamma,M) = \ker(\Z^1(\Gamma,M) \xrightarrow{\mathrm{res}}
\prod_{g \in \Gamma \backslash \mathrm{PSL}_2(\mathbb{Z}) / \langle T \rangle}
\Z^1(\Gamma \cap \langle g T g^{-1} \rangle, M) ).$$
\end{definition}
\begin{proposition}\label{defpairing}
Let $R$ be a ring in which $6$ is invertible. Let $M$ and $N$ be left $R[\mathrm{PSL}_2(\mathbb{Z})]$-modules together with an $R[\mathrm{PSL}_2(\mathbb{Z})]$-module homomorphism
$\pi: M \otimes_R N \to R$
where we equip $M \otimes_R N$ with the diagonal action. Write $G$ for $\mathrm{PSL}_2(\mathbb{Z})$.
We define a pairing
$$ \langle,\rangle: \Z^1(G,M) \times \Z^1(G,N) \to R$$
as follows: Let $(\phi,\psi)$ be a pair of $1$-cocycles.
Form their cup product $\rho := \pi_*(\phi \cup \psi)$ in $\Z^2(G,R)$ via
$\Z^2(G,M \otimes_R N) \xrightarrow{\pi_*} \Z^2(G,R)$. As
$\h^2(G,R)$ is zero (Corollary~\ref{corhzweinull}), $\rho$ must be a $2$-coboundary, i.e.\
there is $a: G \to R$ (depending on $(\phi,\psi)$) such that
$$\rho (g,h) = \pi(\phi(g) \otimes g.\psi(h)) = a(h) - a(gh) + a(g).$$
We define the pairing by
$$\langle\phi,\psi\rangle := a(T).$$
\begin{enumerate}[(a)]
\item The pairing is well-defined and bilinear. It can be expressed as
$$ \langle \phi,\psi \rangle = -\rho(\tau,\sigma) + \frac{1}{2} \rho(\sigma,\sigma) +
\frac{1}{3} (\rho(\tau,\tau)+\rho(\tau,\tau^2)).$$
\item If $\phi \in \Z_{\mathrm{par}}^1(G,M)$, then $\rho(\tau,\sigma)=\rho(\sigma,\sigma)$ and
$$ \langle \phi,\psi \rangle = - \frac{1}{2} \rho(\sigma,\sigma) +
\frac{1}{3}(\rho(\tau,\tau) + \rho(\tau,\tau^2)).$$
Moreover, $\langle \phi,\psi \rangle$ only depends on the class of~$\psi$
in $\h^1(G,N)$.
\item If $\psi \in \Z_{\mathrm{par}}^1(G,N)$, then $\rho(\tau,\sigma) = \rho(\tau,\tau^2)$ and
$$ \langle \phi,\psi \rangle = \frac{1}{2} \rho(\sigma,\sigma) +
\frac{1}{3}\rho(\tau,\tau) - \frac{2}{3} \rho(\tau,\tau^2).$$
Moreover, $\langle \phi,\psi \rangle$ only depends on the class of~$\phi$
in $\h^1(G,M)$.
\item If $\phi \in \Z_{\mathrm{par}}^1(G,M)$ and $\psi \in \Z_{\mathrm{par}}^1(G,N)$,
then $\rho(\sigma,\sigma) = \rho(\tau,\tau^2)$
and
$$ \langle \phi,\psi \rangle = -\frac{1}{6} \rho(\sigma,\sigma) +
\frac{1}{3}\rho(\tau,\tau).$$
\end{enumerate}
\end{proposition}
\begin{proof}
(a) We first have
$$ 0 = \pi(\phi(1) \otimes \psi(1)) = \rho(1,1) = a(1) - a(1) + a(1) = a(1),$$
since $\phi$ and $\psi$ are $1$-cocycles. Recall that the value of a
$1$-cocycle at~$1$ is always~$0$ due to $\phi(1) = \phi(1\cdot 1) = \phi(1) + \phi(1)$.
Furthermore, we have
\begin{align*}
\rho(\tau,\sigma) &= a(\sigma) - a(T) + a(\tau)\\
\rho(\sigma,\sigma) &= a(\sigma) - a(1) + a(\sigma) = 2 a(\sigma)\\
\rho(\tau,\tau^2) &= a(\tau^2) - a(1) + a(\tau) = a(\tau) + a(\tau^2)\\
\rho(\tau,\tau) &= a(\tau) - a(\tau^2) + a(\tau) = 2a(\tau) - a(\tau^2)
\end{align*}
Hence, we get
$a(T) = - \rho(\tau,\sigma) + a(\sigma) + a(\tau)$ and
$a(\sigma) = \frac{1}{2} \rho(\sigma,\sigma)$ as well as
$a(\tau) = \frac{1}{3}(\rho(\tau,\tau) + \rho(\tau,\tau^2))$,
from which the claimed formula follows. The formula also shows the
independence of the choice of~$a$ and the bilinearity.
(b) Now assume $\phi(T) = 0$. Using $T = \tau \sigma$ we obtain
\begin{multline*}
\rho(\tau,\sigma) = \pi(\phi(\tau) \otimes \tau \psi(\sigma))
= - \pi(\tau.\phi(\sigma) \otimes \tau \psi(\sigma))\\
= - \pi(\phi(\sigma)\otimes \psi(\sigma))
= \pi(\phi(\sigma)\otimes \sigma \psi(\sigma)) = \rho(\sigma,\sigma)
\end{multline*}
because $0 = \phi(T) = \phi(\tau\sigma) = \tau.\phi(\sigma) + \phi(\tau)$
and $0 = \psi(1) = \psi(\sigma^2) = \sigma.\psi(\sigma) + \psi(\sigma)$.
This yields the formula.
We now show that the pairing does not depend on the choice of
$1$-cocycle in the class of~$\psi$. To see this, let $\psi(g) = (g-1)n$ with $n\in N$
be a $1$-coboundary. Put $b(g) := \pi(-\phi(g) \otimes gn)$. Then,
using $\phi(gh)=g(\phi(h))+\phi(g)$, one immediately checks the equality
$$ \rho(g,h) = \pi(\phi(g)\otimes g(h-1)n) = g.b(h)-b(gh)+b(g).$$
Hence, by the formula from~(a), $\langle \phi,\psi \rangle = b(T) = \pi(-\phi(T) \otimes T n) = \pi(0 \otimes Tn) = 0$.
(c) Let now $\psi(T) = 0$. Then $0=\psi(T) = \psi(\tau\sigma) = \tau\psi(\sigma) + \psi(\tau)$
and $0=\psi(\tau^3) = \tau \psi(\tau^2) + \psi(\tau)$, whence
$\tau\psi(\tau^2) = \tau \psi(\sigma)$. Consequently,
$$\rho(\tau,\sigma) = \pi(\phi(\tau) \otimes \tau \psi(\sigma))
= \pi(\phi(\tau) \otimes \tau \psi(\tau^2)) = \rho(\tau,\tau^2),$$
implying the formula.
We now show that the pairing does not depend on the choice
of $1$-cocycle in the class of~$\phi$. Let $\phi(g) = (g-1)m$ be a $1$-coboundary
and put $c(g) := \pi(m \otimes \psi(g))$. Then the equality
$$ \rho(g,h) = \pi((g-1)m \otimes g\psi(h)) = g.c(h)-c(gh)+c(g)$$
holds. Hence, by the formula from~(a), $\langle \phi,\psi \rangle = c(T) = \pi(m \otimes \psi(T)) = \pi(m \otimes 0) = 0$.
(d) Suppose now that $\phi(T)=0=\psi(T)$, then by what we have just seen
$$\rho(\tau,\sigma)=\rho(\sigma,\sigma)=\rho(\tau,\tau^2).$$
This implies the claimed formula.
\end{proof}
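The linear algebra behind part~(a) can be verified numerically. The following Python sketch (not part of the original text; all variable names are ad hoc) feeds arbitrary rational values of $a(\sigma)$, $a(\tau)$, $a(\tau^2)$, $a(T)$ into the coboundary expressions from the proof and checks that the claimed formula recovers $a(T)$:

```python
from fractions import Fraction
import random

random.seed(1)
# Arbitrary test values of a on sigma, tau, tau^2, T (a(1) = 0 is forced by the proof)
aS, aT1, aT2, aTmat = (Fraction(random.randint(-20, 20)) for _ in range(4))

# The expressions computed in the proof (trivial action on R):
rho_tau_sigma = aS - aTmat + aT1       # rho(tau, sigma) = a(sigma) - a(T) + a(tau)
rho_sigma_sigma = 2 * aS               # rho(sigma, sigma) = 2 a(sigma)
rho_tau_tau2 = aT1 + aT2               # rho(tau, tau^2)  = a(tau) + a(tau^2)
rho_tau_tau = 2 * aT1 - aT2            # rho(tau, tau)    = 2 a(tau) - a(tau^2)

# The claimed formula for the pairing recovers a(T):
pairing = (-rho_tau_sigma + Fraction(1, 2) * rho_sigma_sigma
           + Fraction(1, 3) * (rho_tau_tau + rho_tau_tau2))
assert pairing == aTmat
```

Since the check succeeds for arbitrary values of $a$, it also illustrates the independence of the pairing from the choice of~$a$.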
Our next aim is to specialise this pairing to the cocycles coming from modular
forms under the Eichler-Shimura map. We must first define a pairing
on the modules used in the cohomology groups.
On the modules $\Sym^{k-2}(R^2)$ we now define the {\em symplectic pairing}
over any ring~$R$ in which $(k-2)!$ is invertible.
Let $n=k-2$ for simplicity. The pairing for $n=0$ is just the
multiplication on~$R$. We now define the pairing for $n=1$ as
$$ R^2 \times R^2 \to R, \;\;\; \vect ac \bullet \vect bd := \det \mat abcd.$$
For any $g \in \mathrm{SL}_2(\mathbb{Z})$ we have
$$ g{\vect a c} \bullet g{\vect b d}
= \det \big( g \mat abcd \big) = \det g \cdot \det \mat abcd = \det \mat abcd
= {\vect a c} \bullet {\vect b d}.$$
As the next step, we define a pairing on the $n$-th tensor power
of~$R^2$
$$(R^2 \otimes_R \dots \otimes_R R^2) \times (R^2 \otimes_R \dots \otimes_R R^2) \to R$$
by
$$ (\vect {a_1} {c_1} \otimes \dots \otimes \vect{a_n}{c_n}) \bullet
(\vect {b_1} {d_1} \otimes \dots \otimes \vect{b_n}{d_n})
:= \prod_{i=1}^n \vect {a_i} {c_i} \bullet \vect {b_i} {d_i}.$$
This pairing is still invariant under the $\mathrm{SL}_2(\mathbb{Z})$-action.
Now we use the assumption on the invertibility of~$n!$ in order
to embed $\Sym^n(R^2)$ as an $R[S_n]$-module in the $n$-th tensor power,
where the action of the symmetric group~$S_n$ is on the indices. We have
that the map (in fact, $1/n!$ times the norm)
$$ \Sym^n(R^2) \to R^2 \otimes_R \dots \otimes_R R^2, \;\;\;
[ \vect {a_1}{c_1} \otimes \dots \otimes \vect{a_n}{c_n} ] \mapsto
\frac{1}{n!} \sum_{\sigma \in S_n} \vect {a_{\sigma(1)}} {c_{\sigma(1)}}
\otimes \dots \otimes \vect{a_{\sigma(n)}}{c_{\sigma(n)}}$$
is injective (one can use Tate cohomology groups to see this) as the
order of $S_n$ is invertible in the ring.
Finally, we define the pairing on $\Sym^n(R^2)$ as the restriction of the
pairing on the $n$-th tensor power to the image of $\Sym^n(R^2)$ under
the embedding that we just described. This pairing is, of course,
still $\mathrm{SL}_2(\mathbb{Z})$-invariant.
We point out the important special case
$${\vect a c}^{\otimes(k-2)} \bullet{\vect b d}^{\otimes(k-2)}=(ad-bc)^{k-2}.$$
Hence, after the identification $\Sym^{k-2}(R^2) \cong V_{k-2}(R)$ from
Exercise~\ref{exsym}, the resulting pairing on $V_{k-2}(R)$ has
the property
$$ (aX+cY)^{k-2} \bullet (bX+dY)^{k-2} \mapsto (ad-bc)^{k-2}.$$
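For readers who like to experiment, the defining properties of the tensor-power pairing can be confirmed numerically. This Python sketch (ad hoc helper names, not from the text) checks the displayed special case and the $\mathrm{SL}_2(\mathbb{Z})$-invariance for a sample matrix of determinant~$1$:

```python
from functools import reduce

def det2(v, w):
    # the n = 1 pairing: (a, c) . (b, d) = ad - bc
    return v[0] * w[1] - v[1] * w[0]

def tensor_pairing(vs, ws):
    # pairing on the n-th tensor power: product of the 2x2 determinants
    assert len(vs) == len(ws)
    return reduce(lambda acc, p: acc * det2(*p), zip(vs, ws), 1)

def apply(g, v):
    (a, b), (c, d) = g
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

v, w, n = (2, 5), (-1, 3), 4
# the special case: pairing of pure tensor powers is (ad - bc)^n
assert tensor_pairing([v] * n, [w] * n) == det2(v, w) ** n
# SL_2(Z)-invariance for a sample matrix of determinant 1
g = ((2, 1), (1, 1))
assert tensor_pairing([apply(g, v)] * n, [apply(g, w)] * n) == det2(v, w) ** n
```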
This pairing extends to a pairing on coinduced modules
\begin{equation}\label{eq:coind-pair}
\pi: {\rm Hom}_{R[\Gamma]}(R[\mathrm{PSL}_2(\mathbb{Z})],V_{k-2}(R)) \otimes_R {\rm Hom}_{R[\Gamma]}(R[\mathrm{PSL}_2(\mathbb{Z})],V_{k-2}(R)) \to R
\end{equation}
by mapping $(\alpha,\beta)$ to
$\sum_{\gamma \in \Gamma \backslash \mathrm{PSL}_2(\mathbb{Z})} \alpha(\gamma) \bullet \beta(\gamma)$.
\begin{proposition}\label{cuppet}
Let $k \ge 2$. Assume $-1 \not\in \Gamma$ (whence we view $\Gamma$ as a subgroup of $\mathrm{PSL}_2(\mathbb{Z})$).
Let $f,g \in \Skg k\Gamma\mathbb{C}$ be cusp forms. Denote by
$\phi_f$ the $1$-cocycle associated with~$f$
under the Eichler-Shimura map for the base point $z_0 = \infty$, i.e.\
$$ \phi_f (a) = (b \mapsto I_f(b \infty,ba \infty)) \in
\Z^1(\mathrm{PSL}_2(\mathbb{Z}),{\rm Coind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})} (V_{k-2}(\mathbb{C}))).$$
Further denote
$$ \overline{\phi_f} (a) = (b \mapsto \overline{I_f(b \infty,ba \infty)}) \in
\Z^1(\mathrm{PSL}_2(\mathbb{Z}),{\rm Coind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})} (V_{k-2}(\mathbb{C}))).$$
Similarly, denote by $\psi_g$ the $1$-cocycle
associated with~$g$ for the base point $z_1 = \zeta_6$.
Define a bilinear pairing as in Proposition~\ref{defpairing}
$$\langle,\rangle: \big(\Z^1(\mathrm{PSL}_2(\mathbb{Z}), {\rm Coind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})} (V_{k-2}(\mathbb{C}))) \big)^2 \to \mathbb{C}$$
with the product on the coinduced modules described in~\eqref{eq:coind-pair}.
Then the equation
$$ \langle \phi_f, \overline{\psi_g} \rangle = (2i)^{k-1} \mu \, (f,g) $$
holds where $(f,g)$ denotes the Petersson scalar product and $\mu$ the index
of $\Gamma$ in $\mathrm{PSL}_2(\mathbb{Z})$.
\end{proposition}
\begin{proof}
Note that the choice of base point~$\infty$ is on the one hand well-defined
(the integral converges, as it is taken over a cusp form) and on the other hand
it ensures that $\phi_f(T) = \overline{\phi_f}(T) = 0$.
But note that $\psi_g$ is not a parabolic cocycle in general since the chosen base point is not~$\infty$ even
though $g$ is also a cusp form.
Now consider $\langle \phi_f,\overline{\psi_g} \rangle$.
Let $\rho(a,b) := \pi(\phi_f(a) \otimes a \overline{\psi_g}(b))$, where $\pi$ is from~\eqref{eq:coind-pair}.
We describe $\rho(a,b)$:
\begin{align*}
\rho(a,b) & = \sum_\gamma
\big( \int_{\gamma \infty}^{\gamma a \infty} f(z) (Xz+Y)^{k-2} dz\big) \bullet
\big( \int_{\gamma a \zeta_6}^{\gamma ab \zeta_6} \overline{g(z)} (X\overline{z}+Y)^{k-2} d\overline{z}\big)\\
&= \sum_\gamma
\int_{\gamma a \zeta_6}^{\gamma ab \zeta_6} \int_{\gamma \infty}^{\gamma a \infty}
f(z) \overline{g(z)} \big( (Xz+Y)^{k-2} \bullet (X\overline{z}+Y)^{k-2} \big) dz d\overline{z} \\
&= \sum_\gamma
\int_{\gamma a \zeta_6}^{\gamma ab \zeta_6} \int_{\gamma \infty}^{\gamma a \infty}
f(z)\overline{g(z)} (z-\overline{z})^{k-2} dz d\overline{z} \\
&= \sum_\gamma
\int_{a \zeta_6}^{ab \zeta_6} \int_{\infty}^{a \infty}
f|_\gamma (z)\overline{g|_\gamma (z)} (z-\overline{z})^{k-2} dz d\overline{z},
\end{align*}
where the sums run over a system of representatives of $\Gamma \backslash \mathrm{PSL}_2(\mathbb{Z})$.
We obtain
\begin{align*}
&\rho(\sigma,\sigma)\\
=& \sum_\gamma
\int_{\sigma \zeta_6}^{\sigma^2 \zeta_6} \int_{\infty}^{\sigma \infty}
f|_\gamma (z)\overline{g|_\gamma (z)} (z-\overline{z})^{k-2} dz d\overline{z}\\
=& \sum_\gamma
\int_{\zeta_3}^{\zeta_6} \int_{\infty}^{0}
f|_\gamma (z)\overline{g|_\gamma (z)} (z-\overline{z})^{k-2} dz d\overline{z}\\
=& \sum_\gamma \big[ \int_{\zeta_3}^i \int_\infty^0 f|_\gamma (z) \overline{g|_\gamma (z)} (z-\overline{z})^{k-2} dz d\overline{z}
+ \int_{\sigma \zeta_3}^{\sigma i} \int_{\sigma \infty}^{\sigma 0} f|_\gamma (z)
\overline{g|_\gamma (z)} (z-\overline{z})^{k-2} dz d\overline{z} \big]\\
=& \sum_\gamma \big[\int_{\zeta_3}^i \int_\infty^0 f|_\gamma (z)\overline{g|_\gamma (z)} (z-\overline{z})^{k-2} dz d\overline{z}
+ \int_{\zeta_3}^{i} \int_{\infty}^{0} f|_{\gamma\sigma} (z)\overline{g|_{\gamma\sigma} (z)} (z-\overline{z})^{k-2} dz d\overline{z} \big]\\
=& 2 \sum_\gamma \int_{\zeta_3}^i \int_\infty^0 f|_\gamma (z)\overline{g|_\gamma (z)} (z-\overline{z})^{k-2} dz d\overline{z},
\end{align*}
and
\begin{align*}
\rho(\tau,\tau) =& \sum_\gamma
\int_{\tau \zeta_6}^{\tau^2 \zeta_6} \int_{\infty}^{\tau \infty}
f|_\gamma (z)\overline{g|_\gamma (z)} (z-\overline{z})^{k-2} dz d\overline{z} = 0\\
\rho(\tau,\tau^2) =& \sum_\gamma
\int_{\tau \zeta_6}^{\tau^3 \zeta_6} \int_{\infty}^{\tau \infty}
f|_\gamma (z)\overline{g|_\gamma (z)} (z-\overline{z})^{k-2} dz d\overline{z} = 0,
\end{align*}
since $\tau$ stabilises~$\zeta_6$.
It now suffices to compare with the formulas computed before
(Propositions \ref{defpairing} and~\ref{petexplicit}) to obtain the claimed formula.
\end{proof}
\subsection{Theory: The Eichler-Shimura theorem}
We can now, finally, prove that the Eichler-Shimura map is an isomorphism.
It should be pointed out again that the cohomology groups can be replaced by
modular symbols according to Theorem~\ref{compthm}.
\begin{theorem}[Eichler-Shimura]\label{esgammaeins}
Let $N \ge 4$ and $k\ge 2$. The Eichler-Shimura map and the induced Eichler-Shimura
map (Proposition~\ref{esmap}) are isomorphisms for $\Gamma = \Gamma_1(N)$.
The image of $\Skone kN\mathbb{C} \oplus \overline{\Skone kN\mathbb{C}}$ is isomorphic
to the parabolic subspace.
\end{theorem}
\begin{proof}
We first assert that the dimensions of both sides of the Eichler-Shimura
map agree and also that twice the dimension of the space of cusp forms
equals the dimension of the parabolic subspace. The dimension of the
cohomology group and its parabolic subspace was computed in
Proposition~\ref{dimheins}.
For the dimension of the left-hand side we refer to~\cite[\S 6.2]{Stein}.
Suppose that $(f,g)$ are in the kernel of the Eichler-Shimura map.
Then by Proposition~\ref{esres} it follows that $f$ and $g$ are both cuspidal.
Hence, it suffices to prove that the restriction
of the Eichler-Shimura map to $\Skone kN\mathbb{C} \oplus \overline{\Skone kN\mathbb{C}}$
is injective. In order to do this we choose $z_0=z_1=\infty$ as
base points for the Eichler-Shimura map, which is possible as the integrals
converge on cusp forms (as in Proposition~\ref{esmap} one sees that this
choice of base point does not change the cohomology class).
As in Proposition~\ref{cuppet}, we write $\phi_f$ for
the $1$-cocycle associated with a cusp form~$f$ for the base point~$\infty$.
We now make use of the pairing from Proposition~\ref{cuppet}
on $$\Z^1(\mathrm{PSL}_2(\mathbb{Z}),{\rm Coind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})} (V_{k-2}(\mathbb{C}))),$$
where we put $\Gamma := \Gamma_1(N)$ for short.
This pairing induces a $\mathbb{C}$-valued pairing $\langle \;, \;\rangle$
on
$$\h_{\mathrm{par}}^1(\mathrm{PSL}_2(\mathbb{Z}),{\rm Coind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})} (V_{k-2}(\mathbb{C}))).$$
Next observe that the map
$$ \Skone kN\mathbb{C} \oplus \overline{\Skone kN\mathbb{C}} \xrightarrow{(f,{\overline{g}}) \mapsto (f+g,{\overline{f}}-{\overline{g}})} \Skone kN\mathbb{C} \oplus \overline{\Skone kN\mathbb{C}}$$
is an isomorphism.
Let $f,g \in \Skone kN\mathbb{C}$ be cusp forms and assume now that $(f+g,{\overline{f}}-{\overline{g}})$ is sent to the zero-class in
$\h_{\mathrm{par}}^1(\mathrm{PSL}_2(\mathbb{Z}),{\rm Coind}_\Gamma^{\mathrm{PSL}_2(\mathbb{Z})} (V_{k-2}(\mathbb{C})))$.
In that cohomology space, we thus have
$$ 0 = \phi_f + \phi_g + \overline{\phi_f} - \overline{\phi_g} = (\phi_f + \overline{\phi_f}) + (\phi_g - \overline{\phi_g})
= 2\Real(\phi_f) + 2i \Imag(\phi_g).$$
Taking real and imaginary parts of a coboundary representing the zero class, we conclude
that the cohomology classes of $\phi_f + \overline{\phi_f}$ and $\phi_g - \overline{\phi_g}$ are both zero.
Now we apply the pairing as follows:
$$ 0=\langle \phi_f, \phi_f + \overline{\phi_f} \rangle = \langle \phi_f, \phi_f \rangle + \langle \phi_f, \overline{\phi_f} \rangle
= (2i)^{k-1} \mu (f,f)$$
where we used $\langle \phi_f, \phi_f \rangle = 0$ because of Lemma~\ref{lem:swap} (since the pairing is
given by the cup product), as well as Proposition~\ref{cuppet}.
Hence, $(f,f)=0$ and, thus, $f=0$ since the Petersson scalar product is positive definite.
Similar arguments with $0=\langle \phi_g, \phi_g - \overline{\phi_g} \rangle$ show $g=0$.
This proves the injectivity.
\end{proof}
\begin{remark}
The Eichler-Shimura map is in fact an isomorphism for all subgroups~$\Gamma$
of $\mathrm{SL}_2(\mathbb{Z})$ of finite index. The proof is the same, but must use more involved
dimension formulae for the cohomology group (see Remark~\ref{remdimcoh}) and
modular forms.
In Corollary~\ref{esgammanull} we will see that there also is an Eichler-Shimura
isomorphism with a Dirichlet character.
\end{remark}
\subsection{Theoretical exercises}
\begin{exercise}\label{excupwd}
Check that the cup product is well-defined.
Hint: this is a standard exercise that can be found in many textbooks (e.g.\ \cite{Brown}).
\end{exercise}
\begin{exercise}\label{ex:cup}
Prove Lemma~\ref{lem:swap}.
Hint: \cite[(5.3.6)]{Brown}.
\end{exercise}
\section{Hecke operators}\label{sec:Hecke}
In this section we introduce Hecke operators on group cohomology using the double cosets approach
and we prove that the Eichler-Shimura isomorphism is compatible with the Hecke action on group
cohomology and modular forms.
\subsection{Theory: Hecke rings}
\begin{definition}
Let $N,n \in \mathbb{N}$. We define
\begin{align*}
\Delta_0^n(N) & = \{\mat abcd \in M_2(\mathbb{Z}) | \mat abcd \equiv \mat **0* \mod N, (a,N)=1, \det \mat abcd = n\},\\
\Delta_1^n(N) & = \{\mat abcd \in M_2(\mathbb{Z}) | \mat abcd \equiv \mat 1*0* \mod N, \det \mat abcd = n\},\\
\Delta_0(N) & = \bigcup_{n \in \mathbb{N}} \Delta_0^n(N),\\
\Delta_1(N) & = \bigcup_{n \in \mathbb{N}} \Delta_1^n(N).
\end{align*}
\end{definition}
From now on, let
$(\Delta,\Gamma) = (\Delta_1(N),\Gamma_1(N))$ or $(\Delta,\Gamma) = (\Delta_0(N),\Gamma_0(N))$.
\begin{lemma}
Let $\alpha \in \Delta$. We put
$$ \Gamma_\alpha = \Gamma \cap \alpha^{-1} \Gamma \alpha \textnormal{ and }
\Gamma^\alpha = \Gamma \cap \alpha \Gamma \alpha^{-1}.$$
Then $\Gamma_\alpha$ has finite index in $\Gamma$ and $\alpha^{-1}\Gamma\alpha$
(one says that $\Gamma$ and $\alpha^{-1}\Gamma\alpha$ are commensurable), and also
$\Gamma^\alpha$ has finite index in $\Gamma$ and $\alpha\Gamma\alpha^{-1}$
(hence, $\Gamma$ and $\alpha \Gamma \alpha^{-1}$ are commensurable).
\end{lemma}
\begin{proof}
Let $n = \det \alpha$. One checks by matrix calculation that
$$\alpha^{-1}\Gamma(Nn)\alpha \subset \Gamma(N).$$
Thus,
$$ \Gamma(Nn) \subset \alpha^{-1} \Gamma(N) \alpha \subset \alpha^{-1} \Gamma \alpha.$$
Hence, we have $\Gamma(Nn) \subset \Gamma_\alpha$ and the first claim follows.
For the second claim, one proceeds similarly.
\end{proof}
\begin{example}
Let $\Gamma = \Gamma_0(N)$ and $p$ a prime. The most important case for the
sequel is $\alpha = \mat 100p$. An elementary calculation shows
$\Gamma^\alpha = \Gamma_0(Np)$.
\end{example}
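The elementary calculation $\Gamma^\alpha = \Gamma_0(Np)$ from the example can be confirmed by a brute-force search over matrices with bounded entries; the bound `B` and the sample values of $N$ and $p$ below are arbitrary choices for this check, which is not part of the original text:

```python
from itertools import product

N, p, B = 3, 2, 6   # small level, prime, and entry bound for the search

def in_gamma0(mat, M):
    a, b, c, d = mat
    return a * d - b * c == 1 and c % M == 0

count = 0
for a, b, c, d in product(range(-B, B + 1), repeat=4):
    if not in_gamma0((a, b, c, d), N):
        continue
    # alpha = (1 0; 0 p); alpha^{-1} (a b; c d) alpha = (a, p*b; c/p, d),
    # which is an integral matrix in Gamma_0(N) iff N*p divides c.
    conj_in_gamma = (c % p == 0) and in_gamma0((a, p * b, c // p, d), N)
    assert conj_in_gamma == in_gamma0((a, b, c, d), N * p)
    count += 1
assert count > 0   # the search space was not empty
```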
\begin{definition}
Let $\alpha \in \Delta$. We consider the diagram
$$\xymatrix@=1cm{
\Gamma_\alpha \backslash \mathbb{H} \ar@{->}[r]^\alpha_{\tau \mapsto \alpha \tau} \ar@{->}[d]^{\pi_\alpha} &
\Gamma^\alpha \backslash \mathbb{H} \ar@{->}[d]^{\pi^\alpha} \\
\Gamma \backslash \mathbb{H} & \Gamma \backslash \mathbb{H},
}$$
in which $\pi^\alpha$ and $\pi_\alpha$ are the natural projections. One checks
that this is well defined by using
$\alpha \Gamma_\alpha \alpha^{-1} = \Gamma^\alpha$.
The {\em group of divisors} ${\rm Div}(S)$ on a Riemann surface~$S$ consists of all formal $\mathbb{Z}$-linear combinations of points of~$S$.
For a morphism $\pi: S \to T$ of Riemann surfaces, define the {\em pull-back} $\pi^*: {\rm Div}(T) \to {\rm Div}(S)$ and the
{\em push-forward} $\pi_*:{\rm Div}(S) \to {\rm Div}(T)$ uniquely by the rules $\pi^*(t) = \sum_{s \in S: \pi(s)=t} s$ and $\pi_*(s)=\pi(s)$
for points $t \in T$ and $s \in S$.
The {\em modular correspondence} or {\em Hecke correspondence} $\tau_\alpha$ is
defined as
$$ \tau_\alpha: {\rm Div}(Y_\Gamma) \xrightarrow{\pi_\alpha^*}
{\rm Div}(Y_{\Gamma_\alpha}) \xrightarrow{\alpha_*} {\rm Div}(Y_{\Gamma^\alpha}) \xrightarrow{\pi^\alpha_*} {\rm Div}(Y_\Gamma).$$
\end{definition}
These modular correspondences will be described more explicitly in a moment.
First a lemma:
\begin{lemma}\label{lem:doub}
Let $\alpha \in \Delta$ and let $\alpha_i \in \Gamma$ for $i \in I$ with some index set~$I$. Then we have
$$ \Gamma = \bigsqcup_{i \in I} \Gamma_\alpha \alpha_i \; \Leftrightarrow \;
\Gamma \alpha \Gamma = \bigsqcup_{i \in I} \Gamma \alpha \alpha_i.$$
\end{lemma}
\begin{proof}
This is proved by a quite straightforward calculation.
\end{proof}
\begin{corollary}
Let $\alpha \in \Delta$ and $\Gamma \alpha \Gamma = \bigsqcup_{i \in I} \Gamma \alpha \alpha_i$.
Then the Hecke correspondence $\tau_\alpha: {\rm Div}(Y_\Gamma) \to {\rm Div}(Y_\Gamma)$ is given
by
$\tau \mapsto \sum_{i \in I} \alpha \alpha_i \tau$
for representatives $\tau \in \mathbb{H}$.
\end{corollary}
\begin{proof}
It suffices to check the definition using Lemma~\ref{lem:doub}.
\end{proof}
\begin{remark}
We have $\Delta^n = \bigcup_{\alpha \in \Delta, \det \alpha = n} \Gamma \alpha \Gamma$
and one can choose finitely many $\alpha_i$ for $i \in I$ such that
$\Delta^n = \bigsqcup_{i \in I} \Gamma \alpha_i \Gamma$.
\end{remark}
\begin{definition}
Let $\Delta^n = \bigsqcup_{i \in I} \Gamma \alpha_i \Gamma$.
The Hecke operator $T_n$ on ${\rm Div}(Y_\Gamma)$ is defined as
$$ T_n = \sum_{i \in I} \tau_{\alpha_i}.$$
\end{definition}
Let us recall from equation~\eqref{sigmaa} the matrix $\sigma_a \in \Gamma_0(N)$ (for $(a,N)=1$) which satisfies
$$\sigma_a \equiv \mat {a^{-1}}00a \mod N.$$
\begin{proposition}\label{zerlegung}
\begin{enumerate}[(a)]
\item We have the decomposition
$$\Delta_0^n(N) = \bigsqcup_a \bigsqcup_b \Gamma_0(N)\mat ab0d,$$
where $a$ runs through the positive integers with
$a \mid n$ and $(a,N) = 1$ and $b$ runs through the integers such that $0 \le b < d := n/a$.
\item We have the decomposition
$$\Delta_1^n(N) = \bigsqcup_a \bigsqcup_b \Gamma_1(N)\sigma_a \mat ab0d$$
with $a,b,d$ as in~(a).
\end{enumerate}
\end{proposition}
\begin{proof}
This proof is elementary.
\end{proof}
Note that due to $\sigma_a \in \Gamma_0(N)$, the matrices $\sigma_a \mat ab0d$ used in part~(b)
also work in part~(a). One can thus use the same representatives regardless of whether one works
with $\Gamma_0(N)$ or $\Gamma_1(N)$.
Note also that for $n=\ell$ a prime, these representatives are exactly the elements of $\mathcal{R}_\ell$
from equation \eqref{setrp}.
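The decomposition of Proposition~\ref{zerlegung}~(a) lends itself to a quick computational sanity check. The following Python sketch (helper names are ad hoc, not from the text) lists the upper-triangular representatives, counts them, and verifies pairwise that no two lie in the same left $\Gamma_0(N)$-coset:

```python
from math import gcd

def delta0_reps(n, N):
    """Coset representatives of Gamma_0(N) \\ Delta_0^n(N) as 4-tuples (a, b, c, d),
    following the proposition: (a b; 0 d) with a*d = n, gcd(a, N) = 1, 0 <= b < d."""
    reps = []
    for a in range(1, n + 1):
        if n % a or gcd(a, N) != 1:
            continue
        d = n // a
        for b in range(d):
            reps.append((a, b, 0, d))
    return reps

def same_coset(m1, m2, n, N):
    """True iff m1 * m2^{-1} lies in Gamma_0(N) (both matrices have determinant n)."""
    a, b, c, d = m2
    adj = (d, -b, -c, a)                       # adjugate of m2: m2 * adj = n * Id
    a1, b1, c1, d1 = m1
    e = (a1 * adj[0] + b1 * adj[2], a1 * adj[1] + b1 * adj[3],
         c1 * adj[0] + d1 * adj[2], c1 * adj[1] + d1 * adj[3])
    return all(x % n == 0 for x in e) and (e[2] // n) % N == 0

# For a prime p not dividing N there are exactly p + 1 representatives.
reps = delta0_reps(7, 5)
assert len(reps) == 8
for (a, b, c, d) in reps:
    assert a * d - b * c == 7 and c % 5 == 0 and gcd(a, 5) == 1
# The union is indeed disjoint: no two representatives share a coset.
assert not any(same_coset(r1, r2, 7, 5)
               for i, r1 in enumerate(reps) for r2 in reps[i + 1:])
```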
Next, we turn to the important description of the Hecke algebra as a double coset algebra.
\begin{definition}
The {\em Hecke ring} $R(\Delta,\Gamma)$ is the free abelian group on the
double cosets $\Gamma \alpha \Gamma$ for $\alpha \in \Delta$.
\end{definition}
Our next aim is to define a multiplication, which then
also justifies the name ``ring''.
First let
$\Gamma \alpha \Gamma = \bigsqcup_{i=1}^n \Gamma \alpha_i$ and
$\Gamma \beta \Gamma = \bigsqcup_{j=1}^m \Gamma \beta_j$.
We simply start computing:
$$ \Gamma \alpha \Gamma \cdot \Gamma \beta \Gamma =
\bigcup_j \Gamma \alpha \Gamma \beta_j = \bigcup_{i,j} \Gamma \alpha_i \beta_j.$$
This union is not necessarily disjoint.
The left hand side can be written as a disjoint union of double cosets
$\bigsqcup_{k=1}^r \Gamma \gamma_k \Gamma$. Each of these double
cosets is again of the form
$$\Gamma \gamma_k \Gamma = \bigsqcup_{l=1}^{n_k} \Gamma \gamma_{k,l}.$$
We obtain in summary
$$ \Gamma \alpha \Gamma \cdot \Gamma \beta \Gamma = \bigcup_{i,j} \Gamma \alpha_i \beta_j
= \bigsqcup_k \bigsqcup_l \Gamma \gamma_{k,l}.$$
We will now introduce notation for the multiplicity with which each
coset on the right appears in the middle term of the above equality. For fixed~$k$ we define for every~$l$
$$ m_{k,l} = \# \{ (i,j) | \Gamma \gamma_{k,l} = \Gamma \alpha_i \beta_j \}.$$
The important point is the following lemma.
\begin{lemma}\label{lemmkl}
The number $m_{k,l}$ is independent of~$l$. We put $m_k := m_{k,l}$.
\end{lemma}
\begin{proof}
The proof is a straightforward combinatorial computation.
\end{proof}
\begin{definition}
We define the multiplication on $R(\Delta,\Gamma)$ by
$$ \Gamma \alpha \Gamma \cdot \Gamma \beta \Gamma
= \sum_{k=1}^r m_k \Gamma\gamma_k\Gamma,$$
using the preceding notation.
\end{definition}
In Exercise~\ref{exheckering} you are asked to check that the Hecke
ring is indeed a ring.
The definition of the multiplication is consistent with the composition
of Hecke correspondences:
$$ \tau_\alpha \circ \tau_\beta = \sum_{k=1}^r m_k \tau_{\gamma_k}.$$
\begin{definition}
For $\alpha \in \Delta$ let $\tau_\alpha = \Gamma \alpha \Gamma$.
We define (as above)
$$T_n = \sum_{\alpha} \tau_\alpha \in R(\Delta,\Gamma),$$
where the sum runs over a set of $\alpha$ such that
$\Delta^n = \bigsqcup_\alpha \Gamma \alpha \Gamma$.
For $a \mid d$ and $(d,N) = 1$ we let
$$ T(a,d) = \Gamma \sigma_a \mat a00d \Gamma \in R(\Delta,\Gamma).$$
\end{definition}
From Exercise~\ref{aufghecke}, we obtain the following important corollary.
\begin{corollary}
We have $T_m T_n = T_n T_m$ and hence $R(\Delta,\Gamma)$ is a
commutative ring.
\end{corollary}
\subsection{Theory: Hecke operators on modular forms}
In this section we again let
$(\Delta,\Gamma) = (\Delta_0(N),\Gamma_0(N))$
or $(\Delta_1(N),\Gamma_1(N))$.
We now define an action of the Hecke ring $R(\Delta,\Gamma)$ on modular forms.
\begin{definition}
Let $\alpha \in \Delta$.
Suppose $\Gamma \alpha \Gamma = \bigsqcup_{i=1}^n \Gamma \alpha_i$ and let $f \in M_k(\Gamma)$.
We put
$$ f.\tau_\alpha := \sum_{i=1}^n f |_{\alpha_i}.$$
\end{definition}
\begin{lemma}
The function $f.\tau_\alpha$ again lies in $M_k(\Gamma)$.
\end{lemma}
\begin{proof}
For $\gamma \in \Gamma$ we check the transformation rule:
$$ \sum_i f |_{\alpha_i} |_{\gamma} = \sum_i f |_{\alpha_i \gamma} =
\sum_i f |_{\alpha_i},$$
since the cosets $\Gamma (\alpha_i \gamma)$ are a permutation
of the cosets $\Gamma \alpha_i$.
The holomorphicity of $f.\tau_\alpha$ is clear, and the holomorphicity at the
cusps is not difficult to check.
\end{proof}
This thus gives the desired operation of $R(\Delta,\Gamma)$
on $M_k(\Gamma)$.
\begin{proposition}
Let $(\Delta,\Gamma) = (\Delta_0(N),\Gamma_0(N))$ and
$f \in M_k(\Gamma)$. The following formulae hold:
\begin{enumerate}[(a)]
\item $(f.T_m)(\tau)
= \frac{1}{m} \sum_{{a \mid m}, {(a,N)=1}} \sum_{b=0}^{\frac{m}{a} - 1} a^k f(\frac{a\tau+b}{m/a})$,
\item $a_n(f.T_m) = \sum_{a \mid (m,n), (a,N)=1} a^{k-1} a_{\frac{mn}{a^2}}(f)$.
\end{enumerate}
Similar formulae hold for $(\Delta_1(N),\Gamma_1(N))$, if one includes
a Dirichlet character at the right places.
\end{proposition}
\begin{proof}
(a) follows directly from Proposition~\ref{zerlegung}.
(b) is a simple calculation using
$ \sum_{b=0}^{d-1} e^{2 \pi i \frac{b}{d} n} =
\begin{cases}
0, & \textnormal{ if } d \nmid n\\
d, & \textnormal{ if } d \mid n.
\end{cases}$
\end{proof}
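The character-sum identity used in part~(b) is easy to confirm numerically; this small check is not part of the original text:

```python
import cmath

# sum_{b=0}^{d-1} e^{2 pi i b n / d} equals d if d | n, and 0 otherwise
for d in range(1, 8):
    for n in range(0, 20):
        s = sum(cmath.exp(2j * cmath.pi * b * n / d) for b in range(d))
        expected = d if n % d == 0 else 0
        assert abs(s - expected) < 1e-9
```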
\begin{remark}
The Hecke ring $R(\Delta,\Gamma)$ also acts on $S_k(\Gamma)$.
\end{remark}
\begin{corollary}\label{corformel}
Let $(\Delta,\Gamma) = (\Delta_0(N),\Gamma_0(N))$.
For the action of the Hecke operators on $M_k(\Gamma)$ and $S_k(\Gamma)$
the following formulae hold:
\begin{enumerate}[(a)]
\item $T_n T_m = T_{nm}$ for $(n,m)=1$,
\item $T_{p^{r+1}} = T_p T_{p^r} - p^{k-1} T_{p^{r-1}}$, if $p \nmid N$, and
\item $T_{p^{r+1}} = T_p T_{p^r}$, if $p \mid N$.
\end{enumerate}
Here, $p$ always denotes a prime number.
Similar formulae hold for $(\Delta_1(N),\Gamma_1(N))$, if one includes
a Dirichlet character at the right places.
\end{corollary}
\begin{proof}
These formulae follow from Exercise~\ref{aufghecke} and the
definition of the action.
\end{proof}
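The formulae of the corollary can be tested numerically at the level of the coefficient formula from the preceding proposition: the relations (a)-(c) are identities of the coefficient operators, so applying the formula to an arbitrary integer sequence suffices for a sanity check. The Python sketch below (ad hoc names, not from the text) does exactly that:

```python
from math import gcd
import random

k, N = 4, 6
M = 5000                       # we only track coefficients a_1, ..., a_M
random.seed(0)
a = {n: random.randint(-9, 9) for n in range(1, M + 1)}

def T(m, seq):
    """Coefficient formula: a_n(f.T_m) = sum over d | (m,n), (d,N)=1 of d^{k-1} a_{mn/d^2}."""
    limit = max(seq) // m
    out = {}
    for n in range(1, limit + 1):
        g = gcd(m, n)
        out[n] = sum(d ** (k - 1) * seq[m * n // (d * d)]
                     for d in range(1, g + 1)
                     if g % d == 0 and gcd(d, N) == 1)
    return out

def eq(s1, s2):
    common = set(s1) & set(s2)
    return bool(common) and all(s1[n] == s2[n] for n in common)

# (a) multiplicativity for coprime indices
assert eq(T(5, T(7, a)), T(35, a))
# (b) recursion at a prime p not dividing N: T_{p^2} = T_p T_p - p^{k-1} T_1
p = 11
tp2 = T(p, T(p, a))
assert eq(T(p * p, a), {n: tp2[n] - p ** (k - 1) * a[n] for n in tp2})
# (c) recursion at p | N: T_{p^2} = T_p T_p
p = 2
assert eq(T(p * p, a), T(p, T(p, a)))
```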
Even though it is not directly relevant for our purposes, we include Euler products, which allow
us to express the formulae from the corollary in a very elegant way.
\begin{proposition}[Euler product]\label{euler}
The action of the Hecke operators $T_n$ on modular forms satisfies
the formal identity:
$$ \sum_{n=1}^\infty T_n n^{-s} = \prod_{p \nmid N} (1 - T_p p^{-s} + p^{k-1-2s})^{-1} \cdot
\prod_{p \mid N} (1 - T_p p^{-s})^{-1}.$$
\end{proposition}
That the identity is formal means that we can arbitrarily
permute terms in sums and products without considering questions
of convergence.
\begin{proof}
The proof is carried out in three steps.
\underline{1st~step}: Let $g: \mathbb{Z} \to \mathbb{C}$ be any function.
Then we have the formal identity
$$ \prod_{p\textnormal{ prime}} \sum_{r=0}^\infty g(p^r) =
\sum_{n=1}^\infty \prod_{p^r \parallel n} g(p^r).$$
For its proof, let first $S$ be a finite set of prime numbers. Then
we have the formal identity:
$$ \prod_{p \in S} \sum_{r=0}^\infty g(p^r) =
\sum_{n=1, n \textnormal{ only has prime factors in }S}^\infty
\prod_{p^r \parallel n} g(p^r),$$
which one proves by multiplying out the left hand side (note that this is
precisely where terms are permuted). We finish the first step by letting $S$ run through arbitrarily
large sets.
\underline{2nd~step:} For $p \nmid N$ we have
$$(\sum_{r=0}^\infty T_{p^r} p^{-rs}) (1-T_pp^{-s} + p^{k-1-2s}) = 1$$
and for $p \mid N$:
$$(\sum_{r=0}^\infty T_{p^r} p^{-rs}) (1-T_pp^{-s}) = 1.$$
The proof of the second step consists of multiplying out these expressions
and identifying a telescoping cancellation.
\underline{3rd~step:} The proposition now follows by using the
first step with $g(p^r) = T_{p^r} p^{-rs}$ and plugging in
the formulae from the second step.
\end{proof}
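The telescoping of the second step can be checked by truncated power-series arithmetic. In the sketch below (not part of the original text), the values of $k$, $p$ and $T_p$ are arbitrary; the scalars $t_r$ play the role of $T_{p^r}$ and are generated by the recursion from Corollary~\ref{corformel}:

```python
from fractions import Fraction

k, p = 4, 3
t1 = Fraction(7)                      # an arbitrary stand-in value for T_p
q = Fraction(p) ** (k - 1)

# T_{p^r} via the recursion T_{p^{r+1}} = T_p T_{p^r} - p^{k-1} T_{p^{r-1}}
R = 12
t = [Fraction(1), t1]
for r in range(1, R):
    t.append(t1 * t[r] - q * t[r - 1])

# multiply the truncated series sum_r t_r x^r by (1 - t_1 x + q x^2), x = p^{-s}
poly = [Fraction(1), -t1, q]
prod = [Fraction(0)] * (R + 1)
for i, ti in enumerate(t):
    for j, cj in enumerate(poly):
        if i + j <= R:
            prod[i + j] += ti * cj

# the product is 1 up to the truncation order
assert prod[0] == 1 and all(c == 0 for c in prod[1:])
```

The factor $(1 - T_p p^{-s})^{-1}$ for $p \mid N$ can be checked in the same way with `q = 0`.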
\subsection{Theory: Hecke operators on group cohomology}
In this section we again let
$(\Delta,\Gamma) = (\Delta_0(N),\Gamma_0(N))$
or $(\Delta_1(N),\Gamma_1(N))$.
Let $R$ be a ring and $V$ a left $R[\Gamma]$-module which extends
to a semi-group action by the semi-group consisting of all $\alpha^\iota$
for $\alpha \in \Delta^n$ for all~$n$.
Recall that $\mat abcd^\iota = \mat d{-b}{-c}a$.
We now give the definition of the Hecke operator $\tau_\alpha$ acting on
group cohomology (see, for instance, \cite{DiamondIm} or \cite{faithful}).
\begin{definition}\label{defheckegp}
Let $\alpha \in \Delta$.
The {\em Hecke operator} $\tau_\alpha$ acting on group
cohomology is the composite
$$ \h^1(\Gamma,V) \xrightarrow{\mathrm{res}} \h^1(\Gamma^\alpha, V)
\xrightarrow{\mathrm{conj}_\alpha} \h^1(\Gamma_\alpha, V)
\xrightarrow{\mathrm{cores}} \h^1(\Gamma,V).$$
The first map is the {\em restriction}, and the third one is the {\em corestriction}.
We explicitly describe the second map on cocycles:
$$ \mathrm{conj}_\alpha: \h^1(\Gamma^\alpha, V) \to \h^1(\Gamma_\alpha, V), \;\;
c \mapsto \big( g_\alpha \mapsto \alpha^{\iota}.c(\alpha g_\alpha \alpha^{-1}) \big).$$
There is a similar description on the parabolic subspace
and the two are compatible, see Exercise~\ref{exheckepar}.
\end{definition}
\begin{proposition}\label{shdesc}
Let $\alpha \in \Delta$.
Suppose that $\Gamma \alpha \Gamma = \bigcup_{i=1}^n \Gamma \delta_i$ is a disjoint
union. Then the Hecke operator $\tau_\alpha$ acts on $\h^1(\Gamma,V)$
and $\h_{\mathrm{par}}^1(\Gamma,V)$ by
sending the cocycle $c$ to $\tau_\alpha c$ defined by
$$ (\tau_\alpha c)(g) = \sum_{i=1}^n \delta_i^\iota c(\delta_i g \delta_{\sigma_g(i)}^{-1})$$
for $g \in \Gamma$. Here $\sigma_g(i)$ is the index such that
$\delta_i g \delta_{\sigma_g(i)}^{-1} \in \Gamma$.
\end{proposition}
\begin{proof}
We only have to describe the corestriction explicitly. For that we use that
$\Gamma = \bigsqcup_{i=1}^n \Gamma_\alpha g_i$
with $\alpha g_i = \delta_i$. Furthermore, by Exercise~\ref{excorestriction}
the corestriction of a cocycle $u \in \h^1(\Gamma_\alpha, V)$ is the cocycle $\mathrm{cores}(u)$ uniquely
given by
\begin{equation}\label{eqcorestriction}
\mathrm{cores}(u)(g) = \sum_{i=1}^n g_i^{-1} u(g_i g g_{\sigma_g(i)}^{-1})
\end{equation}
for $g \in \Gamma$. Combining with the explicit description of the map $\mathrm{conj}_\alpha$
yields the result.
\end{proof}
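The existence and uniqueness of the index $\sigma_g(i)$ in the proposition can be verified computationally for small data. The sketch below (not part of the original text) uses the representatives of Proposition~\ref{zerlegung} for a prime $p \nmid N$, with the sample values $N=5$, $p=3$, and assumes (as is standard for $\alpha = \mat 100p$ with $p$ prime, $p \nmid N$) that these $p+1$ matrices represent $\Gamma_0(N) \backslash \Gamma_0(N) \alpha \Gamma_0(N)$:

```python
from fractions import Fraction

N, p = 5, 3

# delta_i: (1 b; 0 p) for 0 <= b < p, together with (p 0; 0 1)
deltas = [((1, b), (0, p)) for b in range(p)] + [((p, 0), (0, 1))]

def mul(m1, m2):
    (a, b), (c, d) = m1
    (e, f), (g, h) = m2
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def inv(m):                           # inverse of an integer matrix, over Q
    (a, b), (c, d) = m
    det = Fraction(a * d - b * c)
    return ((d / det, -b / det), (-c / det, a / det))

def in_gamma0(m):
    (a, b), (c, d) = m
    return (all(Fraction(x).denominator == 1 for x in (a, b, c, d))
            and a * d - b * c == 1 and c % N == 0)

def sigma(g):
    """The map i -> sigma_g(i) with delta_i g delta_{sigma_g(i)}^{-1} in Gamma_0(N)."""
    perm = []
    for di in deltas:
        js = [j for j, dj in enumerate(deltas)
              if in_gamma0(mul(mul(di, g), inv(dj)))]
        assert len(js) == 1           # existence and uniqueness of sigma_g(i)
        perm.append(js[0])
    return perm

for g in [((1, 1), (0, 1)), ((1, 0), (N, 1)), ((1, -1), (0, 1))]:
    perm = sigma(g)
    assert sorted(perm) == list(range(len(deltas)))   # sigma_g is a permutation
```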
\begin{definition}
For a positive integer~$n$, the {\em Hecke operator} $T_n$ is
defined as $\sum_\alpha \tau_\alpha$,
where the sum runs through a system of representatives of the double
cosets $\Gamma \backslash \Delta^n / \Gamma$.
Let $a$ be an integer coprime to~$N$.
The {\em diamond operator}
$\diam {a}$ is defined as $\tau_{\sigma_a}$ for the matrix $\sigma_a \in \Gamma_0(N)$
defined in Equation~\eqref{sigmaa} (provided the $\Gamma$-action on $V$ extends to
an action of the semi-group generated by $\Gamma$ and~$\sigma_a^\iota$; note
that $\sigma_a \in \Delta_0^1(N)$, but in general not in $\Delta_1^1(N)$).
\end{definition}
It is clear that the Hecke and diamond operators satisfy the
``usual'' Euler product.
\begin{proposition}
The Eichler-Shimura isomorphism is compatible with the Hecke
operators.
\end{proposition}
\begin{proof}
We recall the definition of Shimura's main involution:
${\mat abcd}^\iota = \mat d{-b}{-c}a$. In other words, for matrices
with a non-zero determinant, we have
$$ {\mat abcd}^\iota = (\det \mat abcd) \cdot {\mat abcd}^{-1}.$$
Let now $f \in \Mkg k\Gamma\mathbb{C}$ be a modular form, $\gamma \in \Gamma$ and $z_0 \in \mathbb{H}$.
For any matrix $g$ with non-zero determinant, Lemma~\ref{esmaplem} yields
$$ I_{f|_g} (z_0,\gamma z_0) = g^\iota I_f (g z_0,g \gamma z_0).$$
Let $\alpha \in \Delta$.
We show the compatibility of the Hecke operator $\tau_\alpha$ with the map
$$ f \mapsto (\gamma \mapsto I_f(z_0,\gamma z_0))$$
between $\Mkg k\Gamma\mathbb{C}$ and $\h^1(\Gamma,V_{k-2}(\mathbb{C}))$. The same arguments
will also work when $I_f(z_0,\gamma z_0)$ is replaced by $J_{\overline{g}}(z_1,\gamma z_1)$
with an anti-holomorphic cusp form~${\overline{g}}$.
Consider a coset decomposition $\Gamma \alpha \Gamma = \bigsqcup_i \Gamma \delta_i$.
We use notation as in Proposition~\ref{shdesc} and compute:
\begin{align*}
&I_{\tau_\alpha f} (z_0,\gamma z_0) \\
= &I_{\sum_i f|_{\delta_i}}(z_0,\gamma z_0)
= \sum_i I_{f|_{\delta_i}}(z_0,\gamma z_0)
= \sum_i \delta_i^\iota I_f (\delta_i z_0, \delta_i\gamma z_0)\\
=& \sum_i \delta_i^\iota \big(
I_f(\delta_i z_0, z_0) + I_f(z_0,\delta_i \gamma \delta_{\sigma_\gamma(i)}^{-1} z_0)
+ I_f(\delta_i \gamma \delta_{\sigma_\gamma(i)}^{-1} z_0, \delta_i \gamma
\delta_{\sigma_\gamma(i)}^{-1} \delta_{\sigma_\gamma(i)} z_0) \big)\\
=& \sum_i \delta_i^\iota I_f(z_0,\delta_i \gamma \delta_{\sigma_\gamma(i)}^{-1} z_0)
+ \sum_i \delta_i^\iota I_f(\delta_i z_0, z_0)
- \sum_i \delta_i^\iota \delta_i \gamma \delta_{\sigma_\gamma(i)}^{-1}
I_f(\delta_{\sigma_\gamma(i)} z_0, z_0)\\
=& \sum_i \delta_i^\iota I_f(z_0,\delta_i \gamma \delta_{\sigma_\gamma(i)}^{-1} z_0)
+ (1-\gamma) \sum_i \delta_i^\iota I_f(\delta_i z_0, z_0),
\end{align*}
since $\delta_i^\iota \delta_i \gamma \delta_{\sigma_\gamma(i)}^{-1}
= \gamma \delta_{\sigma_\gamma(i)}^\iota$.
Up to coboundaries, the cocycle $\gamma \mapsto I_{\tau_\alpha f} (z_0,\gamma z_0)$
is thus equal to the cocycle $\gamma \mapsto \sum_i \delta_i^\iota I_f(z_0,\delta_i \gamma \delta_{\sigma_\gamma(i)}^{-1} z_0)$, which by Proposition~\ref{shdesc}
is equal to $\tau_\alpha$ applied to the cocycle $\gamma \mapsto I_f(z_0,\gamma z_0)$,
as required.
\end{proof}
\begin{remark}
The conceptual reason why the above proposition holds is, of course,
that the Hecke operators come from Hecke correspondences.
\end{remark}
\subsection{Theory: Hecke operators and Shapiro's lemma}
We now prove that the Hecke operators are compatible with Shapiro's lemma.
This was first proved by Ash and Stevens~\cite{AshStevens}.
We need to say what the action
of $\alpha \in \Delta$ on the coinduced module ${\rm Hom}_{R[\Gamma]}(R[\mathrm{SL}_2(\mathbb{Z})],V)$
should be. Here we are assuming that $V$ carries an action by the semi-group~$\Delta^\iota$
(that is, $\iota$ applied to all elements of~$\Delta$).
Let $U_N$ be the image of $\Delta^\iota$ in $\Mat_2(\mathbb{Z}/N\mathbb{Z})$.
The natural map
$$ \Gamma \backslash \mathrm{SL}_2(\mathbb{Z}) \to U_N \backslash \Mat_2(\mathbb{Z}/N\mathbb{Z})$$
is injective. Its image consists of those $U_N g$ such that
\begin{equation}\label{eq:star}
(0,1) g = (u,v) \textnormal{ with } \langle u,v \rangle = \mathbb{Z}/N\mathbb{Z}.
\end{equation}
If that is so, then we say for short that $g$ satisfies~\eqref{eq:star}.
Note that this condition does not depend on the choice of~$g$ in $U_N g$.
Define the $R[\Delta^\iota]$-module $\mathcal{C}(N,V)$ as
$$ \{f \in {\rm Hom}_R (R[U_N \backslash \Mat_2(\mathbb{Z}/N\mathbb{Z})],V) \; | \;
f(g) = 0 \textnormal{ if } g \textnormal{ does not satisfy \eqref{eq:star}}\}$$
with the action of $\delta \in \Delta^\iota$ given by
$(\delta.f)(g) = \delta.(f(g\delta))$.
The module $\mathcal{C}(N,V)$ is isomorphic to the coinduced module
${\rm Hom}_{R[\Gamma]}(R[\mathrm{SL}_2(\mathbb{Z})],V)$ as an $R[\Gamma]$-module by
\begin{align*}
{\rm Hom}_{R[\Gamma]}(R[\mathrm{SL}_2(\mathbb{Z})],V) &\to \mathcal{C}(N,V),\\
f &\mapsto \Bigl(g \mapsto \begin{cases}
g.f(g^{-1}) & \textnormal{if $g$ satisfies \eqref{eq:star} (for any lift $g \in \mathrm{SL}_2(\mathbb{Z})$),}\\
0 & \textnormal{if $g$ does not satisfy \eqref{eq:star}.}
\end{cases}\Bigr)
One might wonder why we introduce the new module $\mathcal{C}(N,V)$ instead of working directly with ${\rm Hom}_{R[\Gamma]}(R[\mathrm{SL}_2(\mathbb{Z})],V)$.
The point is that we cannot directly act on the latter with a matrix of determinant different from~$1$.
Hence we need a way to naturally extend the action. We do this by embedding
$\Gamma\backslash \mathrm{SL}_2(\mathbb{Z})$ into $U_N \backslash \Mat_2(\mathbb{Z}/N\mathbb{Z})$. Of course, we then
want to work on the image of this embedding, which is exactly described by~\eqref{eq:star}.
The module $\mathcal{C}(N,V)$ is then immediately written down in view of the identification
between ${\rm Hom}_{R[\Gamma]}(R[\mathrm{SL}_2(\mathbb{Z})],V)$ and ${\rm Hom}_R(R[\Gamma\backslash \mathrm{SL}_2(\mathbb{Z})],V)$
given by sending $f$ to $(g \mapsto g.f(g^{-1}))$ (which is clearly independent of the choice of $g$
in the coset $\Gamma g$).
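As a quick sanity check on condition~\eqref{eq:star}, note that every $g \in \mathrm{SL}_2(\mathbb{Z})$ satisfies it: if the bottom row of $g$ is $(c,d)$, then $ad - bc = 1$ forces $\gcd(c,d,N) = 1$, so $(0,1)g = (c,d)$ generates $\mathbb{Z}/N\mathbb{Z}$. The following sketch (plain Python, testing random words in the standard generators $S$ and $T$ of $\mathrm{SL}_2(\mathbb{Z})$; the level $N=12$ is an arbitrary illustrative choice) confirms this numerically:

```python
import random
from math import gcd

S = ((0, -1), (1, 0))                 # standard generators of SL_2(Z)
T = ((1, 1), (0, 1))

def matmul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

N = 12
for _ in range(200):
    g = ((1, 0), (0, 1))
    for _ in range(random.randint(1, 12)):
        g = matmul(g, random.choice([S, T]))
    c, d = g[1]                       # (0, 1) * g is the bottom row of g
    assert gcd(gcd(c, d), N) == 1     # i.e. <c, d> = Z/NZ, so g satisfies (star)
```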
\begin{proposition}
The Hecke operators are compatible with Shapiro's Lemma. More precisely,
for all $n \in \mathbb{N}$ the following diagram commutes:
$$\xymatrix@=.8cm{
\h^1(\Gamma,V) \ar@{->}[r]^{T_n} &
\h^1(\Gamma,V) \\
\h^1(\mathrm{SL}_2(\mathbb{Z}),\mathcal{C}(N,V)) \ar@{->}[r]^{T_n} \ar@{->}[u]^{\textnormal{Shapiro}} &
\h^1(\mathrm{SL}_2(\mathbb{Z}),\mathcal{C}(N,V)). \ar@{->}[u]^{\textnormal{Shapiro}}
}$$
\end{proposition}
\begin{proof}
Let $j\in\{0,1\}$ indicate whether we work with $\Gamma_0$ or $\Gamma_1$.
Let $\delta_i$, for $i=1,\dots,r$ be the representatives of
$\mathrm{SL}_2(\mathbb{Z}) \backslash \Delta_j^n(1)$ provided by Proposition~\ref{zerlegung}.
Say that they are ordered such that $\delta_i$ for $i=1,\dots,s$ with $s \le r$
are representatives for $\Gamma \backslash \Delta_j^n(N)$. This explicitly means
that the lower row of $\delta_i^\iota$ is $(0,a)$ with $(a,N)=1$
(or even $(0,1)$ if $j=1$) for $i=1,\dots,s$.
If $s < i \le r$, then the lower row is $(u,v)$ with $\langle u,v \rangle \lneq \mathbb{Z}/N\mathbb{Z}$.
Let $c$ be a $1$-cocycle representing a class in $\h^1(\mathrm{SL}_2(\mathbb{Z}),\mathcal{C}(N,V))$. Then, as required, we find
\begin{align*}
\textnormal{Shapiro}(T_n(c))(\gamma)
&= \sum_{i=1}^r (\delta_i^\iota.c(\delta_i \gamma \delta_{\sigma_\gamma(i)}^{-1}))(\mat 1001)
= \sum_{i=1}^r \delta_i^\iota.(c(\delta_i \gamma \delta_{\sigma_\gamma(i)}^{-1})(\delta_i^\iota))\\
&= \sum_{i=1}^s (\delta_i^\iota.c(\delta_i \gamma \delta_{\sigma_\gamma(i)}^{-1}))(\mat 1001)
= T_n(\textnormal{Shapiro}(c))(\gamma),
\end{align*}
where the second equality is due to the definition of the action and the third one holds since
$c(\delta_i \gamma \delta_{\sigma_\gamma(i)}^{-1})$ lies in $\mathcal{C}(N,V)$ and thus evaluates
to~$0$ on $\delta_i^\iota$ for $i>s$.
\end{proof}
\begin{remark}
A very similar description exists involving $\mathrm{PSL}_2(\mathbb{Z})$.
\end{remark}
\begin{remark}
It is possible to give an explicit description of Hecke operators on Manin symbols from Theorem~\ref{ManinSymbols}
by using Heilbronn matrices and variations as, for instance, done in~\cite{MerelUniversal}.
\end{remark}
\begin{remark}
One can show that the isomorphisms from Theorem~\ref{compthm} are compatible with Hecke operators.
\end{remark}
\subsection{Theory: Eichler-Shimura revisited}
In this section we present some corollaries and extensions
of the Eichler-Shimura theorem.
We first come to modular symbols with a character and, thus, also to modular symbols
for~$\Gamma_0(N)$.
\begin{corollary}[Eichler-Shimura]\label{esgammanull}
Let $N \ge 1$, $k \ge 2$ and $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{C}^\times$
be a Dirichlet character. Then the Eichler-Shimura map gives isomorphisms
$$ \Mk kN\chi\mathbb{C} \oplus \overline{\Sk kN\chi\mathbb{C}} \to \h^1(\Gamma_0(N), V_{k-2}^{\iota,\chi}(\mathbb{C})),$$
and
$$ \Sk kN\chi\mathbb{C} \oplus \overline{\Sk kN\chi\mathbb{C}} \to \h_{\mathrm{par}}^1(\Gamma_0(N), V_{k-2}^{\iota,\chi}(\mathbb{C})),$$
which are compatible with the Hecke operators.
\end{corollary}
\begin{proof}
Recall that the $\sigma_a$ form a system of coset representatives for
$\Gamma_0(N) / \Gamma_1(N) =: \Delta$ and that the group~$\Delta$
acts on $\h^1(\Gamma_0(N), V)$ by sending a cocycle $c$ to the
cocycle $\delta c$ (for $\delta \in \Delta$) which is defined by
$$ \gamma \mapsto \delta. c(\delta^{-1} \gamma \delta).$$
With $\delta = \sigma_a^{-1} = \sigma_a^\iota $, this reads
$$ \gamma \mapsto \sigma_a^\iota. c(\sigma_a \gamma \sigma_a^{-1}) = \tau_{\sigma_a} c = \langle a \rangle c.$$
Hence, $\sigma_a \in \Delta$ acts through the inverse of the diamond operators.
We now appeal to the Hochschild-Serre exact sequence, using that
the cohomology groups (from index $1$ onwards) vanish if the group order
is finite and invertible. We get the isomorphism
$$ \h^1(\Gamma_0(N),V_{k-2}^{\iota,\chi}(\mathbb{C})) \xrightarrow{\mathrm{res}}
\h^1(\Gamma_1(N),V_{k-2}^{\iota,\chi}(\mathbb{C}))^\Delta.$$
Moreover, the Eichler-Shimura isomorphism is an isomorphism
of Hecke modules
$$ \Mkone kN\mathbb{C} \oplus \overline{\Skone kN\mathbb{C}} \to
\h^1(\Gamma_1(N),V_{k-2}^{\iota, \chi}(\mathbb{C})),$$
since for matrices in $\Delta_1(N)$ acting through the Shimura main involution
the modules $V_{k-2}^{\iota,\chi}(\mathbb{C})$ and $V_{k-2}(\mathbb{C})$ coincide.
Note that it is necessary to take $V_{k-2}^{\iota,\chi}(\mathbb{C})$ because the action
on group cohomology involves the Shimura main involution. Moreover, with this
choice, the Eichler-Shimura isomorphism is $\Delta$-equivariant.
To finish the proof, it suffices to take $\Delta$-invariants on both sides,
i.e.\ to take invariants for the action of the diamond operators.
The result on the parabolic subspace is proved in the same way.
Since Hecke and diamond operators commute, the Hecke action is compatible with
the decomposition into $\chi$-isotypical components.
\end{proof}
Next we consider the action of complex conjugation.
\begin{corollary}
Let $\Gamma = \Gamma_1(N)$. The maps
$$ \Skg k\Gamma\mathbb{C} \to \h_{\mathrm{par}}^1(\Gamma, V_{k-2}(\mathbb{R})), \;\;\;
f \mapsto (\gamma \mapsto \Real( I_f(z_0,\gamma z_0)))$$
and
$$ \Skg k\Gamma\mathbb{C} \to \h_{\mathrm{par}}^1(\Gamma, V_{k-2}(\mathbb{R})), \;\;\;
f \mapsto (\gamma \mapsto \Imag( I_f(z_0,\gamma z_0)))$$
are isomorphisms (of real vector spaces) compatible with the Hecke operators.
A similar result holds in the presence of a Dirichlet character.
\end{corollary}
\begin{proof}
We consider the composite
$$ \Skg k\Gamma\mathbb{C} \xrightarrow{f \mapsto \frac{1}{2}(f + {\overline{f}})}
\Skg k\Gamma\mathbb{C} \oplus \overline{\Skg k\Gamma\mathbb{C}} \xrightarrow{\text{Eichler-Shimura}}
\h_{\mathrm{par}}^1(\Gamma,V_{k-2}(\mathbb{C})).$$
It is clearly injective.
As $J_{\overline{f}}(z_0,\gamma z_0) = \overline{I_f (z_0,\gamma z_0)}$,
the composite map coincides with the first map in the statement.
Its image is thus already contained in the real vector space $\h_{\mathrm{par}}^1(\Gamma, V_{k-2}(\mathbb{R}))$.
Since the real dimensions coincide, the map is an isomorphism.
In order to prove the second isomorphism, we use $f \mapsto \frac{1}{2i} (f - {\overline{f}})$
and proceed as before.
\end{proof}
We now treat the $+$- and the $-$-space for the involution attached
to the matrix~$\eta = \mat {-1}001$ from equation~\eqref{defeta}.
The action of $\eta$ on $\h^1(\Gamma,V)$ is the action of the Hecke
operator $\tau_\eta$; strictly speaking, this operator is not defined because
the determinant is negative; however, we use the same definition. To be precise,
we have
$$ \tau_\eta: \h^1(\Gamma,V) \to \h^1(\Gamma,V), \;\;\;
c \mapsto (\gamma \mapsto \eta^\iota. c(\eta \gamma \eta)),$$
provided, of course, that $\eta^\iota$ acts on~$V$ (compatibly with
the $\Gamma$-action).
We also want to define an involution $\tau_\eta$ on
$\Skg k\Gamma\mathbb{C} \oplus \overline{\Skg k\Gamma\mathbb{C}}$.
For that recall that if $f(z) = \sum a_n e^{2\pi i n z}$,
then $\tilde{f}(z) := \sum \overline{a_n} e^{2\pi i n z}$
is again a cusp form in $\Skg k\Gamma\mathbb{C}$ since we only applied
a field automorphism (complex conjugation) to the coefficients
(think of cusp forms as maps from the Hecke algebra over~$\mathbb{Q}$ to~$\mathbb{C}$).
We define $\tau_\eta$ as the composite
$$ \tau_\eta: \Skg k\Gamma\mathbb{C} \xrightarrow{f \mapsto (-1)^{k-1}\tilde{f}}
\Skg k\Gamma\mathbb{C} \xrightarrow{\tilde{f} \mapsto \overline{\tilde{f}}}
\overline{\Skg k\Gamma\mathbb{C}}.$$
Similarly, we also define $\tau_\eta: \overline{\Skg k\Gamma\mathbb{C}} \to\Skg k\Gamma\mathbb{C}$
and obtain in consequence an involution $\tau_\eta$ on
$\Skg k\Gamma\mathbb{C} \oplus \overline{\Skg k\Gamma\mathbb{C}}$.
We consider the function $(-1)^{k-1}\overline{\tilde{f}(z)}$ as a function of~$\overline{z}$.
We have
\begin{multline*}
\tau_\eta(f)(\overline{z})=(-1)^{k-1}\overline{\tilde{f}(z)}
= (-1)^{k-1}\overline{\sum_n \overline{a_n} e^{2\pi i n z}}
= (-1)^{k-1}\sum_n a_n e^{2 \pi i n (-\overline{z})}\\
= (-1)^{k-1}f(-\overline{z}) = f|_\eta (\overline{z}).
\end{multline*}
\begin{proposition}
The Eichler-Shimura map commutes with $\tau_\eta$.
\end{proposition}
\begin{proof}
Let $f \in \Skg k\Gamma\mathbb{C}$ (for simplicity). We have to check whether $\tau_\eta$
of the cocycle attached to~$f$ is the same as the cocycle attached to $\tau_\eta (f)$.
We evaluate the latter at a general $\gamma \in \Gamma$ and compute:
\begin{align*}
J_{(-1)^{k-1}\overline{\tilde{f}}}(\infty,\gamma \infty)
&= (-1)^{k-1} \int_{\infty}^{\gamma \infty} f(-\overline{z}) (X\overline{z}+Y)^{k-2} d\overline{z}\\
&= - \int_{\infty}^{\gamma \infty} f(-\overline{z}) (X(-\overline{z})-Y)^{k-2} d\overline{z}\\
&= \int_{\gamma \infty}^{\infty} f(-\overline{z}) (X(-\overline{z})-Y)^{k-2} d\overline{z}\\
&= \int_{0}^{\infty} f(-\overline{(\gamma \infty + it)}) (X(- \overline{(\gamma \infty + it)} )-Y)^{k-2} (-i) dt\\
&= - \int_{0}^{\infty} f(-\gamma \infty + it) (X(- \gamma \infty + it) -Y)^{k-2} i dt\\
&= \int_{\infty}^{-\gamma \infty } f(z) (Xz-Y)^{k-2} dz\\
&= \eta^\iota. I_f(\infty,-\gamma \infty)= \eta^\iota. I_f(\infty,\eta \gamma \eta \infty).
\end{align*}
This proves the claim.
\end{proof}
\begin{corollary}
Let $\Gamma = \Gamma_1(N)$. The maps
$$ \Skg k\Gamma\mathbb{C} \to \h_{\mathrm{par}}^1(\Gamma, V_{k-2}(\mathbb{C}))^+, \;\;\;
f \mapsto (1+\tau_\eta).(\gamma \mapsto I_f(z_0,\gamma z_0))$$
and
$$ \Skg k\Gamma\mathbb{C} \to \h_{\mathrm{par}}^1(\Gamma, V_{k-2}(\mathbb{C}))^-, \;\;\;
f \mapsto (1-\tau_\eta).(\gamma \mapsto I_f(z_0,\gamma z_0))$$
are isomorphisms compatible with the Hecke operators,
where the~$+$ (respectively the~$-$) indicate the subspace invariant
(respectively anti-invariant) for the involution~$\tau_\eta$.
A similar result holds in the presence of a Dirichlet character.
\end{corollary}
\begin{proof}
Both maps are clearly injective (consider them as being given
by $f \mapsto f + \tau_\eta f$ followed by the Eichler-Shimura map)
and so dimension considerations show that they are isomorphisms.
\end{proof}
\subsection{Theoretical exercises}
\begin{exercise}\label{exheckering}
Check that $R(\Delta,\Gamma)$ is a ring (associativity and
distributivity).
\end{exercise}
\begin{exercise}\label{aufghecke}
Show the formula
$$ T_m T_n = \sum_{d \mid (m,n), (d,N)=1} d T(d,d) T_{\frac{mn}{d^2}}.$$
Also show that $R(\Delta,\Gamma)$ is generated by $T_p$ and $T(p,p)$
for $p$ running through all prime numbers.
\end{exercise}
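As a numerical illustration of this formula, consider its effect on Hecke eigenvalues: for a normalized eigenform of weight $k$ and trivial character (in the usual normalization, where $T(d,d)$ acts by $d^{k-2}$), the coefficients satisfy $a(m)a(n) = \sum_{d \mid (m,n)} d^{k-1} a(mn/d^2)$. The sketch below (plain Python) checks this for the discriminant form $\Delta = q\prod_{n\ge1}(1-q^n)^{24}$ of weight $12$ and level $1$, whose coefficients are the Ramanujan $\tau$-values:

```python
def tau_coeffs(N):
    """tau(1), ..., tau(N): coefficients of Delta = q * prod_{n>=1} (1 - q^n)^24."""
    P = [0] * N
    P[0] = 1                               # truncated power series, degrees 0..N-1
    for n in range(1, N):
        for _ in range(24):                # multiply 24 times by (1 - q^n)
            for i in range(N - 1, n - 1, -1):
                P[i] -= P[i - n]
    return [0] + P                          # tau[m] = coefficient of q^(m-1); tau[0] unused

tau = tau_coeffs(10)
assert tau[1] == 1 and tau[2] == -24 and tau[3] == 252
assert tau[2] * tau[3] == tau[6]            # multiplicativity: T_2 T_3 = T_6
assert tau[2] ** 2 == tau[4] + 2 ** 11      # T_2 T_2 = T_4 + 2 T(2,2), with k = 12
assert tau[3] ** 2 == tau[9] + 3 ** 11
```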
\begin{exercise}\label{exheckepar}
Check that the Hecke operator $\tau_\alpha$ from Definition~\ref{defheckegp}
restricts to $\h_{\mathrm{par}}^1(\Gamma,V)$.
\end{exercise}
\begin{exercise}\label{excorestriction}
Prove Equation~\ref{eqcorestriction}.
\end{exercise}
\subsection{Computer exercises}
\begin{cexercise}
Implement Hecke operators.
\end{cexercise}
\section{Introduction}
\subsection{Discussion}
We will consider the Ginzburg--Landau model of superconductivity. If a
$2$-dimensional superconducting sample with Ginzburg--Landau parameter $\kappa$
is submitted to a uniform magnetic field of strength $\sigma$, then (by a
theorem of Giorgi and Phillips~\cite{giph}) there exists a field strength
$\overline{H_{C_3}(\kappa)}$ such that if
$\sigma > \overline{H_{C_3}(\kappa)}$,
then the sample will be in its normal state, i.e. superconductivity is lost
altogether. It is at first sight natural to expect this phenomenon to mark a
monotone transition, i.e. to expect that the material is in its superconducting
(possibly mixed) state for all $\sigma<\overline{H_{C_3}(\kappa)}$.
Indeed, such a monotonicity result has been proved recently in a number of
geometric situations and in both $2$ and $3$ dimensional
settings~\cite{fohe3,fohe5,fohe6,fope} in the case where the Ginzburg--Landau
parameter $\kappa$ is large (it also follows from asymptotic expansions obtained
in other works such as \cite{MR2496304,MR3073418}). However, Nature does not
support this monotonicity
in general. The famous Little--Parks effect~\cite{lipa} shows that for narrow
cylinders (or $2$D annuli) one has an oscillatory behavior instead of
monotonicity.\footnote{In connection to the Little--Parks effect one often
discusses the (solid) disc as another example, where the effect of surface
superconductivity provides a localization to the boundary and therefore
effectively introduces non-trivial topology which should give oscillations.
However, as already the early studies of Saint-James~\cite{saja} show
(see also~\cite{fohe5}), in the case of the solid disc these oscillations are
superposed on a linear background and are not strong enough to break the
monotonicity of the background.}
In this paper we will establish such `oscillatory' effects rigorously in
different geometric settings.
The lack of monotonicity comes from the topology/geometry of the annulus. It is
natural
to ask whether one can get such an oscillatory effect for (non-vanishing)
magnetic fields defined on domains without topology. From the previous
investigations \cite{fohe3} we know this to be impossible for a uniform
magnetic field, but how about more general fields?
The analysis of constant magnetic fields tells us that this question is
linked to a
purely spectral problem, namely whether the first eigenvalue of the
Schr\"{o}dinger operator $(-i\nabla + B {\mathbf F})^2$ is monotone
increasing in the parameter (strength of the magnetic field) $B$ for
sufficiently large values of $B$. This property has been called `strong
diamagnetism' and has been proved for large classes of magnetic fields---it is
even `generically' satisfied \cite{fohe3,fohe5,fohe6,fope,MR2496304,MR3073418}.
However, we produce counterexamples in the general case.
\subsection{Ginzburg--Landau theory}
The Ginzburg--Landau theory of superconductivity is based on the energy
functional
\begin{align}\label{eq:DefGL}
\mathcal{G}_{\kappa,\sigma}(\psi,\mathbf{A})
&=\int_{\Omega} |(-i\nabla+\kappa\sigma \mathbf{A})\psi|^2
-\kappa^2|\psi|^2+\frac{\kappa^2}{2}|\psi|^4\,dx\nonumber \\
&\quad+(\kappa\sigma)^2\int_{\widetilde{\Omega}} |\curl \mathbf{A}-\beta|^2\,dx.
\end{align}
Here $\kappa > 0$ is a material parameter (the Ginzburg--Landau parameter),
$\sigma\geq 0$ is a parameter measuring the intensity of the external magnetic
field. The domain $\Omega \subseteq {\mathbb R}^2$ is the part of space occupied
by the superconducting material. For $\widetilde{\Omega}$ there are two natural
choices. One can take $\widetilde{\Omega} = {\mathbb R}^2$. That will not be our
choice here because for reasons of simplicity we want to avoid an unnecessary
technical complication connected with unbounded domains in ${\mathbb R}^2$
(for details on how to handle this issue see \cite{giti,gism}). One
can also---and that will be our convention here---take $\widetilde{\Omega}$ to
be the smallest simply connected domain containing $\Omega$, i.e. the union of
$\Omega$ and all the `holes' in $\Omega$. The function
$\beta \in L^2(\widetilde{\Omega})$ is the profile of the external magnetic
field.
In the setting of bounded $\Omega \subset {\mathbb R}^2$ the functional
$\mathcal{G}_{\kappa,\sigma}$ is naturally defined on
$(\psi, \mathbf{A}) \in H^1(\Omega)
\times H^1(\widetilde{\Omega}, {\mathbb R}^2)$.
The functional is immediately seen to be gauge invariant,
$\mathcal{G}_{\kappa,\sigma}(\psi,\mathbf{A})
= \mathcal{G}_{\kappa,\sigma}(\psi e^{-i \kappa \sigma \phi} ,\mathbf{A}
+\nabla \phi)$.
The vector field ${\mathbf A}$ models the induced magnetic vector potential.
The function $\psi$ measures the superconducting properties of the material,
with $|\psi(x)|$ being a measure of the local density of Cooper pairs.
We say that a minimizer $(\psi, \mathbf{A})$ of the Ginzburg--Landau functional
is trivial if
$\psi\equiv 0$ and $\curl\mathbf{A}=\beta$.
In each of the situations we will encounter, the notation $\mathbf{F}$ will be
reserved for a fixed choice of vector potential with $\curl \mathbf{F} = \beta$.
For trivial minimizers we clearly have
$\mathcal{G}_{\kappa,\sigma}(\psi,\mathbf{A})=0$. For a nontrivial
minimizer the functional must be negative, since the Euler--Lagrange
equations satisfied by a minimizer $(\psi, \mathbf{A})$ yield
\[
\mathcal{G}_{\kappa,\sigma}(\psi,\mathbf{A}) =
-\frac{\kappa^2}{2} \| \psi \|_4^4.
\]
We define the set
\[
\mathcal{N}(\kappa):=
\{\sigma>0~\mid~\mathcal{G}_{\kappa,\sigma}
\text{ has a nontrivial minimizer
$(\psi,\mathbf{A})$}\}.
\]
Following \cite{lupa1} one typically defines the third critical field to be
given by $\sup \mathcal{N}(\kappa)$, which is finite by \cite{giph}. However,
unless $\mathcal{N}(\kappa)$ is an interval, this definition is not the only
natural one to take---see \cite{fohe3,fohebook} for a discussion. We will see below
that $\mathcal{N}(\kappa)$ is indeed not always an interval.
\subsection{Oscillations in the third critical field}
Let $\Omega=\{x\in\mathbb{R}^2~\mid~R_{i}<|x|<R_{o}\}$ denote the annulus with
inner radius $R_{i}$ and outer radius $R_{o}$, let $\beta \equiv 1$.
In this case we will write $D = \widetilde{\Omega} = B(0,R_{o})$ i.e. the disc of
radius $R_{o}$.
\begin{thm}\label{thm:main3}
There exists an annulus $\Omega$ and a $\kappa_0>0$ such that the set
$\mathcal{N}(\kappa_0)$ is not an interval.
\end{thm}
\begin{remark}
The mechanism behind this result is a convergence of the magnetic quadratic
form on the annulus to the corresponding form on the circle. This convergence
was already noticed in the works \cite{BR,RS}, where also `annuli' of
non-uniform width were considered. It is likely that one could deduce
Theorem~\ref{thm:main3} from these works; however, we prefer to give a simple
independent proof which also emphasizes the connection to the
Bohm--Aharonov effect.
\end{remark}
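The oscillation mechanism can be made concrete in the limiting circle model: there the ground state energy is $\min_{m\in\mathbb{Z}} \bigl((m-\Phi B)/R\bigr)^2$, where $\Phi B$ is the flux through the enclosed disc of radius $R$ and $m$ runs over the integer angular momenta. A minimal sketch in plain Python (the values of $\Phi$ and $R$ below are arbitrary illustrative choices, not taken from the theorem):

```python
def circle_ground_state(B, Phi=0.5, R=1.0):
    """min over integer angular momenta m of ((m - Phi*B)/R)**2."""
    x = Phi * B
    return min(((m - x) / R) ** 2 for m in (int(x) - 1, int(x), int(x) + 1))

# zero whenever Phi*B is an integer, strictly positive in between:
# the ground state energy is not monotone on any half-line in B
vals = [circle_ground_state(B) for B in (2.0, 3.0, 4.0)]
assert vals == [0.0, 0.25, 0.0]
```

The energy returns to $0$ each time the flux $\Phi B$ passes through an integer, which is exactly the frustration effect behind Theorem~\ref{thm:main3}.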
\begin{remark}
\label{rem:largerkappa}
By shrinking the inner radius $R_{i}$ of the annulus, we can get $\kappa_0$ as
large as we want, since the eigenvalues of the limiting problem will then cross
at a level $1/(2R_{i})^2$. In particular it is possible to have
$\kappa_0>1/\sqrt{2}$, which means that Theorem~\ref{thm:main3} also applies
to superconductors of Type II.
\end{remark}
One may criticize the result of Theorem~\ref{thm:main3} on two accounts. One
could desire not to have the topology fixed a priori, but rather have it
generated by localization properties of the minimizer. Also most previous
mathematical analysis has considered the limit of large values of $\kappa$.
One can show that for sufficiently large values of $\kappa$ the set $\mathcal{N}(\kappa)$ of a superconducting
sample in the shape of an annulus will behave as the one of the disc with the same outer
radius, and it is known that for the disc and with constant magnetic field---for sufficiently large values of
$\kappa$---$\mathcal{N}(\kappa)$ is indeed an interval \cite{fohe3}.
Our next theorem remedies these defects.
\begin{thm}\label{thm:HC3-oscillation}
Let $\Omega$ be the unit disc in $\mathbb{R}^2$. There exists an everywhere positive
magnetic field $\beta(x)$ such that for every $\kappa_0>0$ there exists
$\kappa > \kappa_0$ for which $\mathcal{N}(\kappa)$ is not an interval.
\end{thm}
In fact, the magnetic field can be chosen as $\beta(x) = \delta+(1-|x|)^2$,
where $\delta>0$ is some sufficiently small constant.
Theorem~\ref{thm:HC3-oscillation} follows directly from Theorem~\ref{thm:main}
(or Theorem~\ref{thm:intasymptot}) below using \cite[Prop. 13.1.7]{fohebook}.
Actually, it easily follows from Theorem~\ref{thm:intasymptot} below, that for
all integers $n>0$ we can choose $\delta$ so small that ${\mathcal N}(\kappa)$
will consist of at least $n$ intervals for all $\kappa$ sufficiently large.
\subsection{Lack of strong diamagnetism}\mbox{}\par
For easy reference we collect the notation and assumptions concerning the magnetic fields that we will treat.
We will work on an open set $\Omega$ in one of the following three cases: $\Omega \in \{ {\mathbb R}^2,\ B(0,1),\ {\mathbb R}^2 \setminus \overline{B(0,1)}\}$.
\begin{assump}\label{ass:Magnetic}
Suppose that $\beta(x) = \tilde{\beta}(|x|) \in L^{\infty}_{\text{loc}}(\Omega)$
is a non-negative, radial magnetic field, possessing five continuous derivatives
in an open neighborhood $U$ of the unit circle $\{x\in\mathbb{R}^2: |x|=1\}$.
Define
\begin{align}
\delta := \tilde{\beta}(1) \geq 0,
\end{align}
and assume that $\tilde\beta'(1)=0$ and write
\begin{align}
\tilde{\beta}''(1) =:k .
\end{align}
When $\Omega \in \{ B(0,1),\ {\mathbb R}^2 \setminus \overline{B(0,1)}\}$,
we assume that
\begin{align}
\Theta_0 \delta < \inf_{x\in \Omega} \beta(x),
\end{align}
where $\Theta_0 <1$ is the spectral constant recalled in Appendix~\ref{sec:dG}.
When $\Omega = {\mathbb R}^2$, we impose the stronger assumption that
$\tilde{\beta}(r)$ has a unique, non-degenerate minimum at $r=1$ and that
\begin{align}
\inf_{x \in {\mathbb R}^2 \setminus U} \beta(x) > \delta.
\end{align}
\end{assump}
\begin{remark}
The assumptions assure that ground state eigenfunctions will be localized near $r=1$. For $\Omega = {\mathbb R}^2$, we have $k>0$ by assumption, but that is not necessarily true in the cases with boundary.
\end{remark}
\begin{defn}
We define
\begin{align}
\Phi := \frac{1}{2\pi} \int_{\{ |x| <1 \}} \beta(x)\,dx = \int_0^1 \tilde{\beta}(r) r\,dr,
\end{align}
i.e. $\Phi$ denotes the magnetic flux through the unit disc.
\end{defn}
For a magnetic field satisfying Assumption~\ref{ass:Magnetic} and $B>0$,
we study the lowest eigenvalue $\eigone{\mathcal{H}(B)}$ of the self-adjoint
magnetic Schr\"{o}dinger operator
\[
\mathcal{H}(B)=(-i\nabla+B\mathbf{F})^2
\]
in $L^2(\Omega)$. Here $\mathbf{F}$ is a magnetic vector potential associated
with the magnetic field $\beta$. We refer the reader to Section~\ref{sec:prel}
for a more complete definition of this operator and the eigenvalue.
We will study this eigenvalue problem in three cases, namely for
$\Omega$ the unit disc, the complement of the unit disc
and the whole plane $\mathbb{R}^2$. If $\Omega$ has a non-empty boundary we impose a
magnetic Neumann boundary condition.
The next theorem states that if $\Omega$ is the unit disc or its
complement,
then for special choices of magnetic fields satisfying Assumption~\ref{ass:Magnetic} the
function $B\mapsto \eigone{\mathcal{H}(B)}$ is \emph{not} monotonically increasing
for large $B$. Before stating the theorems, we remind the reader that
\[
\xi_0,\quad \Theta_0,\quad \text{and}\quad \phi_{\xi_0}(0)
\]
are universal (spectral) constants coming from the de Gennes model operator---this is
recalled in Appendix~\ref{sec:dG}.
\begin{thm}
\label{thm:main}
Let $\Omega$ be the unit disc or its complement. Suppose that $\beta$ satisfies Assumption~\ref{ass:Magnetic}.
Assume that $\delta>0$ and
\begin{align}\label{eq:fluxcondition}
\Phi > \frac{\Theta_0}{\xi_0 \phi_{\xi_0}(0)^2} \delta.
\end{align}
Then for all $B_0>0$ there exist $B_1$ and
$B_2$, with $B_0<B_1<B_2$, such that
\[
\eigone{\mathcal{H}(B_1)}>\eigone{\mathcal{H}(B_2)}.
\]
On the other hand, if
\begin{align}\label{eq:fluxcondition2}
\Phi < \frac{\Theta_0}{\xi_0 \phi_{\xi_0}(0)^2} \delta,
\end{align}
then there exists $B_0>0$ such that $B \mapsto \eigone{\mathcal{H}(B)}$ is monotone increasing on $[B_0, \infty)$.
\end{thm}
\begin{remark}
In particular, \eqref{eq:fluxcondition} holds for the magnetic field
\begin{align}
\beta(x) = \delta + (1-|x|)^2,
\end{align}
for all $\delta>0$ sufficiently small---the flux in this case is
$\Phi = \frac{\delta}{2} + \frac{1}{12}$. Therefore, this magnetic field will
not display monotonicity for large field strength.
\end{remark}
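The flux value quoted in the remark is the elementary computation $\Phi = \int_0^1 \bigl(\delta + (1-r)^2\bigr) r\,dr = \frac{\delta}{2} + \frac{1}{12}$. A minimal exact check in plain Python, integrating the expanded polynomial term by term with rational arithmetic (since both sides are affine in $\delta$, a few values of $\delta$ suffice):

```python
from fractions import Fraction as F

def integral01(coeffs):
    """Integral over [0,1] of sum_j coeffs[j] * r**j."""
    return sum(a / (j + 1) for j, a in enumerate(coeffs))

for delta in (F(0), F(1), F(3, 7)):
    # (delta + (1 - r)**2) * r = (delta + 1)*r - 2*r**2 + r**3
    coeffs = [F(0), delta + 1, F(-2), F(1)]
    assert integral01(coeffs) == delta / 2 + F(1, 12)
```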
Theorem~\ref{thm:main} is a consequence of the following precise asymptotic
formulas for the ground state eigenvalue given as Theorem~\ref{thm:extasymptot}
and Theorem~\ref{thm:intasymptot}.
\begin{thm}
\label{thm:extasymptot}
Suppose that $\Omega$ is the complement of the unit disc, that $\beta$ satisfies Assumption~\ref{ass:Magnetic} and that
$\delta>0$.
Then there are constants $C_0^{\text{ext}}$ and $C_1^{\text{ext}}$ such that if
\[
\Delta_B^{\text{ext}}
:=\inf_{m\in\mathbb{Z}}\bigl|
m-\Phi B-\xi_0 (\delta B)^{1/2}
-C_0^{\text{ext}}\bigr|,
\]
then, as $B\to+\infty$,
\[
\eigone{\mathcal{H}(B)} = \Theta_0\delta B
+ \frac{1}{3}\phi_{\xi_0}(0)^2 (\delta B)^{1/2}
+ \xi_0\,\phi_{\xi_0}(0)^2
\bigl((\Delta_B^{\text{ext}})^2+C_1^{\text{ext}}\bigr)+\mathcal{O}(B^{-1/2}).
\]
\end{thm}
\begin{remark}
By a careful reading of the proof, one will realize that the constant
$C_0^{\text{ext}}$ is independent of $\delta$ but that $C_1^{\text{ext}}$
depends on $\delta$. However, for our purposes this extra information is
irrelevant.
\end{remark}
A similar expansion holds in the interior of the unit disc.
\begin{thm}
\label{thm:intasymptot}
Suppose that $\Omega$ is the unit disc, that $\beta$ satisfies Assumption~\ref{ass:Magnetic} and that
$\delta>0$.
Then there exist constants $C_0^{\text{int}}$ and $C_1^{\text{int}}$ such that
if
\[
\Delta_B^{\text{int}}
:=\inf_{m\in\mathbb{Z}}\bigl|m-\Phi B+\xi_0 (\delta B)^{1/2}
-C_0^{\text{int}}\bigr|,
\]
then, as $B\to+\infty$,
\[
\eigone{\mathcal{H}(B)} = \Theta_0\delta B
-\frac{1}{3}\phi_{\xi_0}(0)^2 (\delta B)^{1/2}
+ \xi_0\,\phi_{\xi_0}(0)^2\bigl((\Delta_B^{\text{int}})^2
+C_1^{\text{int}}\bigr)+\mathcal{O}(B^{-1/2}).
\]
\end{thm}
\begin{remark}
Notice that for the disc or its complement, the constant magnetic field
$\beta(x) = \delta >0$ satisfies Assumption~\ref{ass:Magnetic}, so
Theorems~\ref{thm:extasymptot} and~\ref{thm:intasymptot} imply this special
case. This agrees with the calculations in~\cite{fohe3} (see
also~\cite{fohebook}). In the case of constant field~\eqref{eq:fluxcondition}
is not satisfied, and one does get monotonicity of the ground state energy for
large magnetic field (this is discussed in detail in~\cite{fohe3}).
\end{remark}
We continue with $\Omega=\mathbb{R}^2$. Here, we are only able to destroy monotonicity
in the case $\delta=0$.
\begin{thm}\label{thm:wholeplane}
Let $\Omega=\mathbb{R}^2$. Then, for all $\delta >0$ and all magnetic fields satisfying Assumption~\ref{ass:Magnetic} there exists a $B_0>0$ such that
$\eigone{\mathcal{H}(B)}$ is monotonically increasing for $B>B_0$. However, if
$\delta =0$, then $B \mapsto \eigone{\mathcal{H}(B)}$ is not monotone increasing on
any unbounded half-interval.
\end{thm}
As for the disc and the exterior of the disc, the proof of this result goes
via asymptotic expansions.
\begin{thm}\label{thm:R2deltapos}
Suppose that $\Omega=\mathbb{R}^2$, and that $\beta$ satisfies Assumption~\ref{ass:Magnetic} with $\delta>0$. Then, as $B\to+\infty$,
\[
\eigone{\mathcal{H}(B)} = \delta B+\frac{k}{4\delta} + \mathcal{O}(B^{-1/2}).
\]
\end{thm}
\begin{thm}\label{thm:R2deltazero}
Let $c_0>0$ and $\Xi$ be the spectral constants from~\eqref{eq:c0}
and~\eqref{eq:DefXi} respectively. Suppose that
$\Omega=\mathbb{R}^2$, and that $\beta$ satisfies Assumption~\ref{ass:Magnetic}
with $\delta=0$. There exist constants $C_1$ and $C_2$
such that if
\[
\Delta_B := \inf_{m\in\mathbb{Z}}\bigl|m-\Phi B-C_1\bigr|,
\]
then, as $B\to+\infty$,
\[
\eigone{\mathcal{H}(B)} = \Bigl(\frac{k}{2}\Bigr)^{1/2}\Xi B^{1/2}+\frac{c_0}{2}\bigl(\Delta_B^2+C_2\bigr) + o(1).
\]
\end{thm}
\begin{remark}
In all of the results above the ground state has angular momentum
$m \approx \Phi B$ (to leading order in $B$).
We recall that $\Phi B$ is the total flux through the unit
disc---the bounded domain enclosed by the curve where we have localization.
The possibility to obtain non-monotonicity comes from the condition that $m$
must be an integer, which leads to frustration.
This is similar to examples in~\cite{Er1}.
\end{remark}
\begin{remark}
Theorem~\ref{thm:wholeplane} raises the question whether one can break strong
diamagnetism with a strictly positive magnetic field on the whole plane.
\end{remark}
\subsection{Organization of the paper}
In the next section we define the operators involved and perform the Fourier
decomposition reducing the study to a family of ordinary differential operators.
In Section~\ref{sec:annulus} we prove a non-monotonicity result for an annulus
and use that to prove Theorem~\ref{thm:main3}. In Section~\ref{sec:section4} we
work in the exterior of the unit disc and prove Theorem~\ref{thm:extasymptot}.
We indicate in Section~\ref{sec:disc} how the proof of
Theorem~\ref{thm:extasymptot} can be modified to give the proof of
Theorem~\ref{thm:intasymptot}. In Section~\ref{sec:nonmondiscanddiscext} we see
how Theorem~\ref{thm:extasymptot} and Theorem~\ref{thm:intasymptot} imply
Theorem~\ref{thm:main}.
In Section~\ref{sec:planeg0} we prove Theorem~\ref{thm:R2deltapos} and in
Section~\ref{sec:plane0} we prove Theorem~\ref{thm:R2deltazero}. These two
results are used to prove Theorem~\ref{thm:wholeplane}.
\section{Preliminaries}
\label{sec:prel}
\subsection{Definition of the operator}
\label{sec:defs}
We consider the self-adjoint magnetic Neumann Schr\"odinger operator
\begin{equation}\label{eq:neumannop}
\mathcal{H}(B)=(-i\nabla+B\mathbf{F})^2
\end{equation}
with domain
\begin{align}\label{eq:neumanncond-New}
\dom(\mathcal{H}(B))=\bigl\{\psi\in L^2(\Omega)&\mid
(-i\nabla+B\mathbf{F})^2 \psi \in L^2(\Omega) \nonumber \\
&\quad \text{ and }
N(x)\cdot(-i\nabla+B\mathbf{F})\psi|_{\partial\Omega}=0\bigr\}.
\end{align}
Here $N(x)$ is the interior unit normal to $\partial\Omega$,
\[
\beta(x)=\Bigl(\frac{\partial F_2}{\partial x_1}
-\frac{\partial F_1}{\partial x_2}\Bigr),\quad \mathbf{F}=(F_1,F_2),
\]
and $B\geq 0$ is the strength of the magnetic field.
In general, for a self-adjoint operator $\mathcal{H}$ that is semi-bounded from below
we will write
\begin{equation*}
\eigone{\mathcal{H}} = \inf\spec\bigl(\mathcal{H}\bigr)
\end{equation*}
for the lowest point of the spectrum of $\mathcal{H}$.
In the case of the disc or if $\beta(x)\to+\infty$ as $|x|\to+\infty$
the operator has compact resolvent (see~\cite{avhesi}). If $\Omega$ is
unbounded and if $\beta(x)\not\to+\infty$, then the essential spectrum
will be bounded below by $\liminf_{r\to+\infty} B\tilde{\beta}(r)>B\delta$
(see~\cite{hemo88} for the case of $\mathbb{R}^2$ and \cite{kape} for the case
of the exterior of the disc). In any case, as will follow from the results
below, $\eigone{\mathcal{H}(B)}$ will be an eigenvalue.
\subsection{Fourier decomposition}
We will work in domains $\Omega$ that are rotationally symmetric. For that
reason, we will often work in polar coordinates
\begin{equation*}
\left\{
\begin{aligned}
x_1 &= r \cos\theta,\\
x_2 &= r \sin\theta,
\end{aligned}\right.\qquad r\in I,\ 0\leq\theta<2\pi.
\end{equation*}
Here $I\subset[0,+\infty)$ will be an interval.
Moreover, we will work with magnetic fields that depend only on $r=|x|$.
For a radial magnetic field $\beta(x)=\tilde\beta(r)$ we will work
with the gauge
\[
\mathbf{F}(x)=a(r)(-\sin\theta,\cos\theta),
\]
where\footnote{Notice that $\int_0^r \tilde\beta(s)s\,ds
= \frac{1}{2\pi} \int_{B(0,r)} \beta(x)\,dx$, so $ra(r)$ has an immediate
interpretation in terms of the flux through the disc of radius $r$.}
\begin{align}
\label{eq:Def_a}
a(r)=\frac{1}{r}\int_0^r \tilde\beta(s) s\,ds.
\end{align}
In calculations, we will often meet the expression $(\frac{m}{r} - B a(r))^2$.
This we can write as
\begin{align}\label{eq:Cancellation}
\Bigl(\frac{m}{r} - B a(r)\Bigr)^2 &= \frac{1}{r^2} \bigl(m - B r a(r)\bigr)^2,
\end{align}
where
\begin{align}
\label{eq:rar-split}
r a(r) = \int_0^1 \tilde{\beta}(s) s \,ds + \int_1^r \tilde{\beta}(s) s \,ds = \Phi + \int_0^{r-1} \tilde{\beta}(1+s) (1+s) \,ds.
\end{align}
Thus, under Assumption~\ref{ass:Magnetic}, as $r\to 1$,
\begin{align}\label{eq:rar_expansion}
ra(r)&=\Phi + \delta(r-1) + \frac{\delta}{2}(r-1)^2 + \frac{k}{6}(r-1)^3
+\Bigl(\frac{c}{24}+\frac{k}{8}\Bigr)(r-1)^4+\mathcal{O}\bigl((r-1)^5\bigr),
\end{align}
with $c= \tilde{\beta}'''(1)$.
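The coefficients in \eqref{eq:rar_expansion} can be checked symbolically by integrating a model field whose derivatives at $r=1$ match those implicit in the expansion, namely $\tilde\beta(1)=\delta$, $\tilde\beta'(1)=0$, $\tilde\beta''(1)=k$ and $\tilde\beta'''(1)=c$. The polynomial stand-in below is chosen only for this check and is not the field of Assumption~\ref{ass:Magnetic}; the check is purely illustrative.

```python
import sympy as sp

r, s, Phi, delta, k, c = sp.symbols('r s Phi delta k c')

# Model radial field with tilde_beta(1) = delta, tilde_beta'(1) = 0,
# tilde_beta''(1) = k, tilde_beta'''(1) = c (a polynomial stand-in,
# chosen only for this check).
tilde_beta = delta + k/2*(s - 1)**2 + c/6*(s - 1)**3

# r a(r) = Phi + int_1^r tilde_beta(s) s ds, as in the split of r a(r).
ra = Phi + sp.integrate(tilde_beta*s, (s, 1, r))

# Taylor expansion around r = 1, keeping terms up to (r-1)^4.
expansion = sp.series(ra, r, 1, 5).removeO()
expected = (Phi + delta*(r - 1) + sp.Rational(1, 2)*delta*(r - 1)**2
            + k/6*(r - 1)**3 + (c/24 + k/8)*(r - 1)**4)
diff = sp.simplify(expansion - expected)   # should vanish identically
```

In particular, the fourth-order coefficient $(c/24+k/8)$ comes out of the integration of $\tilde\beta(s)s$ automatically.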
The expression for the operator $\mathcal{H}(B)$ in polar coordinates becomes
\[
\mathcal{H}(B)=-\frac{\partial^2}{\partial r^2}
-\frac{1}{r}\frac{\partial}{\partial r}
+\Bigl(\frac{i}{r}\frac{\partial}{\partial\theta}-Ba(r)\Bigr)^2.
\]
We decompose the Hilbert space as (here $I$ denotes any of the intervals
$(R_{i},R_{o})$, $(0,1)$,
$(1,+\infty)$ or $(0,+\infty)$)
\begin{align*}
L^2(\Omega)\cong
L^2\bigl(I, rdr \bigr) \otimes L^2(\mathbb{S}^1,d\theta)
\cong \bigoplus_{\mathclap{m=-\infty}}^\infty L^2\bigl(I, rdr \bigr)
\otimes \frac{e^{-im\theta}}{\sqrt{2\pi}},
\end{align*}
that is, for a function $\psi\in L^2(\Omega)$, we write
\begin{equation*}
\psi(r,\theta)=\sum_{m\in\mathbb{Z}} \psi_m(r)
\frac{e^{-im\theta}}{\sqrt{2\pi}},
\end{equation*}
where $\psi_m\in L^2\bigl(I, r dr \bigr)$. Next, we write the operator
$\mathcal{H}(B)$ corresponding to this decomposition as
\begin{equation*}
\mathcal{H}(B) = \bigoplus_{\mathclap{m=-\infty}}^\infty \mathcal{H}_m(B)\otimes 1,
\end{equation*}
where $\mathcal{H}_m(B)$ is the self-adjoint operator acting in
$L^2\bigl(I, r\,dr\bigr)$, given by
\begin{equation*}
\mathcal{H}_m(B)
=-\frac{d^2}{dr^2}-\frac{1}{r}\frac{d}{dr}
+\Bigl(\frac{m}{r}-Ba(r)\Bigr)^2,
\end{equation*}
with Neumann boundary
conditions at the endpoints of $I$. The quadratic form
corresponding to $\mathcal{H}_m(B)$ is given by
\begin{equation}
\label{eq:quad}
\mathfrak{q}_m[\psi]=\int_I \Bigl[|\psi'(r)|^2
+\Bigl(\frac{m}{r}-Ba(r)\Bigr)^2|\psi(r)|^2
\Bigr]r\, dr.
\end{equation}
It holds that
\begin{equation}
\label{eq:infmeig}
\eigone{\mathcal{H}(B)} = \inf_{m\in\mathbb{Z}}\eigone{\mathcal{H}_m(B)}.
\end{equation}
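The decomposition \eqref{eq:infmeig} also makes $\eigone{\mathcal{H}(B)}$ easy to approximate numerically, since each $\mathcal{H}_m(B)$ is a one-dimensional Neumann problem. The following sketch (purely illustrative, not used in any proof) discretizes the quadratic form \eqref{eq:quad} on an annulus with the constant field $\beta\equiv 1$, for which $a(r)=r/2$; the grid size and the range of angular momenta are ad hoc choices.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lowest_eig(m, B, Ri, Ro, N=1000):
    """Lowest Neumann eigenvalue of H_m(B) on (Ri, Ro) for the constant
    field beta = 1 (so a(r) = r/2), by discretizing the quadratic form
    and solving the generalized problem A v = lambda M v, M = diag(r h)."""
    r = np.linspace(Ri, Ro, N + 1)
    h = r[1] - r[0]
    w = 0.5 * (r[:-1] + r[1:]) / h        # edge weights of int |u'|^2 r dr
    pot = (m / r - B * r / 2.0) ** 2      # potential p_{m,B}(r)
    diag = pot * r * h                    # potential term on the nodes
    diag[:-1] += w                        # stiffness contributions
    diag[1:] += w
    off = -w
    # Symmetrize to M^{-1/2} A M^{-1/2}; Neumann conditions are natural
    # for the form discretization, so no boundary rows are modified.
    sm = np.sqrt(r * h)
    D = diag / sm**2
    E = off / (sm[:-1] * sm[1:])
    return eigh_tridiagonal(D, E, select='i', select_range=(0, 0))[0][0]

B, Ri, Ro = 1.0, 1.0, 1.3
lam = min(lowest_eig(m, B, Ri, Ro) for m in range(-5, 10))
```

For these parameters the minimum over $m$ is attained at $m=1$, in line with the analysis of the annulus in the next section.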
\section{The analysis of the annulus}
\label{sec:annulus}
\subsection{Introduction}
In this section we will let
\[
\beta(x)=1 \quad \text{and}\quad
\Omega=\bigl\{x\in\mathbb{R}^2~\mid~R_{i}<|x|<R_{o}\bigr\}.
\]
We aim to prove Theorem~\ref{thm:main3}.
\subsection{The linear result}
We first notice the non-monotonicity of the function
$B\mapsto \eigone{\mathcal{H}(B)}$.
\begin{thm}
\label{thm:mainlin}
Let $R_{i}=1$ and $1<R_{o}<\sqrt{2}$. Then the operator $\mathcal{H}(B)$ in the annulus
$\Omega$ satisfies
\[
\left.\frac{d}{d B}\eigone{\mathcal{H}(B)}\right|_{B=1}<0.
\]
In particular, the function $B\mapsto \eigone{\mathcal{H}(B)}$ is monotonically
decreasing around ${B=1}$.
\end{thm}
One might suspect that some properties of $\mathcal{H}(B)$ are carried over to some
model problem on the circle, as $R_{o}\searrow R_{i}$. Let $\mathcal{A}(B)$ be the
self-adjoint operator
\begin{equation}
\label{eq:angb}
\mathcal{A}(B)=\Bigl(\frac{i}{R_{i}}\frac{d}{d\theta}-\frac{BR_{i}}{2}\Bigr)^2
\end{equation}
in $L^2\bigl((0,2\pi)\bigr)$ with periodic boundary conditions. Its spectrum
is easily seen to consist of eigenvalues
$\big\{\bigl(\frac{m}{R_{i}}-\frac{BR_{i}}{2}\bigr)^2\big\}_{m\in\mathbb{Z}}$.
In particular
\[
\eigone{\mathcal{A}(B)}=\min_{m\in\mathbb{Z}}
\Bigl(\frac{m}{R_{i}}-\frac{BR_{i}}{2}\Bigr)^2.
\]
Our next theorem states that $\eigone{\mathcal{H}(B)}$ will tend to
$\eigone{\mathcal{A}(B)}$ as $R_{o}\searrow R_{i}$.
\begin{thm}
\label{thm:main2}
Let $B>0$. Then
\[
\lim_{R_{o}\searrow R_{i}}\eigone{\mathcal{H}(B)}
= \eigone{\mathcal{A}(B)}
= \min_{m\in\mathbb{Z}}\Bigl(\frac{m}{R_{i}}-\frac{BR_{i}}{2}\Bigr)^2.
\]
\end{thm}
\begin{remark}
As a direct consequence of Theorem~\ref{thm:main2} it is possible to find an
annulus such that the function $B\mapsto \eigone{\mathcal{H}(B)}$ alternates
between being monotonically increasing and decreasing as many times as desired.
\end{remark}
\begin{remark}
Another direct consequence of Theorem~\ref{thm:main2} is that, although the
diamagnetic inequality tells us that $\eigone{\mathcal{H}(B)}>\eigone{\mathcal{H}(0)}=0$
for all $B>0$, we can actually get $\eigone{\mathcal{H}(B)}$ arbitrarily close
to zero for $B=2m$, $m=1,2,\ldots$, by choosing $R_{o}$ close enough to $R_{i}$.
\end{remark}
\begin{remark}
Theorem~\ref{thm:main2} can easily be extended to thin cylinders in three
dimensions, since the third variable then separates.
\end{remark}
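Since the spectrum of $\mathcal{A}(B)$ is explicit, the oscillation described in the remarks can be displayed directly. In the sketch below (with $R_{i}=1$, an arbitrary choice for the illustration), $\eigone{\mathcal{A}(B)}$ vanishes at even integers $B=2m$ and attains its maximal value $1/4$ at odd integers.

```python
import numpy as np

def eig_circle(B, Ri=1.0):
    """Lowest eigenvalue of the circle model A(B): min_m (m/Ri - B*Ri/2)^2,
    attained at the integer m nearest to B*Ri^2/2."""
    m = np.rint(B * Ri**2 / 2.0)          # nearest integer angular momentum
    return float((m / Ri - B * Ri / 2.0) ** 2)

# With Ri = 1: zeros at B = 2, 4, ... and maxima 1/4 at B = 3, 5, ...
vals = [eig_circle(B) for B in (2.0, 3.0, 4.0, 5.0)]
```

The oscillation between $0$ and $1/(4R_i^2)$ is exactly the non-monotonicity inherited by thin annuli via Theorem~\ref{thm:main2}.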
\subsection{Nonmonotonicity in the annulus}
In this section we will prove the spectral
results Theorem~\ref{thm:mainlin} and Theorem~\ref{thm:main2}. We will work in
polar coordinates.
\begin{proof}[Proof of Theorem~\ref{thm:mainlin}]
We recall that here $R_{i}=1$. Let
\begin{equation}
\label{eq:pot}
p_{m,B}(r)=\Bigl(\frac{m}{r}-\frac{Br}{2}\Bigr)^2
\end{equation}
denote the potential in the quadratic form $\mathfrak{q}_m$ in~\eqref{eq:quad}.
We start by showing that if $1<R_{o}<\sqrt{2}$ and $m\in\mathbb{Z}\setminus\{1\}$ then
\begin{equation}
\label{eq:misone}
\eigone{\mathcal{H}_m(1)}>\eigone{\mathcal{H}_1(1)}.
\end{equation}
The function $f(r)=p_{m,1}(r)-p_{1,1}(r)=1-m+(m^2-1)/r^2$ is positive for
$1<r<\sqrt{2}$. Indeed, if $m=0$ then $f(r)=1-1/r^2$, which is positive for
all $r>1$. If $m\leq -1$ then $m^2-1\geq 0$, so $f$ is non-increasing and
$f(r)\geq \lim_{s\to+\infty}f(s)=1-m\geq 2>0$. If $m\geq 2$ then $f$ is
decreasing, so for $1<r<\sqrt{2}$ we have $f(r)>f(\sqrt{2})=(m-1)^2/2>0$.
The inequality~\eqref{eq:misone} follows by a comparison of quadratic forms.
Next, we show that if $1<R_{o}<\sqrt{2m/B}$ then
\begin{equation}
\label{eq:misdec}
\frac{d}{d B}\eigone{\mathcal{H}_m(B)}<0.
\end{equation}
By perturbation theory it holds that
\begin{equation}
\label{eq:pert}
\frac{d}{d B}\eigone{\mathcal{H}_m(B)}
= \int_{1}^{R_{o}}\Bigl(\frac{Br^2}{2}-m\Bigr)u(r)^2 r\, dr,
\end{equation}
where $u$ denotes the eigenfunction corresponding to $\eigone{\mathcal{H}_m(B)}$.
Moreover the factor $\bigl(\frac{Br^2}{2}-m\bigr)$ is negative for all $1<r<R_{o}$
if $R_{o}<\sqrt{2m/B}$. Inserting this into~\eqref{eq:pert} gives~\eqref{eq:misdec}.
It is now easy to finish the proof of Theorem~\ref{thm:mainlin}. Let
$1<R_{o}<\sqrt{2}$. Inequality~\eqref{eq:misone} and analytic perturbation theory
imply that
\[
\eigone{\mathcal{H}(B)}=\eigone{\mathcal{H}_1(B)}
\]
for $B$ in a neighborhood of $1$. Since, by~\eqref{eq:misdec}, it holds
that the derivative of $\eigone{\mathcal{H}_1(B)}$ is negative at $B=1$ the same is
true for the derivative of $\eigone{\mathcal{H}(B)}$. By continuity of the derivative
this holds in a neighborhood of $B=1$. In particular we conclude that the
function $B\mapsto\eigone{\mathcal{H}(B)}$ is strictly decreasing for these values
of $B$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main2}]
Since
\[
\eigone{\mathcal{H}(B)}=\inf_m \eigone{\mathcal{H}_m(B)},
\]
Theorem~\ref{thm:main2} is a direct consequence of the fact that,
for $m\in\mathbb{Z}$ and $B\geq 0$,
\begin{equation}
\label{eq:limRoRi}
\lim_{R_{o}\searrow R_{i}} \eigone{\mathcal{H}_m(B)}
= \Bigl(\frac{m}{R_{i}}-\frac{BR_{i}}{2}\Bigr)^2.
\end{equation}
To get an upper bound we use a trial state. In fact, we use the simplest possible
one. Let $u=\sqrt{2/(R_{o}^2-R_{i}^2)}$. Then $\|u\|_{L^2((R_{i},R_{o}), rdr)}=1$. A
simple calculation shows that
\[
\begin{aligned}
\lim_{R_{o}\searrow R_{i}}\mathfrak{q}_m[u]
&= \lim_{R_{o}\searrow R_{i}}
\biggl(
\frac{2m^2}{R_{o}+R_{i}}\frac{\log R_{o}-\log R_{i}}{R_{o}-R_{i}}
-Bm
+\frac{B^2}{8}\bigl(R_{i}^2+R_{o}^2\bigr)
\biggr)\\
& = \Bigl(\frac{m}{R_{i}}-\frac{BR_{i}}{2}\Bigr)^2.
\end{aligned}
\]
Hence $\lim_{R_{o}\searrow R_{i}} \eigone{\mathcal{H}_m(B)}
\leq \bigl(\frac{m}{R_{i}}-\frac{BR_{i}}{2}\bigr)^2$.
The lower bound is obtained by using the potential $p_{m,B}(r)$.
Let $u$ be a normalized eigenfunction corresponding to $\eigone{\mathcal{H}_m(B)}$.
Then
\[
\eigone{\mathcal{H}_m(B)} = \mathfrak{q}_m[u] \geq
\int_{R_{i}}^{R_{o}} \Bigl(\frac{Br}{2}-\frac{m}{r}\Bigr)^2|u|^2 r\,dr
\geq \min_{R_{i}\leq r\leq R_{o}}\Bigl(\frac{Br}{2}-\frac{m}{r}\Bigr)^2.
\]
Since $\min_{R_{i}\leq r\leq R_{o}}\bigl(\frac{Br}{2}-\frac{m}{r}\bigr)^2
\to \bigl(\frac{m}{R_{i}}-\frac{BR_{i}}{2}\bigr)^2$ as
$R_{o}\searrow R_{i}$, we conclude that
\[
\lim_{R_{o}\searrow R_{i}} \eigone{\mathcal{H}_m(B)}
\geq \Bigl(\frac{m}{R_{i}}-\frac{BR_{i}}{2}\Bigr)^2.
\]
This completes the proof of~\eqref{eq:limRoRi}, and thus
finishes the proof of Theorem~\ref{thm:main2}.
\end{proof}
\subsection{Application to the Ginzburg--Landau functional}
In this section we prove Theorem~\ref{thm:main3}. We remind the reader
that $D$ below denotes the disc with radius $R_{o}$, centered at the
origin. We need the following lemma, and refer to~\cite{fohebook} for
its proof.
\begin{lemma}\label{lem:curl}
Let $R_{i}$ be fixed and let $R_{i}\leq R_{o} \leq 2$. There exists a constant
$\widehat{C}>0$ (independent of $R_{o}$) such that for all
$\mathbf{a}\in H^1_{\Div}(D)$ we have
\[
\|\mathbf{a}\|_{L^2(D)}\leq
\widehat{C}\|\curl\mathbf{a}\|_{L^2(D)}.
\]
Combining this with the Sobolev embedding we get the existence of a constant
$\widehat{C}_0$ (independent of
$R_{o} \in [R_{i},2]$) such that for all $\mathbf{a}\in H^1_{\Div}(D)$
\begin{equation}
\label{eq:Sobolev}
\|\mathbf{a}\|_{L^4(D)}\leq
\widehat{C}_0\|\curl\mathbf{a}\|_{L^2(D)}.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:main3}]
Given $0<\epsilon<1$, the Cauchy inequality implies that
\[
|(i\nabla+\kappa \sigma \mathbf{A})\psi|^2
\geq (1-\epsilon)|(i\nabla+\kappa \sigma \mathbf{F})\psi|^2
- \epsilon^{-1} (\kappa \sigma )^2|\mathbf{A}-\mathbf{F}|^2|\psi|^2,
\]
and so
\begin{align}
\mathcal{G}_{\kappa,\sigma }(\psi,\mathbf{A}) &\geq
\int_{\Omega}(1-\epsilon)|(i\nabla+\kappa \sigma \mathbf{F})\psi|^2
-\kappa^2|\psi|^2+\frac{\kappa^2}{2}|\psi|^4\, dx\nonumber \\
&\quad- \epsilon^{-1} (\kappa \sigma)^2
\int_{\Omega}|\mathbf{A}-\mathbf{F}|^2|\psi|^2\,dx
+(\kappa \sigma)^2\int_{D}|\curl\mathbf{A}-1|^2\, dx \nonumber \\
&\geq \bigl((1-\epsilon) \eigone{\mathcal{H}(\kappa \sigma)}
- \kappa^2\bigr) \|\psi \|_{L^2(\Omega)}^2\nonumber \\
&\quad
- \epsilon^{-1} (\kappa \sigma)^2 \| \mathbf{A}
- \mathbf{F} \|_{L^4(D)}^2 \| \psi \|_{L^4(\Omega)}^2
+(\kappa \sigma)^2\int_{D}|\curl\mathbf{A}-1|^2\, dx \nonumber \\
&\geq
\bigl((1-\epsilon) \eigone{\mathcal{H}(\kappa \sigma)}
- \kappa^2\bigr) \|\psi \|_{L^2(\Omega)}^2\nonumber \\
&\quad
+(\kappa \sigma)^2
\Bigl( 1 - \widehat{C}_0^2 \epsilon^{-1} \sqrt{\pi} (R_{o}^2-R_{i}^2)^{1/2}\Bigr)
\int_{D}|\curl\mathbf{A}-1|^2\, dx.
\end{align}
Here we used~\eqref{eq:Sobolev} and $\|\psi\|_\infty\leq 1$ to get the last
inequality.
If we choose $\epsilon = (R_{o}-R_{i})^{1/4}$, then we see that if
$\eigone{\mathcal{A}(\kappa \sigma)} > \kappa^2$, then for all $R_{o}$ sufficiently close
to $R_{i}$ and all $(\psi, \mathbf{A})$,
\begin{align}\label{eq:trivial}
\mathcal{G}_{\kappa,\sigma }(\psi,\mathbf{A}) &\geq
0.
\end{align}
On the other hand, if $\eigone{\mathcal{H}(\sigma \kappa)}<\kappa^2$, then
we have (with $\mathbf{F}=1/2(-x_2,x_1)$ and $u$ the normalized eigenfunction
corresponding to $\eigone{\mathcal{H}(\sigma\kappa)}$)
\begin{align}\label{eq:Nontrivial}
\mathcal{G}_{\kappa,\sigma}(cu,\mathbf{F})
= c^2(\eigone{\mathcal{H}(\sigma \kappa)}-\kappa^2)
+c^4\frac{\kappa^2}{2}\int_\Omega |u|^4\,dx < 0
\end{align}
for sufficiently small values of $c$.
Therefore, by the explicit spectrum of $\mathcal{A}(B)$ we can choose $\kappa_0>0$
and $B_0<B_1<B_2$ such that
\[
\eigone{\mathcal{A}(B_j)} < \kappa_0^2, \qquad j=0,2,\qquad
\eigone{\mathcal{A}(B_1)} > \kappa_0^2.
\]
Define $\sigma_j := B_j/\kappa_0$.
By the convergence of the spectrum given in Theorem~\ref{thm:main2}
and~\eqref{eq:Nontrivial} we find the existence of $\widetilde R>R_{i}$ such that
${\mathcal G}_{\kappa_0, \sigma_j}$ has a non-trivial minimizer for all
$R_{i}<R_{o} \leq \widetilde R$ and $j\in \{0,2\}$.
On the other hand, it follows from \eqref{eq:trivial} that the minimizer of
${\mathcal G}_{\kappa_0, \sigma_1}$ is trivial for all $R_{o}>R_{i}$ sufficiently
close to $R_{i}$.
We conclude the existence of $R_{o}>R_{i}$ such that there exist non-trivial minimizers
when $\sigma=\sigma_0$ and $\sigma=\sigma_2$ but not when $\sigma=\sigma_1$.
Since $\sigma_0<\sigma_1<\sigma_2$ it is clear that $\mathcal{N}(\kappa_0)$ is
not an interval.
\end{proof}
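A concrete instance of the triple $B_0<B_1<B_2$ used in the proof can be read off from the explicit spectrum of $\mathcal{A}(B)$. In the sketch below, $R_{i}=1$ and $\kappa_0^2=0.1$ are arbitrary illustrative choices, not values singled out by the argument.

```python
# Circle-model eigenvalue for Ri = 1: min over m of (m - B/2)^2.
def eig_circle(B):
    m = round(B / 2.0)                  # optimal angular momentum
    return (m - B / 2.0) ** 2

# The eigenvalue dips below kappa0^2 at B0 and B2 but exceeds it at the
# intermediate value B1, so the set of sigma with non-trivial minimizers
# cannot be an interval.
kappa0_sq = 0.1
B0, B1, B2 = 2.0, 3.0, 4.0
ok = (eig_circle(B0) < kappa0_sq and eig_circle(B2) < kappa0_sq
      and eig_circle(B1) > kappa0_sq)
```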
\section{The case of the complement of the disc}
\label{sec:section4}
\subsection{Introduction}
In this section we consider the case $\Omega=\{x\in\mathbb{R}^2~:~|x|>1\}$ and assume that the magnetic field satisfies Assumption~\ref{ass:Magnetic} with $\delta>0$.
Our aim is to prove Theorem~\ref{thm:extasymptot}.
\subsection{Localization estimate}
Before continuing we give an Agmon estimate for the lowest eigenfunction.
\begin{prop}
\label{prop:Agmon_Outside}
Assume that $\beta$ satisfies Assumption~\ref{ass:Magnetic} with $\delta>0$.
Let $t \in (0,1)$. Then there exist positive
constants $C$, $a$ and $B_0$ such that if $B>B_0$, and if $\psi$ is an
eigenfunction of $\mathcal{H}(B)$ corresponding to an eigenvalue
$\lambda \leq t \delta B$, then
\begin{equation}
\label{eq:agmon_surface}
\int_{\{|x|>1\}}\exp\bigl(a B^{1/2}\bigl||x|-1\bigr|\bigr)
\bigl(|\psi|^2 + B^{-1} |(-i\nabla + B\mathbf{F}) \psi|^2 \bigr)\,dx
\leq C \int_{\{|x|>1\}}|\psi|^2\,dx.
\end{equation}
\end{prop}
Theorem~8.2.4 in~\cite{fohebook} gives the same estimate with the
restriction that the domain should be bounded.
However, since we give a similar Agmon estimate, with proof, in
Section~\ref{sec:planeg0}, we omit the proof here.
\subsection{A detailed expansion}
We recall that the quadratic form after decomposition is given by (with $a(r)$ from \eqref{eq:Def_a})
\[
\mathfrak{q}_m[u]=\int_{1}^{+\infty}\Bigl(|u'(r)|^2+
\Bigl(\frac{m}{r}-B a(r)\Bigr)^2|u(r)|^2\Bigr)r\,dr.
\]
Notice that at $r=1$ the potential takes the value
\[
\Bigl(\frac{m}{r}-B a(r)\Bigr)^2\Big|_{r=1}
= (m-\Phi B)^2.
\]
This suggests that we will find the lowest energy for
$m\approx \Phi B$. That this is the case is the content of the following Lemma.
\begin{lemma}\label{lem:RestrictionOnM}
Let $t \in (0,1)$.
Suppose $\psi = u_m e^{-im \theta}$ is an eigenfunction of $\mathcal{H}(B)$ with
eigenvalue $\lambda \leq t \delta B$. Then
\[
m = \Phi B + {\mathcal O}(B^{1/2}).
\]
\end{lemma}
\begin{proof}
We neglect the kinetic energy in the expression for $\mathfrak{q}_m$.
Recall the calculation~\eqref{eq:Cancellation}.
For $1<r<2$, we get
\begin{align}
\label{eq:intbetabound}
\Bigl| \int_0^{r-1} (1+s) \tilde\beta(1+s)\,ds \Bigr| \leq C (r-1),
\end{align}
so, estimating the quadratic form with the potential,
combining~\eqref{eq:intbetabound} and~\eqref{eq:rar-split}, and using
Proposition~\ref{prop:Agmon_Outside}, we get
\begin{align}\label{eq:quadratic}
\mathfrak{q}_m[u_m] & \geq
\int_1^2 \frac{1}{r^2}\bigl(m-B r a(r)\bigr)^2 |u_m(r)|^2 r\, dr\\
&\geq \int_1^2 \frac{1}{r^2}\Bigl[\frac{1}{2}(m-\Phi B)^2-(CB)^2(r-1)^2\Bigr]|u_m(r)|^2 r\, dr\\
&\geq \frac{1}{8}(m-\Phi B)^2\bigl(1+\mathcal{O}(B^{-\infty})\bigr)
-\widetilde{C}B,
\end{align}
from which the lemma follows.
\end{proof}
\begin{lemma}
\label{lem:uniqueeigenvalue}
Let $t \in (0,1)$. There exists $B_0>0$ such that if $m \in {\mathbb Z}$ and
$B\geq B_0$, then $\mathcal{H}_m(B)$ admits at most one eigenvalue below $t \delta B$.
\end{lemma}
\begin{proof}
Fix $\tilde{t}$ with $t<\tilde{t}<1$. By the lower bound
\eqref{eq:quadratic}, we see that there exist $B_0, C_0 >0$
such that if $|m - \Phi B| \geq C_0 B^{1/2}$, then $\mathfrak{q}_m[u]
\geq \tilde{t}\delta B\|u\|^2$ for all $u$.
So we will
restrict attention to $m$'s such that $m = \Phi B + \Delta m$,
with $|\Delta m| \leq C_0 B^{1/2}$. Suppose, to get a contradiction,
that $u_1, u_2$ are eigenfunctions of $\mathfrak{q}_m$ corresponding to eigenvalues
below $t \delta B$.
We write
\begin{align*}
\Bigl(\frac{m}{r}-B a(r)\Bigr)^2 = \frac{1}{r^2}
\bigl( m - B r a(r) \bigr)^2,
\end{align*}
with
\begin{align*}
r a(r) = \Phi + \delta (r-1)
+ {\mathcal O}\bigl((r-1)^2\bigr),
\end{align*}
as $r \rightarrow 1$. So
\begin{align*}
|m - B r a(r)| \geq | m - \Phi B - B \delta (r-1) |
+ {\mathcal O}\bigl(B (r-1)^2\bigr).
\end{align*}
Using the Agmon estimates, this yields the following bound on normalized
functions $v$ in $\text{span}\{u_1, u_2\}$.
\begin{equation}
\label{eq:qformest}
\begin{aligned}
\mathfrak{q}_m[v] &\geq\int_1^{\infty} \Bigl( |v'(r)|^2 +
\frac{1}{r^2}\bigl(\Delta m - B \delta (r-1) \bigr)^2 |v(r)|^2\Bigr) r\,dr
+ {\mathcal O}(B^{1/2})\\
&=\widetilde{\mathfrak{q}}_m[v] + {\mathcal O}(B^{1/2}),
\end{aligned}
\end{equation}
with
\begin{equation}
\label{eq:tildeq}
\widetilde{\mathfrak{q}}_m[v] = \int_1^{\infty}
|v'(r)|^2 + \bigl(\Delta m - B \delta (r-1) \bigr)^2 |v(r)|^2\,dr.
\end{equation}
By translation and scaling $\widetilde{\mathfrak{q}}_m$ is unitarily equivalent to
(the quadratic form of) a de Gennes operator (see Appendix~\ref{sec:dG}) and
therefore has spectrum given
by
\[
B \delta
\bigl\{ \eig{j}{\Ham_{\text{dG}}}(\Delta m/(\delta B)^{1/2})\bigr\}_{j=1}^{+\infty}.
\]
Only the first of these $\eig{j}{\Ham_{\text{dG}}}$---counted with multiplicity---is below
$1$ (for some values of $\Delta m/(\delta B)^{1/2}$), so we reach a
contradiction if we have a subspace of dimension $2$ on which the quadratic
form is small.
\end{proof}
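The de Gennes quantities $\Theta_0$ and $\xi_0$ entering the argument can be approximated by discretizing $-d^2/d\rho^2+(\rho-\xi)^2$ on a truncated half-line with a Neumann condition at $\rho=0$ and minimizing the lowest eigenvalue over $\xi$. The truncation length and grid below are ad hoc, and the classical identity $\xi_0^2=\Theta_0$ (cf.\ Appendix~\ref{sec:dG}) is used only as a consistency check; this is an illustration, not part of the proofs.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def de_gennes_eig(xi, L=10.0, N=2000):
    """Lowest eigenvalue of -d^2/drho^2 + (rho - xi)^2 on (0, L),
    Neumann at rho = 0 (L truncates the half-line; the ground state
    decays like a Gaussian, so the truncation error is negligible)."""
    rho = np.linspace(0.0, L, N + 1)
    h = rho[1] - rho[0]
    diag = 2.0 / h**2 + (rho - xi) ** 2
    diag[0] -= 1.0 / h**2    # Neumann condition at rho = 0
    diag[-1] -= 1.0 / h**2   # (and at the artificial endpoint L)
    off = -np.ones(N) / h**2
    w, _ = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))
    return w[0]

# Minimize the lowest eigenvalue over a grid of xi values.
xis = np.linspace(0.6, 0.95, 141)
vals = np.array([de_gennes_eig(x) for x in xis])
Theta0 = vals.min()
xi0 = xis[vals.argmin()]
```

The minimum $\Theta_0\approx 0.59$ is quadratic in $\xi$ near $\xi_0$, which is the property used in the proof of Lemma~\ref{lem:FurtherRestrictionOnM}.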
\begin{lemma}\label{lem:FurtherRestrictionOnM}
Let $M>0$.
Suppose $\mathcal{H}_m(B)$ admits an eigenvalue below $\Theta_0 \delta B + M B^{1/2}$.
Then there exists a constant $C>0$ such that
\begin{equation}
\label{eq:mcond}
\bigl|m - \bigl(\Phi B + \xi_0 (\delta B)^{1/2}\bigr)\bigr|
\leq CB^{1/4}.
\end{equation}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:RestrictionOnM}, $|m - \Phi B| = {\mathcal O}(B^{1/2})$.
Assuming that $u$ is the eigenfunction corresponding to the unique
(by Lemma~\ref{lem:uniqueeigenvalue}) eigenvalue
$\lambda$ below $\Theta_0 \delta B + M B^{1/2}$ we can use the estimate
in~\eqref{eq:qformest} to find that
\[
\mathfrak{q}_m[u]\geq \widetilde{\mathfrak{q}}_m[u] + \mathcal{O}(B^{1/2}),
\]
with $\widetilde{\mathfrak{q}}_m$ as in~\eqref{eq:tildeq}. Implementing the change of
variable $r=1+(\delta B)^{-1/2}\rho$, we get (here we write
$v(\rho)=(\delta B)^{-1/4}u(1+(\delta B)^{-1/2}\rho)$)
\[
\widetilde{\mathfrak{q}}_m[v]=\delta B\int_0^{+\infty}
|v'(\rho)|^2+
\biggl(\rho-\xi_0+\xi_0-\frac{m-\Phi B}{(\delta B)^{1/2}}\biggr)^2
|v|^2\,d\rho.
\]
We recognize this as the quadratic form for the de~Gennes operator (see
Appendix~\ref{sec:dG}). By noticing that the first eigenvalue
$\eigone{\Ham_{\text{dG}}}(\xi)$ has a quadratic minimum $\Theta_0$ at $\xi_0$ (and using
the bound on $(m-\Phi B)/(\delta B)^{1/2}$) we find
that there exists a positive constant $C_0$ such that
\[
\widetilde{\mathfrak{q}}_m[v]\geq \biggl[\Theta_0\delta B
+ C_0\delta B\biggl(\xi_0-\frac{m-\Phi B}{(\delta B)^{1/2}}\biggr)^2\,\biggr] \| v \|^2.
\]
The second term above is bounded by some constant times $B^{1/2}$ according to
the assumption. This in turn gives the existence of a positive constant $C$
such that~\eqref{eq:mcond} holds.
\end{proof}
In the remainder of this section we will always restrict our attention to $m$'s
satisfying the conclusion of Lemma~\ref{lem:FurtherRestrictionOnM}.
The strategy of the rest of the proof is as follows. We will construct an
explicit trial state for the operator $\mathfrak{h}=\frac1B\mathcal{H}_m(B)$
(here we suppress the dependence on $m$ and $B$ for the simplicity of notation).
This trial
state will be constructed as the first terms of a formal expansion. By taking
only finitely many terms (for our purposes $3$ terms suffice) and performing a
localization one gets a well-defined trial state.
In terms of the objects calculated below our explicit trial state will be as
follows. Let
\begin{align}
v(\rho) = v_0 + B^{-1/2} v_1 + B^{-1} v_2,\quad
\lambda = \lambda_0 + B^{-1/2} \lambda_1 + B^{-1} \lambda_2.
\end{align}
Furthermore, let $\chi\in C_0^{\infty}({\mathbb R})$ with $\chi(0) =1$, and
define (with a suitable $\epsilon$, say $\epsilon = (100)^{-1}$)
\begin{align}\label{eq:explicitQuasimode}
\tilde v(r) = (\delta B)^{1/4} \chi(B^{1/2-\epsilon} (r-1)) v( (\delta B)^{1/2} (r-1)).
\end{align}
Then $\| \tilde v \|_{L^2} = 1 + {\mathcal O}(B^{-1/2})$ and
\begin{align}
\| (\mathfrak{h} - \lambda) \tilde v \| = {\mathcal O}(B^{-3/2}).
\end{align}
By self-adjointness of $\mathfrak{h}$ we get that
$\dist(\lambda, \sigma(\mathfrak{h})) = {\mathcal O}(B^{-3/2})$. Since we by
Lemma~\ref{lem:uniqueeigenvalue} know that $\mathfrak{h}$ has at most
one eigenvalue near $\lambda_0=\delta\Theta_0$, we can conclude
that $\lambda$ gives
the first terms of the asymptotic expansion of that lowest eigenvalue of~$\mathfrak{h}$.
We proceed to the termwise construction of the trial state.
Since (by Proposition~\ref{prop:Agmon_Outside}) we have localization around
$r=1$, we implement unitarily the change of variables
\[
\rho=(\delta B)^{1/2}(r-1),\quad r = 1 +(\delta B)^{-1/2}\rho.
\]
Here, the $\delta$ is included for convenience. Then
\begin{equation}
\label{eq:apot}
B r a(r) = \Phi B + (\delta B)^{1/2} \rho + \frac{1}{2} \rho^2
+ \frac{k}{6 \delta^{3/2}} B^{-1/2} \rho^3 + {\mathcal O}(B^{-1}).
\end{equation}
Here the estimate on the remainder should be understood in the following sense: We will only act with our operator on the function $\tilde v$ from \eqref{eq:explicitQuasimode} which is localized near $r=1$ on the scale $B^{-1/2}$. So we may consider $\rho$ as a quantity of order $1$.
By Lemma~\ref{lem:FurtherRestrictionOnM} the constant term $m - \Phi B$
is only of order $B^{1/2}$.
For reasons of exposition we will write
\[
m = \Phi B + \mu_1B^{1/2}+\mu_2,
\]
and not insert the choice $\mu_1 = \xi_0 \delta^{1/2}$ until later. Recall
that $\mu_2 B^{-1/4}$ is bounded.
Integrating by parts, we find (with
$v(\rho)=(\delta B)^{-1/4}u(1+(\delta B)^{-1/2}\rho)$)
\begin{equation}
\label{eq:diffexp}
\begin{multlined}
\frac{1}{B}\int_1^{+\infty}\Bigl|\frac{d u}{dr} \Bigr|^2\,r\,dr\\
=\delta \int_0^{+\infty} \overline{v}
\Bigl( - \frac{d^2 v}{d\rho^2} - (\delta B)^{-1/2}
( 1 + (\delta B)^{-1/2} \rho)^{-1} \frac{dv}{d\rho} \Bigr)
(1+(\delta B)^{-1/2}\rho)\,d\rho.
\end{multlined}
\end{equation}
We expand our operator $\mathfrak{h}$ as
\[
\mathfrak{h} = \mathfrak{h}_0+B^{-1/2}\mathfrak{h}_1+B^{-1}\mathfrak{h}_2+\ldots
\]
and obtain
\begin{equation}\label{eq:expanded}
\begin{aligned}
\mathfrak{h}_0 &= \delta\Bigl(-\frac{d^2}{d\rho^2}+(\rho-\mu_1/\delta^{1/2})^2\Bigr),\\
\mathfrak{h}_1 &= -\delta^{1/2}\frac{d}{d\rho}
-2\mu_2\delta^{1/2}(\rho-\mu_1/\delta^{1/2})
-\frac{2\mu_1^2}{\delta^{1/2}}\rho+3\mu_1\rho^2-\delta^{1/2}\rho^3,\\
\mathfrak{h}_2 &=
\rho\frac{d}{d\rho}
+\mu_2^2
-\frac{4\mu_1 \mu_2}{\delta^{1/2}}\rho
+3 \mu_2 \rho ^2
+\frac{3 \mu_1^2}{\delta } \rho ^2
-\frac{k \mu_1}{3 \delta ^{3/2}} \rho^3
-\frac{4 \mu_1 }{\delta^{1/2}}\rho ^3
+\frac{k}{3 \delta }\rho ^4
+\frac{5}{4}\rho^4.
\end{aligned}
\end{equation}
We make the Ansatz
\[
v = \sum_{j=0}^{+\infty} v_j B^{-j/2},\quad
\lambda = \sum_{j=0}^{+\infty} \lambda_j B^{-j/2}.
\]
Equating order by order in the relation $(\mathfrak{h}-\lambda)v=0$ gives:
\paragraph{{\bf Order $B^0$:}} To leading order we find
\[
\mathfrak{h}_0 v_0=\lambda_0 v_0,
\]
which is the eigenvalue problem for the de Gennes operator discussed in
Appendix~\ref{sec:dG}. The optimal eigenvalue $\lambda_0=\delta\Theta_0$ is
attained for $v_0=\phi_{\xi_0}$ and $\mu_1=\delta^{1/2}\xi_0$.
\paragraph{{\bf Order $B^{-1/2}$:}} Here we get
\[
(\mathfrak{h}_0-\lambda_0)v_1 = (\lambda_1-\mathfrak{h}_1)v_0.
\]
By taking scalar product (with measure $d\rho$), we find
\[
0=\langle v_0,(\mathfrak{h}_0-\lambda_0)v_1\rangle
= \lambda_1-\langle v_0,\mathfrak{h}_1v_0\rangle.
\]
Via the formulas~\eqref{eq:momentone}--\eqref{eq:dphi} we find
\begin{equation}
\label{eq:lambdaone}
\begin{aligned}
\lambda_1 &= \langle \phi_{\xi_0},\mathfrak{h}_1\phi_{\xi_0}\rangle\\
& = \Bigl\langle \phi_{\xi_0},
\bigl(-\delta^{1/2}\frac{d}{d\rho}
-2\mu_2\delta^{1/2}(\rho-\mu_1/\delta^{1/2})
-\frac{2\mu_1^2}{\delta^{1/2}}\rho+3\mu_1\rho^2
-\delta^{1/2}\rho^3\bigr)\phi_{\xi_0}\Bigr\rangle\\
& = -\delta^{1/2}\langle \phi_{\xi_0},\phi_{\xi_0}'\rangle
-2\mu_2\delta^{1/2}\langle \phi_{\xi_0},(\rho-\xi_0)\phi_{\xi_0}\rangle
-2\xi_0^2\delta^{1/2}\langle \phi_{\xi_0},\rho\phi_{\xi_0}\rangle\\
&\quad+3\xi_0\delta^{1/2}\langle \phi_{\xi_0},\rho^2\phi_{\xi_0}\rangle
-\delta^{1/2}\langle\phi_{\xi_0},\rho^3\phi_{\xi_0}\rangle\\
& = \frac{1}{3}\phi_{\xi_0}(0)^2\delta^{1/2}.
\end{aligned}
\end{equation}
In particular $\lambda_1$ is independent of $\mu_2$. Moreover, since we can
choose $v_1\perp v_0$, we can let $v_1$ be the image of $-\mathfrak{h}_1v_0$ under
the regularized resolvent $(\mathfrak{h}_0-\lambda_0)^{-1}_{\text{reg}}$.
This regularized resolvent is defined as the inverse of the operator
$(\mathfrak{h}_0-\lambda_0)$ restricted to the space $\{v_0\}^{\perp}$.
So we have,
\begin{align}\label{eq:Solv1}
v_1 = -(\mathfrak{h}_0-\lambda_0)^{-1}_{\text{reg}}\bigl[\mathfrak{h}_1v_0\bigr].
\end{align}
\paragraph{{\bf Order $B^{-1}$:}} We get
\begin{align}\label{eq:degree2}
(\mathfrak{h}_0-\lambda_0)v_2 = (\lambda_2-\mathfrak{h}_2)v_0+(\lambda_1-\mathfrak{h}_1)v_1.
\end{align}
Taking scalar product with $v_0$ again gives
\[
\lambda_2 = \langle v_0,\mathfrak{h}_2v_0\rangle
+ \langle v_0,(\mathfrak{h}_1-\lambda_1) v_1\rangle.
\]
We will not calculate this expression in all detail. We are only interested in the
dependence on $\mu_2$. An inspection gives that it will be a polynomial of
degree two. We will calculate the coefficient in front of $\mu_2^2$ to see that
it is positive so that $\lambda_2$ has a unique minimum with respect to $\mu_2$.
The term $\langle v_0,\mathfrak{h}_2v_0\rangle$ is easily calculated since $\mathfrak{h}_2$
contains only one term involving $\mu_2^2$.
For the term $\langle v_0,(\mathfrak{h}_1-\lambda_1) v_1\rangle$ we find one $\mu_2$ in
$\mathfrak{h}_1$ and therefore also one in $v_1$. The coefficient in front of $\mu_2$ in
that term becomes
\begin{multline*}
\Bigl\langle v_0,-2\delta^{1/2}(\rho-\xi_0)
\bigl(\mathfrak{h}_0-\lambda_0\bigr)^{-1}_{\text{reg}}
\bigl[2\delta^{1/2}(\rho-\xi_0)v_0\bigr]\Bigr\rangle\\
=
-4\Bigl\langle (\rho-\xi_0)\phi_{\xi_0},
\bigl(\Ham_{\text{dG}}(\xi_0)-\Theta_0\bigr)^{-1}_{\text{reg}}
\bigl[(\rho-\xi_0)\phi_{\xi_0}\bigr]\Bigr\rangle.
\end{multline*}
So, the coefficient in front of $\mu_2^2$ in $\lambda_2$ will be
(see~\eqref{eq:seconddG})
\[
1-4\Bigl\langle (\rho-\xi_0)\phi_{\xi_0},
\bigl(\Ham_{\text{dG}}(\xi_0)-\Theta_0\bigr)^{-1}_{\text{reg}}
\bigl[(\rho-\xi_0)\phi_{\xi_0}\bigr]\Bigr\rangle
= \xi_0\phi_{\xi_0}(0)^2>0.
\]
This means that we can write
\begin{align}\label{eq:lambda_2}
\lambda_2 =
\xi_0\phi_{\xi_0}(0)^2
\Bigl(\bigl(\mu_2-C_0^{\text{ext}}\bigr)^2+C_1^{\text{ext}}\Bigr),
\end{align}
where $C_0^{\text{ext}}$ and $C_1^{\text{ext}}$ depend only on $k$, $\delta$,
$\xi_0$ and $\phi_{\xi_0}(0)$ (but not on $\Phi$).
We summarize these findings in a Lemma.
\begin{lemma}\label{lem:expansion}
Suppose
\[
m = \Phi B+ \xi_0\,(\delta B)^{1/2}+\mu_2,
\]
with $\mu_2 = {\mathcal O}(B^{1/4})$. Then
\begin{align}
\eigone{\mathcal{H}_m(B)} &= \Theta_0\delta B
+ \frac{1}{3}\phi_{\xi_0}(0)^2 (\delta B)^{1/2}
+ \xi_0\phi_{\xi_0}(0)^2
\bigl((\mu_2 - C_0^{\rm ext})^2+C_1^{\text{ext}}\bigr) \nonumber \\
&\quad+\mathcal{O}((1+\mu_2^3)B^{-1/2}).
\end{align}
\end{lemma}
\begin{proof}
We have to control the asymptotic expansion in $\mu_2$ subject to the bound
$|\mu_2| \leq C B^{1/4}$. Define
\begin{align}
\lambda^{\rm app} = \lambda_0 + \lambda_1 B^{-1/2} + \lambda_2 B^{-1},
\end{align}
with $\lambda_0, \lambda_1$ being the constants from above and $\lambda_2$
being the quadratic function of $\mu_2$ from \eqref{eq:lambda_2}.
We also define an approximate eigenfunction by
\begin{align}
v = v_0 + B^{-1/2} v_1 + B^{-1} v_2,
\end{align}
with $v_0 = \phi_{\xi_0}$, $v_1$ given by \eqref{eq:Solv1}
and $v_2$ being given by solving \eqref{eq:degree2}, i.e.
\begin{align}
v_2 = (\mathfrak{h}_0-\lambda_0)^{-1}_{\text{reg}}\bigl[ (\lambda_2-\mathfrak{h}_2)v_0
+(\lambda_1-\mathfrak{h}_1)v_1 \bigr].
\end{align}
Notice from the explicit form of the operators that $v_1$ depends linearly on
$\mu_2$ and $v_2$ depends quadratically, so $v$ is normalized to leading order.
Also, by the mapping properties of $(\mathfrak{h}_0-\lambda_0)^{-1}_{\text{reg}}$ each
$v_i$ is a smooth, rapidly decreasing function (see
Lemma~3.2.9 in~\cite{fohebook}).
We can now estimate as follows
\begin{align}
\| ( \mathfrak{h} - \lambda^{\rm app} )v \| &\leq \| ( \mathfrak{h}_0 + B^{-1/2} \mathfrak{h}_1 +
B^{-1}\mathfrak{h}_2 - \lambda^{\rm app} )v \| \nonumber \\
&\quad+ \| (\mathfrak{h} - [\mathfrak{h}_0 + B^{-1/2} \mathfrak{h}_1 + B^{-1}\mathfrak{h}_2]) v \|.
\end{align}
By the decay properties of $v$, the last term is bounded by
$C (1 + \mu_2^2) B^{-3/2}$.
Our choice of $v$ gives that the first term is equal to
\begin{align}
\| B^{-3/2}[(\mathfrak{h}_1-\lambda_1) v_2 +(\mathfrak{h}_2-\lambda_2) v_1 ]
+ B^{-2} (\mathfrak{h}_2 - \lambda_2) v_2 \|,
\end{align}
which is easily seen to be bounded by ${\mathcal O}(B^{-3/2}(1+ |\mu_2|^3))$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:extasymptot}]
Using Lemma~\ref{lem:FurtherRestrictionOnM}, Theorem~\ref{thm:extasymptot}
follows from Lemma~\ref{lem:expansion} by
the following argument. Notice that the positive quadratic term in
$(\mu_2 - C_0^{\rm ext})$ dominates the error term $\mu_2^3 B^{-1/2}$ unless
$\mu_2$ is bounded in which case the dependence on $\mu_2$ in the error term
disappears. This finishes the proof of Theorem~\ref{thm:extasymptot}.
\end{proof}
\section{The case of the disc}
\label{sec:disc}
In this section we will indicate a similar calculation of the ground state
eigenvalue in the case of the unit disc, thereby proving
Theorem~\ref{thm:intasymptot}, i.e. we work on $\Omega=\{x\in\mathbb{R}^2~:~|x|<1\}$ and for a magnetic field satisfying Assumption~\ref{ass:Magnetic}.
We mainly give the results of the calculations, referring to the exterior case
for details. We will
have an exponential localization estimate like the one of
Proposition~\ref{prop:Agmon_Outside} (with domain of integration being
$\{|x|< 1\}$, of course). Therefore, also the rough `localization' of the
relevant angular momenta---Lemma~\ref{lem:RestrictionOnM}---will hold in this
case as well. So we can proceed to make a change of variable to the region near
(on the scale $(\delta B)^{-1/2}$ as before) the boundary.
The leading order terms in the expansion of the operator become very similar
to the case of the exterior of the disc:
\begin{equation*}
\begin{aligned}
\mathfrak{h}_0 &= \delta\Bigl(-\frac{d^2}{d\rho^2}
+(\rho+\mu_1/\delta^{1/2})^2\Bigr),\\
\mathfrak{h}_1 &= \delta^{1/2}\frac{d}{d\rho}
+2\mu_2\delta^{1/2}(\rho+\mu_1/\delta^{1/2})
+\frac{2\mu_1^2}{\delta^{1/2}}\rho+3\mu_1\rho^2+\delta^{1/2}\rho^3,\\
\mathfrak{h}_2 &=
\rho\frac{d}{d\rho}
+\mu_2^2
+\frac{4\mu_1 \mu_2}{\delta^{1/2}}\rho
+3 \mu_2 \rho ^2
+\frac{3 \mu_1^2}{\delta } \rho ^2
+\frac{k \mu_1}{3 \delta ^{3/2}}\rho^3
+\frac{4 \mu_1 \rho ^3}{\delta^{1/2}}
+\frac{k}{3 \delta }\rho ^4
+\frac{5}{4}\rho^4.
\end{aligned}
\end{equation*}
The same calculations (using the same Ansatz) as in the previous section
show that (with $\mu_1=-\xi_0\delta^{1/2}$)
\[
\lambda_0 = \Theta_0,\quad
\lambda_1 = -\frac{1}{3}\phi_{\xi_0}(0)^2\delta^{1/2},\quad
\lambda_2 =
\xi_0\phi_{\xi_0}(0)^2\Bigl(\bigl(\mu_2-C_0^{\text{int}}\bigr)^2
+C_1^{\text{int}}\Bigr),
\]
for some constants $C_0^{\text{int}}$ and $C_1^{\text{int}}$, depending only
on the spectral parameters and $\delta$.
Thus, Theorem~\ref{thm:intasymptot} follows from calculations/arguments
completely analogous to the ones in Section~\ref{sec:section4} and we omit the
details.
\section{(Non)-monotonicity in the disc and its complement}
\label{sec:nonmondiscanddiscext}
Using the results of Theorems~\ref{thm:extasymptot} and~\ref{thm:intasymptot}
it is now easy to prove Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
We only consider the case of the disc, the complement of the disc being
similar (using Theorem~\ref{thm:extasymptot} instead of
Theorem~\ref{thm:intasymptot}).
Assume first that
\begin{align}
\Phi > \frac{\Theta_0}{\xi_0 \phi_{\xi_0}(0)^2} \delta.
\end{align}
Denote by $f$ the function
\[
f(B) = \Phi B-\xi_0(\delta B)^{1/2}+C_0^{\text{int}}.
\]
Notice that $B \mapsto f(B)$ is increasing for all large values of $B$.
Choose a sequence $\{B_1^{(n)}\}$ such that
$f(B_1^{(n)})=n+1/2$, i.e. is a half-integer.
Let $\varepsilon \in (0,\frac{1}{2\Phi})$.
Choose $B_2^{(n)} = B_1^{(n)}+ \varepsilon$.
Then, for all sufficiently large $n$, $n+1/2 < f(B_2^{(n)}) < n+1$.
So $\Delta_{B_1^{(n)}}^{\text{int}}=1/2$ and
\begin{align}
\lim_{n\to +\infty} \Delta_{B_2^{(n)}}^{\text{int}}
= \lim_{n\to +\infty} \bigl(n+1 - f(B_2^{(n)})\bigr)
= \frac{1}{2} - \Phi \varepsilon.
\end{align}
So we get from the eigenvalue asymptotics
that
\begin{align}
\eigone{\mathcal{H}(B_2^{(n)})} - \eigone{\mathcal{H}(B_1^{(n)})}
&= \Theta_0 \delta \bigl(B_2^{(n)}- B_1^{(n)}\bigr)
-\frac{1}{3}\phi_{\xi_0}(0)^2 \delta^{1/2}
\bigl[(B_2^{(n)})^{1/2}- (B_1^{(n)})^{1/2}\bigr] \nonumber \\
&\quad+ \xi_0\phi_{\xi_0}(0)^2[(1/2 - \Phi \varepsilon)^2-1/4] + o(1) \nonumber \\
&= \Theta_0 \delta \varepsilon - \xi_0\phi_{\xi_0}(0)^2[\Phi \varepsilon - \Phi^2 \varepsilon^2] + o(1),
\end{align}
which is negative for small $\varepsilon$ (and for all sufficiently
large $n$) since $\Phi > \frac{ \Theta_0}{\xi_0 \phi_{\xi_0}(0)^2}\delta$
by assumption.
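Expanding the last expression in powers of $\varepsilon$ makes the sign explicit:
\[
\Theta_0 \delta \varepsilon - \xi_0\phi_{\xi_0}(0)^2[\Phi \varepsilon - \Phi^2 \varepsilon^2]
= -\bigl(\xi_0\phi_{\xi_0}(0)^2\Phi-\Theta_0\delta\bigr)\varepsilon
+\xi_0\phi_{\xi_0}(0)^2\Phi^2\varepsilon^2,
\]
and the coefficient of $\varepsilon$ is strictly negative exactly under the assumed lower bound on $\Phi$, so it dominates the quadratic term for small $\varepsilon$.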
Suppose now that
\begin{align}\label{eq:LowFluxGivesMonotone}
\Phi < \frac{\Theta_0}{\xi_0 \phi_{\xi_0}(0)^2} \delta.
\end{align}
We restrict attention to the interval near infinity on which $f(B)$ is increasing. Here we can calculate the right-hand derivative
\begin{align}
\frac{d}{dB}_{+} ( \Delta_{B}^{\text{int}})^2 = \begin{cases} 2 \Delta_B^{\text{int}} f'(B), & \text{if }f(B) \in {\mathbb Z} + [0,1/2),\\
- 2\Delta_B^{\text{int}} f'(B),& \text{if }f(B) \in {\mathbb Z} + [1/2,1).
\end{cases}
\end{align}
So we see that for any $\eta>0$ there exists $B_0>0$ such that for all $\varepsilon>0$ and all $B > B_0$,
\begin{align}\label{eq:RHD_Delta}
( \Delta_{B+\varepsilon}^{\text{int}})^2- ( \Delta_{B}^{\text{int}})^2
\geq
-2\int_B^{B+\varepsilon}\Delta_{b}^{\text{int}} f'(b)\, db
\geq -(\Phi+\eta) \varepsilon.
\end{align}
We aim to prove monotonicity of $\eigone{\mathcal{H}(B)}$, so it suffices to prove
a positive lower bound on its right hand derivative
$\frac{d}{dB}_{+}\eigone{\mathcal{H}(B)}$,
which exists by perturbation theory.
Perturbation theory yields, for any $\varepsilon>0$,
\begin{align}
\frac{d}{dB}_{+}\eigone{\mathcal{H}(B)}
&= 2 \Re \langle \psi_B, {\bf A} \cdot (-i\nabla + B {\bf A}) \psi_B \rangle\\
&\geq \frac{\eigone{\mathcal{H}(B+\varepsilon)} - \eigone{\mathcal{H}(B)}}{\varepsilon}
- \varepsilon \int_{\{|x|<1\}} {\bf A}^2 |\psi_B|^2 \,dx.
\end{align}
Here we completed the square and used the variational characterization of the eigenvalue in order to get the inequality.
Since $\int_{\{|x|<1\}} {\bf A}^2 |\psi_B|^2 \,dx \leq K$, for some constant
$K$ independent of $B$, we can estimate, using the eigenvalue asymptotics
and \eqref{eq:RHD_Delta}
\begin{align}
\liminf_{B\rightarrow +\infty} \frac{d}{dB}_{+}\eigone{\mathcal{H}(B)}
\geq \Theta_0 \delta - \xi_0\phi_{\xi_0}(0)^2 (\Phi+\eta)
- \varepsilon K.
\end{align}
Since $\varepsilon, \eta$ were arbitrary, we get that
\begin{align}
\liminf_{B\rightarrow +\infty} \frac{d}{dB}_{+}\eigone{\mathcal{H}(B)}
\geq \Theta_0 \delta - \xi_0\phi_{\xi_0}(0)^2 \Phi.
\end{align}
In particular, $\eigone{\mathcal{H}(B)}$ is monotone increasing for large values
of $B$ if~\eqref{eq:LowFluxGivesMonotone} is satisfied.
\end{proof}
\section{The case of the whole plane with $\delta>0$}
\label{sec:planeg0}
\subsection{Introduction}
In this section we will consider the case $\Omega=\mathbb{R}^2$ and a magnetic field $\beta$ satisfying Assumption~\ref{ass:Magnetic} with $\delta>0$.
We aim to prove Theorem~\ref{thm:wholeplane} for $\delta>0$. This, however,
follows directly once the asymptotic expansion in Theorem~\ref{thm:R2deltapos}
is obtained, since then it follows that (see~\cite[Section~2.3]{fohebook})
\[
\lim_{B\to+\infty}\frac{d}{dB} \eigone{\mathcal{H}(B)} = \delta.
\]
The proof of Theorem~\ref{thm:R2deltapos} follows the same idea as the proof
of Theorem~\ref{thm:extasymptot}. We use a localization of the ground
state to restrict the situation to certain values of the angular momentum. Then
we show that if we find a trial state with low enough energy, it must be related
to the ground state energy. Finally we expand our operator formally and
construct a trial state that has the correct energy.
\subsection{Agmon estimate for $\delta\geq 0$}
\label{sec:agmonge0}
We start with a localization estimate valid for $\delta \geq 0$. For
$\delta=0$ it gives the right length scale of the localization.
\begin{prop}\label{prop:firstagmon}
Suppose $\beta$ satisfies Assumption~\ref{ass:Magnetic} with $\delta \geq 0$.
Let $\psi$ be an eigenfunction of $\mathcal{H}(B)$ corresponding to an eigenvalue
$\lambda \leq \delta B+\omega B^{1/2}$ for some $\omega>0$. Then there exist
positive constants $C$ and $B_0$ such that
\begin{equation}
\label{eq:agmon}
\int_{\mathbb{R}^2}\exp\bigl(2B^{1/4}\bigl|1-|x|\bigr|\bigr)|\psi|^2\,dx
\leq C \int_{\mathbb{R}^2}|\psi|^2\,dx
\end{equation}
and
\begin{align}\label{eq:AgmonGradient}
\int_{\mathbb{R}^2}\exp\bigl(2B^{1/4}\bigl|1-|x|\bigr|\bigr)
|(-i\nabla + B {\mathbf F})\psi|^2\,dx
\leq C (\delta B + B^{1/2})\int_{\mathbb{R}^2}|\psi|^2\,dx
\end{align}
if $B>B_0$.
\end{prop}
By the localization estimates of Proposition~\ref{prop:firstagmon}, the
quadratic forms $\mathfrak{q}_m$ are well approximated by harmonic oscillators, whose
ground state eigenvalues are simple. This implies simplicity of the low-lying
eigenvalues of $\mathcal{H}_m(B)$.
\begin{lemma}
\label{lem:uniqueeigenvalue2}
Let $\delta >0$.
Let $\omega > 0$. There exists $B_0>0$ such that if $m \in {\mathbb Z}$ and
$B\geq B_0$, then $\mathcal{H}_m(B)$
admits at most one eigenvalue
below $\delta B + \omega B^{1/2}$.
\end{lemma}
The proof of Lemma~\ref{lem:uniqueeigenvalue2} is similar to that of
Lemma~\ref{lem:uniqueeigenvalue} and will be omitted.
\begin{proof}[Proof of Prop.~\ref{prop:firstagmon}]
Let $\chi(s)$ be a smooth cut-off function of the real variable $s$ satisfying
\begin{equation}
\label{eq:cutofffunction}
\chi(s)=
\begin{cases}
1, & |s|\leq 1/2,\\
0, & |s|\geq 1,
\end{cases}
\end{equation}
and such that $|\chi'(s)|\leq 3$ for all $s$, and
$\bigl(1-\chi^2\bigr)^{1/2} \in C^1({\mathbb R})$. Next, let $M$ and $\alpha$ be
positive real numbers (to be determined below) and define in $\mathbb{R}^2$ the
functions $\chi_1$ and $\chi_2$ via
$\chi_1(x)=\chi\bigl(MB^{\alpha}(1-|x|)\bigr)$ and
$\chi_1(x)^2+\chi_2(x)^2=1$. Then there exists a constant $C_1$ such that
\begin{equation}
\label{eq:nablachi}
\|\nabla \chi_j\|_{\infty} \leq C_1 M B^{\alpha},\quad j\in\{1,2\}.
\end{equation}
Next, for $\ell>0$, let $\Phi_\ell(x)=B^{\sigma}\bigl|1-|x|\bigr|\chi(|x|/\ell)$.
Then, pointwise in $\mathbb{R}^2$, it holds that $\Phi_\ell(x)\to B^{\sigma}\bigl|1-|x|\bigr|$
as $\ell\to+\infty$. Moreover, $\Phi_\ell$ is differentiable almost everywhere and
if $\ell\geq 2$ its gradient
satisfies
\begin{equation}
\label{eq:nablaPhi}
\|\nabla\Phi_\ell\|_{\infty} \leq 4B^{\sigma}.
\end{equation}
Moreover, $\Phi_\ell$ is bounded for all $\ell>0$, so the function
$\Psi=\psi e^{\Phi_\ell}$ belongs to the form-domain of $\mathcal{H}(B)$.
With the IMS formula, we find that
\begin{equation}
\label{eq:estone}
\begin{aligned}
\mathfrak{q}[\chi_1\Psi]+\mathfrak{q}[\chi_2\Psi]&
\leq \bigl(2C_1M^2B^{2\alpha}+\lambda+16B^{2\sigma}\bigr)\|\Psi\|^2\\
&\leq \bigl(2C_1M^2B^{2\alpha}
+\delta B+\omega B^{1/2}+16B^{2\sigma}\bigr)\|\Psi\|^2.
\end{aligned}
\end{equation}
Using that the smallest Dirichlet eigenvalue is greater than the smallest value
of the magnetic field (again, see~\cite{avhesi}), we find that
\begin{equation}
\mathfrak{q}[\chi_1\Psi]\geq \delta B\|\chi_1\Psi\|^2
\end{equation}
and
\begin{equation}
\label{eq:qtwoest}
\mathfrak{q}[\chi_2\Psi]\geq
\Bigl(\delta B+\frac{kB^{1-2\alpha}}{4M^2}\Bigr)\|\chi_2\Psi\|^2.
\end{equation}
Inserting this into~\eqref{eq:estone} we find that
\begin{equation*}
\delta B\|\Psi\|^2+\frac{kB^{1-2\alpha}}{4M^2}\|\chi_2\Psi\|^2
\leq \bigl(2C_1M^2B^{2\alpha}+\delta B+\omega B^{1/2}
+16B^{2\sigma}\bigr)\|\Psi\|^2,
\end{equation*}
which can be written
\begin{multline*}
\Bigl(\frac{kB^{1-2\alpha}}{4M^2}-2C_1M^2B^{2\alpha}
-\omega B^{1/2}-16B^{2\sigma}\Bigr)\|\chi_2\Psi\|^2 \\
\leq
\bigl(2C_1M^2B^{2\alpha}+\omega B^{1/2}
+16B^{2\sigma}\bigr)\|\chi_1\Psi\|^2.
\end{multline*}
Choosing $\alpha=\sigma=\frac{1}{4}$, we find that all powers of $B$ factor out,
and hence
\begin{equation*}
\Bigl(\frac{k}{4M^2}-2C_1M^2-\omega-16\Bigr)\|\chi_2\Psi\|^2
\leq
\bigl(2C_1M^2+\omega+16\bigr)\|\chi_1\Psi\|^2.
\end{equation*}
With $M$ so small that the left parenthesis above becomes positive, we find that
there exists a constant $C_2$ such that
\begin{equation}
\label{eq:chitwochione}
\|\chi_2\Psi\|^2\leq C_2 \|\chi_1\Psi\|^2.
\end{equation}
On the support of $\chi_1$ it holds that $MB^{1/4}\bigl|1-|x|\bigr|\leq 1$,
and hence
\[
\exp(\Phi_\ell)=\exp\bigl(B^{1/4}\bigl|1-|x|\bigr|\chi(|x|/\ell)\bigr)
\leq \exp\bigl(\chi(|x|/\ell)/M\bigr)\leq \exp(1/M).
\]
Inserting this in~\eqref{eq:chitwochione} above yields
\[
\|\chi_2\Psi\|^2\leq
C_2 \exp(2/M)\|\chi_1\psi\|^2\leq C_2 \exp(2/M)\|\psi\|^2.
\]
Using monotone convergence we find that
\[
\bigl\|\chi_2\exp\bigl(B^{1/4}\bigl|1-|x|\bigr|\bigr)\psi\bigr\|^2\leq
C_2 \exp(2/M)\|\chi_1\psi\|^2\leq C_2 \exp(2/M)\|\psi\|^2.
\]
On the other hand, since $MB^{1/4}\bigl|1-|x|\bigr|\leq 1$ on the support
of $\chi_1$ it is clear that
\[
\bigl\|\chi_1\exp\bigl(B^{1/4}\bigl|1-|x|\bigr|\bigr)\psi\bigr\|^2
\leq \exp(2/M)\|\psi\|^2.
\]
Combining these two last inequalities we find~\eqref{eq:agmon} with
$C=(1+C_2)\exp(2/M)$.
To prove \eqref{eq:AgmonGradient} we essentially only have to reinsert the
$L^2$-estimate in the previous calculations. By monotone convergence and the
IMS-formula, we have
\begin{align}
\int_{\mathbb{R}^2}\exp\bigl(2B^{1/4}\bigl|1-|x|\bigr|\bigr)
&|(-i\nabla + B {\mathbf F})\psi|^2\,dx \nonumber \\
&= \lim_{\ell \rightarrow \infty}
\int_{\mathbb{R}^2}\exp\bigl(2\Phi_\ell\bigr)|(-i\nabla + B {\mathbf F})\psi|^2\,dx
\nonumber\\
&=\lim_{\ell \rightarrow \infty}\mathfrak{q}[\Psi]
- \int |\nabla \Phi_\ell|^2 |\Psi|^2 \,dx.
\end{align}
The last term is negative, and we can estimate the first term using again the
IMS-formula and \eqref{eq:estone} as
\begin{align}
\mathfrak{q}[\Psi] \leq \mathfrak{q}[\chi_1\Psi] + \mathfrak{q}[\chi_2 \Psi]
\leq (\delta B + C_2 B^{1/2}) \| \Psi \|^2
\end{align}
(with $C_2 = 2 C_1 M^2 + \omega + 16$ and using $\alpha = \sigma =1/4$).
Now~\eqref{eq:AgmonGradient} follows from~\eqref{eq:agmon}.
\end{proof}
With the help of Proposition~\ref{prop:firstagmon}, we now get a first
control of the involved angular momenta.
\begin{lemma}\label{lem:LocAngMomFirst}
Let $\delta \geq 0$.
Suppose $\psi = u_m e^{-im\theta}$ is an eigenfunction of $\mathcal{H}(B)$
with eigenvalue below $\delta B + \omega B^{1/2}$. Then
\begin{align}
m = \Phi B + {\mathcal O}(B^{3/4}).
\end{align}
\end{lemma}
The proof of Lemma~\ref{lem:LocAngMomFirst} is similar to the one of
Lemma~\ref{lem:RestrictionOnM}---taking into account the weaker localization
given by Proposition~\ref{prop:firstagmon}---and will be omitted.
\subsection{A detailed expansion for $m- \Phi B = {\mathcal O}(B^{1/2})$}
By Lemma~\ref{lem:uniqueeigenvalue2} there is at most one eigenvalue of
$\mathcal{H}_m(B)$ for sufficiently low energy. So it suffices to construct a trial
state. The trial function (and all its derivatives) will be localized on the
length scale $B^{-1/2}$ near $r=1$ (see \eqref{eq:TrialStateFinalHorrible} for
the explicit choice of trial state). Also the function has support away from
$r=0$. The calculation is slightly different in different regimes of angular
momenta $m$. In this subsection, we consider angular momenta satisfying that
\begin{align}\label{eq:GoodRestrictionOnm}
| m - \Phi B | \leq M B^{1/2},
\end{align}
(for some fixed $M>0$). The other case, where $M B^{1/2} \leq | m - \Phi B |
\leq M' B^{3/4}$ is the object of the next subsection.
We will start by doing a formal expansion of the operator
$\mathfrak{h}=\frac{1}{B}\mathcal{H}_m(B)$. We write
\begin{equation}
\label{eq:mexp}
m=\Phi B+\mu_1B^{1/2}+\mu_2.
\end{equation}
With the localization of the trial state in mind, we introduce the new variable
\[
\rho=(\delta B)^{1/2}(r-1).
\]
This leads to the expansion of our operator as in \eqref{eq:expanded} but as operators on $L^2({\mathbb R})$. Since in the present situation we do not have a boundary, we make the further translation $s:= \rho - \mu_1/\sqrt{\delta}$ to find
\[
\mathfrak{h} = \mathfrak{h}_0+B^{-1/2}\mathfrak{h}_1+B^{-1}\mathfrak{h}_2+\ldots
\]
where
\begin{equation*}
\begin{aligned}
\mathfrak{h}_0 &= \delta\Bigl(-\frac{d^2}{ds^2}+s^2\Bigr),\\
\mathfrak{h}_1 &= -\delta^{1/2}\frac{d}{ds}+
s\delta^{-1/2}\bigl(\mu_1^2-\delta \bigl(2\mu_2+s^2\bigr)\bigr),\\
\mathfrak{h}_2 &=(s+\mu_1\delta^{-1/2})\frac{d}{ds}
+
\mu_2^2
+\frac{\left(-\mu_1^2+3\delta s^2
+2 \delta^{1/2} \mu_1 s\right)}{\delta }\mu_2 \\
&+\frac{(\mu_1+\delta^{1/2}s)^2\bigl(4ks(\mu_1+\delta^{1/2}s)
+3\delta^{1/2}(\mu_1^2-6\delta^{1/2}s\mu_1+5\delta s^2)\bigr)}
{12\delta^{5/2}}.
\end{aligned}
\end{equation*}
We do the same Ansatz as above and compare order by order:
\paragraph{\bf Order $B^0$:} To leading order we find
\[
\mathfrak{h}_0 v_0=\lambda_0 v_0.
\]
Thus, we choose
\begin{equation}
\label{eq:gaussian}
v_0=\frac{1}{\pi^{1/4}}\exp(-s^2/2)
\end{equation}
as the normalized ground
state of the harmonic oscillator, and $\lambda_0=\delta$.
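Indeed, since $v_0'(s)=-s\,v_0(s)$ and hence $v_0''(s)=(s^2-1)v_0(s)$, one checks directly that
\[
\mathfrak{h}_0 v_0=\delta\bigl(-v_0''+s^2v_0\bigr)
=\delta\bigl((1-s^2)+s^2\bigr)v_0=\delta v_0 .
\]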
\paragraph{\bf Order $B^{-1/2}$:} Here we get
\[
(\mathfrak{h}_0-\lambda_0)v_1 = (\lambda_1-\mathfrak{h}_1)v_0.
\]
By taking scalar product (with measure $ds$), we find
\[
0=\langle v_0,(\mathfrak{h}_0-\lambda_0)v_1\rangle
= \lambda_1-\langle v_0,\mathfrak{h}_1v_0\rangle.
\]
Since $v_0$ is an even function it holds that $\langle v_0,\mathfrak{h}_1v_0\rangle=0$
and thus $\lambda_1=0$. Moreover, since we can choose $v_1\perp v_0$, we can
let $v_1$ be the regularized resolvent
$(\mathfrak{h}_0-\lambda_0)^{-1}_{\text{reg}}$ of $-\mathfrak{h}_1v_0$,
\begin{equation}
\label{eq:gaussianv1}
v_1 = -(\mathfrak{h}_0-\lambda_0)^{-1}_{\text{reg}}\bigl[\mathfrak{h}_1v_0\bigr].
\end{equation}
\paragraph{\bf Order $B^{-1}$:} We get
\[
(\mathfrak{h}_0-\lambda_0)v_2 = (\lambda_2-\mathfrak{h}_2)v_0+(\lambda_1-\mathfrak{h}_1)v_1.
\]
Taking scalar product with $v_0$ again and using the fact that $\lambda_1=0$,
gives
\[
\lambda_2 = \langle v_0,\mathfrak{h}_2v_0\rangle + \langle v_0,\mathfrak{h}_1 v_1\rangle.
\]
Now it holds that (remember: $v_0=\frac{1}{\pi^{1/4}}\exp(-s^2/2)$)
\begin{align*}
\langle sv_0'(s),v_0(s)\rangle&=-\frac{1}{2},&
\langle s^2 v_0(s),v_0(s)\rangle & = \frac{1}{2},\\
\langle v_0'(s),v_0(s)\rangle & = 0,&
\langle s^4 v_0(s),v_0(s)\rangle & = \frac{3}{4},\\
\langle s^j v_0(s),v_0(s)\rangle & = 0,\quad j\text{ odd},&
\langle s^6 v_0(s),v_0(s)\rangle & = \frac{15}{8},\\
\end{align*}
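These values follow from the Gaussian moment formula
\[
\int_{\mathbb R} s^{2n} e^{-s^2}\,ds=\frac{(2n-1)!!}{2^{n}}\sqrt{\pi},
\qquad n\geq 0,
\]
together with an integration by parts for the first entry,
$\langle sv_0'(s),v_0(s)\rangle=\tfrac12\int_{\mathbb R} s\,(v_0^2)'\,ds
=-\tfrac12\int_{\mathbb R} v_0^2\,ds=-\tfrac12$.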
and so
\begin{equation}
\label{eq:oBm11}
\langle v_0,\mathfrak{h}_2v_0\rangle = \frac{1}{4 \delta ^2}\mu_1^4
+\frac{2k-3\delta-4\delta\mu_2}{4\delta^2}\mu_1^2
+\mu_2^2+\frac{3}{2}\mu_2+\frac{7}{16}+\frac{k}{4\delta}.
\end{equation}
The term $\langle v_0,\mathfrak{h}_1 v_1\rangle$ is more difficult to calculate.
But noting that
\begin{equation*}
\begin{aligned}
(\mathfrak{h}_0-\lambda_0)\frac{1}{2\delta} sv_0&=sv_0,\\
(\mathfrak{h}_0-\lambda_0)\Bigl(-\frac{1}{2\delta}sv_0\Bigr) &= v_0',\quad\text{and}\\
(\mathfrak{h}_0-\lambda_0)\frac{s(s^2+3)}{6\delta}v_0 &= s^3 v_0,
\end{aligned}
\end{equation*}
we find that
\[
\begin{aligned}
v_1(s) &= -(\mathfrak{h}_0-\lambda_0)^{-1}_{\text{reg}}\bigl(\mathfrak{h}_1v_0)\\
&= (\mathfrak{h}_0-\lambda_0)^{-1}_{\text{reg}}\Bigl(\delta^{1/2}v_0'(s)
-\delta^{-1/2}\mu_1^2 sv_0(s)+2\delta^{1/2}\mu_2sv_0(s)
+\delta^{1/2}s^3v_0(s)\Bigr)\\
& = -\frac{1}{2\delta^{1/2}}sv_0-\frac{\mu_1^2}{2\delta^{3/2}}sv_0
+\frac{\mu_2}{\delta^{1/2}}sv_0+\frac{s(s^2+3)}{6\delta^{1/2}}v_0.
\end{aligned}
\]
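The last of the three identities above can be verified by expanding in Hermite functions: with $H_1(s)=2s$ and $H_3(s)=8s^3-12s$ one has $(\mathfrak{h}_0-\lambda_0)H_nv_0=2n\delta\,H_nv_0$, and since $s^3=\tfrac18 H_3(s)+\tfrac34 H_1(s)$,
\[
(\mathfrak{h}_0-\lambda_0)^{-1}_{\text{reg}}\bigl[s^3v_0\bigr]
=\frac{8s^3-12s}{48\delta}v_0+\frac{3s}{4\delta}v_0
=\frac{s(s^2+3)}{6\delta}v_0 .
\]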
A direct calculation shows that
\[
\begin{aligned}
\mathfrak{h}_1v_1 &= \Bigl[-\frac{s^2}{2 \delta ^2}\mu_1^4
+\frac{\left(4 s^4+3 (4 \mu_2-1) s^2+3\right)}{6 \delta}\mu_1^2\\
&\quad+\frac{1}{6} \left(-6 \mu_2-s^6+(1-8 \mu_2) s^4
-3 \left(4 \mu_2^2-2 \mu_2+1\right)s^2\right)\Bigr]v_0,
\end{aligned}
\]
so, using the relations above, we find that
\begin{equation}
\label{eq:oBm12}
\langle v_0,\mathfrak{h}_1 v_1\rangle = -\frac{1}{4 \delta ^2}\mu_1^4
+\frac{(4 \mu_2+3)}{4 \delta }\mu_1^2
-\frac{1}{16} (8 \mu_2 (2 \mu_2 + 3)+7).
\end{equation}
Combining~\eqref{eq:oBm11} and~\eqref{eq:oBm12} we get
\[
\lambda_2 = \langle v_0,\mathfrak{h}_2v_0\rangle + \langle v_0,\mathfrak{h}_1 v_1\rangle
= k\Bigl(\frac{1}{2\delta^2}\mu_1^2 + \frac{1}{4\delta}\Bigr).
\]
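Indeed, the quartic terms $\pm\frac{1}{4\delta^2}\mu_1^4$ in~\eqref{eq:oBm11} and~\eqref{eq:oBm12} cancel, the $\mu_1$-independent parts add up to $k/(4\delta)$, and the coefficient of $\mu_1^2$ becomes
\[
\frac{2k-3\delta-4\delta\mu_2}{4\delta^2}+\frac{4\mu_2+3}{4\delta}
=\frac{2k-3\delta-4\delta\mu_2+4\delta\mu_2+3\delta}{4\delta^2}
=\frac{k}{2\delta^2},
\]
so that all dependence on $\mu_2$ drops out.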
We see that $\lambda_2$ is minimal when $\mu_1=0$.
\begin{proof}[Proof of Theorem~\ref{thm:R2deltapos}]
Using Proposition~\ref{prop:BetterLocInm} below it suffices to consider angular momenta satisfying \eqref{eq:GoodRestrictionOnm}.
To finish the proof, based on the calculations above, it is sufficient to provide the trial state that
gives the right energy. This is done as in the case of the exterior of the disc,
see Section~\ref{sec:section4} for the details.
We write down the trial state (and $\lambda$) for the sake of completeness. From
the calculations above it follows that (here $\mu_1=0$ and $\mu_2$ is bounded)
\[
\lambda = \lambda_0+\lambda_1 B^{-1/2}+\lambda_2 B^{-1} = \delta
+\frac{k}{4\delta}B^{-1}.
\]
Let $v_0$ be the Gaussian
given in~\eqref{eq:gaussian}, $v_1$ the function given in~\eqref{eq:gaussianv1}
and
\[
v_2(s)=(\mathfrak{h}_0-\lambda_0)^{-1}_{\text{reg}}\bigl[(\lambda_2-\mathfrak{h}_2)v_0
+ (\lambda_1-\mathfrak{h}_1)v_1\bigr].
\]
Next, let
\[
v(s)=v_0 + v_1 B^{-1/2} + v_2 B^{-1}.
\]
With $\chi\in C_0^{\infty}(\mathbb{R})$ satisfying $\chi(0)=1$ and $\epsilon=1/100$ we
define our trial state $\tilde{v}(r)$ as
\begin{align}\label{eq:TrialStateFinalHorrible}
\tilde{v}(r)
=B^{1/4}\chi(B^{1/2-\epsilon}(r-1))v\bigl((\delta B)^{1/2}(r-1)\bigr).
\end{align}
\end{proof}
\subsection{Excluding large values of $m - \Phi B$}
In this subsection we will make a preliminary calculation to show that the ground state energy of $\mathcal{H}(B)$ restricted to angular momentum $m$ is too large unless $m - \Phi B = {\mathcal O}(B^{1/2})$.
\begin{lemma}\label{lem:shooting}
Let $C_0>0$. Then there exists $C_1>0$ such that
if $|m - \Phi B| \leq C_0 B^{3/4}$, then
\begin{align}
\dist\bigl(\sigma(\mathcal{H}_m(B)), \delta B + f(\eta)\sqrt{B}\bigr) \leq C_1(|\eta| B^{-1/4} + B^{-1/2}).
\end{align}
Here $\eta := \frac{B \Phi - m}{\delta B^{3/4}}$, and
\begin{align}
f(\eta) =
\frac{1}{2}k\eta^2.
\end{align}
\end{lemma}
From Lemma~\ref{lem:shooting} we can improve the localization in angular momentum.
\begin{prop}\label{prop:BetterLocInm}
Let $\omega>0$. Then there exist $M, B_0>0$ such that if $B\geq B_0$ and $\mathcal{H}_m(B)$ has an eigenvalue below $\delta B + \omega$, then
\begin{align}
| m - \Phi B | \leq M B^{1/2}.
\end{align}
\end{prop}
\begin{proof}[Proof of Proposition~\ref{prop:BetterLocInm}]
This follows by combining Lemmas~\ref{lem:uniqueeigenvalue2} and~\ref{lem:shooting}.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:shooting}]
The proof is by trial state. We will construct a function (see specific choice in \eqref{eq:TrialStateChoice} below) $\phi \in \dom(\mathcal{H}_m(B))$ such that $\| \phi \| \approx 1$ and
\begin{align}\label{eq:TrialStateCalcAgain}
\| (\mathcal{H}_m(B) - [ \delta B + f(\eta)\sqrt{B}]) \phi\| \leq C_1 (|\eta| B^{-1/4} + B^{-1/2}).
\end{align}
By the Spectral Theorem, this implies the Lemma, like in Section~\ref{sec:section4}.
The function that we construct will be localized near $r=1$ on the length scale $B^{-1/2}$ (again this is exactly as in Section~\ref{sec:section4}).
We recall that
\begin{align}
\mathcal{H}_m(B) & = - \frac{d^2}{dr^2} - \frac{1}{r}\frac{d}{dr}
+ \frac{1}{r^2}\bigl(m-B ra(r)\bigr)^2.
\end{align}
Here we will need to expand $\tilde{\beta}$ further than the second
derivative, so we use the full expansion of $r a(r)$ from~\eqref{eq:rar_expansion}.
Introducing $\eta$ as in the lemma and $\rho = (r-1+ B^{-1/4} \eta) \sqrt{B}$,
we find
\begin{align}
\mathcal{H}_m(B) &= - B \frac{d^2}{d\rho^2} - \frac{\sqrt{B}}{1- B^{-1/4} \eta + B^{-1/2} \rho} \frac{d}{d\rho} \nonumber \\
&\quad + \frac{1}{(1- B^{-1/4} \eta + B^{-1/2} \rho)^2}\times \nonumber \\
&\qquad \biggl[ m - B (1- B^{-1/4} \eta + B^{-1/2} \rho) a(1- B^{-1/4} \eta + B^{-1/2} \rho) \biggr]^2.
\end{align}
Since we will only act with $\mathcal{H}_m(B)$ on functions which in the $\rho$ variable are Schwartz functions (see specific choice in \eqref{eq:TrialStateChoice} below), we can treat $\rho$ as a quantity of order $1$ (in terms of powers of $B$), and expand
\begin{align}
\mathcal{H}_m(B) = B \left( \mathfrak{h}_{m,0} + B^{-1/4} \mathfrak{h}_{m,1} + B^{-1/2} \mathfrak{h}_{m,2} \right) + {\mathcal O}( |\eta| B^{1/2}) + {\mathcal O}(1),
\end{align}
where
\begin{align}
\mathfrak{h}_{m,0} &= - \frac{d^2}{d\rho^2} + \delta^2 \Bigl(\rho + \frac{\eta^2}{2}\Bigr)^2,\\
\mathfrak{h}_{m,1}&=
\frac{1}{3}\delta\eta^3(3\delta-k)\Bigl(\rho + \frac{\eta^2}{2}\Bigr),\quad\text{and}\\
\mathfrak{h}_{m,2} &= - \frac{d}{d\rho}
-\delta^2\Bigl(\rho + \frac{\eta^2}{2}\Bigr)^3
+k\delta\eta^2\Bigl(\rho + \frac{\eta^2}{2}\Bigr)^2\\
&\quad+\frac{1}{12}\Bigl(\delta\eta^4(c-7k+15\delta)\Bigr)\Bigl(\rho + \frac{\eta^2}{2}\Bigr)
+\frac{1}{36}(k-3\delta)^2\eta^6.
\end{align}
We choose
\begin{align}
v_0 &= \Bigl(\frac{\delta}{\pi}\Bigr)^{1/4} \exp\Bigl( - \frac{\delta}{2}\bigl(\rho + \eta^2/2\bigr)^2\Bigr),
\end{align}
which is the normalized ground state eigenfunction of $\mathfrak{h}_{m,0}$ with eigenvalue $\delta$.
Next,
\[
\mathfrak{h}_{m,1} v_0 = \frac{1}{3}\delta\eta^3(3\delta-k)(\rho+\eta^2/2)v_0.
\]
Thus, we want to solve
\[
(\mathfrak{h}_{m,0}-\delta) v_1 = -\mathfrak{h}_{m,1} v_0=-\frac{1}{3}\delta\eta^3(3\delta-k)(\rho+\eta^2/2)v_0,
\]
for $v_1$. A calculation shows that (note that
$\mathfrak{h}_{m,1} v_0$ is proportional to the first excited state of $\mathfrak{h}_{m,0}$, with eigenvalue $3\delta$, and in particular orthogonal to $v_0$)
\[
v_1 = -\frac{1}{2\delta}\mathfrak{h}_{m,1} v_0=-\frac{1}{6}\eta^3(3\delta-k)(\rho+\eta^2/2)v_0
\]
gives a solution.
Further calculations yield (the $0$ is there since
$\langle v_0,\mathfrak{h}_{m,1}v_0\rangle = 0$)
\[
\langle v_0,\mathfrak{h}_{m,2} v_0\rangle + \langle v_0,(\mathfrak{h}_{m,1}-0)v_1\rangle
=\frac{1}{2}k\eta^2.
\]
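To see the cancellations behind this identity, write $t=\rho+\eta^2/2$, so that $\langle t^2v_0,v_0\rangle=\frac{1}{2\delta}$ while all odd moments of $v_0$ vanish. Then
\[
\langle v_0,\mathfrak{h}_{m,1}v_1\rangle
=-\frac{1}{2\delta}\,\bigl\|\mathfrak{h}_{m,1}v_0\bigr\|^2
=-\frac{1}{2\delta}\cdot\frac{\delta^2\eta^6(3\delta-k)^2}{9}\cdot\frac{1}{2\delta}
=-\frac{1}{36}(k-3\delta)^2\eta^6,
\]
which cancels exactly the last term of $\mathfrak{h}_{m,2}$ in $\langle v_0,\mathfrak{h}_{m,2}v_0\rangle$; the only other surviving even contribution is
$k\delta\eta^2\langle t^2v_0,v_0\rangle=\frac12 k\eta^2$.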
We further choose
\begin{align}
v_2 &= - (\mathfrak{h}_{m,0} - \delta)_{\text{reg}}^{-1} [ (\mathfrak{h}_{m,2} - f(\eta))v_0 + \mathfrak{h}_{m,1} v_0].
\end{align}
With $\phi$ in \eqref{eq:TrialStateCalcAgain} being chosen as
\begin{align}\label{eq:TrialStateChoice}
\phi =( v_0 + B^{-1/4} v_1 + B^{-1/2} v_2) \times \chi(B^{1/2-\varepsilon}(r-1)),
\end{align}
(similarly to \eqref{eq:explicitQuasimode}), it is immediate to verify \eqref{eq:TrialStateCalcAgain}.
\end{proof}
\section{The case of the whole plane with $\delta=0$}
\label{sec:plane0}
Here we will consider the case $\Omega=\mathbb{R}^2$ and a magnetic field $\beta$
satisfying Assumption~\ref{ass:Magnetic}, with $\delta=0$. We recall that
in this section, $k>0$.
By Proposition~\ref{prop:firstagmon} and Lemma~\ref{lem:LocAngMomFirst} we have
localization of eigenfunctions corresponding to low-lying eigenvalues on the
length scale $B^{-1/4}$ and to angular momenta
$m = \Phi B + {\mathcal O}(B^{3/4})$.
By~\eqref{eq:Cancellation} and~\eqref{eq:rar-split}, for $|r-1|\leq 1$,
\begin{align}\label{eq:TaylorAgain}
\Bigl( \frac{m}{r} - B a(r)\Bigr)^2 &= \frac{1}{r^2}\biggl(m-\Phi B - \frac{Bk}{6}(r-1)^3+B\mathcal{O}(|r-1|^4)\biggr)^2 \nonumber \\
&\geq \frac{1}{2} \frac{(m-B \Phi)^2}{r^2} - C B^2 r^{-2} (r-1)^6.
\end{align}
Invoking the localization estimates we get the following
strengthening of Lemma~\ref{lem:LocAngMomFirst}.
\begin{lemma}
Let $\delta = 0$.
Suppose $\psi = u_m e^{-im\theta}$ is an eigenfunction of $\mathcal{H}(B)$
with eigenvalue below $ \omega B^{1/2}$. Then
\begin{align}
m = \Phi B + {\mathcal O}(B^{1/4}).
\end{align}
\end{lemma}
\begin{proof}
The proof follows from inserting \eqref{eq:TaylorAgain} in the formula for the
quadratic form $\mathfrak{q}_m$ and using the decay estimates in
Proposition~\ref{prop:firstagmon}.
\end{proof}
We also get a similar result to Lemma~\ref{lem:uniqueeigenvalue}.
\begin{lemma}
\label{lem:uniqueeigenvalue3}
Let $\omega < \inf_{\alpha \in {\mathbb R}}\eig{2}{\Ham_{\text{M}}(\alpha)}$, with $\Ham_{\text{M}}(\alpha)$
the operator from Appendix~\ref{sec:Mont}. There exists $B_0>0$ such that if
$m \in {\mathbb Z}$ and
$B\geq B_0$, then $\mathfrak{q}_m$ admits at most one eigenvalue below
$(k/2)^{1/2}\omega B^{1/2}$.
\end{lemma}
\begin{proof}
The proof is analogous to that of Lemma~\ref{lem:uniqueeigenvalue}.
By the localization estimates already obtained, we can see that $\mathfrak{q}_m$ is
given---up to a lower order error---by the quadratic form of the operator
$\mathfrak{h}_0$ from \eqref{eq:h_0_Section8} below which can be recognized as the `Montgomery' operator reviewed in
Appendix~\ref{sec:Mont}.
\end{proof}
So now we are again in a situation where we know that a sufficiently precise
trial state must give the asymptotics of the ground state energy.
We write
\begin{equation}
\label{eq:mexputandelta}
m=\Phi B+\mu_3B^{1/4}+\mu_4
\end{equation}
where we will keep $\mu_3$ and $\mu_4$ bounded.
We perform the change of variables
\[
\rho=B^{1/4}(r-1).
\]
Integrating by parts, we find (with $v(\rho)=B^{-1/8}u(1+B^{-1/4}\rho)$ and
assuming that $u$ is supported away from $0$) that
\begin{equation}
\label{eq:diffexputandelta}
\frac{1}{B^{1/2}}\int_{0}^{+\infty}\Bigl|\frac{d u}{dr} \Bigr|^2\,r\,dr
=\int_{- \sqrt[4]{B}}^{+\infty} \overline{v}
\Bigl( - \frac{d^2 v}{d\rho^2} - B^{-1/4}
( 1 + B^{-1/4} \rho)^{-1} \frac{dv}{d\rho} \Bigr) (1+B^{-1/4}\rho)\,d\rho.
\end{equation}
We let $\mathfrak{h}=\frac{1}{B^{1/2}}\mathcal{H}_m(B)$ and make the Ansatz
\[
\mathfrak{h} = \sum_{j=0}^{+\infty}\mathfrak{h}_jB^{-j/4},
\quad
\lambda=\sum_{j=0}^{+\infty} \lambda_j B^{-j/4},
\quad\text{and}\quad
v = \sum_{j=0}^{+\infty} v_j B^{-j/4},
\]
and get (with notation from \eqref{eq:rar_expansion} and where
$d= \tilde{\beta}^{(4)}(1)$)
\begin{align}
\label{eq:h_0_Section8}
\mathfrak{h}_0 & = -\frac{d^2}{d\rho^2}+\Bigl(\frac{k\rho^3}{6}-\mu_3\Bigr)^2,\\
\mathfrak{h}_1 & = -\frac{d}{d\rho}
- \Bigl(\frac{k\rho^3}{6}-\mu_3\Bigr)
\Bigl(\frac{(k-c)\rho^4}{12}-2\mu_3\rho+2\mu_4\Bigr) \\
\mathfrak{h}_2 & = \rho\frac{d}{d\rho}
+ \mu_4^2-4\mu_3\mu_4\rho+3\mu_3^2\rho^2
+ \frac{1}{12}(5k-c)\mu_4\rho^4\\
&\qquad+\frac{1}{60}(6c-d-30k)\mu_3\rho^5
+\frac{1}{2880}(5c^2-18ck+8dk+45k^2)\rho^8.
\end{align}
Next we compare the powers of $B$.
\paragraph{\bf Order $B^0$:}
We note that, after a scaling, $\mathfrak{h}_0$ becomes
\[
\Bigl(\frac{k}{2}\Bigr)^{1/2}\Bigl[-\frac{d^2}{d\rho^2}
+\Bigl(\frac{\rho^3}{3}-(2/k)^{1/4}\mu_3\Bigr)^2\Bigr]
=\Bigl(\frac{k}{2}\Bigr)^{1/2} \Ham_{\text{M}}\bigl((2/k)^{1/4}\mu_3\bigr),
\]
with the notation from Appendix~\ref{sec:Mont}. By the results of the appendix,
the ground state eigenvalue
$\eigone{\Ham_{\text{M}}(\alpha)}$ has a unique
non-degenerate minimum $\Xi$ at $\alpha=0$. So we take $\mu_3=0$ and
find that $\lambda_0= (k/2)^{1/2}\eigone{\Ham_{\text{M}}(0)}=(k/2)^{1/2}\Xi$.
We furthermore take $v_0$ to be the ground state eigenfunction of $\mathfrak{h}_0$
(with $\mu_3=0$).
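Concretely, the scaling is $\rho=(2/k)^{1/4}t$: then
$-\frac{d^2}{d\rho^2}=(k/2)^{1/2}\bigl(-\frac{d^2}{dt^2}\bigr)$ and
\[
\frac{k\rho^3}{6}-\mu_3
=\Bigl(\frac{k}{2}\Bigr)^{1/4}\Bigl(\frac{t^3}{3}-(2/k)^{1/4}\mu_3\Bigr),
\]
so that, in the variable $t$, $\mathfrak{h}_0=(k/2)^{1/2}\,\Ham_{\text{M}}\bigl((2/k)^{1/4}\mu_3\bigr)$.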
\paragraph{\bf Order $B^{-1/4}$:}
Here the equation becomes
\begin{align}
(\mathfrak{h}_1-\lambda_1) v_0 + (\mathfrak{h}_0 - \lambda_0)v_1 =0.
\end{align}
Taking scalar products with $v_0$, we get
\begin{align}
\lambda_1 = \langle v_0, \mathfrak{h}_1 v_0\rangle = 0,
\end{align}
where we used that $\mu_3 = 0$ and that $v_0$ is an even function.
We also determine the function $v_1$ as
\begin{align}
v_1 = - (\mathfrak{h}_0 - \lambda_0)^{-1}_{\text{reg}} (\mathfrak{h}_1 v_0).
\end{align}
\paragraph{\bf Order $B^{-1/2}$:}
At this order, we consider the equation
\begin{align}
(\mathfrak{h}_2-\lambda_2) v_0 + \mathfrak{h}_1 v_1 + (\mathfrak{h}_0 - \lambda_0) v_2=0.
\end{align}
Taking scalar products with $v_0$ determines $\lambda_2$,
\begin{align}
\lambda_2= \langle v_0, \mathfrak{h}_2 v_0 \rangle + \langle v_0, \mathfrak{h}_1 v_1\rangle.
\end{align}
As a function of $\mu_4$, we see that $\lambda_2$ is a polynomial of degree $2$.
We determine the coefficient of $\mu_4^2$ as
\[
1 - 4 \Bigl\langle \frac{k\rho^3}{6} v_0,
(\mathfrak{h}_0 - \lambda_0)^{-1}_{\text{reg}} \frac{k\rho^3}{6} v_0
\Bigr\rangle.
\]
From perturbation theory, we recognize this expression as
$\tfrac{1}{2}\frac{d^2}{d\alpha^2}\eigone{\Ham_{\text{M}}(\alpha)}|_{\alpha=0}$,
which is positive (by
Theorem~\ref{thm:Non-degenerate} and Proposition~\ref{prop:secdiffzeropos}).
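Explicitly, writing $e(\mu_3)$ for the ground state eigenvalue of $\mathfrak{h}_0(\mu_3)=-\frac{d^2}{d\rho^2}+\bigl(\frac{k\rho^3}{6}-\mu_3\bigr)^2$ and expanding the potential as $\bigl(\frac{k\rho^3}{6}\bigr)^2-\frac{k\rho^3}{3}\mu_3+\mu_3^2$, Rayleigh--Schr\"odinger perturbation theory at $\mu_3=0$ gives
\[
\frac12 e''(0)
=1-\Bigl\langle \frac{k\rho^3}{3}v_0,
(\mathfrak{h}_0-\lambda_0)^{-1}_{\text{reg}}\,\frac{k\rho^3}{3}v_0\Bigr\rangle,
\]
which, since $\frac{k\rho^3}{3}=2\cdot\frac{k\rho^3}{6}$, is exactly the coefficient above.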
Thus
\[
\lambda_2(\mu_4)= \frac{c_0}{2} (\mu_4 - C_1)^2 + C_2,
\]
with $c_0>0$ and for suitable constants $C_1, C_2$. We fix
\begin{align}
v_2 = - (\mathfrak{h}_0 - \lambda_0)^{-1}_{\text{reg}} \Bigl[ (\mathfrak{h}_2-\lambda_2) v_0 + \mathfrak{h}_1 v_1 \Bigr].
\end{align}
\begin{proof}[Proof of Theorem~\ref{thm:R2deltazero}]
To complete the proof of Theorem~\ref{thm:R2deltazero} we only need to give the
trial state that gives the right energy. This is done in the same way as it was
done for the complement of the disc in Lemma~\ref{lem:expansion}.
We omit the details, but mention that the trial state is given by (here
$\epsilon=1/100$ and $\chi\in C_0^\infty$ with $\chi(0)=1$)
\[
\tilde{v}(r) = B^{1/8} \chi(B^{\frac{1}{4}-\epsilon}(r-1)) v(B^{1/4}(r-1)),
\]
with $v= v_0 + B^{-1/4} v_1 + B^{-1/2} v_2$ from the calculations above.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:wholeplane}]
From Theorem~\ref{thm:R2deltazero} it follows exactly like in the proof of
Theorem~\ref{thm:main} that $B \mapsto \eigone{\mathcal{H}(B)}$ is not monotone
increasing on any half-interval of the form $[B_0, \infty)$. This finishes
the proof of Theorem~\ref{thm:wholeplane}.
\end{proof}
\section{Introduction}
The heavy elements Y, Ba, La, and Eu are among the most well-studied
neutron-capture elements because they show a number of measurable lines in stellar spectra. Chemical abundances in
the solar system indicate that, while Y, Ba, and La are
dominantly formed through the slow neutron-capture process,
Eu is dominantly a rapid neutron-capture element. The $s$- and $r$-process contributions to the solar system abundances
are given, for example, in studies by \citet{Seeger.etal:1965}, \citet{Arlandini.etal:1999},
\citet{Simmerer.etal:2004}, and \citet{Sneden.etal:2008}. At early times in the Galaxy, heavy elements are expected to
be due to an $r$-process contribution, since there was no time for the main $s$-process to operate,
as first suggested by \citet{Truran:1981}.
Recently, it was suggested by \citet{Cescutti.Chiappini:2014}
that $s$-process elements could have been produced
by massive spinstars at early times in the halo. \citet{Chiappini.etal:2011} and \citet{Barbuy.etal:2014} also suggested using abundance ratios
of neutron-capture elements in old bulge stars
to determine whether they are products of $r$- or $s$-process.
The sites of $r$-process element production are still not known
\citep[e.g.][]{Schatz.etal:2001b, Wanajo.Ishimaru:2006, Kratz.etal:2007, Thielemann.etal:2011}.
Nucleosynthesis in neutron star mergers for the production of $r$-elements
was favoured in recent works by \citet{Rosswog.etal:2014} and \citet{Wanajo.etal:2014}. They
confirmed that the so-called strong $r$-process
yields elements with atomic number A~$>$~130 (second
peak of the heavy elements and above), whereas
the so-called weak $r$-process could take place
in neutrino-driven winds of the same neutron star mergers
and produce elements with
50~$<$~A~$<$~130, which includes the first peak of the heavy elements.
Abundance ratios of first- and second-peak elements and $s$-dominant elements such as Ba
over $r$-dominant elements such as Eu can contribute to distinguishing different nucleosynthesis
process contributions \citep[e.g.][]{SiqueiraMello.etal:2014, SiqueiraMello.Barbuy:2014}.
In the seminal work by \citet{Kappeler.etal:1989} and \citet{Kappeler.etal:2011}
three different channels for the
production of $s$-process elements are described:
the weak $s$-process in He-burning and C-burning shells of massive stars, and the main and the strong $s$-process
taking place in asymptotic giant branch (AGB) stars.
These processes were inspected in detail by
\citet{Gallino.etal:1998} and \citet{Travaglio.etal:2004},
among others.
Heavy-element abundances have been examined more frequently in metal-poor than in metal-rich stars.
Previous heavy-element abundance derivations in metal-rich stars were carried out for thin-disk
\citep{Edvardsson.etal:1993, Reddy.etal:2003, AllendePrieto.etal:2004}, and thick-disk stars
\citep{Reddy.etal:2006, Bensby.etal:2005}.
Recently, \citet{Mishenina.etal:2013} derived heavy-element abundances of thin-, thick-disk and Hercules-stream
stars, and abundances of bulge-field stars were determined by \citet{Bensby.etal:2013} and
\citet{Johnson.etal:2012, Johnson.etal:2013}. In addition, heavy-element abundances were derived for
open-cluster stars \citep[e.g.][]{Jacobson.Friel:2013} and for two bulge barium stars by \citet{Lebzelter.etal:2013}.
We here derive Y, Ba, La, and Eu abundances in 71
metal-rich stars in the solar neighbourhood. The sample stars are high proper motion, old stars,
with low maximum heights above the Galactic plane,
and their kinematics is indicative
of stars wandering from inside the Galactic bar \citep{Raboud.etal:1998}.
\citet[][hereafter Paper~I]{Trevisan.etal:2011} presented the detailed analysis of the sample and
derived their $\alpha$-element abundances.
Based on their kinematical properties, in Paper~I we identified the present sample as partly belonging
to the thin disk (11 stars),
a majority of them
to the thick disk (42 stars), and 17 stars to an intermediate population.
The thick-disk and intermediate populations might be similar to the
thick-disk-kinematics-thin-disk-abundances (designated TKTA) stars \citep{Reddy.etal:2006}, which show
thick-disk kinematics combined with thin-disk abundances.
To probe the formation history of this sample,
in Paper~I we
derived their metallicities, $\alpha$-element abundances
(O, Mg, Si, Ca, and Ti), ages, and Galactic orbits.
We showed that models of radial mixing and dynamical effects of the bar and bar/spiral arms might explain the presence of these
old metal-rich dwarf stars in the solar neighbourhood.
In Paper~I we concluded that our sample stars might
be identified as old thin-disk stars \citep{Haywood:2008}.
To better understand the properties of these stars,
we here study the abundances of the neutron-capture elements
Y, Ba, La, and Eu.
The paper is organized as follows: in Sect.~\ref{Sec_sample}, we briefly describe the sample and the derivation of stellar parameters that were presented
in Paper~I.
The Y, Ba, La, and Eu abundance derivation is described in Sect.~\ref{Sec_abons}, and the results are presented and discussed
in Sects. \ref{Sec_results} and \ref{Sec_discussion}. Finally, the results are summarized in Sect.~\ref{Sec_summary}.
\section{Sample}
\label{Sec_sample}
The sample star selection,
their kinematics, the stellar parameter determination, and the abundance analysis of $\alpha$-elements are described in detail in Paper~I.
Below we summarize the information about the sample stars.
The sample comprises high proper motion and metal-rich dwarf stars from the NLTT catalogue
\citep{Luyten:1980}, as described in \citet{Grenon:1999} and Paper~I. The observations
were carried out in 2000-2002, within an ESO-ON-IAG agreement.
The spectra were obtained using the Fiber-Fed Extended Range Optical Spectrograph (FEROS) \citep{Kaufer.etal:2000}
at the 1.52~m telescope at ESO, La Silla.
The total wavelength coverage is 3560-9200 {\rm \AA} with a resolving power
(R=$\lambda/\Delta\lambda$) = 48~000, and with mean signal-to-noise ratios of $\sim 100$.
\subsection{Galactic orbits and membership assignment}
\label{Sec_orbits}
\citet{Grenon:1999} derived U, V, and W space velocities for the sample stars, which were used
to calculate the Galactic orbits. The GRINTON integrator was used for this purpose \citep{Carraro.etal:2002, Bedin.etal:2006}.
This code integrates
the orbits back in time for several Galactic revolutions and returns the minimum and maximum distances from the Galactic
centre ($R_{\rm min}$, $R_{\rm max}$), the maximum height above the Galactic plane
($Z_{\rm max}$), and the eccentricity $e$ of the orbit.
Based on Galactic velocities U, V, W and eccentricities $e$,
we estimated in Paper~I the probability of each sample star to belong
to either the thin or the thick disk.
The procedure adopted
relies on the assumption that the space velocities of each population (thin disk, thick disk and halo) follow a
Gaussian distribution, with given mean values and dispersions $\sigma_{\rm U}$, $\sigma_{\rm V}$, $\sigma_{\rm W}$.
We used the velocity ellipsoids by \citet{Soubiran.etal:2003} and found that 42 stars in
the sample can be assigned to the thick disk, 11 are more
likely to be thin-disk stars, and the other 17 stars
are intermediate between thin- and thick-disk components.
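The membership scheme described above can be sketched as follows. Each population is modelled as a 3D Gaussian in $(U, V, W)$, and relative probabilities follow from the weighted likelihoods. The population fractions, mean velocity lags, and dispersions below are illustrative placeholders, not the actual \citet{Soubiran.etal:2003} ellipsoid values, and the halo component is omitted for brevity.

```python
import numpy as np

# Placeholder velocity-ellipsoid parameters (fractions X, V lag, and
# dispersions in km/s); NOT the Soubiran et al. (2003) values.
POPS = {
    "thin":  dict(X=0.93, Vlag=-15.0, sU=39.0, sV=20.0, sW=20.0),
    "thick": dict(X=0.07, Vlag=-46.0, sU=63.0, sV=39.0, sW=39.0),
}

def gaussian_3d(U, V, W, p):
    # 3D Gaussian likelihood of (U, V, W) for one population.
    norm = 1.0 / ((2.0 * np.pi)**1.5 * p["sU"] * p["sV"] * p["sW"])
    arg = (U / p["sU"])**2 + ((V - p["Vlag"]) / p["sV"])**2 + (W / p["sW"])**2
    return norm * np.exp(-0.5 * arg)

def membership(U, V, W):
    """Relative probability that a star belongs to each population."""
    like = {name: p["X"] * gaussian_3d(U, V, W, p) for name, p in POPS.items()}
    total = sum(like.values())
    return {name: v / total for name, v in like.items()}
```

Adding the halo as a third component follows the same pattern; a star with disk-like velocities then receives a high thin-disk probability.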
\subsection{Stellar parameters}
The effective temperatures $T_{\rm eff}$ were calculated from the $V-K_S$ colour using the \citet{Casagrande.etal:2010} colour-temperature relations. The surface gravities $\log g$ were derived using the HIPPARCOS parallaxes, and the stellar ages and masses were obtained from the Yonsei-Yale evolutionary tracks \citep[$Y^2$;][]{Demarque.etal:2004}.
Then, fixing $T_{\rm eff}$ and $\log g$, the iron abundances were derived from \ion{Fe}{I} and \ion{Fe}{II} lines through a local thermodynamic equilibrium (LTE) analysis, employing plane-parallel MARCS model atmospheres \citep{Gustafsson.etal:2008} with scaled-solar composition. The microturbulence velocity, $v_{\rm t}$, was obtained by imposing constant iron abundance as a function of equivalent width.
For our cooler stars ($T_{\rm eff} < 5000$~K), a positive trend of [Fe/H] vs. equivalent width was found even when $v_{\rm t} \sim 0$~km~s$^{-1}$. For this reason,
we considered that $v_{\rm t}$ can be defined as a function of temperature and gravity,
and using stars with $T_{\rm eff} > 5200$~K, we defined $v_{\rm t} = f(T_{\rm eff}, \log g)$ and then extrapolated this function to
$T_{\rm eff} < 5200$~K. We adopted $0.3$~km~s$^{-1}$ from the extrapolation of our fit. The metallicity was then replaced by the new
iron abundance, and the procedure was repeated until there were no significant changes in ($T_{\rm eff}$, $\log g$,
[Fe/H]\footnote{We use the standard notation, i.e., for elements A and B, ${\rm [A / B]} = \log(N_{\rm A} / N_{\rm B})_{\rm star} - \log (N_{\rm A} / N_{\rm B})_{\odot}$
and $\epsilon({\rm A}) = \log (N_{\rm A} / N_{\rm H}) + 12$.}).
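The microturbulence calibration for the cool stars can be sketched as below. The linear form in $T_{\rm eff}$ and $\log g$ is an assumption (the text only states $v_{\rm t} = f(T_{\rm eff}, \log g)$ without giving coefficients), and the data in the example are mock values.

```python
import numpy as np

def fit_vt(teff, logg, vt):
    # Least-squares fit of v_t = a + b*Teff + c*logg to the hotter stars
    # (Teff > 5200 K in the paper); the linear form is an assumption.
    A = np.column_stack([np.ones_like(teff), teff, logg])
    coeffs, *_ = np.linalg.lstsq(A, vt, rcond=None)
    return coeffs

def eval_vt(coeffs, teff, logg):
    # Evaluate (and, for cool stars, extrapolate) the fitted relation.
    return coeffs[0] + coeffs[1] * teff + coeffs[2] * logg
```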
The stellar parameters were tested against excitation and ionization equilibria, and $T_{\rm eff}$ and $\log g$ were further adjusted when necessary. Only two stars in the sample (HD~94374 and HD~182572) required adjustments to be in satisfactory spectroscopic equilibrium.
The final adopted temperatures, gravities, and metallicities were compared with data available in the literature. For comparison purposes, the parameters ($T_{\rm eff}$, $\log g$, [Fe/H]) were retrieved from the PASTEL catalogue \citep{Soubiran.etal:2010}, which compiles stellar atmospheric parameters obtained from the analysis of high-resolution, high signal-to-noise spectra. The parameters of 38 of our sample stars are available in this catalogue. Differences between temperatures considered here and those from the PASTEL catalogue do not exceed $2$\%, except for two stars (HD~31827 and HD~35854). Gravities and metallicities also agree well, with differences within $\sim 0.2$~dex.
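For reference, the bracket notation defined in the footnote works out as follows; the stellar $\epsilon$ values in the example are invented for illustration, with solar values taken from Table~\ref{sol}.

```python
# The footnote's definitions made explicit:
# [A/B] = log10(N_A/N_B)_star - log10(N_A/N_B)_sun, eps(A) = log10(N_A/N_H) + 12.
def bracket(eps_a_star, eps_b_star, eps_a_sun, eps_b_sun):
    return (eps_a_star - eps_b_star) - (eps_a_sun - eps_b_sun)

# Example (hypothetical star): eps(Ba) = 2.33 and eps(Fe) = 7.50 give
# [Ba/Fe] = (2.33 - 7.50) - (2.13 - 7.50) = +0.20 for the adopted solar values.
```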
\begin{figure*}[!ht]
\centering
\begin{tabular}{c}
\vspace{-2.0cm}
\resizebox{0.95\hsize}{!}{\includegraphics[angle=-90]{Sun_Y_lines.ps}} \\
\vspace{-1.4cm}
\resizebox{0.95\hsize}{!}{\includegraphics[angle=-90]{HD93800_Y_lines.ps}} \\
\end{tabular}
\caption { Synthetic and observed profiles of \ion{Y}{II} lines. {\it Upper panels:} Solar spectrum in the region of \ion{Y}{II} lines. Solid lines indicate the synthetic spectra for $\epsilon({\rm Y})$ $ = 2.14,\ 2.24,\ 2.34$; the dots indicate the observed spectrum. {\it Lower panels:} Spectral synthesis
of \ion{Y}{II} lines for HD~93800. }
\label{Fig_solar_YLines}
\end{figure*}
\section{Abundance derivation}
\label{Sec_abons}
We derived abundances of yttrium, barium, lanthanum, and europium for the sample stars. We performed a spectral synthesis, and the abundances were obtained by minimising the $\chi^2$ between the observed and synthetic spectra. The synthetic spectra were obtained using the PFANT code described in \citet{Cayrel.etal:1991}, \citet{Barbuy.etal:2003}, and \citet{Coelho.etal:2005}. The MARCS model atmospheres \citep{Gustafsson.etal:2008} were employed.
The solar abundances from the literature and adopted values are reported in Table \ref{sol}.
The most recent values, computed with 3D model atmospheres by \citet{Asplund.etal:2009}, and the values recommended by \citet{Grevesse.etal:2014},
were not adopted here because we used 1D model atmospheres to derive the abundances of the sample stars.
All lines were checked against the solar spectrum obtained with the same instrumentation as the program stars,
and gf-values were adjusted when necessary.
For lines of the elements \ion{Ba}{II}, \ion{La}{II}, and \ion{Eu}{II}, the hyperfine structure (HFS) was taken into account,
as described in the following sections.
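The $\chi^2$ minimisation between observed and synthetic spectra can be sketched as below. The Gaussian `synth` model and its line parameters are a toy stand-in for the PFANT synthetic spectra, not a physical curve of growth.

```python
import numpy as np

def synth(wave, abund, center=6645.1, width=0.08):
    # Toy synthetic line whose depth scales with abundance; a stand-in
    # for a real radiative-transfer calculation (here the Eu II 6645 A
    # region is used purely as an example).
    depth = 0.1 * 10.0**(abund - 0.51)
    return 1.0 - depth * np.exp(-0.5 * ((wave - center) / width)**2)

def best_abundance(wave, obs, grid):
    # Chi^2 between the observed spectrum and synthetic spectra computed
    # on a grid of trial abundances; adopt the grid value minimising it.
    chi2 = [np.sum((obs - synth(wave, a))**2) for a in grid]
    return grid[int(np.argmin(chi2))]
```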
\begin{table}
\caption{Solar {\it r}- and {\it s}-process fractions \citep{Simmerer.etal:2004} and solar abundances
from 1: \citet{Grevesse.Sauval:1998}; 2: \citet{Asplund.etal:2009}; 3: \citet{Lodders.etal:2009};
4: \citet{Grevesse.etal:2014}. Adopted abundances are given in Col. 5. }
\label{sol}
\centering
\begin{tabular}{c c c c c c c c c}
\hline\hline
\hbox{El.} & \hbox{Z} & \multicolumn{2}{c}{Fraction} & \multicolumn{5}{c}{\hbox{$\epsilon$(X)$_{\odot}$}} \\
\cline{5-9} \\
& & r & s & (1) & (2) & (3) & (4) & (5) \\
\hline
\hbox{Fe} & 26 & --- & --- & 7.50 & 7.50 & 7.45 & 7.45 & 7.50 \\
\hbox{Y} & 39 & 0.281 & 0.719 & 2.24 & 2.21 & 2.21 & 2.21 & 2.24 \\
\hbox{Ba} & 56 & 0.147 & 0.853 & 2.13 & 2.18 & 2.18 & 2.25 & 2.13 \\
\hbox{La} & 57 & 0.246 & 0.754 & 1.17 & 1.10 & 1.14 & 1.11 & 1.22 \\
\hbox{Eu} & 63 & 0.973 & 0.027 & 0.51 & 0.52 & 0.52 & 0.52 & 0.51 \\
\hline
\end{tabular}
\end{table}
\subsection*{Yttrium}
\label{Sec_Y}
The \ion{Y}{II} lines were selected from \citet{Hannaford.etal:1982}, \citet{Spite.etal:1987}, \citet{Hill.etal:2002},
and \citet{Grevesse.etal:2014}, and they are listed in Table \ref{Tab_linelist}.
The $\log gf$ values were adjusted to fit the solar spectrum.
Figure \ref{Fig_solar_YLines} presents the solar observed and synthetic spectra in the region of \ion{Y}{II} lines. The spectrum synthesis fitting of \ion{Y}{II} lines to the observed profiles for one of the sample stars, HD~93800, is also presented in Fig.~\ref{Fig_solar_YLines}.
The yttrium line list comprises lines that systematically give reliable abundances,
as shown in Fig.~\ref{Fig_YBaLaEuLines}. Given that $\left\langle A \right\rangle_i$ is the yttrium abundance of star $i$ and $A_{\lambda i}$ is the abundance derived from an individual line $\lambda$, the quantity
$A_{\lambda i} - \left\langle A \right\rangle_i$ indicates the departure of each individual line from the mean abundance of the star.
This approach allows us to detect lines that give abundances systematically higher or lower than the average abundance of the star and lines that seem to lead to inaccurate abundance values. The abundances $A_{\lambda i}$ presented in Fig. \ref{Fig_YBaLaEuLines} correspond to the corrected values obtained
by removing the dependence with temperature (see Sect. \ref{Sec_trends}).
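The per-line statistic $A_{\lambda i} - \left\langle A \right\rangle_i$ can be computed as in the sketch below; the array shapes are assumptions, with missing lines entering as NaN.

```python
import numpy as np

def line_deviations(A):
    """A: abundances of shape (n_stars, n_lines); NaN marks missing lines.
    Returns the mean deviation of each line from the stellar means."""
    star_mean = np.nanmean(A, axis=1, keepdims=True)   # <A>_i per star
    resid = A - star_mean                              # A_{lambda,i} - <A>_i
    return np.nanmean(resid, axis=0)                   # mean deviation per line
```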
\begin{figure}[!ht]
\centering
\begin{tabular}{c}
\resizebox{0.95\hsize}{!}{\includegraphics[angle=0]{linelist_YBaLaEu.ps}} \\
\end{tabular}
\caption {Selected Y, Ba, La, and Eu lines. The abundance of each star, $\left\langle A \right\rangle_i$, is the average
of abundances derived from individual lines, $A_{\lambda i}$. Therefore, $A_{\lambda i} - \left\langle A \right\rangle_i$
indicates the differences between the mean abundance value and those derived from individual lines. The mean deviation for each line
is represented by black circles. Dashed lines indicate $\pm 0.1$~dex.}
\label{Fig_YBaLaEuLines}
\end{figure}
\begin{table*}[!ht]
\centering
\footnotesize
\caption{Line list}
\begin{tabular}{cccccccccccc}
\hline\hline
$\lambda$ (\AA) & Species & $\chi_{exc}$ & $\log gf$ & $\log gf$ & $\log gf$ & $\log gf$ & $\log gf$ & $\log gf$ & $\log gf$ & $\log gf$& $\log gf$ \\
& & (eV) & (adopted) & (HPC+02) & (SHFS87) & (HLG+82) & (GSAS14) & (NIST) & (Kurucz) & (VALD) & (MPK13) \\
\hline
4982.14 & Y II & 1.03 & -1.34 & --- & -1.26 & -1.29 & --- & --- & -1.29 & -1.29 & -1.26 \\
5087.43 & Y II & 1.08 & -0.38 & -0.17 & -0.20 & -0.17 & -0.17 & --- & -0.17 & -0.17 & -0.26 \\
5123.21 & Y II & 0.99 & -0.83 & -0.83 & --- & -0.83 & --- & --- & -0.83 & -0.83 & --- \\
5200.41 & Y II & 0.99 & -0.62 & -0.57 & --- & -0.57 & -0.57 & --- & -0.57 & -0.57 & -0.63 \\
\vspace{0.2cm}
5402.78 & Y II & 1.84 & -0.58 & --- & -0.59 & --- & --- & --- & -0.51 & -0.51 & -0.55 \\
4554.03 & Ba II & 0.00 & +0.30$^a$ & +0.17 & --- & --- & +0.17 & +0.14 & +0.17 & +0.17 & --- \\
5853.69 & Ba II & 0.60 & -0.90$^a$ & -1.01 & -0.53 & --- & -1.03 & -0.91 & -1.00 & -1.00 & --- \\
6141.71 & Ba II & 0.70 & +0.15$^a$ & -0.07 & +0.40 & --- & --- & -0.03 & -0.08 & -0.08 & --- \\
\vspace{0.2cm}
6496.91 & Ba II & 0.60 & -0.10$^a$ & --- & +0.18 & --- & -0.41 & -0.41 & -0.38 & -0.38 & --- \\
5114.55 & La II & 0.24 & -1.02$^a$ & --- & --- & --- & --- & --- & -1.06 & -1.03 & --- \\
5290.82 & La II & 0.00 & -1.70$^a$ & --- & -1.29 & --- & --- & --- & -1.75 & -1.65 & --- \\
6320.38 & La II & 0.17 & -1.32$^a$ & -1.52 & -1.43 & --- & --- & --- & -1.61 & -1.56 & -1.33 \\
\vspace{0.2cm}
6390.48 & La II & 0.32 & -1.31$^a$ & --- & --- & --- & --- & --- & -1.45 & -1.41 & --- \\
6437.64 & Eu II & 1.32 & +0.26$^a$ & -0.32 & --- & --- & --- & --- & -0.28 & -0.32 & --- \\
\vspace{0.2cm}
6645.11 & Eu II & 1.38 & +0.20$^a$ & +0.12 & --- & --- & --- & --- & +0.20 & +0.12 & --- \\
\hline
\end{tabular}
\label{Tab_linelist}
\tablefoot{
HPC+02: \citet{Hill.etal:2002}; SHFS87: \citet{Spite.etal:1987}; HLG+82: \citet{Hannaford.etal:1982};
GSAS14: \citet{Grevesse.etal:2014}; MPK13: \citet{Mishenina.etal:2013}.
\tablefoottext{a}{$\log gf$ reported corresponds to the sum of individual values.}
}
\end{table*}
\begin{figure*}[!ht]
\centering
\begin{tabular}{c}
\vspace{-2.8cm}
\resizebox{0.95\hsize}{!}{\includegraphics[angle=-90]{Sun_Ba_lines.ps}} \\
\vspace{-2cm}
\resizebox{0.95\hsize}{!}{\includegraphics[angle=-90]{HD93800_Ba_lines.ps}}
\end{tabular}
\caption {Spectra in the region of \ion{Ba}{II} lines.
{\it Upper panels:} Solar spectrum. Solid lines indicate the synthetic spectra for
$\epsilon({\rm Ba})$ $ = 2.03,\ 2.13,\ 2.23$; the observed spectrum is represented by dots. {\it Lower panels:} Spectrum synthesis fitting
of \ion{Ba}{II} lines for HD~93800. }
\label{Fig_solar_BaLines}
\end{figure*}
\begin{figure*}[!ht]
\centering
\begin{tabular}{c}
\vspace{-2.8cm}
\resizebox{0.95\hsize}{!}{\includegraphics[angle=-90]{Sun_La_lines.ps}} \\
\vspace{-2.0cm}
\resizebox{0.95\hsize}{!}{\includegraphics[angle=-90]{HD15133_La_lines.ps}}
\end{tabular}
\caption { Spectra in the region of \ion{La}{II} lines. {\it Upper panels:}
Observed and synthetic solar spectra.
Solid lines indicate the synthetic spectra for $\epsilon({\rm La})$ $ = 1.12,\ 1.22,\ 1.32$;
the dots indicate the observed spectrum. {\it Lower panels:} Spectrum synthesis fitting
for HD~15133. }
\label{Fig_solar_LaLines}
\end{figure*}
\subsection*{Barium}
The barium abundances were obtained using \ion{Ba}{II} lines at $4554$, $5853$, $6141$, and $6496$~\AA, as listed in Table \ref{Tab_linelist}.
Odd-numbered Ba isotopes exhibit hyperfine splitting, which was taken into account by employing a code made available by \citet{McWilliam:1998}, following the calculations described by \citet{Prochaska.McWilliam:2000}, as reported in \citet{Barbuy.etal:2014}.
We considered that the nuclides $^{134}$Ba, $^{135}$Ba, $^{136}$Ba, $^{137}$Ba, $^{138}$Ba
contribute 2.42\%, 6.59\%, 7.85\%, 11.23\%, and 71.7\% to the total abundance, respectively \citep{Lodders.etal:2009}.
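A minimal sketch of how these isotopic fractions enter the line list: the total $gf$ of a \ion{Ba}{II} line is distributed over the isotopes in proportion to their solar-system abundances. The further HFS splitting of the odd isotopes comes from the \citet{McWilliam:1998} code and is not reproduced here.

```python
# Solar-system Ba isotope fractions quoted in the text (Lodders et al. 2009).
BA_FRACTIONS = {134: 0.0242, 135: 0.0659, 136: 0.0785, 137: 0.1123, 138: 0.717}

def weighted_gf(total_loggf, fractions=BA_FRACTIONS):
    """Distribute a line's total gf over the isotopes by abundance fraction.
    Odd isotopes (135, 137) would be further split into HFS components."""
    total_gf = 10.0**total_loggf
    return {iso: frac * total_gf for iso, frac in fractions.items()}
```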
Figure \ref{Fig_solar_BaLines} shows the spectra of the Sun and the star HD~93800 in the region of \ion{Ba}{II} lines.
To fit the solar spectrum, the total $\log gf$ of the Ba lines was increased by $\sim 0.1-0.2$~dex with respect
to values from the literature. The source of this difference is the adopted value
for the solar microturbulence velocity; we used the same value as in Paper~I ($v_{\rm t, \odot} = 0.90$~km~s$^{-1}$).
We checked the reliability of abundances derived from individual Ba lines using the same procedure as applied for Y lines,
as presented in Fig. \ref{Fig_YBaLaEuLines}.
The mean deviation of the abundances derived from each Ba line does not exceed $0.1$~dex.
\subsection*{Lanthanum}
The La abundances were obtained using the lines listed in Table~\ref{Tab_linelist}.
The solar and HD~15133 spectra in the region of La lines are shown in Fig.~\ref{Fig_solar_LaLines}.
The HFS of the \ion{La}{II} line at 5114~\AA\ was taken from \citet{Ivans.etal:2006}.
For lines at 5290~\AA, 6320~\AA\ and 6390~\AA, the HFS was obtained by using the
code made available by \citet{McWilliam:1998} \citep[see][]{Prochaska.McWilliam:2000}, as reported in \citet{Barbuy.etal:2014}.
The nuclide $^{139}$La is the dominant isotope, contributing 99.9\% to the total solar system abundance \citep{Lawler.etal:2001a}.
For the \ion{La}{II} line at $6320$~\AA, the effect of a \ion{Ca}{I} line at $6318.3$~\AA\ that shows autoionization effects and produces a $\sim$5~\AA\ broad line was taken into account \citep[e.g.][see also the derivation of Mg abundances in Paper~I]{Lecureur.etal:2007}.
The \ion{Ca}{I} autoionization line was treated by increasing its radiative broadening to reflect the much shorter lifetime of the autoionized level compared with its radiative lifetime. The radiative broadening had to be increased by a factor of 16~000 relative to its standard value ($\propto 1/ \lambda^2$, based on the radiative lifetimes alone) to reproduce the \ion{Ca}{I} dip in the solar spectrum. The Ca abundance of each star was taken into account when computing the \ion{Ca}{I} line in the 6320~\AA\ region.
\subsection*{Europium}
Europium abundances were derived using \ion{Eu}{II} lines at $6437$~\AA\ and $6645$~\AA.
We used the HFS given by \citet{Ivans.etal:2006},
and isotopic proportions of 47.8\% for $^{151}$Eu and 52.2\% for $^{153}$Eu \citep{Lawler.etal:2001b}.
The total $\log gf$ value was adjusted to fit the solar spectrum (Fig.~\ref{Fig_solar_EuLines}) by
adopting $\epsilon({\rm Eu})$~=~0.51 \citep{Grevesse.Sauval:1998}.
The observed and the synthetic spectra of HD~15133 in the region of \ion{Eu}{II} are also presented in Fig.~\ref{Fig_solar_EuLines}.
\begin{figure}[!ht]
\centering
\begin{tabular}{c}
\vspace{-2.2cm}
\resizebox{0.9\hsize}{!}{\includegraphics[angle=-90]{Sun_Eu_lines.ps}} \\
\vspace{-1.8cm}
\resizebox{0.9\hsize}{!}{\includegraphics[angle=-90]{HD15133_Eu_lines.ps}}
\end{tabular}
\caption {Profiles of the \ion{Eu}{II} line at $6437$~\AA\ and $6645$~\AA. {\it Upper panels:} Solar spectrum. Solid lines indicate the synthetic spectra for $\epsilon({\rm Eu})$ $ = 0.41,\ 0.51,\ 0.61$; the dots indicate the observed spectrum. {\it Lower panels:} Spectrum synthesis fitting
of \ion{Eu}{II} lines for HD~15133. }
\label{Fig_solar_EuLines}
\end{figure}
\subsection{Spurious abundance trends}
\label{Sec_trends}
We determined the dependence of our final abundances on $T_{\rm eff}$.
Panels a, b, c, and d in Fig.~\ref{Fig_teff_abonds} show the abundances {\it vs.} $T_{\rm eff}$,
and a significant trend is observed for Y, Ba, and La abundances. Following a procedure similar to that adopted in Paper~I,
we corrected for this trend by fitting a second-order polynomial and by assuming that abundance trend corrections are null at $T_{\rm eff}~=~5777$~K.
This procedure was applied to abundances individually derived from each line, and the corrected abundances {\it vs.} $T_{\rm eff}$
are shown in panels e, f, g, and h in Fig.~\ref{Fig_teff_abonds}.
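This de-trending step can be sketched as follows, with the correction anchored to zero at the solar temperature; the data in the example are mock values, and the actual fits are performed line by line.

```python
import numpy as np

def detrend(teff, abund, t_ref=5777.0):
    # Fit a second-order polynomial in Teff to the line abundances and
    # subtract it, fixing the correction to zero at t_ref (solar Teff).
    coeffs = np.polyfit(teff, abund, deg=2)
    trend = np.polyval(coeffs, teff) - np.polyval(coeffs, t_ref)
    return abund - trend
```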
\begin{figure}
\centering
\begin{tabular}{cc}
\resizebox{\hsize}{!}{\includegraphics{YBaLaEu_teff_abo.ps}}
\end{tabular}
\caption {Abundances of Y, Ba, La, and Eu {\it vs.} temperature before ({\it panels a, b, c,} and {\it d}) and after ({\it panels e, f, g,} and {\it h}) the correction for the trend with $T_{\rm eff}$. We considered the mean abundance in the range $5680 < T_{\rm eff} < 5880$ K as the zero point of the correction. The curve in the left panels is shown to illustrate the procedure, which was applied
to abundances individually derived from each line.}
\label{Fig_teff_abonds}
\end{figure}
\subsection{Errors}
The errors in abundances result mainly from the errors of the stellar parameters and from the errors in fitting the synthetic spectra.
We estimated the errors in abundances due to uncertainties in the stellar parameters, as presented in Table
\ref{Tab_errors}. The variations $\Delta$[Y, Ba, La, Eu/H] were computed by changing the atmospheric parameters by $\Delta T_{\rm eff} = +100$~K,
$\Delta \log g = +0.2$~dex and $\Delta v_{\rm t} = +0.2$~km~s$^{-1}$. This procedure was performed for three of the sample stars with different temperatures
($T_{\rm eff} \sim 5900$~K and $\sim 5200$~K) and metallicities ([Fe/H]~$\sim 0.2$ and $\sim 0.5$).
Errors in the final abundance of Y and Eu are $\sim 0.06-0.08$~dex, and errors in [La/H] are $\sim 0.10-0.12$ for the three stars.
The uncertainty in the Ba abundance reaches $0.19$~dex for the hottest star, but it is smaller for the cooler stars ($\sim 0.05-0.11$~dex).
The final abundances of Y, Ba, La and Eu are given in Table \ref{Tab_final_abonds}.
The uncertainties presented in this Table correspond to line-to-line scatter.
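The "Total" column of Table~\ref{Tab_errors} is consistent, to rounding, with adding the three contributions in quadrature, e.g. for the Ba entry of HD~82943:

```python
import math

def total_error(*deltas):
    # Independent error contributions added in quadrature.
    return math.sqrt(sum(d * d for d in deltas))

# HD 82943, Ba: sqrt(0.04^2 + 0.04^2 + 0.18^2) ~ 0.19, matching the table.
```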
\begin{table}[!ht]
\centering
\tiny
\caption{Abundance variations with stellar parameters.}
\begin{tabular}{lcccc}
\hline\hline
& \multicolumn{4}{l}{HD 82943 } \\
& \multicolumn{4}{l}{($T_{\rm eff} = 5929$~K, $\log g = 4.35$, $v_{\rm t} = 1.22$~km~s$^{-1}$, [Fe/H]$= 0.23$)} \\
\cline{2-5} \\
& $\Delta {\rm T}_{\rm eff}$ & $\Delta \log g$ & $\Delta v_{\rm t}$ & Total \\
& ($100$~K) & ($0.2$~dex) & ($0.2$~km~s$^{-1}$) & \\
\hline
$\Delta$[Y/H] & $+0.01$ & $+0.07$ & $-0.03$ & $+0.07$ \\
$\Delta$[Ba/H] & $+0.04$ & $+0.04$ & $-0.18$ & $+0.19$ \\
$\Delta$[La/H] & $+0.05$ & $+0.09$ & $-0.00$ & $+0.10$ \\
$\Delta$[Eu/H] & $-0.01$ & $+0.07$ & $-0.01$ & $+0.07$ \\
\hline
& \multicolumn{4}{l}{HD 87007 } \\
& \multicolumn{4}{l}{($T_{\rm eff} = 5282$~K, $\log g = 4.54$, $v_{\rm t} = 0.61$~km~s$^{-1}$, [Fe/H]$= 0.29$)} \\
\cline{2-5} \\
& $\Delta {\rm T}_{\rm eff}$ & $\Delta \log g$ & $\Delta v_{\rm t}$ & Total \\
\hline
$\Delta$[Y/H] & $+0.01$ & $+0.07$ & $-0.04$ & $+0.08$ \\
$\Delta$[Ba/H] & $+0.03$ & $+0.02$ & $-0.10$ & $+0.11$ \\
$\Delta$[La/H] & $+0.04$ & $+0.11$ & $+0.01$ & $+0.12$ \\
$\Delta$[Eu/H] & $-0.01$ & $+0.06$ & $+0.01$ & $+0.06$ \\
\hline
& \multicolumn{4}{l}{HD 93800 } \\
& \multicolumn{4}{l}{($T_{\rm eff} = 5181$~K, $\log g = 4.44$, $v_{\rm t} = 0.30$~km~s$^{-1}$, [Fe/H]$= 0.49$)} \\
\cline{2-5} \\
& $\Delta {\rm T}_{\rm eff}$ & $\Delta \log g$ & $\Delta v_{\rm t}$ & Total \\
\hline
$\Delta$[Y/H] & $+0.03$ & $+0.07$ & $-0.04$ & $+0.08$ \\
$\Delta$[Ba/H] & $+0.02$ & $+0.00$ & $-0.05$ & $+0.05$ \\
$\Delta$[La/H] & $+0.04$ & $+0.10$ & $+0.01$ & $+0.11$ \\
$\Delta$[Eu/H] & $-0.01$ & $+0.07$ & $-0.01$ & $+0.07$ \\
\hline
\end{tabular}
\label{Tab_errors}
\end{table}
\section{Results}
\label{Sec_results}
The final abundances of Y, Ba, La, and Eu are shown in Fig.~\ref{Fig_abonds}, using iron as the reference element.
We compare our results with data from the literature for halo stars \citep{Francois.etal:2007, Ishigaki.etal:2013, Nissen.Schuster:2011}
and disk stars \citep{Edvardsson.etal:1993, Reddy.etal:2003, Reddy.etal:2006, Bensby.etal:2005,
Nissen.Schuster:2011, Mishenina.etal:2013}.
The halo data show a large scatter, as pointed out by \citet{Francois.etal:2007}, and
the trends of the element-to-Fe ratios for these four elements become tighter for [Fe/H]~$> -2.8$.
Together, these samples trace the chemical evolution of these elements. The
moderate-metallicity halo and thick-disk stars studied by \citet{Ishigaki.etal:2013} show essentially solar ratios,
except for a few of them, which are enhanced in
La and Ba. There is also a scatter of Eu abundances and a mean overabundance of [Eu/Fe].
The halo and thick-disk data from \citet{Nissen.Schuster:2011} are very homogeneous and close to solar.
The solar-neighbourhood FGK stars from \citet{Mishenina.etal:2013} illustrate the disk abundance ratios
and metallicities, which reach [Fe/H]~$\sim +0.3$. Our stars constitute the most metal-rich sample
and extend the abundance trends of these elements to high metallicities.
\begin{figure*}[!ht]
\centering
\begin{tabular}{c}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{All_pops.ps}} \\
\end{tabular}
\caption {Abundances of Y, Ba, La and Eu {\it vs.} metallicities. The sample stars are indicated by
black open circles. Halo stars studied by \citet{Francois.etal:2007} and
\citet{Ishigaki.etal:2013} are represented by red crosses and blue triangles, respectively.
The disk stars are shown as blue diamonds \citep{Edvardsson.etal:1993}, red open circles \citep{Reddy.etal:2003, Reddy.etal:2006},
magenta triangles \citep{Bensby.etal:2005}, green stars \citep{Nissen.Schuster:2011}, and yellow squares \citep{Mishenina.etal:2013}.}
\label{Fig_abonds}
\end{figure*}
\begin{figure*}[!ht]
\centering
\begin{tabular}{c}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{Disk_pops.ps}} \\
\end{tabular}
\caption {Abundances of Y, Ba, La, and Eu {\it vs.} metallicities. The abundances derived in this work (circles)
are compared with the thin- (green squares) and thick-disk (red stars) stars and
the intermediate population (blue diamonds) from
\citet{Edvardsson.etal:1993}, \citet{Reddy.etal:2003, Reddy.etal:2006},
\citet{Bensby.etal:2005}, \citet{Nissen.Schuster:2011}, \citet{Mishenina.etal:2013}, and \citet{Ishigaki.etal:2013}.
Grey crosses represent stars with no available U, V, and W velocities, for which no membership assignment was possible.
The symbols representing our sample stars are filled according to their membership classification: thin disk (green), thick disk (red) and
intermediate population (blue).}
\label{Fig_abonds2}
\end{figure*}
\subsection{Abundance trends}
In Fig.~\ref{Fig_abonds2} we present a more detailed view of the abundance trends within the Galactic disk by separating the sample
into thin- and thick-disk components and the intermediate population.
The membership assignment procedure is described in Sect.~\ref{Sec_orbits}.
Using the U, V, and W space velocities from the Geneva-Copenhagen
Survey \citep{Holmberg.etal:2009, Casagrande.etal:2011}, the same procedure was
applied to stars studied by \citet{Edvardsson.etal:1993}, \citet{Reddy.etal:2003, Reddy.etal:2006},
\citet{Bensby.etal:2005}, and \citet{Mishenina.etal:2013}. The assigned stellar population identifications are used
to plot the literature stars with different symbols in Figs. 8-16, according to their kinematic classification
in the terms employed in Paper~I and as briefly described in Sect.~\ref{Sec_orbits}.
The yttrium-to-iron abundance ratio decreases with increasing metallicity, which is compatible with the trend
found for the disk population. [Ba/Fe] increases with metallicities up to [Fe/H]~$\sim -0.1$, and above this limit, the
[Ba/Fe] ratio seems to remain constant. Data from the literature indicate solar [La/Fe] values in the metallicity range
$-0.8 <$~[Fe/H]~$< -0.3$, which
decreases to [La/Fe]~$\sim -0.1$ for stars with [Fe/H]~$\sim 0.0$. Our sample stars follow the trend found for the
literature data, with the La-to-Fe ratio decreasing with metallicity.
The [La/Fe] drops from [La/Fe]~$\sim -0.1$ for stars with solar iron abundance
towards [La/Fe]~$\sim -0.2$ at high metallicities.
The literature data show a trend of
Eu-to-Fe ratios dropping from [Eu/Fe]~$\sim$0.4 in the halo and metal-poor thick disk towards
a solar value at the solar metallicity. The [Eu/Fe] values of our sample stars
follow this trend: they reach slightly sub-solar values ([Eu/Fe]~$\sim -0.1$)
at high metallicities ([Fe/H]~$> 0.4$).
\begin{figure}[!t]
\centering
\begin{tabular}{c}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{Disk_pops_YBaLa_vs_FeH_BaH.ps}} \\
\end{tabular}
\caption {Abundance ratios among neutron-capture elements. The symbols are the same as in
Fig.~\ref{Fig_abonds2}. The horizontal lines indicate the abundance ratios for the $r$-process (dash-dotted) and $s$-process (dashed) components
in the solar system, as predicted by \citet{Arlandini.etal:1999}.}
\label{Fig_ratios}
\end{figure}
\subsection{Abundance ratios among neutron-capture elements}
Abundance ratios among neutron-capture elements can provide clues on possible
nucleosynthesis sites of these elements. For this purpose, we inspect the
abundance ratios among Y, Ba, and La (Fig.~\ref{Fig_ratios}), as well as the ratio
between the abundances of these elements and the $r$-element Eu abundances (Fig.~\ref{Fig_XEu_ratio}).
The ratio of heavy (second peak) to light (first peak) $s$-process elements
depends on the metallicity, as discussed by \citet{Gallino.etal:1998} and \citet{Busso.etal:1999}.
In Fig.~\ref{Fig_ratios}, we show the [Y/Ba], [Y/La] and [Ba/La] vs. [Fe/H] and vs. [Ba/H].
Abundance ratios that fall between the pure-$s$ and pure-$r$ lines indicate contributions from both processes.
[Y/Ba] (Figs.~\ref{Fig_ratios}a,b) is closer to the pure-$s$ line and tends to be compatible with $s$-process production.
It is slightly enhanced for stars with ${\rm [Fe/H]} \lesssim -0.5$, which may require an additional nucleosynthesis process,
as suggested by \citet{Francois.etal:2007} and \citet{Ishigaki.etal:2013}, while [Y/Ba]~$\lesssim$~0.0 for metallicities above this limit.
The Y-to-La ratios (Figs. \ref{Fig_ratios}c,d) are approximately constant in stars with metallicities ranging from
$-1.0$ to $+0.3$. The most metal-rich stars in our sample ([Fe/H]~$> +0.4$) appear to have slightly higher [Y/La] values, with
values closer to the pure-$s$ line. The [Ba/La] values (Figs. \ref{Fig_ratios}e,f) increase with [Fe/H],
from [Ba/La]~$\sim -0.1$ for metallicity $-1.0$ up
to [Ba/La]~$\sim +0.2$ for high metallicities.
[Ba/Eu] vs. [Fe/H] and vs. [Eu/H] values (Figs. \ref{Fig_XEu_ratio}c,d) are low for thick-disk stars at low metallicities,
indicating that these stars were probably enriched by SNe type II, at early times in the Galaxy.
[Ba/Eu] increases with [Fe/H] and reaches solar values around [Fe/H]~$\sim 0.0$.
[Y/Eu] and [La/Eu] vs. [Fe/H] and [Eu/H] (Figs. \ref{Fig_XEu_ratio}a,b,e,f) are low for metal-poor thick-disk stars,
and essentially solar for most of the sample thin- and thick-disk stars and intermediate populations.
\begin{figure}[!t]
\centering
\begin{tabular}{c}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{Disk_pops_XEu_vs_FeH.ps}} \\
\end{tabular}
\caption {Element-to-Eu ratios {\it vs.} stellar metallicities ({\it left}) and europium abundances ({\it right}).
The symbols are the same as in
Fig.~\ref{Fig_abonds2}.
The horizontal lines indicate the abundance ratios for the $r$-process (dash-dotted) and $s$-process (dashed) components
in the solar system, as predicted by \citet{Arlandini.etal:1999}.}
\label{Fig_XEu_ratio}
\end{figure}
\subsection{Neutron-capture versus $\alpha$-elements}
Another interesting piece of information comes from inspecting the abundance
ratios between heavy and $\alpha$-elements.
Iron, which is frequently used as the reference element, is produced in
both SNe II and SNe Ia events. The $r$-process component of the heavy elements and the
$\alpha$-elements, in particular oxygen and magnesium, are produced in SNe II. Therefore,
it is interesting to investigate how the
abundance ratios of neutron-capture elements vary as a function of the abundances of oxygen and magnesium, as shown in
Figs. \ref{Fig_XO_ratios} and \ref{Fig_XMg_ratios}.
\begin{figure*}[!ht]
\centering
\begin{tabular}{c}
\resizebox{0.9\hsize}{!}{\includegraphics[angle=-90]{Disk_pops_XO_vs_FeH_OH.ps}} \\
\end{tabular}
\caption {Abundance ratios between neutron-capture elements and oxygen as a function of metallicity ({\it left})
and [O/H] ({\it right}). The symbols are the same as in
Fig.~\ref{Fig_abonds2}.}
\label{Fig_XO_ratios}
\end{figure*}
\begin{figure*}[!ht]
\centering
\begin{tabular}{c}
\resizebox{0.9\hsize}{!}{\includegraphics[angle=-90]{Disk_pops_XMg_vs_FeH_MgH.ps}} \\
\end{tabular}
\caption {Abundance ratios between neutron-capture elements and magnesium as a function of metallicity ({\it left})
and [Mg/H] ({\it right}). The symbols are the same as in
Fig.~\ref{Fig_abonds2}.}
\label{Fig_XMg_ratios}
\end{figure*}
\begin{figure*}[!ht]
\centering
\begin{tabular}{c}
\resizebox{0.9\hsize}{!}{\includegraphics[angle=-90]{Disk_pops_Ratios_vs_Rm.ps}} \\
\end{tabular}
\caption {Abundance ratios among neutron-capture element {\it vs.} mean Galactocentric distance. The symbols are the same as in
Fig.~\ref{Fig_abonds2}.}
\label{Fig_ratios_Rm}
\end{figure*}
\begin{figure*}[!ht]
\centering
\resizebox{0.9\hsize}{!}{\includegraphics[angle=-90]{Disk_pops_Ratios-alpha_vs_Rm.ps}} \\
\caption {[Element/O] ({\it left panels}) and [element/Mg] ({\it right panels}) ratios {\it vs.} mean Galactocentric distances. The symbols are the same as
in Fig.~\ref{Fig_abonds2}.}
\label{Fig_ratios_alpha_Rm}
\end{figure*}
[Y/O] and [Ba/O] vs. [Fe/H] (Figs. \ref{Fig_XO_ratios}a,c) increase with metallicity for all populations,
but our sample stars show lower [Y,Ba,La/O] than the thin-disk stars.
It is interesting to note that La and Eu have similar behaviours: [La/O] and [Eu/O] vs. [Fe/H] (Figs. \ref{Fig_XO_ratios}e,g)
increase steadily with metallicity, and our results are compatible with the thin-disk behaviour.
More interesting in these comparisons is the behaviour of [Y/O] and [Ba/O] vs. [O/H] (Figs. \ref{Fig_XO_ratios}b,d),
which indicates that our sample has higher oxygen abundances than the thin-disk stars
and shows higher [Y,Ba/O] ratios relative to literature thick-disk stars. The same applies to
[La/O] and [Eu/O] vs. [O/H] (Figs. \ref{Fig_XO_ratios}f,h), but in this case, the literature thick-disk stars have
the same behaviour as the sample stars.
\begin{figure*}[!ht]
\centering
\begin{tabular}{ccc}
\resizebox{0.3\hsize}{!}{\includegraphics{Disk_pops_Kin_El-O_vs_Age.ps}} &
\resizebox{0.3\hsize}{!}{\includegraphics{Disk_pops_Kin_El-Mg_vs_Age.ps}} &
\resizebox{0.3\hsize}{!}{\includegraphics{Disk_pops_Kin_El-Fe_vs_Age.ps}} \\
\end{tabular}
\caption {Element-to-oxygen ({\it left}), element-to-magnesium ({\it middle}), and element-to-iron ratios ({\it right}) {\it vs.} stellar ages.
The notation is the same as in
Fig.~\ref{Fig_abonds2}. Ages for the metal-rich sample were derived as described in Paper~I.
Ages for samples from the literature were retrieved from the catalogue of \citet{Casagrande.etal:2011}.}
\label{Fig_abonds_Age}
\end{figure*}
\begin{figure}[!ht]
\centering
\begin{tabular}{c}
\resizebox{0.95\hsize}{!}{\includegraphics{Disk_pops_EuBa_vs_Age.ps}} \\
\end{tabular}
\caption {[Ba/Eu] {\it vs.} stellar ages. The notation is the same as in
Fig.~\ref{Fig_abonds2}.}
\label{Fig_EuBa_Age}
\end{figure}
Figure~\ref{Fig_XMg_ratios} shows [Y,Ba,La,Eu/Mg] vs. [Fe/H] and [Mg/H]. It is similar to
Fig.~\ref{Fig_XO_ratios}, but it shows a strikingly better-defined
chemical enrichment pattern.
In addition, the [Y,Ba/Mg] ratios are compatible with the ratios in thin-disk stars.
[Eu/Mg] is constant across the whole metallicity range, indicating that Eu and Mg are produced in constant proportions
in SNe type II.
It is important to note that for metal-rich stars, magnesium might be a better reference element
than oxygen. A plot of [Mg/O] vs. [Mg/H], where oxygen drops relative to magnesium at high metallicities, was given for example in
\citet{Bensby.etal:2004}.
\subsection{Abundances vs. Galactocentric distances}
Figures \ref{Fig_ratios_Rm} and \ref{Fig_ratios_alpha_Rm} show abundance ratios vs.
the mean Galactocentric distance, defined as $R_{\rm m} = (R_{\rm a} + R_{\rm p})/2$,
that is, the mean of the apocentric and pericentric distances.
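The mean orbital distance defined above is a simple average of the two orbital extremes; as a trivial illustration (the apocentric and pericentric distances below are hypothetical, not from the sample):

```python
def mean_galactocentric_distance(r_apo, r_peri):
    """R_m = (R_a + R_p) / 2, the mean of the apocentric and pericentric distances."""
    return (r_apo + r_peri) / 2.0

# hypothetical orbit, distances in kpc
r_m = mean_galactocentric_distance(8.5, 3.5)   # -> 6.0 kpc
```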
The abundance ratios in our sample stars appear to be constant with $R_{\rm m}$.
On the other hand, data from the literature indicate that [Y,Ba/Eu], [Y,Ba,Eu/O], and [Y,Ba/Mg] increase
with $R_{\rm m}$.
The observed increase of [Y,Ba/Eu] with $R_{\rm m}$ could mean that
stars close to the centre are older than the outermost stars. Note that Ba has been used as an age indicator by
\citet{Edvardsson.etal:1993}.
[Y/Eu,O,Mg] and [Ba/Eu,O,Mg] (Figs. \ref{Fig_ratios_Rm}d,e and \ref{Fig_ratios_alpha_Rm}a,b,e,f) for our thick-disk stars are
clearly enhanced relative to the literature thick-disk stars.
These ratios are more compatible with those of the literature thin-disk stars,
but our stars are located in the inner regions, which might indicate
that they are old inner thin-disk stars (cf. Paper~I).
\subsection{Abundances vs. stellar ages}
The chemical evolution of our Galaxy with time can be inspected based on estimates of stellar ages.
However, age is difficult to derive for most stars.
In Paper~I we derived stellar ages with uncertainties smaller than 30\% for 36 sample stars.
Figures \ref{Fig_abonds_Age}a,b,c present [Y,Ba,La,Eu/O], [Y,Ba,La,Eu/Mg], [Y,Ba,La,Eu/Fe] vs. age,
respectively.
There is a clear distinction between the literature thin- and thick-disk stars in the [Y/O] and [Ba/O] vs. age planes.
Our sample matches the thin-disk stars, again indicating that they might be old thin-disk stars.
The scatter also increases with age, and different ratios might indicate that stars were formed at
different Galactocentric distances.
[La/O] is more similar to [Eu/O], and both show solar values at all ages, except that
[Eu/O] for thick-disk stars is somewhat lower than in thin-disk stars.
Figure \ref{Fig_abonds_Age}b is similar to Fig.~\ref{Fig_abonds_Age}a, but with a smaller scatter.
Abundance ratios using Fe as the reference element (Fig.~\ref{Fig_abonds_Age}c) are constant with stellar age for all populations,
except [Eu/Fe] in thick-disk stars, which increases with age.
Thus the plots relative to O are better suited to distinguish different stellar populations.
It is interesting to note that, again, the Eu and Mg production is tightly correlated
for all ages.
The [Ba/Eu] ratio, which is plotted against stellar ages in Fig. \ref{Fig_EuBa_Age}, is constant and close to solar
across the whole age range for thin-disk and our sample stars. On the other hand, thick-disk stars show
low Ba-to-Eu ratios.
\subsection{Comparison with bulge stars}
We compared the abundances of the sample stars and the bulge microlensed dwarf and subgiant stars
studied by \citet{Bensby.etal:2010b, Bensby.etal:2011, Bensby.etal:2013}, and red giants from the
bulge Plaut field by \citet{Johnson.etal:2012, Johnson.etal:2013}.
Figure \ref{Fig_bulge_YBaFe} presents Y, Ba, La, and Eu abundances as a function of [Fe/H].
Literature thin- and thick-disk data are shown for comparison.
There is no clear distinction between the bulge stars and our sample
in terms of metallicity and [Y,Ba,Eu/Fe] abundance ratios.
Figure~\ref{Fig_bulge_YBaOMg} shows the abundances of Y and Ba using oxygen and magnesium as the reference elements.
There is a clear distinction between the trends in thin-disk and bulge stars, whereas our
sample follows the trend observed for the bulge.
In addition, these abundance ratios indicate that the sample stars may be the high-metallicity extension of the thick-disk stars.
The trend of the thick-disk stars together with our sample is similar to the trend observed for the bulge stars.
\begin{figure*}[!ht]
\centering
\begin{tabular}{c}
\resizebox{0.95\hsize}{!}{\includegraphics[angle=-90]{Bulge_YBaLaEuFe.ps}} \\
\end{tabular}
\caption {Abundances of Y, Ba, La, and Eu vs. [Fe/H] for the sample stars (black dots) and bulge stars (red squares).
The abundances of the bulge population were derived by \citet[][microlensed dwarf stars]{Bensby.etal:2010b, Bensby.etal:2011, Bensby.etal:2013}
and by \citet[][RGB stars]{Johnson.etal:2012}. The grey and green crosses
indicate thin- and thick-disk stars. }
\label{Fig_bulge_YBaFe}
\end{figure*}
\begin{figure*}[!ht]
\centering
\begin{tabular}{c}
\vspace{-1.2cm}
\resizebox{0.95\hsize}{!}{\includegraphics[angle=-90]{Bulge_YBaFeO.ps}} \\
\resizebox{0.95\hsize}{!}{\includegraphics[angle=-90]{Bulge_YBaFeMg.ps}} \\
\end{tabular}
\caption {Abundance ratios between neutron-capture elements and $\alpha$-elements for the sample stars and for bulge stars.
{\it Upper panel:} [Y/O] and [Ba/O] abundance ratios vs. [Fe/H] ({\it left}) and [O/H] ({\it right}).
{\it Lower panel:} [Y/Mg] and [Ba/Mg] abundance ratios vs. [Fe/H] ({\it left}) and [Mg/H] ({\it right}).
The notation is the same as in Fig.~\ref{Fig_bulge_YBaFe}.}
\label{Fig_bulge_YBaOMg}
\end{figure*}
\section{Discussion}
\label{Sec_discussion}
The mean Galactocentric distance is related to thin- and thick-disk membership probability,
as can be seen in Figs. \ref{Fig_ratios_Rm} and
\ref{Fig_ratios_alpha_Rm}.
The thick-disk stars
have the lowest mean distances $R_{\rm m}$, since they have pericentric distances closer to the Galactic centre than thin-disk stars.
For this reason, it is difficult to separate
the inner thin-disk population and the thick-disk components based on kinematics alone.
Abundance ratios reflect the enrichment of the interstellar medium at the time the star was formed,
and depend on the region of the Galaxy where
the star formation occurred (e.g. radial gradients, that is, the inner parts of the Galaxy contain stars with higher metallicities
than the outer parts).
If there is much radial migration in the dynamical evolution of the disk, interpreting [X/Fe] vs. [Fe/H]
for stars in the solar neighbourhood is difficult.
However, while the kinematics of the stars may change during their lifetime, the stellar abundances do not.
Therefore, chemical tagging is more efficient in identifying different stellar populations.
Our sample shows clear similarities with the thin-disk stars, but a few indicators
point to a different nucleosynthesis history for these two stellar populations.
These differences are, in particular, better seen when oxygen is used as the reference element,
for instance as [Y/O] and [Ba/O] vs. [Fe/H] and [O/H] (Figs.~\ref{Fig_XO_ratios}a,b,c,d).
Our sample stars show lower [Y/O] and [Ba/O] ratios, higher oxygen abundances,
and lower [Mg/O] than thin-disk stars.
Figure \ref{Fig_bulge_YBaOMg}, which shows [Y/O] and [Ba/O] vs. [Fe/H] and [O/H],
and [Y/Mg] and [Ba/Mg] vs. [Fe/H] and [Mg/H], suggests similarities between our sample and bulge stars.
The lower [Y/O] and [Ba/O] vs. [O/H] relative to thin-disk stars may be due to the older ages of our sample,
since the contribution from AGB stars to the $s$-element abundances is expected to be lower in the older stars.
\section{Summary}
\label{Sec_summary}
We have derived the abundances of the heavy elements Y, Ba, La, and Eu for the sample of 71 metal-rich
stars for which a detailed analysis was carried out in \citet{Trevisan.etal:2011}.
Chemical tagging is an efficient tool for identifying different stellar populations.
We compared our results with data from the literature for samples of thin- and thick-disk stars (Fig.~\ref{Fig_abonds2}).
Our sample of stars is more metal-rich than these samples.
We inspected the abundance ratios among neutron-capture elements (Figs.~\ref{Fig_ratios} and \ref{Fig_XEu_ratio}) and
between heavy and $\alpha$-elements (Figs. \ref{Fig_XO_ratios} and \ref{Fig_XMg_ratios}).
Our sample shows clear similarities with the thin-disk stars,
but there are differences, in particular, the [Y/O] and [Ba/O] vs. [Fe/H] and [O/H]
behaviour (Fig.~\ref{Fig_XO_ratios}a,b,c,d).
A comparison with the bulge abundances showed that [Y/Fe,O,Mg] and [Ba/Fe,O,Mg] vs. [Fe/H], [O/H], and [Mg/H]
(Figs. \ref{Fig_bulge_YBaFe} and \ref{Fig_bulge_YBaOMg})
suggest similarities between our sample and bulge stars.
It would be important to study more bulge stars in terms of the discriminators
La and Eu vs. the $\alpha$-elements O and Mg, and to compare them with our sample
to verify the degree of similarity between this sample and bulge stars.
The similar behaviour of La and Eu, which seem to vary in lockstep,
as well as the
constant value of [Eu/Mg] vs. [Mg/H] is remarkable. It suggests that Eu and Mg
are produced in the same SNe type II,
in the same proportions, at least at metallicities [Fe/H]~$> -1$.
In conclusion, our results indicate that the sample stars might either be bulge stars,
which justifies the denomination ``bulge-like'', as previously suggested by \citet{Pompeia.etal:2003},
or inner old thin-disk stars \citep{Haywood:2008}. In either case, they would have been
ejected by the bar towards the solar neighbourhood. It is less likely that they represent a high-metallicity complement to
the thick-disk stars. Finally, since some abundance ratios are different from those in the bulge and thin-
and thick-disk stars, the sample stars may be yet another stellar population. This makes additional verifications desirable.
\section*{Acknowledgments}
We would like to thank the anonymous referee for his or her helpful
comments, which improved this manuscript.
MT acknowledges the support of FAPESP, process no. 2012/05142-5.
The observations were carried out within Brazilian time in an ESO-ON agreement,
and within an IAG-ON agreement funded by FAPESP project n$^{\circ}$ 1998/10138-8.
We acknowledge partial financial support from CNPq and FAPESP.
\onecolumn
\normalsize
\begin{longtable}{lcrrrrr}
\caption{Final abundances}\\
\hline\hline
Star & $T_{\rm eff}$ (K) & \multicolumn{1}{c}{[Fe/H]} & \multicolumn{1}{c}{[Y/Fe]} & \multicolumn{1}{c}{[Ba/Fe]} & \multicolumn{1}{c}{[La/Fe]} & \multicolumn{1}{c}{[Eu/Fe]}\\
\hline
\endfirsthead
\caption{continued.} \\
\hline\hline
Star & $T_{\rm eff}$ (K) & \multicolumn{1}{c}{[Fe/H]} & \multicolumn{1}{c}{[Y/Fe]} & \multicolumn{1}{c}{[Ba/Fe]} & \multicolumn{1}{c}{[La/Fe]} & \multicolumn{1}{c}{[Eu/Fe]}\\
\hline
\endhead
\hline
\multicolumn{7}{l}{The number of lines used to compute the final abundances is indicated in parentheses.} \\
\multicolumn{7}{l}{The errors correspond to the line-to-line scatter.}
\endfoot
G 161-029 & 4798 & 0.01 $\pm$ 0.08 & 0.01 $\pm$ --- (1) & -0.02 $\pm$ 0.09 (4) & -0.07 $\pm$ 0.27 (2) & \multicolumn{1}{c}{---} \\
BD-02 180 & 5004 & 0.33 $\pm$ 0.04 & -0.22 $\pm$ 0.05 (3) & 0.05 $\pm$ 0.10 (4) & -0.07 $\pm$ 0.08 (2) & 0.00 $\pm$ 0.14 (2) \\
BD-05 5798 & 4902 & 0.20 $\pm$ 0.05 & -0.27 $\pm$ 0.07 (3) & -0.07 $\pm$ 0.13 (4) & -0.20 $\pm$ 0.19 (2) & 0.11 $\pm$ --- (1) \\
BD-17 6035 & 4892 & 0.09 $\pm$ 0.05 & -0.08 $\pm$ 0.07 (3) & -0.12 $\pm$ 0.12 (3) & 0.06 $\pm$ --- (1) & -0.16 $\pm$ --- (1) \\
CD-32 0327 & 4957 & -0.01 $\pm$ 0.05 & -0.01 $\pm$ 0.05 (2) & 0.04 $\pm$ 0.07 (4) & -0.02 $\pm$ 0.01 (2) & 0.18 $\pm$ 0.09 (2) \\
CD-40 15036 & 5429 & -0.03 $\pm$ 0.07 & -0.15 $\pm$ 0.08 (4) & 0.08 $\pm$ 0.09 (4) & -0.14 $\pm$ 0.02 (2) & 0.02 $\pm$ --- (1) \\
HD 8389 & 5274 & 0.58 $\pm$ 0.04 & -0.17 $\pm$ 0.10 (5) & 0.06 $\pm$ 0.13 (4) & -0.29 $\pm$ 0.13 (4) & -0.14 $\pm$ --- (1) \\
HD 9174 & 5599 & 0.41 $\pm$ 0.06 & -0.11 $\pm$ 0.06 (5) & 0.06 $\pm$ 0.09 (4) & -0.22 $\pm$ 0.08 (3) & -0.11 $\pm$ 0.10 (2) \\
HD 9424 & 5449 & 0.12 $\pm$ 0.05 & -0.13 $\pm$ 0.06 (4) & 0.03 $\pm$ 0.04 (4) & -0.16 $\pm$ 0.15 (2) & -0.07 $\pm$ --- (1) \\
HD 10576 & 5929 & 0.02 $\pm$ 0.06 & -0.08 $\pm$ 0.04 (5) & 0.13 $\pm$ 0.06 (4) & -0.07 $\pm$ 0.12 (3) & -0.03 $\pm$ 0.28 (2) \\
HD 11608 & 4966 & 0.39 $\pm$ 0.05 & -0.09 $\pm$ 0.10 (3) & 0.00 $\pm$ 0.11 (4) & -0.11 $\pm$ 0.07 (3) & 0.02 $\pm$ --- (1) \\
HD 12789 & 5810 & 0.27 $\pm$ 0.05 & -0.04 $\pm$ 0.09 (5) & 0.08 $\pm$ 0.09 (4) & -0.14 $\pm$ 0.12 (4) & -0.03 $\pm$ 0.00 (2) \\
HD 13386 & 5269 & 0.36 $\pm$ 0.04 & -0.13 $\pm$ 0.09 (5) & 0.02 $\pm$ 0.06 (4) & -0.11 $\pm$ 0.06 (3) & -0.11 $\pm$ 0.10 (2) \\
HD 15133 & 5223 & 0.46 $\pm$ 0.04 & -0.03 $\pm$ 0.09 (5) & 0.08 $\pm$ 0.10 (4) & -0.16 $\pm$ 0.08 (4) & -0.04 $\pm$ 0.04 (2) \\
HD 15555 & 4867 & 0.37 $\pm$ 0.05 & -0.21 $\pm$ 0.03 (5) & -0.12 $\pm$ 0.06 (4) & -0.30 $\pm$ 0.08 (3) & -0.11 $\pm$ 0.01 (2) \\
HD 16905 & 4866 & 0.27 $\pm$ 0.04 & -0.16 $\pm$ 0.08 (2) & -0.03 $\pm$ 0.09 (4) & -0.20 $\pm$ 0.04 (3) & -0.03 $\pm$ 0.14 (2) \\
HD 25061 & 5307 & 0.18 $\pm$ 0.04 & -0.05 $\pm$ 0.10 (5) & 0.05 $\pm$ 0.09 (4) & -0.05 $\pm$ 0.09 (4) & -0.11 $\pm$ 0.05 (2) \\
HD 26151 & 5383 & 0.33 $\pm$ 0.05 & -0.10 $\pm$ 0.06 (5) & 0.02 $\pm$ 0.05 (4) & -0.22 $\pm$ 0.11 (4) & -0.04 $\pm$ 0.02 (2) \\
HD 26794 & 4920 & 0.07 $\pm$ 0.03 & -0.17 $\pm$ 0.10 (2) & 0.02 $\pm$ 0.05 (4) & -0.10 $\pm$ 0.02 (3) & -0.02 $\pm$ --- (1) \\
HD 27894 & 4920 & 0.37 $\pm$ 0.03 & -0.17 $\pm$ 0.03 (2) & -0.00 $\pm$ 0.12 (4) & -0.16 $\pm$ 0.16 (2) & 0.01 $\pm$ 0.02 (2) \\
HD 30295 & 5406 & 0.32 $\pm$ 0.04 & -0.15 $\pm$ 0.08 (5) & 0.01 $\pm$ 0.06 (4) & -0.16 $\pm$ 0.07 (3) & 0.02 $\pm$ 0.07 (2) \\
HD 31452 & 5250 & 0.23 $\pm$ 0.04 & -0.18 $\pm$ 0.05 (5) & -0.01 $\pm$ 0.08 (4) & -0.11 $\pm$ 0.08 (2) & -0.01 $\pm$ --- (1) \\
HD 31827 & 5608 & 0.48 $\pm$ 0.04 & -0.04 $\pm$ 0.06 (5) & 0.06 $\pm$ 0.08 (4) & -0.15 $\pm$ 0.15 (4) & -0.03 $\pm$ 0.03 (2) \\
HD 35854 & 4901 & -0.04 $\pm$ 0.03 & 0.08 $\pm$ 0.16 (2) & 0.10 $\pm$ 0.04 (4) & -0.14 $\pm$ 0.08 (4) & \multicolumn{1}{c}{---} \\
HD 37986 & 5503 & 0.30 $\pm$ 0.04 & -0.07 $\pm$ 0.06 (5) & -0.06 $\pm$ 0.08 (4) & -0.08 $\pm$ 0.08 (4) & 0.07 $\pm$ 0.07 (2) \\
HD 39213 & 5473 & 0.45 $\pm$ 0.05 & -0.15 $\pm$ 0.08 (5) & 0.05 $\pm$ 0.08 (4) & -0.17 $\pm$ 0.06 (3) & -0.12 $\pm$ 0.05 (2) \\
HD 39715 & 4741 & -0.10 $\pm$ 0.03 & 0.09 $\pm$ 0.18 (2) & 0.09 $\pm$ 0.07 (4) & 0.01 $\pm$ 0.08 (2) & -0.02 $\pm$ 0.00 (2) \\
HD 43848 & 5161 & 0.43 $\pm$ 0.03 & -0.10 $\pm$ 0.07 (5) & 0.04 $\pm$ 0.10 (4) & -0.31 $\pm$ 0.11 (4) & -0.10 $\pm$ 0.03 (2) \\
HD 77338 & 5346 & 0.41 $\pm$ 0.04 & -0.14 $\pm$ 0.11 (5) & 0.05 $\pm$ 0.12 (4) & -0.19 $\pm$ 0.12 (3) & -0.10 $\pm$ 0.00 (2) \\
HD 81767 & 4966 & 0.22 $\pm$ 0.04 & -0.16 $\pm$ 0.09 (3) & 0.01 $\pm$ 0.10 (4) & -0.19 $\pm$ 0.06 (4) & -0.19 $\pm$ 0.05 (2) \\
HD 82943 & 5929 & 0.23 $\pm$ 0.04 & -0.06 $\pm$ 0.06 (5) & 0.06 $\pm$ 0.10 (4) & -0.16 $\pm$ 0.12 (3) & -0.11 $\pm$ 0.14 (2) \\
HD 86065 & 4938 & 0.09 $\pm$ 0.04 & -0.00 $\pm$ 0.18 (3) & 0.09 $\pm$ 0.06 (4) & -0.07 $\pm$ 0.03 (3) & -0.01 $\pm$ 0.16 (2) \\
HD 86249 & 4957 & 0.12 $\pm$ 0.04 & 0.01 $\pm$ 0.16 (3) & 0.09 $\pm$ 0.06 (4) & -0.11 $\pm$ 0.08 (3) & -0.21 $\pm$ 0.02 (2) \\
HD 87007 & 5282 & 0.29 $\pm$ 0.05 & -0.04 $\pm$ 0.13 (5) & -0.05 $\pm$ 0.05 (4) & -0.18 $\pm$ 0.13 (4) & 0.01 $\pm$ 0.08 (2) \\
HD 90054 & 6047 & 0.29 $\pm$ 0.05 & -0.16 $\pm$ 0.09 (4) & -0.07 $\pm$ 0.10 (4) & -0.29 $\pm$ 0.04 (3) & 0.02 $\pm$ 0.05 (2) \\
HD 91585 & 5144 & 0.25 $\pm$ 0.04 & -0.09 $\pm$ 0.06 (5) & -0.01 $\pm$ 0.07 (4) & -0.14 $\pm$ 0.03 (3) & 0.03 $\pm$ 0.11 (2) \\
HD 91669 & 5278 & 0.44 $\pm$ 0.04 & -0.11 $\pm$ 0.09 (5) & -0.07 $\pm$ 0.13 (4) & -0.25 $\pm$ 0.11 (3) & -0.11 $\pm$ 0.19 (2) \\
HD 93800 & 5181 & 0.49 $\pm$ 0.04 & -0.03 $\pm$ 0.08 (5) & 0.03 $\pm$ 0.10 (4) & -0.29 $\pm$ 0.15 (3) & -0.09 $\pm$ 0.03 (2) \\
HD 94374 & 5000 & -0.10 $\pm$ 0.03 & -0.11 $\pm$ --- (1) & 0.12 $\pm$ 0.05 (4) & 0.28 $\pm$ --- (1) & \multicolumn{1}{c}{---} \\
HD 95338 & 5175 & 0.21 $\pm$ 0.04 & -0.12 $\pm$ 0.06 (5) & -0.04 $\pm$ 0.07 (4) & -0.20 $\pm$ 0.09 (2) & -0.04 $\pm$ 0.01 (2) \\
HD 104212 & 5833 & 0.13 $\pm$ 0.05 & -0.18 $\pm$ 0.08 (5) & 0.04 $\pm$ 0.06 (4) & -0.24 $\pm$ 0.08 (4) & -0.13 $\pm$ 0.04 (2) \\
HD 107509 & 6102 & 0.03 $\pm$ 0.05 & -0.14 $\pm$ 0.05 (5) & -0.04 $\pm$ 0.06 (4) & -0.12 $\pm$ 0.05 (4) & -0.08 $\pm$ 0.12 (2) \\
HD 120329 & 5617 & 0.31 $\pm$ 0.06 & -0.16 $\pm$ 0.05 (5) & -0.05 $\pm$ 0.07 (4) & -0.14 $\pm$ 0.10 (3) & -0.11 $\pm$ 0.09 (2) \\
HD 143102 & 5547 & 0.16 $\pm$ 0.05 & -0.17 $\pm$ 0.05 (5) & -0.02 $\pm$ 0.05 (4) & -0.21 $\pm$ 0.05 (4) & -0.11 $\pm$ 0.06 (2) \\
HD 148530 & 5392 & 0.03 $\pm$ 0.05 & -0.12 $\pm$ 0.06 (5) & 0.02 $\pm$ 0.06 (4) & -0.05 $\pm$ 0.15 (3) & 0.09 $\pm$ 0.19 (2) \\
HD 149256 & 5406 & 0.34 $\pm$ 0.05 & -0.18 $\pm$ 0.05 (5) & -0.09 $\pm$ 0.06 (4) & -0.23 $\pm$ 0.06 (4) & 0.03 $\pm$ 0.02 (2) \\
HD 149606 & 4976 & 0.20 $\pm$ 0.04 & -0.11 $\pm$ --- (1) & 0.05 $\pm$ 0.10 (4) & -0.17 $\pm$ 0.13 (3) & -0.16 $\pm$ 0.14 (2) \\
HD 149933 & 5486 & 0.13 $\pm$ 0.06 & -0.06 $\pm$ 0.12 (4) & -0.06 $\pm$ 0.08 (4) & 0.23 $\pm$ 0.03 (2) & 0.29 $\pm$ 0.02 (2) \\
HD 165920 & 5336 & 0.36 $\pm$ 0.04 & -0.10 $\pm$ 0.07 (5) & 0.01 $\pm$ 0.08 (4) & -0.16 $\pm$ 0.04 (4) & -0.11 $\pm$ 0.08 (2) \\
HD 168714 & 5686 & 0.48 $\pm$ 0.05 & -0.18 $\pm$ 0.06 (5) & -0.00 $\pm$ 0.08 (4) & -0.18 $\pm$ 0.07 (3) & -0.13 $\pm$ 0.16 (2) \\
HD 171999 & 5304 & 0.29 $\pm$ 0.04 & -0.18 $\pm$ 0.09 (5) & -0.02 $\pm$ 0.08 (4) & -0.13 $\pm$ 0.10 (4) & -0.09 $\pm$ 0.00 (2) \\
HD 177374 & 5044 & -0.08 $\pm$ 0.03 & 0.03 $\pm$ 0.17 (2) & -0.05 $\pm$ 0.05 (4) & -0.03 $\pm$ 0.02 (2) & \multicolumn{1}{c}{---} \\
HD 179764 & 5323 & -0.05 $\pm$ 0.04 & -0.11 $\pm$ 0.07 (5) & -0.08 $\pm$ 0.05 (4) & -0.08 $\pm$ 0.12 (4) & 0.11 $\pm$ 0.01 (2) \\
HD 180865 & 5218 & 0.21 $\pm$ 0.04 & -0.15 $\pm$ 0.03 (4) & 0.06 $\pm$ 0.08 (4) & -0.12 $\pm$ 0.10 (4) & 0.02 $\pm$ 0.02 (2) \\
HD 181234 & 5311 & 0.45 $\pm$ 0.04 & -0.09 $\pm$ 0.12 (5) & 0.08 $\pm$ 0.11 (4) & -0.18 $\pm$ 0.08 (3) & -0.04 $\pm$ 0.12 (2) \\
HD 181433 & 4902 & 0.41 $\pm$ 0.04 & 0.02 $\pm$ 0.18 (3) & -0.09 $\pm$ 0.10 (4) & -0.17 $\pm$ 0.09 (2) & -0.08 $\pm$ 0.16 (2) \\
HD 182572 & 5700 & 0.48 $\pm$ 0.03 & -0.15 $\pm$ 0.03 (5) & 0.03 $\pm$ 0.07 (4) & -0.19 $\pm$ 0.08 (4) & -0.11 $\pm$ 0.12 (2) \\
HD 196397 & 5404 & 0.38 $\pm$ 0.05 & -0.16 $\pm$ 0.17 (5) & 0.03 $\pm$ 0.08 (4) & -0.19 $\pm$ 0.06 (4) & -0.04 $\pm$ --- (1) \\
HD 196794 & 5094 & 0.06 $\pm$ 0.04 & 0.14 $\pm$ 0.05 (3) & 0.24 $\pm$ 0.05 (4) & -0.01 $\pm$ 0.12 (4) & 0.07 $\pm$ --- (1) \\
HD 197921 & 4866 & 0.22 $\pm$ 0.04 & -0.22 $\pm$ 0.08 (3) & 0.00 $\pm$ 0.10 (4) & -0.16 $\pm$ 0.07 (4) & -0.03 $\pm$ 0.08 (2) \\
HD 201237 & 4829 & 0.00 $\pm$ 0.04 & -0.22 $\pm$ 0.06 (2) & -0.05 $\pm$ 0.11 (4) & -0.27 $\pm$ 0.09 (3) & -0.10 $\pm$ 0.14 (2) \\
HD 209721 & 5503 & 0.28 $\pm$ 0.04 & -0.15 $\pm$ 0.13 (5) & -0.15 $\pm$ 0.10 (4) & -0.18 $\pm$ 0.11 (4) & -0.03 $\pm$ 0.01 (2) \\
HD 211706 & 6017 & 0.09 $\pm$ 0.07 & -0.10 $\pm$ 0.05 (4) & 0.00 $\pm$ 0.05 (4) & -0.16 $\pm$ 0.06 (2) & 0.04 $\pm$ 0.03 (2) \\
HD 213996 & 5314 & 0.33 $\pm$ 0.04 & -0.12 $\pm$ 0.08 (5) & -0.06 $\pm$ 0.05 (4) & -0.05 $\pm$ 0.12 (3) & -0.12 $\pm$ 0.10 (2) \\
HD 214463 & 5122 & 0.34 $\pm$ 0.04 & -0.10 $\pm$ 0.13 (4) & -0.09 $\pm$ 0.11 (4) & -0.21 $\pm$ 0.08 (4) & 0.05 $\pm$ 0.03 (2) \\
HD 218566 & 4849 & 0.28 $\pm$ 0.14 & -0.11 $\pm$ 0.07 (3) & 0.04 $\pm$ 0.12 (4) & -0.20 $\pm$ 0.13 (4) & -0.04 $\pm$ 0.07 (2) \\
HD 218750 & 5134 & 0.17 $\pm$ 0.04 & -0.18 $\pm$ 0.10 (4) & 0.05 $\pm$ 0.08 (4) & -0.21 $\pm$ 0.07 (2) & 0.11 $\pm$ --- (1) \\
HD 221313 & 5153 & 0.31 $\pm$ 0.05 & -0.15 $\pm$ 0.08 (5) & 0.02 $\pm$ 0.09 (4) & -0.14 $\pm$ 0.11 (3) & -0.10 $\pm$ --- (1) \\
HD 221974 & 5213 & 0.46 $\pm$ 0.04 & -0.01 $\pm$ 0.06 (4) & 0.03 $\pm$ 0.10 (4) & -0.12 $\pm$ 0.16 (3) & -0.17 $\pm$ 0.12 (2) \\
HD 224230 & 4873 & -0.08 $\pm$ 0.04 & 0.04 $\pm$ 0.27 (2) & 0.03 $\pm$ 0.07 (4) & -0.07 $\pm$ 0.02 (2) & 0.15 $\pm$ 0.09 (2) \\
HD 224383 & 5760 & -0.10 $\pm$ 0.05 & -0.06 $\pm$ 0.12 (5) & -0.04 $\pm$ 0.05 (4) & -0.08 $\pm$ 0.13 (4) & 0.13 $\pm$ 0.09 (2) \\
\label{Tab_final_abonds}
\end{longtable}
\twocolumn
\clearpage
\bibliographystyle{aa}
\input{smr_II.bbl}
\end{document}
\section{Introduction}
A substantial surge in the number of user equipments (UEs) utilizing mobile services is expected in the near future. It is estimated that the number of UEs will reach 4 billion by 2021 \cite{UENumber}. As the number of UEs connected to a terrestrial base station (BS) increases, the quality of service (QoS) per user tends to decrease. The BS may even go out of service, and therefore, UEs will not be able to use their mobile services. The current solution for such situations is capacity injection, such as deploying a mobile BS \cite{MobileBS}. The service provider deploys the mobile BS in {\color{black}a} crowded area, which eventually increases the mobile network's average QoS.
A radio access node on-board unmanned aerial vehicle (UxNB) is a radio access node providing service to the UEs, deployed on an unmanned aerial vehicle (UAV) according to 3GPP TS 22-125 \cite{3GPP22125}. It can connect to the core network like a terrestrial base station, i.e., a next-generation NodeB (gNB) in 5G New Radio (NR). The research community has already taken an interest in UxNBs as a means to enhance mobile network coverage. They can be exploited in several scenarios, such as emergencies and high-density areas, and a UxNB can be deployed in an area quickly, without terrain constraints \cite{3GPP22829}.
Security in mobile networks is often focused on the robustness of UE authentication and the encryption of over-the-air traffic. Only the air part of the mobile traffic is encrypted by the UE and sent to the BS; the traffic is decrypted in the BS and sent to the core network. Traffic within the core network is transmitted in plaintext, and traffic between base stations is therefore sent in plaintext as well. For example, during a handover operation, {\color{black}the} serving-BS (\textit{s-BS}) sends handover credentials to the target-BS (\textit{t-BS}) in plaintext. Moreover, there is no authentication process between BSs before this transmission \cite{BSComm}. With fake BS attacks, attackers can capture personal data and track the location of UEs. The main reason these attacks can be carried out is the lack of authentication within the core network.
Fake BS attacks are carried out for International Mobile Subscriber Identity (IMSI) catching. A fake BS sends strong identity request signals to the UEs around it, and the UEs respond by sending their IMSI information. Attackers who have captured IMSI information can authorize themselves through the nearest BS; the lack of authentication between BSs allows this. Thus, they can track all data and the location of the relevant user. Although a professional fake BS costs between \$68,000 and \$134,000, a homemade fake BS can be built for \$1500 \cite{IMSI}. With the widespread use of UxNB technology in the future, this cost may decrease even further, and UE data can be stolen in a short time thanks to its fast deployment feature. There must inevitably be an authentication scheme between {\color{black}the} next-generation UxNBs and terrestrial BSs. Also, in order to encrypt the communication between {\color{black}two BSs}, both {\color{black}BSs} must obtain the same encryption key as a result of this authentication.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{usecase.png}
\caption{An exemplary use case for capacity injection and group handover. \\ The terrestrial BS cannot provide the required QoS, and the UxNBs are sent to the game for capacity injection. The UxNBs should be authenticated by the terrestrial BS, and several handovers should be performed between the terrestrial BS and the UxNBs.}
\label{fig:usecase}
\end{figure}
As a use case, consider a football game where there is an increase in the number of users in a particular area (a stadium) in a specific period ($90$ min.). During the game, only one terrestrial BS may provide service to all UEs. It will be more beneficial to use UxNBs to increase the BS's capacity, as shown in Figure \ref{fig:usecase}. While capacity injection is the first issue in such dense deployment scenarios, the handover among the overlay cells after capacity injection is the second issue. The use of several BSs within a particular area will result in overlay cells. The terrestrial BS providing service to all UEs in the stadium will delegate some UEs to the UxNBs to reduce its burden. In the meantime, there will be many handover operations. In the currently used LTE \cite{3GPP36300} and 5G standards \cite{3GPP501}, these handover operations should be done {\color{black}sequentially, i.e., one by one. Yet, due} to the limitations of the UxNBs, such as weight and battery life, their flying time will be at most one hour \cite{3GPP22829}. Considering that a football game lasts 90 minutes, {\color{black}a new UxNB will take over from the previous UxNB at least once}. This will cause an increase in the number of handover operations to be made. In this study, it is proposed that UEs should be transferred from the terrestrial BS to the UxNB not individually but as a group in order to make this handover process faster.
{\color{black}Group handovers usually take place in metropolitan cities, and they occur} when multiple UEs need to be connected to another BS at the same time. A group of people inside a bus is a good example of group handover. These UEs will receive a stronger signal from another BS at the same time as the signal strength from the \textit{s-BS} decreases. Since it is the UE that chooses the BS to be served by, many UEs will choose the new BS at the same time. This will cause a bandwidth reduction in the new BS. At this point, a group handover problem will occur \cite{GroupHandover}. Handing over all UEs one by one, as in the standards, will cause service disruption and energy loss. The same problem is valid for the stadium example. When the UxNBs become active, many UEs will receive stronger signals from the UxNBs and will request handover. An effective group handover solution is required in this scenario as well.
In light of these challenges, our main contributions are listed as;
\begin{itemize}
\item We propose a fast and energy-efficient handover scheme for high-density areas where UxNBs can be exploited efficiently to provide service to the UEs. The current handover solution in both LTE \cite{3GPP36300} and 5G NR \cite{3GPP501} requires sharing UE data from the \textit{s-BS} to the \textit{t-BS}. However, there is no data-sharing between BSs in our proposed method, which makes it a time- and energy-saving solution. Besides, confirming UEs at the \textit{t-BS} as a group decreases the handover time.
\item The fake BS attack is a security issue for current mobile networks, and using UxNBs creates the same problem: any intruder can impersonate a UxNB and try to control UEs. Our proposal offers an authentication solution between BSs. The UxNB can obtain a public and private key pair from the core network before becoming active. When the UxNB arrives over the high-density area, the \textit{s-BS} can authenticate the UxNB easily by using the public key of the \textit{t-BS}, as explained in the group handover framework section.
\item The \textit{s-BS} shares the group secret function with the \textit{t-BS}. By using this function, the \textit{t-BS} can authenticate the UEs easily, and as a group, in the group handover phase. Thanks to this private function, there is no phase for control packet transmissions between BSs: while the \textit{s-BS} sends data for each UE to the \textit{t-BS} in 3GPP Release 16 \cite{3GPP501}, no data-sharing between BSs is required in our proposed method. Therefore, the number of control packet transmissions between BSs is zero in our method.
\item In all authentication solutions for mobile networks, the UE must have a private key. In the proposed method, the UE turns its private key into a public key with an elliptic curve multiplication for the handover operation; the handover is carried out once the \textit{t-BS} verifies the public key. These private keys must be distributed to each UE before authentication. In the proposed method, it is possible to use the subscription permanent identifier (SUPI) belonging to each UE as its private key, which eliminates the need for private key distribution before authentication.
\begin{table}[h!]
\caption{Abbreviations}
\label{table:abbreviation}
\centering
\begin{tabular}{l l}
\hline
\textbf{Abbreviation} & \textbf{Description} \\
\hline
3GPP & 3rd Generation Partnership Project \\
BS & Base Station \\
RAN & Radio Area Network \\
UE & User Equipment \\
NR & New Radio\\
QoS & Quality of Service \\
UAV & Unmanned Aerial Vehicle \\
s-BS & Serving Base Station \\
t-BS & Target Base Station \\
LTE & Long Term Evolution \\
MME & Mobility Management Entity \\
MR & Measurement Report \\
AMF & Access and Management Function \\
SGW & Serving Gateway \\
UPF & User Plane Function \\
SUPI & Subscription Permanent Identifier \\
ECDLP & Elliptic Curve Discrete Logarithm Problem \\
SUCI & Subscription Concealed Identifier \\
UDM & User Data Management \\
AUSF & Authentication Server Function \\
SEAF & Security Anchor Function \\
MAC & Message Authentication Code \\
ENC & Encryption \\
gNB & Next Generation NodeB \\
\hline
\end{tabular}
\end{table}
\end{itemize}
\hfill \break
This paper is organized as follows. The next section provides an overview of the handover process in the 3GPP standards for both LTE and 5G NR and of existing handover methods. In Section III, the preliminaries of our proposal are explained in detail. The system and threat models are given in Section IV. Our proposed approach for capacity injection and group handover is presented in Section V. The security and performance evaluations are provided in Sections VI and VII, respectively. The study is completed by a conclusion in Section VIII. A list of abbreviations used throughout the paper is given in Table \ref{table:abbreviation}.
\section{Literature Overview and Related Background}
We begin by explaining the related work on mobile network handover, with detailed information about handover both in LTE and in 5G NR. Although 5G NR will be the near-future standard for mobile networks, LTE is still used in most countries. Ultra-densification is one of the new approaches in 5G NR: the coverage area of each BS shrinks, and the number of users served by each BS is reduced. Therefore, the frequency of handovers increases due to densification in 5G \cite{LTEtoNR}.
\subsection{UAVs in 3GPP Standards}
The use of UxNBs to increase the coverage area is specified in the 3GPP standards. A UxNB can connect to the 5G core network like a terrestrial BS via a wireless backhaul link \cite{3GPP22829}. Thanks to their fast deployment and broad coverage capabilities, UxNBs can be used in various scenarios, such as emergencies, temporary coverage for UEs, and hot-spot events \cite{3GPP22829}. UxNBs should be authenticated by the core network before operating as BSs. One of the requirements for running a BS on a UAV is to keep the energy consumption at the lowest possible level, because a UAV has limited energy.
The use of a single UAV is limited by its airborne time and energy constraints. For example, using a single drone in delivery services means waiting for that vehicle to come back to the base. For this reason, UAVs should be used as a swarm. The essential requirement for a swarm of UAVs is group management \cite{3GPP22829}. Group management requires group authentication and secure communication inside a group, as detailed later.
\subsection{Handover Management in LTE}
There are two types of handover scenario in LTE, which differ according to whether the mobility management entity (MME) changes \cite{LTEtoNR}. The inter-BS, intra-MME scenario is described step by step in this section: the UE disconnects from the serving BS (\textit{s-BS}) and connects to the target BS (\textit{t-BS}) without changing the MME. We choose this scenario because the number of communications is smaller than in the other scenarios.
The handover steps are shown in Figure \ref{fig:ltehandoff}{\color{black} and listed below:}
\begin{enumerate}
\item The UE measurement procedure is configured by the \textit{s-BS}.
\item The UE sends a measurement report (MR) to the \textit{s-BS}.
\item According to the report, the \textit{s-BS} makes a handover decision.
\item The \textit{s-BS} sends a handover request to the \textit{t-BS}.
\item The \textit{t-BS} sends an acknowledgment to the \textit{s-BS} according to its resources.
\item The \textit{t-BS} informs the UE for handover with necessary information.
\item The UE attaches to the target cell.
\item The \textit{t-BS} sends uplink allocation and timing information to the UE.
\item The \textit{t-BS} informs the MME for UE cell change.
\item MME informs the serving gateway (SGW) for UE.
\item SGW updates the path for UE.
\item MME informs the \textit{t-BS} for path update.
\item The \textit{t-BS} informs the \textit{s-BS} for the completion of the handover.
\end{enumerate}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{LTEhandoff.png}
\caption{Handover in LTE and 5G NR. In LTE, handover begins with the \textit{t-BS} command to UE, whereas handover begins with the \textit{s-BS} command in 5G.}
\label{fig:ltehandoff}
\end{figure}
\subsection{Handover Management in 5G NR}
The handover procedure in 5G NR is almost the same as in LTE, with minor changes. The access and mobility management function (AMF) executes the duties of the MME, while the user plane function (UPF) replaces the SGW.
The handover steps are as follows:
\begin{enumerate}
\item The UE measurement procedure is configured by the \textit{s-BS}.
\item The UE sends MR to the \textit{s-BS}.
\item According to the report, the \textit{s-BS} makes a handover decision.
\item The \textit{s-BS} sends a handover request to the \textit{t-BS}.
\item The \textit{t-BS} sends an acknowledgment to the \textit{s-BS} according to its resources.
\item The \textit{s-BS} sends a handover command to the UE.
\item The UE attaches to the target cell.
\item The \textit{t-BS} sends uplink allocation and timing information to the UE.
\item The \textit{t-BS} informs the AMF for UE cell change.
\item AMF informs UPF for UE.
\item UPF updates the path for UE.
\item AMF informs the \textit{t-BS} for path update.
\item The \textit{t-BS} informs the \textit{s-BS} for the completion of the handover.
\end{enumerate}
\subsection{Key Hierarchy in 5G NR}
It is crucial to understand the key generation process in 5G NR in order to explain the key exchange between base stations in the handover phase. The key generation steps are shown in Figure \ref{fig:keyhierarchy}. The main key ($K_{AMF}$) is known by the AMF and the UE \cite{vulnerabilities}. The key for the BS ($K_{gNB}$) and the integrity and encryption keys ($K_{AMF-UE-INT}$, $K_{AMF-UE-ENC}$) for secure communication between the AMF and the UE are derived from $K_{AMF}$. The AMF sends $K_{gNB}$ to the BS, and the UE can compute the same keys since it holds the main key. Once the BS has $K_{gNB}$, it can compute the integrity and encryption keys {\color{black}($K_{gNB-UE-INT}$ and $K_{gNB-UE-ENC}$, respectively)} for secure communication between the BS and the UE.
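The derivation chain described above can be sketched in a few lines of Python. This is a minimal sketch only: the standardized 5G KDF is specified in TS 33.501 and takes additional inputs, so the HMAC-SHA256 construction, the string labels, and the example master key below are illustrative assumptions rather than the actual standardized algorithm.

```python
import hashlib
import hmac

def kdf(key: bytes, label: str) -> bytes:
    """Simplified key derivation: HMAC-SHA256 of a label under the parent
    key.  (The standardized 5G KDF takes further inputs; see TS 33.501.)"""
    return hmac.new(key, label.encode(), hashlib.sha256).digest()

# Both the AMF and the UE hold K_AMF, so each side derives the same sub-keys.
k_amf = hashlib.sha256(b"example shared master key").digest()

k_gnb        = kdf(k_amf, "K_gNB")        # sent by the AMF to the BS
k_amf_ue_int = kdf(k_amf, "AMF-UE-INT")   # AMF <-> UE integrity key
k_amf_ue_enc = kdf(k_amf, "AMF-UE-ENC")   # AMF <-> UE encryption key

# The BS (and the UE, independently) derive the radio-link keys from K_gNB.
k_gnb_ue_int = kdf(k_gnb, "gNB-UE-INT")
k_gnb_ue_enc = kdf(k_gnb, "gNB-UE-ENC")

# Since the UE also knows K_AMF, it reproduces K_gNB and ends up with the
# same integrity/encryption keys without any key being transmitted to it.
assert kdf(kdf(k_amf, "K_gNB"), "gNB-UE-ENC") == k_gnb_ue_enc
```

The point of the sketch is the tree structure: every key is reproducible by any party that holds the parent key, so only $K_{gNB}$ ever has to be sent to the BS.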
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{keyhierarchy2.png}
\caption{Key generation in 5G NR. $K_{gNB}$ is derived from $K_{AMF}$. Both the AMF and the UE have $K_{AMF}$. The AMF sends $K_{gNB}$ to the BS, and the UE can create $K_{gNB}$ for secure communication with the BS. The encryption and integrity keys are derived from the main keys $K_{AMF}$ and $K_{gNB}$.}
\label{fig:keyhierarchy}
\end{figure}
\subsection{Handover Key Exchange in 5G NR}
Once the handover decision is taken by the \textit{s-BS}, the BS key ($K_{gNB}$) is shared with the \textit{t-BS} \cite{vulnerabilities}. The \textit{t-BS} computes the next BS key $K_{gNB*}$ by using $K_{gNB}$ and other security parameters, which are the same for all BSs connected to the same AMF. The new integrity and encryption keys for secure communication between the UE and the \textit{t-BS} are derived from $K_{gNB*}$. The UE can also compute $K_{gNB*}$, since it has $K_{gNB}$ and the security parameters. The UE can then compute the new encryption keys and message authentication codes (MAC) for further communication.
\subsection{Security Exploits in 5G NR}
Most of the vulnerabilities in LTE-A, such as IMSI catching, are solved in 5G NR. However, there are still ongoing security issues in 5G NR \cite{exploits}.
The IMSI, which is the UE's private identifier, is sent to BSs in cleartext in the LTE-A standards. The UE's private identifier in 5G NR (the SUPI) is encrypted using the public key of the BS. Unfortunately, with a fake BS attack, attackers can still obtain the UE's private identifier: since the BS provides the public key to the UE, an attacker may impersonate the BS and deliver a fake public key to the UE.
According to 3GPP Release 16 \cite{3GPP33846}, attackers can perform SUPI guessing attacks: an attacker generates a random SUPI and encrypts it with the BS public key. If the attacker obtains a valid response from the BS, the SUPI can be assumed to be a real one. By trying all possibilities, valid SUPIs for a dedicated network can be obtained.
Authentication complexity is another security issue for the 5G handover process. The \textit{t-BS} must establish a connection with the AMF for each UE in order to update the current handover state. A BS can be located far from the AMF and can lose its connection with the AMF. The high mobility assumed in 5G can further increase the authentication complexity \cite{5GHandoffVulnerabilities}.
Fake BSs can also perform a desynchronization attack. BSs should remain synchronized to continue the handover process; the next-hop chaining counter (NCC) is the parameter that must be increased for each handover. A fake BS can perform a fake handover with a \textit{t-BS} to corrupt the counter and obtain security keys.
\subsection{Handover Studies for UxNBs}
Since the usage of UxNBs is a newly emerging topic, there are not many studies on their security aspects. We summarize the handover solutions in the literature below.
In \cite{Wifibackhaul}, the authors investigate the scenario in which UEs move from one UxNB to another. The difficulty of using the $X_2$ logical interface as in LTE is explained in the study. The authors model the LTE handover over a WiFi link and analyze the result of using a WiFi backhaul for UxNBs. Their result is that the WiFi backhaul can be a solution for a small number of UxNBs, provided the distance between UxNBs is at most 5 km.
The studies in \cite{handoverdecision1,handoverdecision2} are concerned with choosing the best time for handover, with the aim of decreasing handover latency; the security aspect of the handover process is not taken into account. A deep learning solution is proposed in \cite{handoverdecision1} to decide the best handover time, while an optimal coverage decision algorithm is exploited in \cite{handoverdecision2}.
Some studies investigate the security of the link between a UAV and the control station and the handover key management in LTE-based aerial vehicle control networks \cite{UAVControl1, UAVControl2}. According to their simulation results, handovers of UAVs are mostly performed between BSs; inter-MME handovers of UAVs are expected to happen only rarely in Korea \cite{UAVControl1}. An IPSec solution provides secure communication between control stations and BSs. The studies focus mostly on the wireless channel between the BS and the UAV. The security key between the UAV and the BS is derived from a forward root key and a backward root key. The UAV first authenticates itself to the MME by using the IMSI key; the MME then shares the forward and backward keys with the UAV and the BS, which can construct the security key and begin to communicate securely. In order to hand over a UAV to a new BS, the \textit{s-BS} shares the forward and backward keys with the \textit{t-BS}, which can easily construct the security key after receiving the root keys.
One gap in the previous studies is the lack of an authentication solution between the UxNB and the terrestrial BS: an intruder could impersonate a UxNB and join the system without authentication. The scalability problem for terrestrial BSs is also not addressed; in high-density areas, BSs can drop requests from UEs, and scalability is likewise an issue for handover in such areas. These points are not taken into consideration in the previous studies. Our proposed scheme is based on capacity injection for densely populated areas and group handover, and it addresses the authentication issue between BSs. Besides, the scheme is a promising solution for handover between a terrestrial BS and a UxNB, using less energy and time and therefore scaling better. {\color{black}This study builds on our previous work \cite{arXiv1}, in which a group authentication scheme was proposed to authenticate the users in a group at the same time. The group handover method is an application of that scheme: whereas the group authentication scheme verifies all users in a group for the first time, in the group handover solution a BS that has already authenticated the UEs in its coverage area by the group authentication scheme hands over a sub-group of these UEs to the UxNBs arriving in the area for capacity injection.}
\section{PRELIMINARIES}
This section provides a summary of the building blocks employed in the proposed algorithm. Regarding notation, we denote by $G$ a cyclic group and by $P$ a generator of that group. The encryption algorithm is denoted by $E$, the decryption algorithm by $D$, and the threshold value by $t$. The group key is represented by $s$, and $H(\cdot)$ denotes the hash function. The secret polynomial is denoted by $f(x)$, the public keys of the UEs by $x_i$ and $f(x_i)P$, and the private key of a UE by $f(x_i)$. $ID_i$ is the identification number of each UE, and $m$ is the number of UEs in a group.
Our proposed scheme is built on the assumption that the terrestrial BS and the UEs form a group and perform a group authentication as in \cite{arXiv1}. In this section, we give the details of the group authentication scheme and of the elliptic curve discrete logarithm problem (ECDLP), which is used extensively in the proposal.
The group authentication scheme is based on a secret sharing scheme and the ECDLP, and it consists of two steps. In the initialization phase, the group manager (GM), which has more resources than the other group members, selects the initialization parameters: the elliptic curve parameters and a secret and public key for each user. The GM determines these parameters and shares them with the relevant group members. The confirmation phase checks whether the group members are legitimate. These phases are stated below.
\begin{center}
{\bf The Initialization Phase}
\end{center}
The GM selects a cyclic group $G$ and a generator $P$ for $G$. The GM selects encryption and decryption algorithms $E=Encryption(\cdot)$ and $D=Decryption(\cdot)$ and a hash function $H(\cdot)$. A polynomial of degree $t-1$ is chosen by the GM, and its constant term is the group key $s$. The GM selects one public key $x_i$ and one private key $f(x_i)$ for each user in the group $U$, where each user is denoted by $U_i$ for $i=1,\ldots,n$. The GM computes $Q=s\times P$, makes $P, Q, E, D, H(s), H(\cdot), x_i$ public, and shares $f(x_i)$ only with user $U_i$ for $i=1,\ldots,n$.
\begin{center}
{\bf The Confirmation Phase}
\end{center}
The steps of the confirmation phase are performed as in Algorithm 1. Each user computes $f(x_i)\times P$ and sends $f(x_i)\times P\,\|\,ID_i$ to the GM and the other users ($ID_i$ is the identification number of the user, and the $\|$ symbol denotes the concatenation of two values). If the GM carries out the verification, it computes $f(x_i)\times P$ for each user and checks whether the values are valid. If the GM is not included in the verification process, any user in the group computes
\begin{equation}
C_i=\left(\prod^{m}_{r=1, r\neq i}\dfrac{-x_r}{x_i-x_r}\right)f(x_i)\times P
\end{equation}
for each user ($m$ denotes the number of users in the group and must be equal to or larger than $t$). The user verifies whether
\begin{equation}
\sum_{i=1}^{m}C_i {\stackrel{?}{=}} Q
\end{equation}
holds. If $(2)$ holds, the authentication is complete. Otherwise, the process is repeated from the initialization phase.
\begin{algorithm}
{Compute $f(x_i)\times P$, and share $f(x_i)\times P \| ID_i$}\newline\\
\If{GM verifies the authentication}
{
GM computes $f(x_i)\times P$.\newline\\
\If{All values are valid}
{
Print `Authentication is complete.'
}
\Else{
Repeat.
}
}
\Else{
{User computes $C_i$=$f(x_i)\times P{\overset{m}{\underset{r=1, r\neq i}{{\displaystyle\prod}}}(-x_r/(x_i-x_r))}$.}\newline\\
\If{${\overset{m}{\underset{i=1}{{\displaystyle\sum}}}C_i}$ = Q}
{
Print `Authentication is complete'.
}
\Else{
Repeat.
}
}
\BlankLine
\caption{Confirmation Phase}
\end{algorithm}
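The initialization and confirmation phases above can be sketched end to end in Python. As an illustrative stand-in for the elliptic curve group, the sketch uses a prime-order subgroup of $\mathbb{Z}_p^*$, so that the point $f(x_i)\times P$ becomes $g^{f(x_i)} \bmod p$ and the classical discrete logarithm assumption plays the role of the ECDLP; all parameters are toy values chosen for readability, not secure choices.

```python
import random

# Toy stand-in for the elliptic curve group: scalar multiplication
# f(x_i) x P becomes exponentiation g**f(x_i) mod p, with g generating a
# subgroup of prime order q so the Lagrange coefficients are invertible.
p, q, g = 2879, 1439, 4          # p = 2q + 1 with q prime; g has order q

t = 3                            # threshold: the polynomial has degree t - 1
random.seed(7)
coeffs = [random.randrange(1, q) for _ in range(t)]
s = coeffs[0]                    # the group key is the constant term
Q = pow(g, s, p)                 # public commitment Q = s x P

def f(x):
    """The GM's private polynomial, evaluated mod the group order q."""
    return sum(c * x**k for k, c in enumerate(coeffs)) % q

# m >= t users publish x_i and the "point" f(x_i) x P; f(x_i) stays private.
xs = [2, 5, 7, 11]
tokens = [pow(g, f(x), p) for x in xs]

def lagrange_at_zero(i, xs):
    """Lagrange coefficient prod_{r != i} (-x_r) / (x_i - x_r) mod q."""
    lam = 1
    for r, xr in enumerate(xs):
        if r != i:
            lam = lam * (-xr) * pow(xs[i] - xr, -1, q) % q
    return lam

# Each C_i = (prod ...) f(x_i) x P; in the stand-in: token_i ** lambda_i.
C = [pow(tok, lagrange_at_zero(i, xs), p) for i, tok in enumerate(tokens)]

total = 1
for c in C:
    total = (total * c) % p      # the "sum" of the C_i points
assert total == Q                # equation (2) holds: group authenticated
```

The final assertion is exactly equation (2): the Lagrange combination of the published shares reconstructs $s$ "in the exponent", so the commitment $Q$ is matched without any share $f(x_i)$ being revealed.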
The subscription concealed identifier (SUCI) and the SUPI are the keys used in the initial authentication between the UE and the BS, according to 3GPP Release 16 \cite{3GPP501}. The SUPI is a globally unique 5G subscription identifier, allocated to each UE and stored in the UDM. The SUCI is a one-time concealed identifier generated and encrypted by the UE. These globally unique identifiers can be exploited both in the group authentication between the BS and the UEs and in our handover proposal. As explained in the group authentication scheme, each UE must have a private key $f(x_i)$ to participate in group authentication. In \cite{arXiv1}, these private keys are shared with the UEs in the initialization phase. For our proposal, the public values $x_i$ can be set equal to the encrypted version of the globally unique SUPI or to the SUCI.
The ECDLP is used extensively in our proposal. Given an elliptic curve over a finite field $F_p$ and two points $P, Q$ on the curve, finding an integer $k$ such that $Q = kP$ is defined as the elliptic curve discrete logarithm problem. We use the ECDLP to provide confidentiality of the private keys $f(x_i)$ by multiplying each one with a point $P$ to obtain a new point $f(x_i)P$.
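As an illustration of the scalar multiplication $Q=kP$ that underlies the ECDLP, the following is a textbook double-and-add routine on a toy short Weierstrass curve. The curve, the base point, and the private scalar below are illustrative assumptions; real deployments use standardized curves over large prime fields.

```python
# Toy short Weierstrass curve y^2 = x^3 + a*x + b over F_p (illustration
# only; real systems use standardized curves with large prime p).
p, a, b = 97, 2, 3
O = None                              # the point at infinity

def add(P, Q):
    """Elliptic curve point addition (affine coordinates)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                      # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication k x P."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

P0 = (3, 6)      # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
f_xi = 7         # a private key; publishing f_xi x P0 does not reveal f_xi
pub = mul(f_xi, P0)
```

Computing `pub` from `f_xi` takes $O(\log k)$ point operations, whereas recovering `f_xi` from `pub` is the ECDLP, which is believed to be infeasible at cryptographic curve sizes.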
\section{System and Threat Models}
In this section, we describe the capacity injection and group handover scenario in a densely populated area, where the proposed method can be used efficiently. In addition, the attacks that can be mounted against this scenario are described.
\subsection{System Model}
In the system model, a terrestrial BS provides service to a group of UEs in its coverage area. Due to the number of UEs and the high volume of traffic, the service provider sends UxNBs to the vicinity of the terrestrial BS. Both the terrestrial BS and the UxNBs have secure connectivity with the AMF. The UxNBs should be authenticated by the terrestrial BS; afterward, the UEs in the range of a UxNB should be handed over from the terrestrial BS to the UxNB. The UxNB could confirm the UEs by sending a handover request to the AMF for each UE; however, this can be time-consuming, and the AMF can collapse under the number of UEs. A fast and lightweight handover scheme is therefore needed.
In group authentication solutions, a certain number of devices with the same or different capacities come together to form a group in order to perform fast authentication. Within this group, the members are authenticated quickly as a group rather than one by one. In group authentication methods, the initial parameters must be determined by a GM, which generally has more resources and capacity than the other members. In our system model, the GM is the terrestrial BS that provides service to the UEs; the terrestrial BS and the UEs inside the stadium form a group.
\subsection{Threat Model}
The communication between the terrestrial BS and a UxNB is vulnerable to man-in-the-middle and replay attacks. A malicious UxNB can intercept the communication and authenticate itself using credentials taken from a legitimate UxNB. A secure authentication method is needed to distinguish legitimate from malicious UxNBs. A malicious UE can likewise perform a man-in-the-middle attack to authenticate itself.
Attackers can eavesdrop on the traffic between the UEs and the BS if the communication is not encrypted, and if any credential is sent in plaintext, attackers can even impersonate UEs. Besides, a fake BS attack can deceive real UEs in order to capture their signals. Overall, the signals must be transmitted as encrypted ciphertext, and source and target authentication must be performed.
\section{Capacity Injection and Group Handover Framework}
The secure capacity injection and group handover solution for the predetermined scenario is described in this section. A UxNB should be authenticated by the closest terrestrial BS so that the emerging UxNB can be assumed legitimate. After successful authentication, the UEs in the range of the UxNB must be handed over from the terrestrial BS to the UxNB. Before the authentication of the UxNB, we assume that the terrestrial BS and the UEs in its range have formed a group and carried out a group authentication as in \cite{arXiv1}. Consequently, the terrestrial BS has a function $f(x)$, which is private and known only by the terrestrial BS and the AMF. The AMF must have a table that stores the identity numbers of terrestrial BSs together with their corresponding private functions. Besides, after successful group authentication, each UE in the range of the terrestrial BS has a private value $f(x_i)$ and public values $(x_i, f(x_i)P)$, where $i$ is the identity of the UE and $P$ is the generator of the elliptic curve group, used to keep $f(x_i)$ private through the elliptic curve multiplication operation. The AMF stores the private values of the UEs in its database as well. To authenticate a newly emerging UxNB, the following work sequence is performed.
\subsection{Authentication of Emerging UxNB for Capacity Injection}
The AMF assigns to the UxNB a private key $f(x_i)$ and a public key pair ($x_i, f(x_i)P$) that has not been designated to any other UE, as in Algorithm 2. If an adversary can obtain more private keys $f(x_i)$ than the threshold value, the secret function $f(x)$ can be recovered and a legitimate UxNB can be impersonated. Once the UxNB is in the range of the terrestrial BS, it transmits the $x_i$ and $f(x_i)P$ pair to the terrestrial BS, which verifies the pair with the private function $f(x)$. Finally, if the UxNB is legitimate, $f(x)$ is shared with the UxNB: since both the terrestrial BS and the UxNB have the private key $f(x_i)$ of the UxNB, the function $f(x)$ can be encrypted with a symmetric-key encryption method and sent securely to the UxNB by the terrestrial BS.
After the authentication of the UxNB is accomplished, the BSs can communicate with each other confidentially using symmetric-key encryption, and group handover can be performed whenever needed. The UEs send their public values ($x_i, f(x_i)P$) to the UxNB, and the UxNB confirms the UEs by following the work sequence detailed below.
\begin{algorithm}
{Assign the private key $f(x_i)$ and public key $x_i, f(x_i)P$ to the UxNB. }\newline\\
{Transmit $x_i$ and $f(x_i)P$ pairs to terrestrial BS.}\newline\\
\If{$x_i$ and $f(x_i)P$ pairs are valid}
{
Send $f(x)$.
}
\Else{
{Not valid UxNB.}
}
\BlankLine
\caption{Authentication of Emerging UxNB for Capacity Injection
}
\end{algorithm}
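Algorithm 2 can be sketched as follows, again with a prime-order multiplicative subgroup standing in for the elliptic curve group. The hash-derived XOR keystream is only a stand-in for a proper symmetric cipher protecting $f(x)$, and all concrete numbers are illustrative assumptions.

```python
import hashlib

# Prime-order multiplicative subgroup standing in for the elliptic curve
# group: f(x_i) x P becomes g**f(x_i) mod p.  Toy parameters, not secure.
p, q, g = 2879, 1439, 4              # p = 2q + 1 with q prime; g has order q

coeffs = [815, 212, 977]             # private function f(x) of the terrestrial BS

def f(x):
    return sum(c * x**k for k, c in enumerate(coeffs)) % q

# The AMF assigned the pair (x_i, f(x_i)) to the UxNB; the UxNB announces
# the public part when it arrives over the high-density area.
x_i = 6
claimed_token = pow(g, f(x_i), p)    # what a legitimate UxNB sends

# Terrestrial BS verification: recompute f(x_i) with the private function.
assert pow(g, f(x_i), p) == claimed_token, "UxNB rejected"

def xor_stream(key: int, data: bytes) -> bytes:
    """Hash-derived XOR keystream; a stand-in for a real symmetric cipher."""
    stream = hashlib.sha256(str(key).encode()).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

# After verification, f(x) is sent to the UxNB encrypted under the shared
# secret f(x_i); only a holder of f(x_i) can recover the coefficients.
payload = ",".join(map(str, coeffs)).encode()
ciphertext = xor_stream(f(x_i), payload)
recovered = xor_stream(f(x_i), ciphertext)
```

A malicious UxNB that replays the public pair still fails at the last step: without $f(x_i)$ it cannot decrypt the ciphertext carrying $f(x)$, which is exactly the argument of Theorem 1.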
\subsection{Group Handover}
Each UE sends its public value ($x_i, f(x_i)P$) to the UxNB, as in Algorithm 3. The UxNB performs an addition operation separately over the $f(x_i)$ values (computed from $x_i$ with the private function) and over the received $f(x_i)P$ points. At the end of the addition, the total $f(x_i)$ value is multiplied by the generator $P$; if the result equals the total $f(x_i)P$ value, all UEs are valid. Otherwise, the UEs are verified one by one, as shown in Algorithm 3. After a successful check, the UxNB begins to provide service to the UEs. All requests from a UE to the UxNB are encrypted with the private key $f(x_i)$ of the UE, and the $x_i$ value is appended to all data.
\begin{algorithm}
{Send ($x_i, f(x_i)P$) to UxNB.}\\
{$m$ is the number of UEs in the handover group.}\\
{$TotalX$ is equal to $0$.}\\
{$TotalPoint$ is the first $f(x_i)P.$}\\
\For{$m$ down to 1}
{
{$TotalX=TotalX+f(x_i)$.}\\
{$TotalPoint=TotalPoint+f(x_i)P$.}\\
}
\If{$TotalPoint$ is equal to $TotalX.P$}
{
{Provide service to the UEs.} \\
}
\Else{
\For {$m$ down to 1}
{
\If{$f(x_m)P$ is valid}
{
{Provide service $UE_m$.}\\
}
\Else{
{The UE is not valid.}\\
}
}
}
\BlankLine
\caption{Group Handover}
\end{algorithm}
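Algorithm 3's batch check with its per-UE fallback can be sketched in the same multiplicative stand-in for the elliptic curve group (point addition of the $f(x_i)P$ values becomes multiplication of the tokens $g^{f(x_i)}$); the parameters and the deliberately invalid fifth UE are illustrative assumptions.

```python
# Prime-order multiplicative subgroup standing in for the elliptic curve
# group: the point f(x_i) x P becomes g**f(x_i) mod p, and point addition
# becomes multiplication of these tokens.  Toy parameters, not secure.
p, q, g = 2879, 1439, 4          # p = 2q + 1 with q prime; g has order q

coeffs = [815, 212, 977]         # the private function f(x) held by the UxNB

def f(x):
    return sum(c * x**k for k, c in enumerate(coeffs)) % q

# Each UE i sends its public value (x_i, f(x_i) x P); the fifth UE is bogus.
ues = [(x, pow(g, f(x), p)) for x in (2, 5, 7, 11)]
ues.append((13, pow(g, 42, p)))  # token inconsistent with f(13): invalid UE

def group_handover(ues):
    """Batch check first; fall back to per-UE verification on failure."""
    total_x = sum(f(x) for x, _ in ues) % q        # TotalX, via private f(x)
    total_point = 1
    for _, token in ues:
        total_point = (total_point * token) % p    # "TotalPoint" (point sum)
    if total_point == pow(g, total_x, p):          # TotalPoint == TotalX x P ?
        return [True] * len(ues)                   # admit whole group at once
    return [token == pow(g, f(x), p) for x, token in ues]  # per-UE fallback

results = group_handover(ues)    # only the bogus UE is rejected
```

When every UE is honest, the batch check admits the whole group with a single comparison; only a mismatch triggers the linear per-UE scan, which is what keeps the handover time independent of the group size in the common case.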
\section{Security Analysis}
In this section, the security provided by our proposal against the possible attacks on the scenario explained in the models section is analyzed. For each attack scenario, the prevention is stated as a theorem, and the solution is established in the corresponding proof.
\\
\textbf{Theorem 1:} \textit{A malicious UxNB that captures the public values ($x_i,f(x_i)P$) of a legitimate UxNB cannot perform a replay attack.}
\textit{Proof.} The secret function $f(x)$ is encrypted with the private key $f(x_i)$ of the UxNB and sent to the UxNB by the terrestrial BS. The malicious UxNB may intercept the communication between the legitimate UxNB and the terrestrial BS, and it may even pass the confirmation phase by replaying the public values of the legitimate UxNB. However, when the terrestrial BS sends the encrypted $f(x)$ polynomial, the malicious UxNB cannot decrypt it, since it does not have the private key $f(x_i)$. {$ \hspace{6.6 cm} \qed$}
\textbf{Theorem 2:} \textit{A malicious UE that captures the public values ($x_i,f(x_i)P$) of a UE cannot perform a replay attack.}
\textit{Proof.} The malicious UE may intercept the communication between the legitimate UxNB and the UE, and it may pass the confirmation phase by replaying the public values of the UE. However, when it requests service from the legitimate UxNB, it cannot send an encrypted message, since it does not have the private key $f(x_i)$. {$ \hspace{7.1 cm} \qed$}
\textbf{Theorem 3:} \textit{An attacker who captures the values $P$ and $f(x_i)P$ sent publicly by the UxNB cannot obtain the private key $f(x_i)$.}
\textit{Proof.} Given two points $P$ and $f(x_i)P$ in an elliptic curve group, it is hard to find the value $f(x_i)$ that satisfies $f(x_i)P = f(x_i)\times P$. This open problem is the ECDLP. Therefore, it is hard to recover $f(x_i)$ from $P$ and $f(x_i)P$. {$ \hspace{6.9 cm} \qed$}
\textbf{Theorem 4:} \textit{An attacker cannot decrypt the communication between the UEs and the UxNB after the handover.}
\textit{Proof.} The communication is encrypted with the private key $f(x_i)$ of the UE, and the public value $x_i$ is sent with the encrypted message. The UxNB can compute $f(x_i)$ from the secret function $f(x)$ and the UE public key $x_i$. Attackers can only eavesdrop on encrypted traffic. {$ \hspace{2.9 cm} \qed$}
The attacks in the four theorems above are crucial security problems in the specified densely populated scenario. An intruder could act as a legitimate UE or UxNB by eavesdropping on the communication channel and obtaining security parameters. Keeping the private keys secret is therefore critical in the proposed system: if intruders capture a number of private keys equal to the threshold value, they can recover the secret polynomial and decrypt the information in the entire system.
\section{Performance Analysis}
Our main objectives in the performance analysis are to show the importance of capacity injection for QoS and to compare the handover time and the number of control packet transmissions in group handover. The SimuLTE \cite{simulte} library, built on top of the OMNeT++ and INET frameworks, is used to simulate our football stadium scenario, as seen in Figure \ref{fig:omnetScenario}. The most complex LTE scenarios can be simulated in SimuLTE in accordance with the 3GPP standards \cite{3GPP36300}. The simulation framework uses a layered environment, and the handover process is accomplished mostly at the physical layer. Further, the $X_2$ link between BSs and its protocols are well designed and implemented in SimuLTE. The channel model configuration we used for our simulation is given in Table \ref{table:channel}.
\begin{table}[h!]
\caption{Channel Model}
\label{table:channel}
\centering
\begin{tabular}{ ||c||c|| }
\hline \hline
Channel Model Type & Realistic Channel Model \\
\hline
Shadowing & False \\
\hline
Fading & False \\
\hline
BSTxPower & 30 dBm \\
\hline
UETxPower & 26 dBm \\
\hline
Cable loss & 2 dB \\
\hline
Noise Figure & 5 dB \\
\hline\hline
\end{tabular}
\end{table}
According to the simulated scenario, a terrestrial BS provides service to the UEs inside a high-capacity football stadium. Due to the excess number of UEs, the BS cannot provide the desired QoS, so more than one UxNB is sent to the zone throughout the game for capacity injection. In this scenario, it is necessary to authenticate the UxNBs and to hand over the UEs from the terrestrial BS to the nearest UxNB.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{omnetScenario.png}
\caption{Simulation scenario. Several UEs detach from the \textit{s-BS} and attach to the \textit{t-BS}. The handover procedures of both LTE and our proposed method are simulated in SimuLTE.}
\label{fig:omnetScenario}
\end{figure}
\subsection{Capacity Injection}
In parallel with technological advancements in mobile networks, the peak downlink and uplink data rates increase. While the average downlink value provided today by LTE technology is $100$ megabits per second (Mbps), the uplink value is $50$ Mbps \cite{lteuplink}. A BS that encounters requests above these uplink and downlink thresholds will start dropping packets. As a result, there will be a decrease in the QoS values determined by the service provider.
In high-density areas, such as stadiums, the uplink value will typically be high. In our simulation, the UEs request service to watch a video simultaneously. According to the simulation we implemented with SimuLTE, as the number of UEs increases, the required uplink value increases, as shown in Figure \ref{fig:CapacityInjection}. For example, the requests created by $100$ UEs at the same time create an uplink value of $110$ Mbps at the BS; thus, only $100$ UEs watching a video simultaneously can consume the entire capacity of one terrestrial BS.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{CapacityInjection.png}
\caption{Base station throughput per UE. The throughput grows with the number of UEs. The graph shows that capacity injection is crucial when the number of UEs is high.}
\label{fig:CapacityInjection}
\end{figure}
As can be seen, it is not possible to provide service with only one BS in very crowded environments, and the use of UxNBs emerges as a promising solution. Another question at this point is how many UxNBs, on average, are sufficient to cover a stadium. According to \cite{droneuplink}, the downlink value for a UxNB is $160$ Mbps at a typical flight height of 150 m, so the solution will need approximately one UxNB per $10$ UEs. Using too many UxNBs will cause numerous handover processes; therefore, a group handover is a promising solution for our scenario.
\subsection{Group Handover}
Latency is one of the main issues in handover schemes: if the communication latency is high, the QoS committed to by providers will be low. The time spent in the handover process and the number of control packet transmissions between the UEs and the BS are the main sources of this latency. Recall that, in our scenario, a large number of UEs in a football stadium switch their access network from a terrestrial BS to a UxNB. To simulate the LTE handover scheme we use the pre-designed, standards-compliant scheme in SimuLTE, and we reconfigured parts of its code to collect statistics on the total handover time and on the number of control packets transmitted by the BS and the UEs.
As we can see in Figure \ref{fig:handofftime}, the total handover time in the LTE scheme increases with the number of UEs. According to the standards, the \textit{s-BS} must send user-related data for each UE to the \textit{t-BS}; hence, the communication between the BSs is proportional to the number of UEs, as shown in Figure \ref{fig:Transmission}. Each UE also contacts the core network to update its handover parameters, and these six transmissions from the UE to the core network are a fixed cost in both the standards and our proposal. The most energy-consuming transmissions occur between the BSs.
The handover time results for our proposal in Figure \ref{fig:handofftime} show that the number of UEs does not affect the total handover time, because the \textit{t-BS} performs a single group handover: it collects the public values of the UEs and compares the received values with the values produced by the private function. The number of control packet transmissions for our proposal is shown in Figure \ref{fig:Transmission}. The number of control packet transmissions per UE is still six, as in the standards, because each UE must update its handover parameters with the core network. The advantage of our proposal over LTE is that the communication between the BSs drops to zero: the \textit{t-BS} can authenticate the UEs by confirming their public keys without communicating with the \textit{s-BS}.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{HandoffTime.jpg}
\caption{Comparison of handover times. In the proposed method, the handover time does not depend on the number of UEs thanks to the group handover solution, whereas in LTE the handover time is proportional to the number of UEs.}
\label{fig:handofftime}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{TheNumberofTransmission.jpg}
\caption{Total number of control packet transmissions. In both LTE and the proposed method, each UE requires six control packet transmissions to update the core network. In our proposed method there are no control packet transmissions between the BSs for the handover, whereas in LTE the \textit{s-BS} must send credentials for each UE to the \textit{t-BS}.}
\label{fig:Transmission}
\end{figure}
\subsection{Performance Assessment}
In both TS 36.300 \cite{3GPP36300} and our proposal, the number of control packet transmissions per UE is six, because the UE contacts the core network six times to transfer the new connection parameters for the \textit{t-BS}.
In the LTE standard \cite{3GPP36300}, the \textit{s-BS} sends the connection information for each UE to the \textit{t-BS}, and every transmission between the \textit{s-BS} and the \textit{t-BS} is answered with an acknowledgment message confirming receipt of the data. At the end of the handover process, the \textit{t-BS} informs the \textit{s-BS} that the handover is complete. Hence, the total number of control packets exchanged between the \textit{s-BS} and the \textit{t-BS} is double the number of UEs.
In our proposal, no communication between the \textit{s-BS} and the \textit{t-BS} takes place during the handover. The \textit{t-BS} learns the secret function $f(x)$, the key element of the confirmation, when it becomes active: the terrestrial \textit{s-BS} authenticates the \textit{t-BS} (UxNB) when it becomes active at the football stadium and, once this authentication is confirmed, shares the secret function with it. During the handover, the \textit{t-BS} performs the confirmation using this private function. Each UE sends its public values $(x_i,f(x_i)P)$ to the \textit{t-BS}, which performs modular addition over the $x_i$ values and elliptic curve addition over the $f(x_i)P$ points. Once all the UEs in the group have sent their public values, the \textit{t-BS} checks the aggregated $x_i$ and $f(x_i)P$ values against those it computes with the private function.
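This verification step can be illustrated with a toy numerical example. The sketch below is our own minimal reconstruction, not the exact protocol of the paper: it uses a deliberately tiny curve ($y^2 = x^3 + 2x + 2$ over $\mathbb{F}_{17}$ with generator $G=(5,1)$ of prime order $19$) and a hypothetical linear secret function $f$, solely to show how the \textit{t-BS} can verify a whole group with elliptic curve additions and a single comparison:

```python
# Toy illustration of the group confirmation at the t-BS (hypothetical
# parameters): curve y^2 = x^3 + 2x + 2 over F_17, generator G = (5, 1)
# of prime order 19.  A real system would use a cryptographic-size curve.
P_MOD, A = 17, 2
G, ORDER = (5, 1), 19

def ec_add(p1, p2):
    """Elliptic curve point addition; None plays the role of the identity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # p2 is the inverse of p1
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, point):
    """Double-and-add scalar multiplication."""
    result, addend = None, point
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result

def f(x, coeffs):
    """Secret polynomial, evaluated modulo the group order."""
    return sum(c * x**i for i, c in enumerate(coeffs)) % ORDER

coeffs = [7, 3]      # hypothetical secret f(x) = 7 + 3x, known only to the BSs
ue_ids = [2, 5, 11]  # hypothetical x_i values of the UEs in the group

# Each UE publishes (x_i, f(x_i)*G); its scalar share f(x_i) stays hidden
# behind the elliptic curve discrete logarithm problem.
public = [(x, ec_mul(f(x, coeffs), G)) for x in ue_ids]

# t-BS: EC-add the received points and compare the aggregate against the
# point it can compute itself from the x_i values and the private function.
received_sum = None
for _, pt in public:
    received_sum = ec_add(received_sum, pt)
expected_sum = ec_mul(sum(f(x, coeffs) for x in ue_ids) % ORDER, G)
assert received_sum == expected_sum  # whole group authenticated at once
```

Because only $f(x_i)P$ is published, an eavesdropper would have to solve the elliptic curve discrete logarithm problem to recover a UE's share $f(x_i)$.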
Comparing the total handover times of LTE and our proposal, increasing the number of UEs does not change the total handover time in our proposal, whereas in the standards the handover time is proportional to the number of UEs. The reason is that the amount of communication between the \textit{s-BS} and the \textit{t-BS} grows with the number of UEs.
According to the simulation results, one control packet transmission between the BSs takes approximately $7.5$ nanoseconds. The \textit{s-BS} sends one packet with the UE information to the \textit{t-BS} and receives one acknowledgment packet in return, so transferring one UE's data from the \textit{s-BS} to the \textit{t-BS} takes $15$ nanoseconds. A fixed $0.05$ seconds is common to both our proposal and the LTE standard; this time is required for the UE to start the handover process and to update the core network about the cell change. Unlike in our proposal, in LTE the \textit{t-BS} also sends the \textit{s-BS} an acknowledgment that the handover is complete, which takes about $10$ microseconds. The other source of variation in the LTE handover time is the data sharing between the BSs: while sharing the data of one UE takes $15$ nanoseconds, it takes $1.5$ microseconds for $100$ UEs, and this value keeps growing with the number of UEs, as shown in Figure \ref{fig:handofftime}.
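The timing figures above can be folded into a simple cost model. The functions below are our own summary of those quoted numbers, not code extracted from the simulator:

```python
# Simple cost model built from the timing figures quoted above (our own
# summary; the constants are the values reported from the simulation).
BASE_TIME = 0.05     # s: UE-initiated start + core-network update (both schemes)
BS_PACKET = 7.5e-9   # s: one control packet between the BSs
LTE_ACK   = 10e-6    # s: LTE handover-complete acknowledgment to the s-BS

def lte_handover_time(n_ues):
    """LTE: one data packet plus one acknowledgment per UE between the BSs."""
    return BASE_TIME + n_ues * 2 * BS_PACKET + LTE_ACK

def proposed_handover_time(n_ues):
    """Proposal: group verification at the t-BS, no BS-to-BS traffic."""
    return BASE_TIME
```

This reproduces the quoted numbers: $15$ ns of BS-to-BS data sharing per UE, hence $1.5$ $\mu$s for $100$ UEs, against a constant $0.05$ s for the proposal.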
\section{Conclusion}
This paper proposes a group handover solution. The main objectives of our study are to decrease the total handover time in high-density places, to propose an authentication scheme between BSs, and to decrease energy consumption by limiting the number of control packet transmissions in the handover process.
Our simulation results show that the handover time in our proposal is constant at $0.05$ s and that there are no control packet transmissions between the BSs. In contrast, according to 3GPP Release 16 \cite{3GPP36300}, the \textit{s-BS} must send data to the \textit{t-BS} for each UE to complete the handover successfully; both the handover time and the number of control packet transmissions between BSs therefore increase in proportion to the number of UEs in the handover process.
In our proposal, the \textit{t-BS} can authenticate a group of UEs with a private function without revealing any security parameter: the discrete logarithm problem in elliptic curve groups hides the private keys of the UEs. The number of control packet transmissions between the \textit{s-BS} and the \textit{t-BS} is zero and does not affect the handover time. Moreover, our authentication solution alleviates the performance issue of the $X_n$ link between the terrestrial BS and the UxNB.
The scenario for which we propose our scheme does not involve high mobility: both the terrestrial BS and the UxNB are fixed. The UxNB flies over the high-density area and provides service to the UEs in its coverage area, its main objective being to reduce the burden on the terrestrial BS. In the future, however, UxNBs will also move into urban or disaster areas.
When a high number of UxNBs is deployed in the air to cover such areas alongside densely deployed UEs, new schemes will be required to provide authentication between UEs and UxNBs and among the UxNBs themselves. Moreover, UEs will be handed over from one UxNB to another; such handover schemes are also needed in future beyond-5G networks.
\section{Introduction}
\subsection{Motivation}
Much effort has been spent in order to better understand and potentially treat the impairment of wound healing in diabetic patients. In this regard, one phenomenon that has recently gained
attention from biologists is \emph{lymphangiogenesis};
that is, the formation or reformation of lymphatic vasculature
\cite{huggenberger2011,huggenberger2011b,kim2012,tammela2010}. Insufficient lymphangiogenesis, as observed in diabetic subjects, appears to correlate with failed or delayed wound healing.
Impaired wound healing is a major health problem worldwide
and in recent decades has attracted the attention of both biologists and mathematicians.
In many cases unresolved wound healing correlates with prolonged infection,
which negatively affects the patient's quality of life,
causing pain and impairing their physical abilities.
Particularly serious infection may even require the amputation of an affected limb
\cite{langer2009}.
Furthermore, impaired wound healing also constitutes a major problem for health care systems,
accounting for approximately 3\% of all health service expenses in the UK
and 20 billion dollars annually in the USA \cite{delatorre2013,dowsett2009,drew2007,posnett2008}.
Several systemic factors contribute
to the delay or complete failure of the wound healing process
\cite{asai2012,fadini2010,jeffcoate2003,lerman2003,swift2001}.
In particular, diabetic patients exhibit a slower and sometimes insufficient response to infection after injury.
Such a delay often results in a chronic wound; that is, the wound fails to progress through the normal stages of healing
and usually remains at the inflammation stage \cite{brem2007,pierce2001}.
Interest in lymphangiogenesis in reference to wound healing is very recent:
for example in the Singer \& Clark 1999 review \cite{singer1999} the process is not mentioned.
Nevertheless, lymphatic vessels have recently come to be regarded as a crucial factor in wound healing \cite{cho2006,ji2005}.
They mediate the immune response and maintain the right pressure in the tissues \cite{swartz2001},
thus playing a very important role in inflammation and contributing to the healing of a wound \cite{oliver2002,witte2001}.
Moreover, failed restoration of a lymphatic network (observed, for example, in diabetic patients)
is now thought to be a major cause of impairment to wound healing \cite{asai2012,maruyama2007,saaristo2006}.
Mathematical modelling has proven a useful tool
in understanding the mechanisms behind numerous biological processes.
It is therefore of interest, and potentially great utility, to build a model
describing lymphangiogenesis in wound healing,
considering both the normal and pathological (diabetic) cases.
\subsection{Biology}
Wound healing is a very complex process involving a number of entwined events,
which partially overlap in time and influence one another.
For simplicity and educational purposes,
it is often divided into four different phases:
hemostasis, inflammation, proliferation and remodeling.
Here the key events in each of the phases are summarised; for further details,
see for instance \cite{delatorre2013,gabriel2013,park2013,singer1999,stadelmann1998}.
A few minutes after injury, the contact between blood and the
extracellular matrix (ECM)
causes a biochemical reaction that leads to the formation of a blood clot.
This ``crust'' has the double function of stopping the bleeding (\emph{hemostasis})
and providing a ``scaffold'' for other cells involved in the process to be described below.
Concurrently, chemical regulators (such as Transforming Growth Factor $\beta$, or TGF-$\beta$) are released,
which attract cells such as neutrophils and monocytes to the wound site.
These cells clean the wound of debris and neutralise any infectious agents
that have invaded the tissue.
This stage is called \emph{inflammation};
in a normal wound inflammation begins a couple of hours after wounding
and lasts a few weeks.
Monocytes metamorphose into macrophages,
which complete the removal of the pathogens and also secrete some proteins
(like Vascular Endothelial Growth Factor, or VEGF).
This leads to the next stage: the \emph{proliferation} or \emph{reepithelialisation} phase.
At this point VEGF and other substances stimulate the growth and aggregation of the surrounding cells,
restoring the different tissue functions.
The clot is slowly substituted by a ``temporary skin'' called \emph{granulation tissue}
and the interrupted blood and lymphatic capillary networks are restored in processes
named \emph{(blood) angiogenesis} and \emph{lymphangiogenesis}, respectively.
After a lengthy period the granulation tissue is replaced by normal skin;
this happens during the final, long phase of \emph{remodeling},
which can take up to one or two years.
Although wound healing has been studied extensively
and the main underlying mechanisms are well understood,
little is known about how lymphangiogenesis takes place.
Far more biological (and mathematical) literature has been produced
about its sibling process, blood angiogenesis;
it was not until the 1990s that lymphangiogenesis received
significant attention from researchers
\cite{adams2007,benest2008,choi2012}.
This discrepancy was mainly due to the previous lack of markers and information
on the growth factors involved in the lymphangiogenesis process;
such a dearth of biochemical tools impeded a detailed and quantifiable
study of lymphatic dynamics \cite{choi2012,oliver2002}. For biological reviews about lymphangiogenesis see \cite{kim2012,norrmen2011,tammela2010}
and for particularly significant biological research papers see \cite{boardman2003,bruyere2010,rutkowski2006}.
Naively, lymphatic vessels may appear
``interchangeable'' with their blood equivalents from a modelling perspective.
However, it is stressed that the two vasculatures are quite different; for biological papers comparing lymphangiogenesis with (blood) angiogenesis see \cite{adams2007,lohela2009,sweat2012}.
First of all, the capillary structure is completely distinct:
while blood vessel walls are relatively thick,
surrounded by smooth muscles which pump the blood around the body,
lymphatic capillaries are made of a single layer of
endothelial cells known as \emph{lymphatic endothelial cells} (LEC) \cite{norrmen2011}.
Moreover, the formation of new lymphatic capillaries,
or the restoration of preexisting ones, is very different from blood angiogenesis.
While growing blood capillaries are known to sprout from existing interrupted ones,
several studies suggest that lymphangiogenesis occurs in a different way \cite{benest2008,nakao2010}.
For instance, in \cite{rutkowski2006} it is observed that LECs migrate as single cells in the direction of interstitial flow and after sufficient numbers have congregated in the wound region, they organise into vessels (see Figure \ref{fig:fotoBoardman}).
\begin{figure}[h]
\centering
\includegraphics[height=6cm]{boardmanFig2.png}
\caption{The photo is taken from \cite[Figure 2]{boardman2003} and shows lymphatic channels formation in the mouse tail. Note that at day 10 (a) fluid channeling is not observed, but at day 25 (b) discrete channels are present and at day 60 (c) a hexagonal lymphatic network is nearly complete.
Notice also that lymph and interstitial fluid flow from left (tail tip) to right (tail base): this is in contrast to what happens during blood angiogenesis, which occurs equally from both sides of the implanted tissue equivalent.
In \cite{boardman2003} the authors use a new model of skin regeneration consisting of a collagen implant in a mouse tail (whose location is indicated by the dashed lines in the picture). The aim of the experiment is mainly to characterise the process of lymphatic regeneration.
Lymph fluid is detected (green in the photograph) and in \cite{boardman2003} it is shown that LECs follow this fluid.
Therefore, this photo can be seen as the migration of LECs into the wound. (Bar=1mm) }
\label{fig:fotoBoardman}
\end{figure}
We are therefore facing a new process whose mathematical description
cannot be drawn from any previous model for blood angiogenesis.
In the following section more details are given about the lymphangiogenesis process
and a mathematical model is proposed.
\subsection{Outline}
Clearly, successful lymphangiogenesis is an essential element in the wound healing process. Yet, as a novel and developing area of attention, its mechanistic basis, relation to other components of wound healing and impairment during diabetes remain unclear. With the aim of furthering our understanding of lymphangiogenesis, in this paper a mathematical model is developed to describe this process.
This paper is structured as follows.
In Section \ref{sec:modelling} a brief review of mathematical modelling of wound healing and lymphangiogenesis is given, followed by a detailed description of how the terms in our model were chosen. At the end of the section, a list of the model parameters and initial conditions is provided (a detailed description of parameter estimation can be found in \ref{appPAR}).
In Section \ref{sec:results} a typical solution of the model is shown and compared with real biological datasets. Furthermore, the diabetic case is introduced and modelled by changing specific parameters. The simulations for normal and diabetic cases are then run together and compared with available data. Finally, a steady state analysis (detailed in \ref{appSS}) and parameter sensitivity analysis of the model are performed.
In Section \ref{sec:therapies} three existing experimental treatments aimed at enhancing lymphangiogenesis are presented and simulated; then, potential therapeutic targets are identified based on observations from the parameter sensitivity analysis.
Finally, Section \ref{sec:conclusions} briefly summarises the main results of the work and possible future extensions of the present model are mentioned.
\section{Modelling} \label{sec:modelling}
\subsection{Brief review of existing models}
Being of such a complicated nature and evident medical interest,
wound healing has been the subject of mathematical modelling
for decades; see for instance \cite{sherratt1990,tranquillo1992} for the first models in the 1990s, \cite{sherratt2002} for a 2002 review and \cite{geris2010} for a 2010 review.
A variety of mathematical formalisms have been involved in wound models:
from classical PDEs \cite{dale1995,olsen1997,sherratt1990},
often derived from bio-mechanical considerations \cite{friedman2011,murphy2012},
to stochastic models \cite{boyarsky1986}, to discrete models \cite{dallon1999}.
Some authors have used moving boundary methods to study the movement
of the wound edge during healing \cite{javierre2009}, and attempts have been
made to understand more specific aspects of the healing process, for
example macrophage dynamics in diabetic wounds \cite{waugh2006} and the resolution of the
inflammatory phase \cite{dunster2014}.
One aspect of wound healing that has received considerable attention from mathematicians is (blood) angiogenesis
(see, for instance, \cite{byrne2000,chaplain2000,flegg2012,gaffney2002,machado2011,pettet1996,schugart2008,zheng2013}).
A comprehensive review of mathematical models for vascular network formation can be found in Scianna et al's 2013 review \cite{scianna2013}.
Little work has been done with regards to modelling lymphangiogenesis and, indeed, \cite{scianna2013} cites only a limited number of mathematical works concerning this topic.
Of these a representative sample is given by \cite{friedman2005,roose2008}
which deal with tumor lymphangiogenesis
and the collagen pre-patterning caused by interstitial fluid flow, respectively.
More specifically, \cite{roose2008} uses the physical theory of rubber materials
to develop a model explaining the morphology of the lymphatic network in collagen gels,
following the experimental observations of \cite{boardman2003}. This is the only existing model for lymphangiogenesis in wound healing known to the authors.
A further brief review of lymphatic modelling can be found in \cite{margaris2012},
where the phenomenon is approached from an engineering perspective.
In summary, a small number of papers have considered modelling the lymphangiogenesis process in the context of tumors; the modelling of lymphangiogenesis in wound healing is confined to \cite{roose2008}, where two fourth order PDEs are used to describe the evolution of the collagen volume fraction and of the proton concentration in a collagen implant.
That work does not address the healing process as a whole, which is the aim of the present paper.
Here a simple model (comprised of a system of ODEs) is presented that provides an effective description of the main dynamics observed in wound healing lymphangiogenesis.
\subsection{Model development}
In the present model five time-dependent variables are considered:
two chemical concentrations (TGF-$\beta$ and VEGF)
and three cell densities
(macrophages, LECs and lymphatic capillaries).
Their interactions are described in Figure \ref{fig:modelscheme} and the formulation of the ODE model is based on the following set of processes
(the full system is given by the set of equations (\ref{eq:system})).
The initial (or pre-wounding) state is altered when latent TGF-$\beta$ is activated
(thus becoming active TGF-$\beta$, denoted in the sequel by $T$) by macrophages and enzymes
released immediately after wounding.
This active form of TGF-$\beta$ attracts more macrophages ($M$) to the wound site,
through \emph{chemotaxis}.
Macrophages in turn produce VEGF ($V$), a growth factor that chemoattracts and
stimulates the proliferation of LECs ($L$).
Note that LEC growth is also inhibited by TGF-$\beta$.
In the final stage of the process, LECs cluster in a network structure,
transdifferentiating into lymphatic capillaries ($C$).
This latter process happens spontaneously, although it is enhanced by VEGF.
\begin{center}
\begin{figure}[h]
\tikzstyle{constant} = [circle, draw, fill=yellow!15,text width=4em, text centered, node distance=2cm, inner sep=0pt]
\tikzstyle{block} = [rectangle, draw, fill=yellow!35,text width=6em, text centered, rounded corners, minimum height=3em]
\tikzstyle{line} = [draw, -latex']
\tikzset{->-/.style={decoration={
markings,
mark=at position .5 with {\arrow{>}}},postaction={decorate}}}
\tikzset{->--/.style={decoration={
markings,
mark=at position .25 with {\arrow{>}}}, postaction={decorate}}}
\tikzset{-->-/.style={decoration={
markings,
mark=at position .75 with {\arrow{>}}}, postaction={decorate}}}
\tikzstyle{solid}=[dash pattern=]
\tikzstyle{dashed}=[dash pattern=on 3pt off 3pt]
\tikzstyle{dotted}=[dash pattern=on \pgflinewidth off 2pt]
\tikzstyle{dashdotted}=[dash pattern=on 3pt off 2pt on \the\pgflinewidth off 2pt]
\begin{tikzpicture}[node distance = 5 cm, auto]
\node [constant] (tl) {Latent TGF-$\beta$};
\node [block, right of=tl] (tgf) {Active TGF-$\beta$};
\node [block, right of=tgf] (vegf) {VEGF};
\node [block, below of=tl] (mph) {Macrophages};
\node [block, below of=tgf] (lec) {LECs};
\node [block, below of=vegf] (cap) {Capillaries};
\draw[ultra thick, solid,LimeGreen, ->--] (tl) -- node [midway, name=a1] {} (tgf);
\draw[ultra thick, solid,LimeGreen, -->-] (tl) to (tgf);
\draw[ultra thick, dashed,Red, ->-] (tgf) to (mph);
\draw[ultra thick, dashdotted,YellowOrange, ->--] (tgf) to (lec);
\draw[ultra thick, dashdotted,YellowOrange, -->-] (tgf) to (lec);
\draw[ultra thick, dashed,Red, ->-] (vegf) to (lec);
\draw[ultra thick, dotted,NavyBlue, ->--] (mph) to (vegf);
\draw[ultra thick, dotted,NavyBlue, -->-] (mph) to (vegf);
\draw[ultra thick, solid,LimeGreen, ->--] (lec) -- node [midway, below, name=a2] {} (cap);
\draw[ultra thick, solid,LimeGreen, -->-] (lec) to (cap);
\draw[ultra thick, dotted,NavyBlue, ->-] (mph) to (a1);
\draw[ultra thick, dotted,NavyBlue, ->-] (vegf) to (a2);
\draw[ultra thick, dashdotted,YellowOrange, ->-] (cap) to [bend left] (mph) ;
\end{tikzpicture}
\caption{A schematic representation of the model dynamics. The five time-dependent variables correspond to the levels of the factors in each of the rectangular boxes; the quantity in the circle (latent TGF-$\beta$) is assumed constant. Concerning the arrows, {\color{red} \textbf{dashed red}} denotes chemotactic attraction, {\color{LimeGreen} \textbf{solid green}} denotes activation/transdifferentiation, {\color{NavyBlue} \textbf{dotted blue}} denotes production/enhancement and {\color{YellowOrange} \textbf{dash-dotted orange}} denotes inhibition. }
\label{fig:modelscheme}
\end{figure}
\end{center}
In the following the time-dependent variables introduced above
are discussed in detail.
In particular, the derivation of the corresponding evolution equation is individually presented for each variable.
\subsubsection*{TGF-$\beta$} This chemical is normally stored
in an inactive or latent form in the body; however, only \emph{active} TGF-$\beta$ plays an important role in wound healing lymphangiogenesis, and therefore we will only consider the dynamics of the active chemical.
Effectively, the active TGF-$\beta$ protein is bound to a molecule called Latency Associated Peptide,
forming the so-called Small Latent TGF-$\beta$ Complex.
This in turn is linked to another protein called Latent TGF-$\beta$ Binding Protein,
overall forming the Large Latent TGF-$\beta$ Complex \cite{taylor2009}.
Hence, the two ``peptide shells'' must be removed before the organism can use the TGF-$\beta$.
Another feature of this growth factor is that it exists in at least three isoforms (TGF-$\beta$1, 2 and 3) which play different roles at different stages of wound healing \cite{cheifetz1990}. The isoform of primary interest in our application is TGF-$\beta$1.
The differential equation describing (active) TGF-$\beta$ concentration has the following form:
\begin{center}
\begin{tabular}{ C{2.8cm} c C{2.5cm} c c c c }
change in TGF-$\beta$ concentration & $=$ & activation by enzymes and macrophages &$\times$ & latent TGF-$\beta$ & $-$ & decay.
\end{tabular}
\end{center}
A review of the activation process is presented in \cite{taylor2009}
where it is reported that TGF-$\beta$ can be activated in two ways.
The first is \emph{enzyme-mediated activation} whereby enzymes, mainly plasmin, release the Large Latent TGF-$\beta$ Complex from the ECM and then the Latency Associated Peptide binds to surface receptors.
The second form of activation is \emph{receptor-mediated activation}. Here cells bind the Latency Associated Peptide and later deliver active TGF-$\beta$ to their own TGF-$\beta$-Receptors or to the receptors of another cell.
This behaviour is often observed in activated macrophages \cite{gosiewska1999,nunes1995}.
Thus both enzyme concentration and macrophage density $M$ are influential in the activation process and thereby appear in the activation term, $a_p p_0 e^{-a_p T_L t} + a_M M$.
Here $a_p$ and $a_M$ denote the activation rates by enzymes and by macrophages, respectively. $T_L$ denotes the (constant) amount of available latent TGF-$\beta$ (more details in the next paragraph).
In addition, the enzyme/plasmin concentration is assumed to decrease exponentially from the initial value $p_0$, as in \cite{dale1996}; this reproduces quite well the enzyme dynamics in real wounds \cite{sinclair1994}.
It is widely accepted that a variety of cells have the potential to secrete latent TGF-$\beta$,
including platelets, keratinocytes, macrophages, lymphocytes and fibroblasts \cite{barrientos2008,khalil1993,taylor2009}.
Moreover, this latent complex is stored in the ECM in order to be constantly available to the surrounding cells \cite{shi2011}.
This fact is manifested in a constant production rate $T_L$ in our model equation.
Furthermore, it is well known that macrophages secrete latent TGF-$\beta$ \cite{khalil1993}; we assume that this occurs at a constant rate $r_1$.
Together these considerations imply that the amount of available latent TGF-$\beta$ in the wound will be modelled by $T_L + r_1 M$.
Finally, TGF-$\beta$ naturally decays at rate $d_1$,
so the term $- d_1 T$ will be included in the differential equation.
Therefore, the full equation for active TGF-$\beta$ is
\begin{equation} \label{eq:Teqn}
\frac{dT}{dt} = \left[ a_p p_0\exp(-a_p T_L t) + a_M M \right]
\cdot \left[ T_L + r_1 M \right] - d_1 T \; .
\end{equation}
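As a sanity check on the functional form, the right-hand side of equation (\ref{eq:Teqn}) can be evaluated numerically. The sketch below uses illustrative placeholder parameters only, not the estimates derived in \ref{appPAR}:

```python
import math

# Illustrative placeholder parameters -- NOT the fitted values of the paper,
# which are estimated in the appendix.
a_p, a_M = 0.1, 0.05   # activation rates by enzymes and by macrophages
p0, T_L  = 1.0, 2.0    # initial plasmin level; constant latent TGF-beta pool
r1, d1   = 0.3, 0.5    # latent TGF-beta secretion rate; active TGF-beta decay

def dT_dt(t, T, M):
    """Right-hand side of the active TGF-beta equation."""
    activation  = a_p * p0 * math.exp(-a_p * T_L * t) + a_M * M
    latent_pool = T_L + r1 * M
    return activation * latent_pool - d1 * T
```

With these placeholders, at $t=0$ with $M=T=0$ the rate reduces to $a_p p_0 T_L$, i.e.\ purely enzyme-driven activation of the stored latent pool.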
\subsubsection*{Macrophages} These are a type of white blood cell
that removes debris, pathogenic microorganisms and cancer cells through
\emph{phagocytosis}.
They are produced by the differentiation of monocytes and are found in most of the tissues,
patrolling for potential pathogens.
Perversely, in addition to enhancing inflammation and stimulating the immune system, macrophages can also contribute to decreased immune reactions.
For this reason they are classified either
as \emph{M1} (or \emph{inflammatory}) macrophages if they encourage inflammation,
or as \emph{M2} (or \emph{repair}) macrophages if they decrease inflammation and encourage tissue repair \cite{martinez2006}.
Henceforth we restrict attention to inflammatory macrophages, since they are the most involved in lymphangiogenesis-related processes.
A useful review of the multifaceted and versatile role of macrophages in wound healing can be found in \cite{rodero2010}.
The following scheme will be considered for macrophage dynamics:
\begin{center}
\begin{tabular}{ C{1.8cm} c C{1.6cm} c C{1.8cm} c C{1.5cm} c C{1.8cm} }
\footnotesize change in macrophage density & $=$ & \footnotesize constant source & $+$ & \footnotesize chemotaxis by TGF-$\beta$ & $+$ & \footnotesize logistic growth & $-$ & \footnotesize removal and metamorphoses.
\end{tabular}
\end{center}
The various terms appearing in the right-hand side of this equation are discussed below.
The number of inflammatory macrophages increases due to their migration into the wound, in part due to movement of existing inflammatory macrophages from the surrounding tissue, as well as by chemotaxis of monocytes up gradients of TGF-$\beta$ \cite{wahl1987}, a fraction $\alpha$ of which differentiate into inflammatory macrophages \cite{mantovani2004}.
The former is modelled by assuming a constant source $s_M$ dictated by the non-zero level of inflammatory macrophages \cite{weber1990}, and the latter by the term $\alpha h_1 (T) = \alpha { b_1 T^2}/{(b_2 + T^4)}$.
Here $h_1(T)$ is the ``chemotactic function'', whose form is discussed in detail in \ref{appPAR}.
Only a (small) percentage $\beta$ of macrophages
undergo mitosis~\cite{greenwood1973}; we thus assume the logistic growth term $\beta r_2 M \left( 1 - {M}/{k_1} \right)$ where $r_2$ denotes the macrophage growth rate and $k_1$ the carrying capacity of the wound.
Notice that here only $M$ appears over the carrying capacity and the other cell types $L$ and $C$ are omitted. However, since the logistic term is small overall, adding $L$ and $C$ here
would just increase the numerical complexity of the system
without adding any significant contribution to the dynamics of the problem. This is reflected in the parameter sensitivity analysis provided later in the paper, and simulations (not shown) including all populations showed no appreciable difference.
Finally, inflammatory macrophages can die, metamorphose into repair macrophages
or be washed away by the lymph flow. This is embodied in the removal term $- ( d_2 + \rho C ) M$, where $d_2$ denotes the constant death rate. Here metamorphosis and removal are considered to be linearly proportional to the capillary density $C$ through the coefficient $\rho$: in particular, capillary formation is an index of progression through the healing process and, to reflect the decreased requirement for inflammatory macrophages as wounding proceeds, we assume the metamorphosis/removal rate increases with the size of $C$.
Combining these observations one derives the macrophage equation
\begin{equation} \label{eq:Meqn}
\frac{dM}{dt} = s_M + \alpha \frac{ b_1 T^2}{b_2 + T^4}
+ \beta r_2 M \left( 1-\frac{M}{k_1} \right) - (d_2 + \rho C) M \; .
\end{equation}
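For concreteness, the right-hand side of the macrophage equation can be translated directly into code (a Python sketch with the parameter values of Table \ref{table:parameters}, normal case; the function names are illustrative). Note that the chemotactic function $h_1$ attains its maximum at $T = b_2^{1/4} \approx 300$ pg/mm$^3$.

```python
# Sketch of the macrophage dynamics; parameter defaults follow the
# parameter table (normal case).  Function names are illustrative.
def h1(T, b1=8e8, b2=8.1e9):
    """Chemotactic function b1*T^2 / (b2 + T^4); peaks at T = b2**0.25."""
    return b1 * T**2 / (b2 + T**4)

def dM_dt(M, T, C,
          s_M=5.42e2, alpha=0.5, beta=5e-3, r2=1.22,
          k1=6e5, d2=0.2, rho=1e-5):
    """Right-hand side of the macrophage equation."""
    return (s_M + alpha * h1(T)
            + beta * r2 * M * (1 - M / k1)
            - (d2 + rho * C) * M)
```

At the initial state $M=1875$, $T=30$, $C=0$ the net rate is positive, consistent with the early macrophage increase discussed later.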
\subsubsection*{VEGF} This is a signal protein
whose main function is to induce the formation of vascular networks
by stimulating proliferation, migration and self-organisation of cells
after binding to specific receptors on their surface.
There are many kinds of VEGF:
while VEGF-A and VEGF-B are involved mainly in blood angiogenesis,
VEGF-C and VEGF-D are the most important biochemical mediators of lymphangiogenesis via the receptor VEGFR3
(although VEGF-C can also stimulate angiogenesis via VEGFR2).
For a comprehensive description of the growth factors involved in lymphangiogenesis see \cite{jussila2002,lohela2003}.
Henceforth ``VEGF'' refers to VEGF-C (and, to a lesser extent, VEGF-D), unless otherwise stated.
For VEGF we assume the following dynamics:
\begin{center}
\begin{tabular}{ C{2cm} c C{1.6cm} c C{2.2cm} c c c C{1.5cm} }
\footnotesize change in VEGF concentration & $=$ & \footnotesize constant source & $+$ & \footnotesize production by macrophages & $-$ & \footnotesize decay & $-$ & \footnotesize use by LECs.
\end{tabular}
\end{center}
\noindent
Since the normal VEGF level in the skin is nonzero \cite{hormbrey2003,papaioannou2009},
it is assumed there is a constant source $s_V$ of this growth factor from the surrounding tissues.
VEGF is produced by several cells, but macrophages are considered to be one of its main sources
in the context of wound healing \cite{kiriakidis2003,xiong1998}.
It is therefore natural to add the production term $+ r_3 M$
to the VEGF equation, where $r_3$ is the production rate of the chemical by macrophages.
On the other hand, the VEGF level is reduced by natural decay at constant rate $d_3$,
taken into account by the term $- d_3 V $.
In addition VEGF is internalised by cells:
effectively, LECs use VEGF to divide and form capillaries \cite{matsumoto2001,zachary2001};
it is assumed this process occurs at a constant rate $\gamma$, leading to the term $- \gamma VL$.
Thus, in this model the equation for VEGF dynamics is
\begin{equation} \label{eq:Veqn}
\frac{dV}{dt} = s_V + r_3 M - d_3 V - \gamma V L \; .
\end{equation}
\subsubsection*{LECs} As discussed above, lymphatic vessel walls are made of (lymphatic) endothelial cells.
The equation describing the presence of LECs in the wound consists of the following terms:
\begin{center}
\begin{tabular}{ C{1.5cm} c C{2.3cm} c C{1.8cm} c C{1.5cm} c C{1.6cm} }
\footnotesize change in LEC density & $=$ & \footnotesize growth, upregulated by VEGF and downregulated by TGF-$\beta$ & $+$ & \footnotesize inflow and chemotaxis by VEGF & $-$ & \footnotesize crowding effect and apoptosis & $-$ & \footnotesize transdifferentiation into capillaries.
\end{tabular}
\end{center}
\noindent
LEC growth is upregulated by VEGF \cite{bernatchez1999,whitehurst2006,zachary2001}
and downregulated by TGF-$\beta$ \cite{muller1987,sutton1991}.
The former observation is described mathematically
by augmenting the basal constant growth rate $c_1$ with a term that increases with $V$ in a saturating manner, governed by the parameters $c_2$ and $c_3$.
To account for the latter, the growth term is multiplied by a decreasing function of $T$.
Explicitly, the whole proliferation term is
\begin{equation} \label{eq:Leq-growth}
\left( c_1 + \frac{V}{c_2 + c_3 V} \right) \left( \frac{1}{1+c_4 T} \right) L
\end{equation}
where $c_4$ takes into account the ``intensity'' of the TGF-$\beta$ inhibition on LEC growth.
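The per-capita proliferation rate in (\ref{eq:Leq-growth}) saturates at $c_1 + 1/c_3 \approx 0.66$ day$^{-1}$ for large $V$, while TGF-$\beta$ scales it down by the factor $1/(1+c_4 T)$. This can be checked numerically (Python sketch; parameter values from Table \ref{table:parameters}, function name illustrative):

```python
def lec_growth_rate(V, T, c1=0.42, c2=42.0, c3=4.1, c4=0.24):
    """Per-capita LEC proliferation rate: the bracketed factors of the
    growth term, without the factor L."""
    return (c1 + V / (c2 + c3 * V)) / (1 + c4 * T)
```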
\noindent
It is assumed that LECs are brought into the wound by lymph flow at a constant rate $s_L$
and are chemoattracted by VEGF \cite{bernatchez1999,tammela2010}.
Considering a chemotactic function $h_2(V)$ of the same form
as that used for TGF-$\beta$-mediated chemotaxis (see \ref{appPAR}),
these phenomena are captured by the terms
$$
s_L + h_2(V) = s_L + \frac{b_3 V^2}{b_4 + V^4} \; .
$$
Since both of these movements originate from the interrupted lymphatic vasculature at the edges of the wound,
this flow will tend to decrease as the lymphatic network is restored.
Hence, supposing a linear correlation between the term above and the lymphatic regeneration,
the former is multiplied by the piecewise linear function $f(C)$ defined by
\begin{equation} \label{eq:f(C)def}
f(C) = \left\{ \begin{array}{cl}
1 - \frac{C}{C^*} & \mbox{ if } C < C^* \\
0 & \mbox{ if } C \geq C^*
\end{array} \right. \; .
\end{equation}
Here $C^*$ is a capillary density threshold value above which the lymphatic network is functional and uninterrupted and LEC flow stops.
Hence the final term for LEC inflow and chemotaxis is
\begin{equation} \label{eq:Leq-chemotax}
\left( s_L + \frac{b_3 V^2}{b_4 + V^4} \right) f(C) \; .
\end{equation}
LEC growth is limited by over-crowding of the wound space,
a fact that is taken into account by the negative term $- L{(M+L+C)}/{k_2}$ where $k_2$ relates to the carrying capacity.
Finally, individual or small clusters of LECs migrate into the wound and later form
multicellular groups that slowly connect to one another,
organising into vessel structures \cite{boardman2003,rutkowski2006}.
Here it is assumed that when LECs are sufficiently populous
(that is, their density becomes larger than a threshold value $L^*$)
they self-organise into capillaries at a rate which depends linearly on VEGF concentration via the term $\delta_2 V$.
In particular, endothelial cells tend to form network structures spontaneously (at a constant rate $\delta_1$, say) but the rate is increased by the presence of VEGF \cite{podgrabinska2002}.
These observations result in the transdifferentiation term $- \sigma(L,C)\cdot (\delta_1 + \delta_2 V)L$
where $\sigma(L,C)$ is the step function
\begin{equation} \label{eq:sigma(L,C)def}
\sigma(L,C) = \left\{ \begin{array}{cl}
1 & \mbox{ if } L+C \geq L^* \\
0 & \mbox{ if } L+C < L^*
\end{array} \right. \; .
\end{equation}
Note that $\sigma$ depends both on $L$ and $C$: this is justified by the observation that the self-organisation process begins when $L$ reaches the threshold $L^*$ and then continues as LECs start forming the first capillaries.
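The two switch functions can be sketched as follows (Python, with the threshold values $C^* = L^* = 10^4$ cells/mm$^3$ of Table \ref{table:parameters}):

```python
def f(C, C_star=1e4):
    """Piecewise-linear modulation of LEC inflow: 1 - C/C* below the
    threshold C*, zero once the lymphatic network is functional."""
    return 1.0 - C / C_star if C < C_star else 0.0

def sigma(L, C, L_star=1e4):
    """Step function: transdifferentiation switches on once L + C >= L*."""
    return 1.0 if L + C >= L_star else 0.0
```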
Therefore the complete LEC equation is
\begin{eqnarray} \label{eq:Leqn}
\frac{dL}{dt} & = & \left( c_1 + \frac{V}{c_2+c_3V} \right) \left(\frac{1}{1+c_4T}\right) L
+ \left( s_L + \frac{b_3 V^2}{b_4 + V^4} \right)f(C) \nonumber \\
& & - \frac{M+L+C}{k_2}L - \sigma(L,C)\cdot (\delta_1 + \delta_2 V) L.
\end{eqnarray}
\subsubsection*{Lymphatic capillaries} We assume that the lymphatic capillaries form simply from the self-organisation of LECs into a network structure.
Thus the capillary formation term is just the transdifferentiation term from the LEC equation above
and the dynamics of $C$ is modelled by
\begin{equation} \label{eq:Ceqn}
\frac{dC}{dt} = \sigma(L,C)\cdot (\delta_1 + \delta_2 V) L \; .
\end{equation}
The full system of equations is therefore given by
\begin{eqnarray} \label{eq:system}
\frac{dT}{dt} & = & \underbrace{ \left[ a_p p_0\exp(-a_p T_L t) + a_M M \right] }
_{\stackrel{ \mbox{\footnotesize activation by}}{ \mbox{\footnotesize enzymes \& M$\Phi$s}}}
\cdot \underbrace{ \left[ T_L + r_1 M \right] }_{ \mbox{\footnotesize latent TGF-$\beta$}}
- \underbrace{ d_1 T }_{ \mbox{\footnotesize decay} } \nonumber \\
\frac{dM}{dt} & = & \underbrace{s_M}_{ \stackrel{ \mbox{\footnotesize constant}}{ \mbox{\footnotesize source}} }
+ \underbrace{ \alpha \frac{ b_1 T^2}{b_2 + T^4}}
_{\stackrel{ \mbox{\footnotesize chemotaxis}}{ \mbox{\footnotesize by TGF-$\beta$}}}
+ \underbrace{ \beta r_2 M \left( 1-\frac{M}{k_1} \right) }_{ \mbox{\footnotesize logistic growth} }
- \underbrace{ (d_2 + \rho C) M }_{ \stackrel{ \mbox{\footnotesize removal and}}{ \mbox{\footnotesize metamorphosis}} } \nonumber \\
\frac{dV}{dt} & = & \underbrace{s_V}_{ \stackrel{ \mbox{\footnotesize constant}}{ \mbox{\footnotesize source}} }
+ \underbrace{ r_3 M }_{\stackrel{ \mbox{\footnotesize production}}{ \mbox{\footnotesize by M$\Phi$s}}}
- \underbrace{ d_3 V }_{ \mbox{\footnotesize decay}}
- \underbrace{ \gamma VL }_{ \mbox{\footnotesize use by LECs}} \nonumber \\
\frac{dL}{dt} & = & \underbrace{ \left( c_1 + \frac{V}{c_2+c_3V} \right) \left(\frac{1}{1+c_4T}\right) L }
_{\stackrel{ \mbox{\footnotesize growth upregulated by VEGF} }{ \mbox{\footnotesize and downregulated by TGF-$\beta$}}}
+ \underbrace{ \left( s_L + \frac{b_3 V^2}{b_4 + V^4} \right)f(C) }
_{\stackrel{ \mbox{\footnotesize inflow and}}{ \mbox{\footnotesize chemotaxis by VEGF}}} \\
& & - \underbrace{\frac{M+L+C}{k_2}L}_{ \stackrel{ \mbox{\footnotesize crowding effect}}{ \mbox{\footnotesize and apoptosis}} }
- \underbrace{ \sigma(L,C)\cdot (\delta_1 + \delta_2 V) L }_{\stackrel{ \mbox{\footnotesize transdifferentiation}}{ \mbox{\footnotesize into capillaries}}} \nonumber \\
\frac{dC}{dt} & = & \underbrace{ \sigma(L,C)\cdot (\delta_1 + \delta_2 V) L }_{ \mbox{\footnotesize transdifferentiation of LECs} } \nonumber
\end{eqnarray}
where $f(C)$ and $\sigma(L,C)$ are defined in (\ref{eq:f(C)def}) and (\ref{eq:sigma(L,C)def}), respectively.
\subsection{Parameters and initial conditions}
\subsubsection*{Parameters}
Table \ref{table:parameters} gives a full list of parameter values, their units and the sources used for their estimation in the normal (non-diabetic) case.
It is remarked that great care was put into assessing the parameter values, and of the 31 parameters listed in the table, 25 have been estimated from biological data.
A detailed description of the estimation of each parameter can be found in \ref{appPAR}.
\begin{table}[p]
\begin{small}
\makebox[\textwidth][c]{\begin{tabular}{cccc}
\hline
\textsc{parameter} & \textsc{value} & \textsc{units} & \textsc{source} \\
\hline
$a_p$ & $2.9\times 10^{-2}$ & $\mbox{mm}^3\mbox{pg}^{-1}\mbox{day}^{-1}$ & \cite{decrescenzo2001} \\
$p_0$ & $2.5\times 10^5$ & $\mbox{pg mm}^{-3}$ & no data found \\
$a_M$ & 0.45 & $\mbox{mm}^3\mbox{cells}^{-1}\mbox{day}^{-1}$ & \cite{gosiewska1999,nunes1995} \\
$T_L$ & 18 & $\mbox{pg mm}^{-3}$ & (\cite{oi2004}) \\
$r_1$ & $3\times 10^{-5}$ & $\mbox{pg cells}^{-1}$ & \cite{khalil1993} \\
$d_1$ & $5\times 10^2$ & $\mbox{day}^{-1}$ & \cite{kaminska2005} \\
\hline
$s_M$ & $5.42\times 10^2$ & $\mbox{cells mm}^{-3}\mbox{day}^{-1}$ & (\cite{weber1990})\\
$\alpha$ & 0.5 & 1 & \cite{waugh2006} \\
$b_1$ & $8\times 10^8$ & $\mbox{cells pg}^2(\mbox{mm}^3)^{-3}\mbox{day}^{-1}$ & (\cite{nor2005})\\
$b_2$ & $8.1\times 10^9$ & $(\mbox{pg mm}^{-3})^4$ & \cite{wahl1987,yang1999} \\
$\beta$ & $5\times 10^{-3}$ & 1 & \cite{greenwood1973} \\
$r_2$ & 1.22 & $\mbox{day}^{-1}$ & \cite{zhuang1997} \\
$k_1$ & $6\times 10^5$& $\mbox{cells mm}^{-3}$ & \cite{zhuang1997} \\
$d_2$ & 0.2 & $\mbox{day}^{-1}$ & \cite{cobbold2000} \\
$\rho$ & $10^{-5}$ & $\mbox{mm}^{3}\mbox{cells}^{-1}\mbox{day}^{-1}$ & \cite{rutkowski2006} \\
\hline
$s_V$ & 1.9 & $\mbox{pg mm}^{-3}\mbox{day}^{-1}$ & (\cite{hormbrey2003,papaioannou2009}) \\
$r_3$ & $1.9\times 10^{-3}$ & $\mbox{pg cells}^{-1}\mbox{day}^{-1}$ & (\cite{kiriakidis2003,sheikh2000})\\
$d_3$ & 11 & $\mbox{day}^{-1}$ & \cite{kleinheinz2010} \\
$\gamma$ & $1.4\times 10^{-3}$ & $\mbox{mm}^{3}\mbox{cells}^{-1}\mbox{day}^{-1}$& \cite{gabhann2004} \\
\hline
$c_1$ & 0.42 & $\mbox{day}^{-1}$ & \cite{nguyen2007} \\
$c_2$ & 42 & $\mbox{pg day mm}^{-3}$ & \cite{whitehurst2006} \\
$c_3$ & 4.1 & $\mbox{day}$ & \cite{whitehurst2006} \\
$c_4$ & 0.24 & $\mbox{mm}^{3}\mbox{pg}^{-1}$ & \cite{muller1987}\\
$s_L$ & $5\times 10^2$ & $\mbox{cells mm}^{-3}\mbox{day}^{-1}$ & no data found \\
$b_3$ & $10^7$ & $\mbox{cells pg}^2(\mbox{mm}^3)^{-3}\mbox{day}^{-1}$ & no data found \\
$b_4$ & $8.1\times 10^9$ & $(\mbox{pg mm}^{-3})^4$ & estimated $\approx b_2$\\
$C^*$ & $10^4$ & $\mbox{cells mm}^{-3}$ & \cite{rutkowski2006} \\
$k_2$ & $4.71\times 10^5$ & $\mbox{cells day mm}^{-3}$ & \cite{nguyen2007}\\
$L^*$ & $10^4$ & $\mbox{cells mm}^{-3}$ & \cite{rutkowski2006} \\
$\delta_1$ & $5\times 10^{-2}$ & $\mbox{day}^{-1}$ & no data found \\
$\delta_2$ & $10^{-3}$ & $\mbox{mm}^3\mbox{pg}^{-1}\mbox{day}^{-1}$ & no data found \\
\hline
\end{tabular}}
\end{small}
\caption{ A list of all the parameters appearing in the model equations (details of the estimation are provided in \ref{appPAR}). Each is supplied with its estimated value, units and the source used (when possible) to assess it. References in brackets mean that although the parameter was not \emph{directly} estimated from a dataset, its calculated value was compared with the biological literature; the entry ``no data found'' signifies that no suitable data were found to estimate the parameter. The value of $b_4$ (the VEGF concentration corresponding to maximum LEC chemotaxis) was assumed to be similar to that of its TGF-$\beta$ counterpart $b_2$; this choice was dictated by the lack, to the authors' knowledge, of relevant biological data. }
\label{table:parameters}
\end{table}
\subsubsection*{Initial conditions}
In the present model, the initial time-point $t=0$ corresponds to the release of enzymes by platelets within the first hour after wounding \cite{sinclair1994,singer1999}.
The initial amounts of active TGF-$\beta$, macrophages and VEGF are taken to be their equilibrium values,
estimated from experimental data as shown in Table \ref{table:initialconds}.
It is assumed that there are no endothelial cells or capillaries at $t=0$.
\begin{table}[h]
\begin{small}
\makebox[\textwidth][c]{\begin{tabular}{cccc}
\hline
\textsc{variable} & \textsc{value} & \textsc{units} & \textsc{source} \\
\hline
$T(0)$ & 30 & pg/mm$^3$ & \cite{yang1999} \\
$M(0)$ & 1875 & cells/mm$^3$ & \cite{weber1990} \\
$V(0)$ & 0.5 & pg/mm$^3$ & \cite{hormbrey2003,papaioannou2009} \\
$L(0)$ & 0 & cells/mm$^3$ & assumption \\
$C(0)$ & 0 & cells/mm$^3$ & assumption \\
\hline
\end{tabular}}
\end{small}
\caption{ Values of the model variables at $t=0$. }
\label{table:initialconds}
\end{table}
\section{Results and analysis} \label{sec:results}
We now present a typical solution of the system (\ref{eq:system})
and compare it with biological data.
The system is solved numerically with the standard MATLAB ODE solver \texttt{ode45}, with relative tolerance $10^{-6}$ and absolute tolerance $10^{-9}$, over a time interval of 100 days.
It is remarked that the present model chiefly addresses inflammation and the early proliferation stage of the wound healing process.
In healthy subjects the inflammatory phase starts a few hours after injury and lasts approximately 1 or 2 weeks,
but it is prolonged in diabetic patients.
Moreover, lymphangiogenesis occurs between 25 and 60 days after wounding,
much later than blood angiogenesis which is observed between day 7 and day 17 \cite{benest2008,rutkowski2006}.
Thus, the equations are expected to realistically describe the phenomenon for about the first 100 days post-wounding.
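The numerical setup can be sketched as follows (a Python stand-in for the MATLAB \texttt{ode45} run described above; parameter names and values follow Tables \ref{table:parameters} and \ref{table:initialconds}). The right-hand side below can be passed directly to an ODE integrator such as \texttt{scipy.integrate.solve\_ivp}.

```python
import numpy as np

# Parameter values from the parameter table (normal case).
P = dict(a_p=2.9e-2, p0=2.5e5, a_M=0.45, T_L=18.0, r1=3e-5, d1=5e2,
         s_M=5.42e2, alpha=0.5, b1=8e8, b2=8.1e9, beta=5e-3, r2=1.22,
         k1=6e5, d2=0.2, rho=1e-5, s_V=1.9, r3=1.9e-3, d3=11.0,
         gamma=1.4e-3, c1=0.42, c2=42.0, c3=4.1, c4=0.24, s_L=5e2,
         b3=1e7, b4=8.1e9, C_star=1e4, k2=4.71e5, L_star=1e4,
         delta1=5e-2, delta2=1e-3)

def rhs(t, y, p=P):
    """Right-hand side of the full system for y = (T, M, V, L, C)."""
    T, M, V, L, C = y
    f = max(1.0 - C / p['C_star'], 0.0)              # inflow modulation
    sig = 1.0 if L + C >= p['L_star'] else 0.0       # transdiff. switch
    dT = (p['a_p'] * p['p0'] * np.exp(-p['a_p'] * p['T_L'] * t)
          + p['a_M'] * M) * (p['T_L'] + p['r1'] * M) - p['d1'] * T
    dM = (p['s_M'] + p['alpha'] * p['b1'] * T**2 / (p['b2'] + T**4)
          + p['beta'] * p['r2'] * M * (1 - M / p['k1'])
          - (p['d2'] + p['rho'] * C) * M)
    dV = p['s_V'] + p['r3'] * M - p['d3'] * V - p['gamma'] * V * L
    dL = ((p['c1'] + V / (p['c2'] + p['c3'] * V)) / (1 + p['c4'] * T) * L
          + (p['s_L'] + p['b3'] * V**2 / (p['b4'] + V**4)) * f
          - (M + L + C) / p['k2'] * L
          - sig * (p['delta1'] + p['delta2'] * V) * L)
    dC = sig * (p['delta1'] + p['delta2'] * V) * L
    return np.array([dT, dM, dV, dL, dC])

y0 = np.array([30.0, 1875.0, 0.5, 0.0, 0.0])  # initial conditions table
print(rhs(0.0, y0))
```

At $t=0$ the computed rates confirm the expected behaviour: a sharp initial rise in TGF-$\beta$, LEC inflow at a rate close to $s_L$, and no capillary formation until $L+C$ reaches $L^*$.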
The TGF-$\beta$ level is expected to display a rapid spike in the first day post injury before returning to its equilibrium value \cite{yang1999}.
In Figure \ref{fig:typicalsol-T} the simulation output is compared with a biological dataset. Both exhibit the expected initial spike, but the data also show a second peak around day 5, reported also in \cite{nor2005}. We recall that TGF-$\beta$ exists in at least three known isoforms (TGF-$\beta$1, TGF-$\beta$2 and TGF-$\beta$3); the biological dataset includes all forms of TGF-$\beta$, which are also involved in other wound healing processes not modelled here, such as collagen deposition (the time dynamics of the different isoforms can be found in \cite{yamano2013}). Nevertheless, the predicted trend of TGF-$\beta$ concentration in the wound matches the biological reality fairly well.
\begin{figure}[h]
\centering
\makebox[\textwidth][c]{\includegraphics[height=4.5cm]{typsol-T.png}\includegraphics[height=4.5cm]{TGFyang.png}}
\caption{A comparison of the simulation output for the time course of TGF-$\beta$ concentration
with data from \cite[Figure 2]{yang1999}, showing the time course of active TGF-$\beta$ generation during wound repair in rats.}
\label{fig:typicalsol-T}
\end{figure}
Macrophage levels are observed to reach a peak approximately 5 days after injury before returning to their equilibrium level \cite{nor2005}.
Again the model prediction is consistent with the biological literature,
as can be seen in Figure~\ref{fig:typicalsol-M}.
\begin{figure}[h]
\centering
\includegraphics[height=4.5cm]{typsol-M.png}
\includegraphics[height=4.5cm]{norFig3c.png}
\caption{ A comparison between model and experimental data for the time course of macrophage density. Experimental data taken from \cite[Figure 3c]{nor2005}: note that the time-course comparison here is against the black bars, representing macrophage numbers in normal (wild-type) mice. }
\label{fig:typicalsol-M}
\end{figure}
VEGF is also reported to reach its maximum concentration 5 days after wounding \cite{sheikh2000}. This is unsurprising given the above macrophage dynamics and the fact that macrophages are understood to be primarily responsible for the production of the protein.
Once again there is a strong correlation between the results of the theoretical model and experimental observations, as shown in Figure \ref{fig:typicalsol-V}.
\begin{figure}[h]
\centering
\includegraphics[height=4.5cm]{typsol-V.png}
\includegraphics[height=5.5cm]{VEGFsheikh.png}
\caption{A comparison of the simulation output for the time course of VEGF concentration with data from \cite[Figure 2]{sheikh2000}, where VEGF was measured in rat wound fluid (note that the units on the vertical axis are pg/mL, where 1000 pg/mL = 1 pg/mm$^3$). }
\label{fig:typicalsol-V}
\end{figure}
LEC levels are expected to increase immediately after wounding
but only later do the LECs self-organise into capillaries, around day 25 \cite{rutkowski2006}.
This is reflected in the simulation shown in Figure \ref{fig:typicalsol-L+C}.
Here LECs proliferate in the wound space until reaching the threshold level $L^* = 10^4$ cells/mm$^3$ around day 20. They then start agglomerating into capillary structures, commencing the lymphangiogenesis process proper.
\begin{figure}[h]
\centering
\includegraphics[height=4.1cm]{typsol-L.png}
\includegraphics[height=4.1cm]{typsol-C.png}
\includegraphics[height=4.1cm]{typsol-L+C.png}
\caption{Simulation output for the time course of LEC density, lymphatic capillary density and their sum.
The sum $L+C$ is also plotted, since LECs and capillary cells are difficult to differentiate experimentally
and any cell counts are likely to reflect the total density of these two cell types.}
\label{fig:typicalsol-L+C}
\end{figure}
\subsection{Modelling the diabetic case} In order to simulate the diabetic case, some parameter values are changed as described in the following.
Unfortunately, it is difficult to obtain a precise quantitative assessment of the appropriate changes,
and therefore the values chosen here have only qualitative significance.
Several studies report that the TGF-$\beta$ level is significantly lower in diabetic wounds compared with controls.
This seems to be caused by impaired TGF-$\beta$ activation both by platelets and macrophages and by reduced production of TGF-$\beta$ by macrophages \cite{almulla2011,mirza2011,yamano2013}.
These features of the diabetic case are modelled by applying the following modifications to the parameters:
$$
a_p^{diab} = \frac{1}{2} a_p^{norm} < a_p^{norm} \quad , \quad a_M^{diab} = \frac{1}{2} a_M^{norm} < a_M^{norm} \quad .
$$
Furthermore, in diabetic wounds the macrophage density is higher than normal.
In particular, the inflammatory macrophage phenotype persists through several days after injury, showing an impaired transition to the repair phenotype \cite{mirza2011}.
In addition, macrophage functions (such as phagocytosis and migration) are impaired in the diabetic case \cite{khanna2010,xu2013}.
These differences from the normal case are reflected in the following choice of parameter changes:
\begin{equation*}
\begin{array}{r c l c r c l}
\alpha^{diab} &=& 0.8 > \alpha^{norm}, &\quad& b_1^{diab} &=& \frac{3}{4} b_1^{norm} < b_1^{norm}, \\
& & & & & & \\
d_2^{diab} &=& \frac{1}{2} d_2^{norm} < d_2^{norm}, &\quad& r_3^{diab} &=& \frac{1}{2} r_3^{norm} < r_3^{norm}.
\end{array}
\end{equation*}
Finally, it is reported that endothelial cell proliferation
is markedly reduced in diabetic wounds when compared with the normal case \cite{darby1997,curcio1992,kolluru2012}
(a detailed discussion of endothelial dysfunction in diabetes can be found in \cite{calles2001}).
This phenomenon is reflected in the model by reducing the basal proliferation rate of endothelial cells:
$$
c_1^{diab} = \frac{1}{2} c_1^{norm} < c_1^{norm} \; .
$$
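Collecting the modifications above, the diabetic parameter set can be derived programmatically from the normal one (a Python sketch; the dictionary keys are illustrative):

```python
def diabetic(params):
    """Return a copy of the normal parameter set with the
    diabetes-related modifications applied."""
    p = dict(params)
    p['a_p'] *= 0.5    # impaired TGF-beta activation by enzymes
    p['a_M'] *= 0.5    # impaired TGF-beta activation by macrophages
    p['alpha'] = 0.8   # larger differentiating fraction (normal: 0.5)
    p['b1'] *= 0.75    # impaired macrophage chemotaxis
    p['d2'] *= 0.5     # slower macrophage removal (persistent inflammation)
    p['r3'] *= 0.5     # reduced VEGF production by macrophages
    p['c1'] *= 0.5     # reduced basal LEC proliferation
    return p
```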
\subsection{Comparison of results in the normal and diabetic cases}
Figures \ref{fig:sim-diab-T}-\ref{fig:sim-diab-L+C} show numerical simulations of the model comparing the time-course of the five model variables in the normal (blue solid line) and diabetic (red dashed line) case.
\subsubsection*{TGF-$\beta$} The level of TGF-$\beta$ recorded in diabetic wounds is lower than in the normal case,
at least in the first 20 days after injury \cite{almulla2011,mirza2011,yamano2013}.
Model simulations are consistent with this (Figure \ref{fig:sim-diab-T}).
\begin{figure}[h]
\centering
\includegraphics[height=4.5cm]{SIM-T.png}
\includegraphics[height=3.2cm]{yamano-fig3GH.png}
\caption{ Time course of TGF-$\beta$ concentration in normal and diabetic wounds
as predicted by the model and as found empirically in
\cite[Figures 3G and 3H]{yamano2013}, where the authors study molecular dynamics during oral wound healing in normal (blue lines) and diabetic (red lines) mice. }
\label{fig:sim-diab-T}
\end{figure}
\subsubsection*{Macrophages} Experiments show that the density of macrophages in diabetic wounds is higher than in the normal case
and they persist for longer in the wound site \cite{mirza2011,rodero2010,xu2013}. Model simulations match these observations (Figure \ref{fig:sim-diab-M}).
\begin{figure}[h]
\centering
\makebox[\textwidth][c]{\includegraphics[height=4.5cm]{SIM-M.png}
\includegraphics[height=4.5cm]{mirzaFig1B.png}}
\caption{Time course of macrophage density in the normal and diabetic cases.
We compare the model prediction with \cite[Figure 1B]{mirza2011}.
In the experimental results, the relative height of shaded to solid white bars indicates the relative macrophage density in diabetic/non-diabetic wounds in mice (assessed via Ly6C expression, a marker for the macrophage lineage). }
\label{fig:sim-diab-M}
\end{figure}
\subsubsection*{VEGF} The VEGF level during wound healing is lower in diabetic patients \cite{almulla2011,mirza2011}.
In fact, as described below, a key target for the design of new therapies has been increasing VEGF levels.
The simulation output and a biological dataset are compared in Figure \ref{fig:sim-diab-V}.
\begin{figure}[h]
\centering
\includegraphics[height=4.5cm]{SIM-V.png}
\includegraphics[height=4.5cm]{almullaFig2G.png}
\caption{Time course of VEGF in normal and diabetic case wounds.
Compare the simulation with the data reported in \cite[Figure 2G]{almulla2011},
representing VEGF presence in control (white bars) and diabetic (black bars) rats.
In fact, \cite{almulla2011} investigates the connection between a defect in resolution of inflammation and the impairment of TGF-$\beta$ signaling, resulting in delayed wound healing in diabetic female rats.
The abbreviations in the legend stand for: C, control; D, diabetic; +E2, diabetic-treated with estrogen; TNFR1, diabetic treated with the TNF receptor antagonist PEG-sTNF-RI; VEGF, vascular endothelial growth factor. }
\label{fig:sim-diab-V}
\end{figure}
\subsubsection*{LECs \& Capillaries}
In diabetic patients, lymphatic capillary formation is delayed and insufficient \cite{asai2012,maruyama2007,saaristo2006}.
The model simulations are consistent with this (Figure \ref{fig:sim-diab-L+C}).
\begin{figure}[h]
\centering
\includegraphics[height=4.3cm]{SIM-L.png}
\includegraphics[height=4.3cm]{SIM-C.png}
\includegraphics[height=4cm]{maruyamaFig2b.png}
\caption{Time course of LEC and capillary density in normal and diabetic cases,
compared with \cite[Figure 2b]{maruyama2007}.
The study presented in \cite{maruyama2007} investigates the role of wound-associated lymphatic vessels in corneal inflammation and in a skin wound model of wild-type and diabetic mice. The figure shows a quantification of lymphangiogenesis in the corneal suture model assay in the wild-type (db/+) and diabetic (db/db) cases. No suitable data were found in the skin wound model.
}
\label{fig:sim-diab-L+C}
\end{figure}
\subsection{Analysis of the model}
In this section the steady states of the system are identified and a sensitivity analysis of the model parameters is performed.
\subsubsection{Steady States}
For the parameter set studied, at $t=0$ there are no LECs in the wound, but subsequently they increase towards a positive value of approximately $2\times 10^5$ cells/mm$^3$. However, when they reach the ``threshold'' density $L^*=10^4$ cells/mm$^3$, the system steady states change and $L$ starts to decrease towards zero. In the meantime, lymphatic capillaries start forming; their final value depends on the dynamics of the system, but in any case it will be larger than $C^*=10^4$ cells/mm$^3$.
This ``switch'' in the steady state values is due to the presence of two piecewise defined functions ($\sigma$ and $f$) in the system.
On the other hand, there is always one stable steady state for $M$ which also defines one for $T$:
$$
M^{eq}\approx 1875 \mbox{ cells/mm}^3 \quad
\mbox{and} \quad T^{eq}=\frac{a_M}{d_1}(T_L+r_1M^{eq})M^{eq} \approx 30 \mbox{ pg/mm}^3 \; .
$$
Note that the $M$ steady state (and thus also that for $T$) is unique for parameters with biologically relevant values.
For the $V$-equilibrium, the following expression is found:
$$
V^{eq}=\frac{s_V+r_3 M^{eq}}{d_3+\gamma L^{eq}} \; .
$$
Note that $V^{eq}$ depends on $L^{eq}$; therefore $V$ will tend to different values according to the current $L$ steady state; for $L^{eq}=0$ one finds $V^{eq}\approx 0.5$ pg/mm$^3$.
Details about how these steady states were determined can be found in \ref{appSS}.
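These formulas can be checked numerically (Python sketch; parameter values from Table \ref{table:parameters}, with $M^{eq}=1875$ cells/mm$^3$):

```python
a_M, T_L, r1, d1 = 0.45, 18.0, 3e-5, 5e2     # TGF-beta parameters
s_V, r3, d3 = 1.9, 1.9e-3, 11.0              # VEGF parameters
M_eq = 1875.0                                # macrophage steady state

T_eq = a_M / d1 * (T_L + r1 * M_eq) * M_eq   # approx 30 pg/mm^3
V_eq = (s_V + r3 * M_eq) / d3                # L_eq = 0 branch: approx 0.5 pg/mm^3
```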
The stability of the steady states is determined numerically. The stability of $M^{eq}$ is deduced from the shape of the numerically-plotted $M$-nullcline, and the stability of the other steady states can be inferred from the simulations of the full system. See for instance the simulation shown in Figure \ref{fig:sim250}, where the model is run over a time interval of 250 days: here it is evident that all the variables tend to stay at a stable value after about 100 days post-wounding.
Since some of the parameters were modified to simulate diabetes-related conditions, the steady states for the diabetic case differ from the corresponding ``normal'' ones. In particular, the TGF-$\beta$, VEGF and capillary equilibrium values are lower in the diabetic case, while the macrophage level is higher than in the normal case. LECs go to zero in both cases.
\begin{figure}[p]
\centering
\includegraphics[height=4.1cm]{sim250-T.png}
\includegraphics[height=4.1cm]{sim250-M.png}
\includegraphics[height=4.1cm]{sim250-V.png}
\includegraphics[height=4.1cm]{sim250-L.png}
\includegraphics[height=4.1cm]{sim250-C.png}
\caption{ Simulation of the model in both normal (solid blue) and diabetic (dashed red) cases over a time period of 250 days. }
\label{fig:sim250}
\end{figure}
Although this analysis does not by itself give profound insight into lymphangiogenesis, it provides extra information about the dynamics of the system. More specifically, it shows that for realistic parameter values the system has only one steady state, which is in agreement with experimental observations.
\subsubsection{Parameter Sensitivity Analysis}
Here a numerical parameter sensitivity analysis of the model is presented which plays two important roles. On the one hand, it demonstrates which parameters are most significant in the model,
and thereby provides a deeper understanding of the dynamics of the system. On the other hand, it constitutes the first step towards the design of new therapeutic approaches.
To estimate the dependence of the model on a given parameter $p$,
the effect of a change in $p$ on the final capillary density $C$ at day 100 is quantified.
To begin, $p$ is increased by 10\% and thereafter the system is solved over the time interval $[0,100]$. The final value of the capillary density thus obtained, denoted $C^{+10\%}$, is then compared with the reference value $C^{ref}$ of the corresponding density in the original system.
The percentage change is defined by
\begin{equation} \label{eq:PercentChange}
\mbox{ percentage change in }C = 100 \times \frac{C^{+10\%}-C^{ref}}{C^{ref}} \quad .
\end{equation}
The same procedure is then repeated, this time with the parameter $p$ \emph{decreased} by 10\%, yielding the corresponding value $C^{-10\%}$.
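The sensitivity index of (\ref{eq:PercentChange}) is then just (Python sketch; the solver runs producing $C^{\pm 10\%}$ are omitted):

```python
def percentage_change(C_perturbed, C_ref):
    """Percentage change in final capillary density relative to the
    reference run, as used in the sensitivity analysis."""
    return 100.0 * (C_perturbed - C_ref) / C_ref
```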
The results are summarised in Figure \ref{fig:PSA}.
\begin{figure}[h]
\makebox[\textwidth][c]{\includegraphics[width=17cm]{graphOK.png}}
\caption{Percentage change in the final capillary density $C(100)$ when every parameter is increased/decreased by 10\%.}
\label{fig:PSA}
\end{figure}
It is notable that no single-parameter perturbation results in a percentage change in final capillary density of more than 15\%.
Thus, the model is quite robust in terms of dependence on the parameters. Percentage changes over 5\% are observed only for eight parameters.
Of these, one needs to \emph{decrease} $a_M$, $T_L$, $s_M$ or $c_4$ to observe an increase in the final capillary density;
while a similar enhancement is obtained by \emph{increasing} $d_1$, $d_2$, $c_1$ or $k_2$.
\section{Therapies} \label{sec:therapies}
\subsection{Existing experimental treatments}
Although there is at present no approved therapy for enhancement of lymphangiogenesis
(in wound healing or in any other context), many studies and experiments have been published exploring potential treatments.
In the following, three such experiments are reported and then simulated.
\subsubsection*{Administration of TGF-$\beta$ Receptor-Inhibitor}
This substance binds to the TGF-$\beta$ receptors on the surface of surrounding cells,
thus making them ``insensitive'' to TGF-$\beta$ molecules and their effect.
\cite{oka2008} reports a study of the effect of TGF-$\beta$ on lymphangiogenesis in which human LECs are cultured and quantified after treatments with TGF-$\beta$1 or T$\beta$R-I inhibitor to assess cell growth, cord formation and cell migration.
It is observed that TGF-$\beta$1 treatment decreases cord formation, while T$\beta$R-I inhibitor treatment increases it.
These results are consistent with those found in \cite{clavin2008}, where it is reported that a
higher level of TGF-$\beta$1 is associated with delayed recruitment and decreased proliferation of LECs during wound repair.
To simulate the treatment with T$\beta$R-I inhibitor, the cell migration assay is considered.
Here, the inhibitor was added at 3 $\mu$M = 817 pg/mm$^3$ (the molecular weight is 272).
Since this is significantly larger than the concentration of TGF-$\beta$ in our model and in normal skin
(in both cases the maximum level is 300 pg/mm$^3$),
this treatment is simulated by setting the parameters $\alpha$ and $c_4$ equal to zero
(that is, TGF-$\beta$ molecules have no effect on cells because their receptors are ``occupied'' by the inhibitor).
The effects of this ``virtual treatment'' are shown in Figure \ref{fig:oka} and match the described TGF-$\beta$ inhibitor experiment well: LEC and capillary densities are markedly increased by the treatment.
\begin{figure}[h]
\centering
\makebox[\textwidth][c]{\includegraphics[height=4.2cm]{oka-T.png}
\includegraphics[height=4.2cm]{oka-M.png}
\includegraphics[height=4.2cm]{oka-V.png}}
\makebox[\textwidth][c]{\includegraphics[height=4.2cm]{oka-L.png}
\includegraphics[height=4.2cm]{oka-C.png}
\includegraphics[height=4.2cm]{oka-L+C.png}}
\caption{ Time courses of $T$, $M$, $V$, $L$, $C$ and $L+C$ in a simulation of the T$\beta$R-I inhibitor experiment described in \cite{oka2008}. }
\label{fig:oka}
\end{figure}
\subsubsection*{Macrophage-based treatments}
Another therapeutic approach is to add macrophages to the wound,
so that they secrete VEGF and other substances that are known to induce lymphangiogenesis.
In \cite{kataru2009} an ``opposite'' experiment is implemented:
here a systemic depletion of macrophages is reported to markedly reduce lymphangiogenesis.
This is in accordance with \cite{watari2008}, in which it is observed that
the induction of macrophage apoptosis inhibits IL-1$\beta$-induced lymphangiogenesis.
One hypothesis suggested to explain such results is that
because of the reduced level of macrophages, less VEGF is produced and this impairs LEC proliferation and capillary formation.
We simulated the increase in macrophage apoptosis by taking a larger value of $d_2$ in the system (for instance, doubling it).
The output of the model in which $d_2$ is doubled (both in normal and diabetic cases)
is reported in Figure \ref{fig:kataru}.
In this case, the output is in contrast with what is described in the biological studies:
although fewer macrophages and consequently less VEGF are present,
more LECs and capillaries form after the simulated treatment.
\begin{figure}[h]
\centering
\makebox[\textwidth][c]{\includegraphics[height=4.2cm]{kataru-T.png}
\includegraphics[height=4.2cm]{kataru-M.png}
\includegraphics[height=4.2cm]{kataru-V.png}}
\makebox[\textwidth][c]{ \includegraphics[height=4.2cm]{kataru-L.png}
\includegraphics[height=4.2cm]{kataru-C.png}
\includegraphics[height=4.2cm]{kataru-L+C.png}}
\caption{ Time courses of $T$, $M$, $V$, $L$, $C$ and $L+C$ in a simulation of the macrophage-depletion experiment
described in \cite{kataru2009}. }
\label{fig:kataru}
\end{figure}
This result could be explained by the fact that, in the model, a reduction in macrophage density implies a reduction in TGF-$\beta$ level, so that the inhibition of LEC proliferation is smaller and hence there are more endothelial cells to form the capillaries. In fact, in the previous section it was found that the system is much more sensitive to $c_4$
than to $c_2$, $c_3$ or $\delta_2$.
It is then natural to consider the effect of fixing $T=30$ pg/mm$^3$ in the LEC growth term (\ref{eq:Leq-growth}); this level of $T$ corresponds to the TGF-$\beta$ equilibrium.
The simulation output in this case is shown in Figure \ref{fig:macrophagetreatT=30}.
\begin{figure}[h]
\centering
\makebox[\textwidth][c]{\includegraphics[height=4.2cm]{mtr2-T.png}
\includegraphics[height=4.2cm]{mtr2-M.png}
\includegraphics[height=4.2cm]{mtr2-V.png}}
\makebox[\textwidth][c]{\includegraphics[height=4.2cm]{mtr2-L.png}
\includegraphics[height=4.2cm]{mtr2-C.png}
\includegraphics[height=4.2cm]{mtr2-L+C.png}}
\caption{ Time courses of $T$, $M$, $V$, $L$, $C$ and $L+C$ in a simulation of the macrophage-depletion experiment described in \cite{kataru2009}, where the $T$ in the LEC growth inhibition term is substituted by $T^{eq} = 30$ pg/mm$^3$. }
\label{fig:macrophagetreatT=30}
\end{figure}
With $T$ fixed, the difference between the treated and untreated cases is very small,
but an increase in capillary formation is still observed in spite of the lower VEGF level.
This may be due to the fact that, with fewer macrophages, the crowding term in the LEC equation is smaller, which facilitates the growth and accumulation of endothelial cells.
In fact, if the $M$ in the crowding term is fixed at its equilibrium value of 1,875 cells/mm$^3$,
there is no difference at all between treated and untreated cases.
Note that this result could have been foreseen from the parameter sensitivity analysis (Figure \ref{fig:PSA})
which predicted that a 10\% increase in $d_2$ induces a 5 to 10\% \emph{increase} in final capillary density.
\subsubsection*{VEGF supply}
A third documented approach to enhance lymphangiogenesis consists of supplying VEGF to the wound,
since this protein promotes both LEC growth and the ability of LECs to form a network-like structure.
For instance, in \cite{zheng2007} a wound healing assessment is carried out in normal and diabetic mice
after VEGF treatment.
More precisely, two different types of VEGF were studied: VEGF-A$_{164}$ and VEGF-E$_{NZ7}$.
The authors observed that the treatment with VEGF-A$_{164}$ increased macrophage numbers and the extent of lymphangiogenesis in both wild-type and diabetic cases, while VEGF-E$_{NZ7}$ did not induce any significant change.
In order to reproduce the experiment \emph{in silico},
the amount of supplied VEGF is estimated in Appendix \ref{app-VEGFsupply}.
Then, a constant VEGF supply of $1.8\times 10^2$ pg/mm$^3$ is introduced into the model system for 10 days.
The output is reported in Figure \ref{fig:zheng}.
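In the simulation, the treatment enters simply as an extra source term in the VEGF equation that is switched on during the first ten days. A minimal sketch (the function name and the Euler update shown in the comment are illustrative, not the paper's code):

```python
def vegf_supply(t, rate=180.0, t_on=0.0, t_off=10.0):
    """Exogenous VEGF supply term: a constant rate (1.8e2 pg/mm^3, units as
    in the text) active only for t_on <= t <= t_off, zero otherwise."""
    return rate if t_on <= t <= t_off else 0.0

# Illustrative use inside a forward-Euler update of the VEGF density V:
#   V += dt * (production_terms - decay_terms + vegf_supply(t))
```

The 30-day variant discussed below corresponds simply to `t_off=30.0`.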
\begin{figure}[h]
\centering
\makebox[\textwidth][c]{\includegraphics[height=4.2cm]{zheng-T.png}
\includegraphics[height=4.2cm]{zheng-M.png}
\includegraphics[height=4.2cm]{zheng-V.png}}
\makebox[\textwidth][c]{\includegraphics[height=4.2cm]{zheng-L.png}
\includegraphics[height=4.2cm]{zheng-C.png} }
\caption{ Time courses of TGF-$\beta$, macrophage, VEGF, LEC and capillary densities in a simulation of the 10-day VEGF-supply experiment described in \cite{zheng2007}. Here the original model is altered by adding a constant VEGF supply of $1.8\times 10^2$ pg/mm$^3$ for $0\leq t \leq 10$. }
\label{fig:zheng}
\end{figure}
There is apparently no difference between capillary formation in the treated and untreated cases.
Moreover this result is relatively insensitive to the amount of VEGF supplied.
What if the same treatment is applied for 30 days instead of 10?
A simulation of this is shown in Figure~\ref{fig:vegftreat20}.
\begin{figure}[h]
\centering
\makebox[\textwidth][c]{\includegraphics[height=4.2cm]{vt2-M.png}
\includegraphics[height=4.2cm]{vt2-L.png}
\includegraphics[height=4.2cm]{vt2-C.png}}
\caption{ Time courses of $M$, $L$ and $C$ in a simulation of the 30-day VEGF-supply experiment.}
\label{fig:vegftreat20}
\end{figure}
There is now a clear difference in the treated cases (especially the diabetic one) showing a lower level of macrophages and an earlier onset of capillary formation, even if the final capillary density is similar to that in the untreated case.
\subsubsection*{Observation}
There is an important feature common to all three modelled therapies:
in order to stimulate lymphatic capillary formation
one cannot consider TGF-$\beta$ or VEGF levels individually.
A precise balance of TGF-$\beta$ and VEGF is necessary for successful lymphangiogenesis.
This mutual equilibrium may be reached \emph{in vivo} through the production of either of these growth factors
by other cell types not considered in the present work. In particular, this suggests that the model does not take into
account certain elements of the process.
Nevertheless, the model does effectively describe both normal and diabetic lymphangiogenesis in wound healing
which suggests that the variables considered here are the most relevant.
This indicates that potential therapies should focus on these aspects of the regeneration process.
\subsection{Novel therapeutic approaches}
As mentioned above, parameter sensitivity analysis proves useful in designing novel therapeutic approaches.
Among the ``sensitive'' parameters only $a_M$, $d_2$ and $c_1$ vary between the normal and diabetic cases
(note that the \emph{increasing $d_2$}-case was discussed above in the macrophage-based treatment).
Thus, at least theoretically, $a_M$, $d_2$ and $c_1$ are the natural targets for a therapeutic strategy aiming to increase the final lymphatic capillary density. The feasibility of each suggested parameter change is now explored.
\subsubsection*{Decreasing $a_M$} Decreasing $a_M$ means lowering the macrophage-mediated activation of TGF-$\beta$.
First of all, note that the increase in final capillary density due to a decrease in $a_M$ is explained by the fact that less active TGF-$\beta$ implies less TGF-$\beta$-inhibition of LEC growth and hence a larger LEC growth term.
For the practical implementation of this change, it is recalled that receptor-mediated TGF-$\beta$ activation consists of the binding of Latency Associated Peptide to the cell surface through receptors such as TSP-1/CD36, M6PR and multiple $\alpha$V-containing integrins \cite{taylor2009}.
Hence, a decrease of $a_M$ might be obtainable by blocking these receptors.
\subsubsection*{Increasing $c_1$} Increasing $c_1$ would be achieved by increasing the LEC growth rate.
Several possible implementations of this are found in the literature.
\emph{Recombinant human IL-8} induces (human umbilical vein and dermal microvascular) endothelial cell proliferation and capillary tube organization \cite{li2003}.
\emph{DNA dependent protein kinase} (DNA-PK) is well known for its importance in repairing DNA double strand breaks;
in \cite{mannell2010} it is observed that DNA-PKcs suppression induces basal endothelial cell proliferation.
In \cite{luo2013} it is reported that \emph{polydopamine}-modified surfaces were beneficial to the proliferation of endothelial cells.
Finally, \emph{non-thermal dielectric barrier discharge plasma} is being developed for a wide range of medical applications, including wound healing; in particular, \cite{kalghatgi2010} reports that endothelial cells treated with plasma for 30s demonstrated twice as much proliferation as untreated cells, five days after plasma treatment.
\subsubsection*{Other parameters}
To increase the final capillary density, one could also think about targeting other parameters to which the system is sensitive.
In particular:
\begin{itemize}
\item \emph{Decreasing $T_L$} means reducing available (latent) TGF-$\beta$
and hence reducing the TGF-$\beta$-inhibition over LECs.
Suppression of TGF-$\beta$ by antibodies has been proposed as a possible therapy to reduce scar formation \cite{eslami2009,finnson2013,shah1995}.
Thus many studies of TGF-$\beta$ antibodies are available.
\item \emph{Decreasing $c_4$} involves reducing the (inhibitory) effect of TGF-$\beta$ on LECs,
which can be achieved by blocking specific TGF-$\beta$-receptors on the endothelial cell surface.
Now, TGF-$\beta$ signalling is very well studied \cite{mullen2011};
in particular, it is known that TGF-$\beta$ family proteins act through two type II and two type I receptors and that ALK-1 antagonizes the activities of the canonical TGF-$\beta$ type I receptor, T$\beta$RI/ALK-5, in the control of endothelial function \cite{derynck2013}.
Moreover, a few studies have been published which deal with blocking of TGF-$\beta$ receptors in the specific case of endothelial cells \cite{liao2011,meeteren2012,watabe2003}.
\item Changes in the other ``sensitive'' parameters do not appear feasible.
Increasing $d_1$ would mean increasing the TGF-$\beta$ decay rate;
decreasing $s_M$ would mean reducing the constant source of macrophages;
increasing $k_2$ requires an increased ``carrying capacity'' for the wound.
We are not aware of practical approaches that could cause these changes.
\item Finally, among the parameters that, when changed by 10\% of their value, induce a change in final capillary density between 2 and 5\% (that is, a bit less than those analysed above),
only $b_1$ merits discussion.
\emph{Reducing $b_1$} corresponds to reducing macrophage chemotaxis towards TGF-$\beta$,
which might be achievable by blocking specific receptors on the macrophage surface.
\end{itemize}
\section{Conclusions} \label{sec:conclusions}
Our model provides new insights into the mechanisms behind lymphangiogenesis in wound healing.
The major contributors to the process have been identified (TGF-$\beta$, macrophages, VEGF and LECs);
the self-organisation hypothesis for the lymphatic network formation described in \cite{boardman2003,rutkowski2006} has been confirmed
and the importance of the \emph{balance} between different factors has been highlighted.
Moreover, the present work suggests novel therapeutic approaches to enhance the lymphangiogenic process in impaired wound healing.
In addition, nearly all of the relevant parameters have been estimated from biological data and therefore this work provides fairly reliable numerical values for the parameters encountered.
However, any parameter estimation is limited by, for example, the specific experimental method used or discrepancies between the system considered here and that studied in a given reference. The results should therefore be viewed with care. In particular, the numerical values pertaining to the aforementioned balance between TGF-$\beta$ and VEGF may be shifted under an alteration of the parameter set.
This paper is intended as a first step in studying wound healing lymphangiogenesis through mathematical modelling.
Further work should include a spatial variable (and thus involve PDEs rather than ODEs) in order to take into account the important role of lymph flow in lymphatic capillary network formation. Introducing a spatial variable would also enable a fuller description of chemotaxis.
A PDE model would also be able to reflect further differences between angiogenesis and lymphangiogenesis. In particular, contrary to blood angiogenesis, lymphangiogenesis is unidirectional: as opposed to sprouting from both sides of the wound,
LECs appear to predominantly migrate downstream to the wound space in the direction of the interstitial flow \cite{boardman2003}.
The model could also be extended to include other aspects of wound healing, such as blood angiogenesis: implemented effectively,
this would give a more detailed overview of the different mechanisms and the time-scales involved in the various processes.
\section*{Acknowledgment}
A.B. would like to give special thanks to Jonathan Hickman for carefully proofreading this document and making numerous suggestions which improved the final presentation.
\section{Conclusions}
\label{sec:conclusions}
In this paper, we introduce the joint satellite gateway placement and routing problem over an ISTN, to facilitate terrestrial-satellite communications in a cost-optimal manner while adhering to propagation latency requirements.
We also balance the corresponding load between the selected gateways. To yield a polynomial solution time, we relax the integer variables and derive an LP-based rounding approximation for our model.
In SDN-enabled ISTNs, the problem of controller placement needs to be addressed jointly with the placement of the satellite gateways, having in mind different design strategies such as cross-layer data delivery, load balancing, reliability and latency optimization. Moreover, instead of placing the satellite gateways on terrestrial nodes, aerial platforms can be an alternative choice, leading to a cross-layer network design problem. The flow routing in this setting is more challenging than in the present work. These two problems are the topics of our future research.
\section{Performance Evaluation}
\label{sec:evaluation}
In this section we evaluate the performance of our satellite gateway placement method. We first describe the simulation environment setup and scenarios, then we review the performance evaluation results.
\subsection{Performance Evaluation Setup}
\label{evaluation:setup}
We evaluate the performance of our JSGPR and JSGPR-LB approaches on multiple real network topologies publicly available at the Topology Zoo \cite{zoo}. The five topologies we consider are listed in Table~\ref{topo}. The link lengths and capacities are extracted from the Topology Zoo. The propagation delays are calculated based on the
lengths of the links with a propagation velocity of $C = 2 \times 10^8\,$m/s \cite{speed}. The value of $d_{max}$ is set to $10\,$ms, and the deployment cost for each node is drawn from a uniform random distribution $U(500, 1000)$. The unit bandwidth cost for all the links is set to $1$. Also, the value of $q_j$ is set to $240\,$Mbps for all the gateways.
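As a quick sanity check of these settings, the per-link propagation delay is just the link length divided by $C$; for example, a $2{,}000$ km link by itself consumes the whole $d_{max} = 10\,$ms budget. A small sketch:

```python
C = 2e8  # propagation velocity used in the evaluation, m/s

def prop_delay_ms(length_m):
    """Propagation delay of a link of the given length (metres), in ms."""
    return length_m / C * 1e3

budget_used = prop_delay_ms(2_000_000)  # a 2,000 km link: the full 10 ms budget
```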
To develop and solve our MILP and LP models we use the CPLEX commercial solver. Our tests are carried out on a server with an Intel i5 CPU at 2.3 GHz and 8 GB of main memory.
\begin{table}[b]
\centering
\begin{tabular}{||c c c||}
\hline
Topology & Nodes & Links \\
\hline\hline
Sinet & 13 & 18 \\
Ans & 18 & 25 \\
Agis & 25 & 32 \\
Digex & 31 & 35 \\
Bell Canada & 48 & 64 \\
\hline
\end{tabular}
\caption{Summary of the studied topologies}
\label{topo}
\end{table}
\subsection{Evaluation Scenarios}
\label{evaluation:scenarios}
We have conducted two sets of experiments for two different cases. In the first case we benchmark our LP-based approximation algorithm against the MILP-based optimal one for JSGPR, whereas the second case aims to observe the impact of our load balancing approach on the gateway placement.
For both cases and for all topologies, if $c^i_{max}$ is the maximum capacity among all the outgoing links of node $i$, the traffic rate originated at node $i$ is drawn from a uniform distribution $U(\frac{2c^i_{max}}{3}, c^i_{max})$. Specifically, we use the following metrics to evaluate the performance of our algorithm:
\begin{itemize}
\item \textbf{Solver Runtime} is the amount of time the CPLEX solver takes to solve the generated MILP or LP instances.
\item \textbf{Average Delay} as explained in section \ref{sec:desc}.
\item \textbf{Total Cost} is the total cost of deploying the satellite gateways and routing the traffic.
\item \textbf{Gateway Load} is the amount of traffic assigned to the gateways after the gateway placement problem is solved.
\end{itemize}
\subsection{Evaluation Results}
\label{sec3}
\begin{figure*}[t]
\begin{center}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/costGW.png}
\caption{Exp-A: Normalized Total Cost}
\label{fig:acc}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/time.png}
\caption{Exp-A: Average Solver Runtime}
\label{fig:util}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/digex_time_bp.png}
\caption{Exp-A: Solver Runtime for Digex}
\label{fig:prev}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/Agg_delay_bp.png}
\caption{Exp-A: Average Delay ($d_{max} = 10 ms$ )}
\label{fig:agg}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/perdelay.png}
\caption{Exp-A: Total Cost with Varying Delay Bound}
\label{fig:comp}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/costload.png}
\caption{Exp-B: Normalized Total Cost}
\label{fig:cost2}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/Agg_load_bp.png}
\caption{Exp-B: Gateway Load }
\label{fig:load_profile}
\end{minipage}
\hspace{1em}
\end{center}
\end{figure*}
\subsubsection{Experiment A - Approximation vs. Exact Method}
Fig. \ref{fig:acc} reflects the average normalized total cost for both the optimal MILP-based approach and the LP-based approximation of JSGPR. Due to the suboptimal placement, the LP-based approach incurs additional deployment costs of at most $13\%$ over the optimal placement, but this gap decreases as the scale of the network grows. For Digex, the approximate approach leads to about a $4\%$ increase in the deployment cost.
Fig. \ref{fig:util} depicts the average runtime of both the MILP and LP formulations for the $4$ topologies. We note that for larger networks, CPLEX failed to provide the solution to the MILP problem, while our LP model continued to solve the problem within the expected time limit. In particular, for the Bell Canada topology, in $40\%$ of the runs CPLEX was not able to find any feasible solution within the first $10$ hours, while the approximation approach could provide the suboptimal placement in less than $15$ minutes. The solver runtime for Digex is shown in Fig.~\ref{fig:prev}. The average runtime for the LP is $230$ seconds, while for the MILP it is around $3000$ seconds.
Fig. \ref{fig:agg} represents the average delay for Ans, Agis, and Digex. The average of the expected experienced delay under the MILP model is $2.95$, $6.18$, and $2.15$ milliseconds, while under the LP model it is $4.68$, $5.6$, and $1.65$ milliseconds. We note that the suboptimal procedure leads to additional deployed gateways in Agis and Digex, which in turn leaves the nodes closer to the gateways, so they experience a lower average delay.
Further, an insightful observation is that, in Agis, $d_{max}$ is a relatively tight upper bound on the average delay experienced by each node. Therefore, the delay profile is pushed towards its upper bound, whereas in Digex, due to the low density of links, more gateways have to be deployed on the terrestrial nodes, which makes the gateways reachable more quickly; therefore, the delay profile is inclined towards its lower bound.
Fig. \ref{fig:comp} depicts the normalized cost of the JSGPR problem for different values of $d_{max}$. For instance, as indicated by this figure, if a delay of $10\,$ms is tolerable in Digex instead of $2\,$ms, a $35\%$ reduction in cost results; similarly, upgrading the service from a delay of $5\,$ms to $2\,$ms in Agis will almost double the cost.
Overall, the aforementioned figures illustrate that the performance of our approximation algorithm is very close to the exact approach, but with an important advantage of reduced time complexity which shows the efficiency of our proposed approximation method.
\subsubsection{Experiment B - The Impact of Load Minimization}
Fig. \ref{fig:cost2} shows the total placement cost for JSGPR
and JSGPR-LB on the Agis and Digex topologies. As expected, JSGPR-LB is more costly, since in order to evenly share the load between the gateways,
a larger number of gateways is required.
Fig. \ref{fig:load_profile} depicts the profile of the load assigned to the gateways. In both depicted topologies, the load profile under JSGPR-LB is very thin and concentrated over its average, proving the efficiency of the formulation.
The lower load on the placed gateways is achieved by a sub-optimal placement (due to the larger number of gateways), which results in a more expensive deployment. As depicted in Fig. \ref{fig:cost2}, the total cost of gateway placement in the studied topologies
under JSGPR-LB exceeds the optimal placement cost achieved by JSGPR by at most $16\%$.
\section{Problem Formulation}
\label{sec:problem}
\subsection{Satellite GW placement problem MILP Formulation I as CFLP}
In order to formulate the problem as a MILP we consider
\begin{itemize}
\item The set of binary variables $\textbf{y}$, where $y_j$ expresses the placement of a gateway at node $j$.
\item The set of continuous variables $\textbf{x}$, where $x_{ij}$ expresses the fraction of traffic demand originating at $i$ passing through gateway $j$.
\end{itemize}
\vspace{2mm}
\noindent\textbf{Objective function:}
\begin{equation}
\textbf{Minimize} \quad \sum_{j\in J}c_j {y_j} + \sum_{i \in I}\sum_{j \in J}{c_{ij} a_{i}x_{ij}}
\end{equation}
\begin{equation}
\sum_{j \in V}{x_{ij}} = 1,\; \quad \forall i \in V
\label{c1}
\end{equation}
\begin{equation}
x_{ij}\leq y_j, \; \quad \forall i,j \in V
\label{c2}
\end{equation}
\begin{equation}
\sum_{i \in V}a_i {x_{ij}} \leq q_j y_j,\; \quad \forall j \in V
\label{c3}
\end{equation}
\begin{equation}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1a}
\end{equation}
\begin{equation}
\quad 1 \geq x_{ij} \geq 0,\; \quad \forall i \in I, j \in J
\label{dom2a}
\end{equation}
The objective function minimizes the total cost, which comprises two terms: the first represents the cost of installing and opening a gateway, where $c_j$ denotes the cost of installing and opening a gateway at location $j$; the second accounts for the demand allocation cost,
where $c_{ij}$ denotes the unit cost of gateway $j$ supplying the fraction $x_{ij}$ of the demand $a_i$ originating at $i$. Constraint set \ref{c1} assures that traffic demands are fully supported through the gateways, while constraint set \ref{c2} assures that demands are only served by open gateways. Constraints \ref{c3} state the capacity constraints for the facilities. The domains of the $y_j$ and $x_{ij}$ variables are defined in constraints \ref{dom1a} and \ref{dom2a}, respectively. The UFLP is modeled by dropping constraints \ref{c3}.
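For intuition, the uncapacitated variant can be solved exactly on toy instances by enumerating the subsets of candidate sites: once the open set $S$ is fixed, each demand is optimally served by its cheapest open gateway. A small illustrative sketch (exponential in the number of sites, so suitable only for tiny examples, not for the instances studied later):

```python
from itertools import combinations

def solve_uflp(open_cost, assign_cost, demand):
    """Exhaustively solve a tiny uncapacitated facility location instance.

    open_cost[j]      : cost c_j of opening gateway j
    assign_cost[i][j] : unit cost c_ij of serving demand i from gateway j
    demand[i]         : demand a_i
    Returns (best_total_cost, best_open_set).
    """
    n = len(open_cost)
    best = (float("inf"), ())
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            cost = sum(open_cost[j] for j in S)
            # each demand goes to its cheapest open gateway
            cost += sum(a * min(assign_cost[i][j] for j in S)
                        for i, a in enumerate(demand))
            best = min(best, (cost, S))
    return best
```

For example, with opening costs $(3, 5)$, assignment costs $\binom{1\;4}{4\;1}$ and unit demands, opening only the first site is optimal.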
\subsection{Satellite GW placement problem and routing MILP Formulation I as CFLP}
In order to formulate the problem as a MILP we consider
\begin{itemize}
\item The set of binary variables $\textbf{y}$, where $y_j$ expresses the placement of a gateway at node $j$.
\item The set of continuous variables $\textbf{x}$, where $x_{ij}$ expresses the fraction of traffic demand originating at $i$ passing through gateway $j$.
\item The set of continuous variables $\textbf{f}$, where $f_{uv}^{ij}$ expresses the amount of traffic demand originating at $i$ assigned to gateway $j$ passing through the link $(u,v)$.
\end{itemize}
\vspace{2mm}
\noindent\textbf{Objective function:}
\begin{align}
\textbf{Minimize} \quad \sum_{j\in J}c_j {y_j} + \sum_{i \in I}\sum_{j \in J}{c_{ij} a_{i}x_{ij}} + \sum_{i \in I}\sum_{j \in J}\sum_{uv \in L}c_{uv}f^{ij}_{uv}
\end{align}
\noindent\textbf{Demand Constraints:}
\begin{align}
\sum_{j \in V}{x_{ij}} = 1,\; \quad \forall i \in V
\label{c1b}
\end{align}
\noindent\textbf{Feasibility Constraints:}
\begin{align}
x_{ij}\leq y_j, \; \quad \forall i,j \in V
\label{c2b}
\end{align}
\begin{align}
f_{uv}^{ij} \leq a_i x_{ij}, \; \quad \forall i,j \in V, (u,v) \in L
\label{c3b}
\end{align}
\noindent\textbf{Capacity Constraints:}
\begin{align}
\sum_{i \in V}a_i {x_{ij}} \leq q_j y_j,\; \quad \forall j \in V
\label{c4b}
\end{align}
\begin{align}
\sum_{i \in V} \sum_{j \in V} f_{uv}^{ij} \leq q_{uv},\; \quad \forall (u,v) \in L
\label{c5b}
\end{align}
\noindent\textbf{Flow Constraints: }
\begin{align}
\quad a_i x_{ii} + \sum_{v\in V: iv \in L}\sum_{j \in J }f^{ij}_{iv} = a_i,\; \quad \forall i \in I
\label{flow1b}
\end{align}
\begin{align}
&\quad \sum_{v \in V: (v,u) \in L}\sum_{j\in J}f^{ij}_{vu} - \sum_{v \in V: (u,v) \in L}\sum_{j\in J}f^{ij}_{uv} = a_i x_{iu}\; \\
& \quad \forall i \in I, u \in V, u \neq i
\label{flow2b}
\end{align}
\noindent\textbf{Domain Constraints: }
\begin{align}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1b}
\end{align}
\begin{align}
\quad 0 \leq x_{ij} \leq 1,\; \quad \forall i \in I, j \in J
\label{dom2b}
\end{align}
\begin{align}
\quad f_{uv}^{ij} \geq 0,\; \quad \forall uv \in L, i \in I, j \in J
\label{dom3b}
\end{align}
The objective function minimizes the total cost, which comprises three terms: the first represents the cost of installing and opening a gateway at location $j$; the second accounts for the demand allocation cost, where $c_{ij}$ denotes the unit cost of gateway $j$ supplying the fraction $x_{ij}$ of the demand $a_i$ originating at $i$; the last corresponds to the transport cost of routing the demand originating at $i$ to its assigned gateway $j$.
Constraint set \ref{c1b} assures that traffic demands are supported by the selected gateways. Feasibility constraints \ref{c2b} make sure that demands are only served by open gateways, while constraints \ref{c3b} ensure that the amount of traffic from the demand originating at $i$, assigned to gateway $j$ and passing through the link $(u,v)$, cannot exceed the requested traffic demand $a_i$. Constraints \ref{c4b} and \ref{c5b} state the capacity constraints for the gateways and the links, respectively. Constraints \ref{flow1b} and \ref{flow2b} enforce flow conservation.
The domains of the $y_j$, $x_{ij}$ and $f_{uv}^{ij}$ variables are defined in constraints \ref{dom1b}, \ref{dom2b} and \ref{dom3b}, respectively.
In the aforementioned formulation, constraint set \ref{c4b} can be dropped if no capacity restriction (e.g., a maximum CPU load) is considered on the potential gateways; likewise, the second term of the objective function can be dropped if there is no demand allocation cost.
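Flow conservation can be sanity-checked on any candidate solution: at every node $u$ other than the source $i$, the inflow minus the outflow of commodity $i$ must equal the demand delivered at $u$, exactly as constraints (\ref{flow2b}) require. A small illustrative checker (the data layout is an assumption, not the paper's code):

```python
def conserves_flow(f, x, a, i, nodes, links):
    """Check the flow-conservation constraints for demand i at every u != i.

    f[(i, j, u, v)] : flow of commodity (i -> gateway j) on link (u, v)
    x[(i, u)]       : fraction of demand a[i] served by a gateway at u
    """
    for u in nodes:
        if u == i:
            continue
        inflow = sum(f.get((i, j, v, u), 0.0)
                     for (v, w) in links if w == u for j in nodes)
        outflow = sum(f.get((i, j, u, v), 0.0)
                      for (w, v) in links if w == u for j in nodes)
        delivered = a[i] * x.get((i, u), 0.0)
        if abs(inflow - outflow - delivered) > 1e-9:
            return False
    return True
```

On a three-node path `s -> m -> g` with the whole demand of `s` served by a gateway at `g`, routing one unit over both links satisfies the check, while dropping the second hop violates it.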
\subsection{JSGPR MILP Formulation II}
In order to formulate the JSGPR problem as a MILP we consider
\begin{itemize}
\item The set of binary variables $\textbf{y}$, where $y_j$ expresses the placement of a gateway at node $j$.
\item The set of continuous variables $\textbf{f}$, where $f_{uv}^{ij}$ expresses the traffic rate passing through the link $(u,v)$ corresponding to flow $i\rightarrow j$.
\end{itemize}
The MILP formulation is as follows:
\vspace{2mm}
\noindent\textbf{Objective: }
\begin{equation}
\textbf{Minimize} \quad
\sum_{j \in J}c_jy_{j} + \phi\sum_{i \in I}\sum_{j \in J}\sum_{uv \in L}c_{uv}f^{ij}_{uv} \;
\label{obj}
\end{equation}
\noindent\textbf{Demand Guarantee Constraints: }
\begin{equation}
\quad\sum_{j \in J}\sum_{u \in V} f^{ij}_{uj} = a_i,\; \quad \forall i \in I
\label{demgar}
\end{equation}
\noindent\textbf{Feasibility Guarantee Constraints: }
\begin{equation}
\quad \sum_{u \in V} f^{ij}_{uj} \leq y_j a_i,\; \quad \forall i \in I, \forall j \in J
\label{leq}
\end{equation}
\noindent\textbf{Delay Guarantee Constraints:}
\begin{equation}
\quad \sum_{j \in J}\sum_{uv \in L}\frac{f^{ij}_{uv}}{a_i}d_{uv}\leq d_{max} \; \quad \forall{i \in I}
\label{delgar}
\end{equation}
\noindent\textbf{Link Capacity Constraints: }
\begin{equation}
\quad \sum_{i \in I}\sum_{j \in J}f^{ij}_{uv} \leq q_{uv},\; \quad \forall uv \in L
\label{linkcap}
\end{equation}
\noindent\textbf{Flow Constraints: }
\begin{equation}
\quad f^{ii}_{ii} + \sum_{v\in V: iv \in L}\sum_{j\neq i \in J }f^{ij}_{iv} = a_i,\; \quad \forall i \in I
\label{flow1}
\end{equation}
\begin{equation}
\begin{aligned}
&\quad \sum_{v \in V: (v,u) \in L}f^{ij}_{vu} = \sum_{v \in V: (u,v) \in L}f^{ij}_{uv} \; \\
& \quad \forall i \in I, j \in J, u \in V, u \notin \{i, j\}
\end{aligned}
\label{flow2}
\end{equation}
\begin{equation}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1}
\end{equation}
\begin{equation}
\quad f_{uv}^{ij} \geq 0,\; \quad \forall uv \in L, i \in I, j \in J
\label{dom2}
\end{equation}
\vspace{2mm}
The objective function \eqref{obj} is the total cost of the process, consisting of the gateway deployment cost and the bandwidth cost.
Constraints \eqref{demgar} are to ensure that the traffic demand for each node is satisfied using one or multiple gateways. Constraints \eqref{leq} allow for the formation of the flow $i\rightarrow j$ only if a gateway is placed at node $j$, while constraints \eqref{delgar} will guarantee that the average end-to-end delay for each demand point will not exceed a pre-defined upper bound $d_{max}$.
Constraints \eqref{linkcap} make sure that the traffic carried by each link does not exceed the capacity of that link. Constraints \eqref{flow1} and \eqref{flow2} enforce flow conservation; i.e., the inbound traffic of a switch must be equal to its outbound traffic. Finally, constraints \eqref{dom1} and \eqref{dom2} express the domain requirements for the variables $\textbf{y}$ and $\textbf{f}$, respectively.
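The delay guarantee \eqref{delgar} is a flow-weighted average: each fraction of demand $i$ contributes the delay of the links it traverses, weighted by the share of $a_i$ routed over them. A small sketch of this computation (the data layout is illustrative):

```python
def avg_delay(flows, link_delay, a_i):
    """Flow-weighted average delay of demand i, as in the delay constraint.

    flows      : dict (u, v) -> aggregate rate of demand i on link (u, v)
    link_delay : dict (u, v) -> propagation delay d_uv of that link
    a_i        : total rate of demand i
    """
    return sum(rate / a_i * link_delay[uv] for uv, rate in flows.items())
```

For a demand of 10 units split 60/40 over two 2-hop paths with per-path delays $2+3$ and $1+1$, the average is $0.6\cdot 5 + 0.4\cdot 2 = 3.8$; the routing is feasible iff this value does not exceed $d_{max}$.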
Since MILPs are in general NP-hard, the proposed formulation is intractable for large-scale networks. Next, we derive an LP model to strike a balance between the time complexity of the optimization problem and the accuracy of the results.
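One standard way to obtain an integral placement from a relaxed solution is threshold rounding: open every site whose fractional $y_j$ is large enough, then re-solve the routing with $\mathbf{y}$ fixed. The sketch below shows only the rounding step; the threshold value and the feasibility repair are assumptions made for illustration, not necessarily the exact scheme used here.

```python
def round_placement(y_frac, theta=0.5):
    """Threshold rounding of fractional gateway-placement variables y_j.

    Opens every candidate site with y_j >= theta; if nothing qualifies,
    the site with the largest y_j is opened so at least one gateway exists.
    (Illustrative scheme; threshold and repair step are assumptions.)
    """
    opened = [1 if y >= theta else 0 for y in y_frac]
    if not any(opened):
        opened[max(range(len(y_frac)), key=y_frac.__getitem__)] = 1
    return opened
```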
\subsection{JSGPR-LB MILP Formulation}
We define a new term $l_{max}$ to represent the maximum traffic assigned to any gateway in the optimal placement. In order to reduce the forwarding latency, we wish to minimize the value of $l_{max}$; hence, we add $l_{max}$ as an additional term to the objective function. The new objective function reads:
\begin{equation}
\textbf{Minimize} \quad(\sum_{j \in J}c_jy_{j}+\phi\sum_{i \in I}\sum_{j \in J}\sum_{uv \in L}c_{uv}f^{ij}_{uv}) + \alpha l_{max}\;
\label{objlm}
\end{equation}
Where $\alpha$ is a constant factor determining the balance between the two terms of the objective function. To ensure that the traffic assigned to each gateway does not exceed $l_{max}$, we add the following set of constraints to our model:
\begin{equation}
\quad \sum_{i \in I}\sum_{u \in V} f^{ij}_{uj} \leq l_{max}\; \quad \forall j \in J
\end{equation}
The resulted optimization problem aims to minimize the load of the gateways in conjunction with the cost of the gateway deployment.
\section{Problem Formulation}
\label{sec:problem}
\subsection{JSGPR MILP Formulation}
In order to formulate the JSGPR problem as a MILP, we consider:
\begin{itemize}
\item The set of binary variables $\textbf{y}$, where $y_j$ expresses the placement of a gateway at node $j$.
\item The set of continuous variables $\textbf{f}$, where $f_{uv}^{ij}$ expresses the traffic rate passing through the link $(u,v)$ corresponding to flow $i\rightarrow j$.
\end{itemize}
The MILP formulation is as follows:
\vspace{2mm}
\noindent\textbf{Objective: }
\begin{equation}
\textbf{Minimize} \quad
\sum_{j \in J}c_jy_{j} + \phi\sum_{i \in I}\sum_{j \in J}\sum_{uv \in L}c_{uv}f^{ij}_{uv} \;
\label{obj}
\end{equation}
\noindent\textbf{Demand Guarantee Constraints: }
\begin{equation}
\quad\sum_{j \in J}\sum_{u \in V} f^{ij}_{uj} = a_i,\; \quad \forall i \in I
\label{demgar}
\end{equation}
\noindent\textbf{Feasibility Guarantee Constraints: }
\begin{equation}
\quad \sum_{u \in V} f^{ij}_{uj} \leq y_j a_i,\; \quad \forall i \in I, \forall j \in J
\label{leq}
\end{equation}
\noindent\textbf{Delay Guarantee Constraints:}
\begin{equation}
\quad \sum_{j \in J}\sum_{uv \in L}\frac{f^{ij}_{uv}}{a_i}d_{uv}\leq d_{max} \; \quad \forall{i \in I}
\label{delgar}
\end{equation}
\noindent\textbf{Link Capacity Constraints: }
\begin{equation}
\quad \sum_{i \in I}\sum_{j \in J}f^{ij}_{uv} \leq q_{uv},\; \quad \forall uv \in L
\label{linkcap}
\end{equation}
\noindent\textbf{Flow Constraints: }
\begin{equation}
\quad f^{ii}_{ii} + \sum_{v\in V: iv \in L}\sum_{j\neq i \in J }f^{ij}_{iv} = a_i,\; \quad \forall i \in I
\label{flow1}
\end{equation}
\begin{equation}
\begin{aligned}
&\quad \sum_{v \in V: (u,v) \in L}\sum_{j\in J}f^{ij}_{vu} = \sum_{v \in V: (u,v) \in L}\sum_{j\in J}f^{ij}_{uv} \; \\
& \quad \forall i \in I, j \in J, u \in V, u \neq i
\end{aligned}
\label{flow2}
\end{equation}
\begin{equation}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1}
\end{equation}
\begin{equation}
\quad f_{uv}^{ij} \geq 0,\; \quad \forall uv \in L, i \in I, j \in J
\label{dom2}
\end{equation}
\vspace{2mm}
The objective function \eqref{obj} captures the total cost of the process, consisting of the gateway deployment cost and the bandwidth cost.
Constraints \eqref{demgar} ensure that the traffic demand of each node is satisfied using one or multiple gateways. Constraints \eqref{leq} allow the flow $i\rightarrow j$ to form only if a gateway is placed at node $j$, while constraints \eqref{delgar} guarantee that the average end-to-end delay for each demand point does not exceed a pre-defined upper bound $d_{max}$.
Constraints \eqref{linkcap} ensure that the resources allocated on each link do not exceed the capacity of that link. Constraints \eqref{flow1} and \eqref{flow2} enforce flow conservation, i.e., the inbound traffic of a switch must equal its outbound traffic. Finally, constraints \eqref{dom1} and \eqref{dom2} express the domain requirements for the variables $\textbf{y}$ and $\textbf{f}$, respectively.
Since solving a MILP is NP-hard in general, the proposed formulation becomes intractable for large-scale networks. Next, we derive an LP model to strike a balance between the time-complexity of the optimization problem and the accuracy of the results.
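To make the scalability issue concrete, the following Python sketch solves a hypothetical toy instance by brute force: it enumerates all gateway subsets and, as a simplification, routes each demand single-path to its cheapest open gateway while ignoring the capacity and delay constraints. It mirrors only the cost structure of \eqref{obj}; the instance data are invented for illustration and this is not the MILP solver.

```python
from itertools import combinations
import heapq

# Hypothetical toy instance: 4 nodes on a ring, uniform bandwidth unit
# costs c_uv, gateway costs c_j, and two demand points.
nodes = [0, 1, 2, 3]
links = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0}  # c_uv
gw_cost = {0: 10.0, 1: 4.0, 2: 4.0, 3: 10.0}                  # c_j
demand = {0: 2.0, 3: 2.0}                                     # a_i
phi = 1.0 / sum(demand.values())                              # normalization

def path_cost(i, j):
    """Cheapest routing cost from i to j (Dijkstra on the undirected graph)."""
    adj = {}
    for (u, v), c in links.items():
        adj.setdefault(u, []).append((v, c))
        adj.setdefault(v, []).append((u, c))
    dist, pq = {i: 0.0}, [(0.0, i)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == j:
            return d
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return float("inf")

def brute_force_jsgpr():
    """Enumerate gateway subsets; route each demand to its cheapest gateway.

    Single-path and uncapacitated, so this only mirrors the objective (obj),
    not the full constraint set.
    """
    best = (float("inf"), None)
    for r in range(1, len(nodes) + 1):
        for gws in combinations(nodes, r):
            routing = sum(a * min(path_cost(i, j) for j in gws)
                          for i, a in demand.items())
            total = sum(gw_cost[j] for j in gws) + phi * routing
            best = min(best, (total, gws))
    return best

cost, placement = brute_force_jsgpr()
```

The $2^{|J|}$ subset enumeration is precisely what makes exact solving intractable at scale, which motivates the LP-based approach.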
\subsection{LP Relaxation and Rounding Algorithm}
The LP model is obtained from the original MILP formulation by relaxing the integrality of the binary $\textbf{y}$ variables. We replace the domain constraints \eqref{dom1} by the following ones:
\begin{equation}
\quad y_{j} \in [0, 1],\; \quad \forall j \in J
\label{domain1new}
\end{equation}
The rest of the model remains intact. The LP model can be solved in polynomial time, but does not necessarily yield a valid (integral) gateway placement. Therefore, inspired by \cite{torkzaban2019trust}, we use a deterministic rounding algorithm to derive binary values for the $\textbf{y}$ placement variables, at the cost of solution suboptimality. The rounding procedure is pseudo-coded in \textbf{Algorithm-\ref{alg}} and works as follows: in each iteration (Lines \ref{2}-\ref{11}), first the LP model is solved, then the fractional values $y_j$ of the satellite gateway candidates are extracted (Line \ref{4}). The candidate corresponding to the maximum $y_j$ is rounded to $1$, and the corresponding constraint is added to the LP model (Lines \ref{6}-\ref{8}). Since the $y_j$ variables only appear on the right-hand side of the $(\leq)$ constraints, none of the constraints is violated by this choice. The algorithm terminates when all dimensions of the solution are rounded (i.e., no fractional values appear in the solution of the LP model), or when the LP model becomes infeasible, in which case the gateway placement is denied. After the placement of the gateways is determined, a multi-commodity flow (MCF) allocation algorithm is run to determine the corresponding flow variables.
Since the integral part of the solution has at most $|J|$ dimensions, the $\textbf{Solve\_LP(..)}$ function runs at most $|J|$ times, which yields the polynomial time-complexity stated earlier.
\begin{algorithm}
\caption{LP Rounding}
\label{alg}
\begin{algorithmic}[1]
\REPEAT \label{1}
\STATE $\{y_j, f^{ij}_{uv}\} \leftarrow \textbf{Solve\_LP(..)}$ \label{2}
\STATE If a solution exists, \textit{Sol := true}; otherwise \textit{Sol := false} \label{3}
\STATE $Y \leftarrow\{y_j |y_j \notin \{0, 1\} \}$ \label{4}
\IF{$Y \neq \emptyset$} \label{5}
\STATE $y_0 \leftarrow \arg \max_{y_j \in Y} y_j$ \label{6}
\STATE \textbf{Add LP Constraint $y_0=1$} \label{8}
\ENDIF \label{11}
\UNTIL $(Y=\emptyset) \lor (Sol = false)$ \label{13}
\IF{$Sol= true$} \label{14}
\STATE $\{y_j, f^{ij}_{uv}\} \leftarrow \textbf{Solve\_MCF(..)}$ \label{15}
\STATE If a solution exists, \textit{Sol := true}; otherwise \textit{Sol := false} \label{16}
\ENDIF \label{26}
\IF{$Sol = true$} \label{17}
\RETURN $\{y_j, f^{ij}_{uv}\}$ \{Accept the placement\} \label{18}
\ELSE \label{19}
\STATE $\{\forall{y_j} :=0, \forall{f^{ij}_{uv}} :=0\}$ \label{20}
\RETURN $\{y_j, f^{ij}_{uv}\}$ \{Reject the placement\} \label{21}
\ENDIF \label{22}
\end{algorithmic}
\end{algorithm}
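A compact Python sketch of the rounding loop of \textbf{Algorithm-\ref{alg}} is given below; \texttt{solve\_lp} and \texttt{fake\_lp} are hypothetical stand-ins for the actual LP solver, used only to illustrate the fix-and-resolve mechanics.

```python
def lp_rounding(solve_lp):
    """Deterministic rounding loop of Algorithm 1 (sketch).

    `solve_lp` stands in for any LP solver: it takes the set of indices
    already fixed to 1 and returns a dict {j: y_j} (possibly fractional),
    or None when the restricted LP is infeasible.
    """
    fixed = set()
    while True:
        y = solve_lp(fixed)
        if y is None:
            return None                   # infeasible: reject the placement
        frac = {j: v for j, v in y.items() if 0.0 < v < 1.0}
        if not frac:                      # all y_j already integral: done
            return {j: int(round(v)) for j, v in y.items()}
        fixed.add(max(frac, key=frac.get))  # round the largest y_j up to 1

# Stand-in LP for illustration: once a candidate is fixed, re-solving
# concentrates all remaining mass on the fixed gateways.
def fake_lp(fixed):
    base = {0: 0.6, 1: 0.3, 2: 0.1}
    if not fixed:
        return dict(base)
    return {j: (1.0 if j in fixed else 0.0) for j in base}

placement = lp_rounding(fake_lp)  # {0: 1, 1: 0, 2: 0}
```

The loop fixes one variable per LP solve, so it terminates after at most $|J|$ iterations, matching the complexity argument above.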
\subsection{JSGPR-LM MILP Formulation}
We define a new term $l_{max}$ to represent the maximum traffic assigned to any gateway among all the gateways in the optimal placement. In order to reduce the forwarding latency, we wish to minimize the value of $l_{max}$. Hence, we add $l_{max}$ as an additional term to the objective function, which becomes:
\begin{equation}
\textbf{Minimize} \quad(\sum_{j \in J}c_jy_{j}+\phi\sum_{i \in I}\sum_{j \in J}\sum_{uv \in L}c_{uv}f^{ij}_{uv}) + \alpha l_{max}\;
\label{objlm}
\end{equation}
where $\alpha$ is a constant factor determining the balance between the two terms of the objective function. To ensure that the traffic assigned to each gateway does not exceed $l_{max}$, we add the following set of constraints to our model:
\begin{equation}
\quad \sum_{i \in I}\sum_{u \in V} f^{ij}_{uj} \leq l_{max}\; \quad \forall j \in J
\end{equation}
The resulting optimization problem minimizes the load of the gateways in conjunction with the cost of the gateway deployment.
\section{Problem Formulation}
\label{sec:problem}
\subsection{JSGPR MILP Formulation I}
Inspired by \cite{Bell}, we formulate a baseline for the JSGPR problem as a capacitated facility location-routing problem, considering:
\begin{itemize}
\item The set of binary variables $\textbf{y}$, where $y_j$ expresses the placement of a gateway at node $j$.
\item The set of continuous variables $\textbf{x}$, where $x_{ij}$ expresses the fraction of traffic demand originating at $i$ passing through gateway $j$.
\item The set of continuous variables $\textbf{f}$, where $f_{uv}^{ij}$ expresses the amount of traffic demand originating at $i$, assigned to gateway $j$ passing through the link $(u,v)$.
\end{itemize}
\vspace{2mm}
We note that for a gateway $u$, the variable $f_{uu}^{iu}$ represents the amount of traffic which is originated at node $i$ and is forwarded to the satellite through the gateway placed at node $u$.
The resulting MILP formulation is as follows:
\begin{align}
\textbf{Minimize} \quad \sum_{j\in J}c_j {y_j} +
\sum_{i \in I}\sum_{j \in J}\sum_{(u,v) \in E}c_{uv}f^{ij}_{uv}
\end{align}
\noindent\textbf{Demand Constraints:}
\begin{align}
\sum_{j \in J}{x_{ij}} = 1,\; \quad \forall i \in I
\label{c1b}
\end{align}
\noindent\textbf{Feasibility Constraints:}
\begin{align}
x_{ij}\leq y_j, \; \quad \forall i\in I, j \in J
\label{c2b}
\end{align}
\noindent\textbf{Capacity Constraints:}
\begin{align}
\sum_{i \in I}a_i {x_{ij}} \leq q_j y_j,\; \quad \forall j \in J
\label{c4b}
\end{align}
\begin{align}
\sum_{i \in I} \sum_{j \in J} f_{uv}^{ij} \leq q_{uv}\; \quad \forall (u,v) \in E
\label{c5b}
\end{align}
\noindent\textbf{Flow Constraints: }
\begin{align}
\quad a_i x_{ii} + \sum_{v\in V: (i,v) \in E}\sum_{j \in J }f^{ij}_{iv} = a_i,\; \quad \forall i \in I
\label{flow1b}
\end{align}
\begin{equation}
\begin{aligned}
&\quad \sum_{v \in V: (v,u) \in E}f^{ij}_{vu} - \sum_{v \in V: (u,v) \in E}f^{ij}_{uv} = a_i x_{iu} \; \\
& \quad \forall i \in I, j \in J, u \in V, u \neq i
\end{aligned}
\label{flow2b}
\end{equation}
\noindent\textbf{Domain Constraints: }
\begin{align}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1b}
\end{align}
\begin{align}
\quad x_{ij} \in [0, 1],\; \quad \forall i \in I, j \in J
\label{dom2b}
\end{align}
\begin{align}
\quad f_{uv}^{ij} \geq 0,\; \quad \forall (u,v) \in E, i \in I, j \in J
\label{dom3b}
\end{align}
The objective function minimizes the total cost comprised of two terms; the first one represents the cost of installing and operating a gateway at location $j$, aggregated over the total number of gateways used.
The second term corresponds to the transport/connection cost from the demand originating at $i$ to its assigned gateway $j$, aggregated for all demands.
Constraint set \eqref{c1b} ensures that traffic demands are supported by the selected gateways. The feasibility constraints \eqref{c2b} make sure that demands are only served by open gateways.
Constraints \eqref{flow1b} and \eqref{flow2b} enforce the flow conservation.
The domains of $y_i$, $x_{ij}$ and $f_{uv}^{ij}$ variables are defined in constraints \eqref{dom1b}, \eqref{dom2b}, \eqref{dom3b}, respectively.
In the aforementioned formulation, we can express the $x_{ij}$ variables using the corresponding last-hop flow variables:
\begin{equation}
x_{ij} = \frac{\sum_{u \in V}{f^{ij}_{uj}}}{a_i},\; \quad \forall i \in I, j \in J
\label{aux}
\end{equation}
Replacing the $x_{ij}$ variables as in \eqref{aux} yields a new MILP model for JSGPR, presented in the next subsection.
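The substitution \eqref{aux} can be sanity-checked numerically; the toy flows below are hypothetical, with demand point $0$ splitting $a_0 = 4$ units between gateways $1$ and $2$.

```python
# Sanity check of substitution (aux) on hypothetical toy flows.
a = {0: 4.0}
# f[(i, j)][(u, v)]: traffic of demand i assigned to gateway j on link (u, v).
f = {
    (0, 1): {(0, 1): 3.0},               # 3 units enter gateway 1 directly
    (0, 2): {(0, 3): 1.0, (3, 2): 1.0},  # 1 unit enters gateway 2 via node 3
}

def x(i, j):
    # x_ij = (sum over last-hop links (u, j) of f^{ij}_{uj}) / a_i, as in (aux)
    return sum(v for (u, w), v in f[(i, j)].items() if w == j) / a[i]

shares = [x(0, 1), x(0, 2)]   # [0.75, 0.25]
```

The recovered fractions sum to one, matching the demand constraints \eqref{c1b}.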
\subsection{JSGPR MILP Formulation II}
The new MILP formulation is as follows:
\vspace{2mm}
\begin{equation}
\textbf{Minimize} \quad
\sum_{j \in J}c_jy_{j} + \phi\sum_{i \in I}\sum_{j \in J}\sum_{(u,v) \in E}c_{uv}f^{ij}_{uv} \;
\label{obj}
\end{equation}
\noindent\textbf{Demand Constraints: }
\begin{equation}
\quad\sum_{j \in J}\sum_{u \in V} f^{ij}_{uj} = a_i,\; \quad \forall i \in I
\label{demgar}
\end{equation}
\noindent\textbf{Feasibility Constraints: }
\begin{equation}
\quad \sum_{u \in V} f^{ij}_{uj} \leq y_j a_i,\; \quad \forall i \in I, \forall j \in J
\label{leq}
\end{equation}
\noindent\textbf{Capacity Constraints: }
\begin{equation}
\sum_{i\in I}\sum_{u\in V}{f^{ij}_{uj}} \leq q_j y_j,\; \quad \forall j \in J
\label{c4b2}
\end{equation}
\begin{equation}
\quad \sum_{i \in I}\sum_{j \in J}f^{ij}_{uv} \leq q_{uv},\; \quad \forall (u,v) \in E
\label{linkcap}
\end{equation}
\noindent\textbf{Flow Constraints: }
\begin{equation}
\sum_{u \in V: (u,i) \in E}{f^{ii}_{ui}} + \sum_{v\in V: (i,v) \in E}\sum_{j\neq i \in J }f^{ij}_{iv} = a_i,\; \quad \forall i \in I
\label{flow1}
\end{equation}
\begin{equation}
\begin{aligned}
&\quad \sum_{v \in V: (v,u) \in E}f^{ij}_{vu} - \sum_{v \in V: (u,v) \in E}f^{ij}_{uv} = \sum_{w \in V}{f^{iu}_{wu}} \; \\
& \quad \forall i \in I, j \in J, u \in V, u \neq i
\end{aligned}
\label{flow2}
\end{equation}
\noindent\textbf{Domain Constraints: }
\begin{equation}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1}
\end{equation}
\begin{equation}
\quad f_{uv}^{ij} \geq 0,\; \quad \forall (u,v) \in E, i \in I, j \in J
\label{dom2}
\end{equation}
\vspace{2mm}
where $\phi = \frac{1}{\sum_{i\in I}{a_i}}$ is the normalization factor between the two terms of the objective function.
Additionally, in order to meet the average-delay requirement of each demand point $i$, we impose a new constraint exploiting the corresponding flow variables:
\begin{equation}
\quad \sum_{j \in J}\sum_{(u,v) \in E}\frac{f^{ij}_{uv}}{a_i}d_{uv}\leq d_{max} \; \quad \forall{i \in I}
\label{newdelgar}
\end{equation}
where $d_{max}$ is the maximum allowed average delay for each terrestrial node. We note that we have ignored the delay of the terrestrial-satellite link in the above calculation, since it is a constant term, as explained in Section~\ref{sec:desc}.
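Constraint \eqref{newdelgar} is straightforward to check for a given solution; the snippet below evaluates the flow-weighted average delay of one demand point on hypothetical data.

```python
# Hypothetical check of the average-delay constraint (newdelgar) for one
# demand point: the flow-weighted link delays must not exceed d_max.
a_i = 4.0                                       # demand rate of node i
d_max = 10.0                                    # allowed average delay (ms)
d = {(0, 1): 3.0, (0, 3): 2.0, (3, 2): 4.0}     # d_uv, per-link delay
flow = {(0, 1): 3.0, (0, 3): 1.0, (3, 2): 1.0}  # f^{ij}_{uv}, summed over j

def avg_delay():
    # sum over links of (f^{ij}_{uv} / a_i) * d_uv
    return sum(flow[l] / a_i * d[l] for l in flow)

feasible = avg_delay() <= d_max                 # 3.75 <= 10.0 -> True
```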
\subsection{JSGPR-LB MILP Formulation}
In the aforementioned formulation, the load assigned to a gateway is bounded solely by the capacity constraints \eqref{c4b2} and \eqref{linkcap}. We define a new decision variable $l_{max}$ to represent the maximum traffic assigned to any gateway, common to all the selected gateways. We add $l_{max}$ as an additional term to the objective function; the new objective function is described in \eqref{objlm}:
\begin{align}
\textbf{Minimize} \quad(\sum_{j \in J}c_jy_{j}+\phi\sum_{i \in I}\sum_{j \in J}\sum_{(u,v) \in E}c_{uv}f^{ij}_{uv}) + \alpha l_{max}\;
\label{objlm}
\end{align}
where $\alpha$ is a constant factor determining the balance between the two terms of the objective function.
Also, we change constraints \eqref{c4b2} as follows:
\begin{equation}
\quad \sum_{i \in I}\sum_{u \in V} f^{ij}_{uj} \leq l_{max}\; \quad \forall j \in J
\end{equation}
where $l_{max} \leq q_j, \; \forall j \in J$. We call this last model JSGPR-LB. The resulting optimization problem aims to balance the load of the gateways in conjunction with the cost of the gateway deployment \cite{manet}, \cite{lb}.
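The effect of the $l_{max}$ term can be seen on hypothetical per-gateway loads: two placements carrying the same total traffic differ in their maximum load, and \eqref{objlm} favours the balanced one.

```python
# Toy illustration of the load-balancing term: l_max is the largest load
# assigned to any selected gateway (the loads below are hypothetical).
def l_max(loads):
    # loads[j] = sum_i sum_u f^{ij}_{uj}, the traffic absorbed by gateway j
    return max(loads.values())

skewed = {1: 9.0, 2: 1.0}     # one hot gateway
balanced = {1: 5.0, 2: 5.0}   # same total traffic, evenly spread
```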
\subsection{LP Relaxation and Approximation Algorithm}
Since solving a MILP is NP-hard, the problem is intractable for large-scale networks \cite{Bell}. For the aforementioned JSGPR and JSGPR-LB MILP formulations, the optimal fractional solution is computed via linear programming relaxation. The relaxed problem can be solved by any suitable linear programming method in polynomial time. A rounding technique similar to \cite{torkzaban2019trust} is then applied to obtain the integer solution of the MILP. The resulting multi-commodity flow allocation problem is solved to identify the routing paths.
\section{Introduction}
Over the past years, the integration of satellite communications with current and emerging terrestrial networks (e.g., 5G mobile networks) has been gaining attention, given the growing data traffic volume, which is predicted to increase by over 10,000 times over the next 20 years \cite{8438267}. The trend is also supported by national directives and initiatives to support broadband connectivity in rural and remote areas, as it is considered a crucial factor for economic growth. Due to their large footprint, satellites can complement and extend terrestrial networks, both in densely populated areas and in rural zones, and provide reliable mission-critical services. Standardization bodies such as 3GPP, ETSI \cite{doi:10.1002/sat.1292} and ITU \cite{8438267} also recognize and promote integrated and/or hybrid satellite and terrestrial networks.
The integrated satellite-terrestrial networks (ISTNs) can be a cornerstone to the realization of a heterogeneous global system to enhance the end-user experience \cite{artiga2016terrestrial}. An ISTN, as depicted in Fig.~\ref{fig:map}, is composed of satellites organized in a constellation that support routing, adaptive access control, and spot-beam management \cite{vasavada2016architectures}, whereas the terrestrial optical network consists of ground stations (gateways), switches, and servers.
Delay-sensitive data services are more suitable to transport in the low earth orbit (LEO) satellite networks, which provide inherent advantages in power consumption, pole coverage, latency, and lower cost compared with the geostationary earth orbit (GEO) satellite networks and the medium earth orbit (MEO) satellite networks \cite{guo19}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{results/ICC2020/SGP2.jpg}
\caption{Example of an integrated satellite-terrestrial network.}
\label{fig:map}
\end{center}
\end{figure}
However, there are multiple challenges associated with ISTNs. Satellite networks and terrestrial networks are widely heterogeneous and supported by distinct control and management frameworks, so their integration has to date been based on proprietary and custom solutions. Additionally, since the integrated network offers multiple paths along which data traffic can be routed, optimized path selection is important to satisfy the Quality of Service (QoS) requirements of the traffic flows and to improve the utilization of network resources \cite{jia2016broadband}. Existing satellite networks employ a decentralized management architecture, scheduled link allocation and static routing strategies, which make it difficult to support flexible traffic scheduling that adapts to changes in traffic demands \cite{service}. Complex handover mechanisms must be in place at the gateway nodes, while their high power-consumption requirements also need to be taken into consideration.
Nowadays, the inclusion of plug-and-play SatCom solutions is being investigated, primarily in the context of 5G. Exploiting software-defined networking (SDN) \cite{zhang2017energy} and network function virtualization (NFV), ISTNs can be programmed and network behavior can be centrally controlled using open APIs, thus supporting a more agile integration of terrestrial (wired and wireless) and satellite domains interworking with each other to deliver end-to-end services. Given the extra degrees of freedom in ISTN deployment brought forth by the softwarization and programmability of the network, the satellite gateway placement problem is of paramount importance for the space-ground integrated network. The problem entails the selection of an optimal subset of the terrestrial nodes for hosting the satellite gateways while satisfying a group of design requirements. Various strategic objectives can be pursued when deciding the optimal placement of the satellite gateways, including but not limited to cost reduction, latency minimization, and reliability assurance \cite{GW}, \cite{GWC}, \cite{taghlid}.
In this paper, we propose a method for the cost-optimal deployment of satellite gateways on the terrestrial nodes. Particularly, we aim to minimize the overall cost of gateway deployment and traffic routing, while satisfying latency requirements. To this end, we formulate the problem as a mixed-integer linear program (MILP), with latency bounds as hard constraints. We derive an approximation method from our MILP model to significantly reduce the time-complexity of the solution, at the expense of sub-optimal gateway placements, and investigate the corresponding trade-off. Furthermore, in order to reduce latency and processing power at the gateways, we impose a (varying) upper bound on the load that can be supported by each gateway.
It is important to note that traffic routing and facility placement are usually solved as separate problems, but assigning a demand point to a facility without considering the other demand points might not be realistic [9]. Given the significant interrelation of the two problems, we develop and solve a single aggregated model instead of solving the two problems sequentially.
The remainder of the paper is organized as follows. Section~\ref{sec:desc} describes the problem and the network model. In Section~\ref{sec:problem} we introduce the MILP formulation and its LP-based approximation, followed by a variant of our model with load minimization. We present our evaluation results in Section~\ref{sec:evaluation} and provide an overview of the related work in Section~\ref{sec:relatedwork}. Finally, in Section~\ref{sec:conclusions}, we highlight our conclusions and discuss directions for future work.
\section{Network Model and Problem Description}
\label{sec:desc}
The ISTN under study is depicted in Fig.~\ref{fig:map}. We model the terrestrial network as an undirected graph $G = (V, E)$, where $(u,v) \in E$ if there is a link between the nodes $u,v \in V$. Let $J\subseteq V$ be the set of all potential nodes for gateway placement and $I\subseteq V$ the set of all demand points. We note that the sets $I$ and $J$ are not necessarily disjoint. A typical substrate node $v \in V$ may satisfy one or more of the following statements: (i) node $v$ is a gateway to the satellite, (ii) node $v$ is an initial demand point of the terrestrial network, (iii) node $v$ relays the traffic of other nodes to one or more of the gateways. The ideal solution will introduce a set of terrestrial nodes for gateway placement which results in the cheapest deployment and routing, together with the corresponding routes from all the demand points to the satellite, while satisfying the design constraints.
Regarding the delay of the network, we consider only the propagation delay.
The propagation delay of a path in the network is the sum of the propagation delays over its constituting links. A GEO satellite is considered in the particular system model \cite{GW}. Let $d_{uv}$ represent the contribution of the terrestrial link $(u,v)$ to the propagation delay of a path which contains that link. The propagation delay from a gateway to the satellite is constant \cite{GWC}. Moreover, we consider multi-path routing for the traffic demands. Therefore, we define a flow $i\rightarrow j$ as the fraction of the traffic originated at node $i \in I$, routed to the satellite through gateway $j \in J$.
Once all the flows corresponding to node $i$ are determined, all the routing paths from node $i$ to the satellites are defined. Let $c_j$ denote the cost associated with deploying a satellite gateway at node $j$ and $c_{uv}$ the bandwidth unit cost for each link $(u,v)$. We also define $a_i$ as the traffic demand rate of node $i$ which is to be served by the satellite. The capacity of each link $(u,v)$ is denoted by $q_{uv}$ while the capacity of the gateway-satellite link is $q_j$ for gateway $j$. Table \ref{paramv} summarizes all the notations used for the parameters and variables throughout the paper.
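For completeness, one possible in-code representation of the model data of Table~\ref{paramv} is sketched below; the field names mirror the paper's notation and the concrete values are placeholders.

```python
from dataclasses import dataclass

# Carrier for the system-model data of Table (paramv); values are placeholders.
@dataclass
class Instance:
    V: set    # terrestrial nodes
    E: dict   # (u, v) -> (c_uv, q_uv, d_uv): link cost, capacity, delay
    J: set    # candidate gateway locations, a subset of V
    I: set    # demand points, a subset of V
    c: dict   # j -> c_j, gateway deployment cost
    a: dict   # i -> a_i, traffic demand rate
    q: dict   # j -> q_j, gateway-satellite link capacity

inst = Instance(
    V={0, 1, 2},
    E={(0, 1): (1.0, 10.0, 2.0), (1, 2): (1.0, 10.0, 2.0)},
    J={1}, I={0, 2},
    c={1: 5.0}, a={0: 3.0, 2: 2.0}, q={1: 20.0},
)
```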
\begin{table*}[ht]
\centering
\begin{center}
\scalebox{0.9}{
\begin{tabular}{|c|c|}
\hline
Variables & Description\\
\hline
$y_j$ & The binary decision variable of gateway placement at node $j$\\
$x_{ij}$ & The fraction of traffic demand originating at $i$ passing through gateway $j$\\
$f_{uv}^{ij}$ & The amount of traffic originating at $i$ assigned to gateway $j$ passing through the link $(u,v)$\\
\hline
Parameters & Description\\
\hline
$G= (V,E)$ & Terrestrial network graph\\
$J$ & The set of potential nodes for gateway placement \\
$I$ & The set of demand points\\
$c_j$ & The cost associated with deploying a satellite gateway at node $j$\\
$c_{uv}$ & Bandwidth unit cost for link $(u,v)$\\
$a_i$ & Traffic demand rate of node $i$\\
$q_{uv}$ & Capacity of link $(u,v)$\\
$q_j$ & Capacity of gateway-satellite link for gateway $j$\\
\hline
\end{tabular}}
\caption{System model parameters and variables}
\label{paramv}
\end{center}
\end{table*}
Since the propagation latency is usually the dominant factor in determining the network delay \cite{taghlid}, we first develop a joint satellite gateway placement and routing (JSGPR) MILP formulation to minimize the cumulative gateway placement cost with hard constraints on the average
propagation delay for the traffic of each demand point.
Following that, we will derive a variant of JSGPR with load balancing (JSGPR-LB) which aims at mutually optimizing the overall cost and the load assigned to all the deployed gateways. Finally, we use an LP-based approximation approach to reduce the time-complexity of the proposed scheme at the cost of a sub-optimal gateway placement.
\section{Related Work}
\label{sec:relatedwork}
Although the gateway placement problem over a general network is very well studied, the satellite gateway placement problem in an ISTN is fairly new.
For sensor \cite{bzoor}, vehicular \cite{mezza}, wireless mesh \cite{gorbe} networks and MANETs \cite{manet}, different approaches have been proposed with different optimization objectives, such as load balancing \cite{lb}, latency minimization \cite{delay}, and reliability maximization \cite{rel}.
The recent works on ISTNs mostly focus on optimizing the average network latency \cite{GW} or reliability \cite{GWC}, \cite{capac}, \cite{taghlid}, \cite{same}. In \cite{GW}, the authors propose a particle swarm-based optimization approximation (PSOA) approach for minimizing the average network latency and benchmark it against an optimal brute-force algorithm (OBFA) to show the improvement in time complexity. In \cite{GWC}, a hybrid simulated annealing and clustering algorithm (SACA) is used to provide a near-optimal solution for the joint SDN controller and satellite gateway placement with the objective of network reliability maximization. For the same purpose as \cite{GWC}, the authors in \cite{taghlid} provide a simulated annealing partition-based k-means (SAPKM) approach and compare the advantages and weaknesses of their approach with that of \cite{GWC}. In \cite{capac}, the authors consider the satellite link capacity as a limiting constraint and solve the placement problem to maximize the reliability.
There is another line of research which aims to place the satellite gateways on aerial platforms. More precisely, in \cite{cross}, a new aerial layer is added between the terrestrial and the satellite layers, which relays the traffic between the satellite and the terrestrial nodes. The authors use a greedy optimization approach for the gateway selection.
Although the above approaches provide great insight into the satellite gateway placement problem, they assume that the number of gateways is known prior to design. Secondly, none of the above studies investigates the traffic routes from the terrestrial switches to the satellite, whereas we propose the joint optimization of satellite gateway placement together with the corresponding traffic routing for ISTNs.
\section{Problem Formulation}
\label{sec:problem}
\subsection{JSGPR MILP Formulation}
In order to formulate the JSGPR problem as a MILP we consider
\begin{itemize}
\item The set of binary variables $\textbf{y}$, where $y_j$ expresses the placement of a gateway at node $j$.
\item The set of continuous variables $\textbf{f}$, where $f_{uv}^{ij}$ expresses the traffic rate passing through the link $(u,v)$ corresponding to flow $i\rightarrow j$.
\end{itemize}
The MILP formulation is as follows:
\vspace{2mm}
\noindent\textbf{Objective: }
\begin{equation}
\textbf{Minimize} \qua
\sum_{j \in J}c_jy_{j} + \phi\sum_{i \in I}\sum_{j \in J}\sum_{uv \in L}c_{uv}f^{ij}_{uv} \;
\label{obj}
\end{equation}
\noindent\textbf{Demand Guarantee Constraints: }
\begin{equation}
\quad\sum_{j \in J}\sum_{u \in V} f^{ij}_{uj} = a_i,\; \quad \forall i \in I
\label{demgar}
\end{equation}
\noindent\textbf{Feasibility Guarantee Constraints: }
\begin{equation}
\quad \sum_{u \in V} f^{ij}_{uj} \leq y_j a_i,\; \quad \forall i \in I, \forall j \in J
\label{leq}
\end{equation}
\noindent\textbf{Delay Guarantee Constraints:}
\begin{equation}
\quad \sum_{j \in J}\sum_{uv \in L}\frac{f^{ij}_{uv}}{a_i}d_{uv}\leq d_{max} \; \quad \forall{i \in I}
\label{delgar}
\end{equation}
\noindent\textbf{Link Capacity Constraints: }
\begin{equation}
\quad \sum_{i \in I}\sum_{j \in J}f^{ij}_{uv} \leq q_{uv},\; \quad \forall uv \in L
\label{linkcap}
\end{equation}
\noindent\textbf{Flow Constraints: }
\begin{equation}
\quad f^{ii}_{ii} + \sum_{v\in V: iv \in L}\sum_{j\neq i \in J }f^{ij}_{iv} = a_i,\; \quad \forall i \in I
\label{flow1}
\end{equation}
\begin{equation}
\begin{aligned}
&\quad \sum_{v \in V: (u,v) \in L}\sum_{j\in J}f^{ij}_{vu} = \sum_{v \in V: (u,v) \in L}\sum_{j\in J}f^{ij}_{uv} \; \\
& \quad \forall i \in I, j \in J, u \in V, u \neq i
\end{aligned}
\label{flow2}
\end{equation}
\begin{equation}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1}
\end{equation}
\begin{equation}
\quad f_{uv}^{ij} \geq 0,\; \quad \forall uv \in L, i \in I, j \in J
\label{dom2}
\end{equation}
\vspace{2mm}
The objective function \eqref{obj} is the total cost of the process, consisting of the gateway deployment cost and the bandwidth cost
Constraints \eqref{demgar} are to ensure that the traffic demand for each node is satisfied using one or multiple gateways. Constraints \eqref{leq} allow for the formation of the flow $i\rightarrow j$ only if a gateway is placed at node $j$, while constraints \eqref{delgar} will guarantee that the average end-to-end delay for each demand point will not exceed a pre-defined upper bound $d_{max}$.
Constraints \eqref{linkcap} make sure that the resources allocated by each link does not exceed the capacity of that link. Constraints \eqref{flow1} and \eqref{flow2} enforce the flow conservation; i.e. the inbound traffic to a switch must be equal to its outbound traffic. Finally, the constraints \eqref{dom1}, and \eqref{dom2} express the domain requirements for the variables $\textbf{y}$, and $\textbf{f}$ respectively.
Since the MILP is known to be NP-hard, the proposed formulation is intractable for larger scale networks. Next we derive an LP model to strike a balance between the time-complexity of the optimization problem and the accuracy of the results.
\subsection{LP Relaxation and Rounding Algorithm}
The LP model is obtained from the original MILP formulation by relaxing the integrality of the binary $\textbf{y}$ variables. We replace the domain constraints \eqref{dom1} by the following ones:
\begin{equation}
\quad y_{j} \in [0, 1],\; \quad \forall j \in J
\label{domain1new}
\end{equation}
The rest of the model remains intact. The LP model can be solved with polynomial effort, but does not necessarily leave us with a valid gateway placement. Therefore, inspired by \cite{torkzaban2019trust}, we use a deterministic rounding algorithm to derive binary values for the $\textbf{y}$ placement variables, at the cost of solution suboptimality. The rounding procedure is presented in Algorithm~\ref{alg} and works as follows. In each iteration (Lines \ref{2}-\ref{11}), an LP instance is first solved and the value $y_j$ of each satellite gateway candidate is extracted (Line \ref{4}). Then, the candidate with the maximum fractional $y_j$ is rounded to $1$ and the corresponding constraint is added to the LP model (Lines \ref{6}-\ref{8}). Since the $y_j$ variables only appear on the right-hand side of the ($\leq$) constraints, none of the constraints is violated by this choice. The algorithm terminates when all dimensions of the solution are rounded (i.e., no fractional values remain in the solution of the LP model), or when the LP model becomes infeasible, in which case the gateway placement is denied. After the placement of the gateways is determined, a multi-commodity flow (MCF) allocation algorithm is run to find the corresponding flow variables.
Since the integral part of the solution has at most $|J|$ dimensions, the $\textbf{Solve\_LP(..)}$ function runs at most $|J|$ times, which yields the polynomial time-complexity stated earlier.
\begin{algorithm}
\caption{LP Rounding}
\label{alg}
\begin{algorithmic}[1]
\REPEAT \label{1}
\STATE $\{y_j, f^{ij}_{uv}\} \leftarrow \textbf{Solve\_LP(..)}$ \label{2}
\STATE If a solution exists $Sol := true$, otherwise $Sol := false$ \label{3}
\STATE $Y \leftarrow\{y_j |y_j \notin \{0, 1\} \}$ \label{4}
\IF{$Y \neq \emptyset$} \label{5}
\STATE $y_0 \leftarrow \arg\max_{y_j \in Y}\, y_j $ \label{6}
\STATE \textbf{Add LP Constraint $y_0=1$} \label{8}
\ENDIF \label{11}
\UNTIL $(Y=\emptyset) \lor (Sol = false)$ \label{13}
\IF{$Sol= true$} \label{14}
\STATE $\{y_j, f^{ij}_{uv}\} \leftarrow \textbf{Solve\_MCF(..)}$ \label{15}
\STATE If a solution exists $Sol := true$, otherwise $Sol := false$ \label{16}
\ENDIF \label{26}
\IF{$Sol = true$} \label{17}
\RETURN $\{y_j, f^{ij}_{uv}\}$ \{Accept the placement\} \label{18}
\ELSE \label{19}
\STATE $\{\forall{y_j} :=0, \forall{f^{ij}_{uv}} :=0\}$ \label{20}
\RETURN $\{y_j, f^{ij}_{uv}\}$ \{Reject the placement\} \label{21}
\ENDIF \label{22}
\end{algorithmic}
\end{algorithm}
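The iterative fix-and-resolve loop of Algorithm~\ref{alg} can be sketched in Python. This is an illustrative sketch only: \texttt{solve\_lp} stands in for the relaxed LP solver, and the toy capacities and the greedy fractional placement in \texttt{toy\_lp} are hypothetical, not the paper's CPLEX-based implementation.

```python
def lp_rounding(candidates, solve_lp):
    """Deterministic rounding: repeatedly solve the relaxed LP, fix the
    largest fractional y_j to 1, and re-solve until integral or infeasible."""
    fixed = set()                          # gateways already rounded to 1
    while True:
        sol = solve_lp(fixed)              # dict j -> y_j in [0, 1], or None
        if sol is None:
            return None                    # LP infeasible: reject placement
        frac = {j: y for j, y in sol.items() if 0.0 < y < 1.0}
        if not frac:
            return sol                     # every y_j is integral: done
        j_star = max(frac, key=frac.get)   # candidate with largest fraction
        fixed.add(j_star)                  # corresponds to adding y_{j*} = 1

# Toy stand-in for Solve_LP(..): serve 100 units of demand, filling the
# already-fixed gateways first and splitting the remainder fractionally
# over the other candidates in decreasing capacity order.
CAP = {"a": 60.0, "b": 50.0, "c": 40.0}    # hypothetical gateway capacities

def toy_lp(fixed):
    demand, sol = 100.0, {}
    for j in fixed:
        sol[j] = 1.0
        demand -= CAP[j]
    demand = max(demand, 0.0)
    for j in sorted(set(CAP) - fixed, key=CAP.get, reverse=True):
        y = min(1.0, demand / CAP[j]) if demand > 0 else 0.0
        sol[j] = y
        demand -= y * CAP[j]
    return sol

placement = lp_rounding(CAP, toy_lp)       # {"a": 1.0, "b": 1.0, "c": 0.0}
```

Each pass fixes exactly one $y_j$, so the loop body runs at most $|J|$ times, matching the complexity argument above.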
\subsection{JSGPR-LM MILP Formulation}
We define a new term $l_{max}$ to represent the maximum traffic assigned to any gateway in the optimal placement. In order to minimize the forwarding latency, we wish to minimize the value of $l_{max}$; hence, we add $l_{max}$ as an additional term to the objective function, which becomes:
\begin{equation}
\textbf{Minimize} \quad(\sum_{j \in J}c_jy_{j}+\phi\sum_{i \in I}\sum_{j \in J}\sum_{uv \in L}c_{uv}f^{ij}_{uv}) + \alpha l_{max}\;
\label{objlm}
\end{equation}
where $\alpha$ is a constant factor determining the balance between the two terms of the objective function. To ensure that the traffic assigned to each gateway does not exceed $l_{max}$, we add the following set of constraints to our model:
\begin{equation}
\quad \sum_{i \in I}\sum_{u \in V} f^{ij}_{uj} \leq l_{max},\; \quad \forall j \in J
\end{equation}
The resulting optimization problem minimizes the load of the gateways in conjunction with the cost of the gateway deployment.
\section{Performance Evaluation}
\label{sec:evaluation}
In this section, we evaluate the performance of our satellite gateway placement method. We first describe the simulation environment setup and scenarios, and then review the performance evaluation results.
\subsection{Performance Evaluation Setup}
\label{evaluation:setup}
We evaluate the performance of our JSGPR and JSGPR-LB approaches on multiple real network topologies publicly available at the Topology Zoo \cite{zoo}. The five topologies we consider are listed in Table~\ref{topo}. The link lengths and capacities are extracted from the Topology Zoo, and the propagation delays are calculated based on the lengths of the links with a propagation velocity of $C = 2 \times 10^8\,m/s$ \cite{speed}. The value of $d_{max}$ is set to $10\,ms$, and the deployment cost of each node is drawn from a uniform random distribution $U(500, 1000)$. The unit bandwidth cost of all the links is set to $1$. Also, the value of $q_j$ is set to $240\,Mbps$ for all the gateways.
To develop and solve our MILP and LP models we use the CPLEX commercial solver. Our tests are carried out on a server with an Intel i5 CPU at 2.3 GHz and 8 GB of main memory.
\begin{table}[b]
\centering
\begin{tabular}{||c c c||}
\hline
Topology & Nodes & Links \\
\hline\hline
Sinet & 13 & 18 \\
Ans & 18 & 25 \\
Agis & 25 & 32 \\
Digex & 31 & 35 \\
Bell Canada & 48 & 64 \\
\hline
\end{tabular}
\caption{Summary of the studied topologies}
\label{topo}
\end{table}
\subsection{Evaluation Scenarios}
\label{evaluation:scenarios}
We have conducted two sets of experiments. In the first, we benchmark our LP-based approximation algorithm against the MILP-based optimal one for JSGPR, whereas the second aims to observe the impact of our load balancing approach on the gateway placement.
For both cases and for all topologies, if $c^i_{max}$ is the maximum capacity among all the outgoing links of node $i$, the traffic rate originating at node $i$ is drawn from a uniform distribution $U(\frac{2c^i_{max}}{3}, c^i_{max})$. Specifically, we use the following metrics to evaluate the performance of our algorithm:
\begin{itemize}
\item \textbf{Solver Runtime} is the amount of time the CPLEX solver takes to solve the generated MILP or LP instances.
\item \textbf{Average Delay} as explained in Section~\ref{sec:desc}.
\item \textbf{Total Cost} is the total cost of deploying the satellite gateways and routing the traffic.
\item \textbf{Gateway Load} is the amount of traffic assigned to the gateways after the gateway placement problem is solved.
\end{itemize}
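The Average Delay metric follows the delay guarantee constraint \eqref{delgar}: for each demand point, the link delays are weighted by the fraction of the demand routed over each link. A minimal sketch, with hypothetical link delays and flow values chosen only for illustration:

```python
def average_delay(flows, d, a_i):
    """Average end-to-end delay of one demand point, per the delay
    guarantee constraint: sum over links of (f / a_i) * d_uv.
    flows: dict (u, v) -> traffic of this demand on link (u, v),
    already summed over destination gateways j."""
    return sum(f / a_i * d[uv] for uv, f in flows.items())

# Toy demand of 10 units split over two paths (link delays in ms):
# 6 units on s -> u -> g1 (2 + 3 ms), 4 units on s -> g2 (8 ms).
d = {("s", "u"): 2.0, ("u", "g1"): 3.0, ("s", "g2"): 8.0}
flows = {("s", "u"): 6.0, ("u", "g1"): 6.0, ("s", "g2"): 4.0}
avg = average_delay(flows, d, a_i=10.0)   # 0.6*(2+3) + 0.4*8 = 6.2 ms
assert avg <= 10.0                        # satisfies d_max = 10 ms
```

The same computation, applied per terrestrial node, produces the delay profiles reported below.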
\subsection{Evaluation Results}
\label{sec3}
\begin{figure*}[t]
\begin{center}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/costGW.png}
\caption{Exp-A: Normalized Total Cost}
\label{fig:acc}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/time.png}
\caption{Exp-A: Average Solver Runtime}
\label{fig:util}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/digex_time_bp.png}
\caption{Exp-A: Solver Runtime for Digex}
\label{fig:prev}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/Agg_delay_bp.png}
\caption{Exp-A: Average Delay ($d_{max} = 10 ms$ )}
\label{fig:agg}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/perdelay.png}
\caption{Exp-A: Total Cost with Varying Delay Bound}
\label{fig:comp}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/costload.png}
\caption{Exp-B: Normalized Total Cost}
\label{fig:cost2}
\end{minipage}
\hspace{1em}
\begin{minipage}[h]{0.225\textwidth}
\includegraphics[width=1\linewidth]{results/ICC2020/Agg_load_bp.png}
\caption{Exp-B: Gateway Load }
\label{fig:load_profile}
\end{minipage}
\hspace{1em}
\end{center}
\end{figure*}
\subsubsection{Experiment A - Approximation vs. Exact Method}Fig.~\ref{fig:acc} shows the average normalized total cost for both the optimal MILP-based approach and the LP-based approximation of JSGPR. Due to the suboptimal placement, the LP-based approach results in additional deployment costs of at most $13\%$ over the optimal placement, and this gap decreases as the scale of the network grows. For Digex, the approximate approach leads to about a $4\%$ increase in the deployment cost.
Fig.~\ref{fig:util} depicts the average runtime of both the MILP and LP formulations for the $4$ topologies. We note that for larger networks, CPLEX failed to provide the solution of the MILP problem, while our LP model continued to solve the problem within the expected time limit. In particular, for the Bell-Canada topology, in $40\%$ of the runs CPLEX was not able to find any feasible solution within the first $10$ hours, while the approximation approach provided the suboptimal placement in less than $15$ minutes. The solver runtime for Digex is shown in Fig.~\ref{fig:prev}: the average runtime of the LP is $230$ seconds, while for the MILP it is around $3000$ seconds.
Fig.~\ref{fig:agg} presents the average delay for Ans, Agis, and Digex. The average of the expected experienced delay under the MILP model is $2.95, 6.18,$ and $2.15$ ms, while under the LP model it is $4.68, 5.6,$ and $1.65$ ms. We note that the suboptimal procedure leads to additional deployed gateways in Agis and Digex, which in turn places the nodes closer to the gateways, so that they experience a lower average delay.
Further, an insightful observation is that, in Agis, $d_{max}$ is a relatively tight upper bound on the average delay experienced by each node; therefore, the delay profile is pushed towards its upper bound. In Digex, by contrast, due to the low density of links, more gateways have to be deployed on the terrestrial nodes, which makes the gateways reachable over shorter paths; therefore, the delay profile is inclined towards its lower bound.
Fig.~\ref{fig:comp} depicts the normalized cost of the JSGPR problem for different values of $d_{max}$. For instance, as indicated by this figure, if in Digex a delay of $10\,ms$ is tolerable instead of $2\,ms$, a $35\%$ reduction in cost results; similarly, upgrading the service from a delay of $5\,ms$ to $2\,ms$ in Agis will almost double the cost.
Overall, the aforementioned figures illustrate that the performance of our approximation algorithm is very close to that of the exact approach, with the important advantage of reduced time complexity, which shows the efficiency of our proposed approximation method.
\subsubsection{Experiment B - The Impact of Load Minimization}
Fig.~\ref{fig:cost2} shows the normalized total cost of JSGPR and JSGPR-LB for the Agis and Digex topologies. As expected, JSGPR-LB is more costly, since a larger number of gateways is required in order to evenly share the load between the gateways.
Fig.~\ref{fig:load_profile} depicts the profile of the load assigned to the gateways. In both depicted topologies, the load profile under JSGPR-LB is very thin and concentrated around its average, confirming the effectiveness of the formulation.
The lower load on the placed gateways is achieved by a sub-optimal placement (due to the larger number of gateways), which results in a more expensive deployment. As depicted in Fig.~\ref{fig:cost2}, the total cost of gateway placement in the studied topologies for JSGPR-LB is at most $16\%$ above the optimal placement cost achieved by JSGPR.
\section{Conclusions}
\label{sec:conclusions}
In this paper, we introduce the joint satellite gateway placement and routing problem over an ISTN, for facilitating the terrestrial-satellite communications while adhering to propagation latency requirements, in a cost-optimal manner.
We also balance the corresponding load between the selected gateways. To yield a polynomial solution time, we relax the integer variables and derive an LP-based rounding approximation of our model.
In SDN-enabled ISTNs, the problem of controller placement needs to be addressed jointly with the placement of the satellite gateways, keeping in mind different design strategies such as cross-layer data delivery, load balancing, reliability, and latency optimization. Moreover, instead of placing the satellite gateways on the terrestrial nodes, aerial platforms can be an alternative choice, leading to a cross-layer network design problem in which flow routing is more challenging than in the presented work. These two problems are the topics of our future research.
\section{Problem Formulation}
\label{sec:problem}
\subsection{Satellite GW placement problem MILP Formulation I as CFLP}
In order to formulate the problem as a MILP we consider
\begin{itemize}
\item The set of binary variables $\textbf{y}$, where $y_j$ expresses the placement of a gateway at node $j$.
\item The set of continuous variables $\textbf{x}$, where $x_{ij}$ expresses the fraction of traffic demand originating at $i$ passing through gateway $j$.
\end{itemize}
\vspace{2mm}
\noindent\textbf{Objective function:}
\begin{equation}
\textbf{Minimize} \quad \sum_{j\in J}c_j {y_j} + \sum_{i \in I}\sum_{j \in J}{c_{ij} a_{i}x_{ij}}
\end{equation}
\begin{equation}
\sum_{j \in J}{x_{ij}} = 1,\; \quad \forall i \in I
\label{c1}
\end{equation}
\begin{equation}
x_{ij}\leq y_j, \; \quad \forall i \in I, j \in J
\label{c2}
\end{equation}
\begin{equation}
\sum_{i \in I}a_i {x_{ij}} \leq q_j y_j,\; \quad \forall j \in J
\label{c3}
\end{equation}
\begin{equation}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1a}
\end{equation}
\begin{equation}
\quad 1 \geq x_{ij} \geq 0,\; \quad \forall i \in I, j \in J
\label{dom2a}
\end{equation}
The objective function minimizes the total cost, which comprises two terms: the first represents the cost of installing and opening a gateway, where $c_j$ denotes the cost of installing and opening a gateway at location $j$; the second accounts for the demand allocation cost, where $c_{ij}$ denotes the cost of supplying by gateway $j$ the fraction $x_{ij}$ of the demand $a_i$ originating at $i$. Constraint set \eqref{c1} assures that traffic demands are fully served by the gateways, while constraint set \eqref{c2} assures that demands are only served by open gateways. Constraints \eqref{c3} state the capacity constraints for the gateways. The domains of the $y_j$ and $x_{ij}$ variables are defined in constraints \eqref{dom1a} and \eqref{dom2a}, respectively. The uncapacitated variant (UFLP) is obtained by dropping constraints \eqref{c3}.
\subsection{Satellite GW placement problem and routing MILP Formulation I as CFLP}
In order to formulate the problem as a MILP we consider
\begin{itemize}
\item The set of binary variables $\textbf{y}$, where $y_j$ expresses the placement of a gateway at node $j$.
\item The set of continuous variables $\textbf{x}$, where $x_{ij}$ expresses the fraction of traffic demand originating at $i$ passing through gateway $j$.
\item The set of continuous variables $\textbf{f}$, where $f_{uv}^{ij}$ expresses the amount of traffic demand originating at $i$ assigned to gateway $j$ passing through the link $(u,v)$.
\end{itemize}
\vspace{2mm}
\noindent\textbf{Objective function:}
\begin{align}
\textbf{Minimize} \quad \sum_{j\in J}c_j {y_j} + \sum_{i \in I}\sum_{j \in J}{c_{ij} a_{i}x_{ij}} + \sum_{i \in I}\sum_{j \in J}\sum_{uv \in L}c_{uv}f^{ij}_{uv}
\end{align}
\noindent\textbf{Demand Constraints:}
\begin{align}
\sum_{j \in J}{x_{ij}} = 1,\; \quad \forall i \in I
\label{c1b}
\end{align}
\noindent\textbf{Feasibility Constraints:}
\begin{align}
x_{ij}\leq y_j, \; \quad \forall i \in I, j \in J
\label{c2b}
\end{align}
\begin{align}
f_{uv}^{ij} \leq a_i x_{ij}, \; \quad \forall (u,v) \in L, i \in I, j \in J
\label{c3b}
\end{align}
\noindent\textbf{Capacity Constraints:}
\begin{align}
\sum_{i \in I}a_i {x_{ij}} \leq q_j y_j,\; \quad \forall j \in J
\label{c4b}
\end{align}
\begin{align}
\sum_{i \in I} \sum_{j \in J} f_{uv}^{ij} \leq q_{uv},\; \quad \forall (u,v) \in L
\label{c5b}
\end{align}
\noindent\textbf{Flow Constraints: }
\begin{align}
\quad a_i x_{ii} + \sum_{v\in V: (i,v) \in L}\sum_{j \in J }f^{ij}_{iv} = a_i,\; \quad \forall i \in I
\label{flow1b}
\end{align}
\begin{align}
\begin{split}
&\quad \sum_{v \in V: (v,u) \in L}\sum_{j\in J}f^{ij}_{vu} - \sum_{v \in V: (u,v) \in L}\sum_{j\in J}f^{ij}_{uv} = a_i x_{iu}, \\
& \quad \forall i \in I, u \in V, u \neq i
\end{split}
\label{flow2b}
\end{align}
\noindent\textbf{Domain Constraints: }
\begin{align}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1b}
\end{align}
\begin{align}
\quad x_{ij} \in [0, 1],\; \quad \forall i \in I, j \in J
\label{dom2b}
\end{align}
\begin{align}
\quad f_{uv}^{ij} \geq 0,\; \quad \forall uv \in L, i \in I, j \in J
\label{dom3b}
\end{align}
The objective function minimizes the total cost, which comprises three terms: the first represents the cost of installing and opening a gateway at location $j$; the second accounts for the demand allocation cost, where $c_{ij}$ denotes the cost of supplying by gateway $j$ the fraction $x_{ij}$ of the demand $a_i$ originating at $i$; the last corresponds to the transport/connection cost of routing the demand originating at $i$ to its assigned gateway $j$.
Constraint set \eqref{c1b} assures that traffic demands are supported by the selected gateways. Feasibility constraints \eqref{c2b} make sure that demands are only served by open gateways, while \eqref{c3b} ensures that the amount of traffic from the demand originating at $i$, assigned to gateway $j$ and passing through the link $(u,v)$, cannot exceed the requested traffic demand $a_i$. Constraints \eqref{c4b} and \eqref{c5b} state the capacity constraints for the gateways and the links, respectively. Constraints \eqref{flow1b} and \eqref{flow2b} enforce the flow conservation.
The domains of the $y_j$, $x_{ij}$ and $f_{uv}^{ij}$ variables are defined in constraints \eqref{dom1b}, \eqref{dom2b} and \eqref{dom3b}, respectively.
In the aforementioned formulation, we can drop the constraint set \eqref{c4b} if we do not impose any capacity (or other, e.g., maximum CPU load) restrictions on the potential gateways, as well as the second term of the objective function if there is no demand allocation cost.
\subsection{JSGPR MILP Formulation II}
In order to formulate the JSGPR problem as a MILP we consider
\begin{itemize}
\item The set of binary variables $\textbf{y}$, where $y_j$ expresses the placement of a gateway at node $j$.
\item The set of continuous variables $\textbf{f}$, where $f_{uv}^{ij}$ expresses the traffic rate passing through the link $(u,v)$ corresponding to flow $i\rightarrow j$.
\end{itemize}
The MILP formulation is as follows:
\vspace{2mm}
\noindent\textbf{Objective: }
\begin{equation}
\textbf{Minimize} \quad \sum_{j \in J}c_jy_{j} + \phi\sum_{i \in I}\sum_{j \in J}\sum_{uv \in L}c_{uv}f^{ij}_{uv} \;
\label{obj}
\end{equation}
\noindent\textbf{Demand Guarantee Constraints: }
\begin{equation}
\quad\sum_{j \in J}\sum_{u \in V} f^{ij}_{uj} = a_i,\; \quad \forall i \in I
\label{demgar}
\end{equation}
\noindent\textbf{Feasibility Guarantee Constraints: }
\begin{equation}
\quad \sum_{u \in V} f^{ij}_{uj} \leq y_j a_i,\; \quad \forall i \in I, \forall j \in J
\label{leq}
\end{equation}
\noindent\textbf{Delay Guarantee Constraints:}
\begin{equation}
\quad \sum_{j \in J}\sum_{uv \in L}\frac{f^{ij}_{uv}}{a_i}d_{uv}\leq d_{max},\; \quad \forall{i \in I}
\label{delgar}
\end{equation}
\noindent\textbf{Link Capacity Constraints: }
\begin{equation}
\quad \sum_{i \in I}\sum_{j \in J}f^{ij}_{uv} \leq q_{uv},\; \quad \forall uv \in L
\label{linkcap}
\end{equation}
\noindent\textbf{Flow Constraints: }
\begin{equation}
\quad f^{ii}_{ii} + \sum_{v\in V: (i,v) \in L}\sum_{j\neq i \in J }f^{ij}_{iv} = a_i,\; \quad \forall i \in I
\label{flow1}
\end{equation}
\begin{equation}
\begin{aligned}
&\quad \sum_{v \in V: (v,u) \in L}\sum_{j\in J}f^{ij}_{vu} = \sum_{v \in V: (u,v) \in L}\sum_{j\in J}f^{ij}_{uv} \; \\
& \quad \forall i \in I, u \in V, u \neq i
\end{aligned}
\label{flow2}
\end{equation}
\begin{equation}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1}
\end{equation}
\begin{equation}
\quad f_{uv}^{ij} \geq 0,\; \quad \forall uv \in L, i \in I, j \in J
\label{dom2}
\end{equation}
\vspace{2mm}
The objective function \eqref{obj} is the total cost of the process, consisting of the gateway deployment cost and the bandwidth cost.
Constraints \eqref{demgar} ensure that the traffic demand of each node is satisfied using one or multiple gateways. Constraints \eqref{leq} allow the formation of the flow $i\rightarrow j$ only if a gateway is placed at node $j$, while constraints \eqref{delgar} guarantee that the average end-to-end delay of each demand point does not exceed a pre-defined upper bound $d_{max}$.
\section{Problem Formulation}
\label{sec:problem}
\subsection{JSGPR MILP Formulation I}
Inspired by \cite{Bell}, we formulate a baseline for the JSGPR problem as the capacitated facility location-routing problem considering:
\begin{itemize}
\item The set of binary variables $\textbf{y}$, where $y_j$ expresses the placement of a gateway at node $j$.
\item The set of continuous variables $\textbf{x}$, where $x_{ij}$ expresses the fraction of traffic demand originating at $i$ passing through gateway $j$.
\item The set of continuous variables $\textbf{f}$, where $f_{uv}^{ij}$ expresses the amount of traffic demand originating at $i$, assigned to gateway $j$ passing through the link $(u,v)$.
\end{itemize}
\vspace{2mm}
We note that for a gateway $u$, the variable $f_{uu}^{iu}$ represents the amount of traffic which is originated at node $i$ and is forwarded to the satellite through the gateway placed at node $u$.
The resulting MILP formulation is as follows:
\begin{align}
\textbf{Minimize} \quad \sum_{j\in J}c_j {y_j} +
\sum_{i \in I}\sum_{j \in J}\sum_{(u,v) \in E}c_{uv}f^{ij}_{uv}
\end{align}
\noindent\textbf{Demand Constraints:}
\begin{align}
\sum_{j \in J}{x_{ij}} = 1,\; \quad \forall i \in I
\label{c1b}
\end{align}
\noindent\textbf{Feasibility Constraints:}
\begin{align}
x_{ij}\leq y_j, \; \quad \forall i\in I, j \in J
\label{c2b}
\end{align}
\noindent\textbf{Capacity Constraints:}
\begin{align}
\sum_{i \in I}a_i {x_{ij}} \leq q_j y_j,\; \quad \forall j \in J
\label{c4b}
\end{align}
\begin{align}
\sum_{i \in I} \sum_{j \in J} f_{uv}^{ij} \leq q_{uv},\; \quad \forall (u,v) \in E
\label{c5b}
\end{align}
\noindent\textbf{Flow Constraints: }
\begin{align}
\quad a_i x_{ii} + \sum_{v\in V: (i,v) \in E}\sum_{j \in J }f^{ij}_{iv} = a_i,\; \quad \forall i \in I
\label{flow1b}
\end{align}
\begin{equation}
\begin{aligned}
&\quad \sum_{v \in V: (v,u) \in E}\sum_{j \in J}f^{ij}_{vu} - \sum_{v \in V: (u,v) \in E}\sum_{j \in J}f^{ij}_{uv} = a_i x_{iu}, \; \\
& \quad \forall i \in I, u \in V, u \neq i
\end{aligned}
\label{flow2b}
\end{equation}
\noindent\textbf{Domain Constraints: }
\begin{align}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1b}
\end{align}
\begin{align}
\quad x_{ij} \in [0, 1],\; \quad \forall i \in I, j \in J
\label{dom2b}
\end{align}
\begin{align}
\quad f_{uv}^{ij} \geq 0,\; \quad \forall (u,v) \in E, i \in I, j \in J
\label{dom3b}
\end{align}
The objective function minimizes the total cost comprised of two terms; the first one represents the cost of installing and operating a gateway at location $j$, aggregated over the total number of gateways used.
The second term corresponds to the transport/connection cost from the demand originating at $i$ to its assigned gateway $j$, aggregated for all demands.
Constraint set \eqref{c1b} assures that traffic demands are fully served by the selected gateways. Feasibility constraint set \eqref{c2b} makes sure that demands are only served by open gateways, while constraints \eqref{c4b} and \eqref{c5b} state the capacity constraints for the gateways and the links, respectively.
Constraints \eqref{flow1b} and \eqref{flow2b} enforce the flow conservation.
The domains of $y_i$, $x_{ij}$ and $f_{uv}^{ij}$ variables are defined in constraints \eqref{dom1b}, \eqref{dom2b}, \eqref{dom3b}, respectively.
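The flow conservation enforced by \eqref{flow1b} and \eqref{flow2b} amounts to a simple per-node balance, which can be checked directly. The sketch below is a hedged illustration with a hypothetical two-link path, not the paper's solver code.

```python
def conserved(inflow, outflow, delivered, tol=1e-9):
    """Flow conservation at a node for one demand: inbound minus outbound
    equals the traffic absorbed there (nonzero only when the node hosts
    the gateway serving that demand)."""
    return abs(sum(inflow) - sum(outflow) - delivered) < tol

# A demand of 5 units routed i -> u -> j; u is a pure transit node,
# while j hosts the gateway and absorbs the whole flow.
assert conserved(inflow=[5.0], outflow=[5.0], delivered=0.0)   # node u
assert conserved(inflow=[5.0], outflow=[],    delivered=5.0)   # node j
```

At the origin $i$, the balance instead includes the generated demand $a_i$, which is what \eqref{flow1b} expresses.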
In the aforementioned formulation,
we can express the set of $x_{ij}$ variables, utilizing the corresponding last-hop flow variables:
\begin{equation}
x_{ij} = \frac{\sum_{u \in V}{f^{ij}_{uj}}}{a_i},\; \quad \forall i \in I, j \in J
\label{aux}
\end{equation}
We can replace the set of $x_{ij}$ variables as in \eqref{aux}. The new MILP model for JSGPR is presented in the next subsection.
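The substitution \eqref{aux} recovers each assignment fraction from the last-hop flows into the gateway. A small numerical illustration (the instance values are assumptions chosen only to make the arithmetic visible):

```python
def x_from_flows(f_last_hop, a):
    """x_ij = (sum over u of f^{ij}_{uj}) / a_i, per the substitution.
    f_last_hop: dict (i, j) -> list of last-hop flow amounts into j."""
    return {(i, j): sum(fs) / a[i] for (i, j), fs in f_last_hop.items()}

a = {1: 10.0}                                   # demand of node 1
# Demand 1 reaches gateway A over two last-hop links (7 + 1 units)
# and gateway B over one last-hop link (2 units).
f_last_hop = {(1, "A"): [7.0, 1.0], (1, "B"): [2.0]}
x = x_from_flows(f_last_hop, a)
assert abs(x[(1, "A")] - 0.8) < 1e-9 and abs(x[(1, "B")] - 0.2) < 1e-9
assert abs(sum(x.values()) - 1.0) < 1e-9        # demand fully served
```

Since the fractions sum to one exactly when the demand constraint holds, the $x_{ij}$ variables become redundant and can be eliminated.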
\subsection{JSGPR MILP Formulation II}
The new MILP formulation is as follows:
\vspace{2mm}
\begin{equation}
\textbf{Minimize} \quad \sum_{j \in J}c_jy_{j} + \phi\sum_{i \in I}\sum_{j \in J}\sum_{(u,v) \in E}c_{uv}f^{ij}_{uv} \;
\label{obj}
\end{equation}
\noindent\textbf{Demand Constraints: }
\begin{equation}
\quad\sum_{j \in J}\sum_{u \in V} f^{ij}_{uj} = a_i,\; \quad \forall i \in I
\label{demgar}
\end{equation}
\noindent\textbf{Feasibility Constraints: }
\begin{equation}
\quad \sum_{u \in V} f^{ij}_{uj} \leq y_j a_i,\; \quad \forall i \in I, \forall j \in J
\label{leq}
\end{equation}
\noindent\textbf{Capacity Constraints: }
\begin{equation}
\sum_{i\in I}\sum_{u\in V}{f^{ij}_{uj}} \leq q_j y_j,\; \quad \forall j \in J
\label{c4b2}
\end{equation}
\begin{equation}
\quad \sum_{i \in I}\sum_{j \in J}f^{ij}_{uv} \leq q_{uv},\; \quad \forall (u,v) \in E
\label{linkcap}
\end{equation}
\noindent\textbf{Flow Constraints: }
\begin{equation}
\sum_{u \in V: (u,i) \in E}{f^{ii}_{ui}} + \sum_{v\in V: (i,v) \in E}\sum_{j\neq i \in J }f^{ij}_{iv} = a_i,\; \quad \forall i \in I
\label{flow1}
\end{equation}
\begin{equation}
\begin{aligned}
&\quad \sum_{v \in V: (v,u) \in E}\sum_{j \in J}f^{ij}_{vu} - \sum_{v \in V: (u,v) \in E}\sum_{j \in J}f^{ij}_{uv} = \sum_{w \in V}{f^{iu}_{wu}}, \; \\
& \quad \forall i \in I, u \in V, u \neq i
\end{aligned}
\label{flow2}
\end{equation}
\noindent\textbf{Domain Constraints: }
\begin{equation}
\quad y_{j} \in \{0, 1\},\; \quad \forall j \in J
\label{dom1}
\end{equation}
\begin{equation}
\quad f_{uv}^{ij} \geq 0,\; \quad \forall (u,v) \in E, i \in I, j \in J
\label{dom2}
\end{equation}
\vspace{2mm}
Here, $\phi = \frac{1}{\sum_{i\in I}{a_i}}$ is the normalization factor between the two terms of the objective function.
Additionally, in order to meet the average delay requirement for each demand point $i$ we will impose a new constraint exploiting the corresponding flow variables:
\begin{equation}
\quad \sum_{j \in J}\sum_{(u,v) \in E}\frac{f^{ij}_{uv}}{a_i}d_{uv}\leq d_{max} \; \quad \forall{i \in I}
\label{newdelgar}
\end{equation}
where $d_{max}$ is the maximum allowed average delay for each terrestrial node. We note that we have ignored the delay of the terrestrial-satellite link in the above calculation, since it is a constant term, as explained in Section~\ref{sec:desc}.
\subsection{JSGPR-LB MILP Formulation}
In the aforementioned specification, the assigned load to a gateway is solely bounded by the capacity constraints \eqref{c4b2} and \eqref{linkcap}. We define a new single decision variable $l_{max}$ to represent the maximum traffic assigned to a gateway, common for all the selected gateways. We add $l_{max}$ as an additional term to the objective function. The new objective function is described in \eqref{objlm}:
\begin{align}
\textbf{Minimize} \quad(\sum_{j \in J}c_jy_{j}+\phi\sum_{i \in I}\sum_{j \in J}\sum_{(u,v) \in E}c_{uv}f^{ij}_{uv}) + \alpha l_{max}\;
\label{objlm}
\end{align}
where $\alpha$ is a constant factor determining the balance between the two terms of the objective function.
Also, we change constraints \eqref{c4b2} as follows:
\begin{equation}
\quad \sum_{i \in I}\sum_{u \in V} f^{ij}_{uj} \leq l_{max},\; \quad \forall j \in J
\end{equation}
where $l_{max} \leq q_j, \; \forall j \in J$. We call this last model JSGPR-LB. The resulting optimization problem balances the load of the gateways jointly with the cost of the gateway deployment \cite{manet}, \cite{lb}.
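This is the standard epigraph linearization of a min-max objective: minimizing the $\alpha\, l_{max}$ term subject to per-gateway load bounds drives $l_{max}$ down to exactly the maximum gateway load. A minimal numeric check, with hypothetical loads and the $240$ Mbps capacity used later in the evaluation:

```python
def tightest_l_max(loads, q):
    """Smallest l_max satisfying load_j <= l_max and l_max <= q_j for all
    selected gateways j; minimization pins l_max to the maximum load."""
    l = max(loads.values())
    if any(l > q[j] for j in loads):
        return None                 # no feasible l_max under the capacities
    return l

loads = {"A": 120.0, "B": 200.0, "C": 150.0}   # assumed per-gateway traffic
q = {"A": 240.0, "B": 240.0, "C": 240.0}
assert tightest_l_max(loads, q) == 200.0
```

Because $l_{max}$ enters the objective linearly, the model remains a MILP of the same class as before.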
\subsection{LP Relaxation and Approximation Algorithm}
Since the MILP model is known to be NP-hard, the problem is intractable for larger scale networks \cite{Bell}. For the aforementioned JSGPR and JSGPR-LB MILP formulations, the optimal fractional solution is computed via linear programming relaxation. The relaxed problem can be solved by any suitable linear programming method in polynomial time. A rounding technique similar to \cite{torkzaban2019trust} is then applied to obtain the integer solution of the MILP problem, and the resulting multi-commodity flow allocation problem is solved to identify the routing paths.
\section{Introduction}
Over the past years, the integration of satellite communications with current and emerging terrestrial networks (e.g., 5G mobile networks) has been gaining attention, given the growing data traffic volume, which is predicted to increase by over 10,000 times in the next 20 years \cite{8438267}. The trend is also supported by national directives and initiatives to support broadband connectivity in rural and remote areas, as it is considered a crucial factor for economic growth. Due to their large footprint, satellites can complement and extend terrestrial networks, both in densely populated areas and in rural zones, and provide reliable mission critical services. Standardization bodies such as 3GPP, ETSI \cite{doi:10.1002/sat.1292} and ITU \cite{8438267} also recognize and promote integrated and/or hybrid satellite and terrestrial networks.
The integrated satellite-terrestrial networks (ISTNs) can be a cornerstone to the realization of a heterogeneous global system to enhance the end-user experience \cite{artiga2016terrestrial}. An ISTN, as depicted in Fig.~\ref{fig:map}, is composed of satellites organized in a constellation that support routing, adaptive access control, and spot-beam management \cite{vasavada2016architectures}, whereas the terrestrial optical network consists of ground stations (gateways), switches, and servers.
Delay-sensitive data services are more suitable to transport in the low earth orbit (LEO) satellite networks, which provide inherent advantages in power consumption, pole coverage, latency, and lower cost compared with the geostationary earth orbit (GEO) satellite networks and the medium earth orbit (MEO) satellite networks \cite{guo19}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{results/ICC2020/SGP2.jpg}
\caption{Example of an integrated satellite-terrestrial network.}
\label{fig:map}
\end{center}
\end{figure}
However, there are multiple challenges associated with ISTNs. Satellite networks and terrestrial networks are widely heterogeneous, supported by distinct control and management frameworks, thus their integration to date has been based on proprietary and custom solutions. Additionally, in the integrated network, as there are multiple available paths along which data traffic can be routed, optimized path-selection is important to satisfy the Quality of Service (QoS) requirements of the traffic flows and improve the utilization of network resources \cite{jia2016broadband}. Existing satellite networks employ a decentralized management architecture, scheduled link allocation and static routing strategies, which make it difficult to support flexible traffic scheduling adapting to the changes in traffic demands \cite{service}. Complex handover mechanisms should be in place at the gateway nodes, while their high-power consumption requirements need to be taken into consideration as well.
Nowadays the inclusion of plug-and-play SatCom solutions are investigated, primarily in the context of 5G. Exploiting software-defined networking (SDN)\cite{zhang2017energy} and network function virtualization (NFV), ISTNs can be programmed and network behavior can be centrally controlled using open APIs, thus supporting a more agile integration of terrestrial (wired and wireless) and satellite domains interworking with each other to deliver end-to-end services. Based on the extra degrees of freedom in ISTN deployment, brought forth by the softwarization and programmability of the network, the satellite gateway placement problem is of paramount importance for the space-ground integrated network. The problem entails the selection of an optimal subset of the terrestrial nodes for hosting the satellite gateways while satisfying a group of design requirements. Various strategic objectives can be pursued when deciding the optimal placement of the satellite gateways including but not limited to cost reduction, latency minimization, reliability assurance, etc. \cite{GW}, \cite{GWC}, \cite{taghlid}.
In this paper, we propose a method for the cost-optimal deployment of satellite gateways on the terrestrial nodes. Particularly, we aim to minimize the overall cost of gateway deployment and traffic routing, while satisfying latency requirements. To this end, we formulate the problem as a mixed-integer linear program (MILP), with latency bounds as hard constraints. We derive an approximation method from our MILP model to significantly reduce the time-complexity of the solution, at the expense of sub-optimal gateway placements, and investigate the corresponding trade-off. Furthermore, in order to reduce latency and processing power at the gateways, we impose a (varying) upper bound on the load that can be supported by each gateway.
It is important to note that traffic routing and facility placement are usually solved as separate problems, but assigning a demand point to a facility without considering the other demand points might not be realistic [9]. Given the significant interrelation of the two problems, we develop and solve a single aggregated model instead of solving the two problems sequentially.
The remainder of the paper is organized as follows. Section~\ref{sec:desc} describes the problem and the network model. In Section~\ref{sec:problem} we introduce the MILP formulation and its LP-based approximation, followed by a variant of our model with load minimization. We present our evaluation results in Section~\ref{sec:evaluation} and provide an overview of the related work in Section~\ref{sec:relatedwork}. Finally, in Section~\ref{sec:conclusions}, we highlight our conclusions and discuss directions for future work.
\section{Related Work}
\label{sec:relatedwork}
Although the gateway placement problem over a general network is very well studied, the satellite gateway placement problem on an ISTN is fairly new.
For sensor networks \cite{bzoor}, vehicular networks \cite{mezza}, wireless mesh networks \cite{gorbe}, and MANETs \cite{manet}, different approaches have been proposed with various optimization objectives, such as load balancing \cite{lb}, latency minimization \cite{delay}, and reliability maximization \cite{rel}.
The recent works on ISTNs mostly focus on optimizing the average network latency \cite{GW} or reliability \cite{GWC}, \cite{capac}, \cite{taghlid}, \cite{same}. In \cite{GW}, the authors propose a particle swarm-based optimization approximation (PSOA) approach for minimizing the average network latency and benchmark it against an optimal brute-force algorithm (OBFA) to show the improvement in time complexity. In \cite{GWC}, a hybrid simulated annealing and clustering algorithm (SACA) is used to provide a near-optimal solution for the joint SDN controller and satellite gateway placement with the objective of network reliability maximization. For the same purpose as \cite{GWC}, the authors in \cite{taghlid} provide a simulated annealing partition-based k-means (SAPKM) approach and compare the advantages and weaknesses of their approach with that of \cite{GWC}. In \cite{capac}, the authors consider the satellite link capacity as a limiting constraint and solve the placement problem to maximize reliability.
There is another line of research which aims to place the satellite gateways on the aerial platforms. More precisely in \cite{cross}, a new aerial layer is added in between the terrestrial and the satellite layer which relays the traffic between the satellite and the terrestrial nodes. The authors have used a greedy optimization approach to make the gateway selection.
Although the above approaches provide great insight into the satellite gateway placement problem, they assume that the number of gateways is known prior to design. Moreover, none of the above studies investigates the traffic routes from the terrestrial switches to the satellite, whereas we propose the joint optimization of satellite gateway placement together with the corresponding traffic routing for ISTNs.
\section{Network Model and Problem Description}
\label{sec:desc}
The ISTN under study is depicted in Fig. \ref{fig:map}. We model the terrestrial network as an undirected graph $G = (V, E)$, where $(u,v) \in E $ if there is a link between the nodes $u,v \in V$. Let $J\subseteq V $ be the set of all potential nodes for gateway placement and $I\subseteq V$ be the set of all demand points. We note that the sets $I$ and $J$ are not necessarily disjoint. A typical substrate node $v \in V$ may satisfy one or more of the following statements: (i) node $v$ is a gateway to the satellite, (ii) node $v$ is an initial demand point of the terrestrial network, (iii) node $v$ relays the traffic of other nodes to one or more of the gateways. The ideal solution will introduce a set of terrestrial nodes for gateway placement which results in the cheapest deployment \& routing, together with the corresponding routes from all the demand points to the satellite, while satisfying the design constraints.
Regarding network delay, we consider only the propagation delay.
The propagation delay of a path in the network is the sum of the propagation delays over its constituent links. A GEO satellite is considered in our system model, as in \cite{GW}. Let $d_{uv}$ represent the contribution of the terrestrial link $(u,v)$ to the propagation delay of a path which contains that link. The propagation delay from a gateway to the satellite is constant \cite{GWC}. Moreover, we consider multi-path routing for the traffic demands. Therefore, we define a flow $i\rightarrow j$ as the fraction of the traffic originated at node $i \in I$ that is routed to the satellite through gateway $j \in J$.
Once all the flows corresponding to node $i$ are determined, all the routing paths from node $i$ to the satellite are defined. Let $c_j$ denote the cost associated with deploying a satellite gateway at node $j$ and $c_{uv}$ the bandwidth unit cost for each link $(u,v)$. We also define $a_i$ as the traffic demand rate of node $i$ which is to be served by the satellite. The capacity of each link $(u,v)$ is denoted by $q_{uv}$, while the capacity of the gateway-satellite link is $q_j$ for gateway $j$. Table \ref{paramv} summarizes all the notation used for the parameters and variables throughout the paper.
\begin{table*}[ht]
\centering
\begin{center}
\scalebox{0.9}{
\begin{tabular}{|c|c|}
\hline
Variables & Description\\
\hline
$y_j$ & The binary decision variable of gateway placement at node $j$\\
$x_{ij}$ & The fraction of traffic demand originating at $i$ passing through gateway $j$\\
$f_{uv}^{ij}$ & The amount of traffic originating at $i$ assigned to gateway $j$ passing through the link $(u,v)$\\
\hline
Parameters & Description\\
\hline
$G= (V,E)$ & Terrestrial network graph\\
$J$ & The set of potential nodes for gateway placement \\
$I$ & The set of demand points\\
$c_j$ & The cost associated with deploying a satellite gateway at node $j$\\
$c_{uv}$ & Bandwidth unit cost for link $(u,v)$\\
$a_i$ & Traffic demand rate of node $i$\\
$q_{uv}$ & Capacity of link $(u,v)$\\
$q_j$ & Capacity of gateway-satellite link for gateway $j$\\
\hline
\end{tabular}}
\caption{System model parameters and variables}
\label{paramv}
\end{center}
\end{table*}
Since the propagation latency is usually the dominant factor in determining the network delay \cite{taghlid}, we first develop a joint satellite gateway placement and routing (JSGPR) MILP formulation to minimize the cumulative gateway placement cost with hard constraints on the average
propagation delay for the traffic of each demand point.
Following that, we will derive a variant of JSGPR with load balancing (JSGPR-LB) which aims at mutually optimizing the overall cost and the load assigned to all the deployed gateways. Finally, we use an LP-based approximation approach to reduce the time-complexity of the proposed scheme at the cost of a sub-optimal gateway placement.
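As a rough illustration of the joint placement-and-routing trade-off formalized above, the following Python sketch solves a toy instance by brute force. It is our own illustration, not the JSGPR MILP: it enumerates gateway subsets and routes each demand along its delay-shortest single path to an admissible gateway, whereas the MILP allows multi-path, cost-optimal routing. The graph, costs, and all names are assumed toy data.

```python
import heapq
from itertools import combinations

def delay_shortest_paths(adj, src):
    """Dijkstra on propagation delay from src; also accumulates the
    bandwidth cost along the delay-shortest path.
    adj[u] = [(v, d_uv, c_uv), ...] with link delay d_uv and unit cost c_uv."""
    delay = {src: 0.0}
    cost = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > delay[u]:
            continue  # stale queue entry
        for v, d_uv, c_uv in adj[u]:
            if d + d_uv < delay.get(v, float("inf")):
                delay[v] = d + d_uv
                cost[v] = cost[u] + c_uv
                heapq.heappush(pq, (delay[v], v))
    return delay, cost

def jsgpr_brute_force(adj, I, J, a, c_gw, sat_delay, delay_bound):
    """Cheapest (placement + routing) gateway subset under a per-demand
    delay bound; exponential in |J|, for illustration only."""
    paths = {i: delay_shortest_paths(adj, i) for i in I}
    best_total, best_set = float("inf"), None
    for r in range(1, len(J) + 1):
        for S in combinations(J, r):
            total = sum(c_gw[j] for j in S)  # gateway deployment cost
            for i in I:
                delay, cost = paths[i]
                # gateways reachable within the latency bound, including
                # the constant gateway-satellite hop
                opts = [a[i] * cost[j] for j in S
                        if j in delay and delay[j] + sat_delay <= delay_bound]
                if not opts:
                    break  # demand i cannot be served by this subset
                total += min(opts)
            else:
                if total < best_total:
                    best_total, best_set = total, set(S)
    return best_total, best_set
```

Even on small instances the subset enumeration grows exponentially in $|J|$, which is one reason the MILP and its LP-based approximation are preferable in practice.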
\section{Definitions and notation}
\label{sec:def-not}We first provide some definitions and notation which are useful for our analysis later on.
The gate strings in the pearl-necklace encoder and the gates in the convolutional encoder are numbered from left to right. We denote the $i^{\text{th}}$ gate string in the pearl-necklace encoder by $\overline{U}_i$ and the $i^{\text{th}}$ gate in the convolutional encoder by $U_i$.
Let $\overline{U}$, without any index
specified, denote a particular infinitely repeated sequence of $U$ gates, where the sequence contains the
same $U$ gate for every frame of qubits.
Let $U$ be either a CNOT or a CPHASE gate. The notation $\overline{U}\left( a,bD^{l}\right)$ refers to a string of gates in a pearl-necklace encoder and
denotes an infinitely repeated sequence of $U$ gates from qubit $a$ to qubit
$b$ in every frame where qubit $b$ is in a frame delayed by $l$.
\footnote{Instead of the previously used notation $\overline{U}( a,b)(D^{l}),$
we prefer to use $\overline{U}\left( a,bD^{l}\right)$
as it seems to better represent the concept.}
Let $U$ be either a phase or a Hadamard gate. The notation $\overline{U}\left(b\right) $ refers to a string of gates in a pearl-necklace encoder and
denotes an infinitely repeated sequence of $U$ gates acting on qubit $b$ in every frame.
By convention, we refer to this qubit as the target of $\overline{U}\left(b\right) $ throughout this paper.
If $\overline{U_i}$ is $\overline{\text{CNOT}}$ or $\overline{\text{CPHASE}}$, we use $a_i$, $b_i$, and $l_i$ to denote its source index, target index, and degree, respectively. If $\overline{U_i}$ is $\overline{H}$ or $\overline{P}$, we use $b_i$ to denote its target index.
For example,
the strings of gates in Figure \ref{pearl2convosimple}(a)
correspond
to: \begin{align}
\overline{H}\left( 1\right) \overline{\text{CPHASE}}%
\left( 1,2D\right) \overline{\text{CNOT}}\left( 1,3\right),
\end{align}
$b_1=1$, $a_2=1,$ $b_2=2,$ $l_2=1,$ $a_3=1,$ $b_3=3,$ and $l_3=0.$
Suppose the number of gate strings in the pearl-necklace encoder is $N.$ The members of the sets $I_{\text{CNOT}}^{+}$, $I_{\text{CNOT}}^{-},$ $I_{\text{CPHASE}}^{+},$ and $I_{\text{CPHASE}}^{-}$ are the indices of the gate strings in the encoder which are $\overline{\text{CNOT}}$ with non-negative degree, $\overline{\text{CNOT}}$ with negative degree, $\overline{\text{CPHASE}}$ with non-negative degree, and $\overline{\text{CPHASE}}$ with negative degree, respectively:
\[I_{\text{CNOT}}^{+}=\{i|\,\overline{U}_i\,\, \text{is}\,\,\overline{\text{CNOT}},l_i\geq 0, i\in \{1,2,\cdots,N\}\},\]
\[I_{\text{CNOT}}^{-}=\{i|\,\overline{U}_i\,\, \text{is}\,\,\overline{\text{CNOT}},l_i< 0, i\in \{1,2,\cdots,N\}\},\]
\[I_{\text{CPHASE}}^{+}=\{i|\,\overline{U}_i \,\,\text{is}\,\,\overline{\text{CPHASE}},l_i\geq 0, i\in \{1,2,\cdots,N\}\},\]
\[I_{\text{CPHASE}}^{-}=\{i|\,\overline{U}_i \,\,\text{is}\,\,\overline{\text{CPHASE}},l_i< 0, i\in \{1,2,\cdots,N\}\}.\]
The members of the sets $I_{\text{H}}$ and $I_{\text{P}}$ are the indices of gate strings of the encoder which are $\overline{H}$ and $\overline{P}$ respectively:
\[I_{\text{H}}=\{i|\,\overline{U}_i \,\,\text{is}\,\,\overline{H}, i\in \{1,2,\cdots,N\}\},\]
\[I_{\text{P}}=\{i|\,\overline{U}_i \,\,\text{is}\,\,\overline{P}, i\in \{1,2,\cdots,N\}\}.\]
Our convention for numbering the
frames upon which the unitary of a convolutional encoder acts is from
\textquotedblleft bottom\textquotedblright\ to \textquotedblleft
top.\textquotedblright\ Figure \ref{graph-convo1}(b) illustrates this convention for a convolutional encoder. If ${U}_i$ is a CNOT or CPHASE gate, let $\sigma_{i}$ and $\tau_{i}$ denote the frame indices of the
respective source and target qubits of the ${U}_i$ gate in a
convolutional encoder. If $U_i$ is a Hadamard or phase gate, let $\tau_{i}$ denote the frame index of the target qubit of the ${U}_i$ gate in a convolutional encoder. For example, consider the convolutional encoder in Figure \ref{graph-convo1}(b). The convolutional encoder in this figure consists of six gates: $\tau_{1}=0,$ $\tau_{2}=0,$ $\sigma_{3}=0,$ $\tau_{3}=1,$ $ \sigma_{4}=2,$ $\tau_{4}=0,$ $ \sigma_{5}=3,$ $\tau_{5}=2,$ $\sigma_{6}=4, $ and $\tau_{6}=3.$
When referring to a convolutional encoder, we use the following notation:\\
The notation CNOT$(a,b)\left( \sigma,\tau \right)$ denotes a CNOT gate from qubit $a$ in
frame $\sigma$ to qubit $b$ in frame $\tau$.\\
The notation CPHASE$(a,b)\left( \sigma,\tau \right) $ denotes a CPHASE gate from qubit $a$ in
frame $\sigma$ to qubit $b$ in frame $\tau$.\\
The notation $H(b)(\tau)$ denotes a Hadamard gate which acts on qubit $b$ in frame $\tau.$\\
The notation $P(b)(\tau)$ denotes a Phase gate which acts on qubit $b$ in frame $\tau.$
For example, the gates in Figure \ref{graph-convo1}(b) correspond to:\begin{align}
&H\left( 1\right)(0) P\left( 1\right)(0) \text{CPHASE}%
\left( 1,2\right)\left(0,1 \right) \text{CPHASE}%
\left( 2,3\right) \left( 2,0\right)
\text{ CNOT}\left(
3,2\right) \left( 3,2 \right) \text{CNOT}\left( 2,3\right) \left( 4,3 \right)
\nonumber.
\end{align}
\section{Memory requirements for an arbitrary pearl-necklace encoder}
\label{memory-general}As discussed before, to find a practical realization of a pearl-necklace encoder, we must rearrange its gates into a convolutional encoder.
To do this rearrangement, we must first find a set of gates, consisting of a single gate for each gate string in the pearl-necklace encoder, such that all the gates that remain after the set commute with it. Then we can shift all these gates to the right and infinitely repeat this operation on the remaining gates to obtain a convolutional encoder. When all gates in the pearl-necklace encoder commute with each other, there is no constraint on the frame indices of the target (source) qubits of the gates in the convolutional encoder \cite{ourpaper}. (Figure~\ref{pearl2convosimple} shows an example
of the rearrangement of commuting gate strings into a convolutional encoder.) On the other hand, when the gate strings do not commute, the requirement that the remaining gates commute with the chosen set results in constraints on the frame indices of the target (source) qubits of the gates in the convolutional encoder.
In the following sections, after defining different types of non-commutativity and their imposed constraints, the algorithm for finding the minimal-memory convolutional encoder for an arbitrary pearl-necklace encoder is presented.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
natheight=9.879900in,
natwidth=19.139999in,
height=2.35in,
width=6.24in
]
{pearl2convosimple.png}
\end{center}
\caption{Simple (since all gate strings commute with each other) example of the rearrangement of a pearl-necklace encoder for a non-CSS code
into a convolutional encoder. (a) The pearl-necklace encoder consists of
the gate strings
$\overline{H}\left( 1\right) \overline{\text{CPHASE}}
\left( 1,2D\right) \overline{\text{CNOT}}\left( 1,3\right) $.
(b) The
rearrangement of gates after the first three by shifting them
to the right. (c) The repeated application of the procedure in (b) realizes a
convolutional encoder from a pearl-necklace encoder.}
\label{pearl2convosimple}
\end{figure}
\subsection{Different types of non-commutativity and their imposed constraints}
Three types of non-commutativity may arise for any two gate strings of the shift-invariant Clifford group: source-target, target-source, and target-target non-commutativity. Each imposes a different constraint on the frame indices of the gates in the convolutional encoder. These types of non-commutativity and their constraints are explained in the following sections.
\subsubsection{Source-target non-commutativity}
The gate strings in~(\ref{cn-cn-s-t1}-\ref{cp-h-s-t1}) below do not commute with each other. In all of them, the index of the source qubit in the first gate string is the same as the index of the target qubit in the second gate string; we therefore call this type of non-commutativity \emph{source-target non-commutativity}.
\begin{equation}
\overline{\text{CNOT}}(a,bD^{l}) \overline{\text{CNOT}}(a^{\prime},b^{\prime}D^{l^{\prime}}),\,\,\text{where}\,\, a=b^{\prime},\label{cn-cn-s-t1}
\end{equation}
\begin{equation}
\overline{\text{CPHASE}}(a,bD^{l}) \overline{\text{CNOT}}(a^{\prime},b^{\prime}D^{l^{\prime}}),\,\,\text{where}\,\,a=b^{\prime}, \label{cp-cn-s-t1}
\end{equation}
\begin{equation}
\overline{\text{CNOT}}(a,bD^{l})\overline{H}(b^{\prime}),\,\,\text{where}\,\, a=b^{\prime},\label{cn-h-s-t1}
\end{equation}
\begin{equation}
\overline{\text{CPHASE}}(a,bD^{l})\overline{H}(b^{\prime}),\,\,\text{where}\,\, a=b^{\prime}.\label{cp-h-s-t1}
\end{equation}
With an analysis similar to the analysis in Section 3.1 of~\cite{ourpaper}, it can be proved that the following inequality applies to any correct choice of a convolutional encoder that implements either of the transformations in~(\ref{cn-cn-s-t1}-\ref{cp-h-s-t1}):
\begin{equation}
\sigma\leq \tau^{\prime}, \label{s-t}
\end{equation}
where $\sigma$ and $\tau^{\prime}$ denote the frame index of the source qubit of the first gate and the frame index of the target qubit of the second gate in a convolutional encoder respectively.
We call the inequality in~(\ref{s-t}), \emph{source-target constraint.}
As an example, the gate strings of the pearl-necklace encoder $\overline{\text{CPHASE}}(2,3D)\overline{\text{CNOT}}(1,2D)$ (Figure \ref{constraintex}(a))
have source-target non-commutativity. A correct choice for the convolutional encoder, depicted after the first arrow in Figure \ref{constraintex}, is:
\begin{equation}
\text{CPHASE}(2,3)(1,0)\text{CNOT}(1,2)(2,1).
\end{equation}
In this case $\sigma=1\leq \tau^{\prime}=1.$ Since the source-target constraint is satisfied, the remaining gates after the chosen set in Figure~\ref{constraintex}(b) can be shifted to the right. Repeated application of the procedure in (b) realizes a convolutional encoder from
a pearl-necklace encoder (Figure~\ref{constraintex}(c)).
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
natheight=9.879900in,
natwidth=19.139999in,
height=2.35in,
width=5.03in
]
{constraintex1.png}
\end{center}
\caption{Finding a correct choice for two non-commuting gate strings. (a) The pearl-necklace encoder consists of
the gate strings
$\overline{\text{CPHASE}}(2,3D)\overline{\text{CNOT}}(1,2D)$, which have source-target non-commutativity.
(b) The
rearrangement of the gates after the first two by shifting them
to the right. (c) The repeated application of the procedure in (b) realizes a
convolutional encoder from a pearl-necklace encoder.}
\label{constraintex}
\end{figure}
The following Boolean function is used to determine whether this type of non-commutativity exists for two gate strings:
\[\text{Source-Target}\left(\overline{U}_i,\overline{U}_j \right).\]
This function takes two gate strings $\overline{U}_i$ and $\overline{U}_j$ as
input. It returns TRUE if $\overline{U}_i$ and
$\overline{U}_j$ have source-target
non-commutativity and returns FALSE otherwise.
\subsubsection{Target-source non-commutativity}
It is obvious that the gate strings in~(\ref{cn-cn-t-s1}-\ref{h-cp-t-s1}) do not commute. In all of them, the index of the target qubit in the first gate string
is the same as the index of the source qubit in the second gate string. We therefore call this type of non-commutativity \emph{target-source non-commutativity}.
\begin{equation}
\overline{\text{CNOT}}(a,bD^{l})\overline{\text{CNOT}}(a^{\prime},b^{\prime}D^{l^{\prime}}),\,\,\text{where}\,\, b=a^{\prime}, \label{cn-cn-t-s1}
\end{equation}
\begin{equation}
\overline{\text{CNOT}}(a,bD^{l}) \overline{\text{CPHASE}}(a^{\prime},b^{\prime}D^{l^{\prime}}),\,\, \text{where}\,\, b=a^{\prime},\label{cn-cp-t-s1}
\end{equation}
\begin{equation}
\overline{H}(b)\overline{\text{CNOT}}(a^{\prime},b^{\prime}D^{l^{\prime}}),\,\,\text{where}\,\, b=a^{\prime}, \label{h-cn-t-s1}
\end{equation}
\begin{equation}
\overline{H}(b)\overline{\text{CPHASE}}(a^{\prime},b^{\prime}D^{l^{\prime}}),\,\,\text{where}\,\, b=a^{\prime}.\label{h-cp-t-s1}
\end{equation}
With an analysis similar to the analysis in Section 3.1 of~\cite{ourpaper}, it can be proved that the following inequality applies to any correct choice of a convolutional encoder that implements either of the transformations in~(\ref{cn-cn-t-s1}-\ref{h-cp-t-s1}):
\begin{equation}
\tau\leq \sigma^{\prime},\label{t-s}
\end{equation}
where $\tau$ and $\sigma^{\prime}$ denote the frame index of the target qubit of the first gate and the frame index of the source qubit of the second gate in a convolutional encoder respectively. We call the inequality in~(\ref{t-s}), \emph{target-source constraint.}
The following Boolean function is used to determine whether target-source non-commutativity exists for two gate strings:
\[\text{Target-Source}\left(\overline{U}_i,\overline{U}_j \right).\]
This function takes two gate strings $\overline{U}_i$ and $\overline{U}_j$ as
input. It returns TRUE if $\overline{U}_i$ and
$\overline{U}_j$ have target-source
non-commutativity and returns FALSE otherwise.
\subsubsection{Target-target non-commutativity}
It is obvious that the gate strings in~(\ref{cp-cn-t-t1}-\ref{h-p-t-t1}) do not commute. In all of them, the index of the target qubit in the first gate string is the same as the index of the target qubit in the second gate string. We therefore call this type of non-commutativity \emph{target-target non-commutativity}.
\begin{equation}
\overline{\text{CPHASE}}(a,bD^{l})\ \overline{\text{CNOT}}(a^{\prime},b^{\prime}D^{l^{\prime}}),\,\,\text{where}\,\,b=b^{\prime}, \label{cp-cn-t-t1}
\end{equation}
\begin{equation}
\overline{\text{CNOT}}(a,bD^{l}) \overline{\text{CPHASE}}(a^{\prime},b^{\prime}D^{l^{\prime}}),\,\, \text{where}\,\, b=b^{\prime},\label{cn-cp-t-t1}
\end{equation}
\begin{equation}
\overline{\text{CNOT}}(a,bD^{l})\overline{H}(b^{\prime}),\,\,\text{where}\,\, b=b^{\prime}, \label{cn-h-t-t1}
\end{equation}
\begin{equation}
\overline{\text{CPHASE}}(a,bD^{l})\overline{H}(b^{\prime}),\,\,\text{where}\,\, b=b^{\prime}, \label{cp-h-t-t1}
\end{equation}
\begin{equation}
\overline{H}(b)\overline{\text{CNOT}}(a^{\prime},b^{\prime}D^{l^{\prime}}),\,\,\text{where}\,\, b=b^{\prime},\label{h-cn-t-t1}
\end{equation}
\begin{equation}
\overline{H}(b)\overline{\text{CPHASE}}(a^{\prime},b^{\prime}D^{l^{\prime}}),\,\,\text{where}\,\, b=b^{\prime},\label{h-cp-t-t1}
\end{equation}
\begin{equation}
\overline{\text{CNOT}}(a,bD^{l})\overline{P}(b^{\prime}),\,\,\text{where}\,\, b=b^{\prime}, \label{cn-p-t-t1}
\end{equation}
\begin{equation}
\overline{P}(b)\overline{\text{CNOT}}(a^{\prime},b^{\prime}D^{l^{\prime}}),\,\,\text{where}\,\, b=b^{\prime},\label{p-cn-t-t1}
\end{equation}
\begin{equation}
\overline{P}(b)\overline{H}(b^{\prime}),\,\, \text{where}\,\, b=b^{\prime},\label{p-h-t-t1}
\end{equation}
\begin{equation}
\overline{H}(b)\overline{P}(b^{\prime}),\,\,\text{where} \,\, b=b^{\prime}. \label{h-p-t-t1}
\end{equation}
With an analysis similar to the analysis in Section 3.1 of~\cite{ourpaper}, it can be proved that the following inequality applies to any correct choice of a convolutional encoder that implements either of the transformations in~(\ref{cp-cn-t-t1}-\ref{h-p-t-t1}):
\begin{equation}
\tau\leq \tau^{\prime},\label{t-t}
\end{equation}
where $\tau$ and $\tau^{\prime}$ denote the frame index of the target qubit of the first gate and the frame index of the target qubit of the second gate in a convolutional encoder respectively. We call the inequality in (\ref{t-t}), \emph{target-target constraint.}
The following Boolean function is used to determine whether target-target non-commutativity exists for two gate strings:
\[\text{Target-Target}\left(\overline{U}_i,\overline{U}_j \right).\]
This function takes two gate strings $\overline{U}_i$ and $\overline{U}_j$ as
input. It returns TRUE if $\overline{U}_i$ and
$\overline{U}_j$ have target-target
non-commutativity and returns FALSE otherwise.
Consider the $j^{\text{th}}$ gate string $\overline{U}_j$ in the encoder. It is important
to consider the gate strings preceding it that do not commute with it, and to categorize them based on the type of non-commutativity. We therefore define the following sets:
\[
(\mathcal{S-T})_{j} =\{i\mid
\text{Source-Target}(\overline{U}_i,\overline{U}_j)=\text{TRUE},i\in\{1,2,\cdots,j-1\}\},
\]
\[
(\mathcal{T-S})_{j} =\{i\mid
\text{Target-Source}(\overline{U}_i,\overline{U}_j)=\text{TRUE},i\in\{1,2,\cdots,j-1\}\},
\]
\[
(\mathcal{T-T})_{j} =\{i\mid
\text{Target-Target}(\overline{U}_i,\overline{U}_j)=\text{TRUE},i\in\{1,2,\cdots,j-1\}\}.
\]
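As a concrete restatement of the three predicates and the sets $(\mathcal{S-T})_{j}$, $(\mathcal{T-S})_{j}$, and $(\mathcal{T-T})_{j}$, the following Python sketch encodes gate strings as tuples (our own assumed convention) and implements the non-commutativity checks of relations (\ref{cn-cn-s-t1})-(\ref{h-p-t-t1}):

```python
# Gate strings encoded as tuples (an assumed convention):
#   ("CNOT", a, b, l), ("CPHASE", a, b, l), ("H", b), ("P", b)

def _source(g):
    return g[1] if g[0] in ("CNOT", "CPHASE") else None

def _target(g):
    return g[2] if g[0] in ("CNOT", "CPHASE") else g[1]

def source_target(u, v):
    # source-target relations: u has a source qubit, v is CNOT or H,
    # and u's source index equals v's target index
    return (u[0] in ("CNOT", "CPHASE") and v[0] in ("CNOT", "H")
            and _source(u) == _target(v))

def target_source(u, v):
    # target-source relations: u is CNOT or H, v has a source qubit,
    # and u's target index equals v's source index
    return (u[0] in ("CNOT", "H") and v[0] in ("CNOT", "CPHASE")
            and _target(u) == _source(v))

# unordered type pairs that fail to commute when sharing a target qubit
_TT_PAIRS = {frozenset(p) for p in
             [("CNOT", "CPHASE"), ("CNOT", "H"), ("CPHASE", "H"),
              ("CNOT", "P"), ("H", "P")]}

def target_target(u, v):
    return frozenset((u[0], v[0])) in _TT_PAIRS and _target(u) == _target(v)

def noncommutativity_sets(gates, j):
    """(S-T)_j, (T-S)_j, (T-T)_j with 1-based indices, as in the text."""
    prev = list(enumerate(gates[:j - 1], start=1))
    g = gates[j - 1]
    return ({i for i, h in prev if source_target(h, g)},
            {i for i, h in prev if target_source(h, g)},
            {i for i, h in prev if target_target(h, g)})
```

For instance, for the encoder $\overline{\text{CPHASE}}(2,3D)\overline{\text{CNOT}}(1,2D)$ of Figure \ref{constraintex}(a), the second gate string has $(\mathcal{S-T})_{2}=\{1\}$, while $(\mathcal{T-S})_{2}$ and $(\mathcal{T-T})_{2}$ are empty.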
\subsection{The proposed algorithm for finding minimal memory requirements for an arbitrary pearl-necklace encoder}
In this section we find the minimal-memory realization for an arbitrary pearl-necklace encoder, which may include all gate strings in the shift-invariant Clifford group: Hadamard gates, phase gates, controlled-phase gate strings, and finite-depth and infinite-depth controlled-NOT gate strings.
To achieve this goal, we consider any non-commutativity that may exist between a particular gate string and its preceding gate strings.
Suppose that
a pearl-necklace encoder features the following succession of $N$ gate strings:
\begin{equation}
\overline{U_1},\overline{U_2},\cdots,\overline{U_N}. \label{encoder}
\end{equation}
If the first gate string is $\overline{\text{CNOT}}(a_1,b_1D^{l_1}),l_1\geq 0$, the first gate in the convolutional encoder is
\begin{equation}
\text{CNOT}(a_1,b_1)(\sigma_1=l_1,\tau_1=0).\label{cn+}
\end{equation}
If the first gate string is $\overline{\text{CNOT}}(a_1,b_1D^{l_1}),l_1< 0$ the first gate in the convolutional encoder is
\begin{equation}
\text{CNOT}(a_1,b_1)(\sigma_1=0,\tau_1=|l_1|).\label{cn-}
\end{equation}
If the first gate string is $\overline{\text{CPHASE}}(a_1,b_1D^{l_1}),l_1\geq 0$
the first gate in the convolutional encoder is
\begin{equation}
\text{CPHASE}(a_1,b_1)(\sigma_1=l_1,\tau_1=0).\label{cp+}
\end{equation}
If the first gate string is $\overline{\text{CPHASE}}(a_1,b_1D^{l_1}),l_1< 0$ the first gate in the convolutional encoder is
\begin{equation}
\text{CPHASE}(a_1,b_1)(\sigma_1=0,\tau_1=|l_1|).\label{cp-}
\end{equation}
If the first gate string is $\overline{H}(b_1)$ or $\overline{P}(b_1)$ the first gate in the convolutional encoder is as follows respectively:
\begin{equation}
H(b_1)(0),\label{h}
\end{equation}
\begin{equation}
P(b_1)(0).\label{p}
\end{equation}
For each gate $j$ where $2 \leq j \leq N$, we should choose a value for the target frame index $\tau_{j}$ that satisfies all the constraints imposed by the gates preceding it.
First, consider the case where $\overline{U_j}$ is a $\overline{\text{CNOT}}$ or $\overline{\text{CPHASE}}$ gate string; then the following inequalities must be satisfied by the target frame index $\tau_{j}$ of $\overline{U_j}$.
By applying the source-target constraint in (\ref{s-t}) we have:
\begin{align}
\sigma_{i} & \leq\tau_{j}\,\,\,\,\,\,\forall i\in(\mathcal{S-T})_{j}, \nonumber\\
\therefore\,\,\,\tau_{i}+l_i & \leq\tau_{j} \,\,\,\,\,\,\forall i\in(\mathcal{S-T})_{j},\nonumber\\
\therefore\,\,\max\{\tau_{i}+l_{i}\}_{i\in(\mathcal{S-T})_{j}} & \leq\tau_{j}, \label{cn-cp-s-t}
\end{align}
by applying the target-source constraint in (\ref{t-s}) we have:
\begin{align}
\tau_{i} & \leq\sigma_{j}\,\,\,\,\,\,\forall i\in(\mathcal{T-S})_{j}\nonumber\\
\therefore\,\,\,\tau_{i} & \leq\tau_{j}+l_j \,\,\,\,\,\,\forall i\in(\mathcal{T-S})_{j}, \nonumber\\
\therefore\,\,\,\tau_{i}-l_j & \leq\tau_{j}\,\,\,\,\,\,\forall i\in(\mathcal{T-S})_{j},\nonumber\\
\therefore\,\,\max\{\tau_{i}-l_{j}\}_{i\in(\mathcal{T-S})_{j}} & \leq\tau_{j}.
\label{cn-cp-t-s}
\end{align}
By applying the target-target constraint in (\ref{t-t}) we have:
\begin{align}
\tau_{i} & \leq\tau_{j}\,\,\,\,\,\,\forall i\in(\mathcal{T-T})_{j}\nonumber\\
\therefore\,\,\max\{\tau_{i}\}_{i\in(\mathcal{T-T})_{j}} & \leq\tau_{j}. \label{cn-cp-t-t}
\end{align}
The following constraint applies to the frame index $\tau_j$ of the target qubit by applying (\ref{cn-cp-s-t}-\ref{cn-cp-t-t}):
\begin{align}
\max\{{\{\tau_i+l_i\}}_{i\in(\mathcal{S-T})_{j}},{\{\tau_i-l_j\}}_{i\in (\mathcal{T-S})_{j}},\{\tau_{i}\}_{i\in (\mathcal{T-T})_{j}}\} \leq\tau_{j}.\,\,\,\,\,\,
\end{align}
Thus, the minimal value for $\tau_{j}$ (which corresponds to the minimal-memory realization) that satisfies all the constraints is:
\begin{align}
\tau_{j}=\max\{{\{\tau_i+l_i\}}_{i\in(\mathcal{S-T})_{j}},{\{\tau_i-l_j\}}_{i\in (\mathcal{T-S})_{j}},\{\tau_{i}\}_{i\in (\mathcal{T-T})_{j}}\} .\,\,\,\,\,\, \label{cn-cp-min1}
\end{align}
It can be easily shown that there is no constraint for the frame index $\tau_{j}$ if the gate
string $\overline{U_j}$ commutes with all
previous gate strings. Thus if $l_j\geq0$ we choose the frame index $\tau
_{j}$ as follows:%
\begin{equation}
\tau_{j}=0.\label{cn-cp+min2}%
\end{equation}
and if $l_j<0$ we choose $\tau_{j}$ as follows:
\begin{equation}
\tau_{j}=|l_j|.\label{cn-cp-min2}%
\end{equation}
If $l_j\geq0,$ a good choice for the frame index $\tau_{j},$ by considering (\ref{cn-cp-min1}) and (\ref{cn-cp+min2}) is as follows:
\begin{align}
\tau_{j}=\max\{0,{\{\tau_i+l_i\}}_{i\in(\mathcal{S-T})_{j}},{\{\tau_i-l_j\}}_{i\in (\mathcal{T-S})_{j}},\{\tau_{i}\}_{i\in (\mathcal{T-T})_{j}}\} .\,\,\,\,\,\, \label{cn-cp+}
\end{align}
and if $l_j<0,$ a good choice for the frame index $\tau_{j},$ by considering (\ref{cn-cp-min1}) and (\ref{cn-cp-min2}) is as follows:
\begin{align}
\tau_{j}=\max\{|l_j|,{\{\tau_i+l_i\}}_{i\in(\mathcal{S-T})_{j}},{\{\tau_i-l_j\}}_{i\in (\mathcal{T-S})_{j}},\{\tau_{i}\}_{i\in (\mathcal{T-T})_{j}}\} .\,\,\,\,\,\, \label{cn-cp-}
\end{align}
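As a brief worked example of (\ref{cn-cp+}), consider again the encoder $\overline{\text{CPHASE}}(2,3D)\,\overline{\text{CNOT}}(1,2D)$ of Figure \ref{constraintex}. The first gate string has no predecessors and $l_1=1\geq0$, so $\tau_{1}=0$ and $\sigma_{1}=\tau_{1}+l_{1}=1$. For the second gate string, $(\mathcal{S-T})_{2}=\{1\}$ while $(\mathcal{T-S})_{2}$ and $(\mathcal{T-T})_{2}$ are empty, so (\ref{cn-cp+}) gives
\begin{align}
\tau_{2}=\max\{0,\tau_{1}+l_{1}\}=1,\qquad \sigma_{2}=\tau_{2}+l_{2}=2,\nonumber
\end{align}
recovering the gates $\text{CPHASE}(2,3)(1,0)\,\text{CNOT}(1,2)(2,1)$.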
Now consider the case where $\overline{U_j}$ is an $\overline{H}$ gate string; then the following inequalities must be satisfied by the target frame index $\tau_{j}$ of $\overline{U_j}$.
By applying the source-target constraint in (\ref{s-t}) we have:
\begin{align}
\sigma_{i} & \leq\tau_{j}\,\,\,\,\,\,\forall i\in(\mathcal{S-T})_{j}, \nonumber\\
\therefore\,\,\,\tau_{i}+l_i & \leq\tau_{j} \,\,\,\,\,\,\forall i\in(\mathcal{S-T})_{j},\nonumber\\
\therefore\,\,\max\{\tau_{i}+l_{i}\}_{i\in(\mathcal{S-T})_{j}} & \leq\tau_{j}. \label{h-p-s-t}
\end{align}
By applying the target-target constraint in (\ref{t-t}) we have:
\begin{align}
\tau_{i} & \leq\tau_{j}\,\,\,\,\,\,\forall i\in(\mathcal{T-T})_{j}\nonumber\\
\therefore\,\,\max\{\tau_{i}\}_{i\in(\mathcal{T-T})_{j}} & \leq\tau_{j}. \label{h-p-t-t}
\end{align}
The following constraint applies to the frame index $\tau_j$ of the target qubit by applying ($\ref{h-p-s-t}$) and ($\ref{h-p-t-t}$):
\begin{align}
\max\{{\{\tau_i+l_i\}}_{i\in(\mathcal{S-T})_{j}},\{\tau_{i}\}_{i\in (\mathcal{T-T})_{j}}\} \leq\tau_{j}\nonumber.
\end{align}
Thus, the minimal value for $\tau_{j}$ (which corresponds to the minimal-memory realization) that satisfies all the constraints is:
\begin{align}
\tau_{j}=\max\{{\{\tau_i+l_i\}}_{i\in(\mathcal{S-T})_{j}},\{\tau_{i}\}_{i\in (\mathcal{T-T})_{j}}\} .\label{H-min1}
\end{align}
It can be easily shown that there is no constraint for the frame index $\tau_{j}$ if the gate
string $\overline{U_j}$ commutes with all
previous gate strings. Thus, in this case, we choose the frame index $\tau
_{j}$ as follows:%
\begin{equation}
\tau_{j}=0.\label{H-min2}%
\end{equation}
A good choice for the frame index $\tau_{j},$ by considering (\ref{H-min1}) and (\ref{H-min2}) is as follows:
\begin{align}
\tau_{j}=\max\{{0,\{\tau_i+l_i\}}_{i\in(\mathcal{S-T})_{j}},\{\tau_{i}\}_{i\in (\mathcal{T-T})_{j}}\}.
\end{align}
Now consider the case where $\overline{U_j}$ is a $\overline{P}$ gate string; then, by applying the target-target constraint in (\ref{t-t}), the following inequality must be satisfied by the target frame index $\tau_{j}$:
\begin{align}
\tau_{i} & \leq\tau_{j}\,\,\,\,\,\,\forall i\in(\mathcal{T-T})_{j}\nonumber\\
\therefore\,\,\max\{\tau_{i}\}_{i\in(\mathcal{T-T})_{j}} & \leq\tau_{j}\nonumber.
\end{align}
Thus, the minimal value for $\tau_{j}$ (which corresponds to the minimal-memory realization) that satisfies all the constraints is:
\begin{align}
\tau_{j}=\max\{\tau_{i}\}_{i\in(\mathcal{T-T})_{j}}. \label{P-min1}
\end{align}
It can be easily shown that there is no constraint for the frame index $\tau_{j}$ if the gate
string $\overline{U_j}$ commutes with all
previous gate strings. Thus, in this case, we choose the frame index $\tau_{j}$ as follows:%
\begin{equation}
\tau_{j}=0.\label{P-min2}%
\end{equation}
Considering (\ref{P-min1}) and (\ref{P-min2}) together, a good choice for the frame index $\tau_{j}$ is the following:
\begin{align}
\tau_{j}=\max\{0,\{\tau_{i}\}_{i\in (\mathcal{T-T})_{j}}\}.
\end{align}
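The max-rules above, together with the analogous rule for CNOT and CPHASE strings derived earlier, share one shape, so they can be collected into a single helper. The following is only an illustrative sketch (the function name and calling convention are ours, and the relation sets are assumed to have been precomputed):

```python
def frame_index(st, ts, tt, l_j=0):
    """Target-qubit frame index tau_j for a new gate string.

    st : pairs (tau_i, l_i) for i in (S-T)_j
    ts : values tau_i for i in (T-S)_j (empty for H and P strings)
    tt : values tau_i for i in (T-T)_j
    l_j: degree of the new gate string (0 for H and P strings)
    """
    candidates = [0]                                  # no constraint at all
    candidates += [tau_i + l_i for tau_i, l_i in st]  # source-target constraints
    candidates += [tau_i - l_j for tau_i in ts]       # target-source constraints
    candidates += list(tt)                            # target-target constraints
    return max(candidates)
```

For an $\overline{H}$ string (empty \texttt{ts}, $l_j=0$) this reduces to the rule combining (\ref{H-min1}) and (\ref{H-min2}); for a $\overline{P}$ string it reduces to $\max\{0,\{\tau_{i}\}_{i\in(\mathcal{T-T})_{j}}\}$.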
\subsubsection{Construction of the non-commutativity graph}
We introduce the notion of a \emph{non-commutativity} graph, $\mathcal{G}$, in order to find the
best values for the target-qubit
frame indices. The graph is a weighted, directed acyclic graph constructed
from the non-commutativity relations of the gate strings
in~(\ref{encoder}). Algorithm \ref{graph} presents pseudocode for constructing the non-commutativity graph.
\begin{algorithm}
\caption{Algorithm for determining the non-commutativity graph $\mathcal{G}$ for the general case}
\label{graph}
\begin{algorithmic}
\STATE$N \gets$ Number of gate strings in the pearl-necklace encoder.
\STATE Draw a {\bf START} vertex.
\FOR{$j := 1$ to $N$}
\STATE Draw a vertex labeled $j$ for the $j^{\text{th}}$ gate string $\overline{U}_j$
\IF{$j\in (I_{\text{CNOT}}^{-}\cup I_{\text{CPHASE}}^{-})$}
\STATE DrawEdge({\bf START}, $j$, $|l_j|$)
\ELSE
\STATE DrawEdge({\bf START}, $j$, 0)
\ENDIF
\FOR{$i$ := $1$ to $j-1$}
\IF{$j \in (I_{\text{CNOT}}^{+}\cup I_{\text{CPHASE}}^{+}\cup I_{\text{CNOT}}^{-}\cup I_{\text{CPHASE}}^{-}) $}
\IF{$i\in (\mathcal{S-T})_{j}$}
\STATE DrawEdge($i,j,l_i$ )
\ENDIF
\IF{$i\in (\mathcal{T-S})_{j}$}
\STATE DrawEdge($i,j,-l_j$)
\ENDIF
\IF{$i\in (\mathcal{T-T})_{j}$}
\STATE DrawEdge($i,j,0$)
\ENDIF
\ELSE
\IF{$j\in I_{H}$}
\IF{$i\in (\mathcal{S-T})_{j}$}
\STATE DrawEdge($i,j,l_i$)
\ENDIF
\IF{$i\in (\mathcal{T-T})_{j}$}
\STATE DrawEdge($i,j,0$)
\ENDIF
\ELSE
\IF{$i\in (\mathcal{T-T})_{j}$}
\STATE DrawEdge($i,j,0$)
\ENDIF
\ENDIF
\ENDIF
\ENDFOR
\ENDFOR
\STATE Draw an {\bf END} vertex.
\FOR{$j$ := $1$ to $N$}
\IF{$j\in (I_{\text{CNOT}}^+ \cup I_{\text{CPHASE}}^+)$} \STATE DrawEdge($j$,{\bf END}, $l_j$)
\ELSE \STATE DrawEdge($j$,{\bf END}, $0$)
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
$\mathcal{G}$ consists of $N$ vertices, labeled
$1,2,\cdots,N$, where the $j^{\text{th}}$ vertex corresponds to the
$j^{\text{th}}$ gate string $\overline{U}_j$.
It also has two dummy vertices, named \textquotedblleft
START\textquotedblright\ and \textquotedblleft END.\textquotedblright%
\ DrawEdge$\left( i,j,w\right) $ is a function that draws a directed edge
from vertex $i$ to vertex $j$ with an edge weight equal to $w$.
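Algorithm~\ref{graph} translates directly into code once the index sets and the relation sets are available. The sketch below assumes they have been precomputed (the argument names are ours) and returns $\mathcal{G}$ as an edge list:

```python
def build_graph(N, pos, neg, hadamard, st, ts, tt, l):
    """Edge list of the non-commutativity graph G, following Algorithm 1.

    Vertices: 0 is START, 1..N are gate strings, N+1 is END.
    pos, neg : index sets I+_CNOT u I+_CPHASE and I-_CNOT u I-_CPHASE
    hadamard : index set I_H (remaining indices are taken to be in I_P)
    st/ts/tt : precomputed relation sets, e.g. st[j] = (S-T)_j
    l        : degrees, with l[j] = 0 for H and P strings
    """
    START, END = 0, N + 1
    edges = []
    for j in range(1, N + 1):
        # START edge: weight |l_j| for negative-degree CNOT/CPHASE, else 0
        edges.append((START, j, abs(l[j]) if j in neg else 0))
        for i in range(1, j):
            if j in pos or j in neg:            # CNOT / CPHASE string
                if i in st[j]: edges.append((i, j, l[i]))
                if i in ts[j]: edges.append((i, j, -l[j]))
                if i in tt[j]: edges.append((i, j, 0))
            elif j in hadamard:                 # Hadamard string
                if i in st[j]: edges.append((i, j, l[i]))
                if i in tt[j]: edges.append((i, j, 0))
            else:                               # phase string
                if i in tt[j]: edges.append((i, j, 0))
    for j in range(1, N + 1):
        # END edge: weight l_j for positive-degree CNOT/CPHASE, else 0
        edges.append((j, END, l[j] if j in pos else 0))
    return edges
```

As in the pseudocode, Hadamard strings receive only source-target and target-target edges, and phase strings only target-target edges.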
\subsubsection{The longest path gives the minimal memory requirements}
Theorem~\ref{thm:longest-path-is-memory} below states that the weight of the
longest path from the START vertex to the END vertex is equal to the minimal memory
required for a convolutional encoder implementation.
\begin{theorem}
\label{thm:longest-path-is-memory}The weight $w$\ of the longest path from
the START vertex to END vertex in the non-commutativity graph $\mathcal{G}$ is equal
to the minimal memory $L$ that the convolutional encoder requires.
\end{theorem}
\begin{proof}
We first prove by induction that the weight $w_{j}$ of the longest path from
the START vertex to vertex $j$ in the non-commutativity graph $\mathcal{G}$ is%
\begin{equation}
w_{j}=\mathbb{\tau}_{j}. \label{eq:Gwlp}%
\end{equation}
Based on the algorithm, a zero-weight edge connects the START vertex to the first vertex if $1\in(I_{\text{CNOT}}^{+}\cup I_{\text{CPHASE}}^{+}\cup I_{H}\cup I_{P})$; in this case, based on (\ref{cn+}), (\ref{cp+}), (\ref{h}) and (\ref{p}), $\tau_{1}=0$ and therefore
$w_{1}=\tau_{1}=0$. An edge with weight equal to $|l_1|$ connects the START vertex to the first vertex if $1 \in (I_{\text{CNOT}}^{-}\cup I_{\text{CPHASE}}^{-})$; based on (\ref{cn-}) and (\ref{cp-}), $\tau_{1}=|l_1|$ and therefore
$w_{1}=\tau_{1}=|l_1|$.
Thus the base step holds for the target index of the first
gate in a minimal-memory convolutional encoder. Now suppose the property
holds for the target indices of the first $k$ gates in the convolutional
encoder:%
\[
w_{j}=\mathbb{\tau}_{j}\,\,\,\,\,\,\forall j : 1\leq j\leq k.
\]
Suppose we add a new gate string $\overline{U}_{k+1}$ to the pearl-necklace encoder, and
Algorithm~\ref{graph} then adds a new vertex $k+1$ to the
graph $\mathcal{G}.$ Suppose $(k+1)\in (I^{+}_\text{CNOT}\cup I^{+}_\text{CPHASE})$. The following edges are added to $\mathcal{G}$:
\begin{enumerate}
\item A zero-weight edge from the START vertex to vertex $k+1$.
\item An $l_{i}$-weight edge from each vertex $\{i\}_{i\in(\mathcal{S-T})_{k+1}}$
to vertex $k+1$.
\item A $-l_{k+1}$-weight edge from each vertex $\{i\}_{i\in(\mathcal{T-S})_{k+1}}$ to vertex $k+1$.
\item A zero-weight edge from each vertex $\{i\}_{i\in(\mathcal{T-T})_{k+1}}$ to vertex $k+1$.
\item An $l_{k+1}$-weight edge from vertex $k+1$ to the END vertex.
\end{enumerate}
It is clear that the following relations hold:%
\begin{align}
w_{k+1} & =\max\{0,\{w_{i}+l_{i}\}_{i\in(\mathcal{S-T})_{k+1}},\{w_{i}%
-l_{k+1}\}_{i\in(\mathcal{T-S})_{k+1}},\{w_{i}\}_{i\in(\mathcal{T-T})_{k+1}}\},\nonumber\\
& =\max\{0,\{\mathbb{\tau}_{i}+l_{i}\}_{i\in(\mathcal{S-T})_{k+1}},\{\mathbb{\tau
}_{i}-l_{k+1}\}_{i\in(\mathcal{T-S})_{k+1}},\{\mathbb{\tau}_{i}\}_{i\in(\mathcal{T-T})_{k+1}}\}. \label{Gwlp2}%
\end{align}
By applying
(\ref{cn-cp+}) and (\ref{Gwlp2}) we have:%
\[
w_{k+1}=\tau_{k+1}.
\]
In a similar way, we can show that if $\overline{U}_{k+1}$ is any other gate string in the shift-invariant Clifford group, then:
\[w_{k+1}=\tau_{k+1}.\]
The proof of the theorem then follows by considering the following equalities:%
\begin{align}
w & =\max\{\max_{i\in (I_{\text{CNOT}}^{+}\cup I_{\text{CPHASE}}^{+}\cup I_{H}\cup I_{P})}\{w_{i}+l_{i}\}, \max_{i\in (I_{\text{CNOT}}^{-}\cup I_{\text{CPHASE}}^{-})}\{w_i\}\} \nonumber\\
& =\max\{\max_{i\in (I_{\text{CNOT}}^{+}\cup I_{\text{CPHASE}}^{+}\cup I_{H}\cup I_{P})}\{\tau_{i}+l_{i}\}, \max_{i\in (I_{\text{CNOT}}^{-}\cup I_{\text{CPHASE}}^{-})}\{\tau_{i}\}\}\nonumber\\
& =\max\{\max_{i\in (I_{\text{CNOT}}^{+}\cup I_{\text{CPHASE}}^{+}\cup I_{H}\cup I_{P})}\{\sigma_{i}\}, \max_{i\in (I_{\text{CNOT}}^{-}\cup I_{\text{CPHASE}}^{-})}\{\tau_{i}\}\}.\nonumber
\end{align}
The first equality holds because the longest path in the graph is the maximum of the
weight of the path to the $i^{\text{th}}$ vertex summed with the weight of the
edge to the END\ vertex. The second equality follows by applying
(\ref{eq:Gwlp}). The final equality follows because ${\sigma}_{i}=\tau_{i}+l_{i}$. It is clear that\[\max\{\max_{i\in (I_{\text{CNOT}}^{+}\cup I_{\text{CPHASE}}^{+}\cup I_{H}\cup I_{P})}\{\sigma_{i}\}, \max_{i\in (I_{\text{CNOT}}^{-}\cup I_{\text{CPHASE}}^{-})}\{\tau_{i}\}\}\] is equal to the minimal required memory for a minimal-memory convolutional
encoder, hence the theorem holds.
\end{proof}
The final task is to determine the longest path in $\mathcal{G}$. Finding the
longest path in a general graph is an NP-complete problem, but in a
weighted, directed acyclic graph it takes only linear time with dynamic
programming~\cite{cormen}. The non-commutativity graph $\mathcal{G}$ is acyclic because a directed edge
connects each vertex only to vertices whose corresponding gates come later in the pearl-necklace encoder.
The running time for the construction of the graph is quadratic in the number of gate strings in the pearl-necklace
encoder: in Algorithm~\ref{graph}, the instructions in the
inner \textbf{for} loop require constant time $O(1)$, and the number of iterations
of the inner loop in the $j^{\text{th}}$ iteration of the outer
\textbf{for} loop is $j-1$. Thus the running time $T(N)$\ of
Algorithm~\ref{graph} is
\[
T(N)=\sum_{j=1}^{N}{\sum_{i=1}^{j-1}O(1)}=O(N^{2}).
\]
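The linear-time dynamic program mentioned above is short enough to state explicitly. Since every edge of $\mathcal{G}$ points from a smaller to a larger vertex label, the labels $0,1,\dots,N+1$ (with START $=0$ and END $=N+1$) are already a topological order. A sketch:

```python
def longest_path(N, edges):
    """Weight of the longest START-to-END path in a weighted DAG.

    Vertices are 0..N+1 with START = 0 and END = N + 1; edges is a list of
    (u, v, w) triples with u < v, so one forward pass suffices: O(V + E).
    """
    dist = [float("-inf")] * (N + 2)
    dist[0] = 0                                  # START
    adj = [[] for _ in range(N + 2)]
    for u, v, w in edges:
        adj[u].append((v, w))
    for u in range(N + 1):                       # relax in topological order
        if dist[u] == float("-inf"):
            continue
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)
    return dist[N + 1]
```

The intermediate values $\mathrm{dist}[j]$ are exactly the frame indices $\tau_j$ of Theorem~\ref{thm:longest-path-is-memory}.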
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
natheight=9.879900in,
natwidth=19.139999in,
height=3.7in,
width=6.37in
]
{pearl-convo-ex.png}
\end{center}
\caption
{(a) pearl-necklace representation, and (b) minimal-memory convolutional encoder representation for example
1.}
\label{pearl-convo-ex}
\end{figure}%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
natheight=8.879900in,
natwidth=19.139999in,
height=3.213in,
width=4.61839in
]
{graph-convo1.png}
\end{center}
\caption
{(a) The non-commutativity graph $\mathcal{G}$ and (b) a minimal-memory convolutional encoder for Example
1}
\label{graph-convo1}
\end{figure}%
Example 1: Consider the following succession of gate strings in a pearl-necklace encoder (Figure~\ref{pearl-convo-ex}(a)):
\begin{align}
&\overline{H}\left( 1\right) \overline{P}\left( 1\right) \overline{\text{CPHASE}}\left( 1,2D^{-1}\right) \overline{\text{CPHASE}}\left( 2,3D^2\right)
\overline{\text{CNOT}}\left( 3,2D\right) \overline{\text{CNOT}}\left( 2,3D\right). \nonumber
\end{align}
Figure \ref{graph-convo1}(a) shows $\mathcal{G}$ for this pearl-necklace encoder,
after running Algorithm~\ref{graph}. The longest path through the graph is
\[
\text{START}\rightarrow4\rightarrow5\rightarrow6\rightarrow\text{END},
\]
with weight equal to four (0+2+1+1). Therefore the minimal memory for the convolutional encoder
is equal to four frames of
memory qubits. Also, by inspecting the graph $\mathcal{G}$,
we can determine the locations for all the target qubit frame indices: $\tau_{1}=0,$ $\tau_{2}=0,$ $\tau_{3}=1,$ $\tau_{4}=0,$ $\tau_{5}=2,$ and $\tau_{6}=3.$
Figure~\ref{graph-convo1}(b) depicts a
minimal-memory convolutional encoder for this example. Figure~\ref{pearl-convo-ex}(b) depicts the minimal-memory convolutional encoder representation of the pearl-necklace encoder in Figure~\ref{pearl-convo-ex}(a).
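The quoted numbers for Example 1 can be reproduced in a few lines. The edge list below is an illustrative reconstruction, not the full output of Algorithm~\ref{graph}: it contains only the START and END edges fixed by the index sets ($l_3=-1$, $l_4=2$, $l_5=l_6=1$, and gate string 3 lies in $I_{\text{CPHASE}}^{-}$) plus the two source-target edges along the longest path; the remaining edges of $\mathcal{G}$ do not alter these values.

```python
START, END = 0, 7                     # gate strings are vertices 1..6
edges = [
    (START, 1, 0), (START, 2, 0),
    (START, 3, 1),                    # gate 3 has l_3 = -1, so weight |l_3| = 1
    (START, 4, 0), (START, 5, 0), (START, 6, 0),
    (4, 5, 2), (5, 6, 1),             # source-target edges, weights l_4 and l_5
    (1, END, 0), (2, END, 0), (3, END, 0),
    (4, END, 2), (5, END, 1), (6, END, 1),
]
dist = {v: float("-inf") for v in range(END + 1)}
dist[START] = 0
for u, v, w in sorted(edges):         # vertex labels are a topological order
    dist[v] = max(dist[v], dist[u] + w)
tau = {j: dist[j] for j in range(1, 7)}
# tau reproduces the quoted frame indices and dist[END] the memory, 4
```

Running this gives $\tau_{1}=0$, $\tau_{2}=0$, $\tau_{3}=1$, $\tau_{4}=0$, $\tau_{5}=2$, $\tau_{6}=3$, and a longest-path weight of four, matching the text.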
\section{Conclusion}
\label{conclu}
In this paper, we have proposed an algorithm to find a practical realization with a minimal
memory requirement for a pearl-necklace encoder of a general quantum convolutional code,
which includes any gate string in the
shift-invariant Clifford group.
We have shown that the non-commutativity relations of gate strings in the encoder
determine the realization. We introduced a non-commutativity graph, in which each vertex
corresponds to a gate string in the pearl-necklace encoder and the weighted edges represent
non-commutativity relations in the encoder. Using the graph, the minimal-memory realization
can be obtained: the weight of the longest path in the graph is equal to the minimal
required memory of the encoder. The running time for the construction of the graph and
finding the longest path is quadratic in the number of gate strings
in the pearl-necklace encoder.
As we mentioned in our previous paper~\cite{ourpaper}, an open question still remains.
The proposed algorithm begins with
a particular pearl-necklace encoder, and finds the minimal required memory for it.
But one can instead start with the polynomial description of a convolutional code and find the minimal
required memory for the code itself. There are two problems here to work on:
(1) finding the pearl-necklace encoder with minimal-memory requirements among all
pearl-necklace encoders that implement the same code, (2) constructing a repeated
unitary directly from the polynomial description of the code itself, and
attempting to minimize the memory requirements of realizing this code.
\section*{Acknowledgements}
The authors would like to thank Mark M. Wilde for his helpful discussions,
comments and careful reading of an earlier version of this paper.
\bibliographystyle{unsrt}
\section{Introduction}
Blind quantum computing~\cite{BFK,FK,MABQC,Barz,Vedran,
composableV,
Lorenzo,Joeoptimal,Barz2,NandV}
is a secure delegated quantum computing protocol in which a client (Alice),
who does not have sufficient quantum technology, delegates
her quantum computing to a server (Bob), who has a fully fledged
quantum computer, without leaking any information about
her computation to Bob.
A blind quantum computing protocol for almost classical Alice
was first proposed by Broadbent, Fitzsimons, and Kashefi~\cite{BFK}
by using the measurement-based model due to Raussendorf
and Briegel~\cite{MBQC}.
In their protocol, Alice only needs a device which emits randomly
rotated single-qubit states.
Later it was shown that weak coherent pulses, instead of
single-photon states,
are sufficient for blind quantum computation~\cite{Vedran}.
Recently, it was shown
that blind quantum computing can be verifiable~\cite{FK,Barz2,NandV}.
Here, verifiable means that Alice can test Bob's computation~\cite{FK,Barz2,
NandV}. The verifiability is an important requirement,
since Alice cannot recalculate the result of the delegated
computation by herself to check the correctness
(remember that she does not have any quantum computer),
and therefore if there is no verification method,
she might be palmed off with a wrong result
by a fishy company who tries to sell a fake
quantum computer~\cite{Barz2,NandV}.
The verifiable blind protocol was experimentally demonstrated with a photonic
qubit system~\cite{Barz2,NandV}.
Recently, another type of blind quantum computing protocol was proposed
in Ref.~\cite{MABQC}.
In this protocol, Alice needs only a device that can measure
quantum states.
One advantage of this protocol is that the security is device
independent~\cite{DI,DI2,BCK,BCK2,McKague},
and is based on the no-signaling principle~\cite{Popescu},
which is more fundamental than quantum physics.
However, it has been an open problem whether the protocol can enjoy verification.
In this paper, we propose a verification protocol for
the measurement-only blind quantum computing.
We will propose two protocols.
Interestingly, our protocols are based on
the combination of
two different concepts from different fields:
the no-signaling principle~\cite{Popescu} from the foundation of physics and
the topological quantum error correcting code~\cite{RHG1,RHG2,Kitaev}
from a practical application in quantum information.
The no-signaling principle means that a shared quantum (or more general) state
cannot be used to transmit information.
It is one of the most central principles in physics, and known to
be more fundamental than quantum physics (i.e., there is a theory
which is more non-local than quantum physics but does not violate
the no-signaling principle~\cite{Popescu}).
The topological quantum error correcting code
is a specific type of the quantum error correcting code
which cleverly uses the topological order of exotic quantum
symmetry-breaking systems
to globally encode logical states.
\section{Topological measurement-based quantum computation}
The Raussendorf-Harrington-Goyal state $|RHG\rangle$ is the three-dimensional
graph state with the elementary cell given in Fig.~\ref{DI_fig3} (a).
Defects in the graph state are created by $Z$ measurements
on $|RHG\rangle$ as usual in the cluster measurement-based model.
Topological braidings of defect tubes
can implement some Clifford gates~\cite{RHG1,RHG2,Kitaev}.
Non-Clifford gates, that are necessary for the universal quantum
computation, are implemented by the magic state
preparation and distillation~\cite{magic}.
A string of $Z$ operators acting on the resource state, which has at least one open edge,
is considered as an error, and its edge(s)
is detected by syndrome measurements
of cubicles of $X$ operators (Fig.~\ref{DI_fig3} (b)).
A string of $Z$ operators on the resource state, which connects
is not detected, and can be a logical error.
Local adaptive measurements can implement quantum
computation as well as syndrome error detection.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.4\textwidth]{DI_fig3.eps}
\end{center}
\caption{
The topological measurement-based quantum computation.
(a) The elementary cell of the Raussendorf-Harrington-Goyal state.
Green balls are qubits, and red bonds are $CZ$ gates.
(b) The error detection. Red strings are errors. Green boxes
are syndrome operators.
(c) Undetected errors or logical operations. Blue tubes are defects.
Red and yellow strings are strings of operators,
which surround or connect defects, respectively.
}
\label{DI_fig3}
\end{figure}
\section{First protocol}
Let us explain our first protocol.
The basic idea of our protocol is illustrated in Fig.~\ref{DI_fig1}:
Bob prepares the resource state, and Alice performs measurements.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.3\textwidth]{DI_fig1.eps}
\end{center}
\caption{
Our setup.
Bob first prepares a resource state.
Bob next sends each particle to Alice one by one.
Alice measures each particle according to her algorithm.
}
\label{DI_fig1}
\end{figure}
More precisely, our protocol runs as follows (Fig.~\ref{DI_fig2}).
First, Bob prepares a universal
resource state,
and sends each qubit of it to Alice one by one (Fig.~\ref{DI_fig2} (a)).
Alice measures each qubit until she remotely creates
the $N$-qubit state, $\sigma_q|\Psi_P\rangle$,
in Bob's laboratory (Fig.~\ref{DI_fig2} (b)),
where
$
\sigma_q\equiv \bigotimes_{j=1}^NX_j^{x_j}Z_j^{z_j}
$
with $q\equiv(x_1,...,x_N,z_1,...,z_N)\in\{0,1\}^{2N}$
is the byproduct of the measurement-based quantum computation~\cite{MBQC},
and $X_j$ and $Z_j$ are Pauli operators acting on $j$th qubit.
The state
$
|\Psi_P\rangle\equiv P\Big(|R\rangle\otimes|+\rangle^{\otimes N/3}\otimes|0\rangle^{\otimes N/3}\Big),
$
is the $N$-qubit state,
where $|R\rangle$ is an $N/3$-qubit universal resource state of the
measurement-based quantum computation encoded with a quantum
error-correcting code of the code distance $d$.
(The size of $|R\rangle$ and the number of traps are optimal, since if
there are too many traps, the efficiency of the computation becomes small,
whereas if there are too few traps, the probability of detecting malicious Bob
becomes small.)
For example, $|R\rangle$ can be the $N/3$-qubit Raussendorf-Harrington-Goyal
state~\cite{RHG1,RHG2} with sufficiently many magic states
being already distilled.
(The Raussendorf-Harrington-Goyal state is the resource state
of the topological measurement-based quantum computing~\cite{RHG1,RHG2}.
In stead of the RHG state, any other quantum error correcting code
can be utilized. Therefore,
we can also assume that $|R\rangle$ is a normal resource state of
the measurement-based quantum computation encoded with a quantum error-correcting
code.)
We define $|+\rangle\equiv\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$,
and $P$ is an $N$-qubit permutation, which keeps
the order of qubits in $|R\rangle$.
This permutation is randomly chosen by Alice and kept secret to Bob.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.3\textwidth]{DI_fig2.eps}
\end{center}
\caption{
Our protocol. Here, $|\Psi_P\rangle\equiv P(|R\rangle\otimes|+\rangle^{\otimes N/3}
\otimes|0\rangle^{\otimes N/3})$, $P$ is a $N$-qubit permutation, and $|R\rangle$
is a universal resource state.
}
\label{DI_fig2}
\end{figure}
Throughout this paper, we assume that there is no communication
channel from Alice to Bob.
Then, due to the no-signaling principle, Bob cannot learn anything
about $P$~\cite{MABQC}.
If Bob can learn something about $P$,
Alice can transmit some message to Bob by encoding her message
into $P$, which contradicts
the no-signaling principle.
Bob sends each qubit of $\sigma_q|\Psi_P\rangle$ to Alice
one by one,
and Alice does the measurement-based quantum computation
on $\sigma_q|\Psi_P\rangle$ with correcting $\sigma_q$
(Fig.~\ref{DI_fig2} (c)).
This means that before measuring $j$th qubit of $\sigma_q|\Psi_P\rangle$
she applies $\sigma_q^\dagger|_j$ on $j$th qubit,
where
$\sigma_q^\dagger|_j$ is the restriction
of $\sigma_q^\dagger$ on $j$th qubit.
For example, $(I\otimes XZ\otimes Z)|_2=XZ$.
Qubits belonging to $|R\rangle$ are used to implement
the Alice's desired quantum computation.
States $|0\rangle$ and $|+\rangle$
are used as ``traps"~\cite{FK}.
In other words,
she measures $Z$ on $|0\rangle$ and $X$ on $|+\rangle$,
and if she obtains the minus result (i.e., $|1\rangle$ or $|-\rangle$ state),
she aborts the protocol.
If results are plus for all traps, she accepts the result
of the measurement-based quantum computation on $|R\rangle$.
\section{Verifiability}
Now we show that if all measurements on traps show the correct results,
the probability that a logical state of Alice's computation is changed
is exponentially small.
In other words, the probability that Alice is fooled by Bob is exponentially small.
Hence, our protocol is verifiable.
Since Bob might be dishonest, he might deviate from the above procedure.
His general attack
is a creation of a different state $\rho$
instead of $\sigma_q|\Psi_P\rangle$.
If he is honest,
$\rho=\sigma_q|\Psi_P\rangle\langle \Psi_P|
\sigma_q^\dagger$.
If he is not honest, $\rho$ can be any state.
However, for any $N$-qubit state $\rho$,
there exists a completely-positive-trace-preserving (CPTP) map
which satisfies
$
\rho=\sum_jE_j\sigma_q|\Psi_P\rangle\langle\Psi_P|
\sigma_q^\dagger E_j^\dagger,
$
where
$E_j\equiv\sum_\alpha C_j^\alpha\sigma_\alpha$,
is a Kraus operator of the CPTP map,
and $C_j^\alpha$ is a complex number (see Appendix~\ref{App:CPTP}).
Since $E_j^\dagger E_j$ is a POVM,
$
I=\sum_jE_j^\dagger E_j
=\sum_j\sum_{\alpha,\beta}C_j^{\alpha *}C_j^\beta\sigma_\alpha^\dagger
\sigma_\beta,
$
we obtain
$\sum_j\sum_\alpha|C_j^\alpha|^2=1$.
Bob does not know $q$. Therefore, from Bob's viewpoint,
the state is averaged over all $q$:
\begin{eqnarray}
&&\frac{1}{4^N}\sum_q\sum_j
\sigma_q^\dagger E_j\sigma_q|\Psi_P\rangle\langle\Psi_P|
\sigma_q^\dagger
E_j^\dagger \sigma_q\nonumber\\
&=&
\frac{1}{4^N}
\sum_q\sum_{j,\alpha,\beta}C_j^\alpha C_j^{\beta *}
\sigma_q^\dagger \sigma_\alpha \sigma_q|\Psi_P\rangle\langle\Psi_P|
\sigma_q^\dagger \sigma_\beta^\dagger \sigma_q\nonumber\\
&=&
\frac{1}{4^N}
\sum_q\sum_{j,\alpha}|C_j^\alpha|^2
\sigma_q^\dagger \sigma_\alpha \sigma_q|\Psi_P\rangle\langle\Psi_P|
\sigma_q^\dagger \sigma_\alpha^\dagger \sigma_q\nonumber\\
&=&
\sum_{j,\alpha}|C_j^\alpha|^2
\sigma_\alpha |\Psi_P\rangle\langle\Psi_P|\sigma_\alpha^\dagger\nonumber\\
&=&
\sum_\alpha\tilde{C}_\alpha
\sigma_\alpha |\Psi_P\rangle\langle \Psi_P|\sigma_\alpha^\dagger,
\label{attack}
\end{eqnarray}
where
$\tilde{C}_\alpha\equiv\sum_j|C_j^\alpha|^2$
and
$
\sum_\alpha\tilde{C}_\alpha=
\sum_\alpha\sum_j|C_j^\alpha|^2
=1.
$
Here, we have used the following equations~\cite{Aharonov}
\begin{eqnarray}
\sum_q \sigma_q^\dagger\sigma_\alpha\sigma_q\rho \sigma_q^\dagger\sigma_\beta^\dagger\sigma_q&=&0
\label{formula}\\
\frac{1}{4^N}\sum_q \sigma_q^\dagger\sigma_\alpha\sigma_q\rho \sigma_q^\dagger\sigma_\alpha^\dagger\sigma_q&=&
\sigma_\alpha\rho \sigma_\alpha^\dagger\nonumber
\end{eqnarray}
for any $\rho$ and $\alpha\neq\beta$.
The second equation is easy to show.
For a proof of Eq.~(\ref{formula}), see Appendix~\ref{App:diagonalize}.
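Equation~(\ref{formula}) and the identity below it together form the Pauli twirl, and for $N=1$ both can be checked numerically in a few lines (the multi-qubit case follows by tensoring). This is only a sanity check, not part of the proof:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Z, X @ Z]                       # sigma_alpha for N = 1
sigmas_q = [np.linalg.matrix_power(X, x) @ np.linalg.matrix_power(Z, z)
            for x in (0, 1) for z in (0, 1)]     # sigma_q = X^x Z^z

rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = M @ M.conj().T
rho /= np.trace(rho)                             # a random density matrix

for a, sa in enumerate(paulis):
    for b, sb in enumerate(paulis):
        # (1/4) sum_q sigma_q^dag sa sigma_q rho sigma_q^dag sb^dag sigma_q
        avg = sum(sq.conj().T @ sa @ sq @ rho @ sq.conj().T @ sb.conj().T @ sq
                  for sq in sigmas_q) / 4
        target = sa @ rho @ sa.conj().T if a == b else np.zeros((2, 2))
        assert np.allclose(avg, target)          # off-diagonal terms vanish
```

The off-diagonal terms ($\alpha\neq\beta$) average to zero because the conjugation signs form orthogonal characters over $q$, which is exactly why only the "random Pauli" attack survives in Eq.~(\ref{attack}).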
Equation (\ref{attack}) shows that we can assume that Bob's attack is
the ``random Pauli" attack, i.e., Bob randomly applies Pauli operators on each qubit.
Bob's attacks after creating $\rho$ can also
be included in the preparation
of $\rho$.
This is understood as follows.
Let us assume that, after creating $\rho$,
Bob sends a subsystem $S_1$ of $\rho$
to Alice, and then Alice measures all particles of $S_1$.
After this Alice's measurement, Bob might apply an operation
on another subsystem $S_2$ of $\rho$ which has not been sent to Alice.
However, Bob cannot know Alice's measurement angles and results on $S_1$
due to the no-signaling principle, and therefore Bob's
operation on $S_2$ is independent of Alice's measurements
on $S_1$.
Furthermore, Bob's operation on $S_2$ commutes with Alice's measurements
on $S_1$.
Hence we can consider it as if
Bob applied such an operation on $S_2$ immediately after
he prepared $\rho$.
In short, we can assume that
Bob's attack is a random Pauli attack on the correct
state $|\Psi_P\rangle$ as is shown in Eq.~(\ref{attack}).
Hence
let us focus on $\sigma_\alpha|\Psi_P\rangle$.
For many quantum
error-correcting codes (such as the topological one~\cite{RHG1,RHG2}),
if the weight $|\alpha|$ of $\sigma_\alpha$
is less than a certain integer $d$ (the code distance),
then such an error is detected or does not
change logical states~\cite{RHG1,RHG2,Kitaev,FK}.
For example, in the topological code, $d$ is determined by the defect thickness and
distance between defects~\cite{RHG1,RHG2}.
Here, the weight $|\alpha|$ of $\sigma_\alpha$ means
the number of nontrivial operators in $\sigma_\alpha$.
(For example, the weight of $I\otimes XZ\otimes Z\otimes I\otimes X$
is 3.)
Therefore, in order for $\sigma_\alpha$ to change a logical state
of the computation,
$|\alpha|$ must be larger than $d$.
(To understand it, let us consider a simple example. If we encode
the logical 0 as $|0_L\rangle\equiv|000\rangle$
and the logical 1 as $|1_L\rangle\equiv|111\rangle$,
we must flip more than two qubits to change the logical state.
A single bit flip is detected and corrected when the majority vote
is done.)
Alice randomly chooses a permutation $P$.
In this case, the probability of $P^\dagger\sigma_\alpha P$
not changing any trap
is at most $(\frac{2}{3})^{|\alpha|/3}$.
(For a calculation, see Appendix~\ref{App:prob}).
Therefore,
the probability that the logical state is changed and
no trap is flipped
is at most
$
\sum_{|\alpha|\ge d}\tilde{C}_\alpha \left(\frac{2}{3}\right)^{|\alpha|/3}
\le
\left(\frac{2}{3}\right)^{d/3}\sum_{|\alpha|\ge d}\tilde{C}_\alpha
\le
\left(\frac{2}{3}\right)^{d/3},
$
where we have used the fact
$\tilde{C}_\alpha\ge0$
and
$\sum_{|\alpha|\ge d}\tilde{C}_\alpha\le\sum_\alpha \tilde{C}_\alpha=1$.
Here, we have said ``at most'', since
the above sum includes
the contribution from
$\sigma_\alpha$ which has a weight larger than $d$ but
does not contain any logical error.
In this way, we have shown that the probability that Alice is fooled
by Bob is exponentially small ($d$ can be sufficiently large
by concatenating the code).
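The combinatorial step above (a hidden random permutation making a weight-$|\alpha|$ error miss every trap) is easy to probe numerically. The sketch below is an illustrative Monte Carlo for the special case of an error consisting of $w$ Pauli-$X$ operators on fixed positions; an $X$ flips a $|0\rangle$ trap but leaves a $|+\rangle$ trap invariant, so the error goes unnoticed exactly when no $X$ lands on a $|0\rangle$ trap:

```python
import random

def undetected_probability(n_qubits, w, trials=20000, seed=1):
    """Estimate Pr[no trap flipped] for an X^w error on the first w positions."""
    third = n_qubits // 3
    layout = (["comp"] * third) + (["plus"] * third) + (["zero"] * third)
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        rng.shuffle(layout)                       # Alice's secret permutation P
        if all(layout[j] != "zero" for j in range(w)):
            hits += 1                             # no |0> trap was flipped
    return hits / trials

p = undetected_probability(30, 9)
assert p <= (2 / 3) ** (9 / 3)                    # the (2/3)^{|alpha|/3} bound
```

The estimate comes out near $(2/3)^{9}\approx 0.01$, comfortably below the stated bound $(2/3)^{|\alpha|/3}$; the bound in the text covers arbitrary Pauli errors, of which this is one instance.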
As we have seen, no communication from Alice to Bob
is required for the verification. Therefore,
whatever Alice's measurement device does, Bob cannot learn Alice's
computational information because of the no-signaling principle.
In other words,
the security of the protocol is device-independent.
\section{Second protocol}
Let us explain our second protocol, which uses properties of the
topological code and does not use
any traps.
Alice randomly chooses
$k\equiv(h_1,...,h_N,t_1,...,t_N)\in\{0,1\}^{2N}$,
and defines the $N$-qubit operator
$
K_k\equiv \bigotimes_{j=1}^NT_j^{t_j}H_j^{h_j},
$
where $T\equiv |0\rangle\langle0|+i|1\rangle\langle1|$
and $H$ is the Hadamard operator.
Note that $T^\dagger XT=-iXZ$, $T^\dagger ZT=Z$, and $T^\dagger XZ T=-iX$.
Next, Alice defines the $N$-qubit state
$|\Psi_k\rangle\equiv K_k|RHG'\rangle$,
where $|RHG'\rangle$ is the $N$-qubit Raussendorf-Harrington-Goyal
state~\cite{RHG1,RHG2}
with
a sufficient number of magic states being already distilled~\cite{RHG1,RHG2}.
Bob prepares a universal resource state, and sends each qubit of it to Alice
one by one.
Alice does measurements and creates
$\sigma_q|\Psi_k\rangle$ in Bob's laboratory,
where
$\sigma_q$ is the byproduct of the measurement-based quantum
computation.
Due to the no-signaling principle, Bob cannot learn $k$.
Bob sends each qubit of $\sigma_q|\Psi_k\rangle$
to Alice one by one, and Alice does her topological measurement-based
quantum computation with correcting $\sigma_qK_k$.
If Alice detects any error, she aborts the protocol.
Again, because of Eq.~(\ref{attack}),
we can assume that Bob's attack is a random Pauli attack.
Therefore let us focus on $\sigma_\alpha|\Psi_k\rangle$.
In order for $\sigma_\alpha$ to change a logical state
without being detected by syndrome measurements,
$\sigma_\alpha$ must contain at least one
string $s_\alpha$ of operators
which connects or surrounds defects
(Fig.~\ref{DI_fig3} (c))~\cite{RHG1,RHG2,Kitaev}.
Since Alice randomly chooses $k$, the probability that
all operators in $K_k^\dagger s_\alpha K_k$
become $Z$ or $XZ$ operators is at most $(\frac{3}{4})^{|s_\alpha|}$,
where $|s_\alpha|$ is the weight of $s_\alpha$.
Note that $|s_\alpha|\ge d$ because it connects or
surrounds defects.
Hence,
the probability that the logical state is changed
and Alice does not detect any error
is at most
$
\sum_{|\alpha|\ge d}\tilde{C}_\alpha \left(\frac{3}{4}\right)^{|s_\alpha|}
\le
\left(\frac{3}{4}\right)^d\sum_{|\alpha|\ge d}\tilde{C}_\alpha
\le
\left(\frac{3}{4}\right)^d.
$
In short, our second protocol is also verifiable.
Again, the device-independent security is guaranteed
by the no-signaling principle.
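The single-qubit counting behind the $(3/4)^{|s_\alpha|}$ bound can be verified directly: for each nontrivial Pauli $P\in\{X,Z,XZ\}$, conjugation by $K=T^{t}H^{h}$ with uniformly random $(h,t)$ maps $P$ into $\{Z,XZ\}$ (up to a phase) with probability at most $3/4$. A small numerical check:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, 1j])                      # T = |0><0| + i|1><1|

def same_up_to_phase(A, B):
    c = np.trace(B.conj().T @ A) / 2      # |c| = 1 iff A = phase * B
    return np.allclose(A, c * B) and np.isclose(abs(c), 1)

for P in (X, Z, X @ Z):
    count = 0
    for h in (0, 1):
        for t in (0, 1):
            K = np.linalg.matrix_power(T, t) @ np.linalg.matrix_power(H, h)
            Q = K.conj().T @ P @ K        # K^dag P K
            if same_up_to_phase(Q, Z) or same_up_to_phase(Q, X @ Z):
                count += 1
    assert count / 4 <= 3 / 4             # worst case is 3 of the 4 choices
```

The worst cases are $P=X$ and $P=XZ$, each landing in $\{Z,XZ\}$ for three of the four choices of $(h,t)$, which is where the per-site factor $3/4$ comes from.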
\if0
\section{Discussion}
It would be valuable to see here
the relation between our result
and results about the device-independent quantum key distribution (QKD).
In Ref.~\cite{BCK}, it was pointed out that if Alice and Bob use
the same device many times, the QKD is not device-independent secure,
since the malicious device can store the secret key in its memory, and
can make it public by pretending it
to be a legal classical message in the next round of QKD protocol.
Interestingly, in blind quantum computing,
the device-independent security can be guaranteed even
if Alice use multiple devices that are communicating with each other
due to the no-signaling principle.
In particular, Alice can use the same measurement device for
generating traps and for doing the computation itself.
However, if Alice does not have any trusted random number generator,
or uses the same device several times, the verifiability
of blind quantum computing is no longer device-independent.
In QKD, several new results have been published recently that
if trusted random number generators or isolated multiple devices
are available, Alice and Bob do not need to check device
for the secure QKD~\cite{BCK,BCK2,McKague}. (Hence device-independent in
that sense.)
In a similar way, if we relax some strict requirements,
we can show the device-independent verifiability.
For example, if Alice
can use trusted random number generators and isolated multiple devices
including a single trusted device,
the device-independent verifiability is possible,
which is an analogue of the result~\cite{McKague} that a trusted random number generator
and a single secure QKD allow multiple uses of the same device.
\if0
Testing computation is also an important
problem and has a long history in computer science.
In particular, testing quantum computing has been attracting much
attention recently.
Reichardt, Unger, and Vazirani~\cite{RUV} (RUV)
showed that a ``classical command
of quantum systems"
is possible even for a completely classical client
if she interacts with two quantum servers.
However, in order to achieve the completely classical client,
their scheme needs an artificial assumption that
two quantum servers cannot communicate with each other.
On the other hand, in our protocols, there is no such assumption
since the client interacts with only a single server like BFK protocol.
In order to have a single server,
we require the client to do polarization measurements of
bulk photons with a threshold detector, but this is an almost classical
task, and in fact ubiquitous in today's laboratories.
\fi
\fi
\acknowledgements
The author acknowledges
support from JSPS and the Tenure Track System of MEXT, Japan.
\section{Introduction}
The propagation of soliton envelopes in nonlinear
optical media has been predicted and demonstrated experimentally.\cite{ht,m}
This prediction arises from a reduction of the Maxwell equations,
which govern the electromagnetic fields in the medium, to a single,
completely integrable, partial differential equation.
The best-studied example is the nonlinear Schr\"{o}dinger (NLS) equation:
\begin{equation}
{\rm i}\frac{\partial \phi}{\partial t}
+\frac{\partial^{2} \phi}{\partial x^{2}}
+2|\phi|^{2}\phi=0.
\label{nls}
\end{equation}
This equation describes the wave propagation of
picosecond pulse envelopes $\phi(x,t)$ in a
lossless single mode fiber.
The NLS equation (\ref{nls}) is one of the completely integrable systems.\cite{zs}
Recently the interactions among several modes
have been studied.\cite{my}-\cite{h1}
In general the coupled mode approach
still permits a description of the pulse propagation in a multi-mode
waveguide by means of a vector version of (\ref{nls}).
Although these systems of equations are no longer integrable
except for special parameters,
one may obtain quantitative information about the
pulse propagation by resorting to numerical and perturbative methods.
Physically interesting situations that can be described
by coupled NLS equations include two parallel waveguides coupled through
field overlap.
The study of the propagation of optical solitons in multi-mode nonlinear
couplers is important from the viewpoint of their possible
applications in technology.
From detailed theoretical investigations, several integrable
coupled NLS equations possessing soliton solutions have been introduced
for special parameters.
In this paper we consider the discrete coupled NLS equation.
This equation is integrable
and the $N$-soliton solution is obtained by the Hirota method.\cite{o}
In the continuum limit this equation becomes the coupled NLS equation.
This equation is embedded in the 2-dimensional Toda
equation.\cite{ut}
There, the two times $t$ and $\bar{t}$ are complex conjugates of each other.
From the relation to the Toda equation
we study the integrability of the discrete coupled NLS equation.
Using this method we
show the integrability of the new equations.
This paper is organized as follows.
In \S 2
we consider the discrete coupled NLS equation.
In any coupled case of the bright and dark equations
we show the integrability.
In \S 3
we present new integrable equations
using the method of \S 2.
In \S 4
we discuss these systems from the viewpoint of conservation laws.
The last section is devoted to the concluding remarks.
\setcounter{equation}{0}
\section{Discrete coupled nonlinear Schr\"{o}dinger
and Toda equation}
Let $\phi_{n}^{(j)}(t)$, $j=1,2,\cdots,l,$ denote
$l$ component dynamical variables.
We consider the discrete coupled
nonlinear Schr\"{o}dinger (DCNS) equation.\cite{o}
\begin{equation}
\frac{\partial \phi^{(j)}_{n}}{\partial t_{1}}
-{\rm i}(\phi_{n-1}^{(j)}+\phi_{n+1}^{(j)})
(1+\kappa \sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2})
+2{\rm i}\phi_{n}^{(j)}=0,
\label{dcns1}
\end{equation}
where $t_{1}$ is the time and $\kappa$ is a constant.
We introduce the new time $t_{2}$ and
assume that $\phi_{n}^{(j)}(t_{1},t_{2})$ satisfies the following equation:
\begin{equation}
\frac{\partial \phi^{(j)}_{n}}{\partial t_{2}}
+(\phi_{n-1}^{(j)}-\phi_{n+1}^{(j)})
(1+\kappa \sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2})
=0.
\label{dcns2}
\end{equation}
This imposes the restriction that $\phi_{n}^{(j)}(t_{1},t_{2})$ is a solution
of both (\ref{dcns1}) and (\ref{dcns2}).
The relation between (\ref{dcns1}) and (\ref{dcns2})
is ``dual''.
We call (\ref{dcns2}) the coupled modified Volterra (CMV) equation.
We find it convenient to use the complex times $t$ and $\bar{t}$,
which are related to
$t_{1}$ and $t_{2}$ by
\begin{equation}
t=t_{2}+{\rm i}t_{1},\;\;\;\bar{t}
=t_{2}-{\rm i}t_{1}.
\end{equation}
In terms of the complex times, we rewrite (\ref{dcns1}) and (\ref{dcns2}):
\setcounter{eqalph}{\value{equation}}
\begin{equation}
\frac{\partial \phi_{n}^{(j)}}{\partial t}
=
\phi_{n+1}^{(j)}(1+\kappa\sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2})
-\phi_{n}^{(j)},
\end{equation}
\begin{equation}
\frac{\partial \phi_{n}^{(j)}}{\partial \bar{t}}
=
-\phi_{n-1}^{(j)}(1+\kappa\sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2})
+\phi_{n}^{(j)},
\end{equation}
\begin{equation}
\frac{\partial \bar{\phi}_{n}^{(j)}}{\partial t}
=
-\bar{\phi}_{n-1}^{(j)}(1+\kappa\sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2})
+\bar{\phi}_{n}^{(j)},
\end{equation}
\begin{equation}
\frac{\partial \bar{\phi}_{n}^{(j)}}{\partial \bar{t}}
=
\bar{\phi}_{n+1}^{(j)}(1+\kappa\sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2})
-\bar{\phi}_{n}^{(j)}.
\end{equation}
\setcounter{equation}{\value{eqalph}}
Here we introduce new dependent variables:
\begin{equation}
a_{n}^{(j)}=\kappa|\phi_{n}^{(j)}|^{2},
\;\;\;\;\;
b_{n}^{(j)}=\kappa\phi_{n}^{(j)}\bar{\phi}_{n-1}^{(j)},
\;\;\;\;\;
\bar{b}_{n}^{(j)}=\kappa\bar{\phi}_{n}^{(j)}\phi_{n-1}^{(j)}.
\end{equation}
The physical meanings of $a_{n}^{(j)}$ and $b_{n}^{(j)}$
are the amplitude and the momentum, respectively.
We rewrite (2.4) using these variables:
\setcounter{eqalph}{\value{equation}}
\begin{equation}
\frac{\partial a_{n}^{(j)}}{\partial t}
=
(b_{n+1}^{(j)}-b_{n}^{(j)})(1+\kappa\sum_{k=1}^{l}
a_{n}^{(k)}),
\end{equation}
\begin{equation}
\frac{\partial a_{n}^{(j)}}{\partial \bar{t}}
=
(\bar{b}_{n+1}^{(j)}-\bar{b}_{n}^{(j)})(1+\kappa\sum_{k=1}^{l}
a_{n}^{(k)}),
\end{equation}
\begin{equation}
\frac{\partial b_{n}^{(j)}}{\partial \bar{t}}
=
-a_{n-1}^{(j)}(1+\kappa\sum_{k=1}^{l}a_{n}^{(k)})
+a_{n}^{(j)}(1+\kappa\sum_{k=1}^{l}a_{n-1}^{(k)}),
\end{equation}
\begin{equation}
\frac{\partial \bar{b}_{n}^{(j)}}{\partial t}
=
-a_{n-1}^{(j)}(1+\kappa\sum_{k=1}^{l}a_{n}^{(k)})
+a_{n}^{(j)}(1+\kappa\sum_{k=1}^{l}a_{n-1}^{(k)}),
\end{equation}
\setcounter{equation}{\value{eqalph}}
It is remarkable that these equations reduce to the 2-dimensional Toda lattice equation:
\setcounter{eqalph}{\value{equation}}
\begin{equation}
\frac{\partial a_{n}}{\partial t}
=
(b_{n+1}-b_{n})a_{n},
\;\;\;\;
\frac{\partial a_{n}}{\partial \bar{t}}
=
(\bar{b}_{n+1}-\bar{b}_{n})a_{n},
\label{tl1}
\end{equation}
and
\begin{equation}
\frac{\partial b_{n}}{\partial \bar{t}}
=
\frac{\partial \bar{b}_{n}}{\partial t}
=
a_{n}-a_{n-1},
\label{tl2}
\end{equation}
\setcounter{equation}{\value{eqalph}}
where
\begin{equation}
a_{n}=1+\sum_{j=1}^{l}a_{n}^{(j)},
\;\;\;\;\;
b_{n}=\sum_{j=1}^{l}b_{n}^{(j)},\;\;\;
\bar{b}_{n}=\sum_{j=1}^{l}\bar{b}_{n}^{(j)}.
\end{equation}
We note that $a_{n}$ and $b_{n}$
are the sum of the amplitudes and the sum of the momenta
over all components.
This means that the
DCNS and CMV equations are variants of the 2-dimensional Toda lattice equation.
However, (\ref{tl1}) and (\ref{tl2}) are not the regular
2-dimensional Toda lattice:
here $t$ and $\bar{t}$, as well as $b_{n}$ and $\bar{b}_{n}$,
are complex conjugates of each other.
The Toda equation has the $N$-soliton solution.
Thus both the DCNS and CMV equations have
solutions corresponding to the solutions of the Toda lattice.
Hereafter we call two equations ``dual'' when
the Toda lattice equation can be constructed from them.
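This correspondence can be checked numerically. The following Python sketch (a minimal test, with arbitrary lattice size, component number and coupling $\kappa$; $\phi_{n}^{(j)}$ and $\bar{\phi}_{n}^{(j)}$ are treated as independent fields, as the complexified times require) evaluates the $t$-flows on a periodic lattice and verifies the Toda relations (\ref{tl1}) and (\ref{tl2}) for $a_{n}$ and $\bar{b}_{n}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, l, kappa = 8, 3, 0.7   # lattice sites, components, coupling (test values)

# phi[j, n] and phib[j, n] are independent complex fields
phi  = rng.normal(size=(l, N)) + 1j * rng.normal(size=(l, N))
phib = rng.normal(size=(l, N)) + 1j * rng.normal(size=(l, N))

up   = lambda f: np.roll(f, -1, axis=-1)   # f[n] -> f[n+1] (periodic lattice)
down = lambda f: np.roll(f,  1, axis=-1)   # f[n] -> f[n-1]

a  = 1 + kappa * np.sum(phi * phib, axis=0)       # a_n: 1 + sum of amplitudes
b  = kappa * np.sum(phi  * down(phib), axis=0)    # b_n: sum of momenta
bb = kappa * np.sum(phib * down(phi),  axis=0)    # bbar_n

# t-flows of phi and phib (the complexified DCNS/CMV flows)
dphi  =  up(phi)    * a - phi
dphib = -down(phib) * a + phib

# induced t-derivatives of a_n and bbar_n, by the chain rule
da  = kappa * np.sum(dphi * phib + phi * dphib, axis=0)
dbb = kappa * np.sum(dphib * down(phi) + phib * down(dphi), axis=0)

assert np.allclose(da,  (up(b) - b) * a)   # da_n/dt    = (b_{n+1} - b_n) a_n
assert np.allclose(dbb, a - down(a))       # dbbar_n/dt = a_n - a_{n-1}
```

The check is purely algebraic: no time integration is needed, since the Toda relations hold pointwise once the lattice flows are imposed.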
Next we consider the coupled case of the bright and dark equations:
\begin{eqnarray}
& &
\frac{\partial \phi_{n}^{(j)}}{\partial t_{1}}
-{\rm i}(\phi_{n-1}^{(j)}+\phi_{n+1}^{(j)})
(1+\kappa \sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2}
-\kappa\sum_{k=l+1}^{m}|\phi_{n}^{(k)}|^2)
+2{\rm i}\phi_{n}^{(j)}=0,
\nonumber \\
& &
j=1,2,\cdots,l
\end{eqnarray}
\begin{eqnarray}
& &
\frac{\partial \phi_{n}^{(j)}}{\partial t_{1}}
+{\rm i}(\phi_{n-1}^{(j)}+\phi_{n+1}^{(j)})
(1+\kappa \sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2}
-\kappa\sum_{k=l+1}^{m}|\phi_{n}^{(k)}|^2)
+2{\rm i}\phi_{n}^{(j)}=0.
\nonumber \\
& &
j=l+1,l+2,\cdots,m
\end{eqnarray}
Among the $m$-components equations,
$l$ equations are bright-type and $(m-l)$ equations are dark-type.
We consider also the ``dual'' equations:
\begin{eqnarray}
& &\frac{\partial \phi_{n}^{(j)}}{\partial t_{2}}
+(\phi_{n-1}^{(j)}-\phi_{n+1}^{(j)})
(1+\kappa \sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2}
-\kappa\sum_{k=l+1}^{m}|\phi_{n}^{(k)}|^2)
=0,
\nonumber \\
& &
j=1,2,\cdots,l
\end{eqnarray}
\begin{eqnarray}
& &
\frac{\partial \phi_{n}^{(j)}}{\partial t_{2}}
+(-\phi_{n-1}^{(j)}+\phi_{n+1}^{(j)})
(1+\kappa \sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2}
-\kappa\sum_{k=l+1}^{m}|\phi_{n}^{(k)}|^2)
=0,
\nonumber \\
& & j=l+1,l+2,\cdots,m
\end{eqnarray}
In the same way, introducing the complex times,
we can cast
these equations into the Toda lattice equations (\ref{tl1}) and (\ref{tl2}).
The only difference is the definition of $a_{n}$:
\begin{equation}
a_{n}=1+\sum_{j=1}^{l}a_{n}^{(j)}
-\sum_{j=l+1}^{m}a_{n}^{(j)}.
\end{equation}
That is,
$a_{n}$ is the sum of the amplitudes,
where the dark-type components enter with a minus sign.
A usual single dark soliton is described
as $A-B{\rm sech}^{2} X$.
Thus, from the viewpoint of the Toda equation,
the dark and bright solitons are the same.
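This can again be checked numerically; in the sketch below (test values only), the bright and dark components evolve with opposite signs in the hopping term, which is our reading of the coupled bright and dark equations in the complex times, and the Toda relation holds with the signed amplitude sum $a_{n}$ but the unsigned momentum sum $b_{n}$:

```python
import numpy as np

rng = np.random.default_rng(4)
N, l, m, kappa = 8, 2, 3, 0.6   # sites, bright components, total components

phi  = rng.normal(size=(m, N)) + 1j * rng.normal(size=(m, N))
phib = rng.normal(size=(m, N)) + 1j * rng.normal(size=(m, N))

up   = lambda f: np.roll(f, -1, axis=-1)
down = lambda f: np.roll(f,  1, axis=-1)

sgn = np.where(np.arange(m) < l, 1.0, -1.0)[:, None]  # +1 bright, -1 dark
H = 1 + kappa * np.sum(sgn * phi * phib, axis=0)      # signed a_n
b = kappa * np.sum(phi * down(phib), axis=0)          # unsigned b_n

# complexified t-flows: the dark components flip the hopping sign
dphi  =  sgn * up(phi)    * H - phi
dphib = -sgn * down(phib) * H + phib

da = kappa * np.sum(sgn * (dphi * phib + phi * dphib), axis=0)
assert np.allclose(da, (up(b) - b) * H)   # Toda relation with signed a_n
```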
\setcounter{equation}{0}
\section{New Integrable Equations}
Here we consider new coupled equations:
\begin{equation}
\frac{\partial \phi_{n}^{(j)}}{\partial t_{1}}
-{\rm i}(\phi_{n-1}^{(j)}+\phi_{n+1}^{(j)})
(1+\kappa \sum_{k, l}g_{kl}\phi_{n}^{(k)}\bar{\phi}_{n}^{(l)})
+2{\rm i}\phi_{n}^{(j)}=0.
\label{n1}
\end{equation}
where
\begin{equation}
g_{ij}=g_{ji},\;\;\;\;
g_{ij}=0 \;\;\;{\rm or} \;\;\;1.
\end{equation}
As in the previous section
we introduce
the ``dual'' equation:
\begin{equation}
\frac{\partial \phi_{n}^{(j)}}{\partial t_{2}}
+(\phi_{n-1}^{(j)}-\phi_{n+1}^{(j)})
(1+\kappa \sum_{k,l}g_{kl}\phi_{n}^{(k)}\bar{\phi}_{n}^{(l)})
=0.
\label{n2}
\end{equation}
Moreover we define new dependent variables:
\begin{equation}
a_{n}^{(ij)}=\kappa\phi_{n}^{(i)}\bar{\phi}_{n}^{(j)},
\;\;\;
b_{n}^{(ij)}=\kappa\phi_{n}^{(i)}\bar{\phi}_{n-1}^{(j)},
\;\;\;
\bar{b}_{n}^{(ij)}=\kappa\bar{\phi}_{n}^{(i)}\phi_{n-1}^{(j)}.
\end{equation}
In the same way introducing the complex times,
we can obtain the Toda lattice equation (\ref{tl1}) and (\ref{tl2})
for the variables:
\begin{equation}
a_{n}=1+\sum_{i,j}g_{ij}a_{n}^{(ij)},
\;\;\;
b_{n}=\sum_{i,j}g_{ij}b_{n}^{(ij)},
\;\;\;
\bar{b}_{n}=\sum_{i,j}g_{ij}\bar{b}_{n}^{(ij)}.
\end{equation}
From these results we can see that (\ref{n1}) and (\ref{n2})
are integrable.
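The reduction can be confirmed numerically as well; the sketch below (a randomly chosen symmetric $0$--$1$ matrix $g$ and test values of $\kappa$; the complexified $t$-flow is taken in the same form as in \S 2, with the $g$-weighted density) checks the Toda relation $\partial a_{n}/\partial t=(b_{n+1}-b_{n})a_{n}$ for (\ref{n1}) and (\ref{n2}):

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, kappa = 10, 4, 0.3            # sites, components, coupling (test values)
g = rng.integers(0, 2, size=(m, m))
g = np.triu(g) + np.triu(g, 1).T    # random symmetric 0/1 matrix, g_ij = g_ji

phi  = rng.normal(size=(m, N)) + 1j * rng.normal(size=(m, N))
phib = rng.normal(size=(m, N)) + 1j * rng.normal(size=(m, N))

up   = lambda f: np.roll(f, -1, axis=-1)
down = lambda f: np.roll(f,  1, axis=-1)

# a_n = 1 + kappa sum_{ij} g_ij phi^(i)_n phib^(j)_n, and b_n with phib shifted
a = 1 + kappa * np.einsum('ij,in,jn->n', g, phi, phib)
b = kappa * np.einsum('ij,in,jn->n', g, phi, down(phib))

# complexified t-flows with the g-weighted density
dphi  =  up(phi)    * a - phi
dphib = -down(phib) * a + phib

da = kappa * (np.einsum('ij,in,jn->n', g, dphi, phib)
              + np.einsum('ij,in,jn->n', g, phi, dphib))
assert np.allclose(da, (up(b) - b) * a)
```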
If we set $g_{ij}=\delta_{ij}$ then
(\ref{n1}) and (\ref{n2}) become the
DCNS and CMV equations respectively.
If we set $g_{ij}=1$ then
(\ref{n1}) and (\ref{n2})
can be reduced
to the (uncoupled) discrete nonlinear Schr\"{o}dinger (DNLS)
and the (uncoupled) modified Volterra (MV) equations.
As examples
we obtain the following new integrable equations:
i)
\begin{equation}
\frac{\partial \phi_{n}^{(j)}}{\partial t_{1}}
-{\rm i}(\phi_{n-1}^{(j)}+\phi_{n+1}^{(j)})
(1+\kappa \sum_{k\neq l}\phi_{n}^{(k)}\bar{\phi}_{n}^{(l)})
+2{\rm i}\phi_{n}^{(j)}=0.
\label{ne1}
\end{equation}
ii)
\begin{eqnarray}
& & \frac{\partial \phi_{n}^{(j)}}{\partial t_{1}}
-{\rm i}(\phi_{n-1}^{(j)}+\phi_{n+1}^{(j)})
(1+\kappa \sum_{k=1}^{l}
\phi_{n}^{(k)}\bar{\phi}_{n}^{(k+1)})
+2{\rm i}\phi_{n}^{(j)}=0,
\nonumber \\
& & \phi_{n}^{(k+l)}=\phi_{n}^{(k)},\;\;\;\;
\bar{\phi}_{n}^{(k+l)}=\bar{\phi}_{n}^{(k)}.
\label{ne2}
\end{eqnarray}
In the cases (i) and (ii),
a soliton in one line must run together
with the solitons in the other lines.
On the other hand, in the case of the DCNS equation (\ref{dcns1}),
the solitons can run independently.
If the difference of the phases between the $j$-th and $k$-th lines
satisfies
$-\pi/2<\theta^{(jk)}<\pi/2$,
the amplitude becomes $a_{n}^{(jk)}<0$ and the
solution is ``dark''.
Equations (\ref{n1}) and (\ref{n2}) may be integrable in the continuum limit.
The discrete Hirota equation reads:\cite{n}
\begin{equation}
\frac{\partial \phi_{n}}{\partial t_{1}}
-{\rm i}\alpha(\phi_{n-1}+\phi_{n+1})
(1+\kappa |\phi_{n}|^{2})
+\beta(\phi_{n-1}-\phi_{n+1})
(1+\kappa |\phi_{n}|^{2})
+2{\rm i}\alpha\phi_{n}=0.
\label{he}
\end{equation}
This equation is a hybrid of
the discrete nonlinear Schr\"{o}dinger (DNLS) equation
and the modified Volterra (MV) equation.
In the continuum limit (\ref{he}) becomes
\begin{equation}
{\rm i}\frac{\partial \phi}{\partial t}
+k_{1} \frac{\partial^{2} \phi}{\partial x^{2}}
+k_{2}|\phi|^{2}\phi
+
{\rm i}[k_{3}\frac{\partial^{3} \phi}{\partial x^{3}}
+k_{4}|\phi|^{2}\frac{\partial \phi }{\partial x}]=0.
\label{he1}
\end{equation}
where the $k_{i}$ are arbitrary parameters.
Equation (\ref{he1}) contains several generalized (continuous) NLS equations.
As pulse lengths become comparable to the
wavelength, the NLS equation is no longer adequate,
since additional effects must be taken into account.
For these cases (\ref{he1}) is useful.\cite{hh}
We consider the coupled version of the discrete
Hirota (DCH) equation:
\begin{eqnarray}
& &
\frac{\partial \phi_{n}^{(j)}}{\partial t_{1}}
-{\rm i}\alpha(\phi_{n-1}^{(j)}+\phi_{n+1}^{(j)})
(1+\kappa \sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2})
+\beta(-\phi_{n-1}^{(j)}+\phi_{n+1}^{(j)})
(1+\kappa \sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2})
\nonumber
\\
& &
+2{\rm i}\alpha\phi_{n}^{(j)}=0.
\label{n3}
\end{eqnarray}
In the same way we introduce the
``dual'' equation:
\begin{eqnarray}
& &
\frac{\partial \phi^{(j)}_{n}}{\partial t_{2}}
-{\rm i}\beta(\phi_{n-1}^{(j)}+\phi_{n+1}^{(j)})
(1+\kappa \sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2})
+\alpha(\phi_{n-1}^{(j)}-\phi_{n+1}^{(j)})
(1+\kappa \sum_{k=1}^{l}|\phi_{n}^{(k)}|^{2})
\nonumber \\
& &
+2{\rm i}\beta\phi_{n}^{(j)}
=0.
\label{n4}
\end{eqnarray}
Notice that the ``dual'' equation also becomes the
DCH equation,
with the parameters $\alpha$ and $\beta$ being exchanged.
We introduce the following new dependent variables:
\begin{equation}
a_{n}^{(j)}=\kappa|\phi_{n}^{(j)}|^{2},
\;\;\;
b_{n}^{(j)}=\kappa(\alpha+\beta {\rm i})\phi_{n}^{(j)}
\bar{\phi}_{n-1}^{(j)},
\;\;\;
\bar{b}_{n}^{(j)}=\kappa(\alpha-\beta {\rm i})\bar{\phi}_{n}^{(j)}
\phi_{n-1}^{(j)}.
\end{equation}
Then we can obtain the Toda lattice equation (\ref{tl1}) and (\ref{tl2})
for the variables:
\begin{equation}
a_{n}=(\alpha^{2}+\beta^{2})(1+\sum_{j=1}^{l}
a_{n}^{(j)}),\;\;\;
b_{n}=\sum_{j=1}^{l}b_{n}^{(j)},
\;\;\;
\bar{b}_{n}=\sum_{j=1}^{l}\bar{b}_{n}^{(j)}.
\end{equation}
From these results, (\ref{n3}) and (\ref{n4})
are seen to be integrable.
These equations are also variants of the
Toda lattice equation.
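As a consistency check, combining (\ref{n3}) and (\ref{n4}) through the complex times as in \S 2 gives the $t$-flows $\partial_{t}\phi_{n}^{(j)}=(\alpha+{\rm i}\beta)(\phi_{n+1}^{(j)}F_{n}-\phi_{n}^{(j)})$ and $\partial_{t}\bar{\phi}_{n}^{(j)}=(\alpha+{\rm i}\beta)(-\bar{\phi}_{n-1}^{(j)}F_{n}+\bar{\phi}_{n}^{(j)})$ with $F_{n}=1+\kappa\sum_{k}|\phi_{n}^{(k)}|^{2}$; the Python sketch below (test values of $\alpha$, $\beta$ and $\kappa$) verifies the Toda relation for the variables $a_{n}$ and $b_{n}$ defined above:

```python
import numpy as np

rng = np.random.default_rng(3)
N, l = 8, 2
alpha, beta, kappa = 0.4, 0.9, 0.5   # test values
w = alpha + 1j * beta                # alpha + i beta = r e^{i theta}

phi  = rng.normal(size=(l, N)) + 1j * rng.normal(size=(l, N))
phib = rng.normal(size=(l, N)) + 1j * rng.normal(size=(l, N))

up   = lambda f: np.roll(f, -1, axis=-1)
down = lambda f: np.roll(f,  1, axis=-1)

F = 1 + kappa * np.sum(phi * phib, axis=0)
a = (alpha**2 + beta**2) * F                        # a_n of the DCH system
b = kappa * w * np.sum(phi * down(phib), axis=0)    # b_n carries alpha + i beta

# complexified t-flows obtained by combining the DCH pair
dphi  = w * ( up(phi)    * F - phi)
dphib = w * (-down(phib) * F + phib)

da = (alpha**2 + beta**2) * kappa * np.sum(dphi * phib + phi * dphib, axis=0)
assert np.allclose(da, (up(b) - b) * a)
```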
In the case that the bright and dark equations
are coupled,
such equations are also integrable as
discussed in the previous section.
Here we write $\alpha+{\rm i}\beta = r {\rm e}^{{\rm i}\theta}$.
If we set
$
\tilde{\phi}_{n}^{(j)}={\rm e}^{{\rm i}\theta n+2{\rm i}(r-\alpha)t_{1}}\phi_{n}^{(j)},
$
then $\tilde{\phi}_{n}^{(j)}$ satisfies the
DCNS equation.
In this sense the DCH equation (\ref{n3}) is the same as
the DCNS equation (\ref{dcns1}).
But in the continuum limit
there are differences.
In the continuum limit (\ref{n3})
becomes generalized coupled NLS equations:
\begin{eqnarray}
& &
{\rm i}\frac{\partial \phi^{(j)}}{\partial t}
+k_{1} \frac{\partial^{2} \phi^{(j)}}{\partial x^{2}}
+k_{2}(\sum_{l}|\phi^{(l)}|^{2})\phi^{(j)}
+
{\rm i}k_{3}\frac{\partial^{3} \phi^{(j)}}{\partial x^{3}}
\nonumber \\
& &
+{\rm i}k_{4}(\sum_{l}|\phi^{(l)}|^{2})\frac{\partial \phi^{(j)} }{\partial x}=0,
\label{he2}
\end{eqnarray}
which contains a nonlinear derivative term.
\setcounter{equation}{0}
\section{Conservation Laws}
The 2-dimensional Toda lattice system
(\ref{tl1}) and (\ref{tl2})
has an infinite number of conserved quantities.
The conserved densities are obtained
from the Lax operator:\cite{m3}
\begin{eqnarray}
D_{1}^{T}&=&b_{n},\;\;\;\bar{D}_{1}^{T}=\bar{b}_{n}
\nonumber \\
D_{2}^{T}&=&\frac{1}{2}b_{n}^{2}+\frac{a_{n}}{1-a_{n}}b_{n}b_{n-1}
,\;\;\;
\bar{D}_{2}^{T}=\frac{1}{2}\bar{b}_{n}^{2}+\frac{a_{n}}{1-a_{n}}
\bar{b}_{n}\bar{b}_{n-1},
\nonumber \\
D_{3}^{T}&=&\frac{1}{3}b_{n}^{3}+\frac{a_{n}}{1-a_{n}}
\frac{a_{n-1}}{1-a_{n-1}}b_{n}b_{n-1}b_{n-2}
\nonumber \\
& &
+\frac{a_{n}}{1-a_{n}}b_{n}b_{n-1}(b_{n}+b_{n-1}),
\nonumber \\
\bar{D}_{3}^{T}&=&\frac{1}{3}\bar{b}_{n}^{3}+\frac{a_{n}}{1-a_{n}}
\frac{a_{n-1}}{1-a_{n-1}}\bar{b}_{n}\bar{b}_{n-1}\bar{b}_{n-2}
\nonumber \\
& &
+\frac{a_{n}}{1-a_{n}}\bar{b}_{n}\bar{b}_{n-1}(
\bar{b}_{n}+\bar{b}_{n-1}).
\label{tc}
\end{eqnarray}
On the other hand
the conserved densities of the DNLS and MV equations
are
obtained from the Ablowitz--Ladik (AL) system:\cite{al}
\begin{eqnarray}
D_{1}^{AL}&=&\phi_{n}\bar{\phi}_{n-1}
,\;\;\;\bar{D}_{1}^{AL}=\bar{\phi}_{n}\phi_{n-1}
\nonumber \\
D_{2}^{AL}&=&\frac{1}{2}(\phi_{n}\bar{\phi}_{n-1})^{2}
+\phi_{n+1}\bar{\phi}_{n-1}(1+|\phi_{n}|^{2}),
\nonumber \\
\bar{D}_{2}^{AL}&=&\frac{1}{2}(\bar{\phi}_{n}\phi_{n-1})^{2}
+\bar{\phi}_{n+1}\phi_{n-1}(1+|\phi_{n}|^{2}),
\nonumber \\
D_{3}^{AL}&=&\frac{1}{3}(\phi_{n}\bar{\phi}_{n-1})^{3}
+\phi_{n+1}\bar{\phi}_{n-2}(1+|\phi_{n}|^{2})(1+|\phi_{n-1}|^{2})
\nonumber \\
& &
+\phi_{n+1}\bar{\phi}_{n-1}(\phi_{n+1}\bar{\phi}_{n}
+\phi_{n}\bar{\phi}_{n-1})(1+|\phi_{n}|^{2}),
\nonumber \\
\bar{D}_{3}^{AL}&=&\frac{1}{3}(\bar{\phi}_{n}\phi_{n-1})^{3}
+\bar{\phi}_{n+1}\phi_{n-2}(1+|\phi_{n}|^{2})(1+|\phi_{n-1}|^{2})
\nonumber \\
& &
+\bar{\phi}_{n+1}\phi_{n-1}(\bar{\phi}_{n+1}\phi_{n}
+\bar{\phi}_{n}\phi_{n-1})(1+|\phi_{n}|^{2}).
\end{eqnarray}
The conserved densities of the Toda equation
agree with those of the AL system.
For the multi-component cases
the number of the conserved densities
is the same as in the one-component case.\cite{h1}
Then, the conserved densities of the Toda equation
agree with those of the multi-component AL systems.
From these results we can obtain the conserved densities for the
DCNS, CMV and
new integrable equations using (\ref{tc}).
\setcounter{equation}{0}
\section{Concluding Remarks}
In this paper we have studied the
discrete coupled nonlinear Schr\"{o}dinger (DCNS) and
coupled modified Volterra (CMV) equations
from the viewpoint of the 2-dimensional Toda equation.
Introducing the complex times and combining the DCNS and CMV
equations, we have obtained the Toda equation.
The DCNS and CMV equations are equivalent
to the Toda system.
This can be seen from the explicit transformations and conservation laws.
From this point of view,
even when the bright and the dark equations are coupled,
the DCNS equation is still integrable.
In the Toda equation
the dark and bright solitons
are equivalent.
Using this method we can present
several new equations.
All these equations are embedded
in the 2-dimensional Toda equation
whose time variables $t$ and $\bar{t}$ are complex conjugates of each other.
In the continuum limit these new equations may be integrable.
\section{Introduction}
\label{sec:intro}
Let $V$ be a finite set of elements.
A {\em set system} on a set $V$ of elements is defined to be
a pair $(V,{\mathcal C})$ of a set $V$ of elements and
a family ${\mathcal C}\subseteq 2^V$,
where
a set in ${\mathcal C}$ is called a {\em component}.
For a subset $X\subseteq V$ in a system $(V,{\mathcal C})$,
a component $Z\in {\mathcal C}$ with $Z\subseteq X$ is called {\em $X$-maximal}
if no other component $W\in {\mathcal C}$ satisfies $Z\subsetneq W\subseteq X$,
and let ${\mathcal C}_{\mathrm{max}}(X)$ denote the family of all $X$-maximal components.
For two subsets $X\subseteq Y\subseteq V$,
let ${\mathcal C}_{\mathrm{max}}(X;Y)$ denote the family of components $C\in {\mathcal C}_{\mathrm{max}}(Y)$
such that $X\subseteq C$.
We call a set function $\rho$ from $2^V$ to the set $\mathbb{R}$ of reals
a {\em volume function} if
$\rho(X)\leq \rho(Y)$ for any subsets $X\subseteq Y\subseteq V$.
A subset $X\subseteq V$ is called {\em $\rho$-positive} if $\rho(X)> 0$.
To discuss the computational complexities for solving a problem in a system,
we assume that a system $(V,{\mathcal C})$ is implicitly given as two oracles
$\mathrm{L}_1$ and $\mathrm{L}_2$ such that
\begin{itemize}
\item[-]
given non-empty subsets $X\subseteq Y\subseteq V$,
$\mathrm{L}_1(X,Y)$ returns a component $Z\in {\mathcal C}_{\mathrm{max}}(X;Y)$
(or $\emptyset$ if no such $Z$ exists)
in $\theta_{\mathrm{1,t}}$ time and
$\theta_{\mathrm{1,s}}$ space; and
\item[-]
given a non-empty subset $Y\subseteq V$,
$\mathrm{L}_2(Y)$ returns ${\mathcal C}_{\mathrm{max}}(Y)$
in $\theta_{\mathrm{2,t}}$ time and
$\theta_{\mathrm{2,s}}$ space.
\end{itemize}
Given a volume function $\rho$,
we assume that whether $\rho(X)> 0$ holds or not can be
tested in $\theta_{\rho,\mathrm{t}}$ time
and $\theta_{\rho,\mathrm{s}}$ space.
We also denote by $\delta(X)$ an upper bound on $|{\mathcal C}_{\mathrm{max}}(X)|$,
where we assume that $\delta$ is a non-decreasing function in the sense that
$\delta(Y)\leq \delta(X)$ holds for any subsets $Y\subseteq X\subseteq V$.
We define an {\em instance} to be a tuple $\mathcal{I}=(V,{\mathcal C},I,\sigma)$ of
a set $V$ of $n\geq 1$ elements, a family ${\mathcal C}\subseteq 2^V$,
a set $I$ of $q\geq 1$ items and a function $\sigma:V\to 2^I$.
Let $\mathcal{I}=(V,{\mathcal C},I,\sigma)$ be an instance.
The common item set $I_\sigma(X)$ over a subset $X\subseteq V$
is defined to be $I_\sigma(X) = \bigcap_{v\in X}\sigma(v)$.
A {\em solution\/} to instance $\mathcal{I}$ is defined
to be a component $X\in {\mathcal C}$
such that
\[\mbox{ every component $Y\in {\mathcal C}$ with $Y\supsetneq X$ satisfies
$I_\sigma(Y)\subsetneq I_\sigma(X)$.
}\]
Let ${\mathcal S}$ denote the family of all solutions to instance $\mathcal{I}$.
%
Our aim is to design an efficient algorithm for enumerating all solutions in ${\mathcal S}$.
We call an enumeration algorithm $\mathcal A$
\begin{itemize}
\item[-] {\em output-polynomial}
if the overall computation time is polynomial with respect to
the input and output size;
\item[-] {\em incremental-polynomial}
if the computation time between the $(i-1)$-st output and
the $i$-th output is bounded by a polynomial with respect to
the input size and $i$; and
\item[-] {\em polynomial-delay} if the delay (i.e., the time between any two consecutive outputs),
preprocessing time and postprocessing time are all bounded by a polynomial
with respect to the input size.
\end{itemize}
In this paper, we design an algorithm
that enumerates all solutions in $\mathcal S$
by traversing a {\em family tree} over the solutions in $\mathcal S$,
where the family tree is a tree structure that represents
a parent-child relationship among solutions.
The following theorem summarizes our main result.
\begin{thm}\label{thm:main}
Let $\mathcal{I}=(V,{\mathcal C},I,\sigma)$ be
an instance on a set system $(V,{\mathcal C})$ with a volume function $\rho$,
where $n=|V|$ and $q=|I|$.
All $\rho$-positive solutions in $\mathcal S$
to the instance $\mathcal{I}$ can be enumerated
in $O\big( (n+q)q\delta(V)\oraclex{1,t}+q\oraclex{2,t}+q\delta(V)\oraclex{\rho,t}+(n^2+nq)q\delta(V) \big)$
delay and
in $O\big(n\oraclex{1,s}+n\oraclex{2,s}+n\oraclex{\rho,s}+(n+q)n\big)$
space.
\end{thm}
The problem is motivated by enumeration of solutions
in an instance $(V,{\mathcal C},I,\sigma)$ such that $(V,{\mathcal C})$ is transitive.
We call a system $(V,{\mathcal C})$ {\em transitive} if
any tuple of components $X,Y,Z\in{\mathcal C}$
with $Z\subseteq X\cap Y$ implies $X\cup Y\in{\mathcal C}$.
For such an instance, we proposed an algorithm in \cite{HN.2020}
that enumerates all solutions such that the delay is bounded by
a polynomial with respect to the input size and the running times of oracles.
The proposed algorithm yields the first polynomial-delay algorithms
for enumerating connectors in an attributed graph~\cite{ASA.2019,HMSN2.2018,HMSN.2019,HN.2019,HN.2020,O.2017,OHNYS.2014,OHNYS.2016,SS.2008,SSF.2010}
and for enumerating all subgraphs with various types of connectivities
such as all $k$-edge/vertex-connected induced subgraphs
and all $k$-edge/vertex-connected spanning subgraphs
in a given undirected/directed graph for a fixed $k$.
It is natural to ask whether the result in \cite{HN.2020}
is extensible to an instance with a general set system.
This paper gives an affirmative answer to the question;
even when we have no assumption on the system $(V,{\mathcal C})$
of a given instance $(V,{\mathcal C},I,\sigma)$,
there is an algorithm that enumerates all solutions
in polynomial-delay with respect to
the input size and the running times of oracles.
The paper is organized as follows.
We prepare notations and terminologies in \secref{prel}.
In \secref{nontransitive}, we present a polynomial-delay algorithm
that enumerates all solutions in an instance $(V,{\mathcal C},I,\sigma)$
such that $(V,{\mathcal C})$ is an arbitrary set system.
We also show that all components are enumerable
in polynomial-delay, using the algorithm.
Finally we conclude the paper in \secref{conc}.
\section{Preliminaries}\label{sec:prel}
Let $\mathbb{R}$ (resp., $\mathbb{R}_+$) denote the set of reals
(resp., non-negative reals).
For a function $f: A\to \mathbb{R}$ on a finite set $A$
and a subset $B\subseteq A$,
we let $f(B)$ denote $\sum_{a\in B}f(a)$.
For two integers $a$ and $b$, let $[a,b]$ denote the set of integers
$i$ with $a\leq i\leq b$.
For a set $A$ with a total order $<$ over the elements in $A$,
we define a total order $\prec$ over the subsets of $A$ as follows.
For two subsets $J,K\subseteq A$,
we denote by $J\prec K$
if the minimum element in $(J\setminus K)\cup(K\setminus J)$ belongs to $J$.
We denote $J\preceq K$ if $J\prec K$ or $J=K$.
Note that $J\preceq K$ holds whenever $J\supseteq K$.
Let $a_{\infty}$ denote a virtual element that is larger than
every element in $A$.
Then $J\prec K$ holds for
$J=\{j_1,j_2,\ldots,j_{|J|}\}$, $j_1<j_2<\cdots<j_{|J|}$ and
$K=\{k_1,k_2,\ldots,k_{|K|}\}$, $k_1<k_2<\cdots<k_{|K|}$,
if and only if
the sequence $(j_1,j_2,\ldots,j_{|J|},j'_{|J|+1},j'_{|J|+2},\ldots, j'_{|A|})$
of length $|A|$ with $j'_{|J|+1}=j'_{|J|+2}=\cdots= j'_{|A|}=a_{\infty}$
is lexicographically smaller than
the sequence $(k_1,k_2,\ldots,k_{|K|},k'_{|K|+1},k'_{|K|+2},\ldots, k'_{|A|})$
of length $|A|$ with $k'_{|K|+1}=k'_{|K|+2}=\cdots=k'_{|A|}=a_{\infty}$.
Hence we see that $\preceq$ is a total order on $2^A$.
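For illustration, the order $\prec$ can be implemented directly from the symmetric-difference definition; a small Python sketch (the function name is ours):

```python
def precedes(J, K):
    """Return True iff J strictly precedes K in the order: the minimum
    element of the symmetric difference of J and K belongs to J."""
    diff = (J - K) | (K - J)          # symmetric difference
    return bool(diff) and min(diff) in J

# supersets come first: J being a proper superset of K implies J < K
assert precedes({1, 2, 3}, {1, 2})
assert precedes({1, 3}, {2, 3})       # min of the difference is 1, in J
assert not precedes({2}, {1, 2})      # min of the difference is 1, in K
assert not precedes({1, 2}, {1, 2})   # equal sets: neither strictly precedes
```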
Suppose that an instance $(V,{\mathcal C},I,\sigma)$ is given.
To facilitate our aim, we introduce a total order over the items in $I$
by representing $I$ as a set $[1,q]=\{1,2,\ldots,q\}$ of integers.
We define subsets $V_{\angleb{0}}\triangleq V$
and $V_{\angleb{i}}\triangleq \{v\in V\mid i\in \sigma(v)\}$
for each item $i\in I$.
For each non-empty subset $J\subseteq I$, define subset
$V_{\angleb{J}}\triangleq \bigcap_{i\in J}V_{\angleb{i}}=\{v\in V\mid J\subseteq \sigma(v)\}$.
For $J=\emptyset$, define $V_{\angleb{J}}\triangleq V$.
For each subset $X\subseteq V$, let
$\min I_\sigma(X)\in [0,q]$ denote the minimum item in
$I_\sigma(X)$, where $\min I_\sigma(X)\triangleq 0$ for $I_\sigma(X)=\emptyset$.
For each $i\in [0,q]$, define a family of solutions in ${\mathcal S}$,
\[{\mathcal S}_i\triangleq \{X\in {\mathcal S}\mid \min I_\sigma(X)=i\}.\]
Note that ${\mathcal S}$ is a disjoint union of ${\mathcal S}_i$, $i\in [0,q]$.
In Section~\ref{sec:trav},
we will design an algorithm that enumerates
all solutions in ${\mathcal S}_k$ for any specified integer $k\in [0,q]$.
\section{Enumerating Solutions}
\label{sec:nontransitive}
For a notational convenience,
let ${\mathcal C}_{\mathrm{max}}(X;i)$ for each item $i\in I_\sigma(X)$ denote
the family ${\mathcal C}_{\mathrm{max}}(X;V_{\angleb{i}})$ of components and
let ${\mathcal C}_{\mathrm{max}}(X;J)$ for each subset $J\subseteq I_\sigma(X)$
denote the family ${\mathcal C}_{\mathrm{max}}(X;V_{\angleb{J}})$ of components.
We can test whether a given component is a solution or not as follows.
\begin{lem} \label{lem:solution-test}
Let $(V,{\mathcal C},I=[1,q],\sigma)$ be an instance,
$C$ be a component in ${\mathcal C}$ and $J=I_\sigma(C)$.
\begin{enumerate}
\item[{\rm (i)}] $C\in {\mathcal S}$ if and only if
$\{C\}={\mathcal C}_{\mathrm{max}}(C; J)$; and
\item[{\rm (ii)}] Whether $C$ is a solution or not can be tested
in $O\big(\oraclex{1,t}+|C|q\big)$ time and in $O\big(\oraclex{1,s}+|C|+q\big)$
space.
\end{enumerate}
\end{lem}
\begin{proof}
(i) Note that $C\in {\mathcal C}$.
By definition, $C\not\in {\mathcal S}$ if and only if
there is a component $C'\in {\mathcal C}$ such that $C\subsetneq C'$
and $I_\sigma(C)=I_\sigma(C')=J$, where
a maximal such component $C'$ belongs to ${\mathcal C}_{\mathrm{max}}(C; J)$.
Hence if no such component $C'$ exists then
${\mathcal C}_{\mathrm{max}}(C; J)=\{C\}$.
Conversely, if ${\mathcal C}_{\mathrm{max}}(C; J)=\{C\}$ then no such component $C'$ exists.
(ii)
Let $Y$ be a subset such that $C\subseteq Y\subseteq V$.
We claim that
${\mathcal C}_{\mathrm{max}}(C;Y)=\{C\}$ holds if and only if
$\mathrm{L}_1(C,Y)$ returns the component $C$.
The necessity is obvious. For the sufficiency,
if there is $X\in{\mathcal C}_{\mathrm{max}}(C;Y)$ such that $X\ne C$,
then $X$ would be a proper superset of $C$, contradicting the $Y$-maximality of $C$.
By (i), to identify whether $C\in {\mathcal S}$ or not,
it suffices to see whether $\mathrm{L}_1(C,V_{\angleb{J}})$ returns $C$.
We can compute $J=I_\sigma(C)$ in $O(|C|q)$ time
and in $O(q)$ space,
and can decide whether the oracle returns $C$
in $O(\oraclex{1,t}+|C|)$ time and
in $O(\oraclex{1,s}+|C|)$ space.
\end{proof}
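To illustrate the test, the following Python sketch realizes the oracle $\mathrm{L}_1$ by brute force over an explicitly listed family ${\mathcal C}$ and applies the criterion of (i). The tiny instance and all names are ours and serve only as an illustration; a brute-force oracle is of course not efficient, but it suffices to exercise the lemma:

```python
def make_L1(components):
    """Brute-force oracle L1(X, Y): return a Y-maximal component Z with
    X <= Z <= Y, or None if no such component exists."""
    def L1(X, Y):
        inside = [Z for Z in components if X <= Z <= Y]
        for Z in inside:
            if not any(Z < W for W in inside):   # Z is Y-maximal
                return Z
        return None
    return L1

def is_solution(C, sigma, V, L1):
    """Criterion of the lemma: C is a solution iff L1(C, V_<J>) returns C,
    where J = I_sigma(C) and V_<J> = {v in V : J is a subset of sigma(v)}."""
    J = set.intersection(*(sigma[v] for v in C))
    VJ = frozenset(v for v in V if J <= sigma[v])
    return L1(C, VJ) == C

# toy instance: V = {1,2,3}, three nested components, items I = {1,2}
V = {1, 2, 3}
components = [frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]
sigma = {1: {1, 2}, 2: {1, 2}, 3: {1}}
L1 = make_L1(components)

assert not is_solution(frozenset({1}), sigma, V, L1)   # absorbed by {1,2}
assert is_solution(frozenset({1, 2}), sigma, V, L1)
assert is_solution(frozenset({1, 2, 3}), sigma, V, L1)
```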
\subsection{Defining Family Tree}\label{sec:defn}
To generate all solutions in ${\mathcal S}$ efficiently,
we use the idea of family tree, where we first introduce
a parent-child relationship among solutions, which defines
a rooted tree (or a set of rooted trees), and
we traverse each tree starting from the root
and generating the children of a solution recursively.
Our tasks to establish such an enumeration algorithm are as follows:
\begin{itemize}
\item[-]
Select some solutions from the set ${\mathcal S}$ of solutions
as the roots, called ``bases;''
\item[-]Define the ``parent'' $\pi(S)\in {\mathcal S}$ of
each non-base solution $S\in {\mathcal S}$,
where the solution $S$ is called a ``child'' of the solution $T=\pi(S)$;
\item[-] Design an algorithm~A that, given a solution $S\in {\mathcal S}$,
returns its parent $\pi(S)$; and
\item[-] Design an algorithm~B that, given a solution $T\in {\mathcal S}$,
generates
a set $\mathcal{X}$ of components $X\in{\mathcal C}$ such that
$\mathcal{X}$ contains all children of $T$.
We can test whether each component $X\in\mathcal{X}$ is a child of $T$
by constructing $\pi(X)$ by algorithm~A and checking if $\pi(X)$
is equal to $T$.
\end{itemize}
Starting from each base, we recursively generate the children of a solution.
The delay of the entire algorithm depends on
the time complexities of algorithms A and B,
where $|\mathcal{X}|$ is bounded from above
by the time complexity of algorithm~B.
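The overall scheme can be summarized by the following generic Python skeleton (all names are ours; \texttt{candidates} and \texttt{parent} stand for algorithms~B and~A, respectively), shown on a toy parent function:

```python
def enumerate_solutions(bases, candidates, parent):
    """Depth-first traversal of the family tree: output every base and,
    recursively, every candidate X certified as a child of the current
    solution T by the check parent(X) == T."""
    stack = list(bases)
    while stack:
        T = stack.pop()
        yield T
        for X in candidates(T):      # algorithm B: a superset of the children
            if parent(X) == T:       # algorithm A: certify that X is a child
                stack.append(X)

# toy family tree on {1,...,7}: parent(x) = x // 2, single base 1;
# candidates(T) deliberately contains a non-child decoy 2*T + 2
parent = lambda x: x // 2
candidates = lambda T: [c for c in (2 * T, 2 * T + 1, 2 * T + 2) if c <= 7]
assert sorted(enumerate_solutions([1], candidates, parent)) == [1, 2, 3, 4, 5, 6, 7]
```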
\subsection{Defining Base}
For each integer $i\in [0,q]$, define a set of components
\[\mbox{
${\mathcal B}_i\triangleq \{X\in {\mathcal C}_{\mathrm{max}}(V_{\angleb{i}})\mid \min I_\sigma(X)=i\}$,}\]
and ${\mathcal B}\triangleq \bigcup_{i\in [0,q]}{\mathcal B}_i$.
We call each component in ${\mathcal B}$ a {\em base}.
\begin{lem} \label{lem:base}
Let $(V,{\mathcal C},I=[1,q],\sigma)$ be an instance.
\begin{enumerate}
\item[{\rm (i)}] For each non-empty set $J\subseteq [1,q]$ or $J=\{0\}$,
it holds that ${\mathcal C}_{\mathrm{max}}(V_{\angleb{J}})\subseteq {\mathcal S}$;
\item[{\rm (ii)}] For each $i\in [0,q]$, any solution $S\in {\mathcal S}_i$
is contained in a base in ${\mathcal B}_i$; and
\item[{\rm (iii)}] ${\mathcal S}_0={\mathcal B}_0$ and ${\mathcal S}_q={\mathcal B}_q$.
\end{enumerate}
\end{lem}
\begin{proof}
(i) Let $X$ be a component in ${\mathcal C}_{\mathrm{max}}(V_{\angleb{J}})$.
Note that $J\subseteq I_\sigma(X)$ holds.
When $J=\{0\}$ (i.e., $V_{\angleb{J}}=V$),
no proper superset of $X$ is a component, and $X$ is a solution.
Consider the case of $\emptyset\neq J\subseteq [1,q]$.
To derive a contradiction, assume that $X$ is not a solution; i.e.,
there is a proper superset $Y$ of $X$ such that $I_\sigma(Y)=I_\sigma(X)$.
Since $\emptyset\neq J\subseteq I_\sigma(X)=I_\sigma(Y)$,
we see that $V_{\angleb{J}}\supseteq Y$.
This, however, contradicts the $V_{\angleb{J}}$-maximality of $X$.
This proves that $X$ is a solution.
(ii) We prove that each solution $S\in {\mathcal S}_i$
is contained in a base in ${\mathcal B}_i$.
Note that $i=\min I_\sigma(S)$ holds.
By definition, it holds that $S\subseteq V_{\angleb{i}}$.
Let $C\in {\mathcal C}_{\mathrm{max}}(S;V_{\angleb{i}})$, which is a solution by (i).
Note that $I_\sigma(S)\supseteq I_\sigma(C)$ holds.
Since $i\in I_\sigma(C)$ for $i\geq 1$
(resp., $I_\sigma(C)=\emptyset$ for $i=0$),
we see that $\min I_\sigma(S)=i=\min I_\sigma(C)$.
This proves that $C$ is a base in ${\mathcal B}_i$.
Therefore $S$ is contained in a base $C\in {\mathcal B}_i$.
(iii)
Let $k\in \{0,q\}$.
We see from (i) that ${\mathcal C}_{\mathrm{max}}(V_{\angleb{k}})\subseteq {\mathcal S}$,
which implies that
${\mathcal B}_k
=\{X\in {\mathcal C}_{\mathrm{max}}(V_{\angleb{k}})\mid \min I_\sigma(X)=k\}
\subseteq \{X\in {\mathcal S}\mid \min I_\sigma(X)=k\} ={\mathcal S}_k$.
We prove that any solution $S\in {\mathcal S}_k$ is a base in ${\mathcal B}_k$.
By (ii), there is a base $X\in {\mathcal B}_k$ such that $S\subseteq X$,
which implies that
$I_\sigma(S)\supseteq I_\sigma(X)$ and $\min I_\sigma(S)\leq \min I_\sigma(X)$.
We see that $ I_\sigma(S)= I_\sigma(X)$, since
$\emptyset=I_\sigma(S)\supseteq I_\sigma(X)$ for $k=0$,
and $q=\min I_\sigma(S)\leq \min I_\sigma(X)\leq q$ for $k=q$.
Hence $S\subsetneq X$ would contradict that $S$ is a solution.
Therefore
$S=X\in {\mathcal B}_k$,
as required.
\end{proof}
Lemma~\ref{lem:base}(iii) tells us that all solutions in ${\mathcal S}_0\cup {\mathcal S}_q$
can be found
by calling the oracle $\mathrm{L}_2(Y)$ for $Y=V_{\angleb{0}}=V$ and
$Y=V_{\angleb{q}}$.
In the following, we consider how to generate
all solutions in ${\mathcal S}_k$ for each item $k\in [1,q-1]$.
\subsection{Defining Parent}
This subsection defines the ``parent'' of a non-base solution.
For two subsets $X,Y\subseteq V$, we denote
$(I_\sigma(X),X)\prec (I_\sigma(Y),Y)$
if
``$I_\sigma(X) \prec I_\sigma(Y)$'' or
``$I_\sigma(X) =I_\sigma(Y)$ and $X\prec Y$''
and let
$(I_\sigma(X),X)\preceq (I_\sigma(Y),Y)$ mean
$(I_\sigma(X),X)\prec (I_\sigma(Y),Y)$ or $X=Y$.
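The order $\prec$ on the underlying sets is defined earlier in the paper; one plausible realization, consistent with how $\prec$ is used on item sets in the proofs below (of two sets, the one containing the smallest differing element comes first), can be sketched in Python — all names here are ours, not part of the formal development.

```python
def precedes(X, Y):
    """X < Y: at the smallest element where X and Y differ, that element is in X.

    Under this order, a set that is larger on a common prefix precedes,
    which matches how the order is applied to item sets in the proofs.
    """
    diff = sorted(set(X) ^ set(Y))  # symmetric difference, smallest first
    return bool(diff) and diff[0] in X

def precedes_pairs(IX, X, IY, Y):
    """(I(X), X) < (I(Y), Y): lexicographic on the item set, ties by the set."""
    if IX != IY:
        return precedes(IX, IY)
    return precedes(X, Y)
```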
Let $X\subseteq V$ be a subset such that $k=\min I_\sigma(X)\in [1,q-1]$.
We call a solution $T\in {\mathcal S}$ a {\em superset solution} of $X$
if $T\supsetneq X$ and $T\in {\mathcal S}_k$.
A superset solution $T$ of $X$ is called {\em minimal}
if no proper subset $Z\subsetneq T$ is a superset solution of $X$.
We call a minimal superset solution $T$ of $X$
{\em the lex-min solution} of $X$ if
$(I_\sigma(T),T)\preceq (I_\sigma(T'), T')$
for all minimal superset solutions $T'$ of $X$.
For each item $k\in [1,q-1]$,
we define the {\em parent} $\pi(S)$ of a non-base solution
$S\in {\mathcal S}_k\setminus {\mathcal B}_k$
to be the lex-min solution of $S$,
and define a {\em child} of a solution $T\in {\mathcal S}_k$
to be a non-base solution $S\in {\mathcal S}_k\setminus {\mathcal B}_k$ such that $\pi(S)=T$.
The next lemma tells us how to find the item set
$I_\sigma(T)$ of the parent $T=\pi(S)$ of a given solution $S$.
\begin{lem} \label{lem:item_minimal}
Let $(V,{\mathcal C},I=[1,q],\sigma)$ be an instance,
$S\in {\mathcal S}_k\setminus {\mathcal B}_k$ be a non-base solution
for some item $k\in [1,q-1]$,
and $T\in {\mathcal S}_k$ denote the lex-min solution of $S$.
Denote $I_\sigma(S)$ by $\{k, i_1,i_2,\ldots,i_p\}$ so that $k<i_1<i_2<\cdots<i_p$.
%
For each integer $j\in [1,p]$,
$i_j\in I_\sigma(T)$ holds if and only if ${\mathcal C}_{\mathrm{max}}(S;J\cup\{i_j\})\neq\{S\}$ holds
for the item set $J=I_\sigma(T)\cap \{k, i_1,i_2,\ldots,i_{j-1}\}$.
\end{lem}
\begin{proof}
By Lemma~\ref{lem:base}(i) and $\min I_\sigma(S)=k$,
we see that ${\mathcal C}_{\mathrm{max}}(S;J\cup\{i_j\}) \subseteq {\mathcal S}_k$
for any integer $j\in [1,p]$. \\
\noindent
{\bf Case~1.} ${\mathcal C}_{\mathrm{max}}(S;J\cup\{i_j\})=\{S\}$:
For any subset $J'\subseteq \{i_{j+1},i_{j+2},\ldots,i_p\}$,
the family ${\mathcal C}_{\mathrm{max}}(S;J\cup\{i_j\}\cup J')$ is equal to $\{S\}$ and cannot
contain any minimal superset solution of $S$.
This implies that $i_j\not\in I_\sigma(T)$.\\
\noindent
{\bf Case~2.} ${\mathcal C}_{\mathrm{max}}(S;J\cup\{i_j\})\neq \{S\}$:
Let $C$ be an arbitrary component in ${\mathcal C}_{\mathrm{max}}(S;J\cup\{i_j\})$.
Then $C$ is a solution by Lemma~\ref{lem:base}(i).
Observe that $k\in J\cup\{i_j\}\subseteq I_\sigma(C)\subseteq I_\sigma(S)$ and
$\min I_\sigma(C)=k$, implying that $C\in {\mathcal S}_k$ is a superset solution of $S$.
Then $C$ contains
a minimal superset solution $T^*\in {\mathcal S}_k$ of $S$, where
$I_\sigma(T^*)\cap[1,i_{j-1}]=I_\sigma(T^*)\cap \{k,i_1,i_2,\ldots,i_{j-1}\}\supseteq
J= I_\sigma(T)\cap \{k,i_1,i_2,\ldots,i_{j-1}\}=I_\sigma(T)\cap[1,i_{j-1}]$
and $i_j \in I_\sigma(T^*)$.
If $I_\sigma(T^*)\cap[1,i_{j-1}]\supsetneq J$ or $i_j\not\in I_\sigma(T)$,
then $I_\sigma(T^*)\prec I_\sigma(T)$ would hold, contradicting that $T$ is
the lex-min solution of $S$.
Hence $I_\sigma(T)\cap[1,i_{j-1}]=J=I_\sigma(T^*)\cap[1,i_{j-1}]$
and $i_j\in I_\sigma(T)$.
\end{proof}
The next lemma tells us how to construct
the parent $T=\pi(S)$ of a given solution $S$.
\begin{lem} \label{lem:vertex-index-minimal}
Let $(V,{\mathcal C},I=[1,q],\sigma)$ be an instance,
$S\in {\mathcal S}_k\setminus {\mathcal B}_k$ be a non-base solution
for some item $k\in [1,q-1]$,
and $T$ denote the lex-min solution of $S$.
Let $J=I_\sigma(T)$.
Let $S'$ be a set such that
$S\subseteq S'\subsetneq T$,
where $V_{\angleb{J}}\setminus S'$ is denoted by
$\{u_1,u_2,\ldots,u_s\}$, $s=|V_{\angleb{J}}|-|S'|$, such that $u_1<u_2<\cdots<u_s$.
Then:
\begin{enumerate}
\item[{\rm (i)}]
$T\in {\mathcal C}_{\mathrm{max}}(S'\cup\{u \};V_{\angleb{J}})$
for any vertex $u \in T\setminus S'$;
\item[{\rm (ii)}]
Every component $C\in {\mathcal C}$ with $S'\subsetneq C\subseteq V_{\angleb{J}}$
satisfies $I_\sigma(C)=J$;
\item[{\rm (iii)}]
There is an integer $r\in [1,s]$ such that
${\mathcal C}_{\mathrm{max}}(S'\cup\{u_j\};V_{\angleb{J}})=\emptyset$
for each $j\in [1,r-1]$ and
all components $C\in{\mathcal C}_{\mathrm{max}}(S'\cup\{u_r\};V_{\angleb{J}})$
satisfy $I_\sigma(C)=J$;
\item[{\rm (iv)}]
For the integer $r$ in {\rm (iii)},
$T\cap \{u_j\mid j\in[1,r] \}=\{u_r\}$ holds; and
\item[{\rm (v)}]
For the integer $r$ in {\rm (iii)},
if $S'\cup\{u_r\}\in {\mathcal S}$ then $T=S'\cup\{u_r\}$ holds.
\end{enumerate}
\end{lem}
\begin{proof}
(i) Since $S'\subsetneq T$, there exists a vertex $u \in T\setminus S'$.
For such a vertex $u $,
$T$ is a component such that
$S'\cup\{u \}\subseteq T\subseteq V_{\angleb{J}}$.
If $T$ is not a $V_{\angleb{J}}$-maximal component, then
there would exist a component $Z\in {\mathcal C}$ with
$T\subsetneq Z\subseteq V_{\angleb{J}}$ and
$J=I_\sigma(T)\supseteq I_\sigma(Z)\supseteq I_\sigma(V_{\angleb{J}})\supseteq J$,
contradicting that $T$ is a solution.
Hence $T\in {\mathcal C}_{\mathrm{max}}(S'\cup\{u \};V_{\angleb{J}})$
for any vertex $u \in T\setminus S'$.
(ii)
Let $C\in {\mathcal C}$ be a component with $S'\subsetneq C\subseteq V_{\angleb{J}}$.
Note that
$I_\sigma(S)\supseteq I_\sigma(S')\supseteq I_\sigma(C)\supseteq
I_\sigma(V_{\angleb{J}})\supseteq J=I_\sigma(T)$
and
$k=\min I_\sigma(S)=\min I_\sigma(T)$.
%
Since $C$ is a component, there is a solution $S_C$ such that
$S_C\supseteq C$ and $I_\sigma(S_C)=I_\sigma(C)$.
Since $S\subseteq S'\subsetneq C\subseteq S_C$,
$S$ and $S_C$ are distinct solutions and there
must be a minimal superset solution $S^*_C\in {\mathcal S}_k$ of $S$ such that
$S\subsetneq S^*_C\subseteq S_C$, where we see that
$I_\sigma(S) \supsetneq I_\sigma(S^*_C)\supseteq I_\sigma(S_C)
=I_\sigma(C)\supseteq I_\sigma(T)$ and
$k=\min I_\sigma(S)=\min I_\sigma(S^*_C) =\min I_\sigma(T)$.
%
If $I_\sigma(C)\supsetneq J$, then
$I_\sigma(S^*_C)\supseteq I_\sigma(S_C)=I_\sigma(C)\supsetneq J=I_\sigma(T)$
implies that
$I_\sigma(S^*_C)\prec I_\sigma(T)$, contradicting that $T$ is the lex-min
solution of $S$. Hence $I_\sigma(C)=J$.
(iii) By (i), there is an integer $r'\in [1,s]$ with
$T\in {\mathcal C}_{\mathrm{max}}(S'\cup\{u_{r'}\};V_{\angleb{J}})$,
and hence some component $C\in{\mathcal C}_{\mathrm{max}}(S'\cup\{u_{r'}\};V_{\angleb{J}})$
satisfies $I_\sigma(C)=I_\sigma(T)=J$.
Let $r$ denote the smallest such index.
Then no component $C\in{\mathcal C}_{\mathrm{max}}(S'\cup\{u_j\};V_{\angleb{J}})$
satisfies $I_\sigma(C)= J$ for any $j\in [1,r-1]$,
and by (ii) this forces ${\mathcal C}_{\mathrm{max}}(S'\cup\{u_j\};V_{\angleb{J}})=\emptyset$
for each $j\in [1,r-1]$.
For such $r$, the statement of (iii) holds.
(iv)
Since no component $C\in{\mathcal C}_{\mathrm{max}}(S'\cup\{u_j\};V_{\angleb{J}})$
satisfies $I_\sigma(C)=J$ for all integers $j\in [1,r-1]$,
no component $T'\supsetneq S'$ such that
$T'\cap \{u_j\mid j\in[1,r-1] \}\neq\emptyset$
can be the lex-min solution $T$.
Since some component $C\in{\mathcal C}_{\mathrm{max}}(S'\cup\{u_r\};V_{\angleb{J}})$
satisfies $I_\sigma(C)=I_\sigma(T)=J$,
there is a component $T'\in {\mathcal C}$ such that $u_r\in T'$ and
$I_\sigma(T')=J=I_\sigma(T)$.
The lex-min solution $T$ satisfies $(I_\sigma(T),T) \preceq (I_\sigma(T'),T')$
for all minimal superset solutions $T'$ of $S$ with $I_\sigma(T')=J$.
Therefore $T$ must contain $u_r$.
(v) By (iv),
$u_r\in T$.
If $S'\cup\{u_r\}\in {\mathcal S}$ then
$S'\cup \{u_r\}$ is a unique minimal superset solution of $S$
such that $T\supseteq S'\cup \{u_r\}\supseteq S$,
implying that $T=S'\cup \{u_r\}$.
\end{proof}
\begin{lem} \label{lem:greedy_minimal}
Let $(V,{\mathcal C},I=[1,q],\sigma)$ be an instance,
$S\in {\mathcal S}_k\setminus {\mathcal B}_k$ be a non-base solution
for some item $k\in [1,q-1]$.
Then {\sc Parent}$(S)$ in \algref{parent_c} correctly delivers
the lex-min solution of $S$
in $O\big((n+q)\oraclex{1,t}+n^2+nq\big)$
time
and in $O\big(\oraclex{1,s}+\oraclex{2,s}+n+q\big)$ space.
\end{lem}
\begin{proof}
Let $T$ denote the lex-min solution of $S$.
The item set $J$ constructed in the first for-loop
(lines \ref{line:parent_former_for} to \ref{line:parent_former_endfor})
satisfies $J=I_\sigma(T)$ by \lemref{item_minimal}.
The second for-loop
(lines \ref{line:parent_latter_for} to \ref{line:parent_latter_endfor})
picks up $u_i\in T\setminus(S\cup Z)$ by
\lemref{vertex-index-minimal}(iv),
and the termination condition (\lineref{parent_if}) is
from \lemref{vertex-index-minimal}(v).
The first for-loop is repeated $p\le q$ times,
where we can decide whether the condition in \lineref{parent_eq} holds
in $O(\oraclex{1,t}+|S|)$ time and in $O(\oraclex{1,s}+|S|)$ space.
The time and space complexities of the first for-loop
are $O(q(\oraclex{1,t}+|S|))$ and $O(\oraclex{1,s}+|S|)$.
We can decide the set $V_{\angleb{J}}$ in $O(nq)$ time and in $O(n+q)$ space.
The second for-loop is repeated $s\le n=|V|$ times.
We can decide whether the condition of \lineref{parent_if} is satisfied
by calling the oracle L$_1(S\cup Z\cup \{u_i\};V_{\angleb{J}})$,
which takes $O(\oraclex{1,t})$ time and $O(\oraclex{1,s})$ space.
When the condition of \lineref{parent_if} is satisfied,
we can decide whether $S\cup Z\in{\mathcal S}$ or not (\lineref{parent_sol})
in $O(\oraclex{1,t}+|S\cup Z|q)$ time
and in $O(\oraclex{1,s}+|S\cup Z|+q)$
space by \lemref{solution-test}(ii).
The time and space complexities of the second for-loop
are $O(n(\oraclex{1,t}+n))$
and $O(\oraclex{1,s}+n)$.
The overall time and space complexities are
$O\big((n+q)\oraclex{1,t}+n^2+nq\big)$
and $O\big(\oraclex{1,s}+\oraclex{2,s}+n+q\big)$.
\end{proof}
\begin{algorithm}[h]
\caption{{\sc Parent}$(S)$:
Finding the lex-min solution of a solution $S$ }
\label{alg:parent_c}
\begin{algorithmic}[1]
\Require An instance $(V,{\mathcal C},I=[1,q],\sigma)$,
an item $k\in [1,q-1]$, and
a non-base solution $S\in {\mathcal S}_k\setminus {\mathcal B}_k$,
where $k= \min I_\sigma(S)$.
\Ensure
The lex-min solution $T\in {\mathcal S}_k$ of $S$.
\State Let $\{k,i_1,i_2,\ldots,i_p\}:=I_\sigma(S)$, where $k<i_1<i_2<\cdots<i_p$;
\State $J:=\{k\}$;
\For {{\bf each} integer $j:=1,2,\ldots,p$}
\label{line:parent_former_for}
\If {${\mathcal C}_{\mathrm{max}}(S;J\cup\{i_j\})\neq\{S\}$ }
\label{line:parent_eq}
\State $J:=J\cup\{i_j\}$
\EndIf
\EndFor; \Comment{$J=I_\sigma(T)$ holds}
\label{line:parent_former_endfor}
\State Let $\{u_1,u_2,\ldots,u_s\}:= V_{\angleb{J}}\setminus S$,
where $u_1<u_2<\cdots<u_s$;
\State $Z:=\emptyset$;
\For{{\bf each} integer $i:=1,2,\dots,s$}
\label{line:parent_latter_for}
\If {${\mathcal C}_{\mathrm{max}}(S\cup Z\cup\{u_i\};V_{\angleb{J}})\neq\emptyset$}
\label{line:parent_if}
\State $Z:=Z\cup\{u_i\}$;
\If{$S\cup Z\in{\mathcal S}$}
\label{line:parent_sol}
\State Output $T:=S\cup Z$ and halt
\EndIf
\EndIf
\EndFor
\label{line:parent_latter_endfor}
\end{algorithmic}
\end{algorithm}
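To sanity-check {\sc Parent}, one can transcribe it directly in Python over a system given as an explicit list of components. Here `cmax` realizes ${\mathcal C}_{\mathrm{max}}$ by brute force (exponential in general, for testing only), ${\mathcal C}_{\mathrm{max}}(S;J)$ is interpreted as the maximal components $C$ with $S\subseteq C\subseteq V_{\angleb{J}}$, and all helper names are ours rather than part of the formal development.

```python
def parent(S, components, sigma, q):
    """Direct transcription of Parent(S) over an explicitly listed system.

    S: a non-base solution (frozenset); components: the family C as a list
    of frozensets; sigma: dict mapping each element to its item set.
    """
    def items(X):  # I_sigma(X); the full item set [1, q] for X empty
        out = set(range(1, q + 1))
        for v in X:
            out &= sigma[v]
        return out

    def v_of(J):  # the restricted ground set V_<J>
        return {v for v in sigma if J <= sigma[v]}

    def cmax(lower, upper):  # maximal components C with lower <= C <= upper
        inside = [C for C in components if lower <= C <= upper]
        return [C for C in inside if not any(C < D for D in inside)]

    def is_solution(X):  # no proper superset component has the same item set
        return X in components and not any(
            X < Y and items(Y) == items(X) for Y in components)

    k = min(items(S))
    # First for-loop: recover the item set J = I_sigma(T) item by item.
    J = {k}
    for i in sorted(i for i in items(S) if i > k):
        # C_max(S; J u {i}) read as the maximal components inside V_<J u {i}>.
        if cmax(S, frozenset(v_of(J | {i}))) != [S]:
            J |= {i}
    # Second for-loop: add vertices of V_<J> \ S in increasing order.
    Z = frozenset()
    for u in sorted(v_of(J) - S):
        if cmax(S | Z | {u}, frozenset(v_of(J))):
            Z |= {u}
            if is_solution(S | Z):
                return S | Z
    return None
```

On a tiny instance with $V=\{1,2,3\}$, all nonempty subsets as components, $q=2$, and item sets $\sigma(1)=\{1\}$, $\sigma(2)=\{1,2\}$, $\sigma(3)=\{2\}$, the only non-base solution in ${\mathcal S}_1$ is $\{2\}$ and its lex-min superset solution is $\{1,2\}$.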
\subsection{Generating Children}
This subsection shows how to construct a family $\mathcal{X}$ of
components for a given solution $T$ so that $\mathcal{X}$ contains
all children of $T$.
\begin{lem} \label{lem:child_candidate}
Let $(V,{\mathcal C},I=[1,q],\sigma)$ be an instance
and $T\in {\mathcal S}_k$ be a solution for some item $k\in [1,q-1]$.
Then:
\begin{enumerate}
\item[{\rm (i)}]
Every child $S$ of $T$
satisfies $[k+1,q]\cap (I_\sigma(S)\setminus I_\sigma(T)) \neq\emptyset$
and is a component in ${\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})$
for any item $j\in[k+1,q]\cap (I_\sigma(S)\setminus I_\sigma(T))$;
\item[{\rm (ii)}]
The family of children $S$ of $T$ is
equal to the disjoint collection of families
$\mathcal{C}_j =
\{ C\in{\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})\mid
k= \min I_\sigma(C), C\in {\mathcal S},
j=\min\{i\mid i\in [k+1,q]\cap (I_\sigma(C)\setminus I_\sigma(T))\},
T=${\sc Parent}$(C)\}$ over all items
$j\in[k+1, q]\setminus I_\sigma(T)$; and
\item[{\rm (iii)}]
The set of all children of $T$ can be constructed in
$O\big((n+q)q\delta(T)\oraclex{1,t}+q\oraclex{2,t}+(n^2+nq)q\delta(T)\big)$ time and
$O(\oraclex{1,s}+\oraclex{2,s}+n+q)$ space.
\end{enumerate}
\end{lem}
\begin{proof}
(i)
Note that $[0,k]\cap I_\sigma(S)=[0,k]\cap I_\sigma(T)=\{k\}$ since $S,T\in {\mathcal S}_k$.
Since $S\subsetneq T$ are both solutions,
$I_\sigma(S)\supsetneq I_\sigma(T)$.
Hence
$[k+1,q]\cap (I_\sigma(S)\setminus I_\sigma(T)) \neq\emptyset$.
Let $j$ be an arbitrary item in $[k+1,q]\cap (I_\sigma(S)\setminus I_\sigma(T))$.
We see $S\subseteq T\cap V_{\angleb{j}}$ since
$S\subseteq T$ and $j\in I_\sigma(S)$.
To show that $S$ is a component in ${\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})$,
suppose that there is a component $C\in{\mathcal C}$ such that
$S\subsetneq C\in{\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})$.
Since $I_\sigma(S)\supseteq I_\sigma(C)\supseteq I_\sigma(T\cap V_{\angleb{j}})\supsetneq I_\sigma(T)$
and $\min I_\sigma(S)=\min I_\sigma(T)=k$,
we see that $\min I_\sigma(C)=k$.
Then $C$ should not be a solution since otherwise
it would be a superset solution of $S$ such that $S\subsetneq C\subsetneq T$,
contradicting that $T$ is a minimal superset solution of $S$.
Since $C$ is not a solution but a component,
there is a solution $C'$ such that $C'\supsetneq C$ and $I_\sigma(C')=I_\sigma(C)\ni k$.
Hence $C'\in{\mathcal S}_k$.
Such a solution $C'$ contains a minimal superset solution $C''$ of $S$
such that $C'\supseteq C''\supsetneq S$ and $I_\sigma(C')\subseteq I_\sigma(C'')\subsetneq I_\sigma(S)$.
Then we have $I_\sigma(S)\supsetneq I_\sigma(C'')\supseteq I_\sigma(C')=I_\sigma(C)\supsetneq I_\sigma(T)$,
and thus $I_\sigma(C'')\prec I_\sigma(T)$ holds,
which contradicts that $T$ is the lex-min solution of $S$.
Therefore, such $C$ does not exist, implying that $S\in{\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})$.
(ii)
By (i), the family $\mathcal{S}_T$ of children of $T$ is
contained in the family of
$(T\cap V_{\angleb{j}})$-maximal components $C\in{\mathcal S}$
over all items $j\in [k+1,q]\setminus I_\sigma(T)$.
Hence $\mathcal{S}_T
=\cup_{j\in [k+1,q]\setminus I_\sigma(T)}\{C\in {\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})
\mid C\in{\mathcal S}, T=${\sc Parent}$(C)\}$.
Note that
if a subset $S\subseteq V$ is a child of $T$, then
$k= \min I_\sigma(S)$, $S\in{\mathcal S}$ and
$S\in {\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})$ for all items
$j\in [k+1,q]\cap (I_\sigma(S)\setminus I_\sigma(T))$.
Hence we see that $\mathcal{S}_T$ is
equal to the disjoint collection of families
$\mathcal{C}_j =
\{ C\in{\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})\mid
k= \min I_\sigma(C), C\in{\mathcal S},
j=\min\{i\mid i\in [k+1,q]\cap (I_\sigma(C)\setminus I_\sigma(T))\},
T=${\sc Parent}$(C)\}$ over all items
$j\in[k+1, q]\setminus I_\sigma(T)$.
(iii)
We show an algorithm to generate all children of $T\in{\mathcal S}_k$
in \algref{children}. The correctness directly follows from (ii).
The outer for-loop (lines \ref{line:children_outer_begin}
to \ref{line:children_outer_end}) is repeated at most $q$ times.
Computing ${\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})$ in \lineref{children_l2}
can be done in $\oraclex{2,t}$ time and in $\oraclex{2,s}$ space.
For each $C\in{\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})$,
the complexity of deciding
whether $C$ satisfies the condition in \lineref{children_if} or not
is dominated by {\sc Parent$(C)$}.
Let $\tau$ denote the time complexity of {\sc Parent$(C)$}.
The time complexity of the entire algorithm is
\[
O\big(q(\oraclex{2,t}+\delta(T)\tau)\big)
=O\big((n+q)q\delta(T)\oraclex{1,t}+q\oraclex{2,t}+(n^2+nq)q\delta(T)\big);
\]
and the space complexity is $O(\oraclex{1,s}+\oraclex{2,s}+n+q)$,
where the computational complexities of {\sc Parent$(C)$}
are from \lemref{greedy_minimal}.
\end{proof}
\begin{algorithm}[h]
\caption{{\sc Children}$(T,k)$: Generating all children}\label{alg:children}
\begin{algorithmic}[1]
\Require An instance $(V,{\mathcal C},I,\sigma)$, an item $k\in [1,q-1]$ and
a solution $T\in {\mathcal S}_k$.
\Ensure All children of $T$, each of which is output whenever it is generated.
\For{{\bf each} item $j\in[k+1, q]\setminus I_\sigma(T)$}
\label{line:children_outer_begin}
\State Compute ${\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})$;
\label{line:children_l2}
\For{{\bf each} component $C\in{\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})$}
\label{line:children_inner_begin}
\If {$k= \min I_\sigma(C)$,
$C\in {\mathcal S}$,
$j=\min\{i\mid i\in [k+1,q]\cap (I_\sigma(C)\setminus I_\sigma(T))\}$ \par
\hskip\algorithmicindent and $T=${\sc Parent}$(C)$
}
\label{line:children_if}
\State Output $C$ as one of the children of $T$
\EndIf
\EndFor
\label{line:children_inner_end}
\EndFor
\label{line:children_outer_end}
\end{algorithmic}
\end{algorithm}
\newpage
\subsection{Traversing Family Tree}
\label{sec:trav}
We are ready to describe an entire algorithm for enumerating
solutions in ${\mathcal S}_k$
for a given integer $k\in [0,q]$.
We first compute the component set ${\mathcal C}_{\mathrm{max}}(V_{\angleb{k}})$.
We next compute the family ${\mathcal B}_k~(\subseteq {\mathcal C}_{\mathrm{max}}(V_{\angleb{k}}))$ of bases
by testing whether $k=\min I_\sigma(T)$ or not for each component $T\in {\mathcal C}_{\mathrm{max}}(V_{\angleb{k}})$.
When $k=0$ or $q$, we are done with ${\mathcal B}_k={\mathcal S}_k$
by Lemma~\ref{lem:base}(iii).
Let $k\in [1,q-1]$.
Suppose that we are given a solution $T\in {\mathcal S}_k$.
We find all the children of $T$ by
{\sc Children}$(T,k)$ in Algorithm~\ref{alg:children}.
By applying Algorithm~\ref{alg:children}
to a newly found child recursively,
we can find all solutions in ${\mathcal S}_k$.
When no child is found for a given solution $T\in {\mathcal S}_k$,
we may need to go up to an ancestor by traversing
recursive calls $O(n)$ times before we generate the next solution.
This would result in a time delay of $O(n\alpha)$, where $\alpha$ denotes
the time complexity required for a single run of
{\sc Children}$(T,k)$.
To improve the delay to $O(\alpha)$, we employ the
{\em alternative output method\/}~\cite{U.2003}, where
we output the children of $T$ after (resp., before)
generating all descendants when the depth of the recursive call to $T$
is an even (resp., odd) integer.
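A minimal sketch of this alternation, abstracting away the child test and the $\rho$ filter (`children(T)` is a hypothetical stand-in for {\sc Children}), shows how a node is emitted before its subtree at odd depth and after it at even depth:

```python
def descendants(T, d, children, out):
    """Alternative output method: pre-order at odd depth, post-order at even
    depth, so that among any few consecutive recursive calls at least one
    solution is emitted."""
    for S in children(T):
        if d % 2 == 1:
            out.append(S)          # emit before the subtree at odd depth
        descendants(S, d + 1, children, out)
        if d % 2 == 0:
            out.append(S)          # emit after the subtree at even depth
```

On the toy tree $0\to\{1,2\}$, $1\to\{3\}$, starting at depth $1$, the output order interleaves parents and children rather than deferring a long chain of outputs.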
Assume that a volume function $\rho: 2^V\to \mathbb{R}$
is given.
An algorithm that enumerates all $\rho$-positive solutions in ${\mathcal S}_k$
is described in Algorithm~\ref{alg:enumalg}
and Algorithm~\ref{alg:enumdesc}.
\begin{algorithm}[h]
\caption{An algorithm to enumerate $\rho$-positive solutions in ${\mathcal S}_k$
for a given $k\in [0,q]$}
\label{alg:enumalg}
\begin{algorithmic}[1]
\Require An instance $(V,{\mathcal C},I=[1,q],\sigma)$,
and an item $k\in [0,q]$
\Ensure The set ${\mathcal S}_k$ of solutions to $(V,{\mathcal C},I,\sigma)$
\State Compute ${\mathcal C}_{\mathrm{max}}(V_{\angleb{k}})$; $d:=1$;
\label{line:enum_vk}
\For{{\bf each} $T\in{\mathcal C}_{\mathrm{max}}(V_{\angleb{k}})$}
\label{line:enum_for}
\If{$k=\min I_\sigma(T)$ (i.e., $T\in {\mathcal B}_k$) and $\rho(T)>0$}
\label{line:enum_if}
\State Output $T$;
\If {$k\in [1,q-1]$}
\State {\sc Descendants}$(T,k,d+1)$
\EndIf
\EndIf
\EndFor
\label{line:enum_endfor}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h]
\caption{{\sc Descendants}$(T,k,d)$: Generating all $\rho$-positive descendant solutions}
\label{alg:enumdesc}
\begin{algorithmic}[1]
\Require An instance $(V,{\mathcal C},I,\sigma)$, $k\in [1,q-1]$,
a solution $T\in {\mathcal S}_k$,
the current depth $d$ of recursive call of {\sc Descendants}, and
a volume function $\rho:2^V\to\mathbb{R}$
\Ensure All $\rho$-positive descendant solutions of $T$ in ${\mathcal S}_k$
\For{{\bf each} item $j\in[k+1, q]\setminus I_\sigma(T)$}
\label{line:desc_for}
\State Compute ${\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})$;
\label{line:desc_max}
\For{{\bf each} component $S\in{\mathcal C}_{\mathrm{max}}(T\cap V_{\angleb{j}})$}
\label{line:desc_inner_for}
\If {$k= \min I_\sigma(S)$,
$j=\min\{i\mid i\in [k+1,q]\cap (I_\sigma(S)\setminus I_\sigma(T))\}$, \par
\hskip\algorithmicindent $T= ${\sc Parent}$(S)$ (i.e., $S$ is a child of $T$), and $\rho(S)>0$ }
\If{$d$ is odd}
\State Output $S$
\EndIf;
\State {\sc Descendants}$(S,k,d+1)$;
\If{$d$ is even}
\State Output $S$
\EndIf
\EndIf
\EndFor
\label{line:desc_inner_endfor}
\EndFor
\label{line:desc_endfor}
\end{algorithmic}
\end{algorithm}
\begin{lem} \label{lem:main_poly}
Let $(V,{\mathcal C},I=[1,q],\sigma)$ be an instance.
For each $k\in [0,q]$, all $\rho$-positive solutions in ${\mathcal S}_k$ can be enumerated
in
$O\big( (n+q)q\delta(V_{\angleb{k}})\oraclex{1,t}+q\oraclex{2,t}+q\delta(V_{\angleb{k}})\oraclex{\rho,t}+(n^2+nq)q\delta(V_{\angleb{k}}) \big)$
delay and
$O\big(n(\oraclex{1,s}+\oraclex{2,s}+\oraclex{\rho,s}+n+q)\big)$
space.
\end{lem}
\begin{proof}
Let $T\in{\mathcal S}_k$ be a solution such that $\rho(T)\leq 0$.
In this case, $\rho(S)\le\rho(T)\leq 0$ holds
for all descendants $S$ of $T$ since $S\subseteq T$.
Then we do not need to make recursive calls for such $T$.
We analyze the time delay.
Let $\alpha$ denote the time complexity required for
a single run of {\sc Children}$(T,k)$.
By \lemref{child_candidate}(iii)
and $\delta(T)\le\delta(V_{\angleb{k}})$,
we have $\alpha=O\big((n+q)q\delta(V_{\angleb{k}})\oraclex{1,t}+q\oraclex{2,t}+(n^2+nq)q\delta(V_{\angleb{k}})\big)$.
In \algref{enumalg} and {\sc Descendants},
we also need to compute $\rho(S)$ for all child candidates $S$.
The complexity is $O(q\delta(V_{\angleb{k}})\theta_{\rho,\mathrm{t}})$
since $\rho(S)$ is called at most $q\delta(V_{\angleb{k}})$ times.
Hence we see that the time complexity of \algref{enumalg}
and {\sc Descendants}
without including recursive calls is $O(\alpha+q\delta(V_{\angleb{k}})\theta_{\rho,\mathrm{t}})$.
From \algref{enumalg} and {\sc Descendants}, we observe: \\
(i) When $d$ is odd, the solution $S$ for any call {\sc Descendants}$(S,k,d+1)$
is output\\
~~ immediately before {\sc Descendants}$(S,k,d+1)$ is executed; and \\
(ii) When $d$ is even, the solution $S$ for any call
{\sc Descendants}$(S,k,d+1)$
is output\\
~~ immediately after {\sc Descendants}$(S,k,d+1)$ is executed. \\
Let $m$ denote the number of all calls of {\sc Descendants} during a whole
execution of \algref{enumalg}.
Let $d_1=1,d_2,\ldots,d_m$ denote the sequence of depths $d$
in each {\sc Descendants}$(S,k,d+1)$ of the $m$ calls.
Note that $d=d_i$ satisfies (i) when $d_{i+1}$ is odd and $d_{i+1}=d_i+1$,
whereas $d=d_i$ satisfies (ii) when $d_{i+1}$ is even and $d_{i+1}=d_i-1$.
Therefore we easily see that during three consecutive calls
with depth $d_i$, $d_{i+1}$ and $d_{i+2}$,
at least one solution will be output.
This implies that the time delay for outputting a solution is
$O(\alpha+q\delta(V_{\angleb{k}})\theta_{\rho,\mathrm{t}})$.
We analyze the space complexity.
Observe that the number of calls {\sc Descendants} whose executions
are not finished
during an execution of \algref{enumalg} is the depth $d$
of the current call {\sc Descendants}$(S,k,d+1)$.
In \algref{enumdesc},
$|T|+d\leq n+1$ holds initially,
and
{\sc Descendants}$(S,k,d+1)$ is called
for a nonempty subset $S\subsetneq T$, where $|S|<|T|$.
Hence $|S|+d\leq n+1$ holds when {\sc Descendants}$(S,k,d+1)$ is called.
Then \algref{enumalg} can be implemented to run
in $O(n(\beta+\theta_{\rho,\mathrm{s}}))$ space, where
$\beta$ denotes the space required for a single run of {\sc Children}$(T,k)$.
We have $\beta=O(\theta_{1,\mathrm{s}}+\theta_{2,\mathrm{s}}+n+q)$
by \lemref{child_candidate}(iii).
Then the overall space complexity is
$O\big(n(\theta_{1,\mathrm{s}}+\theta_{2,\mathrm{s}}+\theta_{\rho,\mathrm{s}}+n+q)\big)$.
\end{proof}
The volume function is introduced to impose a condition on the output solutions.
For example,
when $\rho(X)=|X|-p$ for a constant integer
$p$,
all solutions $X\in{\mathcal S}_k$ with $|X|\ge p+1$ will be output.
In particular, all solutions in ${\mathcal S}_k$ will be output for $p\le0$.
In this case, we have
$\theta_{\rho,\mathrm{t}}=\theta_{\rho,\mathrm{s}}=O(n)$,
and thus the delay is $O\big((n+q)q\delta(V_{\angleb{k}})\oraclex{1,t}+q\oraclex{2,t}+(n^2+nq)q\delta(V_{\angleb{k}})\big)$ and
the space is $O\big(n(\oraclex{1,s}+\oraclex{2,s}+n+q)\big)$.
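The size-threshold volume function above can be written as a one-line factory (a sketch; the names are ours):

```python
def rho_size(p):
    """Volume function rho(X) = |X| - p: the rho-positive sets are exactly
    those with |X| >= p + 1; for p <= 0 every set is rho-positive."""
    return lambda X: len(X) - p
```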
\thmref{main} is immediate from \lemref{main_poly}
since $\delta(V_{\angleb{k}})\le\delta(V)$ holds
by our assumption
that $\delta(Y)\le\delta(X)$ for subsets $Y\subseteq X\subseteq V$.
\subsection{Enumerating Components}\label{sec:enum}
This subsection shows that our algorithm in the previous subsection can enumerate
all components in a given system $(V,{\mathcal C})$ with $n=|V|\geq 1$.
For this, we construct an instance $\mathcal{I}=(V,{\mathcal C},I=[1,n],\varphi)$
as follows.
Denote $V$ by $\{v_1,\dots,v_n\}$.
We set $I=[1,n]$
and define a function $\varphi:V\to 2^I$
to be $\varphi(v_k)\triangleq I\setminus\{k\}$ for each element $v_k\in V$.
For each subset $X\subseteq V$, let $\mathsf{Ind}(X)$ denote the set of indices $i$
of elements $v_i\in X$; i.e., $\mathsf{Ind}(X)=\{i\in [1,n]\mid v_i\in X\}$,
and $I_\varphi(X)\subseteq [1,n]$
denote the common item set over $\varphi(v)$, $v\in X$;
i.e., $I_\varphi(X) = \bigcap_{v\in X}\varphi(v)$.
Observe that $I_\varphi(X)=I\setminus\mathsf{Ind}(X)$.
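This construction is easy to verify computationally; the following sketch (helper names are ours, and $v_k$ is identified with its index $k$, so $\mathsf{Ind}(X)=X$) checks that $I_\varphi(X)=I\setminus\mathsf{Ind}(X)$, which strictly shrinks under proper supersets:

```python
def make_instance(n):
    """Items I = [1, n]; element v_k carries phi(v_k) = I \\ {k}."""
    I = set(range(1, n + 1))
    phi = {k: I - {k} for k in I}

    def common_items(X):
        """I_phi(X): intersection of phi(v) over v in X (all of I for X empty)."""
        out = set(I)
        for v in X:
            out &= phi[v]
        return out

    return I, common_items
```

Because every proper superset of $X$ has a strictly smaller common item set, no superset shares $X$'s item set, which is exactly why every component becomes a solution.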
\begin{lem}
\label{lem:compsol}
Let $(V=\{v_1,\dots,v_n\},{\mathcal C})$ be a system with $n\geq 1$.
The family ${\mathcal C}$ of all components is equal
to the family ${\mathcal S}$ of all solutions
in the instance $(V,{\mathcal C},I=[1,n],\varphi)$.
\end{lem}
\begin{proof} Since any solution $S\in {\mathcal S}$ is a component, it holds that
${\mathcal C}\supseteq {\mathcal S}$.
We prove that ${\mathcal C}\subseteq {\mathcal S}$.
Let $X\in{\mathcal C}$.
For any superset $Y\supsetneq X$,
it holds that $I_{\varphi}(Y)=I\setminus \mathsf{Ind}(Y)\subsetneq I\setminus \mathsf{Ind}(X)=I_{\varphi}(X)$.
The component $X$ is a solution in $(V,{\mathcal C},I,\varphi)$
since no superset of $X$ has the same common item set as $X$.
\end{proof}
Since the family ${\mathcal C}$ of components is equal to the family ${\mathcal S}$ of solutions
to the instance $\mathcal{I}=(V,{\mathcal C},I,\varphi)$ by \lemref{compsol},
we can enumerate all components in $(V,{\mathcal C})$ by
running our algorithm on the instance $\mathcal{I}$.
By $|I|=n$, we have the following corollary to \thmref{main}.
\begin{cor}\label{cor:comp}
Let $(V,{\mathcal C})$ be a system with $n=|V|\geq 1$
and a volume function $\rho$.
All $\rho$-positive components in ${\mathcal C}$ can be enumerated
in $O\big(n^2\delta(V)\oraclex{1,t}+n\oraclex{2,t}+n\delta(V)\oraclex{\rho,t}+n^3\delta(V) \big)$ delay and
$O\big(n\oraclex{1,s}+n\oraclex{2,s}+n\oraclex{\rho,s}+n^2\big)$ space.
\end{cor}
\section{Concluding Remarks}
\label{sec:conc}
In this paper, we have shown that all solutions
in a given instance $(V,{\mathcal C},I,\sigma)$ can be enumerated
in polynomial-delay with respect to the input size
and the running times of the oracles
even when $(V,{\mathcal C})$ is an arbitrary system (\thmref{main}).
As a corollary to the theorem,
we have also shown that all components in $(V,{\mathcal C})$
are enumerable in polynomial-delay (\corref{comp}).
The achievements generalize the result of \cite{HN.2020}
in which $(V,{\mathcal C})$ is restricted to a transitive system.
In our study, we assume that the oracles L$_1$ and L$_2$ are implicitly given. %
When we can implement the oracles so that the running times are polynomial with respect to the input size,
$\delta(V)$ is also polynomially bounded,
and thus we would have a polynomial-delay solution/enumeration algorithm with respect to the input size.
We provided such examples for transitive systems in \cite{HN.2020}.
Among the examples are enumeration of connected induced subgraphs,
$k$-edge/vertex-connected induced subgraphs, and
$k$-edge/vertex-connected spanning subgraphs for a given graph.
Whether a given class of systems admits an efficient
algorithm for enumerating maximal components
is a core research problem that is far from trivial.
For example, maximal independent sets (or maximal cliques) in a graph~\cite{CGMV.2020,LLK.1980,MU.2004,TIAS.1977}
are enumerable in polynomial-delay.
Cohen et al.~\cite{CKS.2008} proposed a general framework
of enumerating maximal subgraphs that satisfy the
hereditary/connected-hereditary property,
which was generalized to the strongly accessible property
by Conte et al.~\cite{CGMV.2019}.
More recently,
Conte and Uno~\cite{CU.2019} proposed proximity search,
a novel framework of polynomial-delay algorithms to enumerate maximal components.
The delay of the proposed algorithm is bounded by $\delta(V)$, an upper bound on $|{\mathcal C}_{\mathrm{max}}(V)|$.
We would rather not use $\delta(V)$ in the time complexity bound
since it can be exponential in the input size.
Our future work is to develop a solution enumeration
algorithm whose delay is polynomially bounded
whenever the oracles run in polynomial time.
For a graph $G=(V,E)$, let ${\mathcal C}$ denote the family of all cliques in $G$,
and consider an instance $(V,{\mathcal C},I,\sigma)$ for arbitrary $I$ and $\sigma$.
If such an algorithm is possible, we would have polynomial-delay algorithms
to enumerate all solutions $S\subseteq V$ such that $S$ induces a clique,
by using existing polynomial-delay maximal clique enumeration algorithms
as subroutines/coroutines.
\section{Appendix A: Spectrum for Nodal Vortex Phase}
In this section, we study the topological vortex Majorana bound states (vMBSs) in topological iron-based superconductors (tFeSCs), whose normal-state band structure contains both a Dirac semimetal phase and a topological insulator phase.
The minimal model capturing the essential physics of tFeSCs is therefore a six-band Hamiltonian, given in Eq.~(1) in the main text.
It is generic for such topological phases of matter; its basis reads
\begin{align}
\Psi_{\mathbf{k}} = \left\{ \vert p_z,\uparrow\rangle, \vert p_z,\downarrow\rangle, \vert d_{xz+iyz},\downarrow\rangle, \vert d_{xz-iyz},\uparrow\rangle, \vert d_{xz+iyz},\uparrow\rangle, \vert d_{xz-iyz},\downarrow\rangle \right\},
\end{align}
which can be rewritten in terms of the z-component of the total angular momentum and the parity of the basis states as,
\begin{align}
\Psi_{\mathbf{k}} = \left\{ \vert p_-, +\tfrac{1}{2}\rangle, \vert p_-, -\tfrac{1}{2}\rangle, \vert d_+, +\tfrac{1}{2}\rangle, \vert d_+, -\tfrac{1}{2}\rangle, \vert d_+, +\tfrac{3}{2}\rangle, \vert d_+, -\tfrac{3}{2}\rangle \right\}.
\end{align}
The normal Hamiltonian reads
\begin{align}\label{eq-six-ham0}
\mathcal{H}_0 = \begin{pmatrix}
M_1(\mathbf{k}) & 0 & A_2k_z & -A_1k_- & A_1k_+ & 0 \\
0 & M_1(\mathbf{k}) & A_1k_+ & A_2k_z & 0 & -A_1k_- \\
A_2k_z & A_1k_- & M_2(\mathbf{k}) & 0 & 0 & D^\ast(\mathbf{k}) \\
-A_1k_+ & A_2k_z & 0 & M_2(\mathbf{k}) & D(\mathbf{k}) & 0 \\
A_1k_- & 0 & 0 & D^\ast(\mathbf{k}) & M_2(\mathbf{k})+\delta_{so} & 0 \\
0 & -A_1k_+ & D(\mathbf{k}) & 0 & 0 & M_2(\mathbf{k})+\delta_{so}
\end{pmatrix},
\end{align}
where $k_\pm = k_x \pm ik_y$, $M_i(\mathbf{k})=M_0^i + M_1^i(k_x^2+k_y^2)+M_2^ik_z^2$ and $D(\mathbf{k})=B_1(k_x^2-k_y^2)-iB_2k_xk_y$.
Here, the $D(\mathbf{k})$ term ensures that the $d_{xz}$ and $d_{yz}$ bands have distinct masses,
leading to two hole pockets in iron-based superconductors.
$\delta_{so}$ is the spin-orbit coupling (SOC) strength, which shifts the 3D Dirac point when the SOC splitting of the $d$-orbital bands is varied.
We will show later that it is the most important parameter for the vortex topology.
To simplify the calculation, the substitutions $k_z\to \sin k_z$ and $k_z^2\to 2(1-\cos k_z)$ will be used.
We adopt the following parameters to reproduce (001) surface spectra similar to those obtained from DFT calculations,
\begin{align}\label{eq-parameter-toy-model}
\begin{split}
&M_0^2 = -2, M_1^2 = -1, M_2^2=1, \\
&M_0^1 = 2, M_1^1 = 1, M_2^1 = -1, \\
&A_1=2, A_2=0.1, B_1=0.5, B_2=-2B_1.
\end{split}
\end{align}
Please note that $\delta_{so}$ is a tuning parameter in this work.
$A_1=0.5$ is used in the main text and $A_1=1,2$ are used in the following discussions.
This model could capture the band topology of real materials, see the band structures in Fig.~(2) in the main text.
For Hamiltonian~\eqref{eq-six-ham0}, the important symmetries are
\begin{align}
C_{4z} = e^{i\tfrac{\pi}{2}J_z}, \quad
\Theta = \begin{pmatrix}
-is_y & 0 & 0 \\
0 & is_y & 0 \\
0 & 0 & -is_y
\end{pmatrix} \mathcal{K}, \quad
\mathcal{P},
\end{align}
where $J_z=\text{diag}[\frac{1}{2},-\frac{1}{2},\frac{1}{2},-\frac{1}{2},\frac{3}{2},-\frac{3}{2}]$ is the z-component of the total angular momentum, $\mathcal{K}$ is complex conjugation and $\mathcal{P}=\text{diag}[-1,-1,1,1,1,1]$ is the spatial inversion operator.
In addition, the system has a mirror symmetry with respect to the $z$ axis, defined as $M_z=i\times\text{diag}[-1,1,1,-1,-1,1]$.
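These symmetries can be verified directly on the matrix representation. The sketch below is our own numerical check, using the parameters of Eq.~\eqref{eq-parameter-toy-model} with $A_1=2$, continuum momenta (the lattice substitution is omitted), and the rotation convention $k_+\to -ik_+$ implied by the $J_z$ phases; it confirms hermiticity, $\mathcal{P}$, $\Theta$, $M_z$ and $C_{4z}$.

```python
import numpy as np

# parameters of Eq. (A4); A1 = 2 is one of the values used in this appendix
M01, M11, M21 = 2.0, 1.0, -1.0
M02, M12, M22 = -2.0, -1.0, 1.0
A1, A2, B1 = 2.0, 0.1, 0.5
B2, dso = -2.0 * B1, 0.5

def h0(kx, ky, kz):
    """Continuum six-band Hamiltonian of Eq. (A3)."""
    kp, km = kx + 1j * ky, kx - 1j * ky
    M1 = M01 + M11 * (kx**2 + ky**2) + M21 * kz**2
    M2 = M02 + M12 * (kx**2 + ky**2) + M22 * kz**2
    D = B1 * (kx**2 - ky**2) - 1j * B2 * kx * ky
    return np.array([
        [M1, 0, A2*kz, -A1*km, A1*kp, 0],
        [0, M1, A1*kp, A2*kz, 0, -A1*km],
        [A2*kz, A1*km, M2, 0, 0, D.conjugate()],
        [-A1*kp, A2*kz, 0, M2, D, 0],
        [A1*km, 0, 0, D.conjugate(), M2 + dso, 0],
        [0, -A1*kp, D, 0, 0, M2 + dso]], dtype=complex)

P = np.diag([-1, -1, 1, 1, 1, 1]).astype(complex)           # inversion
Mz = 1j * np.diag([-1, 1, 1, -1, -1, 1]).astype(complex)    # mirror about z
C4 = np.diag(np.exp(0.5j * np.pi * np.array([0.5, -0.5, 0.5, -0.5, 1.5, -1.5])))
isy = np.array([[0, 1], [-1, 0]], dtype=complex)
UT = np.zeros((6, 6), dtype=complex)                        # Theta = UT * K
UT[0:2, 0:2], UT[2:4, 2:4], UT[4:6, 4:6] = -isy, isy, -isy

kx, ky, kz = 0.31, -0.17, 0.53                              # arbitrary test momentum
H, Hm = h0(kx, ky, kz), h0(-kx, -ky, -kz)
print(np.allclose(H, H.conj().T))                           # hermiticity
print(np.allclose(P @ Hm @ P, H))                           # P H(-k) P = H(k)
print(np.allclose(UT @ H.conj() @ UT.T, Hm))                # time reversal
print(np.allclose(Mz @ H @ Mz.conj().T, h0(kx, ky, -kz)))   # mirror Mz
print(np.allclose(C4 @ H @ C4.conj().T, h0(ky, -kx, kz)))   # C4z: k+ -> -i k+
```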
To study the topological vortex states, we introduce a 1D vortex line with $\pi$-flux inserted along the $z$ axis.
Next, we solve the Bogoliubov–de Gennes (BdG) Hamiltonian as,
\begin{align}\label{eq-bdg-ham}
\mathcal{H}_{BdG} = \begin{pmatrix}
\mathcal{H}_0(\mathbf{k}) & \mathcal{H}_\Delta \\
\mathcal{H}_\Delta^\dagger & -\mathcal{H}_0^\ast(-\mathbf{k})
\end{pmatrix} ,
\end{align}
where we take the Nambu basis $\{ \Psi_{\mathbf{k}}^\dagger, \Psi_{-\mathbf{k}}^T \}$.
As a result, the particle-hole symmetry operator $\Xi$ is defined as,
\begin{align}
\Xi = \gamma_x\mathcal{K},
\end{align}
where $\gamma_x$ is the Pauli matrix acting on the particle-hole subspace and $\mathcal{K}$ is complex conjugation.
Here the normal-state Hamiltonian $\mathcal{H}_0(\mathbf{k})$ is given by Eq.~\eqref{eq-six-ham0} without symmetry-breaking perturbations.
An s-wave pairing is considered in this work,
\begin{align}
\mathcal{H}_\Delta = \Delta_0\begin{pmatrix}
is_y & 0 & 0 \\
0 & -is_y & 0 \\
0 & 0 & is_y
\end{pmatrix}
=\begin{pmatrix}
0 & \Delta_0 & 0 & 0 & 0 & 0 \\
-\Delta_0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -\Delta_0 & 0 & 0 \\
0 & 0 & \Delta_0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \Delta_0 \\
0 & 0 & 0 & 0 & -\Delta_0 & 0
\end{pmatrix} ,
\end{align}
where the pairing profile in real space is given by $\Delta_0\to\Delta_0\tanh(r/\xi_0) e^{i\theta}$.
Since the vortex line is oriented along the $z$ direction, the 3D problem reduces to a set of 2D problems with $k_z$ treated as a parameter.
To solve the 2D BdG Hamiltonian at fixed $k_z$ for the vortex bound states (VBSs), we take a disc geometry with the natural boundary condition.
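Before moving to the disc geometry, the particle-hole symmetry $\Xi \mathcal{H}_{BdG}(\mathbf{k}) \Xi^{-1} = -\mathcal{H}_{BdG}(-\mathbf{k})$ can be verified explicitly. The self-contained check below replaces the normal-state part by a random Hermitian $\mathcal{H}_0(\mathbf{k})$, since this symmetry hinges only on the antisymmetry of the s-wave pairing matrix, not on the details of $\mathcal{H}_0$.

```python
import numpy as np

rng = np.random.default_rng(0)

def herm(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m + m.conj().T

# stand-in normal-state Hamiltonian: any Hermitian H0(k) works for this check
A0, Bx, By, Bz = herm(6), herm(6), herm(6), herm(6)
def h0(kx, ky, kz):
    return A0 + kx * Bx + ky * By + kz * Bz

isy = np.array([[0, 1], [-1, 0]], dtype=complex)
HD = np.zeros((6, 6), dtype=complex)
HD[0:2, 0:2], HD[2:4, 2:4], HD[4:6, 4:6] = isy, -isy, isy  # s-wave pairing, Delta0 = 1

def hbdg(kx, ky, kz):
    H, Hm = h0(kx, ky, kz), h0(-kx, -ky, -kz)
    return np.block([[H, HD], [HD.conj().T, -Hm.conj()]])

tau_x = np.kron(np.array([[0, 1], [1, 0]]), np.eye(6))     # gamma_x in Nambu space
H = hbdg(0.2, -0.4, 0.9)
# Xi = gamma_x K :  Xi H(k) Xi^-1 = -H(-k)
print(np.allclose(tau_x @ H.conj() @ tau_x, -hbdg(-0.2, 0.4, -0.9)))
print(np.allclose(HD.T, -HD))   # pairing matrix is antisymmetric, as required
```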
In the polar coordinate system $(r,\theta)$, the momentum operators $k_\pm = k_x\pm i k_y$ can be expressed as,
\begin{align}
k_{+}=e^{i\theta}\left\lbrack -i\frac{\partial}{\partial r}+\frac{1}{r}\frac{\partial}{\partial \theta} \right\rbrack,
\text{ and } k_{-}=e^{-i\theta}\left\lbrack -i\frac{\partial}{\partial r}-\frac{1}{r}\frac{\partial}{\partial\theta} \right\rbrack,
\end{align}
which satisfy
\begin{align}
k_{+}\left(e^{in\theta}J_n(\alpha r)\right) = i\alpha e^{i(n+1)\theta}J_{n+1}(\alpha r),
\text{ and } k_{-}\left(e^{in\theta}J_n(\alpha r)\right) = -i\alpha e^{i(n-1)\theta}J_{n-1}(\alpha r),
\end{align}
where $n$ is an integer and $J_n$ is the Bessel function of the first kind.
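After carrying out the angular derivative, these ladder relations reduce to the classical Bessel recurrences $J_{n-1}(x) = J_n'(x) + (n/x) J_n(x)$ and $J_{n+1}(x) = (n/x) J_n(x) - J_n'(x)$; a quick numerical confirmation (our own check):

```python
import numpy as np
from scipy.special import jv, jvp

# verify the two recurrences underlying the k_pm ladder action
x = np.linspace(0.1, 20.0, 500)
for n in range(-3, 4):
    lhs_minus = jvp(n, x) + n / x * jv(n, x)   # should equal J_{n-1}
    lhs_plus = n / x * jv(n, x) - jvp(n, x)    # should equal J_{n+1}
    assert np.allclose(lhs_minus, jv(n - 1, x), atol=1e-12)
    assert np.allclose(lhs_plus, jv(n + 1, x), atol=1e-12)
print("Bessel ladder identities verified")
```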
Given that the vortex line has winding number $+1$, the eigenfunctions of the reduced BdG equations for Eq.~\eqref{eq-bdg-ham} take the following general form,
\begin{align}
\mathcal{H}_{BdG}(k_z) &= \oplus_{n\in\mathcal{Z}} \mathcal{H}_{n}(k_z),\\
\mathcal{H}_{n}(k_z) \vert E_j(n,k_z) \rangle &= E_j(n,k_z) \vert E_j(n,k_z) \rangle, \\
\vert E_j(n,k_z) \rangle &= (u_{j,k_z}(n,r,\theta),v_{j,k_z}(n,r,\theta))^T,
\end{align}
which label the j$^{th}$ solution in the $n$-subspace at fixed $k_z$; the electron wave functions $u$ and the hole wave functions $v$ are expressed as,
\begin{align}
u_{j,k_z}(n,r,\theta) &= e^{in\theta} \left(u_1(n,r), u_2(n+1,r)e^{i\theta}, u_3(n,r), u_4(n+1,r)e^{i\theta}, u_5(n-1,r)e^{-i\theta}, u_6(n+2,r)e^{2i\theta} \right), \\
v_{j,k_z}(n,r,\theta) &= e^{in\theta} \left(v_1(n,r), v_2(n-1,r)e^{-i\theta}, v_3(n,r), v_4(n-1,r)e^{-i\theta}, v_5(n+1,r)e^{i\theta}, v_6(n-2,r)e^{-2i\theta} \right),
\end{align}
where the components $u_{i}(n,r)$ and $v_{i}(n,r)$ with $i=1,2,3,4,5,6$ can both be expanded in terms of normalized Bessel functions as,
\begin{align}
\begin{split}
u(n,r) = \sum_{k=1}^{N} c_{k,n} \phi(n,r,\alpha_k), \\
v(n,r) = \sum_{k=1}^{N} c_{k,n}^\prime \phi(n,r,\alpha_k),
\end{split}
\end{align}
where $\phi(n,r,\alpha_k)=\frac{\sqrt{2}}{R J_{n+1}(\alpha_k)}J_n(\alpha_k r/R)$.
Please note that the $n$ used here is the $l_z$ used in the main text.
Here, $c$ and $c^\prime$ are the expansion coefficients, \(\alpha_k\) is the \(k^{\text{th}}\) zero of \(J_n\), and $R$ is the radius of the disc.
In our calculations, $\xi_0=1$ and $R=120$ are used, and the truncation number for the Bessel zeros is $N=140$. In this setting, finite-size effects are weak enough for the low-energy VBSs.
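The normalization above makes the radial basis orthonormal with respect to the measure $r\,\mathrm{d}r$ on $[0,R]$. A small-scale numerical check (with an illustrative radius and truncation of our own choosing, not the production values $R=120$, $N=140$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jn_zeros, jv

# orthonormality of phi(n, r, a_k) = sqrt(2)/(R J_{n+1}(a_k)) J_n(a_k r / R)
n, R, N = 2, 10.0, 5            # small illustrative values
zeros = jn_zeros(n, N)          # first N zeros a_k of J_n

def phi(k, r):
    a = zeros[k]
    return np.sqrt(2.0) / (R * jv(n + 1, a)) * jv(n, a * r / R)

gram = np.array([[quad(lambda r: phi(i, r) * phi(j, r) * r, 0.0, R)[0]
                  for j in range(N)] for i in range(N)])
print(np.allclose(gram, np.eye(N), atol=1e-6))   # basis is orthonormal
```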
We perform two types of numerical calculations:
\begin{itemize}
\item[1.)] To search for the nodal vortex phase. \\
By fixing the chemical potential $\mu$ and $\delta_{so}$, we calculate the vortex-line spectrum as a function of $k_z$ in each $n$-subspace. Typically, the 1D nodal vortex lives in the $\vert n\vert\ge1$ subspaces. The results are shown in Fig.~\ref{fig5}.
\item[2.)] To map out topological phase diagram. \\
Fix $k_z$ at a TR-invariant plane with band inversion (in our six-band model, the band inversion happens at $k_z=\pi$), then calculate the vortex spectrum as a function of $\mu$. A gap closing at $\mu_c$ indicates a topological phase transition.
The results are shown in Fig.~(1) in the main text.
Increasing the velocity of the topological states significantly enlarges the hybrid vortex phase, as shown in Fig.~\ref{fig7}.
\end{itemize}
\begin{figure}[t]
\includegraphics[width=0.85\textwidth]{figure5.pdf}
\caption{The BdG quasi-particle spectrum of the superconducting vortex line in the $n=0,\pm 1, \pm 2,\pm 3$ subspaces. The blue line in (b) shows the 1D gapless vMBSs.}
\label{fig5}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.75\textwidth]{figure7.pdf}
\caption{
The effect of the velocity of the topological states on the vortex topological phase diagram for a minimal tFeSC model: (a) $A_1=2$ and (b) $A_1=1$.
The phase diagram is mapped out as a function of the chemical potential $\mu$ and $\delta_{so}$, the separation energy between the TI gap and the bulk Dirac nodes. It contains four phases: (I) trivial vortex; (II) nodal vortex; (III) hybrid vortex with both 0D end-localized vMBSs (red cone) and 1D gapless channels; (IV) Kitaev vortex with 0D end-localized vMBSs. Similar to Fig.~(1) in the main text (where $A_1=0.5$ is used).
}
\label{fig7}
\end{figure}
\section{Appendix B: Low-energy Projection for Effective Vortex Hamiltonians}
In this section, we derive the low-energy effective Hamiltonians to address the symmetry-breaking effects.
This is the key result of the main text.
\subsection{Low-energy effective Hamiltonian for Kitaev vortex}
Let us first discuss the low-energy states.
In the $n=0$ subspace, we first consider the two lowest eigenstates of the 2D BdG Hamiltonian at $k_z=\pi$,
\begin{align}
\mathcal{H}_{n=0}(k_z=\pi) \vert f_\pm \rangle = \pm m_0 \vert f_\pm \rangle, \text{ with } \Xi \vert f_\pm \rangle = \vert f_\mp\rangle.
\end{align}
Next, we treat the $k_z$-dependent terms of Eq.~\eqref{eq-bdg-ham} expanded around $k_z=\pi$ as perturbations, including the $\mathcal{H}_{n=0}(k_z^2)\propto k_z^2$ term and the $\mathcal{H}_{n=0}(k_z)\propto k_z$ term, which obey
\begin{align}
\{ \Xi , \mathcal{H}_{n=0}(k_z^2)\} =0, \text{ and } [\Xi , \mathcal{H}_{n=0}(k_z)]=0.
\end{align}
Without symmetry-breaking terms, the BdG Hamiltonian also preserves the mirror symmetry $M_z^{BdG}=\text{diag}[M_z,M_z^\ast]$, and we set $M_z^{BdG} \vert f_\pm\rangle = \pm i\vert f_\pm\rangle$. For the perturbation terms,
\begin{align}
[ M_z^{BdG}, \mathcal{H}_{n=0}(k_z^2)] =0, \text{ and } \{M_z^{BdG}, \mathcal{H}_{n=0}(k_z)\}=0.
\end{align}
Thus, the eigenstates simplify to,
\begin{align}
\vert f_+\rangle &=\left(0,u_2(1,r)e^{i\theta}, u_3(0,r),0,0, u_6(2,r)e^{2i\theta},
v_1(0,r),0,0,v_4(-1,r)e^{-i\theta},v_5(1,r)e^{i\theta},0 \right),\\
\vert f_-\rangle &=\left(u_1(0,r),0,0,u_4(1,r)e^{i\theta},u_5(-1,r)e^{-i\theta},0 , 0,v_2(-1,r)e^{-i\theta}, v_3(0,r),0,0, v_6(-2,r)e^{-2i\theta} \right).
\end{align}
Here $u_i=v_i^\ast$ is required by particle-hole symmetry.
We note that all the symmetry constraints have been verified numerically.
According to symmetry constraints of $\Xi $ and $M_z^{BdG}$, we have
\begin{align}
\langle f_+ \vert \mathcal{H}_{n=0}(k_z^2) \vert f_+\rangle &= -\langle f_- \vert \mathcal{H}_{n=0}(k_z^2) \vert f_-\rangle = m_1 k_z^2 , \text{ and } \langle f_\pm \vert \mathcal{H}_{n=0}(k_z^2) \vert f_\mp \rangle=0, \\
\langle f_+ \vert \mathcal{H}_{n=0}(k_z) \vert f_-\rangle &= \langle f_- \vert \mathcal{H}_{n=0}(k_z) \vert f_+\rangle = m_2 k_z, \text{ and } \langle f_\pm \vert \mathcal{H}_{n=0}(k_z) \vert f_\pm \rangle=0,
\end{align}
which leads to,
\begin{align}
\mathcal{H}_{eff,n=0} = (m_0+m_1k_z^2)\sigma_z + m_2 k_z \sigma_x,
\end{align}
which describes the lowest vortex line states in the $n=0$ subspace, resembling the 1D topological Kitaev chain.
As a result, we call it Kitaev vortex in the main text.
When $m_0m_1<0$ and $m_2\neq0$, the vortex line is topological and there exists a single 0D Majorana zero mode (MZM) at each end of the vortex line. As an example, for $\mu=0$ and $\delta_{so}=0.5$, the numerically calculated $m_i$ are
\begin{align}
m_0=0.0178, m_1 = -0.0041, m_2 = 0.0154.
\end{align}
It confirms the topological Kitaev vortex phase discussed in the main text.
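The Kitaev-chain character of $\mathcal{H}_{eff,n=0}$ can be made explicit with a crude lattice regularization ($k_z\to\sin k_z$, $k_z^2\to2(1-\cos k_z)$) on an open chain. The sketch below is our own illustration with rescaled parameters satisfying $m_0m_1<0$ (the fitted $m_i$ above are too small relative to the implicit lattice cutoff for this one-site discretization to invert the band); it exhibits one Majorana zero mode at each end.

```python
import numpy as np

def open_chain(m0, m1, m2, N=80):
    """Open-boundary lattice version of H = (m0 + m1 kz^2) s_z + m2 kz s_x,
    with kz^2 -> 2(1 - cos k) and kz -> sin k (lattice constant = 1)."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    onsite = (m0 + 2 * m1) * sz
    hop = -m1 * sz - 0.5j * m2 * sx        # coefficient of e^{+ik}
    H = np.kron(np.eye(N), onsite)
    for j in range(N - 1):
        H[2*j:2*j+2, 2*j+2:2*j+4] = hop
        H[2*j+2:2*j+4, 2*j:2*j+2] = hop.conj().T
    return H

# illustrative parameters with m0*m1 < 0 and m2 != 0 (topological regime)
E = np.sort(np.abs(np.linalg.eigvalsh(open_chain(1.0, -1.0, 1.0))))
print(E[:3])   # two exponentially small energies (end MZMs), then the bulk gap
```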
\subsection{Low-energy effective Hamiltonian for Nodal vortex}
Next, let us derive the effective Hamiltonian in the $n=\pm1$ subspaces for the nodal vortex phase, whose lowest eigenstates satisfy
\begin{align}
\mathcal{H}_{n=\pm 1}(k_z=\pi) \vert f'_\pm \rangle = \pm m_0 \vert f'_\pm \rangle, \text{ with } \Xi \vert f'_\pm \rangle = \vert f'_\mp\rangle \text{ and } M_z^{BdG} \vert f'_\pm\rangle = \pm i\vert f'_\pm\rangle.
\end{align}
The symmetry constraints on $\vert f'_\pm \rangle$ are expressed as,
\begin{align}
\vert f'_+ \rangle &= e^{i\theta}\left(0,u_2(1,r)e^{i\theta}, u_3(0,r),0,0, u_6(2,r)e^{2i\theta},
v_1(0,r),0,0,v_4(-1,r)e^{-i\theta},v_5(1,r)e^{i\theta},0 \right),\\
\vert f'_- \rangle &= e^{-i\theta}\left(u_1(0,r),0,0,u_4(1,r)e^{i\theta},u_5(-1,r)e^{-i\theta},0 , 0,v_2(-1,r)e^{-i\theta}, v_3(0,r),0,0, v_6(-2,r)e^{-2i\theta} \right).
\end{align}
In the $\{ \vert f'_+ \rangle, \vert f'_- \rangle \}$ subspace, we project the $\mathcal{H}_{n=0}(k_z^2)$ and $\mathcal{H}_{n=0}(k_z)$ terms onto this lowest-energy basis. As a result,
\begin{align}
\langle f'_+ \vert \mathcal{H}_{n=0}(k_z^2) \vert f'_+\rangle &= -\langle f'_- \vert \mathcal{H}_{n=0}(k_z^2) \vert f'_-\rangle = m_1 k_z^2 , \text{ and } \langle f'_\pm \vert \mathcal{H}_{n=0}(k_z^2) \vert f'_\mp \rangle=0, \\
\langle f'_+ \vert \mathcal{H}_{n=0}(k_z) \vert f'_-\rangle &= \langle f'_- \vert \mathcal{H}_{n=0}(k_z) \vert f'_+\rangle = \langle f'_\pm \vert \mathcal{H}_{n=0}(k_z) \vert f'_\pm \rangle=0,
\end{align}
Since the two states differ by $+2$ in total angular momentum, the off-diagonal elements $\langle f'_i \vert \mathcal{H}_{n=0}(k_z) \vert f'_j \rangle$ vanish;
note that $\mathcal{H}_{n=0}(k_z)$ only couples electrons with the same angular momentum, so mathematically the integral over $\theta$ vanishes.
This leads to,
\begin{align}
\mathcal{H}_{eff,n=\pm 1} = (m_0'+m_1'k_z^2)\sigma_z,
\end{align}
which indicates a topological nodal vortex phase under the criterion $m_0'm_1'<0$.
As a result, we call it the nodal vortex in the main text.
\subsection{Perturbations to break $C_{4z}$ down to $C_{2z}$}
Firstly, we focus on the perturbations that break $C_{4z}$ down to $C_{2z}$. Following Ref.~[\onlinecite{qin_prl_2019}], one possibility is to lift the degeneracy of the bulk Dirac cone along the $k_z$ direction by introducing the following term,
\begin{align} \label{eq-ham-c4toc2}
\Delta\mathcal{H}_{C_4\to C_2} = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & iC_1k_z \\
0 & 0 & 0 & 0 & -iC_1k_z & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & iC_1k_z & 0 & 0 & 0 & 0 \\
-iC_1k_z & 0 & 0 & 0 & 0 & 0
\end{pmatrix} ,
\end{align}
which preserves both the TR symmetry $\Theta$ and the inversion symmetry $\mathcal{P}$, while $C_{4z}$ is broken.
Please notice that the coefficient is generally momentum dependent, $C_1 \to C_1 + C_1'k_+^4 + C_1''k_+^8+\cdots$, as dictated by the conservation of angular momentum.
The general symmetry-breaking terms will be discussed in detail later.
As a result,
\begin{align}
\langle f'_+ \vert \Delta\mathcal{H}_{C_4\to C_2}(k_z) \vert f'_-\rangle &= \langle f'_- \vert \Delta\mathcal{H}_{C_4\to C_2}(k_z) \vert f'_+\rangle = m_2' k_z.
\end{align}
Therefore, we have the effective Hamiltonian as,
\begin{align}
\mathcal{H}_{eff,n=\pm 1} = (m_0'+m_1'k_z^2)\sigma_z + m_2' k_z \sigma_x,
\end{align}
which leads to 0D MZMs at the ends of the vortex line.
As a result, the two nodal vortices couple with each other and merge into a single Kitaev vortex.
However, it coexists with the 0D MZMs from the $n=0$ subspace; this is the hybrid vortex in the phase diagram in Fig.~(1) of the main text.
Notably, the pair of 0D MZMs is protected by the $C_{2z}$ rotational symmetry.
\subsection{Perturbations to break $C_{2z}$ down to $C_1$}
Because of $C_{2z}\to C_1$, the low-energy Hamiltonian $\mathcal{H}_{eff,n=\pm 1}$ couples to $\mathcal{H}_{eff,n=0}$, leading to the disappearance of the 0D MZMs. One possible coupling term between the two Kitaev vortices is,
\begin{align}
\langle f_+ \vert \Delta\mathcal{H}_{C_2\to C_1} \vert f'_+\rangle = \langle f'_+ \vert \Delta\mathcal{H}_{C_2\to C_1} \vert f_+\rangle =
\langle f_- \vert \Delta\mathcal{H}_{C_2\to C_1} \vert f'_-\rangle = \langle f'_- \vert \Delta\mathcal{H}_{C_2\to C_1} \vert f_-\rangle = m_3.
\end{align}
Therefore, the coupled effective 4-by-4 Hamiltonian becomes,
\begin{align}
\mathcal{H}_{eff} = \begin{pmatrix}
m_0+m_1k_z^2 & m_2k_z & m_3 & 0 \\
m_2k_z & -(m_0+m_1k_z^2) & 0 & m_3 \\
m_3 & 0 & m_0'+m_1'k_z^2 & m'_2k_z \\
0 & m_3 & m'_2k_z & -(m_0'+m_1'k_z^2)
\end{pmatrix}
\end{align}
which is written in the basis $\{ \vert f_+ \rangle, \vert f_- \rangle, \vert f'_+ \rangle , \vert f'_- \rangle \}$.
As a result, the hybrid vortex becomes trivial.
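This trivialization can be checked numerically by coupling two open Kitaev chains of the kind used above (our own lattice illustration of the $4\times4$ Hamiltonian, with illustrative parameters and a uniform on-site coupling $m_3$):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def chain_blocks(m0, m1, m2):
    # lattice version of (m0 + m1 kz^2) s_z + m2 kz s_x
    # with kz^2 -> 2(1 - cos k) and kz -> sin k
    return (m0 + 2 * m1) * sz, -m1 * sz - 0.5j * m2 * sx

def coupled_chains(p1, p2, m3, N=100):
    """Two open chains with parameters p1, p2, coupled on-site by m3."""
    on1, hop1 = chain_blocks(*p1)
    on2, hop2 = chain_blocks(*p2)
    z2 = np.zeros((2, 2), dtype=complex)
    onsite = np.block([[on1, m3 * np.eye(2)], [m3 * np.eye(2), on2]])
    hop = np.block([[hop1, z2], [z2, hop2]])
    H = np.kron(np.eye(N), onsite)
    for j in range(N - 1):
        H[4*j:4*j+4, 4*j+4:4*j+8] = hop
        H[4*j+4:4*j+8, 4*j:4*j+4] = hop.conj().T
    return H

p1, p2 = (1.0, -1.0, 1.0), (0.7, -1.2, 0.8)   # both chains topological
E0 = np.sort(np.abs(np.linalg.eigvalsh(coupled_chains(p1, p2, 0.0))))
E3 = np.sort(np.abs(np.linalg.eigvalsh(coupled_chains(p1, p2, 0.2))))
print(E0[:4])   # four near-zero end modes when m3 = 0
print(E3[:4])   # all lifted to finite energy once m3 != 0
```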
\subsection{Symmetry breaking effects on the low-energy Density of states}
In this section, we discuss how to experimentally identify the symmetry-breaking effects by studying the low-energy density of states calculated from the effective vortex models.
\begin{itemize}
\item [(1.)] No symmetry breaking terms. The topological hybrid vortex model is described by,\\
\begin{align}
\mathcal{H}_{eff,n=0} &= (m_0+m_1k_z^2)\sigma_z + m_2 k_z \sigma_x,\\
\mathcal{H}_{eff,n=\pm 1} &= (m_0'+m_1'k_z^2)\sigma_z,
\end{align}
\item [(2.)] $C_4$ is broken to $C_2$. The hybrid vortex model becomes,\\
\begin{align}
\mathcal{H}_{eff,n=0} &= (m_0+m_1k_z^2)\sigma_z + m_2 k_z \sigma_x,\\
\mathcal{H}_{eff,n=\pm 1} &= (m_0'+m_1'k_z^2)\sigma_z + m_2' k_z \sigma_x,
\end{align}
\item [(3.)] $C_2$ is broken to $C_1$. The trivialized hybrid vortex could be,\\
\begin{align}
\mathcal{H}_{eff,n=0\oplus n=\pm 1} = \begin{pmatrix}
m_0+m_1k_z^2 & m_2k_z & m_3 & 0 \\
m_2k_z & -(m_0+m_1k_z^2) & 0 & m_3 \\
m_3 & 0 & m_0'+m_1'k_z^2 & m'_2k_z \\
0 & m_3 & m'_2k_z & -(m_0'+m_1'k_z^2)
\end{pmatrix}
\end{align}
\end{itemize}
A general picture of the symmetry-breaking effects on the vortex topology is illustrated in Fig.~(4) of the main text.
The zero-bias peak disappears due to the hybridization of the two 0D MZMs, as shown in Fig.~\ref{fig6}.
\begin{figure}[t]
\includegraphics[width=0.8\textwidth]{figure6.pdf}
\caption{The LDOS near the vortex core in the trivial vortex phase of tFeSCs. As the hybridization $m_3$ increases, the trivial gap is enlarged.
}
\label{fig6}
\end{figure}
\section{Appendix C: Analytical Results on the Projected TI model with Infinite $\delta_{so}$}
In this section, let us check the TI limit of the six-band model in Eq.~\eqref{eq-six-ham0} by taking $\delta_{so}\to\infty$.
In this limit the bulk Dirac cone is far away from the TI surface Dirac cone, so that we can eliminate the high-energy bands $\vert d_+, +\tfrac{3}{2}\rangle, \vert d_+, -\tfrac{3}{2}\rangle$.
Therefore, the six-band model is reduced to a four-band model which describes the topological insulator.
The vortex phase transition was studied by Hosur \textit{et al.} in Ref.~[\onlinecite{hosur_prl_2011}], who found that the critical chemical potential for topological vMBSs satisfies,
\begin{align}
M_1(k_\parallel,k_z=\pi) = 0 \; \Rightarrow k_\parallel = \sqrt{-(M_0^1+4M_2^1)/M_1^1},
\end{align}
which indicates that $\mu_c^+ = -\mu_c^- = A_1k_\parallel = A_1 \sqrt{-(M_0^1+4M_2^1)/M_1^1} $.
Here $k_\parallel = \sqrt{k_x^2+k_y^2}$ is the in-plane momentum.
The analytical result is consistent with the numerical calculations in Refs.~[\onlinecite{hosur_prl_2011,li_scpma_2019}].
Please note that these results hold only in the $\delta_{so}\to\infty$ limit of the six-band model.
Numerically, we find that the critical chemical potential $\mu_c^-$ varies rapidly and acquires a large negative value for small $\delta_{so}$. We use perturbation theory to provide a semi-quantitative understanding of this behavior.
Let us then analyze the case of finite but sufficiently large $\delta_{so}$ by perturbation theory,
treating the high-energy bands $\vert d_+, +\tfrac{3}{2}\rangle, \vert d_+, -\tfrac{3}{2}\rangle$ as perturbations,
\begin{align}
\mathcal{H}_0 &= \begin{pmatrix}
H_{TI} & V \\
V^\dagger & H_p
\end{pmatrix}, \\
H_{TI} &= \begin{pmatrix}
M_1(\mathbf{k}) & 0 & A_2k_z & -A_1k_- \\
0 & M_1(\mathbf{k}) & A_1k_+ & A_2k_z \\
A_2k_z & A_1k_- & M_2(\mathbf{k}) & 0 \\
-A_1k_+ & A_2k_z & 0 & M_2(\mathbf{k})
\end{pmatrix} ,
V &= \begin{pmatrix}
A_1k_+ & 0 \\
0 & -A_1k_- \\
0 & D^\ast(\mathbf{k}) \\
D(\mathbf{k}) & 0 \\
\end{pmatrix},
H_p = \begin{pmatrix}
M_2(\mathbf{k})+\delta_{so} & 0 \\
0 & M_2(\mathbf{k})+\delta_{so}
\end{pmatrix}.
\end{align}
The projected four-by-four effective TI Hamiltonian $H_{eff} = H_{TI} - V H_p^{-1} V^\dagger$ reads
\begin{align}
H_{eff} =
\begin{pmatrix}
M_1(\mathbf{k}) & 0 & A_2k_z & -A_1k_- \\
0 & M_1(\mathbf{k}) & A_1k_+ & A_2k_z \\
A_2k_z & A_1k_- & M_2(\mathbf{k}) & 0 \\
-A_1k_+ & A_2k_z & 0 & M_2(\mathbf{k})
\end{pmatrix} - \frac{1}{M_2(\mathbf{k})+\delta_{so}}
\begin{pmatrix}
A_1^2k_\parallel^2 & 0 & 0 & A_1k_+D_k^\ast \\
0 & A_1^2k_\parallel^2 & -A_1k_-D_k & 0 \\
0 & -A_1k_+D_k^\ast & D_k^\ast D_k & 0 \\
A_1k_-D_k & 0 & 0 & D_k^\ast D_k
\end{pmatrix},
\end{align}
where $k_\pm = k_x \pm i k_y$.
Then we make the following approximation around the Fermi energy
\begin{align}
\frac{1}{M_2(\mathbf{k})+\delta_{so}} \sim \frac{1}{\delta_{so}}.
\end{align}
Hereafter, we keep only terms up to order $k_\parallel^2$, so the $D_k$ terms are dropped.
We can then apply the analytical criterion derived by Hosur \textit{et al.} for the topological vortex phase transition to $H_{eff}$; note that $M_2(\mathbf{k})=-M_1(\mathbf{k})$ for the parameters adopted in Eq.~\eqref{eq-parameter-toy-model}, so that
\begin{align}
H_{eff} = - \frac{A_1^2}{2\delta_{so}}k_\parallel^2 +
\begin{pmatrix}
M_1'(\mathbf{k}) & 0 & A_2k_z & -A_1k_- \\
0 & M_1'(\mathbf{k}) & A_1k_+ & A_2k_z \\
A_2k_z & A_1k_- & -M_1'(\mathbf{k}) & 0 \\
-A_1k_+ & A_2k_z & 0 & -M_1'(\mathbf{k})
\end{pmatrix},
\end{align}
where $M_1'(\mathbf{k}) = M_1(\mathbf{k}) - \frac{A_1^2}{2\delta_{so}}k_\parallel^2 = (M_0^1 +4M_2^1) + (M_1^1- \frac{A_1^2}{2\delta_{so}})k_\parallel^2 $.
Setting $M_1'(\mathbf{k})=0$ gives $k_\parallel = \sqrt{-\frac{M_0^1 +4M_2^1}{M_1^1- \frac{A_1^2}{2\delta_{so}}} }$.
The critical chemical potential $\mu_{eff,c}^-$ is then given by,
\begin{align}
\mu_{eff,c}^- = - \frac{A_1^2}{2\delta_{so}}\left( -\frac{M_0^1 +4M_2^1}{M_1^1- \frac{A_1^2}{2\delta_{so}}} \right) - A_1\sqrt{-\frac{M_0^1 +4M_2^1}{M_1^1- \frac{A_1^2}{2\delta_{so}}} }.
\end{align}
Now let us decrease $\delta_{so}$ from infinity to a finite value (assuming $\delta_{so}>0$ for simplicity), while still requiring $ M_1^1- \frac{A_1^2}{2\delta_{so}}>0$ to preserve the validity of the perturbation theory. Therefore, we have
\begin{align}
\begin{split}
\delta_{so} \text{ decreases} \quad &\Rightarrow \quad M_1^1- \frac{A_1^2}{2\delta_{so}} \text{ decreases} \\
& \Rightarrow \quad \sqrt{-\frac{M_0^1 +4M_2^1}{M_1^1- \frac{A_1^2}{2\delta_{so}}} } \text{ decreases} \\
& \Rightarrow \quad \mu_{eff,c}^- \text{ decreases} .
\end{split}
\end{align}
The above analysis explains why $\mu_{eff,c}^-$ becomes large and negative for small $\delta_{so}$.
For a simple comparison,
\begin{align}
\begin{cases}
\delta_{so}\to \infty \text{(TI limit): } \mu_{eff,c}^- \to -2.82843, \\
\delta_{so}=2.5 \text{(TI+DSM): } \mu_{eff,c}^- = -10.32456,
\end{cases}
\end{align}
which shows the significant effect of $\delta_{so}$ on $\mu_c^-$.
This clearly explains the dominance of the hybrid vortex in the superconducting vortex-line phase diagram of tFeSCs, shown in Fig.~(1) in the main text.
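The monotonic trend can be reproduced directly from the closed-form expression for $\mu_{eff,c}^-$ (a small numerical check of ours, using the parameter set of Eq.~\eqref{eq-parameter-toy-model} with $A_1=2$):

```python
import numpy as np

# parameters of Eq. (A4), with A1 = 2
M01, M11, M21, A1 = 2.0, 1.0, -1.0, 2.0

def mu_c_minus(dso):
    """mu_eff,c^- of the projected TI model at finite delta_so."""
    x = A1**2 / (2.0 * dso)          # strength of the band renormalization
    kpar = np.sqrt(-(M01 + 4.0 * M21) / (M11 - x))
    return -x * kpar**2 - A1 * kpar

# TI limit: mu_c^- -> -A1 * sqrt(-(M01 + 4 M21)/M11) = -2 sqrt(2)
print(round(mu_c_minus(1e9), 5))                              # -> -2.82843
# decreasing delta_so pushes mu_c^- to larger negative values
print(mu_c_minus(5.0) < mu_c_minus(20.0) < mu_c_minus(1e9))   # -> True
```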
\section{Appendix D: Vortex Topology in CaKFe$_4$As$_4$}
In this appendix, we discuss the vortex topological phase diagram for CaKFe$_4$As$_4$.
As pointed out in Ref.~\cite{liu_nc_2020}, CaKFe$_4$As$_4$ features an interesting bilayer Fe-As structure and manifests as an alternate stacking of CaFe$_2$As$_2$ and KFe$_2$As$_2$. Such a bilayer structure leads to a Brillouin-zone folding along $k_z$ when compared to its parent compounds Ca(K)Fe$_2$As$_2$, and thereby enables two copies of TI and DSM bands. A schematic sketch of the normal-state band structure of CaKFe$_4$As$_4$ is shown in Fig.~\ref{fig:CaKFeAs}(a), following the DFT+DMFT calculation in Ref.~\cite{liu_nc_2020}. Note that TI \#2 and DP \#2 are essentially duplicates of TI \#1 and DP \#1, thanks to the Brillouin-zone folding, where ``DP'' is short for bulk Dirac point.
The key to map out the vortex phase diagram lies in the identification of phase boundaries. As we have discussed in Fig.~3 in the main text, each set of TI or DSM bands will contribute to a pair of phase boundaries. As a result, the vortex phase diagram for CaKFe$_4$As$_4$ necessarily consists of 8 critical chemical potentials as the phase boundaries, which we denote as $\mu_{\xi,\pm}$ and $\xi\in\{\text{TI\#1,\ DP\#1,\ TI\#2,\ DP\#2}\}$. Therefore, the phase diagram is completely determined by the energy sequence of all eight $\mu_{\xi,\pm}$s.
Notably, such a sequence sensitively depends on the competition among the following energy scales:
\begin{itemize}
\item $\delta_{so}$: the energy splitting between TI \#1 and DSM \#1 (or equivalently TI \#2 and DSM \#2);
\item $\delta_{t}$: the energy splitting between TI \#1 and TI \#2;
\item $\delta_{\mu}$: the energy difference between $\mu_{\xi,+}$ and $\mu_{\xi,-}$.
\end{itemize}
Practically, it is technically difficult to obtain accurate values of the above quantities, especially due to the strong electron correlations in CaKFe$_4$As$_4$ and the lack of experimental data. Nonetheless, we can make a rough estimate based on the existing DFT+DMFT calculations and ARPES data. We find that
\begin{equation}
\delta_{so}\sim \delta_{t} \sim 50\text{ meV},\ \delta_{\mu}\geq 20\text{ meV}.
\end{equation}
Values of $\delta_{so}$ for other tFeSC candidates can be found in Table~\ref{soc}. Here the lower bound for $\delta_{\mu}$ is estimated from the observation that a vMBS signal exists in CaKFe$_4$As$_4$, even though the Fermi level is found experimentally to be $20$ meV below the surface Dirac point. We emphasize that a concrete prediction of $\delta_{\mu}$ would require a first-principles-based vortex spectrum calculation with correlation effects carefully included, which is beyond the scope of our work. Nonetheless, based on the large $\delta_{\mu}$ found in our six-band minimal model (see Fig.~1 in the main text) and the bandwidth of the TI bands in CaKFe$_4$As$_4$, we expect $\delta_{\mu}$ to be much greater than 20 meV in the actual material.
Assuming $\delta_{\mu}>\delta_{so}\sim \delta_{t}$, we schematically show in Fig.~\ref{fig:CaKFeAs}(b) a possible vortex topological phase diagram for CaKFe$_4$As$_4$, which contains 7 topologically distinct vortex phases. Compared with the phase diagram in the main text, we now have several new types of hybrid topological vortex states, termed ``hybrid$_{[m,n]}$ vortex'', each of which is essentially a superposition of $m$ Kitaev vortices and $n$ nodal vortices. In this notation, the hybrid vortex in the main text is by definition a hybrid$_{[1,1]}$ vortex. In Fig.~\ref{fig:CaKFeAs}(c), we list the number of vMBSs for each vortex state when the $C_4$-breaking effect is considered. Since the Fermi level lies below the surface Dirac point of TI \#1, the most probable Majorana-carrying vortex state for CaKFe$_4$As$_4$ is the Kitaev vortex phase, as indicated in Figs.~\ref{fig:CaKFeAs}(b) and (c). Remarkably, if the Fermi level can be continuously lifted by electron doping, one expects to see an interesting oscillation of the vMBS number as a function of $E_F$.
\begin{table}[bt]
\caption{\label{soc} Effective model related parameters from theoretical calculations and experimental measurements for typical tFeSCs. $\delta^{T/E}_{so}$ denotes the theoretical/experimental splitting of $d_{xz/yz}$ band owing to spin-orbit coupling. }
\begin{ruledtabular}
\begin{tabular}{cccc }
Material & $\delta^T_{so}$ (meV) & $\delta^E_{so}$ (meV) & Vortex Phase \\
\colrule
FeTe$_{0.5}$Se$_{0.5}$ & 70 & 35 & Kitaev \\
LiFeAs & 45 & 10.7 & Hybrid \\
CaKFe$_4$As$_4$ & 50 & N/A & Kitaev \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure*}[t]
\includegraphics[width=0.98\textwidth]{CaKFeAs.pdf}
\caption{A schematic plot for CaKFe$_4$As$_4$ band structure is shown in (a), where both bulk Dirac nodes and TI gaps are highlighted. A possible vortex topological phase diagram for CaKFe$_4$As$_4$ is shown in (b). The corresponding vMBS number for each vortex phase is shown in (c), where $C_4$ breaking effect has been taken into account.}
\label{fig:CaKFeAs}
\end{figure*}
\end{document}
\section{Methods}
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.75\columnwidth]{Schema_YoungLap.pdf}
\caption{Schematic of a pendant drop defining the coordinates used for the numerical integration.
}
\label{fg:Schema_YoungLap}
\end{center}
\end{figure}
\noindent\textbf{Experiments.} Silicone oil v1000 (Gelest DMS-T31) dyed in black was used in our experiments. The oil-dye viscosity $\eta=1.13$ Pa.s was measured in a rheometer (Anton Paar MCR 301) at room temperature $T=21^\circ$C. To prepare a uniform coating of thickness $h_0$, the edges of a rectangular flat glass plate were first covered with tape over a width of $\approx 1$ cm. Oil was then spin-coated (Laurell WS-650Mz-23NPPB) on the plate and the tape subsequently peeled off to remove the edge bead that forms during spin-coating. The plate was then weighed to determine the film thickness $h_0=m/(\rho S)$, with $m$ the film mass, $\rho=971$ kg/m$^3$ the oil density, and $S$ the surface area not covered by the tape (measured with a camera). The experimental uncertainty on the measured thickness was found to be $\Delta h_0\approx \pm 6$ $\mu$m. Most of the experiments were done with three spin-coating speeds leading to the following film thicknesses $h_0\approx \{55,89,110\}$ $\mu$m.
The coated plate is then attached upside-down with a magnet to an arm mounted on a rotation stage, which has been pre-leveled with a bubble level. Using a micropipette, a drop of oil was injected on the plate to form a drop with an initial amplitude in the range $0.69<A_0/\ell_c<1.46$. The arm is then rotated to the desired angle $\alpha$. The rotation stage being precise, the uncertainty on the angle $\alpha$ is mostly determined by the leveling step. To gauge it, we recorded the motion of a drop at `zero' angle for each set of experiments and fitted its dynamics with the numerics. The fitted value of $\lvert \alpha_0\rvert = 0.15- 0.3^\circ$ was then used to correct the zero angle for the following experiments of the set.
The drop dynamics were recorded from the side and the bottom with two cameras. The drop amplitude was measured from the side view, while the drop position was measured from the bottom view, since drops at low angles can move out of the plane of the side view. All image analysis was performed with ImageJ and/or Matlab.\\
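For reference, the quantities above follow from simple relations. The sketch below recovers the capillary length quoted in the Simulations paragraph ($\ell_c = 1.485$ mm) assuming a surface tension $\gamma \approx 21$ mN/m typical of v1000 silicone oil ($\gamma$ is our assumption, not stated in the text), and illustrates the film-thickness formula $h_0=m/(\rho S)$ with made-up mass and area.

```python
import math

rho = 971.0      # oil density (kg/m^3), from the text
g = 9.81         # gravitational acceleration (m/s^2)
gamma = 0.021    # surface tension (N/m): assumed typical value for v1000 oil

# capillary length: l_c = sqrt(gamma / (rho g))
lc = math.sqrt(gamma / (rho * g))
print(round(lc * 1e3, 3), "mm")   # -> 1.485 mm, the value used in the simulations

# film thickness from weighing: h0 = m / (rho S)
m, S = 0.26e-3, 25e-4             # hypothetical: 0.26 g spread over 25 cm^2
h0 = m / (rho * S)
print(round(h0 * 1e6, 1), "um")   # ~107 um, within the experimental range
```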
\noindent\textbf{Simulations.} Finite element simulations were performed with the commercial software Comsol v5.4. To supply the initial condition $h(x,y,0)=1+h_d(x,y)/h_0$ with $h_d(x,y)$ a dimensionless static pendant drop profile, we solved numerically the Young-Laplace equation with the appropriate boundary conditions~\cite{Marthelot:2018} using Mathematica (shooting method):
\begin{equation}
\begin{gathered}
\frac{\mathrm{d}^2 \phi\left(s\right)}{\mathrm{d} s^2}=\frac{-\cos\phi\left(s\right)}{\ell_c^2}+\frac{\mathrm{d}}{\mathrm{d} s}\left[\frac{\cos\phi\left(s\right)}{r(s)}\right]\\
\frac{\mathrm{d} h_d(s)}{\mathrm{d} s}=\cos\phi\left(s\right),\quad \frac{\mathrm{d} r(s)}{\mathrm{d} s}=\sin\phi\left(s\right),\\
\quad h_d(0)=0, \quad \phi(0)=-\pi/2,\\
r(s_\mathrm{f})=0, \quad \phi(s_\mathrm{f})=-\pi/2, \quad h_d(s_\mathrm{f})=A.
\label{eq:YoungLap}
\end{gathered}
\end{equation}
Here, \{$r(s)$, $h_d(s)$\} are the (cylindrical) coordinates of the drop surface, $\phi(s)$ is the local angle that the tangent makes with the vertical and $s$ is the arc-length as defined in Fig.~\ref{fg:Schema_YoungLap}. The value of $s_\mathrm{f}$ is a priori unknown and is determined by the additional boundary condition. The drop shapes are then imported into the FEM solver to be used as the initial condition.
The simulations solve the dimensional version of Eq.~(1) using the values for our silicone oil to allow an easier comparison with experiments. We use rectangular domains with periodic boundary conditions and a square mesh with quadratic Lagrange elements for $h$ and for $\kappa$ (resolved separately) of size $0.125\ell_c$ or smaller. The simulations shown in Figs.~4--6 have a width $L_y\approx 22.2\ell_c$ and length $L_x\approx 53.3\ell_c$, and were stopped after $10\tau$, at the onset of dripping ($A>2.2\ell_c$), or when the drop got too close to the boundary of our simulation domain (distance of the drop maximum to the boundary smaller than $8\ell_c$). The wake profiles $h_w(y)$ were measured between $4.7\ell_c$ and $7\ell_c$ behind the drop center depending on the length of the Landau-Levich region (estimated by looking for $\mathbf{\nabla}\kappa=\mathrm{const}$). As we only focus on quasi-steadily moving drops, we discarded the transient initiation of the drop motion (where the drop grows due to the Rayleigh-Taylor instability, see Fig.~\ref{fg:RT}) and only present data after the drop has moved by one diameter ($x>7\ell_c$), when the wake is fully formed. In total, $176$ simulations are shown in Figs.~4--6, performed with the following combinations of parameters: $h_0=\{30,60,90,120\}$ $\mu$m, $\alpha=\{0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0,1.1\}h_0/\ell_c$ with $\ell_c=1.485$ mm, and four drop profiles $h_d(x,y)$ with initial amplitudes $A_0/\ell_c=\{0.61,1.0,1.4,1.7\}$. The dimensionless values displayed in the main text are therefore relevant to experiments.
\section{Theory}
\noindent\textbf{Justification for the 2D Landau-Levich model.} Following \citet{Lister:2010}, we assume that the drop moves quasi-steadily at the dimensionless speed $c$ and therefore look for traveling-wave solutions of Eq.~(1) of the form $h(x,y,t)=h(x-ct,y)$ such that $\frac{\partial h}{\partial t}=-c\frac{\partial h}{\partial x}$. Equation~(1) thus becomes:
\begin{equation}
\mathbf{\nabla \cdot} \left[h^3 \left(\mathbf{\nabla}h + \mathbf{\nabla} \kappa\right)\right]=3\left(c-\widetilde{\alpha}h^2\right)\frac{\partial h}{\partial x}.
\label{eq:lub_wave}
\end{equation}
Now focusing on the annular matching region between the drop and the thin film [$r\approx R$ in polar coordinates $\{r,\theta\}$, see Fig.~4(c)], we expect small slopes so that we can linearise the curvature, $\kappa\approx \mathbf{\nabla}^2 h$, which has to vary rapidly to match the drop edge to the thin film. The curvature gradient term dominates the gravitational term (i.e. $\mathbf{\nabla} h \ll \mathbf{\nabla}^3 h$) and, except at the drop sides ($\theta=\pm \pi/2$), the azimuthal variations of the curvature are negligible compared to the radial ones. At leading order, Eq.~\eqref{eq:lub_wave} therefore becomes~\cite{Lister:2010}:
\begin{equation}
\frac{\mathrm{d} }{\mathrm{d} r}\left[h^3 \frac{\mathrm{d}^3 h}{\mathrm{d} r^3}\right]=3\left(c-\widetilde{\alpha}h^2\right)\cos\theta\frac{\mathrm{d} h}{\mathrm{d} r}.
\label{eq:lub_rad}
\end{equation}
For small angles, i.e. $\widetilde{\alpha}\ll c/h^2$, the advection term is negligible and we recover the formulation of \citet{Lister:2010} that yields the Landau-Levich equation after integrating Eq.~\eqref{eq:lub_rad} once. Since $h\sim 1$ in the matching region, this condition is equivalent to $\widetilde{\alpha}\ll c=\mathrm{Ca} \left(\ell_c/h_0\right)^3\cos(\alpha)^{-3/2}\sim 10$.\\
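For reference, this integration step is straightforward to state explicitly: dropping the advection term in Eq.~\eqref{eq:lub_rad} and fixing the integration constant by matching to the flat film far from the drop ($h\to 1$, $\mathrm{d}^3 h/\mathrm{d} r^3\to 0$), a single integration gives
\[
h^3\,\frac{\mathrm{d}^3 h}{\mathrm{d} r^3}=3\,c\cos\theta\left(h-1\right),
\]
which is the Landau-Levich equation referred to above.\\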
\begin{figure}[tb]
\begin{center}
\includegraphics[width=\columnwidth]{Pendantdrops.pdf}
\caption{Properties of pendant drops $h_d$ obtained by numerically solving Eq.~\eqref{eq:YoungLap} (representative profiles are shown in inset). The black lines are best fits.
}
\label{fg:Pendantdrops}
\end{center}
\end{figure}
\noindent\textbf{Pendant drop properties.} A small-slope solution of the Young-Laplace equation, valid for small drops, reads in dimensionless form (lengths rescaled by $\ell_c$)~\cite{Lister:2010}:
\begin{equation}
h_d^*(r)=A \frac{\mathrm{J}_0(R)-\mathrm{J}_0(r)}{\mathrm{J}_0(R)-1},
\end{equation}
with $A$ the previously defined drop height, $\mathrm{J}_0$ the Bessel function of the first kind of order $0$ and $R\approx 3.83$ the drop radius defined by $\mathrm{J}'_0(R)=0$. Using this, one can estimate analytically the numerical prefactors used in the main text:
\begin{equation}
\begin{gathered}
V=2\pi\int_0^R r h_d^*(r) \mathrm{d}r=\frac{\pi \mathrm{J}_0(R)}{\mathrm{J}_0(R)-1} AR^2\approx 0.902AR^2, \\
\kappa_d=\left.\frac{\mathrm{d}^2 h_d^*}{\mathrm{d} r^2}\right|_{r=R}=\frac{\mathrm{J}_0(R)}{\mathrm{J}_0(R)-1} A\approx 0.287 A,\\
z_c=\frac{1}{2}\frac{\int_0^R r h_d^*(r)^2 \mathrm{d}r}{\int_0^R r h_d^*(r) \mathrm{d}r}=\frac{\mathrm{J}_0(R)}{\mathrm{J}_0(R)-1} A\approx 0.287 A.
\end{gathered}
\end{equation}
We further checked these prefactors numerically on the pendant drop profiles $h_d$ obtained by solving Eq.~\eqref{eq:YoungLap}. As shown in Fig.~\ref{fg:Pendantdrops}, except for the drop radius $R$, these prefactors remain quite accurate even for larger drops. In our model, we use the values fitted on the full profiles; although we show rounded values in the main text, we use more digits for the calculations.
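The three constants above can also be checked in a few lines of code. The sketch below (ours, not part of the original analysis) evaluates the Bessel functions from their integral representation with plain numpy, locates $R$ by bisection on $J_1$ (since $J_0'=-J_1$), and recovers $R\approx 3.832$, $\kappa_d/A=z_c/A\approx 0.287$ and $V/(AR^2)\approx 0.902$:

```python
import numpy as np

def bessel_j(n, x, m=20001):
    # J_n(x) from the integral representation J_n(x) = (1/pi) int_0^pi cos(n t - x sin t) dt
    t = np.linspace(0.0, np.pi, m)
    f = np.cos(n * t - x * np.sin(t))
    dt = t[1] - t[0]
    return dt * (f.sum() - 0.5 * (f[0] + f[-1])) / np.pi  # trapezoid rule

# R is the drop radius, defined by J0'(R) = -J1(R) = 0 (first positive zero of J1)
lo, hi = 3.0, 4.5  # J1 changes sign in this bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if bessel_j(1, lo) * bessel_j(1, mid) <= 0.0:
        hi = mid
    else:
        lo = mid
R = 0.5 * (lo + hi)

J0R = bessel_j(0, R)
kappa_d_over_A = J0R / (J0R - 1.0)   # also equals z_c / A
V_over_AR2 = np.pi * kappa_d_over_A  # V / (A R^2)
print(R, kappa_d_over_A, V_over_AR2)  # ~3.832, ~0.287, ~0.902
```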
\section{Additional experimental and numerical results}
\begin{figure*}
\begin{minipage}[b]{.47\textwidth}
\includegraphics[width=0.92\columnwidth]{Exp_SI_v2.pdf}
\caption{\textbf{(a)} Amplitude $A$ as a function of time $t$ for experimental drops sliding under films of different thicknesses $h_0$ (see legend) at different inclination angles $\alpha$ (color coded). \textbf{(b)} Same data with $A$ rescaled by $A_0$, time by $\tau$, and the color now coding $\widetilde{\alpha}$.
}
\label{fg:Exp_SI}
\end{minipage}\qquad
\begin{minipage}[b]{.47\textwidth}
\includegraphics[width=\columnwidth]{Combined.pdf}
\caption{\textbf{(a)} Dimensionless amplitude $A$ as a function of dimensionless time $t$ for an immobile drop that grows through the Rayleigh-Taylor instability (dashed, $\widetilde{\alpha}=0$) and a mobile drop at very low inclination angle (solid, $\widetilde{\alpha}=0.1$); film thickness $h_0=90$ $\mu$m. \textbf{(b)} Curvature profiles $\kappa$ in the flow direction $x$ rescaled by the theoretical value in our model, $\kappa_d=0.2844 A/\ell_c^2$, for a drop of initial size $A_0/\ell_c=1.4$ on a film of thickness $h_0=60$ $\mu$m at different inclination angles $\widetilde{\alpha}$ (color coded). All profiles correspond to the last time step of each simulation and were shifted horizontally by $x_m$, the position of the drop maximum.
}
\label{fg:RT}
\end{minipage}
\end{figure*}
\noindent\textbf{Experiments at different film thicknesses.} The influence of the initial film thickness $h_0$ is shown in Fig.~\ref{fg:Exp_SI}(a), where we plot the time evolution of the drop amplitude $A$ for different values of $\alpha$ and $h_0$. As shown, the qualitative behavior is similar; the timescale on which the growth occurs and the critical angle $\alpha_c$ for the transition are however different. The drop under a quasi-horizontal substrate with $h_0\approx 55$ $\mu$m dripped in $115$ min while the one with $h_0\approx 115$ $\mu$m dripped in $45$ min, despite being initially $25\%$ smaller. The critical angles vary between $1\lesssim \alpha_c \;(^\circ)\lesssim 3$.
The same data are plotted in Fig.~\ref{fg:Exp_SI}(b) but with the time rescaled by $\tau=\eta\gamma/\left(h_0^3\rho^2g^2\cos^2\alpha\right)$ and the color now coding $\widetilde{\alpha}$. As shown, this rescaling captures the change of thickness $h_0$ for drops of similar initial amplitudes, and the critical dimensionless angle is now of the order of $\widetilde{\alpha}\approx 0.6$ for all thicknesses. The blue circles do not collapse, probably due to the substantially larger initial drop size in this experiment, which accelerates the dynamics. \\
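As an order-of-magnitude check of this timescale (our own estimate, assuming $\ell_c=\sqrt{\gamma/\rho g}$ so that the surface tension follows from the quoted $\ell_c=1.485$ mm, and taking $\cos\alpha\approx 1$ at these small angles):

```python
import math

# material parameters quoted in the Methods section
eta = 1.13     # Pa s, oil viscosity
rho = 971.0    # kg/m^3, oil density
g = 9.81       # m/s^2
lc = 1.485e-3  # m, capillary length

# assuming lc = sqrt(gamma/(rho g)), the surface tension is
gamma = rho * g * lc**2  # ~0.021 N/m, typical for silicone oil

# tau = eta gamma / (h0^3 rho^2 g^2 cos^2 alpha), evaluated here at alpha ~ 0
h0 = 90e-6  # m, mid-range film thickness
tau = eta * gamma / (h0**3 * rho**2 * g**2)
print(gamma, tau)  # ~0.021 N/m and ~360 s
```

With $h_0=55$--$110$ $\mu$m this gives $\tau$ of roughly $3$--$26$ minutes, so the observed dripping times of $45$--$115$ min correspond to a few to tens of $\tau$, consistent with the rescaled data.\\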
\noindent\textbf{Limitations of the model.} Besides the fact that the advection term stops being negligible in Eq.~\eqref{eq:lub_rad} at large angles, our Landau-Levich analysis has a few other limitations. First, as shown in Fig.~5(b), our model only predicts the central part of the wake and not the reconnection to the thin film (for $\lvert y\rvert>R$). We however checked numerically that the portion of the wake containing the side lobes has a minor impact on the drop volume change $\partial_x V$.
Then, for very low inclination angles, we observe in the experiments an initially rapid growth that slows down before increasing again [see Fig.~3(a) or Fig.~\ref{fg:Exp_SI}]. This initial growth and apparent saturation resembles the growth of a motionless drop due to the Rayleigh-Taylor instability~\cite{Lerisson:2020}. Since drops are always mobile in our experiments, we simulate a motionless one with $\widetilde{\alpha}=0$ and compare it to a drop moving at low inclination angle in Fig.~\ref{fg:RT}(a). As shown, the initial growth is identical for the two drops until $t\approx 0.5\tau$, the time required for the drop to move by $\approx\ell_c$. Before that, the drop sucks in the thin film around it and forms a collar, just like a Rayleigh-Taylor drop~\cite{Lister:2010}. The very beginning of the motion is therefore impacted by the nonlinear Rayleigh-Taylor dynamics in addition to the dynamics described in this article.
Finally, by using the prefactor $4.94$ in the friction force $f_{v}(\theta)$, we implicitly assumed that the drop front and back curvatures were identical~\cite{Cantat:2013}. Fig.~\ref{fg:RT}(b) shows the horizontal evolution of the curvature $\kappa(x,0)$ as the inclination angle $\widetilde{\alpha}$ increases. At low inclination angles, the drop edge curvature to which the Landau-Levich film must connect is close to that of a static pendant drop, $\kappa_d$, at both the front and the back of the drop. However, as the angle increases the drop shape starts to distort and the edge curvature changes, particularly at the front where it increases almost twofold. This increase in front-back curvature difference with $\widetilde{\alpha}$ should gradually decrease the value of the prefactor of $f_{v}(\theta)$, thereby reducing the friction~\cite{Cantat:2013}, thus possibly explaining why our data are slightly above our prediction, especially at high $\widetilde{\alpha}$.
It is widely recognized that the presence of large scale sheared flows
\cite{r1,r2,r3} [also referred to as convective cells (CCs) or zonal
flows (ZFs)] plays a crucial role in regulating the cross-field turbulent
transport in magnetically confined fusion plasmas. The ZF is
characterized by poloidally and toroidally symmetric structure with
radial variation, and the relative zonal flow potential fluctuation
(in comparison with $T_e/e$, where $T_e$ is the electron temperature
and $e$ is the magnitude of the electron charge) is much smaller than
the relative zonal flow density perturbation (in comparison with the
equilibrium plasma number density $n_0$). The large scale Zonal jets also
occur in various planetary atmosphere, where they are nonlinearly generated
by the Rossby waves \cite{r4a,r4b}, and influence the atmospheric wind
circulation \cite{r4c,r4d}.
In magnetically confined fusion plasmas, there exist free energy
sources in the form of density, temperature, and magnetic field
inhomogeneities, which are responsible for exciting the low-frequency
(in comparison with the ion gyrofrequency), short scale (of the order
of the ion gyroradius or the ion sound gyroradius) DW-like
fluctuations \cite{r5a,r5b,r5c}. The linearly growing drift modes interact
among themselves and attain large amplitudes in due course of time.
The Reynolds stress of finite amplitude DWs, in turn, nonlinearly generates
convective cells (CCs) and sheared flows/ZFs \cite{r6,r7,r8,r9,r10,r11,r12},
via three-wave decay and modulational instabilities \cite{r7}, respectively.
There are recent review articles presenting the status of theoretical and
simulation works \cite{r12}, as well as experimental observations
\cite{r13} concerning the dynamics of the DW-ZF turbulence system. Specifically,
some numerical simulations \cite{r12} lend support to the experimental
observation that the DW turbulence and transport levels are reduced in the
presence of the sheared flows/ZFs.
Recently, Guo {\it et al.} \cite{r14} used the governing equations of
Ref. \cite{r7} for the DW-CC turbulence system to investigate the
radial spreading of the DW-ZF turbulence via soliton formation.
However, the authors of Ref. \cite{r14} completely neglected
self-interactions among drift waves and zonal flows, which are very
important in the study of nonlinearly coupled finite amplitude drift
and zonal flow disturbances in nonuniform magnetoplasmas.
In this Letter, we present simulation results of fully nonlinear DW-ZF
turbulence systems, which exhibit the coexistence of drift dipolar
vortices and a radially symmetric monopolar zonal flow vortex. The
effect of the latter on the cross-field turbulent transport is
examined. Our investigation is based on the governing equations for
the DW-ZF turbulence systems that incorporate the Hasegawa-Mima (HM)
self-interaction nonlinearity \cite{r15} in the nonlinear dynamics of
the DWs, which nonlinearly excite CCs/ZFs. Furthermore, we also
account for nonlinear interactions among the CCs/ZFs and obtain the
driven Euler equation for the dynamics of finite amplitude
CCs/ZFs. The generalization of the governing equations for fully
nonlinear DW-ZF turbulence systems is rather essential for the
investigation of the formation of coherent nonlinear structures that
control the transport properties and confinement of tokamak plasmas.
We consider a nonuniform magnetoplasma in an external magnetic field $\hat {\bf z} B_0$,
where $\hat {\bf z}$ is the unit vector along the $z-$ axis in a Cartesian coordinate
system and $B_0$ is the strength of the homogeneous magnetic field. The density gradient
$\partial n_0/\partial x$ is along the $x-$ axis. In the presence of the finite amplitude
low-frequency (in comparison with the ion gyrofrequency $\omega_{ci} =eB_0/m_i c$, where
$m_i$ is the ion mass and $c$ is the speed of light in vacuum) electrostatic DWs and ZFs,
the perpendicular (to $\hat {\bf z}$) components of the electron and ion fluid velocities \cite{r8}
are, respectively,
\begin{equation}
{\bf u}_{e\perp}^d \approx \frac{c}{B_0} \hat {\bf z} \times \nabla \phi
-\frac{c}{B_0 n_e} \hat {\bf z} \times \nabla (n_e T_e) \equiv {\bf u}_{EB}^d + {\bf u}_{De}^d,
\end{equation}
\begin{equation}
{\bf u}_{e\perp}^z \approx (c/B_0) \hat {\bf z} \times \nabla \psi \equiv {\bf u}_{EB}^z,
\end{equation}
\begin{eqnarray}
{\bf u}_{i\perp}^d \approx {\bf u}_{EB}^d + {\bf u}_{Di}^d
- \frac{c}{B_0 \omega_{ci}} \left(\frac{\partial}{\partial t}
+ \nu_{in} - 0.3\nu_{ii}\rho_i^2 \nabla_\perp^2 + {\bf u}_{EB}^d\cdot \nabla
+ {\bf u}_{Di}^d\cdot \nabla\right)\nabla_\perp \phi \\ \nonumber
-\frac{c}{B_0\omega_{ci}}\left[({\bf u}_{EB}^d\cdot \nabla)\nabla_\perp \psi
+({\bf u}_{EB}^z\cdot \nabla)\nabla_\perp \phi\right],
\end{eqnarray}
and
\begin{equation}
{\bf u}_{i\perp}^z \approx {\bf u}_{EB}^z -\frac{c}{B_0\omega_{ci}}
\left[\left(\frac{\partial }{\partial t} + \nu_{in}- 0.3 \nu_{ii}\rho_i^2\nabla_\perp^2\right)\nabla_\perp \psi
+\left<({\bf u}_{EB}^d\cdot\nabla)\nabla_\perp \phi\right>\right],
\end{equation}
where the superscripts $d$ and $z$ represents quantities associated with the DWs and ZFs, respectively,
$\phi$ and $\psi$ are the electrostatic potentials of the DWs and ZFs, respectively,
$n_e$ and $n_i$ are the electron and ion number densities, respectively, ${\bf u}_{Di}^d
=(c/eB_0n_i)\hat {\bf z} \times \nabla (n_i T_i)$ is the ion diamagnetic drift velocity,
$T_i$ is the ion temperature, $\nu_{in}$ $(\nu_{ii})$ is the ion-neutral (ion-ion) collision frequency,
$\rho_i =V_{Ti}/\omega_{ci}$ is the ion gyro-thermal radius, and $V_{Ti}$ is the ion thermal speed.
We stress that the self-interaction nonlinearities of the DWs and ZFs are retained in the fluid velocities
(3) and (4), respectively. The angular brackets denote averaging over one period of the DWs.
Assuming that $\left|(\partial/\partial t) + {\bf u}_{EB}^d \cdot \nabla\right| \ll \nu_{en}$,
where $\nu_{en}$ is the electron-neutral collision frequency, we obtain from the parallel
(to $\hat {\bf z}$) component of the electron momentum equation the magnetic field-aligned electron
fluid velocity $u_{ez}^d \approx \left(1/m_e \nu_{en}\right) \partial
\left(e \phi -T_e n_{e1}^d/n_0\right)/\partial z$, where $n_{e1} = (n_e - n_0) \ll n_0$.
We can now insert $u_{ez}^d$ into the electron continuity equation to obtain
\begin{equation}
\left[\frac{\partial}{\partial t} + \frac{V_{Te}^2}{\nu_{en}} \frac{\partial^2}{\partial z^2}
+ \left({\bf u}_{EB}^d+ {\bf u}_{EB}^z\right)\cdot \nabla \right]n_{e1}^d
+ {\bf u}_{EB}^d\cdot \nabla n_0 + \frac{n_0 e}{m_e \nu_{en}} \frac{\partial^2\phi}{\partial z^2} =0,
\end{equation}
where $V_{Te} =(T_e/m_e)^{1/2}$ is the electron thermal speed and $m_e$ is the electron mass.
Furthermore, substituting for the ion fluid velocity from (3) into the ion continuity equation we have
\begin{eqnarray}
\left[\frac{\partial}{\partial t} + \left({\bf u}_{EB}^d+ {\bf u}_{EB}^z\right)\cdot \nabla \right]n_{i1}^d
+ {\bf u}_{EB}^d \cdot \nabla (n_0 + n_{i1}^z) \\ \nonumber
- \frac{n_0 c}{B_0 \omega_{ci}} \left[ \left(\frac{\partial}{\partial t}
+ \nu_{in} - 0.3 \nu_{ii} \rho_i^2 \nabla_\perp^2 + {\bf u}_{EB}^d \cdot \nabla \right)\nabla_\perp^2 \phi
+ \nabla \cdot ({{\bf u}_{Di}}^d \cdot \nabla)\nabla_\perp \phi \right] \\ \nonumber
-\frac{n_0c}{B_0 \omega_{ci}}\left[({\bf u}_{EB}^d\cdot \nabla)\nabla_\perp^2 \psi
+({\bf u}_{EB}^z\cdot \nabla)\nabla_\perp^2 \phi\right] =0,
\end{eqnarray}
where the magnetic field-aligned ion dynamics has been ignored, thereby isolating the ion sound waves
from our system. The ion density perturbation associated with the ZFs is
$n_{i1}^z =\left(n_0c/B_0 \omega_{ci}\right) \nabla_\perp^2 \psi$.
Equations (5) and (6), which govern the dynamics of collisional drift waves \cite{r16}
in the presence of zonal flows, are closed by assuming $n_{e1}^d \approx n_{i1}^d \equiv n_1$, which is
a valid approximation in plasmas with $\omega_{pi}^2 \gg \omega_{ci}^2$, where $\omega_{pi}$ is the ion
plasma frequency. In the linear limit, without the ZFs, Eqs. (5) and (6) yield the DW frequency
$\omega_k = - k_y c_s \rho_s/ L_n (1+k_\perp^2 \rho_s^2)$ and the growth rate $\gamma_k (\ll \omega_k)$,
which are much larger than the damping rate $\nu_{in} + 0.3 \nu_{ii} k_\perp^2 \rho_i^2$. The
growth rate is $\gamma_k =\nu_{en}\omega_k^2 k_\perp^2/\omega_{LH}^2 k_z^2(1+k_\perp^2 \rho_s^2)$,
where $c_s =(T_e/m_i)^{1/2}$ is the ion sound speed, $\rho_s =c_s/\omega_{ci}$
is the sound gyroradius, $\omega_{LH} =(\omega_{ce}\omega_{ci})^{1/2}$ is the lower-hybrid
resonance frequency, $\omega_{ce}=eB_0/m_ec$ is the electron gyrofrequency,
$L_n =\left(\partial {\rm ln } n_0/\partial x\right)^{-1}$ is the scale-length of the density gradient,
and ${\bf k} = {\bf k}_\perp + \hat {\bf z} k_z$ is the wave vector.
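To illustrate this ordering numerically (a sketch of ours: only $\rho_s/L_n=0.01$ is taken from the simulations below; the wavenumbers, $\nu_{en}/\omega_{ci}$, and the hydrogen mass ratio are illustrative assumptions), the normalized frequency and growth rate can be evaluated directly, confirming $\gamma_k\ll|\omega_k|$:

```python
# illustrative (assumed) parameters, all normalized to omega_ci and rho_s
rho_s_over_Ln = 0.01   # density-gradient parameter used in the simulations
ky, kperp2 = 0.5, 0.5  # k_y rho_s and k_perp^2 rho_s^2 (assumed)
kz = 0.01              # k_z rho_s (assumed)
nu_en = 5.0            # nu_en / omega_ci (assumed)
mass_ratio = 1836.0    # m_i/m_e for hydrogen, so (omega_LH/omega_ci)^2 = m_i/m_e

# omega_k = -k_y c_s rho_s / [L_n (1 + k_perp^2 rho_s^2)], in units of omega_ci
omega = -ky * rho_s_over_Ln / (1.0 + kperp2)

# gamma_k = nu_en omega_k^2 k_perp^2 / [omega_LH^2 k_z^2 (1 + k_perp^2 rho_s^2)]
gamma = nu_en * omega**2 * kperp2 / (mass_ratio * kz**2 * (1.0 + kperp2))

print(omega, gamma, gamma / abs(omega))  # gamma/|omega| ~ 0.03 here
```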
The equation for the ZFs is obtained by inserting (2) and (4) into the electron and ion
continuity equations, and inserting the resultant equations into the Poisson equation,
obtaining the driven [by the DW Reynolds stress, the last term on the left-hand side of Eq. (7)]
and damped (by the ion-neutral collision and ion-gyroviscosity effects) ZF equation
\begin{equation}
\left(\frac{\partial}{\partial t}+ \nu_{in} -0.3 \nu_{ii} \rho_i^2 \nabla_\perp^2
+ {\bf u}_{EB}^z \cdot \nabla\right)\nabla_\perp^2 \psi
+ \left<({\bf u}_{EB}^d \cdot \nabla) \nabla_\perp^2\phi\right> =0.
\end{equation}
For the collisionless DWs, we assume that $|(\partial \phi/\partial t)+ ({\bf u}_{EB}^d+{\bf u}_{EB}^z)
\cdot \nabla \phi| \ll (V_{Te}^2/\nu_{en}) \nabla_\perp^2 \phi$ and $\hat {\bf z} \times \nabla n_0 \cdot \nabla
\phi \ll (\omega_{ce}/\nu_{en}) n_0 |\partial^2\phi/\partial z^2|$, and obtain from (5) the Boltzmann law
for the electron number density perturbation $n_{e1}^d =n_0 e\phi/T_e$. The latter can be inserted into
(6) by assuming that $n_{i1}^d =n_{e1}^d$, so that we have fully nonlinear equation for the DWs in the
presence of ZFs
\begin{eqnarray}
\frac{\partial \phi}{\partial t}- \frac{c_s \rho_s}{L_n} \frac{\partial \phi}{\partial y}
- \rho_s^2 \left[\frac{\partial}{\partial t} + \nu_{in} - 0.3 \nu_{ii} \rho_i^2 \nabla_\perp^2
+ \frac{c}{B_0} \left(1+ \sigma\right) (\hat {\bf z} \times \nabla \phi) \cdot \nabla \right]
\nabla_\perp^2 \phi \\ \nonumber
+ \frac{c}{B_0} (\hat{z} \times \nabla \psi) \cdot \nabla \left( \phi - \rho_s^2 \nabla_\perp^2 \phi\right) =0,
\end{eqnarray}
where $\sigma =T_i/T_e$.
We normalize the time and space variables by $\omega_{ci}^{-1}$ and $\rho_s$, the potentials $\phi$ and $\psi$
by $T_e/e$, and the collision frequencies by $\omega_{ci}$. In the normalized units, we can rewrite (7)
and (8) as, respectively,
\begin{equation}
\left[\frac{\partial}{\partial t}+ \frac{\nu_{in}}{\omega_{ci}} -0.3 \frac{\nu_{ii}}{\omega_{ci}}
\sigma \nabla_\perp^2 + (\hat {\bf z} \times \nabla \psi) \cdot \nabla\right]\nabla_\perp^2 \psi
+ \left< (\hat {\bf z} \times \nabla \phi \cdot \nabla) \nabla_\perp^2\phi\right> =0,
\end{equation}
and
\begin{eqnarray}
\frac{\partial \phi}{\partial t} - \frac{\rho_s}{L_n} \frac{\partial \phi}{\partial y}
- \left[\frac{\partial}{\partial t} + \frac{\nu_{in}}{\omega_{ci}} - 0.3 \frac{\nu_{ii}}{\omega_{ci}}
\sigma \nabla_\perp^2 + (1+ \sigma) (\hat {\bf z} \times \nabla \phi) \cdot \nabla \right]
\nabla_\perp^2 \phi
\\ \nonumber
+ (\hat{z} \times \nabla \psi) \cdot \nabla \left( \phi - \nabla_\perp^2 \phi \right) =0.
\end{eqnarray}
We have developed a 2D code to numerically integrate the system of
equations (9) and (10), which describe the self-consistent evolution
of the DW-ZF turbulence systems. We have chosen $\nu_{in}/\omega_{ci}
=0.1$, $\nu_{ii}/\omega_{ci}=0.01$, $\sigma =0.1$, and $\rho_s/L_n
=0.01$. Numerical descritization employs the spatial derivative in
Fourier spectral space, while time is descritized using time-split
integration algorithm, as prescribed in Ref. \cite{r17}. Periodic
boundary conditions are used along the $x$ and $y$ directions. A
fixed time integration step is used. The conservation of energy
\cite{r18} is used to check the numerical accuracy and validity of our
numerical code during the nonlinear evolution of the small scale drift
wave fluctuations and zonal flows. We also make sure that the initial
fluctuations are isotropic and do not influence any anisotropic flow
during the evolution. Anisotropic flows in the evolution can, however,
be generated from a $k_y=0$ mode that is excited as a result of the
nonlinear interactions between the ZFs and small scale DW
turbulence. The ZF and DW fields are initialized with a small
amplitude and uniform isotropic random spectral distribution of
Fourier modes in a 2D computational domain. These fields further
evolve through Eqs. (9) and (10) under the influence of nonlinear
interactions. Intrinsically, the set of Eqs. (9) and (10) possesses
parametrically unstable modes involving short scale drift waves and
zonal flows. In the early phase of simulations, we obtain the growth
of small scale DWs. We have carried out two characteristically
distinct sets of simulations by switching on and off the
self-interaction terms. This enables us to gain considerable insight
into the physics of generation of zonal flows and associated transport
level in the coupled DW-ZF turbulence systems.
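The core pseudo-spectral ingredient — evaluating advection-type brackets such as $(\hat{\bf z}\times\nabla\phi\cdot\nabla)\nabla_\perp^2\phi$ by differentiating in Fourier space and multiplying on the grid — can be sketched as follows (our minimal illustration, without the time integrator or de-aliasing):

```python
import numpy as np

def spectral_bracket(phi, Lx=2*np.pi, Ly=2*np.pi):
    """(z-hat x grad phi) . grad(lap phi) = -phi_y d_x(lap phi) + phi_x d_y(lap phi),
    with all derivatives taken in Fourier space on a periodic grid."""
    ny, nx = phi.shape
    kx = 2*np.pi*np.fft.fftfreq(nx, d=Lx/nx)
    ky = 2*np.pi*np.fft.fftfreq(ny, d=Ly/ny)
    KX, KY = np.meshgrid(kx, ky)
    ph = np.fft.fft2(phi)
    lap = -(KX**2 + KY**2) * ph          # Laplacian in spectral space
    dx = lambda f: np.real(np.fft.ifft2(1j*KX*f))
    dy = lambda f: np.real(np.fft.ifft2(1j*KY*f))
    return -dy(ph)*dx(lap) + dx(ph)*dy(lap)

# sanity check: for phi = sin(x) + sin(2y) the bracket is -6 cos(x) cos(2y)
n = 64
x = np.linspace(0, 2*np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
b = spectral_bracket(np.sin(X) + np.sin(2*Y))
err = np.max(np.abs(b - (-6*np.cos(X)*np.cos(2*Y))))
print(err)  # machine precision
```

Note that a single Fourier mode gives an identically vanishing bracket (its Laplacian is proportional to itself), which is why a two-mode test field is used.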
\begin{figure}
\begin{center}
\includegraphics[width=16cm]{fig1.ps}
\end{center}
\caption{Evolution of mode structures in our coupled DW-ZF turbulence model
from an initial random distribution. In the presence of self-interaction
terms, zonal flows are enhanced and quench the DW turbulence more
efficiently. Numerical resolution is $256^2$, box size is $2\pi
\times 2\pi$.}
\label{energy}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12cm]{fig2.ps}
\caption{The self-consistent generation of zonal flows is shown. In
the presence of the self-interaction nonlinearity, zonal flows are
generated rapidly and their saturated level is also enhanced when
compared with the evolution without the self-interaction nonlinearity.}
\label{spectra}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12cm, height=7cm]{fig3.ps}
\caption{Evolution of the cross-field diffusivity in the presence (blue curve), as well
as in the absence (red) of the self-interaction term. The cross-field diffusivity with
and without zonal flows is shown by the dashed and
solid curves, respectively. Clearly, the presence of the self-interaction term
enhances zonal flows, which dramatically reduce the cross-field diffusivity.}
\label{spectra2}
\end{figure}
To gain insight into the characteristic nonlinear interactions in our
coupled DW-ZF turbulence model, we examine the
Hasegawa-Mima-Wakatani (HMW) model \cite{r15,r16} that describes the
electrostatic drift waves in an inhomogeneous magnetoplasma. First,
the ion polarization drift nonlinearity in the HMW model,
viz. $\hat{z}\times \nabla \phi \cdot \nabla \nabla^2 \phi$, signifies
the self-interaction Reynolds stress that plays a critical role in the
formation of the ZFs \cite{r7}. This nonlinearity is basically
responsible for the generation of the ZFs. Secondly, since in
the collisionless DW dynamics, the electron density perturbation follows
a Boltzmann law due to the rapid thermalization of electrons along $\hat
{\bf z}$, the nonlinearity $\hat{z}\times \nabla \psi \cdot \nabla
\phi$ comes from the cross-coupling of the ZF's ${\bf E}_\perp^z
\times \hat {\bf z}$ particle motion with the drift wave density fluctuation in
our model. The role of this nonlinearity has traditionally been identified
as a source of suppressing the intensity of the nonlinear flows in the
DW turbulence \cite{r18}. Nevertheless, the presence of the
linear inhomogeneous background can modify the nonlinear mode couplings
in a subtle manner. Our objective here is to understand the latter in
the context of the coupled DW-ZF turbulence system. The initially
isotropic and homogeneous spectral distribution associated with
potential fluctuations, as described above, evolve dynamically
following the set of Eqs. (9) and (10).
The small amplitude initial drift wave fluctuations are subject to the
{\em modulational instability} on account of their nonlinear coupling with ZFs.
The parametrically unstable fluctuations grow rapidly during the early phase of
the evolution. The instability eventually saturates via the nonlinear mode
couplings in which the DW Reynolds stress, in concert with other nonlinearities
in Eqs. (9) and (10), plays a critical role. The mode couplings during the
nonlinear phase of the evolution lead to the formation of non-symmetric zonal
flow structures. This is shown in Fig. 1. Our simulations exhibit that the
self-interaction terms not only suppress the modulational instability on a
rapid timescale, but they also regulate the generation of the ZFs [see Figs. 1(b), (c),
(e), and (f)]. The extent and amplitude of the ZFs in Figs. 1(b) and
(e) [with the self-interaction] are larger than that of (c) and (f)
[without the self-interaction]. The final (i.e. steady-state)
structures, nonetheless, show the formation of a predominantly dipolar
vortex in the DW fluctuations, while the ZFs are dominated
by large-scale monopolar-vortex motion. It is noteworthy that the absence
of the self-interaction contaminates the flows with more small scale
structures [see Figs. 1(c) and (f)].
We next investigate the quantitative evolution of the ZFs, which is
depicted in Fig. 2. The spectral transfer of energy in the ZF
is estimated from $\sum_k |\phi(k_x, k_y=0)|^2$. The latter
describes a pile up of energy in the $k_y=0$ mode that is summed up
over the entire turbulent spectrum. We find from our simulations that
the presence of the self-interaction nonlinearity rapidly suppresses
the linear phase of the modulational instability of the DWs. Hence, the linear
phase during the evolution terminates rapidly when compared with the
no-self-interaction case. Alternatively, the modulational growth rate is enhanced,
and it saturates on a rapid timescale. Consequently, the presence of the
self-interactions gives rise to an enhanced level of ZFs, as shown in Fig. 2.
A direct consequence of the enhanced ZFs is to markedly suppress the cross-field
turbulent transport, because the sheared flows (in the poloidal direction) associated
with the ZFs tear apart the DW fluctuations/eddies, thereby keeping their amplitudes
low. We have computed the cross-field diffusivity in our simulations
by using the ion fluxes involving the Boltzmann electron density
perturbation and the linear ion polarization drift velocity associated with
the nonthermal DWs. The cross-field ion diffusivity reads
\begin{equation}
D = {D_B} \sum_{\bf k} \frac{k_y^2 \rho_s^2 |\phi_k|^2}{(1+k_\perp^2 \rho_s^2)},
\end{equation}
where $D_B =cT_e/eB_0$ is the Bohm diffusion coefficient and $\phi_k$
is the spectral potential distribution of the DWs. Note that the cross-field ion
diffusivity, as defined above, is subdued in the presence of the ZFs, since
the latter eventually suppress the DW turbulence so that the steady state
cross-field transport level is reduced. Consistent with this scenario,
we find, from our simulations, that the onset of the ZFs quenches the
cross-field turbulent transport, as shown by the dashed-curve in Fig. 3.
Furthermore, due to the vanishing poloidal wavenumber of the ZFs, the
sheared flows do not cause any cross-field turbulent transport in magnetized
plasmas.
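Both spectral diagnostics used above — the zonal-flow energy $\sum_{k_x}|\phi(k_x,k_y{=}0)|^2$ and the diffusivity sum in (11) — reduce to sums over the 2D FFT of the potential. A minimal sketch (ours; the synthetic field and the amplitude normalization are illustrative conventions, with lengths in units of $\rho_s$ on a $2\pi$-periodic box so that wavenumbers are integers):

```python
import numpy as np

def diagnostics(phi):
    """D/D_B = sum_k ky^2 |phi_k|^2 / (1 + k_perp^2) and E_zf = sum_kx |phi(kx, ky=0)|^2,
    with phi_k the Fourier amplitudes of phi (in units of T_e/e)."""
    ny, nx = phi.shape
    ph = np.fft.fft2(phi) / (nx * ny)  # amplitude normalization
    kx = np.fft.fftfreq(nx, d=1.0/nx)
    ky = np.fft.fftfreq(ny, d=1.0/ny)
    KX, KY = np.meshgrid(kx, ky)
    D = np.sum(KY**2 * np.abs(ph)**2 / (1.0 + KX**2 + KY**2))
    E_zf = np.sum(np.abs(ph[0, :])**2)  # the ky = 0 row
    return D, E_zf

# synthetic field: one drift-wave mode (a cos y) plus one zonal mode (b cos x);
# the zonal mode carries ky = 0 and so contributes nothing to D
n, a, b = 64, 0.3, 0.2
x = np.linspace(0, 2*np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
D, E_zf = diagnostics(a*np.cos(Y) + b*np.cos(X))
print(D, E_zf)  # a^2/4 = 0.0225 and b^2/2 = 0.02
```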
In summary, the most notable point that emerges from our simulations of the
coupled DW-ZF turbulence system is the importance and significance of the
self-interaction nonlinearity in modeling the low-frequency DW turbulence
that is believed to be a critical source for heat and energy losses.
A most realistic and accurate understanding of the latter is, therefore,
essential for the building and the performance of the next generation
controlled thermonuclear fusion reactors, such as ITER. In this work,
we have, for the first time, brought out the importance of self-interaction
processes and their role with regard to the cross-field turbulent transport
in high-temperature plasmas of thermonuclear fusion devices (e.g. tokamaks).
For our purposes, we used a new set of nonlinear equations for the
coupled DW-ZF turbulence system, which is a generalization of Ref. \cite{r7},
by including self-interactions among DWs, which drive finite amplitude ZFs.
Numerical simulations of the newly obtained nonlinear equations reveal
that the coupled DW-ZF turbulence system evolves in the form of short-scale
drift dipolar vortices and a large scale monopolar zonal flow structure.
The simultaneous presence of the dipolar and monopolar
vortices is responsible for a subdued cross-field turbulent
transport in a magnetically confined fusion plasma.
This research was partially supported by the Deutsche Forschungsgemeinschaft
through the project SH21/3-1 of the Research Unit 1048.
\section{Introduction}
Let $I\subset k[x_0,\dots,x_N]$ be a homogeneous ideal. For $r\geq 0$, the $r$-th symbolic power of $I$ is defined to be \[I^{(r)}=\bigcap_{p\in\mbox{Ass}(R/I)}(I^rR_p\cap R).\] Symbolic powers of ideals are interesting for a number of reasons, not least of which is that, for a radical ideal $I$, the $r$-th symbolic power $I^{(r)}$ is the ideal of all polynomials vanishing to order at least $r$ on $V(I)$ (by the Zariski-Nagata theorem).
Containment relationships between symbolic and ordinary powers are a source of great interest. As an immediate consequence of the definition, $I^r\subseteq I^{(r)}$ for all $r$. However, the other type of containment, namely that of a symbolic power in an ordinary power, is much harder to pin down. It has been proved by Ein-Lazarsfeld-Smith \cite{ELS} and Hochster-Huneke \cite{HH} that $I^{(m)}\subseteq I^r$ for all $m\geq Nr$, but as yet there are no examples in which this bound is sharp.

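A classical example (not from this note, but standard) of the gap between ordinary and symbolic powers is the ideal $I=(xy,xz,yz)$ of the three coordinate axes in $\mathbb A^3$: the monomial $xyz$ lies in $I^{(2)}=(x,y)^2\cap(x,z)^2\cap(y,z)^2$ but not in $I^2$. A short SymPy sketch (ours, purely illustrative) verifies both memberships via Gröbner bases:

```python
# Classical example: for I = (xy, xz, yz), the element xyz lies in the
# second symbolic power I^(2) but not in the ordinary square I^2.
import sympy as sp

x, y, z = sp.symbols('x y z')
gens = [x*y, x*z, y*z]                               # I = ideal of the 3 axes
sq_gens = [g1 * g2 for g1 in gens for g2 in gens]    # generators of I^2

f = x*y*z

# f lies in I^(2) = (x,y)^2 ∩ (x,z)^2 ∩ (y,z)^2, e.g. xyz = z*(xy) in (x,y)^2:
for (u, v) in [(x, y), (x, z), (y, z)]:
    G = sp.groebner([u**2, u*v, v**2], x, y, z, order='grevlex')
    assert G.reduce(f)[1] == 0       # remainder zero: f is in (u,v)^2

# but f is not in I^2 (its degree 3 lies below the minimal degree 4 of I^2):
G2 = sp.groebner(sq_gens, x, y, z, order='grevlex')
assert G2.reduce(f)[1] != 0          # nonzero remainder: f is not in I^2
```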
It was conjectured by Harbourne in \cite[Conjecture 8.4.3]{PSC} (and later in \cite[Conjecture 4.1.1]{HaHu} in the case $e=N-1$) that $I^{(m)}\subseteq I^r$ for all $m\geq er-(e-1)$, where $e$ is the codimension of $V(I)$. While this conjecture holds in a number of important cases, some counterexamples have also been found. Notably, the main counterexamples come from singular points of hyperplane arrangements \cite{BNAL}. One particular family is known in the literature under the name of Fermat configurations of points in $\mathbb P^2$ \cite{BNAL, MS}. These have been recently generalized to Fermat-like configurations of lines in $\mathbb P^3$ in \cite{MS}. The $\mathrm{Ceva}(n)$ arrangement of hyperplanes in $\mathbb{P}^N$ is defined by the linear factors of \[F_{N,n}=\prod_{0\leq i<j\leq N}(x_i^n-x_j^n),\] where $n\geq 3$ is an integer.
In \cite{DST} Dumnicki, Szemberg, and Tutaj-Gasinska showed that, for
the ideal $I_{2,3}$ corresponding to all triple intersection points of
the lines defined by linear factors of $F_{2,3}$ in $\mathbb{P}^2$,
$F_{2,3}\not\in I_{2,3}^2$, but $F_{2,3}\in I_{2,3}^{(3)}$. This was the first
counterexample to the above mentioned conjecture. Later, in \cite{MS} Malara and
Szpond generalized this construction to $\mathbb{P}^3$, by showing
that for the ideal $I_{3,n}$, corresponding to all triple intersection
lines of the planes defined by the linear factors of $F_{3,n}$,
$F_{3,n}\not\in I_{3,n}^2$, but $F_{3,n}\in I_{3,n}^{(3)}$. In the following, the
construction of counterexamples to the containment $I^{(3)}\subseteq I^2$ from
Fermat arrangements is generalized to $\mathbb{P}^N$ for all $N\geq
2$.
\section{Main result}
Let $n\in\mathbb{N}$. Let $k$ be a field
which contains a primitive $n$-th root of unity, $\varepsilon$. For each $N\in\mathbb{N}$, let $S_N:=k[x_0,x_1,\dots,x_N]$, and define \[F_{N,n}:=\prod_{0\leq i<j\leq N}(x_i^n-x_j^n).\]
Let \[C_{N,n}:=\bigcap_{0\leq i<j\leq N}(x_i,x_j)\]\[J_{N,n}:=\bigcap_{\substack{0\leq i<j<l\leq N \\ 0\leq a,b< n}}(x_i-\varepsilon^a x_j, x_i-\varepsilon^b x_l),\]
and let
\[I_{N,n}:=J_{N,n}\cap C_{N,n}.\]
We show in Lemma \ref{lem1} that $I_{N,n}$ is the ideal of the $N-2$ dimensional flats arising from triple intersection of hyperplanes corresponding to linear factors of $F_{N,n}$.
\begin{theorem}\label{main}
For all $N\geq 2$, $I_{N,n}^{(3)}\not\subseteq I_{N,n}^2$.
\end{theorem}
Before we can prove this, we must introduce a few lemmas.
\begin{lemma}\label{lem1}
The ideal $I_{N,n}$ defined above defines the union of all the $N-2$ dimensional linear spaces that are intersections of at least three hyperplanes corresponding to linear factors of $F_{N,n}$.
\end{lemma}
\begin{proof}
Let $0\leq a,b<n$, and let $0\leq i<j<l\leq N$. Then \[(x_i-\varepsilon^ax_j,x_i-\varepsilon^bx_l)=(x_i-\varepsilon^ax_j,x_i-\varepsilon^bx_l,x_j-\varepsilon^{b-a}x_l)\] defines the intersection of the three hyperplanes corresponding to $(x_i-\varepsilon^ax_j)$, $(x_i-\varepsilon^bx_l)$, and $(x_j-\varepsilon^{b-a}x_l)$.
Furthermore $$(x_i,x_j)=(x_i-\varepsilon^ax_j : a=0,1,\dots, n-1),$$ so $(x_i,x_j)$ defines the intersection of $n$ hyperplanes corresponding to linear factors of $x_i^n-x_j^n$.
It remains to be seen that all $N-2$ dimensional linear spaces that arise as intersections of at least three hyperplanes corresponding to linear factors of $F_{N,n}$ are accounted for above. Let $L$ be the ideal defining such a linear space. Then $L$ contains three linearly dependent binomials of the form $x_i-\varepsilon^ax_j, x_k-\varepsilon^bx_l, x_u-\varepsilon^cx_v$. Without loss of generality (after multiplication by appropriate powers of $\varepsilon$) this yields $i=k$ and $\{j,l\}=\{u,v\}$. If $j\neq l$ then $L=(x_i-\varepsilon^ax_j, x_i-\varepsilon^bx_l)$ is one of the primes appearing in the decomposition of $J_{N,n}$ and if $j=l$ then $L=(x_i,x_j)$ is one of the primes appearing in the decomposition of $C_{N,n}$.
\end{proof}
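As an illustration of Lemma \ref{lem1} and of the order-three vanishing used in the proof of the main theorem, one can check symbolically, for $N=2$ and $n=3$, that $F_{2,3}$ vanishes to order at least $3$ at a triple point of the $\mathrm{Ceva}(3)$ arrangement. The following Python/SymPy sketch (ours, purely illustrative) verifies this at the triple point $(1:1:1)$:

```python
# Illustrative check (not part of the proof): for N = 2, n = 3 the
# polynomial F_{2,3} vanishes to order >= 3 at the triple point (1:1:1),
# i.e. all partial derivatives of order < 3 vanish there.
from itertools import product
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')
F = (x0**3 - x1**3) * (x0**3 - x2**3) * (x1**3 - x2**3)
pt = {x0: 1, x1: 1, x2: 1}   # the lines x0-x1, x0-x2, x1-x2 meet here

for a in product(range(3), repeat=3):
    if sum(a) < 3:
        d = F
        for v, k in zip((x0, x1, x2), a):
            d = sp.diff(d, v, k)
        assert d.subs(pt) == 0           # vanishing order is at least 3

# and the order is exactly 3: some third derivative does not vanish
assert sp.diff(F, x0, 2, x1, 1).subs(pt) != 0
```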
\begin{lemma}\label{lem2}
Let $R$ and $S$ be finitely generated graded-local Noetherian rings. Let $m$ be the homogeneous maximal ideal of $R$. Let $I\subset R$ be a homogeneous ideal, and suppose $F\not\in I^r$ for some $r\in\mathbb{N}$. Let $J\subset S$ be an ideal, and let $\pi:S\rightarrow R$ be a (not necessarily homogeneous) ring homomorphism such that $\pi(J)\subseteq I$.
If $G\in R$ is such that $\pi(G)=Fg$, where $g\not\in m$, then $G\not\in J^r$.
\end{lemma}
\begin{proof}
Suppose by way of contradiction that such a $G$ exists and $G\in J^r$. Then \[Fg=\pi(G)\in\pi(J^r)\subseteq(\pi(J))^r\subseteq I^r.\]
Thus $Fg\in I^r$.
Let $I^r=Q_1\cap\dots\cap Q_t$ be a primary decomposition. Then, $Fg\in Q_i$ for each $i\in\{1,\dots,t\}$. Suppose that, for some $i$, $F\not\in Q_i$. Then $g^s\in Q_i$ for some $s\in\mathbb{N}$. However, $I^r$ is a homogeneous ideal, so $Q_i\subseteq m$ for all $i\in\{1,2,\dots, t\}$. But since $g\not\in m$, we know that $g^s\not\in m$, a contradiction. Thus $F\in Q_i$ for all $i$, which yields $F\in I^r$, a contradiction.
\end{proof}
This lemma allows us to construct an inductive argument for the main theorem.
\begin{proof}[Proof of Theorem \ref{main}]
By Lemma \ref{lem1} $F_{N,n}$ must vanish to order 3 or greater on each of the linear spaces whose union is $V(I_{N,n})$, thus $F_{N,n}\in I_{N,n}^{(3)}$. To finish the proof, it suffices to show that for all $N\geq 2$, $F_{N,n}\not\in I_{N,n}^2$.
We argue by induction on $N$. For $N=2$, this is proved in the paper of Dumnicki, Szemberg, and Tutaj-Gasinska \cite{DST}.
For $N\geq 3$, assume that $F_{N-1,n}\not\in I_{N-1,n}^2$ and consider the evaluation homomorphism $\pi:S_N\rightarrow S_{N-1}$ defined by $\pi(x_N)=1$ and $\pi(x_i)=x_i$ for $i \leq N-1$. Then:
\[\pi(I_{N,n})\subseteq C_{N-1,n}\cap\left(\bigcap_{0\leq i<N}(x_i,1)\right)\cap J_{N-1,n}\cap \left(\bigcap_{\substack{0\leq i<j<N\\0\leq a,b<n}}(x_i-\varepsilon^a,x_i-\varepsilon^b)\right)\subseteq I_{N-1,n}.\]
We note that $\pi(F_{N,n})=F_{N-1,n}g$ where $g=\prod_{0\leq i<N}(x_i^n-1)$. Since $g\not\in (x_0,\dots,x_{N-1})$, we conclude by Lemma \ref{lem2} that $F_{N,n}\not\in I_{N,n}^2$.
\end{proof}
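The key computation in the induction step, namely that substituting $x_N=1$ factors $F_{N,n}$ as $F_{N-1,n}\,g$ with $g=\prod_{0\leq i<N}(x_i^n-1)$, is easy to verify in a computer algebra system. A short SymPy check (ours, for the case $N=3$, $n=3$):

```python
# Check of the factorisation used in the induction step (N = 3, n = 3):
# substituting x3 = 1 into F_{3,3} yields F_{2,3} * prod_i (x_i^3 - 1).
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3')

def fermat_product(xs, n=3):
    """F_{N,n} = prod_{i<j} (x_i^n - x_j^n) for the given variables."""
    F = sp.Integer(1)
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            F *= xs[i]**n - xs[j]**n
    return F

F3 = fermat_product([x0, x1, x2, x3])
F2 = fermat_product([x0, x1, x2])
g = (x0**3 - 1) * (x1**3 - 1) * (x2**3 - 1)

# pi(F_{3,3}) = F_{2,3} * g, as claimed in the proof
assert sp.expand(F3.subs(x3, 1) - F2 * g) == 0
```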
\section{Concluding Remarks}
Another proof of the noncontainment $I_{N,n}^{(3)}\not\subseteq I_{N,n}^2$ has been found by Grzegorz Malara and Justyna Szpond and can be seen in their upcoming paper \cite{MS2}.
\section{The Contour Integral Method}\label{sec::cim}
This chapter will give a short summary of the contour integral method as introduced by Beyn \cite{Beyn_2012aa}, without any non-trivial modifications. Note that there are alternative approaches by other authors, where the first publication \cite{Asakura_2009aa} seems to go back to 2009, compare also \cite{Imakura_2016aa}. After this short review, we will discuss three numerical examples that utilise this method to solve Problem \ref{GalerkinEVP}.
Let $\bb T\colon M\to \mathbb C^{m\times m}$ be holomorphic on some domain $M\subset \mathbb C$. Let $\bb T^h$ denote the Hermitian transpose of $\bb T$ and $\bb T'$ denote its usual transpose.
We want to solve the nonlinear eigenvalue problem
$$ \bb T(z)\bb v = 0,\quad \bb v\in\mathbb C^m,\quad \bb v\neq \bb 0,\quad z\in M.$$
If $z$ is not an eigenvalue of $\bb T$, the condition above implies that the kernel of $\bb T(z)$ is trivial; thus $\bb T(z)$ has full rank and is invertible.
Fix some domain $D\subseteq M$ with boundary $\partial D,$ and assume there to be $k$ eigenvalues $(\lambda_j)_{j\leq k}$ in the interior of $D$, all of which are simple. We remark that the entire approach has also been generalised to the case of non-simple eigenvalues \cite{Beyn_2012aa}.
As a consequence of the version of the Keldysh Theorem stated in \cite{Mennicken_2003aa} one can prove the following result, on which this method is based.
\begin{theorem}[Contour Integral Theorem, {\citep[Thm.~2.9.]{Beyn_2012aa}}]\label{13}\index{contour integral}
For $\bb T$ as above and holomorphic $f\colon M\to \mathbb C$ it holds that
\begin{align*}
\frac 1{2\pi i}\int_{\partial D} f(z) \bb T(z)^{-1} \mathrm{d} z =\sum^k_{j=1}f(\lambda_j)\bb v_j\bb w_j^h,
\end{align*}
where $\bb v_j$ and $\bb w_j$ are right and left eigenvectors corresponding to $\lambda_j$ that are normalized according to
$$\bb w_j^h \bb T'(\lambda_j)\bb v_j=1$$
for all $j\leq k$.
\end{theorem}
For our purposes, where we set $\bb T(\kappa)={\pmb {\mathrm V}_{\kappa}}{}$ as a matrix-valued function, cf.~\eqref{eq::linear_system}, the contour integral method can be reduced to the following theorem.
\begin{theorem}[Linearisation of Eigenvalue Problems, {\cite[Thm.~3.1]{Beyn_2012aa}}] \index{contour integral!method}\label{cim::thm::cim} Suppose that $\bb T\colon M\to\mathbb C^{m\times m}$ is holomorphic and has only simple eigenvalues $(\lambda_j)_{j\leq k}$ in some connected subdomain $D\subset M$.
Then there exists a diagonalizable matrix $\bb B$, which can be computed from evaluations of $\bb T$, such that $\bb B$ has the same eigenvalues as the eigenvalue problem under consideration within $D$.
\end{theorem}
The proof of this theorem is constructive and corresponds to the following algorithm, on which our implementation is based.
\begin{algo}[Contour Integral Method, {\cite[p.~15]{Beyn_2012aa}}]\label{algo}
Let $\bb T$ be given as above, $\lbrace t_n\rbrace_{n\leq N}$ be a discretisation of some boundary $\partial D$ as above, $\delta,\epsilon>0$ and $\ell<\text{size}(\bb T(z))$.\\
\begin{algorithm}[H]
\KwData{$\lbrace t_n\rbrace_{n\leq N},\bb T,\delta,\epsilon,\ell$}
\KwResult{$(\lambda_j)_{j\leq k}$}
kfound $\gets$ false\;
$m\gets$ size($\bb T$)\;
\While{kfound $==$ false }{
$\hat {\bb V} \gets \operatorname{RandomFullRank}(m\times \ell)$\;
$\bb A_0 \gets \frac 1{iN}\sum_{j=1}^{N}\bb T( t_j)^{-1}\hat {\bb V}$\;
$({\bb V},{\bb \Sigma},{\bb W}^h)\gets$ SVD$({\bb A}_0)$ \tcp*{$\bb \Sigma =\text{diag}(\sigma_1,\dots,\sigma_\ell)$}
$k\gets j,$ where $\sigma_1\geq\dots\geq\sigma_j>\delta>\sigma_{j+1}\approx\dots\approx\sigma_\ell \approx 0 $\;
\eIf{$k < \ell$}{
kfound $\gets$ true\;
$\bb V_0 \gets {\bb V}(1\colon m,1\colon k)$\;
$\bb W_0 \gets {\bb W}(1\colon \ell,1\colon k)$\;
${\bb \Sigma}_0\gets \bb\Sigma$\;}
{$\ell\gets \ell+1$\;}
}
$\bb A_1 \gets \frac 1{iN}\sum_{j=1}^{N} t_j\bb T( t_j)^{-1}\hat{\bb V}$\;
$\bb B\gets \bb V_0^h\bb A_1\bb W_0\bb \Sigma_0^{-1}$\;
$(\lambda_j)_{j\leq k}\gets \operatorname{eigs}(\bb B)$\;
\end{algorithm}
\end{algo}
At first the algorithm might seem prohibitively expensive due to the application of a singular value decomposition. However, the number of eigenvalues $k$ will be small, and often a reasonable upper estimate $\ell_0$ of the number of eigenvalues is known, so that one can choose $\ell=\ell_0$.
Then the complexity of the singular value decomposition becomes negligible in comparison to solving the linear system in lines 5 and 15 of Algorithm~\ref{algo}.
Moreover, in an actual implementation $\bb A_1$ and $\bb A_0$ are assembled simultaneously, since the most expensive operation of the algorithm is evaluating $\bb T(t_j)^{-1}\hat {\bb V}$. In general, $k$ and $\ell$ will be small, so storing both $\bb A_j$ poses no issues. Since smooth contours should be used exclusively, one can expect the trapezoidal rule for the assembly of the $\bb A_j$ to converge exponentially with respect to $N$.
Thus, the bottleneck in terms of accuracy of the entire scheme is the accuracy with which ${\pmb {\mathrm V}_{\kappa}}{}$ represents the bilinear form of the EFIE.
\begin{remark}[Obtaining the Eigenvectors]
Note that through this algorithm we not only obtain the eigenvalues of Problem \ref{problem::variational}, but also the coefficients of the corresponding eigenfunctions in the form of the matrices $\bb V$ and $\bb W.$ Moreover, the number of non-zero singular values reflects the number of solutions within $D$ and can be used for verification of an implementation if the number of solutions is known from analytical representations or measurements.
\end{remark}
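To make Algorithm \ref{algo} concrete, the following self-contained NumPy sketch (our own illustration, not the implementation used for the numerical examples; all names are ours) applies the contour integral method to the linear test problem $\bb T(z)=\bb A-z\bb I$, for which the eigenvalues inside the contour are known:

```python
# Minimal NumPy sketch of Beyn's contour integral method on a circle.
# For the linear test problem T(z) = A - z*I, the method must recover
# exactly the eigenvalues of A lying inside the contour.
import numpy as np

def beyn(T, center, radius, m, ell, N=64, tol=1e-8):
    """Contour-integral eigensolver for T(z) v = 0 inside a circle."""
    rng = np.random.default_rng(0)
    V_hat = rng.standard_normal((m, ell))      # random full-rank probe
    A0 = np.zeros((m, ell), dtype=complex)
    A1 = np.zeros((m, ell), dtype=complex)
    for t in 2.0 * np.pi * np.arange(N) / N:   # trapezoidal rule in t
        z = center + radius * np.exp(1j * t)
        X = np.linalg.solve(T(z), V_hat)
        # weight = (1/(2*pi*i)) * (2*pi/N) * phi'(t), phi'(t) = i*r*e^{it}
        w = radius * np.exp(1j * t) / N
        A0 += w * X
        A1 += w * z * X
    V, s, Wh = np.linalg.svd(A0, full_matrices=False)
    k = int(np.sum(s > tol))                   # rank test: number of eigenvalues
    V0 = V[:, :k]
    W0 = Wh.conj().T[:, :k]
    B = V0.conj().T @ A1 @ W0 / s[:k]          # V0^h A1 W0 Sigma0^{-1}
    return np.linalg.eigvals(B)

A = np.diag([1.0, 2.0, 5.0, 7.0])
lam = np.sort(beyn(lambda z: A - z * np.eye(4),
                   center=1.5, radius=1.0, m=4, ell=3).real)
# the circle |z - 1.5| = 1 encloses exactly the eigenvalues 1 and 2
assert np.allclose(lam, [1.0, 2.0], atol=1e-6)
```

Here the singular-value threshold plays the role of $\delta$ in Algorithm \ref{algo}; since the eigenvalues nearest to the contour are well separated from it, the trapezoidal rule converges exponentially and $k=2$ is detected reliably.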
\section{Conclusion}\label{sec::con}
Overall, the contour integral method yields exceptional accuracies in conjunction with our isogeometric boundary element method.
Judging from the asymptotic behaviour predicted in \eqref{ErrorEstimate3} and observed in the numerical examples, the combination of isogeometric boundary element method and contour integral method promises higher orders of convergence to the correct solution than currently implemented volume-based approaches, since these do not benefit from the convergence order of ${\mathcal O}(h^{2p+1})$.
This means that, for the same computational resources, the maximum reachable accuracy of the IGA-BEM with contour integration is higher than that of many volume-based methods. However, due to the long time spent solving the linear systems, the IGA-BEM with contour integration offers this higher accuracy only in exchange for runtime until efficient preconditioning strategies become available, which would make the utilisation of fast methods and iterative solvers viable for this type of application.
\section{The Electric Field Integral Equation}\label{sec::efie}
Before we introduce the variational formulation of Maxwell's eigenvalue problem, we need to introduce some notation.
\subsection{Function Spaces Related to Maxwell's Equations and the EFIE}
Let $\Omega$ be a Lipschitz domain.
By $H^0(\Omega)$ we denote the usual space of square integrable functions $L^2(\Omega)$, and we write $H^s(\Omega)$ for the usual Sobolev spaces of higher regularity $s>0$, cf.~\cite{McLean_2000aa}. Their vector-valued counterparts $\pmb H^s (\Omega) = \big(H^s(\Omega)\big){}^3$ are denoted by bold letters. We now define the spaces
\begin{align*}
\pmb H^s({\pmb{\operatorname{curl}}}{},\Omega){}&{}=\lbrace \pmb f \in \pmb H^s(\Omega)\colon {\pmb{\operatorname{curl}}}{}\pmb f \in \pmb H^s(\Omega)\rbrace, \\
\pmb H^s({\pmb{\operatorname{curl}}}{}^2,\Omega){}&{}=\lbrace \pmb f \in \pmb H^s({\pmb{\operatorname{curl}}}{},\Omega)\colon {\pmb{\operatorname{curl}}}{}\,{\pmb{\operatorname{curl}}}{}\pmb f \in \pmb H^s(\Omega)\rbrace,\text{ and}\\
\pmb H^s(\div,\Omega){}&{}=\lbrace \pmb f \in \pmb H^s(\Omega)\colon \div\pmb f \in H^s(\Omega)\rbrace,
\end{align*}
dropping the index $s$ in the case of $s=0$ for convenience.
For $\bb{n}_{\pmb x_0}$ denoting the outwards directed unit normal at $\pmb x_0\in \Gamma$, the \emph{rotated tangential trace}
$$\pmb\gamma_{\mathrm t} (\pmb f) \coloneqq \lim_{\pmb x\to \pmb x_0} \pmb f(\pmb x)\times \bb{n}_{\pmb x_0}$$
is well defined for smooth $\pmb f.$ Note that $\pmb n_{\pmb x_0}$ is well-defined for almost all $\pmb x_0,$ cf.~\cite[p.~96]{McLean_2000aa}. We will equip the rotated tangential trace with the superscript~${}^\mathrm{int}$ if the limit is taken from within $\Omega$, and with the superscript ${}^\mathrm{ext}$ if taken from $\mathbb R^3\setminus\overline{\Omega},$ omitting the notation if the mapping properties are clearly stated.
The operator $\pmb \gamma_{\mathrm t}$ is extended to a weak setting through density arguments.
We can now define
$\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma) = \pmb \gamma_{\mathrm t}\big(\pmb H({\pmb{\operatorname{curl}}}{},\Omega)\big).$
By definition this renders the rotated tangential trace $ \pmb \gamma_{\mathrm t}\colon \pmb H({\pmb{\operatorname{curl}}}{},\Omega)\to \pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)$ surjective, and one can prove that it is continuous, compare \cite{Buffa_2003aa}.
We can now reformulate Maxwell's eigenvalue problem as a variational problem exclusively on the boundary $\Gamma$ by using the anti-symmetric pairing
\begin{align}
\langle \pmb \nu,\pmb \mu\rangle_\times \coloneqq\int_\Gamma (\pmb \nu \times \bb{n})\cdot\pmb \mu\,\mathrm{d} \Gamma,\label{found::eq::dualitypairingX}
\end{align}
cf.~\cite[Def.~1]{Buffa_2003aa}.
\subsection{Recasting the Eigenvalue Problem}
Any solution of the electric wave equation can be derived via a boundary integral formulation, which we will review within this section.
The version reviewed here resembles the one presented in \cite[Thm.~6]{Buffa_2003aa}.
We define the \emph{Maxwell single layer potential} $${\pmb {V}_\kappa}{}\colon \pmb H^{-1/2}_\times(\div_\Gamma,\Gamma)\to \pmb H({\pmb{\operatorname{curl}}}{}^2,\Omega)$$
via the Helmholtz fundamental solution
\begin{align}
u^*_\kappa(\pmb x,\pmb y)=\frac{e^{-i\kappa\Vert\pmb x-\pmb y\Vert}}{4\pi\Vert \pmb x-\pmb y\Vert} \label{eq::fundamental}
\end{align}
as
\begin{align*}
{\pmb {V}_\kappa}{} (\pmb \mu)(\pmb x)= \int_\Gamma u^*_\kappa(\pmb x,\pmb y)\pmb \mu (\pmb y)\mathrm{d} \Gamma_{\pmb y}
+ \frac 1{\kappa^2} {\pmb{\operatorname{grad}}}{}_{\pmb x}\int_\Gamma u^*_\kappa(\pmb x,\pmb y)(\div_\Gamma \pmb \mu)(\pmb y)\mathrm{d} \Gamma_{\pmb y}.
\end{align*}
The mapping properties are known, see \cite[Thm.~5]{Buffa_2003aa}, where it is also shown that the image of ${\pmb {V}_\kappa}{}$ is divergence free.
With the help of this operator, one can show the following.
\begin{lemma}[Single Layer Representation, {\cite[Thm.~6]{Buffa_2003aa}}]\label{found::thm::indirectrepresentation}
If $\bb E\in \pmb H({\pmb{\operatorname{curl}}}{}^2,\Omega)$ is a solution to the electric wave equation \eqref{eq:Maxwell-eig-cont} on $\Omega$, then it can be represented via
\begin{align*}
\bb E = {\pmb {V}_\kappa}{}\big(\pmb\gamma_{\mathrm t}^\mathrm{int}({\pmb{\operatorname{curl}}}{}\,\bb E)\big).
\end{align*}
\end{lemma}
We now define the
\emph{Maxwell single layer operator} ${\pmb {\mathcal V}_\kappa}{} = \pmb\gamma_{\mathrm t}^\mathrm{int} \circ {\pmb {V}_\kappa}{}$.
Applying the rotated tangential trace to the identity in Lemma \ref{found::thm::indirectrepresentation} allows us to recast the eigenvalue problem \eqref{eq:Maxwell-eig-cont} as a variational problem w.r.t.~the duality pairing \eqref{found::eq::dualitypairingX}. The underlying boundary integral equation
reads as follows:
\begin{problem}[Variational Eigenvalue Problem]\label{problem::variational}
Find a non-zero $\pmb j\in \pmb H_\times^{{-1/2}}(\div_\Gamma,\Gamma)$ and $\kappa>0$ such that
\begin{align}\label{Eq:EigVarFormulation}
\langle {\pmb {\mathcal V}_\kappa}{}\pmb j,\pmb \mu\rangle_\times = 0
\end{align}
holds for all $\pmb\mu\in \pmb H_\times^{{-1/2}}(\div_\Gamma,\Gamma)$.
\end{problem}
The problem in~\eqref{Eq:EigVarFormulation} is a nonlinear eigenvalue problem with respect to the eigenvalue parameter $\kappa$, since $\kappa$ occurs nonlinearly in the fundamental solution \eqref{eq::fundamental} which builds the kernel of the single layer boundary integral operator ${\pmb {\mathcal V}_\kappa}{}$.
The eigenvalue problem formulations~\eqref{eq:Maxwell-eig-cont} and~\eqref{Eq:EigVarFormulation} are equivalent in the following sense:
\begin{lemma}[Equivalence of eigenvalue problems] Let $\kappa\in\mathbb{R}$ with $\kappa>0$.
\begin{enumerate}[a)]
\item If $(\kappa,\bb E)$ is an eigenpair of~\eqref{eq:Maxwell-eig-cont}, then $(\kappa,\pmb\gamma_{\mathrm t}^\mathrm{int}({\pmb{\operatorname{curl}}}{}\,\bb E))$ is an eigenpair of~\eqref{Eq:EigVarFormulation}.
\item If $(\kappa, \pmb j)$ is an eigenpair of~\eqref{Eq:EigVarFormulation}, then $(\kappa, {\pmb {V}_\kappa}{}\pmb j)$ is an eigenpair of~\eqref{eq:Maxwell-eig-cont}.
\end{enumerate}
\end{lemma}
\begin{proof}
Assertion a) has already been shown by the derivation of the boundary integral formulation~\eqref{Eq:EigVarFormulation}. For assertion b) note that ${\pmb {V}_\kappa}{} \pmb j$ is a solution of Maxwell's equation in $\Omega$~\cite[Sect.~4]{Buffa_2003aa} and that $0={\pmb {\mathcal V}_\kappa}{}\pmb j=\pmb\gamma_{\mathrm t}^\mathrm{int}{\pmb {V}_\kappa}{} \pmb j$. It remains to show that ${\pmb {V}_\kappa}{} \pmb j\neq 0$ in $\Omega$, which follows from the unique solvability of the exterior problem and the jump relations of the single layer potential on the boundary $\Gamma$, see~\cite[Prop.~2.1(ii)]{UngerPreprint}.
\end{proof}
We want to mention that the eigenvalue problem~\eqref{Eq:EigVarFormulation} has in addition to the real eigenvalues also non-real eigenvalues which correspond to the eigenvalues of the exterior eigenvalue problem. We refer to~\cite{UngerPreprint} for an analysis of this kind of eigenvalues.
\section{The Eigenvalue Problem}\label{sec::evp}
For a bounded and simply connected Lipschitz domain $\Omega \subset \mathbb{R}^3$ the electromagnetic fields in the source-free case are governed by the equations
\begin{equation}\label{eq:Maxwell-time-harmonic}
\begin{aligned}
{\pmb{\operatorname{curl}}}{}{(\bb{E})} &= -i\omega\mu_0\bb{H} && \text{in }\Omega\\
{\pmb{\operatorname{curl}}}{}{(\bb{H})} &= i\omega\varepsilon_0\bb{E} && \text{in }\Omega\\
\div{(\varepsilon_0\bb{E})} &= 0 && \text{in }\Omega\\
\div{(\mu_0\bb{H})} &= 0 && \text{in }\Omega,
\end{aligned}
\end{equation}
assuming the time-harmonic case.
As usual, $\bb E$ and $\bb H$ denote the electric and magnetic field \cite{Jackson_1998aa}, respectively.
In the case of the cavity problem, the electric permittivity $\varepsilon_0$
and the magnetic permeability $\mu_0$ are those of vacuum.
Moreover, since superconducting alloys can be modelled as perfect electric conducting, we assume the boundary conditions on $\Gamma\coloneqq \partial \Omega$ to be given by
\begin{equation}\label{eq:pecBC}
\begin{aligned}
\bb{E}\times\bb{n} &= 0 && \text{on }\partial\Omega\\
\bb{H}\cdot\bb{n} &= 0 && \text{on }\partial\Omega.
\end{aligned}
\end{equation}
By eliminating $\bb{H}$ from~\eqref{eq:Maxwell-time-harmonic} one can then derive the classical cavity problem \cite{Monk_2003aa}.
\begin{problem}[Cavity problem]
Find the wave number $k:=\omega \sqrt{\mu_0\varepsilon_0} \in \mathbb{R}$ and $\bb{E} \neq 0$ such that
\begin{equation}\label{eq:Maxwell-eig-cont}
\begin{aligned}
{\pmb{\operatorname{curl}}}{}{\bigl({\pmb{\operatorname{curl}}}{}{(\bb{E})}\bigr)} &= k^2 \bb{E} && \text{in }\Omega\\
\div{(\bb{E})} &= 0 && \text{in }\Omega\\
\bb{E}\times\bb{n} &= 0 && \text{on }\Gamma.
\end{aligned}
\end{equation}
\end{problem}
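On simple geometries the cavity problem has closed-form solutions, which serve as reference values for any numerical scheme. For the cavity normalised to the unit cube $(0,1)^3$, the resonant wavenumbers are $k=\pi\sqrt{m^2+n^2+p^2}$ with non-negative integer mode indices of which at least two are nonzero. A short Python sketch (ours, purely illustrative) lists the smallest ones:

```python
# Closed-form resonant wavenumbers of the PEC unit-cube cavity:
# k = pi * sqrt(m^2 + n^2 + p^2), integers m, n, p >= 0 with at least
# two of them nonzero (otherwise the mode vanishes identically).
import numpy as np
from itertools import product

ks = sorted({np.sqrt(m*m + n*n + p*p)
             for m, n, p in product(range(4), repeat=3)
             if sum(v > 0 for v in (m, n, p)) >= 2})
ks = [np.pi * k for k in ks]
print(ks[:3])    # the three smallest: pi*sqrt(2), pi*sqrt(3), pi*sqrt(5)

# the fundamental resonance is k = pi*sqrt(2) ~ 4.4429
assert np.isclose(ks[0], np.pi * np.sqrt(2))
```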
\section{A Brief Review of Isogeometric Analysis}\label{sec::IGA}
This section is devoted to a brief review of isogeometric analysis as required for its utilisation in the context of boundary element methods for electromagnetism.
We follow the lines of \cite{Doelz_2017aa,SISC}, which in their turn are based on the works of Buffa et al.~\cite{Buffa_2010aa} aimed at finite element discretisations.
Let $p\geq 0$ and $m>0$ be integers. While all of the results reviewed in this article are applicable to so-called \emph{$p$-open locally quasi-uniform knot vectors}, cf.~\cite[Assum.~2.1]{Beirao-da-Veiga_2014aa}, we assume all \emph{knot vectors} to be of the form
$$\Xi^p_m = \big[\underbrace{0,\cdots,0}_{p+1\text{ times}},1/2^m,\cdots ,(2^m-1)/2^m ,\underbrace{1,\cdots,1}_{p+1\text{ times}}\big].$$
This is introduced only for notational convenience and in agreement with our numerical examples.
We will then refer to $m$ as the \emph{refinement level} and define the \emph{mesh size} $h\coloneqq 2^{-m}.$
We proceed to define B-spline bases via the well-known recursion formula
\begin{align*}
b_i^p(x) & = \frac{x-\xi_i}{\xi_{i+p}-\xi_i}b_i^{p-1}(x) +\frac{\xi_{i+p+1}-x}{\xi_{i+p+1}-\xi_{i+1}}b_{i+1}^{p-1}(x),
\end{align*}
anchored for $p=0$ by the piecewise constant functions $b_i^0(x) = \chi_{[\xi_i,\xi_{i+1})}(x),$ cf.~\cite[Sec.~2.2]{Piegl_1997aa}. We define the spline space $S^p_m$ as the span of the B-spline basis defined on $\Xi^p_m.$
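For illustration, the recursion and the open knot vector $\Xi^p_m$ can be implemented in a few lines. The following Python sketch (ours, unoptimised) also checks the partition-of-unity property $\sum_i b_i^p(x)=1$ on $[0,1]$:

```python
# Cox-de Boor recursion for the B-spline basis on the uniform open
# knot vector Xi^p_m defined above (here p = 2, m = 2, i.e. h = 1/4).
import numpy as np

def knot_vector(p, m):
    """Xi^p_m: 0 and 1 repeated p+1 times, uniform interior knots."""
    return np.concatenate([np.zeros(p), np.linspace(0.0, 1.0, 2**m + 1), np.ones(p)])

def bspline(i, p, knots, x):
    """Evaluate b_i^p(x) by the Cox-de Boor recursion."""
    if p == 0:
        # half-open intervals; close the last nonempty one so x = 1 is covered
        if knots[i] <= x < knots[i + 1]:
            return 1.0
        if x == knots[-1] and knots[i] < knots[i + 1] == knots[-1]:
            return 1.0
        return 0.0
    out = 0.0
    if knots[i + p] > knots[i]:          # skip terms with zero denominator
        out += (x - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline(i, p - 1, knots, x)
    if knots[i + p + 1] > knots[i + 1]:
        out += (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
               * bspline(i + 1, p - 1, knots, x)
    return out

p, m = 2, 2
xi = knot_vector(p, m)
n_basis = len(xi) - p - 1                # = p + 2^m basis functions
# partition of unity: the basis functions sum to one on [0, 1]
for x in np.linspace(0.0, 1.0, 11):
    assert abs(sum(bspline(i, p, xi, x) for i in range(n_basis)) - 1.0) < 1e-12
```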
In reference domain, i.e., on $\square\coloneqq(0,1)^2$, we can now define the divergence conforming spline space $\pmb{\mathbb S}^p_m(\square)$ as done in \cite[Sec.~5.5]{Beirao-da-Veiga_2014aa} via
$$\pmb{\mathbb S}^p_m(\square)\coloneqq (S^p_m\otimes S^{p-1}_m) \times (S^{p-1}_m\otimes S^{p}_m),$$
for $\otimes$ denoting the tensor product and $\times$ denoting the Cartesian product.
\subsection{Geometry and Discretisation in the Physical Domain}
Let a \emph{patch} $\Gamma$ be given by the image of $\square$ under an invertible diffeomorphism $\pmb F\colon \square\to\Gamma\subseteq\mathbb R^3$.
For $\Omega$ being a Lipschitz domain, define a \emph{multipatch geometry} to be a compact, orientable two-dimensional manifold $\Gamma=\partial \Omega$ given as the union $\bigcup_{0\leq j <N} \Gamma_j$ of a family of patches $\lbrace \Gamma_j\rbrace_{0\leq j<N},$ $N\in \mathbb N$.
The family of diffeomorphisms $\lbrace \pmb F_j \colon \square\hookrightarrow \Gamma_j\rbrace_{0\leq j<N}$ will be called \emph{parametrisation}.
We assume the $\Gamma_j$ to be disjoint, and for any \emph{patch interface} $D$ of the form $D=\partial\Gamma_{j_0}\cap \partial\Gamma_{j_1}\neq \emptyset$ we require the continuous extensions of the parametrisations $\pmb F_{j_0}$ and $\pmb F_{j_1}$ onto $\overline{\square}=[0,1]^2$ to coincide along $D$.
Within the framework of isogeometric analysis parametrisations will usually be given by tensor products of non-uniform rational B-splines (NURBS), i.e., functions of the form
\begin{align*}
\pmb F_j(x,y)\coloneqq \sum_{0\leq j_1<k_1}\sum_{0\leq j_2<k_2}\frac{\pmb c_{j_1,j_2} b_{j_1}^{p_1}(x) b_{j_2}^{p_2}(y) w_{j_1,j_2}}{ \sum_{i_1=0}^{k_1-1}\sum_{i_2=0}^{k_2-1} b_{i_1}^{p_1}(x) b_{i_2}^{p_2}(y) w_{i_1,i_2}}.
\end{align*}
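A standard consequence of the rational weights, and a main reason for using NURBS in geometry representation, is that conic sections are represented exactly. The following Python sketch (ours, purely illustrative) verifies this for the curve analogue of the formula above: a quadratic rational Bézier segment with weights $(1,1/\sqrt 2,1)$ traces a quarter of the unit circle exactly:

```python
# Exact geometry representation by rational splines: the quadratic
# rational Bezier curve with control points (1,0), (1,1), (0,1) and
# weights (1, 1/sqrt(2), 1) is a quarter of the unit circle.
import numpy as np

def quarter_circle(t):
    b = np.array([(1 - t)**2, 2 * t * (1 - t), t**2])    # Bernstein basis
    w = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])         # weights
    c = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # control points
    return (b * w) @ c / (b * w).sum()                   # rational combination

# every sampled point lies on the unit circle (up to round-off)
for t in np.linspace(0.0, 1.0, 21):
    assert abs(np.linalg.norm(quarter_circle(t)) - 1.0) < 1e-12
```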
To extend the definition of $\pmb{\mathbb S}^p_m$ to the physical domain, we resort to an application of the so-called \emph{Piola transformation} \cite[Chap.~6]{Peterson_2006aa}.
For the case of geometry mappings $\pmb F_j\colon \square\to\Gamma_j$ and $\pmb f\in \pmb H^{-1/2}_\times(\div_\Gamma,\Gamma)$ that allow for point evaluations, its explicit form is given by
\begin{align*}
\iota_{\pmb F_j}(\pmb f)(\pmb x)&\coloneqq \eta(\pmb x)(\pmb {\mathrm d}\pmb F_j)^{-1}(\pmb f\circ \pmb F_j)(\pmb x),&&\pmb x\in \square.
\end{align*}
Therein, $\eta$ denotes the \emph{surface measure}
$$ \eta(\pmb x) \coloneqq \Vert{\partial_{x_1}\pmb F_j( {\pmb x})\times \partial_{x_2}\pmb F_j( {\pmb x})}\Vert ,$$
and $\pmb {\mathrm d}\pmb F_j$ denotes the Jacobian of $\pmb F_j.$ Although it is not readily invertible, an inverse exists in the sense of a mapping from the two-dimensional tangent space of $\Gamma_j$ to vector fields on $\square,$ as has already been discussed in \cite[Sec.~3.2]{SISC}.
We remark that it needs not be computed, since all computations can be reduced to computations in the reference domain. Explicit formulae are easily derived, see e.g.~\cite[Sec.~6.3]{Peterson_2006aa}.
We can now introduce the globally divergence conforming space
$$ \pmb {\mathbb S}^p_m(\Gamma)\coloneqq \left\lbrace \pmb f\in \pmb H_\times^{{-1/2}}(\div_\Gamma,\Gamma)\colon \iota_{\pmb F_j}(\pmb f|_{\Gamma_j}) \in \pmb {\mathbb S}^p_m(\square)\text{ for all }0\leq j< N\right\rbrace.$$
It has been analysed in \cite[Def.~10]{NuMa}, where it has been shown that it enjoys quasi-optimal approximation properties w.r.t.~energy norm of the EFIE. Specifically, the spline space satisfies estimates of the kind
\begin{align}
\min_{\pmb f_h\in \pmb{\mathbb S}^p_m(\Gamma)}\Vert{\pmb f-\pmb f_h}\Vert_{\pmb H_\times^{{-1/2}}(\div_\Gamma,\Gamma)}\leq Ch^{s+1/2}\Vert \pmb f\Vert_{\pmb H^s_{\mathrm{pw}}(\div_\Gamma,\Gamma)},\qquad 0\leq s\leq p,\label{eq::iga::approximation}
\end{align}
for all $\pmb f\in \pmb H^s_{\mathrm{pw}}(\div_\Gamma,\Gamma)$, cf.~\cite[Thm.~3]{NuMa}, where the space ${\pmb H^s_{\mathrm{pw}}(\div_\Gamma,\Gamma)}$
is equipped with the norm
$$\Vert \pmb f \Vert_{\pmb H^s_{\mathrm{pw}}(\div_\Gamma,\Gamma)}\coloneqq \sum_{0\leq j<N}\big\Vert \iota_{\pmb F_j}(\pmb f|_{\Gamma_j})\big\Vert_{\pmb H^s(\div,\square)}.$$
We recall the relation $h=2^{-m}$ due to the assumption of uniform refinement of the knot vectors.
These discrete spaces give rise to the discrete analogue of Problem \ref{problem::variational}.
\begin{problem}[Discrete Eigenvalue Problem]\label{problem::variational::disc}
Find a non-zero $\pmb j_h\in \pmb{\mathbb S}^p_m(\Gamma)$ and $\kappa_h$ such that
\begin{align}\label{GalerkinEVP}
\langle {\pmb {\mathcal V}_\kappa}{}\pmb j_h,\pmb \mu\rangle_\times = 0
\end{align}
holds for all $\pmb\mu\in \pmb{\mathbb S}^p_m(\Gamma)$.
\end{problem}
In its discrete form, this problem induces a linear system
\begin{align}
{\pmb {\mathrm V}_{\kappa}}{} \bb j_h = 0,\label{eq::linear_system}
\end{align}
which we choose to assemble as in \cite{SISC,IEEE}.
Therein, ${\pmb {\mathrm V}_{\kappa}}{}$ can be interpreted as a matrix valued function dependent on $\kappa.$
The well-posedness of this discrete problem is closely related to the following criteria, cf.~\cite[Sec.~3]{Bespalov_2010aa}.
\begin{itemize}
\item[(C1)] There exists a continuous splitting $\pmb H^{-1/2}_\times(\div_\Gamma,\Gamma) = \pmb W\oplus \pmb V$ such that the bilinear form $\langle {\pmb {\mathcal V}_\kappa}{} \cdot,\cdot\rangle_\times$ is stable and coercive on $\pmb V\times \pmb V$ and $\pmb W\times \pmb W$, and compact on $\pmb V\times \pmb W$ and $\pmb W\times \pmb V$.
\item[(C2)] $\pmb {\mathbb S}^p_m(\Gamma)$ can be decomposed into a sum $\pmb {\mathbb S}^p_m(\Gamma) = \pmb W_h \oplus \pmb V_h$ of closed subspaces of $\pmb H^{-1/2}_\times(\div_\Gamma,\Gamma)$.
\item[(C3)] $\pmb W_h$ and $\pmb V_h$ are stable under complex conjugation.
\item[(C4)] Both $\pmb W_h\subseteq \pmb W$, as well as the so-called \emph{gap-property}
\begin{align}\label{eq::gap-property}
\sup_{\pmb v_h\in \pmb V_{h}}\inf_{\pmb v\in \pmb V}\frac{\Vert{\pmb v-\pmb v_h}\Vert_{\pmb H^{-1/2}_\times(\div_\Gamma,\Gamma)}}{\Vert{\bb v_h}\Vert_{\pmb H^{-1/2}_\times(\div_\Gamma,\Gamma)}}\stackrel{h\rightarrow 0}{\longrightarrow} 0
\end{align}
hold.
\end{itemize}
These properties have been proven for the isogeometric discretisation \cite[Thm.~3.9]{SISC}.
\subsection{Numerical Analysis}
The convergence of the eigenvalues and eigenfunctions of the Galerkin eigenvalue problem~\eqref{GalerkinEVP} can be shown using abstract results of~\cite{Halla:2016,UngerPreprint} and \cite{Karma1, Karma2}. Crucial for the convergence analysis are the above listed criteria (C1)-(C4), that ${\pmb {\mathcal V}_\kappa}{}$ satisfies a $T$-G\r{a}rding's inequality with respect to the splitting in (C1)~\cite{SISC}, and that the mapping $\mathbb{C}\setminus\{0\}\ni\kappa\mapsto{\pmb {\mathcal V}_\kappa}{}$ is holomorphic~\cite[Lem.~2]{UngerPreprint}. In~\cite{Halla:2016,UngerPreprint} sufficient conditions for the convergence of conforming Galerkin methods are specified for eigenvalue problems for holomorphic $T$-G\r{a}rding operator-valued functions. According to~\cite[Lem.~5]{UngerPreprint} and \cite[Lem.~2.7]{Halla:2016} the criteria (C1)-(C4) allow the application of the convergence theory established in~\cite{Karma1, Karma2} to the Galerkin eigenvalue problem~\eqref{GalerkinEVP}. From the comprehensive convergence results presented in~\cite{Karma1, Karma2,Halla:2016,UngerPreprint} we only want to state the error estimate for semi-simple eigenvalues $\kappa$:
\begin{equation}\label{ErrorEstimate1}
\vert \kappa-\kappa_h\vert =\mathcal{O}(\delta_{m,p}^2),
\end{equation}
where
\begin{equation}\label{ErrorEstimate2}
\delta_{m,p}
:=\sup_{\substack{\pmb j\in \mathrm{ker}{\pmb {\mathcal V}_\kappa}{}\\
\|\pmb j\|_{\pmb H^{-1/2}_\times(\div_\Gamma,\Gamma) }= 1}} \inf_{\pmb j_h\in \pmb{\mathbb S}^p_m(\Gamma)}\|\pmb j-\pmb j_h\|_{\pmb H^{-1/2}_\times(\div_\Gamma,\Gamma) }.
\end{equation}
The quadratic convergence order with respect to $\delta_{m,p}$ follows from the fact that for any eigenfunction $\pmb j_\text{adj}$ of the adjoint eigenvalue problem there exists an eigenfunction $\pmb j$ of the eigenvalue problem ${\pmb {\mathcal V}_\kappa}{}\pmb j=0$ such that $\overline{\pmb j}=\pmb j_\text{adj}$, see~\cite[Lem.~3]{UngerPreprint}.
The estimate \eqref{eq::iga::approximation} together with \eqref{ErrorEstimate1} and \eqref{ErrorEstimate2} yields the final estimate
\begin{align}
\vert \kappa-\kappa_h\vert = \mathcal{O}(h^{2(p+1/2)}),\label{ErrorEstimate3}
\end{align}
for sufficiently smooth surface current densities $\pmb j.$
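The rate predicted in \eqref{ErrorEstimate3} can be checked empirically by fitting the slope of an error curve in the log-log sense. A minimal sketch, with made-up $(h,\mathrm{error})$ pairs standing in for measured data (mimicking the $p=1$ case, where $2p+1=3$):

```python
import math

# Hypothetical (h, error) pairs from a convergence study for p = 1;
# the values below are illustrative, not measured data.
data = [(1 / 2, 3.2e-3), (1 / 4, 4.1e-4), (1 / 8, 5.2e-5), (1 / 16, 6.6e-6)]

# Least-squares fit of log(err) = rate*log(h) + c estimates the observed
# rate, to be compared with the predicted 2p + 1 (here 3 for p = 1).
xs = [math.log(h) for h, _ in data]
ys = [math.log(e) for _, e in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
rate = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
     / sum((x - xbar) ** 2 for x in xs)
print(f"observed rate ~ {rate:.2f}")  # close to 3 for the data above
```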
\begin{remark}[Convergence on non-smooth geometries]\label{rem::nonsmoothGeom}
Here, ``sufficiently smooth'' is to be understood in terms of patchwise regularity, as explained in \cite{NuMa}. In general, this smoothness assumption will not be fulfilled for surface current densities on non-smooth geometries, since $\pmb j$ may admit singularities at corners and edges.
However, for the specific densities of resonant modes within interior cavities the smoothness assumption will often be fulfilled. For the cube, an analytical representation of $\pmb j$ is known and smooth~\cite{WienersXin:2013}, and a similar argument can be made for other, non-trivial geometries.
\end{remark}
\section{Motivation}
The development and construction of particle accelerators is arguably one of the most time- and money-consuming research endeavours in modern experimental physics.
One reason is that essential components are not available off the shelf and must be manufactured to the unique design specifications of the planned accelerator.
One of the most performance critical components are so-called \emph{cavities}, resonators often made out of superconducting materials in which electromagnetic fields oscillate at {radio frequencies. The resonant fields are then used to accelerate bunches of particles up to speeds close to the speed of light.}
The geometry, and consequently the resonance behaviour, of these structures is vital to the overall performance of the accelerator.
Due to the expensive (e.g. superconducting) materials and vast amounts of manual labor that are needed in the manufacturing of these devices, the design of cavities has become its own area of research, cf.~\cite{add_Tesla} and the sources cited therein.
Consequently, the development of simulation tools specifically for this purpose became an important part of the related scientific advancement, see e.g.~\cite{Halbach_1976aa,Rienen_1985aa,Weiland_1985aa}.
\begin{figure}\centering
\begin{tikzpicture}[scale=1]
\node (A) at (.5,0) {\includegraphics[width=5cm]{figures/1.pdf}};
\node (C) at (4.25,0) {\includegraphics[width=5cm]{figures/4.pdf}};
\node (D) at (8,0) {\hspace{.2cm}\includegraphics[width=5cm]{figures/3.pdf}};
\draw [-latex] (2.05,0) -- (3.05,0);
\draw [-latex] (5.75,0) -- (6.75,0);
\end{tikzpicture}
\caption{For a finite element computation, a mesh is generated from the boundary data available from the design framework; afterwards, a volume mesh is created. This introduces geometrical errors and limits the obtainable order of convergence to that of the geometry representation. Graphics from \cite{Wolf2020}.}\label{fig::intro::tesla1}
\end{figure}
A bottleneck with these classical approaches has always been the representation of the geometry, which often limits the achievable accuracies, cf. Figure~\ref{fig::intro::tesla1}. {However, high accuracies are desired such that the initial design and its simulation are not the weakest link within the manufacturing pipeline.}
For example, manufacturers are interested in the simulation of deformation effects, the so-called \emph{Lorentz detuning}, which requires relative accuracies of roughly $10^{-7}$, \cite[Tab. II]{add_Tesla}. There are also other, more advanced applications with such high demands on accuracy. One is presented by Georg et al.~\cite{Georg2019}, {who show that accuracies even higher than those already achievable are required to simulate eccentricities.}
{\section{Introduction}}
Nowadays, isogeometric analysis \cite{Hughes_2005aa} has been established as the method of choice when dealing with such high demands on accuracy w.r.t.~geometry representation. Isogeometric methods are well understood for the case of electromagnetism \cite{Buffa_2013aa,Bontinck_2017ag} and a corresponding finite element approach has already been applied to Maxwell's eigenvalue problem \cite{Corno_2016aa}.
\begin{figure}\centering
\begin{tikzpicture}[scale=1]
\node (A) at (0,0) {\includegraphics[width=5cm]{figures/1.pdf}};
\node (B) at (4,0) {\hspace{-.2cm}\includegraphics[width=5cm]{figures/5.pdf}};
\draw [-latex] (1.7,0) -- (2.7,0);
\end{tikzpicture}
\caption{Isogeometric boundary element methods enable the computation directly on the CAD representation. Graphics from \cite{Wolf2020}.}\label{fig::intro::tesla2}
\end{figure}
The boundary-only representations in modern CAD frameworks, as well as the demand for highly accurate simulation techniques, suggest the transition to isogeometric boundary element methods for these types of problems, cf.~Figure~\ref{fig::intro::tesla2}.
While the use of boundary element methods promises to both reduce the number of degrees of freedom w.r.t.~the element size $h$ drastically, and at the same time double the rate of convergence {for the point evaluation of the solution in the domain}~\cite[Cor.~3.11 \& Rem.~3.12]{SISC},
the system matrices become densely populated and the corresponding eigenvalue problem becomes non-linear. However, through the recent introduction of a new family of eigenvalue solvers known as \emph{contour integral methods} \cite{Beyn_2012aa,Asakura_2009aa}, this problem was mitigated.
A first approach to a boundary element eigenvalue problem via the contour integral method
was investigated {in \cite{WienersXin:2013}, however neither with a higher-order approach, nor with a discussion of the related convergence theory, nor within the isogeometric setting. Moreover, a comparison of volume-based and boundary element methods has not been discussed within the literature.}
This article builds on top of the recent mathematical results of \cite{SISC,Beyn_2012aa,NuMa,UngerPreprint} and aims at advancing the solution of eigenvalue problems via boundary elements by discussing the convergence analysis of the isogeometric discretisation of the eigenvalue problem. In this, we prove {that a convergence order ${\mathcal O}(h^{2p+1})$ for the eigenvalue approximation can be achieved for discretisations with mesh-size $h$ and ansatz functions of order $p$. For a corresponding volume-based IGA only a convergence order ${\mathcal O}(h^{2p})$ can be expected~\cite{BuffaRivasSangalliVazquez:2011}.}
The organisation of this document is straightforward. Section \ref{sec::evp} will introduce the cavity problem based on Maxwell's equations. Afterwards, Section \ref{sec::efie} will show how this problem can be rephrased as a boundary integral equation, the well-known \emph{electric field integral equation}, and we will discuss the equivalence of both formulations.
We will then discuss our discretisation scheme, where we will review our isogeometric discretisation of the boundary integral equation in Section \ref{sec::IGA}, making theoretical predictions regarding the convergence behaviour. Our approach to the arising non-linear eigenvalue problem will then be discussed in Section \ref{sec::cim}.
In Section \ref{sec::num} a selection of numerical examples will be presented, and finally we will briefly conclude our findings in Section \ref{sec::con}.\\
\section{Numerical Examples}\label{sec::num}
In the following we will discuss some numerical experiments showcasing the application of the contour integral method to the isogeometric boundary element method. The particular implementation used is \emph{Bembel}, which is available open-source \cite{bembel,SoftwareX}. A branch containing the code to recreate all of the presented numerical examples is also available \cite{CodeBranch}.
Since no quasi-optimal preconditioners for the isogeometric discretisation of the electric field integral equation are known, iterative solvers yield suboptimal runtimes. Thus, the following numerical experiments will use a dense matrix assembly together with the partially pivoted LU decomposition of the linear algebra library Eigen~\cite{eigen3} to solve the arising systems. The utilised higher-order approaches yield systems small enough for this approach to be more than feasible.
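As a rough stand-in for this solve step, the following sketch uses NumPy's \texttt{np.linalg.solve} (which applies a partially pivoted LU via LAPACK, analogous to the Eigen solver used in the actual C++ code) on a dense complex system; the size, matrix, and right-hand side are illustrative placeholders for the assembled BEM system:

```python
import numpy as np

# Illustrative dense solve: for the moderate system sizes produced by the
# higher-order discretisations (a few thousand unknowns, cf. the DOF tables
# below), a dense LU factorisation is entirely feasible.
n = 500                                           # illustrative system size
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal(n) + 0j
x = np.linalg.solve(A, b)                         # partially pivoted LU solve
res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(f"relative residual: {res:.2e}")            # near machine precision
```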
\begin{figure}\centering
\begin{tikzpicture}[scale = .7]
\begin{axis}[
height = 7.5cm,
width=8cm,
ymode=log,
xlabel=$h$,
xtick={0, 1, 2, 3, 4, 5},
xticklabels={1, {1}/{2}, {1}/{4}, {1}/{8}, {1}/{16}, {1}/{32}},
ylabel=error,
legend pos=south west,
legend columns=2,
]
\addplot+[line width=1.5pt,mark size =2.5pt,red] table [trim cells=true,x=M,y=sum_err] {data/cim_gpu1/Acim_cube_1_25};
\addlegendentry{ $p=1$}
\addplot+[line width=1.5pt,mark size =2.5pt,blue] table [trim cells=true,x=M,y=sum_err] {data/cim_gpu1/Acim_cube_2_25};
\addlegendentry{ $p=2$}
\addplot+[line width=1.5pt,mark size =2.5pt,brown] table [trim cells=true,x=M,y=sum_err] {data/cim_gpu1/Acim_cube_3_25};
\addlegendentry{ $p=3$}
\addplot+[line width=1.5pt,mark size =2.5pt,gray] table [trim cells=true,x=M,y=sum_err] {data/cim_gpu1/Acim_cube_4_25};
\addlegendentry{ $p=4$}
\convfitorder{sum_err}{data/cim_gpu1/cim_cube_1_30}{3}{{$\mathcal O(h^3)$}}
\convfitorder{sum_err}{data/cim_gpu1/cim_cube_2_30}{5}{{$\mathcal O(h^5)$}}
\convfitorder{sum_err}{data/cim_gpu1/cim_cube_3_30}{7}{{$\mathcal O(h^7)$}}
\convfitorder{sum_err}{data/cim_gpu1/cim_cube_4_30}{9}{{$\mathcal O(h^9)$}}
\end{axis}
\end{tikzpicture}\hfill
\begin{tikzpicture}[scale = .7]
\begin{axis}[
height = 7.5cm,
width=8cm,
ymode=log,
xlabel=$i$,
xtick = {0,1,2,3,4,5,6,7,8,9},
xticklabels={1,2,3,4,5,6,7,8,9,10},
ylabel=$\sigma_i$,
]
\addplot+[only marks,line width=1.5pt,mark size =2.5pt,red] table [trim cells=true,x index = {0},y index = {1}] {data/cim_gpu1/Asvd_ev_cim_cube_3_25_0};
\addlegendentry{$h=1$}
\addplot+[only marks,line width=1.5pt,mark size =2.5pt,blue] table [trim cells=true,x index = {0},y index = {1}] {data/cim_gpu1/Asvd_ev_cim_cube_3_25_1};
\addlegendentry{$h=1/2$}
\addplot+[only marks,line width=1.5pt,mark size =2.5pt,brown] table [trim cells=true,x index = {0},y index = {1}] {data/cim_gpu1/Asvd_ev_cim_cube_3_25_2};
\addlegendentry{$h=1/4$}
\addplot+[only marks,line width=1.5pt,mark size =2.5pt,gray] table [trim cells=true,x index = {0},y index = {1}] {data/cim_gpu1/Asvd_ev_cim_cube_3_25_3};
\addlegendentry{$h=1/8$}
\end{axis}
\end{tikzpicture}
\caption{On the left: the minimal difference of the simultaneously computed first and second eigenvalues $\lambda_\mathrm{cim}$ of the cube to their analytical solutions $\lambda_i$, i.e., $\min_{i=0,1}\vert\lambda_\mathrm{cim}-\lambda_i\vert$. On the right: the computed singular values for the example with $p=3$.}\label{fig::cim::cube}
\end{figure}
As a first example, we investigate the first two eigenvalues of the unit cube, where an analytical solution is given by the eigenvalues $\lambda_{\mathrm{ana},0}=\pi\sqrt{2}$ of multiplicity three and $\lambda_{\mathrm{ana},1}=\pi\sqrt{3}$ of multiplicity two. The ellipse was defined as $$\sin(t) + i\cdot0.05\cdot\cos(t)+5.0,\quad\text{ for }t\in[0,2\pi),$$ where the contour integrals were evaluated, again, with $N=25.$ The error
visualised corresponds to
$$\mathrm{error} = \frac 15\sum_{0\leq j<5}\min_{i=0,1}\vert\lambda_j-\lambda_{\mathrm{ana},i}\vert.$$
As one can see in Figure~\ref{fig::cim::cube}, the multiplicity of the eigenvalues is reflected perfectly by the non-zero singular values, i.e., all eigenvalues have been recognised. Moreover, we observe the theoretically predicted convergence order of ${\mathcal O}(h^{2p+1})$.
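For reference, the contour integral (Beyn) method underlying these computations can be sketched in a few lines. The discretised EFIE operator is replaced here by a toy linear problem $T(z)=A-zI$ with known spectrum, and a circular contour is used instead of the thin ellipses of the experiments; all sizes and values are illustrative:

```python
import numpy as np

# Minimal sketch of Beyn's contour integral method for T(z) v = 0, applied
# to a toy stand-in T(z) = A - z*I with known eigenvalues.
rng = np.random.default_rng(0)
n, ell, N = 8, 4, 25                         # size, probe columns, quad. pts.
A = np.diag([2.5, 2.7, 4.0, 5.5, 7.0, 8.0, 9.0, 10.0]).astype(complex)
T = lambda z: A - z * np.eye(n)

t = 2 * np.pi * np.arange(N) / N             # trapezoidal rule on a circle
z = 2.6 + 0.5 * np.exp(1j * t)               # encloses 2.5 and 2.7 only
dz = 0.5j * np.exp(1j * t)

Vhat = rng.standard_normal((n, ell)) + 0j
A0 = np.zeros((n, ell), complex)             # zeroth and first moments of
A1 = np.zeros((n, ell), complex)             # the resolvent applied to Vhat
for zj, dzj in zip(z, dz):
    X = np.linalg.solve(T(zj), Vhat)
    A0 += X * dzj
    A1 += zj * X * dzj
A0 /= 1j * N
A1 /= 1j * N

# Rank test: the number of singular values above tolerance equals the
# number of eigenvalues (with multiplicity) inside the contour.
V, s, Wh = np.linalg.svd(A0, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))
B = V[:, :k].conj().T @ A1 @ Wh[:k, :].conj().T @ np.diag(1.0 / s[:k])
eigs = np.sort(np.linalg.eigvals(B).real)
print(k, eigs)                               # 2 eigenvalues: ~2.5 and ~2.7
```

The rank test on the singular values of the zeroth moment mirrors the singular-value plots shown in the right panels of the figures.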
\begin{figure}\centering
\begin{tikzpicture}[scale = .7]
\begin{axis}[
height = 7.5cm,
width=8cm,
ymode=log,
xlabel=$h$,
xtick={0, 1, 2, 3, 4, 5},
xticklabels={1, {1}/{2}, {1}/{4}, {1}/{8}, {1}/{16}, {1}/{32}},
ylabel=error,
legend pos=south west,
legend columns=2,
]
\addplot+[line width=1.5pt,mark size =2.5pt,red] table [trim cells=true,x=M,y=sum_err] {data/cim_gpu2/Acim_sphere_int_1_25};
\addlegendentry{ $p=1$}
\addplot+[line width=1.5pt,mark size =2.5pt,blue] table [trim cells=true,x=M,y=sum_err] {data/cim_gpu2/Acim_sphere_int_2_25};
\addlegendentry{ $p=2$}
\addplot+[line width=1.5pt,mark size =2.5pt,brown] table [trim cells=true,x=M,y=sum_err] {data/cim_gpu2/Acim_sphere_int_3_25};
\addlegendentry{ $p=3$}
\addplot+[line width=1.5pt,mark size =2.5pt,gray] table [trim cells=true,x=M,y=sum_err] {data/cim_gpu2/Acim_sphere_int_4_25};
\addlegendentry{ $p=4$}
\convfitorder{sum_err}{data/cim_gpu2/cim_sphere_int_1_40}{3}{{$\mathcal O(h^3)$}}
\convfitorder{sum_err}{data/cim_gpu2/cim_sphere_int_2_40}{5}{{$\mathcal O(h^5)$}}
\convfitorder{sum_err}{data/cim_gpu2/cim_sphere_int_3_40}{7}{{$\mathcal O(h^7)$}}
\convfitorder{sum_err}{data/cim_gpu2/cim_sphere_int_4_40}{9}{{$\mathcal O(h^9)$}}
\end{axis}
\end{tikzpicture}\hfill
\begin{tikzpicture}[scale = .7]
\begin{axis}[
height = 7.5cm,
width=8cm,
ymode=log,
xlabel=$i$,
xtick = {0,1,2,3,4,5,6,7,8,9},
xticklabels={1,2,3,4,5,6,7,8,9,10},
ylabel=$\sigma_i$,
]
\addplot+[only marks,line width=1.5pt,mark size =2.5pt,red] table [trim cells=true,x index = {0},y index = {1}] {data/cim_gpu2/Asvd_ev_cim_sphere_int_3_25_0};
\addlegendentry{$h=1$}
\addplot+[only marks,line width=1.5pt,mark size =2.5pt,blue] table [trim cells=true,x index = {0},y index = {1}] {data/cim_gpu2/Asvd_ev_cim_sphere_int_3_25_1};
\addlegendentry{$h=1/2$}
\addplot+[only marks,line width=1.5pt,mark size =2.5pt,brown] table [trim cells=true,x index = {0},y index = {1}] {data/cim_gpu2/Asvd_ev_cim_sphere_int_3_25_2};
\addlegendentry{$h=1/4$}
\addplot+[only marks,line width=1.5pt,mark size =2.5pt,gray] table [trim cells=true,x index = {0},y index = {1}] {data/cim_gpu2/Asvd_ev_cim_sphere_int_3_25_3};
\addlegendentry{$h=1/8$}
\end{axis}
\end{tikzpicture}
\caption{On the left: the mean absolute difference of the computed first eigenvalue of the sphere to the analytical solution. On the right: the computed singular values for the example with $p=3$.}\label{fig::cim::sphere}
\end{figure}
\begin{figure}
\begin{tikzpicture}[scale=.75]
\begin{axis}[
width=.65\textwidth,
xlabel = real part,
ylabel = imaginary part,
]
\addplot[red,only marks,mark = o,very thick,mark options={solid},mark size=3pt] coordinates {
(2.743707269992269, 0)
};
\addplot[blue,only marks,mark = x,very thick,mark options={solid}] coordinates {
(2.7437123, 0)
};
\addplot[dashed,mark = o,mark options={solid},gray] coordinates {
( 2.984292e+00, 1.243449e-02)
( 2.938153e+00, 2.408768e-02)
( 2.864484e+00, 3.422736e-02)
( 2.767913e+00, 4.221640e-02)
( 2.654508e+00, 4.755283e-02)
( 2.531395e+00, 4.990134e-02)
( 2.406309e+00, 4.911436e-02)
( 2.287110e+00, 4.524135e-02)
( 2.181288e+00, 3.852566e-02)
( 2.095492e+00, 2.938926e-02)
( 2.035112e+00, 1.840623e-02)
( 2.003943e+00, 6.266662e-03)
( 2.003943e+00, -6.266662e-03)
( 2.035112e+00, -1.840623e-02)
( 2.095492e+00, -2.938926e-02)
( 2.181288e+00, -3.852566e-02)
( 2.287110e+00, -4.524135e-02)
( 2.406309e+00, -4.911436e-02)
( 2.531395e+00, -4.990134e-02)
( 2.654508e+00, -4.755283e-02)
( 2.767913e+00, -4.221640e-02)
( 2.864484e+00, -3.422736e-02)
( 2.938153e+00, -2.408768e-02)
( 2.984292e+00, -1.243449e-02)
( 3, -1.224647e-17)
};
\end{axis}
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.75]
\begin{axis}[
width=.65\textwidth,
xlabel = real part,
ylabel = imaginary part,
]
\addplot[red,only marks,mark = o,very thick,mark options={solid}, mark size=3pt] coordinates {
(4.44288293815836, 0)
(5.44139809270265, 0)
};
\addlegendentry{Analytical eigenvalue};
\addplot[blue,only marks,mark = x,very thick,mark options={solid}] coordinates {
(4.442882, 0)
(5.441338, 0)
};
\addlegendentry{Computed eigenvalue};
\addplot[gray,dashed,mark = o,mark options={solid}] coordinates {
( 5.968583e+00, 1.243449e-02)
( 5.876307e+00, 2.408768e-02)
( 5.728969e+00, 3.422736e-02)
( 5.535827e+00, 4.221640e-02)
( 5.309017e+00, 4.755283e-02)
( 5.062791e+00, 4.990134e-02)
( 4.812619e+00, 4.911436e-02)
( 4.574221e+00, 4.524135e-02)
( 4.362576e+00, 3.852566e-02)
( 4.190983e+00, 2.938926e-02)
( 4.070224e+00, 1.840623e-02)
( 4.007885e+00, 6.266662e-03)
( 4.007885e+00, -6.266662e-03)
( 4.070224e+00, -1.840623e-02)
( 4.190983e+00, -2.938926e-02)
( 4.362576e+00, -3.852566e-02)
( 4.574221e+00, -4.524135e-02)
( 4.812619e+00, -4.911436e-02)
( 5.062791e+00, -4.990134e-02)
( 5.309017e+00, -4.755283e-02)
( 5.535827e+00, -4.221640e-02)
( 5.728969e+00, -3.422736e-02)
( 5.876307e+00, -2.408768e-02)
( 5.968583e+00, -1.243449e-02)
( 6, -1.224647e-17)
};
\addlegendentry{Quad. pts. $t_j$}
\end{axis}
\end{tikzpicture}
\caption{The setup for the contour integral method. Sphere (left) and cube (right), both computed with $N=25$, $p=2$, $h=1/4$.
}\label{fig::cim::setup}
\end{figure}
\subsection{Comparison to IGA-FEM and Commercial Tools}
As a second example, we compute the first eigenvalue for the sphere, cf.~Figure~\ref{fig::cim::sphere} for the results.
The contour was chosen as the curve $$ 0.5\cdot\sin(t) + i\cdot0.05\cdot\cos(t)+2.5,\quad\text{ for }t\in[0,2\pi).$$ The trapezoidal rule for lines 5 and 15 of the algorithm was chosen with $N=25$.
A closed-form solution $\lambda_{\mathrm{ana}}$ is known \cite[Sec.~5.1.2]{UngerPreprint} and given as the first root of the spherical Bessel function of the first kind, cf.~Figure~\ref{fig::cim::setup}.
It is an eigenvalue of multiplicity three, thus three non-zero singular values in $\bb \Sigma$ are expected. This behaviour is reflected by the numerical examples perfectly. The error in Figure \ref{fig::cim::sphere} refers to the average error of all three computed eigenvalues, i.e.,
$$ \mathrm{error} = \frac 13\sum_{0\leq j < 3} \vert \lambda_j - \lambda_{\mathrm{ana}}\vert.$$
The convergence behaviour w.r.t.~$h$ matches the orders ${\mathcal O}(h^{2p+1})$ predicted by~\eqref{ErrorEstimate3} once more.
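The reference value marked in Figure~\ref{fig::cim::setup} can be cross-checked numerically. Assuming (for illustration) that the characteristic condition of the lowest resonant mode of the unit sphere is $(x\,j_1(x))'=0$, with $j_1$ the spherical Bessel function of the first kind, the condition reduces to $x\cos x-\sin x+x^2\sin x=0$, whose first positive root a simple bisection recovers:

```python
import math

# Characteristic function: with j1(x) = sin(x)/x^2 - cos(x)/x, the condition
# (x*j1(x))' = 0 reduces (after multiplying by x^2) to
#   x*cos(x) - sin(x) + x^2*sin(x) = 0.
f = lambda x: x * math.cos(x) - math.sin(x) + x * x * math.sin(x)

lo, hi = 2.5, 3.0                      # f(lo) > 0 > f(hi): sign change
for _ in range(60):                    # plain bisection to full precision
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
print(f"{root:.9f}")                   # ~2.743707270, cf. the figure marker
```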
We have also computed approximations of the smallest eigenvalues of the unit sphere with the volume-based IGA software package \emph{GeoPDEs 3.0}~\cite{geopde3,Falco_2011aa}. In Table~\ref{Tab:FEMBEM} we showcase, for different polynomial degrees and refinements, the number of degrees of freedom and the achieved accuracy for the IGA-BEM and the volume-based IGA. One can observe that the IGA-BEM requires notably fewer degrees of freedom (at least a factor of $20$) for each polynomial degree in order to reach the same accuracy as the volume-based IGA.
The difference to commercial tools, in this case CST Microwave Studio 2018, is even more pronounced, cf.~Table~\ref{Tab:CST}.
However, the matrices for the IGA-BEM are dense and the eigenvalue problem non-linear.
\begin{table}
\begin{footnotesize}\begin{center}
\begin{tabular}{r r r r| r r r r }
\toprule
\multicolumn{4}{c}{IGA-BEM}&\multicolumn{4}{c}{volume-based IGA
} \\
$p$ & $m$ & DOFs & error & $p$ & SD & DOFs & error\\
\midrule
3 & 1 &192 & 2.66e-05 &3 &4 &4350 &3.345e-05\\
3 & 2 &432 &4.63e-07 &3 &8 &20450 &3.45e-07\\
3 & 3 &1200 &2.12e-09 &3 &14 & 84560 &1.10e-08\\
\midrule
4 &1 &300 &1.62e-06 &4 &4 &6944 &2.36e-06\\
4 &2 &588 &3.05e-08 &4 &8 &27280 &3.94e-09\\
4 &3 &1452 &3.89e-11 &4 &13 &84560 &7.88e-11\\
\midrule
5 &1 &432 &8.97e-08 &5 &4 &10408 &1.88e-07\\
5 &2 &768 &2.04e-09 &5 &8 &35484 &7.33e-11\\
5 &3 &1728 &7.17e-12 &5 &11 &69600 &2.1498e-12\\
\bottomrule
\end{tabular}
\end{center}\end{footnotesize}
\caption{Error and degrees of freedom (DOFs) for the approximation of the smallest eigenvalue of the sphere for different polynomial degrees $p$, refinement levels $m$ and number of subdivisions~(SD).}
\label{Tab:FEMBEM}
\end{table}
\begin{table}
\begin{footnotesize}\begin{center}
\begin{tabular}{r r r r}
\toprule
\multicolumn{4}{c}{CST Microwave Studio 2018}\\
elements & SD & DOFs & error \\
\midrule
1089 & 6&21597 & 1.8638e-07\\
5292 & 10&101997 & 5.4204e-09\\
9191 & 12&175818 & 1.9524e-09\\
\bottomrule
\end{tabular}
\end{center}\end{footnotesize}
\caption{Error and degrees of freedom (DOFs) for the approximation of the smallest eigenvalue of the sphere with CST Microwave Studio 2018. Settings are
Mesh: Tetrahedral, Desired accuracy: 1e-12, Solver order: 3rd (constant), Curved elements: up to order 5 (user defined)}
\label{Tab:CST}
\end{table}
\subsection{An Industrial Application: TESLA Cavity}
As a third example we discuss an eigenvalue computation of the one-cell TESLA geometry as shown in Figures~\ref{fig::intro::tesla1} and \ref{fig::intro::tesla2}, with the results depicted in Figure~\ref{fig::cim::tesla1}. For this example no analytical solution is known, but experience dictates a resonance around $\kappa \approx 26.5$. We choose the contour $$0.5\cdot\sin(t) + i\cdot0.01\cdot\cos(t)+26.5,\quad\text{ for }t\in[0,2\pi)$$ and $N=8.$ As a reference solution we utilise the result of a computation with $p=5$ and $h=1/8.$
This reference solution was compared to a computation with CST Microwave Studio 2018. The set solver order was 3rd (constant) and the mesh was generated with
200\,771 curved tetrahedral elements of order 5. CST yields the solution of 1.27666401260 GHz. Our reference solution of $\lambda_{\mathrm{cim}} = 26.75690023$ corresponds to a frequency of 1.276664064 GHz. This results in a relative error of 4.018e-08. Thus our experiments are in good agreement with those of the CST software. {However, note that in Figure \ref{fig::cim::tesla1} one can clearly see stagnation w.r.t.~the CST solution for higher-order computations and small $h$. This suggests that the Bembel reference solution is more accurate, provided the convergence of the contour integral approach behaves as observed in the previous numerical experiments.}
The order of convergence for the cavity is not as pronounced as in the examples with analytical solution, which, in light of the estimate~\eqref{ErrorEstimate3} and Remark~\ref{rem::nonsmoothGeom} is most likely due to the reduced regularity of the corresponding eigenfunction.
Either way, one can clearly see an increased accuracy in higher-order approaches.
\begin{figure}[t!]\centering
\begin{tikzpicture}[scale = .75]
\begin{axis}[
height = 7.5cm,
ymode=log,
xlabel=$h$,
xtick={0, 1, 2, 3, 4, 5},
xticklabels={1, {1}/{2}, {1}/{4}, {1}/{8}, {1}/{16}, {1}/{32}},
ylabel=error,
legend style={at={(1,1)},anchor=north west,legend columns=1,cells={anchor=west},}
]
\addplot+[line width=1.5pt,mark size =2.5pt,red] table [trim cells=true,x=M,y=est_err] {data/1cell/cim_tesla_1_cell_1_22_gerhard};
\addlegendentry{ $p=1$, Bembel Error}
\addplot+[line width=1.5pt,mark size =2.5pt,mark options = {solid}, dotted,red] table [trim cells=true,x=M,y=cst_err] {data/1cell/cim_tesla_1_cell_1_22_gerhard};
\addlegendentry{ $p=1$, CST Error}
\addplot+[line width=1.5pt,mark size =2.5pt,blue] table [trim cells=true,x=M,y=est_err] {data/1cell/cim_tesla_1_cell_2_22_gerhard};
\addlegendentry{ $p=2$, Bembel Error}
\addplot+[line width=1.5pt,mark size =2.5pt,mark options = {solid}, dotted,blue] table [trim cells=true,x=M,y=cst_err] {data/1cell/cim_tesla_1_cell_2_22_gerhard};
\addlegendentry{ $p=2$, CST Error}
\addplot+[line width=1.5pt,mark size =2.5pt,brown] table [trim cells=true,x=M,y=est_err] {data/1cell/cim_tesla_1_cell_3_22_gerhard};
\addlegendentry{ $p=3$, Bembel Error}
\addplot+[line width=1.5pt,mark size =2.5pt,mark options = {solid}, dotted,brown] table [trim cells=true,x=M,y=cst_err] {data/1cell/cim_tesla_1_cell_3_22_gerhard};
\addlegendentry{ $p=3$, CST Error}
\addplot+[line width=1.5pt,mark size =2.5pt,gray] table [trim cells=true,x=M,y=est_err] {data/1cell/cim_tesla_1_cell_4_22_gerhard};
\addlegendentry{ $p=4$, Bembel Error}
\addplot+[line width=1.5pt,mark size =2.5pt,mark options = {solid}, dotted,gray] table [trim cells=true,x=M,y=cst_err] {data/1cell/cim_tesla_1_cell_4_22_gerhard};
\addlegendentry{ $p=4$, CST Error}
\end{axis}
\end{tikzpicture}
\caption{Eigenvalue problem of the one-cell TESLA cavity solved for the first eigenvalue. Error has to be understood as the relative difference to a reference solution. The Bembel error refers to the difference of our method to a reference computation with $h=1/8$ and $p=5$ of our own implementation, and the CST error to the difference w.r.t.~a reference solution obtained via CST Microwave Studio 2018.}\label{fig::cim::tesla1}
\end{figure}
\section{Introduction}
\label{sec:int}
Very high-energy (VHE, E$>$100\,GeV) gamma-ray astronomy has the potential to provide unique insights on open issues
related to cosmology and particle physics. Additionally, it serves as an important probe for multi-wavelength
and multi-messenger astronomy. VHE emission from the astrophysical sources can be observed from the ground by studying
the shower of secondary charged particles initiated through the interaction of the primary gamma-rays with
the atmosphere \citep{2015CRPhy..16..610D}. Among various techniques employed, the Imaging
Atmospheric Cherenkov Telescopes (IACTs) detect the primary gamma-rays through the image of the Cherenkov light pool
caused by the secondary shower.
The current-generation IACTs, which include \emph{HESS} \citep{2006A&A...457..899A}, \emph{MAGIC} \citep{2012APh....35..435A}, and
\emph{VERITAS} \citep{2006APh....25..391H}, have been contributing extensively to VHE astrophysics for nearly two decades. In tandem with the telescopes operating at other
energies (e.g. \emph{Fermi}-LAT \citep{2009ApJ...697.1071A}, \emph{Swift} \citep{2005SSRv..120..165B}, \emph{NuSTAR} \citep{2013ApJ...770..103H} and \emph{XMM-Newton} \citep{2001A&A...365L...1J}),
IACTs have provided clues to understand the non-thermal emission processes in blazars \citep{KNODLSEDER2016663}.
With the advent of new-generation ground-based VHE telescopes, including the proposed Cherenkov Telescope Array Observatory (CTAO) \citep{2019scta.book.....C},
gamma-ray astronomy is entering a new era. Operating from a few tens of GeV to the multi-TeV energy band, the CTAO is designed to be the
largest and most sensitive gamma-ray observatory \citep{2021arXiv210804512G}. It will be configured as two arrays of Cherenkov telescopes, one in each of the Earth's hemispheres, and is expected to start science operations at full capacity within a few years.
CTAO, along with other upcoming VHE experiments such as the Large High Altitude Air Shower Observatory (\emph{LHAASO}) \citep{Cao:2021LM},
the ASTRI Mini-Array (\emph{ASTRI MA}) \citep{Antonelli:2021ml}, the Southern Wide-field Gamma-ray Observatory (\emph{SWGO}) \citep{BarresdeAlmeida:2021xgv},
and the Major Atmospheric Cherenkov Experiment (\emph{MACE}) \citep{2021arXiv210704297H}, will be able to explore the gamma-ray sky with unprecedented performance, notably in the
multi-TeV energy range.
Blazars dominate the extragalactic sky at VHE energies. These objects are a subclass of radio-loud active galactic nuclei (AGNs) having a
relativistic jet pointing close to the observer's line of sight \citep{Urry_1995}.
The spectral energy distribution (SED) of blazars is dominated by the non-thermal emission from the jet and consists of two broad peaks.
The low-energy component extends from radio to X-rays and is attributed to synchrotron emission, while the high-energy component is commonly
interpreted as the inverse Compton (IC) scattering of low-energy photons by the relativistic electron distribution in the jet \citep{Urry_1995}.
Blazars are further classified into BL Lac objects and flat spectrum radio quasars (FSRQs) based on the presence or absence of broad emission/absorption
line features in their optical spectra.
The synchrotron SED of BL Lacs generally peaks at optical-to-X-ray energies, whereas for FSRQs this spectral peak
falls in the infrared-optical energy range. Besides the variation in peak location, the IC component of blazars is significantly different
for BL Lacs and FSRQs. In particular, the IC luminosity of FSRQs is larger than the synchrotron luminosity, commonly referred to as
{\it Compton dominance}, while the two are of similar order in the case of BL Lacs \citep{1994ApJ...421..153S}. The target photon field for the IC process is
also different for these two classes of blazars. The IC scattering of synchrotron photons (the synchrotron self-Compton mechanism: SSC)
is quite successful in explaining
the Compton spectral component of BL Lacs. However, modelling the high-energy SED of FSRQs through the IC mechanism demands an additional
photon field other than the synchrotron emission. This photon field can be external to the jet, and the plausible sources are
the thermal photons from the accretion disc \citep{1993ApJ...416..458D}, broad emission line photons \citep{B_a_ejowski_2000} and/or the IR photons from the dusty torus \citep{1994ApJ...421..153S}.
The interaction of VHE gamma-rays with the extragalactic background light (EBL) results in the formation of electron/positron pairs \citep{DWEK2013112}.
This attenuation results in a steepening of the observed VHE spectrum of distant blazars, often causing the flux to fall below the telescope sensitivity.
This makes the Universe opaque above a few GeV for objects at larger redshifts (the gamma-ray horizon), which was initially predicted at
z > 0.1 \citep{1967PhRv..155.1404G}. The improved sensitivity of the current-generation Cherenkov telescopes and better estimates
of the EBL intensity have significantly modified the gamma-ray horizon; nevertheless, the detection of the FSRQs 4FGL\,J0221.1+3556 and 4FGL\,J1443.9+2501, located at
redshifts 0.954 \citep{2016A&A...595A..98A} and 0.939 \citep{2015ApJ...815L..22A}, has questioned the validity of the current EBL models. In our earlier work \citep{2022MNRAS.511..994M} (hereafter Paper I),
using a correlation study between the observed VHE spectral indices of blazars and their redshifts, we have shown that the
commonly used EBL models deviate from the VHE observations of blazars, particularly at high redshifts.
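The attenuation can be illustrated with a short numerical sketch: the observed flux follows from the intrinsic one as $F_{\rm obs}(E)=F_{\rm int}(E)\,e^{-\tau(E,z)}$. The power-law index and the linear toy opacity below are purely illustrative and not taken from any EBL model:

```python
import math

# Toy illustration of EBL attenuation: the observed spectrum is the
# intrinsic one damped by exp(-tau(E, z)).  All numbers are hypothetical.
def tau_toy(E_TeV, z):
    return 2.0 * z * E_TeV                    # crude linear toy opacity

def observed_flux(E_TeV, z, norm=1.0, gamma=2.0):
    intrinsic = norm * E_TeV ** (-gamma)      # intrinsic power law
    return intrinsic * math.exp(-tau_toy(E_TeV, z))

# the suppression grows rapidly with both energy and redshift
for E in (0.1, 0.3, 1.0):                     # energies in TeV
    print(E, observed_flux(E, z=0.5))
```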
Knowledge of the EBL intensity and of the intrinsic source VHE spectrum is crucial for identifying VHE blazar candidates. The latter is often
estimated by extrapolating the \emph{Fermi} high-energy gamma-ray spectrum to VHE energies \citep{2021MNRAS.508.6128P, 2021ApJ...916...93Z}. The VHE blazar candidates are then obtained
by convolving the extrapolated spectrum with the EBL-induced opacity predicted through cosmological models. However, such candidates have been put forth
only for BL Lacs \citep{Massaro_2013,10.1093/mnras/stz812,10.1093/mnras/stz3532,10.1093/mnras/stz3018,2021MNRAS.508.6128P,2021ApJ...916...93Z}, and no such study has been performed for FSRQs. The primary reason is that the \emph{Fermi} spectrum of FSRQs generally
deviates from a power law and is often represented by a log-parabolic function, and extending this function to VHE energies rolls off the spectrum significantly.
Additionally, FSRQs are populated at higher redshifts, and the current EBL models disfavour them as probable candidates.
Our earlier study (Paper I) suggests the EBL intensity may be much lower than that predicted by the cosmological models.
Consistently, this also suggests the gamma-ray horizon may fall at much larger redshifts than presumed. Moreover, from the
X-ray spectral studies of blazars it is known that the log-parabolic function is successful in reproducing only a narrow energy
band \citep{2004A&A...413..489M}. Hence, it may not be judicious to expect the \emph{Fermi} log-parabolic spectral shape to extend up to VHE.
In this work, we predict the plausible VHE FSRQ candidates considering these discrepancies. We perform a linear extrapolation
of the high energy spectrum of 586 FSRQs listed in 4FGL-DR2 catalog to VHE as a prediction for the intrinsic VHE spectra.
To account for the reduction in EBL intensity, suggested by VHE observations of FSRQs (Paper I), we add a redshift dependent
correction factor to the EBL opacity provided by \cite{Franceschini_2017}. These modifications are then used to predict the
list of VHE FSRQs that can be studied by CTAO and other operational/upcoming Cherenkov telescopes. In \S \ref{sec:fsrq_cand}, we first introduce a correction factor to the existing EBL estimates (using Paper I), followed by intrinsic VHE flux estimations using \emph{Fermi} spectral information. The section concludes by over-plotting the sensitivity curves of present and upcoming VHE telescopes to look for possible VHE FSRQ candidates. Finally, the results are summarised and discussed in \S \ref{sec:discussion}. In this work we adopt a cosmology with $\rm \Omega_M = 0.3$, $\rm \Omega_\Lambda = 0.7$, and $\rm H_0 = 71\ km\ s^{-1}\ Mpc^{-1}$.
\section{VHE FSRQ Candidates}
\label{sec:fsrq_cand}
\subsection{Modified EBL photon density}
The EBL photon density predicted by cosmological models is unable to explain the VHE detection of FSRQs at
large redshift. A similar conclusion is also drawn from the correlation study between the observed
VHE spectral index and the redshift (Paper I). These results suggest that the predicted EBL intensity at
large redshifts may be considerably larger than the actual value. To account for this excess, we introduce
a redshift dependent correction factor $a(z)$ to the commonly used EBL opacity $\tau(E,z)$ by \cite{Franceschini_2017},
\begin{align}\label{eq:taucorr}
\tau_{c}(E,z) = a(z) \tau(E,z)
\end{align}
where $\tau_c$ is the corrected opacity and $E$ is the VHE photon energy. This correction factor results in a
harder observed VHE spectral index $\Gamma_{\rm obs}$ for a putative intrinsic spectrum (Figure~\ref{fig:fig0}), as given in Paper I,
\begin{align}
\Gamma_{\rm obs} = \Gamma_{\rm int} + a(z) \frac{d\tau}{d\ln E}
\end{align}
where $\Gamma_{\rm int}$ is the intrinsic VHE spectral index and $a(z) < 1$. Estimation of $a(z)$ requires
knowledge of $\Gamma_{\rm int}$, which is obtained from the y-intercept of the best-fit straight line to the
$\Gamma_{\rm obs}$--$z$ distribution.
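Both effects of the correction are linear, as a minimal numerical sketch makes explicit. All numerical values below are purely illustrative and are not the fitted values of this work:

```python
def corrected_opacity(tau, a_z):
    """Corrected EBL opacity tau_c(E, z) = a(z) * tau(E, z), where tau
    is taken from a cosmological model (e.g. Franceschini et al. 2017)."""
    return a_z * tau

def observed_index(gamma_int, a_z, dtau_dlnE):
    """Observed VHE photon index, Gamma_obs = Gamma_int + a(z) * dtau/dlnE.
    For a(z) < 1 the observed spectrum is harder than the uncorrected
    model would predict."""
    return gamma_int + a_z * dtau_dlnE

# Illustrative numbers only: an intrinsic index of 2.5, an opacity
# gradient dtau/dlnE = 2.0, and a correction factor a(z) = 0.4
# give Gamma_obs = 2.5 + 0.4 * 2.0 = 3.3, harder than the a(z) = 1
# value of 4.5.
gamma_obs = observed_index(2.5, 0.4, 2.0)
```

The sketch only encodes the two defining relations; the actual values of $a(z)$ follow from the regression described in the text.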
\setcounter{figure}{0}
\begin{figure}
\centering
\includegraphics[scale=0.3, angle=270]{figures/modelsfsrq_final.eps}
\caption{Comparison of observed VHE spectral indices with those estimated using the EBL model of \citet{Franceschini_2017} for FSRQs. The black solid line represents the best-fit straight line to the 7 FSRQs detected at VHE (Paper I), and the dashed line represents the index values estimated using the EBL model. The grey region shows the deviation of the EBL model from the observational best-fit line to the VHE indices.}
\label{fig:fig0}
\end{figure}
\subsection{Intrinsic VHE Source Flux}
The opacity of VHE photons, along with the correction factor of equation (\ref{eq:taucorr}), lets us derive the
observed flux of FSRQs provided the intrinsic flux is known. To estimate the latter, we perform a linear
extrapolation of the high energy spectrum of FSRQs to VHE. The information regarding the high energy spectrum
is obtained from the fourth \emph{Fermi} catalogue, 4FGL-DR2. The data used in this catalogue were collected
over a period of ten years, from 2008 August 4 (15:43 UTC) to 2018 August 2 (19:13 UTC).
It employs the same analysis methods as the 4FGL catalogue in the energy range of 50 MeV to 1 TeV.
Among the 5788 sources listed in the catalogue, 744 are classified as FSRQs. Since the attenuation due to the EBL
depends upon the source redshift, the redshifts are obtained using the online databases
NED\footnote{\url{https://ned.ipac.caltech.edu/}} and SIMBAD\footnote{\url{http://simbad.cfa.harvard.edu/simbad/}}.
These databases provide the redshifts of 586 FSRQs listed in 4FGL-DR2 and hence only these sources are considered for the
present study. The information regarding the high energy spectrum of these FSRQs is extracted from the xml file available
online\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/10yr_catalog/gll_psc_v26.xml}} and
the same is plotted as a black dotted line in Figures~\ref{fig:fig1}, \ref{fig:fig2}.
The \emph{Fermi} spectral behaviour of the selected sample of FSRQs is restricted to three distinct shapes. Among the 586
individual source spectra, 263 are fitted with a log-parabola, 321 with a power law, and 2 with a power law with an exponential cutoff.
For the extrapolation to the VHE band, we assume a power-law extension with index equal to the local spectral slope
at 10 GeV, since the \emph{Fermi} sensitivity is appreciable up to this energy.
The power law extrapolated from the \emph{Fermi} fit to the VHE band is shown in Figures~\ref{fig:fig1}, \ref{fig:fig2}
as a red dotted line, and this serves as the intrinsic VHE spectrum $F_{\rm int}$ for the present study.
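The anchoring of the extension at 10 GeV can be sketched as follows. We write the log-parabola in the \emph{Fermi}-LAT convention $dN/dE = N_0 (E/E_0)^{-(\alpha+\beta\ln(E/E_0))}$ (natural logarithm), for which the local photon index is $\Gamma(E) = \alpha + 2\beta\ln(E/E_0)$; matching the normalisation of the power law at the 10 GeV pivot is our assumption about the implementation, and all parameter values are hypothetical:

```python
import math

def logparabola(E, N0, alpha, beta, E0):
    """Log-parabola photon spectrum dN/dE in the Fermi-LAT convention
    (natural logarithm). Units of E and N0 are arbitrary here."""
    return N0 * (E / E0) ** (-(alpha + beta * math.log(E / E0)))

def local_index(E, alpha, beta, E0):
    """Local photon index Gamma(E) = -dln(dN/dE)/dlnE of the
    log-parabola, used to anchor the VHE power-law extension."""
    return alpha + 2.0 * beta * math.log(E / E0)

def extrapolated_powerlaw(E, N0, alpha, beta, E0, E_pivot=10.0):
    """Power law matched to the log-parabola flux and slope at E_pivot
    (10 GeV here), serving as the intrinsic VHE spectrum F_int."""
    gamma = local_index(E_pivot, alpha, beta, E0)
    F_pivot = logparabola(E_pivot, N0, alpha, beta, E0)
    return F_pivot * (E / E_pivot) ** (-gamma)
```

Because the log-parabola keeps steepening above the pivot (for $\beta>0$), this power-law extension lies above the direct log-parabola extrapolation at VHE, avoiding the artificial roll-off noted in the introduction.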
The observed VHE spectrum $F_{\rm obs}$ is then obtained using $\tau_c$ from equation (\ref{eq:taucorr})
\begin{align}
F_{\rm obs} (E,z) = F_{\rm int}(E)\,e^{-\tau_c(E,z)}
\end{align}
The EBL-attenuated VHE spectrum for the selected FSRQs is shown as a
solid black line in Figures~\ref{fig:fig1}, \ref{fig:fig2}. The attenuated spectrum considering the original \cite{Franceschini_2017} EBL model is shown as a green dotted line.
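Applied bin by bin over a VHE energy grid, the attenuation step amounts to the following sketch; treating the model opacity as a callable (e.g. an interpolation of the tabulated \citet{Franceschini_2017} values) is our assumption about the implementation:

```python
import math

def attenuated_spectrum(energies, F_int, tau_model, a_z):
    """EBL-attenuated flux F_obs(E) = F_int(E) * exp(-a(z) * tau(E))
    on a grid of VHE energies. `tau_model` is a callable returning the
    model opacity at the source redshift; a_z = 1 recovers the
    uncorrected model attenuation, while a_z < 1 yields the larger
    observed fluxes used in this work."""
    return [f * math.exp(-a_z * tau_model(e)) for f, e in zip(F_int, energies)]
```

For any $a(z) < 1$ the corrected spectrum lies strictly above the uncorrected one wherever $\tau > 0$, which is why the corrected model pushes more FSRQs above the instrument sensitivities.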
\subsection{Comparison with IACT sensitivity}
To identify the FSRQ candidates detectable by the operational and upcoming IACTs, we compare the predicted EBL attenuated
VHE spectrum with the available sensitivity curves.
The sensitivity curves of \emph{CTAO}-South and North are obtained from the \emph{CTAO}
webpage\footnote{\url{https://www.cta-observatory.org/science/cta-performance/}}
while the sensitivity curve for \emph{VERITAS}
is obtained from \emph{VERITAS}
webpage\footnote{\url{https://veritas.sao.arizona.edu/about-veritas/veritas-specifications/}}.
For other instruments such as \emph{MAGIC}, \emph{H.E.S.S}, \emph{SWGO} and \emph{MACE}, the sensitivity curves are obtained from
\citet{2016APh....72...76A}, \citet{Holler:2016Av}, \citet{2019arXiv190208429A}
and \citet{2017} respectively. These sensitivities are calculated at 5$\sigma$ significance for different exposure times and are
shown in Figures~\ref{fig:fig1}, \ref{fig:fig2} with legends.
Comparing the sensitivity of the IACTs with the predicted VHE spectra of the \emph{Fermi}-detected FSRQs, we identify the plausible
VHE FSRQ candidates and list them in Table~\ref{tab:tab1}. Our results suggest that 37 FSRQs fall within the observational
threshold of \emph{CTAO}, while 34 fall within that of \emph{CTAO}'s first construction phase (Alpha configuration).
With the operational IACTs, the numbers of FSRQs falling within the observational threshold are 5 for \emph{VERITAS}, 3 for \emph{HESS},
and 2 for \emph{MAGIC}, while 2 sources fall under the detection threshold of the upcoming observatory \emph{SWGO}.
Blazars being extremely variable, the intrinsic VHE flux derived from cumulative \emph{Fermi} observations
may portray only the average spectrum. To account for this variability, we assume a flaring scenario where
the predicted VHE flux is doubled. We refer to the FSRQs that fall within the detection threshold of the IACTs
under this assumption as marginally detectable. In Table~\ref{tab:tab2}, we list the marginally detectable FSRQs
and find that an additional 44 sources can be detected by \emph{CTAO} and 3 sources by \emph{SWGO}. With the existing IACTs, we find that 7 FSRQs for \emph{VERITAS}, 2 for \emph{HESS} and
6 for \emph{MAGIC} fall within the marginal detection threshold.
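The selection behind the D/M/N flags in Tables~\ref{tab:tab1} and \ref{tab:tab2} can be summarised by the sketch below. The precise comparison used here (predicted flux exceeding the differential sensitivity curve in at least one energy bin of a common grid) is our reading of the procedure, not a quoted algorithm:

```python
def classify(F_obs, sensitivity):
    """Classify a source against an instrument's differential sensitivity,
    both evaluated on a common energy grid:
      'D' (detectable)            if the predicted flux exceeds the
                                  sensitivity in at least one bin,
      'M' (marginally detectable) if only the doubled (flaring) flux does,
      'N' (not detectable)        otherwise."""
    if any(f > s for f, s in zip(F_obs, sensitivity)):
        return "D"
    if any(2.0 * f > s for f, s in zip(F_obs, sensitivity)):
        return "M"
    return "N"
```

Running this per instrument, with the corresponding exposure-specific sensitivity curve, reproduces the structure of the D/M/N columns in the tables.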
\section{Summary \& Discussion}
\label{sec:discussion}
The IACTs are narrow angle telescopes and require long duration observation of distant sources for significant detection.
Hence, identification of plausible candidate FSRQs becomes a necessity for effective utilization of the telescope.
Our earlier study, based on the correlation between the observed VHE spectral index and redshift (Paper I), suggests the universe may
be more transparent to VHE photons than predicted by cosmological EBL models. Taking a cue from this, we
predict the observed VHE fluxes of \emph{Fermi}-detected FSRQs and compare them with the sensitivities of operational/upcoming
telescopes. We find that a significant number of FSRQs, listed in Table~\ref{tab:tab1} and Table~\ref{tab:tab2}, can be studied by CTAO, while a few of them can be studied even
with the operational IACTs. The sources which fall within the sensitivity limits of \emph{MAGIC}/\emph{VERITAS}/\emph{HESS} and are not reported in VHE yet are 4FGL\,J0043.8+3425, 4FGL\,J0803.2-0337 and 4FGL\,J0957.6+5523 (Table~\ref{tab:tab1}).
Among them, 4FGL\,J0043.8+3425, located at $z=0.966$, will be the second most distant VHE FSRQ if detected, next only to the newly announced 4FGL\,J0348.5-2749
at $z = 0.991$ \citep{2021ATel15020....1W}. Interestingly, the VHE emission from the FSRQ 4FGL\,J1159.5+2914, recently announced by \emph{VERITAS} \citep{2017ATel11075....1M} and \emph{MAGIC} \citep{2017ATel11061....1M}, is also
predicted to be a candidate source in this work (see Table~\ref{tab:tab1}).
Among the FSRQs predicted for CTAO, we find 11 sources at $z>1$ fall under the detectable list and 13 under marginally detectable list (see Table~\ref{tab:tab1},\ref{tab:tab2}).
If detected, these high-redshift sources may pose challenges to the existing cosmological EBL models. Alternatively, they can
also play an important role in understanding cosmology. Limits on the EBL are mainly obtained through numerical models of
galaxy formation and/or evolution with appropriate cosmological initial conditions \citep{2011MNRAS.410.2556D}. The model parameters
are fine-tuned to reproduce the observed universe. VHE identification of sources at large redshifts can therefore be an
important element in constraining
the EBL, which in turn can provide a better understanding of galaxy formation and evolution. These identifications can also be used to test
alternative theories involving oscillations between photons and axion-like particles proposed beyond the standard model \citep{2018PrPNP.102...89I,2018JHEAp..20....1G}.
Considering the fact that the blazars are extremely variable \citep{2010ApJ...722..520A,2020A&A...634A..80R}, certain FSRQs may still be
detectable by operational/upcoming telescopes even though our study suggests otherwise.
For instance, the FSRQs 4FGL\,J0739.2+0137
and 4FGL\,J1422.3+3223 fall below our criteria for detection even though they have been detected at VHE. This probably indicates
that during the flaring epoch, the flux of these sources can be enhanced by more than a factor of two over the average flux. Consistently,
we find that the \emph{Fermi} flux of 4FGL\,J1422.3+3223 during the VHE detection is typically more than 100 times larger than the average flux quoted in the 4FGL catalogue \citep{2020ATel13382....1C}. Considering a similar factor of flux enhancement in the present work would make
this source also detectable by the operational IACTs \emph{MAGIC} and \emph{VERITAS}. A similar conclusion can be drawn for the newly
announced FSRQ 4FGL\,J0348.5-2749, where the increase in \emph{Fermi} flux, contemporaneous with the VHE detection, was $\sim$200 times
the average flux reported in the \emph{Fermi} 4FGL catalogue \citep{2021ATel15020....1W}.
The VHE FSRQ candidates predicted in this work depend on
the choice of $a(z)$ and the robustness of the regression line.
However, the regression line is obtained by fitting merely 7 data points and may deviate with future detections. Since the
estimation of $a(z)$ assumes the intrinsic VHE index to be the y-intercept of the regression line, any deviation in the fit parameters
can modify these predictions considerably. Conversely, a better regression analysis needs more FSRQs to be detected in VHE
and the prediction based on the available information can facilitate this requirement. Detection of more VHE FSRQs with precise
index measurements will also let us fit the redshift-index dependence with non-linear functions. Such a study will also have a
major role in constraining the cosmological models.
VHE spectrum of FSRQs is better explained by a power-law function and hence we have assumed the intrinsic source
spectrum also to be a power-law. If we consider the EC interpretation for the VHE emission, the Klein-Nishina effects
will be substantial and the spectrum will deviate from a simple power-law \citep{1993ApJ...416..458D}. In addition, emission
at VHE may involve high energy electrons that may fall close to the cut-off energy of the underlying electron
distribution \citep{1998A&A...333..452K}. Under these conditions, the intrinsic VHE spectrum may deviate
significantly from a power-law. Again, the Klein-Nishina effect depends upon the energy of the target photons, and under extreme limits
the VHE spectrum will be very steep, with the photon spectrum nearly following the electron distribution \citep{1970RvMP...42..237B}.
Hence, VHE studies of FSRQs also have the potential to probe the photon-field environment of FSRQs.
\section{Acknowledgement}
M.Z, S.S, N.I \& A.M acknowledge the financial support provided by the Department of Atomic Energy (DAE), Board of Research in Nuclear Sciences (BRNS), Govt. of India via Sanction Ref No.: 58/14/21/2019-BRNS. SZ is supported by the Department of Science and Technology, Govt. of India, under the INSPIRE Faculty grant (DST/INSPIRE/04/2020/002319). This research has made use of the CTA instrument response functions provided by the CTA Consortium and Observatory, see \url{https://www.cta-observatory.org/science/cta-performance/} (version prod3b-v2; \url{https://doi.org/10.5281/zenodo.5163273} and version prod5 v0.1; \url{https://doi.org/10.5281/zenodo.5499840}) for more details.
\section{Data Availability}
The codes and model used in this work will be shared on reasonable request to the corresponding author Malik Zahoor (email:
[email protected]).
\newpage
\setcounter{figure}{1}
\begin{figure*}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0043.8+3425.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0102.8+5824.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0134.5+2637.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0202.4+0849.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0206.0-0958.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0221.1+3556.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0237.8+2848.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0348.5-2749.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0445.1-6012.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0457.0-2324.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0510.0+1800.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0515.6-4556.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0730.3-1141.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0803.2-0337.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0808.2-0751.eps}
\caption{37 VHE FSRQ candidates falling within the observational threshold of CTAO and other instruments. The black dashed line represents the \emph{Fermi} 4FGL-DR2 spectral fit. The red dashed line shows the linear extrapolation of the \emph{Fermi} spectrum to VHE. The dashed green and solid black lines represent the EBL-attenuated spectrum using the \citet{Franceschini_2017} and corrected EBL models, respectively. The differential sensitivities of \emph{CTAO}-North and South (Omega configuration, 50h), \emph{MAGIC} (50h), \emph{VERITAS} (50h), MACE (50h), H.E.S.S. (50h) and SWGO (5y) are also plotted. Sensitivity curves are shown in different colours as indicated in the figure legends.}
\label{fig:fig1}
\end{figure*}
\newpage
\setcounter{figure}{1}
\begin{figure*}
\includegraphics[width=0.30\textwidth]{figures/Detected/J0957.6+5523.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1048.4+7143.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1127.0-1857.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1159.5+2914.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1224.9+2122.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1256.1-0547.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1427.9-4206.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1443.9+2501.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1504.4+1029.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1512.8-0906.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1553.6-2422.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1642.9+3948.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1722.7+1014.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J1924.8-2914.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J2000.9-1748.eps}
\caption{Continued from Figure~2.}
\end{figure*}
\newpage
\setcounter{figure}{1}
\begin{figure*}
\includegraphics[width=0.30\textwidth]{figures/Detected/J2025.6-0735.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J2158.1-1501.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J2232.6+1143.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J2244.2+4057.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J2253.9+1609.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J2329.3-4955.eps}
\includegraphics[width=0.30\textwidth]{figures/Detected/J2345.2-1555.eps}
\caption{Continued from Figure~2.}
\end{figure*}
\newpage
\setcounter{figure}{2}
\begin{figure*}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0041.4+3800.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0112.8+3208.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0113.4+4948.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0118.9-2141.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0137.0+4751.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0210.7-5101.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0217.8+0144.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0236.8-6136.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0259.4+0746.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0339.5-0146.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0423.3-0120.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0505.3+0459.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0526.2-4830.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0532.6+0732.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0532.9-8325.eps}
\caption{44 VHE FSRQ candidates falling within the marginal observational threshold of CTAO and other instruments (when the predicted VHE flux is doubled). The black dashed line represents the \emph{Fermi} 4FGL-DR2 spectral fit. The red dashed line shows the linear extrapolation of the \emph{Fermi} spectrum to VHE. The dashed green and solid black lines represent the EBL-attenuated spectrum using the \citet{Franceschini_2017} and corrected EBL models, respectively. The differential sensitivities of \emph{CTAO}-North and South (Omega configuration, 50h), \emph{MAGIC} (50h), \emph{VERITAS} (50h), MACE (50h), H.E.S.S. (50h) and SWGO (5y) are also plotted. Sensitivity curves are shown in different colours as indicated in the figure legends.}
\label{fig:fig2}
\end{figure*}
\newpage
\setcounter{figure}{2}
\begin{figure*}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0719.3+3307.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0725.2+1425.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0748.6+2400.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J0922.4-0528.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1006.7-2159.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1016.0+0512.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1033.9+6050.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1043.2+2408.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1123.4-2529.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1153.4+4931.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1246.7-2548.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1345.5+4453.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1401.2-0915.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1419.4-0838.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1512.2+0202.eps}
\caption{Continued from Figure~3.}
\end{figure*}
\newpage
\setcounter{figure}{2}
\begin{figure*}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1522.1+3144.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1549.5+0236.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1733.0-1305.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1834.2+3136.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1849.2+6705.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J1852.4+4856.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J2023.6-1139.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J2030.4-0502.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J2143.5+1743.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J2156.3-0036.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J2236.3+2828.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J2321.9+2734.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J2323.5-0317.eps}
\includegraphics[width=0.30\textwidth]{figures/Marginally_Detected/J2348.0-1630.eps}
\caption{Continued from Figure~3.}
\end{figure*}
\begin{landscape}
\begin{table}
\begin{center}
\caption{List of VHE FSRQ candidates falling within the observational threshold of CTAO. D, M and N depict detected, marginally detected and non detected sources corresponding to different instruments.}
\label{tab:tab1}
\begin{tabular}{cccccccccccc}
\hline
\multirow{2}{*}{Source name (4FGL-DR2)}&\multirow{2}{*}{R.A.}& \multirow{2}{*}{Decl.}& \multirow{2}{*}{Redshift}& \multirow{2}{*}{\emph{Fermi} spectral type}&CTAO (Omega) & CTAO (Alpha) & VERITAS&MAGIC&MACE&HESS&SWGO\\
&&&&&(50 hour) & (50 hour) & (50 hour)& (50 hour)& (50 hour)&(50 hour) & (5 years)\\
\hline\hline
4FGL J0043.8+3425 & 00 43 53.2 & +34 25 54 & 0.966 & LogParabola & D & D & D & M & N & N & N\\
4FGL J0102.8+5824 & 01 02 48.2 & +58 24 33 & 0.644 & LogParabola & D & D & N & N & N & N & N\\
4FGL J0134.5+2637 & 01 34 30.5 & +26 37 46 & 0.571 & PowerLaw & D & D & M & M & N & N & N\\
4FGL J0202.4+0849 & 02 02 28.5 & +08 49 44 & 0.629 & PowerLaw & D & D & N & N & N & N & N\\
4FGL J0206.0-0958 & 02 06 04.4 & -09 58 09 & 0.16584 & PowerLaw & D & D & N & N & N & N & D\\
4FGL J0221.1+3556 & 02 21 07.4 & +35 56 09 & 0.96 & LogParabola & D & D & M & N & N & N & N\\
4FGL J0237.8+2848 & 02 37 53.7 & +28 48 16 & 1.206 & LogParabola & D & D & N & N & N & N & N\\
4FGL J0348.5-2749 & 03 48 34.0 & -27 49 49 & 0.99 & LogParabola & D & D & N & N & N & N & N\\
4FGL J0445.1-6012 & 04 45 08.6 & -60 12 44 & 0.097 & PowerLaw & D & D & N & N & N & N & M\\
4FGL J0457.0-2324 & 04 57 02.6 & -23 24 54 & 1.00 & LogParabola & D & D & N & N & N & M & N\\
4FGL J0510.0+1800 & 05 10 04.3 & +18 00 49 & 0.42 & LogParabola & D & D & M & N & N & N & N\\
4FGL J0515.6-4556 & 05 15 37.4 & -45 56 54 & 0.194 & PowerLaw & D & M & N & N & N & N & N\\
4FGL J0730.3-1141 & 07 30 18.6 & -11 41 20 & 1.59 & LogParabola & D & D & N & N & N & N & N\\
4FGL J0803.2-0337 & 08 03 17.9 & -03 37 08 & 0.365 & PowerLaw & D & D & D & M & N & D & D\\
4FGL J0808.2-0751 & 08 08 15.6 & -07 51 20 & 1.84 & LogParabola & D & M & N & N & N & N & N\\
4FGL J0957.6+5523 & 09 57 39.9 & +55 23 02 & 0.902000 & LogParabola & D & D & D & D & N & N & N\\
4FGL J1048.4+7143 & 10 48 25.6 & +71 43 47 & 1.1500 & LogParabola & D & D & N & N & N & N & N\\
4FGL J1127.0-1857 & 11 27 03.2 & -18 57 50 & 1.05 & LogParabola & D & D & N & N & N & N & N\\
4FGL J1159.5+2914 & 11 59 32.2 & +29 14 41 & 0.72475 & LogParabola & D & D & D & M & N & N & N\\
4FGL J1224.9+2122 & 12 24 54.6 & +21 22 53 & 0.43383 & LogParabola & D & D & D & M & N & N & N\\
4FGL J1256.1-0547 & 12 56 10.0 & -05 47 19 & 0.53620 & LogParabola & D & D & D & D & N & D & M\\
4FGL J1427.9-4206 & 14 27 56.8 & -42 06 22 & 1.52 & LogParabola & D & D & N & N & N & N & N\\
4FGL J1443.9+2501 & 14 43 58.4 & +25 01 45 & 0.9397 & LogParabola & D & D & M & N & N & N & N\\
4FGL J1504.4+1029 & 15 04 24.8 & +10 29 52 & 1.83795 & LogParabola & D & D & N & N & N & N & N\\
4FGL J1512.8-0906 & 15 12 51.5 & -09 06 23 & 0.36 & LogParabola & D & D & N & N & N & D & M\\
4FGL J1553.6-2422 & 15 53 36.6 & -24 22 07 & 0.33 & PowerLaw & D & D & N & N & N & N & N\\
4FGL J1642.9+3948 & 16 42 56.2 & +39 48 59 & 0.59541 & PowerLaw & D & D & N & N & N & N & N\\
4FGL J1722.7+1014 & 17 22 44.6 & +10 14 05 & 0.732 & PowerLaw & D & D & N & N & N & N & N\\
4FGL J1924.8-2914 & 19 24 51.3 & -29 14 48 & 0.35263 & PowerLaw & D & D & N & N & N & N & N\\
4FGL J2000.9-1748 & 20 00 56.3 & -17 48 59 & 0.65 & PowerLaw & D & D & N & N & N & N & N\\
4FGL J2025.6-0735 & 20 25 41.3 & -07 35 40 & 1.388000 & LogParabola & D & M & N & N & N & N & N\\
4FGL J2158.1-1501 & 21 58 06.6 & -15 01 25 & 0.67 & PowerLaw & D & D & N & N & N & N & N\\
4FGL J2232.6+1143 & 22 32 36.6 & +11 43 50 & 1.032 & PLSuperExpCutoff2 & D & D & M & M & N & N & N\\
4FGL J2244.2+4057 & 22 44 14.7 & +40 57 35 & 1.171 & LogParabola & D & D & M & N & N & N & N\\
4FGL J2253.9+1609 & 22 53 59.1 & +16 09 02 & 0.859001 & PLSuperExpCutoff2 & D & D & M & N & N & N & N\\
4FGL J2329.3-4955 & 23 29 19.1 & -49 55 57 & 0.518 & LogParabola & D & D & N & N & N & N & N\\
4FGL J2345.2-1555 & 23 45 12.7 & -15 55 06 & 0.621 & LogParabola & D & D & N & N & N & M & N\\
\hline
\end{tabular}
\end{center}
\end{table}
\end{landscape}
\newpage
\begin{landscape}
\begin{table}
\begin{center}
\caption{List of VHE FSRQ candidates falling within the marginal observational threshold of CTAO. D, M and N depict detected, marginally detected and non detected sources corresponding to different instruments.}
\label{tab:tab2}
\begin{tabular}{cccccccccccc}
\hline
\multirow{2}{*}{Source name (4FGL-DR2)}&\multirow{2}{*}{R.A.}& \multirow{2}{*}{Decl.}& \multirow{2}{*}{Redshift}& \multirow{2}{*}{\emph{Fermi} spectral type}&CTAO (Omega) & CTAO (Alpha) & VERITAS & MAGIC & MACE & HESS & SWGO\\
&&&&&(50 hour) & (50 hour) & (50 hour)& (50 hour)& (50 hour)&(50 hour) & (5 years)\\
\hline\hline
4FGL J0041.4+3800 & 00 41 28.7 & +38 00 35 & 0.379 & PowerLaw & M & M & N & N & N & N & N\\
4FGL J0112.8+3208 & 01 12 53.4 & +32 08 24 & 0.6100 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0113.4+4948 & 01 13 28.4 & +49 48 19 & 0.39 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0118.9-2141 & 01 18 54.1 & -21 41 41 & 1.16 & LogParabola & M & N & N & N & N & N & N\\
4FGL J0137.0+4751 & 01 37 02.5 & +47 51 49 & 0.86 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0210.7-5101 & 02 10 46.7 & -51 01 18 & 1.003 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0217.8+0144 & 02 17 50.9 & +01 44 05 & 1.72 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0236.8-6136 & 02 36 48.5 & -61 36 38 & 0.466569 & PowerLaw & M & M & N & N & N & N & N\\
4FGL J0259.4+0746 & 02 59 25.9 & +07 47 00 & 0.89 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0339.5-0146 & 03 39 30.5 & -01 46 37 & 0.85 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0423.3-0120 & 04 23 18.2 & -01 20 03 & 0.91609 & LogParabola & M & N & N & N & N & N & N\\
4FGL J0505.3+0459 & 05 05 22.3 & +04 59 58 & 0.95 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0526.2-4830 & 05 26 17.1 & -48 30 54 & 1.3041 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0532.6+0732 & 05 32 41.3 & +07 32 57 & 1.254 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0532.9-8325 & 05 32 58.9 & -83 25 57 & 0.774 & PowerLaw & M & N & N & N & N & N & N\\
4FGL J0719.3+3307 & 07 19 21.6 & +33 07 24 & 0.779 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0725.2+1425 & 07 25 17.8 & +14 25 16 & 1.038 & LogParabola & M & M & N & N & N & N & N\\
4FGL J0748.6+2400 & 07 48 39.3 & +24 01 00 & 0.40932 & PowerLaw & M & N & N & N & N & N & N\\
4FGL J0922.4-0528 & 09 22 27.0 & -05 28 29 & 0.974 & LogParabola & M & M & N & N & N & N & N\\
4FGL J1006.7-2159 & 10 06 46.3 & -21 59 28 & 0.33 & PowerLaw & M & M & N & N & N & N & N\\
4FGL J1016.0+0512 & 10 16 02.2 & +05 12 32 & 1.701000 & PowerLaw & M & M & N & N & N & N & N\\
4FGL J1033.9+6050 & 10 33 56.4 & +60 50 57 & 1.408000 & LogParabola & M & M & N & N & N & N & N\\
4FGL J1043.2+2408 & 10 43 13.3 & +24 08 46 & 0.560000 & PowerLaw & M & N & N & N & N & N & N\\
4FGL J1123.4-2529 & 11 23 28.4 & -25 29 17 & 0.146 & PowerLaw & M & N & N & N & N & N & N\\
4FGL J1153.4+4931 & 11 53 24.1 & +49 31 01 & 0.33364 & PowerLaw & M & M & N & N & N & N & N\\
4FGL J1246.7-2548 & 12 46 45.3 & -25 48 06 & 0.63 & LogParabola & M & M & N & N & N & N & N\\
4FGL J1345.5+4453 & 13 45 34.6 & +44 53 04 & 2.542000 & LogParabola & M & M & N & N & N & N & N\\
4FGL J1401.2-0915 & 14 01 13.5 & -09 15 28 & 0.667 & PowerLaw & M & N & N & N & N & N & N\\
4FGL J1419.4-0838 & 14 19 26.4 & -08 38 30 & 0.903 & LogParabola & M & M & N & N & N & N & N\\
4FGL J1512.2+0202 & 15 12 16.8 & +02 02 25 & 0.21945 & LogParabola & M & M & N & N & N & N & N\\
4FGL J1522.1+3144 & 15 22 10.9 & +31 44 22 & 1.4886 & LogParabola & M & M & N & N & N & N & N\\
4FGL J1549.5+0236 & 15 49 32.4 & +02 36 30 & 0.41421 & PowerLaw & M & N & N & N & N & N & N\\
4FGL J1733.0-1305 & 17 33 03.2 & -13 05 09 & 0.90 & LogParabola & M & M & N & N & N & N & N\\
4FGL J1834.2+3136 & 18 34 15.1 & +31 36 13 & 0.594 & PowerLaw & M & M & N & N & N & N & N\\
4FGL J1849.2+6705 & 18 49 16.6 & +67 05 27 & 0.66 & LogParabola & M & M & N & N & N & N & N\\
4FGL J1852.4+4856 & 18 52 27.8 & +48 56 06 & 1.250 & PowerLaw & M & M & N & N & N & N & N\\
4FGL J2023.6-1139 & 20 23 36.8 & -11 39 31 & 0.698 & LogParabola & M & N & N & N & N & N & N\\
4FGL J2030.4-0502 & 20 30 29.8 & -05 02 57 & 0.543 & PowerLaw & M & N & N & N & N & N & N\\
4FGL J2143.5+1743 & 21 43 34.6 & +17 43 50 & 0.21 & LogParabola & M & M & N & N & N & N & N\\
4FGL J2156.3-0036 & 21 56 19.1 & -00 36 14 & 0.495 & PowerLaw & M & N & N & N & N & N & N\\
4FGL J2236.3+2828 & 22 36 23.1 & +28 29 00 & 0.790 & LogParabola & M & M & N & N & N & N & N\\
4FGL J2321.9+2734 & 23 21 58.1 & +27 34 18 & 1.25 & PowerLaw & M & M & N & N & N & N & N\\
4FGL J2323.5-0317 & 23 23 33.2 & -03 17 24 & 1.41 & LogParabola & M & M & N & N & N & N & N\\
4FGL J2348.0-1630 & 23 48 03.8 & -16 30 58 & 0.58 & LogParabola & M & M & N & N & N & N & N\\
\hline
\end{tabular}
\end{center}
\end{table}
\end{landscape}
\bibliographystyle{mnras}
\section{Introduction}
The Large Hadron Collider (LHC) has confirmed the existence of the Higgs boson. The next main objective of future searches is to discover what lies beyond the Standard Model (BSM).
Supersymmetric (SUSY) extensions of the SM are among the most well-studied scenarios for BSM physics, and are mainly motivated by
theoretical considerations (e.g., the hierarchy problem). Although the LHC has not found any signals of BSM physics so far, SUSY extensions of the SM are not ruled out yet.
Among a variety of SUSY scenarios, the possibility of a light stop has received a lot of attention since it is naturally well motivated~\cite{Dimopoulos:1995mi,Pomarol:1995xc,Cohen:1996vb,Kitano:2006gv,Kats:2011qh,Brust:2011tb,Brust:2012uf,Papucci:2011wy,Espinosa:2012in}. A light stop is generally expected in the minimal supersymmetric Standard Model (MSSM) due to a large positive Yukawa coupling term in the renormalization group equation for the stop mass and possibly large mixing term between left and right stops in the
stop mass matrix. On top of that, there are other motivations for the light stop coming from cosmological considerations. First, a light stop with a mass that is a few tens of $\,{\rm GeV}$ above the lightest SUSY particle can successfully account for the thermal relic density of dark matter \cite{Boehm:1999bj}. Second, electroweak baryogenesis is possible in the MSSM with a light stop \cite{Huet:1995sh,Carena:2008vj,Li:2008ez}. Recently, the authors of Ref. \cite{Delgado:2012eu} have found that a light stop mass between 200 and $400\,{\rm GeV}$ (with several additional conditions) is consistent with the currently available experiment constraints such as the $126\,{\rm GeV}$ Higgs mass, $B\to X_s \gamma$, etc.
Collider experiments search for stops decaying to a top quark and a neutralino or to a bottom quark and a chargino. If the stop is so light that these decays are kinematically forbidden, the bounds from these searches do not apply and the stop is likely long-lived.
See Refs.~\cite{Cao:2012rz,Kribs:2013lua,Kowalska:2013ica,Han:2013kga} for light stop searches from stop decays to top quark and neutralino or bottom quark and chargino.
This SUSY spectrum can be obtained if the
bino and wino masses are comparable to the Higgsino mass term so that the lightest neutralino
and chargino masses are not degenerate.
If stops exist, they will be produced at the LHC mainly through gluon-gluon fusion, much like the main production channel of the SM
Higgs boson. The stops can be produced either singly or in pairs.
If pair produced, they can then form a bound state, stoponium, through the strong Coulomb interaction.
This bound state can then undergo two-body decays to SM particles, and the bound state will appear as a resonance above the SM background for $\gamma \gamma$, $W^+W^-$, or $Z^0 Z^0$, for example. In earlier work~\cite{Kim:2008bx,Idilbi:2010rs}, we showed that these are good channels in which to search for heavy BSM particles that are strongly interacting. In this work we apply the methodology of Refs.~\cite{Kim:2008bx,Idilbi:2010rs} to stop pairs. For early work advocating searching for stoponium in the $\gamma \gamma$ channel,
see Refs.~\cite{Drees:1993uw,Martin:2008sv}.
Since the LHC is a hadronic machine where two protons collide at very high energies, the partons inside the hadrons will initiate a hard reaction
responsible for the production of massive particles. To separate nonperturbative long-distance QCD effects from the calculable short-distance effects, we derive factorization theorems for the production process. These theorems clarify what is perturbatively calculable
and what is the proper form of the relevant hadronic matrix elements to be determined from experiment (or a nonperturbative QCD calculation). In cases where such factorization theorems hold, one then needs to deal with large logarithms encountered in perturbative calculations. Such large logarithms exist because the production process is characterized by several widely separated scales and thus large logarithms of the ratios of these scales need to be resummed.
In this work we utilize the effective field theory approach to establish a factorization theorem for the production of massive stop pairs. This
is done by constructing effective operators, at each relevant scale, that mediate the production reaction. In our work, the relevant theories are soft-collinear effective theory (SCET) \cite{SCET1,SCETf,Bauer:2003mga}
and heavy-scalar effective theory (HSET).
The former describes the multiscale physics behind the production of the stop pair through the gluon-gluon fusion process. The factorization of the production process into hard, soft, and collinear parts allows us to implement the threshold resummation by solving the renormalization group equations for each of these parts. HSET describes the production of a slowly moving stop pair whose strong Coulomb interactions will bind the stop pair into the stoponium.
The strong Coulomb interactions are resummed to all orders using the Coulomb Green's function and including the finite width of the stoponium yields a resonant shape in the vicinity of the stoponium mass~\cite{Beneke:2010da,Falgari:2012hx}. For recent next-to-next-to-leading logarithmic (NNLL) resummed calculations of squark and gluino production, including the Coulomb Green's function, see Ref.~\cite{Beneke:2013opa}.
Threshold resummation of squark and gluino production in Mellin space has been calculated up to NNLL accuracy in Refs.~\cite{Langenfeld:2009eg,Langenfeld:2010vu,Langenfeld:2012ti,Pfoh:2013iia,Beenakker:2010nq,Beenakker:2011sf,Beenakker:2013mva}. NNLL resummation in momentum space for stop pair production was recently reported in Refs.~\cite{Broggio:2013uba,Broggio:2013cia}.
The factorization theorem obtained allows us to resum large logarithms when the stop pair is produced near the partonic threshold, which is the
kinematic region of interest for a light stop at LHC energies.
The accuracy of resummation depends on the knowledge we have of the perturbatively calculable anomalous dimensions and beta functions appearing in the formulas
for the resummed cross section. In this work the resummation is performed up to NNLL accuracy. The phenomenological impact of resummation is discussed below.
Finally, we consider the decay rates of the stoponium bound state, denoted $\tilde \sigma$, in two channels: $pp\to\tilde{\sigma}\to\gamma\gamma$ and $pp\to\tilde{\sigma}\to ZZ$
which, as we will argue below, are the most promising channels for searching for stoponium. Current bounds on the stop mass are frequently presented as exclusion plots in the neutralino mass--stop mass plane, with stop masses being excluded up to 700 GeV for certain neutralino masses. A gap in the exclusion plots exists wherever the stop mass is less than (approximately) the sum of the top mass and neutralino mass, and this gap extends down to stop masses of about 200 GeV. (The plots we are referring
to can be seen in Ref.~\cite{ATLAS:directStop}. See also Ref.~\cite{Chatrchyan:2013xna} for a recent direct stop search at CMS.) For this reason, in this paper we focus on light stop masses lying between 200 and $400\,{\rm GeV}$.
We also study the dependence of the resonant cross section on MSSM parameter choice while taking into account uncertainties resulting from different choices of the hard, soft, and factorization scales. As expected NNLL resummation greatly reduces these scale uncertainties.
We also discuss the required luminosity at LHC energies, 8 and $14\,{\rm TeV}$, and consider five different stoponium masses ranging from 400 to $800\,{\rm GeV}$.
Our findings show that, independent of the MSSM parameter space, we can determine whether a stoponium resonance with mass up to $500\,{\rm GeV}$ exists within the first LHC run
at $14\,{\rm TeV}$, assuming 400 fb$^{-1}$ of integrated luminosity, through either the $\gamma\gamma$ or $ZZ$ decay modes.
Based on our analysis we could not exclude any stoponium mass within that mass range with the currently accumulated LHC data at $8\,{\rm TeV}$.
This paper is organized as follows. In Sec.~II, we outline the theoretical framework for deriving the effective Lagrangian for the production of massive stops through the strong interactions. In Sec.~III, we consider the near-threshold production cross section for $\tilde {\sigma}$ in $pp$ collisions and obtain the factorization theorem for this process. This includes the Green's function responsible for resumming the strong Coulomb interaction. In Sec.~IV, we derive the
cross section including threshold resummation for
$pp \rightarrow \tilde{\sigma}X$ followed by two-body decays to SM particles. This resummation is performed directly in momentum space. In Sec.~V, we present our phenomenological results. These include plots of the branching fractions for stoponium to various two-body SM final states and
cross sections for $pp \to \tilde \sigma \to \gamma \gamma, ZZ$. We study these as functions of the stoponium mass, for various choices of MSSM parameters, and for two different
LHC collision energies, 8 and 14 TeV. We conclude in Sec.~VI.
In Appendix A, we give the expressions for the rates for stoponium decaying into two SM particles.
In Appendix B we collect all the formulas for the anomalous dimensions and beta functions needed to obtain the NNLL threshold resummation for our cross section.
Appendix C contains the explicit formulas for the next-to-leading order (NLO) Coulomb Green's function.
\section{Effective Lagrangian Near Threshold}
At the LHC, stop pairs are produced dominantly via the $gg$ fusion process through the strong interactions.
The gluons couple to the stops via the kinetic terms for the stops in the MSSM Lagrangian,
\begin{equation}
\label{LQCD}
\mathcal{L}_{\mathrm{\tilde t}} = - \tilde{t}^{\dagger} D^2 \tilde{t} - m_{\tilde{t}}^2 ~\tilde{t}^{\dagger} \tilde{t},
\end{equation}
where
$D_{\mu} = \partial_{\mu} - ig A_{\mu}$
and $\tilde{t}$ is the scalar top (stop) field. Near threshold where the partonic center-of-mass (CM) energy
$\hat{s}$ is approximately $(2m_{\tilde{t}})^2$, the produced stop pair moves slowly, hence the $\tilde {t}$ can be represented
by a nonrelativistic heavy scalar field, analogous to the heavy quark field in heavy quark effective theory (HQET), where the velocity becomes a label of the quantum field.
The HSET fields are related to the full theory fields by
\begin{eqnarray}\label{sqmat}
\tilde{t}(x) &=&\frac{1}{\sqrt{2m_{\ti}}} (e^{-im_{\ti} v\cdot x} \tilde{t}_v (x) + e^{im_{\ti} v\cdot x} \tilde{t}_{-v} (x)), \\
\tilde{t}^{\dagger} (x) &=&\frac{1}{\sqrt{2m_{\ti}}} (e^{im_{\ti} v\cdot x} \tilde{t}_v^{\dagger} (x) + e^{-im_{\ti} v\cdot x} \tilde{t}_{-v}^{\dagger} (x)). \nonumber
\end{eqnarray}
The $\tilde{t}_v$ and $\tilde{t}_{-v}$ are the HSET fields for stop and antistop respectively.
Putting Eq.~(\ref{sqmat}) into Eq.~(\ref{LQCD}), and dropping terms with nontrivial exponentials that vanish in the $m_{\tilde t} \to \infty$ limit, we obtain the HSET Lagrangian up to $O(1/m_{\ti})$:
\begin{eqnarray}\label{HSET}
\mathcal{L}_{\mathrm{HSET}} &=& \tilde{t}_v^{\dagger} v\cdot iD \tilde{t}_v - \frac{1}{2m_{\ti}} \tilde{t}_v^{\dagger} D^2 \tilde{t}_v \\
&&+\tilde{t}_{-v}^\dagger(-v)\cdot iD \tilde{t}_{-v} - \frac{1}{2m_{\ti}} \tilde{t}_{-v}^{\dagger} D^2 \tilde{t}_{-v}. \nonumber
\end{eqnarray}
Here the second and fourth terms are suppressed by $\mathcal{O}(1/m_{\tilde t})$. The covariant derivative in Eq.~(\ref{HSET}) can be written as
\begin{equation}
\label{devsp}
D^{\mu} = \partial_s^{\mu} - igA_s^{\mu} + \partial_p^{\mu}- igA_p^{\mu} = D_s^{\mu}+D_p^{\mu},
\end{equation}
where $A_s$ is the soft gluon, and $A_p$ is the potential gluon, the exchange of which gives rise to Coulombic potential between the stop and antistop. We also separate the
derivatives, i.e., $\partial = \partial_s + \partial_p$ requiring $[\partial_s,A_p] = [\partial_p,A_s]=0$.
The HSET Lagrangian in Eq.~(\ref{HSET}) encodes the interaction of the stop field $\tilde{t}$ with soft and Coulomb gluons. Those two interactions can be
decoupled via gluon field redefinitions where one defines hatted fields through
\begin{equation}
\label{reds}
gA_s^{\mu} = g\hat{A}_s^{\mu} + \hat{Y}_v[iD_p^{\mu}, \hat{Y}_v^{\dagger}].
\end{equation}
The hatted field is a newly defined soft field, and $\hat{Y}_v$ is the timelike soft Wilson line, which is given by
\begin{equation}\label{redY}
\hat{Y}_v(x) = \mathrm{P} \exp \Bigl(ig \int^x_{-\infty} ds ~v\cdot \hat{A}_s (v s) \Bigr),
\end{equation}
where ``P'' represents a path-ordered integral. The covariant derivative $D^{\mu}$ in Eq.~(\ref{devsp}) can be expressed in terms of $\hat{A}_s$ as
$D^{\mu} = \hat{D}_s^{\mu} + \hat{Y}_v D_p^{\mu} \hat{Y}_v^{\dagger}$. Next we redefine the heavy stop field
as $\tilde{t}_v = \hat{Y}_v \tilde{t}_v^{(0)}$, where the newly defined field $\tilde{t}_v^{(0)}$ does not interact with soft fields (at LO in $1/m_{\tilde t}$)
and the soft interactions of
$\tilde t$ are taken care of by the soft Wilson line $\hat{Y}_v$. When the HSET Lagrangian is expressed in terms of the redefined fields
$\hat{A}_s$ and $\tilde{t}_v^{(0)}$ it then becomes an effective Lagrangian describing a nonrelativistic stop strongly interacting only with Coulomb gluons.
For convenience of notation, in the rest of the paper we will drop the hats on these fields.
Now we construct the effective interaction Lagrangian for $gg \to \tilde{t}^{\dagger} \tilde{t}$ to be denoted below by $\mathcal{L}_{\mathrm{EFT}}$. Near partonic threshold,
only soft and collinear gluons can be emitted into the final state. By collinear we mean collinear to one of the incoming beams. It is useful then to construct an effective operator basis in
the irreducible color representation since, in this basis, the effective operators do not mix. Since the possible irreducible
color representations of stop pair are only $\bf 1$ and $\bf 8$, the production channels allowed by color conservation are:
$(R_i,R_f) =\bf(1,1),~(8_S,8),~(8_A,8)$, where $R_{i}$ and $R_f$ denote the color representations of the initial and final states.
The effective Lagrangian is then
\begin{eqnarray}\label{EFTL}
\mathcal{L}_{\mathrm{EFT}} = \sum_{k=1}^3 C_k (Q^2,\mu) \mathcal{O}_k (\mu),
\end{eqnarray}
where $Q^2 \sim 4m_{\tilde{t}}^2$ is the typical hard scale (squared) for stop pair production, and
the effective operators $\mathcal{O}_k$ are
\begin{equation}\label{efto}
\mathcal{O}_k = \frac{1}{2m_{\tilde{t}}^3} E_{ab\alpha\beta}^{(k)} (\mathcal{Y}_n \mathcal{B}^{\mu}_{n\perp})^a (\mathcal{Y}_{\overline{n}} \mathcal{B}^{\perp}_{\overline{n}\mu})^b (\tilde{t}_v^{\dagger} Y_v)_{\alpha} (Y_v^{\dagger} \tilde{t}_{-v})_{\beta},
\end{equation}
where we introduced two light-cone vectors $n$ and $\overline{n}$ for the two beam directions. They satisfy $n^2=\overline{n}^2=0$ and $n\cdot \overline{n} = 2$.
Superscripts (subscripts) $a$ and $b$ ($\alpha$ and $\beta$) are color indices in the adjoint (fundamental) representation.
$\mathcal{B}_{n\perp}^{a\mu}$ is an $n$-collinear gluon field strength tensor at LO in the SCET power counting parameter, $\lambda \sim p_{\perp}/\overline{n}\cdot p$, where $\overline{n} \cdot p$
is the large collinear momentum
component of an $n$-collinear gluon. It is defined as
\begin{equation}\label{Bn}
\mathcal{B}_{n\perp}^{a\mu} = i \overline{n}^{\rho} g^{\mu\nu}_{\perp} G^b_{n,\rho\nu} \mathcal{W}_n^{ba}
= i\overline{n} ^{\rho} g^{\mu\nu}_{\perp} \mathcal{W}_n^{\dagger,ab} G^b_{n,\rho\nu},
\end{equation}
where $\mathcal{W}_n$ is an $n$-collinear Wilson line in the adjoint representation given by
\begin{equation}\label{colW}
\mathcal{W}_n^{ab} (x) = \mathrm{P} \exp \Bigl(ig \int^x_{-\infty} ds ~\overline{n}\cdot A_n^c (\overline{n} s) t^c \Bigr)^{ab}\,.
\end{equation}
The $n$-collinear gluon field is $ A_n^c$ and $(t^c)^{ab} = -if^{cab}$ is a generator in the adjoint representation.
$\mathcal{B}_{\overline{n}\perp}^{a\mu}$ is defined in the same way as $\mathcal{B}_{n\perp}^{a\mu}$ with $n$ and $\overline{n}$ interchanged. In Eq.~(\ref{efto}) we have decoupled the soft interactions from the $n$- and $\overline{n}$-collinear fields, which gives rise to the soft Wilson lines $\mathcal{Y}_n$ and $\mathcal{Y}_{\overline{n}}$ in the adjoint representation.
These soft Wilson lines are
\begin{eqnarray}
\mathcal{Y}_n^{ab} (x) &=& \mathrm{P} \exp \Bigl(ig \int^x_{-\infty} ds ~n\cdot A_s^c (n s) t^c\Bigr)^{ab}, \\
\mathcal{Y}_{\overline{n}}^{ab} (x) &=& \mathrm{P} \exp \Bigl(ig \int^x_{-\infty} ds ~\overline{n}\cdot A_s^c (\overline{n} s) t^c \Bigr)^{ab}.
\end{eqnarray}
In Eq.~(\ref{efto}) the color coefficient for each operator is defined as~\cite{Beneke:2009rj}
\begin{equation} \label{ecoe}
E_{ab\alpha\beta}^{(k)}= E_{ab\alpha\beta}^{(R_i,R_f)}=\frac{C^{R_i}_{lab}{C^{R_f}_{l\alpha\beta}}^*}{\sqrt{\mathrm{dim}~R_i}},
\end{equation}
where $C_{lab}^{R_i}$ and $C^{R_f}_{l\alpha\beta}$ are the Clebsch-Gordan coefficients for the color octet and triplet respectively, and $l$
is a dummy index running from 1 to dim$\,R_i$. The $E_{ab\alpha\beta}^{(i)}$ satisfy the orthonormality relation
\begin{equation}
E_{ab\alpha\beta}^{(i)} E_{ab\alpha\beta}^{(j)\,*} = \delta^{ij}\,.
\end{equation}
In the case of $gg \to \tilde{t}^{\dagger} \tilde{t}$, the coefficients are
\begin{eqnarray}
E^{(1)}_{ab\alpha\beta} &=& E^{\bf(1,1)}_{ab\alpha\beta}=\frac{1}{\sqrt{N_c D_A}} \delta_{ab}\delta_{\alpha\beta}, \nonumber \\
\label{coef}
E^{(2)}_{ab\alpha\beta} &=& E^{\bf(8_S,8)}_{ab\alpha\beta}=\frac{1}{\sqrt{2 B_F D_A}} D^k_{ab}T^k_{\alpha\beta}, \\
E^{(3)}_{ab\alpha\beta} &=& E^{\bf(8_A,8)}_{ab\alpha\beta}=\sqrt{\frac{2}{N_c D_A}} F^k_{ba}T^k_{\alpha\beta}, \nonumber
\end{eqnarray}
where $F^k_{ab} = t^k_{ab} = -if^{kab}$ is the totally antisymmetric tensor in color space and $D^k_{ab} = d^{kab}$ is the totally symmetric one.
The color factors are $B_F = \frac{N_c^2-4}{4N_c}$, and $D_A = N_c^2-1$.
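As a cross-check of the color algebra (not part of the original derivation), the orthonormality relation for the $E^{(k)}$ tensors can be verified numerically by constructing them explicitly from the Gell-Mann matrices; a minimal Python sketch:

```python
import numpy as np

# Gell-Mann matrices; SU(3) generators are T^a = lambda^a / 2
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1; lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7][0, 0] = lam[7][1, 1] = 1 / np.sqrt(3); lam[7][2, 2] = -2 / np.sqrt(3)
T = lam / 2

# structure constants: f^{abc} = -2i Tr([T^a,T^b]T^c), d^{abc} = 2 Tr({T^a,T^b}T^c)
f = np.zeros((8, 8, 8))
d = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        comm = T[a] @ T[b] - T[b] @ T[a]
        anti = T[a] @ T[b] + T[b] @ T[a]
        for c in range(8):
            f[a, b, c] = np.real(-2j * np.trace(comm @ T[c]))
            d[a, b, c] = np.real(2 * np.trace(anti @ T[c]))

Nc, DA = 3, 8
BF = (Nc**2 - 4) / (4 * Nc)

# E^{(k)}_{ab,alpha beta} for (1,1), (8_S,8), (8_A,8); note F^k_{ba} = -i f^{kba}
E = np.zeros((3, 8, 8, 3, 3), dtype=complex)
for a in range(8):
    for b in range(8):
        E[0, a, b] = (a == b) * np.eye(3) / np.sqrt(Nc * DA)
        E[1, a, b] = sum(d[k, a, b] * T[k] for k in range(8)) / np.sqrt(2 * BF * DA)
        E[2, a, b] = np.sqrt(2 / (Nc * DA)) * sum(-1j * f[k, b, a] * T[k] for k in range(8))

# Gram matrix E^(i) . E^(j)* should come out as the 3x3 identity
gram = np.einsum('iabmn,jabmn->ij', E, E.conj())
```

The diagonal entries test the normalizations in Eq.~(\ref{coef}), while the vanishing off-diagonal entries follow from $d^{kaa}=0$, $f^{kaa}=0$, and the antisymmetry of $f$ against the symmetry of $d$.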
\begin{figure}[t]
\begin{center}
\includegraphics[width=16cm]{spair.pdf}
\end{center}
\vspace{-0.3cm}
\caption{
\label{fig1}
\baselineskip 3.0ex
Tree-level processes for $gg \to \tilde{t}\ti^{\dagger}$.}
\end{figure}
We calculate the leading Wilson coefficients of the operators in Eq.~(\ref{efto}) by computing the relevant Feynman diagrams in figure \ref{fig1}.
For transversely polarized gluons, the first two diagrams in figure \ref{fig1} are ${\cal O}(\beta^2)$, where $\beta=\sqrt{1-4m_{\tilde t}^2/{\hat s}}$, and therefore vanish at threshold.
Thus only the last diagram contributes to the matching coefficient. The results are
\begin{eqnarray} \label{wilt}
C_1 = C_{\bf 1,1} &=& \pi\alpha_s \sqrt{\frac{D_A}{N_c}} = \sqrt{\frac{8}{3}} \pi \alpha_s \,, \nonumber \\
C_2 =C_{\bf 8_S,8} &=& \pi\alpha_s \sqrt{2B_F D_A} = \sqrt{\frac{20}{3}} \pi \alpha_s\,, \nonumber \\
C_3=C_{\bf 8_A,8}&=& 0\,.
\end{eqnarray}
Therefore the leading effective Lagrangian is
\begin{equation}
\label{lagt}
\mathcal{L}^{(0)}_{\mathrm{EFT}} = \frac{\pi\alpha_s}{2m_{\tilde{t}}^3}
(\mathcal{Y}_n \mathcal{B}^{\mu}_{n\perp})^a (\mathcal{Y}_{\overline{n}} \mathcal{B}^{\perp}_{\overline{n}\mu})^b
\Bigl[\frac{\delta_{ab}}{N_c} \tilde{t}_v^{\dagger}\tilde{t}_{-v} + \mathcal{Y}_v^{\dagger km}D^k_{ab} \tilde{t}_v^{\dagger} T^m \tilde{t}_{-v}\Bigr].
\end{equation}
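As a quick arithmetic check of the matching coefficients in Eq.~(\ref{wilt}) for $N_c=3$ (the value of the coupling below is purely illustrative):

```python
import math

Nc = 3
DA = Nc**2 - 1                 # dimension of the adjoint representation, = 8
BF = (Nc**2 - 4) / (4 * Nc)    # = 5/12

alpha_s = 0.1                  # illustrative coupling value, not from the text
C1 = math.pi * alpha_s * math.sqrt(DA / Nc)        # singlet channel, sqrt(8/3) pi alpha_s
C2 = math.pi * alpha_s * math.sqrt(2 * BF * DA)    # symmetric-octet channel, sqrt(20/3) pi alpha_s
```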
\section{Factorization of stop pair production near partonic threshold}
Near the partonic threshold for $gg \to \tilde{t}^{\dagger} \tilde{t}$, the scattering cross section for $pp \to \tilde{t}^{\dagger} \tilde{t} X$ can be factorized into hard, soft,
collinear and Coulombic parts. The derivation of the factorization theorem
is similar to the one obtained in Ref.~\cite{Idilbi:2010rs}, where the production of
color-octet scalar pairs is studied. The factorization theorem for the final state with color representation $R_f$ is
\begin{eqnarray}
\sigma_{R_f} (pp \to \tilde{t}\ti^{\dagger}X) &=& \int d x_1 dx_2 d\eta \sum_{R_i} \frac{|C_{R_i,R_f}(M,\mu)|^2}{8m_{\tilde{t}}^6 (N_c^2-1)^2} \hat{s} \tilde{f}_{g/p} (x_1) \tilde{f}_{g/p} (x_2) \nonumber \\
\label{fact1}
&&\times S_{R_i,R_f} (\eta)~ \mathrm{Im}\, G_{R_f} (0,0,E+i\Gamma_{\tilde{t}}).
\end{eqnarray}
Here $E = \hat{s}^{1/2} - 2m_{\tilde{t}} -\eta = M -2m_{\tilde t}$, where $M$ is the invariant mass of the stop pair and $\eta$ is given by $p_{X_S}^0$
in the CM frame for the incoming partons, i.e., $\eta$ is the total energy carried by soft particles in the CM frame.
We note that $M$ is approximately equal to the stoponium mass since the difference is negligible when $M$ is considered as the highest available scale and all other scales are small compared to it. Thus, in our analysis, we do not distinguish between the stoponium mass and the invariant mass of the stop pair.
$G_{R_f}(0,0, E) $ is the Green's function for the final $R_f$ state, which describes Coulombic interactions between the
heavy stop pair.
In our work, we use the Green's function computed to NLO in $\alpha_s$. The explicit formulas are given in Eqs.~(\ref{GFLO}) and (\ref{GFNLO}) of Appendix C.
$\Gamma_{\tilde t}$ is the stop decay rate.
Before continuing the derivation of the factorization theorem, we pause to discuss the possibility of bound-state formation.
The SUSY scenario we focus on is one where the stop is the next-to-lightest SUSY particle (NLSP)
and the stop mass is less than the sum of the top mass and neutralino mass as well as the sum of the bottom and chargino mass so the tree-level two-body decays of stop are forbidden. In order for this scenario to be realized we must have $m_{\chi^0} < m_{\tilde t} < m_b + m_{\chi^+}$
(where $m_{\chi^+}$ and $m_{\chi^0}$ are the masses of the lightest chargino and neutralino, respectively). Therefore, the neutralino and chargino cannot be degenerate. This type of SUSY spectrum can be obtained by relaxing the ``natural SUSY'' requirement $M_1,M_2 \gg |\mu|$.
In this scenario, the main stop decay channels are the loop-induced decay to a charm quark and a neutralino, and three- and four-body cascade decays. For these decay
channels, $\Gamma_{\tilde t}$ is a few keV or smaller~\cite{Hikasa:1987db,Porod:1996at}. Comparing this width with the stoponium binding energy, which is $1$-$3 \,{\rm GeV}$, we see that the stop pair will live long enough to form the stoponium before decaying. Since $\Gamma_{\tilde t} \ll m_{\tilde t}$, finite-width effects are negligible~\cite{Falgari:2012sq}.
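The comparison between the stop width and the binding energy can be made quantitative with the leading-order Coulomb estimate $E_{\rm bind} \simeq (C_F\alpha_s)^2 m_{\tilde t}/4$ for the ground state (reduced mass $m_{\tilde t}/2$). The sketch below assumes $\alpha_s \simeq 0.14$ at the inverse Bohr radius; this value is an illustrative assumption, not one quoted in the text:

```python
CF = 4.0 / 3.0           # fundamental Casimir for SU(3)
alpha_s = 0.14           # assumed coupling at the inverse Bohr radius (illustrative)

def binding_energy(m_stop):
    # ground-state Coulomb binding energy, E = (CF*alpha_s)^2 * mu_red / 2
    # with reduced mass mu_red = m_stop / 2
    return (CF * alpha_s)**2 * m_stop / 4.0

E300 = binding_energy(300.0)     # in GeV, for a 300 GeV stop
width = 1e-6                     # a few-keV stop width, expressed in GeV
```

For stop masses in the 200--400 GeV window this gives binding energies of order a few GeV, many orders of magnitude above the keV-scale width, consistent with the statement above.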
$\tilde f_{g/p}$ is the collinear function for the gluon field, which is matched onto the standard parton distribution function (PDF).
The soft function $S_{R_i,R_f}(\eta)$ is defined as
\begin{eqnarray}
\label{softf}
S_{R_i,R_f} (\eta) &=& \sqrt{\mathrm{dim} R_i}~ E_{ab\alpha\beta}^{(R_i,R_f)*}E_{cd\gamma\delta}^{(R_i,R_f)}E_{pqrs}^{(R_f,R_f)*} \\
&&\times \langle 0 | \mathcal{Y}_n^{\dagger ea} \mathcal{Y}_{\overline{n}}^{\dagger fb} Y_{v,\alpha p}^{\dagger} Y_{v,q\beta} \delta(\eta+ i\partial_0) \mathcal{Y}_n^{ce} \mathcal{Y}_{\overline{n}}^{df} Y_{v,r\gamma} Y_{v,\delta s}^{\dagger} | 0 \rangle. \nonumber
\end{eqnarray}
At tree level we have: $S_{R_i,R_f}^{(0)}(\eta) = \delta (\eta)$.
If we use the variable $z= M^2/\hat{s} = \tau/(x_1 x_2) \sim 1$ where $\tau = M^2/s$ and $s$ is the CM energy of the incoming two protons, the soft
momentum $\eta$ can be written as
\begin{equation}
\label{softm}
\eta = \hat{s}^{1/2} - M = \hat{s}^{1/2} (1-z^{1/2}) \sim \frac{M}{2}(1-z).
\end{equation}
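The small-$(1-z)$ approximation in Eq.~(\ref{softm}) is easy to check numerically, since with $\hat s = M^2/z$ the exact relation is $\eta = M(z^{-1/2}-1)$; an illustrative sketch:

```python
M = 600.0  # GeV; illustrative stoponium mass

def eta_exact(z):
    # eta = sqrt(shat)(1 - sqrt(z)) with sqrt(shat) = M / sqrt(z)
    return M * (z**-0.5 - 1.0)

def eta_approx(z):
    return 0.5 * M * (1.0 - z)

z = 0.999
rel_err = abs(eta_exact(z) - eta_approx(z)) / eta_approx(z)
```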
The differential scattering cross section as a function of the invariant mass $M$ is
\begin{eqnarray} \label{dfact1}
\frac{d\sigma_{R_f}}{dM} &=& \sum_{R_i} H_{R_i,R_f} (M,\mu) \frac{M}{(2m_{\tilde{t}})^6} \mathrm{Im}\, G_{R_f} (0,0,M-2m_{\tilde{t}}+i\Gamma_{\tilde{t}}) \\
&&\times~\tau \int^1_{\tau} \frac{dz}{z} \bar{S} _{R_i,R_f} (1-z,\mu) \tilde F_{gg} \Big(\frac{\tau}{z},\mu\Big), \nonumber
\end{eqnarray}
where the hard function $H_{R_i,R_f}(M,\mu)$ is given by
\begin{equation} \label{hardf}
H_{R_i,R_f} (M,\mu) = 16 \frac{|C_{R_i,R_f}(M,\mu)|^2}{(N_c^2-1)^2}.
\end{equation}
The function $\tilde F_{ij}(x,\mu)$ is the convolution of two collinear functions from $i,j$ initial partons:
\begin{equation}
\label{convf}
\tilde F_{ij} (x,\mu) = \int^1_x \frac{dy}{y} \tilde f_{i/p} (y,\mu) \tilde f_{j/p} (x/y,\mu),
\end{equation}
and the dimensionless soft function in Eq.~(\ref{dfact1})
$\bar{S}_{R_i,R_f}(1-z) = (M/2) S_{R_i,R_f} (\eta)$ is normalized so that $\bar{S}_{R_i,R_f}^{(0)}(1-z) = \delta(1-z)$.
The soft function $\bar{S}_{R_i,R_f}(1-z)$ as well as $S_{R_i,R_f} (\eta)$ are infrared (IR) divergent. Hence Eq.~(\ref{dfact1}) cannot describe
the same low energy physics as full QCD if the collinear function $\tilde{f}_{g/p}$ is a genuine PDF. Recently it was pointed out that one has to
subtract the contribution of the mode $p_s \sim Q(1-z)$ from the collinear function in order to avoid double-counting problems between
the collinear and soft parts~\cite{Chay:2012jr,Chay:2013zya}\footnote{This subtraction is done partonically and order by order in perturbation theory.}. Then the collinear function can be matched onto the PDF.
The gluonic collinear function can be written as the convolution of the collinear kernel and the gluon PDF~\cite{Chay:2013zya,Fleming:2006cd}
\begin{equation}\label{colm}
\tilde f_{g/p} (x,\mu) = \int^1_x \frac{dz}{z} K_{gg} (z,\mu) f_{g/p} \Big(\frac{x}{z},\mu\Big),
\end{equation}
where $K_{gg}(z,\mu)$ is the collinear kernel and $f_{g/p}$ is the gluon PDF. When combining the soft function with the two collinear kernels we obtain an
IR finite kernel
\begin{equation}\label{Wgg}
W_{R_i,R_f} (1-w,\mu) = \int_w^1
\frac{dz}{z} \bar{S}_{R_i,R_f} (1-z,\mu) \int_{w/z}^1 dt K_{gg} (t,\mu)
K_{gg} \Bigl(\frac{w}{zt},\mu\Bigr).
\end{equation}
Putting Eqs.~(\ref{colm}) and (\ref{Wgg}) into Eq.~(\ref{dfact1}), we rewrite the differential scattering cross section as
\begin{eqnarray} \label{dfact2}
\frac{d\sigma_{R_f}}{dM} &=& \sum_{R_i} H_{R_i,R_f} (M,\mu) \frac{M}{(2m_{\tilde{t}})^6} \mathrm{Im}\, G_{R_f} (0,0,M-2m_{\tilde{t}}+i\Gamma_{\tilde{t}}) \\
&&\times~\tau \int^1_{\tau} \frac{dz}{z} W_{R_i,R_f} (1-z,\mu) F_{gg}\Big(\frac{\tau}{z},\mu\Big), \nonumber
\end{eqnarray}
where $F_{ij}(x,\mu)$ is the parton luminosity function with initial partons $i,j$,
\begin{equation}
\label{convpdf}
F_{ij}(x,\mu) = \int^1_{x} \frac{dy}{y} f_{i/p} (y,\mu)f_{j/p} \Big(\frac{x}{y},\mu\Big).
\end{equation}
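Numerically, the luminosity integral in Eq.~(\ref{convpdf}) can be evaluated with simple quadrature in $\ln y$; the sketch below uses a toy PDF shape (an assumption for illustration, not a fitted gluon PDF) and checks the analytically known case $f \equiv 1$, for which $F(x) = \ln(1/x)$:

```python
import math

def luminosity(f, x, n=4000):
    # F(x) = int_x^1 (dy/y) f(y) f(x/y); midpoint rule in t = ln y
    t_lo = math.log(x)
    h = -t_lo / n
    total = 0.0
    for i in range(n):
        y = math.exp(t_lo + (i + 0.5) * h)
        total += f(y) * f(x / y)
    return total * h

F_flat = luminosity(lambda y: 1.0, 0.01)               # analytically ln(1/0.01)
F_toy = luminosity(lambda y: (1.0 - y)**3 / y, 0.01)   # toy gluon-like shape
```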
In general, the factorization scale $\mu_F$ in Eq.~(\ref{dfact2}) should be considered to be smaller than the intermediate scale $\mu_S \sim M(1-z)$
since we have successively
integrated out the hard ($\sim M$) and the soft ($\sim M(1-z)$) modes in order to obtain Eq.~(\ref{dfact2}).
Changing the variable $y$ in Eq.~(\ref{convpdf}) to the rapidity of the stop pair, $Y$, we have the following doubly differential scattering cross section
\begin{eqnarray} \label{dfact3}
\frac{d\sigma_{R_f}}{dMdY} (pp \to \tilde{t}\ti^{\dagger}X) &=& \sum_{R_i} H_{R_i,R_f} (M,\mu) \frac{M}{(2m_{\tilde{t}})^6} \mathrm{Im}\, G_{R_f} (0,0,M-2m_{\tilde{t}}+i\Gamma_{\tilde{t}}) \\
&&\times~\tau \int^1_{\tau} \frac{dz}{z} W_{R_i,R_f} (z,\mu) f_{g/p} \Big(\frac{Me^Y}{\sqrt{s}},\mu\Big)
f_{g/p} \Big(\frac{Me^{-Y}}{\sqrt{s}},\mu \Big), \nonumber
\end{eqnarray}
where we ignored soft momentum contributions to the rapidity since they are subleading.
\section{Scattering cross section for $pp \to \tilde{\sigma}$ and resummation}
Near threshold, the produced stop pair moves slowly enough to form a stoponium bound state, $\tilde \sigma$.
The produced bound state can decay to pairs of electroweak gauge bosons: $\gamma\gamma$, $\gamma Z$, $ZZ$, and $W^+W^-$.
In the case of bound states of color-octet scalars, the signal for pairs of electroweak bosons can exceed the SM background at the LHC~\cite{Kim:2008bx,Idilbi:2010rs}. For the stop pair production,
the signal might be weak compared to the color-octet scalar because of the relatively small Casimir factor $C_F$. However, with
sufficient integrated luminosity, we will see that the signal for stoponium can be visible above SM backgrounds, especially at the 14 TeV energy of future LHC runs. In this section we study the scattering cross section
for stoponium production followed by its electroweak decays, $pp \to \tilde \sigma \to AB$. The stops and the stoponium are very narrow in the scenario we are considering, so we expect the cross section to be enhanced
in a narrow region around $M \sim 2 m_{\tilde{t}}$. Since the decay width of the stoponium is a few tens of MeV, we use Eq. (\ref{dfact3}) multiplied by the branching ratio
for $\tilde \sigma \to AB$ in order to obtain the cross sections $pp \to \tilde \sigma \to AB$ in our analysis.
Since we consider electroweak decays of the stoponium, we only consider color singlet production and provide the relevant radiative corrections and resummation of large logarithms.
Combining Eqs.~(\ref{wilt}) and (\ref{hardf}) we obtain the LO contribution to the hard function in Eq.~(\ref{dfact3}) (for $R_i =R_f ={\bf 1}$),
\begin{equation}\label{hardlo}
H_{\bf 1,1}^{(0)} (M,\mu) = \frac{16\pi^2 \alpha^2_s (\mu)}{N_c(N_c^2-1)}.
\end{equation}
Up to NLO the hard function $H_{\bf 1,1}(M,\mu)$ can be extracted from Ref.~\cite{Younkin:2009zn}
\begin{equation}\label{hardnlo}
H_{\bf 1,1}(M,\mu) = H_{\bf 1,1}^{(0)} (M,\mu) \bigg[ 1+ \frac{\alpha_s (\mu)}{\pi} \Bigl\{C_A\Bigl(1+\frac{\pi^2}{3} - \frac{1}{2} \ln^2\frac{\mu^2}{M^2} \Bigr) - C_F \Bigl(3+ \frac{\pi^2}{4}\Bigr)\Bigr\} +\ldots \bigg],
\end{equation}
where $C_A = N_c$ and $C_F = \frac{N_c^2-1}{2N_c}$. The anomalous dimension of the term in the square brackets is the same as the anomalous dimension for the hard
scattering coefficient for Higgs boson production (via gluon-gluon fusion), since the effective theory calculations are identical at the hard matching scale.
Therefore, we can obtain the two-loop anomalous dimension of the hard function from the known result for Higgs boson production.
The anomalous dimension of the hard function is given by
\begin{equation} \label{anomalH}
\hat\gamma^H (\mu) = \frac{1}{H_{\bf 1,1}} \frac{d}{d\ln\mu} H_{\bf 1,1}
= 2 \Gamma_C^A(\alpha_s) \ln \frac{M^2}{\mu^2} + 2 \gamma^S + \frac{2 \beta(\alpha_s)}{\alpha_s}\,.
\end{equation}
The cusp anomalous dimension $\Gamma_C^A(\alpha_s)$ in the adjoint representation and the anomalous dimension of the hard function for Higgs production $\gamma^S$ are perturbatively calculable. We parametrize their expansion in $\alpha_s$ as
\begin{eqnarray}
\Gamma_C^A &=& \sum_{k=0} \Gamma_{C,k}^A\Big(\frac{\alpha_s}{4\pi} \Big)^{k+1}\,, \nonumber \\
\gamma^S &=& \sum_{k=0} \gamma_k^S\Big(\frac{\alpha_s}{4\pi} \Big)^{k+1}\,.
\end{eqnarray}
The coefficients of the cusp anomalous dimension up to three-loop order and the anomalous dimension of the hard factor up to two-loop order
are given in Appendix B. The function $\beta(\alpha_s)$ is defined by
\begin{equation}
\beta(\alpha_s) = \frac{d\alpha_s}{d\ln\mu}.
\end{equation}
The expansion of $\beta(\alpha_s)$ begins at ${\cal O} (\alpha_s^2)$. We will need the three-loop expression for $\beta(\alpha_s)$ in this work, and it is
given in Appendix B.
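For numerical orientation, the sketch below implements only the one-loop running of $\alpha_s$ (the analysis in the text uses the three-loop $\beta$ function of Appendix B); the reference value $\alpha_s(M_Z) = 0.1181$ is an assumed input:

```python
import math

def alpha_s_1loop(mu, nf=5, alpha_mz=0.1181, mz=91.1876):
    # one-loop running: 1/alpha(mu) = 1/alpha(MZ) + (b0 / 2pi) ln(mu / MZ)
    b0 = 11.0 - 2.0 * nf / 3.0
    return 1.0 / (1.0 / alpha_mz + b0 / (2.0 * math.pi) * math.log(mu / mz))

a_hard = alpha_s_1loop(600.0)   # at a hard scale mu_H ~ M
a_soft = alpha_s_1loop(60.0)    # at a lower, soft-like scale
```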
The logarithms of the hard function in Eq.~(\ref{hardnlo}) are minimized at $\mu \sim M$. Hence we can identify the typical
hard scale as $\mu_H \sim M$ for a stable perturbative expansion. However, if the factorization scale $\mu_F$ is taken to be much smaller than $\mu_H$,
we must evolve the hard function from the scale $\mu_H$ to the scale $\mu_F$. Using $\hat \gamma^H$ in Eq.~(\ref{anomalH}) we find
\begin{eqnarray} \label{RGH}
H_{\bf 1,1}(M,\mu_F) &=& \bigg(\frac{\alpha_s(\mu_F)}{\alpha_s(\mu_H)}\bigg)^2 {\rm exp}\Big[ - 4S_\Gamma(\mu_F,\mu_H) + 2a_{\gamma^S}(\mu_F,\mu_H) \Big]\,\nonumber \\
&& ~ \Big(\frac{\mu_F^2}{M^2}\Big)^{-2a_\Gamma(\mu_F,\mu_H)}H_{\bf 1,1}(M,\mu_H)\,.
\end{eqnarray}
The Sudakov exponent $S_{\Gamma}(\mu_1,\mu_2)$ and the exponent $a_{\gamma^A} (\mu_1,\mu_2)$ for an arbitrary anomalous dimension $\gamma^A$ are defined by
\begin{eqnarray}
\label{SF}
S_{\Gamma}(\mu_1,\mu_2) &=& \int^{\alpha_s (\mu_1)}_{\alpha_s (\mu_2)} \frac{d\alpha}{\beta(\alpha)}
\Gamma_C^A (\alpha) \int^{\alpha}_{\alpha_s (\mu_1)} \frac{d\alpha'}{\beta(\alpha')}, \\
a_{\gamma^A} (\mu_1,\mu_2) &=& \int^{\alpha_s (\mu_1)}_{\alpha_s (\mu_2)} \frac{d\alpha}{\beta(\alpha)}
\gamma^A(\alpha)\,,
\end{eqnarray}
and similarly $a_\Gamma(\mu_1,\mu_2)$ is defined by replacing $\gamma^A(\alpha)$ with $\Gamma_C^A(\alpha)$ in the definition of $a_{\gamma^A}(\mu_1,\mu_2)$.
The solutions for the Sudakov exponent and $a_{\Gamma}(\mu_1,\mu_2)$ up to NNLL order are given in Appendix B.
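The structure of these RG exponents can be illustrated at leading-logarithmic accuracy, where the integrals in Eq.~(\ref{SF}) involve only the one-loop cusp coefficient $\Gamma^A_{C,0}=4C_A$ and the one-loop $\beta$ function. The sketch below evaluates $a_\Gamma$ by quadrature and checks that $\eta = -4a_\Gamma(\mu_F,\mu_S)$ is positive for $\mu_F < \mu_S$; the scale choices are illustrative only:

```python
import math

b0 = 23.0 / 3.0        # one-loop beta coefficient for nf = 5
Gamma0 = 4.0 * 3.0     # one-loop cusp in the adjoint representation, 4*CA

def alpha(mu, alpha_ref=0.1181, mu_ref=91.1876):
    # one-loop running coupling (illustrative input value at MZ)
    return 1.0 / (1.0 / alpha_ref + b0 / (2.0 * math.pi) * math.log(mu / mu_ref))

def beta(a):
    # beta(alpha) = d alpha / d ln mu at one loop
    return -b0 * a * a / (2.0 * math.pi)

def cusp(a):
    return Gamma0 * a / (4.0 * math.pi)

def a_gamma(mu1, mu2, n=20000):
    # a_Gamma(mu1, mu2) = int_{alpha(mu2)}^{alpha(mu1)} dalpha cusp(alpha)/beta(alpha)
    a1, a2 = alpha(mu1), alpha(mu2)
    h = (a1 - a2) / n
    return sum(cusp(a2 + (i + 0.5) * h) / beta(a2 + (i + 0.5) * h)
               for i in range(n)) * h

mu_F, mu_S = 100.0, 300.0           # factorization scale below the soft scale
eta = -4.0 * a_gamma(mu_F, mu_S)    # comes out positive for mu_F < mu_S
```

At this order the integral also has the closed form $a_\Gamma(\mu_1,\mu_2) = -\frac{\Gamma_{C,0}^A}{2\beta_0}\ln\frac{\alpha_s(\mu_1)}{\alpha_s(\mu_2)}$, which the quadrature reproduces.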
The soft kernel at NLO was computed in Ref.~\cite{Chay:2013zya} and is given by
\begin{eqnarray} \label{softk}
W_{\bf1,1} (z,\mu) &=& \delta(1-z) \Bigl[1+\frac{\alpha_s C_A}{\pi} \Bigl(\frac{1}{2} \ln^2 \frac{M^2}{\mu^2} -\frac{\pi^2}{4} \Bigr) \Bigr] \\
&&+\frac{\alpha_s C_A}{\pi} \Bigl[2\ln\frac{M^2}{\mu^2} \frac{1}{(1-z)_+} + 4 \Bigl(\frac{\ln(1-z)}{1-z}\Bigr)_+ \Bigr]\, , \nonumber
\end{eqnarray}
and obeys the following renormalization group (RG) equation,
\begin{equation}
\label{RGW}
\frac{d}{d\ln \mu} W_{\bf 1,1} (x,\mu) = \int^1_x \frac{dz}{z} \hat \gamma^W (z,\mu) W_{\bf 1,1}\Big(\frac{x}{z},\mu\Big) \, ,
\end{equation}
where the anomalous dimension $\hat \gamma^W$ is
\begin{equation}\label{anomalWG}
\hat\gamma^W(z,\mu) = - \left(2\Gamma_C^A(\alpha_s)\ln\frac{M^2}{\mu^2} + 2\gamma^W \right)\delta(1-z)
- \frac{4\Gamma_C^A(\alpha_s)}{(1-z)_+}\, .
\end{equation}
Here $\gamma^W = 0 + \mathcal{O}(\alpha_s^2)$.
By demanding that Eq.~(\ref{dfact3}) is scale independent, one can also show that $\gamma^W = \frac{\beta(\alpha_s)}{\alpha_s}+ 2\gamma^B + \gamma^S$, where $2\gamma^B$ is the coefficient of the $\delta(1-x)$ term in the Altarelli-Parisi splitting function $P_{gg}(x)$.
Solving the RG equation in Eq.~(\ref{RGW}) by applying the Laplace transform~\cite{Becher:2006nr,Becher:2006mr}, we evolve $W_{\bf 1,1}$ from the soft
scale $\mu_S$ to the factorization scale $\mu_F$ using the formula
\begin{eqnarray} \label{evoW}
W_{\bf1,1} (z,\mu_F) &=& \bigg(\frac{\alpha_s(\mu_S)}{\alpha_s(\mu_F)}\bigg)^2 \exp\Bigl[4S_{\Gamma} (\mu_F,\mu_S)-4 a_{\gamma^B} (\mu_F,\mu_S)-2a_{\gamma^S} (\mu_F,\mu_S)\Bigr]
\nonumber \\
&&\times \Bigl(\frac{\mu_F}{M}\Bigr)^{-\eta} \tilde{w}_{\bf 1,1} \Big[\ln\frac{\mu_S}{M} -\frac{\partial_{\eta}}{2}\Big] \frac{e^{-\gamma_E \eta}}{\Gamma(\eta)} (1-z)^{-1+\eta}\,,
\end{eqnarray}
where $\eta$ is defined as $\eta = -4 a_{\Gamma}(\mu_F,\mu_S)$ and is positive for $\mu_F<\mu_S$.
$\tilde{w}_{\bf 1,1}(L)$ is obtained by substituting $L = \ln (\mu s e^{\gamma_E}/M)$ in $\tilde{W}_{\bf 1,1} (s)$:
\begin{equation}
\tilde{w}_{\bf 1,1}(L) = \tilde{W}_{\bf 1,1} \Big( \frac{M}{\mu} e^{L-\gamma_E} \Big)\,,
\end{equation}
where $\tilde{W}_{\bf 1,1}(s)$ is the Laplace transform of $W_{\bf 1,1}(z)$ in momentum space. $\tilde{W}_{\bf 1,1}(s)$ is defined by
\begin{equation}\label{LapW}
\tilde{W}_{\bf 1,1} (s) = \int^{\infty}_0 dt e^{-st} \hat{W}_{\bf 1,1} (t) = \int^1_0 dz z^{-1+s} W_{\bf 1,1}(z),~~~t=-\ln z,
\end{equation}
where $\hat{W}_{\bf 1,1} (t) = W_{\bf 1,1}(z)$. Taking the limit $s\to \infty~(t\to 0)$, we compute
$\tilde{W}_{\bf 1,1} (s)$ at NLO in $\alpha_s$ to be
\begin{equation} \label{LapWnlo}
\tilde{W}_{\bf 1,1} (s) = 1 + \frac{\alpha_s C_A}{2\pi} \Bigl[\ln^2\frac{\mu^2 s^2e^{2\gamma_E}}{M^2}+\frac{\pi^2}{6} \Bigr]\,,
\end{equation}
which leads to
\begin{equation}
\tilde{w}_{\bf 1,1}(L) = 1 + \frac{\alpha_s C_A}{4\pi} \bigg[ 8 L^2 + \frac{\pi^2}{3} \bigg]\,.
\end{equation}
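The large-$s$ expressions above rest on the moments of the plus-distributions appearing in Eq.~(\ref{softk}): for integer $s$, $\int_0^1 dz\, z^{s-1}\,[1/(1-z)]_+ = -H_{s-1}$, which approaches $-(\ln s + \gamma_E)$ as $s\to\infty$. A standalone numerical check of this basic moment (an illustration, not part of the paper's code):

```python
import math

# Moment of the plus-distribution behind the large-s limit:
#   integral_0^1 dz z^(s-1) [1/(1-z)]_+  =  -H_{s-1}   (integer s),
# which tends to -(ln s + gamma_E) as s -> infinity.
def plus_moment(s, n=20000):
    r"""Composite-Simpson estimate of \int_0^1 (z^(s-1) - 1)/(1 - z) dz."""
    def f(z):
        if z == 1.0:
            return -(s - 1.0)               # continuous limit at z = 1
        return (z**(s - 1) - 1.0) / (1.0 - z)
    h = 1.0 / n
    acc = f(0.0) + f(1.0)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(i * h)
    return acc * h / 3.0

s = 50
harmonic = sum(1.0 / k for k in range(1, s))     # H_{s-1}: the exact moment (up to sign)
asymptote = math.log(s) + 0.5772156649015329     # ln s + gamma_E
```

Already at $s = 50$ the exact moment agrees with the asymptotic form $\ln s + \gamma_E$ at the percent level, which is the accuracy relevant for the $s\to\infty$ limit used above.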
Putting all the pieces together we obtain
\begin{eqnarray}
\label{eq:resummation}
H_{\bf 1,1} (M,\mu_F) W_{\bf 1,1}(z,\mu_F) &=& H_{\bf 1,1} (M,\mu_H) \bigg(\frac{\alpha_s(\mu_S)}{\alpha_s(\mu_H)}\bigg)^2
\Bigl(\frac{M}{\mu_H}\Bigr)^{-4a_\Gamma(\mu_H,\mu_S)} \nonumber \\
&& \times \exp\Bigl[4S_{\Gamma} (\mu_H,\mu_S)-4 a_{\gamma^B} (\mu_F,\mu_S)-2a_{\gamma^S} (\mu_H,\mu_S)\Bigr]
\nonumber \\
&&\times \tilde{w}_{\bf 1,1} \Big[\ln\frac{\mu_S}{M} -\frac{\partial_{\eta}}{2}\Big] \frac{e^{-\gamma_E \eta}}{\Gamma(\eta)} (1-z)^{-1+\eta}\,,
\end{eqnarray}
where we used the following relation,
\begin{equation}
S_\Gamma(\mu_F,\mu_S) - S_\Gamma(\mu_F,\mu_H) = S_\Gamma(\mu_H,\mu_S) - a_\Gamma(\mu_H,\mu_S) \ln \frac{\mu_F}{\mu_H}\,.
\end{equation}
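This identity follows directly from the integral definitions of $S_\Gamma$ and $a_\Gamma$. As a sanity check, the sketch below verifies it numerically to machine precision using one-loop running and the leading-order cusp; the scales, $\beta_0$, $\Gamma_0$ and $\alpha_s(M_Z)$ are illustrative assumptions, and the paper itself works at NNLL.

```python
import math

# Numerical check of the exponent identity above, using one-loop running and
# the leading-order cusp (an illustration; beta0, Gamma0 and the scales are
# assumptions, and the identity itself is exact at any order).
beta0 = 11.0 - 2.0 * 5 / 3.0              # nf = 5
Gamma0 = 4.0 * 3.0                        # LO adjoint cusp coefficient
aMZ, MZ = 0.117, 91.19                    # alpha_s(M_Z), as used in this paper

def alpha(mu):                            # one-loop running coupling
    return aMZ / (1.0 + beta0 * aMZ / (2.0 * math.pi) * math.log(mu / MZ))

def a_G(mu1, mu2):                        # a_Gamma(mu_1, mu_2) at one loop
    return -Gamma0 / (2.0 * beta0) * math.log(alpha(mu1) / alpha(mu2))

def S_G(mu1, mu2):                        # Sudakov exponent at one loop
    x1, x2 = alpha(mu1), alpha(mu2)
    return -Gamma0 * math.pi / beta0**2 * (1.0/x2 - 1.0/x1 - math.log(x1/x2)/x1)

muH, muS, muF = 600.0, 60.0, 150.0        # illustrative hard, soft, factorization scales
lhs = S_G(muF, muS) - S_G(muF, muH)
rhs = S_G(muH, muS) - a_G(muH, muS) * math.log(muF / muH)
```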
The above resummation formula resums the large logarithms $\ln(\mu_H/\mu_S)$ to all orders. Treating $\alpha_s \ln (\mu_H/\mu_S)$ as ${\cal O}(1)$, we note that the expansion of
$S_\Gamma(\mu_H,\mu_S)$ begins with $\alpha_s \ln^2 (\mu_H/\mu_S) \sim {\cal O}(1/\alpha_s)$
at one-loop order, while the expansion of $a_{\gamma^S} (\mu_F,\mu_S)$
begins with $\alpha_s \ln (\mu_F/\mu_S) \sim {\cal O}(1)$ at one-loop order.
Therefore, in order to obtain a resummed cross section with the same accuracy as the
NLO hard scattering contribution, we need $S_\Gamma(\mu_H,\mu_S)$ to three-loop order and
$a_{\gamma^{S(B)}} (\mu_F,\mu_S)$ to two-loop order. Doing this, we achieve NNLL resummation accuracy.
All the ingredients that are needed for NNLL resummation are given in Appendix B.
\section{Phenomenology}
In this section we carry out a phenomenological analysis of stoponium production and decay, focusing on the processes $pp\to\tilde{\sigma}\to\gamma\gamma$ and $pp\to\tilde{\sigma}\to ZZ$, which are relevant to both the currently accumulated and future LHC data. The $\gamma\gamma$ and $ZZ$ channels are golden modes for stoponium searches simply because their invariant masses can be cleanly reconstructed from the energy deposited in the calorimeters and the momenta of the tracks of the decay products in the collider detector. In addition, the SM backgrounds for the $\gamma\gamma$ and $ZZ$ channels are much smaller than for the $gg$ channel. The $Z\gamma$ channel is not favored since the branching ratio for $\tilde \sigma \to Z\gamma$ is much smaller than those for the $\gamma\gamma$ and $ZZ$ channels, while its SM background is larger than that of the $ZZ$ channel.
It is interesting to search for resonances in the $WW$ channel. This has been done at the LHC by reconstructing $WW$ from two merged jets using jet substructure techniques~\cite{Chatrchyan:2012ypy,CMS:2013fea}. However, this analysis is beyond the scope of this work and we leave it for future work.
We choose a typical MSSM parameter set, denoted {\bf P1}: $\theta_{\tilde t} = \pi/4$, $\tan\beta=10$, $m_A = 2\,{\rm TeV}$ and $\kappa =-2$. Here, $\theta_{\tilde t}$ is the mixing angle between the left and right stops, whose typical value is chosen to give maximal mixing, $\tan\beta$ is the ratio of the vacuum expectation value (VEV) of the up-type Higgs to that of the down-type Higgs, and $m_A$ is the mass of the CP-odd neutral Higgs. The mixing angle $\alpha$ between the neutral Higgs bosons is obtained through the well-known relation $\tan2\alpha=\tan2\beta(m_A^2+m_Z^2)/(m_A^2-m_Z^2)$. $\kappa$ originates from the triple scalar coupling $\lambda_{ \tilde{t} \tilde{t} h}$ and is defined as in Ref.~\cite{Barger:2011jt} by (see Appendix A for more details)
\begin{equation}
\kappa \,m_W = (-\mu \sin\alpha + a_t \cos\alpha)\,.
\end{equation}
Here, $\mu$ is the Higgsino mass and $a_t$ is the trilinear coupling of scalars in the soft breaking term. The light Higgs mass is fixed by recent measurements to be $m_h=126\,{\rm GeV}$~\cite{Aad:2012tfa,Chatrchyan:2012ufa}. We set the mass of the light stop as a free parameter within the range of $ 200\,{\rm GeV} < m_{\tilde t} < 400\,{\rm GeV}$, for reasons discussed in the Introduction. Throughout this work, we neglect the contributions of the heavier stop, gluino and heavy Higgs in the intermediate state for stoponium production and decays by assuming that they are much heavier than the light stop. As for the SM parameters, we use $\alpha_s(M_Z) = 0.117$, $m_t = 173.5\,{\rm GeV}$. For numerical analysis, we employed the MSTW2008NNLO PDF set \cite{oai:arXiv.org:0901.0002}.
To assess the uncertainty associated with the choice of PDF set, we simulated heavy Higgs production and compared the results obtained with the CTEQ5, CTEQ6, and CTEQ10 PDF sets \cite{Lai:1999wy} to those obtained with the MSTW2008NNLO PDF set. We find that the differences are always less than 5\%.
With this choice of parameters, we plot the branching ratios for two-body decays of the stoponium as a function of the stoponium mass in figure \ref{fig:BR}. The stoponium decays are calculated at tree level and formulas for the stoponium decay rates are given in Appendix A. We have confirmed that our results are analytically consistent with Refs.~\cite{Drees:1993uw,Martin:2008sv}, and numerically consistent with Ref.~\cite{Barger:2011jt}. Here, we neglect the stoponium decay into neutralino pairs, which is highly suppressed compared with the leading decay channel~\cite{Barger:2011jt,Drees:1993uw,Martin:2008sv}. As shown in the figure, the branching ratios for the $WW$ and $ZZ$ channels increase with the stoponium mass, while the other decay modes exhibit the opposite behavior. To understand this physically, note that the polarization sum for a massive gauge boson is $\sum \epsilon_\mu \epsilon_\nu^* = - g_{\mu\nu} + k_\mu k_\nu / m_{Z,W}^2$, where $k_\mu$ is the four-momentum of the gauge boson. The second term comes from the longitudinal polarization and grows as the stoponium mass increases. Thus the branching ratios for $WW$ and $ZZ$ increase with the stoponium mass. We will discuss the dependence on the MSSM parameters further in the last part of this section.
\begin{figure}[h]
\centering
\includegraphics[width=3.8in]{fig-BR.pdf}
\caption{
\baselineskip 3.0ex
Branching ratios of stoponium decays for each decay channel. MSSM parameters are chosen from set {\bf P1} (see the text). }
\label{fig:BR}
\end{figure}
We next calculate the invariant mass distributions for stoponium decaying to both $\gamma\gamma$ and $ZZ$ channels using the factorization formula, Eq.~(\ref{dfact3}), multiplied by the appropriate branching fraction. We include the NLO hard function for stoponium production and the corresponding NNLL-order threshold resummation. Coulomb gluon resummation is taken into account using the NLO Green's function. The LO Green's function is obtained by solving the Schr\"odinger equation for a $C_F \alpha_s/r$ potential, and hence is equivalent to resumming the leading-order Coulomb exchanges to all orders. We include this as well as NLO corrections. The results for the cross section are shown with total scale uncertainty by adding in quadrature the errors associated with variations of the factorization scale $\mu_F$, hard scale $\mu_H$ and soft scale $\mu_S$. We discuss the scale choices and scale variations in more detail later in this section.
For the stoponium production, we use the RG-improved production cross section, which is given by
\begin{equation}
\hat\sigma^{\rm {RGI}}_{ij}(z) = \hat\sigma^{\rm {Res}}_{ij}(z) + \big( \hat\sigma^{\rm {Fixed}}_{ij}(z)|_{\mu_F} - \hat\sigma^{\rm {Res}}_{ij}(z)|_{\mu_H = \mu_S = \mu_F}\big)
\end{equation}
for initial-state partons $ij$. Here the terms in parentheses are expanded to NLO in $\alpha_s$, so that we retain the full NNLL resummed cross section as well as the full NLO calculation
without double counting. The total cross section is
\begin{equation}
\sigma = \sum_{ij} \int_\tau^1 \frac{dz}{z} \hat\sigma^{\rm {RGI}}_{ij}(z,\mu_H,\mu_S,\mu_F) \,\Phi_{ij}\Big(\frac{\tau}{z},\mu_F\Big)\,.
\end{equation}
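The structure of this convolution can be sketched numerically. In the sketch below (toy inputs, not the code used for our numerical results), the function lum plays the role of the parton luminosity $\Phi_{ij}$, and only a regular integrand is treated; the $\delta(1-z)$ and plus-distribution pieces of $\hat\sigma^{\rm RGI}$ are handled analytically in the actual calculation.

```python
# Sketch of the hadronic convolution above:
#   sigma = integral_tau^1 dz/z sigma_hat(z) Phi(tau/z).
# Toy inputs, not the paper's numerics; only a regular integrand is treated
# here, since the delta(1-z) and plus-distribution pieces are handled
# analytically in the actual calculation.
def convolve(sigma_hat, lum, tau, n=20000):
    r"""Midpoint-rule estimate of \int_tau^1 dz/z sigma_hat(z) lum(tau/z)."""
    h = (1.0 - tau) / n
    acc = 0.0
    for i in range(n):
        z = tau + (i + 0.5) * h
        acc += sigma_hat(z) * lum(tau / z) / z
    return acc * h

# Toy check: sigma_hat(z) = 1 with lum(y) = y gives sigma = 1 - tau exactly.
tau = 0.25
sigma = convolve(lambda z: 1.0, lambda y: y, tau)
```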
The differential cross section with respect to the invariant mass of stoponium can be obtained in a similar manner. The RG-improved cross section includes threshold resummation of the terms that are singular as $z \rightarrow1$ as well as nonsingular contributions arising from real gluon emission into the final state. The partonic resummed cross section $\hat\sigma^{\rm {Res}}_{gg}(z)$ can be inferred from Eq.~(\ref{eq:resummation}):
\begin{eqnarray}
\hat\sigma^{\rm {Res}}_{gg}(z,\mu_H,\mu_S,\mu_F) &=& \frac{\pi}{\hat s} \frac{|\psi(0)|^2}{M^3} H_{\bf 1,1} (M,\mu_H) \bigg(\frac{\alpha_s(\mu_S)}{\alpha_s(\mu_H)}\bigg)^2
\Bigl(\frac{M}{\mu_H}\Bigr)^{-4a_\Gamma(\mu_H,\mu_S)} \nonumber \\
&& \times \exp\Bigl[4S_{\Gamma} (\mu_H,\mu_S)-4 a_{\gamma^B} (\mu_F,\mu_S)-2a_{\gamma^S} (\mu_H,\mu_S)\Bigr]
\nonumber \\
&&\times \tilde{w}_{\bf 1,1} \Big[\ln\frac{\mu_S}{M} -\frac{\partial_{\eta}}{2}\Big] \frac{e^{-\gamma_E \eta}}{\Gamma(\eta)} (1-z)^{-1+\eta}\,.
\end{eqnarray}
Here, $\psi(0)$ is the stoponium bound state wave function at the origin, defined in the same way as in Ref.~\cite{Younkin:2009zn}.
The procedure for obtaining the invariant mass distribution from the RG-improved cross section follows the same steps as in the previous section.
The NLO fixed-order calculation is separated into the part which is singular as $z\to 1 $ and the other part which is regular up to $\ln(1-z)$ as $z\to 1$, namely, $\hat\sigma^{\rm {Fixed}}_{ij}(z) = \hat\sigma^{\rm {Sing}}_{ij}(z) + \hat\sigma^{\rm {Reg}}_{ij}(z)$.
The full SUSY-QCD correction to stop pair production at NLO was calculated in Ref.~\cite{Beenakker:1997ut}. In this work, we use the results for the NLO QCD correction to stoponium production given in Ref.~\cite{Younkin:2009zn}, assuming that the gluino is much heavier than the light stop. The fixed-order NLO results are
\begin{eqnarray}
\hat\sigma^{\rm {Sing}}_{gg}(z) &=& \hat\sigma_0 \Bigg[ \delta(1-z)\bigg( 1 + \frac{\alpha_s}{\pi}(C_A - 3 C_F)\Big(1+\frac{\pi^2}{12}\Big) \bigg) + \frac{\alpha_s}{\pi}\bigg(2C_A \frac{1}{[1-z]_+} \ln \frac{ M^2}{\mu^2}
\nonumber \\ ~~~~~~~~~~~~~~~~~ &&
+ 4C_A \bigg[\frac{\ln(1-z)}{1-z}\bigg]_+ \bigg)\Bigg]\,,
\end{eqnarray}
\begin{eqnarray}
\hat\sigma^{\rm {Reg}}_{gg}(z) &=& \hat\sigma_0 \frac{\alpha_s}{\pi} C_A \Bigg[ \frac{11z^5+11z^4+13z^3+19z^2+6z-12}{6z(1+z)^2 } -\frac{3}{1-z} \nonumber
\\ ~~~~~~~~~~~~~~~~~ &&
+\frac{2(z^3-2z^2-3z-2)(z^3-z+2)z \ln z}{(1+z)^3(1-z)^2}
\\ ~~~~~~~~~~~~~~~~~ &&
+2\Big(\frac{1}{z} + z(1-z)-2\Big) \ln\frac{M^2}{\mu^2} (1-z)^2 \Bigg]\,,\nonumber \\
\hat\sigma^{\rm {Reg}}_{gq}(z) &=& \hat\sigma_0 \frac{\alpha_s}{\pi} \frac{C_F}{2}\Bigg[ 2+z-\frac{2}{z}-z\ln z + \frac{1+(1-z)^2}{z} \ln \frac{M^2}{\mu^2}(1-z)^2 \Bigg]\,, \\
\hat\sigma^{\rm {Reg}}_{q \bar q}(z) &=& \hat\sigma_0 \frac{\alpha_s}{\pi} C_F^2 \frac{2}{3} z(1-z),
\end{eqnarray}
where the LO cross section is given by
\begin{eqnarray}
\hat\sigma_0 = \frac{16\pi^3\alpha_s^2}{N_c (N_c^2-1) \hat s} \frac{|\psi(0)|^2}{M^3}\,.
\end{eqnarray}
For the $gq$ and $q\bar q$ initial states, there are no contributions that are singular at threshold.
One can also show that $\hat\sigma^{\rm {Sing}}_{gg}(z)$ is reproduced by setting $\mu_H = \mu_S = \mu_F$ in the resummed cross section $\hat\sigma^{\rm Res}_{gg}(z,\mu_H,\mu_S,\mu_F)$ and expanding to ${\cal O}(\alpha_s)$.
The decay rate of the stoponium is a few tens of $\,{\rm MeV}$, so a very narrow and sharp resonance signal is expected. In experiments, however, the resonant signal will be accumulated in a finite bin size that depends on the resolution of the detectors. The ATLAS Collaboration reports that the expected photon energy resolution is \cite{ATLAS:2008}
\begin{equation}
\frac{\Delta E_\gamma}{E_\gamma} = \sqrt{\bigg(\frac{0.1}{E_\gamma/{\rm GeV}}\bigg)^2+0.007^2}\,
\end{equation}
for a detected photon energy $E_\gamma$. By roughly taking the photon energy $E_\gamma \approx m_{\tilde \sigma}/2 \leq 400 \,{\rm GeV}$ we obtain $\Delta E_\gamma \lesssim 2.8 \,{\rm GeV}$. We simply take $\Delta E = 2 \,{\rm GeV}$ as the bin size for the invariant mass distribution.
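This estimate follows directly from the resolution formula; a trivial standalone check (illustrative Python, not part of any analysis code):

```python
import math

# ATLAS photon energy resolution quoted above, evaluated at
# E_gamma ~ m_stoponium / 2.  Reproduces Delta E_gamma <~ 2.8 GeV,
# which motivates the 2 GeV bin choice.
def delta_E(E):
    """Delta E_gamma in GeV for a detected photon energy E (GeV)."""
    return E * math.sqrt((0.1 / E)**2 + 0.007**2)
```

For $E_\gamma = 400\,{\rm GeV}$ the formula gives $\Delta E_\gamma \approx 2.8\,{\rm GeV}$, consistent with the bound quoted above.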
We define the resonant cross section of stoponium $\sigma_{\rm res}$ as an integral over the differential cross section within $\Delta E$ near the ground state resonant peak of stoponium:
\begin{eqnarray}
\sigma_{\rm res}^{AB} = \int_{M_{\rm peak}-\frac{\Delta E}{2}}^{M_{\rm peak}+\frac{\Delta E}{2}} \frac{d\sigma(pp\to \tilde{\sigma} \to AB)}{dM} dM,
\end{eqnarray}
where $M_{\rm peak}$ denotes the invariant mass value where the first resonant peak arises.
The SM backgrounds are generated by the MCFM package \cite{Campbell:2011bn} for both $pp \to \gamma\gamma$ and $pp \to ZZ$ processes with NLO QCD correction.
The NLO correction to the $pp \to \gamma\gamma$ process includes the one-loop $gg$ initial-state contribution. We use the following kinematical cuts:
\begin{eqnarray}
|\eta_{\gamma_{_{1,2}}}| < 2.4, ~~~~ p^T_{\gamma_{_{1,2}}} > 10 \,{\rm GeV}\,.
\end{eqnarray}
We note that the $p^T_\gamma$ cut has no impact for the large invariant mass region that we are focusing on when we apply the rapidity cut given above. We do not include secondary photons which come from the fragmentation of decaying partons. For the $ZZ$ channel, we computed the $ZZ$ invariant mass distribution for signal and SM background. We did not multiply by branching ratios for the $Z$'s to decay to final states with four leptons, two leptons and two jets, or four jets, which are actually observed in experiments. We checked that the generated background is consistent with current experimental results in the low invariant mass region with the same kinematical cuts \cite{ATLAS:2012gamgam,ATLAS:2012ZZ}.
\begin{figure}[t]
\centering
\includegraphics[width=6.5in]{fig-InvMgamgam-250.pdf}
\caption{
\baselineskip 3.0ex
$\gamma\gamma$ invariant mass distribution with a $2\,{\rm GeV}$ bin for both $pp\to\tilde{\sigma}\to\gamma\gamma$
signal and the SM background. Error bars represent total scale uncertainty. The MSSM parameter set is {\bf P1}. }
\label{fig:InvMgamgam}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=6.5in]{fig-InvMZZ-250.pdf}
\caption{
\baselineskip 3.0ex
$ZZ$ invariant mass distribution with a $2\,{\rm GeV}$ bin for both $pp\to\tilde{\sigma}\to ZZ$
signal and the SM background. Error bars represent total scale uncertainty. The MSSM parameter set is {\bf P1}. }
\label{fig:InvMZZ}
\end{figure}
Setting the stop mass to $250\,{\rm GeV}$, we show the $2\,{\rm GeV}$-binned differential cross section of $pp\to\tilde{\sigma}\to\gamma\gamma$ as well as the SM background for both the 8 and $14\,{\rm TeV}$ LHC runs in figure \ref{fig:InvMgamgam}. Each plot of the stoponium signal displays the total scale uncertainty in the error bars. It should be emphasized that the result shows good convergence of the perturbative expansion, since the scale uncertainty is significantly reduced at NLO+NNLL. We note that the signal yield is much enhanced at $14\,{\rm TeV}$ compared to $8\,{\rm TeV}$. The reason is that the $gg$ production channel is dominant for the signal while $q\bar q$ is dominant for the SM background, and the luminosity for the $gg$ initial state is much larger than that for the $q\bar q$ initial state at higher center-of-mass energy. Therefore, with this parameter set, we can expect to see the stoponium signal in the early stages of the $14\,{\rm TeV}$ LHC run if it exists. In the last part of this section, we will give estimates of the luminosity required for a $5\sigma$ discovery of the stoponium in future LHC runs.
The invariant mass distribution for $pp \to \tilde{\sigma} \to ZZ$ near the threshold region is shown in figure \ref{fig:InvMZZ}. The resonant signals are dominant over the SM background for both 8 and $14\,{\rm TeV}$. This is due to a much larger branching ratio for $\tilde{\sigma}\to ZZ$ than $\tilde{\sigma}\to \gamma\gamma$ in the MSSM parameter set {\bf P1}.
We note that if we take into account the $ZZ \to 4l\, (l = e, \mu)$ channel, both signal and background events will be reduced by a factor of $0.0045$.
Nonetheless, for this parameter set, searching for the resonant signal in the $ZZ$ invariant mass distribution will serve as a promising strategy for stop searches.
\begin{figure}[t]
\centering
\includegraphics[width=6.5in]{fig-MstopDist.pdf}
\caption{
\baselineskip 3.0ex
Resonant cross section plot with respect to stoponium mass. Error bars represent total scale uncertainty. The MSSM parameter set is {\bf P1}. }
\label{fig:MstopDist}
\end{figure}
To address the dependence of the stop search on the stop mass, we vary the stop mass parameter in our analysis. Figure \ref{fig:MstopDist} shows plots of $\sigma_{\rm res}^{\gamma\gamma}$ and $\sigma_{\rm res}^{ZZ}$ with respect to the stoponium mass for both 8 and $14\,{\rm TeV}$. The SM backgrounds are displayed in each plot for comparison. We again notice that the scale dependence is much reduced at NLO+NNLL for all stoponium masses. In all cases, the $K$-factor is found to be $1.09$ regardless of the stoponium mass. In the small mass region, the signal-to-background ratio is much enhanced. This is easily understood since the stoponium production rate is proportional to $1/m_{\tilde t}^3$ at LO. For $8\,{\rm TeV}$ in the $\gamma\gamma$ channel, the background is dominant over the entire stoponium mass range, so a large amount of data would be needed to find a clear resonant signal. It is therefore extremely difficult to find the $\gamma\gamma$ signal at $8\,{\rm TeV}$ with the currently accumulated LHC data, since the cross section is small and the signal-to-background ratio is poor. However, the plot shows that the signal prevails over the background for most of the stoponium mass range at both 8 and $14\,{\rm TeV}$ in the $ZZ$ channel. We therefore anticipate that it may be possible to observe the stoponium signal for stoponium masses below $800\,{\rm GeV}$ in the $14\,{\rm TeV}$ LHC run. It should be noted that this result is obtained for the MSSM parameter set {\bf P1}, and there is significant MSSM parameter dependence.
Before we go further into parameter dependence we discuss scale variation in our calculated cross sections.
\begin{figure}[t]
\centering
\includegraphics[width=6.5in]{fig-Scale.pdf}
\caption{
\baselineskip 3.0ex
Scale variations of resonant cross section with respect to stoponium mass. The MSSM parameter set is {\bf P1}. $\mu_S^I$ and $\mu_S^{II}$ are defined in the text. }
\label{fig:Scale}
\end{figure}
We choose the default value of the hard scale to be $\mu_H = M$, which suppresses large logarithms that would arise from a scale difference between $\mu_H$ and $M$. We choose the default value of the factorization scale to be the same as the default hard scale, $\mu_F = \mu_H = M$, which likewise suppresses large logarithms from a large scale difference between $\mu_F$ and $\mu_H$. Even though one expects $\mu_F < \mu_S <\mu_H$ from the effective field theory point of view, in principle the cross section is independent of the scale chosen for $\mu_F$. In order to cover the region $\mu_F < \mu_S$ in the scale variation of $\mu_F$, we set the minimum of the $\mu_F$ variation to $\mu_S/2$. As we see below, the dependence on $\mu_F$ of the NNLL resummed resonant cross section is very small. We vary $\mu_F$ and $\mu_H$ in the following ranges
\begin{eqnarray}
\mu_S/2 < \mu_F < 2M,~~~~ M/2 < \mu_H < 2 M\,,
\end{eqnarray}
where $\mu_S$ is the default value of the soft scale. The choice of the soft scale $\mu_S$ is nontrivial; for this issue we follow Refs.~\cite{Becher:2007ty,Ahrens:2008nc}. We define $\mu_S^I$ and $\mu_S^{II}$ as follows: $\mu_S^I$ is the scale at which the soft one-loop correction, starting from a high scale, has decreased by 15\%, and $\mu_S^{II}$ is the scale at which the soft one-loop correction reaches its minimum. We average these two estimates of $\mu_S$ to obtain
the default value of $\mu_S$ and vary $\mu_S$ as follows:
\begin{eqnarray}
\mu_S({\rm default}) = (\mu_S^I+ \mu_S^{II})/2, ~~~ \mu_S^{II} < \mu_S < \mu_S^{I}\,.
\end{eqnarray}
We plot the resonant cross section as a function of the stoponium mass and show the individual and combined scale variations
in figure \ref{fig:Scale}. At LO+NLL, the bulk of the uncertainty comes from the factorization and hard scale variations. However, both uncertainties are dramatically reduced at NLO+NNLL. It is remarkable that the factorization scale dependence is so small in the NNLL resummed cross section even though we vary $\mu_F$ over such a broad range. The soft scale uncertainty is quite small already at LO+NLL, owing to the resummation of the large logarithms $\ln(\mu_S/\mu_F)$. As mentioned before, the total scale uncertainty is greatly reduced at NLO+NNLL.
\begin{figure}[t]
\centering
\includegraphics[width=6.5in]{fig-MSSMparam-R.pdf}
\caption{
\baselineskip 3.0ex
(a) Scatter plot for ${\rm Br}(\tilde{\sigma}\to\gamma\gamma)$ and ${\rm Br}(\tilde{\sigma}\to ZZ)$ with respect to four relevant MSSM parameters. Here, $m_{\tilde t} = 250\,{\rm GeV}$. We choose three different benchmark points (see the text) : {\bf P1} (typical MSSM parameters), {\bf P2} (${\rm Br}(\tilde{\sigma}\to\gamma\gamma)$ is at its minimum), {\bf P3} (${\rm Br}(\tilde{\sigma}\to ZZ)$ is close to its minimum, with the minimum possible ${\rm Br}(\tilde{\sigma}\to\gamma\gamma)$ subject to this constraint).
(b) Contour plot for ${\rm Br}(\tilde{\sigma}\to \gamma\gamma)\times 10^3$ in the $\theta_{\tilde{t}}-\kappa$ parameter space.
Three benchmark points {\bf P1}, {\bf P2} and {\bf P3} are shown in the figure.
}
\label{fig:MSSMparam}
\end{figure}
Now we study the dependence of the resonant cross section on the MSSM parameters. Since the resonant cross section is proportional to the branching ratio of the decay channel, it suffices to examine the MSSM parameter dependence of the stoponium branching ratios. We show a scatter plot of the branching ratios for ${\tilde\sigma} \to \gamma\gamma$ and ${\tilde\sigma} \to ZZ$
in figure \ref{fig:MSSMparam}(a) with $m_{\tilde t} = 250\,{\rm GeV}$ by randomly generating the four relevant MSSM parameters within the ranges:
\begin{eqnarray}
&& ~~~~~ 0 < \theta_{\tilde t} < \pi\,, ~~~ 3 < \tan\beta < 60\,, \nonumber \\
&& 1\,{\rm TeV} < m_A < 10\,{\rm TeV}, ~~~ -10 < \kappa < 10\,.
\end{eqnarray}
Note that even though the decay rate for $\tilde{\sigma}\to\gamma\gamma$ does not depend on the MSSM parameters, ${\rm Br} (\tilde{\sigma}\to\gamma\gamma)$ varies significantly, within the range $[0.2,6]\times 10^{-3}$, since the total decay rate changes with the MSSM parameters. Our point {\bf P1} in the parameter space has sizable branching ratios for both channels. Below we will also consider two more pessimistic scenarios,
{\bf P2} and {\bf P3}. The point {\bf P2} corresponds to a scenario in which ${\rm Br}(\tilde{\sigma}\to\gamma\gamma)$ gets its minimal value. In this case there is a unique ${\rm Br}(\tilde{\sigma}\to ZZ )$. The point {\bf P3} corresponds to ${\rm Br}(\tilde{\sigma}\to Z Z )$ close to its minimum, with the minimum possible ${\rm Br}(\tilde{\sigma}\to\gamma\gamma)$ subject to this constraint\footnote{\baselineskip 3.0ex The branching ratios of $ZZ$ and $\gamma\gamma$ channel are
$0.14$, $~2.5\times 10^{-3}$ respectively for {\bf P1}, $~0.10$, $~0.21\times 10^{-3}$ for {\bf P2}, and $0.0025$, $~1.8\times 10^{-3}$ for {\bf P3}.}. These three benchmark points of the MSSM parameter set are shown in figure \ref{fig:MSSMparam}(a). Note that there is no point in the parameter space where both branching fractions
are negligible. There is a curve, which is roughly a straight line, connecting points {\bf P2} and {\bf P3} that forms the boundary of the scatter plot. Moving along this curve one compensates for decreases in one branching ratio with increases in the other. It is clear that points along this curve correspond to worst-case scenarios for searching for stoponium in these channels: if we can exclude the existence of stoponium for parameter sets along this curve, then this will certainly be true for the remaining MSSM parameter space. We summarize explicit parameter choices $(\theta_{\tilde{t}}, ~\kappa, ~ m_A, ~ \tan \beta)$ for each benchmark point:
\begin{eqnarray}
{\rm {\bf P1}} &&:~ ( \pi/4,\,-2,~ 2\,{\rm TeV},~ 10 )\,, \nonumber \\
{\rm {\bf P2}} &&:~ ( 0.75,~10,~ 2\,{\rm TeV},~ 10 )\,, \nonumber \\
{\rm {\bf P3}} &&:~ ( 0.25,\,-9,~ 2\,{\rm TeV},~ 10 )\,.
\end{eqnarray}
It turns out that each branching ratio depends strongly on $\theta_{\tilde{t}}$ and $\kappa$, while the effects of $\tan\beta$ and $m_A$ are minor. For illustration, we show a contour plot of ${\rm Br}(\tilde{\sigma}\to\gamma\gamma)$ as a function of $\theta_{\tilde{t}}$ and $\kappa$ in figure \ref{fig:MSSMparam}(b).
We now estimate the luminosity required to discover the stoponium resonance in current and future LHC runs.
The required luminosity is evaluated by demanding $5\,\sigma$ significance for the signal events. We use the following formula for the significance $Z$~\cite{Cowan:2010js}
\begin{equation}
\label{eq:sig}
Z=\sqrt{2\bigg((s+b)\ln\Big(1+\frac{s}{b}\Big) -s\bigg)}\,,
\end{equation}
where $s$ and $b$ represent the numbers of signal and background events. This formula is known to be more reliable than the commonly used $Z=s/\sqrt{b}$ when the number of background events is small. If $b \gg s$, Eq.~(\ref{eq:sig}) reduces to $Z=s/\sqrt{b}$.
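Since $s = L\sigma_s$ and $b = L\sigma_b$ for an integrated luminosity $L$, $Z^2$ is linear in $L$ and the required luminosity can be inverted in closed form. The sketch below illustrates this (illustrative inputs only; the efficiency factors and binning entering Tables \ref{tab:resonantsignal1} and \ref{tab:resonantsignal2} are not included):

```python
import math

# Significance of Eq. (eq:sig) and the luminosity needed for a target
# significance.  Cross sections in fb, luminosity in fb^-1.  An illustrative
# sketch only; detector efficiencies and binning are not included.
def significance(s, b):
    """Z = sqrt(2((s+b) ln(1+s/b) - s)) for s signal and b background events."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

def required_lumi(sig_s, sig_b, target=5.0):
    """Since Z^2 = 2 L [(sig_s+sig_b) ln(1+sig_s/sig_b) - sig_s] is linear
    in L, the condition Z = target can be inverted analytically."""
    return target**2 / (2.0 * ((sig_s + sig_b) * math.log(1.0 + sig_s / sig_b) - sig_s))

# Example with hypothetical inputs: a 0.62 fb signal on a 2.1 fb background.
L5 = required_lumi(0.62, 2.1)
```

By construction, evaluating the significance at $L = L_5$ returns the target value of $5$, and doubling both cross sections halves the required luminosity.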
We take into account the reconstruction efficiency for photons, $\epsilon_\gamma$, in generating signal and background events for the $\gamma\gamma$ channel. We set $\epsilon_\gamma = 97\%$ as given in Ref. \cite{Aad:2012tfa}. For the $ZZ$ channel, we consider the $ZZ \to 4l (4e, 4\mu, ee\mu\mu)$ final states for reconstructing $ZZ$. We multiply the calculated cross section for $ZZ$ by the branching fractions for decaying to leptons and the four lepton selection efficiency of $61\%$ as given in Ref. \cite{Collaboration:2012iua}. The generated SM background in the four lepton channels includes the virtual photon contribution.
We expect that including $Z \to jj $ channels will provide better statistics for the signal yield especially in the high mass region as discussed in Ref. \cite{Aaltonen:2011jn}.
We consider five different stoponium masses ranging between 400 and $800\,{\rm GeV}$ using the MSSM parameter values of the three benchmark points for both 8 and $14\,{\rm TeV}$. This analysis will help determine the better search channel for the stoponium resonance signal for a given point in MSSM parameter space and for a given integrated luminosity. The results are shown in Tables \ref{tab:resonantsignal1} and \ref{tab:resonantsignal2}.
\begin{table}[t]
\centering
\begin{tabular}{ccccccc}
\hline
\hline
\footnotesize $\sqrt{s}=8\,{\rm TeV}$
&~\footnotesize $m_{\tilde{\sigma}}$
&\footnotesize 400 GeV
&\footnotesize 500 GeV
&\footnotesize 600 GeV
&\footnotesize 700 GeV
&\footnotesize 800 GeV\\
\hline
&~\footnotesize \rm{\bf {P1}}
&~\footnotesize 0.62~(171)
&~\footnotesize 0.13~(1197)
&~\footnotesize 0.032~(9654)
&~\footnotesize 0.008~($\ast$)
&~\footnotesize 0.002~($\ast$)\\
\footnotesize $\sigma_{\rm{res}}(pp\to \tilde\sigma \to \gamma\gamma)$
&~\footnotesize \rm{\bf {P2}}
&~\footnotesize 0.018~($\ast$)
&~\footnotesize 0.011~($\ast$)
&~\footnotesize 0.006~($\ast$)
&~\footnotesize 0.002~($\ast$)
&~\footnotesize 0.001~($\ast$)\\
&~\footnotesize \rm{\bf {P3}}
&~\footnotesize 0.55~(212)
&~\footnotesize 0.097~(2186)
&~\footnotesize 0.020~($\ast$)
&~\footnotesize 0.004~($\ast$)
&~\footnotesize 0.001~($\ast$)\\
\cline{2-7}
\footnotesize $\sigma_{\rm{SM}}(pp\to\gamma\gamma)$
&
&\footnotesize 2.1
&\footnotesize 0.70
&\footnotesize 0.34
&\footnotesize 0.14
&\footnotesize 0.06\\
\hline
&~\footnotesize \rm{\bf {P1}}
&~\footnotesize 0.085~(519)
&~\footnotesize 0.034~(1288)
&~\footnotesize 0.015~(3268)
&~\footnotesize 0.006~(8171)
&~\footnotesize 0.003~($\ast$)\\
\footnotesize $\sigma_{\rm{res}}(pp\to \tilde\sigma \to ZZ \to 4l)$
&~\footnotesize \rm{\bf {P2}}
&~\footnotesize 0.032~(3016)
&~\footnotesize 0.025~(2232)
&~\footnotesize 0.016~(2801)
&~\footnotesize 0.008~(5268)
&~\footnotesize 0.004~($\ast$)\\
&~\footnotesize \rm{\bf {P3}}
&~\footnotesize 0.002~($\ast$)
&~\footnotesize $\ast$~($\ast$)
&~\footnotesize $\ast$~($\ast$)
&~\footnotesize $\ast$~($\ast$)
&~\footnotesize $\ast$~($\ast$)\\
\cline{2-7}
\footnotesize $\sigma_{\rm{SM}}(pp\to Z/\gamma^*Z/\gamma^*\to 4l)$
&
&\footnotesize 0.067 &\footnotesize 0.027
&\footnotesize 0.013 &\footnotesize 0.006 &\footnotesize 0.003\\
\hline
\hline
\end{tabular}
\caption{
\baselineskip 3.0ex
The resonant cross section (fb) and, in parentheses, the required luminosity (${\rm fb}^{-1}$) for a $5\sigma$ discovery are shown for each stoponium decay channel at the $8\,{\rm TeV}$ LHC run. An asterisk denotes that the signal cross section is less than $10^{-3} \,{\rm fb}$ or that the required luminosity is greater than $10~{\rm ab}^{-1}$, i.e., beyond the reach of future LHC runs. Several stoponium masses are investigated for each benchmark point. For comparison, the SM background cross section integrated over the same invariant mass region is also shown.
The numbers for cross sections do not include efficiency factors.
}
\label{tab:resonantsignal1}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{ccccccc}
\hline
\hline
$\footnotesize\sqrt{s}=14\,{\rm TeV}$
&~\footnotesize $m_{\tilde{\sigma}}$
&\footnotesize 400 GeV
&\footnotesize 500 GeV
&\footnotesize 600 GeV
&\footnotesize 700 GeV
&\footnotesize 800 GeV\\
\hline
&~\footnotesize \rm{\bf {P1}}
&~\footnotesize 2.3~(28)
&~\footnotesize 0.55~(173)
&~\footnotesize 0.15~(956)
&~\footnotesize 0.043~(6650)
&~\footnotesize 0.014~($\ast$)\\
~\footnotesize $\sigma_{\rm{res}}(pp\to \tilde\sigma \to \gamma\gamma)$
&~\footnotesize \rm{\bf {P2}}
&~\footnotesize 0.067~($\ast$)
&~\footnotesize 0.047~($\ast$)
&~\footnotesize 0.026~($\ast$)
&~\footnotesize 0.012~($\ast$)
&~\footnotesize 0.005~($\ast$)\\
&~\footnotesize \rm{\bf {P3}}
&~\footnotesize 2.0~(34)
&~\footnotesize 0.40~(312)
&~\footnotesize 0.088~(2534)
&~\footnotesize 0.022~($\ast$)
&~\footnotesize 0.006~($\ast$)\\
\cline{2-7}
\footnotesize$\sigma_{\rm{SM}}(pp\to\gamma\gamma)$
&
&\footnotesize 4.4
&\footnotesize 1.7
&\footnotesize 0.67
&\footnotesize 0.41
&\footnotesize 0.21\\
\hline
&~\footnotesize \rm{\bf {P1}}
&~\footnotesize 0.31~(104)
&~\footnotesize 0.14 ~(223)
&~\footnotesize 0.066~(489)
&~\footnotesize 0.032~(1076)
&~\footnotesize 0.017~(2261)\\
~\footnotesize $\sigma_{\rm{res}}(pp\to \tilde\sigma \to ZZ \to 4l)$
&~\footnotesize \rm{\bf {P2}}
&~\footnotesize 0.12~(572)
&~\footnotesize 0.10~(378)
&~\footnotesize 0.073~(422)
&~\footnotesize 0.041~(707)
&~\footnotesize 0.022~(1413)\\
&~\footnotesize \rm{\bf {P3}}
&~\footnotesize 0.007~($\ast$)
&~\footnotesize 0.002~($\ast$)
&~\footnotesize $\ast$~($\ast$)
&~\footnotesize $\ast$~($\ast$)
&~\footnotesize $\ast$~($\ast$)\\
\cline{2-7}
\footnotesize$\sigma_{\rm{SM}}(pp\to Z/\gamma^*Z/\gamma^*\to 4l)$
&
&\footnotesize 0.16
&\footnotesize 0.070
&\footnotesize 0.035
&\footnotesize 0.019
&\footnotesize 0.010\\
\hline
\hline
\end{tabular}
\caption{
\baselineskip 3.0ex
The resonant cross section (fb) and the required luminosity (${\rm fb}^{-1}$) for $5\sigma$ discovery in parentheses are shown for each stoponium decay channel at the $14\,{\rm TeV}$ LHC run. The asterisk denotes that the signal cross section is less than $10^{-3} \,{\rm fb}$ or the required luminosity is greater than $10\,{\rm ab}^{-1}$, which is beyond future LHC reach. Several stoponium masses are investigated for each benchmark point. For comparison, the integrated cross section within the same invariant mass region of the SM background is also shown. The numbers for cross sections do not include efficiency factors.
}
\label{tab:resonantsignal2}
\end{table}
For the $8\,{\rm TeV}$ run, the required luminosity grows rapidly with the stoponium mass for every parameter set, quickly exceeding the current and future LHC reach. We see that it is hopeless to observe the resonance signal for stoponium masses between 400 and $800\,{\rm GeV}$ in any of the benchmark parameter sets with the currently accumulated LHC luminosity, $23\,{\rm fb}^{-1}$.
On the other hand, for the $14\,{\rm TeV}$ run, we are able to explore stoponium masses up to 500$\,{\rm GeV}$ in the first round of a future LHC run with $400~{\rm fb}^{-1}$ of accumulated data. As expected, for the parameter sets {\bf P1} and {\bf P2} the $ZZ$ channel is the most promising for discovering stoponium, while for {\bf P3} the $\gamma\gamma$ channel is better than the $ZZ$ channel.
For example, to see a 500$\,{\rm GeV}$ stoponium resonance signal we need at least 378$~{\rm fb}^{-1}$ using both the $\gamma\gamma$ and $ZZ$ channels, regardless of the MSSM parameter set. For a heavier stoponium mass, one needs the high-luminosity LHC run with upgraded instantaneous luminosity. We note that if the expected stoponium resonance signal is not observed in the $\gamma\gamma$ and/or $ZZ$ channels, we can exclude part of the MSSM parameter space in the $\theta_{\tilde t}$-$\kappa$ plane.
\section{Conclusion}
In this work we have studied the production of stoponium in $pp$ collisions at the LHC. Our analysis focused on the MSSM scenario with light stops in the $200$-$400$ GeV mass range, which are able to evade existing searches that look for stops that decay to neutralinos and top quarks. We used effective field theory to obtain a factorized form of the cross section.
This allowed us to resum large threshold logarithms to NNLL accuracy using RG equation methods. We verified explicitly that once the NLO+NNLL results are included, theoretical uncertainties
are considerably reduced, which greatly improves the phenomenological predictions for the LHC.
We accounted for the enhanced Coulomb interactions responsible for forming the bound state of stops, stoponium,
by including the NLO strong Coulomb Green's function. We provided formulas for both total and differential cross sections.
On the phenomenological side, we considered the decays of stoponium to $\gamma\gamma$ and $ZZ$ as promising channels for searching for the stoponium resonance in the mass range 400 to $800 \,{\rm GeV}$.
After investigating MSSM parameter dependence, we found that $\gamma\gamma$ and $ZZ$ channels sensitively depend on $\theta_{\tilde t}$ and $\kappa$ while the effects of other MSSM parameters are negligible. Therefore, one can impose constraints on $\theta_{\tilde t}$ and $\kappa$ if the stoponium resonance signal is not observed.
Our results indicated that one cannot exclude any mass value in this mass range with the currently accumulated LHC data at $8\,{\rm TeV}$. On the other hand, for the first round of future LHC runs at $14\,{\rm TeV}$ with $400\,{\rm fb}^{-1}$ integrated luminosity, it should be possible to find stoponium if its mass is less than $500\,{\rm GeV}$ via either the $\gamma\gamma$ or $ZZ$ channels. We stress that this result does not depend on any particular choice of MSSM parameters. In this regard, searching in $\gamma \gamma $ and $ZZ$ for
the stoponium resonance will serve as a complementary method for probing
light stop scenarios
in future LHC runs.
\begin{acknowledgements}
We would like to thank Stephen Martin for pointing out some subtle issues regarding the phenomenological part of our work.
C.~Kim was supported by the Basic Science Research Program through NRF funded by MSIP (Grant No. 2012R1A1A1003015).
C.~Kim thanks KIAS for its hospitality during a visit to complete this work.
A. Idilbi is supported by the U.S. Department of Energy under Grant No. DE-SC0008745.
T. Mehen is supported in part by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-FG02-05ER41368.
T. Mehen thanks ECT* for its hospitality during which part of this work was performed.
Y.W.~Yoon thanks the KIAS Center for Advanced Computation for providing computing resources.
Y.W.~Yoon thanks the L/EFT theory group of Duke University for its hospitality during a visit where part of this work was done.
\end{acknowledgements}
\section{Introduction}\label{chap:intro}
Let $O(n)$ be the orthogonal group consisting of all $n\times n$ orthogonal matrices.
Let $\bold{\Gamma}_n=(\gamma_{ij})_{n\times n}$ be a random orthogonal matrix which is
uniformly distributed on the orthogonal group $O(n)$, or equivalently, $\bold{\Gamma}_n$ follows the Haar-invariant probability measure on $O(n)$. We sometimes also say that $\bold{\Gamma}_n$ is an Haar-invariant orthogonal matrix. Let $\mathbf Z_n$ be the $p\times q$
upper-left submatrix of
$\bold{\Gamma}_n,$ where $p=p_n$ and $q=q_n$ are two positive integers.
Let $\mathbf G_n$ be a $p\times q$ matrix whose $pq$ entries are independent standard normals.
In this paper we will study the distance between $\sqrt{n}\mathbf Z_n$ and $\mathbf G_n$ in terms of the
total variation distance, the Hellinger distance, the Kullback-Leibler distance and the
Euclidean distance (or equivalently, the trace norm). Throughout this paper, we will frequently encounter the notations $p_n, q_n.$ For simplicity, we will use $p$ and $q$ rather than $p_n$ and $q_n$, respectively, if there is no confusion.
It has long been observed that the entries of $\bold{\Gamma}_n$ are roughly independent random variables with distribution $N(0, \frac{1}{n}).$ Historically, it has been shown that the distance between $\sqrt{n}\bold{Z}_n$ and $\mathbf G_n$, say, $d(\sqrt{n}\bold{Z}_n, \bold{G}_n)$, goes to zero under the conditions $(p, q)=(1,1)$, $(p, q)=(\sqrt{n},1)$, $(p, q)=(o(n),1)$ or $(p, q)=(n^{1/3}, n^{1/3})$. Readers are referred to, for instance, Maxwell \cite{max75, max78}, Poincar\'{e} \cite{poicare}, Stam \cite{stam}, Diaconis {\it et al}. \cite{DLE} and Collins \cite{Collins}. More detailed accounts can be found in Diaconis and Freedman \cite{DF87} and Jiang \cite{Jiang06}.
Evidently, as research has progressed, larger and larger values of $p$ and $q$ have been shown to make
$d(\sqrt{n}\bold{Z}_n, \bold{G}_n)$ go to zero.
Diaconis \cite{persi03} then asks for the largest values of $p_n$ and $q_n$ such that the distance
between $\sqrt{n}\mathbf Z_n$ and $\mathbf G_n$ goes to zero. Jiang \cite{Jiang06} settles the problem by showing that
$p=o(n^{1/2})$ and $q=o(n^{1/2})$ are the largest orders that make the total variation distance go to
zero. If the distance is the weak distance,
or equivalently, the maximum norm, Jiang \cite{Jiang06} further proves that the largest order of
$q$ is $\frac{n}{\log n}$ with $p=n$. Based on this work, applications have been obtained, for example, to
the properties of eigenvalues of the Jacobi ensemble in random matrix theory \cite{Jiang09},
to wireless communications \cite{Li2016, Li2014, Li2016b} and to data storage in Big Data \cite{ChenKe}.
However, even with the affirmative answer by Jiang \cite{Jiang06}, a conjecture [(1) below] and
a question [(2) below] still remain.
(1) If $pq/n\to 0$, where $p$ and $q$ need not be of the same scale, does the total variation distance
still go to zero?
(2) What if the total variation distance and weak norm are replaced by other popular distances,
say, the Hellinger distance, the Kullback-Leibler distance or the Euclidean distance?
Conjecture (1) is natural because it is shown by Diaconis and Freedman \cite{DF87} that the total variation distance goes to zero if $p=o(n)$ and $q=1$. The work of Jiang \cite{Jiang06} proves that the same holds if $p=o(n^{1/2})$ and $q=o(n^{1/2})$. In both cases, $(p,q)$ satisfies $pq=o(n)$.
In this paper we will answer conjecture (1) and question (2). For conjecture (1), we show that the total variation
distance between $\sqrt{n}\bold{Z}_n$ and $\mathbf G_n$ goes to zero as long as
$p\geq 1, q\geq 1$ and $\frac{pq}{n}\to 0$, and the orders are sharp in the sense
that the distance does not go to zero if $\frac{pq}{n}\to \sigma>0$, where $\sigma$ is a constant.
For question (2), we prove that the same answer as that for (1) is also true for the Hellinger
distance and the Kullback-Leibler distance. However, it is different for the Euclidean distance.
We prove that the Euclidean distance between them goes to zero as long as $\frac{pq^2}{n}\to 0$, and the conclusion no longer holds for any $p\geq 1$ and $q\geq 1$ satisfying $\frac{pq^2}{n}\to \sigma>0$. In order to compare
these results clearly, we make Table \ref{table1} for some special cases. One may like to read
the table together with its caption and the statements of Theorems \ref{main1} and \ref{ttt} below.
\begin{table}[H]\setlength{\tabcolsep}{10pt}
\centering
\begin{threeparttable}[b]
\begin{tabular}{c|c}
distance $d$ & order of $(p, q)$ \\
\midrule
total variation & $(\sqrt{n}, \sqrt{n})$ \\
Hellinger & $(\sqrt{n}, \sqrt{n})$ \\
Kullback-Leibler & $(\sqrt{n}, \sqrt{n})$ \\
Euclidean & $(\sqrt[3]{n}, \sqrt[3]{n} )$ \\
weak & $(n, \frac{n}{\log n})$ \\
\end{tabular}
\end{threeparttable}
\caption{\small\it Largest orders of $p$ and $q$ such that $d(\sqrt{n}\bold{Z}_n, \bold{G}_n) \to 0$, where $\bold{Z}_n$ and $\bold{G}_n$ are the $p\times q$ upper-left submatrix of an $n\times n$ Haar-invariant orthogonal matrix and a $p\times q$ matrix whose entries are i.i.d. $N(0,1)$, respectively.}\label{table1}
\end{table}
Before stating our main results, let us review rigorously the distances aforementioned.
Let $\mu$ and $\nu$ be two probability measures on $(\mathbb{R}^m, \mathcal{B}),$
where $\mathbb{R}^m$ is the $m$-dimensional Euclidean space and $\mathcal{B}$ is the Borel $\sigma$-algebra.
Recall the total variation distance between $\mu$ and $\nu,$ denoted by $\|\mu-\nu\|_{\rm TV},$
is defined by
\begin{eqnarray}\label{norm}
\|\mu-\nu\|_{\rm TV}=2\cdot \sup_{A\in \mathcal{B}}|\mu(A)-\nu(A)|=\int_{\mathbb{R}^m}|f(x)-g(x)|\, dx,
\end{eqnarray}
provided $\mu$ and $\nu$ have density functions $f$ and $g$ with respect to the Lebesgue measure, respectively. The Hellinger distance $H(\mu, \nu)$ between $\nu$ and $\mu$ is defined by
$$H^2(\mu, \nu)=\frac12\int_{\mathbb{R}^m} |\sqrt{f(x)}-\sqrt{g(x)}\,|^2 dx. $$
The Kullback-Leibler distance between $\mu$ and $\nu$ is defined by
\begin{eqnarray*}\label{Kull}
D_{\rm KL}(\mu||\nu)=\int_{\mathbb{R}^m} \frac{d\mu}{d\nu}\log\frac{d\mu}{d\nu} d\nu.
\end{eqnarray*}
The three distances have the following relationships:
\begin{eqnarray}
& & 2 H^2(\mu, \nu)\le \|\mu-\nu\|_{\rm TV} \le 2\sqrt{2}H(\mu, \nu);\label{H_TV}\\
& & \|\mu-\nu\|_{\rm TV}^2\le 2 D_{\rm KL}(\mu||\nu).\label{TV_KL}
\end{eqnarray}
Readers are referred to, for example, \cite{Kraft} and \cite{Csiszar} for (\ref{H_TV})
and (\ref{TV_KL}), respectively. In particular, the assertion in (\ref{TV_KL}) is called the Pinsker inequality.
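As a concrete illustration of (\ref{H_TV}) and (\ref{TV_KL}), all three distances admit closed forms when $\mu=N(0,1)$ and $\nu=N(\mu_0,1)$. The sketch below (standard one-dimensional Gaussian formulas used purely for illustration, not part of the results of this paper; assuming Python with the standard library) verifies the two inequalities numerically:

```python
import math

def tv(mu0):
    """Total variation between N(0,1) and N(mu0,1): 2*(2*Phi(|mu0|/2) - 1)."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return 2.0 * (2.0 * Phi(abs(mu0) / 2.0) - 1.0)

def hellinger(mu0):
    """Hellinger distance: H^2 = 1 - exp(-mu0^2 / 8)."""
    return math.sqrt(1.0 - math.exp(-mu0 ** 2 / 8.0))

def kl(mu0):
    """Kullback-Leibler divergence D(N(0,1) || N(mu0,1)) = mu0^2 / 2."""
    return mu0 ** 2 / 2.0

for mu0 in (0.1, 0.5, 1.0, 3.0):
    T, H, D = tv(mu0), hellinger(mu0), kl(mu0)
    assert 2 * H ** 2 <= T <= 2 * math.sqrt(2) * H   # the Hellinger bounds
    assert T ** 2 <= 2 * D                           # the Pinsker inequality
```

The bounds are far from tight for small $\mu_0$, consistent with the fact that the three metrics can have different cut-off orders of $(p,q)$.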
\begin{thm}\label{main1}
Suppose $p=p_n$ and $q=q_n$ satisfy $1\leq p, q\leq n$. For each $n\geq 1$, let $\bold{Z}_n$ and $\bold{G}_n$ be the $p\times q$ submatrices aforementioned.
Let $d(\sqrt{n}\bold{Z}_n, \bold{G}_n)$ be the total variation distance, the Hellinger distance or the Kullback-Leibler distance
between the probability distributions of $\sqrt{n}\bold{Z}_n$ and $\bold{G}_n$. Then
\begin{itemize}
\item[(i)] $\lim_{n\to\infty}d(\sqrt{n}\bold{Z}_n, \bold{G}_n)=0$ for any $p\geq 1$ and $q\geq 1$ with $\lim_{n\to\infty}\frac{pq}{n}=0$;
\item[(ii)] $\liminf_{n\to\infty}d(\sqrt{n}\bold{Z}_n, \bold{G}_n) > 0$
if $\lim_{n\to\infty}\frac{pq}{n}=\sigma\in(0, \infty).$
\end{itemize}
\end{thm}
When $d(\cdot, \cdot)$ is the total variation distance, Jiang \cite{Jiang06} obtains (i) with $p=o(\sqrt{n})$ and
$q=o(\sqrt{n})$ and (ii) with $p=[x\sqrt{n}\,]$ and $q=[y\sqrt{n}\,]$ where $x>0$ and $y>0$ are constants. Theorem \ref{main1} confirms a conjecture by the first author.
Now we study the approximation in terms of the Euclidean distance. Let $\bold{Y}_n=(\bold{y}_1, \cdots, \bold{y}_n)=(y_{ij})_{n\times n}$ be an $n\times n$ matrix, where $y_{ij}$'s are i.i.d. random variables with distribution $N(0,1)$. Perform the Gram-Schmidt algorithm on the column vectors $\bold{y}_1, \cdots, \bold{y}_n$ as follows.
\begin{eqnarray}
&& \bold{w}_1=\bold{y}_1,\ \ \bm{\gamma}_1=\frac{\bold{w}_1}{\|\bold{w}_1\|}; \nonumber\\
&& \bold{w}_k=\bold{y}_k-\sum_{i=1}^{k-1}\langle\bold{y}_k, \bm{\gamma}_i\rangle \bm{\gamma}_i,\ \ \bm{\gamma}_k=\frac{\bold{w}_k}{\|\bold{w}_k\|} \label{sea}
\end{eqnarray}
for $k=2,\cdots, n$, where $\langle\bold{y}_k, \bm{\gamma}_i\rangle$ is the inner product of the two vectors.
Then $\bm{\Gamma}_n=(\bm{\gamma}_1, \cdots, \bm{\gamma}_n)=(\gamma_{ij})$ is an $n\times n$
Haar-invariant orthogonal matrix. Set $\bold{\Gamma}_{p\times q}=(\gamma_{ij})_{1\leq i\leq p, 1\leq j \leq q}$
and $\bold{Y}_{p\times q}=(y_{ij})_{1\leq i\leq p, 1\leq j \leq q}$ for $1\leq p, q\leq n.$
We consider the Euclidean distance between $\sqrt{n}\bold{\Gamma}_{p\times q}$ and $\bold{Y}_{p\times q}$, that is,
the Hilbert-Schmidt norm defined by
\begin{eqnarray}\label{Hiha}
\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|^2_{\rm HS}=\sum_{i=1}^p\sum_{j=1}^q(\sqrt{n}\gamma_{ij}-y_{ij})^2.
\end{eqnarray}
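The coupling in (\ref{sea}) is straightforward to simulate. The sketch below (our illustrative code, assuming Python with numpy; the function name and the parameter values are ours) uses the QR factorization with the sign convention $\mathrm{diag}(R)>0$, which reproduces the Gram-Schmidt orthonormalization of the columns of $\bold{Y}_n$, and then evaluates the Hilbert-Schmidt distance (\ref{Hiha}):

```python
import numpy as np

def coupled_corners(n, p, q, rng):
    """Return the p x q upper-left corners of a Haar orthogonal matrix
    and of the Gaussian matrix that generates it via Gram-Schmidt."""
    Y = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Y)          # QR factorization of the column vectors
    Q = Q * np.sign(np.diag(R))     # enforce diag(R) > 0: classical Gram-Schmidt
    return Q[:p, :q], Y[:p, :q]

rng = np.random.default_rng(2023)
n, p, q = 2000, 10, 3               # p*q^2 = 90 << n, so the distance is small
Gamma_pq, Y_pq = coupled_corners(n, p, q, rng)
hs_dist = np.linalg.norm(np.sqrt(n) * Gamma_pq - Y_pq)
print(hs_dist)                      # heuristically of order sqrt(p*q^2/(2n))
```

Taking instead $p,q$ with $pq^2$ comparable to $n$ makes the printed distance stabilize near $\sqrt{\sigma/2}$, in line with Theorem \ref{ttt}.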
Throughout the paper, the notation $\xi_n\overset{p}{\to} \xi$ means that the random variables $\xi_n$ converge to $\xi$ in probability as $n\to\infty$.
\begin{thm}\label{ttt}
Let the notation $\bold{\Gamma}_{p\times q}$ and $\bold{Y}_{p\times q}$ be as in the above. If $p=p_n, q=q_n$ satisfy $1\leq p, q\leq n$ and $\lim_{n\to\infty}\frac{pq^2}{n}= 0$, then
$\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|_{\rm HS} \overset{p}{\to} 0$ as $n\to\infty$.
Further, if $1\leq p, q\leq n$ satisfy $\lim_{n\to\infty}\frac{pq^2}{n}= \sigma \in (0, \infty)$, then
\begin{eqnarray}\label{indication_phone}
\liminf_{n\to\infty}P(\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|_{\rm HS}\geq \epsilon)>0
\end{eqnarray}
for every $\epsilon \in (0, \sqrt{\sigma/2})$.
\end{thm}
We also obtain an upper bound in Proposition \ref{lala}: $\mathbb{E}
\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|^2_{\rm HS} \leq \frac{24pq^2}{n}$
for any $n\ge 2$ and $1\leq p, q\leq n$. Further, we obtain cleaner results than (\ref{indication_phone}) for two special cases. It is proved in Lemma \ref{piano_black} that
$\|\sqrt{n}\bold{\Gamma}_{p\times 1}-\bold{Y}_{p\times 1}\|_{\rm HS} \to \sqrt{\frac{c}{2}}\cdot |N(0,1)|$ weakly provided $p/n\to c\in (0,1]$. In the proof of Theorem \ref{ttt}, we show that
$\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|_{\rm HS} \overset{p}{\to} \sqrt{\sigma/2}$ if $q\to \infty$ and $(pq^2)/n\to \sigma>0$.
In order to compare the orders for all different norms, we make Table \ref{table1} for
the special case that $p$ and $q$ are of the same scale except for the weak norm.
The weak norm is defined by $\|\bold{A}-\bold{B}\|_{\rm max}=\max_{1\leq i \leq p, 1\leq j \leq q}
|a_{ij}-b_{ij}|$ for $\bold{A}=(a_{ij})_{p\times q}$ and $\bold{B}=(b_{ij})_{p\times q}$.
The distance $\|\sqrt{n}\bold{Z}_n-\bold{G}_n\|_{\rm max}$ for the case $p=n$
is studied in \cite{Jiang06}.
\medskip
\noindent\textbf{Remarks and future questions}
{\bf A}. Compared to the techniques employed in \cite{Jiang06}, the proofs of the results
in this paper use the following new elements:
\begin{enumerate}
\item Tricks for calculating the means of monomials
in the entries of $\bold{\Gamma}_n$ are used in Lemmas \ref{what} and \ref{great}.
\item A subsequence argument is applied to the proofs of both theorems. In particular, the proof of Theorem \ref{main1} is reduced to the case $q/p\to 0$ and the case $q\equiv 1$.
\item A central limit theorem (CLT) on $\mbox{tr}[(\bold{G}_n'\bold{G}_n)^2]$ for the case
$q/p\to 0$ is established in Lemma \ref{awesome_pen}. The CLT for the case $q/p\to c>0$
is well known; see, for example, \cite{Bai_Jack} or \cite{Jonsson}.
\item Some properties of the largest and
the smallest eigenvalues of $\bold{G}_n'\bold{G}_n$ for the case $q/p\to 0$ are proved in
Lemma \ref{fly_ground}. This is a direct consequence of a recent result by Jiang
and Li \cite{JiangLi15}. The situation for $q/p\to c>0$ is well known; see,
for example, \cite{Bai_Jack}.
\item Connections in (\ref{H_TV}) and (\ref{TV_KL}) among distances
provide an efficient way to use known properties of Wishart matrices and Haar-invariant orthogonal matrices. The Wishart matrices appear in ``Proof of (ii) of Theorem \ref{main1}" and the Haar-invariant orthogonal matrices occur in ``Proof of (i) of Theorem \ref{main1}."
\end{enumerate}
{\bf B}. In this paper we approximate the Haar-invariant orthogonal matrices by independent normals under
various probability metrics. It can be shown without difficulty that similar results also hold for Haar-invariant unitary and symplectic matrices.
This can be done by the method employed here together with those from \cite{Jiang09, Jiang10}.
{\bf C}. As mentioned earlier, the work \cite{Jiang06} has been applied to other random matrix problems \cite{Jiang09},
to wireless communications \cite{Li2016, Li2014, Li2016b} and to a problem from Big Data \cite{ChenKe}. In this paper
we consider three other probability metrics: the Hellinger distance, the Kullback-Leibler distance
and the Euclidean distance. We expect more applications. In particular, since
the Hellinger distance and the Kullback-Leibler distance are popular in Statistics and Information Theory, respectively,
we foresee applications in these two areas.
{\bf D}. In Theorem \ref{ttt}, the Haar-invariant orthogonal matrices are obtained
by the Gram-Schmidt algorithm. The approximation by independent normals via the Hilbert-Schmidt norm is valid if $pq^2=o(n)$.
There are other ways to generate Haar-invariant orthogonal matrices; see, for example, \cite{Mezzadri}.
It will be interesting to see the cut-off orders of $p$ and $q$ such that (\ref{indication_phone}) holds under the new couplings.
{\bf E}. So far, five popular probability metrics have been applied to study the distance
between $\sqrt{n}\bold{Z}_n$ and
$\bold{G}_n$: the total variation distance, the Hellinger distance,
the Kullback-Leibler distance
and the Euclidean distance in this paper, and the weak norm in \cite{Jiang06}. The corresponding conclusions show different features. There are many other distances between
probability measures, including the Prohorov distance, the Wasserstein distance and the Kantorovich
transport distance. It will be interesting to see the largest orders of $p$ and $q$ such that these distances go to zero.
Of course, applications of the results along this line are welcome.
\medskip
Finally, the rest of the paper is organized as follows.
\noindent\textbf{Section \ref{sky_phone}: Proof of Theorem \ref{main1}}
Section \ref{sun_leave}: Preliminary Results.
Section \ref{Proof_Main1}: The Proof of Theorem \ref{main1}.
\noindent\textbf{Section \ref{yes_ttt}: Proof of Theorem \ref{ttt}}
Section \ref{papapa}: Auxiliary Results.
Section \ref{big_mouse}: The Proof of Theorem \ref{ttt}.
\noindent \textbf{Section \ref{Good_Tue}: Appendix}.
\section{Proof of Theorem \ref{main1}}\label{sky_phone}
\subsection{Preliminary Results}\label{sun_leave} Throughout the paper we
will adopt the following notation.
\noindent\textbf{Notation}. (a) $X\sim \chi^2(k)$ means that the random variable $X$ follows the
chi-square distribution with $k$ degrees of freedom;
(b) $N_p(\bm{\mu}, \bold{\Sigma})$ stands for the $p$-dimensional normal distribution of mean vector $\bm{\mu}$ and covariance matrix $\bold{\Sigma}.$ We write $\bold{X} \sim N_p(\bm{\mu}, \bold{\Sigma})$ if random vector $\bold{X}$ has the distribution $N_p(\bm{\mu}, \bold{\Sigma})$. In particular, we write $\bold{X} \sim N_p(\bm{0}, \bold{I})$ if the $p$ coordinates of $\bold{X}$ are independent $N(0, 1)$-distributed random variables.
(c) For two sequences of numbers $\{a_n;\, n\geq 1\}$ and $\{b_n;\, n\geq 1\}$, the notation $a_n=O(b_n)$ as $n\to\infty$ means that $\limsup_{n\to\infty}|a_n/b_n|<\infty.$ The notation $a_n=o(b_n)$ as $n\to\infty$ means that $\lim_{n\to\infty}a_n/b_n=0$, and the symbol $a_n \sim b_n$ stands for $\lim_{n\to\infty}a_n/b_n=1$.
(d) $X_n=o_p(a_n)$ means $\frac{X_n}{a_n}\to 0$ in probability as $n\to\infty$.
The symbol $X_n=O_p(a_n)$ means that $\{\frac{X_n}{a_n};\, n\geq 1\}$
are stochastically bounded, that is,
$\sup_{n\geq 1}P(|X_n|\geq b a_n)\to 0$ as $b\to \infty.$
Before proving Theorem \ref{main1}, we need some preliminary results. They appear in a series of lemmas.
The following is taken from Proposition 2.1 by Diaconis, Eaton and Lauritzen \cite{DLE} or Proposition 7.3 by Eaton \cite{ME2}.
\begin{lem}\label{del} Let $\mathbf \Gamma_n$ be an $n\times n$ random matrix which is uniformly
distributed on the orthogonal group $O(n)$ and let $\mathbf Z_n$ be the upper-left $ p\times q$ submatrix of $\mathbf \Gamma_n.$ If $p+q\leq n$ and $q\leq p$ then the joint density function of entries of $\mathbf Z_n$ is
\begin{eqnarray}\label{heavy}
f(z)=(\sqrt{2\pi})^{-pq}\frac{\omega(n-p, q)}{\omega(n, q)}\left\{\mbox{det}(I_{q}-z'z)^{(n-p-q-1)/2}\right\}I_0(z'z)
\end{eqnarray}
where $I_0(z'z)$ is the indicator function of the set that all $q$ eigenvalues of $z'z$ are in $(0,1),$ and $\omega(\cdot, \cdot)$ is the Wishart constant defined by
\begin{eqnarray}\label{fire_you}
\frac{1}{\omega(s, t)}=\pi^{t(t-1)/4}2^{st/2}\prod_{j=1}^t\Gamma\left(\frac{s-j+1}{2}\right).
\end{eqnarray}
Here $t$ is a positive integer and $s$ is a real number, $s>t-1.$ When $p< q,$ the density of $\mathbf Z_n$ is obtained by interchanging $p$ and $q$ in the above Wishart constant.
\end{lem}
The following result is taken from \cite{Jiang2009}. For any integer $a\geq 1$, set $(2a-1)!!=1\cdot 3\cdots (2a-1)$ and $(-1)!!=1$ by convention.
\begin{lem}\label{Jiang2009} Suppose $m\geq 2$ and $\xi_1, \cdots, \xi_m$ are i.i.d. random variables with $\xi_1 \sim N(0,1).$ Define $U_i=\frac{\xi_i^2}{\xi_1^2 + \cdots + \xi_m^2}$ for $1\leq i \leq m$. Let $a_1, \cdots, a_m$ be non-negative integers and $a=\sum_{i=1}^m a_i$. Then
\begin{eqnarray*}
E\big(U_1^{a_1}\cdots U_m^{a_m}\big) = \frac{\prod_{i=1}^m(2a_i-1)!!}{\prod_{i=1}^a(m+2i-2)}.
\end{eqnarray*}
\end{lem}
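Lemma \ref{Jiang2009} is easy to check by simulation. A small illustrative Monte Carlo experiment of ours (assuming Python with numpy; the choices $m=5$, $a_1=2$, $a_2=1$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
m, N = 5, 400_000
xi = rng.standard_normal((N, m))
U = xi**2 / np.sum(xi**2, axis=1, keepdims=True)   # U_i = xi_i^2 / sum_j xi_j^2

# E(U_1^2 U_2) with a_1 = 2, a_2 = 1, so a = 3:
mc = np.mean(U[:, 0]**2 * U[:, 1])
exact = (3 * 1) / (5 * 7 * 9)      # (3!!)(1!!) / [m(m+2)(m+4)] = 1/105
print(mc, exact)
```

The Monte Carlo average agrees with the closed form $1/105$ to within the expected sampling error.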
\medskip
The expectations of some monomials of the entries of Haar-orthogonal matrices will be computed next. Recall $\bold{\Gamma}_n=(\gamma_{ij})_{n\times n}$ is an Haar-invariant orthogonal matrix. The following facts will be repeatedly used later. They follow from the property of the Haar invariance.
\begin{itemize}
\item[{\bf F1})]\, The vector $(\gamma_{11},\cdots, \gamma_{n1})'$ and $\frac{1}{\sqrt{\xi_1^2+\cdots +\xi_n^2}}(\xi_1, \cdots, \xi_n)'$ have the same probability distribution, where $\xi_1, \cdots, \xi_n$ are i.i.d. $N(0,1)$-distributed random variables.
\item[{\bf F2})]\, By the orthogonal invariance, for any $1\leq k \leq n$, any $k$ different rows/columns of $\bold{\Gamma}_n$ have the same joint distribution as that of the first $k$ rows of $\bold{\Gamma}_n$.
\end{itemize}
\begin{lem}\label{what} Let $\bold{\Gamma}_n=(\bm{\gamma}_1, \cdots, \bm{\gamma}_n)=(\gamma_{ij})_{n\times n}$ be an Haar-invariant orthogonal matrix. Then
(a)\, $\mathbb{E}(\gamma_{11}^2)=\frac{1}{n}$ and $\mathbb{E}(\gamma_{11}^4)=\frac{3}{n(n+2)}$;
(b)\, $\mathbb{E}(\gamma_{11}^2\gamma_{12}^2)=\frac{1}{n(n+2)}\ \ \mbox{and} \ \ \mathbb{E}(\gamma^2_{11}\gamma^2_{22})=\frac{n+1}{n(n-1)(n+2)}$;
(c)\, $\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22})=-\frac{1}{n(n-1)(n+2)}$.
\end{lem}
\noindent\textbf{Proof}.
By Property {\bf F1}), picking $m=n$, $a_1=1, a_2=\cdots =a_n=0$ from Lemma \ref{Jiang2009}, we see $\mathbb{E}(\gamma_{11}^2)=\frac{1}{n}$. Choosing $a_1=2, a_2=\cdots =a_n=0$, we obtain $\mathbb{E}(\gamma_{11}^4)=\frac{3}{n(n+2)}$. Selecting $a_1=a_2=1, a_3=\cdots =a_n=0$, we see $\mathbb{E}(\gamma_{11}^2\gamma_{12}^2)=\frac{1}{n(n+2)}$.
Now, since $\|\bm{\gamma}_1\|=\|\bm{\gamma}_2\|=1$, by {\bf F2})
\begin{eqnarray*}
1=\mathbb{E}\big(\|\bm{\gamma}_1\|^2\|\bm{\gamma}_2\|^2\big)
&=&\mathbb{E}\bigg(\sum_{i=1}^n \gamma_{i1}^2\gamma_{i2}^2+\sum_{1\le i\neq j \le n}\gamma_{i1}^2\gamma_{j2}^2\bigg)\\
& = & n\mathbb{E}(\gamma_{11}^2\gamma_{12}^2) + n(n-1)\mathbb{E}(\gamma_{11}^2 \gamma_{22}^2)\\
&=&\frac{1}{n+2}+n(n-1)\mathbb{E}(\gamma_{11}^2 \gamma_{22}^2).
\end{eqnarray*}
This yields the second conclusion of (b).
Now we work on conclusion (c). In fact, since the first two columns of $\mathbf {\Gamma}_n$ are orthogonal, we know
\begin{eqnarray}\label{what_green}
0=\big(\sum_{i=1}^n\gamma_{i1}\gamma_{i2}\big)^2=\sum_{i=1}^n\gamma_{i1}^2\gamma_{i2}^2+\sum_{1\leq i\ne j \leq n}\gamma_{i1}\gamma_{i2}\gamma_{j1}\gamma_{j2}.
\end{eqnarray}
By Property {\bf F2)} again,
$$\mathbb{E}(\gamma_{i1}\gamma_{i2}\gamma_{j1}\gamma_{j2})=\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22})$$ for any $i \ne j$. Hence, take expectations of both sides of (\ref{what_green}) to see
\begin{eqnarray*}\label{sun_land}
\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22})=-\frac{n}{n(n-1)}\mathbb{E}(\gamma_{11}^2\gamma_{12}^2)=-\frac{1}{(n-1)n(n+2)}.
\end{eqnarray*}
\hfill$\blacksquare$
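The moment identities of Lemma \ref{what} can also be checked by simulation. The sketch below (our illustrative code, assuming Python with numpy; Haar matrices are generated by QR with the sign convention $\mathrm{diag}(R)>0$) estimates the two mixed moments in (b) and (c) for $n=4$:

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Sample from the Haar measure on O(n) via sign-corrected QR."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(1)
n, N = 4, 80_000
m_cross, m_prod = 0.0, 0.0
for _ in range(N):
    G = haar_orthogonal(n, rng)
    m_cross += G[0, 0]**2 * G[1, 1]**2                # gamma_11^2 gamma_22^2
    m_prod += G[0, 0] * G[0, 1] * G[1, 0] * G[1, 1]   # gamma_11 gamma_12 gamma_21 gamma_22
m_cross /= N
m_prod /= N

exact_cross = (n + 1) / (n * (n - 1) * (n + 2))   # (b): equals 5/72 for n = 4
exact_prod = -1 / (n * (n - 1) * (n + 2))         # (c): equals -1/72 for n = 4
print(m_cross, exact_cross, m_prod, exact_prod)
```

Both estimates match the closed forms to within Monte Carlo error; note the negative correlation in (c) forced by the orthogonality of the first two columns.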
In order to understand the trace of the third power of $\mathbf{Z}_n'\mathbf{Z}_n$, we need the following expectations of monomials of the matrix elements.
\begin{lem}\label{great} Let $\mathbf \Gamma_n=(\gamma_{ij})_{n\times n}$ be a random matrix with the uniform
distribution on the orthogonal group $O(n)$, $n\geq 3.$ The following holds:
(a) $\mathbb{E}(\gamma_{11}^2\gamma_{21}^2\gamma_{31}^2)=\frac{1}{n(n+2)(n+4)}$.
(b) $\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22}\gamma_{23}^2)=
-\frac{1}{(n-1)n(n+2)(n+4)}.$
(c) $\mathbb{E}(\gamma_{11}^2\gamma_{21}^2\gamma_{22}^2)=
\frac{1}{(n-1)n(n+2)}-\frac{3}{(n-1)n(n+2)(n+4)}.$
(d) $\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22}^3)=-\frac{3}{(n-1)n(n+2)(n+4)}$.
(e) $\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{22}\gamma_{23}\gamma_{31}\gamma_{33})
=\frac{2}{(n-2)(n-1)n(n+2)(n+4)}.$
\end{lem}
Obviously, Lemma \ref{great} is more complex than Lemma \ref{what}.
We postpone its proof to the Appendix, Section \ref{Good_Tue}.
Based on Lemma \ref{what}, we now present two identities that will be used later.
\begin{lem}\label{KL-Key} Let $\lambda_1, \cdots, \lambda_q$ be the eigenvalues of $\mathbf {Z}_n'\mathbf {Z}_n$, where $\mathbf Z_n$ is defined as in Lemma \ref{del}. Then
$$\aligned &\mathbb{E}\sum_{i=1}^q\lambda_i=\frac{pq}{n}; \\
&\mathbb{E}\sum_{i=1}^q\lambda_i^2=\frac{pq}{n(n+2)}\Big[p+q+1-\frac{(p-1)(q-1)}{n-1}\Big].
\endaligned $$
\end{lem}
\noindent\textbf{Proof}.
The first equality is trivial since
\begin{eqnarray*}\label{mean}
\mathbb{E}\sum_{i=1}^q \lambda_i=\mathbb{E}\mbox{tr}(\mathbf Z_n'\mathbf Z_n)=\mathbb{E}\sum_{i=1}^{p}\sum_{j=1}^q \gamma_{ij}^2=\frac{pq}{n}
\end{eqnarray*}
where we use the fact that $E(\gamma_{ij}^2)=E(\gamma_{11}^2)=\frac{1}{n}$ for any $i,j$ by (a) of Lemma \ref{what} and Property {\bf F2}). For the second equality, first
\begin{eqnarray}
\sum_{i=1}^q\lambda_i^2=\mbox{tr}(\mathbf {Z}_n'\mathbf{Z}_n\mathbf{Z}_n'\mathbf{Z}_n)
&=& \sum_{1\leq j,l\leq p; 1\leq i, k\leq q}\gamma_{ji}\gamma_{jk}\gamma_{lk}\gamma_{li}\nonumber\\
& =: & A + B +C,\label{additional_care}
\end{eqnarray}
where $A$ denotes the sum over $j=l,\ i=k$; $B$ the sum over $j=l,\, i \ne k$ or $j \ne l,\, i=k$; and $C$ the sum over $j \ne l,\, i \ne k.$ It is then easy to see that
\begin{eqnarray*}
& & A=\sum_{1\leq j \leq p,\, 1\leq i \leq q}\gamma_{ji}^4;\ \ \ B=\sum_{1\leq j \leq p,\, 1\leq i \ne k\leq q}\gamma_{ji}^2\gamma_{jk}^2 + \sum_{1\leq j\ne l \leq p,\, 1\leq i \leq q}\gamma_{ji}^2\gamma_{li}^2;\\
& & C=\sum_{1\leq j\ne l\leq p;\, 1\leq i\ne k\leq q}\gamma_{ji}\gamma_{jk}\gamma_{lk}\gamma_{li}.
\end{eqnarray*}
\medskip
By Properties {\bf F1}) and {\bf F2}) and Lemma \ref{what}, we see
\begin{eqnarray*}
\mathbb{E}A&=&pq \cdot E(\gamma_{11}^4)=
\frac{3pq}{n(n+2)};\label{thousand}\\
\mathbb{E}B &=& [pq(q-1)+pq(p-1)]\cdot \mathbb{E}(\gamma_{11}^2\gamma_{12}^2)
=\frac{pq(p+q-2)}{n(n+2)};\nonumber\\
\mathbb{E}C &= & pq(p-1)(q-1)\cdot \mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22})=-\frac{pq(p-1)(q-1)}{(n-1)n(n+2)}.\nonumber
\end{eqnarray*}
Consequently,
\begin{eqnarray*}
\mathbb{E}\sum_{i=1}^q\lambda_i^2
&=&\mathbb{E}A+\mathbb{E}B + \mathbb{E}C\\
&=& \frac{3pq}{n(n+2)} + \frac{pq(p+q-2)}{n(n+2)}-\frac{pq(p-1)(q-1)}{(n-1)n(n+2)}\\
& = & \frac{pq}{n(n+2)}\Big[p+q+1-\frac{(p-1)(q-1)}{n-1}\Big].
\end{eqnarray*}
The proof is completed. \hfill$\blacksquare$
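The two identities of Lemma \ref{KL-Key} are easy to verify numerically. A small Monte Carlo sketch of ours (assuming Python with numpy; the choice $n=6$, $p=3$, $q=2$ is arbitrary), reusing the QR construction of a Haar orthogonal matrix:

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Sample from the Haar measure on O(n) via sign-corrected QR."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(3)
n, p, q, N = 6, 3, 2, 40_000
s1, s2 = 0.0, 0.0
for _ in range(N):
    Z = haar_orthogonal(n, rng)[:p, :q]   # p x q upper-left corner
    M = Z.T @ Z
    s1 += np.trace(M)                     # sum of the eigenvalues
    s2 += np.trace(M @ M)                 # sum of the squared eigenvalues
s1 /= N
s2 /= N

exact1 = p * q / n                                                  # = 1.0 here
exact2 = p * q / (n * (n + 2)) * (p + q + 1 - (p - 1) * (q - 1) / (n - 1))  # = 0.7 here
print(s1, exact1, s2, exact2)
```

Both sample averages reproduce the closed-form expectations to within Monte Carlo error.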
\medskip
With Lemma \ref{great}, we are ready to compute the following quantity.
\begin{lem}\label{tri} Let $\lambda_1, \cdots, \lambda_q$ be the eigenvalues of $\mathbf {Z}_n'\mathbf {Z}_n.$ Then,
\begin{eqnarray*}
\mathbb{E}\sum_{i=1}^q \lambda_i^3&=&\frac{pq}{n(n+2)(n+4)}\big[p^2+q^2+3pq+3(p+q)+4\big]\\
&&\quad\quad +\frac{pq(p-1)(q-1)}{(n-1)n(n+2)(n+4)}\bigg[\frac{2(p-2)(q-2)}{n-2}-3(p+q)\bigg].
\end{eqnarray*}
\end{lem}
\noindent\textbf{Proof}. By definition,
\begin{eqnarray}\label{sumsum}
\sum_{i=1}^q\lambda_i^3=\mbox{tr}(\mathbf {Z}_n'\mathbf{Z}_n\mathbf{Z}_n'\mathbf{Z}_n\mathbf{Z}_n'\mathbf{Z}_n)
&= & \sum_{1\leq i, j,k\leq p; 1\leq l, s, t\leq q}\gamma_{il}\gamma_{is}\gamma_{js}\gamma_{jt}\gamma_{kt}\gamma_{kl}\nonumber\\
& = & A_1+A_2 + A_3,
\end{eqnarray}
where $A_1$ corresponds to the sum over $i=j=k$, $A_2$ to the sum in which only two of $\{i, j, k\}$ are identical, and $A_3$ to the sum over pairwise distinct $i, j, k$. We next compute each term in detail.
{\it Case 1: $i=j=k$}. Each term in the sum has the expression $\mathbb{E}(\gamma_{il}^2\gamma_{is}^2\gamma_{it}^2)$. The corresponding sum then becomes
\begin{eqnarray*}
A_1&=& \sum_{i=1}^p\sum_{1\leq l,s,t\leq q}\mathbb{E}(\gamma_{il}^2\gamma_{is}^2\gamma_{it}^2)\\
& = & pq\mathbb{E}(\gamma_{11}^6)+ 3pq(q-1)\mathbb{E}(\gamma_{11}^4\gamma_{12}^2)+ pq(q-1)(q-2)\mathbb{E}(\gamma_{11}^2\gamma_{12}^2\gamma_{13}^2)\\
& = & \frac{15pq}{n(n+2)(n+4)} + \frac{9pq(q-1)}{n(n+2)(n+4)} + \frac{pq(q-1)(q-2)}{n(n+2)(n+4)}
\end{eqnarray*}
by {\bf F2)}, Lemmas \ref{Jiang2009}, \ref{what} and \ref{great}.
{\it Case 2: only two of $\{i, j, k\}$ are identical.} The corresponding sum is
\begin{eqnarray*}
A_2 & =& 3\sum_{1\leq i\ne k\leq p}\sum_{1\leq l,s,t\leq q}\mathbb{E}(\gamma_{il}\gamma_{is}^2\gamma_{it}\gamma_{kt}\gamma_{kl})\\
&=& 3p(p-1) \sum_{1\leq l,s,t\leq q}\mathbb{E}(\gamma_{1l}\gamma_{1s}^2\gamma_{1t}\gamma_{2t}\gamma_{2l}).
\end{eqnarray*}
By symmetry and {\bf F2)}
\begin{eqnarray*}
\frac{A_2}{3p(p-1)}&=&\sum_{l=1}^q\mathbb{E}(\gamma_{1l}^4\gamma_{2l}^2) + \sum_{1\leq l\ne s\leq q}\mathbb{E}(\gamma_{1l}\gamma_{1s}^3\gamma_{2s}\gamma_{2l})+ \sum_{1\leq l\ne s\leq q}\mathbb{E}(\gamma_{1l}^2\gamma_{1s}^2\gamma_{2l}^2)\\
&&\quad \quad\quad\quad + \sum_{1\leq l \ne t\leq q}\mathbb{E}(\gamma_{1l}^3\gamma_{1t}\gamma_{2t}\gamma_{2l})
+ \sum_{1\leq l\ne s\ne t\leq q}\mathbb{E}(\gamma_{1l}\gamma_{1s}^2\gamma_{1t}\gamma_{2t}\gamma_{2l})\\
& = & q\cdot\mathbb{E}(\gamma_{11}^4\gamma_{21}^2) + q(q-1)\cdot\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22}^3) + q(q-1)\cdot\mathbb{E}(\gamma_{11}^2\gamma_{21}^2\gamma_{22}^2)\\
& & + q(q-1)\cdot\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22}^3) + q(q-1)(q-2)\cdot\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22}\gamma_{23}^2)
\end{eqnarray*}
where the sums in the first equality, appearing in order, correspond to $l=s=t$, $l\ne s=t$, $s\ne l= t$, $t\ne l= s$ and pairwise distinct $l, s, t$, respectively. By Lemmas \ref{Jiang2009} and \ref{great},
\begin{eqnarray*}
& & A_2\\
& =& 3p(p-1)\Big[\frac{3q}{n(n+2)(n+4)} - \frac{6q(q-1)}{(n-1)n(n+2)(n+4)}\\
& & +q(q-1)\Big(\frac{1}{(n-1)n(n+2)}-\frac{3}{(n-1)n(n+2)(n+4)}\Big)-
\frac{q(q-1)(q-2)}{(n-1)n(n+2)(n+4)}\Big].
\end{eqnarray*}
{\it Case 3: $i, j, k$ pairwise distinct.} The corresponding sum becomes
\begin{eqnarray*}
A_3=p(p-1)(p-2)\sum_{1\leq l, s, t\leq q}\mathbb{E}\gamma_{1l}\gamma_{1s}\gamma_{2s}\gamma_{2t}\gamma_{3l}\gamma_{3t}.
\end{eqnarray*}
By symmetry and the same classification as that in {\it Case 2},
\begin{eqnarray*}
& & \frac{A_3}{p(p-1)(p-2)}\\
& = & q\cdot \mathbb{E}(\gamma_{11}^2\gamma_{21}^2\gamma_{31}^2) + 3q(q-1)\cdot \mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22}\gamma_{23}^2)\\
& & \quad\quad\quad\quad\quad\quad +\, q(q-1)(q-2)\cdot \mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{22}\gamma_{23}\gamma_{31}\gamma_{33})\\
&= & \frac{q}{n(n+2)(n+4)}-\frac{3q(q-1)}{(n-1)n(n+2)(n+4)} + \frac{2q(q-1)(q-2)}{(n-2)(n-1)n(n+2)(n+4)}.
\end{eqnarray*}
Combining (\ref{sumsum}) and the formulas for $A_1, A_2$ and $A_3$, we see
\begin{eqnarray*}
\mathbb{E}\sum_{i=1}^q \lambda_i^3&=&\frac{pq}{n(n+2)(n+4)}\bigg[p^2+q^2+6(p+q)+1\bigg]+\frac{3pq(p-1)(q-1)}{(n-1)n(n+2)}\\
&&-\frac{pq(p-1)(q-1)}{(n-1)n(n+2)(n+4)}\bigg[15+3(p+q)-\frac{2(p-2)(q-2)}{n-2}\bigg].
\end{eqnarray*}
Now write
\begin{eqnarray*}
\frac{3pq(p-1)(q-1)}{(n-1)n(n+2)}=\frac{3pq(p-1)(q-1)}{n(n+2)(n+4)}+
\frac{15pq(p-1)(q-1)}{(n-1)n(n+2)(n+4)}.
\end{eqnarray*}
By making a substitution, we obtain the desired formula.
\hfill$\blacksquare$
\medskip
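The last combination step in the proof of Lemma \ref{tri} is again a rational-function identity, so it can be verified exactly; a minimal sketch (helper names `intermediate` and `stated` are ours):

```python
from fractions import Fraction as F

def intermediate(p, q, n):
    # expression obtained after combining A1, A2 and A3
    return (F(p*q, n*(n+2)*(n+4)) * (p**2 + q**2 + 6*(p+q) + 1)
            + F(3*p*q*(p-1)*(q-1), (n-1)*n*(n+2))
            - F(p*q*(p-1)*(q-1), (n-1)*n*(n+2)*(n+4))
              * (15 + 3*(p+q) - F(2*(p-2)*(q-2), n-2)))

def stated(p, q, n):
    # closed form in the statement of Lemma tri
    return (F(p*q, n*(n+2)*(n+4)) * (p**2 + q**2 + 3*p*q + 3*(p+q) + 4)
            + F(p*q*(p-1)*(q-1), (n-1)*n*(n+2)*(n+4))
              * (F(2*(p-2)*(q-2), n-2) - 3*(p+q)))

for (p, q, n) in [(4, 3, 11), (6, 5, 30), (12, 9, 200)]:
    assert intermediate(p, q, n) == stated(p, q, n)
```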
We next evaluate the normalizing constant appearing in (\ref{heavy}).
\begin{lem}\label{KO} For $1\leq q\le p<n$, define
\begin{equation}\label{eq1} K_n:=\left(\frac{2}{n}\right)^{pq/2}\prod_{j=0}^{q-1}\frac{\Gamma((n-j)/2)}{\Gamma((n-p-j)/2)}.\end{equation} If $p=p_n\to \infty$, $\limsup_{n\to\infty}\frac{p}{n}<1$ and $pq=O(n)$, then
\begin{equation}\label{KmO}\log K_n=-\frac{pq}{2}+\frac{q(q+1)}{4}\log \big(1+\frac p{n-p}\big)-\frac{pq^3}{12n^2}-c_n q\log\big(1-\frac pn\big)+o(1)\end{equation}
as $n\to\infty$, where $c_n:=\frac{1}{2}(n-p-q-1).$
\end{lem}
\noindent\textbf{Proof}. Recalling the Stirling formula (see, e.g., p. 204 from \cite{Ahlfors} or p. 368 from \cite{Gamelin}),
$$\log\Gamma(x)=\big(x-\frac{1}{2}\big)\log x-x+\log\sqrt{2\pi}+\frac1{12x}+O(\frac{1}{x^3})$$
as $x\to+\infty.$
Then, we have from the fact $q=o(n)$ that
\begin{eqnarray*}
\log K_n &=& -\frac{pq}{2}\log \frac{n}{2}+\sum_{j=0}^{q-1}\Big[\log\Gamma\big(\frac{n-j}{2}\big)-
\log\Gamma\big(\frac{n-p-j}{2}\big)\Big]\\
& = & -\frac{pq}{2}\log \frac{n}{2}+\sum_{j=0}^{q-1}\Big[\frac{n-j-1}{2}\log \frac{n-j}{2}-\frac{n-p-j-1}{2}\log \frac{n-p-j}{2}
-\frac{p}{2}\Big]\\
& & \quad \quad \quad \quad \quad \quad\quad \quad \quad\quad \quad \quad \quad \quad \quad\quad \quad \quad \quad \quad \quad \quad \quad \quad\quad +o(1).
\end{eqnarray*}
Now, writing
$\frac{n-j-1}{2}=\frac{n-p-j-1}{2}+\frac{p}{2}$ and absorbing the term ``$-\frac{pq}{2}\log \frac{n}{2}$'' into the sum ``$\sum_{j=0}^{q-1}$'', we see
\begin{eqnarray}\label{Knmiddle}
\log K_n
= -\frac{pq}{2}+\sum_{j=0}^{q-1}\frac{n-p-j-1}{2}\log\frac{n-j}{n-p-j}+\frac p2\sum_{j=0}^{q-1}\log\frac{n-j}{n}+o(1).
\end{eqnarray}
It is easy to check
\begin{eqnarray}\label{substi}
\log\frac{n-j}{n-p-j}=-\log\big(1-\frac{p}{n}\big)+\log\Big[1+\frac{pj}{n(n-p-j)}\Big].
\end{eqnarray}
Putting \eqref{substi} back into the expression \eqref{Knmiddle}, we have
\begin{eqnarray*}\label{semi-final}
\log K_n &=& -\frac{pq}{2}+\frac p2\sum_{j=0}^{q-1}\log(1-\frac{j}{n})\\
&&+\sum_{j=0}^{q-1}\frac{n-p-j-1}{2}\Big[-\log(1-\frac{p}{n})+
\log\Big(1+\frac{pj}{n(n-p-j)}\Big)\Big]+o(1)\\
&=& -\frac{pq}{2}-\log(1-\frac{p}{n})\sum_{j=0}^{q-1}\frac{n-p-j-1}{2}\\
&& +\sum_{j=0}^{q-1}\Big[\frac{p}{2}\log\big(1-\frac{j}{n}\big)+
\frac{n-p-j-1}{2}\log\Big(1+\frac{pj}{n(n-p-j)}\Big)\Big]+o(1)\\
&=& -\frac {pq} 2-\bigg[\frac{(n-p)q}{2}-\frac{q(q+1)}{4}\bigg]\log(1-\frac{p}{n})\\
&& +\sum_{j=0}^{q-1}\Big[\frac{p}{2}\log\Big(1-\frac{j}{n}\Big)+
\frac{n-p-j-1}{2}\log\Big(1+\frac{pj}{n(n-p-j)}\Big)\Big]+o(1).
\end{eqnarray*}
Since $\log (1+x)=x-\frac{1}{2}x^2+O(x^3)$ as $x\to 0$, we have
\begin{eqnarray*} &&\frac{p}{2}\log\big(1-\frac{j}{n}\big)+\frac{n-p-j-1}{2}\log\Big[1+\frac{pj}{n(n-p-j)}\Big]\\
&=&-\frac{pj}{2n}-\frac{pj^2}{4n^2}+
\frac{n-p-j-1}{2}\cdot\frac{pj}{n(n-p-j)}
+O\Big(\frac{pq^3}{n^3}+ \frac{1}{n}\Big)\\
&=&-\frac{pj}{2n}-\frac{pj^2}{4n^2}+\frac{pj}{2n}-\frac{pj}{2n(n-p-j)}+
O\Big(\frac{1}{n}\Big)\\
&=&-\frac{pj^2}{4n^2}+O\Big(\frac{1}{n}\Big),
\end{eqnarray*}
uniformly for all $0\leq j \leq q-1$, where we use the facts $\max_{0\leq j \leq q-1}\frac{j}{n}\leq\frac{q}{n}$, $\max_{0\leq j \leq q-1}\frac{pj}{n(n-p-j)}\leq \frac{pq}{n(n-p-q)}=O(\frac{1}{n})$ and $\frac{pq^3}{n^3}=O(\frac{1}{n})$, guaranteed by the conditions $p\to \infty$, $q\leq p$ and $pq=O(n)$. Combining the last two assertions, we conclude
$$\aligned &\log K_n=-\frac {pq} 2-\bigg[\frac{(n-p)q}{2}-\frac{q(q+1)}{4}\bigg]\log\Big(1-\frac{p}{n}\Big)-
\frac{pq^3}{12n^2}+O\Big(\frac{q}{n}\Big)\\
&=-\frac{pq}{2}-\frac{q(q+1)}{4}\log\Big(1-\frac pn\Big)-\frac{pq^3}{12n^2}-c_n q\log\Big(1-\frac pn\Big)+o(1)
\endaligned$$
with $c_n=\frac{1}{2}(n-p-q-1).$
\hfill$\blacksquare$
\medskip
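The expansion (\ref{KmO}) can be checked numerically against the exact definition (\ref{eq1}) via the log-Gamma function; a small sketch, with parameter values chosen by us to respect $p\to\infty$, $q\le p$ and $pq=O(n)$:

```python
from math import lgamma, log

def log_Kn(p, q, n):
    # exact log K_n from the definition (eq1)
    return (-p*q/2*log(n/2)
            + sum(lgamma((n - j)/2) - lgamma((n - p - j)/2) for j in range(q)))

def approx(p, q, n):
    # right-hand side of (KmO), with c_n = (n - p - q - 1)/2
    c_n = (n - p - q - 1)/2
    return (-p*q/2 + q*(q+1)/4*log(1 + p/(n - p))
            - p*q**3/(12*n**2) - c_n*q*log(1 - p/n))

p, q, n = 5000, 200, 10**6      # pq = n, q/p small
err = abs(log_Kn(p, q, n) - approx(p, q, n))   # should be o(1)
```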
Now we present some properties of the chi-square distribution.
\begin{lem}\label{party_love} Given an integer $m\ge 1,$ recall that the random variable $\chi^2_{m}$ has density function $$
f(x)=\frac{x^{\frac m2-1} e^{-\frac x2}}{2^{\frac{m}{2}}\Gamma(\frac m2)}$$ for any $x>0$. Then
$$ \mathbb{E}(\chi^2_{m})^k=\prod_{l=0}^{k-1}(m+2l)
$$
for any positive integer $k.$
In particular, we have
\begin{eqnarray*}
& & {\rm Var}(\chi^2_{m})=2m, \quad {\rm Var}\big((\chi^2_m-m)^2\big)=8m(m+6); \\
& & {\rm Var}\big((\chi^2_m)^2\big)=8m(m+2)(m+3); \\
& & \mathbb{E}((\chi^2_m-m)^3)=8m, \quad \mathbb{E}((\chi^2_m-m)^4)=12m(m+4).
\end{eqnarray*}
\end{lem}
\noindent {\bf Proof.}
Note that
\begin{equation}\label{chik}\aligned \mathbb{E}(\chi^2_{m})^k&=\frac1{2^{\frac{m}{2}}\Gamma(\frac m2)}\int_{0}^{\infty} x^{\frac {m+2k}2-1} e^{-\frac x2} dx\\
&=\frac{2^k\Gamma(\frac m2+k)}{\Gamma(\frac m2)}=\prod_{i=0}^{k-1}(m+2i)
\endaligned
\end{equation}
for any $k\ge 1$.
Here for the last equality we use the property of the Gamma function that $\Gamma(l+1)=l\Gamma(l)$ for any $l>0.$
By \eqref{chik},
it is easy to check that
\begin{eqnarray*}
\mathbb{E}(\chi^2_{m}-m)^2 &=& \mathbb{E}[(\chi^2_m)^2]-2m\mathbb{E}(\chi_m^2)+m^2=2m; \\
{\rm Var}\big((\chi^2_m)^2\big)&=& \mathbb{E}[(\chi^2_m)^4]-\big[\mathbb{E}(\chi^2_m)^2\big]^2\\
&=& m(m+2)[(m+4)(m+6)-m(m+2)]\\
&=& 8m(m+2)(m+3)
\end{eqnarray*}
and
$$\aligned
{\rm Var}\big((\chi^2_m-m)^2\big)&={\rm Var}\big((\chi^2_m)^2\big)+4m^2{\rm Var}\big(\chi^2_m\big)-4m\cdot{\rm Cov}\big((\chi^2_m)^2, \chi^2_m\big)\\
&=8m(m+2)(m+3)+8m^3-4m\big[m(m+2)(m+4)-m^2(m+2)\big]\\
&=8m(m+6),
\endaligned $$ where we use the formula ${\rm Cov}\big((\chi^2_m)^2, \chi^2_m\big)=\mathbb{E}[(\chi^2_m)^3]-\mathbb{E}[(\chi^2_m)^2]\cdot \mathbb{E}(\chi^2_m).$
Similarly by the binomial formula, we have
$$\aligned
\mathbb{E}\big((\chi^2_m-m)^3\big)&=\mathbb{E}\big[(\chi^2_m)^3-3m(\chi^2_m)^2+3m^2(\chi^2_m)-m^3\big]\\
&=m(m+2)(m+4)-3m^2(m+2)+3m^3-m^3\\
&=8m\\
\endaligned $$ and
$$\aligned
\mathbb{E}\big((\chi^2_m-m)^4\big)&=\mathbb{E}\big[(\chi^2_m)^4-4m(\chi^2_m)^3+6m^2(\chi^2_m)^2-4m^3(\chi^2_m)+m^4\big]\\
&=m(m+2)(m+4)(m+6)-4m^2(m+2)(m+4)+6m^3(m+2)\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -4m^4+m^4\\
&=12m(m+4).
\endaligned $$ The proof is completed.
\hfill$\blacksquare$
\medskip
\medskip
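All the moment identities in Lemma \ref{party_love} follow from the product formula by binomial expansion, so they can be verified with exact integer arithmetic; a minimal check (helper names ours):

```python
from math import comb

def raw_moment(m, k):
    # E[(chi^2_m)^k] = prod_{l=0}^{k-1} (m + 2l)
    out = 1
    for l in range(k):
        out *= m + 2*l
    return out

def central_moment(m, k):
    # E[(chi^2_m - m)^k] via the binomial expansion
    return sum((-m)**(k - j) * comb(k, j) * raw_moment(m, j) for j in range(k + 1))

for m in range(1, 30):
    assert central_moment(m, 2) == 2*m                     # Var(chi^2_m)
    assert central_moment(m, 3) == 8*m
    assert central_moment(m, 4) == 12*m*(m + 4)
    # Var((chi^2_m)^2) = E[(chi^2_m)^4] - (E[(chi^2_m)^2])^2
    assert raw_moment(m, 4) - raw_moment(m, 2)**2 == 8*m*(m + 2)*(m + 3)
    # Var((chi^2_m - m)^2) = E[(chi^2_m - m)^4] - (E[(chi^2_m - m)^2])^2
    assert central_moment(m, 4) - central_moment(m, 2)**2 == 8*m*(m + 6)
```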
The next result is on Wishart matrices. A Wishart matrix is determined by parameters $p$ and $q$ if it is generated by a random sample from $N_p(\bold{0}, \bold{I}_p)$ with sample size $q$. Let $p=p_n$ and $q=q_n$. Most work on this matrix has been carried out under the condition $\lim_{n\to\infty}q_n/p_n= c\in (0, \infty).$ For instance, the Marchenko-Pastur law \cite{Marchenko}, the central limit theorem (e.g., \cite{Bai_Silverstein}) and the large deviations of its eigenvalues (e.g., \cite{Hiai}) were obtained in this regime. The following conclusion is based on the extreme case that $q_n/p_n\to 0$. It is one of the key ingredients in the proof of Theorem \ref{main1}.
\begin{lem}\label{fly_ground} Let $\bold{g}_1, \cdots, \bold{g}_q$ be i.i.d. random vectors with distribution $N_p(\bold{0}, \bold{I}_p)$. Set $\bold{X}_n=(\bold{g}_1, \cdots, \bold{g}_q).$ Let $\lambda_1, \cdots, \lambda_q$ be the eigenvalues of $\bold{X}_n'\bold{X}_n$, and let $p=p_n$ and $q=q_n$ satisfy $p\to\infty$, $q\to \infty$ and $\frac{q}{p}\to 0$. Then $\max_{1\leq i \leq q}|\frac{\lambda_i}{p}-1|\stackrel{p}{\to} 0$ as $n\to\infty$.
\end{lem}
\noindent\textbf{Proof}. Recall (1.2) from \cite{JiangLi15}. Take $\beta=1$ and treat their $n$ as our ``$q$'' in Theorems 2 and 3 from \cite{JiangLi15}. The rate function $I$ satisfies $I(1)=0$ in both theorems. By the large deviation principles in the two theorems, we see
\begin{eqnarray*}
\frac{1}{p}\max_{1\leq i \leq q}\lambda_i \stackrel{p}{\to} 1\ \ \mbox{and}\ \ \frac{1}{p}\min_{1\leq i \leq q}\lambda_i \stackrel{p}{\to} 1
\end{eqnarray*}
as $n\to\infty$. The conclusion then follows from the inequality
\begin{eqnarray*}
\max_{1\leq i \leq q}\Big|\frac{\lambda_i}{p}-1\Big| \leq \Big|\frac{1}{p}\max_{1\leq i \leq q}\lambda_i-1\Big| + \Big|\frac{1}{p}\min_{1\leq i \leq q}\lambda_i-1\Big|.
\end{eqnarray*}
The proof is completed. \hfill$\blacksquare$
\begin{lem}\label{awesome_pen} Let $\bold{g}_1, \cdots, \bold{g}_q$ be i.i.d. random vectors
with distribution $N_p(\bold{0}, \bold{I}_p)$. Assume $p=p_n\to\infty, q=q_n\to\infty$
and $\frac{q}{p}\to 0$.
Then, as $n\to\infty$,
\begin{eqnarray*}
\frac{1}{pq}\sum_{1\leq i\ne j\leq q}\big[(\bold{g}_i'\bold{g}_j)^2-p\big]
\ \mbox{converges weakly to}\ N(0, 4).
\end{eqnarray*}
\end{lem}
The proof of Lemma \ref{awesome_pen} is based on a central limit theorem on martingales. Due to its length, we put it as an appendix in Section \ref{Good_Tue}. Figure \ref{fig_Xinmei}, which will be presented later, simulates the densities of $W:=\frac{1}{2pq}\sum_{1\leq i\ne j\leq q}\big[(\bold{g}_i'\bold{g}_j)^2-p\big]$ for various values of $(p, q).$ They indicate that the density of $W$ is closer to that of $N(0,1)$ when both $p$ and $q$ become larger and $\frac{q}{p}$ becomes smaller.
We would like to make a remark on Lemma \ref{awesome_pen} here. If $p=1$ instead of $p\to\infty$ as required in Lemma \ref{awesome_pen}, the conclusion is no longer true. In fact, realizing that the $\bold{g}_i$'s are real-valued random variables when $p=1$, we see
\begin{eqnarray*}
& & \sum_{1\leq i\ne j\leq q}\big[(\bold{g}_i'\bold{g}_j)^2-1\big]\\
&=& (\bold{g}_1^2+\cdots +\bold{g}_q^2-q)(\bold{g}_1^2+\cdots +\bold{g}_q^2+q)+q-\sum_{i=1}^q\bold{g}_i^4.
\end{eqnarray*}
By the Slutsky lemma, it is readily seen that $1/(pq^{3/2})\sum_{1\leq i\ne j\leq q}\big[(\bold{g}_i'\bold{g}_j)^2-1\big]$ converges weakly to $N(0, 8)$ as $q\to\infty$. The scaling ``$pq^{3/2}$'' here is obviously different from ``$pq$''.
\medskip
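The algebraic identity displayed in the remark above holds pointwise for any real numbers $\bold{g}_1, \cdots, \bold{g}_q$, so it admits a direct numerical check (up to floating-point rounding; the sample values are ours):

```python
import random

random.seed(7)
q = 6
g = [random.gauss(0.0, 1.0) for _ in range(q)]   # real-valued g_i, i.e. p = 1

# left-hand side: sum over i != j of (g_i g_j)^2 - 1
lhs = sum((g[i]*g[j])**2 - 1 for i in range(q) for j in range(q) if i != j)

# right-hand side: (S - q)(S + q) + q - sum g_i^4, with S = sum g_i^2
S = sum(x*x for x in g)
rhs = (S - q)*(S + q) + q - sum(x**4 for x in g)
```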
We will use the following result to prove Lemma \ref{rectangle_performance}.
\begin{lem}\label{var-e} Let $\bold{X}=(g_{ij})_{p\times q}$ where $g_{ij}$'s are independent standard normals. Then
(i)
${\rm Var}({\rm tr}[({\bf X'X})^2]) =4p^2q^2+8pq(p+q)^2+20pq(p+q+1);$
(ii) ${\rm Cov}\big({\rm tr}({\bf X}^{\prime}{\bf X}), {\rm tr}[({\bf X}^{\prime}{\bf X})^2]\big)=4pq(p+q+1).$
\end{lem}
Assertion (i) corrects an error that appeared in (i) of Lemma 2.4 from \cite{Jiang06}: the correct coefficient of the term $p^2q^2$ is ``$4$''. However, this does not affect the main conclusions from \cite{Jiang06}. The proof of Lemma \ref{var-e} is postponed to the Appendix.
In the proof of Theorem \ref{main1}, we will need a slightly more general version of a result from \cite{Jiang06} as follows.
\begin{lem}\label{rectangle_performance} Let $\bold{Z}_n$ and $\bold{G}_n$ be as in Theorem \ref{main1}. If $\lim_{n\to\infty}\frac{p_n}{\sqrt{n}}=x\in (0, \infty)$ and $\lim_{n\to\infty}\frac{q_n}{\sqrt{n}}=y\in (0, \infty)$, then
$$\liminf_{n\to\infty}\|\mathcal{L}(\sqrt{n}\bold{Z}_n)-\mathcal{L}(\bold{G}_n)\|_{\rm TV}\ge\mathbb{E}|e^{\xi}-1|>0,$$
where $\xi\sim N(-\frac{x^2y^2}{8}, \frac{x^2y^2}{4}).$
\end{lem}
\noindent{\bf Proof.} By inspecting the proof of Theorem 2 from \cite{Jiang06}, the variable $\xi$ is the limit of random variable $W_n-\frac{x^2y^2}{8}$ with $W_n$ defined in (2.16) of \cite{Jiang06}. Recall
$$
W_n:=\frac{p+q+1}{2n}h_1-\frac{n-p-q-1}{4n^2}h_2$$
where $h_i={\rm tr}({\bf X'X})^i-\mathbb{E}\,{\rm tr}({\bf X'X})^i.$ It is proved in \cite{Jiang06} that $W_n$ converges weakly to a normal random variable with zero mean. What we need to do is to calculate the limit of
${\rm Var}(W_n).$ In fact,
\begin{eqnarray*}
{\rm Var}(W_n)&= &\frac{(p+q+1)^2}{4n^2}{\rm Var}\left({\rm tr}({\bf X'X})\right) + \frac{(n-p-q-1)^2}{16n^4}{\rm Var}\left({\rm tr}\left(({\bf X'X})^2\right)\right)\\
& & - \frac{(p+q+1)(n-p-q-1)}{4n^3}\cdot {\rm Cov}\left({\rm tr}({\bf X'X}), {\rm tr}\left(({\bf X'X})^2\right)\right).
\end{eqnarray*}
Since ${\rm Var}({\rm tr}({\bf X'X}))=2pq$, by Lemma \ref{var-e} we have
$$
{\rm Var}(W_n)=\frac{p^2q^2}{4n^2}+o(1)\to \frac{x^2y^2}{4}
$$
as $n \to \infty.$ Therefore $W_n$ converges weakly to $N(0, \frac{x^2y^2}{4})$. The rest of the proof is exactly the same as that of Theorem 2 from \cite{Jiang06}. \hfill$\blacksquare$
\medskip
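The computation of $\lim {\rm Var}(W_n)$ can be double-checked with exact rational arithmetic along the scaling $p=q=\sqrt{n}$ (so $x=y=1$ and the limit should be $\frac{1}{4}$); a sketch, with the variance and covariance values taken from Lemma \ref{var-e} (the function name is ours):

```python
from fractions import Fraction as F

def var_Wn(p, q, n):
    # plug the (co)variance formulas of Lemma var-e into Var(W_n)
    v1 = 2*p*q                                             # Var(tr(X'X))
    v2 = 4*p**2*q**2 + 8*p*q*(p+q)**2 + 20*p*q*(p+q+1)     # Var(tr[(X'X)^2])
    cv = 4*p*q*(p+q+1)                                     # Cov(tr(X'X), tr[(X'X)^2])
    a, b = p + q + 1, n - p - q - 1
    return F(a**2, 4*n**2)*v1 + F(b**2, 16*n**4)*v2 - F(a*b, 4*n**3)*cv

# p = q = k, n = k^2: Var(W_n) should approach x^2 y^2 / 4 = 1/4
errs = [abs(var_Wn(k, k, k*k) - F(1, 4)) for k in (10, 100, 1000)]
```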
Let $p=p_n$ and $q=q_n$. We often need the following setting later:
\begin{eqnarray}\label{service}
q\to \infty,\ \frac{q}{p}\to 0\ \ \mbox{and}\ \ \frac{pq}{n}\to \sigma\in (0,\infty)
\end{eqnarray}
as $n\to \infty$. The next result reveals a subtle property of the eigenvalue part in the density from (\ref{heavy}) under the ``rectangular'' case $\frac{q}{p}\to 0$. It is also one of the building blocks in the proof of Theorem \ref{main1}.
\begin{lem}\label{sun_half} Let $p=p_n$ and $q=q_n$ satisfy (\ref{service}).
Suppose $\lambda_1, \cdots, \lambda_q$ are the eigenvalues of $\mathbf X_n'\mathbf X_n$ where $\bold{X}_n=(g_{ij})_{p\times q}$ and $g_{ij}$'s are independent standard normals. Define
\begin{eqnarray*} L_n'=\Big(1-\frac{p}{n}\Big)^{-\frac{1}{2}(n-p-q-1)q}
\Big\{\prod_{i=1}^q\Big(1-\frac{\lambda_i}{n}\Big)\Big\}^{\frac{n-p-q-1}{2}}
\exp\Big(\frac{1}{2}\sum_{i=1}^q\lambda_i\Big).
\end{eqnarray*}
Then, as $n\to\infty$,
\begin{eqnarray*}
\log L_n'-\frac{pq}{2} + \frac{pq(q+1)}{4(n-p)}\ \mbox{converges weakly to}\ N\big(0, \frac{\sigma^2}{4}\big).
\end{eqnarray*}
\end{lem}
\noindent\textbf{Proof}. Write
\begin{eqnarray}\label{half_moon}
\log L_n' &=& \frac{1}{2}\sum_{i=1}^q\lambda_i+ \frac{n-p-q-1}{2}\sum_{i=1}^q\log \frac{n-\lambda_i}{n-p} \nonumber\\
& = & \frac{1}{2}\sum_{i=1}^q\lambda_i+ \frac{n-p-q-1}{2}\sum_{i=1}^q\log \Big(1+ \frac{p-\lambda_i}{n-p}\Big).
\end{eqnarray}
Let function $h(x)$ be such that $\log (1+x)=x-\frac{x^2}{2}+x^3h(x)$ for all $x>-1$. We are able to further write
\begin{eqnarray}
& & \sum_{i=1}^q\log \Big(1+ \frac{p-\lambda_i}{n-p}\Big)\nonumber\\
&=&\frac{1}{n-p}\sum_{i=1}^q(p-\lambda_i) -\frac{1}{2(n-p)^2}\sum_{i=1}^q(p-\lambda_i)^2+ \sum_{i=1}^q\Big(\frac{p-\lambda_i}{n-p}\Big)^3h\Big(\frac{p-\lambda_i}{n-p}\Big). \label{lantern_red}
\end{eqnarray}
Notice that
\begin{eqnarray*}
& & \frac{n-p-q-1}{2}\cdot \frac{1}{n-p}\sum_{i=1}^q(p-\lambda_i)\\
& = & -\frac{1}{2}\sum_{i=1}^q\lambda_i +\frac{(n-p-q-1)pq}{2(n-p)} + \frac{q+1}{2(n-p)}\sum_{i=1}^q\lambda_i.
\end{eqnarray*}
This, (\ref{half_moon}) and (\ref{lantern_red}) say that
\begin{eqnarray}
& & \log L_n' \nonumber\\
&=&\frac{(n-p-q-1)pq}{2(n-p)} + \frac{q+1}{2(n-p)}\sum_{i=1}^q\lambda_i -\frac{n-p-q-1}{4(n-p)^2}\sum_{i=1}^q(p-\lambda_i)^2 \nonumber\\
& & \quad \quad \quad \quad\quad \quad\quad \ \ +\frac{n-p-q-1}{2}\sum_{i=1}^q\Big(\frac{p-\lambda_i}{n-p}\Big)^3h
\Big(\frac{p-\lambda_i}{n-p}\Big). \label{lord}
\end{eqnarray}
We now inspect each term one by one. Since $\lambda_1, \lambda_2, \cdots, \lambda_q$ are the eigenvalues of $\mathbf X_n'\mathbf X_n$ and $\bold{X}_n=(g_{ij})_{p\times q}$, we have
\begin{eqnarray*}
\frac{q+1}{2(n-p)}\sum_{i=1}^q\lambda_i
&=&\frac{pq(q+1)}{2(n-p)} + \frac{q+1}{2(n-p)}\sum_{i=1}^p\sum_{j=1}^q (g_{ij}^2-1)\\
& = & \frac{pq(q+1)}{2(n-p)} +\frac{q+1}{2(n-p)}\sqrt{pq}\cdot O_p(1)\\
& = & \frac{pq(q+1)}{2(n-p)} + o_p(1)
\end{eqnarray*}
by the central limit theorem on i.i.d. random variables. This together with (\ref{lord}) gives
\begin{eqnarray}
& & \log L_n' \nonumber\\
&=&\frac{pq}{2} -\frac{n-p-q-1}{4(n-p)^2}\sum_{i=1}^q(\lambda_i-p)^2 \nonumber\\
& & \quad \quad \quad \quad\quad \quad\quad \ \ +\frac{n-p-q-1}{2}\sum_{i=1}^q\Big(\frac{p-\lambda_i}{n-p}\Big)^3h
\Big(\frac{p-\lambda_i}{n-p}\Big)+o_p(1)\nonumber\\
& & \label{bed_time}
\end{eqnarray}
as $n\to\infty.$ Now we study $\sum_{i=1}^q(\lambda_i-p)^2$. To do so, set $\bold{X}_n=(\bold{g}_1, \cdots, \bold{g}_q)$. Then $\bold{g}_1, \cdots, \bold{g}_q$ are i.i.d. with distribution $N_p(\bold{0}, \bold{I}_p)$. So $\lambda_1-p, \cdots, \lambda_q-p$ are the eigenvalues of the $q\times q$ symmetric matrix $\bold{X}_n'\bold{X}_n-p\bold{I}_q=(\bold{g}_i'\bold{g}_j)_{q\times q}-p\bold{I}_q.$ Consequently,
\begin{eqnarray*}
\sum_{i=1}^q(\lambda_i-p)^2=\sum_{1\leq i\ne j\leq q}(\bold{g}_i'\bold{g}_j)^2 + \sum_{i=1}^q(\|\bold{g}_i\|^2-p)^2.
\end{eqnarray*}
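This decomposition is the exact identity $\sum_{i=1}^q(\lambda_i-p)^2=\|\bold{X}_n'\bold{X}_n-p\bold{I}_q\|_F^2$ for a symmetric matrix. A quick check for $q=2$, where the eigenvalues of the Gram matrix are available in closed form (all helper names and sample values are ours):

```python
import math
import random

random.seed(3)
p, q = 5, 2
g = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(q)]

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

# Gram matrix X'X for q = 2: [[a, b], [b, d]]
a, b, d = dot(g[0], g[0]), dot(g[0], g[1]), dot(g[1], g[1])
disc = math.sqrt((a - d)**2 + 4*b*b)
lam = [(a + d + disc)/2, (a + d - disc)/2]   # eigenvalues of X'X

lhs = sum((l - p)**2 for l in lam)
# sum_{i != j} (g_i' g_j)^2  +  sum_i (||g_i||^2 - p)^2
rhs = 2*b*b + (a - p)**2 + (d - p)**2
```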
Now, for $\|\bold{g}_1\|^2 \sim \chi^2(p),$ by Lemma \ref{party_love} we see
$$\aligned
&\mathbb{E}\sum_{i=1}^q(\|\bold{g}_i\|^2-p)^2=q\cdot\mathbb{E}(\|\bold{g}_1\|^2-p)^2=2pq;\\
&\mbox{Var}\,\Big[\sum_{i=1}^q(\|\bold{g}_i\|^2-p)^2\Big] =q\cdot \mbox{Var}\,\Big[(\|\bold{g}_1\|^2-p)^2\Big]
=8pq(p+6).
\endaligned $$
By the Chebyshev inequality,
\begin{eqnarray*}
& & \frac{n-p-q-1}{4(n-p)^2}\sum_{i=1}^q(\|\bold{g}_i\|^2-p)^2\\
& = & \frac{(n-p-q-1)pq}{2(n-p)^2}+ \frac{n-p-q-1}{4(n-p)^2}\Big[-2pq+\sum_{i=1}^q(\|\bold{g}_i\|^2-p)^2\Big]\\
& = & \frac{(n-p-q-1)pq}{2(n-p)^2}+ o_p(1)
\end{eqnarray*}
by noting $\frac{n-p-q-1}{(n-p)^2}\sim \frac{1}{n}$ and $\frac{p^2q}{n^2}\to 0$ as $n\to\infty$. This concludes
\begin{eqnarray*}
& & \frac{n-p-q-1}{4(n-p)^2}\sum_{i=1}^q(\lambda_i-p)^2\\ &=&\frac{(n-p-q-1)pq}{2(n-p)^2}+\frac{n-p-q-1}{4(n-p)^2}\sum_{1\leq i\ne j\leq q}(\bold{g}_i'\bold{g}_j)^2+ o_p(1)\\
& = & \frac{(n-p-q-1)pq(q+1)}{4(n-p)^2} + \frac{n-p-q-1}{4(n-p)^2}\sum_{1\leq i\ne j\leq q}\big[(\bold{g}_i'\bold{g}_j)^2-p\big]+ o_p(1).
\end{eqnarray*}
By splitting $(n-p-q-1)pq(q+1)=(n-p)pq(q+1)-pq(q+1)^2$ and using the fact $\frac{pq^3}{n^2}\to 0$, we see
\begin{eqnarray}\label{liberation}
\frac{n-p-q-1}{4(n-p)^2}\sum_{i=1}^q(\lambda_i-p)^2-\frac{pq(q+1)}{4(n-p)}\to N\big(0, \frac{\sigma^2}{4}\big)
\end{eqnarray}
weakly, where Lemma \ref{awesome_pen} and the assertion $\frac{n-p-q-1}{4(n-p)^2}\sim \frac{1}{4n}\sim \frac{\sigma}{4}\cdot \frac{1}{pq}$ are used. Recalling (\ref{bed_time}), to finish our proof, it is enough to show
\begin{eqnarray}\label{last_last}
\delta_n:=\frac{n-p-q-1}{2}\sum_{i=1}^q\Big(\frac{p-\lambda_i}{n-p}\Big)^3h
\Big(\frac{p-\lambda_i}{n-p}\Big) \stackrel{p}{\to} 0.
\end{eqnarray}
Recall $\log (1+x)=x-\frac{x^2}{2}+x^3h(x)$ for all $x>-1$. Then, $\tau:=\sup_{|x|\leq 1/2}|h(x)|< \infty.$ Hence, by the fact $\frac{p}{n}\to 0$ from (\ref{service}),
\begin{eqnarray}
P(|\delta_n|>\epsilon)&=& P\Big(|\delta_n|>\epsilon,\ \max_{1\leq i \leq q}|\frac{p-\lambda_i}{n-p}|\leq \frac{1}{2}\Big) + P\Big(\max_{1\leq i \leq q}|\frac{p-\lambda_i}{n-p}|>\frac{1}{2}\Big)\nonumber\\
& \leq & P\Big(|\delta_n|>\epsilon,\ \max_{1\leq i \leq q}|\frac{p-\lambda_i}{n-p}|\leq \frac{1}{2}\Big) + P\Big(\max_{1\leq i \leq q}|\frac{\lambda_i}{p}-1|>\frac{1}{4}\Big) \nonumber\\
& & \label{cloud_pee}
\end{eqnarray}
as $n$ is sufficiently large. Under $\max_{1\leq i \leq q}|\frac{p-\lambda_i}{n-p}|\leq \frac{1}{2}$,
\begin{eqnarray*}
|\delta_n| &\leq & (2\tau)\cdot \max_{1\leq i \leq q}|\frac{p-\lambda_i}{n-p}|\cdot \frac{n-p-q-1}{4(n-p)^2}\sum_{i=1}^q(\lambda_i-p)^2\\
& = & (2\tau)\cdot \max_{1\leq i \leq q}|\frac{\lambda_i}{p}-1|\cdot \frac{p}{n-p}\cdot \frac{n-p-q-1}{4(n-p)^2}\sum_{i=1}^q(\lambda_i-p)^2
\end{eqnarray*}
which goes to zero in probability by Lemma \ref{fly_ground}, (\ref{liberation}) and the fact $\frac{p}{n-p}\cdot \frac{pq(q+1)}{4(n-p)}=O(1)$ from the assumption $pq=O(n)$. This, (\ref{cloud_pee}) and Lemma \ref{fly_ground} again conclude (\ref{last_last}).
\hfill$\blacksquare$
\begin{lem}\label{sunny_time} Let $p_n$ satisfy $p_n/n\to c$ for some $c\in (0,1)$ and $q_n\equiv 1$. Let $\bold{Z}_n$ and $\bold{G}_n$ be as in the first paragraph in Section \ref{chap:intro}. Then $\liminf_{n\to\infty}\|\mathcal{L}(\sqrt{n}\bold{Z}_n) - \mathcal{L}(\bold{G}_n)\|_{\rm TV} > 0$.
\end{lem}
\noindent\textbf{Proof}. The argument is similar to that of Lemma \ref{sun_half}. By Lemma \ref{del}, the density function of $\sqrt{n}\mathbf Z_n$ is given by
$$
f_n(z):=(\sqrt{2\pi})^{-p}\Big(\frac{2}{n}\Big)^{p/2}
\frac{\Gamma(\frac{n}{2})}{\Gamma(\frac{n-p}{2})}
\Big(1-\frac{|z|^2}{n}\Big)^{(n-p-2)/2}I(|z|<\sqrt{n}),
$$
where $z\in \mathbb{R}^p$ and $p=p_n$. By Lemma \ref{KO},
\begin{eqnarray*}
\log \Big[\Big(\frac{2}{n}\Big)^{p/2}
\frac{\Gamma(\frac{n}{2})}{\Gamma(\frac{n-p}{2})}\Big]=-\frac{1}{2}\log (1-c)-\frac{p}{2}-c_n \log\big(1-\frac pn\big)+o(1)\end{eqnarray*}
as $n\to\infty$, where $c_n:=\frac{1}{2}(n-p-2).$ The density function of $\bold{G}_n$ is $g_n(z)=(\sqrt{2\pi})^{-p}e^{-|z|^2/2}$ for all $z\in \mathbb{R}^p.$ By a measure transformation,
\begin{equation}\label{discovery_find}
\|\mathcal{L}(\sqrt{n}\bold{Z}_n)-\mathcal{L}(\bold{G}_n)\|_{\rm TV}=\int_{\mathbb{R}^{pq}}\Big|\frac{f_n(z)}{g_n(z)}-1\Big|g_n(z)\,dz =\mathbb{E}\Big|\frac{f_n(\bold{G}_n)}{g_n(\bold{G}_n)}-1\Big|,
\end{equation}
where the expectation is taken with respect to random vector $\bold{G}_n$. It is easy to see
\begin{eqnarray*}
\log \frac{f_n(z)}{g_n(z)}=-\frac{1}{2}\log (1-c)+c_n\log \frac{n-|z|^2}{n-p}-\frac{p}{2}+\frac{1}{2}|z|^2
\end{eqnarray*}
if $|z|<\sqrt{n}$, and it is defined to be $-\infty$ if $|z|\geq \sqrt{n}.$ Define function $h(x)$ such that $\log (1+x)=x-\frac{x^2}{2}+x^3h(x)$ for all $x>-1$ and $h(x)=-\infty$, otherwise. Write $\frac{n-|z|^2}{n-p}=1+\frac{p-|z|^2}{n-p}=1+\eta_n(z).$ For convenience, write $\eta_n=\eta_n(z).$ It follows that
\begin{eqnarray*}
& & c_n\log \frac{n-|z|^2}{n-p}\\
&=& \frac{1}{2}(n-p-2)\Big[\frac{p-|z|^2}{n-p}-
\frac{1}{2}\Big(\frac{p-|z|^2}{n-p}\Big)^2+\Big(\frac{p-|z|^2}{n-p}\Big)^3
h\Big(\frac{p-|z|^2}{n-p}\Big)\Big]\\
& = & \frac{p}{2}-\frac{|z|^2}{2}-\frac{1}{4}\Big(\frac{p-|z|^2}{\sqrt{n-p}}\Big)^2 + \frac{1}{2(n-p)^{1/2}}\cdot \Big(\frac{p-|z|^2}{\sqrt{n-p}}\Big)^{3}h\Big(\frac{p-|z|^2}{n-p}\Big)\\
& & \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad -\eta_n+\frac{1}{2}\eta_n^2-\eta_n^3h(\eta_n)
\end{eqnarray*}
for every $|z|<\sqrt{n}$ by using $\frac{1}{2}(n-p-2)=\frac{1}{2}(n-p)-1.$ The last two assertions imply
\begin{eqnarray}
\log \frac{f_n(z)}{g_n(z)} &=&-\frac{1}{2}\log (1-c)-\frac{1}{4}\Big(\frac{p-|z|^2}{\sqrt{n-p}}\Big)^2 + \frac{1}{2(n-p)^{1/2}}\cdot \Big(\frac{p-|z|^2}{\sqrt{n-p}}\Big)^{3}h\Big(\frac{p-|z|^2}{n-p}\Big)\nonumber\\
& & \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\ -\eta_n+\frac{1}{2}\eta_n^2-\eta_n^3h(\eta_n) \label{Tea_houses}
\end{eqnarray}
for every $|z|<\sqrt{n}$, and it is identical to $-\infty$ otherwise. Since $\bold{G}_n\sim N_p(\bold{0}, \bold{I}_p)$, we see $\|\bold{G}_n\|^2\sim \chi^2(p)$, $\frac{p-\|\bold{G}_n\|^2}{\sqrt{n-p}}\to N(0, 2c(1-c)^{-1})$ weakly and $\eta_n(\bold{G}_n)\to 0$ in probability. In particular, this implies $h\big(\frac{p-|\bold{G}_n|^2}{n-p}\big)\to 0$ in probability. Finally, by the law of large numbers, $P(\|\bold{G}_n\|<\sqrt{n})\to 1.$ Consequently, from (\ref{Tea_houses}) we conclude
\begin{eqnarray*}
\frac{f_n(\bold{G}_n)}{g_n(\bold{G}_n)}\to \frac{1}{\sqrt{1-c}}\cdot\exp\Big\{-\frac{c}{2(1-c)}\chi^2(1)\Big\}
\end{eqnarray*}
weakly as $n\to\infty$. This and (\ref{discovery_find}) yield the desired conclusion by the Fatou lemma. \hfill$\blacksquare$
\medskip
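The Gamma-ratio asymptotic used in the proof of Lemma \ref{sunny_time} (the $q=1$ analogue of Lemma \ref{KO}) can also be checked numerically via the log-Gamma function; a small sketch with $c=0.3$ (the concrete values are ours):

```python
from math import lgamma, log

n, p = 10**6, 300000            # p/n -> c = 0.3
c, c_n = p/n, (n - p - 2)/2

# exact log[(2/n)^{p/2} Gamma(n/2) / Gamma((n-p)/2)]
exact = -p/2*log(n/2) + lgamma(n/2) - lgamma((n - p)/2)
# stated expansion: -log(1-c)/2 - p/2 - c_n log(1 - p/n) + o(1)
approx = -0.5*log(1 - c) - p/2 - c_n*log(1 - p/n)
err = abs(exact - approx)
```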
For a sequence of real numbers $\{a_n\}_{n=1}^{\infty}$ and for a set $I\subset \mathbb{R}$, the notation $\lim_{n\to\infty}a_n\in I$ means that $\{a_n\}$ has a limit and the limit is in $I$. The next result reveals the strategy for the proof of (ii) of Theorem \ref{main1}.
\begin{lem}\label{High_teacher}
For each $n\geq 1$, let $f_n(p,q):\{1, 2,\cdots, n\}^2\to [0, \infty)$ satisfy that $f_n(p, q)$ is non-decreasing in $p\in \{1, 2,\cdots, n\}$ and $q\in \{1, 2,\cdots, n\}$, respectively. Suppose
\begin{eqnarray}\label{red}
\liminf_{n\to\infty}f_n(p_n, q_n)>0
\end{eqnarray}
for any sequence $\{(p_n, q_n);\, 1\leq q_n\leq p_n\leq n\}_{n=1}^{\infty}$
if any of the following conditions holds:
(i) $q_n\equiv 1$ and $\lim_{n\to\infty}p_n/n\in (0,1)$;
(ii) $q_n\to\infty$, $\lim_{n\to\infty} q_n/p_n=0$ and $\lim_{n\to\infty}(p_n q_n)/n\in (0, \infty)$;
(iii) $\lim_{n\to\infty}p_n/\sqrt{n}\in (0,\infty)$ and $\lim_{n\to\infty}q_n/\sqrt{n}\in (0,\infty)$.
\noindent Then (\ref{red}) holds for any sequence $\{(p_n, q_n);\, 1\leq q_n\leq p_n\leq n\}_{n=1}^{\infty}$ satisfying that $\lim_{n\to\infty}(p_nq_n)/n\in (0, \infty)$.
\end{lem}
\noindent\textbf{Proof}. Suppose the conclusion is not true, that is, $\liminf_{n\to\infty}f_n(p_n, q_n)=0$ for some sequence $\{(p_n, q_n);\, 1\leq q_n\leq p_n\leq n\}_{n=1}^{\infty}$ with $\lim_{n\to\infty}\frac{p_nq_n}{n}=\alpha$, where $\alpha\in (0, \infty)$ is a constant.
Then there exists a subsequence $\{n_k; \, k\geq 1\}$ satisfying $1\leq q_{n_k}\leq p_{n_k}\leq n_k$ for all $k\geq 1$, $\lim_{k\to\infty}(p_{n_k}q_{n_k})/n_k=\alpha$ and
\begin{eqnarray}\label{shawn}
\lim_{k\to\infty}f_{n_k}(p_{n_k}, q_{n_k})=0.
\end{eqnarray}
There are two possibilities: $\liminf_{k\to\infty}q_{n_k}<\infty$ and $\liminf_{k\to\infty}q_{n_k}=\infty$. Let us discuss the two cases separately.
{\it (a)}. Assume $\liminf_{k\to\infty}q_{n_k}<\infty$. Then there exists a further subsequence $\{n_{k_j}\}_{j=1}^{\infty}$ such that $q_{n_{k_j}}\equiv m$ for some integer $m\geq 1$. For convenience of notation, write $\bar{n}_j=n_{k_j}$ for all $j\geq 1.$ The condition $\lim_{n\to\infty}\frac{p_nq_n}{n}=\alpha$ implies that $\lim_{j\to\infty}\frac{p_{\bar{n}_j}}{\bar{n}_j}=\frac{\alpha}{m}\in (0, 1].$ By (\ref{shawn}) and the monotonicity,
\begin{eqnarray}\label{privacy_it}
\lim_{j\to\infty}f_{\bar{n}_j}(p_{\bar{n}_j}, 1)=\lim_{j\to\infty}f_{\bar{n}_j}(p_{\bar{n}_j}, q_{\bar{n}_j})=0.
\end{eqnarray}
Define $\tilde{p}_{\bar{n}_j}=[p_{\bar{n}_j}/2]+1$ for all $j\geq 1.$ Then, $\lim_{j\to\infty}\frac{\tilde{p}_{\bar{n}_j}}{\bar{n}_j}=c:=\frac{\alpha}{2m}\in (0, \frac{1}{2}].$ Construct a new sequence such that
\begin{eqnarray*}
\tilde{p}_r=
\begin{cases}
\tilde{p}_{\bar{n}_j}, & \text{if $r=\bar{n}_j$ for some $j\geq 1$};\\
[cr] +1, & \text{if not}
\end{cases}
\end{eqnarray*}
and $\tilde{q}_r=1$ for $r=1,2, \cdots.$ It is easy to check $1\leq \tilde{q}_r\leq \tilde{p}_r\leq r$ for all $r\geq 1$ and $\lim_{r\to\infty}\tilde{p}_r/r=c\in (0,1/2]$. Moreover, $\tilde{p}_{\bar{n}_j}\leq p_{\bar{n}_j}$ for each $j\geq 1.$ So $\{(\tilde{p}_r, \tilde{q}_r);\, r\geq 1\}$ satisfies condition (i), and hence, $\liminf_{r\to\infty}f_r(\tilde{p}_r, \tilde{q}_r)>0$ by (\ref{red}). This contradicts (\ref{privacy_it}) since $f_r(\tilde{p}_r, \tilde{q}_r)=f_{\bar{n}_j}(\tilde{p}_{\bar{n}_j}, 1)\leq f_{\bar{n}_j}(p_{\bar{n}_j}, 1)$ if $r=\bar{n}_j$ for some $j\geq 1$ by monotonicity.
{\it (b).} Assume $\liminf_{k\to\infty}q_{n_k}=\infty$. Since $\{q_{n_k}/p_{n_k};\, k\geq 1\} \subset [0,1]$, there is a further subsequence $\{n_{k_j}\}_{j=1}^{\infty}$ such that $q_{n_{k_j}}/p_{n_{k_j}}\to c\in [0, 1]$ as $j\to\infty$. To ease notation, write $\bar{n}_j=n_{k_j}$ for all $j\geq 1.$ Then, $\lim_{j\to\infty}q_{\bar{n}_j}=\infty, \lim_{j\to\infty}q_{\bar{n}_j}/p_{\bar{n}_j}=c\in [0, 1]$ and $\lim_{j\to\infty}p_{\bar{n}_j}q_{\bar{n}_j}/\bar{n}_j=\alpha\in(0,\infty).$ There are two situations: $c=0$ and $c\in (0,1]$. Let us discuss these cases, respectively.
{\it (b1). $c=0$.} Define
$$
\aligned \tilde{p}_r&=
\begin{cases}
p_{\bar{n}_j}, & \text{if $r=\bar{n}_j$ for some $j\geq 1$};\\
[r^{2/3}], & \text{if not};
\end{cases}
\\
\tilde{q}_r&=
\begin{cases}
q_{\bar{n}_j}, & \text{if $r=\bar{n}_j$ for some $j\geq 1$};\\
([\alpha r^{1/3}\,]+1)\wedge \tilde{p}_r, & \text{if not}.
\end{cases}
\endaligned
$$
Trivially, $1\leq \tilde{q}_r\leq \tilde{p}_r\leq r$ for all $r\geq 1$ and condition (ii) holds. Moreover, $\tilde{p}_{\bar{n}_j}=p_{\bar{n}_j}$ and $\tilde{q}_{\bar{n}_j}=q_{\bar{n}_j}$ for all $j\geq 1.$ By assumption,
\begin{eqnarray*}
\liminf_{j\to\infty}f_{\bar{n}_j}(p_{\bar{n}_j}, q_{\bar{n}_j})\geq \liminf_{n\to\infty}f_n(\tilde{p}_n, \tilde{q}_n)>0.
\end{eqnarray*}
This contradicts (\ref{shawn}).
{\it (b2). $c\in (0, 1]$.} In this scenario, $q_{\bar{n}_{j}}/p_{\bar{n}_{j}}\to c\in (0, 1]$. The argument here is similar to {\it (b1)}. Define
\begin{eqnarray*}
\tilde{p}_r=
\begin{cases}
p_{\bar{n}_j}, & \text{if $r=\bar{n}_j$ for some $j\geq 1$};\\
([\sqrt{\frac{\alpha}{c} r}]+1)\wedge r, & \text{if not}
\end{cases}
\end{eqnarray*}
and
\begin{eqnarray*}
\tilde{q}_r=
\begin{cases}
q_{\bar{n}_j}, & \text{if $r=\bar{n}_j$ for some $j\geq 1$};\\
[c\tilde{p}_r] \vee 1, & \text{if not}.
\end{cases}
\end{eqnarray*}
Obviously $1\le \tilde{q}_r\le \tilde{p}_r\le r.$ When $r$ is large enough, $\tilde{p}_r\sim\sqrt{\frac{\alpha}{c}}\sqrt{r}$ and $\tilde{q}_r\sim \sqrt{\alpha c}\sqrt{r},$ which means $(\tilde{p}_r, \tilde{q}_r)$ satisfies condition (iii). We then get a contradiction by the same discussion as that of {\it (b1)}.
In conclusion, either of the cases $\liminf_{k\to\infty}q_{n_k}<\infty$ and $\liminf_{k\to\infty}q_{n_k}=\infty$ results in a contradiction. So our desired conclusion holds true.
\hfill$\blacksquare$
\subsection{The Proof of Theorem \ref{main1}}\label{Proof_Main1} The argument is relatively lengthy. We will prove (i) and (ii) separately.
\medskip
\noindent\textbf{Proof of (i) of Theorem \ref{main1}}.
For simplicity, we will write $p, q$ for $p_n, q_n$, respectively, when there is no confusion. By (\ref{H_TV}) and (\ref{TV_KL}), it is enough to show
\begin{eqnarray}\label{Comrade_blue}
\lim_{n\to\infty}D_{\rm KL}\big(\mathcal{L} (\sqrt{n}\mathbf Z_{n})||\mathbf G_{n}\big)=0
\end{eqnarray}
where $\mathcal{L} (\sqrt{n}\mathbf Z_{n})$ is the probability distribution of $\sqrt{n}\mathbf Z_{n}$.
We can always split $\{n\}$ into two subsequences, along one of which $q_n\leq p_n$ and along the other $q_n>p_n$. By the symmetry of $p$ and $q$, it suffices to treat one of them. So, without loss of generality, we assume $q\leq p$ in the rest of the proof. Since $\lim_{n\to\infty}\frac{pq}{n}=0$, we may also assume, without loss of generality, that $p+q<n.$ By Lemma \ref{del}, the density function of $\sqrt{n}\mathbf Z_n$ is
\begin{eqnarray}\label{miserable_world}
f_n(z):=(\sqrt{2\pi})^{-pq}n^{-pq/2}\frac{\omega(n-p, q)}{\omega(n, q)}\left\{\det\left(I_q-\frac{z'z}{n}\right)^{(n-p-q-1)/2}\right\}I_0(z'z/n)
\end{eqnarray}
where $I_0(z'z/n)$ is the indicator function of the event that all $q$ eigenvalues of $z'z/n$ lie in $(0, 1),$ and $\omega(s, t)$ is as in (\ref{fire_you}). Obviously, $g_n(z):= (\sqrt{2\pi})^{-pq} e^{-\mbox{tr}(z'z)/2}$ is the density function of $\bold{G}_n.$
Let $\lambda_1, \cdots, \lambda_q$ be the eigenvalues of $z'z$. Then, $\mbox{det}(I_q-\frac{z'z}{n})=\prod_{i=1}^q(1-\frac{\lambda_i}{n})$ and $\mbox{tr}(z'z)=\sum_{i=1}^q\lambda_i.$
Define \begin{equation}\label{Lequ2}
L_n=\left\{\prod_{i=1}^q\left(1-\frac{\lambda_i}{n}\right)\right\}^{c_n}\exp\left(\frac{1}{2}\sum_{i=1}^q\lambda_i\right)
\end{equation}
if all $\lambda_i$'s are in $(0,n),$ and $L_n$ is zero otherwise, where $c_n=\frac{1}{2}(n-p-q-1)$. Then one has
\begin{equation}\label{Radon}
\frac{f_n(z)}{g_n(z)}=K_n\cdot L_n
\end{equation}
where $K_n$ is defined as in \eqref{eq1}. The condition $pq=o(n)$ implies that $\frac{pq^3}{n^2}\to 0$. From Lemma \ref{KO},
\begin{equation}\label{Kmo}\log K_n=-\frac{pq}{2}+\frac{q(q+1)}{4}\log(1+\frac p{n-p})-c_n q\log(1-\frac pn)+o(1) \end{equation}
as $n\to\infty.$ By definition,
\begin{eqnarray}\label{KL-dis}
D_{\rm KL}\big(\mathcal{L} (\sqrt{n}\mathbf Z_{n})||\mathbf G_{n}\big)
&=& \int_{\mathbb{R}^{pq}} \Big[\frac{f_n(z)}{g_n(z)}\log \frac{f_n(z)}{g_n(z)}\Big] g_n(z)\, dz\nonumber\\
&=&\mathbb{E} \log\frac{f_n(\sqrt{n}\mathbf Z_n)}{g_n(\sqrt{n}\mathbf Z_n)}\nonumber\\
&=&\mathbb{E}\log[K_n\cdot L_n],
\end{eqnarray}
where $\lambda_1, \cdots, \lambda_q$ are the eigenvalues
of $n\mathbf Z_n'\mathbf Z_n$ since $f_n(z)$ is the density function of $\sqrt{n}\mathbf Z_n$. We adopt the convention $\log 0=0$; this is harmless since the random variable $\frac{f_n(\sqrt{n}\mathbf Z_n)}{g_n(\sqrt{n}\mathbf Z_n)}$ is positive a.s. Indeed, the definition
of $I_0(z'z/n)$ from (\ref{miserable_world}) shows that $I_0(z'z/n)=0$ only if $\max_{1\leq i \leq q}\lambda_i\geq n$, which occurs with probability zero. By Lemma \ref{KL-Key}, $\mathbb{E}\sum_{i=1}^q\lambda_i=pq$. This and (\ref{Kmo}) imply that
the expectation in (\ref{KL-dis}) is further equal to
\begin{eqnarray}
&&\log K_n+\frac12\mathbb{E}\sum_{i=1}^q \lambda_i+c_n \mathbb{E}\sum_{i=1}^{q}
\log\big(1-\frac{\lambda_i}{n}\big)\nonumber\\
&=&\frac{q(q+1)}{4}\log\big(1+\frac p{n-p}\big)+c_n\mathbb{E}\sum_{i=1}^q
\log\big(1+\frac{p-\lambda_i}{n-p}\big)+o(1)\label{portland}\\
&\le&\frac{pq^2}{4(n-p)}+c_n\mathbb{E}\sum_{i=1}^q
\Big[\frac{p-\lambda_i}{n-p}-\frac{(\lambda_i-p)^2}{2(n-p)^2}+\frac{(p-\lambda_i)^3}{3(n-p)^3} \Big]+o(1),\nonumber
\end{eqnarray}
where we combine the term ``$-c_n q\log(1-\frac pn)$'' from (\ref{Kmo}) with ``$\log\big(1-\frac{\lambda_i}{n}\big)$'' to get the sum in (\ref{portland}),
and the last step is due to the elementary inequality
$$\log(1+x)\le x-\frac{x^2}{2}+\frac{x^3}{3}$$
for any $x>-1$. Based on Lemmas \ref{KL-Key} and \ref{tri}, we know that, under the condition $pq=o(n),$
$$\aligned
&\mathbb{E}\sum_{i=1}^q \lambda_i=pq, \quad \mathbb{E}\sum_{i=1}^q\lambda_i^2=pq(p+q)+O(pq), \\
& \mathbb{E}\sum_{i=1}^q\lambda_i^3=pq(p^2+q^2+3pq)+O(p^2q)
\endaligned $$
(the ``$\lambda_i$'' here is $n$ times the ``$\lambda_i$'' from Lemmas \ref{KL-Key} and \ref{tri}). These imply that
\begin{eqnarray}\label{main1-esti}\aligned
\mathbb{E}\sum_{i=1}^q(p-\lambda_i)&=pq-\mathbb{E}\sum_{i=1}^q \lambda_i=0; \\
\mathbb{E}\sum_{i=1}^q(p-\lambda_i)^2&=p^2q-2p\mathbb{E}
\sum_{i=1}^q \lambda_i+\mathbb{E}\sum_{i=1}^q \lambda_i^2=pq^2+O(pq);\\
\mathbb{E}\sum_{i=1}^q(p-\lambda_i)^3&=p^3q-3p^2\mathbb{E}\sum_{i=1}^q \lambda_i+3p\mathbb{E}\sum_{i=1}^q \lambda_i^2-\mathbb{E}\sum_{i=1}^q \lambda_i^3\\
&=-pq^3+O(p^2q).
\endaligned \end{eqnarray}
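In the last identity of (\ref{main1-esti}), the leading terms cancel exactly:
$$p^3q-3p^2\cdot pq+3p\cdot pq(p+q)-pq(p^2+q^2+3pq)=(1-3+3-1)p^3q+(3-3)p^2q^2-pq^3=-pq^3,$$
while the remainder terms contribute $3p\cdot O(pq)+O(p^2q)=O(p^2q)$.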
Recall that $c_n=\frac{1}{2}(n-p)-\frac{1}{2}(q+1).$ Plugging \eqref{main1-esti} into \eqref{portland}, we get from (\ref{KL-dis}) that
\begin{eqnarray*}\label{klestimate2}
D_{\rm KL}\bigg(\mathcal{L} (\sqrt{n}\mathbf Z_{n})||\mathbf G_{n}\bigg)
&\leq &\frac{pq^2}{4(n-p)}+c_n\bigg[-\frac{pq^2+O(pq) }{2(n-p)^2}-
\frac{pq^3+O(p^2q)}{3(n-p)^3}\bigg]\\
&= & \frac{pq^2}{4(n-p)}-\frac{pq^2}{4(n-p)}+o(1)\to 0,
\end{eqnarray*}
where we use the following two limits:
\begin{eqnarray*}
& & \frac{q+1}{2}\cdot
\frac{pq^2+O(pq) }{2(n-p)^2}=O\Big(\frac{pq^3+pq^2}{n^2}\Big)\to 0;\\
& & c_n\cdot \frac{pq^3+O(p^2q)}{(n-p)^3}=O\Big(\frac{pq^3 + p^2q}{n^2}\Big)\to 0
\end{eqnarray*}
by \eqref{service}. This gives (\ref{Comrade_blue}).
\hfill$\blacksquare$
\medskip
Let $(U_1, V_1)'\in \mathbb{R}^m$ and $(U_2, V_2)'\in \mathbb{R}^m$ be two random vectors with $U_1 \in \mathbb{R}^s, U_2 \in \mathbb{R}^s$ and $V_1 \in \mathbb{R}^t, V_2 \in \mathbb{R}^t$ where $s\geq 1,\, t\geq 1$ and $s+t=m$. It is easy to see from the first identity of (\ref{norm}) that
\begin{eqnarray}\label{Sunday_Kowalski}
\|\mathcal{L}(U_1, V_1)-\mathcal{L}(U_2, V_2)\|_{\rm TV} \geq \|\mathcal{L}(U_1)-\mathcal{L}(U_2)\|_{\rm TV}
\end{eqnarray}
by taking (special) rectangular sets in the supremum.
\medskip
\noindent\textbf{Proof of (ii) of Theorem \ref{main1}}. Remember that our assumption is
$\lim_{n\to\infty}\frac{p_nq_n}{n}=\sigma\in(0, \infty).$ By the argument at the beginning
of the proof of (i) of Theorem \ref{main1}, without loss of generality,
we assume $q_n\leq p_n$ for all $n\geq 3.$ By (\ref{H_TV}) and (\ref{TV_KL}),
it suffices to show
\begin{eqnarray}\label{culture_mouse}
\liminf_{n\to\infty}\|\mathcal{L}(\sqrt{n}\bold{Z}_n)-\mathcal{L}(\bold{G}_n)\|_{\rm TV}>0.
\end{eqnarray}
Define $f_n(p,q)=\|\mathcal{L}(\sqrt{n}\bold{Z}_n)-\mathcal{L}(\bold{G}_n)\|_{\rm TV}$ for $1\leq q\leq p \leq n.$ Here we slightly abuse the notation: $\bold{Z}_n$ and $\bold{G}_n$ are $p\times q$ matrices with $p$ and $q$ being arbitrary instead of fixed sizes $p_n$ and $q_n.$ From (\ref{Sunday_Kowalski}) it is immediate that $f_n(p,q)$ is non-decreasing in $p\in\{1,\cdots, n\}$ and $q\in \{1,\cdots, n\}$, respectively. Then, by Lemmas \ref{rectangle_performance}, \ref{sunny_time} and \ref{High_teacher}, it is enough to prove \eqref{culture_mouse} under assumption
(\ref{service}). For simplicity, from now on we will write $p$ for $p_n$ and $q$ for $q_n$, respectively. Remember the joint density function of entries of $\mathbf Z_n$ is the function $f_n(z)$ defined in \eqref{heavy} and $g_n(z):= (\sqrt{2\pi})^{-pq} e^{-\mbox{tr}(z'z)/2}$ is the density function of $\bold{G}_n.$
Set
\begin{eqnarray*}
& & L_n'=\left(1-\frac{p}{n}\right)^{-\frac{1}{2}(n-p-q-1)q}
L_n;\\
& & K_n'= \left(1-\frac{p}{n}\right)^{\frac{1}{2}(n-p-q-1)q}K_n,
\end{eqnarray*}
where $K_n$ and $L_n$ are defined by \eqref{eq1} and \eqref{Lequ2}, respectively.
Evidently,
\begin{eqnarray*}
K_n'\cdot L_n'=K_n\cdot L_n.
\end{eqnarray*}
By the expression \eqref{Radon},
we have
\begin{equation}\label{111}
\frac{f_n(z)}{g_n(z)}=K_n\cdot L_n=K_n'\cdot L_n'.
\end{equation}
Then by definition,
\begin{equation}\label{222}
\|\mathcal{L}(\sqrt{n}\bold{Z}_n)-\mathcal{L}(\bold{G}_n)\|_{\rm TV}=\int_{\mathbb{R}^{pq}}\Big|\frac{f_n(z)}{g_n(z)}-1\Big|g_n(z)\,dz
=
\mathbb{E}\Big|\frac{f_n(\bold{G}_n)}{g_n(\bold{G}_n)}-1\Big|,
\end{equation}
where the expectation is taken over random matrix $\bold{G}_n$.
From (\ref{111}) and (\ref{222}), we have
\begin{eqnarray}\label{333}
\|\mathcal{L}(\sqrt{n}\bold{Z}_n)-\mathcal{L}(\bold{G}_n)\|_{\rm TV}= \mathbb{E}|K_n'L_n'-1|,
\end{eqnarray}
where $\lambda_1, \cdots, \lambda_q$ are the eigenvalues of the Wishart matrix $\bold{G}_n'\bold{G}_n$.
First, we know $\frac{pq^3}{n^2}\to 0$ by (\ref{service}). Use (\ref{KmO}) to see
\begin{eqnarray*}\label{free_hot}
\log K_n'=-\frac{pq}{2}+\frac{q(q+1)}{4}\log \big(1+\frac p{n-p}\big)+o(1)
\end{eqnarray*}
as $n\to\infty$. By Taylor's expansion,
\begin{eqnarray*}
\log \big(1+\frac p{n-p}\big)&=&\frac p{n-p}-\frac {p^2}{2(n-p)^2} +O\big(\frac{p^3}{n^3}\big).
\end{eqnarray*}
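Substituting this expansion into the formula for $\log K_n'$ above gives
$$\frac{q(q+1)}{4}\log \big(1+\frac p{n-p}\big)=\frac{pq(q+1)}{4(n-p)}-\frac{p^2q(q+1)}{8(n-p)^2}+O\Big(\frac{p^3q^2}{n^3}\Big),$$
and (\ref{service}) guarantees that the second term on the right-hand side converges to $-\frac{\sigma^2}{8}$ while the last term vanishes.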
We then get
\begin{eqnarray*}
\log K_n'=-\frac{pq}{2}+\frac{pq(q+1)}{4(n-p)}-\frac{\sigma^2}{8} +o(1)
\end{eqnarray*}
by (\ref{service}). This and Lemma \ref{sun_half} yield
\begin{eqnarray*}
\log (K_n'L_n') \to N\big(-\frac{\sigma^2}{8}, \frac{\sigma^2}{4}\big)
\end{eqnarray*}
weakly as $n\to\infty.$ This implies that $K_n'L_n'$ converges weakly to $e^{\xi}$, where $\xi\sim N\big(-\frac{\sigma^2}{8}, \frac{\sigma^2}{4}\big)$. By (\ref{333}) and Fatou's lemma,
\begin{eqnarray*}
\liminf_{n\to\infty}\|\mathcal{L}(\sqrt{n}\bold{Z}_n)-\mathcal{L}(\bold{G}_n)\|_{\rm TV}\geq \mathbb{E}|e^{\xi}-1|>0.
\end{eqnarray*}
The proof is completed. \hfill$\blacksquare$
\section{Proof of Theorem \ref{ttt}}\label{yes_ttt}
There are two parts in this section. We first present some preparatory results and then prove Theorem \ref{ttt}.
\subsection{Auxiliary Results}\label{papapa}
Recall the Hilbert-Schmidt norm defined in (\ref{Hiha}). A limit theorem for this norm, which appears in Theorem \ref{ttt}, is given for a special case.
\begin{lem}\label{piano_black} Let $p=p_n$ satisfy $p_n/n\to c$ for some $c\in (0,1]$.
Let $\bold{\Gamma}_{p\times 1}$ and $\bold{Y}_{p\times 1}$ be as in Theorem \ref{ttt}. Then $\|\sqrt{n}\bold{\Gamma}_{p\times 1}-\bold{Y}_{p\times 1}\|_{\rm HS} \to \sqrt{\frac{c}{2}}\cdot |N(0,1)|$ weakly as $n\to\infty$.
\end{lem}
\noindent\textbf{Proof}. Let $\bold{y}=(\xi_1, \cdots, \xi_n)'\sim N_n(\bold{0}, \bold{I}_n).$ By the Gram-Schmidt algorithm, $(\bold{y}, \frac{\bold{y}}{\|\bold{y}\|})$ and $(\bold{Y}_{n\times 1}, \bold{\Gamma}_{n\times 1})$ have the same distribution. By the definition of the Hilbert-Schmidt norm, it is enough to show
\begin{eqnarray*}\label{coffee_tea}
\|\sqrt{n}\bold{\Gamma}_{p\times 1}-\bold{Y}_{p\times 1}\|_{\rm HS}^2\overset{d}{=}\Big(\frac{\sqrt{n}}{\|\bold{y}\|}-1\Big)^2\cdot \sum_{i=1}^p\xi_i^2\to \frac{c}{2}\cdot N(0,1)^2
\end{eqnarray*}
as $n\to\infty$, where $\|\bold{y}\|^2=\xi_1^2+\cdots + \xi_n^2$. In fact, the middle term of the above is equal to
\begin{eqnarray*}
& & \frac{(\|\bold{y}\|^2-n)^2}{\|\bold{y}\|^2(\|\bold{y}\|+\sqrt{n})^2}\sum_{i=1}^p\xi_i^2\\
&= & \frac{p}{n}\cdot \Big(\frac{\|\bold{y}\|^2-n}{\sqrt{n}}\Big)^2\cdot \Big(\frac{\|\bold{y}\|^2}{n}\Big)^{-1}\cdot \Big[1+\Big(\frac{\|\bold{y}\|^2}{n}\Big)^{1/2}\Big]^{-2} \cdot \frac{1}{p}\sum_{i=1}^p\xi_i^2.
\end{eqnarray*}
By the classical law of large numbers and the central limit theorem, $\frac{\|\bold{y}\|^2}{n}\to 1$ in probability and $\frac{\|\bold{y}\|^2-n}{\sqrt{n}}\to N(0,2)$ weakly as $n\to\infty$. By Slutsky's lemma, the expression above converges weakly to $\frac{c}{2}\cdot N(0,1)^2$. \hfill$\blacksquare$
\medskip
Review the notation before the statement of Theorem \ref{ttt}. Set
\begin{eqnarray}\label{notationwise}
\bold{\Sigma}_k:=(\bm{\gamma}_1, \cdots, \bm{\gamma}_k)(\bm{\gamma}_1, \cdots, \bm{\gamma}_k)'=\sum_{i=1}^k\bm{\gamma}_i\bm{\gamma}_i'
\end{eqnarray}
for $1\leq k \leq n$. Clearly, $\bold{\Sigma}_k$ has rank $k$ almost surely, and it is an idempotent matrix, that is, $\bold{\Sigma}_k^2 =\bold{\Sigma}_k$.
It is easy to check that $\bold{w}_k=(\bold{I}-\bold{\Sigma}_{k-1})\bold{y}_k$ for $2\leq k \leq n.$
\begin{lem}\label{party}
Let $1\leq k\leq n$ be given. Then, $\|\bold{w}_k\|^2\sim \chi^2(n-k+1)$ and $\|\bold{\Sigma}_{k-1}\bold{y}_k\|^2 \sim \chi^2(k-1)$. Further, given $\bold{y}_1, \cdots, \bold{y}_{k-1}$, the two conclusions still hold, and
$\|\bold{w}_k\|^2$ and $\|\bold{\Sigma}_{k-1}\bold{y}_k\|^2$ are conditionally independent.
\end{lem}
\noindent\textbf{Proof}. First, let us review the following fact.
Suppose $\bold{y}\sim N_n(\bold{0}, \bold{I}_n)$ and $\bold{A}$ is an $n\times n$ symmetric matrix with eigenvalues
$\lambda_1, \cdots, \lambda_n.$ Then
\begin{eqnarray}\label{normal_fact}
\bold{y}'\bold{A}\bold{y}\ \ \mbox{and}\ \ \sum_{i=1}^n\lambda_i\xi_i^2\ \ \mbox{have the same distribution}.
\end{eqnarray}
In particular,
\begin{eqnarray}\label{variance2}
\mbox{Var}\, (\bold{y}'\bold{A}\bold{y})=\sum_{i=1}^n\lambda_i^2\,\mbox{Var}\,(\xi_i^2)=2\,\mbox{tr}\,(\bold{A}^2).
\end{eqnarray}
If $\bold{A}$ is an idempotent matrix with rank $r$,
then all nonzero eigenvalues of $\bold{A}$ are equal to $1$, with multiplicity $r$. Thus, $\|\bold{A}\bold{y}\|^2=\bold{y}'\bold{A}\bold{y}\sim \chi^2(r)$. Moreover,
the distribution of $\|\bold{A}\bold{y}\|^2$ depends only on the rank of $\bold{A}.$
Therefore, all conclusions follow except the one on conditional independence.
Now we prove it.
Given $\bold{y}_1, \cdots, \bold{y}_{k-1}$, we see that $\bold{w}_k=(\bold{I}-\bold{\Sigma}_{k-1})\bold{y}_k$ and $\bold{\Sigma}_{k-1}\bold{y}_k$ are two Gaussian random vectors. By using the fact that $\bold{y}_1, \cdots, \bold{y}_k$ are i.i.d. random vectors, we see that the conditional covariance matrix
\begin{eqnarray*}
& & \mathbb{E}[\bold{w}_k(\bold{\Sigma}_{k-1}\bold{y}_k)'\big|\bold{y}_1, \cdots, \bold{y}_{k-1}]\\
& = & \mathbb{E}[(\bold{I}-\bold{\Sigma}_{k-1})\bold{y}_k\bold{y}_k'\bold{\Sigma}_{k-1}\big|\bold{y}_1, \cdots, \bold{y}_{k-1}]\\
& =& (\bold{I}-\bold{\Sigma}_{k-1})\mathbb{E}(\bold{y}_k\bold{y}_k')\bold{\Sigma}_{k-1} =0
\end{eqnarray*}
since $\mathbb{E}(\bold{y}_k\bold{y}_k')=\bold{I}_n$. This implies that $\bold{w}_k$ and $\bold{\Sigma}_{k-1}\bold{y}_k$ are conditionally independent. \hfill$\blacksquare$
\medskip
We next expand the trace of a target matrix in terms of its entries. Then the expectation of the trace can be computed explicitly via Lemma \ref{what}.
\begin{lem}\label{Togo} Let $\bold{\Gamma}_n=(\bm{\gamma}_1, \cdots, \bm{\gamma}_n)=(\gamma_{ij})$ be an $n\times n$ matrix. Set $\bold{\Sigma}_k=\sum_{i=1}^k\bm{\gamma}_i\bm{\gamma}_i'$ for $1\leq k \leq n.$ Given $1\leq p \leq n$, denote the upper-left $p\times p$ submatrix of $\bold{\Sigma}_{k}$ by $(\bold{\Sigma}_{k})_{p}$. Then
\begin{eqnarray*}
{\rm tr}[(\bold{\Sigma}_{k})_p^2]=\sum_{i=1}^k\sum_{r=1}^p\gamma_{ri}^4 +\sum_{i=1}^k\sum_{1\leq r\ne s\leq p}\gamma_{ri}^2\gamma_{si}^2& +& \sum_{r=1}^p\sum_{1\leq i \ne j \leq k}\gamma_{ri}^2\gamma_{rj}^2\\
&+ & \sum_{1\leq i \ne j \leq k}\sum_{1\leq r\ne s\leq p}\gamma_{ri}\gamma_{si}\gamma_{rj}\gamma_{sj}.
\end{eqnarray*}
\end{lem}
\noindent\textbf{Proof}. The argument is similar to that of (\ref{additional_care}), with the following additional care. Observe that the $(r,s)$-element of $\bm{\gamma}_i\bm{\gamma}_i'$ is $\gamma_{ri}\gamma_{si}$.
Since $(\bold{\Sigma}_{k})_{p}=\sum_{i=1}^k(\bm{\gamma}_i\bm{\gamma}_i')_{p}$, we know the
$(r,s)$-element of the symmetric matrix $(\bold{\Sigma}_{k})_{p}$ is $\sum_{i=1}^k\gamma_{ri}\gamma_{si}$ for $1\leq r, s \leq p.$ Note that ${\rm tr}(\bold{U}^2)=\sum_{1\leq r, s\leq p}u_{rs}^2$ for any symmetric matrix $\bold{U}=(u_{ij})_{p\times p}$. We have
\begin{eqnarray*}
{\rm tr}[(\bold{\Sigma}_{k})_p^2]=\sum_{1\leq r, s\leq p}\Big(\sum_{i=1}^k\gamma_{ri}\gamma_{si}\Big)^2
=\sum_{1\leq i, j \leq k}\sum_{1\leq r, s\leq p}\gamma_{ri}\gamma_{si}\gamma_{rj}\gamma_{sj}.
\end{eqnarray*}
Divide the first sum into two sums corresponding to $i=j$ and $i\ne j$, respectively. Similarly, split the second sum into the case $r=s$ and the case $r \ne s$. The conclusion then follows. \hfill$\blacksquare$
\medskip
The study of the trace norm appearing in Theorem \ref{ttt} is essentially reduced to a sum; see the first statement of the next lemma, which discloses the behavior of the sum in the ``boundary'' case.
\begin{lem}\label{trofSigma} Let $\bold{\Gamma}_n=(\bm{\gamma}_1, \cdots, \bm{\gamma}_n)=(\gamma_{ij})$ be the
$n\times n$ Haar-invariant orthogonal matrix generated from $\bold{Y}_n=(\bold{y}_1, \cdots, \bold{y}_n)$ as in (\ref{sea}). Let
$\bold{\Sigma}_k$ be as in (\ref{notationwise}) and $(\bold{\Sigma}_{k})_{p}$
denote the upper-left $p\times p$ submatrix of $\bold{\Sigma}_{k}.$ The following hold.
\begin{itemize}
\item[1)] For $\sigma>0$, assume $p\geq 1$, $q\to\infty$ with $\frac{pq^2}{n}\to\sigma.$ Then, as $n\to\infty,$
$$\sum_{j=2}^q{\rm tr}[(\bold{\Sigma}_{j-1})_p]\stackrel{p}{\longrightarrow}\frac{\sigma}2.$$
\item[2)] For any $q\geq 2$,
$$\sum_{j=2}^{q}\mathbb{E}\,{\rm tr}[(\bold{\Sigma}_{j-1})_p^2]=\frac{pq(q-1)(p+2)}{2n(n+2)}+\frac{pq(q-1)(q-2)(n-p)}{3n(n-1)(n+2)}.
$$
\end{itemize}
\end{lem}
\noindent\textbf{Proof}. To prove 1), it is enough to show that
\begin{eqnarray}\label{monkey}
\mathbb{E}\sum_{j=2}^q{\rm tr}[(\bold{\Sigma}_{j-1})_p]=\frac{pq(q-1)}{2n}\ \ \mbox{and}\ \quad {\rm Var}\Big(\sum_{j=2}^q{\rm tr}[(\bold{\Sigma}_{j-1})_p]\Big)\to 0
\end{eqnarray}
as $n\to\infty.$
Recall
$
\bold{\Sigma}_{j-1}=
\sum_{k=1}^{j-1}\bm{\gamma}_k\bm{\gamma}_k'
$ and the $(r,s)$-element of $(\bm{\gamma}_k\bm{\gamma}_k')$ is $\gamma_{rk}\gamma_{sk}$. For convenience, define
$u_k=\sum_{s=1}^p
\gamma_{sk}^2$ for any $1\le k\le q.$
Then,
\begin{eqnarray*}
{\rm tr}[(\bold{\Sigma}_{j-1})_p]=\sum_{k=1}^{j-1}{\rm tr}[(\bm{\gamma}_k\bm{\gamma}_k')_p]=\sum_{k=1}^{j-1}\sum_{s=1}^p \gamma_{sk}^2=\sum_{k=1}^{j-1}u_k
\end{eqnarray*}
for each $2\le j\le q.$ It follows that
\begin{eqnarray}\label{egg}
\sum_{j=2}^q {\rm tr}[(\bold{\Sigma}_{j-1})_p]=\sum_{j=2}^q \sum_{k=1}^{j-1} u_k=\sum_{k=1}^{q-1}(q-k)u_k.
\end{eqnarray}
We claim that
\begin{equation}\label{u}\mathbb{E}(u_k)=\frac pn, \quad \mathbb{E}(u_k^2)=\frac{p(p+2)}{n(n+2)}, \quad {\rm Cov}(u_i, u_k)=-\frac{2p(n-p)}{n^2(n-1)(n+2)}
\end{equation}
for any $1\le i\neq k\le q.$ In fact, by {\bf F2}) and Lemma \ref{what}, it is immediate to see $\mathbb{E}(u_k)=\frac pn$. Further, by the same argument,
\begin{eqnarray*}
\mathbb{E}u_k^2&=&p \mathbb{E}(\gamma_{11}^4) + p(p-1) \mathbb{E}(\gamma_{11}^2\gamma_{12}^2)\\
& = & \frac{3p}{n(n+2)}+ \frac{p(p-1)}{n(n+2)}=\frac{p(p+2)}{n(n+2)}.
\end{eqnarray*}
Now we turn to prove the third conclusion from (\ref{u}). For any $i\neq k,$ by {\bf F2}) again,
\begin{eqnarray*}
{\rm Cov}(u_i,u_k)&=& \mathbb{E}(u_iu_k)-\frac{p^2}{n^2}\\
&=&\sum_{s=1}^p\mathbb{E}(\gamma_{si}^2\gamma_{sk}^2)+\sum_{1\le s\neq t\le p}\mathbb{E}(\gamma_{si}^2 \gamma_{tk}^2)-\frac{p^2}{n^2}\\
&=& p\mathbb{E}(\gamma_{11}^2\gamma_{12}^2)+p(p-1)\mathbb{E}(\gamma_{11}^2\gamma_{22}^2)-\frac{p^2}{n^2}\\
&=& \frac{p}{n(n+2)}+p(p-1)\frac{n+1}{n(n-1)(n+2)}-\frac{p^2}{n^2}\\
&=& -\frac{2p(n-p)}{n^2(n-1)(n+2)},
\end{eqnarray*}
where we use Lemma \ref{what} for the fourth equality. So claim (\ref{u}) follows.
Now, let us go back to the formula in (\ref{egg}). By (\ref{u}),
\begin{eqnarray*}
\label{e} \sum_{j=2}^q \mathbb{E}{\rm tr}[(\bold{\Sigma}_{j-1})_p]=\sum_{k=1}^{q-1}(q-k)\mathbb{E} u_k=\frac{pq(q-1)}{2n}.
\end{eqnarray*}
The first identity from (\ref{monkey}) is concluded. Now we work on the second one.
It is readily seen from the first two conclusions of (\ref{u}) that
${\rm Var}(u_k)=\frac{2p(n-p)}{n^2(n+2)}.$ By \eqref{u} again,
\begin{eqnarray*}
{\rm Var}\Big(\sum_{j=2}^q {\rm tr} [(\bold{\Sigma}_{j-1})_p]\Big)
&=&\sum_{k=1}^{q-1}(q-k)^2 {\rm Var}(u_k)+\sum_{1\le i\neq k\le q-1}(q-i)(q-k){\rm Cov}(u_i, u_k)\\
&=&A\sum_{k=1}^{q-1}(q-k)^2-B\sum_{1\le i\neq k\le q-1}(q-i)(q-k)\\
& = & (A+B)\sum_{r=1}^{q-1}r^2 -B\Big(\sum_{r=1}^{q-1}r\Big)^2
\end{eqnarray*}
by setting $r=q-k$ and $s=q-i$, respectively, where
\begin{eqnarray*}
A=\frac{2p(n-p)}{n^2(n+2)}\ \ \ \mbox{and}\ \ \ B=\frac{2p(n-p)}{n^2(n-1)(n+2)}.
\end{eqnarray*}
From an elementary calculation, we get
$$\aligned
{\rm Var}\Big(\sum_{j=2}^q {\rm tr} [(\bold{\Sigma}_{j-1})_p]\Big)
&=\frac{pq(n-p)(q-1)}{3n(n-1)(n+2)}\Big(2q-1-\frac{3q(q-1)}{2n}\Big)\label{come_on}\\
&= O\Big(\frac{pq^3}{n^2}\Big)
= O\Big(\frac{1}{pq}\Big)\to 0
\endaligned $$
as $ n\to\infty$, where we use the fact $\frac{3q(q-1)}{2n}=O(1)$ under the assumption $pq^2/n\to\sigma$ and $q\to\infty.$ This gives the second conclusion from (\ref{monkey}).
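For the reader's convenience, the elementary calculation uses $\sum_{r=1}^{q-1}r=\frac{q(q-1)}{2}$, $\sum_{r=1}^{q-1}r^2=\frac{q(q-1)(2q-1)}{6}$ and $A+B=\frac{2p(n-p)}{n(n-1)(n+2)}$, so that
$$(A+B)\sum_{r=1}^{q-1}r^2-B\Big(\sum_{r=1}^{q-1}r\Big)^2=\frac{p(n-p)q(q-1)(2q-1)}{3n(n-1)(n+2)}-\frac{p(n-p)q^2(q-1)^2}{2n^2(n-1)(n+2)},$$
which is the displayed variance after factoring out $\frac{pq(n-p)(q-1)}{3n(n-1)(n+2)}$.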
Now we prove 2). By {\bf F2}) and Lemma \ref{Togo},
\begin{eqnarray*}
& & \mathbb{E}{\rm tr}[(\bold{\Sigma}_{j-1})_p^2]\\
&=& p(j-1)\mathbb{E}(\gamma_{11}^4) + [(j-1)p(p-1) + p(j-1)(j-2)]\mathbb{E}(\gamma_{11}^2\gamma_{12}^2)\\
& & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \, p(p-1)(j-1)(j-2)\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22}).
\end{eqnarray*}
Write
\begin{eqnarray*}
& & (j-1)p(p-1) + p(j-1)(j-2)=(j-1)p(p-2)+p(j-1)^2;\\
& & p(p-1)(j-1)(j-2)=p(p-1)\big[(j-1)^2-(j-1)\big].
\end{eqnarray*}
Then, by computing $\sum_{j=1}^{q}(j-1)$ and $\sum_{j=1}^{q}(j-1)^2$, we obtain
\begin{eqnarray*}
& & \sum_{j=2}^{q}\mathbb{E}\,{\rm tr}[(\bold{\Sigma}_{j-1})_p^2]\\
&=&
\frac{1}{2}pq(q-1)\mathbb{E}(\gamma_{11}^4) + \frac{1}{6}pq(q-1)\big[3p+2q-7\big]\mathbb{E}(\gamma_{11}^2\gamma_{12}^2)\\
& & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \,
\frac{1}{3}pq(p-1)(q-1)(q-2)\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22}).
\end{eqnarray*}
From Lemma \ref{what}, it is trivial to get
\begin{eqnarray*}
& & \sum_{j=2}^{q}\mathbb{E}\,{\rm tr}[(\bold{\Sigma}_{j-1})_p^2]
=
\frac{pq(q-1)(p+2)}{2n(n+2)}+\frac{pq(q-1)(q-2)(n-p)}{3n(n-1)(n+2)}.
\end{eqnarray*}
The proof is completed. \hfill$\blacksquare$
\medskip
Our target is a submatrix of a Haar-orthogonal matrix. Based on the argument in the proof of Lemma \ref{trofSigma}, we now provide an estimate for this submatrix.
\begin{prop}\label{lala} Let $\bold{\Gamma}_{p\times q}$ and $\bold{Y}_{p\times q}$ be
as in (\ref{Hiha}). Then
\begin{eqnarray*}
\mathbb{E}\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|^2_{\rm HS} \leq \frac{24pq^2}{n}
\end{eqnarray*}
for any $n\ge 2$ and $1\leq p, q\leq n.$
\end{prop}
\noindent\textbf{Proof}. Recall the notation from (\ref{sea}), the identity
$\bold{w}_j=(\bold{I}-\bold{\Sigma}_{j-1})\bold{y}_j$ and $\bm{\gamma}_j={\bold w}_j/\|{\bf w}_j\|$.
We first write
\begin{eqnarray}\label{aha}
\sqrt{n}\bm{\gamma}_j-\bold{y}_j=\sqrt{n}{\bm \gamma}_j-{\bf w}_j
-\bold{\Sigma}_{j-1}\bold{y}_j=(\sqrt{n}-\|{\bf w}_j\|){\bm \gamma}_j-\bold{\Sigma}_{j-1}\bold{y}_j
\end{eqnarray}
for $1\leq j \leq n$, where $\bold{\Sigma}_{0}=\bold{0}$. Define $\bold{M}=\bold{M}_{p\times n}=(\bold{I}_p, \bold{0})$ for $1\leq p \leq n-1$ and $\bold{M}_{n\times n}=\bold{I}_n$, where $\bold{I}_p$ is the $p\times p$ identity matrix and $\bold{0}$ is the $p\times (n-p)$ matrix whose entries are all equal to zero. Evidently, $\bold{M}(\sqrt{n}\bm{\gamma}_j-\bold{y}_j)$ is the upper $p$-dimensional vector of $\sqrt{n}\bm{\gamma}_j-\bold{y}_j$. Hence, by (\ref{aha}),
\begin{equation}\label{expnorm}\aligned
\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|^2_{\rm HS}&=\sum_{j=1}^q\sum_{i=1}^p(\sqrt{n}\gamma_{ij}-y_{ij})^2 \\
&= \sum_{j=1}^q\|\bold{M}(\sqrt{n}\bm{\gamma}_j-\bold{y}_j)\|^2 \\
&=\sum_{j=1}^q\big\|(\sqrt{n}-\|{\bf w}_j\|)\bold{M}{\bm \gamma}_j-
\bold{M}\bold{\Sigma}_{j-1}\bold{y}_j\big\|^2\\
& \leq 2\sum_{j=1}^q(\sqrt{n}-\|{\bold w}_j\|)^2\|\bold{M}{\bm \gamma}_j\|^2 + 2
\sum_{j=1}^q\|\bold{M}\bold{\Sigma}_{j-1}\bold{y}_j\big\|^2\\
\endaligned
\end{equation}
by the triangle inequality and the formula $(a+b)^2\leq 2a^2+ 2b^2$ for any $a,b\in \mathbb{R}.$ Define
\begin{eqnarray*}
A_j=\big(\sqrt{n}-\|\bold{w}_j\|\big)^2;\ \ B_j=\|\bold{M}\bm{\gamma}_j\|^2;\ \
C_j=\|\bold{M}\bold{\Sigma}_{j-1}\bold{y}_j\big\|^2
\end{eqnarray*}
for $1\leq j \leq n$ with $C_0=0$. Then,
\begin{eqnarray}\label{simple}
\mathbb{E}\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|^2_{\rm HS} \leq 2\sum_{j=1}^q \mathbb{E}(A_jB_j) + 2\sum_{j=1}^q \mathbb{E}C_j.
\end{eqnarray}
We next bound $A_j$, $B_j$ and $C_j$, respectively, in terms of their moments.
{\it The estimate of $A_j$}. Trivially,
$$\aligned
A_j^2=\Big[\frac{\|\bold{w}_j\|^2-n}{(\|\bold{w}_j\|+\sqrt{n})}\Big]^4
\leq\frac{1}{n^2}\cdot(\|\bold{w}_j\|^2-n)^4. \endaligned $$
By Lemma \ref{party}, $\|\bold{w}_j\|^2\sim\chi^2(n-j+1).$ Set $c_j:=n-j+1$ and ${\bf z}_j=\|{\bf w}_j\|^2-c_j.$ By Lemma \ref{party_love} and the binomial formula, we have
$$\aligned
n^2\mathbb{E}(A_j^2) &\le \mathbb{E}\big[(\bold{z}_j-j+1)^4\big]\\
&=\mathbb{E}(\bold{z}_j^4)-4(j-1)\mathbb{E}(\bold{z}_j^3)+6(j-1)^2\mathbb{E}(\bold{z}_j^2)-4(j-1)^3\mathbb{E}\bold{z}_j+(j-1)^4\\
&=12c_j(c_j+4)-32(j-1)c_j+12c_j(j-1)^2+(j-1)^4\\
&\le 12(c_j+2)^2+12(c_j+2)(j-1)^2+3(j-1)^4\\
&=12\big(c_j+2+\frac{(j-1)^2}{2}\big)^2.
\endaligned $$
This immediately implies that
$$(\mathbb{E}A_j^2)^{1/2}\le\frac{2\sqrt{3}}{n}\Big[n-j+3+\frac{(j-1)^2}{2}\Big].$$
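The central moments of the chi-square distribution used in the chain above (as supplied by Lemma \ref{party_love}) are, for $\bold{z}_j=\|{\bf w}_j\|^2-c_j$ with $\|\bold{w}_j\|^2\sim\chi^2(c_j)$,
$$\mathbb{E}\bold{z}_j=0,\ \ \ \mathbb{E}(\bold{z}_j^2)=2c_j,\ \ \ \mathbb{E}(\bold{z}_j^3)=8c_j,\ \ \ \mathbb{E}(\bold{z}_j^4)=12c_j(c_j+4),$$
which give the third line of that chain.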
{\it The estimate of $B_j$}. Recall (\ref{sea}). The vector $\bm{\gamma}_j=\frac{\bold{w}_j}{\|\bold{w}_j\|}$ has the same distribution as $\bm{\gamma}_1=\frac{\bold{y}_1}{\|\bold{y}_1\|}$.
Note $\bold{y}_1=(y_{11}, \cdots, y_{n1})'$, hence $\|\bold{M}\bold{\gamma}_1\|^2=U_1+\cdots + U_p$ where $U_i=y_{i1}^2/(y_{11}^2 +\cdots + y_{n1}^2)$ for $1\leq i \leq n$. By Lemma \ref{Jiang2009},
\begin{eqnarray*}
\mathbb{E}\big(\|\bold{M}\bold{\gamma}_1\|^4\big) & = & \mathbb{E}\big[(U_1+\cdots + U_p)^2\big] \\
& = & p\mathbb{E}(U_1^2)+ p(p-1)\mathbb{E}(U_1U_2)\\
& = &\frac{p(p+2)}{n(n+2)} .
\end{eqnarray*}
Therefore $$\mathbb{E}B_j^2=\mathbb{E}\big(\|\bold{M}\bold{\gamma}_1\|^4\big)=\frac{p(p+2)}{n(n+2)}\le \frac{3p^2}{n^2}.$$
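The moment values from Lemma \ref{Jiang2009} used here are $\mathbb{E}(U_1^2)=\frac{3}{n(n+2)}$ and $\mathbb{E}(U_1U_2)=\frac{1}{n(n+2)}$, so that
$$p\,\mathbb{E}(U_1^2)+p(p-1)\,\mathbb{E}(U_1U_2)=\frac{3p+p(p-1)}{n(n+2)}=\frac{p(p+2)}{n(n+2)}.$$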
In particular, the two estimates above conclude that
$$ |\mathbb{E}(A_jB_j)|\leq
(\mathbb{E}A_j^2)^{1/2}(\mathbb{E}B_j^2)^{1/2}\leq \frac{6p}{n^2}\Big(n-j+3+\frac{(j-1)^2}{2}\Big),
$$
which guarantees
\begin{equation}\label{ABsum}\aligned 2\sum_{j=1}^q\mathbb{E}(A_jB_j)&\le \frac{12p}{n^2}\sum_{j=1}^q\Big(n-j+3+\frac{(j-1)^2}{2}\Big) \\
&=\frac{12pq}{n}\Big(1+\frac{2q^2-9q+31}{12n}\Big).
\endaligned
\end{equation}
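The equality in \eqref{ABsum} follows from $\sum_{j=1}^q(j-1)=\frac{q(q-1)}{2}$ and $\sum_{j=1}^q(j-1)^2=\frac{q(q-1)(2q-1)}{6}$:
$$\sum_{j=1}^q\Big(n-j+3+\frac{(j-1)^2}{2}\Big)=q(n+3)-\frac{q(q+1)}{2}+\frac{q(q-1)(2q-1)}{12}=q\Big(n+\frac{2q^2-9q+31}{12}\Big).$$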
{\it The estimate of $C_j$.} Now, conditioning on $\bold{y}_1, \cdots, \bold{y}_{j-1}$, we get from (\ref{normal_fact}) that
\begin{equation}\label{Cespress}
C_j=\|\bold{M}\bold{\Sigma}_{j-1}\bold{y}_j\big\|^2
= \bold{y}_j'\bold{\Sigma}_{j-1}
\begin{pmatrix}
\bold{I}_p & \bold{0}\\
\bold{0} & \bold{0}
\end{pmatrix}
\bold{\Sigma}_{j-1}\bold{y}_j \overset{d}{=} \sum_{k=1}^n\lambda_k\xi_k^2, \end{equation}
where $\xi_1, \cdots, \xi_n$ are i.i.d. $N(0,1)$-distributed random variables and $\lambda_1, \cdots, \lambda_n$ are the eigenvalues of
\begin{equation}\label{defA}
\bold{A}_{j-1}:=\bold{\Sigma}_{j-1}
\begin{pmatrix}
\bold{I}_p & \bold{0}\\
\bold{0} & \bold{0}
\end{pmatrix}
\bold{\Sigma}_{j-1}.
\end{equation}
In particular,
\begin{eqnarray*}
\mathbb{E}C_j = \mathbb{E}\,\mbox{tr}(\bold{A}_{j-1}).
\end{eqnarray*}
By the fact $\mbox{tr}(\bold{A}\bold{B})=\mbox{tr}(\bold{B}\bold{A})$ for any matrix $\bold{A}, \bold{B}$ and the fact that both $\bold{\Sigma}_{j-1}$ and $\begin{pmatrix}
\bold{I}_p & \bold{0}\\
\bold{0} & \bold{0}
\end{pmatrix}$ are idempotent, we see
\begin{equation}\label{traceA}\aligned
\mbox{tr}(\bold{A}_{j-1})&=\mbox{tr}\, \Big[\bold{\Sigma}_{j-1}
\begin{pmatrix}
\bold{I}_p & \bold{0}\\
\bold{0} & \bold{0}
\end{pmatrix}
\begin{pmatrix}
\bold{I}_p & \bold{0}\\
\bold{0} & \bold{0}
\end{pmatrix}
\bold{\Sigma}_{j-1}\Big]\\
& = \mbox{tr}\, \Big[
\begin{pmatrix}
\bold{I}_p & \bold{0}\\
\bold{0} & \bold{0}
\end{pmatrix}
\bold{\Sigma}_{j-1}\begin{pmatrix}
\bold{I}_p & \bold{0}\\
\bold{0} & \bold{0}
\end{pmatrix}\Big]\\
&={\rm tr}\big((\bold{\Sigma}_{j-1})_p\big),
\endaligned
\end{equation}
where $(\bold{\Sigma}_{j-1})_p$ is as in the statement of Lemma \ref{Togo}. Hence by \eqref{monkey}, we have
\begin{equation}\label{Cesti}
\sum_{j=2}^q\mathbb{E}C_j=\sum_{j=2}^q\mathbb{E}\,\mbox{tr}(\bold{A}_{j-1})
=\sum_{j=2}^q\mathbb{E}\,\mbox{tr}((\bold{\Sigma}_{j-1})_p)=\frac{pq(q-1)}{2n}.\end{equation}
Therefore plugging \eqref{ABsum} and \eqref{Cesti} into \eqref{simple}, we know
\begin{eqnarray*}
\mathbb{E}\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|^2_{\rm HS}
&\leq & 2\sum_{j=1}^q\mathbb{E}(A_jB_j) + 2\sum_{j=1}^q\mathbb{E}C_j \nonumber\\
&\le &\frac{12pq}{n}\Big(1+\frac{2q^2-9q+31}{12n}\Big)+\frac{pq(q-1)}{n}\\
&= &\frac{pq^2}{n}\cdot\Big(1+\frac{2q^2-9q+31+11n}{nq}\Big).
\end{eqnarray*}
Define $f(q)=2q+\frac{31+11n}{q}-9$ for $q>0.$ Since $f''(q)>0$ for all $q>0$, we know $f(q)$ is a convex function. Therefore, $\max_{1\leq q \leq n}f(q)= f(1)\vee f(n).$
Trivially, $$f(1)-f(n)=24+11n-\Big(2n+\frac{31}{n}+2\Big)=22+9n-\frac{31}{n}\ge 22+9-31=0$$ for all $n\geq 1$. We then have
$\max_{1\leq q \leq n}f(q)=24+11n$
for any $n\geq 2.$
Thus,
$$
\mathbb{E}\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|^2_{\rm HS}
\leq\frac{pq^2}{n}\Big(1+\frac{24+11n}{n}\Big)=
\frac{12pq^2}{n}\Big(1+\frac2n\Big)\le\frac{24pq^2}{n}
$$
for any $n\geq 2.$ The proof is completed. \hfill$\blacksquare$
\medskip
\medskip
Similar to Lemma \ref{High_teacher}, the next result will serve as the framework of the proof of Theorem \ref{ttt}. The spirit of the proof is close to that of Lemma \ref{High_teacher}. We therefore omit it. For a sequence of numbers $\{a_n;\, n\geq 1\}$, we write $\lim_{n\to\infty}a_n\in (0, \infty)$ if $\lim_{n\to\infty}a_n=a$ exists and $a\in (0, \infty)$.
\begin{lem}\label{Low_beauty}
For each $n\geq 1$, let $f_n(p,q):\{1, 2,\cdots, n\}^2\to [0, \infty)$ satisfy that $f_n(p, q)$ is non-decreasing in $p\in \{1, 2,\cdots, n\}$ and $q\in \{1, 2,\cdots, n\}$, respectively. Suppose
\begin{eqnarray}\label{red_red}
\liminf_{n\to\infty}f_n(p_n, q_n)>0
\end{eqnarray}
for any sequence $\{(p_n, q_n)\in \{1, 2,\cdots, n\}^2\}_{n=1}^{\infty}$ if any of the next two conditions holds:
(i) $q_n\equiv 1$ and $\lim_{n\to\infty}p_n/n\in (0,1)$;
(ii) $\lim_{n\to\infty}q_n=\infty$ and $\lim_{n\to\infty}(p_nq_n^2)/n\in (0, \infty)$.
\noindent Then (\ref{red_red}) holds for any sequence $\{(p_n, q_n)\}_{n=1}^{\infty}$ with $1\leq p_n, q_n\leq n$ for each $n\geq 1$ and $\lim_{n\to\infty}(p_nq_n^2)/n\in (0, \infty)$.
\end{lem}
\subsection{The Proof of Theorem \ref{ttt}}\label{big_mouse}
With these preparations in place, we are now ready to prove the second main result of this paper.
\medskip
\noindent\textbf{Proof of Theorem \ref{ttt}}. The first part follows immediately from Proposition \ref{lala}. The second part is given next.
We first prove that
\begin{eqnarray}\label{God_creat_math}
\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|_{\rm HS} \overset{p}{\to} \Big(\frac{\sigma}{2}\Big)^{1/2}
\end{eqnarray}
for any $1\leq p_n, q_n \leq n$ satisfying $q_n\to \infty$ and $\frac{pq^2}{n}\to \sigma>0$. We claim this implies that
\begin{eqnarray}\label{sun_cars}
\liminf_{n\to\infty}P(\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|_{\rm HS}\geq \epsilon)>0
\end{eqnarray}
for any $\epsilon \in (0, \sqrt{\sigma/2})$ and any $1\leq p_n, q_n \leq n$ with $\frac{pq^2}{n}\to \sigma>0$. In fact, for given $\epsilon \in (0, \sqrt{\sigma/2})$, set
\begin{eqnarray*}
f_n(p,q)=P(\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|_{\rm HS}\geq \epsilon)
\end{eqnarray*}
for all $1\leq p, q\leq n$. Here we slightly abuse some notation: $\bold{\Gamma}_{p\times q}$ and $\bold{Y}_{p\times q}$ are $p\times q$ matrices with $p$ and $q$ being arbitrary instead of the fixed sizes $p_n$ and $q_n.$ By (\ref{Hiha}), it is obvious that $f_n(p,q)$ is non-decreasing in $p$ and in $q$, respectively, for any $n\geq 1$. If (\ref{God_creat_math}) holds, then $\liminf_{n\to\infty}f_n(p_n, q_n)=1$ for any $1\leq p_n, q_n \leq n$ under the conditions $\lim_{n\to\infty}q_n= \infty$ and $\lim_{n\to\infty}\frac{pq^2}{n}=\sigma \in (0, \infty).$ By Lemma \ref{piano_black}, $\liminf_{n\to\infty}f_n(p_n, 1)=P(|N(0,1)|\geq \epsilon\sqrt{2/c})>0$ for any $1\leq p_n \leq n$ with $\lim_{n\to\infty}p_n/n=c\in (0,1)$. Then we obtain (\ref{sun_cars}) from Lemma \ref{Low_beauty}.
Now we start to prove (\ref{God_creat_math}). Let us continue to use the notation in the proof of Proposition \ref{lala}. Review $\bold{M}=\bold{M}_{p\times n}=(\bold{I}_p, \bold{0})$ for $1\leq p \leq n-1$ and $\bold{M}_{n\times n}=\bold{I}_n$, where $\bold{I}_p$ is the $p\times p$ identity matrix and $\bold{0}$ is the $p\times (n-p)$ matrix whose entries are all equal to zero. By \eqref{expnorm},
\begin{eqnarray*}
\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|_{\rm HS}^2=
\sum_{j=1}^q\|(\sqrt{n}-\|\bold{w}_j\|)\bold{M}\bm{\gamma}_j-
\bold{M}\bold{\Sigma}_{j-1}\bold{y}_j\big\|^2.
\end{eqnarray*}
Recall
$$
A_j=(\sqrt{n}-\|\bold{w}_j\|)^2;\ \ B_j=\|\bold{M}\bm{\gamma}_j\|^2;\ \ C_j=\|\bold{M}\bold{\Sigma}_{j-1}\bold{y}_j\big\|^2.
$$
For vectors $\bold{u}, \bold{v}\in \mathbb{R}^p$, we know $\|\bold{u}+\bold{v}\|^2=\|\bold{u}\|^2+\|\bold{v}\|^2 + 2\langle\bold{u}, \bold{v}\rangle$. By the Cauchy-Schwarz inequality, $|\langle\bold{u}, \bold{v}\rangle| \leq \|\bold{u}\|\cdot \|\bold{v}\|.$ So we can write
$$
\|\sqrt{n}\bold{\Gamma}_{p\times q}-\bold{Y}_{p\times q}\|_{\rm HS}^2=\sum_{j=1}^qA_jB_j + \sum_{j=1}^qC_j + \sum_{j=1}^q\epsilon_j
$$
where $|\epsilon_j|\leq 2\sqrt{A_jB_jC_j}$ for $1\leq j \leq q.$ From (\ref{ABsum}), we see that
$$
\mathbb{E}\sum_{j=1}^qA_jB_j
\le \frac{6pq}{n}\Big(1+\frac{2q^2-9q+31}{12n}\Big)
=O\Big(\frac{1}{q}+\frac1{pq}\Big)\to 0
$$
since $q=q_n\to\infty$ and $\frac{pq^2}{n}\to \sigma>0$ as $n\to\infty.$ In particular,
\begin{eqnarray}\label{bad_child}
\sum_{j=1}^qA_jB_j \overset{p}{\to} 0
\end{eqnarray}
as $n\to\infty$. We claim that it suffices to show
\begin{eqnarray}\label{last}
\sum_{j=1}^qC_j \overset{p}{\to} \frac{\sigma}{2}
\end{eqnarray}
as $n\to\infty$. In fact, once \eqref{last} holds, we have by the Cauchy-Schwarz inequality and \eqref{bad_child}
$$
(\sum_{j=1}^q|\epsilon_j|)^2 \le4(\sum_{j=1}^q\sqrt{A_jB_jC_j} )^2\le 4\Big(\sum_{j=1}^qA_jB_j\Big)\sum_{j=1}^q C_j\stackrel{p}{\to} 0
$$
as $n\to\infty.$ Then $\sum_{j=1}^q\epsilon_j\stackrel{p}{\to}0 $ as $n\to\infty.$ Now we prove \eqref{last}.
Recall the notation $(\bold{\Sigma}_{k})_{p}$ stands for the upper-left $p\times p$ submatrix of $\bold{\Sigma}_{k}$. Let $\mathcal{F}_{k}$ be
the sigma-algebra generated by $\bold{y}_1, \bold{y}_2, \cdots, \bold{y}_k.$ We first claim
\begin{eqnarray}\label{conclusion2}
X_j:=C_j-{\rm tr }[(\bold{\Sigma}_{j-1})_{p}],\ \ j=2,3,\cdots,q,
\end{eqnarray}
form a martingale difference sequence with respect to $\mathcal{F}_2,\mathcal{F}_3, \cdots, \mathcal{F}_{q}.$ In fact, as in \eqref{Cespress}, we write
\begin{eqnarray}\label{no_need}
C_j=\bold{y}_j'\bold{A}_{j-1}\bold{y}_j
\end{eqnarray}
for any $2\le j\le q,$ where the symmetric matrix
$
\bold{A}_{j-1}$
is defined in \eqref{defA} and is independent of $\bold{y}_j$. Let $\mu_1, \cdots, \mu_n$ be the eigenvalues of $\bold{A}_{j-1}$. By (\ref{normal_fact}), \eqref{traceA} and independence,
\begin{eqnarray*}\label{no_useless}
\mathbb{E}(\bold{y}_j' \bold{A}_{j-1} \bold{y}_j|\mathcal{F}_{j-1})=\mathbb{E}\sum_{i=1}^n\mu_i\xi_i^2=\mbox{tr}\,(\bold{A}_{j-1})
=\mbox{tr}\,[(\bold{\Sigma}_{j-1})_p],
\end{eqnarray*}
where $\xi_1, \cdots, \xi_n$ are i.i.d. standard normals.
This confirms (\ref{conclusion2}).
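Spelled out, since $\bold{\Sigma}_{j-1}$, and hence ${\rm tr}[(\bold{\Sigma}_{j-1})_{p}]$, is $\mathcal{F}_{j-1}$-measurable, the conditional expectation computed above yields the martingale-difference property directly:

```latex
\mathbb{E}(X_j\,|\,\mathcal{F}_{j-1})
=\mathbb{E}(C_j\,|\,\mathcal{F}_{j-1})-{\rm tr}[(\bold{\Sigma}_{j-1})_{p}]
={\rm tr}[(\bold{\Sigma}_{j-1})_{p}]-{\rm tr}[(\bold{\Sigma}_{j-1})_{p}]
=0, \qquad 2\le j\le q.
```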
Obviously, $$B_k:=\sum_{j=2}^k X_j,\ \ k=2,\cdots, q$$ is a martingale relative to $\{\mathcal{F}_k;\, k=2,\cdots, q\}$. Therefore,
$$\sum_{j=2}^qC_j=B_q+\sum_{j=2}^q{\rm tr}[(\bold{\Sigma}_{j-1})_p].$$
By Lemma \ref{trofSigma}, when $pq^2/n\to \sigma ,$ $$\sum_{j=2}^q{\rm tr}[(\bold{\Sigma}_{j-1})_p]\stackrel{p}{\longrightarrow}\frac{\sigma}2$$ as $n\to\infty.$ To get (\ref{last}), it is enough to show
\begin{eqnarray}\label{inspiration}
{\rm Var}(B_q)\to 0
\end{eqnarray}
as $n\to\infty.$
Since $(X_i)_{2\le i\le q}$ is a martingale difference sequence, we have
$$\mathbb{E}(X_iX_j)=\mathbb{E}[X_i\mathbb{E}(X_j|\mathcal{F}_{j-1})]=0$$ for any $2\le i<j\le q.$ Also, recall the conditional variance has the formula
\begin{eqnarray*}
{\rm Var}(X_j)&=& \mathbb{E}\, {\rm Var}\,(X_j|\mathcal{F}_{j-1}) + {\rm Var}\,[\mathbb{E} (X_j|\mathcal{F}_{j-1})]\\
& = & \mathbb{E}\, {\rm Var}\,(X_j|\mathcal{F}_{j-1})
\end{eqnarray*}
since $X_j$ is a martingale difference,
where ${\rm Var}\,(X_j|\mathcal{F}_{j-1})=
\mathbb{E}\big[(X_j-\mathbb{E}(X_j|\mathcal{F}_{j-1}))^2|\mathcal{F}_{j-1}\big]$; see, for example, \cite{Casella}.
Therefore, by (\ref{no_need}) and then (\ref{variance2})
\begin{eqnarray*}
{\rm Var}(B_q)=\sum_{j=2}^q {\rm Var}(X_j)&=&\sum_{j=2}^q\mathbb{E}\, {\rm Var}(\bold{y}_j'\bold{A}_{j-1} \bold{y}_j|\mathcal{F}_{j-1})\\
&=& 2\sum_{j=2}^q \mathbb{E}\,{\rm tr}(\bold{A}_{j-1}^2).
\end{eqnarray*}
Repeatedly using the facts
\begin{eqnarray*}
\bold{\Sigma}_{j-1}^2=\bold{\Sigma}_{j-1},\ \
\begin{pmatrix}\bold{I}_{p} & \bold{0} \\
\bold{0} & \bold{0} \end{pmatrix}^2=\begin{pmatrix}\bold{I}_{p} & \bold{0} \\
\bold{0} & \bold{0} \end{pmatrix}, \ \ \ \mbox{tr}\, (\bold{U}\bold{V})=\mbox{tr}\, (\bold{V}\bold{U})
\end{eqnarray*}
for any $n\times n$ matrices $\bold{U}$ and $\bold{V}$,
it is not difficult to see ${\rm tr}(\bold{A}_{j-1}^2)={\rm tr}[(\bold{\Sigma}_{j-1})_p^2]$. From
Lemma \ref{trofSigma},
\begin{eqnarray*}
\sum_{j=2}^q \mathbb{E}\,{\rm tr}(\bold{A}_{j-1}^2)&= &\frac{pq(q-1)(p+2)}{2n(n+2)}+\frac{pq(q-1)(q-2)(n-p)}{3n(n-1)(n+2)}\\
& \leq & C\cdot \Big(\frac{p^2q^2}{n^2} + \frac{pq^3}{n^2}\Big)=O(\frac1{q^2}+\frac1{pq})\to 0
\end{eqnarray*}
as $n\to\infty$ by the assumption $q\to\infty$ and $\frac{pq^2}{n}\to\sigma>0.$ This gives (\ref{inspiration}). The proof is completed. \hfill$\blacksquare$
\section{Appendix}\label{Good_Tue}
In this section we will prove Lemmas \ref{great}, \ref{awesome_pen} and \ref{var-e}. We start with Lemma \ref{great}, which computes the mean values of monomials in the entries of a Haar-orthogonal matrix.
To make the monomials more intuitive, we provide Figure \ref{fig200}. In each plot of the figure, the number of circles at a corner indicates the power of the corresponding matrix entry in the monomial. For example, plot (d)
stands for the monomial $\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22}^3$;
plot (e) represents $\gamma_{11}\gamma_{12}\gamma_{22}\gamma_{23}\gamma_{31}\gamma_{33}.$
\begin{figure}
\begin{center}
\includegraphics[height=3.5in,width=6.4in]{5.eps}
\caption{\sl Diagrams of the monomials in Lemma \ref{great}: the number of circles at a corner indicates the power of the corresponding matrix entry in the monomial.}
\end{center}
\label{fig200}
\end{figure}
\medskip
\noindent\textbf{Proof of Lemma \ref{great}}. Our arguments below are based on the unit length of each row/column, the orthogonality of any two rows/columns, and the exchangeability of the row/column random vectors. We first prove conclusions (a) and (c), and then prove the rest.
(a) Recall {\bf F1}).
Taking $a_1=a_2=a_3=1$ and $a_4=\cdots =a_n=0$ in Lemma \ref{Jiang2009}, we get the conclusion.
(c) Since $$\gamma_{11}^2\gamma_{21}^2(\sum_{j=1}^n\gamma_{1j}^2-1)=0,$$ taking expectations gives
\begin{eqnarray*}
\mathbb{E}(\gamma_{11}^4\gamma_{21}^2) + (n-1)\mathbb{E}(\gamma_{11}^2\gamma_{21}^2\gamma_{12}^2) =\mathbb{E}(\gamma_{11}^2\gamma_{21}^2).
\end{eqnarray*}
By using Lemma \ref{Jiang2009}, we see
\begin{eqnarray}\label{rabbit}
\mathbb{E}(\gamma_{11}^2\gamma_{21}^2)=\frac{1}{n(n+2)};\ \ \ \mathbb{E}(\gamma_{11}^4\gamma_{21}^2)=\frac{3}{n(n+2)(n+4)}.
\end{eqnarray}
The conclusion (c) follows.
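Explicitly, the two displays above also determine the mixed moment; solving the identity preceding (\ref{rabbit}) gives

```latex
\mathbb{E}(\gamma_{11}^2\gamma_{21}^2\gamma_{12}^2)
=\frac{\mathbb{E}(\gamma_{11}^2\gamma_{21}^2)-\mathbb{E}(\gamma_{11}^4\gamma_{21}^2)}{n-1}
=\frac{1}{n-1}\Big(\frac{1}{n(n+2)}-\frac{3}{n(n+2)(n+4)}\Big)
=\frac{n+1}{(n-1)n(n+2)(n+4)}.
```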
(b), (d) \& (e). Since the first three columns of $\mathbf {\Gamma}_n$ are mutually orthogonal, we know
\begin{eqnarray}\label{tri-1}\aligned
0&=\mathbb{E}\Big(\sum_{i=1}^n \gamma_{i1}\gamma_{i2}\sum_{j=1}^n \gamma_{j2}\gamma_{j3} \sum_{k=1}^n \gamma_{k1}\gamma_{k3}\Big)\\
&=n\mathbb{E}(\gamma_{11}^2\gamma_{12}^2\gamma_{13}^2)+
3n(n-1)\mathbb{E}(\gamma_{11}^2\gamma_{12}\gamma_{13}\gamma_{22}\gamma_{23})\\
&+n(n-1)(n-2)\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{22}\gamma_{23}\gamma_{31}\gamma_{33}).
\endaligned \end{eqnarray}
Similarly we have
\begin{eqnarray}\label{tri-2}\aligned
0&=\mathbb{E}\Big(\sum_{i=1}^n \gamma_{i1}^2\gamma_{i2}\gamma_{i3}\sum_{j=1}^n \gamma_{j2} \gamma_{j3}\Big)\\
&=n\mathbb{E}(\gamma_{11}^2\gamma_{12}^2\gamma_{13}^2)+
n(n-1)\mathbb{E}(\gamma_{11}^2\gamma_{12}\gamma_{13}\gamma_{22}\gamma_{23})\\
\endaligned \end{eqnarray}
and
\begin{eqnarray}\label{tri-3}\aligned
0&=\mathbb{E}\Big(\sum_{i=1}^n \gamma_{i1}^3\gamma_{i2}\sum_{j=1}^n \gamma_{j1} \gamma_{j2}\Big)\\
&=n\mathbb{E}(\gamma_{11}^4\gamma_{12}^2)+n(n-1)\mathbb{E}(\gamma_{11}^3\gamma_{12}\gamma_{21}\gamma_{22}).\\
\endaligned \end{eqnarray}
Combining the expressions \eqref{tri-1}, \eqref{tri-2} and \eqref{tri-3} together with conclusion (a) and (\ref{rabbit}), we arrive at
\begin{eqnarray*}\label{tri-t}\aligned
&\mathbb{E}(\gamma_{11}^2\gamma_{12}\gamma_{13}\gamma_{22}\gamma_{23})=
-\frac{1}{(n-1)n(n+2)(n+4)}; \\
& \mathbb{E}( \gamma_{11}^3\gamma_{12}\gamma_{22}\gamma_{21})=-\frac{3}{(n-1)n(n+2)(n+4)};\\
& \mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{22}\gamma_{23}\gamma_{31}\gamma_{33})
=\frac{2}{(n-2)(n-1)n(n+2)(n+4)}.
\endaligned
\end{eqnarray*}
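These values can be double-checked against the system \eqref{tri-1}--\eqref{tri-3}. Assuming, as is standard from Lemma \ref{Jiang2009}, that conclusion (a) gives $\mathbb{E}(\gamma_{11}^2\gamma_{12}^2\gamma_{13}^2)=\frac{1}{n(n+2)(n+4)}$, the identities \eqref{tri-2} and \eqref{tri-3} each contain a single unknown, and substituting the first value into \eqref{tri-1} recovers the third:

```latex
\frac{n}{n(n+2)(n+4)}
-\frac{3n(n-1)}{(n-1)n(n+2)(n+4)}
+n(n-1)(n-2)\,\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{22}\gamma_{23}\gamma_{31}\gamma_{33})=0,
```

which gives $\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{22}\gamma_{23}\gamma_{31}\gamma_{33})=\frac{2}{(n-2)(n-1)n(n+2)(n+4)}$, in agreement with the display above.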
By swapping rows and columns and using the invariance, we get
\begin{eqnarray*}
& & \mathbb{E}(\gamma_{11}^2\gamma_{12}\gamma_{13}\gamma_{22}\gamma_{23})=
\mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22}\gamma_{23}^2);\\
& & \mathbb{E}(\gamma_{11}\gamma_{12}\gamma_{21}\gamma_{22}^3)=\mathbb{E}( \gamma_{11}^3\gamma_{12}\gamma_{22}\gamma_{21}).
\end{eqnarray*}
The proof is completed. \hfill$\blacksquare$
\medskip
We will derive the central limit theorem appearing in Lemma \ref{awesome_pen} next. Two preliminary calculations are needed.
\begin{lem}\label{doublegreat}
Let $\bold{y}=(\xi_1, \cdots, \xi_p)'\sim N_p(\bold{0}, \bold{I}_p)$. Let $\bold{a}=(\alpha_1, \cdots, \alpha_p)'\in \mathbb{R}^p$
and $\bold{b}=(\beta_1, \cdots, \beta_p)'\in \mathbb{R}^p$ with $\|\bold{a}\|=\|\bold{b}\|=1$. Then,
$$\mathbb{E}\big[(\bold{a}' \bold{y})^2(\bold{b}'\bold{y})^2\big]=2(\bold{a}'\bold{b})^2+1.$$
\end{lem}
\noindent\textbf{Proof}. Rewrite
$$\bold{a}'\bold{y}\bold{b}'\bold{y}=\bold{y}' \bold{a} \bold{b}' \bold{y}=\bold{y}'\bold{b}\bold{a}'\bold{y}=\frac12 \bold{y}'(\bold{a}\bold{b}'+\bold{b}\bold{a}')\bold{y}.$$ Define
$\bold{A}=\frac12(\bold{a}\bold{b}'+\bold{b}\bold{a}').$ Then, $\bold{a}'\bold{y}\bold{b}'\bold{y}=\bold{y}'
\bold{A}\bold{y}$. By (\ref{normal_fact}) and (\ref{variance2}),
\begin{eqnarray*}
\mathbb{E}(\bold{y}'\bold{A}\bold{y})=\,\mbox{tr}(\bold{A})\ \
\mbox{and}\ \ \mbox{Var}(\bold{y}'\bold{A}\bold{y})=2\, \mbox{tr} (\bold{A}^2).
\end{eqnarray*}
Note that ${\rm tr}(\bold{A})=\bold{a}' \bold{b}.$ Therefore,
\begin{eqnarray*}
\mathbb{E}\big[(\bold{a}' \bold{y})^2(\bold{b}'\bold{y})^2\big]
&= &2{\rm tr}(\bold{A}^2)+{\rm tr}^2(\bold{A})\\
&=&\frac12{\rm tr}\big(\bold{a}\bold{b}'\bold{a}\bold{b}'+\bold{b}\bold{a}'\bold{b}\bold{a}'+
\bold{a}\bold{b}'\bold{b}\bold{a}'+\bold{b}\bold{a}'\bold{a}\bold{b}'\big)+(\bold{a}'\bold{b})^2\\
&=& 2(\bold{a}'\bold{b})^2+1
\end{eqnarray*}
by the assumption $\|\bold{a}\|=\|\bold{b}\|=1$.
\hfill$\blacksquare$
\begin{lem}\label{soup_good} Let $\bold{u}=(\xi_1, \cdots, \xi_p)'$ and $\bold{v}=(\eta_1, \cdots, \eta_p)'$ be independent random vectors with distribution $N_p(\bold{0}, \bold{I}_p).$ Set $\bold{w}=(\bold{u}'\bold{v})^2-\|\bold{u}\|^2$. Then
\begin{eqnarray}
& & \mathbb{E}[\bold{w} | \bold{v}]
=\|\bold{v}\|^2-p; \label{Clinton}\\
& & \mathbb{E}[\bold{w}^2 | \bold{u}]
=2\|\bold{u}\|^4; \label{Trump}\\
& & \mathbb{E}[\bold{w}^2| \bold{v}]=3\|\bold{v}\|^4+(p^2+2p)-2(p+2)\|\bold{v}\|^2. \label{none}
\end{eqnarray}
\end{lem}
\noindent\textbf{Proof}. The assertion (\ref{Clinton}) follows directly from independence and the fact $\mathbb{E}[(\bold{u}'\bold{v})^2|\bold{v}]=\|\bold{v}\|^2$. Further, since $\bold{u}'\bold{v}\sim N(0, \|\bold{u}\|^2)$ conditionally on $\bold{u}$,
\begin{eqnarray*}
\mathbb{E}\big[\bold{w}^2 | \bold{u}\big]&=&
\mathbb{E}\big[(\bold{u}'\bold{v})^4-2\|\bold{u}\|^2 (\bold{u}'\bold{v})^2+\|\bold{u}\|^4 | \bold{u}\big]\\
& = & 3\|\bold{u}\|^4-2\|\bold{u}\|^4+\|\bold{u}\|^4=2 \|\bold{u}\|^4.
\end{eqnarray*}
We then obtain (\ref{Trump}). Finally, since $\|\bold{u}\|^2 \sim \chi^2(p)$,
we have
\begin{eqnarray*}
\mathbb{E}[\bold{w}^2| \bold{v}]&=&\mathbb{E}\big[\big((\bold{u}'\bold{v})^4+\|\bold{u}\|^4-2\|\bold{u}\|^2
(\bold{u}'\bold{v})^2\big)|\bold{v}\big]\\
&=& 3\|\bold{v}\|^4+(p^2+2p)-2\, \mathbb{E}\Big[\|\bold{u}\|^2
\big(\sum_{l=1}^p\xi_l\eta_l\big)^2\big| \bold{v}\Big].
\end{eqnarray*}
Expanding the last sum, we see from independence that
\begin{eqnarray*}
\mathbb{E}\Big[\|\bold{u}\|^2
\big(\sum_{l=1}^p\xi_l\eta_l\big)^2\big| \bold{v}\Big] &=& \sum_{l=1}^p\eta_l^2\mathbb{E}\big[\|\bold{u}\|^2\xi_1^2\big] + \sum_{1\leq k\ne l \leq p}\eta_k\eta_l\mathbb{E}\big[\|\bold{u}\|^2\xi_1\xi_2\big]\\
& = & \Big(\sum_{l=1}^p\eta_l^2\Big)\cdot \frac{1}{p}\Big(\mathbb{E}\sum_{l=1}^p\|\bold{u}\|^2\xi_l^2\Big)\\
& = & \|\bold{v}\|^2\cdot \frac{1}{p}\mathbb{E}\big(\|\bold{u}\|^4\big)=(p+2)\|\bold{v}\|^2
\end{eqnarray*}
by the fact $\mathbb{E}\big[\|\bold{u}\|^2\xi_1\xi_2\big]=0$ due to the symmetry of normal random variables. The above two identities imply (\ref{none}). \hfill$\blacksquare$
\medskip
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=1.5in,width=1.7in]{q30.eps}&\includegraphics[height=1.5in,width=1.7in]{q301.eps} & \includegraphics[height=1.5in,width=1.7in]{q40.eps} \\
\includegraphics[height=1.5in,width=1.7in]{q50.eps} & \includegraphics[height=1.5in,width=1.7in]{q501.eps} & \includegraphics[height=1.5in,width=1.7in]{q100.eps}
\end{tabular}
\caption{\sl The plots simulate the density of $\frac{1}{2pq}\sum_{1\leq i\ne j\leq q}\big[(\bold{g}_i'\bold{g}_j)^2-p\big]$ in Lemma ~\ref{awesome_pen} for $(p,q)= (165, 30), (900, 30), (1600, 40)$ in the top row, and $(p,q)= (355, 50), (2500, 50), (10000, 100)$ in the bottom row, respectively. They match the density curve of $N(0,1)$ better as $p$ and $q$ become larger.}
\label{fig_Xinmei}
\end{center}
\end{figure}
Now we prove the second main result in this section.
\medskip
\noindent\textbf{Proof of Lemma \ref{awesome_pen}}. Since $\frac{q}{p}\to 0$, we assume that, without loss of generality, $q< p$ for all $n\geq 3$. Let $$T_n=\frac{1}{pq}\sum_{1\leq i\ne j\leq q}\big[(\bold{g}_i'\bold{g}_j)^2-p\big],$$
which can be rewritten as
$$T_n=\frac{2}{pq}\sum_{j=2}^q \sum_{i=1}^{j-1}\big[(\bold{g}_i'\bold{g}_j)^2-p\big].$$
Define $C_0=0$ and
$$C_j=\sum_{i=1}^{j-1}\big[(\bold{g}_i'\bold{g}_j)^2-p\big]$$
for $2\leq j \leq q$.
It is easy to see
\begin{eqnarray}\label{putin_three}
B_j:=\mathbb{E}[C_j|\mathcal{F}_{j-1}]=\sum_{i=1}^{j-1}
\mathbb{E}[\big((\bold{g}_i'\bold{g}_j)^2-p\big)|\mathcal{F}_{j-1}]=\sum_{i=1}^{j-1}(\|\bold{g}_i\|^2-p)
\end{eqnarray}
where the sigma algebra $\mathcal{F}_k=\sigma\big(\bold{g}_1, \bold{g}_2, \cdots, \bold{g}_k\big)$ for all
$1\le k\le q.$
Thus
\begin{eqnarray}\label{siren_green}
X_j:=C_j-\mathbb{E}[C_j|\mathcal{F}_{j-1}]=
\sum_{i=1}^{j-1}\big[ (\bold{g}_i' \bold{g}_j)^2-\|\bold{g}_i\|^2\big], \quad 2\le j\le q,
\end{eqnarray}
form a martingale difference with respect to the $\sigma$-algebra $(\mathcal{F}_{j})_{2\le j\le q}.$
Therefore $T_n$ can further be written as
\begin{equation}\label{subofT}T_n=\frac{2}{pq}\sum_{j=2}^q X_j+\frac{2}{pq}\sum_{j=2}^q B_j.\end{equation}
By using (\ref{putin_three}) and changing sums, one gets
$$\frac{2}{pq} \sum_{j=2}^qB_j=\frac{2}{pq}\sum_{i=1}^{q-1}(\|\bold{g}_i\|^2-p)(q-i).$$
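The interchange of sums used here is elementary; for each fixed $i$ the index $j$ runs over the $q-i$ values with $i<j\leq q$:

```latex
\sum_{j=2}^{q}B_j
=\sum_{j=2}^{q}\sum_{i=1}^{j-1}\big(\|\bold{g}_i\|^2-p\big)
=\sum_{i=1}^{q-1}\big(\|\bold{g}_i\|^2-p\big)\,\#\{j:\ i<j\leq q\}
=\sum_{i=1}^{q-1}\big(\|\bold{g}_i\|^2-p\big)(q-i).
```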
Since $\|\bold{g}_i\|^2\sim \chi^2(p)$ for each $1\leq i \leq q$, we know ${\rm Var}(\|\bold{g}_i\|^2-p)={\rm Var}(\|\bold{g}_i\|^2)=2p$. Hence,
\begin{eqnarray*}
{\rm Var}\Big(\frac{1}{pq} \sum_{j=2}^qB_j\Big) & = & \frac{1}{p^2q^2}
\sum_{i=1}^{q-1}(q-i)^2 {\rm Var}(\|\bold{g}_i\|^2-p)\\
&\leq & \frac{2p}{p^2q^2}\cdot q^3=\frac{2q}{p}\to 0.
\end{eqnarray*}
This together with the fact $\mathbb{E}\sum_{j=2}^qB_j=0$ indicates that
$$\frac{1}{pq} \sum_{j=2}^q B_j\stackrel{p}{\to} 0.$$
By (\ref{subofT}), to prove the lemma, it suffices to prove
$$\frac{1}{pq} \sum_{j=2}^q X_j \to N(0, 1)$$ weakly as $n\to\infty.$
By the Lindeberg-Feller central limit theorem for martingale differences
(see, for example, p. 414 from \cite{Durrett}), it is enough to verify that
\begin{equation}\label{squareX} W_n:=\frac{1}{p^2q^2} \sum_{j=2}^q\mathbb{E}[X_j^2|\mathcal{F}_{j-1}]\stackrel{p}{\to} 1 \end{equation}
and
\begin{equation}\label{fourX} \frac{1}{p^4q^4} \sum_{j=2}^q\mathbb{E}\big(X_j^4\big)\to 0\end{equation}
as $n\to\infty.$ To prove \eqref{squareX}, it suffices to show
\begin{eqnarray}\label{Weight_loss}
\mathbb{E}(W_n)\rightarrow 1
\end{eqnarray}
and
\begin{eqnarray}\label{Weight_loss2}
{\rm Var}(W_n)\rightarrow 0
\end{eqnarray}
as $n\to\infty.$ In the rest of the proof, owing to their lengths, we will prove the above three assertions in the order (\ref{Weight_loss}), (\ref{fourX}) and (\ref{Weight_loss2}). The proof will then be complete.
{\it The proof of (\ref{Weight_loss})}. For simplicity, given $2\le j\le q,$ define
\begin{eqnarray*}\label{fire_bad}
\bold{w}_i:=(\bold{g}_i'\bold{g}_j)^2-\|\bold{g}_i\|^2
\end{eqnarray*}
for any $i=1, \cdots, j-1.$ Since $\bold{a}_i:=\frac{\bold{g}_i}{\|\bold{g}_i\|}$ and $\|\bold{g}_i\|$ are independent, we have the useful fact that
\begin{eqnarray}\label{night_sun}
\bold{w}_i =\|\bold{g}_i\|^2[(\bold{a}_i'\bold{g}_j)^2-1]=\|\bold{g}_i\|^2(\chi-1)
\end{eqnarray}
where $\chi:=(\bold{a}_i'\bold{g}_j)^2$ has the $\chi^2(1)$ distribution and is independent of $\|\bold{g}_i\|$. It is easy to see
$X_j=\sum_{i=1}^{j-1} \bold{w}_i$ from (\ref{siren_green}).
By Lemma \ref{soup_good}, for any $2\le j\le q,$ we see
\begin{eqnarray*}\label{x2expe}
\mathbb{E}\big(X^2_j\big)
&=&\sum_{i=1}^{j-1}\mathbb{E}(\bold{w}_i^2)+\sum_{1\le i\neq k\le j-1}\mathbb{E}(\bold{w}_i\bold{w}_k)\\
&=&\sum_{i=1}^{j-1}\mathbb{E}\big[\mathbb{E}(\bold{w}_i^2 |\bold{g}_{i})\big]
+\sum_{1\le i\neq k<j}\mathbb{E}\big(\mathbb{E}[\bold{w}_i \bold{w}_k | \bold{g}_{j}]\big)\\
&=&2\sum_{1\le i<j}\mathbb{E}\|\bold{g}_i\|^4+\sum_{1\le i\neq k<j}\mathbb{E}(\|\bold{g}_j\|^2-p)^2\\
&=&2(p^2+2p)(j-1)+2p(j-1)(j-2),
\end{eqnarray*}
where in the third equality we use the fact that $\bold{w}_i$ and $\bold{w}_k$ are conditionally independent given $\bold{g}_j$ for any $i \ne k$, and $\mathbb{E}[\bold{w}_i | \bold{g}_{j}]=\|\bold{g}_j\|^2-p.$
Thereby,
\begin{eqnarray*}
\mathbb{E}(W_n)=\frac{1}{p^2q^2}\sum_{j=2}^q
\mathbb{E}(X_j^2)=\frac{(p^2+2p)q(q-1)+O(pq^3)}{p^2q^2}\to 1
\end{eqnarray*}
as $n\to\infty$. This justifies (\ref{Weight_loss}). In particular,
\begin{eqnarray}\label{where_moon}
\sum_{j=2}^q \mathbb{E}(X_j^2)\leq Cp^2q^2
\end{eqnarray}
for all $n\geq 4$, which will be used later.
{\it The proof of (\ref{fourX})}. Fix $j$ with $2\le j\le q$.
Observe that $(\bold{w}_i)_{1\le i\le j-1}$ again form a martingale difference sequence with respect to
the sigma algebras $(\mathcal{F}_{i})_{1\le i\le j-1}.$ The Burkholder inequality
(see, for example, \cite{Shiryayev}) says that, for any
$s>1,$
$$ \mathbb{E}\Big(\sum_{i=1}^{j-1} \bold{w}_i\Big)^s\le C\cdot\mathbb{E}\Big(\sum_{i=1}^{j-1}\bold{w}_i^2\Big)^{s/2}, $$
where $C$ is a universal constant depending on $s$ only. By taking $s=4$, we see from $X_j=\sum_{i=1}^{j-1} \bold{w}_i$ that
\begin{eqnarray}
\mathbb{E}(X^4_j)
& \le & C
\sum_{i=1}^{j-1}\mathbb{E}(\bold{w}_i^4)+C\sum_{1\le i\neq k\le j-1}\mathbb{E}(\bold{w}_i^2\bold{w}_k^2)\nonumber\\
&=& C\sum_{i=1}^{j-1}\mathbb{E}[\mathbb{E}(\bold{w}_i^4|\bold{g}_i)]+C\sum_{1\le i\neq k\le j-1}
\mathbb{E} \big[\mathbb{E}(\bold{w}_i^2| \bold{g}_j)\cdot \mathbb{E}(\bold{w}_k^2| \bold{g}_j)\big]
\label{kill_cloud}
\end{eqnarray}
by using the conditional independence. From (\ref{night_sun}),
\begin{eqnarray*}
\mathbb{E}\big[\mathbb{E}(\bold{w}_i^4|\bold{g}_i)\big]
&\leq & \mathbb{E}(\|\bold{g}_i\|^8)\cdot\mathbb{E} \big[(\chi-1)^4\big]\\
& =& \mathbb{E}\big(\chi^2(p)^4\big)\cdot\mathbb{E} \big[(\chi-1)^4\big]\\
& \leq & Cp^4
\end{eqnarray*}
by Lemma \ref{party_love}. Now, from (\ref{none}), each summand of the second sum in (\ref{kill_cloud}) is bounded by
\begin{eqnarray*}
& & \mathbb{E} \big[3\|\bold{g}_j\|^4+(p^2+2p)-2(p+2)\|\bold{g}_j\|^2\big]^2\\
& \leq & 3\big[9\,\mathbb{E}\big(\chi^2(p)^4\big)+(p^2+2p)^2 + 4(p+2)^2\,\mathbb{E}\big(\chi^2(p)^2\big)\big]\\
& \leq & Cp^4
\end{eqnarray*}
by the elementary inequality $(a+b+c)^2\leq 3(a^2+b^2+c^2)$ and Lemma \ref{party_love} again. The above two estimates together with (\ref{kill_cloud}) imply
\begin{eqnarray*}
\mathbb{E}(X^4_j) \leq C\cdot\big[(j-1)p^4 + (j-1)(j-2)p^4]
\end{eqnarray*}
for $2\le j\le q$, where $C$ is free of $n$ and $j$. Consequently,
$$\frac{1}{p^4q^4}\sum_{j=2}^q \mathbb{E}\big(X_j^4\big)\le
\frac{C}{p^4q^4}\sum_{j=2}^qj^2p^4
\leq \frac{C p^4q^3}{p^4q^4}\to 0$$
as $n\to\infty.$ This concludes (\ref{fourX}).
{\it The proof of (\ref{Weight_loss2})}.
We need to prove that
\begin{eqnarray*}
\frac{1}{p^4q^4}{\rm Var}\bigg[ \sum_{j=2}^q\mathbb{E}\big(X_j^2|\mathcal{F}_{j-1}\big)\bigg]
\to 0
\end{eqnarray*}
as $n\to\infty.$
Let us first compute $\mathbb{E}[X_j^2|\mathcal{F}_{j-1}]$.
Set $\alpha_i=\frac{\bold{g}_i}{\|\bold{g}_i\|}$ for $1\le i\le q.$ Then,
\begin{eqnarray*}
\mathbb{E}\big(X_j^2|\mathcal{F}_{j-1}\big)
&=&\mathbb{E}
\Big[\big(\sum_{i=1}^{j-1} \bold{w}_i\big)^2|\mathcal{F}_{j-1}\Big]\\
&=&\sum_{i=1}^{j-1}\mathbb{E}\big(\bold{w}_i^2|\mathcal{F}_{j-1}\big)+
I(j\geq 3)\cdot\sum_{1\le i\neq k\le j-1}\mathbb{E}\big(\bold{w}_i\bold{w}_k|\mathcal{F}_{j-1}\big) \\
&=&\sum_{i=1}^{j-1}\|\bold{g}_i\|^4\cdot \mathbb{E}[(\chi-1)^2]+I(j\geq 3)\cdot
\sum_{1\le i\neq k\le j-1}\|\bold{g}_i\|^2\|\bold{g}_k\|^2\cdot\\
&&\quad\quad\quad\quad\quad\quad\quad
\mathbb{E}\big[(\alpha_i' \bold{g}_j)^2(\alpha_k' \bold{g}_j)^2-(\alpha_i' \bold{g}_j)^2-
(\alpha_k' \bold{g}_j)^2+1|\mathcal{F}_{j-1}\big],
\end{eqnarray*}
where $I(j\geq 3)$ is the indicator function of the set $\{j\geq 3\}.$
Given $\mathcal{F}_{j-1}$, evidently $\alpha_i' \bold{g}_j\sim N(0,1)$ for $1\leq i \leq j-1.$
Therefore, from Lemma \ref{doublegreat} we have
\begin{eqnarray*}
\mathbb{E}\big(X_j^2|\mathcal{F}_{j-1}\big)&=&2\sum_{i=1}^{j-1}\|\bold{g}_i\|^4+2I(j\geq 3)\cdot\sum_{1\le i\neq k\le j-1}
\|\bold{g}_i\|^2\|\bold{g}_k\|^2(\alpha_i'\alpha_k)^2\\
&=&2\sum_{i=1}^{j-1}\|\bold{g}_i\|^4+2I(j\geq 3)\cdot\sum_{1\le i\neq k\le j-1}(\bold{g}_i'\bold{g}_k)^2.
\end{eqnarray*}
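The first equality above follows by applying Lemma \ref{doublegreat} with $\bold{a}=\alpha_i$ and $\bold{b}=\alpha_k$: conditionally on $\mathcal{F}_{j-1}$ the unit vectors $\alpha_i$ and $\alpha_k$ are fixed while $\bold{g}_j\sim N_p(\bold{0}, \bold{I}_p)$, so

```latex
\mathbb{E}\big[(\alpha_i'\bold{g}_j)^2(\alpha_k'\bold{g}_j)^2
-(\alpha_i'\bold{g}_j)^2-(\alpha_k'\bold{g}_j)^2+1\,\big|\,\mathcal{F}_{j-1}\big]
=\big[2(\alpha_i'\alpha_k)^2+1\big]-1-1+1
=2(\alpha_i'\alpha_k)^2;
```

the second equality then uses $\|\bold{g}_i\|^2\|\bold{g}_k\|^2(\alpha_i'\alpha_k)^2=(\bold{g}_i'\bold{g}_k)^2$.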
By changing the order of sums, it is not difficult to verify that
$$ \sum_{j=2}^q\mathbb{E}\big(X_j^2|\mathcal{F}_{j-1}\big)
=2\sum_{i=1}^{q-1}\|\bold{g}_i\|^4(q-i)+
4\sum_{i=1}^{q-2}\sum_{k=i+1}^{q-1}(\bold{g}_i'\bold{g}_k)^2(q-k).$$
Therefore,
\begin{eqnarray} & & {\rm Var}\Big[\sum_{j=2}^q\mathbb{E}\big(X_j^2|\mathcal{F}_{j-1}\big)\Big]\nonumber\\
&\le & 8\cdot{\rm Var}\Big[\sum_{i=1}^{q-1}\|\bold{g}_i\|^4(q-i)\Big]+32\cdot
{\rm Var}\bigg[\sum_{i=1}^{q-2}\sum_{j=i+1}^{q-1}(\bold{g}_i'\bold{g}_j)^2(q-j) \bigg]\nonumber\\
&=& 8\cdot\sum_{i=1}^{q-1}(q-i)^2 {\rm Var}(\|\bold{g}_i\|^4)+32\cdot {\rm Var}
\bigg[\sum_{j=2}^{q-1}(q-j)\sum_{i=1}^{j-1}(\bold{g}_i'\bold{g}_j)^2\bigg].\label{I+II}
\end{eqnarray}
On the one hand, by Lemma \ref{party_love} we know \begin{equation}\label{I-I}
\sum_{i=1}^{q-1}(q-i)^2 {\rm Var}(\|\bold{g}_i\|^4)\leq q^3\cdot{\rm Var}\big[\chi^2(p)^2\big]
= 8pq^3(p+2)(p+3).
\end{equation}
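The closed form in \eqref{I-I} is plain chi-square arithmetic; using the moment values $\mathbb{E}[\chi^2(p)^2]=p(p+2)$ and $\mathbb{E}[\chi^2(p)^4]=p(p+2)(p+4)(p+6)$ (cf. Lemma \ref{party_love}),

```latex
{\rm Var}\big[\chi^2(p)^2\big]
=p(p+2)(p+4)(p+6)-p^2(p+2)^2
=p(p+2)\big[(p+4)(p+6)-p(p+2)\big]
=8p(p+2)(p+3).
```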
Moreover, for $2\le j\le q$ fixed, recall the notation
$$\bold{w}_i:=(\bold{g}_i'\bold{g}_j)^2-\|\bold{g}_i\|^2, \quad X_j=\sum_{i=1}^{j-1}\bold{w}_i.$$
Then $$\sum_{i=1}^{j-1} (\bold{g}_i'\bold{g}_j)^2=
\sum_{i=1}^{j-1}\big(\bold{w}_i+\|\bold{g}_i\|^2\big)=X_j+\sum_{i=1}^{j-1}\|\bold{g}_i\|^2,$$
which implies
\begin{eqnarray*}
\sum_{j=2}^{q-1}(q-j)\sum_{i=1}^{j-1}(\bold{g}_i'\bold{g}_j)^2
&=& \sum_{j=2}^{q-1}(q-j) X_j+\sum_{j=2}^{q-1}(q-j)\sum_{i=1}^{j-1}\|\bold{g}_i\|^2\\
&=& \sum_{j=2}^{q-1}(q-j) X_j+\sum_{i=1}^{q-2}\|\bold{g}_i\|^2\sum_{j=i+1}^{q-1}(q-j)\\
&=& \sum_{j=2}^{q-1}(q-j) X_j+\frac{1}{2}\sum_{i=1}^{q-2}\|\bold{g}_i\|^2(q-i-1)(q-i).
\end{eqnarray*}
Consequently we have
\begin{eqnarray}
& & {\rm Var}\bigg[\sum_{j=2}^{q-1}(q-j)\sum_{i=1}^{j-1}(\bold{g}_i'\bold{g}_j)^2\bigg]\nonumber\\
&\le & 2\cdot {\rm Var}\bigg[\sum_{j=2}^{q-1}(q-j) X_j\bigg]+\frac{1}{2}\cdot{\rm Var}
\bigg[\sum_{i=1}^{q-2}\|\bold{g}_i\|^2(q-i-1)(q-i)\bigg] \nonumber\\
&=&2\sum_{j=2}^{q-1} (q-j)^2{\rm Var}(X_j)+
\frac12\sum_{i=1}^{q-2}(q-i-1)^2(q-i)^2{\rm Var}\big(\|\bold{g}_1\|^2\big)\label{mouse_day}\\
&\le & 2 q^2\sum_{j=2}^{q}\mathbb{E}(X_j^2)+ q^5\,{\rm Var}\big(\|\bold{g}_1\|^2\big)
\label{California}\\
&=& C\big(p^2q^4+pq^5),\label{miracle_2}
\end{eqnarray}
where we use the fact that $(X_j)_{2\le j\le q}$ is a martingale difference sequence with respect
to $(\mathcal{F}_j)_{2\le j\le q}$ (so that the variances add) in (\ref{mouse_day}); the trivial bounds $(q-j)^2\leq q^2$
and $(q-i-1)^2(q-i)^2\leq q^4$ are applied in (\ref{California});
the last step is obtained by (\ref{where_moon}) and
the identity ${\rm Var}\big(\|\bold{g}_1\|^2\big)=2p.$
Plugging \eqref{I-I} and \eqref{miracle_2} into \eqref{I+II},
we have from the fact $q\leq p$ that
\begin{eqnarray*}
\frac{1}{p^4q^4}{\rm Var}\bigg( \sum_{j=2}^q\mathbb{E}[X_j^2|\mathcal{F}_{j-1}]\bigg)
\leq \frac{Cp^3q^3}{p^4q^4}=\frac{C}{pq}\to 0
\end{eqnarray*}
as $n\to\infty$. This finishes the verification of (\ref{Weight_loss2}).
The proof is completed.
\hfill$\blacksquare$
\medskip
Now we prove Lemma \ref{var-e}.
\noindent{\bf Proof of Lemma \ref{var-e}}. Write ${\bf X}=({\bf g}_1, {\bf g}_2, \cdots, {\bf g}_q)$, where ${\bf g}_1, \cdots, {\bf g}_q$ are i.i.d. with distribution $N_p(\bold{0}, {\bf I}_p).$ A repeatedly used fact is that $\|{\bf g}_i\|^2\sim \chi^2(p)$ for each $i$. Using this fact, independence, Lemma \ref{party_love} and (\ref{night_sun}), we have
$$\aligned &\mathbb{E}({\bf g}_1'{\bf g}_2)^2=p, \quad \mathbb{E}({\bf g}_1'{\bf g}_2)^4=3\mathbb{E}(\|{\bf g}_1\|^4)=3p(p+2);\\
& \mathbb{E}[\|{\bf g}_1\|^2\cdot ({\bf g}^{\prime}_1 {\bf g}_2)^2]=p(p+2);\\
& \mathbb{E}\big[({\bf g}_1'{\bf g}_2)^2({\bf g}_1'{\bf g}_3)^2\big]=\mathbb{E}(\|{\bf g}_1\|^4)=p(p+2);\\
&\mathbb{E}\big[\|{\bf g}_1\|^4({\bf g}_1'{\bf g}_2)^2\big]=\mathbb{E}(\|{\bf g}_1\|^6)=p(p+2)(p+4).\endaligned $$
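Each identity in the list above follows from the same conditioning pattern: given ${\bf g}_1$, the inner product ${\bf g}_1'{\bf g}_2\sim N(0, \|{\bf g}_1\|^2)$. For instance,

```latex
\mathbb{E}\big[\|{\bf g}_1\|^2({\bf g}_1'{\bf g}_2)^2\big]
=\mathbb{E}\big[\|{\bf g}_1\|^2\,\mathbb{E}\big(({\bf g}_1'{\bf g}_2)^2\,\big|\,{\bf g}_1\big)\big]
=\mathbb{E}\,\|{\bf g}_1\|^4=p(p+2),
```

and likewise $\mathbb{E}({\bf g}_1'{\bf g}_2)^4=\mathbb{E}\big[3\|{\bf g}_1\|^4\big]=3p(p+2)$.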
Easily, ${\rm tr}({\bf X}^{\prime}{\bf X})=\sum_{i=1}^q\|{\bf g}_i\|^2$ and
\begin{equation}\label{trace-one}{\rm tr}[({\bf X'X})^2]=\sum_{1\le i\neq j\le q}({\bf g}_i'{\bf g}_j)^2+\sum_{k=1}^q\|{\bf g}_k\|^4.\end{equation}
(i)
By using the set of formulas right before (\ref{trace-one}), we obtain
\begin{equation}\label{power2}\mathbb{E}{\rm tr}[({\bf X'X})^2]=q(q-1)p+qp(p+2)=pq(p+q+1).\end{equation}
From \eqref{trace-one} we see
\begin{equation}\label{trace-two}
{\rm tr}^2[({\bf X'X})^2]=\big(\sum_{k=1}^q\|{\bf g}_k\|^4\big)^2+\big[\sum_{1\le i\neq j\le q}({\bf g}_i'{\bf g}_j)^2\big]^2
+2\sum_{k=1}^q\|{\bf g}_k\|^4\sum_{1\le i\neq j\le q}({\bf g}_i'{\bf g}_j)^2.
\end{equation}
It is easy to check
\begin{eqnarray}\label{thin_of}
\Big(\sum_{k=1}^q\|{\bf g}_k\|^4\Big)^2=\sum_{k=1}^q\|{\bf g}_k\|^8+\sum_{1\le i\neq j\le q}\|{\bf g}_i\|^4\|{\bf g}_j\|^4
\end{eqnarray}
and
\begin{eqnarray}
\Big[\sum_{1\le i\neq j\le q}({\bf g}_i'{\bf g}_j)^2\Big]^2 &=&\sum_{1\le i\neq j\neq k\neq l\le q} ({\bf g}_i'{\bf g}_j)^2({\bf g}_k'{\bf g}_l)^2+2\sum_{1\le i\neq j\le q}({\bf g}_i'{\bf g}_j)^4\nonumber\\
&&+4\sum_{1\le i\neq j\neq k\le q}({\bf g}_i'{\bf g}_j)^2({\bf g}_i'{\bf g}_k)^2\label{head_foot}
\end{eqnarray}
and
\begin{eqnarray}\label{dudu}
2\sum_{k=1}^q\|{\bf g}_k\|^4\sum_{1\le i\neq j\le q}({\bf g}_i'{\bf g}_j)^2=2\sum_{1\le i\neq j\neq k\le q}\|{\bf g}_k\|^4({\bf g}_i'{\bf g}_j)^2+4\sum_{1\le i\neq j\le q}\|{\bf g}_i\|^4({\bf g}_i'{\bf g}_j)^2.
\end{eqnarray}
By the formulas right before (\ref{trace-one}), Lemma \ref{party_love}, (\ref{thin_of}), (\ref{head_foot}) and (\ref{dudu}), respectively, we have that
$$
\mathbb{E}\Big(\sum_{k=1}^q\|{\bf g}_k\|^4\Big)^2=qp(p+2)(p+4)(p+6)+q(q-1)p^2(p+2)^2$$
and
\begin{eqnarray*}
\mathbb{E}\Big[\sum_{1\le i\neq j\le q}({\bf g}_i'{\bf g}_j)^2\Big]^2&=&q(q-1)(q-2)(q-3)p^2+6p(p+2)q(q-1)\\
&&+4p(p+2)q(q-1)(q-2)
\end{eqnarray*}
and
\begin{eqnarray*}
2\mathbb{E}\Big[\sum_{k=1}^q\|{\bf g}_k\|^4\sum_{1\le i\neq j\le q}({\bf g}_i'{\bf g}_j)^2\Big]&=& 2q(q-1)(q-2)p^2(p+2)\\
&&+ 4q(q-1)p(p+2)(p+4).
\end{eqnarray*}
Putting all these three expressions back into \eqref{trace-two}, one gets
\begin{eqnarray*}
\mathbb{E}{\rm tr}^2[({\bf X'X})^2]&=&qp(p+2)(p+4)(p+6)+q(q-1)p^2(p+2)^2\\
&&+q(q-1)(q-2)(q-3)p^2+6p(p+2)q(q-1)\\
&&+4p(p+2)q(q-1)(q-2)\\
&&+2q(q-1)(q-2)p^2(p+2)+4q(q-1)p(p+2)(p+4).
\end{eqnarray*}
This together with \eqref{power2} implies
$$
{\rm Var}\big({\rm tr}[({\bf X'X})^2]\big)
=4p^2q^2+8pq(p+q)^2+20pq(p+q+1).
$$
(ii) Recall (\ref{trace-one}). By independence,
\begin{eqnarray*}
{\rm Cov}\big({\rm tr}({\bf X}^{\prime}{\bf X}), {\rm tr}[({\bf X}^{\prime}{\bf X})^2]\big)&=&{\rm Cov}\Big(\sum_{i=1}^q\|{\bf g}_i\|^2, \sum_{i=1}^q\|{\bf g}_i\|^4+\sum_{1\le i\neq j\le q}({\bf g}^{\prime}_i {\bf g}_j)^2 \Big)\\
&=&\sum_{i=1}^q{\rm Cov}\big(\|{\bf g}_i\|^2, \|{\bf g}_i\|^4\big)+2\sum_{1\le i\neq j\le q}{\rm Cov}(\|{\bf g}_i\|^2, ({\bf g}^{\prime}_i {\bf g}_j)^2)\\
&=&\sum_{i=1}^q\big[\mathbb{E}(\|{\bf g}_i\|^6)-\mathbb{E}(\|{\bf g}_i\|^4)\cdot\mathbb{E}(\|{\bf g}_i\|^2)\big]\\
&&+2\sum_{1\le i\neq j\le q}\big(\mathbb{E}[\|{\bf g}_i\|^2\cdot ({\bf g}^{\prime}_i {\bf g}_j)^2]-\mathbb{E}(\|{\bf g}_i\|^2)\cdot\mathbb{E}({\bf g}^{\prime}_i {\bf g}_j)^2\big).
\end{eqnarray*}
From the formulas right before (\ref{trace-one}) again, we have
$$\aligned {\rm Cov}\big({\rm tr}[{\bf X}^{\prime}{\bf X}], {\rm tr}[({\bf X}^{\prime}{\bf X})^2]\big)&=\sum_{i=1}^q[p(p+2)(p+4)-p^2(p+2)]+2\sum_{1\le i\neq j\le q}[p(p+2)-p^2]\\
&=4pq(p+2)+4pq(q-1)\\
&=4pq(p+q+1).\endaligned $$
The proof is completed now.
\hfill$\blacksquare$
\section{This part is for referees only}\label{only_for_referee}
\noindent\textbf{Proof of Lemma \ref{Low_beauty}}.
Suppose the conclusion is not true, that is, $\liminf_{n\to\infty}f_n(p_n, q_n)=0$ for some sequence $\{(p_n, q_n)\}_{n=1}^{\infty}$ with $1\leq p_n, q_n\leq n$ for each $n\geq 1$ and $\lim_{n\to\infty}(p_nq_n^2)/n=\alpha\in (0, \infty)$.
Then there exists a subsequence $\{n_k; \, k\geq 1\}$ satisfying $1\leq p_{n_k}, q_{n_k}\leq n_k$ for all $k\geq 1$ and $\lim_{k\to\infty}(p_{n_k}q_{n_k}^2)/n_k=\alpha\in (0, \infty)$ such that
\begin{eqnarray}\label{shawn_23}
\lim_{k\to\infty}f_{n_k}(p_{n_k}, q_{n_k})=0.
\end{eqnarray}
There are two possibilities: $\liminf_{k\to\infty}q_{n_k}<\infty$ and $\liminf_{k\to\infty}q_{n_k}=\infty$. Let us discuss the two cases separately.
{\it (a)}. Assume $\liminf_{k\to\infty}q_{n_k}<\infty$. Then there exists a further subsequence $\{n_{k_j}\}_{j=1}^{\infty}$ such that $q_{n_{k_j}}\equiv m\geq 1$. For convenience of notation, write $\bar{n}_j=n_{k_j}$ for all $j\geq 1.$ The condition $\lim_{n\to\infty}\frac{p_nq_n^2}{n}=\alpha$ implies that $\lim_{j\to\infty}\frac{p_{\bar{n}_j}}{\bar{n}_j}=\frac{\alpha}{m^2}\in (0, 1].$ By (\ref{shawn_23}) and the monotonicity,
\begin{eqnarray}\label{privacy_it_51}
\lim_{j\to\infty}f_{\bar{n}_j}(p_{\bar{n}_j}, 1)=\lim_{j\to\infty}f_{\bar{n}_j}(p_{\bar{n}_j}, q_{\bar{n}_j})=0.
\end{eqnarray}
Define $\tilde{p}_{\bar{n}_j}=[p_{\bar{n}_j}/2]+1$ for all $j\geq 1.$ Then, $\lim_{j\to\infty}\frac{\tilde{p}_{\bar{n}_j}}{\bar{n}_j}=c:=\frac{\alpha}{2m^2}\in (0, \frac{1}{2}].$ Construct a new sequence such that
\begin{eqnarray*}\label{green_beer}
\tilde{p}_r=
\begin{cases}
\tilde{p}_{\bar{n}_j}, & \text{if $r=\bar{n}_j$ for some $j\geq 1$};\\
[cr] \vee 1, & \text{if not}
\end{cases}
\end{eqnarray*}
and $\tilde{q}_r=1$ for $r=1,2, \cdots.$ Obviously, $\tilde{p}_{\bar{n}_j}\leq p_{\bar{n}_j}$ for each $j\geq 1.$ It is easy to check $1\leq \tilde{p}_r, \tilde{q}_r\leq r$ for all $r\geq 1$ and $\lim_{r\to\infty}\tilde{p}_r/r=c\in (0,1/2]$. So $\{(\tilde{p}_r, \tilde{q}_r);\, r\geq 1\}$ satisfies condition (i), and hence $\liminf_{r\to\infty}f_r(\tilde{p}_r, \tilde{q}_r)>0$ by (\ref{red_red}). This contradicts (\ref{privacy_it_51}) since $f_r(\tilde{p}_r, \tilde{q}_r)=f_{\bar{n}_j}(\tilde{p}_{\bar{n}_j}, 1)\leq f_{\bar{n}_j}(p_{\bar{n}_j}, 1)$ if $r=\bar{n}_j$ for some $j\geq 1$ by monotonicity.
{\it (b).} Assume $\liminf_{k\to\infty}q_{n_k}=\infty$. Then $\lim_{k\to\infty}q_{n_k}=\infty$.
Define
\begin{eqnarray*}
\tilde{p}_r=
\begin{cases}
p_{n_k}, & \text{if $r=n_k$ for some $k\geq 1$};\\
[r^{1/3}], & \text{if not}
\end{cases}
\end{eqnarray*}
and
\begin{eqnarray*}
\tilde{q}_r=
\begin{cases}
q_{n_k}, & \text{if $r=n_k$ for some $k\geq 1$};\\
\big([\sqrt{\alpha} r^{1/3}] + 1\big)\wedge r, & \text{if not}.
\end{cases}
\end{eqnarray*}
Trivially, $1\leq \tilde{p}_r, \tilde{q}_r\leq r$ for all $r\geq 1$, $\lim_{r\to\infty}\tilde{q}_r=\infty$ and $\lim_{r\to\infty}\tilde{p}_r\tilde{q}_r^2/r=\alpha$.
By (ii),
\begin{eqnarray*}
\liminf_{k\to\infty}f_{n_k}(p_{n_k}, q_{n_k})\geq \liminf_{r\to\infty}f_r(\tilde{p}_r, \tilde{q}_r)>0
\end{eqnarray*}
since $\tilde{p}_r=p_{n_k}$ and $\tilde{q}_r=q_{n_k}$ if $r=n_k$.
This contradicts (\ref{shawn_23}).
In summary, both the case $\liminf_{k\to\infty}q_{n_k}<\infty$ and the case $\liminf_{k\to\infty}q_{n_k}=\infty$ lead to a contradiction. Therefore, we obtain the desired conclusion.
\hfill$\blacksquare$\\
\noindent\textbf{Acknowledgement}. We thank Professor Xinmei Shen for very helpful communications. In particular we thank her for producing Figure \ref{fig_Xinmei} for us.
\section{Introduction}
Recent gamma ray and cosmic ray observations give increasing evidence
that the sources of gamma ray bursts (GRBs) and of cosmic rays with
energy $E>10^{19}{\rm eV}$ are cosmological (see \cite{cos}
for GRBs; [2--5] for cosmic rays).
The sources of both phenomena, however, remain unknown. In particular,
most of the cosmic ray
sources discussed so far have difficulties in accelerating
cosmic rays up to the highest observed energies \cite{Hillas}.
Furthermore, the arrival directions of the few cosmic rays detected above
$10^{20}{\rm eV}$ are inconsistent with the position of any astrophysical
object that is likely to produce high energy particles \cite{obj}, since
the distance traveled by such particles must be smaller
than $100{\rm Mpc}$ \cite{dist}.
Although the source of GRBs is unknown, their observational
characteristics impose strong constraints on the physical conditions
in the $\gamma$-ray emitting region \cite{scenario},
which imply that protons may be accelerated by Fermi's mechanism
in this
region to energies $10^{20} - 10^{21}{\rm eV}$ \cite{Wa,Vietri}.
The observed energy spectrum of cosmic rays above $10^{19}
{\rm eV}$ (UHECRs) is consistent with a cosmological distribution of
sources of protons with a power-law generation spectrum typical of
Fermi acceleration \cite{Wb}. Furthermore,
the average rate (over volume and time) at which energy
is emitted as $\gamma$-rays by GRBs and in UHECRs
in the cosmological scenario is, remarkably, comparable
\cite{Wa,Wb}. These facts suggest that GRBs and UHECRs
have a common origin.
We describe the GRB model for UHECRs in Sec. \ref{subsec:fermi},
and discuss the current
observational evidence which support it
in Sec. \ref{subsec:spec}.
Several predictions of the
model are presented in Sec. \ref{sec:pred}. Sec. \ref{sec:conc} contains
a discussion.
\section{The GRB model}
\label{sec:model}
\subsection{Fermi acceleration in dissipative wind models of GRB's}
\label{subsec:fermi}
Whatever the ultimate source of GRBs is, observations strongly suggest the
following general scenario for the production of the bursts \cite{scenario}.
The rapid rise time, $\sim{\rm ms}$, observed in some bursts implies
that the sources are compact, with a linear scale $r_0\sim10^7{\rm cm}$. The
high luminosity required for cosmological bursts,
$L\sim10^{51}{\rm erg}{\rm s}^{-1}$,
then results in an initially optically thick (to pair creation) plasma
of photons, electrons and positrons, which expands and accelerates to
relativistic velocities. In fact, the hardness of the observed
photon spectra, which extends to $\sim100{\rm MeV}$, implies
that the $\gamma$-ray emitting region must be moving relativistically,
with a Lorentz factor $\gamma$ of order a few hundreds.
If the observed radiation is due
to photons escaping the expanding ``wind'' as it becomes optically thin, two
problems arise. First, the photon spectrum is quasi-thermal,
in contrast with observations. Second, the plasma is expected to be ``loaded''
with baryons which may be injected with the radiation or present in the
atmosphere surrounding the source. A small baryonic load, $\geq10^{-8}
{M_\odot}$, increases the optical depth (due to Thomson scattering) so that
most of the radiation energy is converted to
kinetic energy of the relativistically expanding baryons before the plasma
becomes optically thin.
To overcome both problems, it was suggested that the
observed burst is produced once the kinetic energy of the ultra-relativistic
ejecta is dissipated, due to collision of the relativistic baryons
with the inter-stellar medium or due to internal collisions within the ejecta,
at large radius $r=r_d>10^{12}{\rm cm}$ beyond
the Thomson photosphere, and then radiated as $\gamma$-rays \cite{coll}.
Since $\gamma\gg1$, substantial dissipation of
kinetic energy at $r=r_d$
implies that the random motions in the wind rest frame are (at least mildly)
relativistic. The relativistic random motions
are likely to give rise to a turbulent build up of magnetic
fields, and therefore to Fermi acceleration
of charged particles. We derive below the constraints that should be satisfied
by the wind parameters in order to allow acceleration of protons at the
dissipation region to $\sim10^{20}{\rm eV}$.
The most restrictive requirement, which rules out the possibility of
accelerating
particles to energies $\sim10^{20}{\rm eV}$ in most astrophysical objects,
is that
the particle Larmor radius $R_L$ should be smaller than the system size
\cite{Hillas}. In our scenario we must apply a more stringent requirement.
Due to the wind expansion the internal energy is decreasing and therefore
available for proton
acceleration (as well as for $\gamma$-ray production) only
over a comoving time $t_d\sim r_d/\gamma c$. The typical Fermi
acceleration time
is $t_a\simeq R_L/c$ \cite{Hillas},
leading to the requirement $R_L<r_d/\gamma$.
This condition sets a lower limit for the required
comoving magnetic field strength \cite{Wa},
\begin{equation}
\left({B\over B_{e.p.}}\right)^2>0.15\gamma_{300}^2
E_{20}^2L_{51}^{-1},\label{larmor}
\end{equation}
where $E=10^{20}E_{20}{\rm eV}$, $\gamma=300\gamma_{300}$,
$L=10^{51}L_{51}{\rm erg}{\rm\ s}^{-1}$
is the wind luminosity, and $B_{e.p.}$ is the equipartition field,
i.e. a field with
comoving energy density similar to that associated with the random
energy of the baryons.
The accelerated proton energy is also limited by energy loss
due to synchrotron radiation.
The condition that the synchrotron loss time
should be smaller than the acceleration time is
\begin{equation}
B<3\times10^5\gamma_{300}^{2}E_{20}^{-2}{\rm G}.\label{sync}
\end{equation}
Since the equipartition field is inversely proportional to the radius $r$,
this condition may be satisfied simultaneously with (\ref{larmor}) provided
that the dissipation radius is large enough, i.e.
\begin{equation}
r_d>10^{12}\gamma_{300}^{-2}E_{20}^3{\rm cm}.\label{dis}
\end{equation}
The high energy protons lose energy also in interaction
with the wind photons (mainly through pion production). It can be
shown, however, that this energy loss is less important than the synchrotron
energy loss \cite{Wa}.
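As a rough numerical consistency check, conditions (\ref{larmor}) and (\ref{sync}) can be combined to recover the bound (\ref{dis}). The sketch below assumes an explicit equipartition normalization not written out in the text, $B_{e.p.}^2/8\pi \simeq L/4\pi r^2\gamma^2 c$, i.e. $B_{e.p.}(r)=\sqrt{2L/c}/(\gamma r)$; with this assumption the two bounds are compatible only above a minimal dissipation radius:

```python
import math

# Fiducial wind parameters (gamma_300 = E_20 = L_51 = 1)
L = 1e51         # wind luminosity [erg/s]
gamma = 300.0    # bulk Lorentz factor
c = 3e10         # speed of light [cm/s]

# Assumed normalization (an illustration, not given explicitly in the text):
# B_ep^2 / 8pi ~ comoving energy density ~ L / (4 pi r^2 gamma^2 c),
# so B_ep(r) = sqrt(2 L / c) / (gamma r).
B_max = 3e5      # [G], synchrotron limit of Eq. (2) for these parameters

# Eq. (1) requires B > sqrt(0.15) * B_ep(r); compatibility with Eq. (2)
# then requires a dissipation radius above
r_min = math.sqrt(0.15) * math.sqrt(2.0 * L / c) / (gamma * B_max)
print(f"r_min ~ {r_min:.1e} cm")   # ~1e12 cm, as in Eq. (3)
```

The result reproduces the $r_d>10^{12}{\rm cm}$ scale of Eq. (\ref{dis}) to within order unity.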
The conditions (\ref{larmor}-\ref{dis}) imply that
a dissipative ultra-relativistic wind,
with luminosity and bulk Lorentz factor implied by GRB observations,
satisfies the constraints necessary to allow the acceleration of protons
to energies $\sim10^{20}{\rm eV}$ by second order Fermi acceleration,
provided that turbulent build up
of magnetic fields during dissipation gives rise to fields which are
close to equipartition. It should be noted that equipartition field is
also required in most of the dissipative relativistic wind models for GRBs.
Gamma-rays are produced in these models by synchrotron and synchro-self Compton
radiation of relativistic electrons produced by the dissipation shocks.
Equipartition field is required in this case to allow efficient radiation
of the electrons \cite{scenario}.
Although the details of the mechanism
of turbulent build up of an equipartition field are not fully understood, it
seems to operate in a variety of astrophysical systems.
We note that a magnetic field of
this strength may exist in the plasma prior to the onset of internal collisions
if a substantial part of the wind luminosity is initially, i.e. at $r\sim r_0$,
provided by magnetic field energy density,
and if the field becomes
predominantly transverse. The pre-existing field may suppress small scale
turbulent motions.
In this case shocks coherent over a scale $R\sim r_d/\gamma$
would exist and protons would be accelerated by first rather than second
order Fermi mechanism. The constraints (\ref{larmor}-\ref{dis}) are
valid in this case too, therefore leaving the above conclusions unchanged.
\subsection{UHECR spectrum and flux}
\label{subsec:spec}
In the GRB model for UHECR production described above,
the high energy cosmic rays are
protons accelerated by Fermi's mechanism
in sources that are distributed throughout
the universe.
In Fig.\ \ref{fig1} we compare the UHECR spectrum,
reported by the Fly's Eye and the AGASA experiments \cite{Fly,AGASA2},
with that expected from
a homogeneous cosmological distribution of sources, each generating
a power law differential spectrum of high energy protons
$dN/dE\propto E^{-2.2}$, as typically
expected from Fermi acceleration (e.g. \cite{Hillas}).
(For this calculation we have used a flat universe
with zero cosmological constant and $H_0=75\,{\rm km\ s^{-1}\,Mpc^{-1}}$; the
spectrum is insensitive to the cosmological parameters and to source
evolution, since most of the cosmic rays arrive from distances
$<500{\rm Mpc}$). The AGASA flux at $3-10\times10^{18}{\rm eV}$
is $\sim1.7$ times higher than that reported by the Fly's Eye, corresponding
to a systematic $\sim20\%$ larger estimate of event energies in the AGASA
experiment compared to the Fly's Eye experiment (see also \cite{Fly,AGASA}).
We have therefore multiplied in Fig.\ \ref{fig1} the Fly's Eye
energy by $1.1$ and the AGASA energy by $0.9$. Bird {\it et al.}
\cite{Fly} find that the Fly's Eye flux in the energy range $4\times10^{17}
-4\times10^{19}{\rm eV}$ can be fitted by a sum of two power laws: A
steeper Galactic component with $J\propto E^{-2.5}$ dominating at lower
energy, and a shallower extra-Galactic component with $J\propto E^{-1.6}$
dominating at higher energy.
The Bird {\it et al.} fit to the extra-Galactic component is also
shown in Fig.\ \ref{fig1}.
\begin{figure}[t]
\centerline{\psfig{figure=fig1n.ps,width=2.7in}}
\caption{
The UHECR flux expected in a cosmological model, compared to the Fly's Eye
and AGASA data. Integers indicate the number of events observed.
$1\sigma$ energy error bars are shown for the highest energy events.
The dashed line denotes the fit by
Bird {\it et al.} \cite{Fly} for the extra-galactic flux.}
\label{fig1}
\end{figure}
The data are consistent with the cosmological model for $E>2\times10^{19}
{\rm eV}$. Furthermore, the flux predicted by the model
at lower energy is consistent with the Bird
{\it et al.} fit to the extra-Galactic component.
(The flux deduced from the highest energy
event in the Fly's Eye data is significantly
higher than that predicted from the cosmological model. However,
the statistical significance of the apparent discrepancy
is not high: For the Fly's Eye exposure, the model predicts an average of
$\sim1.3$ events above $10^{20}{\rm eV}$, and the probability that the
first event observed at this energy range is above $2\times10^{20}{\rm eV}$
is $\sim15\%$.)
The deficit in the number of events detected above
$5\times10^{19}{\rm eV}$, compared to a
power-law extrapolation of the flux at lower energy,
is consistent with that expected due
to a cosmological ``black-body cutoff''. However, with current data the
``cutoff'' is detected with only $2\sigma$ significance.
The present rate at which energy should be produced as $10^{19}$--$10^{20}
{\rm eV}$ protons by the cosmological cosmic ray sources in order to
produce the observed flux is $(4\pm2)\times10^{44}{\rm erg\ Mpc}^{-3}
{\rm yr}^{-1}$. This rate is comparable to that produced in $\gamma$-rays
by cosmological GRBs: The rate of cosmological GRB events is $\nu_\gamma\simeq
3\times10^{-8}{\rm Mpc}^{-3}{\rm yr}^{-1}$ \cite{rate}, each producing
$\approx3\times10^{51}{\rm erg}$, corresponding to a $\gamma$-ray energy
production rate of $\sim10^{44}{\rm erg\ Mpc}^{-3}
{\rm yr}^{-1}$.
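The comparison amounts to simple arithmetic with the two numbers quoted above:

```python
# Gamma-ray energy production rate of cosmological GRBs (numbers quoted above)
nu_grb = 3e-8       # burst rate [Mpc^-3 yr^-1]
E_gamma = 3e51      # gamma-ray energy per burst [erg]
rate_gamma = nu_grb * E_gamma      # ~1e44 erg Mpc^-3 yr^-1

rate_uhecr = 4e44   # required 10^19-10^20 eV proton production rate
print(rate_gamma, rate_uhecr / rate_gamma)
```

The two rates agree to within the factor of a few uncertainty of either estimate.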
The above analysis implies that the GRB model of UHECR production would
produce UHECR flux consistent with that observed, provided the efficiency
with which the wind kinetic energy is converted to $\gamma$-ray and UHECR
energy is similar. There is, however, one additional point which requires
consideration. The energy of the most
energetic cosmic ray detected by the Fly's Eye experiment is in excess of
$2\times10^{20}{\rm eV}$, and that of the most
energetic AGASA event is above $10^{20}{\rm eV}$. On a
cosmological scale, the distance traveled by such energetic particles is
small: $<100{\rm Mpc}$ ($50{\rm Mpc}$) for the AGASA (Fly's Eye) event
(e.g., \cite{dist}). Thus, the detection of these events over a $\sim5
{\rm yr}$ period can be reconciled with the rate of nearby GRBs, $\sim1$
per $50\, {\rm yr}$ in the field of view of the CR experiments out to $100
{\rm Mpc}$ in a standard cosmological scenario \cite{rate}, only if
there is a large dispersion, $\geq50{\rm yr}$, in the arrival time of protons
produced in a single burst (This implies that if a direct
correlation between
high energy CR events and GRBs, as recently suggested in
\cite{MU}, is observed
on a $\sim10{\rm yr}$ time scale, it would be strong evidence {\it against} a
cosmological GRB hypothesis).
The required dispersion
is likely to occur due to the combined effects of deflection
by random magnetic fields and energy dispersion of the particles.
Consider a proton of energy $E$ propagating through a magnetic field of
strength $B$ and correlation length
$\lambda$. As it travels a distance $\lambda$, the proton is typically
deflected by an angle $\alpha\sim\lambda/
R_L$, where $R_L=E/eB$ is the Larmor radius. The
typical deflection angle for propagation over a distance $d$ is
$\theta_s\sim(d/\lambda)^{1/2}\lambda/R_L$. This deflection results in a time
delay, compared to propagation along a straight line, of order
$\tau(E,d)\approx\theta_s^2d/c\approx(eBd/E)^2\lambda/c$.
The random energy loss UHECRs suffer as they propagate, owing to the
production of pions, implies that
at any distance from the observer there is some finite spread
in the energies of UHECRs that are observed with a given fixed energy.
For protons with energies
$>10^{20}{\rm eV}$ the fractional RMS energy spread is of order unity
over propagation distances in the range $10-100{\rm Mpc}$ (e.g. \cite{dist}).
Since the time delay is sensitive to the particle energy, this implies that
the spread in arrival time of UHECRs with given observed energy is comparable
to the average time delay at that energy $\tau(E,d)$. The
field required to produce $\tau>100{\rm yr}$ is
\begin{equation}
B\sqrt{\lambda}>10^{-11}E_{20}d_{100}^{-1}{\rm G\ Mpc}^{1/2},
\label{Bmin}
\end{equation}
where $d=100d_{100}{\rm Mpc}$.
The required field is consistent with
the current upper limit for the inter-galactic magnetic
field, $B\lambda^{1/2}\le10^{-9}{\rm G\ Mpc}^{1/2}$ \cite{field}.
A time broadening over $\tau\gg100{\rm yr}$
is therefore possible.
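The bound (\ref{Bmin}) can be checked directly from $\tau(E,d)\approx(eBd/E)^2\lambda/c$. For the marginal field $B\sqrt{\lambda}=10^{-11}{\rm G\ Mpc}^{1/2}$ (taking $\lambda=1{\rm Mpc}$ for definiteness), a $10^{20}{\rm eV}$ proton propagating $100{\rm Mpc}$ is delayed by a few hundred years:

```python
import math

e = 4.803e-10    # proton charge [esu]
c = 2.998e10     # speed of light [cm/s]
Mpc = 3.086e24   # [cm]
yr = 3.156e7     # [s]

E = 1e20 * 1.602e-12   # proton energy [erg]
d = 100 * Mpc          # propagation distance
lam = 1 * Mpc          # assumed correlation length
B = 1e-11              # [G], so that B * lam^(1/2) = 1e-11 G Mpc^(1/2)

tau = (e * B * d / E) ** 2 * lam / c   # tau ~ theta_s^2 d / c
print(f"tau ~ {tau / yr:.0f} yr")
```

The delay comes out at a few hundred years, comfortably above the required $100{\rm yr}$.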
It should be noted that a GRB producing $3\times10^{51}{\rm erg}$ in
UHECRs at $50{\rm Mpc}$ distance, would produce a total fluence at Earth
of $\sim2$ cosmic rays above $10^{19}{\rm eV}$ per ${\rm km}^2$.
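This fluence can be reproduced with a short calculation, assuming for illustration that the $3\times10^{51}{\rm erg}$ are emitted with a $dN/dE\propto E^{-2}$ spectrum between $10^{19}$ and $10^{21}{\rm eV}$ (the upper limit is an assumption; the result depends on it only logarithmically):

```python
import math

erg_per_eV = 1.602e-12
Mpc = 3.086e24
E_tot = 3e51 / erg_per_eV     # total UHECR energy per burst [eV]
E_min, E_max = 1e19, 1e21     # assumed limits of the E^-2 spectrum [eV]

# For dN/dE = A E^-2:  E_tot = A ln(E_max/E_min),
# and N(>E_min) = A (1/E_min - 1/E_max)
A = E_tot / math.log(E_max / E_min)
N = A * (1.0 / E_min - 1.0 / E_max)

d = 50 * Mpc
fluence_km2 = N / (4.0 * math.pi * d**2) * 1e10   # 1 km^2 = 1e10 cm^2
print(f"~{fluence_km2:.1f} CRs above 1e19 eV per km^2")
```

The result is of order the quoted $\sim2$ cosmic rays per ${\rm km}^2$.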
In the presence of a magnetic field induced time delay,
the typical distance $d_m(E)$ to the brightest source observed
over an energy range $\Delta E$ around $E$, with $\Delta E/E\sim 1$,
is the radius of a sphere within which the average time between bursts is
equal to the characteristic time delay $\tau[E,d_m(E)]$;
i.e. $4\pi d_m^3\nu_\gamma\tau(E,d_m)/3=1$.
Thus, the brightest source distance is
\begin{equation}
d_m(E)\simeq 30\left({B\sqrt{\lambda}\over10^{-11}{\rm G\ Mpc}^{1/2}}
\right)^{-2/5}E_{19}^{2/5}\,{\rm Mpc},
\label{Dm}
\end{equation}
and its flux is $f\sim0.1E_{19}^{-3/5}(B\lambda^{1/2}/10^{-11}
{\rm G\ Mpc}^{1/2})^{-2/5}$ per ${\rm km}^{2}{\rm yr}$ \cite{Jordi2}.
Here $E=10^{19}E_{\rm 19}{\rm eV}$.
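Eq. (\ref{Dm}) follows from solving $4\pi d_m^3\nu_\gamma\tau(E,d_m)/3=1$ with $\tau(E,d)=(eBd/E)^2\lambda/c$. The sketch below does this for fiducial values ($B\sqrt{\lambda}=10^{-11}{\rm G\ Mpc}^{1/2}$ with $\lambda=1{\rm Mpc}$ assumed, $E=10^{19}{\rm eV}$, $\nu_\gamma=3\times10^{-8}{\rm Mpc^{-3}\,yr^{-1}}$) and recovers the $\sim30{\rm Mpc}$ scale of Eq. (\ref{Dm}) to within the order-unity factors dropped in $\tau$:

```python
import math

e, c = 4.803e-10, 2.998e10     # cgs
Mpc, yr = 3.086e24, 3.156e7
nu = 3e-8 / Mpc**3 / yr        # burst rate [cm^-3 s^-1]

E = 1e19 * 1.602e-12           # observed energy [erg]
lam = 1 * Mpc                  # assumed correlation length
B = 1e-11                      # [G], so B * sqrt(lam) = 1e-11 G Mpc^(1/2)

# tau(E,d) = (e B d / E)^2 lam / c = K d^2, so (4pi/3) nu K d^5 = 1
K = (e * B / E) ** 2 * lam / c
d_m = (3.0 / (4.0 * math.pi * nu * K)) ** 0.2
print(f"d_m ~ {d_m / Mpc:.0f} Mpc")
```

The numerical solution gives $d_m\sim20$--$30{\rm Mpc}$, consistent with Eq. (\ref{Dm}).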
\section{Predictions}
\label{sec:pred}
\subsection{The Number and Spectra of Bright Sources}
\label{subsec:bright}
The initial proton energy, necessary to have an observed energy $E$,
increases with source distance due to propagation energy losses.
This initial energy grows slowly with distance while pair production
dominates the losses, and rapidly once it exceeds the threshold for pion
production; the rapid increase effectively
results in a cutoff distance, $d_c(E)$, beyond which sources do not contribute
to the flux above $E$.
a given number density of sources there is a critical energy $E_c$, above which
only one source (on average) contributes to the flux.
For bursting sources, $E_c$ depends on the product of the burst rate $\nu$
and the time delay. In the GRB model, the burst rate is given by the GRB
rate $\nu=\nu_\gamma$, which is determined from the GRB flux distribution.
The time delay depends on the unknown properties of
the intergalactic magnetic field, $\tau\propto B^2\lambda(d/E)^2$.
However, the rapid
decrease of $d_c(E)$ with energy near $10^{20}{\rm eV}$ implies that
$E_c$ is not very sensitive to the unknown value of $B^2\lambda$.
For the range allowed
for the GRB model, $10^{-11}{\rm G\ Mpc}^{1/2}\le B\lambda^{1/2}\le10^{-9}
{\rm G\ Mpc}^{1/2}$ (the lower
limit determined by (\ref{Bmin}), and the upper limit by Faraday rotation
observations \cite{field}),
the allowed range of $E_c$ is
$10^{20}{\rm eV}\le E_c\le3\times10^{20}{\rm eV}$.
\begin{figure}[t]
\centerline{\psfig{figure=difflux.ps,width=2.7in}}
\caption{Results of a Monte-Carlo realization of the bursting sources
model. Thick solid line: overall spectrum in the realization.
Thin solid line: average spectrum; this curve also gives $d_c(E)$.
Dotted lines: spectra of the brightest sources at different energies.
}
\label{figNc}
\end{figure}
Fig. \ref{figNc} presents the flux obtained in one realization of
a Monte-Carlo simulation, described in ref. \cite{Jordi1}, of the total
number of UHECRs received from GRBs at some fixed time.
For each
realization the positions (distances from Earth) and
times at which cosmological GRBs occurred were randomly drawn,
assuming an average rate $\nu_\gamma=
2.3\times10^{-8}{\rm Mpc}^{-3}{\rm yr}^{-1}$, an intrinsic
generation spectrum $n_p(E)\,{\rm d}E \propto E^{-2}\,{\rm d}E$, and $E_c
=1.4\times10^{20}{\rm eV}$.
Most of the realizations gave an overall spectrum similar to that obtained
in the realization of Fig. \ref{figNc} when the brightest source of this
realization (dominating at $10^{20}{\rm eV}$) is not included.
At $E < E_c$,
the number of sources contributing to the flux is very large,
and the overall UHECR flux received at any
given time is near the average (the average flux is that obtained when
the UHECR emissivity is spatially uniform and time independent).
At $E > E_c$, the flux will generally be much lower than the average,
because there will be no burst within a distance $d_c(E)$ having taken
place sufficiently recently. There is, however, a significant probability
to observe one source with a flux higher than the average.
A source similar to the brightest one in Fig. \ref{figNc}
appears $\sim5\%$ of the time.
At any fixed time a given burst is observed in UHECRs only over a narrow
range of energy, because if
a burst is currently observed at some energy $E$ then UHECRs of much lower
energy from this burst have not yet arrived, while higher energy UHECRs
reached us mostly in the past. As mentioned above, for energies above the
pion production threshold,
$E\sim5\times10^{19}{\rm eV}$, the dispersion in arrival times of UHECRs
with fixed observed energy is comparable to the average delay at that
energy. This implies that
the spectral width $\Delta E$ of the source at a given time is of order
the average observed energy, $\Delta E\sim E$.
Thus, bursting UHECR sources should have narrowly peaked energy
spectra,
and the brightest sources should be different at different energies.
For steady state sources, on the other hand, the brightest
source at high energies should also be the brightest one at low
energies, its fractional contribution to the overall flux decreasing to
low energy only as $d_c(E)^{-1}$.
\subsection{Spectra of Sources at $E<4\times10^{19}{\rm eV}$}
\label{subsec:Blambda}
The detection of UHECRs
above $10^{20}{\rm eV}$ implies that the brightest sources
must lie at distances smaller than $100{\rm Mpc}$.
UHECRs with $E\le4\times10^{19}{\rm eV}$
from such bright sources will suffer energy loss only by pair production,
because at $E < 5\times 10^{19}$ eV
the mean-free-path for pion production interaction
(in which the fractional energy loss is $\sim10\%$) is larger than
$1{\rm Gpc}$. Furthermore, the energy loss due to pair production
over $100{\rm Mpc}$ propagation is only $\sim5\%$.
In the case where the typical displacement of the UHECRs
due to deflections by inter-galactic magnetic fields is
much smaller than the correlation length, $\lambda \gg d\theta_s(d,E)
\simeq d(d/\lambda)^{1/2}\lambda/R_L$,
all the UHECRs that arrive at the
observer are essentially deflected by the same magnetic field structures,
and the absence of random energy loss during propagation implies that
all rays with a fixed observed energy would reach the observer with exactly
the same direction and time delay. At a fixed time, therefore, the source would
appear mono-energetic and point-like. In reality,
energy loss due to pair production
results in a finite but small spectral and angular width,
$\Delta E/E\sim\delta\theta/\theta_s\le1\%$ \cite{Jordi2}.
In the case where the typical displacement of the UHECRs is
much larger than the correlation length, $\lambda \ll d\theta_s(d,E)$,
the deflection of different UHECRs arriving at the observer
are essentially independent. Even in the absence of any energy loss there
are many paths from the source to the observer for UHECRs of fixed energy $E$
that are emitted from the source at an angle
$\theta\le\theta_s$ relative to the source-observer line of sight. Along
each of the paths, UHECRs are deflected by independent magnetic field
structures. Thus, the source angular size would be of order $\theta_s$
and the spread in arrival times would be comparable to the characteristic
delay $\tau$, leading to $\Delta E/E\sim1$ even when there are no random
energy losses. The observed spectral shape of a nearby ($d<100{\rm Mpc}$)
bursting source of UHECRs at
$E<4\times10^{19}{\rm eV}$
was derived for the case $\lambda \ll d\theta_s(d,E)$ in
\cite{Jordi2}, and is given by
\begin{equation}
{dN\over dE}\propto \sum\limits_{n=1}^{\infty} (-1)^{n+1}\, n^2\,
\exp\left[ -{2n^2\pi^2 E^2\over E_0^2(t,d)} \right]\quad,
\label{flux}
\end{equation}
where $E_0(t,d)=de(2{B^2\lambda}/3ct)^{1/2}$.
For this spectrum, the ratio of the
RMS UHECR energy spread to the average energy is $30\%$.
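The quoted $30\%$ ratio can be verified by direct numerical integration of the spectrum (\ref{flux}), truncating the alternating sum at large $n$ (the sum converges rapidly for any $E>0$):

```python
import numpy as np

# Spectrum of Eq. (4) as a function of x = E/E_0, sum truncated at n = 1500
x = np.linspace(0.005, 3.0, 6000)
f = np.zeros_like(x)
for n in range(1, 1501):
    f += (-1.0) ** (n + 1) * n**2 * np.exp(-2.0 * np.pi**2 * n**2 * x**2)

# Moments by simple Riemann sums on the uniform grid
dx = x[1] - x[0]
norm = f.sum() * dx
mean = (x * f).sum() * dx / norm
rms = np.sqrt((x**2 * f).sum() * dx / norm - mean**2)
print(f"RMS spread / mean energy = {rms / mean:.2f}")   # ~0.30
```

The integration confirms the $\sim30\%$ fractional width, independently of $E_0(t,d)$ since the shape is a function of $E/E_0$ alone.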
\begin{figure}[t]
\centerline{\psfig{figure=BL.ps,width=2.7in}}
\caption{The line $\theta_s d=\lambda$ for a source at
$30{\rm Mpc}$ distance
observed at energy $E\simeq10^{19}{\rm eV}$ (dot-dash line), shown with
the Faraday rotation upper limit $B\lambda^{1/2}
\le10^{-9}{\rm G\ Mpc}^{1/2}$ (solid line), and with the lower limit
$B\lambda^{1/2}\ge10^{-11}{\rm G\ Mpc}^{1/2}$ required in the GRB model.
}
\label{figBL}
\end{figure}
Fig. \ref{figBL} shows the line $\theta_s d=\lambda$ in the $B-\lambda$ plane,
for a source at a distance $d=30{\rm Mpc}$
observed at energy $E\simeq10^{19}{\rm eV}$.
Since the $\theta_s d=\lambda$ line divides the
allowed region in the plane at $\lambda\sim1{\rm Mpc}$,
measuring the spectral width of bright sources would make it possible to determine
if the field correlation length is much larger, much smaller, or comparable
to $1{\rm Mpc}$.
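The location of the dividing line is easy to check numerically: setting $\theta_s d=\lambda$ with $\theta_s=(d\lambda)^{1/2}/R_L$ and $R_L=E/eB$ gives $B=E\lambda^{1/2}/(ed^{3/2})$, which for the fiducial $d=30{\rm Mpc}$, $E=10^{19}{\rm eV}$, evaluated here at $\lambda=1{\rm Mpc}$, indeed falls inside the allowed band:

```python
import math

e, Mpc = 4.803e-10, 3.086e24   # cgs
E = 1e19 * 1.602e-12           # observed energy [erg]
d = 30 * Mpc                   # source distance
lam = 1 * Mpc                  # evaluate the line at lambda = 1 Mpc

# theta_s d = lambda  =>  B = E sqrt(lambda) / (e d^(3/2))
B = E * math.sqrt(lam) / (e * d ** 1.5)
Bsql = B * math.sqrt(lam / Mpc)    # B sqrt(lambda) in G Mpc^(1/2)
print(f"B sqrt(lambda) ~ {Bsql:.1e} G Mpc^1/2")
```

The value lies between the lower limit $10^{-11}{\rm G\ Mpc}^{1/2}$ and the Faraday rotation bound $10^{-9}{\rm G\ Mpc}^{1/2}$, so the line does divide the allowed region near $\lambda\sim1{\rm Mpc}$.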
\subsection{Correlation with Large Scale Structure (LSS)}
\label{subsec:LSS}
If the UHECR sources are indeed extra-Galactic, and
if they are correlated with luminous matter, then the
inhomogeneity of the large scale galaxy distribution on
scales $\le100{\rm Mpc}$ should be imprinted on the UHECR arrival directions.
The expected anisotropy signature
and its dependence on the relative bias
between the UHECR sources and the galaxy
distribution and on the (unknown) intrinsic UHECR source density
have been examined in \cite{Fisher}. The galaxy
distribution was derived from the {\it IRAS} 1.2 Jy redshift
survey, which gives an acceptable description of the
LSS out to $\sim100{\rm Mpc}$ (see \cite{Fisher} for a detailed
analysis).
\begin{figure*}[t]
\vglue-0.8in
\centerline{\psfig{figure=contour_60.ps,width=5in}}
\vglue-0.85in
\caption{Aitoff projection (Galactic coordinates) of the fractional
deviation (from the all sky average) of the mean UHECR intensity.
The heavy curve denotes the zero contour. Solid (dashed) contours
denote positive (negative) fractional fluctuations at intervals
$[-0.5,\ -0.25,\ 0,\ 0.25,\ 0.50,\ 1.0,\ 1.5]$. The super-galactic
plane is denoted by the heavy solid curve roughly perpendicular
to the Galactic plane. The dotted curve denotes the
Fly's Eye coverage of declination $>-10^\circ$. The shaded regions show
the high and low density regions used in the $X(E)$ statistic.
}
\label{figLSS}
\end{figure*}
Figure \ref{figLSS}
presents a map of the angular dependence of the mean (over different
realizations of source distribution) UHECR intensity,
for $E\ge6\times10^{19}{\rm eV}$ and a model where UHECR sources
trace {\it IRAS} galaxies with no bias.
The map clearly reflects the inhomogeneity of the large-scale
galaxy distribution: the overdense UHECR regions lie in the directions of
the ``Great Attractor'' [composed of the Hydra-Centaurus
($l=300$--$360^\circ$, $b=0$--$+45^\circ$)
and Pavo-Indus
($l=320$--$360^\circ$, $b=-45$--$0^\circ$) super-clusters]
and the Perseus-Pisces super-cluster
($l=120$--$160^\circ$, $b=-30$--$+30^\circ$).
In order to determine the
exposure required to discriminate between isotropic and LSS correlated
UHECR source distribution,
the distribution of a statistic similar in spirit to $\chi^2$ was considered,
$X(E)=\sum_l {[n_l(E)- n_{l,I}(E)]^2/ n_{l,I}(E)}$,
where $n_l$ is the number of events detected in angular bin $l$ and
$n_{l,I}$ is the average number expected for isotropic distribution (For
the calculation of $X$, $24^\circ\times24^\circ$ bins were used; see also
Fig. \ref{figLSS}). From the $X(E)$ distributions, generated
by Monte-Carlo simulations of the UHECR source distributions, it was found
that the exposure required for a northern hemisphere detector to discriminate
between isotropic UHECR source distribution and an unbiased distribution that
traces the LSS is approximately $10$ times the current Fly's Eye exposure
($0.1$ the expected Auger exposure).
If the UHECR source distribution is strongly biased, in a way similar to that
of radio galaxies, the required exposure
is $\sim3$ times smaller. Furthermore, with $10$ times the current
Fly's Eye exposure, it would be possible to discriminate between biased
and non-biased source distribution.
The anisotropy signal is not sensitive to the
currently unknown number density of sources.
Stanev {\it et al.} \cite{Stanev} have recently reported that
the arrival directions of $E>4\times10^{19}{\rm eV}$ UHECR events detected
by the Haverah Park experiment show a concentration in the
direction of the super-galactic plane. In agreement with
Stanev {\it et al.}, it is found in \cite{Fisher}
that the probability to obtain
the Haverah Park results assuming an isotropic source distribution is very
low. However, the results of \cite{Fisher} show that this probability is not
significantly higher for models where the source distribution traces
the LSS; thus, the concentration of the Haverah Park events towards the
super-galactic plane cannot be explained by the known LSS.
It is important to note that for the biased
model the probability to obtain the Haverah Park
results is smaller than for the unbiased one. This reflects the fact that
the super-clusters, while concentrated towards the super-galactic plane,
have offsets above and below the super-galactic plane which cause the inferred
UHECR distribution to be less flattened than seen in the Haverah Park data.
\section{Conclusions}
\label{sec:conc}
The GRB model for UHECR production has several predictions, which
would allow it to be tested with future experiments.
In this model, the average number of sources
contributing to the flux decreases with energy much more rapidly than in the
case where the UHECR sources are steady \cite{Jordi1}.
A critical energy
exists, $10^{20}{\rm eV}\le E_c<3\times10^{20}{\rm eV}$,
above which a few sources produce most of the flux, and the
observed spectra of these sources is narrow, $\Delta E/E\sim1$:
the bright sources
at high energy should be absent in UHECRs of much
lower energy, since particles take longer to arrive the lower their
energy. In contrast, a model of steady sources predicts that the
brightest sources at high energies should also be the brightest ones at
low energies.
At the highest energies, where most of the cosmic rays should come only from
a few sources, bursting sources should
be identifiable from the coincident directions of only a small
number of events. Many more cosmic rays need to be detected at lower energies,
where many sources contribute to the flux, in order to identify sources.
Recently, the AGASA experiment reported the presence
of 3 pairs of UHECRs with angular separations (within each pair)
$\le2.5^\circ$, roughly consistent with the measurement error,
among a total of 36 UHECRs with $E\ge4\times10^{19}{\rm eV}$ \cite{AGASA2}.
The two highest energy AGASA events were in these pairs.
Given the total solid angle observed by the experiment, $\sim2\pi{\rm sr}$,
the probability to have found 3 pairs by chance is $\sim3\%$; and, given that
three pairs were found, the probability that the two highest energy events are
among the three pairs by chance is 2.4\%. Therefore, this observation favors
the bursting source model, although more data are needed to confirm it.
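The quoted chance probability can be reproduced with a simple Monte-Carlo that idealizes the exposure as uniform over $2\pi{\rm sr}$ (the real, declination-dependent acceptance and the measurement errors are ignored in this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ev = 36                      # events above 4e19 eV
cos_sep = np.cos(np.deg2rad(2.5))
n_trials = 5000
hits = 0
for _ in range(n_trials):
    # directions drawn uniformly over a hemisphere (2 pi sr)
    z = rng.uniform(0.0, 1.0, n_ev)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_ev)
    s = np.sqrt(1.0 - z**2)
    v = np.stack([s * np.cos(phi), s * np.sin(phi), z], axis=1)
    cosang = v @ v.T
    # count distinct pairs separated by less than 2.5 degrees
    pairs = (np.triu(cosang, k=1) > cos_sep).sum()
    hits += pairs >= 3
print(f"P(>=3 pairs) ~ {hits / n_trials:.3f}")
```

The simulation gives a chance probability of a few percent, consistent with the $\sim3\%$ quoted above.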
Above $E_c$, there is a significant probability to observe one
source with a flux considerably higher than average. If such a source is
present, its narrow spectrum may produce a ``gap'' in the overall
spectrum, as demonstrated in Fig. \ref{figNc}.
It has recently been argued \cite{TD}
that the observation of such an energy gap would
imply that the sources of $>10^{20}{\rm eV}$ UHECRs are different from
the sources at lower energy, hinting that these are
produced by the decay of a new type of massive
particle. We see here that this is not the case when bursting sources
are allowed, owing to the time variability. If such an energy gap is
present, our model predicts that most of the UHECRs above the gap
should normally come from one source.
If our model is correct, then the Fly's Eye event above
$2\times10^{20}{\rm eV}$ suggests that we live at one of the times
when a bright source is present at high energies. However,
given the present scarcity of UHECRs, no solid
conclusions can be drawn. With the projected Auger experiment
\cite{huge}, the number of detected UHECRs would increase by
a factor $\sim 100$. If $E_c$ is $2\times10^{20}{\rm eV}$,
then a few bright sources above $10^{20}{\rm eV}$ should be identified.
For the GRB model, the expected number of events
to be detected by the $5000{\rm km}^2$ Auger detectors from
individual bright sources at $E\sim10^{19}{\rm eV}$ is of order $100$
(cf. eq. (\ref{Dm}); see also \cite{Jordi2}).
The spectral width of these sources depends on the correlation length
$\lambda$ of the inter-galactic magnetic field: Very narrow spectrum,
$\Delta E/E\le1\%$, is expected for $\lambda>1{\rm Mpc}$, and a wider
spectrum, $\Delta E/E\sim1$, is expected for $\lambda\ll1{\rm Mpc}$
(see Fig. \ref{figBL}). With energy resolution of
$\sim10\%$, the Auger detectors would easily allow one
to determine the spectral width of the sources, and therefore to put
interesting constraints on the unknown structure of the magnetic field.
If the distribution of UHECR sources traces the large scale
structure (LSS) of luminous matter,
large exposure detectors should clearly reveal
anisotropy in the arrival direction distribution of UHECRs above
$4\times10^{19}{\rm eV}$. With $10$ times the current Fly's Eye exposure
($0.1$ times the expected Auger exposure), it would be possible to determine
whether the sources are distributed isotropically, or correlate with
known LSS. Furthermore, it would be possible to determine whether or not
the source
distribution is highly biased compared to {\it IRAS} galaxies (as radio
galaxies are). Thus, the anisotropy signal would provide constraints on the
source population.
Finally, we would like to note another possible signature of the GRB model,
which was not discussed above.
The energy lost by the UHECRs as they propagate and interact with the
microwave background is transformed by cascading into secondary GeV-TeV
photons. A significant fraction of these photons can arrive with
delays much smaller than the UHECR delay if much of inter-galactic space is
occupied by large-scale magnetic ``voids'', regions of size
$\ge5{\rm Mpc}$ and field weaker than $10^{-15}{\rm G}$.
Such voids might be expected, for example, in models where a weak primordial
field is amplified in shocked, turbulent regions of the intergalactic medium
during the formation of large-scale structure.
For a field strength $\sim4\times10^{-11}{\rm G}$ in the high field regions,
the value required to account for observed galactic fields if the field were
frozen in the protogalactic plasma, the delay of protons produced by a burst
at a
distance of $100{\rm Mpc}$ is $\sim100{\rm yr}$, and the fluence of secondary
photons above $10{\rm GeV}$ on hour--day time scales is
$I(>E)\sim10^{-6}E_{\rm TeV}^{-1}{\rm cm}^{-2}$ \cite{Paolo}. This fluence is
close to the detection threshold of current high-energy $\gamma$-ray
experiments.
\subsection*{Acknowledgments}
I would like to thank my collaborators in the work on which this article
is based, J. Miralda-Escud\'e, P. Coppi, T. Piran and K. Fisher, and Prof.
M. Nagano for the invitation to participate in the Tokyo International
Symposium on Extremely High Energy Cosmic Rays. This research
was partially supported by a W. M. Keck Foundation grant
and NSF grant PHY 95-13835.
\section{Introduction}
\setcounter{equation}{0}
We consider a spatially homogeneous gas modeled by the Boltzmann equation:
the density $f_t(v)\geq 0$ of particles with velocity $v\in {\mathbb{R}}^3$ at time $t\geq 0$ solves
\begin{align}\label{be}
\partial_t f_t(v) = \int_{\rd} {\rm d} v^* \int_{{\mathbb{S}^2}} {\rm d} \sigma B(v-v^*,\sigma)
\big[f_t(v')f_t(v'^*) -f_t(v)f_t(v^*)\big],
\end{align}
where
\begin{align}\label{vprime}
v'=v'(v,v^*,\sigma)=\frac{v+v^*}{2} + \frac{|v-v^*|}{2}\sigma \quad \hbox{and}\quad
v'^*=v'^*(v,v^*,\sigma)=\frac{v+v^*}{2} -\frac{|v-v^*|}{2}\sigma.
\end{align}
The cross section $B$ is a nonnegative function given by physics.
We refer to Cercignani \cite{c} and Villani \cite{v:h} for very complete books on the subject.
We are concerned here with hard potentials with angular cutoff: the cross section satisfies
\begin{align}\label{ass}
\left\{\begin{array}{l}
B(v-v^*,\sigma) = |v-v^*|^\gamma b(\langle \frac{v-v^*}{|v-v^*|},\sigma\rangle)
\quad \hbox{for some $\gamma\in[0,1]$}\\ \\
\hbox{and some bounded measurable $b:[-1,1]\mapsto [0,\infty)$.}
\end{array}\right.
\end{align}
The important case where $\gamma=1$ and $b$ is constant corresponds to a gas of hard spheres.
If $\gamma=0$, the cross section is velocity independent and one talks about Maxwellian molecules
with cutoff.
\vskip.13cm
As is classical, we assume without loss of generality that the initial mass satisfies $\int_{\rd} f_0(v) {\rm d} v=1$,
and we denote by $e_0=\int_{\rd} |v|^2 f_0(v){\rm d} v>0$ the initial kinetic energy.
It is then well-known, see Mischler-Wennberg \cite{MW}, that \eqref{be} has a unique weak solution
such that for all $t\geq 0$, $f_t$ is a probability density on ${\rr^3}$ with energy $\int_{\rd} |v|^2 f_t(v){\rm d} v=e_0$.
Some precise statements are recalled in the next section.
\vskip.13cm
In the whole paper, we denote, for $E$ a topological space, by ${\mathcal{P}}(E)$ (resp. ${\mathcal{M}}(E)$)
the set of probability measures (resp. nonnegative measures)
on $E$ endowed with its Borel $\sigma$-field ${\mathcal{B}}(E)$. For $v,v^*\in{\rr^3}$ and $\sigma \in {\mathbb{S}^2}$, we
put
\begin{equation}\label{beta}
\beta_{v,v^*}(\sigma)=b(\langle {\textstyle{\frac{v-v^*}{|v-v^*|}}},\sigma\rangle) \quad \hbox{and we observe that}
\quad \kappa= \int_{\sS} \beta_{v,v^*}(\sigma){\rm d}\sigma
\end{equation}
does not depend on $v,v^*$ and is given by $\kappa=2\pi \int_0^\pi b(\cos\theta)\sin\theta {\rm d}\theta$.
If $v=v^*$, then we have $v'=v'^*=v$, see \eqref{vprime},
so that the definition of $\beta_{v,v}(\sigma)$ does not matter; we can e.g. set
$\beta_{v,v}(\sigma)=|{\mathbb{S}^2}|^{-1}\kappa$.
\vskip.13cm
In the rest of this introduction, we informally recall how \eqref{be} can be solved,
in the case of Maxwellian molecules,
by using the Wild sum, we quickly explain its interpretation by McKean,
and we write down a closely related recursive simulation algorithm. We also recall that Wild's sum
can be used for theoretical and numerical analysis of Maxwellian molecules.
Then we briefly recall how the Wild sum and the algorithm can be easily extended to
the case of any {\it bounded} cross section, by introducing fictitious jumps. Finally,
we quickly explain our strategy to deal with hard potentials with angular cutoff.
\subsection{Wild's sum}
Let us first mention that some introductions to Wild's sum and its probabilistic interpretation by McKean
can be found in the book of Villani \cite[Section 4.1]{v:h} and in Carlen-Carvalho-Gabetta \cite{ccg,ccg2}.
Wild \cite{W} noted that for Maxwellian molecules, i.e. when $\gamma=0$, so that the cross section
$B(v-v^*,\sigma)=\beta_{v,v^*}(\sigma)$ does not depend on the relative velocity,
\eqref{be} rewrites
$$
\partial_t f_t(v)= \kappa Q(f_t,f_t)- \kappa f_t(v)
$$
where, for $f,g$ two probability densities on ${\rr^3}$,
$Q(f,g)(v)=\kappa^{-1}\int_{{\mathbb{S}^2}} f(v')g(v'^*)\beta_{v,v^*}(\sigma){\rm d} \sigma$.
\vskip.13cm
It holds that $Q(f,g)$ is also a probability density on ${\rr^3}$, which can be interpreted as
the law of $V'=(V+V^* + |V-V^*|\sigma)/2$, where $V$ and $V^*$ are two independent
${\rr^3}$-valued random variables with densities $f$ and $g$ and where $\sigma$ is, conditionally on
$(V,V^*)$, a $\kappa^{-1}\beta_{V,V^*}(\sigma)d\sigma$-distributed ${\mathbb{S}^2}$-valued random variable.
Wild \cite{W} proved that given $f_0$, the solution $f_t$ to \eqref{be} is given by
\begin{equation}\label{w}
f_t=e^{-\kappa t} \sum_{n\geq 1} (1-e^{-\kappa t})^{n-1} Q_n(f_0),
\end{equation}
where $Q_n(f_0)$ is defined recursively by $Q_1(f_0)=f_0$ and, for $n\geq 1$, by
$$
Q_{n+1}(f_0)=\frac 1 n \sum_{k=1}^n Q(Q_k(f_0),Q_{n+1-k}(f_0)).
$$
McKean \cite{MK,MK2} provided an interpretation of the Wild sum in terms of binary trees, see
also Villani \cite{v:h} and Carlen-Carvalho-Gabetta \cite{ccg}.
Let ${\mathcal{T}}$ be the set of all discrete finite rooted ordered binary trees.
By {\it ordered}, we mean that
each node of $\Upsilon \in {\mathcal{T}}$ with two children has a {\it left} child and a {\it right} child. We denote by
$\ell(\Upsilon)$ the number of leaves of $\Upsilon \in {\mathcal{T}}$. If $\circ$ is the trivial tree
(the one with only one node: the root), we set $Q_\circ(f_0)=f_0$. If now $\Upsilon \in {\mathcal{T}}\setminus \{\circ\}$,
we put $Q_\Upsilon(f_0)=Q(Q_{\Upsilon_\ell}(f_0), Q_{\Upsilon_r}(f_0))$, where
$\Upsilon_\ell$ (resp. $\Upsilon_r$) is the subtree
of $\Upsilon$ consisting of the left (resp. right) child of the root with its whole progeny.
Then \eqref{w} can be rewritten as
\begin{equation}\label{wwss}
f_t=e^{-\kappa t} \sum_{\Upsilon \in {\mathcal{T}}} (1-e^{-\kappa t})^{\ell(\Upsilon)-1} Q_\Upsilon(f_0).
\end{equation}
In words, \eqref{wwss} can be interpreted as follows. For each $\Upsilon \in {\mathcal{T}}$, the term
$e^{-\kappa t} (1-e^{-\kappa t})^{\ell(\Upsilon)-1}$ is the probability that
a typical particle has $\Upsilon$ as (ordered) collision tree, while $Q_\Upsilon(f_0)$ is the density
of its velocity knowing that it has $\Upsilon$ as (ordered) collision tree.
\vskip.13cm
Finally, let us mention a natural algorithmic interpretation of \eqref{be} closely related to \eqref{wwss}.
The {\it dynamical} probabilistic interpretation of Maxwellian molecules, initiated by Tanaka \cite{t},
can be roughly summarized as follows.
Consider a typical particle in the gas. Initially, its velocity $V_0$ is $f_0$-distributed. Then,
at {\it rate} $\kappa$, that is, after an Exp$(\kappa)$-distributed random time $\tau$, it collides with another
particle: its velocity $V_0$ is replaced by $(V_0+V^*_\tau + |V_0-V^*_\tau|\sigma)/2$, where $V^*_\tau$
is the velocity of an independent particle {\it undergoing the same process} (stopped at time $\tau$) and
$\sigma$ is a $\kappa^{-1}\beta_{V_0,V^*_\tau}(\sigma)d\sigma$-distributed ${\mathbb{S}^2}$-valued random variable.
Then, at {\it rate} $\kappa$, it collides again, etc. This produces a stochastic process $(V_t)_{t\geq 0}$ such
that for all $t\geq 0$, $V_t$ is $f_t$-distributed.
\vskip.13cm
Consider now the following recursive algorithm.
\vskip.13cm
{\tt Function velocity$(t)$:
.. Simulate a $f_0$-distributed random variable $v$, set $s=0$.
.. While $s<t$ do
.. .. simulate an exponential random variable $\tau$ with parameter $\kappa$,
.. .. set $s=s+\tau$,
.. .. if $s<t$, do
.. .. .. set $v^*=$velocity$(s)$,
.. .. .. simulate a $\kappa^{-1}\beta_{v,v^*}(\sigma)d\sigma$-distributed ${\mathbb{S}^2}$-valued random variable $\sigma$,
.. .. .. set $v=(v+v^* +|v-v^*|\sigma)/2 $,
.. .. end if,
.. end while.
.. Return velocity$(t)=v$.
}
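As a concrete illustration, here is a minimal Python sketch of this recursive function for Maxwellian molecules with constant $b$, so that $\kappa^{-1}\beta_{v,v^*}(\sigma){\rm d}\sigma$ is the uniform law on ${\mathbb{S}^2}$; the standard Gaussian initial law and the value $\kappa=1$ are illustrative assumptions, not choices made in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
KAPPA = 1.0  # total angular mass; b constant means sigma is uniform on S^2

def sample_f0():
    """Illustrative initial law f_0: standard Gaussian on R^3."""
    return rng.normal(size=3)

def uniform_sphere():
    s = rng.normal(size=3)
    return s / np.linalg.norm(s)

def velocity(t):
    """Perfect sampler of f_t, mirroring the recursive function above."""
    v, s = sample_f0(), 0.0
    while True:
        s += rng.exponential(1.0 / KAPPA)   # next collision time, rate kappa
        if s >= t:
            return v
        v_star = velocity(s)                # independent copy of the process at time s
        sigma = uniform_sphere()            # law kappa^{-1} beta_{v,v*}(sigma) d sigma
        v = 0.5 * (v + v_star) + 0.5 * np.linalg.norm(v - v_star) * sigma

v_t = velocity(0.3)
assert v_t.shape == (3,) and np.all(np.isfinite(v_t))
```

The expected cost of a call grows like $e^{\kappa t}$, which is why $t$ is kept small in this sketch.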
\vskip.13cm
Of course, each new random variable is simulated independently of the previous ones. In particular,
in line 7 of the algorithm, all the random variables required to produce {\tt $v^*=$velocity$(s)$}
are independent of all that has already been simulated.
\vskip.13cm
Comparing the above paragraph and the algorithm, it appears clearly that {\tt velocity}$(t)$
produces a $f_t$-distributed random variable. We have never seen this fact written precisely as it is here,
but it is more or less well-known folklore. In the present paper, we will prove such a fact, in a slightly
more complicated situation.
\vskip.13cm
In spirit, the algorithm produces a binary ordered tree: each time the recursive function calls itself,
we add a branch (on the right). So it is closely related to \eqref{wwss} and, actually, one can
convince oneself that {\tt velocity$(t)$} is precisely an algorithmic interpretation of \eqref{wwss}.
But entering into the details would lead us to tedious and technical explanations.
\subsection{Utility of Wild's sum}
The Wild sum has often been used for numerical computations: one simply truncates \eqref{w} at some
well-chosen level and, possibly, adds a Gaussian distribution with adequate mean and covariance matrix
to make it have the desired mass and energy.
See Carlen-Salvarani \cite{cs} for a very precise study in this direction.
And actually, Pareschi-Russo \cite{pr} also managed to use the Wild sum, among many other things,
to solve numerically the {\it inhomogeneous} Boltzmann equation for non Maxwellian molecules.
\vskip.13cm
A completely different approach is to use a large number $N$ of times
the perfect simulation algorithm previously described
to produce some i.i.d. $f_t$-distributed random variables $V^1_t,\dots,V^N_t$,
and to approximate $f_t$ by $N^{-1}\sum_1^N \delta_{V^i_t}$. We believe that this is not very efficient in practice,
especially when compared to the use of a classical interacting particle system in the spirit of Kac
\cite{k}, see e.g. \cite{fsm}. The main reason is that the computational cost of the above {\it perfect}
simulation algorithm
increases exponentially with time, while the one of Kac's particle system increases linearly.
So the cost to remove the bias is disproportionate.
See \cite{fg} for such a study concerning the Smoluchowski equation, which has the same structure
(at a rough level) as the Boltzmann equation.
\vskip.13cm
The Wild sum has also been intensively used to study the rate of approach to equilibrium of Maxwellian molecules.
This was initiated by McKean \cite{MK}, with more recent studies by
Carlen-Carvalho-Gabetta \cite{ccg,ccg2}, themselves followed by Dolera-Gabetta-Regazzini
\cite{dgr} and many other authors.
\subsection{Bounded cross sections}
If $B(v-v^*,\sigma)=\Phi(|v-v^*|)b(\langle \frac{v-v^*}{|v-v^*|},\sigma\rangle)$ with $\Phi$ bounded, e.g. by $1$,
we can introduce fictitious jumps
to write \eqref{be} as $\partial_t f_t = \kappa Q(f_t,f_t) - \kappa f_t$, with
$Q(f,g)= \kappa^{-1} \int_{\mathbb{S}^2}\int_0^1 f(v'')g(v''^*) {\rm d} a \beta_{v,v^*}(\sigma){\rm d} \sigma$, where
$v''= v + [v'-v]\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{a \leq \Phi(|v-v^*|)\}}$ and something similar for $v''^*$.
Hence the whole previous study applies directly, but the resulting Wild sum does not seem to
allow for a precise study of the large time behavior of $f_t$, because it leads to intractable
computations.
\subsection{Hard potentials with angular cutoff}
Of course, the angular cutoff (that is, the assumption that $\kappa < \infty$)
is crucial if one hopes for a perfect simulation algorithm
and for a series expansion in the spirit of Wild's sum. Indeed, $\kappa=\infty$
implies that a particle is subjected to infinitely many collisions on each finite time interval.
So our goal is to extend, at the price of many complications, the algorithm and series expansion
to hard potentials with cutoff. Since the cross section
is unbounded in the relative velocity variable,
some work is needed.
\vskip.13cm
We work with weak forms of PDEs for simplicity. First, it is classical, see e.g. \cite[Section 2.3]{v:h}
that a family
$(f_t)_{t\geq 0}\subset {\mathcal{P}}({\rr^3})$ is a weak solution to \eqref{be} if it
satisfies, for all reasonable $\phi\in C_b({\rr^3})$,
$$
\frac{d}{dt} \int_{\rd} \phi(v)f_t({\rm d} v)=\int_{\rd}\intrd \int_{{\mathbb{S}^2}} |v-v^*|^\gamma [\phi(v')-\phi(v)] \beta_{v,v^*}(\sigma)
{\rm d} \sigma f_t({\rm d} v^*) f_t({\rm d} v).
$$
As already mentioned, one also has $\int_{\rd} |v|^2 f_t({\rm d} v) = e_0$, so that
$g_t({\rm d} v)= (1+e_0)^{-1}(1+|v|^2)f_t({\rm d} v)$ belongs to ${\mathcal{P}}({\rr^3})$ for all $t\geq 0$. A simple computation
shows that, for all reasonable $\phi\in C_b({\rr^3})$,
$$
\frac{d}{dt} \int_{\rd} \phi(v)g_t({\rm d} v)=\int_{\rd}\intrd \int_{{\mathbb{S}^2}} \frac{(1+e_0)|v-v^*|^\gamma} {1+|v^*|^2}
\Big[\frac{1+|v'|^2}{1+|v|^2}\phi(v')-\phi(v)\Big] \beta_{v,v^*}(\sigma) {\rm d}
\sigma g_t({\rm d} v^*) g_t({\rm d} v).
$$
This equation enjoys the pleasant property that the {\it maximum rate of collision}, given by
$\Lambda_0(v)=\kappa \sup_{v^*\in{\rr^3}}
(1+|v^*|^2)^{-1}(1+e_0)|v-v^*|^\gamma$, is finite. Hence, up to some fictitious jumps,
one is able to predict when a particle will collide from its sole velocity, knowing nothing
of the {\it environment} represented by $g_t$. The function
$\Lambda_0$ is not bounded as a function of $v$, since it resembles $1+|v|^\gamma$,
but, as we will see, this does not matter too much.
On the contrary, the presence of $(1+|v|^2)^{-1}(1+|v'|^2)$ in front of the gain term is problematic. It means
that, in some sense, particles are not all taken into account equally. To overcome this problem,
we consider an equation with an additional {\it weight} variable $m\in (0,\infty)$. So we search for an equation,
resembling more the Kolmogorov forward equation of a nonlinear Markov process,
of which the solution $(G_t)_{t\geq 0} \subset {\mathcal{P}}((0,\infty)\times{\rr^3})$ would be such that
$\int_0^\infty m G_t({\rm d} m, {\rm d} v)=g_t({\rm d} v)$ for all times. Then, one would recover the solution to \eqref{be}
as $f_t({\rm d} v) = (1+e_0)(1+|v|^2)^{-1}\int_0^\infty m G_t({\rm d} m, {\rm d} v)$. All this is doable and was our initial
strategy. However, we then found a more direct way to proceed: taking advantage of the energy conservation,
it is possible to build an equation of which the
solution $(F_t)_{t\geq 0} \subset {\mathcal{P}}((0,\infty)\times{\rr^3})$ is such that
$\int_0^\infty m F_t({\rm d} m, {\rm d} v)=f_t({\rm d} v)$ for all $t\geq 0$. And this equation is of the form
\begin{align*}
&\frac{d}{dt} \int_{(0,\infty)\times{\rr^3}}\Phi(m,v)F_t({\rm d} m,{\rm d} v)=\int_{(0,\infty)\times{\rr^3}}\int_{(0,\infty)\times{\rr^3}}
\int_{{\mathbb{S}^2}\times[0,1]} \Lambda(v)
\Big[\Phi(m'',v'' ) -\Phi(m,v)\Big] \\
&\hskip9cm \beta_{v,v^*}(\sigma) {\rm d} a {\rm d} \sigma F_t({\rm d} m^*,{\rm d} v^*)F_t({\rm d} m,{\rm d} v).
\end{align*}
Here $\Lambda$ is any (explicit) function dominating $\Lambda_0$,
the additional variable $a$ is here to allow for fictitious jumps, and the post-collisional characteristics
$(m'',v'')$, depending on $m,v,m^*,v^*,\sigma,a$ are precisely defined in the next section.
The {\it perfect} simulation algorithm
for such an equation is almost as simple as the one previously described, except that the rate of collision
$\Lambda(v)$ now depends on the state of the particle. On the other hand, this state-dependent rate
substantially complicates the series expansion, because the time and phase variables do not separate anymore.
\subsection{Plan of the paper}
In the next section, we expose our main results: we
introduce an equation with an additional variable $m$, state that this equation has a unique
solution $F_t$ that, once integrated in $m$, produces the solution $f_t$ to \eqref{be}.
We then propose an algorithm that perfectly simulates an $F_t$-distributed random variable,
we write down a series expansion for $F_t$ in the spirit of \eqref{wwss} and discuss briefly
the relevance of our results. The proofs are then handled: the algorithm is studied in Section \ref{sa},
the series expansion established in Section \ref{sps},
well-posedness of the equation solved by $F_t$ is checked in Section
\ref{swp}, and the link between $F_t$ and $f_t$ shown in Section \ref{sr}.
\section{Main results}
\subsection{Weak solutions}
We use a classical definition of weak solutions, see e.g. \cite[Section 2.3]{v:h}.
\begin{defi}\label{dfws}
Assume \eqref{ass} and recall \eqref{beta}.
A measurable family $f=(f_t)_{t\geq 0} \subset {\mathcal{P}}({\mathbb{R}}^3)$
is a weak solution to (\ref{be}) if for all $t\geq 0$,
$\int_{\rd} |v|^2 f_t({\rm d} v)= \int_{\rd} |v|^2 f_0({\rm d} v)<\infty$ and for all $\phi \in C_b({\mathbb{R}}^3)$,
\begin{align}\label{ws}
\int_{\rd} \phi(v)f_t({\rm d} v) = \int_{\rd} \phi(v)f_0({\rm d} v) + \int_0^t
\int_{\rd} \int_{\rd} {\mathcal{A}} \phi(v,v^*) f_s({\rm d} v^*)f_s({\rm d} v){\rm d} s,
\end{align}
where ${\mathcal{A}}\phi(v,v^*)= |v-v^*|^\gamma \int_{\sS}[\phi(v'(v,v^*,\sigma))-\phi(v)]\beta_{v,v^*}(\sigma)
{\rm d} \sigma$.
\end{defi}
Everything is well-defined in \eqref{ws} by boundedness of mass and energy and since
$|{\mathcal{A}}\phi(v,v^*)|\leq 2 \kappa ||\phi||_\infty |v-v^*|^\gamma
\leq 2 \kappa ||\phi||_\infty (1+|v|^2)(1+|v^*|^2)$.
\vskip.13cm
For any given $f_0 \in {\mathcal{P}}({\mathbb{R}}^3)$ such that $\int_{\rd} |v|^2 f_0({\rm d} v)<\infty$,
the existence of unique weak solution starting from $f_0$
is now well-known. See Mischler-Wennberg \cite{MW} when $f_0$ has a density
and Lu-Mouhot \cite{LM} for the general case.
Let us also mention that the energy conservation assumption
is important in Definition \ref{dfws},
since Wennberg \cite{Wenn} proved that there also exist solutions with increasing energy.
\subsection{An equation with an additional variable}\label{tlf}
We fix $e_0>0$ and define, for $v\in {\rr^3}$,
$$
\Lambda (v) = (1+e_0)(1+|v|^\gamma)\in [1,\infty) \quad \hbox{which satisfies}\quad
\Lambda(v) \geq \sup_{v^* \in {\rr^3}} \frac{(1+e_0)|v-v^*|^\gamma}{1+|v^*|^2}.
$$
For $v,v^*\in{\rr^3}$ and $z=(\sigma,a) \in H={\mathbb{S}^2} \times [0,1]$, we put
\begin{gather*}
v''(v,v^*,z) =v+ \Big[v'(v,v^*,\sigma)-v\Big]\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{a\leq q(v,v^*)\}} \in {\rr^3},\\
\hbox{where}\quad
q(v,v^*)=\frac{(1+e_0)|v-v^*|^\gamma}{(1+|v^*|^2)\Lambda(v)}\in [0,1].
\end{gather*}
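Since $\Lambda(v)\geq (1+e_0)|v-v^*|^\gamma/(1+|v^*|^2)$ for all $v^*$, the quantity $q(v,v^*)$ is indeed a probability, and the event $a>q(v,v^*)$ encodes a fictitious jump that leaves $v$ unchanged. A minimal Python sketch of these maps, with the illustrative choices $\gamma=1$ and $e_0=3$ (the energy of a standard Gaussian $f_0$):

```python
import numpy as np

rng = np.random.default_rng(2)
GAMMA, E0 = 1.0, 3.0   # hard spheres; e0 = 3 is an illustrative choice

def Lambda(v):
    """Dominating rate (1 + e0)(1 + |v|^gamma)."""
    return (1.0 + E0) * (1.0 + np.linalg.norm(v) ** GAMMA)

def q(v, v_star):
    """Acceptance probability of a real (non-fictitious) collision."""
    return ((1.0 + E0) * np.linalg.norm(v - v_star) ** GAMMA
            / ((1.0 + np.dot(v_star, v_star)) * Lambda(v)))

def v_pp(v, v_star, sigma, a):
    """Post-collisional velocity v'': real collision if a <= q, fictitious jump otherwise."""
    if a > q(v, v_star):
        return v
    return 0.5 * (v + v_star) + 0.5 * np.linalg.norm(v - v_star) * sigma

# Lambda dominates the true rate, so q stays in [0, 1]:
for _ in range(1000):
    v, v_star = 3 * rng.normal(size=3), 3 * rng.normal(size=3)
    assert 0.0 <= q(v, v_star) <= 1.0
```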
We also introduce $E=(0,\infty)\times {\mathbb{R}}^3$ and, for $y=(m,v)$ and $y^* =(m^*,v^*)$ in $E$ and $z\in H$,
$$
h(y,y^*,z)=\Big(\frac{mm^*(1+|v^*|^2)}{1+e_0},v''(v,v^*,z)\Big)\in E \quad\hbox{and}\quad \Lambda(y)=\Lambda(v)
\in [1,\infty)
$$
with a small abuse of notation. We finally consider, for $y=(m,v)$ and $y^* =(m^*,v^*)$ in $E$,
$$
\nu_{y,y^*}({\rm d} z)=\beta_{v,v^*}(\sigma){\rm d}\sigma {\rm d} a
$$
which is a measure on $H$ with total mass $\nu_{y,y^*}(H)=\kappa$, see \eqref{ass}.
\begin{defi}\label{GE}
Assume \eqref{ass}. A measurable family $F=(F_t)_{t\geq 0} \subset {\mathcal{P}}(E)$ is said to solve (A) if
for all $T>0$, $\sup_{[0,T]} \int_E |v|^\gamma F_t({\rm d} m,{\rm d} v)<\infty$,
and for all $\Phi \in C_b(E)$, all $t\geq 0$,
\begin{align}\label{gew}
\int_E \Phi(y)F_t({\rm d} y)=\int_E \Phi(y)F_0({\rm d} y)+\int_0^t \int_E\int_E {\mathcal{B}}\Phi(y,y^*) F_s({\rm d} y^*)
F_s({\rm d} y){\rm d} s,
\end{align}
where ${\mathcal{B}}\Phi(y,y^*)=\Lambda(y)\int_H[\Phi(h(y,y^*,z))-\Phi(y)]\nu_{y,y^*}({\rm d} z)$.
\end{defi}
Everything is well-defined in \eqref{gew}
thanks to the conditions on $F$ and since
$|{\mathcal{B}}\Phi(y,y^*)|\leq 2\kappa ||\Phi||_\infty \Lambda(y)\leq C_\Phi(1+|v|^\gamma)$ (with the notation
$y=(m,v)$).
As already mentioned in the introduction,
the important point is that the function $\Lambda$ does not depend on $y^*$.
Hence a particle, when in the state $y$, jumps at rate $\kappa \Lambda(y)$, independently of everything else.
\begin{prop}\label{beeu}
Assume \eqref{ass}. For any $F_0\in {\mathcal{P}}(E)$ such that $\int_E |v|^\gamma F_0({\rm d} m,{\rm d} v)<\infty$,
(A) has exactly one solution $F$
starting from $F_0$.
\end{prop}
We will also verify the following estimate.
\begin{rk}\label{dec}
Assume \eqref{ass}. A solution to (A) satisfies
$\sup_{t\geq 0}\int_E |v|^2 F_t({\rm d} m,{\rm d} v)=\int_E |v|^2 F_0({\rm d} m,{\rm d} v)$.
\end{rk}
Finally, the link with the Boltzmann equation is as follows.
\begin{prop}\label{relat}
Assume \eqref{ass}. Let $F_0\in {\mathcal{P}}(E)$ such that
$\int_E [|v|^\gamma+m(1+|v|^{2+2\gamma})] F_0({\rm d} m,{\rm d} v)<\infty$ and let
$F$ be the solution to (A). Introduce, for each $t\geq 0$, the nonnegative measure $f_t$ on ${\mathbb{R}}^3$
defined by $f_t(A)=\int_E m \hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{v \in A\}}F_t({\rm d} m, {\rm d} v)$ for all $A\in {\mathcal{B}}({\mathbb{R}}^3)$.
If $f_0 \in {\mathcal{P}}({\mathbb{R}}^3)$ and if the quantity $e_0$ used to define the coefficients of (A) is
precisely $e_0=\int_{{\mathbb{R}}^3} |v|^2 f_0({\rm d} v)$, then
$(f_t)_{t\geq 0}$ is the unique weak solution to \eqref{be} starting from $f_0$.
\end{prop}
\subsection{A perfect simulation algorithm}
We consider the following procedure.
\begin{alg}\label{algo}
Fix $e_0>0$ and $F_0 \in {\mathcal{P}}(E)$. For any $t\geq 0$ we define the following recursive function,
of which the result is some $E\times {\mathbb{N}}$-valued random variable.
\vskip.13cm
{\tt function (value$(t)$,counter$(t)$):
.. Simulate a $F_0$-distributed random variable $y$, set $s=0$ and $n=0$.
.. While $s<t$ do
.. .. simulate an exponential random variable $\tau$ with parameter $\kappa \Lambda(y)$,
.. .. set $s=s+\tau$,
.. .. if $s<t$, do
.. .. .. set $(y^*,n^*)=$(value$(s)$,counter$(s)$),
.. .. .. simulate $z\in H$ with law $\kappa^{-1}\nu_{y,y^*}$,
.. .. .. set $y=h(y,y^*,z)$,
.. .. .. set $n=n+n^*+1$,
.. .. end if,
.. end while.
.. Return value$(t)=y$ and counter$(t)=n$.
}
\end{alg}
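For concreteness, here is a minimal Python sketch of Algorithm \ref{algo} for hard spheres ($\gamma=1$, constant $b$ with $\kappa=1$), with the illustrative choices $F_0=\delta_1\otimes f_0$, $f_0$ standard Gaussian and $e_0=3$; it is a sketch under these assumptions, not a reference implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
GAMMA, E0, KAPPA = 1.0, 3.0, 1.0   # hard spheres, Gaussian f_0 (illustrative)

def Lambda(v):
    return (1.0 + E0) * (1.0 + np.linalg.norm(v) ** GAMMA)

def sample_F0():
    """F_0 = delta_1 tensor f_0 with f_0 standard Gaussian."""
    return 1.0, rng.normal(size=3)

def h(m, v, m_star, v_star, sigma, a):
    """Jump map h(y, y*, z): weight update plus thinned collision."""
    qq = ((1.0 + E0) * np.linalg.norm(v - v_star) ** GAMMA
          / ((1.0 + np.dot(v_star, v_star)) * Lambda(v)))
    m_new = m * m_star * (1.0 + np.dot(v_star, v_star)) / (1.0 + E0)
    if a <= qq:                       # real collision; otherwise fictitious jump
        v = 0.5 * (v + v_star) + 0.5 * np.linalg.norm(v - v_star) * sigma
    return m_new, v

def value_counter(t):
    """Returns (m, v, n) with (m, v) distributed as F_t, n the number of recursive calls."""
    (m, v), s, n = sample_F0(), 0.0, 0
    while True:
        s += rng.exponential(1.0 / (KAPPA * Lambda(v)))   # state-dependent rate
        if s >= t:
            return m, v, n
        m_star, v_star, n_star = value_counter(s)
        sigma = rng.normal(size=3); sigma /= np.linalg.norm(sigma)
        a = rng.uniform()
        m, v = h(m, v, m_star, v_star, sigma, a)
        n += n_star + 1

m_t, v_t, n_t = value_counter(0.1)
assert m_t > 0 and np.all(np.isfinite(v_t)) and n_t >= 0
```

The bound of Proposition \ref{mr} keeps the expected number of calls ${\mathbb E}[N_t]$ small for small $t$, which is why $t=0.1$ is used here.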
Of course, each time a new random variable is simulated, we implicitly assume that it is
independent of everything that has already been simulated. In particular,
line 7 of the procedure, all the random variables used to produce
{\tt $(y^*,n^*)=$(value$(s)$,counter$(s)$)} are independent of all the random variables already simulated.
By construction, {\tt counter$(t)$} is precisely the number of times the recursive function
calls itself.
\begin{prop}\label{mr}
Assume \eqref{ass}. Fix $F_0\in {\mathcal{P}}(E)$ such that $\int_E |v|^\gamma F_0({\rm d} m,{\rm d} v)<\infty$ and fix $t\geq 0$.
Algorithm \ref{algo} a.s. stops and thus produces a pair $(Y_t,N_t)$ of random variables.
The $E$-valued random variable
$Y_t$ is $F_t$-distributed, where $F$ is the solution to (A) starting from $F_0$.
The ${\mathbb{N}}$-valued random variable $N_t$ satisfies $\mathbb{E}[N_t]\leq
\exp(\kappa \int_0^t \int_E \Lambda(y) F_s({\rm d} y) ds)-1$.
\end{prop}
\subsection{A series expansion}
We next write down a series expansion of $F_t$, the solution to (A), in the spirit of Wild's sum \eqref{wwss}.
Unfortunately, the expressions are more complicated,
because the time ($t\geq 0$) and phase ($y\in E$) variables do not separate. This is due to the
fact that the jump rate $\Lambda$ depends on the state of the particle.
\vskip.13cm
For $F,G$ in ${\mathcal{M}}(E)$, we define $Q(F,G) \in {\mathcal{M}}(E)$ by, for all Borel subset $A\subset E$,
$$
Q(F,G)(A)= \int_E\int_E\int_H \Lambda(y) \hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{h(y,y^*,z)\in A\}} \nu_{y,y^*}({\rm d} z) G({\rm d} y^*)F({\rm d} y).
$$
Observe that \eqref{gew} may be written, at least formally,
$\partial_t F_t=-\kappa \Lambda F_t + Q(F_t,F_t)$, provided $F_t$ is a probability measure on $E$
for all $t\geq 0$. Also, note that $Q(F,G)\neq Q(G,F)$ in general.
\vskip.13cm
For $J\in {\mathcal{M}}({\mathbb{R}}_+\times E)$, consider the measurable family $(\Gamma_t(J))_{t\geq 0}\subset {\mathcal{M}}(E)$ defined by
$$
\Gamma_t(J)(A)=\int_0^t \int_E \hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{y \in A\}} e^{-\kappa \Lambda(y) (t-s)}J({\rm d} s,{\rm d} y)
$$
for all Borel subset $A\subset E$.
\vskip.13cm
We finally consider the set ${\mathcal{T}}$ of all finite binary (discrete) ordered trees: such a tree is constituted of
a finite number of nodes, including the root, each of these nodes having either $0$ or two children
(ordered, in the sense that a node having two children has a {\it left} child and a {\it right} child).
We denote by $\circ \in {\mathcal{T}}$ the trivial tree, composed of the root as only node.
\begin{prop}\label{psum}
Assume \eqref{ass}. Let $F_0 \in {\mathcal{P}}(E)$ such that $\int_E |v|^\gamma F_0({\rm d} m,{\rm d} v)<\infty$.
The unique solution
$(F_t)_{t\geq 0}$ to (A) starting from $F_0$ is given by
$$
F_t = \sum_{\Upsilon \in {\mathcal{T}}} \Gamma_t(J_\Upsilon(F_0)),
$$
with $J_\Upsilon(F_0) \in {\mathcal{M}}({\mathbb{R}}_+\times E)$ defined by induction:
$J_\circ(F_0)({\rm d} t,{\rm d} x)=\delta_0({\rm d} t) F_0({\rm d} x)$ and, if $\Upsilon \in {\mathcal{T}}\setminus\{\circ\}$,
$$
J_\Upsilon(F_0)({\rm d} t,{\rm d} x)={\rm d} t Q(\Gamma_t(J_{\Upsilon_\ell}(F_0)),\Gamma_t(J_{\Upsilon_r}(F_0)))({\rm d} x),
$$
where $\Upsilon_\ell$ (resp. $\Upsilon_r$) is the subtree
of $\Upsilon$ consisting of the left (resp. right) child of the root with its whole progeny.
\end{prop}
We will prove this formula by a purely analytic method.
We do not want to discuss precisely its connection with Algorithm \ref{algo}, but
let us mention that in spirit, the algorithm produces a (random) ordered tree $\Upsilon_t$ of interactions
together with the
value of $Y_t$, and that $\Gamma_t(J_\Upsilon(F_0))$ can be interpreted as the probability distribution of
$Y_t$ restricted to the event that $\Upsilon_t=\Upsilon$.
\subsection{Conclusion}
Fix $f_0\in {\mathcal{P}}({\mathbb{R}}^3)$ such that $\int_{{\mathbb{R}}^3} |v|^{2+2\gamma}f_0({\rm d} v)<\infty$ and set
$F_0= \delta_1 \otimes f_0 \in {\mathcal{P}}(E)$, which satisfies
$\int_E [|v|^\gamma+m(1+|v|^{2+2\gamma})] F_0({\rm d} m,{\rm d} v)
=\int_{\rd} (1+|v|^\gamma+|v|^{2+2\gamma}) f_0({\rm d} v)<\infty$.
\vskip.13cm
(a) Gathering Propositions \ref{mr} and \ref{relat}, we find that
Algorithm \ref{algo} used with $e_0=\int_{{\mathbb{R}}^3} |v|^{2}f_0({\rm d} v)$ and with
$F_0$ produces a random variable $(Y_t,N_t)$, with $Y_t=(M_t,V_t)$ such that
$\mathbb{E}[M_t \hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{V_t \in A\}}]=f_t(A)$ for all $A \in {\mathcal{B}}({\mathbb{R}}^3)$,
where $f$ is the unique weak solution to \eqref{be} starting from $f_0$.
Also, the mean number of iterations $\mathbb{E}[N_t]$ is bounded by $\exp[\kappa(1+e_0)(1+e_0^{\gamma/2}) t]-1$.
\vskip.13cm
Indeed, we know from Proposition \ref{mr} that $\mathbb{E}[N_t]\leq
\exp(\kappa \int_0^t \int_E \Lambda(y) F_s({\rm d} y) ds)-1$. But we have
$\int_E \Lambda(y)F_t({\rm d} y)=(1+e_0) \int_E (1+|v|^\gamma)F_t({\rm d} m,{\rm d} v)
\leq (1+e_0)(1+ (\int_E |v|^2 F_t({\rm d} m,{\rm d} v))^{\gamma/2})$, which is smaller than
$(1+e_0)(1+ (\int_E |v|^2 F_0({\rm d} m,{\rm d} v))^{\gamma/2})=(1+e_0)(1+e_0^{\gamma/2})$ by Remark \ref{dec}.
\vskip.13cm
(b) Gathering Propositions \ref{psum} and \ref{relat}, we conclude that
for all $t\geq 0$, all Borel subset $A\subset {\rr^3}$, we have
$f_t(A)=\sum_{\Upsilon \in {\mathcal{T}}} \int_E m \hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{v \in A\}}\Gamma_t(J_\Upsilon(F_0))({\rm d} m,{\rm d} v)$.
\subsection{Discussion}
It might be possible to prove Proposition \ref{relat} assuming only that $F_0 \in {\mathcal{P}}(E)$
satisfies $\int_E m(1+|v|^2) F_0({\rm d} m,{\rm d} v)<\infty$ instead of
$\int_E m(1+|v|^{2+2\gamma}) F_0({\rm d} m,{\rm d} v)<\infty$, since the Boltzmann equation \eqref{be}
is known to be well-posed as soon as the initial energy is finite, see \cite{MW,LM}.
However, it would clearly be more difficult and our condition is rather harmless.
\vskip.13cm
Observe that (A) is well-posed under the condition that $F_0 \in{\mathcal{P}}(E)$
satisfies $\int_E |v|^\gamma F_0({\rm d} m,{\rm d} v)<\infty$, which does not at all imply that
$e_0=\int_E m|v|^2 F_0({\rm d} m,{\rm d} v)<\infty$. But, recalling that $e_0$ has to be finite
for the coefficients of (A) to be well-defined, this is not very interesting.
\vskip.13cm
The series expansion of Proposition \ref{psum} is of course much more complicated than the original
Wild sum, since (a)
we had to add the variable $m$, (b) we had to introduce fictitious jumps, (c) time and space do not separate.
So it is not clear whether the formula can be used theoretically or numerically. However, it provides an
explicit formula expressing $f_t$ as a (tedious) function of $f_0$.
\vskip.13cm
Algorithm \ref{algo} is extremely simple.
Using it a large number of times produces an i.i.d. sample
$(M^i_t,V^i_t)_{i=1,\dots,N}$, and we may approximate $f_t$ by $N^{-1}\sum_1^N M^i_t \delta_{V^i_t}$.
For a central limit
theorem to hold true, one needs $\mathbb{E}[M_t^2]=\int_E m^2F_t({\rm d} m,{\rm d} v)$ to be finite.
We do not know if this holds true, although
we have some serious doubts. Hence the convergence of this Monte-Carlo approximation may be much
slower than $N^{-1/2}$. The main interest of Algorithm \ref{algo} is thus theoretical.
\section{The algorithm}\label{sa}
Here we prove Proposition \ref{mr}. We fix $F_0 \in {\mathcal{P}}(E)$ such that
$\int_E |v|^\gamma F_0({\rm d} m,{\rm d} v)<\infty$,
which implies that $\int_E \Lambda(y) F_0({\rm d} y)<\infty$.
When Algorithm \ref{algo} never stops, we take the convention that it returns
{\tt (value$(t)$,counter$(t)$)}=$(\triangle,\infty)$, where $\triangle$ is
a cemetery point. For each $t\geq 0$, we denote by $G_t \in {\mathcal{P}}((E\times {\mathbb{N}})\cup\{(\triangle,\infty)\})$
the law of the random variable produced by Algorithm \ref{algo}.
Also, for $y \in E$, $n\in {\mathbb{N}}$ and $z\in H$, we take the conventions that $h(y,\triangle,z)=\triangle$
and $n+\infty+1=\infty$. We arbitrarily define, for $y\in E$, $\nu_{y,\triangle}({\rm d} z)=|{\mathbb{S}^2}|^{-1}\kappa {\rm d} \sigma
{\rm d} a$.
\vskip.13cm
{\it Step 1.} We now consider the following procedure. It is an {\it abstract} procedure, because
it assumes that for each $t\geq 0$, one can simulate a random variable with law $G_t$
and because the instructions are repeated {\it ad infinitum} if the cemetery point is
not attained.
\vskip.13cm
{\tt Simulate a $F_0$-distributed random variable $y$, set $s=0$ and $n=0$.
While $y \neq \triangle$ do ad infinitum
.. simulate an exponential random variable $\tau$ with parameter $\kappa \Lambda(y)$,
.. set $Y_t=y$ and $N_t=n$ for all $t \in [s,s+\tau)$,
.. set $s=s+\tau$,
.. set $(y^*,n^*)=$(value$(s)$,counter$(s)$), with $(y^*,n^*)=(\triangle,\infty)$ if it never stops,
.. simulate $z\in H$ with law $\kappa^{-1}\nu_{y,y^*}$,
.. set $y=h(y,y^*,z)$,
.. set $n=n+n^*+1$,
end while.
If $s<\infty$, set $Y_{t}=\triangle$ and $N_t=\infty$ for all $t\geq s$.
}
\vskip.13cm
Observe that in the last line, we may have $s<\infty$ either because after a finite
number of steps, the simulation of $(y^*,n^*)$ with law $G_s$ has produced $(\triangle,\infty)$,
or because we did repeat the loop
{\it ad infinitum}, but the increasing process $N$ became infinite in finite time.
\vskip.13cm
At the end, this produces a process $(Y_t,N_t)_{t\geq 0}$, and one is easily convinced that
for each $t\geq 0$, $(Y_t,N_t)$ is $G_t$-distributed. Indeed, if one extracts from the above
procedure only what is required to produce $(Y_t,N_t)$ (for some fixed $t$), one precisely re-obtains
Algorithm \ref{algo} if $(Y_t,N_t)\neq (\triangle,\infty)$
(and in this case Algorithm \ref{algo} stops),
while $(Y_t,N_t)= (\triangle,\infty)$ implies that Algorithm \ref{algo} never stops.
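The recursive fixed-point structure of this procedure, in which the inner pair {\tt (value$(s)$,counter$(s)$)} is itself produced by a fresh run of the same mechanism, can be illustrated by a toy Python sketch. The constant jump rate and the averaging interaction below are assumptions made purely for illustration (they do not reproduce the coefficients of (A)), chosen so that the recursion a.s. terminates.

```python
import random

def simulate(t, rng, lam=1.0):
    """Toy version of the abstract procedure: returns (Y_t, N_t).
    The pair (value(s), counter(s)) is obtained by a recursive call to
    this very function, mirroring the fixed-point structure of Step 1.
    Constant rate `lam` and the averaging interaction are toy choices."""
    y = rng.gauss(0.0, 1.0)            # F_0-distributed initial state (toy)
    s, n = 0.0, 0
    while True:
        tau = rng.expovariate(lam)     # exponential waiting time
        if s + tau > t:
            return y, n                # (Y_t, N_t): state frozen on [s, s+tau)
        s += tau
        y_star, n_star = simulate(s, rng)   # recursive sample ~ G_s
        y = 0.5 * (y + y_star)              # toy interaction h(y, y*, z)
        n = n + n_star + 1
```

With a constant rate, one has exactly $\mathbb{E}[N_t]=e^{\lambda t}-1$, so the bound of Step 3 below is attained with equality in this toy setting.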
\vskip.13cm
By construction, the process $(Y_t,N_t)_{t\geq 0}$
is a time-inhomogeneous (possibly exploding) Markov process with values in
$(E\times {\mathbb{N}})\cup\{(\triangle,\infty)\}$
with generator ${\mathcal{L}}_t$, absorbed at $(\triangle,\infty)$ if this point is reached
and set to $(\triangle,\infty)$ after explosion if it explodes, where
$$
{\mathcal{L}}_t \Psi(y,n)= \kappa \Lambda(y) \int_H \int_{(E\times {\mathbb{N}})\cup\{(\triangle,\infty)\}}
\Big[ \Psi(h(y,y^*,z),n+n^*+1) - \Psi(y,n)\Big] G_t({\rm d} y^*,{\rm d} n^*) \frac{\nu_{y,y^*}({\rm d} z)}{\kappa}
$$
for all $t\geq 0$, all $\Psi \in B_b((E\times {\mathbb{N}})\cup \{(\triangle,\infty)\})$,
all $y\in E$ and all $n \in {\mathbb{N}}$.
\vskip.13cm
{\it Step 2.} Here we handle a preliminary computation: for all $y,y^*\in E$, we have
\begin{equation}\label{xxx}
A(y,y^*)=\int_H \Big[\Lambda(h(y,y^*,z))-\Lambda(y)\Big] \nu_{y,y^*}({\rm d} z)\leq \kappa(1+e_0).
\end{equation}
Writing $y=(m,v)$, $y^*=(m^*,v^*)$ and recalling Subsection \ref{tlf}, $A(y,y^*)$ equals
$$
(1+e_0)\int_H \Big[|v''(v,v^*,z)|^\gamma-|v|^\gamma \Big]\nu_{y,y^*}({\rm d} z)=
(1+e_0)q(v,v^*)\int_{\sS} [|v'(v,v^*,\sigma)|^\gamma-|v|^\gamma]
\beta_{v,v^*}(\sigma){\rm d} \sigma.
$$
But $|v'(v,v^*,\sigma)|\leq |v|+|v^*|$, see \eqref{vprime}, so that
$|v'(v,v^*,\sigma)|^\gamma -|v|^\gamma \leq |v^*|^\gamma$, whence
$$
A(y,y^*)\leq \kappa (1+e_0)q(v,v^*)|v^*|^\gamma=\kappa(1+e_0)
\frac{|v-v^*|^\gamma|v^*|^\gamma}{(1+|v|^\gamma)(1+|v^*|^2)}\leq \kappa(1+e_0),
$$
because $|v-v^*|^\gamma |v^*|^\gamma \leq (|v|^\gamma+|v^*|^\gamma) |v^*|^\gamma \leq
|v|^\gamma(1+|v^*|^2)+(1+|v^*|^2)=(1+|v|^\gamma)(1+|v^*|^2)$.
\vskip.13cm
{\it Step 3.} We now prove that $(Y_t,N_t)_{t\geq 0}$ actually neither explodes nor reaches the cemetery point,
that $\mathbb{E}[N_t] \leq \exp(\kappa \int_0^t \mathbb{E}[\Lambda(Y_s)] {\rm d} s) -1$
and that $\mathbb{E}[\Lambda(Y_t)]\leq \mathbb{E}[\Lambda(Y_0)] \exp(\kappa (1+e_0)t)$.
\vskip.13cm
For $A \in {\mathbb{N}}_*$, we introduce $\zeta_A=\inf\{t\geq 0 : N_t\geq A\}$.
The process $(Y_t,N_t)_{t\geq 0}$ does not explode nor reach the cemetery point during $[0,\zeta_A)$,
so that we can write, with $\Psi(y,n)=n\land A$ (recall that $N_0=0$ and that $t\mapsto N_t$ is a.s.
non-decreasing),
$$
\mathbb{E}[N_t \land A] = \mathbb{E}[N_{t \land \zeta_A}\land A]=\mathbb{E}\Big[\int_0^{t \land \zeta_A} {\mathcal{L}}_s\Psi(Y_s,N_s){\rm d} s\Big]
=\int_0^t \mathbb{E}\Big[\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{N_s < A\}}{\mathcal{L}}_s\Psi(Y_s,N_s)\Big] {\rm d} s.
$$
Since $0\leq (n+n^*+1)\land A - n \land A \leq 1+n^*\land A$, we deduce that
$$
0\leq {\mathcal{L}}_s \Psi(y,n) \leq \kappa \Lambda(y) \int_{(E\times {\mathbb{N}})\cup\{(\triangle,\infty)\}} [1+n^*\land A]
G_s({\rm d} y^*,{\rm d} n^*)= \kappa \Lambda(y) (1+\mathbb{E}[N_s\land A]),
$$
because $(Y_s,N_s)$ is
$G_s$-distributed. We thus find
$$
\mathbb{E}[N_t\land A]\leq \kappa \int_0^t\mathbb{E}[\Lambda(Y_s)\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{N_s<A\}}] \mathbb{E}[1+N_s\land A]{\rm d} s,
$$
whence, by the Gronwall lemma,
\begin{equation}\label{jab1}
\mathbb{E}[N_t\land A] \leq \exp\Big(\kappa \int_0^t \mathbb{E}[\Lambda(Y_s)\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{N_s<A\}}] {\rm d} s\Big) -1 .
\end{equation}
We next choose $\Psi(y,n)=\Lambda(y)\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{n<A\}}$ and write, as previously,
$$
\mathbb{E}[\Lambda(Y_t)\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{N_t <A\}}] = \mathbb{E}[\Lambda(Y_{t \land \zeta_A})\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{N_{t\land \zeta_A} <A\}}]
= \mathbb{E}[\Lambda(Y_0)]+\mathbb{E}\Big[\int_0^{t \land \zeta_A} {\mathcal{L}}_s\Psi(Y_s,N_s){\rm d} s\Big],
$$
whence
$$
\mathbb{E}[\Lambda(Y_t)\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{N_t <A\}}] =\mathbb{E}[\Lambda(Y_0)]+\int_0^t \mathbb{E}\Big[\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{N_s <A\}}{\mathcal{L}}_s\Psi(Y_s,N_s)\Big] {\rm d} s.
$$
But ${\mathcal{L}}_s\Psi(y,n)$ equals
\begin{align*}
& \Lambda(y) \int_H \int_{(E\times {\mathbb{N}})\cup\{(\triangle,\infty)\}} \!\!
\Big[\Lambda(h(y,y^*,z))\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{n+n^*+1<A\}}- \Lambda(y)\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{n<A\}}\Big] G_s({\rm d} y^*,{\rm d} n^*)\nu_{y,y^*}({\rm d} z)\\
\leq & \Lambda(y) \hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{n<A\}} \int_H \int_{E\times {\mathbb{N}}}
\Big[\Lambda(h(y,y^*,z))-\Lambda(y)\Big] G_s({\rm d} y^*,{\rm d} n^*)\nu_{y,y^*}({\rm d} z),
\end{align*}
whence ${\mathcal{L}}_s\Psi(y,n) \leq \kappa(1+e_0) \hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{n<A\}}\Lambda(y)$ by \eqref{xxx}
and since $G_s(E\times {\mathbb{N}})\leq 1$.
Finally, we have checked that
$\mathbb{E}[\Lambda(Y_t)\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{N_t <A\}}] \leq \mathbb{E}[\Lambda(Y_0)]
+\kappa(1+e_0) \int_0^t \mathbb{E}[\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{N_s <A\}}\Lambda(Y_s)] {\rm d} s,$
whence
\begin{equation}\label{jab2}
\mathbb{E}[\Lambda(Y_t)\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{N_t <A\}}]\leq \mathbb{E}[\Lambda(Y_0)] \exp(\kappa(1+e_0) t).
\end{equation}
Gathering \eqref{jab1} and \eqref{jab2} and letting $A$ increase to infinity, we first conclude
that $\mathbb{E}[N_t]<\infty$ for all $t\geq 0$. In particular, $N_t<\infty$ a.s. for all $t\geq 0$, and
the process $(Y_t,N_t)_{t\geq 0}$ a.s. does not explode and never reaches $(\triangle,\infty)$.
Consequently, $\mathbb{E}[\Lambda(Y_t)]=\lim_{A\to \infty} \mathbb{E}[\Lambda(Y_t)\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{N_t <A\}}]
\leq \mathbb{E}[\Lambda(Y_0)] \exp(\kappa (1+e_0)t)$ by \eqref{jab2}.
Finally, we easily conclude from \eqref{jab1} that $\mathbb{E}[N_t] \leq \exp(\kappa \int_0^t \mathbb{E}[\Lambda(Y_s)] {\rm d} s) -1$.
\vskip.13cm
{\it Step 4.} By Step 3, we know that $G_t$ (which is the law of $(Y_t,N_t)$)
is actually supported by $E\times{\mathbb{N}}$ for all $t\geq 0$. Hence Algorithm \ref{algo} a.s. stops.
The process $(Y_t,N_t)_{t\geq 0}$ is thus an inhomogeneous Markov process with generator $\tilde{{\mathcal{L}}}_t$ defined,
for $\Psi \in C_b(E\times{\mathbb{N}})$, by
$$\tilde{{\mathcal{L}}}_t \Psi(y,n)= \Lambda(y) \int_H \int_{E\times {\mathbb{N}}}
\Big[ \Psi(h(y,y^*,z),n+n^*+1) - \Psi(y,n)\Big] G_t({\rm d} y^*,{\rm d} n^*) \nu_{y,y^*}({\rm d} z)
$$
and we thus have
$$
\int_{E\times{\mathbb{N}}}\Psi(y,n)G_t({\rm d} y,{\rm d} n)= \int_{E\times{\mathbb{N}}}\Psi(y,n)G_0({\rm d} y,{\rm d} n)+\int_0^t \int_{E\times {\mathbb{N}}}
\tilde{{\mathcal{L}}}_s \Psi(y,n) G_s({\rm d} y,{\rm d} n) {\rm d} s.
$$
Let now $F_t \in {\mathcal{P}}(E)$ be the law of $Y_t$ (so $F_t$ is the first marginal of $G_t$).
It starts from $F_0$ and solves (A). Indeed, $\int_E |v|^\gamma F_t({\rm d} m,{\rm d} v)\leq
\int_E \Lambda(y)F_t({\rm d} y)=\mathbb{E}[\Lambda(Y_t)]$ is locally bounded by Step 3
and for all $\Phi \in C_b(E)$,
applying the above equation with $\Psi(y,n)=\Phi(y)$, we find
$\tilde{{\mathcal{L}}}_t \Psi(y,n)= \Lambda(y) \int_H \int_{E} [ \Phi(h(y,y^*,z)) - \Phi(y)] F_t({\rm d} y^*) \nu_{y,y^*}({\rm d} z)$,
so that
$$
\int_{E}\Phi(y)F_t({\rm d} y)\!=\!\! \int_{E}\Phi(y)F_0({\rm d} y)+\!\int_0^t \int_{E}\int_E\int_H
\Lambda(y)\Big[ \Phi(h(y,y^*,z)) - \Phi(y)\Big] \nu_{y,y^*}({\rm d} z)F_s({\rm d} y^*)F_s({\rm d} y) {\rm d} s
$$
as desired. Finally, we have already seen in Step 3 that
$\mathbb{E}[N_t]\leq \exp(\kappa \int_0^t \mathbb{E}[\Lambda(Y_s)] {\rm d} s) -1=\exp(\kappa \int_0^t \int_E \Lambda(y)F_s({\rm d} y)
{\rm d} s) -1$.
We have proved Proposition \ref{mr}, as well as the existence part of Proposition \ref{beeu}.
\hfill $\square$
\section{Series expansion}\label{sps}
The goal of this section is to prove Proposition \ref{psum}.
We thus consider $F_0 \in {\mathcal{P}}(E)$ such that $\int_E \Lambda(y)F_0({\rm d} y)
=(1+e_0)\int_E(1+|v|^\gamma)F_0({\rm d} m,{\rm d} v)<\infty$. To shorten notation, we set
$J_\Upsilon=J_\Upsilon(F_0)$.
\vskip.13cm
{\it Step 1.} Here we check that for all $\Upsilon \in {\mathcal{T}}$, all $t\geq 0$,
$C_\Upsilon(t)=\int_0^t \int_E \Lambda(y)
J_\Upsilon({\rm d} s,{\rm d} y)<\infty$. We work by induction. First, since
$J_\circ({\rm d} t,{\rm d} y)=\delta_0({\rm d} t)F_0({\rm d} y)$,
we find that $C_\circ(t)=\int_E \Lambda(y) F_0({\rm d} y)$ for all $t\geq 0$, which is finite by assumption.
Next, we fix $t\geq 0$,
$\Upsilon \in {\mathcal{T}}\setminus\{\circ\}$, we consider $\Upsilon_\ell$ and $\Upsilon_r$ as in the statement,
we assume by induction that $C_{\Upsilon_\ell}(t)<\infty$ and $C_{\Upsilon_r}(t)<\infty$ and prove that
$C_\Upsilon(t)<\infty$. We start from
\begin{align*}
C_\Upsilon(t)=&\int_0^t \int_E \Lambda(x)Q(\Gamma_s(J_{\Upsilon_\ell}),\Gamma_s(J_{\Upsilon_r}))({\rm d} x) {\rm d} s\\
=&\int_0^t \int_E\int_E\int_H \Lambda(h(y,y^*,z)) \Lambda(y)\nu_{y,y^*}({\rm d} z) \Gamma_s(J_{\Upsilon_r})({\rm d} y^*)
\Gamma_s(J_{\Upsilon_\ell})({\rm d} y){\rm d} s\\
=& \int_0^t \int_E\int_E\int_H \int_0^s \int_0^s \Lambda(h(y,y^*,z))\Lambda(y) \nu_{y,y^*}({\rm d} z) \\
& \hskip5cm J_{\Upsilon_r}({\rm d} u^*,{\rm d} y^*)e^{-\kappa\Lambda(y^*)(s-u^*)}J_{\Upsilon_\ell}({\rm d} u,{\rm d} y)
e^{-\kappa \Lambda(y)(s-u)}
{\rm d} s.
\end{align*}
But it follows from \eqref{xxx} that $\int_H\Lambda(h(y,y^*,z))\nu_{y,y^*}({\rm d} z) \leq \kappa (1+e_0+\Lambda(y))\leq
2\kappa \Lambda(y)$,
whence
\begin{align*}
C_\Upsilon(t)\leq & 2\kappa \int_0^t \int_E\int_E\int_0^s \int_0^s \Lambda^2(y)
J_{\Upsilon_r}({\rm d} u^*,{\rm d} y^*) J_{\Upsilon_\ell}({\rm d} u,{\rm d} y) e^{-\kappa \Lambda(y)(s-u)}{\rm d} s\\
\leq & 2\kappa C_{\Upsilon_r}(t) \int_0^t \int_E\int_0^s \Lambda^2(y)
J_{\Upsilon_\ell}({\rm d} u,{\rm d} y) e^{-\kappa \Lambda(y)(s-u)}{\rm d} s.
\end{align*}
For the last inequality, we used that for $s\in [0,t]$, since $\Lambda \geq 1$,
$\int_E\int_0^sJ_{\Upsilon_r}({\rm d} u^*,{\rm d} y^*) \leq
\int_E\int_0^t \Lambda(y^*)J_{\Upsilon_r}({\rm d} u^*,{\rm d} y^*) =C_{\Upsilon_r}(t)$.
Next, by the Fubini theorem,
\begin{align*}
C_\Upsilon(t)\leq & 2\kappa C_{\Upsilon_r}(t) \int_0^t \int_E J_{\Upsilon_\ell}({\rm d} u,{\rm d} y) \int_u^t \Lambda^2(y)
e^{-\kappa \Lambda(y)(s-u)}{\rm d} s\leq 2 C_{\Upsilon_r}(t) \int_0^t \int_E \Lambda(y) J_{\Upsilon_\ell}({\rm d} u,{\rm d} y),
\end{align*}
so that $C_\Upsilon(t)\leq 2 C_{\Upsilon_r}(t)C_{\Upsilon_\ell}(t) <\infty$ as desired.
\vskip.13cm
{\it Step 2.} We deduce from Step 1 that for all $\Upsilon \in {\mathcal{T}}$,
$$
t\mapsto \int_E \Lambda(y)\Gamma_t(J_\Upsilon)({\rm d} y)=\int_0^t \int_E \Lambda(y)e^{-\kappa\Lambda(y)(t-s)}
J_\Upsilon({\rm d} s,{\rm d} y)\leq C_\Upsilon(t)
$$
is locally bounded.
\vskip.13cm
{\it Step 3.} We fix $k\in {\mathbb{N}}_*$ and denote by ${\mathcal{T}}_k\subset {\mathcal{T}}$ the finite set of all ordered binary
trees with at most
$k$ nodes. We introduce $F_t^k=\sum_{\Upsilon \in {\mathcal{T}}_k} \Gamma_t(J_\Upsilon)$. By Step 2, we know that
$t\mapsto \int_E \Lambda(y) F_t^k({\rm d} y)$ is locally bounded. We claim that
for all $\Phi \in C_b(E)$, all $t\geq 0$,
\begin{align}\label{todiff}
\int_E \Phi(y) F_t^k({\rm d} y) =& \int_E \Phi(y)e^{-\kappa \Lambda(y)t}F_0({\rm d} y) \\
&+\sum_{\Upsilon \in {\mathcal{T}}_k\setminus\{\circ\}} \int_0^t \int_E \Phi(y)e^{-\kappa \Lambda(y)(t-s)}
Q(\Gamma_s(J_{\Upsilon_\ell}),
\Gamma_s(J_{\Upsilon_r}))({\rm d} y){\rm d} s,\notag
\end{align}
whence in particular $F_0^k=F_0$.
Indeed, we first observe that
$$
\int_E \Phi(y) \Gamma_t(J_\circ)({\rm d} y) = \int_0^t \int_E \Phi(y)e^{-\kappa \Lambda(y)(t-s)}J_\circ({\rm d} s,{\rm d} y)
=\int_E \Phi(y)e^{-\kappa \Lambda(y)t}F_0({\rm d} y)
$$
and then that,
for $\Upsilon \in {\mathcal{T}}_k\setminus\{\circ\}$,
\begin{align*}
\int_E \Phi(y)\Gamma_t(J_\Upsilon)({\rm d} y)=& \int_0^t \int_E \Phi(y)e^{-\kappa \Lambda(y)(t-s)} J_{\Upsilon}({\rm d} s,{\rm d} y).
\end{align*}
Since $J_{\Upsilon}({\rm d} s,{\rm d} y)=Q(\Gamma_s(J_{\Upsilon_\ell}),
\Gamma_s(J_{\Upsilon_r}))({\rm d} y){\rm d} s$ by definition, the result follows.
\vskip.13cm
{\it Step 4.} Differentiating \eqref{todiff}, we find that for all $\Phi\in C_b(E)$, all $t\geq 0$,
\begin{equation}\label{diff}
\frac d{dt}\int_E \Phi(y) F_t^k({\rm d} y) = - \kappa \int_E \Lambda(y) \Phi(y) F_t^k({\rm d} y) +
\sum_{\Upsilon \in {\mathcal{T}}_k\setminus\{\circ\}} \int_E \Phi(y)Q(\Gamma_t(J_{\Upsilon_\ell}),\Gamma_t(J_{\Upsilon_r}))({\rm d} y).
\end{equation}
The differentiation is easily justified, using that $\Phi$ is bounded, that
$t\mapsto \int_E \Lambda(y) F_t^k({\rm d} y)$ is locally bounded, as well as
$t\mapsto \int_E \Lambda(y) \Gamma_t(J_\Upsilon)({\rm d} y)$ for all $\Upsilon \in {\mathcal{T}}$,
and that for $F,G\in{\mathcal{M}}(E)$,
$$
Q(F,G)(E) = \kappa G(E) \int_E \Lambda(y)F({\rm d} y).
$$
\vskip.13cm
{\it Step 5.} Here we verify that $\sup_{[0,\infty)} F_t^k(E) \leq 1$ and that
$\sup_{k\geq 1} \sup_{[0,T]} \int_E \Lambda(y) F_t^k({\rm d} y) <\infty$ for all $T>0$.
First observe that if $\Phi\in C_b(E)$ is nonnegative, then
\begin{align*}
\sum_{\Upsilon \in {\mathcal{T}}_k\setminus\{\circ\}} \int_E \Phi(y)Q(\Gamma_t(J_{\Upsilon_\ell}),\Gamma_t(J_{\Upsilon_r}))({\rm d} y)
\leq& \sum_{\Upsilon_1 \in {\mathcal{T}}_k, \Upsilon_2 \in {\mathcal{T}}_k}
\int_E \Phi(y)Q(\Gamma_t(J_{\Upsilon_1}),\Gamma_t(J_{\Upsilon_2}))({\rm d} y)\\
=&\int_E \Phi(y)Q(F_t^k,F_t^k)({\rm d} y).
\end{align*}
We used that the map $\Upsilon \mapsto (\Upsilon_\ell,\Upsilon_r)$
is injective from ${\mathcal{T}}_k\setminus\{\circ\}$ into ${\mathcal{T}}_k\times{\mathcal{T}}_k$, as well as
the bilinearity of $Q$. Consequently, by \eqref{diff},
\begin{align}\label{yyy}
\frac d{dt}\int_E \Phi(y) F_t^k({\rm d} y) \leq& - \kappa \int_E \Lambda(y) \Phi(y) F_t^k({\rm d} y) +
\int_E \Phi(y)Q(F_t^k,F_t^k)({\rm d} y)\\
=& \int_E \int_E {\mathcal{B}}\Phi(y,y^*) F_t^k({\rm d} y^*) F_t^k({\rm d} y) + \kappa (F_t^k(E)-1)
\int_E \Lambda(y) \Phi(y) F_t^k({\rm d} y).\notag
\end{align}
For the last equality, we used that for any $F,G \in {\mathcal{M}}(E)$, we have
\begin{equation}\label{zzz}
\int_E\int_E{\mathcal{B}} \Phi(y,y^*) G({\rm d} y^*)F({\rm d} y)= \int_E \Phi(y)Q(F,G)({\rm d} y)-\kappa G(E)
\int_E \Lambda(y) \Phi(y) F({\rm d} y).
\end{equation}
Applying \eqref{yyy} with $\Phi=1$, we see that
$$
\frac d {dt} F_t^k(E) \leq
\kappa (F_t^k(E)-1)\int_E \Lambda(y) F_t^k({\rm d} y).
$$
Since $F_0^k(E)=F_0(E)=1$ and since $t\mapsto \int_E \Lambda(y)F_t^k({\rm d} y)$ is locally bounded,
we conclude that $F_t^k(E)\leq 1$ for all $t\geq 0$ (because
$(d/dt) [(F_t^k(E)-1)\exp(- \kappa \int_0^t \int_E \Lambda(y) F_s^k({\rm d} y){\rm d} s)] \leq 0$).
\vskip.13cm
Applying next \eqref{yyy} with $\Phi=\Lambda$ and using that $F_t^k(E)\leq 1$, we find that
\begin{align*}
\frac d {dt} \int_E \Lambda(y) F_t^k({\rm d} y)
\leq& \int_E \int_E {\mathcal{B}}\Lambda (y,y^*) F_t^k({\rm d} y^*) F_t^k({\rm d} y)
\leq \kappa(1+e_0) F_t^k(E)\int_E \Lambda(y) F_t^k({\rm d} y),
\end{align*}
because ${\mathcal{B}} \Lambda(y,y^*)=\Lambda(y)\int_H[\Lambda(h(y,y^*,z))-\Lambda(y)]\nu_{y,y^*}({\rm d} z)\leq \kappa(1+e_0)
\Lambda(y)$, see
\eqref{xxx}. Since $F_0^k=F_0$ and since, again, $F_t^k(E)\leq 1$, we conclude that
$\int_E \Lambda(y) F_t^k({\rm d} y) \leq [\int_E \Lambda(y) F_0({\rm d} y)]\exp(\kappa (1+e_0)t)$.
\vskip.13cm
{\it Step 6.} By Step 5, the series of nonnegative measures
$F_t=\sum_{\Upsilon \in {\mathcal{T}}} \Gamma_t(J_\Upsilon)$ converges, satisfies
$F_t(E)\leq 1$, and we know that $t\mapsto \int_E \Lambda(y)F_t({\rm d} y)$ is locally bounded.
Passing to the limit in the time-integrated version of \eqref{diff}, we find that
for all $\Phi \in C_b(E)$, all $t\geq 0$,
\begin{align}\label{ttac}
\int_E \Phi(y) F_t({\rm d} y) =&\int_E \Phi(y) F_0({\rm d} y) - \kappa \int_0^t \int_E \Lambda(y)
\Phi(y) F_s({\rm d} y){\rm d} s \\
& + \sum_{\Upsilon \in {\mathcal{T}}\setminus\{\circ\}} \int_0^t \int_E \Phi(y)
Q(\Gamma_s(J_{\Upsilon_\ell}),\Gamma_s(J_{\Upsilon_r}))({\rm d} y){\rm d} s. \notag
\end{align}
To justify the limiting procedure, it suffices to use that $t\mapsto \int_E \Lambda(y)F_t({\rm d} y)$
is locally bounded, as well as $t\mapsto \sum_{\Upsilon \in {\mathcal{T}}\setminus\{\circ\}}
Q(\Gamma_t(J_{\Upsilon_\ell}),\Gamma_t(J_{\Upsilon_r}))(E)=
\sum_{\Upsilon_1 \in {\mathcal{T}}, \Upsilon_2 \in {\mathcal{T}}} Q(\Gamma_t(J_{\Upsilon_1}),\Gamma_t(J_{\Upsilon_2}))(E)$, which equals
$Q(\sum_{\Upsilon_1 \in {\mathcal{T}}}\Gamma_t(J_{\Upsilon_1}),\sum_{\Upsilon_2 \in {\mathcal{T}}}\Gamma_t(J_{\Upsilon_2}))(E)=Q(F_t,F_t)(E)
=\kappa F_t(E) \int_E \Lambda(y)F_t({\rm d} y)$. We used that
the map $\Upsilon \mapsto (\Upsilon_\ell,\Upsilon_r)$
is bijective from ${\mathcal{T}}\setminus\{\circ\}$ onto ${\mathcal{T}}\times{\mathcal{T}}$, as well as
the bilinearity of $Q$.
In the same way, \eqref{ttac} rewrites as
\begin{align*}
\int_E \Phi(y) F_t({\rm d} y) =&\int_E \Phi(y) F_0({\rm d} y) - \kappa \int_0^t \int_E \Lambda(y) \Phi(y) F_s({\rm d} y){\rm d} s
+ \int_0^t \int_E \Phi(y) Q(F_s,F_s)({\rm d} y){\rm d} s\\
=&\int_E \Phi(y) F_0({\rm d} y) + \int_0^t \int_E \int_E {\mathcal{B}}\Phi(y,y^*) F_s({\rm d} y^*)F_s({\rm d} y){\rm d} s\\
&+\kappa \int_0^t (F_s(E)-1) \Big(\int_E \Lambda(y) \Phi(y) F_s({\rm d} y) \Big){\rm d} s,
\end{align*}
see \eqref{zzz}.
To conclude that $(F_t)_{t\geq 0}$ solves (A), it only remains to verify that
$t\mapsto \int_E |v|^\gamma F_t({\rm d} m,{\rm d} v)$ is locally bounded, which follows from the fact that
$\int_E |v|^\gamma F_t({\rm d} m,{\rm d} v)\leq \int_E \Lambda(y) F_t({\rm d} y)$, and
that $F_t(E)=1$ for all $t\geq 0$.
Applying the previous equation with $\Phi=1$ (for which ${\mathcal{B}}\Phi=0$),
we find that $F_t(E)=1+ \int_0^t (F_s(E)-1) \alpha_s {\rm d} s$,
where $\alpha_s= \kappa \int_E \Lambda(y) F_s({\rm d} y)$ is locally bounded. Hence $F_t(E)=1$ for all $t\geq 0$
by the Gronwall lemma.
The proof of Proposition \ref{psum} is complete. \hfill $\square$
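The combinatorics underlying the expansion can be made concrete: since $\Upsilon \mapsto (\Upsilon_\ell,\Upsilon_r)$ is a bijection from ${\mathcal{T}}\setminus\{\circ\}$ onto ${\mathcal{T}}\times{\mathcal{T}}$, the number of ordered binary trees with exactly $n$ nodes satisfies the Catalan recurrence. A short illustrative Python sketch (encoding $\circ$ as {\tt None} and any other tree by the pair of its subtrees):

```python
from functools import lru_cache

def trees(n):
    """All ordered binary trees with exactly n nodes: None encodes the
    empty tree (circ), and a pair (l, r) encodes the root of Upsilon
    together with its subtrees (Upsilon_l, Upsilon_r)."""
    if n == 0:
        return [None]
    out = []
    for k in range(n):          # k nodes in the left subtree
        for left in trees(k):
            for right in trees(n - 1 - k):
                out.append((left, right))
    return out

@lru_cache(maxsize=None)
def catalan(n):
    """Catalan numbers, via the recurrence induced by the bijection
    Upsilon <-> (Upsilon_l, Upsilon_r)."""
    if n == 0:
        return 1
    return sum(catalan(k) * catalan(n - 1 - k) for k in range(n))
```

One recovers $\#\{\Upsilon : n \hbox{ nodes}\}=1,1,2,5,14,42,\dots$, so that the series indexed by ${\mathcal{T}}$ is genuinely infinite but countable.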
\section{Well-posedness of (A)}\label{swp}
We have already checked (twice) the existence part of Proposition \ref{beeu}. We now turn to uniqueness.
Let us consider two solutions $F$ and $G$ to (A) with $F_0=G_0$. By assumption,
we know that
$\alpha_t=\int_E \Lambda(y) (F_t+G_t)({\rm d} y)=(1+e_0)\int_E (1+|v|^\gamma)(F_t+G_t)({\rm d} m,{\rm d} v)$
is locally bounded. Hence, setting
$\epsilon^M_t=\int_E \Lambda(y)\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{\Lambda(y)\geq M\}} (F_t+G_t)({\rm d} y)$, we have
$\lim_{M\to \infty} \int_0^t \epsilon^M_s {\rm d} s=0$ for all $t\geq 0$.
\vskip.13cm
We use the total variation distance $u_t=||F_t-G_t||_{TV}=\sup \{u_t^\Phi : \Phi \in C_b(E),
||\Phi||_\infty \leq 1\}$
where $u_t^\Phi=\int_E \Phi(y)(F_t-G_t)({\rm d} y)$. We also have $u_t=\int_E |F_t-G_t|({\rm d} y)$, where
for $\mu$ a
finite signed measure on $E$, $|\mu|=\mu_++\mu_-$ with the usual definitions of $\mu_+$ and $\mu_-$.
\vskip.13cm
We fix $\Phi \in C_b(E)$ such that $||\Phi||_\infty \leq 1$ and we use Definition \ref{GE} to write
$$
\frac d{dt}u_t^\Phi= \int_E\int_E {\mathcal{B}}\Phi(y,y^*) (F_t({\rm d} y^*)F_t({\rm d} y)-G_t({\rm d} y^*)G_t({\rm d} y))
=A_t^\Phi+B_t^\Phi,
$$
where $A_t^\Phi=\int_E\int_E {\mathcal{B}}\Phi(y,y^*) (F_t-G_t)({\rm d} y^*)F_t({\rm d} y)$
and $B_t^\Phi=\int_E\int_E {\mathcal{B}}\Phi(y,y^*) G_t({\rm d} y^*)(F_t-G_t)({\rm d} y)$.
\vskip.13cm
Using only that $|{\mathcal{B}}\Phi(y,y^*)|\leq 2\kappa ||\Phi||_\infty \Lambda(y) \leq 2\kappa \Lambda(y)$, we get
$$
A_t^\Phi \leq 2\kappa \int_E\Lambda(y) F_t({\rm d} y)\int_E |F_t-G_t|({\rm d} y^*)\leq
2\kappa \alpha_t ||F_t-G_t||_{TV}=2\kappa \alpha_t u_t.
$$
We next recall that $||\Phi||_\infty\leq 1$ and that $\int_E G_t({\rm d} y^*)=1$ and we write
\begin{align*}
B^\Phi_t = & \int_E\int_E\int_H \Lambda(y)\Phi(h(y,y^*,z)) \nu_{y,y^*}({\rm d} z) G_t({\rm d} y^*)(F_t-G_t)({\rm d} y)
- \kappa \int_E \Lambda(y)\Phi(y)(F_t-G_t)({\rm d} y)\\
\leq & \kappa \int_E \Lambda(y)|F_t-G_t|({\rm d} y)-\kappa \int_E \Lambda(y)\Phi(y)(F_t-G_t)({\rm d} y).
\end{align*}
Since now $|F_t-G_t|({\rm d} y)-\Phi(y)(F_t-G_t)({\rm d} y)$ is a nonnegative measure bounded by
$2(F_t+G_t)({\rm d} y)$, we may write, for any
$M\geq 1$,
\begin{align*}
B^\Phi_t \leq & \kappa M \int_E [|F_t-G_t|({\rm d} y)-\Phi(y)(F_t-G_t)({\rm d} y)] + 2\kappa
\int_E \Lambda(y)\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{\Lambda(y)\geq M\}} (F_t+G_t)({\rm d} y)\\
=& \kappa M u_t - \kappa M u_t^\Phi + 2\kappa \epsilon^M_t.
\end{align*}
All this proves that $(d/dt)u_t^\Phi \leq 2\kappa \alpha_t u_t+\kappa M u_t - \kappa M u_t^\Phi + 2\kappa \epsilon^M_t$,
whence
$$
\frac d {dt} (u_t^\Phi e^{\kappa M t}) \leq [2\kappa \alpha_t u_t+\kappa M u_t + 2\kappa \epsilon^M_t]e^{\kappa M t}.
$$
Integrating in time (recall that $u_0^\Phi=0$) and taking the supremum over $\Phi \in C_b(E)$ such that
$||\Phi||_\infty\leq 1$, we find
$u_te^{\kappa M t} \leq \int_0^t [(2\kappa \alpha_s + \kappa M)u_s +2\kappa \epsilon^M_s]e^{\kappa M s} {\rm d} s.$
\vskip.13cm
Recall the following generalized Gronwall lemma: if we have three locally bounded nonnegative functions
$v,g,h$ such that $v_t \leq \int_0^t (h_s v_s+g_s){\rm d} s$ for all $t\geq 0$, then
$v_t \leq \int_0^t g_s \exp(\int_s^t h_u {\rm d} u) {\rm d} s$. Applying this result with
$v_t=u_te^{\kappa M t}$, $g_t=2\kappa \epsilon^M_te^{\kappa M t}$ and $h_t=2\kappa \alpha_t + \kappa M$, we get
$$
u_t e^{\kappa M t} \leq 2\kappa \int_0^t \epsilon^M_s \exp\Big(
\kappa M s+2 \kappa \int_s^t \alpha_u {\rm d} u + \kappa M(t-s) \Big) {\rm d} s,
$$
so that $u_t \leq 2\kappa \int_0^t \epsilon^M_s \exp(2 \kappa \int_s^t \alpha_u{\rm d} u) {\rm d} s$.
Recalling that $\alpha$ is locally bounded and that $\int_0^t \epsilon^M_s {\rm d} s$ tends to $0$ as $M\to \infty$,
we conclude that $u_t=0$, which was our goal. The proof of Proposition \ref{beeu} is now complete.
\hfill $\square$
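The generalized Gronwall lemma used above can be checked numerically in the extremal case where the integral inequality is an equality, i.e. $v'_t=h_t v_t + g_t$ with $v_0=0$; the explicit Euler scheme below is only an illustrative sketch, with toy choices of $g$ and $h$.

```python
import math

def gronwall_check(h, g, T, steps=20000):
    """Solve v' = h(t) v + g(t), v(0) = 0 (the extremal case of the
    integral inequality) by explicit Euler, and compare v(T) with the
    Gronwall bound  int_0^T g(s) exp(int_s^T h(u) du) ds."""
    dt = T / steps
    v = 0.0
    for i in range(steps):
        t = i * dt
        v += dt * (h(t) * v + g(t))
    # suffix integrals int_{s_i}^T h(u) du on the same grid
    suffix = [0.0] * (steps + 1)
    for i in range(steps - 1, -1, -1):
        suffix[i] = suffix[i + 1] + dt * h(i * dt)
    bound = sum(dt * g(i * dt) * math.exp(suffix[i + 1]) for i in range(steps))
    return v, bound
```

For $g\equiv h\equiv 1$ and $T=1$, both sides are close to $e-1$, and $v(T)$ stays below the bound, as the lemma predicts.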
\vskip.13cm
We end this section with the
\begin{proof}[Proof of Remark \ref{dec}]
We fix $A\geq 1$ and apply \eqref{gew} with $\Phi_A(m,v)=|v|^2\land A$, which belongs to $C_b(E)$.
With the notation $y=(m,v)$ and $y^*=(m^*,v^*)$, we find
\begin{align*}
{\mathcal{B}}\Phi_A(y,y^*)=&\Lambda(v) \int_H[|v''(v,v^*,z)|^2\land A - |v|^2\land A]\nu_{y,y^*}({\rm d} z)\\
=&\Lambda(v)q(v,v^*)
\int_{\sS} [|v'(v,v^*,\sigma)|^2\land A- |v|^2\land A] \beta_{v,v^*}(\sigma){\rm d}\sigma\\
=&\kappa(1+e_0)\frac{|v-v^*|^\gamma}{1+|v^*|^2}
\Big[\kappa^{-1}\int_{\sS} (|v'(v,v^*,\sigma)|^2\land A ) \beta_{v,v^*}(\sigma){\rm d} \sigma - |v|^2\land A\Big]\\
\leq & \kappa(1+e_0)\frac{|v-v^*|^\gamma}{1+|v^*|^2}
\Big[ \Big(\kappa^{-1}\int_{\sS}|v'(v,v^*,\sigma)|^2\beta_{v,v^*}(\sigma){\rm d} \sigma\Big)
\land A -
|v|^2\land A\Big].
\end{align*}
But a simple computation, recalling \eqref{vprime}
and using that
$$
\frac{|v-v^*|}\kappa \int_{\sS} \sigma \beta_{v,v^*}(\sigma){\rm d} \sigma =
\frac{|v-v^*|}\kappa \int_{\sS} \sigma b({\textstyle{\langle\frac{v-v^*}{|v-v^*|},\sigma\rangle}}){\rm d} \sigma
= c (v-v^*)
$$
where $c= 2\pi\kappa^{-1}\int_0^\pi \sin \theta \cos\theta b(\cos\theta){\rm d} \theta \in [-1,1]$
(recall \eqref{beta}) shows that
\begin{equation}\label{ccc}
\kappa^{-1}\int_{\sS} |v'(v,v^*,\sigma)|^2\beta_{v,v^*}(\sigma){\rm d} \sigma=
\frac{1+c}2|v|^2+\frac{1-c}2|v^*|^2=(1-\alpha)|v|^2+\alpha|v^*|^2,
\end{equation}
where $\alpha=(1-c)/2 \in [0,1]$. Hence,
$$
{\mathcal{B}}\Phi_A(y,y^*)+{\mathcal{B}}\Phi_A(y^*,y) \leq \kappa(1+e_0)|v-v^*|^\gamma G_A(|v|^2,|v^*|^2),
$$
where $G_A(x,x^*)= (1+x^*)^{-1}[((1-\alpha)x+\alpha x^*)\land A - x\land A] +
(1+x)^{-1}[((1-\alpha)x^*+\alpha x)\land A - x^*\land A]$.
One can check that $G_A(x,x^*)\leq 0$ if $x\lor x^*\leq A$ and it always holds true that
$G_A(x,x^*)\leq (1+x^*)^{-1}\alpha x^*+ (1+x)^{-1}\alpha x \leq 2$. All in all,
$G_A(x,x^*) \leq 2(\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{x>A\}}+\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{x^*>A\}})$. Consequently, applying \eqref{gew}
and using a symmetry argument,
\begin{align*}
\int_E (|v|^2&\land A)F_t({\rm d} y)=\!\int_E (|v|^2\land A)F_0({\rm d} y)
+ \int_0^t \int_E\int_E {\mathcal{B}}\Phi_A(y,y^*) F_s({\rm d} y^*)F_s({\rm d} y) {\rm d} s\\
=&\!\int_E (|v|^2\land A)F_0({\rm d} y)
+ \frac12 \int_0^t \int_E\int_E [{\mathcal{B}}\Phi_A(y,y^*)+{\mathcal{B}}\Phi_A(y^*,y)] F_s({\rm d} y^*)F_s({\rm d} y) {\rm d} s\\
\leq & \int_E |v|^2F_0({\rm d} y)+ \kappa(1+e_0)
\int_0^t \int_E\int_E |v-v^*|^\gamma[\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{|v|^2>A\}}+\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{|v^*|^2>A\}}] F_s({\rm d} y^*)F_s({\rm d} y) {\rm d} s.
\end{align*}
Letting $A\to \infty$ and using that $\int_0^t \int_E\int_E |v-v^*|^\gamma F_s({\rm d} y^*)F_s({\rm d} y) {\rm d} s<\infty$
(which follows from the fact that $\sup_{[0,t]} \int |v|^\gamma F_s({\rm d} y)<\infty$), we conclude that
$\int_E |v|^2 F_t({\rm d} m,{\rm d} v)\leq \int_E |v|^2 F_0({\rm d} m,{\rm d} v)$.
\end{proof}
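Identity \eqref{ccc} can also be checked by direct Monte-Carlo simulation in the isotropic case where $b$ is constant, so that $c=0$ and $\alpha=1/2$. The elastic parametrization $v'(v,v^*,\sigma)=\frac{v+v^*}2+\frac{|v-v^*|}2\sigma$ is assumed in the sketch below (it is the standard $\sigma$-representation; \eqref{vprime} is stated elsewhere in the paper).

```python
import math
import random

def uniform_sphere(rng):
    """Uniform point on the unit sphere S^2 (Archimedes' parametrization)."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def mean_vprime_sq(v, vs, n_samples=100000, seed=0):
    """Sigma-average of |v'(v, v*, sigma)|^2, i.e. the left-hand side of
    (ccc) for isotropic beta (b constant), assuming the standard
    parametrization v' = (v + v*)/2 + (|v - v*|/2) sigma."""
    rng = random.Random(seed)
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(v, vs)))
    mid = tuple(0.5 * (a + b) for a, b in zip(v, vs))
    acc = 0.0
    for _ in range(n_samples):
        s = uniform_sphere(rng)
        vp = [m + 0.5 * d * si for m, si in zip(mid, s)]
        acc += sum(c * c for c in vp)
    return acc / n_samples
```

For $c=0$ the right-hand side of \eqref{ccc} reduces to $(|v|^2+|v^*|^2)/2$, which the Monte-Carlo average reproduces up to statistical error.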
\section{Relation between (A) and the Boltzmann equation}\label{sr}
It remains to prove Proposition \ref{relat}. In the whole section, we consider
the solution $F$ to (A) starting from some $F_0 \in {\mathcal{P}}(E)$
such that $\int_E [|v|^\gamma+m(1+|v|^{2+2\gamma})] F_0({\rm d} y)<\infty$.
We define the nonnegative measure $f_t$ on ${\mathbb{R}}^3$
by $f_t(A)=\int_E m\hbox{\rm 1}{\hskip -2.8 pt}\hbox{\rm I}_{\{v \in A\}} F_t({\rm d} y)$ for all $A\in{\mathcal{B}}({\mathbb{R}}^3)$
and we assume that $f_0 \in {\mathcal{P}}({\mathbb{R}}^3)$ and
that $\int_{{\mathbb{R}}^3}|v|^2 f_0({\rm d} v)=e_0$, where $e_0$ was used in Subsection \ref{tlf} to build the
coefficients of (A). We want to prove
that $f=(f_t)_{t\geq 0}$ is a weak solution to \eqref{be}.
\vskip.13cm
The main difficulty is to properly establish the following estimate, whose proof is postponed
to the end of the section.
\begin{lem}\label{ttc}
For any $T>0$, $\sup_{[0,T]}\int_E m (1+|v|^{2+\gamma})F_t({\rm d} m,{\rm d} v)<\infty$.
\end{lem}
Next, we handle a few preliminary computations.
\begin{rk}\label{tf}
(i) For all $\Phi\in C(E)$ of the form $\Phi(m,v)=m\phi(v)$ with $\phi \in C({\mathbb{R}}^3)$,
using the notation $y=(m,v)$ and $y^*=(m^*,v^*)$, it holds that
$$
{\mathcal{B}} \Phi(y,y^*)=mm^* {\mathcal{A}}\phi(v,v^*)+\kappa m \Lambda(v)\phi(v)
\Big(\frac{m^*(1+|v^*|^2)}{1+e_0} -1\Big).
$$
(ii) Assume furthermore that there is $\alpha \geq 0$ such that for all $v\in {\rr^3}$,
$|\phi(v)|\leq C (1+|v|^{2+\alpha})$. Then $|{\mathcal{B}}\Phi(y,y^*)|\leq C [\Lambda(y)+\Lambda(y^*)][1+m(1+|v|^{2+\alpha})]
[1+m^*(1+|v^*|^{2+\alpha})]$.
\end{rk}
\begin{proof}
For (i), it suffices to write
$$
{\mathcal{B}} \Phi(y,y^*)=\Lambda(v)\int_H\Big[\frac{mm^*(1+|v^*|^2)}{1+e_0}\phi(v''(v,v^*,z))-m\phi(v)\Big] \nu_{y,y^*}
({\rm d} z)={\mathcal{B}}^1 \Phi(y,y^*)+{\mathcal{B}}^2 \Phi(y,y^*),
$$
where
\begin{align*}
{\mathcal{B}}^1\Phi(y,y^*)=&\Lambda(v)\frac{mm^*(1+|v^*|^2)}{1+e_0}\int_H
\Big[\phi(v''(v,v^*,z))-\phi(v)\Big] \nu_{y,y^*}({\rm d} z)\\
=&\Lambda(v)q(v,v^*)\frac{mm^*(1+|v^*|^2)}{1+e_0}\int_{\sS}
\Big[\phi(v'(v,v^*,\sigma))-\phi(v) \Big] \beta_{v,v^*}(\sigma) {\rm d} \sigma,
\end{align*}
which equals $mm^*{\mathcal{A}}\phi(v,v^*)$, and where
\begin{align*}
{\mathcal{B}}^2\Phi(y,y^*)=&\kappa \Lambda(v) \phi(v) \Big[\frac{mm^*(1+|v^*|^2)}{1+e_0} -m\Big].
\end{align*}
For point (ii), we first observe that $|{\mathcal{A}}\phi(v,v^*)|\leq C |v-v^*|^\gamma(1+|v|^{2+\alpha}+|v^*|^{2+\alpha})$,
because $|v'(v,v^*,\sigma)|\leq |v|+|v^*|$, see \eqref{vprime}.
Using next that $|v-v^*|^\gamma\leq |v|^\gamma+|v^*|^\gamma$
and that $\Lambda(y)=\Lambda(v)=(1+e_0)(1+|v|^\gamma)$, we thus find
\begin{align*}
|{\mathcal{B}}\Phi(y,y^*)|\leq & C mm^*(|v|^\gamma+|v^*|^\gamma)(1+|v|^{2+\alpha}+|v^*|^{2+\alpha})
+ C \Lambda(v) m(1+|v|^{2+\alpha})(1+m^*(1+|v^*|^2)),
\end{align*}
from which the conclusion easily follows.
\end{proof}
We can now give the
\begin{proof}[Proof of Proposition \ref{relat}]
For any $\phi\in C({\mathbb{R}}^3)$ such that $|\phi(v)|\leq C(1+|v|^2)$, we can apply \eqref{gew}
with $\Phi(m,v)=m\phi(v)$. To check this properly, first apply \eqref{gew} with
$\Phi(m,v)=[m\land A] \phi_A(v)$, where $\phi_A(v)=(\phi(v)\land A) \lor (-A)$, and let $A\to \infty$.
This essentially relies on the facts
that
\vskip.13cm
\noindent $\bullet$ $|{\mathcal{B}}\Phi(y,y^*)|\leq C [\Lambda(y)+\Lambda(y^*)][1+m(1+|v|^2)][1+m^*(1+|v^*|^2)]$
by Remark \ref{tf}-(ii) (with $\alpha=0$), whence
$ |{\mathcal{B}}\Phi(y,y^*)|\leq C
[1+|v|^\gamma+m(1+|v|^{2+\gamma})][1+|v^*|^\gamma+m^*(1+|v^*|^{2+\gamma})]$ and
\vskip.13cm
\noindent $\bullet$ $t\mapsto \int_E [1\!+\!|v|^\gamma\!+\!m (1\!+\!|v|^{2+\gamma})]F_t({\rm d} m,{\rm d} v)<\infty$
is locally bounded by Lemma \ref{ttc} and Definition \ref{GE}.
\vskip.13cm
So, applying \eqref{gew} and using the formula of Remark \ref{tf}-(i), we find
\begin{align*}
\int_E m\phi(v)F_t({\rm d} y)=&\int_E m\phi(v)F_0({\rm d} y)+\int_0^t \int_E\int_E mm^* {\mathcal{A}}\phi(v,v^*)
F_s({\rm d} y^*)F_s({\rm d} y){\rm d} s\\
&+\kappa \int_0^t \int_E\int_E m \Lambda(v)\phi(v) \Big(\frac{m^*(1+|v^*|^2)}{1+e_0} -1\Big)
F_s({\rm d} y^*)F_s({\rm d} y){\rm d} s.
\end{align*}
This precisely rewrites, by definition of $f_t$,
\begin{align}\label{ww}
\int_{{\mathbb{R}}^3} \phi(v)f_t({\rm d} v)=&\int_{{\mathbb{R}}^3} \phi(v)f_0({\rm d} v)+ \int_0^t \int_{{\mathbb{R}}^3}\int_{{\mathbb{R}}^3} {\mathcal{A}}\phi(v,v^*)
f_s({\rm d} v^*)f_s({\rm d} v) {\rm d} s \\
&+ \kappa \int_0^t (\Theta_s-1) \Big(\int_{{\mathbb{R}}^3} \Lambda(v)\phi(v)f_s({\rm d} v)\Big) {\rm d} s, \nonumber
\end{align}
where $\Theta_t=(1+e_0)^{-1}\int_{{\mathbb{R}}^3} (1+|v|^2)f_t({\rm d} v)$.
\vskip.13cm
But, with $\phi(v)=(1+e_0)^{-1}(1+|v|^2)$, recalling \eqref{ccc}, it holds that
\begin{equation}\label{coco}
{\mathcal{A}}\phi(v,v^*)+{\mathcal{A}}\phi(v^*,v)=\frac{\kappa|v-v^*|^\gamma}{1+e_0}\big[(1-\alpha)|v|^2+\alpha |v^*|^2 - |v|^2+
(1-\alpha)|v^*|^2+\alpha |v|^2 - |v^*|^2\big]=0.
\end{equation}
Hence applying \eqref{ww} and using a symmetry argument, we find
$$
\Theta_t = 1 + \kappa \int_0^t (\Theta_s-1) \Big(\int_{{\mathbb{R}}^3} \Lambda(v)\phi(v)f_s({\rm d} v)\Big) {\rm d} s.
$$
Hence $\Theta_t=1$ for all $t\geq 0$ by the Gronwall lemma,
because $\Theta_t=(1+e_0)^{-1}\int_E m(1+|v|^2)F_t({\rm d} m,{\rm d} v)$ and
$\int_{{\mathbb{R}}^3} \Lambda(v)\phi(v)f_t({\rm d} v)= \int_E m(1+|v|^2)(1+|v|^\gamma)F_t({\rm d} m,{\rm d} v)$
are locally bounded by Lemma \ref{ttc}.
\vskip.13cm
Coming back to \eqref{ww}, we thus see that for all $\phi\in C_b({\mathbb{R}}^3)$,
\begin{align*}
\int_{{\mathbb{R}}^3} \phi(v)f_t({\rm d} v)=&\int_{{\mathbb{R}}^3} \phi(v)f_0({\rm d} v)+ \int_0^t \int_{{\mathbb{R}}^3}\int_{{\mathbb{R}}^3} {\mathcal{A}}\phi(v,v^*)
f_s({\rm d} v^*)f_s({\rm d} v) {\rm d} s.
\end{align*}
To complete the proof, it only remains to prove that $f_t({\rr^3})=1$ for all $t\geq 0$,
which follows from the choice
$\phi(v)=1$ (for which ${\mathcal{A}} \phi(v,v^*)=0$), and to check that
$\int_{\rd} |v|^2 f_t({\rm d} v)=e_0$ for all $t\geq 0$, which holds true because
$\int_{\rd} |v|^2 f_t({\rm d} v)=(1+e_0)\Theta_t - f_t({\mathbb{R}}^3)$.
\end{proof}
It only remains to prove Lemma \ref{ttc}.
\begin{proof}[Proof of Lemma \ref{ttc}]
The proof relies on the series expansion
$F_t=\sum_{\Upsilon \in {\mathcal{T}}} \Gamma_t(J_\Upsilon(F_0))$, see Proposition \ref{psum}.
We write $J_\Upsilon=J_\Upsilon(F_0)$ for simplicity. We will make use of the functions
$\Phi_0(m,v)=m(1+|v|^{2})/(1+e_0)$, $\Phi_1(m,v)=m(1+|v|^{2+\gamma})$ and $\Phi_2(m,v)=m(1+|v|^{2+2\gamma})$.
\vskip.13cm
{\it Step 1.} Here we verify that for all $\Upsilon \in {\mathcal{T}}$, all $t\geq 0$,
$D_\Upsilon(t)= \int_0^t \int_E \Phi_2(y) J_\Upsilon({\rm d} s,{\rm d} y)<\infty$. We proceed by induction
as in the proof of Proposition \ref{psum}, Step 1. First, $D_\circ (t)=\int_E \Phi_2(y)F_0({\rm d} y)<\infty$
by assumption. Next, we fix $t\geq 0$,
$\Upsilon \in {\mathcal{T}}\setminus\{\circ\}$, we assume by induction that $D_{\Upsilon_\ell}(t)<\infty$ and
$D_{\Upsilon_r}(t)<\infty$ and prove that $D_\Upsilon(t)<\infty$. We start from
\begin{align*}
D_\Upsilon(t)=&\int_0^t \int_E\int_E\int_H \int_0^s \int_0^s \Phi_2(h(y,y^*,z))\Lambda(y) \nu_{y,y^*}({\rm d} z) \\
& \hskip5cm J_{\Upsilon_r}({\rm d} u^*,{\rm d} y^*)e^{-\kappa\Lambda(y^*)(s-u^*)}J_{\Upsilon_\ell}({\rm d} u,{\rm d} y)
e^{-\kappa \Lambda(y)(s-u)}{\rm d} s.
\end{align*}
But we see from Remark \ref{tf}-(ii) (with $\alpha=2\gamma$) that
\begin{align*}
\Lambda(y)\int_H \Phi_2(h(y,y^*,z))\nu_{y,y^*}({\rm d} z)=& {\mathcal{B}}\Phi_2(y,y^*)+ \kappa \Lambda(y)\Phi_2(y)\\
\leq& C [\Lambda(y)\!+\!\Lambda(y^*)](1\!+\! \Phi_2(y))(1\!+\!\Phi_2(y^*)).
\end{align*}
Together with the Fubini theorem, this gives us
\begin{align*}
D_\Upsilon(t)\leq &C \int_E \int_0^t \int_E \int_0^t (1+\Phi_2(y))(1+\Phi_2(y^*))J_{\Upsilon_r}({\rm d} u^*,{\rm d} y^*)
J_{\Upsilon_\ell}({\rm d} u,{\rm d} y) \\
&\hskip4cm\int_{u\lor u^*}^t [\Lambda(y)+\Lambda(y^*)]
e^{-\kappa\Lambda(y^*)(s-u^*)} e^{-\kappa \Lambda(y)(s-u)}{\rm d} s\\
\leq & C \int_E \int_0^t \int_E \int_0^t (1+\Phi_2(y))(1+\Phi_2(y^*))
J_{\Upsilon_r}({\rm d} u^*,{\rm d} y^*)J_{\Upsilon_\ell}({\rm d} u,{\rm d} y)\\
=& C [J_{\Upsilon_\ell}([0,t]\times E)+D_{\Upsilon_\ell}(t)][J_{\Upsilon_r}([0,t]\times E)+D_{\Upsilon_r}(t)].
\end{align*}
We conclude by induction, using that we already know from Step 1 of the proof of Proposition \ref{psum}
that $J_{\Upsilon}([0,t]\times E)\leq \int_0^t \int_E \Lambda(y)J_\Upsilon({\rm d} s,{\rm d} y) <\infty$
for all $\Upsilon \in {\mathcal{T}}$.
\vskip.13cm
{\it Step 2.} For $k \in {\mathbb{N}}^*$, we define
$F^k_t=\sum_{\Upsilon \in {\mathcal{T}}_k} \Gamma_t(J_\Upsilon(F_0))$ as in the proof of Proposition \ref{psum}, Step 3.
We know that $F^k_0=F_0$ and that for all nonnegative $\Phi \in C_b(E)$, see \eqref{yyy} and recall that
$F^k_t(E)\leq1 $,
\begin{equation}\label{toex}
\int_E \Phi(y)F_t^k({\rm d} y) \leq \int_E \Phi(y)F_0({\rm d} y) +\int_0^t \int_E\int_E{\mathcal{B}}\Phi(y,y^*)F_s^k({\rm d} y^*)
F_s^k({\rm d} y){\rm d} s.
\end{equation}
Also, we immediately deduce from Step 1 that $t\mapsto \int_E \Phi_2(y)F_t^k({\rm d} y)$ is locally bounded,
as well as $t\mapsto \int_E \Lambda(y)F_t^k({\rm d} y)$, see Step 2 of the proof of Proposition \ref{psum}.
It is then easy to extend \eqref{toex} to any function $\Phi \in C(E)$ of the form
$\Phi(m,v)=m\phi(v)$, with $0\leq \phi(v)\leq C(1+|v|^{2+\gamma})$.
This follows from the fact that, by Remark \ref{tf}-(ii) (with $\alpha=\gamma$),
$$
|{\mathcal{B}}\Phi(y,y^*)| \leq C[\Lambda(y)+\Lambda(y^*)](1+\Phi_1(y))(1+\Phi_1(y^*))\leq C (1+\Lambda(y)+\Phi_2(y))
(1+\Lambda(y^*)+\Phi_2(y^*)).
$$
\vskip.13cm
{\it Step 3.} We now verify that $\int_E \Phi_0(y)F_t^k({\rm d} y)\leq 1$ for all $t\geq 0$.
To this end, we apply \eqref{toex} with $\Phi=\Phi_0$ for which, by Remark \ref{tf}-(i),
$$
{\mathcal{B}} \Phi_0(y,y^*)= m m^*{\mathcal{A}}\phi(v,v^*)+\kappa \Lambda(y)\Phi_0(y)[\Phi_0(y^*)-1 ],
$$
where $\phi(v)=(1+|v|^2)/(1+e_0)$. Using that ${\mathcal{A}}\phi(v,v^*)+{\mathcal{A}}\phi(v^*,v)=0$ (recall
\eqref{coco}) and a symmetry argument, we find that
\begin{align*}
\int_E \Phi_0(y)F_t^k({\rm d} y) \leq & \int_E \Phi_0(y)F_0({\rm d} y) +
\int_0^t \int_E\int_E \kappa \Lambda(y)\Phi_0(y)
[\Phi_0(y^*)-1 ]F_s^k({\rm d} y^*) F_s^k({\rm d} y){\rm d} s\\
=& 1 + \int_0^t \Big(\int_E \kappa \Lambda(y)\Phi_0(y) F_s^k({\rm d} y)\Big)\Big( \int_E \Phi_0(y)F_s^k({\rm d} y)-1
\Big) {\rm d} s.
\end{align*}
Setting $u_t=\int_E \Phi_0(y)F_t^k({\rm d} y)-1$ and $\alpha_t=\int_E \kappa \Lambda(y)\Phi_0(y) F_t^k({\rm d} y)\geq 0$,
we know that $u$ and $\alpha$ are locally bounded by Step 2 (because $\Phi_0(y)+\Lambda(y)\Phi_0(y)
\leq C \Phi_2(y)$)
and that $u_t \leq \int_0^t \alpha_s u_s {\rm d} s$. This implies that $u_t\leq 0$ for all $t\geq 0$,
which was our goal.
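We spell out this last (Gronwall-type) implication: since $\alpha_s\geq0$ and $u_s\leq u_s^+:=\max(u_s,0)$, the inequality $u_t \leq \int_0^t \alpha_s u_s {\rm d} s$ gives $u_t^+\leq \int_0^t \alpha_s u_s^+ {\rm d} s$, and iterating,
$$
u_t^+ \leq \Big(\sup_{[0,t]}u^+\Big)\frac{1}{n!}\Big(\int_0^t\alpha_s{\rm d} s\Big)^n \quad \hbox{for every } n\geq 1;
$$
letting $n\to\infty$ yields $u_t^+=0$, i.e. $u_t\leq 0$.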
\vskip.13cm
{\it Step 4.} We finally apply \eqref{toex} with $\Phi=\Phi_1$. By Remark \ref{tf}, we see that,
with $\phi(v)=1+|v|^{2+\gamma}$,
$$
{\mathcal{B}}\Phi_1(y,y^*)=mm^* {\mathcal{A}}\phi(v,v^*)+\kappa \Lambda(y)\Phi_1(y)(\Phi_0(y^*)-1).
$$
Hence
\begin{align*}
\int_E \Phi_1(y)F_t^k({\rm d} y) \leq & \int_E \Phi_1(y)F_0({\rm d} y)
+ \int_0^t \int_E\int_E mm^* {\mathcal{A}}\phi(v,v^*) F_s^k({\rm d} y^*)F^k_s({\rm d} y) {\rm d} s\\
&+\int_0^t \int_E\int_E \kappa \Lambda(y)\Phi_1(y)[\Phi_0(y^*)-1 ]F_s^k({\rm d} y^*) F_s^k({\rm d} y){\rm d} s\\
\leq& \int_E \Phi_1(y)F_0({\rm d} y) + \int_0^t \int_E\int_E mm^* {\mathcal{A}}\phi(v,v^*) F_s^k({\rm d} y^*)F^k_s({\rm d} y) {\rm d} s
\end{align*}
by Step 3. We next recall a Povzner lemma \cite{p} in the version found in
\cite[Lemma 2.2-(i)]{MW}: for $\alpha>0$, setting $\phi_\alpha(v)=|v|^{2+\alpha}$,
there is a $C_\alpha>0$ such that for all $v,v^* \in {\rr^3}$,
${\mathcal{A}}\phi_\alpha(v,v^*)+{\mathcal{A}}\phi_\alpha(v^*,v) \leq C_\alpha |v-v^*|^{\gamma} (|v||v^*|)^{1+\alpha/2}$.
Actually, the result of \cite{MW} is much stronger. Since $\phi=1+\phi_\gamma$ and since
$\cA1=0$, we conclude that
\begin{align*}
{\mathcal{A}}\phi(v,v^*)+{\mathcal{A}}\phi(v^*,v) \leq & C |v-v^*|^\gamma (|v||v^*|)^{1+\gamma/2}\\
\leq& C|v|^{1+3\gamma/2}|v^*|^{1+\gamma/2}+C|v|^{1+\gamma/2}|v^*|^{1+3\gamma/2}\\
\leq& C (1+|v|^{2+\gamma})(1+|v^*|^2)+C(1+|v|^{2})(1+|v^*|^{2+\gamma}),
\end{align*}
so that
$$
mm^*[{\mathcal{A}}\phi(v,v^*)+{\mathcal{A}}\phi(v^*,v)] \leq C [\Phi_1(y)\Phi_0(y^*)+\Phi_1(y^*)\Phi_0(y)]\:.
$$
Finally, using a symmetry argument twice,
\begin{align*}
\int_E \Phi_1(y)F_t^k({\rm d} y)
\leq &\int_E \Phi_1(y)F_0({\rm d} y) + C \int_0^t \int_E \int_E \Phi_1(y)\Phi_0(y^*) F_s^k({\rm d} y^*)F^k_s({\rm d} y) {\rm d} s\\
\leq & \int_E \Phi_1(y)F_0({\rm d} y) + C \int_0^t \int_E\Phi_1(y)F^k_s({\rm d} y) {\rm d} s
\end{align*}
by Step 3 again.
Hence $\int_E \Phi_1(y)F_t^k({\rm d} y) \leq e^{Ct}\int_E \Phi_1(y)F_0({\rm d} y)$ by the Gronwall lemma.
It then suffices to let $k$ increase to infinity, by monotone convergence, to complete the proof.
\end{proof}
\section{Comments}
King \& Lasota (2019) claimed that there are ``No magnetars in ULXs''. In their opinion, ULX pulsars may
be beamed due to super-Eddington mass transfer, while the neutron star itself is still accreting matter near the Eddington threshold. Using the
spin-up measurements, the magnetic dipole field is then found to be of normal value ($\sim 10^{12} \ \rm G$), from which they conclude that there are no magnetars
in ULXs. However, this is just one solution to the ULX pulsar problem. The super-Eddington luminosity of a ULX pulsar may instead be due to the presence of a
magnetar-strength magnetic field ($\sim 10^{14} \ \rm G$) in the vicinity of the neutron star\footnote{This is also the key point that motivated the proposal of magnetars in the early 1990s (Duncan \& Thompson 1992; Paczynski 1992; Usov 1992). Therefore, proposing the idea of an accreting magnetar upon seeing a super-Eddington luminosity parallels the original invention of magnetars. It is not a ``magnetic analogy'' to intermediate mass black holes.} (Paczynski 1992; Mushtukov et al. 2015). The large-scale magnetic dipole field of ULX pulsars may nevertheless be of normal value (Tong 2015; Dall'Osso et al. 2015; Israel et al. 2017b). Therefore, ULX pulsars can also be accreting low magnetic field magnetars.
In addition to the luminosity and spin-up observations, ULX pulsars also have pulse profile and pulsed fraction measurements (which are unique to pulsars).
The four ULX pulsars (M82 X-2, Bachetti et al. 2014; NGC 7793 P13, F$\rm \ddot{u}$rst et al. 2016, Israel et al. 2017a; NGC 5907 ULX, Israel et al. 2017b; NGC 300 ULX1, Carpano et al. 2018) all have near-sinusoidal pulse profiles. This means that they do not have strong beaming. Their pulsed fractions are relatively high (e.g., $20\%$ for NGC 5907 ULX, Israel et al. 2017b). Therefore, for the pulsed component alone, ULX pulsars may be accreting at a super-Eddington rate. This is in contradiction with the scenario of King \& Lasota (2019). Previous studies already found that the magnetic dipole field of ULX pulsars may not be very high (Tong 2015; Dall'Osso et al. 2015; Israel et al. 2017b). Noting this point, one way is to propose the idea of accreting low magnetic field magnetars (Tong 2015; Israel et al. 2017b). Another way is to consider the ``no magnetar'' solution as in King \& Lasota (2019). One reason for the latter choice may be that the idea of low magnetic field magnetars is not fully appreciated, especially outside the magnetar domain (Rea et al. 2010).
Besides normal magnetars and low magnetic field magnetars, the existence of high magnetic field pulsars (Ng \& Kaspi 2010) further complicates this problem. High magnetic field pulsars are neutron stars with a strong magnetic dipole field, but it is not guaranteed that they are magnetars. A magnetar is not simply a neutron star with a high magnetic dipole field. However, this definition of magnetars is often taken for granted in accreting neutron star and gamma-ray burst studies. Therefore, seeing a strong magnetic dipole field in slow pulsation X-ray pulsars does not establish their magnetar nature (Sanjurjo-Ferrrin et al. 2017). Similarly, a low magnetic dipole field in ULX pulsars cannot rule out the presence of magnetars. At present, they are all ``accreting magnetar candidates'', which is often shortened to ``accreting magnetars''.
In summary, as a new specimen to the zoo of accreting neutron stars, the idea of accreting magnetars should be welcomed and explored in full detail in the future.
\section{Preliminaries}
\noindent The concept of generalized shifts was introduced for the first time
in \cite{AHK} as a generalization of the one-sided shift
$\mathop{\{1,\ldots,k\}^{\mathbb N}\to\{1,\ldots,k\}^{\mathbb N}}\limits_{
\: \: \: \: \:(a_1,a_2,\cdots)\mapsto(a_2,a_3,\cdots)}$
and the two-sided shift
$\mathop{\{1,\ldots,k\}^{\mathbb Z}\to\{1,\ldots,k\}^{\mathbb Z}}\limits_{
\: \: \: \: \:(a_n)_{n\in\mathbb Z}\mapsto(a_{n+1})_{n\in\mathbb Z}}$~\cite{walters, shift}.
Suppose $K$ is a nonempty set with at least two elements, $\Gamma$
is a nonempty set, and $\varphi:\Gamma\to\Gamma$ is an arbitrary map, then
we call
\linebreak
$\sigma_\varphi:\mathop{K^\Gamma\to K^\Gamma\: \: \: \: \:}\limits_{(x_\alpha)_{\alpha\in\Gamma}
\mapsto(x_{\varphi(\alpha)})_{\alpha\in\Gamma}}$ a generalized shift
(for one-sided and two-sided shifts consider $\varphi(n)=n+1$).
It is evident that for a topological space $K$, $\sigma_\varphi:K^\Gamma\to K^\Gamma$
is continuous, where $K^\Gamma$ is equipped with the product topology.
\\
For every Hilbert space $H$ there exists a unique cardinal number $\tau$ such that
$H$ and $\ell^2(\tau)$ are isomorphic~\cite{hilbert1, hilbert2}. All members of the collection
$\{\ell^2(\tau):\tau$ is a non--zero cardinal number$\}$ are Hilbert spaces; moreover, for a
cardinal number $\tau$ and $(x_\alpha)_{\alpha<\tau}\in{\mathbb K}^\tau$
(where $\mathbb{K}\in\{\mathbb{R},\mathbb{C}\}$, depending on whether we work with
real or complex Hilbert spaces)
we have $x=(x_\alpha)_{\alpha<\tau}\in\ell^2(\tau)$ if and only if
$||x||^2:=\mathop{\Sigma}\limits_{\alpha<\tau}|x_\alpha|^2<+\infty$.
Moreover, for $(x_\alpha)_{\alpha<\tau},(y_\alpha)_{\alpha<\tau}\in\ell^2(\tau)$
let $\langle(x_\alpha)_{\alpha<\tau},(y_\alpha)_{\alpha<\tau}\rangle=
\mathop{\Sigma}\limits_{\alpha<\tau}x_\alpha\overline{y_\alpha}$
(the inner product).
For $\varphi:\tau\to\tau$, one may consider $\sigma_\varphi:\mathbb{K}^\tau\to
\mathbb{K}^\tau$; in particular, we may study $\sigma_\varphi\restriction_{\ell^2(\tau)}:\ell^2(\tau)\to
\mathbb{K}^\tau$.
\\
{\bf Convention.} In the following text suppose $\tau>1$ is a cardinal number and
$\varphi:\tau\to\tau$ is arbitrary, we denote $\sigma_\varphi\restriction_{\ell^2(\tau)}:\ell^2(\tau)\to
\mathbb{K}^\tau$ simply by $\sigma_\varphi:\ell^2(\tau)\to\mathbb{K}^\tau$,
and equip $\ell^2(\tau)$ with its usual inner product introduced in the above lines.
Also for cardinal number $\psi$ let (for properties of cardinal numbers and their arithmetic see~\cite{cardinal}):
\[\psi^*:=\left\{\begin{array}{lc} \psi & \psi{\rm \: is \: finite \:,} \\ +\infty & {\rm otherwise\:.}
\end{array}\right.\]
Moreover for $s\neq t$ let $\delta_s^t=0$ and $\delta_s^s=1$.
\\
If $X,Y$ are normed vector spaces, we say the linear map $S:X\to Y$ is an operator if
it is continuous. We call $(X,T)$ a linear dynamical system,
if $X$ is a normed vector space and $T:X\to X$ is an operator~\cite{linearchaos}.
\\
Let's recall that $\mathbb{R}$ is the set of real numbers, $\mathbb{C}$ is the set of complex numbers, and $\mathbb{N}=\{1,2,\ldots\}$ is the set of natural numbers.
\section{On generalized shift operators}
\noindent In this section we show $\sigma_\varphi(\ell^2(\tau))\subseteq\ell^2(\tau)$ (and
$\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ is continuous) if and only if
$\varphi:\tau\to\tau$ is bounded. Moreover
$\sigma_\varphi(\ell^2(\tau))=\ell^2(\tau)$ if and only if
$\varphi:\tau\to\tau$ is one--to--one.
\begin{remark}
We say $f:A\to A$ is bounded if
there exists a finite $n\geq1$ such that for all $a\in A$
we have ${\rm card}(f^{-1}(a))\leq n$~\cite{giordano}.
\end{remark}
\begin{theorem}\label{lem10}
The following statements are equivalent:
\begin{itemize}
\item[1.] $\sigma_\varphi(\ell^2(\tau))\subseteq\ell^2(\tau)$,
\item[2.] $\varphi:\tau\to\tau$ is bounded,
\item[3.] $\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ is a linear continuous map.
\end{itemize}
Moreover in the above case we have
$||\sigma_\varphi||=\sqrt{\sup\{({\rm card}(\varphi^{-1}(\alpha)))^*:\alpha\in\tau\}}$.
\end{theorem}
\begin{proof}
First note that for $x=(x_\alpha)_{\alpha<\tau}$ we have
\[||\sigma_\varphi(x)||^2=\mathop{\Sigma}\limits_{\alpha<\tau}|x_{\varphi(\alpha)}|^2
=\mathop{\Sigma}\limits_{\alpha<\tau}\left(({\rm card}(\varphi^{-1}(\alpha)))^*|x_\alpha|^2\right)\tag{*}\]
(where $0(+\infty)=(+\infty)0=0$).
\\
``(1) $\Rightarrow$ (2)'' Suppose $\sigma_\varphi(\ell^2(\tau))\subseteq\ell^2(\tau)$, for
$\theta<\tau$ we have
$||(\delta_\alpha^\theta)_{\alpha<\tau}||=1$ and:
\[||\sigma_\varphi((\delta_\alpha^\theta)_{\alpha<\tau})||^2=({\rm card}(\varphi^{-1}(\theta)))^*\]
by (*). Hence
$(\delta_\alpha^\theta)_{\alpha<\tau}\in\ell^2(\tau)$ and
$\sigma_\varphi((\delta_\alpha^\theta)_{\alpha<\tau})\in\sigma_\varphi(\ell^2(\tau))\subseteq\ell^2(\tau)$,
thus $\varphi^{-1}(\theta)$ is finite.
\\
Thus $\{{\rm card}(\varphi^{-1}(\alpha)):\alpha\in\tau\}$ is a collection of finite cardinal numbers. If
\linebreak
$\sup\{({\rm card}(\varphi^{-1}(\alpha)))^*:\alpha\in\tau\}=+\infty$, then there exists
a strictly increasing sequence $\{n_k\}_{k\geq1}$ in $\mathbb N$ and sequence $\{\alpha_k\}_{k\geq1}$
in $\tau$ such that for all $k\geq1$ we have ${\rm card}(\varphi^{-1}(\alpha_k))=n_k$. Since
$\{n_k\}_{k\geq1}$ is a one--to--one sequence, $\{\alpha_k\}_{k\geq1}$ is a one--to--one sequence
too.
Consider $(x_\alpha)_{\alpha<\tau}$ with:
\[x_\alpha:=\left\{\begin{array}{lc} \frac{1}{k} & \alpha=\alpha_k,k\geq1\:, \\ 0 & {\rm otherwise}\:.
\end{array}\right.\]
Then $\mathop{\Sigma}\limits_{\alpha<\tau}|x_\alpha|^2=
\mathop{\Sigma}\limits_{k\geq1} x_{\alpha_k}^2=\mathop{\Sigma}\limits_{k\geq1}\frac{1}{k^2}<+\infty$
and $(x_\alpha)_{\alpha<\tau}\in\ell^2(\tau)$. On the other hand by (*) we have
$||\sigma_\varphi((x_\alpha)_{\alpha<\tau})||^2=
\mathop{\Sigma}\limits_{k\geq1}\frac{n_k}{k^2}\geq\mathop{\Sigma}\limits_{k\geq1}\frac{1}{k}=+\infty$
(note that $n_k\geq k$ for all $k\geq1$), in particular
$\sigma_\varphi((x_\alpha)_{\alpha<\tau})\notin\ell^2(\tau)$
which contradicts
$\sigma_\varphi(\ell^2(\tau))\subseteq\ell^2(\tau)$. Therefore
$\sup\{{\rm card}(\varphi^{-1}(\alpha)):\alpha\in\tau\}$ is finite and is a natural number.
\\
``(2) $\Rightarrow$ (3)'' Suppose
$n:=\sup\{{\rm card}(\varphi^{-1}(\alpha)):\alpha\in\tau\}$ is finite.
For all $x=(x_\alpha)_{\alpha<\tau}\in\ell^2(\tau)$
we have:
\[||\sigma_\varphi(x)||
=\sqrt{\mathop{\Sigma}\limits_{\alpha<\tau}\left(({\rm card}(\varphi^{-1}(\alpha)))^*|x_\alpha|^2\right)}
\leq\sqrt{\mathop{\Sigma}\limits_{\alpha<\tau}\left(n|x_\alpha|^2\right)}=\sqrt{n}||x||\]
which shows continuity of $\sigma_\varphi$ and $||\sigma_\varphi||\leq\sqrt{n}$.
On the other hand, there exists $\theta<\tau$ with
${\rm card}(\varphi^{-1}(\theta))=n$. By $||(\delta_\alpha^\theta)_{\alpha<\tau}||=1$ and (*) we have
$||\sigma_\varphi((\delta_\alpha^\theta)_{\alpha<\tau})||=\sqrt{n}$ which leads to
$||\sigma_\varphi||\geq\sqrt{n}$.
\end{proof}
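The following examples (ours, for illustration only; we index coordinates by $n\in{\mathbb N}$ for readability) may help. For $\tau=\aleph_0$ and $\varphi(n)=\lceil n/2\rceil$ we have ${\rm card}(\varphi^{-1}(m))=2$ for every $m\in{\mathbb N}$, and indeed
\[\sigma_\varphi(x_1,x_2,x_3,\ldots)=(x_1,x_1,x_2,x_2,x_3,x_3,\ldots)\:,\]
so $\sigma_\varphi:\ell^2(\aleph_0)\to\ell^2(\aleph_0)$ is continuous with $||\sigma_\varphi||=\sqrt{2}$. On the other hand, the constant map $\varphi(n)=1$ is not bounded, and correspondingly $\sigma_\varphi(x)=(x_1,x_1,x_1,\ldots)$ belongs to $\ell^2(\aleph_0)$ only when $x_1=0$.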
\noindent By \cite[Lemma 4.1]{set} and \cite{AHK}, $\varphi:\tau\to\tau$ is one--to--one (resp. onto) if and only if
$\sigma_\varphi:{\mathbb K}^\tau\to{\mathbb K}^\tau$
is onto (resp. one--to--one); however, the following lemma deals with
$\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$.
\begin{lemma}\label{lem20}
The following statements are equivalent:
\begin{itemize}
\item[1.] $\sigma_\varphi(\ell^2(\tau))=\ell^2(\tau)$,
\item[2.] $\sigma_\varphi(\ell^2(\tau))$ is a dense subset of $\ell^2(\tau)$,
\item[3.] $\varphi:\tau\to\tau$ is one--to--one.
\end{itemize}
In addition the following statements are equivalent too:
\begin{itemize}
\item[i.] $\sigma_\varphi:\ell^2(\tau)\to{\mathbb K}^\tau$ is
one--to--one,
\item[ii.] $\varphi:\tau\to\tau$ is onto.
\end{itemize}
\end{lemma}
\begin{proof}
``(2) $\Rightarrow$ (3)'' Suppose $\varphi:\tau\to\tau$
is not one--to--one, then there exists
$\theta\neq\psi$ with $\mu:=\varphi(\theta)=\varphi(\psi)$.
There exists
$(y_\alpha)_{\alpha<\tau}\in\ell^2(\tau)$ with $||\sigma_\varphi((y_\alpha)_{\alpha<\tau})-
(\delta_\alpha^\theta)_{\alpha<\tau}||<\frac14$, thus for all $\alpha<\tau$ we have
$|y_{\varphi(\alpha)}-\delta_\alpha^\theta|<\frac14$ in particular
$|y_{\varphi(\psi)}-\delta_\psi^\theta|<\frac14$ and $|y_{\varphi(\theta)}-\delta_\theta^\theta|<\frac14$,
thus $|y_{\mu}|<\frac14$ and $|y_{\mu}-1|<\frac14$, which is
a contradiction, therefore $\varphi:\tau\to\tau$ is one--to--one.
\\
``(3) $\Rightarrow$ (1)'' Suppose $\varphi:\tau\to\tau$
is one--to--one, then by
Theorem~\ref{lem10}, $\sigma_\varphi(\ell^2(\tau))\subseteq\ell^2(\tau)$. For
$y=(y_\alpha)_{\alpha<\tau}\in\ell^2(\tau)$, define $x=(x_\alpha)_{\alpha<\tau}$
in the following way:
\[x_\alpha=\left\{\begin{array}{lc} y_\beta & \alpha=\varphi(\beta),\beta<\tau\:, \\
0 & \alpha\in\tau\setminus\varphi(\tau)\:,\end{array}\right.\]
then $||x||=||y||$ and $x\in\ell^2(\tau)$, moreover $\sigma_\varphi(x)=y$, which leads to
$\sigma_\varphi(\ell^2(\tau))=\ell^2(\tau)$.
\\
In order to complete the proof we should prove that (i) and (ii) are equivalent
however by \cite[Lemma 4.1]{set}, (ii) implies (i), so we should just prove that (i) implies (ii).
\\
``(i) $\Rightarrow$ (ii)'' Suppose $\varphi:\tau\to\tau$ is not onto and choose $\theta\in\tau\setminus
\varphi(\tau)$.
Then $(\delta_\alpha^\theta)_{\alpha<\tau},(0)_{\alpha<\tau}$ are two distinct elements of
$\ell^2(\tau)$, however \[\sigma_\varphi((\delta_\alpha^\theta)_{\alpha<\tau})=
\sigma_\varphi((0)_{\alpha<\tau})=(0)_{\alpha<\tau}\]
and
$\sigma_\varphi:\ell^2(\tau)\to{\mathbb K}^\tau$ is not one--to--one.
\end{proof}
\begin{corollary}
The following statements are equivalent:
\begin{itemize}
\item[1.] $\varphi:\tau\to\tau$ is bijective,
\item[2.] $\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ is bijective,
\item[3.] $\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ is an isomorphism,
\item[4.] $\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ is an isometry.
\end{itemize}
\end{corollary}
\begin{proof}
Using Lemma~\ref{lem20}, (1) and (2) are equivalent. It's evident that (3) implies
(2), moreover if $\varphi:\tau\to\tau$ is bijective, then by Theorem~\ref{lem10} two linear
maps $\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ and its inverse
$\sigma_{\varphi^{-1}}:\ell^2(\tau)\to\ell^2(\tau)$ are continuous, hence (1) implies (3).
\\
(1) implies (4),
is evident by (*) in Theorem~\ref{lem10}.
In order to complete the proof, we should just prove that (4) implies (1).
\\
``(4) $\Rightarrow$ (1)'' Suppose $\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ is an isometry,
then $\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ is one--to--one and by Lemma~\ref{lem20},
$\varphi:\tau\to\tau$ is onto. Moreover, $||\sigma_\varphi||=1$ since
$\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ is an isometry. By Theorem~\ref{lem10}
we have $1=||\sigma_\varphi||^2=\sup\{{\rm card}(\varphi^{-1}(\alpha)):\alpha\in\tau\}$,
thus for all $\alpha<\tau$ we have ${\rm card}(\varphi^{-1}(\alpha))\leq1$ and
$\varphi:\tau\to\tau$ is one--to--one.
\end{proof}
\begin{lemma}\label{th}
Let $\mathcal{D}=\{z\in\ell^2(\tau):\sigma_\varphi(z)\in\ell^2(\tau)\}$
(consider $\mathcal D$ with the norm and topology induced from
$\ell^2(\tau)$), then:
\begin{itemize}
\item[1.] $\mathcal{D}$ is a subspace of $\ell^2(\tau)$,
\item[2.] $\{\theta<\tau:\exists(z_\alpha)_{\alpha<\tau}\in \mathcal{D}\: z_\theta\neq0\}=\{\alpha<\tau:\varphi^{-1}(\alpha)$ is finite $\}$.
\end{itemize}
\end{lemma}
\begin{proof}
Since $\sigma_\varphi:\ell^2(\tau)\to{\mathbb K}^\tau$ is linear, we have immediately (1).
\\
2) We have
\[||\sigma_\varphi((\delta_\alpha^\theta)_{\alpha<\tau})||=\left\{\begin{array}{lc} \sqrt{({\rm card}(\varphi^{-1}(\theta)))^*} &
\varphi^{-1}(\theta){\rm \: is \: finite,} \\ +\infty & \varphi^{-1}(\theta) {\rm \: is \: infinite.} \end{array}\right.\tag{**}\]
Thus if $\varphi^{-1}(\theta)$ is finite we have $\sigma_\varphi((\delta_\alpha^\theta)_{\alpha<\tau})\in\ell^2(\tau)$
and $(\delta_\alpha^\theta)_{\alpha<\tau}\in{\mathcal D}$, which shows
$\theta\in\{\beta<\tau:\exists(z_\alpha)_{\alpha<\tau}\in \mathcal{D}\: z_\beta\neq0\}$. Therefore:
\begin{center}
$\{\alpha<\tau:\varphi^{-1}(\alpha)$ is finite $\}\subseteq \{\theta<\tau:\exists(z_\alpha)_{\alpha<\tau}\in \mathcal{D}\: z_\theta\neq0\}$
\end{center}
Now for $\theta<\tau$ suppose there exists $(z_\alpha)_{\alpha<\tau}\in \mathcal{D}$ with $z_\theta\neq0$.
Using the fact that $\mathcal D$ is a subspace of
$\ell^2(\tau)$ we may suppose $z_\theta=1$, now we have
(since $\sigma_\varphi(z)\in\ell^2(\tau)$):
\[||\sigma_\varphi((\delta^\theta_\alpha)_{\alpha<\tau})||=||\sigma_\varphi((z_\alpha \delta^\theta_\alpha)_{\alpha<\tau})||\leq ||\sigma_\varphi(z)||<+\infty\:,\]
by (**), $\varphi^{-1}(\theta)$ is finite, which completes the proof of (2).
\end{proof}
\begin{note}\label{note}
For $H\subseteq\tau$ the closure of subspace generated
by $\{(\delta_\alpha^\theta)_{\alpha<\tau}:\theta\in H\}$ (in $\ell^2(\tau)$)
is $\{(x_\alpha)_{\alpha<\tau}\in\ell^2(\tau):\forall\alpha\notin H\:(x_\alpha=0)\}$.
\end{note}
\begin{theorem}
For $\mathcal{D}=\{z\in\ell^2(\tau):\sigma_\varphi(z)\in\ell^2(\tau)\}$ as in Lemma~\ref{th} and
$M:=\{\alpha<\tau:\varphi^{-1}(\alpha)$ is finite $\}$, the following statements are equivalent:
\begin{itemize}
\item[1.] $\sigma_\varphi\restriction_{\mathcal D}:\mathcal{D}\to\ell^2(\tau)$ is continuous,
\item[2.] there exists finite $n\geq1$ with ${\rm card}(\varphi^{-1}(\alpha))\leq n$ for all $\alpha\in M$,
\item[3.] $\mathcal{D}=\{(x_\alpha)_{\alpha<\tau}\in\ell^2(\tau):\forall\theta\notin M\:\: x_\theta=0\}$,
\item[4.] $\mathcal{D}$ is a closed subspace of $\ell^2(\tau)$.
\end{itemize}
\end{theorem}
\begin{proof} ``(1) $\Rightarrow$ (2)''
Suppose $\sigma_\varphi\restriction_{\mathcal D}:\mathcal{D}\to\ell^2(\tau)$ is continuous,
consider $\theta<\tau$ with finite $\varphi^{-1}(\theta)$.
By the proof of item (2) in Lemma~\ref{th},
$(\delta_\alpha^\theta)_{\alpha<\tau}\in{\mathcal D}$ and
$||\sigma_\varphi((\delta_\alpha^\theta)_{\alpha<\tau})||=\sqrt{({\rm card}(\varphi^{-1}(\theta)))^*}$,
thus
\[+\infty>||\sigma_\varphi||\geq\sup\{\sqrt{({\rm card}(\varphi^{-1}(\theta)))^*}:\theta\in M\}\:.\]
``(2) $\Rightarrow$ (1)''
For
$n:=\sup\{{\rm card}(\varphi^{-1}(\alpha)):\alpha\in M\}<+\infty$
and $x=(x_\alpha)_{\alpha<\tau}\in{\mathcal D}$
we have $||\sigma_\varphi(x)||
=\sqrt{\mathop{\Sigma}\limits_{\alpha\in M}\left(({\rm card}(\varphi^{-1}(\alpha)))^*|x_\alpha|^2\right)}
\leq\sqrt{\mathop{\Sigma}\limits_{\alpha\in M}\left(n|x_\alpha|^2\right)}=\sqrt{n}||x||$
which shows continuity of $\sigma_\varphi\restriction_{\mathcal D}:\mathcal{D}\to\ell^2(\tau)$.
\\
``(3) $\Rightarrow$ (4)''
Note that for nonempty $M$, $\{(x_\alpha)_{\alpha<\tau}\in\ell^2(\tau):\forall\theta\notin M\:\: x_\theta=0\}$
is just $\ell^2(M)$, which is a closed subspace of $\ell^2(\tau)$ (and for $M=\varnothing$ this set is $\{0\}$, which is closed too).
\\
``(4) $\Rightarrow$ (3)''
By the proof of Lemma~\ref{th} and the notation therein, we have
$\{(\delta_\alpha^\theta)_{\alpha<\tau}:\theta\in M\}\subseteq \mathcal D$. Use Note~\ref{note} to complete the proof.
\\
``(2) $\Rightarrow$ (3)''
For
$n:=\sup\{{\rm card}(\varphi^{-1}(\alpha)):\alpha\in M\}<+\infty$
and $x=(x_\alpha)_{\alpha<\tau}\in\ell^2(\tau)$ with $x_\alpha=0$ for all $\alpha\notin M$
we have $||\sigma_\varphi(x)||\leq\sqrt{n}||x||$ which shows $x\in{\mathcal D}$ and
$\{(x_\alpha)_{\alpha<\tau}\in\ell^2(\tau):\forall\theta\notin M\:\: x_\theta=0\}\subseteq\mathcal D$.
Using Lemma~\ref{th} we have
$\mathcal{D}\subseteq\{(x_\alpha)_{\alpha<\tau}\in\ell^2(\tau):\forall\theta\notin M\:\: x_\theta=0\}$.
\\
``(3) $\Rightarrow$ (2)''
If $\sup\{({\rm card}(\varphi^{-1}(\alpha)))^*:\alpha\in M\}=+\infty$, then there exists
a strictly increasing sequence $\{n_k\}_{k\geq1}$ in $\mathbb N$ and sequence $\{\alpha_k\}_{k\geq1}$
in $M$ such that for all $k\geq1$ we have ${\rm card}(\varphi^{-1}(\alpha_k))=n_k$. Arguing as in the proof of Theorem~\ref{lem10},
consider $(x_\alpha)_{\alpha<\tau}\in{\mathcal D}$ with:
\[x_\alpha:=\left\{\begin{array}{lc} \frac{1}{k} & \alpha=\alpha_k,k\geq1\:, \\ 0 & {\rm otherwise}\:.
\end{array}\right.\]
Then
$||\sigma_\varphi((x_\alpha)_{\alpha<\tau})||^2=
\mathop{\Sigma}\limits_{k\geq1}\frac{n_k}{k^2}\geq\mathop{\Sigma}\limits_{k\geq1}\frac{1}{k}=+\infty$, in particular
$\sigma_\varphi((x_\alpha)_{\alpha<\tau})\notin\ell^2(\tau)$
which is in contradiction with
$(x_\alpha)_{\alpha<\tau}\in{\mathcal D}$ and completes the proof.
\end{proof}
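As an illustration of the above equivalences (this example is ours and is not used elsewhere), let $\tau=\aleph_0$ and choose $\varphi$ so that ${\rm card}(\varphi^{-1}(k))=k$ for every $k\geq1$ (e.g., map the $k$-th block of $k$ consecutive integers onto $k$). Then $M=\tau$, $\sup\{{\rm card}(\varphi^{-1}(\alpha)):\alpha\in M\}=+\infty$, and
\[{\mathcal D}=\Big\{(x_k)_{k\geq1}\in\ell^2(\aleph_0):\mathop{\Sigma}\limits_{k\geq1}k|x_k|^2<+\infty\Big\}\]
is a dense, non-closed subspace of $\ell^2(\aleph_0)$ on which $\sigma_\varphi$ is unbounded.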
\subsection{Compact generalized shift operators}
For normed vector spaces $X,Y$ we say the operator $T:X\to Y$ is a
compact operator, if $\overline{T(B^X(0,1))}$ is a compact subset of $Y$,
where $B^X(0,1)=\{x\in X:||x||<1\}$~\cite{conway}.
\begin{theorem}
The generalized shift operator $\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ is compact if and only if $\tau$ is finite.
\end{theorem}
\begin{proof}
If $\tau$ is infinite, then by Theorem~\ref{lem10}, $\{\varphi^{-1}(\alpha):\alpha<\tau\}\setminus\{\varnothing\}$
is a partition of $\tau$ into finite subsets, thus there exists a one--to--one sequence $\{\alpha_n\}_{n\geq1}$
in $\tau$ such that $\{\varphi^{-1}(\alpha_n)\}_{n\geq1}$ is a sequence of nonempty finite
and disjoint subsets of $\tau$. For all distinct $n,m\geq1$ we have
\[||\sigma_\varphi((\delta_\alpha^{\alpha_n})_{\alpha<\tau})-
\sigma_\varphi((\delta_\alpha^{\alpha_m})_{\alpha<\tau})||\geq\sqrt{2}\]
so $\{\sigma_\varphi(\frac12(\delta_\alpha^{\alpha_n})_{\alpha<\tau})\}_{n\geq1}$ is a sequence in
$\overline{\sigma_\varphi(B((0)_{\alpha<\tau},1))}$ without any convergent subsequence.
Therefore $\overline{\sigma_\varphi(B((0)_{\alpha<\tau},1))}$ is not compact and
$\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ is not a compact operator.
\\
On the other hand, if $\tau$ is finite, then every linear operator on $\ell^2(\tau)$
is compact, hence $\sigma_\varphi:\ell^2(\tau)\to\ell^2(\tau)$ is a compact operator.
\end{proof}
\section{Introduction}
A detection of primordial non-Gaussianity has the potential to test today's standard inflationary paradigm and its alternatives for the physics of the early Universe. Measurements of the CMB bispectrum furnish a direct probe of the nature of the initial conditions (see, e.g., \cite{Komatsu2010,Bartolo2010b,Liguori2010,Yadav2010,Komatsu2011} and references therein), but are limited by the two-dimensional nature of the CMB and its damping on small scales. However, the non-Gaussian signatures imprinted in the initial fluctuations of the potential gravitationally evolve into the large-scale structure (LSS) of the Universe, which can be observed in all three dimensions and whose statistical properties can be constrained with galaxy clustering data (for recent reviews, see \cite{Desjacques2010a,Verde2010}).
One of the cleanest probes is the galaxy (or, more generally, any tracer of LSS including clusters, etc.) two-point correlation function (in configuration space) or power spectrum (in Fourier space), which develops a characteristic scale dependence on large scales in the presence of primordial non-Gaussianity of the local type \cite{Dalal2008}. The power spectrum picks up an additional term proportional to $f_{\mathrm{NL}}(b_{\mathrm{G}}-1)$, where $b_{\mathrm{G}}$ is the Gaussian bias of the tracer and $f_{\mathrm{NL}}$ is a parameter describing the strength of the non-Gaussian signal. However, the precision to which we can constrain $f_{\mathrm{NL}}$ is limited by sampling variance on large scales: each Fourier mode is an independent realization of a (nearly) Gaussian random field, so the ability to determine its rms-amplitude from a finite number of modes is limited. Recent work has demonstrated that it is possible to circumvent sampling variance by comparing two different tracers of the same underlying density field \cite{Seljak2009a,Slosar2009,McDonald2009a,Gil-Marin2010}. The idea is to take the ratio of power spectra from two tracers to (at least partly) cancel out the random fluctuations, leaving just the signature of primordial non-Gaussianity itself.
Another important limitation arises from the fact that galaxies are discrete tracers of the underlying dark matter distribution. Therefore, with a finite number of observable objects, the measurement of their power spectrum is affected by shot noise. Assuming galaxies are sampled from a Poisson process, this adds a constant contribution to their power spectrum, given by the inverse tracer number density $1/\bar{n}$. This is particularly important for massive tracers such as clusters, since their number density is very low. Yet they are strongly biased and therefore very sensitive to a potential non-Gaussian signal. Recent work has demonstrated the Poisson shot noise model to be inadequate \cite{Seljak2009b,Hamaus2010,Cai2011}. In particular, \cite{Seljak2009b,Hamaus2010} have shown that a mass-dependent weighting can considerably suppress the stochasticity between halos and the dark matter and thus reduce the shot noise contribution. In view of constraining primordial non-Gaussianity from LSS, this can be a very helpful tool to further reduce the error on $f_{\mathrm{NL}}$.
Both of these methods (sampling variance cancellation and shot noise suppression) have so far been discussed separately in the literature. In this paper we combine the two to derive optimal constraints on $f_{\mathrm{NL}}$ that can be achieved from two-point correlations of LSS. We show that dramatic improvements are feasible, but we do not imply that two-point correlations achieve optimal constraints in general: further gains may be possible when considering higher-order correlations, starting with the bispectrum analysis \cite{Baldauf2011a} (three-point correlations).
This paper is organized as follows: Sec.~\ref{sec:localng} briefly reviews the impact of local primordial non-Gaussianity on the halo bias, and the calculation of the Fisher information content on $f_{\mathrm{NL}}$ from two-point statistics in Fourier space is presented in Sec.~\ref{sec:model}. In Sec.~\ref{sec:sims} we apply our weighting and multitracer methods to dark matter halos extracted from a series of large cosmological $N$-body simulations and demonstrate how we can improve the $f_{\mathrm{NL}}$-constraints. These results are confronted with the halo model predictions in Sec.~\ref{sec:HM} before we finally summarize our findings in Sec.~\ref{sec:conclusion}.
\section{Non-Gaussian Halo Bias}
\label{sec:localng}
Primordial non-Gaussianity of the local type is usually characterized by expanding Bardeen's gauge-invariant potential $\Phi$ about the fiducial Gaussian case. Up to second order, it can be parametrized by the mapping \cite{Salopek1990,Gangui1994,Komatsu2001,Bartolo2004}
\begin{equation}
\Phi({\bf x})=\Phi_{\mathrm{G}}({\bf x})+f_{\mathrm{NL}}\Phi_{\mathrm{G}}^2({\bf x}) \;,
\end{equation}
where $\Phi_{\mathrm{G}}({\bf x})$ is an isotropic Gaussian random field and $f_{\mathrm{NL}}$ a dimensionless phenomenological parameter. Ignoring smoothing (we will consider scales much larger than the Lagrangian size of a halo), the linear density perturbation $\delta_0$ is related to $\Phi$ through the Poisson equation in Fourier space,
\begin{equation}
\delta_0({\bf k},z)=\frac{2}{3}\frac{k^2T(k)D(z)c^2}{\Omega_\mathrm{m} H_0^2}\Phi({\bf k}) \;,
\end{equation}
where $T(k)$ is the matter transfer function and $D(z)$ is the linear growth factor, normalized to $(1+z)^{-1}$ during matter domination. Applying the peak-background split argument to the Gaussian piece of Bardeen's potential, one finds a scale-dependent correction to the linear halo bias \cite{Dalal2008,Matarrese2008,Slosar2008}:
\begin{equation}
b(k,f_{\mathrm{NL}})=b_{\mathrm{G}}+f_{\mathrm{NL}}(b_{\mathrm{G}}-1)u(k,z) \;, \label{b(k,fnl)}
\end{equation}
where $b_{\mathrm{G}}$ is the scale-independent linear bias parameter of the corresponding Gaussian field ($f_{\mathrm{NL}}=0$) and
\begin{equation}
u(k,z)\equiv\frac{3\delta_\mathrm{c}\Omega_\mathrm{m} H_0^2}{k^2T(k)D(z)c^2} \;. \label{u(k,z)}
\end{equation}
Here, $\delta_\mathrm{c}\simeq1.686$ is the linear critical overdensity for spherical collapse. Corrections to Eq.~(\ref{b(k,fnl)}) beyond linear theory have already been worked out and agree reasonably well with numerical simulations \cite{Giannantonio2010,Jeong2009,Sefusatti2009b,McDonald2008}. Also, the dependence of the halo bias on merger history and halo formation time affects the amplitude of the non-Gaussian corrections in Eq.~(\ref{b(k,fnl)}) \cite{Slosar2008,Gao2005,Gao2007,Reid2010}, which we will neglect here.
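As a rough numerical illustration of Eqs.~(\ref{b(k,fnl)}) and~(\ref{u(k,z)}) (not part of the analysis pipeline), the following sketch assumes $T(k)\simeq1$ on the largest scales, $D(z=0)=1$, and the cosmological parameters of our simulations (Sec.~\ref{sec:sims}):

```python
import numpy as np

# Sketch of the scale-dependent non-Gaussian bias correction above.
# Simplifying assumptions (not from the paper's pipeline): T(k) ~ 1 on
# the largest scales, D(z=0) = 1, and WMAP5-like parameters of Sec. IV.
delta_c = 1.686                  # linear critical overdensity
Omega_m = 0.279
H0_over_c = 1.0 / 2997.92458     # H0/c in units of h/Mpc

def u(k, T=1.0, D=1.0):
    """u(k,z) = 3 delta_c Omega_m H0^2 / (k^2 T(k) D(z) c^2); k in h/Mpc."""
    return 3.0 * delta_c * Omega_m * H0_over_c**2 / (k**2 * T * D)

def bias(k, b_G, f_NL):
    """b(k,f_NL) = b_G + f_NL (b_G - 1) u(k,z)."""
    return b_G + f_NL * (b_G - 1.0) * u(k)

# the correction scales as k^-2 and vanishes for unbiased tracers (b_G = 1)
assert np.isclose(u(0.005) / u(0.01), 4.0)
assert bias(0.01, 1.0, 100.0) == 1.0
```

With $f_{\mathrm{NL}}=100$ and $b_{\mathrm{G}}=2$ this gives a bias shift of order $0.1$ at $k=0.01h\mathrm{Mpc}^{-1}$, growing as $k^{-2}$ toward larger scales.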
\section{Fisher information from the two-point statistics of LSS \label{sec:model}}
It is believed that all discrete tracers of LSS, such as galaxies and clusters, reside within dark matter halos, collapsed nonlinear structures that satisfy the conditions for galaxy formation. The analysis of the full complexity of LSS is therefore reduced to the information content in dark matter halos. In this section we introduce our model for the halo covariance matrix and utilize it to compute the Fisher information content on $f_{\mathrm{NL}}$ from the two-point statistics of halos and dark matter in Fourier space. We separately consider two cases: first halos only and second halos combined with dark matter. While the observation of halos is relatively easy with present-day galaxy redshift surveys, observing the underlying dark matter is hard, but not impossible: weak-lensing tomography is the leading candidate to achieve that.
\subsection{Covariance of Halos}
\subsubsection{Definitions}
We write the halo overdensity in Fourier space as a vector whose elements correspond to $N$ successive bins
\begin{equation}
\boldsymbol{\delta}_{\mathrm{h}} \equiv \left(\delta_\mathrm{h_1},\delta_\mathrm{h_2},\dots,\delta_{\mathrm{h}_N}\right)^\intercal \;.
\end{equation}
In this paper we will only consider a binning in halo mass, but the following equations remain valid for any quantity the halo density field depends on (e.g., galaxy luminosity). The covariance matrix of halos is defined as
\begin{equation}
\mathbf{C}_{\mathrm{h}}\equiv\langle\boldsymbol{\delta}_{\mathrm{h}}^{\phantom{\intercal}}\boldsymbol{\delta}_{\mathrm{h}}^\intercal\rangle \;,
\end{equation}
i.e., the outer product of the vector of halo fields averaged within a $k$-shell in Fourier space. Assuming the halos to be locally biased and stochastic tracers of the dark matter density field $\delta$, we can write
\begin{equation}
\boldsymbol{\delta}_{\mathrm{h}} = \boldsymbol{b}\delta+\boldsymbol{\epsilon} \;, \label{dhalo}
\end{equation}
and we define
\begin{equation}
\boldsymbol{b}\equiv\frac{\langle\boldsymbol{\delta}_{\mathrm{h}}\delta\rangle}{\langle\delta^2\rangle} \; \label{b}
\end{equation}
as the \emph{effective bias}, which is generally scale-dependent and non-Gaussian. $\boldsymbol{\epsilon}$ is a residual noise-field with zero mean and we assume it to be uncorrelated with the dark matter, i.e., $\langle\boldsymbol{\epsilon}\delta\rangle=0$ \cite{Manera2011}.
In each mass bin, the effective bias $\boldsymbol{b}$ shows a distinct dependence on $f_{\mathrm{NL}}$. In what follows, we will assume that $\boldsymbol{b}$ is linear in $f_{\mathrm{NL}}$, as suggested by Eq.~(\ref{b(k,fnl)}):
\begin{equation}
\boldsymbol{b}(k,f_{\mathrm{NL}}) = \boldsymbol{b}_{\mathrm{G}}+f_{\mathrm{NL}}\boldsymbol{b}'(k) \;. \label{b_fnl}
\end{equation}
Here, $\boldsymbol{b}_{\mathrm{G}}$ is the Gaussian effective bias and $\boldsymbol{b}'\equiv\partial\boldsymbol{b}/\partial f_{\mathrm{NL}}$. Finally, we write $P\equiv\langle\delta^2\rangle$ for the nonlinear dark matter power spectrum and assume $\partial P/\partial f_{\mathrm{NL}}=0$. This is a good approximation on large scales \cite{Desjacques2009,Pillepich2010,Smith2011}. Thus, the model from Eq.~(\ref{dhalo}) yields the following halo covariance matrix:
\begin{equation}
\mathbf{C}_{\mathrm{h}}=\boldsymbol{b}\boldsymbol{b}^\intercal P + \boldsymbol{\mathcal{E}} \;, \label{Cov}
\end{equation}
where the \emph{shot noise matrix} $\boldsymbol{\mathcal{E}}$ was defined as
\begin{equation}
\boldsymbol{\mathcal{E}}\equiv\langle\boldsymbol{\epsilon}\boldsymbol{\epsilon}^\intercal\rangle \;.
\end{equation}
In principle, $\boldsymbol{\mathcal{E}}$ can contain components other than pure Poisson noise, for instance higher-order terms from the bias expansion \cite{Fry1993,Mo1996,McDonald2006}. Here and henceforth, we will define $\boldsymbol{\mathcal{E}}$ as the residual from the effective bias term $\boldsymbol{b}\boldsymbol{b}^\intercal P$ in $\mathbf{C}_{\mathrm{h}}$, and allow it to depend on $f_{\mathrm{NL}}$. Thus, with Eqs.~(\ref{b}) and~(\ref{Cov}) the shot noise matrix can be written as
\begin{equation}
\boldsymbol{\mathcal{E}} = \langle\boldsymbol{\delta}_{\mathrm{h}}^{\phantom{\intercal}}\boldsymbol{\delta}_{\mathrm{h}}^\intercal\rangle - \frac{\langle\boldsymbol{\delta}_{\mathrm{h}}^{\phantom{\intercal}}\delta\rangle\langle\boldsymbol{\delta}_{\mathrm{h}}^\intercal\delta\rangle}{\langle\delta^2\rangle} \;. \label{E}
\end{equation}
This agrees precisely with the definition given in \cite{Hamaus2010} for the Gaussian case; however, it also takes into account the possibility of a scale-dependent effective bias in non-Gaussian scenarios, such that the effective bias term $\boldsymbol{b}\boldsymbol{b}^\intercal P$ always cancels in this expression \cite{Kendrick2010}.
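The cancellation of the effective bias term in Eq.~(\ref{E}) can be illustrated with a toy Monte Carlo (all numbers hypothetical: three mass bins with scale-independent bias and a prescribed noise covariance):

```python
import numpy as np

# Toy Monte Carlo check of the residual definition above: sample
# dh = b*delta + eps for three hypothetical mass bins and verify that
# the effective-bias term cancels, leaving only the noise covariance.
rng = np.random.default_rng(4)
N, nmodes = 3, 200000
b_true = np.array([1.2, 2.0, 3.5])              # illustrative bias values
noise_cov = np.array([[2.0, 0.5, 0.2],
                      [0.5, 1.5, 0.3],
                      [0.2, 0.3, 1.0]])         # hypothetical <eps eps^T>

delta = 3.0 * rng.standard_normal(nmodes)                      # matter modes
eps = rng.multivariate_normal(np.zeros(N), noise_cov, nmodes).T
dh = b_true[:, None] * delta + eps                             # halo modes

P = np.mean(delta**2)
cross = dh @ delta / nmodes                                    # <dh delta>
E = dh @ dh.T / nmodes - np.outer(cross, cross) / P            # Eq. (E)
assert np.allclose(cross / P, b_true, atol=0.05)               # effective bias
assert np.allclose(E, noise_cov, atol=0.05)
```

Note that the estimated bias term cancels identically, realization by realization, so the recovered $\boldsymbol{\mathcal{E}}$ is independent of how strongly the bins are biased.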
Reference~\cite{Slosar2009} already investigated the Fisher information content on primordial non-Gaussianity for the idealized case of a purely Poissonian shot noise component in the halo covariance matrix. In \cite{Cai2011}, the halo covariance was suggested to be of a similar simple form, albeit with a modified definition of halo bias and a diagonal shot noise matrix. In this work we will consider the more general model of Eq.~(\ref{Cov}) without assuming anything about $\boldsymbol{\mathcal{E}}$. Instead we will investigate the shot noise matrix with the help of $N$-body simulations.
The Gaussian case has already been studied in \cite{Hamaus2010}. Simulations revealed a very simple eigenstructure of the shot noise matrix: for $N>2$ mass bins of equal number density $\bar{n}$ it exhibits an $(N-2)$-dimensional degenerate subspace with eigenvalue $\lambda_{\mathrm{P}}^{\left(N-2\right)}=1/\bar{n}$, which is the expected result from Poisson sampling. Of the two remaining eigenvalues $\lambda_{\pm}$, one is enhanced ($\lambda_+$) and one suppressed ($\lambda_-$) with respect to the value $1/\bar{n}$. The shot noise matrix can thus be written as
\begin{equation}
\boldsymbol{\mathcal{E}}=\bar{n}^{-1}\mathbf{I}+(\lambda_{+}-\bar{n}^{-1})\boldsymbol{V}_{\!\!+}^{\phantom{\intercal}}\boldsymbol{V}_{\!\!+}^\intercal+(\lambda_{-}-\bar{n}^{-1})\boldsymbol{V}_{\!\!-}^{\phantom{\intercal}}\boldsymbol{V}_{\!\!-}^\intercal \;, \label{E_eb}
\end{equation}
where $\mathbf{I}$ is the $N\times N$ identity matrix and $\boldsymbol{V}_{\!\!\pm}$ are the normalized eigenvectors corresponding to $\lambda_{\pm}$. Its inverse takes a very similar form
\begin{equation}
\boldsymbol{\mathcal{E}}^{-1}=\bar{n}\mathbf{I}+(\lambda_{+}^{-1}-\bar{n})\boldsymbol{V}_{\!\!+}^{\phantom{\intercal}}\boldsymbol{V}_{\!\!+}^\intercal+(\lambda_{-}^{-1}-\bar{n})\boldsymbol{V}_{\!\!-}^{\phantom{\intercal}}\boldsymbol{V}_{\!\!-}^\intercal \;.
\end{equation}
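Since $\boldsymbol{V}_{\!\!\pm}$ are orthonormal and the complementary subspace carries the eigenvalue $1/\bar{n}$, each term of Eq.~(\ref{E_eb}) inverts independently; a minimal numerical check (with illustrative eigenvalues, not measured ones) confirms this:

```python
import numpy as np

# Verify the stated inverse of the shot noise matrix: V+ and V- are
# orthonormal, so each eigendirection inverts on its own.
rng = np.random.default_rng(0)
N, nbar = 6, 7.0e-4                       # bins, number density in (h/Mpc)^3
lam_p, lam_m = 5.0 / nbar, 0.1 / nbar     # enhanced / suppressed eigenvalues

Q, _ = np.linalg.qr(rng.standard_normal((N, 2)))   # two orthonormal vectors
Vp, Vm = Q[:, 0], Q[:, 1]

E = (np.eye(N) / nbar
     + (lam_p - 1/nbar) * np.outer(Vp, Vp)
     + (lam_m - 1/nbar) * np.outer(Vm, Vm))
Einv = (nbar * np.eye(N)
        + (1/lam_p - nbar) * np.outer(Vp, Vp)
        + (1/lam_m - nbar) * np.outer(Vm, Vm))
assert np.allclose(E @ Einv, np.eye(N))
# the remaining N-2 eigenvalues are the Poisson value 1/nbar
w = np.linalg.eigvalsh(E)
assert np.sum(np.isclose(w, 1/nbar)) == N - 2
```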
The halo model \cite{Seljak2000} can be applied to predict the functional form of $\lambda_{\pm}$ and $\boldsymbol{V}_{\!\!\pm}$ (see \cite{Hamaus2010} and Sec.~\ref{sec:HM}). This approach is however not expected to be exact, as it does not ensure mass- and momentum conservation of the dark matter density field and leads to white-noise-like contributions in both the halo-matter cross and the matter auto power spectra which are not observed in simulations \cite{Crocce2008}. Yet, the halo model is able to reproduce the eigenstructure of $\boldsymbol{\mathcal{E}}$ fairly well \cite{Hamaus2010} and we will use it for making predictions beyond our $N$-body resolution limit.
In the Gaussian case one can also relate the dominant eigenmode $\boldsymbol{V}_{\!\!+}$ with corresponding eigenvalue $\lambda_{+}$ to the second-order term arising in a local bias-expansion model \cite{Fry1993,Mo1996}, where the coefficients $\boldsymbol{b}_i$ are determined analytically from the peak-background split formalism given a halo mass function \cite{Sheth1999,Scoccimarro2001}. In non-Gaussian scenarios this can be extended to a multivariate expansion in dark matter density $\delta$ and primordial potential $\Phi$ including bias coefficients for both fields \cite{Giannantonio2010,Baldauf2011a}. For the calculation of $\boldsymbol{\mathcal{E}}$ we will however restrict ourselves to the Gaussian case and later compare with the numerical results of non-Gaussian initial conditions to see the effects of $f_{\mathrm{NL}}$ on $\boldsymbol{\mathcal{E}}$ and its eigenvalues.
The suppressed eigenmode $\boldsymbol{V}_{\!\!-}$ with eigenvalue $\lambda_{-}$ can also be explained by a halo-exclusion correction to the Poisson-sampling model for halos, as studied in \cite{Smith2011}.
In what follows, we will truncate the local bias expansion at second order. Therefore, we shall assume the following model for the halo overdensity in configuration space
\begin{equation}
\boldsymbol{\delta}_{\mathrm{h}}(\mathbf{x}) = \boldsymbol{b}_1\delta(\mathbf{x}) + \boldsymbol{b}_2\delta^2(\mathbf{x}) + \boldsymbol{n}_\mathrm{P}(\mathbf{x}) + \boldsymbol{n}_\mathrm{c}(\mathbf{x}) \;.
\end{equation}
Here, $\boldsymbol{n}_\mathrm{P}$ is the usual Poisson noise and $\boldsymbol{n}_\mathrm{c}$ a correction to account for deviations from the Poisson-sampling model. In Fourier space, this yields
\begin{equation}
\boldsymbol{\delta}_{\mathrm{h}}(\mathbf{k}) = \boldsymbol{b}_1\delta(\mathbf{k}) + \boldsymbol{b}_2\left(\delta\!*\!\delta\right)(\mathbf{k}) + \boldsymbol{n}_\mathrm{P}(\mathbf{k}) + \boldsymbol{n}_\mathrm{c}(\mathbf{k}) \;, \label{model}
\end{equation}
where the asterisk-symbol denotes a convolution. The Poisson noise $\boldsymbol{n}_\mathrm{P}$ arises from a discrete sampling of the field $\boldsymbol{\delta}_{\mathrm{h}}$ with a finite number of halos; it is uncorrelated with the underlying dark matter density, $\langle\boldsymbol{n}_\mathrm{P}\delta\rangle=0$, and its power spectrum is $\langle\boldsymbol{n}_\mathrm{P}^{\phantom{\intercal}}\boldsymbol{n}_\mathrm{P}^\intercal\rangle=1/\bar{n}$ (Poisson white noise). We further assume $\langle\boldsymbol{n}_\mathrm{P}\boldsymbol{n}_\mathrm{c}^\intercal\rangle=\langle\boldsymbol{n}_\mathrm{c}\delta\rangle=0$, which leads to
\begin{equation}
\boldsymbol{b} = \boldsymbol{b}_1+\boldsymbol{b}_2\frac{\langle\left(\delta\!*\!\delta\right)\delta\rangle}{\langle\delta^2\rangle} \;, \label{b_model}
\end{equation}
\begin{align}
\mathbf{C}_{\mathrm{h}} &= \boldsymbol{b}_1^{\phantom{\intercal}}\boldsymbol{b}_1^\intercal\langle\delta^2\rangle + \left(\boldsymbol{b}_1^{\phantom{\intercal}}\boldsymbol{b}_2^\intercal+\boldsymbol{b}_2^{\phantom{\intercal}}\boldsymbol{b}_1^\intercal\right)\langle\left(\delta\!*\!\delta\right)\delta\rangle
\nonumber \\
&\quad +\boldsymbol{b}_2^{\phantom{\intercal}}\boldsymbol{b}_2^\intercal\langle\left(\delta\!*\!\delta\right)^2\rangle + \langle\boldsymbol{n}_\mathrm{P}^{\phantom{\intercal}}\boldsymbol{n}_\mathrm{P}^\intercal\rangle + \langle\boldsymbol{n}_\mathrm{c}^{\phantom{\intercal}}\boldsymbol{n}_\mathrm{c}^\intercal\rangle\;,
\end{align}
\begin{equation}
\boldsymbol{\mathcal{E}} = \bar{n}^{-1}\mathbf{I} + \boldsymbol{b}_2^{\phantom{\intercal}}\boldsymbol{b}_2^\intercal\left[\langle\left(\delta\!*\!\delta\right)^2\rangle-\frac{\langle\left(\delta\!*\!\delta\right)\delta\rangle^2}{\langle\delta^2\rangle}\right] + \langle\boldsymbol{n}_\mathrm{c}^{\phantom{\intercal}}\boldsymbol{n}_\mathrm{c}^\intercal\rangle \; . \label{E-model}
\end{equation}
Hence, we can identify the normalized vector $\boldsymbol{b}_2/|\boldsymbol{b}_2|$ with the eigenvector $\boldsymbol{V}_{\!\!+}$ of Eq.~(\ref{E_eb}) with corresponding eigenvalue
\begin{equation}
\lambda_+ = \boldsymbol{b}_2^\intercal\boldsymbol{b}_2^{\phantom{\intercal}}\mathcal{E}_{\delta^2} + \bar{n}^{-1} \; , \label{lambda-model}
\end{equation}
where we define
\begin{equation}
\mathcal{E}_{\delta^2} \equiv \langle\left(\delta\!*\!\delta\right)^2\rangle-\frac{\langle\left(\delta\!*\!\delta\right)\delta\rangle^2}{\langle\delta^2\rangle} \; . \label{sigma_dm2}
\end{equation}
In \cite{McDonald2006} this term is absorbed into an effective shot noise power, since it behaves like white noise on large scales and arises from the peaks and troughs in the dark matter density field being nonlinearly biased by the $b_2$-term \cite{Heavens1998}. We evaluated $\mathcal{E}_{\delta^2}$ along with the expressions that appear in Eq.~(\ref{sigma_dm2}) with the help of our dark matter $N$-body simulations for Gaussian and non-Gaussian initial conditions (for details about the simulations, see Sec.~\ref{sec:sims}).
The results are depicted in Fig.~\ref{sn_m2}. $\mathcal{E}_{\delta^2}$ shows a slight dependence on $f_{\mathrm{NL}}$, but it remains white-noise-like even in the non-Gaussian cases. The $f_{\mathrm{NL}}$-dependence of this term has not been discussed in the literature yet, but it can have a significant impact on the power spectrum of high-mass halos, which have a large $b_2$-term; see Eq.~(\ref{lambda-model}). A discussion of the numerical results for halos, specifically the $f_{\mathrm{NL}}$-dependence of $\lambda_+$, is conducted later in this paper. It is also worth noting the $f_{\mathrm{NL}}$-dependence of $\langle\left(\delta\!*\!\delta\right)^2\rangle$ and $\langle\left(\delta\!*\!\delta\right)\delta\rangle$. The properties of the squared dark matter field $\delta^2(\mathbf{x})$ are similar to those of halos, namely, the $k^{-2}$-correction of the effective bias in Fourier space, which in this case is defined as $b_{\delta^2}\equiv\langle\left(\delta\!*\!\delta\right)\delta\rangle/\langle\delta^2\rangle$ and appears in Eq.~(\ref{b_model}).
The last term in Eq.~(\ref{E-model}) corresponds to the suppressed eigenmode of the shot noise matrix. Both its eigenvector and eigenvalue can be described reasonably well by the halo model \cite{Hamaus2010}. The argument of \cite{Smith2011} based on halo exclusion yields a similar result while providing a more intuitive explanation for the occurrence of such a term.
\begin{figure}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics[trim=8 0 0 0,clip]{fig1.eps}}
\caption{Shot noise $\mathcal{E}_{\delta^2}$ of the squared dark matter density field $\delta^2$ as defined in Eq.~(\ref{sigma_dm2}) with both Gaussian (solid green) and non-Gaussian initial conditions with $f_{\mathrm{NL}}=+100$ (solid red) and $f_{\mathrm{NL}}=-100$ (solid yellow) from $N$-body simulations at $z=0$. Clearly, $\mathcal{E}_{\delta^2}$ is close to white noise in all three cases. The auto power spectrum $\langle\left(\delta\!*\!\delta\right)^2\rangle$ of $\delta^2$ in Fourier space (dashed), its cross power spectrum $\langle\left(\delta\!*\!\delta\right)\delta\rangle$ with the ordinary dark matter field $\delta$ (dot-dashed), as well as the ordinary dark matter power spectrum $\langle\delta^2\rangle$ (dotted) are overplotted for the corresponding values of $f_{\mathrm{NL}}$. The squared dark matter field $\delta^2$ can be interpreted as a biased tracer of $\delta$ and therefore shows the characteristic $f_{\mathrm{NL}}$-dependence of biased fields (like halos) on large scales.}
\label{sn_m2}
\end{figure}
\subsubsection{Likelihood and Fisher information}
In order to find the \emph{best unbiased estimator} for $f_{\mathrm{NL}}$, we have to maximize the likelihood function. Although we are dealing with non-Gaussian statistics of the density field, deviations from the Gaussian case are usually small in practical applications (e.g., \cite{Slosar2008,Carbone2008,Afshordi2008}), so we will consider a multivariate Gaussian likelihood
\begin{equation}
\mathscr{L}=\frac{1}{(2\pi)^{N/2}\sqrt{\det\mathbf{C}_{\mathrm{h}}}}\exp\left(-\frac{1}{2}\boldsymbol{\delta}_{\mathrm{h}}^\intercal\mathbf{C}_{\mathrm{h}}^{-1}\boldsymbol{\delta}_{\mathrm{h}}^{\phantom{\intercal}}\right) \;. \label{likelihood}
\end{equation}
Maximizing this likelihood function is equivalent to minimizing the following chi-square,
\begin{equation}
\chi^2=\boldsymbol{\delta}_{\mathrm{h}}^\intercal\mathbf{C}_{\mathrm{h}}^{-1}\boldsymbol{\delta}_{\mathrm{h}}^{\phantom{\intercal}}+\ln\left(1+\alpha\right)+\ln(\det\boldsymbol{\mathcal{E}}) \;, \label{chi2}
\end{equation}
where we dropped the irrelevant constant $N\ln(2\pi)$ and used
\begin{equation}
\det\mathbf{C}_{\mathrm{h}}=\det\left(\boldsymbol{b}\boldsymbol{b}^\intercal P+\boldsymbol{\mathcal{E}}\right)=(1+\alpha)\det\boldsymbol{\mathcal{E}} \;,
\end{equation}
with $\alpha\equiv\boldsymbol{b}^\intercal\boldsymbol{\mathcal{E}}^{-1}\boldsymbol{b} P$. For a single mass bin, Eq.~(\ref{chi2}) simplifies to
\begin{equation}
\chi^2=\frac{\delta_{\mathrm{h}}^2}{b^2P+\mathcal{E}}+\ln\left(b^2P+\mathcal{E}\right) \;. \label{chi2_1}
\end{equation}
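The determinant factorization used above is the matrix determinant lemma for the rank-one update $\boldsymbol{b}\boldsymbol{b}^\intercal P$ of $\boldsymbol{\mathcal{E}}$; a quick numerical confirmation with toy inputs:

```python
import numpy as np

# Numerical check of det(b b^T P + E) = (1 + alpha) det(E), with
# alpha = b^T E^-1 b P; all inputs are illustrative toy values.
rng = np.random.default_rng(5)
N, P = 5, 1.0e4
b = 1.0 + rng.random(N)
A = rng.standard_normal((N, N))
E = A @ A.T + N * np.eye(N)          # arbitrary positive-definite shot noise

alpha = b @ np.linalg.inv(E) @ b * P
lhs = np.linalg.det(P * np.outer(b, b) + E)
rhs = (1.0 + alpha) * np.linalg.det(E)
assert np.isclose(lhs, rhs)
```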
The Fisher information matrix \cite{Fisher1935} for the parameters $\theta_i$ and $\theta_j$ and the random variable $\boldsymbol{\delta}_{\mathrm{h}}$ with covariance~$\mathbf{C}_{\mathrm{h}}$, as derived from a multivariate Gaussian likelihood \cite{Tegmark1997,Heavens2009}, reads
\begin{equation}
F_{ij} \equiv \frac{1}{2}\mathrm{Tr}\left(\frac{\partial\mathbf{C}_{\mathrm{h}}}{\partial\theta_i}\mathbf{C}_{\mathrm{h}}^{-1}\frac{\partial\mathbf{C}_{\mathrm{h}}}{\partial\theta_j}\mathbf{C}_{\mathrm{h}}^{-1}\right) \;. \label{fisher}
\end{equation}
With the above assumptions, the derivative of the halo covariance matrix with respect to the parameter $f_{\mathrm{NL}}$ is
\begin{equation}
\frac{\partial\mathbf{C}_{\mathrm{h}}}{\partial f_{\mathrm{NL}}} = \left(\boldsymbol{b}\boldsymbol{b}'^\intercal+\boldsymbol{b}'\boldsymbol{b}^\intercal\right)P + \boldsymbol{\mathcal{E}}' \;, \label{C_fnl}
\end{equation}
with $\boldsymbol{\mathcal{E}}'\equiv \partial\boldsymbol{\mathcal{E}}/\partial f_{\mathrm{NL}}$.
The inverse of the covariance matrix can be obtained by applying the \emph{Sherman-Morrison} formula \cite{Sherman1950,Bartlett1951}
\begin{equation}
\mathbf{C}_{\mathrm{h}}^{-1}=\boldsymbol{\mathcal{E}}^{-1}-\frac{\boldsymbol{\mathcal{E}}^{-1}\boldsymbol{b}\boldsymbol{b}^\intercal\boldsymbol{\mathcal{E}}^{-1}P}{1+\alpha} \;, \label{Sherman-Morrison}
\end{equation}
where again $\alpha\equiv\boldsymbol{b}^\intercal\boldsymbol{\mathcal{E}}^{-1}\boldsymbol{b} P$. On inserting the two previous relations into Eq.~(\ref{fisher}), we eventually obtain the full expression for $F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}$ in terms of $\boldsymbol{b}$, $\boldsymbol{b}'$, $\boldsymbol{\mathcal{E}}$, $\boldsymbol{\mathcal{E}}'$ and $P$ (see Appendix \ref{appendix1} for the derivation of Eq.~(\ref{F})). Neglecting the $f_{\mathrm{NL}}$-dependence of $\boldsymbol{\mathcal{E}}$, i.e., setting $\boldsymbol{\mathcal{E}}'\equiv0$, the Fisher information on $f_{\mathrm{NL}}$ becomes
\begin{equation}
F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}=\frac{\alpha\gamma+\beta^2+\alpha\left(\alpha\gamma-\beta^2\right)}{\left(1+\alpha\right)^2} \;, \label{F_lin}
\end{equation}
with $\alpha\equiv\boldsymbol{b}^\intercal\boldsymbol{\mathcal{E}}^{-1}\boldsymbol{b} P$, $\beta\equiv\boldsymbol{b}^\intercal\boldsymbol{\mathcal{E}}^{-1}\boldsymbol{b}'P$ and $\gamma\equiv\boldsymbol{b}'^\intercal\boldsymbol{\mathcal{E}}^{-1}\boldsymbol{b}'P$.
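Both Eq.~(\ref{Sherman-Morrison}) and Eq.~(\ref{F_lin}) can be verified against direct matrix inversion and the trace formula of Eq.~(\ref{fisher}); the bias values and the diagonal shot noise matrix below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 5, 1.0e4                              # mass bins, matter power (toy units)
b = 1.0 + rng.random(N)                      # illustrative effective bias
bp = 2.0 * (b - 1.0)                         # db/dfNL, proportional to (b - 1)
E = 1.0e3 * np.diag(1.0 + rng.random(N))     # toy diagonal shot noise matrix
Einv = np.linalg.inv(E)

# Sherman-Morrison inverse of C_h = b b^T P + E
alpha = b @ Einv @ b * P
Ch = P * np.outer(b, b) + E
Chinv = Einv - P * Einv @ np.outer(b, b) @ Einv / (1.0 + alpha)
assert np.allclose(Chinv @ Ch, np.eye(N))

# Fisher information with E' = 0: closed form vs. direct trace formula
beta, gamma = b @ Einv @ bp * P, bp @ Einv @ bp * P
F_closed = (alpha*gamma + beta**2 + alpha*(alpha*gamma - beta**2)) / (1.0 + alpha)**2
dCh = P * (np.outer(b, bp) + np.outer(bp, b))
F_trace = 0.5 * np.trace(dCh @ Chinv @ dCh @ Chinv)
assert np.isclose(F_closed, F_trace)
```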
For a single mass bin, Eq.~(\ref{F}) simplifies to Eq.~(\ref{F1}),
\begin{equation}
F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}=2\left(\frac{bb'P+\mathcal{E}'/2}{b^2P+\mathcal{E}}\right)^2 \;. \label{F1_text}
\end{equation}
This implies that even in the limit of a very well-sampled halo density field ($\bar{n}\rightarrow\infty$) with negligible shot noise power $\mathcal{E}$ (and neglecting $\mathcal{E}'$) the Fisher information content on $f_{\mathrm{NL}}$ that can be extracted per mode from a single halo mass bin is limited to the value $2\left(b'/b\right)^2$. This is due to the fact that we can only constrain $f_{\mathrm{NL}}$ from a change in the halo bias relative to the Gaussian expectation, not from a measurement of the effective bias itself. The latter can only be measured directly if one knows the dark matter distribution, as will be shown in the subsequent paragraph. However, the situation changes for several halo mass bins (multiple tracers as in \cite{Seljak2009a}). In this case, the Fisher information content from Eqs.~(\ref{F}) and (\ref{F_lin}) can exceed the value $2\left(b'/b\right)^2$ (see Sec.~\ref{sec:sims} and \ref{sec:HM}).
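The saturation can be made explicit by evaluating Eq.~(\ref{F1_text}) as a function of the shot noise (toy numbers, $\mathcal{E}'=0$):

```python
import numpy as np

# Single-bin Fisher information per mode; in the limit E -> 0 (and E' = 0)
# it saturates at 2 (b'/b)^2. All numbers below are toy values.
def F_single(b, bprime, P, E, Eprime=0.0):
    return 2.0 * ((b*bprime*P + Eprime/2.0) / (b**2*P + E))**2

b, bp, P = 2.0, 1.5, 1.0e4
assert np.isclose(F_single(b, bp, P, E=0.0), 2.0 * (bp/b)**2)
assert F_single(b, bp, P, E=1.0e3) < 2.0 * (bp/b)**2   # shot noise only degrades
```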
\subsection{Covariance of Halos and Dark Matter}
\subsubsection{Definitions}
We will now assume that we possess knowledge about the dark matter distribution in addition to the halo density field. In practice one may be able to achieve this by combining galaxy redshift surveys with lensing tomography \cite{Pen2004}, but the prospects are somewhat uncertain. We will simply add the dark matter overdensity mode $\delta$ to the halo overdensity vector $\boldsymbol{\delta}_{\mathrm{h}}$, defining a new vector
\begin{equation}
\boldsymbol{\delta} \equiv \left(\delta,\delta_\mathrm{h_1},\delta_\mathrm{h_2},\dots,\delta_{\mathrm{h}_N}\right)^\intercal \;.
\end{equation}
In analogy with the previous section, we define the covariance matrix as $\mathbf{C}\equiv\langle\boldsymbol{\delta}\boldsymbol{\delta}^\intercal\rangle$ and write
\begin{equation}
\mathbf{C} = \left( \begin{array}{cc}
\langle\delta^2\rangle & \langle\boldsymbol{\delta}_{\mathrm{h}}^\intercal\delta\rangle \vspace{1pt} \\
\langle\boldsymbol{\delta}_{\mathrm{h}}\delta\rangle & \langle\boldsymbol{\delta}_{\mathrm{h}}^{\phantom{\intercal}}\boldsymbol{\delta}_{\mathrm{h}}^\intercal\rangle \\
\end{array} \right) =
\left( \begin{array}{cc}
P & \boldsymbol{b}^\intercal P \\
\boldsymbol{b} P & \mathbf{C}_{\mathrm{h}} \\
\end{array} \right) \;. \label{Covm}
\end{equation}
\subsubsection{Likelihood and Fisher information}
Upon inserting the new covariance matrix into the Gaussian likelihood as defined in Eq.~(\ref{likelihood}), we find the chi-square to be
\begin{equation}
\chi^2=\boldsymbol{\delta}^\intercal\mathbf{C}^{-1}\boldsymbol{\delta}^{\phantom{\intercal}}+\ln{\left(\det\boldsymbol{\mathcal{E}}\right)} \;, \label{chi2_Cm}
\end{equation}
where we used
\begin{equation}
\det\mathbf{C}=\det\mathbf{C}_{\mathrm{h}}\det\left(P-\boldsymbol{b}^\intercal\mathbf{C}_{\mathrm{h}}^{-1}\boldsymbol{b} P^2\right)=P\det\boldsymbol{\mathcal{E}} \;,
\end{equation}
and we still assume $P$ to be independent of $f_{\mathrm{NL}}$ and therefore drop the term $\ln{(P)}$ in Eq.~(\ref{chi2_Cm}). In terms of the halo and dark matter overdensities, the chi-square can also be expressed as
\begin{equation}
\chi^2=\left(\boldsymbol{\delta}_{\mathrm{h}}-\boldsymbol{b}\delta\right)^\intercal\boldsymbol{\mathcal{E}}^{-1}\left(\boldsymbol{\delta}_{\mathrm{h}}-\boldsymbol{b}\delta\right)+\ln{\left(\det\boldsymbol{\mathcal{E}}\right)} \;, \label{chi2m}
\end{equation}
which is equivalent to the definition in \cite{Hamaus2010} (where the last term was neglected). The corresponding expression for a single halo mass bin reads
\begin{equation}
\chi^2=\frac{\left(\delta_{\mathrm{h}}-b\delta\right)^2}{\mathcal{E}}+\ln{\left(\mathcal{E}\right)} \; . \label{chi2m_1}
\end{equation}
For the derivative of $\mathbf{C}$ with respect to $f_{\mathrm{NL}}$ we get
\begin{equation}
\frac{\partial\mathbf{C}}{\partial f_{\mathrm{NL}}} = \left( \begin{array}{cc}
0 & \boldsymbol{b}'^\intercal P \\
\boldsymbol{b}'P\;\; &\;\; \boldsymbol{b}\boldsymbol{b}'^\intercal P+\boldsymbol{b}'\boldsymbol{b}^\intercal P +\boldsymbol{\mathcal{E}}' \\
\end{array} \right) \;. \label{Covm_fnl}
\end{equation}
Performing a block inversion, we readily obtain the inverse covariance matrix,
\begin{equation}
\mathbf{C}^{-1} = \left( \begin{array}{cc}
(1+\alpha)P^{-1} & -\boldsymbol{b}^\intercal\boldsymbol{\mathcal{E}}^{-1} \\
-\boldsymbol{\mathcal{E}}^{-1}\boldsymbol{b} & \boldsymbol{\mathcal{E}}^{-1} \\
\end{array} \right) \;. \label{CovmI}
\end{equation}
As shown in Appendix \ref{appendix2}, the Fisher information content on $f_{\mathrm{NL}}$ now becomes
\begin{equation}
F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}=\gamma+\tau \;, \label{F_m_text}
\end{equation}
with $\gamma\equiv\boldsymbol{b}'^\intercal\boldsymbol{\mathcal{E}}^{-1}\boldsymbol{b}' P$ and $\tau\equiv\frac{1}{2}\mathrm{Tr}\left(\boldsymbol{\mathcal{E}}'\boldsymbol{\mathcal{E}}^{-1}\boldsymbol{\mathcal{E}}'\boldsymbol{\mathcal{E}}^{-1}\right)$. For a single halo mass bin this further simplifies to
\begin{equation}
F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}=\frac{b'^2P}{\mathcal{E}}+\frac{1}{2}\left(\frac{\mathcal{E}'}{\mathcal{E}}\right)^2 \;. \label{F_m1}
\end{equation}
It is worth noting that, in contrast to Eq.~(\ref{F1_text}), the Fisher information from one halo mass bin with knowledge of the dark matter becomes infinite in the limit of vanishing $\mathcal{E}$. In this limit the effective bias can indeed be determined exactly, allowing an exact measurement of $f_{\mathrm{NL}}$~\cite{Seljak2009a}.
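Eq.~(\ref{F_m_text}) can likewise be checked against the trace formula applied to the full covariance of Eq.~(\ref{Covm}) and its derivative, Eq.~(\ref{Covm_fnl}); all inputs below are toy values:

```python
import numpy as np

# Check F = gamma + tau against the trace formula for the full
# (N+1)-dimensional covariance of (delta, delta_h); toy inputs only.
rng = np.random.default_rng(2)
N, P = 4, 1.0e4
b = 1.0 + rng.random(N)
bp = 2.0 * (b - 1.0)
A = rng.standard_normal((N, N))
E = A @ A.T + N * np.eye(N)           # toy positive-definite shot noise matrix
Ep = 0.1 * (A + A.T)                  # toy symmetric dE/dfNL
Einv = np.linalg.inv(E)

C = np.block([[np.array([[P]]),    P * b[None, :]],
              [P * b[:, None],     P * np.outer(b, b) + E]])
dC = np.block([[np.zeros((1, 1)),  P * bp[None, :]],
               [P * bp[:, None],   P * (np.outer(b, bp) + np.outer(bp, b)) + Ep]])

Cinv = np.linalg.inv(C)
F_direct = 0.5 * np.trace(dC @ Cinv @ dC @ Cinv)
gamma = bp @ Einv @ bp * P
tau = 0.5 * np.trace(Ep @ Einv @ Ep @ Einv)
assert np.isclose(F_direct, gamma + tau)
```

The agreement illustrates that, once the matter field is included, the information no longer saturates: $\gamma$ grows without bound as the shot noise shrinks.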
\section{Application to N-body simulations}
\label{sec:sims}
We employ numerical $N$-body simulations with both Gaussian and non-Gaussian initial conditions to find signatures of primordial non-Gaussianity in the two-point statistics of the final density fields in Fourier space. More precisely, we consider an ensemble of $12$ realizations of box-size $1.6h^{-1}\mathrm{Gpc}$ (this yields a total effective volume of $V_{\mathrm{eff}}\simeq50h^{-3}\mathrm{Gpc}^3$). Each realization is seeded with both Gaussian ($f_{\mathrm{NL}}=0$) and non-Gaussian ($f_{\mathrm{NL}}=\pm100$) initial conditions of the local type \cite{Desjacques2009}, and evolves $1024^3$ particles of mass $3.0\times10^{11}h^{-1}\mathrm{M}_{\odot}$. The cosmological parameters are $\Omega_\mathrm{m}=0.279$, $\Omega_{\Lambda}=0.721$, $\Omega_\mathrm{b}=0.046$, $\sigma_8=0.81$, $n_\mathrm{s}=0.96$, and $h=0.7$, consistent with the \textsc{wmap5} \cite{Komatsu2009} best-fit constraints. Additionally, we consider one realization for each of $f_{\mathrm{NL}}=0,\pm50$ with box-size $1.3h^{-1}\mathrm{Gpc}$ and $1536^3$ particles of mass $4.7\times10^{10}h^{-1}\mathrm{M}_{\odot}$ to assess a higher-resolution regime. The simulations were performed on the supercomputer \textsc{zbox3} at the University of Z\"urich with the \textsc{gadget ii} code \cite{Springel2005b}. The initial conditions were laid down at redshift $z=100$ by perturbing a uniform mesh of particles with the Zel'dovich approximation.
To generate halo catalogs, we employ a friends-of-friends (FOF) algorithm \cite{Davis1985} with a linking length equal to $20\%$ of the mean interparticle distance. For comparison, we also generate halo catalogs using the \textsc{ahf} halo finder developed by \cite{Gill2004}, which is based on the spherical overdensity (SO) method \cite{Lacey1994}. In this case, we assume an overdensity threshold $\Delta_\mathrm{c}(z)$ that decreases with redshift, as dictated by the solution to the spherical collapse of a top-hat perturbation in a $\Lambda$CDM Universe \cite{Eke1996}. In both cases, we require a minimum of 20 particles per halo, which corresponds to a minimum halo mass $M_{\mathrm{min}}\simeq 5.9\times10^{12}h^{-1}\mathrm{M}_{\odot}$ for the simulations with $1024^3$ particles. For Gaussian initial conditions the resulting total number density of halos is $\bar{n}\simeq 7.0\times10^{-4}h^3\mathrm{Mpc^{-3}}$ and $4.2\times10^{-4}h^3\mathrm{Mpc^{-3}}$ for the FOF and SO catalogs, respectively. Note that the FOF mass estimate is on average $20\%$ higher than the SO mass estimate. For our $1536^3$-particle simulation we obtain $M_{\mathrm{min}}\simeq 9.4\times10^{11}h^{-1}\mathrm{M}_{\odot}$ and $\bar{n}\simeq 4.0\times10^{-3}h^3\mathrm{Mpc^{-3}}$ with the FOF halo finder.
The binning of the halo density field into $N$ consecutive mass bins is done by sorting all halos by increasing mass and dividing this ordered array into $N$ bins with an equal number of halos. The halos of each bin $i\in\left[1\dots N\right]$ are selected separately to construct the halo density field~$\delta_{\mathrm{h_i}}$. The density fields of dark matter and halos are first computed in configuration space via interpolation of the particles onto a cubical mesh with $512^3$ grid points using a cloud-in-cell mesh assignment algorithm \cite{Hockney1988}. We then perform a fast Fourier transform to compute the modes of the fields in $k$-space.
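The equal-number binning step can be sketched as follows (the mass array is a hypothetical stand-in for the actual catalogs):

```python
import numpy as np

# Sketch of the equal-number mass binning described above; the toy
# mass array stands in for the actual FOF/SO halo catalogs.
def mass_bins(masses, N):
    """Sort halos by mass and split into N bins of equal halo number."""
    order = np.argsort(masses)
    return np.array_split(order, N)      # index arrays, low to high mass

halo_mass = 10.0**np.random.default_rng(3).uniform(12.8, 15.0, 9000)
bins = mass_bins(halo_mass, 30)
assert all(len(idx) == 300 for idx in bins)
assert halo_mass[bins[0]].max() <= halo_mass[bins[-1]].min()
```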
For each of our Gaussian and non-Gaussian realizations, we match the total number of halos to that of the realization containing the fewest halos by discarding objects from the low-mass end. This \emph{abundance matching} technique ensures that we eliminate any possible signature of primordial non-Gaussianity induced by the unobservable $f_{\mathrm{NL}}$-dependence of the halo mass function. It guarantees a constant value $1/\bar{n}$ of the Poisson noise for both Gaussian and non-Gaussian realizations; a dependence of the Poisson noise on $f_{\mathrm{NL}}$ would complicate the interpretation of the Fisher information content. Note also that, in order to calculate the derivative of a function $\mathcal{F}$ with respect to $f_{\mathrm{NL}}$, we apply the linear approximation
\begin{equation}
\frac{\partial\mathcal{F}}{\partial f_{\mathrm{NL}}}\simeq\frac{\mathcal{F}(f_{\mathrm{NL}}=+100)-\mathcal{F}(f_{\mathrm{NL}}=-100)}{2\times100}\;, \label{df}
\end{equation}
which exploits the statistics of all our non-Gaussian runs. All the error bars quoted in this paper are computed from the variance amongst our $12$ realizations.
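Eq.~(\ref{df}) is a symmetric difference across the $f_{\mathrm{NL}}=\pm100$ runs and is exact for any statistic linear in $f_{\mathrm{NL}}$; as a sketch (function and inputs hypothetical):

```python
import numpy as np

# Symmetric-difference derivative across the fNL = +100 / -100 runs;
# F_plus and F_minus stand for a statistic measured in those runs.
def dF_dfnl(F_plus, F_minus, fnl_step=100.0):
    return (F_plus - F_minus) / (2.0 * fnl_step)

# exact for any statistic linear in fNL, e.g. the effective bias model
b_G, bprime = 2.5, 0.005
assert np.isclose(dF_dfnl(b_G + 100*bprime, b_G - 100*bprime), bprime)
```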
\subsection{Effective bias and shot noise}
At the two-point level and in Fourier space, the clustering of halos as described by Eq.~(\ref{Cov}) is determined by two basic components: effective bias and shot noise. Since the impact of primordial non-Gaussianity on the nonlinear dark matter power spectrum $P$ is negligible on large scales (see Fig.~\ref{sn_m2}), the dependence of both $\boldsymbol{b}$ and $\boldsymbol{\mathcal{E}}$ on $f_{\mathrm{NL}}$ must be known if one wishes to constrain $f_{\mathrm{NL}}$. In the following sections, we will examine this dependence in our series of $N$-body simulations.
\begin{figure*}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics{fig2a.eps}
\includegraphics{fig2b.eps}}
\caption{LEFT: Gaussian effective bias (top) and its derivative with respect to $f_{\mathrm{NL}}$ (bottom) for the case of $30$ mass bins. The scale-independent part $\boldsymbol{b}_{\mathrm{G}}$ is plotted in dotted lines for each bin; it was obtained by averaging all modes with $k\le0.032h\mathrm{Mpc}^{-1}$. RIGHT: Large-scale averaged Gaussian effective bias $\boldsymbol{b}_{\mathrm{G}}$ from the left panel (dotted lines) plotted against mean halo mass. The solid line depicts the linear-order bias derived from the peak-background split formalism. All error bars are obtained from the variance of our $12$ boxes about their mean. Results are shown for FOF halos at $z=0$.}
\label{bias}
\end{figure*}
\subsubsection{Effective bias}
In the top left panel of Fig.~\ref{bias}, the effective bias $\boldsymbol{b}$ in the fiducial Gaussian case ($f_{\mathrm{NL}}=0$) is shown for 30 consecutive FOF halo mass bins as a function of wave number. In the large-scale limit $k\to 0$, the measurements are consistent with being scale-independent, as indicated by the dotted lines which show the average of $\boldsymbol{b}(k,f_{\mathrm{NL}}=0)$ over all modes with $k\le0.032h\mathrm{Mpc}^{-1}$, denoted $\boldsymbol{b}_{\mathrm{G}}$. At larger wave numbers, the deviations can be attributed to higher-order bias terms, which are most important at high mass. Relative to the low-$k$ averaged, scale-independent Gaussian bias $\boldsymbol{b}_{\mathrm{G}}$, these corrections tend to suppress the effective bias at low mass, whereas they increase it at the very high-mass end (see Eq.~(\ref{b_model})). The right panel of Fig.~\ref{bias} shows the large-scale average $\boldsymbol{b}_{\mathrm{G}}$ as a function of halo mass, as determined from $30$ halo mass bins, each with a number density of $\bar{n}\simeq 2.3\times10^{-5}h^3\mathrm{Mpc^{-3}}$. The solid line is the linear-order bias as derived from the peak-background split formalism \cite{Sheth1999,Scoccimarro2001}. We find good agreement with our $N$-body data; deviations appear only at masses below $\sim8\times10^{12}h^{-1}\mathrm{M}_{\odot}$, where halos are resolved with fewer than $\sim30$ particles \cite{Knebe2011}.
The bottom left panel of Fig.~\ref{bias} depicts the derivative of $\boldsymbol{b}$ with respect to $f_{\mathrm{NL}}$ for each of the $30$ mass bins. The behavior is well described by the linear theory prediction of Eq.~(\ref{b(k,fnl)}), leading to a $k^{-2}$-dependence on large scales which is more pronounced for more massive halos (for quantitative comparisons with simulations, see \cite{Desjacques2009,Grossi2009,Pillepich2010}). Thus, the amplitude of this effect gradually diminishes towards smaller scales and even disappears around $k\sim0.1h\mathrm{Mpc}^{-1}$. Note that \cite{Desjacques2009} argued for an additional non-Gaussian bias correction which follows from the $f_{\mathrm{NL}}$-dependence of the mass function. This $k$-independent contribution should in principle be included in Eq.~(\ref{b(k,fnl)}). However, as can be seen in the lower left plot, it is negligible in our approach (i.e., all curves approach zero at high $k$) owing to the matching of halo abundances between our Gaussian and non-Gaussian realizations.
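The $k^{-2}$ behavior discussed above follows from the linear-theory model $b(k)=b_{\mathrm{G}}+f_{\mathrm{NL}}(b_{\mathrm{G}}-1)\,u(k,z)$ of Eq.~(\ref{b(k,fnl)}). A hedged numerical sketch, substituting the standard local-type form $u(k,z)=3\delta_\mathrm{c}\Omega_\mathrm{m}(H_0/c)^2/[k^2T(k)D(z)]$ for the paper's Eq.~(\ref{u(k,z)}) and setting $T(k)=D(z)=1$ as a large-scale, $z=0$ placeholder:

```python
import numpy as np

def delta_b(k, fnl, b_G, Om=0.3, delta_c=1.686, T=1.0, D=1.0):
    """Scale-dependent non-Gaussian bias correction fnl*(b_G-1)*u(k) for
    wave numbers k in h/Mpc (standard form of u assumed, see lead-in)."""
    H0_over_c = 1.0 / 2997.9             # H0/c in units of h/Mpc
    u = 3.0 * delta_c * Om * H0_over_c**2 / (k**2 * T * D)
    return fnl * (b_G - 1.0) * u

k = np.logspace(-3, -1, 50)              # wave numbers in h/Mpc
db = delta_b(k, fnl=100.0, b_G=2.0)
# with T = D = 1 the correction falls off exactly as k^-2
assert np.isclose(db[0] / db[-1], (k[-1] / k[0])**2)
```

The correction grows with halo bias through the factor $b_{\mathrm{G}}-1$ and changes sign with $f_{\mathrm{NL}}$, in line with the measured derivatives in Fig.~\ref{bias}.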
\subsubsection{Shot noise matrix \label{shot noise matrix}}
The shot noise matrix $\boldsymbol{\mathcal{E}}$ has been studied using simulations with Gaussian initial conditions in \cite{Hamaus2010}. Figure~\ref{SN} displays the eigenstructure of this matrix for $f_{\mathrm{NL}}=0$ (solid curves) and $f_{\mathrm{NL}}=\pm 100$ (dashed and dotted curves). The left panel depicts all the eigenvalues (top) and their derivatives with respect to $f_{\mathrm{NL}}$ (bottom), while the right panel shows the two important eigenvectors $\boldsymbol{V}_{\!\!+}$ and $\boldsymbol{V}_{\!\!-}$ (top) along with their derivatives (bottom). The eigenstructure of $\boldsymbol{\mathcal{E}}$ is accurately described by Eq.~(\ref{E_eb}), even in the non-Gaussian case. Namely, we still find one enhanced eigenvalue $\lambda_+$ and one suppressed eigenvalue $\lambda_-$. The remaining $N-2$ eigenvalues $\lambda_{\mathrm{P}}^{\left(N-2\right)}$ are degenerate with the value $1/\bar{n}$, the Poisson noise expectation. This means that our Gaussian bias-expansion model from Eq.~(\ref{model}) still works to describe $\boldsymbol{\mathcal{E}}$ in the weakly non-Gaussian regime.
\begin{figure*}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics{fig3a.eps}
\includegraphics{fig3b.eps}}
\caption{Eigenvalues (left panel) and eigenvectors (right panel) of the shot noise matrix $\boldsymbol{\mathcal{E}}$ for $f_{\mathrm{NL}}=0$ (solid), $+100$ (dashed) and $-100$ (dotted) in the case of $30$ mass bins. Their derivatives with respect to $f_{\mathrm{NL}}$ are plotted underneath. For clarity, only the two eigenvectors $\boldsymbol{V}_{\!\!\pm}$ along with their derivatives are shown in the right panel. The straight dotted line in the upper left panel depicts the value $1/\bar{n}$ and the red (dot-dashed) curve in the top right panel shows $b_2(M)$ computed from the peak-background split formalism, scaled to the value of $\boldsymbol{V}_{\!\!+}$ at $M\simeq3\times10^{13}h^{-1}\mathrm{M}_{\odot}$. Results are shown for FOF halos at $z=0$.}
\label{SN}
\end{figure*}
Note however that, owing to sampling variance, the decomposition into eigenmodes becomes increasingly noisy towards larger scales. This leads to an artificial breaking of the eigenvalue degeneracy which manifests itself as a scatter around the mean value $1/\bar{n}$. This scatter is the dominant source of sampling variance in the halo covariance matrix $\mathbf{C}_{\mathrm{h}}$. Although we can eliminate most of it by setting $\lambda_{\mathrm{P}}^{\left(N-2\right)}\equiv1/\bar{n}$, a residual degree of sampling variance will remain in $\lambda_+$ and $\lambda_-$, as well as in $\boldsymbol{b}$ and $P$.
As is apparent from the left panel in Fig.~\ref{SN}, the dominant eigenvalue $\lambda_+$ exhibits a small, but noticeable $f_{\mathrm{NL}}$-dependence similar to that of $\mathcal{E}_{\delta^2}$ in Fig.~\ref{sn_m2}, which is about $2\%$ in this case. Its derivative, $\partial\lambda_+/\partial f_{\mathrm{NL}}$, clearly dominates the derivatives of all other eigenvalues (which are all consistent with zero due to matched abundances). Only the derivative of the suppressed eigenvalue $\lambda_-$ shows a similar $f_{\mathrm{NL}}$-dependence of $\sim2\%$, albeit at a much lower absolute amplitude. To check the convergence of our results, we repeated the analysis with $100$ and $200$ bins and found the derivatives of both $\lambda_+$ and $\lambda_-$ to increase, supporting an $f_{\mathrm{NL}}$-dependence of these eigenvalues.
By contrast, the eigenvectors $\boldsymbol{V}_{\!\!+}$ and $\boldsymbol{V}_{\!\!-}$ shown in the right panel of Fig.~\ref{SN} exhibit very little dependence on $f_{\mathrm{NL}}$ (the different lines are all on top of each other). The derivatives of $\boldsymbol{V}_{\!\!+}$ and $\boldsymbol{V}_{\!\!-}$ with respect to $f_{\mathrm{NL}}$ shown in the lower panel reveal a very weak sensitivity to $f_{\mathrm{NL}}$ which is less than $0.5\%$ for most of the mass bins (for the most massive bin it reaches up to $1\%$). We repeated the same analysis with $100$ and $200$ mass bins and found that the relative differences between the measurements in Gaussian and non-Gaussian simulations further decrease. We thus conclude that the eigenvectors $\boldsymbol{V}_{\!\!+}$ and $\boldsymbol{V}_{\!\!-}$ can be assumed independent of $f_{\mathrm{NL}}$ to a very high accuracy.
Our findings demonstrate that the two-point statistics of halos are sensitive to primordial non-Gaussianity beyond the linear-order effect of Eq.~(\ref{b(k,fnl)}) derived in \cite{Dalal2008,Matarrese2008,Slosar2008}. However, the corrections are tiny if one considers a single bin containing many halos of very different mass (see \cite{Kendrick2010}) due to mutual cancellations from $b_2$-terms of opposite sign. Only two specific eigenmodes of the shot noise matrix (corresponding to two different weightings of the halo density field) inherit a significant dependence on $f_{\mathrm{NL}}$. This is most prominently the case for the eigenmode corresponding to the highest eigenvalue $\lambda_+$. Its eigenvector, $\boldsymbol{V}_{\!\!+}$, is shown to be closely related to the second-order bias $\boldsymbol{b}_2$ in Eq.~(\ref{E-model}). As can be seen in the upper right panel of Fig.~\ref{SN}, $\boldsymbol{V}_{\!\!+}$ measured from the simulations, and the function $b_2(M)$ calculated from the peak-background split formalism \cite{Sheth1999,Scoccimarro2001}, agree closely (note that $b_2(M)$ has been rescaled to match the normalized vector $\boldsymbol{V}_{\!\!+}$).
In the continuous limit this implies that weighting the halo density field with $b_2(M)$ selects the eigenmode with eigenvalue $\lambda_+$ given in Eq.~(\ref{lambda-model}). Since $\lambda_+$ depends on $f_{\mathrm{NL}}$ through the quantity $\mathcal{E}_{\delta^2}$ defined in Eq.~(\ref{sigma_dm2}), the resulting weighted field will show the same $f_{\mathrm{NL}}$-dependence. However, this $f_{\mathrm{NL}}$-dependence cannot immediately be exploited to constrain primordial non-Gaussianity, because the Fourier modes of $\mathcal{E}_{\delta^2}$ are heavily correlated due to the convolution of $\delta$ with itself in Eq.~(\ref{sigma_dm2}), and thus do not contribute to the Fisher information independently. The bottom line is that for increasingly massive halo bins with large $b_2$, the term $\mathcal{E}_{\delta^2}$ makes an important contribution to the halo power spectrum \emph{and} shows a significant dependence on $f_{\mathrm{NL}}$. It is important to take into account this dependence when attempting to extract the best-fit value of $f_{\mathrm{NL}}$ from high-mass clusters, so as to avoid a possible measurement bias.
Although this $f_{\mathrm{NL}}$-dependence of the shot noise provides some additional information, we will ignore it in the following and quote only lower limits on the Fisher information content.
\subsection{Constraints from Halos and Dark Matter \label{sec:halos&dm}}
Let us first assume the underlying dark matter density field $\delta$ is available in addition to the galaxy distribution. Although this can in principle be achieved with weak-lensing surveys using tomography, the spatial resolution will not be comparable to that of galaxy surveys. To mimic the observed galaxy distribution we will assume that each dark matter halo (identified in the numerical simulations) hosts exactly one galaxy. A further refinement in the description of galaxies can be accomplished with the specification of a halo occupation distribution for galaxies \cite{Berlind2002,Cai2011}, but we will not pursue this here. Instead, we can think of the halo catalogs as a sample of central halo galaxies from which satellites have been removed. We also neglect the effects of baryons on structure formation, which have been shown to be only marginally influenced by primordial non-Gaussianity at late times \cite{Maio2011}.
\subsubsection{Single tracer: uniform weighting}
In the simplest scenario we consider only a single halo mass bin. In this case, all the observed halos (galaxies) of a survey are correlated with the underlying dark matter density field in Fourier space to determine their scale-dependent effective bias, which can then be compared to theoretical predictions. In practice, this translates into fitting our theoretical model for the scale-dependent effective bias, Eq.~(\ref{b(k,fnl)}), to the Fourier modes of the density fields and extracting the best-fitting value of $f_{\mathrm{NL}}$ together with its uncertainty. For a single halo mass bin, we can employ Eq.~(\ref{chi2m_1}) and sum over all the Fourier modes.
In the Gaussian simulations, we measure the scale-independent effective bias $b_{\mathrm{G}}$ via the estimator $\langle\delta_{\mathrm{h}}\delta\rangle/\langle\delta^2\rangle$ and the shot noise $\mathcal{E}$ via $\langle\left(\delta_{\mathrm{h}}-b_{\mathrm{G}}\delta\right)^2\rangle$, and average over all modes with $k\le0.032h\mathrm{Mpc}^{-1}$. In practice, $b_{\mathrm{G}}$ and $\mathcal{E}$ are not directly observable, but a theoretical prediction based on the peak-background split \cite{Sheth1999,Scoccimarro2001} and the halo model \cite{Hamaus2010} provides a reasonable approximation to the measured $b_{\mathrm{G}}$ and $\mathcal{E}$, respectively (see Sec.~\ref{sec:HM}). Note that for bins covering a wide range of halo masses, the $f_{\mathrm{NL}}$-dependence of the shot noise is negligible \cite{Kendrick2010} and it is well approximated by its Gaussian expectation.
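A minimal sketch of these two estimators (toy Fourier modes, not simulation data), averaging $\langle\delta_{\mathrm{h}}\delta\rangle/\langle\delta^2\rangle$ and $\langle(\delta_{\mathrm{h}}-b_{\mathrm{G}}\delta)^2\rangle$ over the low-$k$ modes:

```python
import numpy as np

def bias_and_shot_noise(delta_h, delta, k, kmax=0.032):
    """Estimate the Gaussian effective bias and the shot noise from
    matched complex Fourier modes of the halo and dark matter fields."""
    m = k <= kmax                                     # low-k modes only
    b_G = (np.mean((delta_h[m] * np.conj(delta[m])).real)
           / np.mean(np.abs(delta[m]) ** 2))
    shot = np.mean(np.abs(delta_h[m] - b_G * delta[m]) ** 2)
    return b_G, shot

# toy check: a purely deterministic delta_h = 2*delta has b_G = 2
# and vanishing shot noise
rng = np.random.default_rng(0)
delta = rng.normal(size=100) + 1j * rng.normal(size=100)
b, e = bias_and_shot_noise(2.0 * delta, delta, k=np.full(100, 0.01))
assert np.isclose(b, 2.0) and np.isclose(e, 0.0)
```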
Figure~\ref{fit_u} shows the best fits of Eq.~(\ref{b(k,fnl)}) to the simulations with $f_{\mathrm{NL}}=0,\pm100$ using all the halos of our FOF (left panel) and SO catalogs (right panel). In order to highlight the relative influence of $f_{\mathrm{NL}}$ on the effective bias, we normalize the measurements by the large-scale Gaussian average $b_{\mathrm{G}}$ and subtract unity. The resulting best-fit values of $f_{\mathrm{NL}}$ along with their one-sigma errors are quoted in the lower right for each case of initial conditions. The $68\%$-confidence region is determined by the condition $\Delta\chi^2(f_{\mathrm{NL}})=1$. Note that we include only Fourier modes up to $k\simeq 0.032h\mathrm{Mpc}^{-1}$ in the fit, as linear theory begins to break down at higher wave numbers.
\begin{figure*}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics{fig4a.eps}
\includegraphics{fig4b.eps}}
\caption{Relative scale dependence of the effective bias from all FOF (left panel) and SO halos (right panel) resolved in our $N$-body simulations ($M_{\mathrm{min}}\simeq 5.9\times10^{12}h^{-1}\mathrm{M}_{\odot}$), which are seeded with non-Gaussian initial conditions of the local type with $f_{\mathrm{NL}}=+100,0,-100$ (solid lines and data points from top to bottom). The solid lines show the best fit to the linear theory model of Eq.~(\ref{b(k,fnl)}), taking into account all the modes to the left of the arrow. The corresponding best-fit values are quoted in the bottom right of each panel. The dotted lines show the model evaluated at the input values $f_{\mathrm{NL}}=+100,0,-100$. The results assume knowledge of the dark matter density field and an effective volume of $V_{\mathrm{eff}}\simeq50h^{-3}\mathrm{Gpc}^3$ at $z=0$.}
\label{fit_u}
\end{figure*}
\begin{figure*}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics{fig5a.eps}
\includegraphics{fig5b.eps}}
\caption{Same as Fig.~\ref{fit_u}, but for weighted halos that have minimum stochasticity relative to the dark matter. Note that the one-sigma errors on $f_{\mathrm{NL}}$ are reduced by a factor of $\sim 5$ compared to uniform weighting. In the case of SO halos the input values for $f_{\mathrm{NL}}$ are well recovered by the best-fit, while FOF halos still show a suppression of $\sim20\%$ ($q\simeq0.8$) in the best-fit $f_{\mathrm{NL}}$.}
\label{fit_w}
\end{figure*}
Evidently, the best-fit values for $f_{\mathrm{NL}}$ measured from the FOF halo catalogs are about $20\%$ below the input values. A suppression of the non-Gaussian correction to the bias of FOF halos has already been reported by \cite{Grossi2009,Pillepich2010}. These authors showed that the replacement $\delta_\mathrm{c}\rightarrow q\delta_\mathrm{c}$ with $q=0.75$ in Eq.~(\ref{b(k,fnl)}) yields good agreement with their simulation data. In our framework, including this ``$q$-factor'' is equivalent to exchanging $f_{\mathrm{NL}}\rightarrow f_{\mathrm{NL}}/q$ and $\sigma_{f_{\mathrm{NL}}}\rightarrow\sigma_{f_{\mathrm{NL}}}/q$, owing to the linear scaling of Eq.~(\ref{b(k,fnl)}) with $\delta_\mathrm{c}$. Repeating the chi-square minimization with $q=0.75$ yields best-fit values that are consistent with our input values, namely $f_{\mathrm{NL}}=+107.0\pm8.3$, $+1.8\pm8.7$ and $-104.0\pm8.5$. In fact, the closest match to the input $f_{\mathrm{NL}}$-values is obtained for a slightly larger $q$ of $\simeq0.8$.
Note that \cite{Grossi2009} attributed this suppression to ellipsoidal collapse. However, this conclusion seems rather unlikely since ellipsoidal collapse increases the collapse threshold or, equivalently, implies $q>1$ \cite{Sheth2001}. A more sensible explanation arises from the fact that a linking length of 0.2 times the mean interparticle distance can select regions with an overdensity as low as $\Delta\sim1/0.2^3=125$ (with respect to the mean background density $\bar{\rho}_\mathrm{m}$), which is much less than the virial overdensity $\Delta_\mathrm{c}(z=0)\simeq340$ associated with a linear overdensity $\delta_\mathrm{c}$ (see \cite{Eke1996,Valageas2010,More2011}). Therefore, we may reasonably expect that, on average, FOF halos with this linking length trace linear overdensities of height less than $\delta_\mathrm{c}$.
In the case of SO halos, however, we observe the opposite trend. As is apparent in the right panel of Fig.~\ref{fit_u}, the model from Eq.~(\ref{b(k,fnl)}) overestimates the amplitude of primordial non-Gaussianity by roughly $40\%$. This is somewhat surprising since the overdensity threshold $\Delta_\mathrm{c}\simeq340$ used to identify the SO halos at $z=0$ is precisely the virial overdensity predicted by the spherical collapse of a linear perturbation of height $\delta_\mathrm{c}$. As we will see shortly, however, an optimal weighting of halos can remove this overshoot and therefore noticeably improve the agreement between model and simulations.
\subsubsection{Single tracer: optimal weighting}
As demonstrated in \cite{Hamaus2010}, the shot noise matrix $\boldsymbol{\mathcal{E}}$ exhibits nonzero off-diagonal elements from correlations
between halos of different mass. Thus, in order to extract the full information on halo statistics, it is necessary to include these correlations into our analysis. For this purpose, we must employ the more general chi-square of Eq.~(\ref{chi2m}). The halo density field is split up into $N$ consecutive mass bins in order to construct the vector $\boldsymbol{\delta}_{\mathrm{h}}$, and the full shot noise matrix $\boldsymbol{\mathcal{E}}$ must be considered.
However, this approach can be simplified, since we know that $\boldsymbol{\mathcal{E}}$ exhibits one particularly low eigenvalue~$\lambda_-$. Because the Fisher information content on $f_{\mathrm{NL}}$ from Eq.~(\ref{F_m_text}) is proportional to the inverse of $\boldsymbol{\mathcal{E}}$ (this is true at least for the dominant part $\gamma$), it is governed by the eigenmode corresponding to this eigenvalue. In \cite{Hamaus2010} it has been shown that this eigenmode dominates the clustering signal-to-noise ratio. In the continuous limit (infinitely many bins), it can be projected out by performing an appropriate weighting of the halo density field. The corresponding weighting function, denoted as \emph{modified mass weighting} with functional form
\begin{equation}
w(M)=M+M_0 \;, \label{w(M)}
\end{equation}
was found to minimize the stochasticity of halos with respect to the dark matter. Here, $M$ is the individual halo mass and $M_0$ a constant whose value depends on the resolution of the simulation. It is approximately $3$ times the minimum resolved halo mass $M_{\mathrm{min}}$, so in this case $M_0\simeq1.8\times10^{13}h^{-1}\mathrm{M}_{\odot}$. The weighted halo density field is computed as
\begin{equation}
\delta_w=\frac{\sum_i w(M_i)\delta_{\mathrm{h}_i}}{\sum_i w(M_i)}\equiv\frac{\boldsymbol{w}^\intercal\boldsymbol{\delta}_{\mathrm{h}}}{\boldsymbol{w}^\intercal\openone}\;, \label{weight}
\end{equation}
where we have combined the weights of the individual mass bins into a vector $\boldsymbol{w}$ in the last expression. Because the chi-square in Eq.~(\ref{chi2m}) is dominated by only one eigenmode, it simplifies to the form of Eq.~(\ref{chi2m_1}) with the halo field $\delta_{\mathrm{h}}$ being replaced by the weighted halo field~$\delta_w$. Note also that $b_{\mathrm{G}}$ and $\mathcal{E}$ have to be replaced by the corresponding weighted quantities (see \cite{Hamaus2010}).
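A minimal sketch of Eqs.~(\ref{w(M)}) and (\ref{weight}); the bin masses and density fields below are placeholders:

```python
import numpy as np

def weighted_field(delta_h, masses, M0):
    """Modified mass weighting: delta_w = w^T delta_h / (w^T 1) with
    w(M) = M + M0, for bin overdensity fields delta_h of shape (N, nmodes)."""
    w = masses + M0
    return w @ delta_h / w.sum()

Mmin = 5.9e12                            # minimum resolved halo mass, h^-1 Msun
M0 = 3.0 * Mmin                          # ~1.8e13 h^-1 Msun, as quoted above
masses = np.geomspace(Mmin, 1e15, 30)    # placeholder mean masses of 30 bins
delta_h = np.ones((30, 4))               # placeholder bin overdensity fields
assert np.allclose(weighted_field(delta_h, masses, M0), 1.0)
```

Since the weights are normalized, identical bin fields are returned unchanged; in practice the high-mass bins dominate $\delta_w$ through $w(M)$.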
The results are shown in Fig.~\ref{fit_w} for both FOF and SO halos. We observe a remarkable reduction in the error on $f_{\mathrm{NL}}$ by a factor of $\sim4$--$6$ (depending on the halo finder) when replacing the uniform sample used in Fig.~\ref{fit_u} by the optimally weighted one. While for the FOF halos the predicted amplitude of the non-Gaussian correction to the halo bias still shows the $20\%$ suppression (again, this can be taken into account by introducing a $q$-factor into our fit), for the SO halos the best-fit values of $f_{\mathrm{NL}}$ now agree much better with the input values, i.e., $q\simeq1$.
Therefore, the large discrepancy seen in Fig.~\ref{fit_u} presumably arises from noise in the SO mass assignment at low mass. To ascertain whether this is the case, we repeat the analysis, increasing the threshold for the minimum number of particles per halo and discarding all halos below that threshold. Once this threshold reaches $40$ particles per halo, we find the best-fit $f_{\mathrm{NL}}$ to be much closer to the input values, namely $f_{\mathrm{NL}}=+102.6\pm4.6$, $+2.1\pm4.5$ and $-103.5\pm4.4$. This suggests that most of the discrepancy seen in the right panel of Fig.~\ref{fit_u} is due to poorly resolved halos of mass $M\lesssim2M_\mathrm{min}$ \cite{Knebe2011}. However, modified mass weighting removes this discrepancy since halos at low mass are given less weight.
\begin{figure*}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics{fig6a.eps}
\includegraphics{fig6b.eps}}
\caption{Fisher information (left panel) and one-sigma error on $f_{\mathrm{NL}}$ (right panel, $k_{\mathrm{min}}=0.0039h\mathrm{Mpc}^{-1}$, $V_{\mathrm{eff}}\simeq50h^{-3}\mathrm{Gpc}^3$) from simulations of FOF halos and dark matter at $z=0$. The lines show the results for $1$ (solid black), $3$ (dotted blue), $10$ (dashed green) and $30$ (dot-dashed red) uniform halo mass bins, as well as for $1$ weighted bin (long-dashed yellow).}
\label{fisher_m}
\end{figure*}
Our findings are consistent with those of \cite{Desjacques2009}, where the non-Gaussian bias of SO halos was also measured at higher redshifts and mass thresholds, and with the results of \cite{Desjacques2010b}, where the fractional deviation from the Gaussian mass function for both FOF and SO halos was presented (see their Fig.~5). The remarkable improvement in the constraints on $f_{\mathrm{NL}}$ follows from the fact that the stochasticity (shot noise) of the optimally weighted halo density field is strongly suppressed with respect to the dark matter \cite{Hamaus2010}. This means that the fluctuations of the halo and the dark matter overdensity fields are more tightly correlated and the variance of the estimator $\langle\delta_{\mathrm{h}}\delta\rangle/\langle\delta^2\rangle$ for the effective bias is minimized. Also, cosmic variance fluctuations inherent in both $\delta$ and $\delta_{\mathrm{h}}$ are canceled in this ratio (see Appendix \ref{appendix3}). Since the scale dependence of this estimator is a direct probe of primordial non-Gaussianity, the error on $f_{\mathrm{NL}}$ is significantly reduced. At the same time, modified mass weighting increases the magnitude of $b_{\mathrm{G}}$. We will show below that the constraints on $f_{\mathrm{NL}}$ are indeed optimized with this approach.
Finally, we can test our assumption about the likelihood function as defined in Eq.~(\ref{likelihood}) being of a Gaussian form and thus yielding the correct Fisher information. Non-Gaussian corrections could arise from correlated $k$-modes in the covariance matrix (as present in the eigenmode $\lambda_+$ of the shot noise matrix), preventing the Fisher information from being a single integral over $k$. The error $\sigma_b$ on the effective bias in Figs.~\ref{fit_u} and \ref{fit_w} is determined from the variance amongst our sample of $12$ realizations and thus provides an independent way of testing the value for $\sigma_{f_{\mathrm{NL}}}$: from Eq.~(\ref{b(k,fnl)}) we can determine $\sigma_{f_{\mathrm{NL}}}=\sigma_b/\left[(b_{\mathrm{G}}-1)u(k,z)\right]$ and compare it to the value obtained from the chi-square fit with Eq.~(\ref{chi2m}). Applying the two methods, we find no significant differences in $\sigma_{f_{\mathrm{NL}}}$, so at least up to the second moment of the likelihood function, the assumption of it being Gaussian seems reasonable for the considered values of $f_{\mathrm{NL}}$.
\subsubsection{Multiple tracers}
Let us now estimate the minimal error on $f_{\mathrm{NL}}$ achievable with a given galaxy survey for the general case, dividing halos into multiple mass bins. The Fisher information is given by Eq.~(\ref{F_m_text}) or~(\ref{F_lin}), depending on whether the dark matter density field is known or not, and the minimal error on $f_{\mathrm{NL}}$ is determined via integration over all observed modes in the volume $V$,
\begin{equation}
\sigma_{f_{\mathrm{NL}}}^{-2}=\frac{V}{2\pi^2}\int_{k_{\mathrm{min}}}^{k_{\mathrm{max}}}F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}(k)\;k^2\mathrm{d}k \; . \label{sigma_fnl}
\end{equation}
The largest modes with wave number $k_{\mathrm{min}}=2\pi/L_{\mathrm{box}}\simeq0.0039h\mathrm{Mpc}^{-1}$ available from our $N$-body simulations are smaller than the largest modes in a survey of $50h^{-3}\mathrm{Gpc}^3$ volume ($k_{\mathrm{min}}\simeq0.0017h\mathrm{Mpc}^{-1}$), since we only obtain an \emph{effective} volume by considering $12$ smaller simulation boxes. Because the signal from $f_{\mathrm{NL}}$ is strongest at low $k$, our results slightly underestimate the total Fisher information. However, we can roughly estimate that on larger scales ($k_{\mathrm{min}}<0.0039h\mathrm{Mpc}^{-1}$), $F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}(k)\sim u^2(k)P(k)\sim k^{-4}k^{n_s}$ [see Eqs.~(\ref{u(k,z)}), (\ref{F_lin}) and (\ref{F_m_text}), as well as Figs.~\ref{fisher_m} and \ref{fisher_h}], and thus $\sigma_{f_{\mathrm{NL}}}\sim \left[\ln\left(k_{\mathrm{max}}/k_{\mathrm{min}}\right)\right]^{-1/2}$ assuming $n_s\simeq1$, a relatively weak dependence on $k_{\mathrm{min}}$. In our case this amounts to an overestimation of $\sigma_{f_{\mathrm{NL}}}$ by roughly $20\%$.
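Eq.~(\ref{sigma_fnl}) can be evaluated numerically; in this sketch the $k^{-4}$ toy form of $F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}(k)$ (with an arbitrary amplitude) is a placeholder motivated by the large-scale scaling $F\sim u^2(k)P(k)$, whereas in practice $F(k)$ is measured from the simulations:

```python
import numpy as np

def sigma_fnl(F_of_k, V, kmin, kmax, n=2000):
    """sigma_fnl^-2 = V/(2 pi^2) * int_kmin^kmax F(k) k^2 dk,
    evaluated with a simple trapezoidal rule."""
    k = np.linspace(kmin, kmax, n)
    y = F_of_k(k) * k ** 2
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(k))
    return 1.0 / np.sqrt(V / (2.0 * np.pi ** 2) * integral)

V = 50e9                                       # effective volume in (h^-1 Mpc)^3
toy_F = lambda k: 1e-10 * k ** -4              # placeholder Fisher information
s_sim = sigma_fnl(toy_F, V, 0.0039, 0.032)     # kmin of the simulation boxes
s_survey = sigma_fnl(toy_F, V, 0.0017, 0.032)  # kmin of a 50 (Gpc/h)^3 survey
assert 0.0 < s_survey < s_sim                  # larger scales tighten the limit
```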
Note that we only consider the $f_{\mathrm{NL}}$-$f_{\mathrm{NL}}$-element of the Fisher matrix. In principle we would have to consider various other parameters of our cosmology and then marginalize over them, i.e., compute $\left(F^{-1}\right)_{f_{\mathrm{NL}}f_{\mathrm{NL}}}$ \cite{Carbone2010}. However, any degeneracy with cosmological parameters is largely eliminated when multiple tracers are considered, since the underlying dark matter density field mostly cancels out in this approach \cite{Seljak2009a}. A mathematical demonstration of this fact is presented in Appendix \ref{appendix3}.
Recent studies have developed a gauge-invariant description of the observable large-scale power spectrum consistent with general relativity \cite{Yoo2009a,Yoo2009b,Yoo2010,McDonald2009b,Bonvin2011,Challinor2011,Baldauf2011b}. In particular, it has been noted that the general relativistic corrections to the usually adopted Newtonian treatment leave a signature in the galaxy power spectrum that is very similar to the one caused by primordial non-Gaussianity of the local type \cite{Wands2009,Bruni2011,Jeong2011}. However, in a multitracer analysis the two effects can be distinguished sufficiently well, so that the ability to detect primordial non-Gaussianity is little compromised in the presence of general relativistic corrections \cite{Yoo2011}.
In order to make the most conservative estimates we will discard all the terms featuring $\boldsymbol{\mathcal{E}}'$ in the Fisher matrix, since it is not obvious how much information on $f_{\mathrm{NL}}$ can actually be extracted from the shot noise matrix. $\boldsymbol{\mathcal{E}}$~is indeed close to a pure white-noise quantity and we find its Fourier modes to be highly correlated. Therefore, in order to extract residual information on $f_{\mathrm{NL}}$, one would have to decorrelate those modes through an inversion of the correlation matrix among $k$-bins (see \cite{Kiessling2011}). However, in light of the limited volume of our simulations this can be a fairly noisy procedure, especially when the halo distribution is additionally split into narrow mass bins. Hence, for the Fisher information content on $f_{\mathrm{NL}}$ assuming knowledge of both halos and dark matter, we will retain only the first term in Eq.~(\ref{F_m_text}) and provide a lower limit:
\begin{equation}
F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}\ge\gamma\equiv\boldsymbol{b}'^\intercal\boldsymbol{\mathcal{E}}^{-1}\boldsymbol{b}' P \; . \label{F_fnl}
\end{equation}
To calculate $F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}$, we measure the functions $\boldsymbol{b}(k)$, $\boldsymbol{b}'(k)$, $\boldsymbol{\mathcal{E}}(k)$ and $P(k)$ from our $N$-body simulations (see Figs.~\ref{sn_m2}, \ref{bias} and \ref{SN}). In order to mitigate sampling variance in the multibin case, we then use Eq.~(\ref{E_eb}) to recalculate the shot noise matrix. Namely, we set all the eigenvalues $\lambda_{\mathrm{P}}^{\left(N-2\right)}$ equal to the average value $1/\bar{n}$, and measure $\lambda_+$, $\lambda_-$, as well as $\boldsymbol{V}_{\!\!+}$ and $\boldsymbol{V}_{\!\!-}$ directly from the numerical eigendecomposition of $\boldsymbol{\mathcal{E}}$.
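A sketch of this sampling-variance mitigation (with a toy $4\times4$ shot noise matrix; the real $\boldsymbol{\mathcal{E}}$ is measured per $k$-bin from the simulations), assuming $\lambda_-$ and $\lambda_+$ are the smallest and largest eigenvalues, as described by Eq.~(\ref{E_eb}):

```python
import numpy as np

def clean_shot_noise(E, nbar):
    """Rebuild the symmetric shot noise matrix E with its N-2 degenerate
    (Poisson) eigenvalues reset to 1/nbar, keeping lambda_- and lambda_+."""
    lam, V = np.linalg.eigh(E)               # eigenvalues in ascending order
    lam_clean = np.full_like(lam, 1.0 / nbar)
    lam_clean[0] = lam[0]                    # suppressed eigenvalue lambda_-
    lam_clean[-1] = lam[-1]                  # enhanced eigenvalue lambda_+
    return V @ np.diag(lam_clean) @ V.T

# toy example: Poisson noise with scatter plus one enhanced and one
# suppressed eigenmode
nbar = 2.3e-5
E = np.diag([0.2, 1.01, 0.99, 5.0]) / nbar
lam = np.linalg.eigvalsh(clean_shot_noise(E, nbar))
assert np.allclose(np.sort(lam)[1:3], 1.0 / nbar)   # degeneracy restored
```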
Figure~\ref{fisher_m} depicts $F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}(k)$ and $\sigma_{f_{\mathrm{NL}}}(k_\mathrm{max})$ with fixed $k_\mathrm{min}=0.0039h\mathrm{Mpc}^{-1}$ for the cases of $1$, $3$, $10$ and $30$ halo mass bins. Clearly, the finer the sampling into mass bins, the higher the information content on $f_{\mathrm{NL}}$. The weighted halo density field with minimal stochasticity relative to the dark matter (corresponding to a continuous sampling of infinitely many bins) yields more than a factor of 6 reduction in $\sigma_{f_{\mathrm{NL}}}$ when compared to a single mass bin of uniformly weighted halos. This improvement agrees reasonably well with that seen in Figs. \ref{fit_u} and \ref{fit_w}, although the estimates for $\sigma_{f_{\mathrm{NL}}}$ are slightly larger than those we obtained from the fitting procedure. This may be expected, since we only obtain an upper limit on $\sigma_{f_{\mathrm{NL}}}$ from Eqs.~(\ref{sigma_fnl}) and (\ref{F_fnl}).
The inflection around $k\sim0.1h\mathrm{Mpc}^{-1}$ in $F_{f_{\mathrm{NL}}f_{\mathrm{NL}}}$ and $\sigma_{f_{\mathrm{NL}}}$ marks a breakdown of the linear model from Eq.~(\ref{b_fnl}). Our results should therefore be treated with caution at high wave numbers, where higher-order contributions to the non-Gaussian effective bias may become important. It should also be noted that the inflection disappears for the weighted field, suggesting that numerical issues are less problematic in that case.
Further improvements can be achieved when going to lower halo masses (see Sec.~\ref{sec:HM}): the error on $f_{\mathrm{NL}}$ is proportional to the shot noise of the halo density field (Eq.~(\ref{F_m1})), which itself is a function of the minimum halo mass $M_{\mathrm{min}}$. References~\cite{Hamaus2010,Cai2011} numerically investigated the extent to which the shot noise depends on $M_{\mathrm{min}}$ and proposed a method based on the halo model for extrapolating it to lower mass. It predicts the shot noise of the weighted halo density field to decrease linearly with $M_{\mathrm{min}}$, anticipating about 2 orders of magnitude further reduction in $\mathcal{E}$ when resolving halos down to $M_{\mathrm{min}}\simeq10^{10}h^{-1}\mathrm{M}_{\odot}$. In terms of $f_{\mathrm{NL}}$-constraints this is however somewhat mitigated by the fact that the Gaussian bias also decreases with $M_{\mathrm{min}}$, so the non-Gaussian correction to the effective bias in Eq.~(\ref{b(k,fnl)}) gets smaller. Furthermore, \cite{Hamaus2010} studied the effect of adding random noise to the halo mass (to mimic scatter between halo mass and the observables such as galaxy luminosity), while \cite{Cai2011} explored the redshift dependence of the optimally weighted halo density field and extended the method to halo occupation distributions for galaxies.
\begin{figure*}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics{fig7a.eps}
\includegraphics{fig7b.eps}}
\caption{Relative scale dependence of the effective bias $\hat{b}$ estimated from all uniform (left panel) and weighted FOF halos (right panel) resolved in our N-body simulations ($M_{\mathrm{min}}\simeq 5.9\times10^{12}h^{-1}\mathrm{M}_{\odot}$), which are seeded with non-Gaussian initial conditions of the local type with $f_{\mathrm{NL}}=+100,0,-100$ (solid lines and data points from top to bottom). The solid lines show the best fit to the linear theory model of Eq.~(\ref{b(k,fnl)}), taking into account all the modes to the left of the arrow. The corresponding best-fit values are quoted in the bottom right of each panel. The dotted lines show the model evaluated at the input values $f_{\mathrm{NL}}=+100,0,-100$. The results assume no knowledge of the dark matter density field and an effective volume of $V_{\mathrm{eff}}\simeq50h^{-3}\mathrm{Gpc}^3$ at $z=0$.}
\label{fit_hh}
\end{figure*}
\subsection{Constraints from Halos}
The scenario described above is optimistic in the sense that it assumes the dark matter density field is available. In the following section we will show that it is possible to considerably improve the constraints on $f_{\mathrm{NL}}$ even without this
assumption. This is perhaps not surprising in light of the results in \cite{Hamaus2010,Cai2011}, where it was argued that halos can be used to reconstruct the dark matter to arbitrary precision, as long as they are resolved down to the required low-mass threshold.
\subsubsection{Single tracer}
Considering a single halo mass bin, we must again sum over all the Fourier modes in Eq.~(\ref{chi2_1}) and minimize this chi-square with respect to $f_{\mathrm{NL}}$. Although we pretend to have no knowledge of the dark matter distribution, we determine $b_{\mathrm{G}}$ and $\mathcal{E}$ from our simulations. In realistic applications, however, these quantities will have to be accurately modeled. In addition, we use the linear power spectrum $P_0(k)$ instead of the simulated nonlinear dark matter power spectrum $P(k)$ in Eq.~(\ref{chi2_1}).
Since, in this case, we cannot determine the scale-dependent effective bias directly from the estimator $\langle\delta_{\mathrm{h}}\delta\rangle/\langle\delta^2\rangle$, we define the new estimator
\begin{equation}
\hat{b}\equiv\sqrt{\frac{\langle\delta_{\mathrm{h}}^2\rangle-\mathcal{E}}{P_0}} \;, \label{b_hat}
\end{equation}
which solely depends on the two-point statistics of halos. In Fig.~\ref{fit_hh} we plot this estimator together with the best-fit solutions for the scale-dependent effective bias obtained from the chi-square fit of Eq.~(\ref{chi2_1}). The left panel depicts the results obtained for uniform FOF halos. Compared to the previous case with dark matter, we observe the constraints on $f_{\mathrm{NL}}$ to be weaker by a factor of $\sim3$. The main reason for this difference is that the sampling variance inherent in $\delta_{\mathrm{h}}$ is not canceled out by subtracting~$\delta$, as is done in Eq.~(\ref{chi2m_1}). This can also be seen in the estimator $\hat{b}$, where dividing by the smooth linear power spectrum $P_0$ does not cancel the cosmic variance inherent in $\langle\delta_{\mathrm{h}}^2\rangle$. Hence, $\hat{b}$ shows significantly stronger fluctuations than $b=\langle\delta_{\mathrm{h}}\delta\rangle/\langle\delta^2\rangle$, which demonstrates how well the basic idea of sampling variance cancellation works.
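The excess scatter of $\hat{b}$ relative to the cross-correlation estimator can be sketched with synthetic Fourier modes. The following Python toy example uses purely illustrative values for $P_0$, $b$ and $\mathcal{E}$ (not the measured ones) and Gaussian mode statistics; it is a sketch of the two estimators, not the analysis pipeline used here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative values (NOT the measured ones): linear power, bias, shot noise
P0, b_true, E = 1.0e4, 2.0, 2.0e3
nbatch, nmodes = 200, 50          # 200 k-bins with 50 modes each

def cgauss(var, size):            # complex Gaussian modes with <|x|^2> = var
    return rng.normal(0, np.sqrt(var / 2), size) \
        + 1j * rng.normal(0, np.sqrt(var / 2), size)

delta = cgauss(P0, (nbatch, nmodes))
noise = cgauss(E, (nbatch, nmodes))
delta_h = b_true * delta + noise  # halo modes: biased matter plus stochasticity

# Eq. (b_hat): auto-power minus shot noise, divided by the *smooth* P0
b_hat = np.sqrt((np.abs(delta_h) ** 2).mean(axis=1) - E) / np.sqrt(P0)
# Cross-estimator: the cosmic variance of the matter modes cancels out
b_cross = (delta_h * delta.conj()).real.mean(axis=1) \
    / (np.abs(delta) ** 2).mean(axis=1)

print(b_hat.mean(), b_cross.mean())   # both scatter around b_true = 2
print(b_hat.std(), b_cross.std())     # b_hat fluctuates much more
```

The larger standard deviation of $\hat{b}$ across the $k$-bins illustrates the residual cosmic variance discussed above.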
\begin{figure*}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics{fig8a.eps}
\includegraphics{fig8b.eps}}
\caption{Fisher information (left panel) and one-sigma error on $f_{\mathrm{NL}}$ (right panel, $k_{\mathrm{min}}=0.0039h\mathrm{Mpc}^{-1}$, $V_{\mathrm{eff}}\simeq50h^{-3}\mathrm{Gpc}^3$) from simulations of FOF halos only ($z=0$). The lines show the results for $1$ (solid black), $3$ (dotted blue), $10$ (dashed green) and $30$ (dot-dashed red) uniform halo mass bins, as well as for $1$ weighted bin (long-dashed yellow).}
\label{fisher_h}
\end{figure*}
Exchanging the uniform halo field $\delta_{\mathrm{h}}$ with the weighted one, $\delta_w$, the constraints on $f_{\mathrm{NL}}$ improve by about a factor of $2-3$, as can be seen in the right panel of Fig.~\ref{fit_hh}. However, this improvement is mainly due to the larger value of $b_{\mathrm{G}}$ of the weighted sample, since the relative scatter among the data points remains unchanged. This is expected, because we do not consider a second tracer (e.g., the dark matter) in this case, and therefore do not cancel cosmic variance.
Comparing the uncertainty on $f_{\mathrm{NL}}$ obtained from Eq.~(\ref{chi2}) with the one determined via the variance of $\hat{b}$ amongst our $12$ realizations, we can check once more the assumption of a Gaussian likelihood as given in Eq.~(\ref{likelihood}). Again, we find both methods to yield consistent values for $\sigma_{f_{\mathrm{NL}}}$, suggesting that non-Gaussian corrections to the likelihood function are negligible at this order.
\subsubsection{Multiple tracers}
If we want to exploit the gains from sampling variance cancellation in the case where the dark matter density field is not available, we have to perform a multitracer analysis of halos (see Appendix \ref{appendix3}), which is the focus of this section. We now consider Eq.~(\ref{chi2}) for the chi-square fit. In order to calculate the Fisher information, we use Eq.~(\ref{F_lin}) and thus neglect any possible contribution emerging from the $f_{\mathrm{NL}}$-dependence of the shot noise matrix $\boldsymbol{\mathcal{E}}$.
Numerical results for the Fisher information content and the one-sigma error on $f_{\mathrm{NL}}$ are shown in Fig.~{\ref{fisher_h}} for $1$, $3$, $10$ and $30$ uniform FOF halo mass bins, as well as for $1$ weighted bin. Clearly, the cases of $10$ and $30$ uniform bins outperform a single bin of the weighted field in terms of Fisher information. This suggests that further improvements compared to the single weighted halo field can be achieved when all the correlations of sufficiently many halo mass bins are taken into account.
In principle, we want to split the halo density field into as many mass bins as possible and extrapolate the results to the limit of infinitely many bins (continuous limit). Note that in the high-sampling limit of $\bar{n}\rightarrow\infty$, $F_{f_{\mathrm{NL}}\fnl}$ from Eq.~(\ref{F1_text}) is bounded from above by $2\left(b'/b\right)^2$, whereas the same quantity for several mass bins, Eq.~(\ref{F_lin}), may surpass this bound (see Sec.~\ref{sec:HM}). In Sec.~\ref{sec:halos&dm} we showed that a single optimally weighted halo sample combined with the dark matter reaches the continuous limit in $F_{f_{\mathrm{NL}}\fnl}$, which corresponds to a splitting into infinitely many bins in the multitracer approach. It is unclear whether a similar goal can be achieved from halos alone, e.g., by considering two differently weighted tracers that would preserve all of the information on $f_{\mathrm{NL}}$, because we do not know the continuous limit of the Fisher information in that case. We will therefore turn to theoretical predictions by the halo model in the following section.
\section{Halo model predictions \label{sec:HM}}
\begin{figure*}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics{fig9a.eps}
\includegraphics{fig9b.eps}}
\caption{Halo model predictions for the mean scale-independent Gaussian bias (left panel) and shot noise (right panel) as a function of minimum halo mass from uniform- (solid lines) and weighted halos (dashed lines) from a single mass bin at $z=0$ (blue) and $z=1$ (red). $N$-body simulation results are overplotted, respectively, as squares and circles for different low-mass cuts. The dotted line in the left panel depicts $b_{\mathrm{G}}=1$, the ones in the right panel show the Poisson-model shot noise $\bar{n}_{\mathrm{tot}}^{-1}$.}
\label{HM_b_sn}
\end{figure*}
A useful theoretical framework for the description of dark matter and halo clustering is given by the halo model (see, e.g., \cite{Seljak2000}). Despite its limitations \cite{Crocce2008}, the halo model achieves remarkable agreement with the results from $N$-body simulations \cite{Seljak2000,Hamaus2010}. In particular, it provides an analytical expression for the shot noise matrix in the fiducial Gaussian case, given by
\begin{equation}
\boldsymbol{\mathcal{E}}=\bar{n}^{-1}\mathbf{I}-\boldsymbol{b}\boldsymbol{\mathcal{M}}^\intercal-\boldsymbol{\mathcal{M}}\boldsymbol{b}^\intercal \;, \label{E_hm}
\end{equation}
where $\boldsymbol{\mathcal{M}}\equiv\boldsymbol{M}/\bar{\rho}_\mathrm{m}-\boldsymbol{b}\langle nM^2\rangle/2\bar{\rho}_\mathrm{m}^2$ and $\boldsymbol{M}$ is a vector containing the mean halo mass of each bin (see \cite{Hamaus2010} for the derivation). The Poisson model is recovered when we set $\boldsymbol{\mathcal{M}}=0$. Here, $\boldsymbol{b}$ can be determined by integrating the peak-background split bias $b(M)$ over the Sheth-Tormen halo mass function $dn/dM$ \cite{Sheth1999} in each mass bin. The expression $\langle nM^2\rangle/\bar{\rho}_\mathrm{m}^2$ originates from the dark matter one-halo term and thus does not depend on halo mass; from our suite of simulations we determine its Gaussian value to be $\simeq418h^{-3}\mathrm{Mpc}^3$ at $z=0$ and $\simeq45h^{-3}\mathrm{Mpc}^3$ at $z=1$. In the case of one single mass bin, Eq.~(\ref{E_hm}) reduces to $\mathcal{E}=\bar{n}^{-1}-2bM/\bar{\rho}_\mathrm{m}+b^2\langle nM^2\rangle/\bar{\rho}_\mathrm{m}^2$, while if we project out the lowest eigenmode $\boldsymbol{V}_{\!\!-}$ and normalize, we obtain the weighted shot noise
\begin{equation}
\mathcal{E}_w\equiv\frac{\boldsymbol{V}_{\!\!-}^\intercal\boldsymbol{\mathcal{E}}\boldsymbol{V}_{\!\!-}^{\phantom{\intercal}}}{\left(\boldsymbol{V}_{\!\!-}^\intercal\openone\right)^2}=\lambda_-\frac{\boldsymbol{V}_{\!\!-}^\intercal\boldsymbol{V}_{\!\!-}^{\phantom{\intercal}}}{\left(\boldsymbol{V}_{\!\!-}^\intercal\openone\right)^2} \;.
\end{equation}
The eigenvalues $\lambda_\pm$ with eigenvectors $\boldsymbol{V}_{\!\!\pm}$ can be found from Eq.~(\ref{E_hm}),
\begin{gather}
\lambda_\pm=\bar{n}^{-1}-\boldsymbol{\mathcal{M}}^\intercal\boldsymbol{b}\pm\sqrt{\boldsymbol{\mathcal{M}}^\intercal\boldsymbol{\mathcal{M}}\;\boldsymbol{b}^\intercal\boldsymbol{b}} \;, \\
\boldsymbol{V}_{\!\!\pm}=\mathcal{N_\pm}^{-1}\left(\boldsymbol{\mathcal{M}}\left/\sqrt{\boldsymbol{\mathcal{M}}^\intercal\boldsymbol{\mathcal{M}}}\right.\mp\boldsymbol{b}\left/\sqrt{\boldsymbol{b}^\intercal\boldsymbol{b}}\right.\right) \;,
\end{gather}
where
\begin{equation}
\mathcal{N_\pm}\equiv\sqrt{2\mp2\boldsymbol{\mathcal{M}}^\intercal\boldsymbol{b}\left/\sqrt{\boldsymbol{\mathcal{M}}^\intercal\boldsymbol{\mathcal{M}}\;\boldsymbol{b}^\intercal\boldsymbol{b}}\right.} \;
\end{equation}
is a normalization constant to guarantee $\boldsymbol{V}_{\!\!\pm}^\intercal\boldsymbol{V}_{\!\!\pm}^{\phantom{\intercal}}=1$. It is easily verified that $\boldsymbol{V}_{\!\!\pm}^\intercal\boldsymbol{V}_{\!\!\mp}^{\phantom{\intercal}}=0$, i.e., they are orthogonal. In the continuous limit of infinitely many bins ($N\rightarrow\infty$) we can replace $\boldsymbol{V}_{\!\!\pm}$ by the smooth function
\begin{equation}
V_{\pm}=\mathcal{N_\pm}^{-1}\left(\mathcal{M}\left/\sqrt{\langle\mathcal{M}^2\rangle}\right.\mp b\left/\sqrt{\langle b^2\rangle}\right.\right) \;,
\end{equation}
and obtain
\begin{gather}
\mathcal{E}_w=\left(\bar{n}_{\mathrm{tot}}^{-1}-\langle\mathcal{M}b\rangle-\sqrt{\langle\mathcal{M}^2\rangle\langle b^2\rangle}\right) \frac{\langle V_-^2\rangle}{\langle V_-\rangle^2} \;, \\
b_w=\frac{\langle V_-b\rangle}{\langle V_-\rangle}\;,
\end{gather}
where $b_w$ is the weighted effective bias and we exchanged the vector products by integrals over the mass function:
\begin{equation}
\boldsymbol{x}^\intercal\boldsymbol{y}\longrightarrow \frac{N}{\bar{n}_{\mathrm{tot}}}\int_{M_{\mathrm{min}}}^{M_\mathrm{max}}\frac{dn}{dM}(M)x(M)y(M)\;dM\equiv N\langle xy \rangle \;, \label{cont}
\end{equation}
\begin{equation}
{\bar{n}_{\mathrm{tot}}}=\int_{M_{\mathrm{min}}}^{M_\mathrm{max}}\frac{dn}{dM}(M)\;dM\equiv N\bar{n} \;.
\end{equation}
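The spectral decomposition above is easy to cross-check numerically: since Eq.~(\ref{E_hm}) is $\bar{n}^{-1}\mathbf{I}$ plus a rank-2 perturbation, only two eigenvalues differ from $\bar{n}^{-1}$, and they must coincide with $\lambda_\pm$. The short Python check below builds the matrix from arbitrary, hypothetical vectors $\boldsymbol{b}$ and $\boldsymbol{\mathcal{M}}$ (illustrative numbers only) and compares against a direct diagonalization.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10                                  # number of mass bins (hypothetical)
nbar_inv = 3.0                          # 1/nbar, arbitrary units
b = rng.uniform(1.0, 3.0, N)            # illustrative bias vector
M = rng.uniform(0.01, 0.1, N)           # illustrative vector curly-M

# Shot noise matrix of Eq. (E_hm): identity plus a rank-2 perturbation
E = nbar_inv * np.eye(N) - np.outer(b, M) - np.outer(M, b)

# Analytic eigenvalues and (normalized) eigenvectors
cross = np.sqrt((M @ M) * (b @ b))
lam_p = nbar_inv - M @ b + cross
lam_m = nbar_inv - M @ b - cross

def V(sign):                            # V_+ : sign=+1, V_- : sign=-1
    v = M / np.sqrt(M @ M) - sign * b / np.sqrt(b @ b)
    return v / np.linalg.norm(v)

V_p, V_m = V(+1), V(-1)

w = np.linalg.eigvalsh(E)               # numerical spectrum
print(w.min(), lam_m)                   # smallest eigenvalue is lambda_-
print(w.max(), lam_p)                   # largest eigenvalue is lambda_+
print(V_p @ V_m)                        # orthogonality, ~0
```

By the Cauchy-Schwarz inequality, $\sqrt{\boldsymbol{\mathcal{M}}^\intercal\boldsymbol{\mathcal{M}}\;\boldsymbol{b}^\intercal\boldsymbol{b}}\ge\boldsymbol{\mathcal{M}}^\intercal\boldsymbol{b}$, so $\lambda_-$ is always the smallest and $\lambda_+$ the largest eigenvalue of the matrix.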
\begin{figure*}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics{fig10.eps}}
\caption{Halo model predictions for the one-sigma error on $f_{\mathrm{NL}}$ (inferred from an effective volume of $V_{\mathrm{eff}}\simeq50h^{-3}\mathrm{Gpc}^3$, taking into account all modes with $0.0039h\mathrm{Mpc}^{-1}\le k\le0.032h\mathrm{Mpc}^{-1}$ at $z=0$) as a function of minimum halo mass from uniform- (solid lines) and weighted halos (dashed lines) from a single mass bin. The $N$-body simulation results are overplotted, respectively, as squares and circles for different low-mass cuts. Results that assume knowledge of halos and the dark matter are plotted in red (filled symbols), those that only consider halos are depicted in blue (open symbols). The dotted lines (triangles) show the results from splitting the halo catalog into multiple mass bins and taking into account the full halo covariance matrix in calculating $F_{f_{\mathrm{NL}}\fnl}$. The high-sampling limit for one mass bin ($\bar{n}\rightarrow\infty$, $F_{f_{\mathrm{NL}}\fnl}=2\left(b'/b\right)^2$) is overplotted for the uniform- (thin solid line) and the weighted case (thin dashed line). Arrows show the effect of adding a log-normal scatter of $\sigma_{\ln M}=0.5$ to all halo masses, they are omitted in all cases where the scatter has negligible impact.}
\label{HM_F_z=0}
\end{figure*}
Figure \ref{HM_b_sn} depicts the halo model prediction for the scale-independent Gaussian bias $b_{\mathrm{G}}$ and shot noise $\mathcal{E}$ as a function of minimum halo mass $M_{\mathrm{min}}$ at $z=0$ and $z=1$ for both the uniform and the weighted case of a single mass bin. Simulation results are overplotted as symbols for a few $M_{\mathrm{min}}$ [we approximate the weighting function $V_-(M)$ by $w(M)$ from Eq.~(\ref{w(M)}) in the simulations]. Obviously, modified mass weighting increases $b_{\mathrm{G}}$, especially when going to lower halo masses. It is also worth noticing that in contrast to the uniform case, $b_{\mathrm{G}}$ is always greater than unity when weighted by $w(M)$ (at least in the considered mass range). Going to higher redshift further increases $b_{\mathrm{G}}$ at any given $M_{\mathrm{min}}$.
For the shot noise we observe the opposite behavior: modified mass weighting leads to a suppression of~$\mathcal{E}$, which is increasingly pronounced towards lower halo masses. Moreover, it is always below the Poisson-model prediction of $\bar{n}_{\mathrm{tot}}^{-1}$. Our $N$-body simulation results generally confirm this trend (at least down to our resolution limit), although the halo model slightly underestimates the suppression of shot noise between uniform and weighted halos at lower $M_{\mathrm{min}}$. At higher redshifts, this suppression becomes smaller at given $M_{\mathrm{min}}$, but the magnitude of $\mathcal{E}_w$ at $z=1$ approaches the one at $z=0$ towards low $M_{\mathrm{min}}$ and is still small compared to the Poisson-model prediction of $\bar{n}_{\mathrm{tot}}^{-1}$.
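Curves like those in Fig.~\ref{HM_b_sn} follow from mass-function integrals of the form of Eq.~(\ref{cont}). The minimal sketch below uses a Schechter-like toy mass function and a toy monotonic bias relation, both purely illustrative stand-ins for the Sheth-Tormen ingredients used in the text, to illustrate how $\bar{n}_{\mathrm{tot}}$ and the uniform-weighted effective bias vary with $M_{\mathrm{min}}$.

```python
import numpy as np

# Toy stand-ins (NOT the Sheth-Tormen inputs used in the text):
Mstar = 1e14                     # h^-1 Msun, hypothetical cutoff mass

def dndM(M):                     # Schechter-like mass function, arbitrary norm
    return 1e-4 * (M / Mstar) ** (-1.9) * np.exp(-M / Mstar) / M

def b_of_M(M):                   # toy bias, increasing with mass
    return 0.8 + np.sqrt(M / Mstar)

def averages(Mmin, Mmax=1e16, n=4000):
    # integrate in ln M: dn/dM dM = (dn/dM) M dlnM
    lnM = np.linspace(np.log(Mmin), np.log(Mmax), n)
    M = np.exp(lnM)
    dlnM = lnM[1] - lnM[0]
    meas = dndM(M) * M
    nbar_tot = np.sum(meas) * dlnM                      # number density
    bG = np.sum(meas * b_of_M(M)) * dlnM / nbar_tot     # <b>, Eq. (cont)
    return nbar_tot, bG

for Mmin in [1e12, 1e13, 1e14]:
    print(Mmin, *averages(Mmin))   # nbar_tot falls, bG rises with Mmin
```

The trend of the toy model, a number density that drops and an effective bias that grows with $M_{\mathrm{min}}$, mirrors the qualitative behavior of the solid curves in the figure.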
\begin{figure*}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics{fig11.eps}}
\caption{Same as Fig.~\ref{HM_F_z=0} at $z=1$.}
\label{HM_F_z=1}
\end{figure*}
\subsection{Single tracer}
With predictions for $b_{\mathrm{G}}$ and $\mathcal{E}$ at hand, we can directly compute the expected Fisher information content on $f_{\mathrm{NL}}$ from a single halo mass bin. If the dark matter density field is known we apply Eq.~(\ref{F_m1}); if it is not, we use Eq.~(\ref{F1_text}). In order to obtain the most conservative results, we neglect terms featuring derivatives of the shot noise with respect to $f_{\mathrm{NL}}$. We then apply Eq.~(\ref{sigma_fnl}) with $k_{\mathrm{min}}=0.0039h\mathrm{Mpc}^{-1}$, $k_{\mathrm{max}}=0.032h\mathrm{Mpc}^{-1}$ and $V\simeq50h^{-3}\mathrm{Gpc}^3$ to compute the one-sigma error forecast for $f_{\mathrm{NL}}$.
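The order of magnitude of such forecasts can be reproduced with a short mode-counting script. The sketch below assumes a Gaussian likelihood for a single halo tracer with per-mode covariance $C(k)=b^2(k)P_0(k)+\mathcal{E}$ and $b(k)=b_{\mathrm{G}}+f_{\mathrm{NL}}(b_{\mathrm{G}}-1)u(k)$, so that at $f_{\mathrm{NL}}=0$ the per-mode information is $F(k)=\frac{1}{2}\left(\partial_{f_{\mathrm{NL}}}C/C\right)^2=2\left[b_{\mathrm{G}}b'P_0/(b_{\mathrm{G}}^2P_0+\mathcal{E})\right]^2$ with $b'=(b_{\mathrm{G}}-1)u(k)$. The power spectrum shape, the amplitude of $u(k)\propto k^{-2}$, and the values of $b_{\mathrm{G}}$ and $\mathcal{E}$ are rough, hypothetical stand-ins for the measured inputs used in the text.

```python
import numpy as np

# All inputs are rough, hypothetical stand-ins (the text instead uses the
# measured b_G, shot noise and the linear power spectrum of its cosmology).
Veff = 50e9                      # effective volume [(Mpc/h)^3]
kmin, kmax = 0.0039, 0.032       # h/Mpc
bG, E = 2.0, 1e3                 # Gaussian bias and shot noise

def P0(k):                       # crude large-scale power spectrum shape
    return 2e4 * (k / 0.02) ** (-1.5)

def u(k):
    # u(k) ~ 3 delta_c Omega_m H0^2 / (c^2 k^2 T(k) D(z)); constants lumped
    # into one amplitude, roughly evaluated for a LCDM-like cosmology at
    # z=0, with T(k) ~ 1 on these large scales.
    return 2e-7 / k ** 2

k = np.linspace(kmin, kmax, 400)
dk = k[1] - k[0]
Nmodes = Veff * 4 * np.pi * k ** 2 * dk / (2 * np.pi) ** 3  # modes per shell

# Per-mode Fisher information at f_NL = 0:
bprime = (bG - 1) * u(k)
F_k = 2 * (bG * bprime * P0(k) / (bG ** 2 * P0(k) + E)) ** 2

sigma_fnl = 1 / np.sqrt(np.sum(Nmodes * F_k))
print(sigma_fnl)                 # of order 10 for these inputs
```

Despite the crude inputs, the result lands near the $\sigma_{f_{\mathrm{NL}}}\sim10$ scale quoted below for a single tracer without the dark matter, because the constraint is dominated by the largest scales where $u(k)$ is largest.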
Results are shown in Fig.~\ref{HM_F_z=0} for $z=0$. When the dark matter density field is available (red lines and filled symbols), weighting the halos (red dashed lines and filled circles) is always superior to the conventional uniform case (red solid lines and filled squares), especially when going to lower halo masses. In particular, $\sigma_{f_{\mathrm{NL}}}$ substantially decreases with decreasing $M_{\mathrm{min}}$ in the weighted case, while for uniform halos it shows a spike at $M_{\mathrm{min}}\simeq1.4\times10^{12}h^{-1}\mathrm{M}_{\odot}$. This happens when $b_{\mathrm{G}}$ becomes unity and the non-Gaussian correction to the halo bias in Eq.~(\ref{b(k,fnl)}) vanishes, leaving no signature of $f_{\mathrm{NL}}$ in the effective bias. Since in the weighted case $b_{\mathrm{G}}>1$ for all considered $M_{\mathrm{min}}$, this spike does not appear, although we notice that the error on $f_{\mathrm{NL}}$ begins to increase below $M_{\mathrm{min}}\sim10^{11}h^{-1}\mathrm{M}_{\odot}$.
The simulation results are overplotted as symbols for a few values of $M_{\mathrm{min}}$; the agreement with the halo model predictions is remarkable. Note that the first two data points at $M_{\mathrm{min}}=9.4\times10^{11}h^{-1}\mathrm{M}_{\odot}$ and $M_{\mathrm{min}}=2.35\times10^{12}h^{-1}\mathrm{M}_{\odot}$, resulting from our high-resolution simulation, were scaled to the effective volume of our $12$ low-resolution boxes. The simulations yield a minimum error of $\sigma_{f_{\mathrm{NL}}}\simeq0.8$ at $M_{\mathrm{min}}\simeq10^{12}h^{-1}\mathrm{M}_{\odot}$ in the optimally weighted case with the dark matter available. This value is even lower than what is anticipated by the halo model ($\sigma_{f_{\mathrm{NL}}}\simeq1$).
The results without the dark matter are shown as blue lines and open symbols. $\sigma_{f_{\mathrm{NL}}}$ exhibits a minimum at $M_{\mathrm{min}}\simeq10^{14}h^{-1}\mathrm{M}_{\odot}$ with $\sigma_{f_{\mathrm{NL}}}\sim10$ for both uniform and weighted halos. Thus, weighting the halos does not decrease the lowest possible error on $f_{\mathrm{NL}}$ from the uniform case, as expected. This suggests that only the highest-mass halos (clusters at $z=0$) need to be considered to optimally constrain $f_{\mathrm{NL}}$ from a single-bin survey without observations of the dark matter.
In the limit of $\bar{n}\rightarrow\infty$, $F_{f_{\mathrm{NL}}\fnl}\rightarrow2\left(b'/b\right)^2$. Then, according to Eq.~(\ref{b(k,fnl)}) for high $M_{\mathrm{min}}$, $b'\rightarrow b_{\mathrm{G}}u$, and hence $F_{f_{\mathrm{NL}}\fnl}\rightarrow2u^2$ becomes independent of $M_{\mathrm{min}}$. The corresponding $\sigma_{f_{\mathrm{NL}}}$ in the limit $\bar{n}\rightarrow\infty$ is plotted in Fig.~\ref{HM_F_z=0} for both uniform- (thin blue solid line) and weighted halos (thin blue dashed line) and it indeed approaches a constant value at high $M_{\mathrm{min}}$. It is about a factor of $2$ below the minimum in $\sigma_{f_{\mathrm{NL}}}$ without setting $\bar{n}\rightarrow\infty$.
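The saturation of the halo-only information is transparent for a single tracer: with per-mode covariance $C=b^2P+\mathcal{E}$, the information $F=\frac{1}{2}\left(\partial_{f_{\mathrm{NL}}}C/C\right)^2=2\left[bb'P/(b^2P+\mathcal{E})\right]^2$ grows as the shot noise shrinks and tends to the bound $2\left(b'/b\right)^2$ for $\mathcal{E}\rightarrow0$. A few-line numerical check with hypothetical values:

```python
# Check: the per-mode single-tracer information
# F = 2 (b b' P / (b^2 P + E))^2 approaches 2 (b'/b)^2 as E -> 0.
b, bp, P = 2.0, 0.05, 1e4            # hypothetical bias, db/df_NL, power
limit = 2 * (bp / b) ** 2
for E in [1e4, 1e2, 1e0, 1e-2]:
    F = 2 * (b * bp * P / (b ** 2 * P + E)) ** 2
    print(E, F / limit)              # ratio rises towards 1 as E shrinks
```

The ratio stays strictly below unity for any $\mathcal{E}>0$, which is why a single tracer without the dark matter cannot beat this bound, whereas multiple tracers can.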
The results for redshift $z=1$ are presented in Fig.~\ref{HM_F_z=1}. In comparison to Fig.~\ref{HM_F_z=0} one can observe that all the curves are shifted towards the lower left of the plot, i.e., the constraints on $f_{\mathrm{NL}}$ improve with increasing redshift. This is mainly due to the increase of the Gaussian effective bias $b_{\mathrm{G}}$ with $z$, as evident from the left panel of Fig.~\ref{HM_b_sn}. For example, the location of the spikes in $\sigma_{f_{\mathrm{NL}}}(M_{\mathrm{min}})$ requires $b_{\mathrm{G}}=1$. At $z=1$ this condition is fulfilled at lower $M_{\mathrm{min}}$ ($\simeq5\times10^{10}h^{-1}\mathrm{M}_{\odot}$) than at $z=0$, thus shifting the spikes to the left. Further, since the Fisher information from Eqs.~(\ref{F1_text}) and (\ref{F_m1}) increases with $b_{\mathrm{G}}$, $\sigma_{f_{\mathrm{NL}}}(M_{\mathrm{min}})$ decreases, especially at low $M_{\mathrm{min}}$.
In the case of optimally weighted halos with knowledge of the dark matter, our simulations suggest $\sigma_{f_{\mathrm{NL}}}\simeq0.6$ when reaching $M_{\mathrm{min}}\simeq 10^{12}h^{-1}\mathrm{M}_{\odot}$ at $z=1$, in good agreement with the halo model. It even forecasts $\sigma_{f_{\mathrm{NL}}}\simeq0.2$ when including halos down to $M_{\mathrm{min}}\simeq10^{10}h^{-1}\mathrm{M}_{\odot}$.
\subsection{Multiple tracers}
The more general strategy for constraining $f_{\mathrm{NL}}$ from a galaxy survey is to consider all auto- and cross-correlations between tracers of different mass, namely, the halo covariance matrix $\mathbf{C}_{\mathrm{h}}$. If the dark matter density field is known, one can add the correlations with this field and determine $\mathbf{C}$. The Fisher information on $f_{\mathrm{NL}}$ is then given by Eq.~(\ref{F_lin}) and Eq.~(\ref{F_m_text}), respectively. Again, the halo model can be applied to make predictions on the Fisher information content. In Appendix~\ref{appendix4}, the analytical expressions for $\alpha$, $\beta$ and $\gamma$ are derived for arbitrarily many mass bins and the continuous limit of infinite bins.
\begin{figure*}[!t]
\centering
\resizebox{\hsize}{!}{
\includegraphics[trim = 0 32 13 0,clip]{fig12a.eps}
\includegraphics[trim = 49 32 0 0,clip]{fig12b.eps}}
\resizebox{\hsize}{!}{
\includegraphics[trim = 0 0 13 2,clip]{fig12c.eps}
\includegraphics[trim = 49 0 0 2,clip]{fig12d.eps}}
\caption{Same as Figs.~\ref{HM_F_z=0} and \ref{HM_F_z=1} at higher redshifts, as indicated in the bottom right of each panel. Here, only the halo model predictions are shown.}
\label{HM_F_z}
\end{figure*}
The dotted lines in Fig.~\ref{HM_F_z=0} show the halo model predictions at $z=0$ in this continuous limit of infinitely many mass bins. When the dark matter is available (red dotted line), $\sigma_{f_{\mathrm{NL}}}$ coincides with the results from the optimally weighted one-bin case (dashed red lines). This confirms our claim that with the dark matter density field at hand, modified mass weighting is the optimal choice for constraining $f_{\mathrm{NL}}$ and yields the maximal Fisher information content. Only below $M_{\mathrm{min}}\sim10^{12}h^{-1}\mathrm{M}_{\odot}$ does the optimally weighted halo field become slightly inferior to the case of infinitely many bins.
From multiple bins of halos without the dark matter (blue dotted line) we observe a different behavior. While at high $M_{\mathrm{min}}$ the error on $f_{\mathrm{NL}}$ still matches the results from one mass bin, either uniform (blue solid line) or weighted (blue dashed line), below $M_{\mathrm{min}}\sim10^{14}h^{-1}\mathrm{M}_{\odot}$ it departs towards lower values and, at $M_{\mathrm{min}}\sim10^{10}h^{-1}\mathrm{M}_{\odot}$, finally reaches the same continuous limit as in the case where the dark matter is available. Thus, galaxies in principle suffice to yield optimal constraints on $f_{\mathrm{NL}}$; however, one has to go to very low halo mass.
Our simulation results for multiple bins (triangles in Fig.~\ref{HM_F_z=0}) support this conclusion. Although we can only consider a limited number of mass bins in the numerical analysis (we used $N=30$ for our $12$ low-resolution boxes and $N=10$ for our high-resolution box), the continuous limit of the halo model can be approached closely. However, note that residuals of sampling variance in the numerical determination of $\boldsymbol{\mathcal{E}}$, as described in Sec.~\ref{sec:halos&dm} and shown in Fig.~\ref{SN}, can result in an overestimation of $F_{f_{\mathrm{NL}}\fnl}$. This is especially the case when the number of mass bins $N$ is high, resulting in a low halo number density per bin~$\bar{n}$. Hence, we chose $N$ such that the influence of sampling variance on our results is negligible, and yet clear improvements compared to the single-tracer case are established.
One concern in practical applications is scatter in the halo mass estimation. Although X-ray cluster-mass proxies show very tight correlations with halo mass, with a log-normal scatter of $\sigma_{\ln M}\lesssim0.1$ \cite{Kravtsov2006,Fabjan2011}, optical mass estimators are more likely to have $\sigma_{\ln M}\simeq0.5$ \cite{Rozo2009}. We applied a log-normal mass scatter of $\sigma_{\ln M}=0.5$ to all of our halo masses and repeated the numerical analysis for all the cases (symbols) shown in Fig.~\ref{HM_F_z=0}. The arrows in that figure show the effect of adding the scatter, pointing to the new (higher) value of $\sigma_{f_{\mathrm{NL}}}$. We find the effect of the applied mass scatter to be negligible in most of the considered cases (arrows omitted). Only in the case of one weighted halo bin with knowledge of the dark matter (red filled circles) do we observe a moderate weakening of the $f_{\mathrm{NL}}$-constraints, especially towards lower $M_{\mathrm{min}}$. This is expected, since modified mass weighting makes the heaviest use of the halo masses. Yet, the improvement compared to the uniform one-bin case remains substantial, so the method is still beneficial in the presence of mass scatter.
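The scatter test can be sketched in a few lines: drawing each observed mass as the true mass times a log-normal factor with $\sigma_{\ln M}=0.5$ shows that a sizable fraction of halos migrates across mass-bin boundaries, which is why mass-sensitive weighting degrades while coarse binning is barely affected. The masses and bin edges below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma_lnM = 0.5

# Hypothetical halo masses, uniform in log10(M) between 1e12 and 1e15
M_true = 10.0 ** rng.uniform(12, 15, size=100_000)
# Log-normal mass scatter, as applied to the simulation catalogs
M_obs = M_true * np.exp(sigma_lnM * rng.standard_normal(M_true.size))

# Fraction of halos that migrate across (toy) mass-bin boundaries
edges = np.logspace(12, 15, 4)          # 3 bins, one decade wide
frac_moved = np.mean(np.digitize(M_true, edges) != np.digitize(M_obs, edges))
print(frac_moved)
```

With decade-wide bins only halos near a boundary are reshuffled; narrower bins, or weights that vary rapidly with mass, are correspondingly more sensitive to the scatter.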
At higher redshifts, we observe the same characteristics as in the single-tracer case: the $\sigma_{f_{\mathrm{NL}}}$-curves are shifted towards the lower left of the plot in Fig.~\ref{HM_F_z=1}, due to the increase in the effective bias with $z$. Moreover, the impact of mass scatter on $\sigma_{f_{\mathrm{NL}}}$ becomes less severe at higher redshifts, as evident from the smaller arrows in Fig.~\ref{HM_F_z=1} as compared to Fig.~\ref{HM_F_z=0}. High-redshift data are therefore more promising for constraining $f_{\mathrm{NL}}$. This is good news, since the relatively large effective volume assumed in the current analysis ($V_{\mathrm{eff}}\simeq50h^{-3}\mathrm{Gpc}^3$) can only be reached in practical applications when going to $z\sim1$ or higher.
On the other hand, the convergence of the constraints obtained with and without the dark matter is pushed to even lower halo masses at higher redshifts. This can be seen in Fig.~\ref{HM_F_z}, where we show the halo model predictions for even higher redshifts, going up to $z=5$. With a mass threshold of $M_{\mathrm{min}}=10^{10}h^{-1}\mathrm{M}_{\odot}$, the optimal constraints on $f_{\mathrm{NL}}$ from only halos start to saturate above $z\simeq2$, where $\sigma_{f_{\mathrm{NL}}}\simeq0.5$. This is however not the case when the dark matter is available: the error on $f_{\mathrm{NL}}$ decreases monotonically up to $z=5$ reaching $\sigma_{f_{\mathrm{NL}}}\simeq0.06$, although for practical purposes it will be difficult to achieve this limit. Yet, reaching $\sigma_{f_{\mathrm{NL}}}\sim1$ at $z=1$ and $M_{\mathrm{min}}\sim10^{11}h^{-1}\mathrm{M}_{\odot}$ with a survey volume of about $50h^{-3}\mathrm{Gpc}^3$ seems realistic.
\section{Conclusions}
\label{sec:conclusion}
The aim of this work is to assess the amount of information on primordial non-Gaussianity that can be extracted from the two-point statistics of halo- and dark matter large-scale structure in light of shot noise suppression and sampling variance cancellation techniques that have been suggested in the literature. For this purpose we developed a theoretical framework for calculating the Fisher information content on $f_{\mathrm{NL}}$ that relies on minimal assumptions for the covariance matrix of halos in Fourier space. The main ingredients of this model are the \emph{effective bias} and the \emph{shot noise matrix}, both of which we measure from $N$-body simulations and compare to analytic predictions. Our results can be summarized as follows:
\begin{itemize}
\item On large scales the effective bias agrees well with linear theory predictions from the literature, while towards smaller scales, it shows deviations that can be explained by the local bias-expansion model. The shot noise matrix exhibits
two nontrivial eigenvalues $\lambda_+$ and $\lambda_-$, both of which show a considerable dependence on $f_{\mathrm{NL}}$. We further show that the eigenvector $\boldsymbol{V}_{\!\!+}$ is closely related to the second-order bias and that the corresponding eigenvalue $\lambda_+$ depends on the shot noise of the squared dark matter density field $\mathcal{E}_{\delta^2}$, which itself depends weakly on $f_{\mathrm{NL}}$. This property can become important when constraining $f_{\mathrm{NL}}$ from very high-mass halos (clusters). However, since the Fourier modes of $\mathcal{E}_{\delta^2}$ are highly correlated, it is questionable how much information on primordial non-Gaussianity can be gained from the $f_{\mathrm{NL}}$-dependence of the shot noise matrix. We demonstrate, though, that for the considered values of $f_{\mathrm{NL}}$ the assumption of a Gaussian form of the likelihood function is sufficient to determine the correct Fisher information.
\item With the help of $N$-body simulations we demonstrate how the parameter $f_{\mathrm{NL}}$ can be constrained and its error reduced relative to traditional methods by applying optimal weighting- and multiple-tracer techniques to the halos. For our specific simulation setup with $M_{\mathrm{min}}\sim10^{12}h^{-1}\mathrm{M}_{\odot}$, we reach almost 1 order of magnitude improvements in $f_{\mathrm{NL}}$-constraints at $z=0$, even if the dark matter density field is not available. The absolute constraints on $f_{\mathrm{NL}}$ depend on the effective volume and the minimal halo mass that is resolved in the simulations, or observed in the data, and are expected to improve further when higher redshifts or lower-mass halos are considered.
\item We confirm the existence of a suppression factor (denoted $q$-factor in the literature) in the amplitude of the linear theory correction to the non-Gaussian halo bias. We argue that this only holds for halos generated with a friends-of-friends finding algorithm and depends on the specified linking length between halo particles. For a linking length of $20\%$ of the mean interparticle distance, our simulations yield $q\simeq0.8$. For halos generated with a spherical overdensity finder, we demonstrate that the best-fit values of $f_{\mathrm{NL}}$ measured from the simulations are fairly consistent with the input values, i.e., $q\simeq 1$.
\item We calculate the Fisher information content from the two-point statistics of halos and dark matter in Fourier space, both analytically and numerically, and express the results in terms of an effective bias, a shot noise matrix and the dark matter power spectrum. In the case of a single mass bin and assuming knowledge of the dark matter density field, the Fisher information is inversely proportional to the shot noise and, therefore, not bounded from above if the shot noise vanishes. However, when only the halo distribution is available, the Fisher information remains finite even in the limit of zero shot noise. In this case, the amount of information on $f_{\mathrm{NL}}$ can only be increased by dividing the halos into multiple mass bins (multiple tracers).
\item Utilizing the halo model we calculate $\sigma_{f_{\mathrm{NL}}}$ and find a remarkable agreement with our simulation results. We show that in the continuous limit of infinite mass bins, optimal constraints on $f_{\mathrm{NL}}$ can in principle be achieved even in the case where dark matter observations are not available. With an effective survey volume of $\simeq50h^{-3}\mathrm{Gpc}^3$ out to scales of $k_{\mathrm{min}}\simeq0.004h\mathrm{Mpc}^{-1}$ this means $\sigma_{f_{\mathrm{NL}}}\sim1$ when halos down to $M_{\mathrm{min}}\sim10^{11}h^{-1}\mathrm{M}_{\odot}$ are observed at $z=0$. In comparison to this, a single-tracer method yields $f_{\mathrm{NL}}$-constraints that are weaker by about 1 order of magnitude. Further improvements are expected at higher redshifts and lower $M_{\mathrm{min}}$, potentially reaching the level of $\sigma_{f_{\mathrm{NL}}}\lesssim0.1$.
\item In realistic applications, additional sources of noise, such as a scatter in halo mass, will have to be considered. We test the impact of adding a log-normal scatter of $\sigma_{\ln M}=0.5$ to our halo masses and find our results to be relatively unaffected. Assuming the dark matter to be available to correlate against halos is even more uncertain. Weak-lensing tomography can only measure the dark matter over a broad radial projection, and more work is needed to see how far this approach can be pushed. Moreover, one would also need to include weak-lensing ellipticity noise in the analysis, which we have not done here.
\end{itemize}
We conclude that the shot noise suppression method (modified mass weighting) as presented in \cite{Hamaus2010} when the dark matter density field is available, and the sampling variance cancellation technique (multiple tracers) as proposed in \cite{Seljak2009a} when it is not, have the potential to significantly improve the constraints on primordial non-Gaussianity from current and future large-scale structure data. In \cite{Baldauf2011a} it was found (their Fig.~$15$) that while the power spectrum analysis of a single tracer with $M_{\mathrm{min}}\sim10^{14}h^{-1}\mathrm{M}_{\odot}$ (close to our optimal mass for a single tracer without the dark matter) predicts $\sigma_{f_{\mathrm{NL}}}\sim10$ for $V_{\mathrm{eff}}\simeq50h^{-3}\mathrm{Gpc}^3$, in good agreement with our results, the bispectrum analysis improves this to $\sigma_{f_{\mathrm{NL}}}\sim5$. Our results suggest that the multitracer analysis of the halo power spectrum can improve upon a single-tracer bispectrum analysis, potentially reaching significantly smaller errors on $f_{\mathrm{NL}}$. In principle the multitracer approach can also be applied to the halo bispectrum, but it is not clear how much one can benefit from it, since the dominant terms in the bispectrum do not feature any additional scale dependence that changes with tracer-mass.
In this paper we only focused on primordial non-Gaussianity of the local type and the two-point correlation analysis. Yet, our techniques can be applied to some, but not all, other models of primordial non-Gaussianity, which have only recently been studied in simulations \cite{Taruya2008,Bartolo2010a,Desjacques2010c,Wagner2010,Fedeli2011,Wagner2011}. Theoretical calculations of the non-Gaussian halo bias generally suggest different degrees of scale dependence and amplitudes depending on the model \cite{Verde2009,Sefusatti2009a,Schmidt2010,Shandera2011,Becker2011}. Our methods may help to test those various classes of primordial non-Gaussianity and thus provide a tool to probe the physics of the very early Universe.
\begin{acknowledgments}
We thank Pat McDonald, Tobias Baldauf, Ravi Sheth and Jaiyul Yoo for fruitful discussions, V. Springel for making public his N-body code {\scshape gadget ii}, and A. Knebe for making public his SO halo finder {\scshape AHF}. This work is supported by the Packard Foundation, the Swiss National Foundation under contract 200021-116696/1 and WCU grant R32-10130. VD acknowledges additional support from FK UZH 57184001. NH thanks the hospitality of Lawrence Berkeley National Laboratory (LBNL) at UC Berkeley and the Institute for the Early Universe (IEU) at Ewha University Seoul, where parts of this work were completed.
\end{acknowledgments}
\section{Introduction}
High redshift radio galaxies (HzRGs) are rare, spectacular objects with extended radio jets whose length exceeds scales of a few kiloparsecs. The radio jets are edge-brightened, Fanaroff-Riley class II \citep[FR$\,$II;][]{fr74} sources. Found in overdensities of galaxies indicative of protocluster environments \citep[e.g.,][]{pent00}, HzRGs are among the most massive galaxies in the distant universe and are likely to evolve into modern-day dominant cluster galaxies \citep{mdb08,blr97}. They are therefore important probes for studying the formation and evolution of massive galaxies and clusters at $z\geq2$.
One of the most intriguing characteristics of the relativistic plasma in HzRGs is the correlation between the steepness of the radio spectra and the redshift of the associated host galaxy \citep{tielens79,bm79}.
Radio sources with steeper spectral indices are generally associated with galaxies at higher redshift, and samples of radio sources with ultra-steep spectra ($\alpha \lesssim -1$ where the flux density $S$ is $S\propto\nu^{\alpha}$) were effectively exploited to discover HzRGs \citep[e.g.,][]{rott94,chambers90,cmb87}.
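In the convention above, the spectral index follows directly from any two flux density measurements. A minimal sketch (the flux densities and frequencies below are hypothetical, chosen only to illustrate an ultra-steep source):

```python
import math

def spectral_index(s1, s2, nu1, nu2):
    """Spectral index alpha in the convention S ~ nu**alpha,
    from flux densities s1, s2 measured at frequencies nu1, nu2."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# Hypothetical example: a source that halves in flux density for
# every doubling in frequency is ultra-steep with alpha = -1.
alpha = spectral_index(10.0, 5.0, 74.0, 148.0)  # (Jy, Jy, MHz, MHz)
print(f"alpha = {alpha:.2f}")  # alpha = -1.00
```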
The underlying physical cause of this relation is still not understood. Three causes have been proposed: observational biases, environmental influences, and internal particle acceleration mechanisms that produce intrinsically steeper spectra.
Several observational biases can impact the measured relation. \cite{klamer06} explored the radio \textquotedblleft $k$-correction\textquotedblright\ using a sample of 28 spectroscopically confirmed HzRGs. The authors compared the relation between spectral index and redshift as measured from the observed and rest frame spectra, and found that the relation remained unchanged. Another bias could come from the fact that jet power and spectral index are correlated. This manifests in an observed luminosity$-$redshift correlation: brighter sources (which tend to be at higher redshifts) are more likely to have higher jet power, and therefore steeper spectral indices. For flux density limited surveys this leads to a correlation between power and redshift, and surveys with higher flux density limits have a tighter power$-$redshift correlation \citep{blundell99}.
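A pure power law is in fact invariant under this $k$-correction, since redshifting only rescales the frequency axis:
\[
S_{\mathrm{obs}}(\nu) \propto S_{\mathrm{rest}}\!\big((1+z)\,\nu\big) \propto (1+z)^{\alpha}\,\nu^{\alpha} ,
\]
so the logarithmic slope $\alpha$ is identical in the observed and rest frames; only intrinsic spectral curvature can make the two indices differ.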
Environmental effects could also impact the relation. The temperature of the circumgalactic medium is expected to be higher at higher redshifts. It is also known that the linear sizes of radio sources decrease with redshift \citep[e.g.,][]{miley68,neeser95} which is interpreted as lower expansion speeds due to higher surrounding gas densities at higher redshifts. \citet{ak98} point out that the expanding radio lobes therefore have to work against higher density and temperature. This would slow down the propagation of the jet into the medium, increasing the Fermi acceleration and thus steepening the spectral index. The power$-$redshift correlation in this case would be caused by a change in environment with redshift.
The final option is that the steeper spectrum is indicative of particle acceleration mechanisms different from those in local radio galaxies. One global difference is that the CMB temperature is higher at high redshift, which could increase inverse Compton losses at high frequencies from scattering with CMB photons. Internally to a radio galaxy, spectral indices are seen to evolve along the radio jet axis, with hot spots dominating at high frequencies and diffuse lobe emission dominating at low frequencies \citep[e.g. Cygnus A;][]{carilli91}.
Recently \citet{mckean16} observed a turnover in the spectra of the hot spots detected with LOFAR around 100$\,$MHz. The authors were able to rule out a cut off in the low-energy electron distribution, and found that both free-free absorption or synchrotron self-absorption models provided adequate fits to the data, albeit with unlikely model parameters. To determine the particle acceleration mechanisms it is crucial to make observations at 100$\,$MHz and below with sufficient resolution to determine the internal variation of the low-frequency spectra. This can then be compared to archival observations with similar or higher resolution at frequencies above 1$\,$GHz, where the internal structure of HzRGs have been well studied \citep[e.g.,][]{carilli97,pentVLA00}. All current low frequency information that does exist comes from studies in which HzRGs are unresolved.
Typical angular sizes of HzRGs with $z\gtrsim2$ are about $10\,\textrm{arcsec}$ \citep{wm74}, driving the need for resolutions of about an arcsecond to determine the distribution of spectral indices among spatially resolved components of HzRGs.
The unique capabilities of the Low Frequency Array \citep[LOFAR;][]{vh13} are ideally suited for revealing these distributions at low frequencies. Covering the frequency bands of $10$--$80\,\textrm{MHz}$ (Low Band Antenna; LBA) and $120$--$240\,$MHz (High Band Antenna; HBA), LOFAR can characterize HzRG spectra down to rest frequencies of $\sim100\,$MHz. The full complement of stations comprising International LOFAR (I-LOFAR) provides baselines over $1000\,\textrm{km}$, and sub-arcsecond resolution is achievable down to frequencies of about $60\,\textrm{MHz}$.
At such low radio frequencies, very long baseline interferometry (VLBI) becomes increasingly challenging, as signal propagation through the ionosphere along the different sightlines of widely separated stations gives rise to large differential dispersive delays. These vary rapidly both in time and with direction on the sky, requiring frequent calibration solutions interpolated to the position of the target. Previous works have focused on observations at $\sim150\,$MHz where I-LOFAR is most sensitive and the dispersive delays are less problematic \citep[][Varenius et al., A\&A submitted]{varenius15}. The $\nu^{-2}$ frequency dependence of the ionospheric delays means they are six times larger at 60$\,$MHz than at 150$\,$MHz, reducing the bandwidth over which the assumption can be made that the frequency dependence is linear. Combined with the lower sensitivity of I-LOFAR in the LBA band and the reduction in the number of suitable calibration sources due to absorption processes in compact radio sources below 100$\,$MHz, this makes reducing LBA I-LOFAR observations considerably more challenging than HBA observations. Accordingly, the LBA band of I-LOFAR has been less utilised than the HBA. Previous published LBA results have been limited to observations of 3C 196 \citep{wucknitz10} and the Crab nebula (unpublished) during LOFAR commissioning, when the complement of operational stations limited the longest baseline to $\sim$600$\,$km.
Here we use I-LOFAR to study the spatially resolved properties of 4C 43.15 (also B3 0731+438) at $z=2.429$
\citep{mccarthy91}. This object is one of a sample of 10 that comprise a pilot study of the ultra-steep spectra of HzRGs. We selected 4C 43.15 for this study based on data quality, the suitability of the calibrator, and the simple double-lobed, edge-brightened structure of the target seen at higher frequencies.
The overall spectral index of 4C 43.15 between 365 MHz \citep[Texas Survey of Radio Sources;][]{texas} and 1400 MHz \citep[from the Green Bank 1.4 GHz Northern Sky Survey;][]{wb92} is $\alpha=-1.1$, which places it well within the scatter on the $\alpha-z$ relation, seen in Figure~1 of \citet{db00}. 4C 43.15 has been well studied at optical frequencies, and exhibits many of the characteristics of HzRGs \citep[e.g., an extended Lyman-$\alpha$ halo;][]{vm03}.
Here we present images of 4C 43.15 made with the LBA of I-LOFAR at 55$\,$MHz. These are the first images made with the full operational LBA station complement of I-LOFAR in 2015, and this study sets the record for image resolution at frequencies less than 100$\,$MHz. We compare the low frequency properties of 4C 43.15 with high frequency archival data from the Very Large Array (VLA) to measure the spectral behaviour from 55 -- 4860$\,$MHz. We describe the calibration strategy we designed to address the unique challenges of VLBI for the LBA band of I-LOFAR. The calibration strategy described here provides the foundation for an ongoing pilot survey of ten HzRGs in the Northern Hemisphere with ultra steep ($\alpha<-1$) spectra.
In \S~\ref{sec:obs} we outline the observations and data pre-processing. Section \ref{sec:lba} describes the LBA calibration, including the VLBI techniques.
The resulting images are presented in \S~\ref{sec:results} and discussed in \S~\ref{sec:diss}. The conclusions and outlook are summarised in \S~\ref{sec:concl}. Throughout the paper we assume a $\Lambda$CDM concordance cosmology with $H_0=67.8$ km$\,$s$^{-1}\,$Mpc$^{-1}$, $\Omega_{\textrm{m}}=0.308$, and $\Omega_{\Lambda}=0.692$, consistent with \citet{planckcosmo}. At the distance of 4C 43.15, 1$^{\prime\prime}\!\!$\ corresponds to 8.32$\,$kpc.
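The quoted scale of 8.32$\,$kpc per arcsecond follows from the adopted cosmology; a self-contained numerical sketch (pure Python, Simpson's rule for the comoving distance integral):

```python
import math

# Flat Lambda-CDM parameters quoted in the text.
H0, Om, OL = 67.8, 0.308, 0.692     # km/s/Mpc, density parameters
C = 299792.458                      # speed of light, km/s
z = 2.429                           # redshift of 4C 43.15

def E(zz):
    """Dimensionless Hubble parameter H(z)/H0 for flat Lambda-CDM."""
    return math.sqrt(Om * (1.0 + zz) ** 3 + OL)

# Comoving distance D_C = (c/H0) * integral dz'/E(z'), Simpson's rule.
n = 1000                            # number of intervals (even)
h = z / n
s = 1.0 / E(0.0) + 1.0 / E(z)
for i in range(1, n):
    s += (4 if i % 2 else 2) / E(i * h)
D_C = (C / H0) * (h / 3.0) * s      # Mpc
D_A = D_C / (1.0 + z)               # angular diameter distance, Mpc
kpc_per_arcsec = D_A * 1e3 * math.pi / (180.0 * 3600.0)
print(f"{kpc_per_arcsec:.2f} kpc/arcsec")   # ~8.32, as quoted
```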
\section{Observations and pre-processing}
\label{sec:obs}
In this section we describe the observations, pre-processing steps and initial flagging of the data.
As part of project LC3\_018, the target 4C 43.15 was observed on 22 Jan 2015 with 8.5 hr on-source time. Using two beams we conducted the observation with simultaneous continuous frequency coverage between 30 and 78$\,$MHz on both the target and a flux density calibrator. Designed with calibration redundancy in mind, the observation started with 3C 147 as the calibrator and switched to 3C 286 halfway through the observation. Although 3C 286 was included for calibrator redundancy, it was later realised that the large uncertainties of the currently available beam models prevent accurate calibration transfer to the target at such a large angular separation. The observations are summarized in Table~\ref{tab:obs}.
All 46 operational LBA stations participated in the observation, including 24 core stations, 14 remote stations, and 8 international stations. The international stations included 5 in Germany (DE601-DE605) and one each in Sweden (SE607), France (FR606), and the United Kingdom (UK608). While all stations have 96 dipoles, the core and remote stations are limited by electronics to only using 48 dipoles at one time. The observation was made in the LBA\_OUTER configuration, which uses only the outermost 48 dipoles in the core and remote stations. This configuration reduces the amount of cross-talk between closely spaced dipoles and gives a smaller field of view when compared with other configurations. The international stations always use all 96 dipoles, and thus have roughly twice the sensitivity of core and remote stations. The raw data were recorded with an integration time of 1$\,$s and 64 channels per 0.195$\,$MHz subband to facilitate radio frequency interference (RFI) excision.
\subsection{Radio Observatory Processing}
All data were recorded in 8-bit mode and correlated with the COBALT correlator to produce all linear correlation products (XX, XY, YX, YY). After correlation the data were pre-processed by the Radio Observatory. Radio frequency interference was excised using AOFlagger \citep{offringa10} with the default LBA flagging strategy. The data were averaged to 32 channels per subband (to preserve spectral resolution for future studies of carbon radio recombination lines) and 2 second integration time (to preserve information on the time-dependence of phases) before being placed in the Long Term Archive (LTA). The data were retrieved from the LTA and further processed on a parallel cluster kept up to date with the most current stable LOFAR software available at the time (versions 2.9 -- 2.15).
\begin{table*}
\begin{center}
\caption{Observations. The bandwidth for all targets was 48$\,$MHz, split into 244 subbands of 0.195$\,$MHz width. Overlapping times are due to the use of simultaneous beams. Right ascension is hh:mm:ss.ss, and declination is dd:mm:ss.ss.\label{tab:obs}}\vspace{5pt}
\begin{tabular}{llccccccccc}
& Obs. ID & Object & Type & RA & Dec & Date start & UT start & UT stop & exposure \\ \hline \\[-5pt]
& L257205 & 3C 147 & Calibrator & 05:42:36.26 & +49:51:07.08 & 22-Jan-2015 & 18:32:33 & 22:47:32 & 4.25 hr \\
& L257207 & 4C 43.15 & Target & 07:35:21.89 & +43:44:20.20 & 22-Jan-2015 & 18:32:33 & 22:47:32 & 4.25 hr \\
& L257209 & 3C 147 & Calibrator & 05:42:36.26 & +49:51:07.08 & 22-Jan-2015 & 22:48:33 & 23:03:32 & 0.25 hr \\
& L257211 & 3C 286 & Calibrator & 13:31:08.28 & +30:30:32.95 & 22-Jan-2015 & 22:48:33 & 23:03:32 & 0.25 hr \\
& L257213 & 3C 286 & Calibrator & 13:31:08.28 & +30:30:32.95 & 22-Jan-2015 & 23:04:33 & 03:19:32 & 4.25 hr \\
& L257215 & 4C 43.15 & Target & 07:35:21.89 & +43:44:20.20 & 22-Jan-2015 & 23:04:33 & 03:19:32 & 4.25 hr \\ \hline
\end{tabular}
\end{center}
\end{table*}
\section{Data Calibration}
\label{sec:lba}
In this section we describe in detail the steps taken to calibrate the entire LBA, including international stations, paying particular attention to how we address the unique challenges at low frequencies. Figure~\ref{fig:dr} shows a block diagram overview of the calibration steps.
\begin{figure*}
\includegraphics[width=\textwidth]{fig1}
\caption{\label{fig:dr} A block diagram overview of the calibration steps. Blue blocks represent operations on data sets with core stations, while gray blocks represent operations on data sets where the core stations have been combined into the `super' station (see \S~\ref{sec:cs} for details on station combination). Yellow blocks represent operations on solution tables rather than data.}
\end{figure*}
\subsection{Initial flagging and data selection}
Our first step after downloading the data from the LTA was to run AOFlagger again with the LBA default strategy. Typically 0.5 to 2 per cent of the data in each subband were flagged. An inspection of gain solutions from an initial gain calibration of the entire bandwidth on 3C 147 showed that the Dutch remote station RS409 had dropped out halfway through the first observing block, and we flagged this station and removed it from the dataset. We further excised one core station (CS501) and one remote station (RS210) after manual inspection.
We determined the normalised standard deviation per subband from the calibrator data and used this information to select the most sensitive subbands close to the peak sensitivity of the LBA. Outside these subbands the normalised standard deviation rapidly increases towards the edges of the frequency range. The total contiguous bandwidth selected was 15.6$\,$MHz with a central frequency of 55$\,$MHz. During this half of the observation, the standard calibrator 3C 147 was always less than 20 degrees in total angular separation from the target, and the absolute flux density errors are expected to be less than 20 per cent. This is important for two reasons. First, amplitude errors from beam correction models are reduced when objects are close in elevation. Second, we transfer information derived from the calibrator phases (see \S~\ref{sec:tgt} for full details) to the target; this information is valid for a particular direction on the sky, and transferral over very large distances will not improve the signal to noise ratio for the target data. For the second half of the observation, 3C 286 was more than 20 degrees distant from 4C 43.15 for most of the observation block, requiring more advanced calibration which is beyond the scope of this paper; including these data would in any case only improve the noise by a factor of $\sqrt{2}$. The second half of the observation was therefore not used for the data analysis in this paper.
\subsection{Removal of bright off-axis sources}
Bright off-axis sources contribute significantly to the visibilities. At low frequencies, this problem is exacerbated by LOFAR's wide field of view and large primary beam sidelobes. There are several sources that have brightnesses of thousands to tens of thousands of Janskys within the LBA frequency range, and they need to be dealt with. We accomplished the removal of bright off-axis sources using a method called demixing \citep{vdt07}, where the data are phase-shifted to the off-axis source, averaged to mimic beam and time smearing, and calibrated against a model. All baselines were demixed, although simulations performed as part of commissioning work showed that the source models have insufficient resolution to correctly predict the compact bright sources to which the longest baselines would be sensitive. Such sources produce strong beating in the amplitudes of the visibilities, which is visible by eye. A careful visual inspection ensured that this was not a problem for these data. Using the calibration solutions, the \textit{uncorrected} visibilities for the source are subtracted. After examination of the bright off-axis sources above the horizon and within $90^{\circ}$ of the target and calibrator (such a large radius is necessary in case there are sources in sidelobes), we demixed Cassiopeia A and Taurus A from our data. After demixing the data were averaged to 16 channels per subband to reduce the data volume, and the AOFlagger was run again with the default LBA flagging strategy. Typical flagging percentages were 2--4 per cent. The combined losses from time (2$\,$sec) and bandwidth (4 channels per 195$\,$kHz subband) smearing on the longest baseline are 5 per cent at a radius of 95 arcsec \citep{bridleschwab}. Higher frequency observations of 4C 43.15 show its largest angular size to be 11 arcsec, well within this field of view.
\subsection{LOFAR beam correction and conversion to circular polarization}
At low frequencies, differential Faraday rotation from propagation through the ionosphere can shift flux density from the XX and YY to the cross-hand polarizations. An effective way to deal with this is to convert from linear to circular polarization, which reduces the impact of differential Faraday rotation to an L-R phase offset in the resulting circular polarization. Since the conversion from linear to circular polarization is beam dependent, we first removed the beam.
We used \textsc{mscorpol} (version 1.7)\footnote{\textsc{mscorpol} was developed by T.~D. Carozzi and available at: \\ \href{https://github.com/2baOrNot2ba/mscorpol}{\color{blue}{https://github.com/2baOrNot2ba/mscorpol}}} to accomplish both removal of beam effects and conversion to circular polarization. This software performs a correction for the geometric projection of the incident electric field onto the antennas, which are modelled as ideal electric dipoles.
One drawback of \textsc{mscorpol} is that it does not yet include frequency dependence in the beam model, so we also replicated our entire calibration strategy using the LOFAR \emph{new default pre-processing pipeline} (NDPPP), which has frequency-dependent beam models, to correct for the beam instead of \textsc{mscorpol}. We converted the NDPPP beam-corrected data to circular polarization using standard equations and followed the same calibration steps described below. We found that the data where the beam was removed with \textsc{mscorpol} ultimately had more robust calibration solutions and better reproduced the input model for the calibrator. Therefore we chose to use the \textsc{mscorpol} beam correction.
\subsection{Time-independent station scaling}
The visibilities for the international stations must be scaled to approximately the right amplitudes relative to the core and remote stations before calibration. This is important because the amplitudes of the visibilities are later used to calculate the data weights, which are used in subsequent calibration steps, see \S~\ref{sec:pdr}. To do this we solved for the diagonal gains (RR,LL) on all baselines using the Statistical Efficient Calibration \citep[StEfCal;][]{sw14} algorithm in NDPPP.
One solution was calculated every eight seconds per 0.195$\,$MHz bandwidth (one subband). The StEfCal algorithm calculates time and frequency independent phase errors, and does not take into account how phase changes with frequency (the delay; $d\phi/d\nu$) or time (the rate; $d\phi/dt$). If the solution interval over which StEfCal operates is large compared to these effects, the resulting incoherent averaging will reduce the signal to noise. Since the incoherently averaged amplitudes are adjusted to the correct level, the coherence losses manifest as an increase of the noise level.
Using the maximal values for delays and rates found in \S~\ref{sec:pdr} to calculate the signal to noise reduction \citep[from Eqn. 9.8 and 9.11 of ][]{moran95}, we find losses of 6 and 16 per cent for delays and rates, respectively.
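The quoted losses follow from coherently averaging a linearly drifting phase, which attenuates the amplitude by a sinc factor \citep{moran95}. A sketch, assuming a residual delay of 1$\,\mu$s across one 0.195$\,$MHz subband and a rate of 40$\,$mHz over the 8 second solution interval (values consistent with the maxima reported in \S~\ref{sec:pdr}):

```python
import math

def coherence_loss(cycles):
    """Fractional amplitude loss when a phase drifting linearly
    through `cycles` turns is averaged coherently:
    1 - sinc(pi * cycles)."""
    x = math.pi * cycles
    return 0.0 if x == 0 else 1.0 - math.sin(x) / x

# Delay: tau * delta_nu phase turns across one subband.
tau, dnu = 1e-6, 0.195e6       # 1 microsecond, 0.195 MHz
print(f"delay loss: {coherence_loss(tau * dnu):.0%}")   # 6%

# Rate: r * T phase turns over the solution interval.
r, T = 0.04, 8.0               # 40 mHz, 8 s
print(f"rate loss:  {coherence_loss(r * T):.0%}")       # 16%
```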
The calibrator 3C 147 flux density was given by the model from \citet{sh12}.
3C 147 is expected to be unresolved or only marginally resolved, and therefore to provide an equal amplitude response on baselines of any length.
We use this gain calibration for two tasks: (i) to find an overall scaling factor for each station that correctly provides the relative amplitudes of all stations; and (ii) to identify bad data using the LOFAR Solution Tool\footnote{The LOFAR Solution Tool (LoSoTo) was developed by Francesco de Gasperin and is available at: \\ \href{https://github.com/revoltek/losoto}{\color{blue}{https://github.com/revoltek/losoto}}}. About 20 per cent of the solutions were flagged either due to outliers or periods of time with loss of phase coherence, and we transferred these flags back to the data.
To find the time-independent scaling factor per station, we zeroed the phases and calculated a single time-averaged amplitude correction for each antenna. These corrections were applied to both calibrator and target datasets.
\subsection{Phase calibration for Dutch stations}
We solved for overall phase corrections using only the Dutch array but filtering core -- core station baselines, which can have substantial low-level RFI and are sensitive to extended emission. The phase calibration removes ionospheric distortions in the direction of the dominant source at the pointing centre. We performed the phase calibration separately for 3C 147 and 4C 43.15 using appropriate skymodels. 3C 147 is the dominant source in its field, and we use the \citet{sh12} point source model. 4C 43.15 has a flux density of at least 10$\,$Jy in the LBA frequency range. We used an apparent sky model of the field constructed from the TGSS Alternative Data Release 1 \citep{tgssadr}, containing all sources within 7 degrees of our target and with a flux density above 1$\,$Jy.
\subsection{Combining core stations}
\label{sec:cs}
After phase calibration of the Dutch stations for both the calibrator and the target, we coherently added the visibilities from the core stations to create a `super' station.
The resulting `super' station is extremely sensitive, providing increased signal to noise on individual baselines to anchor the I-LOFAR calibration (described further in \S~\ref{sec:pdr}).
All core stations are referred to a single clock and hence should have delays and rates that are negligibly different after phase calibration is performed. The station combination was accomplished with the Station Adder in NDPPP by taking the weighted average of all visibilities on particular baselines. For each remote and international station, all visibilities on baselines between that station and the core stations are averaged together taking the data weights into account. The new $u,v,w$ coordinates are calculated as the weighted geometric center of the $u,v,w$ coordinates of the visibilities being combined\footnote{We found an extra 1 per cent reduction in noise for the calibrator when using the weighted geometric center of the $u,v,w$ coordinates, rather than calculating the $u,v,w$ coordinates based on the `super' station position. This has been implemented in NDPPP (LOFAR software version 12.2.0).}. Once the core stations were combined, we created a new data set containing only the `super' station and remote and international stations. The dataset with the uncombined core stations was kept for later use.
The final averaging parameters for the data were 4 channels per subband for 3C 147, and 8 channels per subband for 4C 43.15.
After averaging the data were again flagged with the AOFlagger default LBA flagging strategy, which flagged another 1 -- 2 per cent of the data.
\subsection{Calibrator residual phase, delay, and rate}
\label{sec:pdr}
The international stations are separated by up to 1292 km and have independent clocks which time stamp the data at the correlator. There are residual non-dispersive delays due to the offset of the separate rubidium clocks at each station. Correlator model errors can also introduce residual non-dispersive delays of up to $\sim100\,$ns.
Dispersive delays from the ionosphere make a large contribution to the phase errors. Given enough signal to noise on every baseline, we could solve for the phase errors over time and bandwidth intervals small enough that the dispersive errors can be approximated as constant. However, a single international-international baseline is only sensitive to sources of $\sim10\,$Jy at the resolution of our data ($\Delta\nu=0.195\,$MHz, 2 sec). Larger bandwidth and time intervals increase the signal to noise ratio, and the next step is to model the dispersive delays and rates with linear slopes in frequency and time.
This can be done using a technique known as fringe-fitting \citep[e.g.,][]{cotton95,tms01}. A global fringe-fitting algorithm is implemented as the task FRING in the Astronomical Image Processing System \citep[AIPS;][]{greisen03}. We therefore converted our data from measurement set to UVFITS format using the task \textsc{ms2uvfits} and read it into AIPS. The data weights of each visibility were set to be the inverse square of the standard deviation of the data within a three minute window.
The ionosphere introduces a dispersive delay, where the phase corruption from the ionosphere is inversely proportional to frequency, $\phi_{\textrm{ion}} \propto \nu^{-1}$. The dispersive delay is therefore inversely proportional to frequency squared, $d\phi / d\nu \propto -\nu^{-2}$. Non-dispersive delays such as those introduced by clock offsets are frequency-independent. The ionospheric delay is by far the dominant effect. For a more in-depth discussion of all the different contributions to the delay at 150$\,$MHz for LOFAR, see \citet{moldon15}. The delay-fitting task FRING in AIPS fits a single, non-dispersive delay solution to each so-called \emph{intermediate frequency} (IF), where an IF is a continuous bandwidth segment. With I-LOFAR data, we have the freedom to choose the desired IF bandwidth by combining any number of LOFAR subbands (each of width 0.195$\,$MHz). This allows us to make a piece-wise linear approximation to the true phase behaviour. Making wider IFs provides a higher peak sensitivity, but leads to increasingly large deviations between the (non-dispersive only) model and the (dispersive and non-dispersive) reality at the IF edges when the dispersive delay contribution is large. As a compromise, we created 8 IFs of width 1.95$\,$MHz each (10 LOFAR subbands), and each IF was calibrated independently. We used a high resolution model of 3C 147 (from a previous I-LOFAR HBA observation at 150$\,$MHz) for the calibration, and set the total flux density scale from \citet{sh12}. The solution interval was set to 30 seconds, and we found solutions for all antennas using only baselines with a projected separation $>10$k$\lambda$, effectively removing data from all baselines containing only Dutch stations. The calibration used the `super' station as the reference antenna.
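The compromise between IF width and model fidelity can be illustrated numerically: fit the best straight line (a single non-dispersive delay) to a purely dispersive phase $\phi = K/\nu$ across one IF and inspect the peak residual. In this sketch the dispersive delay at band centre is set to an illustrative 100$\,$ns, of the order of the international-station delays reported below:

```python
import math

def dispersive_residual(nu0_hz, bw_hz, tau0_s=100e-9, n=201):
    """Peak residual (degrees) when a purely dispersive phase
    phi = K / nu is approximated by the best-fitting straight line
    (a single non-dispersive delay) across one IF centred on nu0_hz.
    tau0_s sets the magnitude of the dispersive delay at band centre."""
    K = 2 * math.pi * tau0_s * nu0_hz ** 2        # rad * Hz
    nus = [nu0_hz - bw_hz / 2 + i * bw_hz / (n - 1) for i in range(n)]
    phis = [K / nu for nu in nus]
    # Closed-form least-squares fit of phi ~ a + b * nu.
    mx = sum(nus) / n
    my = sum(phis) / n
    b = sum((x - mx) * (y - my) for x, y in zip(nus, phis)) \
        / sum((x - mx) ** 2 for x in nus)
    a = my - b * mx
    res = max(abs(y - (a + b * x)) for x, y in zip(nus, phis))
    return math.degrees(res)

# Eight 1.95 MHz IFs versus one 15.6 MHz IF, centred at 55 MHz:
print(dispersive_residual(55e6, 1.95e6))   # well below 1 degree
print(dispersive_residual(55e6, 15.6e6))   # tens of degrees
```

The residual grows roughly quadratically with IF width, which is why several narrow IFs are preferred over one wide IF despite the lower per-IF sensitivity.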
The search windows were limited to 5$\,\mu$s for delays and 80$\,$mHz for rates. Typical delays for remote stations were 30$\,$ns, while international station delays ranged from 100$\,$ns to 1$\,\mu$s. The delay solutions showed the expected behaviour, with larger offsets from zero for longer baselines, and increasing magnitudes (away from zero) with decreasing frequency. Rates were typically
up to a few tens of mHz for remote and international stations.
\subsection{Calibrator phase self-calibration}
The combined `super' station, while useful for gaining signal to noise on individual baselines during fringe-fitting, left undesirable artefacts when imaging. If the phase-only calibration prior to station combination is imperfect, the `super' station will not have a sensitivity equal to the sum of its constituent core stations. The `super' station also has a much smaller field of view than the other stations in the array. We therefore transferred the fringe-fitting solutions to a dataset where the core stations were not combined.
Before applying the calibration solutions we smoothed the delays and rates with solution intervals of 6 and 12 minutes, respectively, after clipping outliers (solutions more than 20 mHz and 50 ns from the smoothed value within a 30 minute window for rates and delays, respectively). The smoothing intervals were determined by comparing with the unsmoothed solutions to find the smallest time window that did not oversmooth the data. We applied the smoothed solutions to a dataset where the core stations were not combined.
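The clip-then-smooth step can be sketched as follows (the window lengths are in samples and the delay track is synthetic; this is an illustration of the procedure, not the actual implementation used on the solution tables):

```python
import statistics

def clip_and_smooth(series, clip_win, clip_tol, smooth_win):
    """Flag points deviating more than clip_tol from a running median
    (clip_win samples), then boxcar-average the survivors over
    smooth_win samples."""
    n = len(series)
    clipped = []
    for i in range(n):
        lo, hi = max(0, i - clip_win // 2), min(n, i + clip_win // 2 + 1)
        med = statistics.median(series[lo:hi])
        clipped.append(series[i] if abs(series[i] - med) <= clip_tol else None)
    smooth = []
    for i in range(n):
        lo, hi = max(0, i - smooth_win // 2), min(n, i + smooth_win // 2 + 1)
        vals = [v for v in clipped[lo:hi] if v is not None]
        smooth.append(sum(vals) / len(vals) if vals else None)
    return smooth

# Synthetic delay track (ns) with a single outlier at index 3;
# the outlier is clipped and bridged by its neighbours.
track = [100, 101, 102, 500, 104, 105, 106, 107]
print(clip_and_smooth(track, clip_win=5, clip_tol=50, smooth_win=3))
```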
The data were then averaged in time by a factor of two, to 4 second integration times, prior to self-calibration. We performed three phase-only self-calibration loops with solution intervals of 30 seconds, 8 seconds, and 4 seconds. Further self-calibration did not improve the image fidelity or reduce the image noise.
\subsection{Setting the flux density scale}
After applying the final phase-only calibration, we solved for amplitude and phase with a 5 minute solution interval, as the amplitudes vary slowly with time. The amplitude solutions provide time-variable \textit{corrections} to the initial default station amplitude calibration. Fig.~\ref{fig:amp} shows the amplitude solutions per station for an IF near the centre of the band.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig2}
\caption{Amplitude solutions for all stations, from the final step of self-calibration. These are \textit{corrections} to the initial amplitude calibration of each station, for the central IF at 53$\,$MHz. The colours go from core stations (darkest) to international stations (lightest). \label{fig:amp}}
\end{figure}
The amplitude solutions show some small-scale variations in time, but are stable to within 20 per cent of the median value over the entirety of the observation. We therefore adopt errors of 20 per cent for the measurements presented here. Several effects could be responsible for the variations in time, such as imperfect beam or source models, or ionospheric disturbances. We are not currently able to distinguish whether the time variation we see comes from the ionosphere or from beam errors.
We checked the calibration of 3C 147 by imaging each IF of the final self-calibrated data separately, fitting a Gaussian to extract the integrated flux density, and plotting this against the input model (see Figure~\ref{fig:calsed}). The integrated flux density measurements are within the errors of the point-source model, while the peak brightness measurements lie below the model because the jet in 3C 147, also seen at higher frequencies, is resolved (the restoring beam is 0.9$^{\prime\prime}\!\!$\ $\times$ 0.6$^{\prime\prime}\!\!$\ ). The integrated values are systematically lower than the model and slightly flatter, which could reflect the fact that the starting model from \citet{sh12} is a point-source model while 3C 147 is resolved. The flattening of the spectral index towards higher frequencies, where the beam size is smaller, implies that the jet, which appears as a NW elongation in Figure~\ref{fig:cal}, has a steeper low-frequency spectral index than the core. This is supported by the peak brightness measurements being slightly flatter than the integrated flux density measurements in Figure~\ref{fig:calsed}.
In some extremely compact objects, scintillation effects from the interstellar medium have been seen to artificially broaden sources \citep[e.g.,][]{linsky08,quirrenbach92,rickett86}. However, these scintillations are usually only seen in compact ($\sim$10 mas) sources and/or on longer timescales (days to weeks). Both the calibrator and target are larger in size, and well outside of the galactic plane (above $b=20^{\circ}$). We thus do not expect them to be affected. The final self-calibrated image using the entire bandwidth is shown in Fig.~\ref{fig:cal}, and has a noise of 135$\,$mJy$\,$bm$^{-1}$, about a factor of 3 above the expected noise given the amount of flagging (40 per cent) and the $u-v$ cut in imaging ($>20\,$k$\lambda$).
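As a sanity check on the flagging contribution alone (the $u$-$v$ cut is not modelled here), thermal image noise scales with the inverse square root of the number of remaining visibilities:

```python
def noise_increase_from_flagging(flagged_fraction):
    """Factor by which thermal image noise rises when a fraction of the
    visibilities is flagged, since sigma ~ 1 / sqrt(N_vis)."""
    return 1.0 / (1.0 - flagged_fraction) ** 0.5

# 40 per cent flagging raises the thermal noise by ~1.29x on its own,
# so most of the factor-of-3 excess must come from elsewhere
factor = noise_increase_from_flagging(0.40)
```

This shows that flagging alone accounts for only a ~30 per cent increase, consistent with the text attributing the remaining excess to the $u$-$v$ cut and calibration imperfections.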
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig3}
\caption{Spectral energy distribution (SED) of 3C 147. The solid line shows the calibration model that we used, while the dashed lines indicate errors of 10 per cent. The open circles show the integrated flux density measurements from each IF, while the filled circles show the peak brightness measurements. \label{fig:calsed} }
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig4}
\caption{The calibrator, 3C 147, imaged using 15.6$\,$MHz bandwidth and 4.25 hr of data. The noise in the image is 135$\,$mJy$\,$bm$^{-1}$. The NW-elongation is a jet also seen at higher frequencies. \label{fig:cal}}
\end{figure}
\subsection{Target residual phase, delay, and rate}
\label{sec:tgt}
Before fringe-fitting on the target, the time-independent and time-dependent amplitude corrections derived from the calibrator were applied to the target, for a dataset with the `super' station. The time-dependent core station amplitude corrections were all within a few per cent of each other, so we transferred the amplitude corrections from a station close to the centre of the array, CS001, to the `super' station. The fringe-fitting solutions from the calibrator, approximately $20^{\circ}$ away, should also contain some instrumental and systematic effects, such as those due to clock offsets and large-scale ionospheric structure, which will be common to the target direction and can be usefully subtracted by applying the calibrator solutions to the target data.
After extensive testing, we found that we gained the most signal to noise in the fringe-fitting by applying the smoothed delays from the calibrator, along with a model of the frequency dependence of the phases. We used the AIPS task MBDLY to model the frequency dependence from the FRING calibration solutions with smoothed delays. We used the `DISP' option of MBDLY to find the dispersion and multi-band delay for each solution in the fringe-fitting calibration table. After zeroing the phases and rates in the FRING calibration solutions, we used the MBDLY results to correct for the multi-band delay and the dispersion. With the phases already zeroed, the dispersion provides a \emph{relative} correction of the phases, effectively removing the frequency dependence.
This allowed us to use a wider bandwidth in the FRING algorithm, which increased the signal to noise. We chose to use the entire 15.6$\,$MHz bandwidth.
The resultant delays were smaller by at least a factor of two on the longest baselines, which was expected as transferring the delays from the calibrator already should have corrected the bulk of the delays. These residual delays are then the \emph{difference} in the dispersion and multi-band delays between the target and the calibrator. We also tested
the effect of only including data from partial $uv$ selections and established that it was necessary to use the full $uv$ range to find robust fringe-fitting solutions. It is important to remember that the shortest baseline is from the `super' station to the nearest remote station. There are 12 remote station -- `super' station baselines, ranging from about 4$\,$km to 55$\,$km, with a median length of about 16$\,$km.
The next step was to perform fringe fitting on the target. We began fringe fitting using a point source model with a flux density equal to the integrated flux density of the target measured from a low-resolution image made with only the Dutch array. Initial tests showed a double source with similar separation and position angle (PA) as seen for 4C 43.15 at higher frequencies, rather than the input point source model. We further self-calibrated by using the resulting image as a starting model for fringe-fitting. We repeated this self-calibration until the image stopped improving.
\subsection{Astrometric Corrections}
The process of fringe frequency fitting does not derive absolute phases or preserve absolute positions, only relative ones. To derive the absolute astrometric positions we assumed that the components visible in our derived images coincided with the components visible on the high-frequency archival data for which the absolute astrometry was correct.
We centred the low-frequency lobes in the
direction perpendicular to the jet axis, and along the jet axis we centred the maximum extent of the low-frequency emission between the maximum extent of the high frequency emission. The re-positioning of the source is accurate to within $\sim$0.6$^{\prime\prime}\!\!$\ assuming that the total extent of the low-frequency emission is contained within the total extent of the high-frequency emission. This positional uncertainty will not affect the following analysis.
\section{Results}
\label{sec:results}
In Figure~\ref{fig:tgt} we present an LBA image of 4C 43.15 which achieves a resolution of 0.9$^{\prime\prime}\!\!$\ $\times$0.6$^{\prime\prime}\!\!$\ with PA -33 deg and has a noise level of 59\mjy\ . This image was made using multi-scale CLEAN in the Common Astronomy Software Applications \citep[CASA;][]{casa} software package, with Briggs weighting and a robust parameter of -1.5, which is close to uniform weighting and offers higher resolution than natural weighting. The contours show the significance of the detection (starting at $3\sigma$ and up to $20\sigma$). \emph{This is the first image made with sub-arcsecond resolution at frequencies below 100$\,$MHz.} The beam area is a factor of 2.5 smaller than that achieved by \citet{wucknitz10}. The measured noise is a factor of 2.4 above the theoretical noise.
\begin{figure*}
\includegraphics[width=0.49\textwidth,clip,trim=1.8cm 1cm 1.8cm 0cm]{fig5a}
\includegraphics[width=0.49\textwidth,clip,trim=1.8cm 1cm 1.8cm 0cm]{fig5b}
\caption{\label{fig:tgt} The final LBA images of 4C 43.15. The image on the left was made using 15.6$\,$MHz of bandwidth centred on 55$\,$MHz. We used the multi-scale function of the \textsc{CLEAN} task in CASA with Briggs weighting (robust -1.5) and no inner $uv$ cut. The image noise achieved is 59\mjy\ , while the expected noise given the amount of flagged data and image weighting is 25\mjy\ . The final restoring beam is 0.9$^{\prime\prime}\!\!$\ $\times$0.6$^{\prime\prime}\!\!$\ with PA -33 deg. The image on the right is the same image, but smoothed with a Gaussian kernel 1.2 times the size of the restoring beam. The contours in both images are drawn at the same levels, which are 3, 5, 10, and 20$\sigma$ of the unsmoothed image. }
\end{figure*}
In the following subsections we examine first the morphology of 4C 43.15 and then the spectral index properties of the source. For comparison with higher frequencies, we used archival data from the NRAO VLA Archive Survey\footnote{http://archive.nrao.edu/nvas/}. The available images had higher resolution than the LBA image presented here, with the exception of images at 1.4$\,$GHz. We therefore downloaded and re-imaged the calibrated data to produce more similar beam sizes with the use of different weighting and/or maximum baseline length. The archival data and resulting beam sizes are listed in Table~\ref{tab:vla}. All images were then convolved to the largest beam full width at half maximum (FWHM) of 1.55$^{\prime\prime}\!\!$\ $\times$ 0.98$^{\prime\prime}\!\!$\ (at 1.4$\,$GHz). Even at the highest frequency used here (8.4$\,$GHz) the A-configuration of the VLA is still sensitive to emission on scales of about 5$^{\prime\prime}\!\!$\ , or roughly the size of a single lobe of 4C 43.15. We therefore do not expect that the image misses significant contributions to the flux density. This is supported by the third panel in Figure~\ref{fig:si}, which shows that the spectral indices from 1.4$\,$GHz to the two higher frequencies in this study are the same within the errors. If a substantial amount of flux density were missing at 8.4$\,$GHz, we would expect to see a steeper spectral index from 1.4$\,$GHz to 8.4$\,$GHz than from 1.4$\,$GHz to 4.7$\,$GHz.
\begin{table}
\caption{\label{tab:vla} Summary of archival VLA data and re-imaging parameters. All data were taken in A-configuration, which has a minimum baseline of 0.68$\,$km and a maximum baseline of 36.4$\,$km.}
\begin{tabular}{ccccc}
Date & $\nu$ & & Maximum & Beam \\[-2pt]
observed & $[$GHz$]$ & Weighting & baseline & size \\ \hline
31-08-1995 & 1.4 & super uniform & -- & 1.55$^{\prime\prime}\!\!$\ $\times$ 0.98$^{\prime\prime}\!\!$\ \\
19-03-1994 & 4.7 & natural & 192$\,$k$\lambda$ & 1.02$^{\prime\prime}\!\!$\ $\times$ 0.88$^{\prime\prime}\!\!$ \\
31-08-1995 & 8.4 & natural & 192$\,$k$\lambda$ & 1.05$^{\prime\prime}\!\!$\ $\times$ 0.83$^{\prime\prime}\!\!$\ \\
\end{tabular}
\end{table}
\subsection{Morphology}
Figure~\ref{fig:tgt} shows two radio lobes that are edge brightened, the classic signature of an FR$\,$II source. FR$\,$II sources have several components. There are collimated jets that extend in opposite directions from the host galaxy, terminating in hot spots that are bright, concentrated regions of emission. More diffuse, extended radio emission from plasma flowing back from the hot spots comprises the lobes. In HzRGs, only the hotspots and lobes have been directly observed, since the jets have low surface brightness. Observations of 4C 43.15 at frequencies higher than 1$\,$GHz clearly show the hot spots and diffuse lobe emission, but this is the first time this morphology has been spatially resolved for an HzRG at frequencies $<300\,$MHz. The smoothed image shows a bridge of emission connecting the two lobes at the 3 and 5$\sigma$ significance levels. This is
similar to what is seen in the canonical low-redshift FR$\,$II galaxy, Cygnus A \citep{carilli91}, but this is \emph{the first time that a bridge of low frequency emission connecting the two lobes has been observed in a HzRG.}
To qualitatively study the low-frequency morphology of 4C 43.15 in more detail and compare it with the structure at high frequencies, we derived the brightness profiles along and perpendicular to the source axis. To do this we defined the jet axis by drawing a line between the centroids of Gaussian fits to each lobe. We used the position angle of this line to rotate all images (the unsmoothed image was used for the 55$\,$MHz image) so the jet axis is aligned with North. We fitted for the rotation angle independently for all frequencies, and found the measured position angles were all within 1 degree of each other, so we used the average value of 13.36 degrees to rotate all images. The rotated images are shown overlaid on each other in Figure~\ref{fig:rot}, along with normalized sums of the flux density along the North--South direction and East--West direction.
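The jet-axis angle used for the rotation can be sketched as follows. The centroid coordinates in the usage line are illustrative offsets, not the fitted values, and the sign convention (East of North, with $y$ increasing to the North) is an assumption of this sketch.

```python
import math

def position_angle_deg(centroid_a, centroid_b):
    """Position angle (degrees East of North) of the line joining two lobe
    centroids given as (x, y) offsets, with y increasing towards the North."""
    dx = centroid_b[0] - centroid_a[0]
    dy = centroid_b[1] - centroid_a[1]
    return math.degrees(math.atan2(dx, dy))

# Illustrative lobe centroids in arcsec offsets; the returned angle is the
# amount by which each image is rotated to align the jet axis with North.
pa = position_angle_deg((0.0, -4.35), (1.0, 4.34))
```

Fitting this angle independently per frequency, as done above, gives a simple consistency check that the lobes line up across the band.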
\begin{figure}
\includegraphics[width=0.5\textwidth,clip,trim=1cm 0.5cm 4cm 4cm]{fig6}
\caption{\label{fig:rot} Contours and intensity profiles for 4C 43.15 at four frequencies. The rotation angle of the jet was determined per frequency to rotate all images so the jet axis is aligned for all images. The contours are set at 20, 40, 60, 80, and 95 per cent of the maximum intensity (which is unity).}
\end{figure}
The integrated flux density ratio of the lobes also evolves with frequency, which can be seen in Figure~\ref{fig:rot}. The lobe ratio changes from 3 at the highest frequency to 1.7 at the lowest frequency. This implies a difference in spectral index between the two lobes, which will be discussed in the next section.
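A change in the lobe flux ratio between two frequencies maps directly onto a spectral-index difference between the lobes. A quick sketch using the quoted end-point ratios follows; note this averages over the whole band, so it understates the steeper difference found above 1.4$\,$GHz in the next section.

```python
import math

def delta_alpha(ratio_low, ratio_high, nu_low, nu_high):
    """Spectral-index difference between two lobes implied by their
    flux-density ratio at two frequencies, assuming S ~ nu**alpha."""
    return math.log10(ratio_high / ratio_low) / math.log10(nu_high / nu_low)

# Lobe ratio 1.7 at 55 MHz rising to 3 at 8.4 GHz: the brighter (southern)
# lobe has a band-averaged spectral index flatter by ~0.11
da = delta_alpha(1.7, 3.0, 55e6, 8.44e9)
```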
\subsection{Spectral Index Properties}
\label{sec:si}
In this section we shall describe the spectral index properties of 4C 43.15 using the integrated spectra from each of the lobes, and the total integrated spectral index. Figure~\ref{fig:bb} shows the lobe spectra and the total integrated spectrum for comparison. The lobe spectra at 1.4, 4.7, and 8.4$\,$GHz were measured from VLA archival images convolved to the resolution at 1.4$\,$GHz and are reported in Table~\ref{tab:sp}. We assumed errors of 20 per cent for the LOFAR data and 5 per cent for the VLA archival data. The integrated spectral data were taken from the NASA/IPAC Extragalactic Database (NED), with the inclusion of the new LOFAR data point, see Table~\ref{tab:ned}.
\begin{table}
\caption{\label{tab:ned}Integrated Flux Density Measurements.}
\begin{tabular}{lccl}
Frequency & Flux Density & Error & Reference \\
& [Jy] & [Jy] & \\ \hline
54$\,$MHz & 14.9 & 3.0 & This work \\
74$\,$MHz & 10.6 & 1.1 & VLSS, \citet{cohen07} \\
151$\,$MHz & 5.9 & 0.17 & 6C, \citet{6c} \\
178$\,$MHz & 4.5 & 0.56 & 3C, \citet{gower67} \\
365$\,$MHz & 2.9 & 0.056 & Texas, \citet{texas} \\
408$\,$MHz & 2.6 & 0.056 & Bologna, \citet{bologna} \\
750$\,$MHz & 1.5 & 0.080 & \citet{pt66} \\
1.4$\,$GHz & 0.77 & 0.023 & NVSS, \citet{nvss} \\
4.85$\,$GHz & 0.19 & 0.029 & \citet{becker91} \\
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig7}
\caption{\label{fig:bb} The total integrated spectrum derived from archival (black circles) and LOFAR data (white circle with black outline). The integrated spectra of the lobes are also shown for the measurements described in \S~\ref{sec:si}. The lines between data points do not represent fits to the data and are only drawn to guide the eye.}
\end{figure}
Figure~\ref{fig:si} shows the point-to-point spectral index values measured from each frequency to all other frequencies in this study. There are several interesting results.
\begin{enumerate}
\item The spectral index values amongst frequencies $\geq1.4\,$GHz show a steepening high-frequency spectrum. This can be seen most clearly in the second panel from the top of Figure~\ref{fig:si}, where the spectral index from 4.7$\,$GHz to 8.4$\,$GHz is always steeper than the spectral index from 4.7$\,$GHz to 1.4$\,$GHz for all components.
\item The point-to-point spectral index from 55$\,$MHz to the higher frequencies in this study steepens, i.e. becomes more negative, as the other point increases in frequency.
This indicates a break frequency between 55$\,$MHz and 1.4$\,$GHz, caused by either a steepening at high frequencies, a turnover at low frequencies, or a combination of both. However, a low-frequency turnover is not observed in the integrated spectrum, so a steepening of the spectra at high frequencies is more likely; this steepening is seen in Figure~\ref{fig:bb}.
\item The northern lobe always has a spectral index as steep as or steeper than that of the southern lobe, regardless of the frequencies used to measure the spectral index. This suggests a physical difference between the two lobes.
\end{enumerate}
These results for the entire spectrum are consistent with a flatter, normal FR$\,$II spectral index coupled with synchrotron losses that steepen the spectra at high frequencies and cause a break frequency at intermediate frequencies (Harwood et al., 2016). The spectral index between 55$\,$MHz and 1.4$\,$GHz is $\alpha=-0.95$ for both lobes. We fit power laws to the lobe spectra for frequencies $>1\,$GHz and found spectral indices of -1.75$\pm0.01$ (northern lobe) and -1.31$\pm0.03$ (southern lobe). Figure~\ref{fig:bb} shows six measurements of the total integrated spectrum at frequencies less than 500$\,$MHz. The spectral index measured from fitting a power law to these points is $\alpha=-0.83$, which we would expect the lobes to mimic if we had more spatially resolved low-frequency measurements.
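The power-law indices quoted above amount to straight-line fits in log--log space. A minimal sketch with synthetic data (not the measured flux densities) follows:

```python
import numpy as np

def fit_spectral_index(nu, flux):
    """Least-squares power-law fit S = A * nu**alpha in log-log space;
    returns (alpha, A)."""
    alpha, log_a = np.polyfit(np.log10(nu), np.log10(flux), 1)
    return alpha, 10.0 ** log_a

# Recover a known index from a noiseless synthetic high-frequency spectrum
nu = np.array([1.4e9, 4.7e9, 8.4e9])
flux = 2.0 * (nu / 1e9) ** -1.31
alpha, amp = fit_spectral_index(nu, flux)
```

With real measurements, the quoted uncertainties on the indices come from propagating the per-point flux errors through the same fit.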
\begin{figure}
\includegraphics[width=0.5\textwidth,clip,trim=3cm 2cm 3cm 2cm]{fig8}
\caption{\label{fig:si} The point-to-point spectral index values measured from each frequency to all other frequencies in this study. The symbols in all panels of the plot are as follows: 55$\,$MHz -- yellow triangles; 1425$\,$MHz -- green diamonds; 4710$\,$MHz -- blue squares; 8440$\,$MHz -- purple circles. }
\end{figure}
\begin{table*}
\caption{\label{tab:sp} Source parameters. Uncertainties in the LOFAR measurement are assumed to be 20 per cent. The optical position was converted to J2000 from the B1950 coordinates in \citet{mccarthy91}: B1950 07:31:49.37 +43:50:59.}
\begin{tabular}{lccccc}
& \multicolumn{2}{c}{\textsc{northern lobe}} & & \multicolumn{2}{c}{\textsc{southern lobe}} \\ \cline{2-3} \cline{5-6} \\[-6pt]
& $S_{\nu}$ & Offset from & & $S_{\nu}$ & Offset from \\
& $[$Jy$]$ & host galaxy & & $[$Jy$]$ & host galaxy \\ \cline{2-2} \cline{3-3} \cline{5-5} \cline{6-6} \\
55$\,$MHz & 5.40$\pm$1.1 & 4.34$^{\prime\prime}\!\!$\ & & 9.53$\pm$1.9 & 4.35$^{\prime\prime}\!\!$\ \\
1.4$\,$GHz & 0.25$\pm$0.013 & 4.87$^{\prime\prime}\!\!$\ & & 0.44$\pm$0.022 & 5.03$^{\prime\prime}\!\!$\ \\
4.7$\,$GHz & 0.031$\pm$1.6$\times$10$^{-3}$ & 4.52$^{\prime\prime}\!\!$\ & & 0.095$\pm$4.8$\times$10$^{-3}$ & 4.89$^{\prime\prime}\!\!$\ \\
8.4$\,$GHz & 0.011$\pm$5.5$\times10^{-4}$ & 5.02$^{\prime\prime}\!\!$\ & & 0.042$\pm$2.1$\times10^{-3}$ & 4.83$^{\prime\prime}\!\!$\ \\
\end{tabular}
\end{table*}
\section{Discussion}
\label{sec:diss}
The main result is that both the general morphology and spectral index properties of 4C 43.15 are similar to FR$\,$II sources at low redshift. We have determined that 4C 43.15 has historically fallen on the spectral index -- redshift relation because of the steepening of its spectrum at high frequencies, and a break frequency between 55$\,$MHz and 1.4$\,$GHz. The total integrated spectrum has a spectral index of $\alpha=-0.83\pm0.02$ for frequencies below 500$\,$MHz, which is not abnormally steep when compared to other FR$\,$II sources. For example, the median spectral index for the 3CRR sample is $\alpha=-0.8$ \citep{3CRR}. The lowest rest frequency probed is 180$\,$MHz, which is still above where low-frequency turnovers are seen in the spectra of local FR$\,$II sources \citep[e.g.,][]{mckean16,carilli91}. Thus we expect the break frequency to be due to synchrotron losses at high frequencies rather than a low frequency turnover.
We find no evidence that environmental effects cause a steeper overall spectrum. In fact, the northern lobe, which has the steeper spectral index, is likely undergoing adiabatic expansion into a region of lower density. This is contrary to the scenario discussed by \citet{ak98} where higher ambient densities and temperatures will cause a steeper spectral index. The interaction of 4C 43.15 with its environment will be discussed in detail later in this section.
The observational bias resulting in the initial classification of 4C 43.15 as having an ultra steep spectrum could be a manifestation of different spectral energy losses at high frequencies when compared to local radio galaxies. It is possible that inverse Compton losses, which scale as (1+$z$)$^4$, combined with spectral ageing, have lowered the break frequency relative to losses from spectral ageing alone. For any two fixed observing frequencies that straddle the break frequency, a lower break frequency will cause a reduction in the intensity measured at the higher frequency, resulting in a steeper measured spectral index. To model the lobe spectra including the contribution from losses due to the CMB, we require spatially resolved measurements at another low frequency (less than $\sim500\,$MHz) to unambiguously determine the low-frequency spectral indices of the lobes of 4C 43.15. We plan to use HBA observations of 4C 43.15 to provide measurements at 150$\,$MHz in future studies.
In the following subsections we first calculate the apparent ages of the radio lobes and then look at evidence for environmental interaction.
\subsection{Ages of the radio lobes}
\label{sec:int}
The spectral age can be related to the break frequency $\nu_{br}$ and the magnetic field strength $B$ by:
\begin{equation}
\tau_{\textrm{rad}} = 50.3 \frac{B^{1/2}}{B^2 + B_{\textrm{iC}}^2} [\nu_{br}(1+z)]^{-1/2} \textrm{ Myr}
\end{equation}
\citep[e.g.,][and references therein]{harwood13}. The equivalent magnetic field strength of the cosmic microwave background, against which the electrons suffer inverse-Compton losses, is $B_{\textrm{iC}}=0.318(1+z)^2$. The units of $B$ and $B_{\textrm{iC}}$ are nT and $\nu_{br}$ is in GHz. Using the standard minimum energy assumptions, \citet{carilli97} derived minimum pressures for the hotspots, which correspond to a magnetic field of 32$\,$nT for 4C 43.15, consistent with the values for Cygnus A \citep{carilli91}. We therefore assume an average value of $B=1\,$nT for the lobes of 4C 43.15, which is likewise consistent with Cygnus A. To calculate $\tau_{\textrm{rad}}$, the break frequency must also be known, and we estimate it by fitting two power laws to the integrated flux density measurements: one power law fitted to data at frequencies below 500$\,$MHz, and one power law fitted to data at frequencies above 1$\,$GHz. The frequency at which these two power laws cross is the break frequency.
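The two steps, finding where the fitted power laws cross and then evaluating the age equation above, can be sketched as follows. The function names are ours, and the redshift and field strength used in any evaluation are the assumed values from the text.

```python
def break_frequency_ghz(a_low, alpha_low, a_high, alpha_high):
    """Frequency (GHz) where S = a_low * nu**alpha_low crosses
    S = a_high * nu**alpha_high (nu in GHz)."""
    return (a_high / a_low) ** (1.0 / (alpha_low - alpha_high))

def spectral_age_myr(b_nt, nu_br_ghz, z):
    """Radiative age in Myr from the equation above, for a lobe field
    b_nt (nT) and break frequency nu_br_ghz (GHz), including
    inverse-Compton losses against the CMB at redshift z."""
    b_ic = 0.318 * (1.0 + z) ** 2
    return (50.3 * b_nt ** 0.5 / (b_nt ** 2 + b_ic ** 2)
            / (nu_br_ghz * (1.0 + z)) ** 0.5)
```

Since $\tau_{\textrm{rad}} \propto \nu_{br}^{-1/2}$ at fixed $B$ and $z$, the lobe with the lower break frequency is the apparently older one, matching the ordering of the ages quoted below.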
Using the spectral indices calculated in the previous section, we estimate the break frequencies of the lobes by finding where the low and high frequency fitted power laws cross. The estimated break frequencies for the northern and southern lobes are 947$\pm12\,$MHz and 662$\pm29\,$MHz, giving apparent ages of 12.7$\pm0.2$ and 15.2$\pm0.7$ Myr, respectively. These ages are reasonable for FR$\,$II sources of this size \citep[e.g.,][]{harwood15}.
\subsection{Environmental interaction}
\label{sec:env}
The observed lobes clearly differ: the northern lobe is little more than half as bright as the southern lobe, and has a spectral index above 1.4$\,$GHz that is steeper by $\Delta\alpha=-0.5$. We have thus far found that 4C 43.15 is consistent with local FR$\,$II sources, and therefore we do not expect an internal difference in the physical processes driving the two lobes; this suggests an external cause. \citet{humphrey07} found the differences between the lobes in 4C 43.15 to be consistent with orientation effects by modelling Doppler boosting of the hotspots to predict the resulting asymmetry between the lobes for a range of viewing angles and velocities. However, only hot spot advance speeds of 0.4$c$ and viewing angles of $>20\,$deg approach the measured $\Delta\alpha=-0.5$. Since 4C 43.15 is similar to Cygnus A, hot spot advance speeds of $\sim0.05c$ are much more likely. In this scenario, the models in \citet{humphrey07} predict a value for $\Delta\alpha$ at least an order of magnitude smaller than $-0.5$ for all viewing angles considered. We therefore find it unlikely that orientation alone causes the differences between the lobes.
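For orientation, the textbook Doppler-boosting flux ratio for an approaching versus a receding component is sketched below. This is not the full model of \citet{humphrey07}, and exponent conventions vary between authors; $k=3$ for discrete components is one common choice.

```python
import math

def boosting_flux_ratio(beta, theta_deg, alpha=-1.0, k=3.0):
    """Ratio of approaching to receding component flux density for an
    advance speed beta (units of c), with theta_deg the angle between
    the jet axis and the line of sight, assuming S ~ nu**alpha and a
    Doppler exponent k (k = 3 for discrete components)."""
    mu = math.cos(math.radians(theta_deg))
    doppler = (1.0 + beta * mu) / (1.0 - beta * mu)
    return doppler ** (k - alpha)
```

Even this simple form shows why slow ($\sim0.05c$) advance speeds produce only mild asymmetries: the ratio grows steeply with $\beta$, so modest speeds cannot reproduce large lobe differences by orientation alone.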
Environmental factors could also cause differences between the lobes. In lower density environments, adiabatic expansion of a radio lobe would lower the surface brightness, effectively shift the break frequency to lower frequencies, and cause a slight steepening of the radio spectrum at higher frequencies. This is consistent with the morphology and spectral index properties of 4C 43.15. The northern lobe is dimmer, appears more diffuse, and has a spectral index steeper than that of the southern lobe. Having ruled out that orientation can explain these asymmetries, this implies that the northern jet is propagating through a lower density medium.
\begin{figure*}
\includegraphics[width=\textwidth,clip,trim=5.5cm 3.5cm 5.5cm 5cm]{fig9}
\caption{\label{fig:env} The spatial distribution of the radio emission compared with the $K'$-band (2.13$\,\mu$m) continuum from the host galaxy and H$\alpha$+$[$N$\,$II$]$ line emission showing cones of ionized gas \citep{motohara00}. The two panels show the radio images with the same contours as the images in Figure~\ref{fig:tgt} (unsmoothed in the left panel, smoothed in the right panel), overlaid with $K'$-band continuum in black and H$\alpha$+$[$N$\,$II$]$ in red. A separate bright source to the NW has been blanked out. }
\end{figure*}
There is supporting evidence for a lower density medium to the North of the host galaxy. Both Lyman-$\alpha$ \citep{vm03} and H$\alpha$+$[$N$\,$II$]$ \citep{motohara00} are seen to be more extended to the North. Figure~\ref{fig:env} shows the H$\alpha$+$[$N$\,$II$]$ overlaid on the radio images for comparison. Qualitatively the emission line gas is more extended and disturbed towards the North, and reaches farther into the area of the radio lobe. \citet{motohara00} concluded that the Lyman-$\alpha$ and H$\alpha$ emission are both nebular emission from gas ionized by strong UV radiation from the central active galactic nucleus. They estimate the electron density of the ionised gas to be 38$\,$cm$^{-3}$ and 68$\,$cm$^{-3}$ for the northern and southern regions, respectively. The lower density in the North is consistent with adiabatic expansion having a larger impact on the northern lobe relative to the southern lobe. Naively, the ratios between the integrated flux densities of the lobes and the densities of the environment are similar. However determining the expected relationship between the two ratios requires estimating the synchrotron losses from adiabatic expansion, which requires knowing the relevant volumes and densities, then modelling and fully evolving the spectra. Measuring the volumes requires knowing the full extent of the radio emission, which is hard to do if the lobe already has low surface brightness due to adiabatic expansion. This complex modelling is beyond the scope of this paper and will be addressed in future studies (J. Harwood, private communication).
\section{Conclusions and Outlook}
\label{sec:concl}
We have shown that I-LOFAR LBA is suitable for spatially resolved studies of bright objects. We have presented the first sub-arcsecond image made at frequencies lower than 100$\,$MHz, setting the record for highest spatial resolution at low radio frequencies. This is an exciting prospect that many other science cases will benefit from in the future.
There are two main conclusions from this study of the spatially resolved low frequency properties of high redshift radio galaxy 4C 43.15: \\
$\bullet $ Low-surface brightness radio emission at low frequencies is seen, for the first time in a high redshift radio galaxy, to be extended between the two radio lobes. The low-frequency morphology is similar to local FR$\,$II radio sources like Cygnus A.
$\bullet $ The overall spectra for the lobes are ultra steep only when measuring from 55$\,$MHz to frequencies \emph{above} 1.4$\,$GHz. This is likely due to an ultra-steep spectrum at frequencies $\geq1.4\,$GHz with a break frequency between 55$\,$MHz and 1.4$\,$GHz. The low-frequency spectra are consistent with what is found for local FR$\,$II sources. \\
This study has revealed that although 4C 43.15 would have been classified as an ultra-steep spectrum source by \citet{db00}, this is likely due to a break frequency at intermediate frequencies, and the spectral index at frequencies less than this break is not abnormally steep for nearby FR$\,$II sources.
Steepening of the spectra at high frequencies could be due to synchrotron ageing and inverse Compton losses from the increased magnetic field strength of the cosmic microwave background radiation at higher redshifts.
Unlike nearby sources, we do not observe curvature in the low frequency spectra, which could be due to the fact that we only observe down to a rest frequency of about 180$\,$MHz. Future observations at 30$\,$MHz (103$\,$MHz rest frequency) or lower would be useful.
Larger samples with more data points at low to intermediate frequencies are necessary to determine if the observed ultra steep spectra of high redshift radio galaxies also exhibit the same spectral properties as 4C 43.15.
We will use the methods developed for this paper to study another 10 resolved sources with $2<z<4$, incorporating both LBA and HBA measurements to provide excellent constraints on the low-frequency spectra.
While a sample size of 11 may not be large enough for general conclusions, it will provide important information on trends in these high redshift sources. These trends can help guide future, large scale studies.
\section*{Acknowledgements}
LKM acknowledges financial support from NWO Top LOFAR project, project no. 614.001.006.
LKM and HR acknowledge support from the ERC Advanced Investigator programme NewClusters 321271.
The authors would like to thank J. Harwood and H. Intema for many useful discussions.
RM gratefully acknowledges support from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) /ERC Advanced Grant RADIOLIFE-320745.
This paper is based (in part) on data obtained with the International LOFAR Telescope (ILT). LOFAR (van Haarlem et al. 2013) is the Low Frequency Array designed and constructed by ASTRON. It has facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
This research made use of Montage. It is funded by the National Science Foundation under Grant Number ACI-1440620, and was previously funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology.
\bibliographystyle{mnras}
\section{Definitions and the main result}
We say a smooth manifold $E$ homeomorphic to $\R^4$ satisfies the
$DFT$\ condition (for De Michelis, Freedman and Taubes) provided that,
for every compact subset $K \subset E$, there exists an open neighborhood $U$ of $K$
such that
\begin{enumerate}
\item $K\subset U$
\item the closure of $U$, $\Bar U$, is homeomorphic to $D^4$
\item $U$ can be engulfed by itself rel $K$
\end{enumerate}
Precisely, condition (3) means that there exists a smooth ambient isotopy of $E$
from the identity to $\iota$ such that
$\iota\colon U\to E$ satisfies $\Bar U\subset\iota(U)$ and the isotopy is the identity on $K$.
As discussed in section 2, all \emph{exotic} $\R^4$'s (smoothings of $\R^4$ not diffeomorphic
to the standard smoothing) known to the author do not have the $DFT$-property.
Differential geometry enters the picture via critical points of functions related to the distance function.
Such functions are not necessarily smooth so the notion of critical point needs
to be interpreted and there is a standard way to do this going back at least to Grove and Shiohama
\cite{GroveandShiohama}.
The idea involves constructing vector fields and using differential geometry to get enough
similarity to the gradient-like vector fields of Morse theory to prove results.
Here is the main observation of this note.
\begin{Theorem}\namedLabel{main result}
Let $E$ be a smooth manifold homeomorphic to $\R^4$ with a proper
Lipschitz function which has bounded critical values.
Then $E$ satisfies the $DFT$-condition.
\end{Theorem}
\section{Remarks on the $DFT$\ property}
Taubes \cite[Thm.~1.4, p.~366]{Taubes} proves that many exotic $\R^4$'s have the property
that neighborhoods of large compact sets can not be embedded smoothly in exotic $\R^4$'s
with periodic ends.
Hence these $\R^4$'s are not $DFT$\ since $\iota$ can be used to construct a periodic end
smoothing of $\R^4$ containing a neighborhood of any compact set $K$.
De Michelis and Freedman \cite{DeMichelisandFreedman} state in the first line of the last paragraph
on page 220 that the family of $\R^4$'s they construct do not satisfy the $DFT$-property
even if $\iota$ is not required to be isotopic to the identity.
Gompf \cite{Gompf} argues that none of the smoothings of $\R^4$ in his menagerie have
the $DFT$\ property.
Certainly the universal $\R^4$ of Freedman and Taylor \cite{FreedmanandTaylor} is not $DFT$.
On the other hand, if the smooth Schoenflies conjecture is true but the smooth Poincar\'e
conjecture is false, there will be exotic $DFT$\ $\R^4$'s.
\section{Patterns of application}
Rather than produce a long list of theorems, here is a meta-principle for generating theorems.
\begin{Named Theorem}[Meta-Principle]\namedLabel{Meta-Principle}
Take any theorem in differential geometry regarding the existence of complete metrics
with special properties.
If the proof shows that a distance function or some other proper Lipschitz function
has bounded critical values, then such metrics do not exist on a non-$DFT$ $\R^4$.
\end{Named Theorem}
\begin{Comment}[Remark]%
The distance function from any point $p\in E$ is proper Lipschitz if the Riemannian metric on $E$
is complete.
\end{Comment}
Here are three examples.
\begin{Example}
Reading the paper of Lott and Shen \cite{LottandShen} gave the author the idea for this note.
The first paragraph on page 281 shows that an exotic $\R^4$
with lower quadratic curvature decay, quadratic volume growth and which does not collapse at
infinity is $DFT$.
\end{Example}
\begin{Example}
An early finite-type theorem is by Abresch \cite{Abresch} which says that if the curvature
decays faster than quadratic on some complete Riemannian exotic $\R^4$, then it satisfies
the $DFT$-property.
\end{Example}
\begin{Example}
An example involving mostly Ricci curvature and diameter is
\cite[Thm.~B, p.~356]{AbreschandGromoll}.
\end{Example}
\section{The proof of Theorem 1.1}
Let $E$ be any smoothing of $\R^4$.
Say that a flat topological embedding $e\colon S^3 \subset E$ is
\emph{not a barrier to isotopy} provided there is a smooth vector field on $E$
with compact support so that if $\iota\colon E \to E$ is the isotopy at time $1$ generated
by the vector field, then
$\Bar{U} \subset \iota(U)$, where $U$ is the bounded component of $E - e(S^3)$.
Say that $E$ has \emph{no barrier to isotopy to $\infty$} if, for every compact set
$K\subset E$, there is a flat topological embedding $e_K\colon S^3 \to E$ such that $K$ lies
in the bounded component of $E - e_K(S^3)$ and such that $e_K$ is not a barrier to isotopy.
\begin{Proposition}\namedLabel{no barrier to isotopy}
Let $E$ be a smooth manifold homeomorphic to $\R^4$.
If $E$ has no barrier to isotopy to $\infty$ then $E$ satisfies the $DFT$-condition.
\end{Proposition}
\begin{proof}
Given $K$ compact, pick an $e_K\colon S^3 \to E$ with $K$ in the bounded component
of $E - e_K(S^3)$ such that $e_K$ is not a barrier to isotopy.
Let $\chi_1$ be the smooth vector field promised by the definition.
Let $U$ be the bounded component of $E - e_K(S^3)$ and notice $U$ satisfies (1) and (2).
Since $K \subset U$, there is another smooth vector field $\chi$ such that $\chi$
vanishes on $K$ and agrees with $\chi_1$ on a neighborhood of $e_K(S^3)$.
Let $I\colon E\times [0,1] \to E$ be the isotopy generated by the flow for $\chi$.
Since $\chi$ vanishes on $K$, the isotopy $I$ fixes $K$, and since $\chi$ agrees with
$\chi_1$ on $e_K(S^3)$, $I\bigl(e_K(S^3),1\bigr) \subset E - \Bar{U}$.
Hence $I$ is the isotopy required for (3).
\end{proof}
\begin{proof}[Proof of \namedRef{main result}]
Let $\rho\colon E \to [0,\infty)$ be a proper Lipschitz function.
Since the critical values are bounded by hypothesis, there is an $r_0$ such that
for any critical point $x\in E$, $\rho(x) < r_0$.
Since $\rho$ is proper, $\rho^{-1}\bigl([0,r_0]\bigr)$ is compact.
Let $e\colon S^3 \to E$ be any flat embedding with $\rho^{-1}\bigl([0,r_0]\bigr)$ in
the bounded component of $E - e(S^3)$.
It will be shown that $e$ is not a barrier to isotopy, from which it follows that
$E$ has no barrier to isotopy to $\infty$, from which it follows that
$E$ satisfies the $DFT$-property using \namedRef{no barrier to isotopy}.
Since $S^3$ is compact, $\rho\bigl(e(S^3)\bigr) \subset [r_1, r_2]$ with
$r_0<r_1$.
The idea is contained in the proofs of \cite[Lemma 3.1, p. 108]{Greene}
or \cite[Lemma 1.4, p.~2]{Cheeger}.
They start by constructing a vector field locally on $\rho^{-1}\bigl([r_1,r_2]\bigr)$
and patching it together using a smooth partition of unity.
They observe that the conditions they need are open conditions so the field can be
taken to be smooth.
Then the proofs show that the resulting flow (or the flow for the negative of the constructed
field) moves $\rho^{-1}(r_1)$ out past $\rho^{-1}(r_2)$ and so $e(S^3)$ ends up in
$\rho^{-1}\bigl((r_2,\infty)\bigr) = E - \rho^{-1}\bigl([0, r_2]\bigr) \subset E - \Bar{U}$ since
$\rho\bigl(\Bar{U}\bigr) \subset [0,r_2]$.
\end{proof}
\section{Concluding remarks}
Differential geometry can show that two smooth $4$-manifolds are diffeomorphic,
avoiding the ``greater than or equal to $5$'' hypothesis of differential topology.
As examples, the Cartan-Hadamard Theorem \cite[Thm.~4.1, p.~221]{Sakai}
and the Cheeger-Gromoll Soul Theorem \cite[Thm.~3.4, p.~215]{Sakai} both prove that
a $4$-manifold with very restrictive curvature conditions is
\emph{diffeomorphic} to the standard $\R^4$.
The results presented here start with much weaker hypotheses than the
Cartan-Hadamard or the Cheeger-Gromoll Soul Theorems, but the conclusions are also
weaker.
One question would be whether some of these theorems could be strengthened to
show that the manifold was diffeomorphic to the standard $\R^4$.
A second question would be to use the $DFT$\ property to produce interesting metrics,
perhaps metrics strong enough to prove that a $DFT$\ $\R^4$ is standard.
\begin{bibdiv}
\begin{biblist}
\bib{Abresch}{article}{
author={Abresch, U.},
title={Lower curvature bounds, Toponogov's theorem, and bounded
topology. II},
journal={Ann. Sci. \'Ecole Norm. Sup. (4)},
volume={20},
date={1987},
number={3},
pages={475\ndash 502},
issn={0012-9593},
review={MR925724 (89d:53080)},
}
\bib{AbreschandGromoll}{article}{
author={Abresch, Uwe},
author={Gromoll, Detlef},
title={On complete manifolds with nonnegative Ricci curvature},
journal={J. Amer. Math. Soc.},
volume={3},
date={1990},
number={2},
pages={355\ndash 374},
issn={0894-0347},
review={MR1030656 (91a:53071)},
}
\bib{Cheeger}{article}{
author={Cheeger, Jeff},
title={Critical points of distance functions and applications to
geometry},
booktitle={Geometric topology: recent developments (Montecatini Terme,
1990)},
series={Lecture Notes in Math.},
volume={1504},
pages={1\ndash 38},
publisher={Springer},
place={Berlin},
date={1991},
review={MR1168042 (94a:53075)},
}
\bib{DeMichelisandFreedman}{article}{
author={De Michelis, Stefano},
author={Freedman, Michael H.},
title={Uncountably many exotic ${\bf R}\sp 4$'s in standard $4$-space},
journal={J. Differential Geom.},
volume={35},
date={1992},
number={1},
pages={219\ndash 254},
issn={0022-040X},
review={MR1152230 (93d:57036)},
}
\bib{FreedmanandTaylor}{article}{
author={Freedman, Michael H.},
author={Taylor, Laurence R.},
title={A universal smoothing of four-space},
journal={J. Differential Geom.},
volume={24},
date={1986},
number={1},
pages={69\ndash 78},
issn={0022-040X},
review={MR857376 (88a:57044)},
}
\bib{Gompf}{article}{
author={Gompf, Robert E.},
title={An exotic menagerie},
journal={J. Differential Geom.},
volume={37},
date={1993},
number={1},
pages={199\ndash 223},
issn={0022-040X},
review={MR1198606 (93k:57041)},
}
\bib{Greene}{article}{
author={Greene, Robert E.},
title={A genealogy of noncompact manifolds of nonnegative curvature:
history and logic},
booktitle={Comparison geometry (Berkeley, CA, 1993--94)},
series={Math. Sci. Res. Inst. Publ.},
volume={30},
pages={99\ndash 134},
publisher={Cambridge Univ. Press},
place={Cambridge},
date={1997},
review={MR1452869 (98g:53069)},
}
\bib{GroveandShiohama}{article}{
author={Grove, Karsten},
author={Shiohama, Katsuhiro},
title={A generalized sphere theorem},
journal={Ann. of Math. (2)},
volume={106},
date={1977},
number={2},
pages={201\ndash 211},
review={MR0500705 (58 \#18268)},
}
\bib{LottandShen}{article}{
author={Lott, John},
author={Shen, Zhongmin},
title={Manifolds with quadratic curvature decay and slow volume growth},
language={English, with English and French summaries},
journal={Ann. Sci. \'Ecole Norm. Sup. (4)},
volume={33},
date={2000},
number={2},
pages={275\ndash 290},
issn={0012-9593},
review={MR1755117 (2002e:53049)},
}
\bib{Sakai}{book}{
author={Sakai, Takashi},
title={Riemannian geometry},
series={Translations of Mathematical Monographs},
volume={149},
note={Translated from the 1992 Japanese original by the author},
publisher={American Mathematical Society},
place={Providence, RI},
date={1996},
pages={xiv+358},
isbn={0-8218-0284-4},
review={MR1390760 (97f:53001)},
}
\bib{Taubes}{article}{
author={Taubes, Clifford Henry},
title={Gauge theory on asymptotically periodic $4$-manifolds},
journal={J. Differential Geom.},
volume={25},
date={1987},
number={3},
pages={363\ndash 430},
issn={0022-040X},
review={MR882829 (88g:58176)},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
In this paper, we consider Einstein hypersurfaces
of warped products $I\times_\omega\Q_\epsilon^n,$ where $I\subset\mathbb{R} $ is an open interval, and
$\mathbb{Q}_\epsilon^n$ stands for the simply connected space form of dimension
$n\ge 2$ and constant sectional curvature $\epsilon\in\{-1,0,1\}.$
Recall that a Riemannian manifold $(M,g)$ is called \emph{Einstein} if
$${\rm Ric}_M=\Lambda g,$$ where ${\rm Ric}_M$ is the Ricci tensor of $(M,g)$ and
$\Lambda$ is a constant. More specifically, we call such an $M$ a
$\Lambda$-Einstein manifold.
Riemannian manifolds of constant sectional curvature (CSC, for short)
are the simplest examples of Einstein manifolds, which we call \emph{trivial}.
It was shown by P. Ryan \cite{ryan} that,
for $\epsilon\le 0,$ any Einstein hypersurface
of $\mathbb Q_\epsilon^{n+1}$ is trivial, and also that there
exist nontrivial Einstein hypersurfaces in $\mathbb{S}^{n+1}.$
Inspired by this result, the main question we address here is the following:
\[
\text{\emph{Under which conditions is an Einstein hypersurface of $I\times_\omega\Q_\epsilon^n$ necessarily trivial}?}
\]
For $\omega$ constant and $\epsilon\ne 0,$ it was proved in \cite{lps} that any Einstein hypersurface of
$\mathbb{R} \times_\omega\mathbb{Q}_\epsilon^n$ is trivial.
On the other hand, this is not true for certain nonconstant warping functions $\omega.$
Indeed, writing $\mathbb{S}^2(1/\sqrt 2)$ for the $2$-sphere of $\mathbb{R} ^3$ of radius $1/\sqrt 2,$
it is known that the product $\mathbb{S}^2(1/\sqrt 2)\times\mathbb{S}^2(1/\sqrt 2)$
--- endowed with its canonical Riemannian metric ---
is an Einstein manifold of nonconstant sectional curvature which
is naturally embedded in $\mathbb{S}^{5}-\{p,-p\}=(0,\pi)\times_{\sin t}\mathbb{S}^4,$
where $p$ is a suitable point of the unit sphere $\mathbb{S}^{5}\subset\mathbb{R} ^6.$
It should be noticed that, in answering the above question, one may
conclude that certain manifolds cannot be isometrically immersed
in $I\times_\omega\Q_\epsilon^n.$
For instance, from the aforementioned result in \cite{lps},
there is no isometric immersion
of the product
$\mathbb{S}^2(1/\sqrt 2)\times\mathbb{S}^2(1/\sqrt 2)$ into $\mathbb{R} \times \mathbb{Q} _\epsilon^4,$
$\epsilon\ne 0.$
A major property of Einstein hypersurfaces of $I\times_\omega\Q_\epsilon^n$
we establish here is that they have the gradient $T$
of their height functions (when nonzero) as principal directions.
Hypersurfaces satisfying this condition, which we call the $T$-\emph{property}, have been considered in
several works on hypersurfaces of constant (mean and sectional) curvature
in product spaces $\mathbb{R} \times M$, $M$ a Riemannian manifold
(see, e.g., \cite{dillenetal, delima-roitman, lps,
manfio-tojeiro, manfio-tojeiro-veken, tojeiro}).
We add that, in product spaces $I\times_\omega\Q_\epsilon^n,$ the class of hypersurfaces
with the $T$-property is abundant and includes all rotational ones.
Considering our purposes here, a primary concern
should be to prove existence of trivial Einstein hypersurfaces in
$I\times_\omega\Q_\epsilon^n.$ This is accomplished in our first result, as stated below.
It should be mentioned that we conceive rotational hypersurfaces
as in \cite{dillenetal}, dividing them into three types:
spherical, parabolic, and hyperbolic (see Section \ref{sec-csc} for details).
\begin{theorem} \label{th-existenceCSC01}
Given an open interval $I\subset\mathbb{R} ,$ let $\omega:I\rightarrow \omega(I)$ be a diffeomorphism
such that $|\omega'|>1$ on $I.$ Then, the following assertions
hold:
\begin{itemize}[parsep=1ex]
\item[\rm i)] For all $c\in\mathbb{R} ,$
there exists a rotational spherical-type hypersurface
of constant sectional curvature $c$ in $I\times_\omega\mathbb{Q}_{\epsilon}^n ,$ where $\epsilon\in\{0,-1\}$.
\item[\rm ii)] For all $c>0,$ there exists a rotational spherical-type hypersurface
of constant sectional curvature $c$ in $I\times_\omega\mathbb{S}^n.$
\item[\rm iii)] For all $c<0,$ there exist in $I\times_\omega\mathbb{H} ^n$
a rotational parabolic-type hypersurface and a rotational
hyperbolic-type hypersurface which are both of constant sectional curvature $c.$
\item[\rm iv)] There exists a flat rotational parabolic-type hypersurface in
$I\times_\omega\mathbb{H} ^n.$
\end{itemize}
\end{theorem}
We remark that Theorem \ref{th-existenceCSC01}
is of local nature. So,
there is no loss of generality in assuming the (nonconstant)
warping function $\omega$ to be
a diffeomorphism over its image.
By the same token,
we can assume that, on $I,$
$|\omega'|$ is bounded below by a positive constant.
Hence, up to scaling, one has $|\omega'|>1.$
Considering these facts, Theorem \ref{th-existenceCSC01} shows that,
for virtually any nonconstant warping function $\omega,$
the class of local rotational CSC hypersurfaces of $I\times_\omega\Q_\epsilon^n$ is abundant.
To each warped product $I\times_\omega\Q_\epsilon^n,$ we associate two real functions $\alpha$ and $\beta$
defined on $I,$ which arise from the expression of the curvature tensor of $I\times_\omega\Q_\epsilon^n$ in terms of the curvature
tensor of $\mathbb{Q}_\epsilon^n$ and the unit tangent $\partial_t$ to the horizontal factor $I\subset\mathbb{R} .$
Then, assuming $n>3,$ we show that
a CSC hypersurface $\Sigma$ of $I\times_\omega\Q_\epsilon^n$ is essentially rotational.
More precisely, we have the following result
($\xi$ always denotes the height function of the hypersurface).
\begin{theorem} \label{th-CSC}
Let $\Sigma\subset I\times_\omega \mathbb{Q} _\epsilon^n$ $(n>3)$ be a hypersurface with constant sectional
curvature on which $(\beta\circ\xi)T$ never vanishes.
Then, $\Sigma$ is rotational.
\end{theorem}
Next, we turn our attention to Einstein hypersurfaces of $I\times_\omega\Q_\epsilon^n$
focusing on our main question.
Firstly, as stated in the next two theorems, we characterize them
according to whether $(\beta\circ\xi)T$ vanishes everywhere or nowhere on
the hypersurface.
\begin{theorem} \label{th-betazero}
For $n>3,$ let $\Sigma$ be a connected $\Lambda$-Einstein hypersurface of $I\times_\omega\Q_\epsilon^n$ such
that $(\beta\circ\xi)T$ vanishes on $\Sigma.$
Then, $\alpha\circ\xi$ is constant. Furthermore, defining
\[
\sigma:=\Lambda+(n-1)(\alpha\circ\xi),
\]
the following assertions hold:
\begin{itemize}[parsep=1ex]
\item[\rm i)] If $\sigma>0,$ $\Sigma$ is trivial and totally umbilical (with constant umbilical function).
\item[\rm ii)] If $\sigma=0,$ $\Sigma$ is trivial and its shape operator has rank at most one.
\item[\rm iii)] If $\sigma<0,$ $\Sigma$ is nontrivial and has precisely
two distinct nonzero constant principal curvatures with opposite signs, both with constant
multiplicity $\ge 2.$ In particular, $\Sigma$ has constant mean curvature.
\end{itemize}
\end{theorem}
Connected Einstein hypersurfaces on which $(\beta\circ\xi)T$ never vanishes
have many interesting properties. For this reason, we shall call them \emph{ideal.}
The following result supports our claim.
\begin{theorem} \label{th-einstein}
For $n>3,$ let $\Sigma$ be an ideal Einstein hypersurface of $I\times_\omega\Q_\epsilon^n.$
Then, $\Sigma$ has the $T$-property. In addition,
the following assertions are equivalent:
\begin{itemize}[parsep=1ex]
\item[\rm i)] $\Sigma$ is trivial.
\item[\rm ii)] Any vertical section $\Sigma_t=\Sigma\mathrel{\text{\tpitchfork}}(\{t\}\times\mathbb{Q} _\epsilon^n)$ is
totally umbilical.
\item[\rm iii)] $\Sigma$ is rotational.
\end{itemize}
\end{theorem}
The second fundamental property of ideal Einstein hypersurfaces
of $I\times_\omega\Q_\epsilon^n$ we establish here is that they are
local horizontal graphs of functions $\phi=\phi(s)$
whose level hypersurfaces $f_s$ are parallel and isoparametric (i.e., have constant
principal curvatures) in $\mathbb{Q}_{\epsilon}^n.$ We call such a hypersurface a $(\phi,f_s)$-\emph{graph}.
\begin{theorem} \label{L1-grapheinstein}
For $n>3,$ let $\Sigma$ be an oriented ideal Einstein hypersurface of $I\times_\omega\Q_\epsilon^n$ with unit normal
$N$ and nonvanishing angle function $\theta:=\langle N,\partial_t\rangle.$
Under these conditions, the following hold:
\begin{itemize}[parsep=1ex]
\item [\rm i)] $\Sigma$ is locally a $(\phi,f_s)$-graph in $I\times_\omega\Q_\epsilon^n.$
\item [\rm ii)] The parallel family $\{f_s\}$ of any local $(\phi,f_s)$-graph of $\Sigma$
is isoparametric, and each $f_s$ has at most two distinct principal curvatures.
\item [\rm iii)] $\Sigma$ is trivial, provided that the principal curvature associated to
$T$ vanishes everywhere on $\Sigma.$
\end{itemize}
\end{theorem}
Theorem \ref{th-betazero}-(iii) opens the possibility of existence of
nontrivial Einstein hypersurfaces in $I\times_\omega\Q_\epsilon^n.$
As we have seen, they are easily obtained in
$\mathbb{S}^{n+1},$ which turns out to be
a space form that can be expressed as a warped product.
However, nontrivial examples
of Einstein hypersurfaces of general warped products $I\times_\omega\Q_\epsilon^n$
are not easy to find. Here, we consider this problem restricting ourselves to the class
of constant angle hypersurfaces of $I\times_\omega\Q_\epsilon^n,$ obtaining the following result.
\begin{theorem} \label{Einstein-cylinder}
For $n>3,$ let $\Sigma$ be a connected oriented $\Lambda$-Einstein
hypersurface (not necessarily ideal) of $I\times_\omega\Q_\epsilon^n$ with constant
angle function $\theta\in[0,1).$
Then, the following assertions hold:
\begin{itemize}[parsep=1ex]
\item [\rm i)] If $\theta\ne 0,$ $\Sigma$ is trivial.
\item [\rm ii)] If $\theta=0,$ $\omega$ is necessarily a solution of the ODE:
\[
(\omega')^2 + \dfrac{\Lambda}{n-1} \omega^2 + c =0, \,\, c\in\mathbb{R} ,
\]
on an open subinterval of \,$I,$ and $I\times_\omega\mathbb{Q}_{\epsilon}^n $ has
nonconstant sectional curvature if $c+\epsilon\ne0.$ Regarding $\Sigma,$
we distinguish the following cases, according to the sign of $c+\epsilon:$
\begin{itemize}[parsep=1ex]
\item[\rm a)] $c+\epsilon=0:$ $\Sigma$ is trivial and its shape operator has rank at most one.
\item[\rm b)] $c+\epsilon<0:$ $\Sigma$ is trivial and constitutes a cylinder over
a totally umbilical hypersurface of \,$\mathbb{Q}^n_\epsilon.$
\item[\rm c)] $c +\epsilon>0:$ $\Sigma$ is nontrivial and constitutes a
cylinder over a product
of two spheres of \,$\mathbb{S}^n$ (so that $\epsilon$ is necessarily $1$ in this case).
\end{itemize}
\end{itemize}
\end{theorem}
We point out that Theorem \ref{Einstein-cylinder}-(ii)(c)
provides examples of nontrivial Einstein hypersurfaces in
warped products $I\times_\omega\mathbb{S}^n$ which are not space forms.
Putting together Theorems \ref{th-einstein}--\ref{Einstein-cylinder} with classical
results on classification of isoparametric hypersurfaces in space forms,
we obtain the following characterization of ideal Einstein hypersurfaces
of $I\times_\omega\Q_\epsilon^n.$
\begin{theorem} \label{th-final}
Let $\Sigma$ be an ideal Einstein hypersurface of $I\times_\omega\Q_\epsilon^n,$ $n>3.$
Then, $\Sigma$ has a constant number of distinct principal curvatures ---
which is two or three --- all with constant multiplicities.
Moreover, $\Sigma$ is trivial
if and only if it has two distinct principal curvatures everywhere.
If so, $\Sigma$ is necessarily rotational.
\end{theorem}
The paper is organized as follows. In Section \ref{sec-prelimminaries},
we set some notation and formulae. In Section \ref{sec-lemmas}, we establish three key lemmas.
In Section \ref{sec-csc}, we consider rotational CSC hypersurfaces of $I\times_\omega\Q_\epsilon^n$ and prove Theorems \ref{th-existenceCSC01} and \ref{th-CSC}.
Section \ref{sec-einstein} is reserved for the proofs of Theorems \ref{th-betazero} and \ref{th-einstein}. In Section \ref{sec-graphs},
we introduce the $(\phi,f_s)$-graphs we mentioned, establishing some
of its fundamental properties. In the final Section \ref{sec-last}, we prove
Theorems \ref{L1-grapheinstein}--\ref{th-final}.
\vspace{.2cm}
\emph{Added in proof.} After a preliminary version of this paper was completed, we became acquainted with
the work of V. Borges and A. da Silva \cite{borges-silva}, which overlaps with ours on some parts.
\section{Preliminaries} \label{sec-prelimminaries}
\subsection{Ricci tensor}
Given a Riemannian manifold $(M^n,\langle\,,\,\rangle)$ with curvature
tensor $R$ and tangent bundle $TM,$
recall that its \emph{Ricci tensor} is defined as:
\[
{\rm Ric}\,(X,Y):={\rm trace}\,\{Z\mapsto R(Z,X)Y\}, \,\,\, \,\, (X, Y)\in TM\times TM.
\]
We say that a frame $\{X_1\,,\dots ,X_n\}\subset TM$ \emph{diagonalizes} the Ricci tensor
of $(M,\langle\,,\,\rangle)$ if
${\rm Ric}\,(X_i,X_j)=0\, \forall i\ne j\in\{1,\dots ,n\}.$
Notice that, when $M$ is an Einstein manifold, any orthogonal
frame $\{X_1\,,\dots ,X_n\}$ in $TM$ diagonalizes its Ricci tensor.
\subsection{Hypersurfaces}
Let $\Sigma$ be an oriented hypersurface of a Riemannian manifold $\overbar M^{n+1}$ whose
unit normal field we denote by $N.$ Let $A$ be the shape operator of $\Sigma$ with respect to $N,$ that is,
$$
AX=-\overbar\nabla_XN, \,\, X\in T\Sigma,
$$
where $\overbar\nabla$ is the Levi-Civita connection of $\overbar M^{n+1}.$
For such a $\Sigma$, and for $X,Y, Z, W\in T\Sigma$, one has the \emph{Gauss equation}:
\begin{equation}\label{eq-gauss}
\langle R(X,Y)Z,W\rangle = \langle\overbar R(X,Y)Z,W\rangle+\langle AX,W\rangle\langle AY,Z\rangle-\langle AX,Z\rangle\langle AY,W\rangle,
\end{equation}
where $R$ and $\overbar R$ denote the curvature tensors of $\Sigma$ and $\overbar M^{n+1},$ respectively.
Let $\lambda_1\,, \dots, \lambda_n$ be the principal curvatures of $\Sigma$, and
$\{X_1\,, \dots, X_n\}$ the corresponding orthonormal frame of principal directions, i.e.,
$$
AX_i=\lambda_iX_i\,, \,\,\,\, \langle X_i,X_j\rangle=\delta_{ij}\,.
$$
In this setting, for $1\le k\le n,$ the Gauss equation yields
\begin{equation}\label{eq-gauss01}
\langle R(X_k,Y)Z,X_k\rangle = \langle\overbar R(X_k,Y)Z,X_k\rangle+\lambda_k\langle AY,Z\rangle-\lambda_k^2\langle X_k,Z\rangle\langle X_k,Y\rangle.
\end{equation}
Thus, for the Ricci tensor ${\rm Ric}$ of $\Sigma$ one has
\begin{equation}\label{eq-ricci01}
{\rm Ric}(Y,Z)=\sum_{k=1}^{n}\langle\overbar R(X_k,Y)Z,X_k\rangle+H\langle AY,Z\rangle-\sum_{k=1}^{n}\lambda_k^2\langle X_k,Z\rangle\langle X_k,Y\rangle,
\end{equation}
where $H={\rm trace}\, A$ is the (non normalized) mean curvature of $\Sigma.$
In particular, if we set $Y=X_i$ and $Z=X_j$\,, we have
\begin{equation}\label{eq-ricci02}
{\rm Ric}(X_i\,,X_j)=\sum_{k=1}^{n}\langle\overbar R(X_k,X_i)X_j,X_k\rangle+H\delta_{ij}\lambda_i-\delta_{ij}\lambda_i^2\,.
\end{equation}
\subsection{Hypersurfaces in $I\times_\omega\Q_\epsilon^n$}
In the above setting, let us consider the particular case when $\overbar M^{n+1}$ is the warped product
$I\times_\omega\mathbb{Q} _\epsilon^n,$ where $I\subset\mathbb{R} $ is an open interval,
$\omega$ is a positive differentiable function defined on $I,$
and $\mathbb{Q} _\epsilon^n$ is one of the simply connected space forms of constant curvature
$\epsilon\in\{0,1,-1\}$: $\mathbb{R} ^n$ ($\epsilon =0$),
$\mathbb{S}^n$ ($\epsilon =1$), or $\mathbb{H} ^n$ ($\epsilon =-1$).
Recall that the Riemannian metric
of $I\times_\omega\Q_\epsilon^n$, to be denoted by $\langle\,,\, \rangle,$ is:
\[
dt^2+\omega^2ds_\epsilon^2,
\]
where $dt^2$ and $ds_\epsilon^2$ are the standard Riemannian metrics of $\mathbb{R} $ and
$\mathbb{Q} _\epsilon^n,$ respectively.
We call \emph{horizontal} the fields of $T(I\times_\omega\Q_\epsilon^n)$ which are tangent to $I,$ and
\emph{vertical} those which are tangent to $\mathbb{Q} _\epsilon^n.$ The gradient of
the projection $\pi_I$ on the first factor $I$ will be denoted by $\partial_t$\,.
Setting $\overbar\nabla$ and $\widetilde\nabla$\, for the Levi-Civita connections of
$I\times_\omega\Q_\epsilon^n$ and $\mathbb{Q} _\epsilon^n,$ respectively,
for any \emph{vertical} fields $X, Y\in T\mathbb{Q} _\epsilon^n,$
the following identities hold
(see \cite[Lemma 7.3]{bishop-oneill}):
\begin{equation}\label{eq-connectionwarped}
\begin{aligned}
\overbar\nabla_XY &= \widetilde\nabla_XY-({\omega'}/{\omega})\langle X,Y\rangle\partial_t\,.\\
\overbar{\nabla}_X\partial_t &=\overbar{\nabla}_{\partial_t}X=({\omega'}/{\omega}) X.\\
\overbar{\nabla}_{\partial_t}\partial_t &= 0.
\end{aligned}
\end{equation}
The curvature tensor $\overbar R$ of $I\times_\omega \,\mathbb{Q} _\epsilon^n$ is given by (cf. \cite{lawn-ortega})
\begin{eqnarray} \label{eq-barcurvaturetensor}
\langle\overbar R(X,Y)Z,W\rangle &=& (\alpha\circ\pi_I)(\langle X,Z\rangle\langle Y,W\rangle-\langle X,W\rangle\langle Y,Z\rangle) \nonumber \\
&& +(\beta\circ\pi_I)(\langle X,Z\rangle\langle Y,\partial_t\rangle\langle W,\partial_t\rangle-\langle Y,Z\rangle\langle X,\partial_t\rangle\langle W,\partial_t\rangle \\
&& -\langle X,W\rangle\langle Y,\partial_t\rangle\langle Z,\partial_t\rangle+\langle Y,W\rangle\langle X,\partial_t\rangle\langle Z,\partial_t\rangle), \nonumber
\end{eqnarray}
where $X,Y, Z, W\in T(I\times_\omega\,\mathbb{Q} _\epsilon^n),$ and $\alpha$ and $\beta$ are the following functions on $I$:
\begin{equation} \label{eq-def-alpha-beta}
\alpha:=\frac{(\omega')^2-\epsilon}{\omega^2} \quad\text{and}\quad \beta:= \frac{\omega''}{\omega}-\alpha.
\end{equation}
It is easily seen from \eqref{eq-barcurvaturetensor} that $I\times_\omega\mathbb{Q}_{\epsilon}^n $ has
constant sectional curvature if and only if $\alpha$ is constant and $\beta$ vanishes on $I.$
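For instance, for the model $(0,\pi)\times_{\sin t}\mathbb{S}^n$ of the round sphere mentioned in the introduction, the formulas \eqref{eq-def-alpha-beta} give
\[
\alpha=\frac{\cos^2 t-1}{\sin^2 t}\equiv-1
\quad\text{and}\quad
\beta=\frac{-\sin t}{\sin t}-(-1)\equiv 0,
\]
in accordance with the fact that this warped product has constant sectional curvature.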
Also, a direct computation yields:
\begin{equation}\label{eq-alphaprime}
\alpha'=\frac{2\omega'}{\omega}\beta.
\end{equation}
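Indeed, differentiating the expression for $\alpha$ in \eqref{eq-def-alpha-beta} gives
\[
\alpha'=\frac{2\omega'\omega''}{\omega^{2}}-\frac{2\omega'\bigl((\omega')^{2}-\epsilon\bigr)}{\omega^{3}}
=\frac{2\omega'}{\omega}\left(\frac{\omega''}{\omega}-\alpha\right)
=\frac{2\omega'}{\omega}\,\beta.
\]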
Given a hypersurface $\Sigma$ of $I\times_\omega\Q_\epsilon^n,$
the \emph{height} function $\xi$ and the \emph{angle} function $\theta$ of $\Sigma$
are defined as
$$
\xi:=\pi_{\scriptscriptstyle I}|_\Sigma \quad\text{and}\quad \theta(x):=\langle N(x),\partial_t\rangle, \,\, x\in\Sigma.
$$
The gradient of $\xi$ on $\Sigma$ is denoted by $T,$ so that
\begin{equation} \label{eq-Tandtheta}
T=\partial_t-\theta N.
\end{equation}
\begin{definition}
A connected Einstein hypersurface $\Sigma$ of $I\times_\omega\Q_\epsilon^n$ on which
$(\beta\circ\xi)T$ never vanishes will be called \emph{ideal}.
\end{definition}
Regarding the gradient of $\theta,$ let us notice first that,
from the equalities \eqref{eq-connectionwarped}, for all $X\in T(I\times_\omega\Q_\epsilon^n),$ one has
\begin{equation} \label{eq-connectionwarp2}
\overbar\nabla_X\partial_t=\overbar\nabla_{X-\langle X,\partial_t\rangle\partial_t}\partial_t=
\frac{\omega'}{\omega}\left(X-\langle X,\partial_t\rangle\partial_t\right),
\end{equation}
where, by abuse of notation, we are writing $\omega\circ\xi=\omega$ and
$\omega'\circ\xi=\omega'$.
Therefore, for any $X\in T\Sigma,$ we have
\[
X(\theta)=\langle\overbar\nabla_XN,\partial_t\rangle+\langle N,\overbar\nabla_X\partial_t\rangle=-\langle AX,T\rangle-\frac{\omega'}{\omega}\theta\langle T,X\rangle=
-\langle AT,X\rangle-\frac{\omega'}{\omega}\theta\langle T,X\rangle.
\]
Hence, the gradient of $\theta$ is
\begin{equation} \label{eq-gradthetawarp}
\nabla\theta=-\left(A+\frac{\omega'}{\omega}\theta\,{\rm Id}\right)T,
\end{equation}
where ${\rm Id}$ stands for the identity map of $T\Sigma.$
The following concept will play a fundamental role in this paper.
\begin{definition}
We say that a hypersurface $\Sigma$ of $I\times_\omega\Q_\epsilon^n$ has the $T$-\emph{property} if $T$ is a principal direction
of $\Sigma,$ i.e., $T$ never vanishes and there is a differentiable real function $\lambda$ on $\Sigma$ such that
$AT=\lambda T$, where $A$ is the shape operator of $\Sigma.$
\end{definition}
Finally, it follows from \eqref{eq-barcurvaturetensor} that, for any hypersurface
$\Sigma$ of $I\times_\omega\Q_\epsilon^n,$ the
Gauss equation \eqref{eq-gauss} takes the form:
\begin{eqnarray} \label{eq-gauss02}
\langle R(X,Y)Z,W\rangle &=& (\alpha\circ\xi)(\langle X,Z\rangle\langle Y,W\rangle-\langle X,W\rangle\langle Y,Z\rangle) \nonumber \\
&&+ (\beta\circ\xi)(\langle X,Z\rangle\langle Y,T \rangle\langle W,T \rangle-\langle Y,Z\rangle\langle X,T \rangle\langle W,T \rangle \nonumber \\
&&- \langle X,W\rangle\langle Y,T \rangle\langle Z,T\rangle+\langle Y,W\rangle\langle X,T\rangle\langle Z,T\rangle) \\
&&+ \langle AX,W\rangle\langle AY,Z\rangle-\langle AX,Z\rangle\langle AY,W\rangle. \nonumber
\end{eqnarray}
\section{Three Key Lemmas} \label{sec-lemmas}
Given a hypersurface $\Sigma\subset I\times_\omega\mathbb{Q} _\epsilon^n,$ any nonempty transversal intersection
$$
\Sigma_t:=\Sigma\mathrel{\text{\tpitchfork}}(\{t\}\times\mathbb{Q} _\epsilon^n)
$$
is a hypersurface of $(\{t\}\times\mathbb{Q} _\epsilon^n,ds_\epsilon^2)$
which we call the \emph{vertical section} of $\Sigma$ at height $t.$ If $\Sigma$ is oriented with
unit normal $N,$ it is easily seen that
$N-\theta\partial_t$ is tangent to
$\{t\}\times\mathbb{Q} _\epsilon^n$ and orthogonal to $\Sigma_t$\,, so that
$\Sigma_t$ is orientable as well.
Denoting also by $\langle\,,\,\rangle_{\epsilon}$ the Riemannian metric $ds_\epsilon^2$ of $\mathbb{Q} _\epsilon^n,$ one has
\[
\omega^2\langle N-\theta\partial_t,N-\theta\partial_t\rangle_{\epsilon}=\langle N-\theta\partial_t,N-\theta\partial_t\rangle=1-\theta^2=\|T\|^2.
\]
Hence, a unit normal
to $\Sigma_t$ in $(\{t\}\times\mathbb{Q} _\epsilon^n,ds_\epsilon^2)$ is given by
\begin{equation} \label{eq-Nt}
N_t=-\frac{\omega(t)(N-\theta\partial_t)}{\|T\|}\,\cdot
\end{equation}
We shall denote by $A_t$ the
shape operator of $\Sigma_t$ with respect to $N_t\,,$
that is,
$$
A_tX=-\widetilde\nabla_XN_t\,, \,\,\, X\in T\Sigma_t\subset T\Sigma,
$$
where $\widetilde\nabla$ stands for the Levi-Civita connection of
$(\mathbb{Q} _\epsilon^n,ds_\epsilon^2).$
Our first lemma establishes the relation between the shape operator
$A$ of a hypersurface $\Sigma\subset I\times_\omega\Q_\epsilon^n$ and the shape operator $A_t$ of a vertical section
$\Sigma_t\subset(\{t\}\times\mathbb{Q} _\epsilon^n,ds_\epsilon^2).$
\begin{lemma} \label{lem-verticalsection}
For $n\ge 2,$ let $\Sigma$ be a hypersurface of $I\times_\omega\mathbb{Q} _\epsilon^n,$ and let
$\Sigma_t$ be one of its vertical sections (considered as a hypersurface of $(\{t\}\times\mathbb{Q} _\epsilon^n,ds_\epsilon^2)$).
Then, the following identity holds:
\begin{equation} \label{eq-At001}
\langle A_t X,Y\rangle=-\frac{\omega}{\|T\|}\left(\langle AX,Y\rangle+\frac{\theta\omega'}{\omega}\langle X,Y\rangle\right)
\,\,\,\, \forall X, Y\in T\Sigma_t\subset T\Sigma.
\end{equation}
Consequently, if \,$T$ is a principal direction of $\Sigma$ along $\Sigma_t$
with corresponding principal curvature $\lambda_1,$ the principal curvatures $\lambda_2,\dots, \lambda_n$
of $\Sigma$ are given by
\begin{equation} \label{eq-principalcurvatureslemma}
\lambda_i=-\frac{\|T\|\lambda_i^t+\omega'\theta}{\omega}\,, \,\,\,\,\, i=2,\dots, n,
\end{equation}
where $\lambda_i^t$ denotes the $i$-th principal curvature of $\Sigma_t.$
\end{lemma}
\begin{proof}
Given $X\in T\Sigma_t,$ it follows from the first identity \eqref{eq-connectionwarped} that
$\widetilde\nabla_XN_t=\overbar\nabla_XN_t,$ for $\langle X,N_t\rangle=0.$
However,
from \eqref{eq-Nt}, we have
\[
-\overbar\nabla_XN_t=X({\omega}/{\|T\|})(N-\theta\partial_t)+
({\omega}/{\|T\|})(\overbar\nabla_XN-X(\theta)\partial_t-\theta\overbar\nabla_X\partial_t).
\]
Thus, since $\overbar\nabla_X\partial_t=(\omega'/\omega)X$ and $AX=-\overbar\nabla_XN,$ for all $Y\in T\Sigma_t$\,, one has
\[
\langle A_tX,Y\rangle=-\langle \widetilde\nabla_XN_t,Y\rangle=
-\langle\overbar\nabla_XN_t,Y\rangle=-\frac{\omega}{\|T\|}\langle AX+\theta(\omega'/\omega)X,Y\rangle,
\]
as we wished to prove.
Now, assuming that $T$ is a principal direction of $\Sigma$
along $\Sigma_t,$ we have that $T\Sigma_t$ is invariant under $A.$
This, together with \eqref{eq-At001}, then yields
\[
A_t=-\frac{\omega}{\|T\|}\left(A+\theta({\omega'}/{\omega}){\rm Id}\right){|_{T\Sigma_t}}\,,
\]
where ${\rm Id}$ denotes the identity map of $T\Sigma_t.$
From this last equality,
one easily concludes that any principal direction $X_i$ of $\Sigma_t$ with
corresponding principal curvature $\lambda_i^t$ is a principal direction of
$\Sigma$ with corresponding principal curvature $\lambda_i$ as in
\eqref{eq-principalcurvatureslemma}.
\end{proof}
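A special case of \eqref{eq-principalcurvatureslemma} is worth recording: if the vertical section $\Sigma_t$ is totally umbilical, all of its principal curvatures equal some $\lambda^t,$ and then

```latex
% Totally umbilical vertical section: A_t = \lambda^t Id, so
% \eqref{eq-principalcurvatureslemma} gives
\begin{equation*}
\lambda_2=\cdots=\lambda_n=-\frac{\|T\|\,\lambda^t+\omega'\theta}{\omega}\,,
\end{equation*}
% i.e., along \Sigma_t the hypersurface \Sigma has a principal
% curvature of multiplicity at least n-1.
```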
Now, we establish a necessary and sufficient condition
for an arbitrary hypersurface of $I\times_\omega\Q_\epsilon^n$ to have the $T$-property.
\begin{lemma} \label{lem-T}
For $n>2,$ let $\Sigma$ be a hypersurface of $I\times_\omega \,\mathbb{Q} _\epsilon^n$ on which
$(\beta\circ\xi)T$ never vanishes. Then, $\Sigma$ has the
$T$-property if and only if the principal directions $X_1\,, \dots ,X_n$ of $\Sigma$ diagonalize its Ricci tensor.
In particular, if $\Sigma$ is Einstein, it has the $T$-property.
\end{lemma}
\begin{proof}
Choosing $i, j, k\in\{1,\dots ,n\}$ with $i\ne j\ne k\ne i,$ we have from \eqref{eq-barcurvaturetensor} that
\[
\langle\overbar R(X_k,X_i)X_j,X_k\rangle= -\beta\langle X_i,T\rangle\langle X_j,T\rangle.
\]
Combining this equality with \eqref{eq-ricci02}, we get
$${\rm Ric}(X_i\,,X_j)=\sum_{k=1}^{n}\langle\overbar R(X_k,X_i)X_j,X_k\rangle=(2-n)(\beta\circ\xi)\langle X_i\,, T\rangle\langle X_j\,, T\rangle.$$
Since $n>2$ and $(\beta\circ\xi)$ never vanishes on $\Sigma,$ the principal directions $X_1\,,\dots, X_n$ diagonalize
${\rm Ric}$ if and only if $\langle X_i\,,T\rangle\langle X_j\,,T\rangle=0$ for all $i\ne j,$ which, as $T$ never vanishes,
occurs if and only if $T$ is parallel to one of the $X_i$'s, that is, if and only if $\Sigma$ has the $T$-property.
\end{proof}
Finally, as a direct consequence of equation \eqref{eq-ricci02}, we obtain necessary and sufficient
conditions on the principal curvatures of a hypersurface
$\Sigma\subset I\times_\omega\Q_\epsilon^n$ with the $T$-property to be an Einstein manifold.
\begin{lemma} \label{lem-T-einstein}
For $n>2,$ let $\Sigma$ be a hypersurface of \,$I\times_\omega \,\mathbb{Q} _\epsilon^n$
having the $T$-property. Let $\lambda_1\,, \dots, \lambda_n$ be the principal curvatures of $\Sigma$, and
let $\{X_1\,, \dots, X_n\}$ be the corresponding orthonormal frame of principal directions
with $X_1 = T/\|T\|$. Then, $\Sigma$ is a $\Lambda$-Einstein hypersurface if and only if
the following equalities hold:
\begin{equation}\label{T-einstein-lambda1}
\begin{aligned}
\lambda_1^2 - H\lambda_1 + (n-1)((\beta \circ \xi)\|T\|^2 + (\alpha \circ \xi)) + \Lambda &= 0,\\[1ex]
\lambda_i^2 - H \lambda_i + (\beta \circ \xi)\|T\|^2 + (n-1) (\alpha \circ \xi) + \Lambda &=0, \,\,\,\, 2\le i\le n.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
The proof follows directly from the identity \eqref{eq-ricci02},
applied to $X_i=X_j=X_1$ and to $X_i=X_j\ne X_1,$ respectively.
\end{proof}
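Subtracting the second equation in \eqref{T-einstein-lambda1} from the first yields a relation that makes the role of the ideality condition transparent:

```latex
% Difference of the two equations in \eqref{T-einstein-lambda1}:
\begin{equation*}
(\lambda_1-\lambda_i)(\lambda_1+\lambda_i-H)=-(n-2)(\beta\circ\xi)\|T\|^2,
\quad 2\le i\le n.
\end{equation*}
% If (\beta\circ\xi)T never vanishes (as on an ideal hypersurface)
% and n > 2, the right-hand side is nonzero, so \lambda_1 differs
% from every \lambda_i with i >= 2.
```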
\section{Rotational CSC Hypersurfaces of $I\times_\omega\mathbb{Q} _\epsilon^n$} \label{sec-csc}
In \cite{dillenetal}, the authors introduced a class of hypersurfaces, called
\emph{rotational}, which was considered in \cite{manfio-tojeiro} to construct and classify CSC hypersurfaces
of $\mathbb{R} \times\mathbb{Q}_{\epsilon}^n $ (interchanging factors).
Such a rotational hypersurface $\Sigma$ is characterized by the fact that
the horizontal projections on $\mathbb{Q}_{\epsilon}^n $ of its vertical sections
constitute a parallel family $\mathscr F$ of totally umbilical hypersurfaces. In this way,
$\Sigma$ is called \emph{spherical} if the members of $\mathscr F$
are geodesic spheres of $\mathbb{Q}_{\epsilon}^n ,$ \emph{parabolic} if they are horospheres
of $\mathbb{H} ^n$, and \emph{hyperbolic} if they are equidistant hypersurfaces of $\mathbb{H} ^n.$
\begin{table}[hbtp]%
\centering %
\begin{tabular}{ccc}
\toprule %
$f(x)$ & $\epsilon$ & rotational type \\\midrule[\heavyrulewidth] %
$\cos x$ & $\phantom{-}1$ & spherical \\
$\sinh x$ & $-1$& spherical \\
$x$ & $\phantom{-}0$ & spherical \\
$x$ & $-1$& parabolic \\
$\cosh x$ & $-1$& hyperbolic \\\bottomrule
\end{tabular}
\vspace{.2cm}
\caption{Definition of $f.$}
\label{table-f}
\end{table}
Given an interval $I\subset\mathbb{R} ,$
any rotational hypersurface $\Sigma\subset I\times\mathbb{Q}_{\epsilon}^n $ can be parametrized by means of a plane curve defined
by two differentiable functions on an open interval.
Here, we denote these functions by $\phi=\phi(s)$ and $\xi=\xi(s)$
(see \cite{manfio-tojeiro} for details).
In this setting, we can replace $I\times\mathbb{Q}_{\epsilon}^n $ by $I\times_\omega\Q_\epsilon^n$ and consider $\Sigma$ as a hypersurface of
the latter. Then, assuming that either
\[
(\xi')^2+(\omega(\xi)\phi')^2=1 \quad\text{or}\quad (\xi')^2+\left(\omega(\xi)\frac{\phi'}{\phi}\right)^2=1\,\,\,(\text{if $\Sigma$ is parabolic}),
\]
and performing computations analogous to the ones in \cite[Section 4]{manfio-tojeiro}, one concludes that the
metric $d\sigma^2$ on $\Sigma$ induced by the warped metric of $I\times_\omega\Q_\epsilon^n$ is a warped metric as well,
which is given by
\begin{equation} \label{eq-rotationalmetric}
d\sigma^2=ds^2+(\omega(\xi)f(\phi))^2du^2,
\end{equation}
where $f,$ as defined in Table \ref{table-f},
depends on $\epsilon$ and the rotational type of $\Sigma$,
and $du^2$ is the standard metric of
$\mathbb{S}^{n-1},$ $\mathbb{R} ^{n-1}$ or $\mathbb{H} ^{n-1},$ according to
whether $\Sigma$ is spherical, parabolic or hyperbolic,
respectively.
Let $\Sigma$ be a rotational hypersurface of $I\times_\omega\Q_\epsilon^n$ whose metric
$d\sigma^2$ is given by \eqref{eq-rotationalmetric}.
By \cite[Proposition 4.6]{manfio-tojeiro}, $\Sigma$ has constant sectional curvature
$c\in\mathbb{R} $ if its warping function $(\omega\circ\xi)f(\phi)$ coincides with the
function $\psi=\psi(s),$ defined in Table \ref{table-psi}, which depends on
$\epsilon,$ the sign of $c,$ and the rotational type of $\Sigma.$
\begin{table}[thb]%
\centering %
\begin{tabular}{cccc}
\toprule %
$\psi(s)$ & sign of $c$ & $\epsilon$ & rotational type \\\midrule[\heavyrulewidth] %
$\frac{1}{\sqrt c}\sin(\sqrt c\,s)$ & $c>0$ & $0,\pm 1$& spherical \\
$ s$ & $c=0$ & $0,\pm 1$& spherical \\
$\frac{1}{\sqrt{-c}}\sinh(\sqrt{-c}\,s)$ & $c<0$ & $0,-1$& spherical \\\midrule
$e^{\sqrt{-c}\,s}$ & $c<0$ & $-1$& parabolic \\
$\mathrm{const.}$ & $c=0$ & $-1$& parabolic \\\midrule
$\frac{1}{\sqrt{-c}}\cosh(\sqrt{-c}\,s)$ & $c<0$ & $-1$& hyperbolic \\\bottomrule
\end{tabular}
\vspace{.2cm}
\caption{Definition of $\psi.$}
\label{table-psi}
\end{table}
Therefore, given $c\in\mathbb{R} ,$ our problem of finding a rotational hypersurface of $I\times_\omega\Q_\epsilon^n$
with constant sectional curvature $c$ reduces to solving either the systems
\begin{equation} \label{eq-system}
\left\{
\begin{array}{rcc}
(\xi')^2+(\omega(\xi)\phi')^2 &=&1\\[1ex]
\omega(\xi)f(\phi) &=&\psi
\end{array}
\right.
\quad\text{or}\,\,\quad
\left\{
\begin{array}{rcl}
(\xi')^2+\left(\omega(\xi)\frac{\phi'}{\phi}\right)^2&=&1\\[1ex]
\omega(\xi)f(\phi) &=&\psi,
\end{array}
\right.
\end{equation}
where the second one is considered only in the parabolic case.
Let us assume that $\omega:I\rightarrow\omega(I)$ is a diffeomorphism, and
that $f(\phi)$ does not vanish. Under these assumptions, the second equation of these
systems gives that
\begin{equation} \label{eq-xi}
\xi=\chi\left(\frac{\psi}{f(\phi)}\right),
\end{equation}
where $\chi:\omega(I)\rightarrow I$ is the inverse of $\omega.$ Of course, equality \eqref{eq-xi}
makes sense only if the range of ${\psi}/{f(\phi)}$ is contained in $\omega(I).$ Assuming that, taking
the derivative, and substituting $\xi'$ in the first equation of the system, we have that
$\phi$ must be a solution of a nonlinear first order ODE
in the following form:
\begin{equation}\label{eq-EDOphi}
a_2(s,y)(y')^2+a_1(s,y)y'+a_0(s,y)=0,
\end{equation}
where each coefficient $a_i$ is a differentiable function.
Let us assume that, for some
pair $(s_0,y_0),$ we have
\begin{equation}\label{eq-Delta}
\Delta(s_0,y_0):=a_1^2(s_0,y_0)-4a_2(s_0,y_0)a_0(s_0,y_0)>0.
\end{equation}
In this case, \eqref{eq-EDOphi} can be written as
\begin{equation}\label{eq-EDOphi2}
y'=F(s,y),
\end{equation}
where $F$ is a differentiable function in a neighborhood
of $(s_0,y_0).$
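Explicitly, $F$ may be taken as one of the two branches obtained by solving \eqref{eq-EDOphi} as a quadratic equation in $y'$ (here we assume $a_2\ne 0$ near $(s_0,y_0),$ which holds in all the cases treated below, where in fact $a_2>0$):

```latex
% Solving \eqref{eq-EDOphi} for y' when \Delta > 0 and a_2 \ne 0:
\begin{equation*}
y'=F(s,y):=\frac{-a_1(s,y)+\sqrt{\Delta(s,y)}}{2\,a_2(s,y)}\,,
\end{equation*}
% which is differentiable in a neighborhood of (s_0, y_0), since
% \Delta > 0 there and the square root is smooth away from zero.
% The branch with -\sqrt{\Delta} could be used just as well.
```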
Therefore, if the inequality \eqref{eq-Delta} holds, we can apply
Picard's Theorem and conclude that
there exists a solution
$y=\phi(s)$ to \eqref{eq-EDOphi2}, and so to \eqref{eq-EDOphi},
satisfying the initial condition $\phi(s_0)=y_0$ (see \cite[p.~82]{ince} for details).
Applying it to \eqref{eq-xi}, we obtain the function $\xi$,
so that $(\phi,\xi)$ is a solution of \eqref{eq-system}.
The following result, which will be considered in the proof
of Theorem \ref{th-existenceCSC01}, summarizes the above discussion.
\begin{proposition} \label{prop-solutionsystem}
Let $\omega:I\rightarrow\omega(I)$ be a diffeomorphism. Given
$c\in\mathbb{R} ,$ suppose that there exists $s_0\in\mathbb{R} $ such that the function
$\xi$ given in \eqref{eq-xi} is well defined in an open interval
$I_0\owns s_0$\,. If, for some $y_0\in\mathbb{R} ,$ one has $\Delta(s_0,y_0)>0,$ then
the system \eqref{eq-system} has a solution $(\phi,\xi)$ defined in an open
interval contained in $I_0$\,. Consequently, there is a hypersurface
of constant sectional curvature $c$ in the
corresponding warped product $I\times_\omega\mathbb{Q}_{\epsilon}^n .$
\end{proposition}
\begin{proof}[Proof of Theorem \ref{th-existenceCSC01}]
We can assume,
without loss of generality, that
$$\omega(I)=(0,\delta), \,\,\, 0<\delta\le+\infty,$$
for $\omega>0.$
Then, the hypothesis on $\omega'$ yields
\begin{equation}\label{eq-xiderivative}
|\chi'(u)|<1 \,\,\, \forall u\in (0,\delta).
\end{equation}
We will consider separately the cases in the statement (according to
$\epsilon,$ $c,$ and the rotational type of the hypersurface) to show that,
in any of them, there exists a pair $(s_0,y_0)$ such that $\xi(s)$ (as given in \eqref{eq-xi})
is well defined in an open interval $I_0\owns s_0$\,, and $\Delta(s_0,y_0)>0.$
The result, then, will follow
from Proposition \ref{prop-solutionsystem}.
\vspace{.2cm}
{\bf Case 1:} \underline{$\epsilon=-1$, $c\in(-\infty,+\infty)$, spherical-type}
\vspace{.2cm}
In this setting,
$f(x)=\sinh x$ and $\psi(s)$ is as in the first three lines
of Table \ref{table-psi}, according to the sign of $c.$ Then, we have
\begin{equation} \label{eq-xi007}
\xi(s)=
\left\{
\begin{array}{lcc}
\chi\left(\frac{\sin(\sqrt cs)}{\sqrt c\sinh(\phi(s))}\right) &\text{if}& c>0.\\[2ex]
\chi\left(\frac{s}{\sinh(\phi(s))}\right) &\text{if}& c=0.\\[2ex]
\chi\left(\frac{\sinh(\sqrt {-c}s)}{\sqrt {-c}\sinh(\phi(s))}\right) &\text{if}& c<0.
\end{array}
\right.
\end{equation}
In any case, we choose $s_0$ positive and sufficiently small, and
$y_0=\phi(s_0)$ satisfying $\sinh y_0>1$, so that $\xi$ is well defined in an open
interval $I_0\owns s_0$\,, $I_0\subset (0,+\infty).$
A straightforward calculation gives that, at $(s_0,y_0),$
the coefficients $a_i$ (defined in \eqref{eq-EDOphi})
satisfy $a_2(s_0,y_0)>0$ and
\[
a_0(s_0,y_0)=
\left\{
\begin{array}{lcc}
(\chi'(u_0))^2\frac{\cos^2(\sqrt cs_0)}{\sinh^2(y_0)}-1 &\text{if}& c>0.\\[2ex]
\frac{(\chi'(u_0))^2}{\sinh^2(y_0)}-1 &\text{if}& c=0.\\[2ex]
(\chi'(u_0))^2\frac{\cosh^2(\sqrt{-c}s_0)}{\sinh^2(y_0)}-1 &\text{if}& c<0,
\end{array}
\right.
\]
where $u_0$ is the argument of $\chi$ in \eqref{eq-xi007} for
$s=s_0$\,.
From the choice of $y_0$ and \eqref{eq-xiderivative}, we have $a_0(s_0,y_0)<0$
in all three cases, which implies that $\Delta(s_0,y_0)>0.$
\vspace{.2cm}
{\bf Case 2:} \underline{$\epsilon=0$, $c\in(-\infty,+\infty)$, spherical-type}
\vspace{.2cm}
The reasoning for this case is completely analogous to the one for Case 1.
We have just to replace, in that argument, the function $f(x)=\sinh x$ by the identity function $f(x)=x.$
\vspace{.2cm}
{\bf Case 3:} \underline{$\epsilon=1$, $c>0$, spherical-type}
\vspace{.2cm}
Now, the functions
$f$ and $\psi$ are
$f(x)=\cos x$ and
$\psi(s)=\sin(\sqrt cs)/\sqrt c,$
in which case equality \eqref{eq-xi} takes the form
\[
\xi(s)=\chi\left(\frac{\sin(\sqrt cs)}{\sqrt c\cos\phi(s)}\right)\cdot
\]
Hence, choosing $s_0$ and $y_0=\phi(s_0)$ positive and
sufficiently small, one has
\[
u_0:=\frac{\sin(\sqrt cs_0)}{\sqrt c\cos y_0}\in\omega(I),
\]
which implies that $\xi(s)$ is well defined in an
open interval $I_0\owns s_0$\,, $I_0\subset (0,+\infty).$
Also, a direct computation yields
$a_2(s_0,y_0)>0$ and
\[a_0(s_0,y_0)=(\chi'(u_0))^2\frac{\cos^2(\sqrt cs_0)}{\cos^2(y_0)}-1.\]
Thus, assuming $y_0<\sqrt cs_0$ and considering \eqref{eq-xiderivative}, we conclude that
$a_0(s_0,y_0)<0.$ In particular, $\Delta(s_0,y_0)>0.$
\vspace{.2cm}
{\bf Case 4:} \underline{$\epsilon=-1$, $c\le 0$, parabolic-type}
\vspace{.2cm}
For this case, $f(x)=x$ and $\psi(s)$ is as in the fourth and fifth
lines of Table \ref{table-psi}, according to the sign of $c.$ Hence,
\begin{equation} \label{eq-xi008}
\xi(s)=
\left\{
\begin{array}{lcc}
\chi\left(\frac{e^{\sqrt{-c}s}}{\phi(s)}\right) &\text{if}& c<0.\\[2ex]
\chi\left(\frac{a}{\phi(s)}\right) &\text{if}& c=0,
\end{array}
\right.
\end{equation}
where $a$ is a positive constant. Setting $s_0=0,$ in any of the above cases, we
can choose a sufficiently large $y_0=\phi(0)$ such that $\xi$ is well defined
in a neighborhood of $0.$ For $c<0,$ we shall also assume $y_0>-c.$
Writing $u_0$ for the argument of $\chi$ in \eqref{eq-xi008},
we have $a_2(s_0,y_0)>0$ and
\[
a_0(s_0,y_0)=
\left\{
\begin{array}{lcc}
\frac{-c(\chi'(u_0))^2}{y_0^4}-1 &\text{if}& c<0.\\[2ex]
-1 &\text{if}& c=0.
\end{array}
\right.
\]
So, considering \eqref{eq-xiderivative} (only for $c<0$), we have
that $\Delta(s_0,y_0)>0$ in both cases.
\vspace{.2cm}
{\bf Case 5:} \underline{$\epsilon=-1$, $c<0$, hyperbolic-type}
\vspace{.2cm}
The functions $f$ and $\psi$ are $f(x)=\cosh x$ and $\psi(s)=\frac{1}{\sqrt{-c}}\cosh(\sqrt{-c}\,s).$ Thus,
we have
\begin{equation} \label{eq-xifinal}
\xi(s)=\chi\left(\frac{\cosh(\sqrt{-c}s)}{\sqrt{-c}\cosh\phi(s)}\right).
\end{equation}
As in the previous case, by choosing $s_0=0$ and $y_0=\phi(s_0)$ sufficiently large, we have
that $\xi$ is well defined in a neighborhood of $0.$ Again,
setting $u_0$ for the argument of $\chi$ in \eqref{eq-xifinal},
we have $a_2(s_0,y_0)>0$ and
\[
a_0(s_0,y_0)=\frac{(\chi'(u_0)\sinh(\sqrt{-c}s_0))^2}{\cosh^4(y_0)}-1=-1<0,
\]
which implies that $\Delta(s_0,y_0)>0.$
\end{proof}
Besides having totally umbilical vertical sections,
a notable fact of rotational hypersurfaces in $I\times\mathbb{Q}_{\epsilon}^n $
is that they have the $T$-property (see \cite{dillenetal}).
Let us see that the same is true for hypersurfaces of
warped products $I\times_\omega\Q_\epsilon^n.$
To that end, we first consider the diffeomorphism $G:I\rightarrow J=G(I)$ such that
$G'=1/\omega.$ As can be easily checked, the map
\begin{equation} \label{eq-varphi}
\begin{array}{cccc}
\varphi\colon & I\times_\omega\Q_\epsilon^n & \rightarrow & J\times\mathbb{Q} _\epsilon^n\\[1ex]
& (t,p) & \mapsto & (G(t),p)
\end{array}
\end{equation}
is a conformal diffeomorphism. This, together with \cite[Lemma 3]{manfio-tojeiro-veken}, gives that a hypersurface
$\Sigma\subset I\times_\omega\Q_\epsilon^n$ has the $T$-property if and only if $\varphi(\Sigma)$
has the $T$-property as a hypersurface of $J\times\mathbb{Q} _\epsilon^n$.
Also, since $\varphi$ fixes the second factor $\mathbb{Q} _\epsilon^n$ pointwise,
a vertical section $\Sigma_t\subset\Sigma$ is totally umbilical
in $(\{t\}\times\mathbb{Q} _\epsilon^n,ds_\epsilon^2)$ if and only if
$\varphi(\Sigma_t)\subset\varphi(\Sigma)$ is totally umbilical
in $(\{G(t)\}\times\mathbb{Q} _\epsilon^n,ds_\epsilon^2).$ Consequently,
$\Sigma$ is rotational if and only if $\varphi(\Sigma)$ is rotational.
The following result, which will be applied in the
proof of Theorem \ref{th-CSC}, follows directly from the above considerations,
the identity \eqref{eq-principalcurvatureslemma}, and
\cite[Theorem 2]{dillenetal} (see also \cite[Remark 7.4]{manfio-tojeiro}).
\begin{proposition} \label{prop-TandRotational}
For $n\ge 3,$ let $\Sigma\subset I\times_\omega\Q_\epsilon^n$ be a hypersurface having the $T$-property.
Then, $\Sigma$ is rotational if and only if
any vertical section $\Sigma_t$ of $\Sigma$ is totally
umbilical in $(\{t\}\times\mathbb{Q} _\epsilon^n,ds_\epsilon^2).$
\end{proposition}
\begin{proof}[Proof of Theorem \ref{th-CSC}]
As $\Sigma$ has constant sectional curvature, it is an Einstein manifold. Thus, by Lemma \ref{lem-T},
$T$ is a principal direction of $\Sigma,$ so that we can set
\[
\{X_1\,, \dots, X_n\}\subset T\Sigma
\]
for the orthonormal frame of principal directions of $\Sigma$ with $X_1=T/\|T\|.$
Writing $c$ for the sectional curvature of $\Sigma,$
$\lambda_1$ for its principal curvature corresponding to $T,$ and
$\lambda_2,\dots,\lambda_n$ for the principal curvatures corresponding to the
vertical principal directions of $\Sigma,$
we get from Gauss equation \eqref{eq-gauss02} that, for all $2\le i,j\le n,$
$i\ne j,$ the following identities hold:
\begin{equation}\label{eq-gaussproof}
\begin{aligned}
(\alpha\circ\xi) +c &= \lambda_i\lambda_j\\
(\alpha\circ\xi)+c &=\lambda_1\lambda_i-(\beta\circ\xi)\|T\|^2.
\end{aligned}
\end{equation}
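To illustrate, the first identity in \eqref{eq-gaussproof} comes from \eqref{eq-gauss02} with $X=W=X_i$ and $Y=Z=X_j$\,, $2\le i\ne j\le n$; since $X_i$ and $X_j$ are vertical, $\langle X_i,T\rangle=\langle X_j,T\rangle=0$ and the $(\beta\circ\xi)$-terms drop out:

```latex
% Gauss equation \eqref{eq-gauss02} for vertical principal directions
% X_i, X_j (the cross term <AX_i, X_j> vanishes):
\begin{equation*}
c=\langle R(X_i,X_j)X_j,X_i\rangle
=-(\alpha\circ\xi)+\langle AX_i,X_i\rangle\langle AX_j,X_j\rangle
=-(\alpha\circ\xi)+\lambda_i\lambda_j.
\end{equation*}
% The second identity in \eqref{eq-gaussproof} is obtained in the same
% way with X_j replaced by X_1 = T/||T||, in which case the term
% -(\beta\circ\xi)||T||^2 survives.
```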
Suppose that, for some $x\in\Sigma,$ $\alpha(\xi(x))+c=0.$ Then, from the first identity in \eqref{eq-gaussproof},
$\lambda_i(x)$ is nonzero for at most one index $i\in\{2,\dots ,n\}.$ Thus, choosing $i\in\{2,\dots ,n\}$ such that $\lambda_i(x)=0$,
we have from the second identity in \eqref{eq-gaussproof} that $(\beta\circ\xi)\|T\|$ vanishes at $x,$
which is contrary to our hypothesis. Hence,
$(\alpha\circ\xi) +c$ never vanishes on $\Sigma.$
Since we are assuming $n>3,$ the first identity in \eqref{eq-gaussproof} implies that
the last $n-1$ principal curvatures of $\Sigma$ are all equal and nonzero.
This, together with
\eqref{eq-principalcurvatureslemma}, gives that any vertical section $\Sigma_t$ of $\Sigma$ is totally umbilical
in $(\{t\}\times\mathbb{Q} _\epsilon^n,ds_\epsilon^2).$
The result, then, follows from Proposition \ref{prop-TandRotational}.
\end{proof}
\section{Proofs of Theorems \ref{th-betazero} and \ref{th-einstein}} \label{sec-einstein}
\begin{proof}[Proof of Theorem \ref{th-betazero}]
The proof follows closely the one given for \cite[Theorem 3.1]{ryan}.
Considering equality \eqref{eq-alphaprime} and the fact that $T=\nabla\xi,$ it follows
from the hypothesis on $(\beta\circ\xi)T$ that
the derivative of $\alpha\circ\xi$ vanishes on $\Sigma.$ Indeed, given $x\in\Sigma,$
one has
$$
(\alpha\circ\xi)_*(x)=\alpha'(\xi(x))\xi_*(x)=\frac{2\omega'(\xi(x))}{\omega(\xi(x))}\beta(\xi(x))\xi_*(x)=0.
$$
In particular, $\alpha\circ\xi$ (to be denoted by $\alpha$) is constant, since $\Sigma$ is connected.
Let $\{X_1\,,\dots, X_n\}\subset T\Sigma$ be a local orthonormal frame of principal directions of
$\Sigma.$ It follows from $\eqref{eq-barcurvaturetensor}$ that
$\langle\overbar R(X_i\,, X_k)X_k\,, X_i\rangle=-\alpha\,\, \forall i\ne k\in\{1,\dots ,n\}.$ This equality, together
with \eqref{eq-ricci02}, gives that any principal curvature $\lambda_i$ of
$\Sigma$ is a root of the following quadratic equation:
\begin{equation}\label{eq-quadratic}
s^2-Hs+\sigma=0.
\end{equation}
In particular, there are only two possible values for
each $\lambda_i.$ Hence, we can assume that, for some $k\in \{1,\dots ,n\},$ one has
$$\lambda_1=\cdots =\lambda_k=\lambda \quad\text{and}\quad \lambda_{k+1}=\cdots =\lambda_{n}=\mu,$$
with $\lambda$ and $\mu$ satisfying the identities:
\begin{equation} \label{eq-lm}
\lambda+\mu=H \quad \text{and} \quad \lambda\mu=\sigma.
\end{equation}
Since $H={\rm trace}\, A=k\lambda+(n-k)\mu,$ it follows that
\begin{equation} \label{eq-k}
(k-1)\lambda+(n-k-1)\mu=0.
\end{equation}
Assume $\sigma>0.$ In this case, if $\lambda$ and $\mu$ are distinct at some point of $\Sigma$,
they must have the same sign. Thus,
by equality \eqref{eq-k}, we must have $k-1=n-k-1=0,$ which gives $n=2.$ However, we are assuming $n>2.$
Hence, $\Sigma$ is totally umbilical if
$\sigma>0.$ In addition, $\lambda^2=\sigma,$ i.e., $\lambda$ is constant on $\Sigma.$
But, by Gauss equation \eqref{eq-gauss02},
for any pair of orthonormal principal directions $X_i\,, X_j$\,, one has
$K(X_i,X_j)=\lambda^2-\alpha,$
which implies that $\Sigma$ has constant sectional curvature $c=\lambda^2-\alpha.$
Suppose now that $\sigma=0.$ If $\lambda\ne 0$ and $\mu=0,$ we have $H=k\lambda.$ However,
$\lambda$ is a solution of \eqref{eq-quadratic}. Hence,
$\lambda^2-k\lambda^2=0,$ that is, $k=1,$ which implies
that $\Sigma$ has at most one nonzero principal curvature. This,
together with Gauss equation, gives that $\Sigma$ has
constant sectional curvature $c=-\alpha.$
Finally, assume that $\sigma=\lambda\mu<0,$ in which case $\lambda$ and $\mu$ are distinct and nonzero.
Then, considering \eqref{eq-k}, we conclude that $1<k<n-1,$ that is,
both $\lambda$ and $\mu$ have multiplicity at least $2.$
In addition, since $H$ is differentiable on $\Sigma$, as distinct roots of
\eqref{eq-quadratic}, $\lambda$ and $\mu$ are also differentiable. This, together
with \eqref{eq-k}, gives that $k$ is differentiable, and so is constant on $\Sigma,$
that is, $\lambda$ and $\mu$ have constant multiplicities.
It follows from the above considerations and the identities \eqref{eq-lm} and \eqref{eq-k}
that $\lambda$ and $\mu$ are constant functions on $\Sigma.$
Now, since both multiplicities are at least $2,$ we can choose orthonormal fields $X_1,\, X_2$ and $Y_1$ satisfying
$AX_i=\lambda X_i$ and $AY_1=\mu Y_1$\,. Since $\lambda^2>0>\lambda\mu=\sigma,$ we get from Gauss equation that
$$K(X_1,X_2)=\lambda^2-\alpha\ne\lambda\mu-\alpha=K(X_1\,,Y_1),$$
which implies that $\Sigma$ has nonconstant sectional curvature.
\end{proof}
\begin{remark} \label{rem-betaT=0}
It is a well known fact that, for any $\delta\in\{-1,0,1\},$
$\mathbb{Q} _\delta^{n+1}$ contains open dense subsets which can
be represented as warped products $I\times_\omega\Q_\epsilon^n$ (see, e.g., \cite[Section 3.2]{manfio-tojeiro-veken}).
In all these representations, the corresponding function $\alpha$ is constant, so that Theorem \ref{th-betazero}
applies. More precisely, the cases (i) and (ii) occur for any $\delta\in\{-1,0,1\},$ whereas case (iii) occurs
only for $\delta=1.$ Namely, for the set
$\mathbb{S}^{n+1}-\{e_{n+2}\,,-e_{n+2}\}=(0,\pi)\times_{\sin t}\mathbb{S}^n$ (cf. \cite[Theorem 3.1]{ryan}).
\end{remark}
Let us recall that a Riemannian manifold $\overbar M^{n+1}$ is called \emph{conformally flat} if it is
locally conformal to Euclidean space. Riemannian manifolds
of constant sectional curvature are known to be conformally flat. Conversely,
any conformally flat Einstein manifold has constant sectional curvature.
Another classical result on this subject states that, for $n\ge 4,$ a hypersurface of a
conformally flat manifold $\overbar M^{n+1}$ is itself conformally flat if and only if
it has a principal curvature of multiplicity at least $n-1$ (cf. \cite[Chapter 16]{dajczer-tojeiro}).
These facts will be considered in the proof below.
\begin{proof}[Proof of Theorem \ref{th-einstein}]
By Lemma \ref{lem-T}, $\Sigma$ has the $T$-property. Hence,
by Proposition \ref{prop-TandRotational}, the assertions (ii) and (iii) are equivalent. In addition, by
Theorem \ref{th-CSC}, (i) implies (iii). So, it remains to prove that (ii) implies (i).
To that end, let us observe that, from the second equality in \eqref{T-einstein-lambda1},
any of the principal curvatures $\lambda_i$\,, $i\in\{2,\dots ,n\},$
of $\Sigma$ is a root of the
quadratic equation $s^2-Hs+\sigma=0,$ where
\begin{equation} \label{eq-sigma}
\sigma=\Lambda+(n-1)(\alpha\circ\xi)+\|T\|^2(\beta\circ\xi).
\end{equation}
Therefore, at each point of $\Sigma,$ there are at most two
different values for such principal curvatures, that is,
there exists $k\in\{2,\dots ,n\}$ such that
$$\lambda_2=\cdots =\lambda_k=\lambda \quad\text{and}\quad \lambda_{k+1}=\cdots =\lambda_{n}=\mu.$$
Consider a vertical
section $\Sigma_t\subset\Sigma,$ $t\in I.$
It follows from Lemma \ref{lem-verticalsection} that
$X_2\,, \dots , X_n$ are principal directions of $\Sigma_t$ whose principal curvatures are
\begin{equation} \label{eq-lambdatmut}
\lambda_t:=-\frac{\omega(t)}{\|T\|}\left(\lambda+\frac{\theta\omega'(t)}{\omega(t)}\right) \quad\text{and}\quad
\mu_t:=-\frac{\omega(t)}{\|T\|}\left(\mu+\frac{\theta\omega'(t)}{\omega(t)}\right).
\end{equation}
In particular, $\lambda_t=\mu_t$ if and only if $\lambda=\mu.$
Thus, if all vertical sections $\Sigma_t\subset\Sigma$ are totally umbilical,
then $\Sigma$ has a principal curvature of multiplicity $n-1.$
Hence, $\Sigma$ is conformally flat,
since $I\times_\omega \mathbb{Q} _\epsilon^n$ is conformally flat (cf. \cite[Examples 16.2]{dajczer-tojeiro}).
Being Einstein and conformally flat,
$\Sigma$ has constant sectional curvature, and so is trivial.
This shows that (ii) implies (i) and finishes the proof of the theorem.
\end{proof}
\section{Einstein Graphs in $I\times_\omega\Q_\epsilon^n$} \label{sec-graphs}
Consider an oriented isometric immersion
\[f:M_0^{n-1}\rightarrow\mathbb{Q}_{\epsilon}^n ,\]
of a Riemannian manifold $M_0$ into $\mathbb{Q}_{\epsilon}^n .$
Suppose that there exists a neighborhood $\mathscr{U}$
of $M_0$ in $T^\perp M_0$ without focal points of $f,$ that is,
the restriction of the normal exponential map $\exp^\perp_{M_0}:T^\perp M_0\rightarrow \mathbb{Q}_{\epsilon}^n $ to
$\mathscr{U}$ is a diffeomorphism onto its image. Denoting by
$\eta$ the unit normal field of $f,$ there is an open interval $I\owns 0$
such that, for all $p\in M_0,$ the curve
\begin{equation}\label{eq-geodesic}
\gamma_p(s)=\exp_{\scriptscriptstyle \mathbb{Q}_{\epsilon}^n }(f(p),s\eta(p)), \, s\in I,
\end{equation}
is a well defined geodesic of $\mathbb{Q}_{\epsilon}^n $ without conjugate points. Thus,
for all $s\in I,$
\[
\begin{array}{cccc}
f_s: & M_0 & \rightarrow & \mathbb{Q}_{\epsilon}^n \\
& p & \mapsto & \gamma_p(s)
\end{array}
\]
is an immersion of $M_0$ into $\mathbb{Q}_{\epsilon}^n ,$ which is said to be \emph{parallel} to $f.$
Notice that, given $p\in M_0$, the tangent space $f_{s_*}(T_p M_0)$ of $f_s$
at $p$ is the parallel transport of $f_{*}(T_p M_0)$ along
$\gamma_p$ from $0$ to $s.$ We also remark that, with the induced metric,
the unit normal $\eta_s$ of $f_s$ at $p$ is given by
\[\eta_s(p)=\gamma_p'(s).\]
\begin{definition}
Let $\phi:I\rightarrow \phi(I)\subset\mathbb{R} $ be an increasing diffeomorphism, i.e., $\phi'>0.$
With the above notation, we call the set
\begin{equation}\label{eq-paralleldescription1}
\Sigma:=\{(\phi(s),f_s(p))\in I\times_\omega\Q_\epsilon^n\,;\, p\in M_0, \, s\in I\},
\end{equation}
the \emph{graph} determined by $\phi$ and $\{f_s\,;\, s\in I\}$ or $(\phi,f_s)$-\emph{graph}, for short.
\end{definition}
Let $\Sigma\subset I\times_\omega\mathbb{Q} _\epsilon^n$ be a $(\phi,f_s)$-graph
with the induced metric from $I\times_\omega\mathbb{Q} _\epsilon^n.$
For an arbitrary point $x=(\phi(s), f_s(p))$ of $\Sigma,$ one has
\[T_x\Sigma=f_{s_*}(T_p M_0)\oplus {\rm Span}\,\{\partial_s\}, \,\,\, \partial_s=\eta_s+\phi'(s)\partial_t,\]
where $\eta_s$ denotes the unit normal field of $f_s$\,.
Writing, by abuse of notation,
$\omega\circ\phi=\omega,$ a unit normal to $\Sigma$ is
\begin{equation} \label{eq-normal}
N=\frac{-\phi'/\omega}{\sqrt{1+(\phi' / \omega)^2}}\dfrac{\eta_s}{\omega}+\frac{1}{\sqrt{1+(\phi' / \omega)^2}}\partial_t.
\end{equation}
In particular, the angle function of $\Sigma$ is
\begin{equation} \label{eq-thetaparallel}
\theta=\frac{1}{\sqrt{1+(\phi' / \omega)^2}}\,\cdot
\end{equation}
So, if we set
\begin{equation}\label{eq-rho}
\rho:=\frac{\phi' / \omega}{\sqrt{1+(\phi' / \omega)^2}},
\end{equation}
we can write $N$ as
\begin{equation}\label{eq-normal2}
N=-\rho\dfrac{\eta_s}{\omega}+\theta\partial_t\,,
\end{equation}
which yields $1=\rho^2+\theta^2,$ i.e., $\rho=\|T\|$. It also follows from \eqref{eq-normal2} that
\[
\overbar\nabla_{\partial s}N=
-\left(\dfrac{\rho}{\omega}\right)'\eta_s-
\dfrac{\rho}{\omega}\overbar\nabla_{\partial s}\eta_s+\langle \nabla\theta,\partial_s\rangle\partial_t+\theta\overbar\nabla_{\partial_s}\partial_t\,.
\]
However, by the identities \eqref{eq-connectionwarped}, setting $\zeta=\omega'/\omega$ and, again by abuse of notation, $\zeta\circ\phi=\zeta,$ we have that
$$
\begin{array}{rcl}
\overbar\nabla_{\partial s}\eta_s &=& \overbar{\nabla}_{\eta_s} \eta_s + \phi' \overbar{\nabla}_{\partial_t} \eta_s \\
&=& \widetilde\nabla_{\eta_s}\eta_s - \zeta \langle \eta_s, \eta_s \rangle \partial_t + \phi' \zeta \eta_s \\
&=& - \zeta \omega^2 \partial_t + \phi' \zeta \eta_s
\end{array}
$$
and $\overbar\nabla_{\partial_s}\partial_t=\overbar\nabla_{\eta_s}\partial_t=\zeta\eta_s$\,, where we also used that $\widetilde\nabla_{\eta_s}\eta_s=0,$ since the integral curves of $\eta_s$ are geodesics of $\mathbb{Q}_\epsilon^n.$ Therefore,
\[
A\partial_s=-\overbar\nabla_{\partial s}N=-(\rho\zeta\omega+\langle\nabla\theta,\partial_s\rangle)\partial_t+\left(\dfrac{\rho'}{\omega}-\theta\zeta\right)\eta_s\,,
\]
which implies that $A\partial_s$ is orthogonal to any vertical field $X\in\{\partial_s\}^\perp\subset T\Sigma$. Thus,
$\partial_s$ is a principal direction of $\Sigma$ with eigenvalue
\[
\lambda_1=\frac{\rho'}{\omega}-\theta\zeta.
\]
In particular, $\lambda_1$ is a function of $s$ alone.
From \eqref{eq-thetaparallel}, we have that $0<\theta <1.$
Thus, $T$ is non vanishing on $\Sigma$ and parallel to $\partial_s$\,,
which implies that $T$ is also a principal direction of
$\Sigma$ with eigenvalue $\lambda_1.$
Now, let $X_2\,, \dots ,X_n$ be orthonormal vertical principal directions of $\Sigma.$ By Lemma \ref{lem-verticalsection},
these fields are also principal directions of the vertical sections $\Sigma_s=\Sigma_{\phi(s)}$
with corresponding principal curvatures
\begin{equation} \label{eq-lambdais}
\lambda_i^s=-\frac{\omega}{\|T\|}(\lambda_i+\theta\zeta)=-\frac{\omega}{\rho}(\lambda_i+\theta\zeta), \,\, \,\, i=2,\dots ,n.
\end{equation}
Summarizing,
we have the following result.
\begin{proposition} \label{prop-graph}
Let $\Sigma$ be a $(\phi,f_s)$-graph in $I\times_\omega\Q_\epsilon^n$ with unit normal
$N$ (with respect to the induced metric) as in \eqref{eq-normal}.
Then, $\Sigma$ has the $T$-property, and
its principal curvatures are given by
\begin{equation}
\lambda_1=\dfrac{\rho'(s)}{\omega(\phi(s))}-\theta\zeta(\phi(s)), \quad
\lambda_i=-\dfrac{\rho(s)}{\omega(\phi(s))}\lambda_i^s(p)-\theta\zeta(\phi(s)), \,\, \,\, i=2,\dots ,n, \label{lambdas-sum}
\end{equation}
where $\theta$ is as in \eqref{eq-thetaparallel}, $\zeta=(\omega'\circ\phi)/(\omega\circ\phi),$
and $\lambda_i^s(p)$ is the $i$-th principal curvature of the parallel $f_s:M_0\rightarrow\mathbb{Q}_{\epsilon}^n $ at $p\in M_0.$
\end{proposition}
It is well-known that the principal curvatures $\lambda_i^s(p)$ in the statement
of Proposition \ref{prop-graph} are given by (see \cite{cecil, dominguez-vazquez}):
\begin{equation}\label{eq-lambdais007}
\lambda_i^s(p) = \dfrac{\epsilon S_{\epsilon}(s) + C_{\epsilon}(s)\lambda_i^0(p)}{C_{\epsilon}(s)-S_{\epsilon}(s)\lambda_i^0(p)},
\end{equation}
where $\lambda_i^0$ is the $i$-th principal curvature of $f=f_0,$ and
$C_\epsilon, S_\epsilon$ are as in Table \ref{table-trigfunctions}.
\begin{table}[thb]%
\centering %
\begin{tabular}{cccc}
\toprule %
{{\small\rm Function}} & $\epsilon=0$ & $\epsilon=1$ & $\epsilon=-1$ \\\midrule[\heavyrulewidth] %
$C_\epsilon (s)$ & $1$ & $\cos s$ & $\cosh s$ \\\midrule
$S_\epsilon (s)$ & $s$ & $\sin s$ & $\sinh s$ \\\bottomrule
\end{tabular}
\vspace{.2cm}
\caption{Definition of $C_\epsilon$ and $S_\epsilon$}
\label{table-trigfunctions}
\end{table}
In particular, the following elementary equalities hold:
\begin{equation}\label{relations-c-s}
\begin{aligned}
C_{\epsilon}'(s)&=-\epsilon S_{\epsilon}(s)\\
S_{\epsilon}'(s)&=C_{\epsilon}(s)\\
C_{\epsilon}^2(s)+\epsilon S^2_{\epsilon}(s) &= 1.
\end{aligned}
\end{equation}
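As a quick numerical sanity check (purely illustrative, not part of the argument), the identities \eqref{relations-c-s} can be verified by finite differences; the functions below implement Table \ref{table-trigfunctions} directly:

```python
import math

# C_eps and S_eps as defined in Table (table-trigfunctions)
def C(eps, s):
    return {0: 1.0, 1: math.cos(s), -1: math.cosh(s)}[eps]

def S(eps, s):
    return {0: s, 1: math.sin(s), -1: math.sinh(s)}[eps]

def check_identities(eps, s, h=1e-6):
    """Verify C' = -eps*S, S' = C and C^2 + eps*S^2 = 1 at the point s."""
    dC = (C(eps, s + h) - C(eps, s - h)) / (2 * h)  # central difference
    dS = (S(eps, s + h) - S(eps, s - h)) / (2 * h)
    assert abs(dC + eps * S(eps, s)) < 1e-6
    assert abs(dS - C(eps, s)) < 1e-6
    assert abs(C(eps, s) ** 2 + eps * S(eps, s) ** 2 - 1.0) < 1e-9

for eps in (-1, 0, 1):
    check_identities(eps, 0.7)
```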
From identities \eqref{lambdas-sum}, \eqref{eq-lambdais007} and \eqref{relations-c-s}, we have
\begin{eqnarray}
\lambda_1 = - \dfrac{\theta}{\phi'} \dfrac{d}{ds} \log (\theta \omega), \label{lemma-lambda1} \\
\lambda_i^s(p) = - \dfrac{d}{ds} \log \left(C_{\epsilon}(s) - S_{\epsilon}(s) \lambda_i^0(p)\right), \label{lambda-i-int} \\
\dfrac{d}{ds} \lambda_i^s(p) = \epsilon + (\lambda_i^s(p))^2. \label{lambda-i-dev}
\end{eqnarray}
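Likewise, the Riccati-type identity \eqref{lambda-i-dev} can be checked numerically against the closed form \eqref{eq-lambdais007}; the values of $s$ and $\lambda_i^0$ below are arbitrary samples, chosen only for illustration:

```python
import math

def C(eps, s): return {0: 1.0, 1: math.cos(s), -1: math.cosh(s)}[eps]
def S(eps, s): return {0: s, 1: math.sin(s), -1: math.sinh(s)}[eps]

def lam(eps, s, lam0):
    """Principal curvature of the parallel f_s, eq. (eq-lambdais007)."""
    return (eps * S(eps, s) + C(eps, s) * lam0) / (C(eps, s) - S(eps, s) * lam0)

def check_riccati(eps, s, lam0, h=1e-6):
    """Verify d/ds lam = eps + lam^2, eq. (lambda-i-dev), by central differences."""
    dlam = (lam(eps, s + h, lam0) - lam(eps, s - h, lam0)) / (2 * h)
    assert abs(dlam - (eps + lam(eps, s, lam0) ** 2)) < 1e-4

for eps in (-1, 0, 1):
    check_riccati(eps, 0.2, 0.5)
```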
Let $f:M_0\rightarrow\mathbb{Q}_{\epsilon}^n $ be a hypersurface, and let
$$\mathscr F=\{f_s:M_0\rightarrow\mathbb{Q}_{\epsilon}^n \,;\, s\in I\owns 0\}$$
be a family of parallel hypersurfaces to $f=f_0.$ We say that $f$ is \emph{isoparametric} if any hypersurface $f_s$ has constant mean curvature.
It was proved by Cartan that $f$ is isoparametric if and only if any $f_s$ has constant principal curvatures
$\lambda_i^s$ (possibly depending on $i$ and $s$).
Suppose that, for an integer $d>1,$ $\lambda_1, \dots ,\lambda_{d}$
are pairwise distinct principal curvatures of an isoparametric
hypersurface $f:M_0\rightarrow\mathbb{Q}_{\epsilon}^n .$ The following formula,
also due to Cartan, holds for any fixed $i\in\{1,\dots, d\}$ (cf. \cite{cecil, dominguez-vazquez}):
\[
\sum_{j\ne i}n_j\frac{\epsilon+\lambda_i\lambda_j}{\lambda_i-\lambda_j}=0,
\]
where $n_j$ denotes the multiplicity of $\lambda_j.$
If $n>2$ and $f$ has only two distinct principal curvatures, say $\lambda$ and $\mu,$ we have
from equality \eqref{eq-lambdais007} that the same is true for each parallel $f_s.$ Denoting their
corresponding principal curvatures by $\lambda^s$ and $\mu^s,$
Cartan's formula yields
\begin{equation}\label{eq-cartan}
\lambda^s\mu^s=-\epsilon \,\,\, \forall s\in I.
\end{equation}
\section{Proofs of Theorems \ref{L1-grapheinstein}--\ref{th-final}} \label{sec-last}
\begin{proof}[Proof of Theorem \ref{L1-grapheinstein}]
By Lemma \ref{lem-T}, $\Sigma$ has the $T$-property. So, the same is true for
$\varphi(\Sigma)\subset J\times\mathbb{Q}_\epsilon^n,$
where $\varphi$ is the map given in \eqref{eq-varphi}. Also, since $\varphi$ is a conformal diffeomorphism,
the angle function of $\varphi(\Sigma)$ is non vanishing in $J\times\mathbb{Q}_\epsilon^n.$
Hence, by \cite[Theorem 1]{tojeiro}, $\varphi(\Sigma)$ is given locally by a
$(\tilde\phi,f_s)$-graph, which implies that $\Sigma$ is locally a
$(\phi,f_s)$-graph,
where $\phi=G^{-1}\circ\tilde\phi$, and $G$ is the diffeomorphism
$G:I\rightarrow J$ such that $G'=1/\omega.$ This proves (i).
(By abuse of notation, we shall call $\Sigma$ such a local $(\phi,f_s)$-graph.)
Let $\{X_1\,,\dots ,X_n\}$ be an orthonormal basis of principal directions of
$\Sigma$ such that $X_1=T/\|T\|.$
Since $\rho=\|T\|$, equations \eqref{T-einstein-lambda1} become
\begin{equation}\label{T-einstein-lambda007}
\begin{aligned}
\lambda_1^2 - H \lambda_1 + (n-1)(\beta \rho^2 + \alpha) + \Lambda &= 0\\[1ex]
\lambda_i^2 - H \lambda_i + \beta \rho^2 + (n-1) \alpha + \Lambda &=0,
\end{aligned}
\end{equation}
where, by abuse of notation, we
write $\alpha\circ\xi=\alpha$ and $\beta\circ\xi=\beta.$
The second of the above equations gives that $\Sigma$ has at most two distinct
principal curvatures among the last $n-1$. Let us call them $\lambda$ and $\mu$.
So, one has
\begin{equation}\label{system-lambda-mu}
\begin{aligned}
\lambda + \mu &= H \\[1ex]
\lambda \mu &= \beta \rho^2 + (n-1) \alpha + \Lambda.
\end{aligned}
\end{equation}
Let us assume first that $\lambda_1$
vanishes in an open interval $I_0 \subset I$. Denoting the multiplicity of
$\lambda$ and $\mu$ by $n_{\lambda}$ and $n_{\mu}$, respectively,
it follows from the first equation in \eqref{T-einstein-lambda007}
that equations \eqref{system-lambda-mu} can be rewritten as
\begin{equation}
\begin{array}{rcl}
(1-n_{\lambda}) \lambda + (1-n_{\mu}) \mu &=& 0 \\
\lambda \mu &=& -(n-2) \beta \rho^2. \label{system-lambda-mu-rewritten}
\end{array}
\end{equation}
Since $n>2$ and $\beta \|T\|$ never vanishes,
it follows from equations \eqref{system-lambda-mu-rewritten}
that $\beta>0$, and that $\lambda$ and $\mu$ depend only on $s.$
Since $\lambda_1$ also depends only on $s,$
we conclude that $f_s$ is isoparametric.
By continuity, we can assume now that $\lambda_1$ never vanishes.
Since $\lambda_1$ and $\beta \rho^2 + \alpha$ depend only on $s,$ it follows
from the first equality in \eqref{T-einstein-lambda007} that
the same is true for $H.$ This,
together with \eqref{eq-lambdais}, gives that the mean curvature $H_s$
of $f_s$ is a function of $s$ alone, which implies that the family
$\{f_s\}$ is isoparametric. In addition, identity \eqref{eq-lambdais007} gives that
$f_s,$ just as $f=f_0,$ has at most two distinct principal curvatures, which we shall
denote by $\lambda^s$ and $\mu^s.$ This finishes the proof of (ii).
To prove (iii), let us suppose by contradiction that $\lambda\neq\mu$ and, consequently,
that $\lambda^s\neq\mu^s$. Since we are assuming $\lambda_1\equiv0$, it follows from \eqref{lambdas-sum} that
\begin{equation}
\begin{array}{rcl}
\theta \zeta &=& \dfrac{\rho'}{\omega} \\[1ex]
\lambda &=& - \dfrac{\rho \lambda^s+\rho'}{\omega} \\[1ex]
\mu &=& - \dfrac{\rho \mu^s+\rho'}{\omega}\,\cdot
\end{array} \label{lambda-mu-particular}
\end{equation}
In particular, equations \eqref{system-lambda-mu-rewritten} can be written as
\begin{equation} \label{system-lambda-mu-v2}
\begin{aligned}
(n_{\lambda}-1) \lambda^s + (n_{\mu}-1) \mu^s + (n-3) \dfrac{\rho'}{\rho} &= 0 \\[1ex]
\left( \dfrac{\rho'}{\rho} \right)^2 + (\lambda^s+\mu^s) \dfrac{\rho'}{\rho} + (n-3)\epsilon + (n-2) \omega^2 \zeta' &= 0.
\end{aligned}
\end{equation}
Now, equation \eqref{lemma-lambda1} yields $\theta\omega=\sqrt{c}$
for some positive constant $c$. Therefore, from the first equation in \eqref{lambda-mu-particular},
we have $\zeta={\rho'}/{\sqrt{c}}$. Moreover, from the expressions of $\theta$ and $\rho$
in \eqref{eq-thetaparallel} and \eqref{eq-rho},
one has ${\rho}/{\theta} = {\phi'}/{\omega}$. Hence,
\[
\omega^2\zeta'=\frac{\omega^2 \rho''}{\phi'\sqrt{c}}=\frac{\omega \rho''}{\phi'\theta}=\frac{\rho''}{\rho}\,,
\]
so that
\[
\omega^2 \zeta' = \left( \dfrac{\rho'}{\rho} \right)'+\left( \dfrac{\rho'}{\rho} \right)^2.
\]
From this and the second identity in \eqref{system-lambda-mu-v2}, we have
\begin{equation} \label{system-lambdalmu-v22}
(n-1) \left( \dfrac{\rho'}{\rho} \right)^2 + (\lambda^s+\mu^s) \dfrac{\rho'}{\rho} + (n-3)\epsilon + (n-2) \left( \dfrac{\rho'}{\rho} \right)' = 0.
\end{equation}
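The elementary logarithmic-derivative identity $\rho''/\rho=(\rho'/\rho)'+(\rho'/\rho)^2$ used in this step can be sanity-checked numerically with an arbitrary sample function (illustrative only):

```python
import math

# Arbitrary positive sample function and its first two derivatives
rho   = lambda s: 2.0 + math.sin(s)
rhop  = lambda s: math.cos(s)    # rho'
rhopp = lambda s: -math.sin(s)   # rho''

def check(s, h=1e-6):
    """Verify rho''/rho = (rho'/rho)' + (rho'/rho)^2 at the point s."""
    g = lambda u: rhop(u) / rho(u)               # rho'/rho
    gprime = (g(s + h) - g(s - h)) / (2 * h)     # central difference
    assert abs(rhopp(s) / rho(s) - (gprime + g(s) ** 2)) < 1e-8

check(0.3)
```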
Using \eqref{lambda-i-dev} and the first equality in \eqref{system-lambda-mu-v2}, we also have
\begin{equation}
\left( \dfrac{\rho'}{\rho} \right)' = -\epsilon + \left(\dfrac{1-n_{\lambda}}{n-3} \right) (\lambda^s)^2 + \left(\dfrac{1-n_{\mu}}{n-3} \right) (\mu^s)^2. \label{rho-derivative}
\end{equation}
Finally, Cartan's formula $\lambda^s \mu^s = -\epsilon$ and equalities
\eqref{system-lambda-mu-v2}--\eqref{rho-derivative} yield
$$
\dfrac{(n-1)(1-n_{\lambda})(1-n_{\mu})}{(n-3)^2} \left( (\lambda^s)^2 + (\mu^s)^2+2 \epsilon \right) = 0.
$$
Since, from \eqref{system-lambda-mu-rewritten}, we have $n_{\lambda} > 1$
and $n_{\mu}>1,$ the above equation reduces to $(\lambda^s)^2 + (\mu^s)^2+2 \epsilon = 0.$
Again from $\lambda^s\mu^s=-\epsilon,$ we have $(\lambda^s-\mu^s)^2 = 0,$
which yields $\lambda^s\equiv\mu^s,$ so that $\lambda\equiv\mu.$
Therefore, all vertical sections of $\Sigma$ are totally umbilical. So,
Theorem \ref{th-einstein} applies and gives that $\Sigma$ is trivial.
This shows (iii) and finishes the proof of the theorem.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Einstein-cylinder}]
Since $\theta<1$ is constant, we have that $T$ never
vanishes on $\Sigma$ and, from \eqref{eq-gradthetawarp},
that $T$ is a principal direction with associated principal curvature
\begin{equation}\label{eq-lambda1Thetacte}
\lambda_1 = -\theta\dfrac{\omega'}{\omega}\,\cdot
\end{equation}
Let us suppose that $\beta\circ\xi$ vanishes in a connected open set
$\Sigma'\subset\Sigma.$ Then, from Theorem \ref{th-betazero},
$\alpha\circ\xi$ is constant on $\Sigma'$, and $\Sigma'$ has constant sectional curvature,
unless $\sigma:=\Lambda+(n-1)(\alpha\circ\xi)$ is negative. In this case,
Theorem \ref{th-betazero} also gives that all
principal curvatures of $\Sigma'$ are constant. Hence, from \eqref{eq-lambda1Thetacte},
$\omega'/\omega$ is constant on $\Sigma'.$ Since $\alpha\circ\xi$ is also constant, we have
from \eqref{eq-def-alpha-beta} that $\omega$ is constant on $\Sigma',$ which implies that
$\lambda_1$ vanishes there. However, from Lemma \ref{lem-T-einstein}, we have
\[
0=\lambda_1^2-H\lambda_1+\sigma=\sigma<0,
\]
which is clearly a contradiction.
From the above considerations and the connectedness of $\Sigma,$ we can assume
that $\beta\circ\xi$ never vanishes on $\Sigma,$ which makes it an ideal Einstein hypersurface.
Assuming also that $\theta>0,$ we have from Theorem \ref{L1-grapheinstein} that $\Sigma$ is locally a $(\phi, f_s)$-graph, where
$\{f_s\}$ is isoparametric.
Since $\theta$ is constant on $\Sigma,$ so is the $\rho$-function of
any such graph,
for $\theta^2+\rho^2=1.$ Thus,
from Theorem \ref{L1-grapheinstein}-(iii), $\Sigma$ is trivial.
This proves (i).
Let us suppose now that $\theta$ vanishes on $\Sigma.$ Then, from \eqref{eq-lambda1Thetacte},
$\lambda_1 \equiv 0$. In this case, we have from \eqref{T-einstein-lambda1} that
\begin{equation} \label{einstein-theta1}
\begin{aligned}
(n-1)(\beta + \alpha) + \Lambda &= 0 \\[1ex]
\lambda_i^2 - H \lambda_i + \beta + (n-1) \alpha + \Lambda &=0,
\end{aligned}
\end{equation}
where, by abuse of notation, we write $\beta\circ\xi=\beta$ and $\alpha\circ\xi=\alpha.$
Considering the first of the above equations and
the definitions of $\alpha$ and $\beta$ given in \eqref{eq-def-alpha-beta},
we conclude that $\omega$ must satisfy the following ODE
$$
\dfrac{\omega''}{\omega} = -\dfrac{\Lambda}{n-1}\,\cdot
$$
A first integration then gives, for some constant $c,$
\begin{equation}
(\omega')^2+\dfrac{\Lambda}{n-1}\omega^2+c=0, \label{eq-edo-omega2}
\end{equation}
which, together with \eqref{eq-def-alpha-beta}, yields
\begin{equation} \label{eq-alpha-beta-omega010}
\alpha =-\dfrac{\Lambda}{n-1}-\dfrac{c+\epsilon}{\omega^2} \quad \textnormal{and} \quad \beta=\dfrac{c+\epsilon}{\omega^2}\,\cdot
\end{equation}
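As an illustrative check of the first integral \eqref{eq-edo-omega2}: when $\Lambda/(n-1)=1$, the general solution of $\omega''=-\omega$ is $\omega(t)=A\cos t+B\sin t$, and $(\omega')^2+\omega^2$ is indeed constant along it, so that $c=-\big((\omega')^2+\omega^2\big)$. The constants below are arbitrary sample values:

```python
import math

kappa = 1.0        # stands in for Lambda/(n-1); sample value
A, B = 1.3, -0.4   # omega = A*cos t + B*sin t solves omega'' = -kappa*omega

def omega(t):  return A * math.cos(t) + B * math.sin(t)
def omegap(t): return -A * math.sin(t) + B * math.cos(t)   # omega'

def energy(t):
    """The first integral (omega')^2 + kappa*omega^2, i.e. -c in (eq-edo-omega2)."""
    return omegap(t) ** 2 + kappa * omega(t) ** 2

E0 = energy(0.0)
for t in (0.5, 1.0, 2.0):
    assert abs(energy(t) - E0) < 1e-12   # constant along the solution
```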
If $c+\epsilon=0$, then $\beta=0$ and we are
in the position of Theorem \ref{th-betazero}-(ii), in which case
$\Sigma$ has constant sectional curvature
and its shape operator has rank at most $1.$
If $c+\epsilon\neq 0,$ we have from \eqref{eq-alpha-beta-omega010} that
$\beta$ does not vanish identically on $I,$ so that $I\times_\omega\mathbb{Q}_{\epsilon}^n $ has
nonconstant sectional curvature.
Also, since $\theta \equiv 0$, we have $T=\partial_t,$ which gives that $\Sigma$ is
a cylinder over a hypersurface $\Sigma_0$ of $\mathbb{Q}^n_{\epsilon}.$
Denoting by $\lambda_i^0$ the $i$-th
principal curvature of $\Sigma_0,$ we have from
equality \eqref{eq-principalcurvatureslemma}
in Lemma \ref{lem-verticalsection} that
$\lambda_i^0 = -\omega(t) \lambda_i.$ In this case,
the second equation in \eqref{einstein-theta1} becomes
\begin{equation} \label{eq-sigma0}
(\lambda_i^0)^2 - H^0 \lambda_i^0 -(n-2)(c+\epsilon)=0,
\end{equation}
where $H^0$ is the mean curvature of $\Sigma_0.$
Therefore, at each point of $\Sigma_0,$
we have at most two distinct principal curvatures. Let us call them $\lambda^0$ and $\mu^0$.
Suppose $c+ \epsilon <0$. If $\lambda^0 \neq \mu^0$ at some point, then
\begin{equation}\label{eq-lambdamuzero}
\begin{array}{rcl}
(k-1) \lambda^0 + (n-k-2) \mu^0 &=& 0 \\
\lambda^0 \mu^0 &=& -(n-2) (c+\epsilon).
\end{array}
\end{equation}
Then $\lambda^0$ and $\mu^0$ do not vanish and have the same sign. The first equation of \eqref{eq-lambdamuzero}
implies that $k-1=n-k-2=0$ and, therefore, $n=3$. Since we are supposing $n>3,$
we conclude that all points of $\Sigma_0$ are umbilical, which implies
that $\Sigma$ has constant sectional curvature.
If $c + \epsilon>0,$
the roots of \eqref{eq-sigma0} are
necessarily distinct, which implies that $\lambda^0 \neq \mu^0$ everywhere on $\Sigma_0.$
Moreover, from \eqref{eq-lambdamuzero},
$\lambda^0$ and $\mu^0$ are constant functions.
In this case, Cartan's identity $\lambda^0 \mu^0 = - \epsilon$ implies that
$\epsilon = 1$. Otherwise, again by the first equation of \eqref{eq-lambdamuzero},
we would have $n=3.$
Therefore, denoting by $\mathbb{S}^k(c)$ the $k$-dimensional sphere of constant sectional curvature $c,$ it follows from
\cite[Theorems 2.5,\,3.4]{ryan} that, for some $k\in\{1,\dots,n-2\},$
$\Sigma_0$ is congruent to the standard Riemannian product $\mathbb{S}^k(c_1)\times\mathbb{S}^{n-1-k}(c_2),$ where
\begin{equation}\label{radi-cylinder}
c_1={\dfrac{n-3}{k-1}} \quad\textnormal{and}\quad c_2={\dfrac{n-3}{n-k-2}}\,\cdot
\end{equation}
This finishes our proof.
\end{proof}
\begin{remark} \label{rem-thetacte}
It follows from the proof of Theorem \ref{Einstein-cylinder} that, except for the case (ii)-(a),
the Einstein hypersurface $\Sigma$ in its statement is necessarily ideal.
\end{remark}
In the next proof, we shall consider the fact that
the isoparametric hypersurfaces of $\mathbb{Q}_{\epsilon}^n $ having at most two distinct
principal curvatures are completely classified. They are
the totally umbilical hypersurfaces, in the case of a single principal curvature,
and the tubes around totally geodesic submanifolds,
in the case of two distinct principal curvatures. In Euclidean space, for instance,
these are the hyperplanes, geodesic spheres, and generalized cylinders
$\mathbb{S}^{k}\times\mathbb{R} ^{n-k-1},$ $k\in\{1,\dots, n-2\}$ (see \cite{cecil,dominguez-vazquez} for more details).
\begin{proof}[Proof of Theorem \ref{th-final}]
Keeping the notation of the proof of Theorem \ref{th-einstein}, we have that $\Sigma$ has at most
three distinct principal curvatures, $\lambda_1, \lambda$ and $\mu.$ Since $(\beta\circ\xi)T$ never vanishes on
$\Sigma,$ the principal curvature $\lambda_1$ cannot satisfy both equations in Lemma \ref{lem-T-einstein}. Thus,
$\lambda\ne\lambda_1\ne\mu,$ i.e., $\lambda_1$ has multiplicity one, so that
$\Sigma$ has at least two distinct principal curvatures.
Let us define the set
\[
C:=\{x\in\Sigma\,;\, \lambda(x)=\mu(x)\}.
\]
If $\Sigma=C,$ we are done. Analogously, if
$C=\emptyset,$ we have that
$\Sigma$ has three distinct principal curvatures everywhere. Also, since
$\{T\}^\perp$ is invariant by the shape operator $A$ of $\Sigma,$
and $\lambda$ and $\mu$ are the (distinct) eigenvalues of $A|_{\{T\}^\perp},$
we have from \cite[Proposition 2.2]{ryan} that both $\lambda$ and $\mu$
have constant multiplicity.
Now, suppose that $C$ and $\Sigma-C$ are both nonempty and let
$x$ be a boundary point of $\Sigma-C.$ Since $C$ is closed, one has
$x\in C,$ so that the vertical section $\Sigma_t\owns x$ is umbilical at $x$
(by \eqref{eq-principalcurvatureslemma} in Lemma \ref{lem-verticalsection}).
Furthermore, there is no open set $\Sigma'$ of $\Sigma$ containing $x$
such that $\theta$ vanishes on $\Sigma'.$ Indeed, assuming otherwise,
we have that $\Sigma'$ corresponds to
one of the cases (b) or (c)
of Theorem \ref{Einstein-cylinder}-(ii) (see Remark \ref{rem-thetacte}).
However, in any of them,
$\lambda=\mu$ or $\lambda\ne\mu$
everywhere on $\Sigma',$ which contradicts the fact that
$x$ is a boundary point of $\Sigma-C.$
It follows from the above considerations that, in any open set $\Sigma'$ of
$\Sigma$ containing $x,$ there exists
$y\in(\Sigma-C)\cap\Sigma'$ such that $\theta(y)\ne0.$
Then, from Theorem \ref{L1-grapheinstein}, there exists a local $(\phi,f_s)$-graph
$\Sigma(y)\owns y$ in $\Sigma-C$
whose parallel family $\{f_s\}$ is isoparametric in $\mathbb{Q}_{\epsilon}^n .$ In this case,
each parallel $f_s$ has precisely two distinct principal curvatures everywhere,
so that they are tubes around totally geodesic submanifolds.
Now, we can choose $y$ as above in such a way that
$x$ is a boundary point of $\Sigma(y).$
Then, denoting by $\pi$ the vertical projection of $I\times\mathbb{Q}_{\epsilon}^n $ over $\mathbb{Q}_{\epsilon}^n ,$
one has that an open neighborhood
$U\owns\pi(x)$ in $\pi(\Sigma_t)\subset\mathbb{Q}_{\epsilon}^n $ is the limit set of a
family of parallel open subsets of tubes of the family $\{f_s\}.$ In this case,
it is easily seen that $U$ itself must be part of a
tube of $\{f_s\}$. However, $U$ is umbilical at $\pi(x),$ and tubes have no umbilical points.
This contradiction shows that either $C$ or $\Sigma-C$ is empty, proving the
first part of the theorem. As for the last part, just apply Theorem \ref{th-einstein}.
\end{proof}
\section{Introduction}\label{int}
In recent years, distributed optimization algorithms and their applications have received extensive attention in decision-making problems for multi-agent networks \cite{nedic2018distributed,notarstefano2019distributed,yang2019survey}, with applications in smart grids \cite{Nguyen19}, resource allocation \cite{DAI2022}, and robot formations \cite{BHOWMICK2022}. In the framework of multi-agent networks, the agents have a local interaction pattern: each agent can only access its own information and that of its neighboring agents, and the goal of the agents is to optimize the sum of the local objective functions in a cooperative manner.
In general, distributed optimization algorithms can be divided into unconstrained optimization \cite{nedic2009distributed} \cite{chen2020distributed} and constrained optimization \cite{nedic2018distributed} \cite{yi2015distributed}, depending on whether constraints are present. For unconstrained optimization, various algorithms such as the consensus subgradient algorithm \cite{nedic2009distributed}, dual averaging \cite{duchi2011dual}, EXTRA \cite{shi2015extra}, and the gradient tracking algorithm \cite{pu2021distributed} have been studied.
For constrained optimization problems, methods based on projection dynamics and primal-dual dynamics have been studied. For example, \cite{nedic2014distributed} studied the distributed projected subgradient algorithm. \cite{lei2016primal} studied a projected primal-dual algorithm for constrained optimization, and \cite{Liang2020} developed a distributed dual averaging push-sum algorithm with dual decomposition, in which a constrained subproblem is solved at each step. \cite{YUAN2018} investigated a distributed mirror-descent optimization algorithm based on the Bregman divergence and achieved an $O(\ln(T)/T)$ rate of convergence. \cite{Yuan2016} proposed a consensus-based distributed regularized primal-dual gradient method. Compared to methods that require projection of estimates onto the constraint set at each iteration, the algorithm in \cite{Yuan2016} only required one projection at the last iteration.
However, a projection-based distributed algorithm implies that the agent needs to solve a quadratic optimization problem at each iteration to find the closest point within the constraint set. When the constraints have complex structures (e.g., polyhedra), the computational cost of solving the quadratic subproblem can prevent the agent from using projection-based dynamics, especially for high-dimensional optimization problems. Fortunately, the well-known Frank-Wolfe (FW) algorithm \cite{frank1956algorithm} provides a way to derive an effective search direction while maintaining decision feasibility. Each step of the FW algorithm only needs to solve a linear program over the constraint set, which may have a closed-form solution for specific problems or admit efficient solvers.
Recently, FW methods have received renewed research attention due to its projection-free property and advantages for large-scale problems for online learning \cite{hazan2020faster}, machine learning \cite{chen2018projection} and traffic assignment \cite{chen2001effects}.
Briefly speaking, the FW method approximates the objective function by its linearization and derives feasible descent directions by solving a linear program over the constraint set: $\boldsymbol{s}:=\underset{\boldsymbol{s} \in \mathcal{D}}{\arg \min }\left\langle\boldsymbol{s}, \nabla f\left(\boldsymbol{x}^{(k)}\right)\right\rangle$, $\boldsymbol{x}^{(k+1)}:=(1-\gamma) \boldsymbol{x}^{(k)}+\gamma \boldsymbol{s}$. There have been numerous subsequent developments and applications of the FW method. For example, the primal-dual convergence rate was analyzed in \cite{jaggi2013revisiting}, and \cite{lacoste2016convergence} analyzed its convergence for non-convex problems. In addition, \cite{Lafond2016} developed communication-efficient algorithms using the stochastic Frank-Wolfe (sFW) algorithm when there exist noises on the gradient computation. FW algorithms have also been utilized and extended in distributed optimization. For example, \cite{wai2017decentralized} studied the decentralized optimization problem by treating the algorithm as an inexact FW algorithm with consensus errors. By combining gradient tracking and the FW algorithm, \cite{Guanpu2021} investigated a continuous-time algorithm based on FW dynamics, and \cite{jiang2022distributed} studied the FW algorithm for constrained stochastic optimization.
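To illustrate the projection-free nature of the FW iteration described above, the following minimal sketch runs it on a toy problem over the $\ell_1$ ball, whose linear minimization oracle has a closed form. The objective, constraint set, and step-size rule are illustrative choices, not the setting of this paper:

```python
# Toy problem: min 0.5*||x - b||^2 over the l1 ball ||x||_1 <= 1.
# The l1-ball LMO (a signed coordinate vector) is an illustrative choice.

def lmo_l1(grad):
    """argmin_{||s||_1 <= 1} <s, grad>: put all mass on the largest |grad_i|."""
    i = max(range(len(grad)), key=lambda j: abs(grad[j]))
    s = [0.0] * len(grad)
    s[i] = -1.0 if grad[i] > 0 else 1.0
    return s

def frank_wolfe(b, n_iters=200):
    x = [0.0] * len(b)                                   # feasible start
    for k in range(n_iters):
        grad = [xi - bi for xi, bi in zip(x, b)]         # gradient of objective
        s = lmo_l1(grad)                                 # linear subproblem
        gamma = 2.0 / (k + 2)                            # standard step size
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x

x = frank_wolfe([2.0, 0.2, -0.1])
assert sum(abs(v) for v in x) <= 1.0 + 1e-9              # iterates stay feasible
```

Note that no quadratic projection subproblem is solved: every iterate is a convex combination of feasible points and hence automatically feasible.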
In the distributed optimization problems mentioned above, the agents decide on a common variable and need to reach consensus on the optimal decision. In other scenarios, however, each agent only decides its own variable, while the objective function of each agent is related to other agents' decisions through an aggregated variable. For example, in the multi-agent formation control problem \cite{cao2020distributed}, a group of networked agents wish to achieve a geometric pattern while surrounding a target, which can be regarded as a goal tracking problem. The dynamical tracking problem can be modelled as an online distributed optimization problem. In this case, each agent decides its location, while the objective function also depends on the centroid of all agents. Similar scenarios exist in resource allocation \cite{yi2016initialization}, smart grids \cite{Longe2017}, and social networks \cite{PANTOJA2019209} as well. The above optimization problem is called distributed aggregative optimization in \cite{li2021distributed}. Aggregative optimization is also related to the well-studied aggregative games \cite{koshal2016distributed,liang2017distributed,grammatico2017dynamic}, but can be treated as a cooperative formulation, in contrast to the noncooperative setting of multi-agent decision problems. Regarding the study of aggregative optimization, \cite{li2021distributed} considered a static unconstrained framework, \cite{carnevale2021distributed} considered an online constrained framework, and \cite{chen2022distributed} considered a quantitative problem. To the best of our knowledge, no work has been done to address the computational bottlenecks encountered by existing algorithms when dealing with complicated constraints that prohibit projection-based methods.
Therefore, our interest is to design projection-free methods to solve distributed aggregative optimization problems with constraints. Motivated both by \cite{li2021distributed} and \cite{jaggi2013revisiting}, we propose a FW based approach to solve the aggregative optimization in a distributed manner. The main contributions are as follows.
\begin{itemize}
\item Firstly, a novel distributed projection-free algorithm based on the FW method with gradient tracking is designed to solve the aggregative optimization problem. Each agent's local cost function depends both on its own variable and on the aggregated variable, while the global information is not known by any single agent. The proposed algorithm uses a dynamical averaging tracking approach to estimate the global aggregation variable and the corresponding gradient term.
\item Secondly, we prove that the algorithm converges to the optimal solution when the objective function is convex. Compared with the projected dynamics in \cite{carnevale2021distributed}, the proposed algorithm is able to solve the aggregative optimization problem over time-varying communication graphs.
\item Finally, we demonstrate the efficiency of the proposed algorithm with numerical studies.
\end{itemize}
The rest of the paper is organized as follows. Section 2 introduces notations and preliminary knowledge of graph theory, and formulates the distributed aggregative optimization problem. Section 3 provides the proposed distributed algorithm and the main convergence result. Section 4 provides the proof of convergence. Then, a numerical experiment is given in Section 5, and Section 6 concludes the paper.
{\bf Notations:} When referring to a vector $x$, it is assumed to be
a column vector while $x^{\top}$ denotes its transpose. $\langle x,y\rangle=x^{\top}y$ denotes the inner product of vectors $x,y.$ $\|x\|$
denotes the Euclidean vector norm, i.e., $\|x\|=\sqrt{x^{\top}x}$. Let $\otimes$ be the Kronecker product. Denote by $\mathbf{1}_{N}$ and $\mathbf{0}_{N}$ the $N$-dimensional column vectors with all entries equal to $1$ and $0$, respectively.
A nonnegative square matrix $A $ is called doubly stochastic if
$A\mathbf{1} =\mathbf{1}$ and $\mathbf{1}^{\top} A =\mathbf{1}^{\top}$, where
$\mathbf{1}$ denotes the vector with each entry being $1$.
$\mathbf{I}_N \in \mathbb{R}^{N\times N}$ denotes the identity matrix.
Let $\mathcal{G}=\{ \mathcal{N}, \mathcal{E}\}$ be a directed graph with
$\mathcal{N}=\{1,\cdots,N\} $ denoting the set of players and $\mathcal{E}$
denoting the set of directed edges between players, where
$(j,i)\in\mathcal{E }$ if player $i$ can obtain information from
player $j$.
The graph $\mathcal{G}$ is called strongly connected if for
any $ i,j\in \mathcal{N}$ there exists a directed path from $i$ to
$j$, i.e., there exists a sequence of edges $ (i,i_1),(i_1,i_2),\cdots,(i_{p-1},j)$ in the digraph with distinct nodes $ i_m \in \mathcal{N},~\forall m: 1 \leq m \leq p-1$.
A differentiable function $f: \mathbb{R}^{n} \rightarrow \mathbb{R}$ is $\mu$-strongly convex if for all $\boldsymbol{\theta}, \boldsymbol{\theta}^{\prime} \in \mathbb{R}^{n}$,$
f(\boldsymbol{\theta})-f\left(\boldsymbol{\theta}^{\prime}\right) \leq\left\langle\nabla f(\boldsymbol{\theta}), \boldsymbol{\theta}-\boldsymbol{\theta}^{\prime}\right\rangle-\frac{\mu}{2}\left\|\boldsymbol{\theta}-\boldsymbol{\theta}^{\prime}\right\|_{2}^{2}
$. Moreover, $f$ is convex if the above inequality is satisfied with $\mu=0$.
\section{Problem Formulation }\label{sec:formulation}
In this section, we formulate the aggregative optimization problem over networks and introduce some basic assumptions.
\subsection{Problem Statement}
We define the aggregative optimization problem with $N$ agents:
\begin{equation}\label{Nopt_agg}
\begin{split}
\min_{\boldsymbol{x} \in X \triangleq \prod_{i=1}^{N} {X}_{i}}& f(\boldsymbol{x}) = \sum_{i=1}^{N}f_{i}(x_{i},\delta(\boldsymbol{x})) {\rm~with~}\delta(\boldsymbol{x}) = \frac{1}{N}\sum_{i=1}^{N}\phi_{i}(x_{i}),
\end{split}
\end{equation}
where $\boldsymbol{x} \triangleq \operatorname{col}\big((x_{i})_{i\in \mathcal{N}}\big)$ is the global strategy variable with $x_{i}\in\mathbb{R}^{n_{i}}$ being the decision variable of agent $i$. The function $f_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is the local objective function of agent $i$ with $n\triangleq\sum_{i=1}^{N}n_{i}$, $X_i \subset \mathbb{R}^{n_i}$ denotes the local constraint set of agent $i,$ and $\mathcal{N} =\{1,\dots,N\}$ denotes the set of agents. Moreover, $\delta(\boldsymbol{x})$ is an aggregate information of all agents' variables, where $\phi_{i}:\mathbb{R}^{n_i}\rightarrow\mathbb{R}^{d}$ is only available to agent $i$.
The goal is to design a distributed algorithm to cooperatively seek an optimal decision variable to the problem
(\ref{Nopt_agg}).
The gradient of $f(\boldsymbol{x})$ is defined by
\[\nabla f(\boldsymbol{x}) \triangleq \nabla_{1} f(\boldsymbol{x}, \boldsymbol{\delta}(\boldsymbol{x}))+\nabla \phi(\boldsymbol{x}) \mathbf{1}_{N} \otimes \frac{1}{N} \sum_{i=1}^{N} \nabla_{\delta} f_{i}\left(x_{i},\delta(\boldsymbol{x})\right),
\] where $\nabla_{1} f(\boldsymbol{x},\boldsymbol{\delta}(\boldsymbol{x})): =\operatorname{col} (\nabla_{x_i}f_{i}(x_{i},\mathbf{z})_{|\mathbf{z}=\boldsymbol{\delta}(\boldsymbol{x})})_{i\in\mathcal{N}} $, and
\[\nabla \phi\left(\boldsymbol{x}\right):=\left[\begin{array}{lll}
\nabla \phi_{1}\left(x_{1}\right) & & \\
& \ddots & \\
& & \nabla \phi_{N}\left(x_{N}\right)
\end{array}\right] .\]
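As an illustrative sanity check of this gradient decomposition (not part of the analysis), one can compare it with finite differences on a tiny hypothetical instance with $N=2$ scalar agents, $\phi_i(x_i)=x_i^2$, and $f_i(x_i,\delta)=a_ix_i^2+x_i\delta+\delta^2$; all numerical choices below are illustrative:

```python
# Hypothetical instance: N = 2 scalar agents, phi_i(x_i) = x_i^2,
# f_i(x_i, delta) = a_i*x_i^2 + x_i*delta + delta^2 (all choices illustrative).
N = 2
a = [0.5, 1.5]

def phi(i, xi):  return xi * xi
def dphi(i, xi): return 2.0 * xi                    # nabla phi_i

def delta(x):
    return sum(phi(i, x[i]) for i in range(N)) / N  # aggregate delta(x)

def fi(i, xi, d):    return a[i] * xi * xi + xi * d + d * d
def d1_fi(i, xi, d): return 2.0 * a[i] * xi + d     # partial in x_i (delta frozen)
def dd_fi(i, xi, d): return xi + 2.0 * d            # partial in delta

def f(x):
    d = delta(x)
    return sum(fi(i, x[i], d) for i in range(N))

def grad_formula(x):
    """The decomposition: nabla_1 f plus nabla phi times the average of nabla_delta f_i."""
    d = delta(x)
    avg = sum(dd_fi(i, x[i], d) for i in range(N)) / N
    return [d1_fi(i, x[i], d) + dphi(i, x[i]) * avg for i in range(N)]

def grad_fd(x, h=1e-6):
    g = []
    for i in range(N):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))          # central difference
    return g

x = [0.7, -1.2]
assert all(abs(u - v) < 1e-5 for u, v in zip(grad_formula(x), grad_fd(x)))
```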
For ease of notation, we also specify the cost function as $f_{i}(x_{i},\delta(\boldsymbol{x})) = g_{i}(x_{i},z_{i})_{|z_{i}=\delta(\boldsymbol{x})}$ with a function $g_{i}:\mathbb{R}^{n_{i}+d} \rightarrow \mathbb{R}, i\in \mathcal{N}$. To move forward, define $g(\boldsymbol{x},\boldsymbol{z}) = \sum_{i=1}^{N}g_{i}(x_{i},z_{i}):\mathbb{R}^{n+Nd}\rightarrow \mathbb{R}$ for any $\boldsymbol{x}$ and $\boldsymbol{z} \triangleq \operatorname{col}((z_{i})_{i\in \mathcal{N}} )$. The gradient of $g(\boldsymbol{x},\boldsymbol{z})$ is defined by $\nabla_{1}g(\boldsymbol{x},\boldsymbol{z}):=\operatorname{col}(\nabla_{x_i}g_{i}(x_{i},z_{i})_{i\in\mathcal{N}})$, $\nabla_{2}g(\boldsymbol{x},\boldsymbol{z}):=\operatorname{col}(\nabla_{z_i}g_{i}(x_{i},z_{i})_{i\in\mathcal{N}})$.
Next, we impose some assumptions on the formulated problem.
We require the agent-specific problem to be convex and continuously differentiable.
\begin{assumption}~\label{ass-payoff}
(a) For each agent $i\in \mathcal{N},$ the strategy set $X_i$ is closed, convex and compact.
In addition, the diameter of $X = \prod_{i=1}^{N} {X}_{i} $ is defined as $\bar{\rho}:=\max_{\theta,\theta^{\prime}\in X}\|\theta-\theta^{\prime}\|_{2}$;
\\ (b) the global objective function $f$ is convex and differentiable in $\boldsymbol{x} \in X$, and
its gradient function is $L$-smooth in $\boldsymbol{x} \in X$, i.e.,
\[ f(\mathbf{y})-f(\mathbf{x}) \geq (\mathbf{y}-\mathbf{x})^T \nabla f(\mathbf{x}),~\| \nabla f(\mathbf{x}) -\nabla f(\mathbf{y}) \|\leq L\|\mathbf{x}-\mathbf{y}\|, \quad \forall \mathbf{x},\mathbf{y} \in X.\]
\end{assumption}
\begin{assumption}~\label{ass-gradient}
(a) $\nabla_{1} g(\boldsymbol{x}, \boldsymbol{z})$ is $l_{1}$-Lipschitz continuous in $\boldsymbol{z} \in \mathbb{R}^{Nd}$ for any $\boldsymbol{x} \in X,$ i.e.,
\[ \|\nabla_{1} g(\boldsymbol{x}, \boldsymbol{z}_1)-\nabla_{1} g(\boldsymbol{x}, \boldsymbol{z}_2)\| \leq l_1 \|\boldsymbol{z}_1-\boldsymbol{z}_2\|,~\forall \boldsymbol{z}_1, \boldsymbol{z}_2 \in \mathbb{R}^{Nd}.\]
(b) $\nabla_{2} g(\boldsymbol{x}, \boldsymbol{z})$ is $l_{2}$-Lipschitz continuous in $(\boldsymbol{x}, \boldsymbol{z}) \in X \times \mathbb{R}^{Nd}$, i.e.
\begin{align}
\|\nabla_{2} g(\boldsymbol{x}_1, \boldsymbol{z}_1)-\nabla_{2} g(\boldsymbol{x}_2, \boldsymbol{z}_2)\| \leq l_2 \|\boldsymbol{x}_1-\boldsymbol{x}_2\| +l_2 \|\boldsymbol{z}_1-\boldsymbol{z}_2\|, \forall \boldsymbol{x}_1, \boldsymbol{x}_2 \in X, \boldsymbol{z}_1, \boldsymbol{z}_2 \in \mathbb{R}^{Nd}.
\end{align}
(c) For each agent $i\in \mathcal{N}$, $\phi_{i}(x_{i})$ is differentiable in $x_i \in X_i$ and $\nabla\phi_{i}(x_{i}) \in \mathbb{R}^{n_{i}\times d}$ is uniformly bounded in $x_i \in X_i$, i.e.,
there exists a constant $c_{i}>0$ such that $\|\nabla\phi_{i}(x_{i})\| \leq c_{i}$ for any $x_i \in X_i$.\\
(d) For each agent $i\in \mathcal{N},$ $\phi_i\left(x_i\right)$ is $l_3$-Lipschitz continuous in $x_i \in X_i$.
\end{assumption}
\subsection{Graph theory}
We consider the information setting in which each player $i \in \mathcal{N}$ knows its private functions $f_{i},~\phi_{i}$ and the local constraint set $X_i$, but has no access to the aggregate $\delta(\boldsymbol{x})$. Instead, each player is able to communicate with its neighbors over a time-varying graph $\mathcal{G}_{k}=\left\{\mathcal{N}, \mathcal{E}_{k}\right\}$. Define $W_{k}=\left[\omega_{i j, k}\right]_{i, j=1}^{N}$ as the adjacency matrix of $\mathcal{G}_{k}$, where $\omega_{i j, k}>0$ if and only if $(j, i) \in \mathcal{E}_{k}$, and $\omega_{i j, k}=0$ otherwise. Denote by $\mathcal{N}_{i, k} \triangleq\{j \in \mathcal{N}:(j, i) \in \mathcal{E}_{k}\}$ the neighboring set of player $i$ at time $k$.
We impose the following conditions on the time-varying communication graphs $\mathcal{G }_k = \{ \mathcal{N}, \mathcal{E }_k\}$.
\begin{assumption}~\label{ass-graph} (a) $W_k $ is doubly stochastic for any $k\geq 0$;
\\(b) There exists a constant $0< \eta<1$ such that
$$ \omega_{ij,k} \geq \eta , \quad \forall j \in \mathcal{N}_{i,k},~~\forall i \in \mathcal{N},~\forall k \geq 0;$$
(c) There exists a positive integer $B $ such that the union graph
$\big \{ \mathcal{N }, \bigcup_{l=1}^B \mathcal{E }_{k+l} \big \}$ is strongly connected for all $k\geq 0$.
\end{assumption}
We define a transition matrix $\Phi(k,s) =W_kW_{k-1}\cdots W_s$ for any $k\geq s\geq 0$ with $\Phi(k,k+1) =\mathbf{I}_N$,
and state a result that will be used in the sequel.
\begin{lemma}\label{lemma_graph}\cite[Proposition 1]{nedic2009distributed}
Let Assumption \ref{ass-graph} hold. Then there exist $\theta=(1-\eta/(4N^2))^{-2}>0$ and $\beta =(1-\eta/(4N^2))^{1/B}\in(0,1) $ such that for any $k\geq s \geq 0,$
\begin{align}\label{geometric}
\left | \left[\Phi(k,s)\right]_{ij}-1/N \right| \leq \theta \beta^{k-s},\quad \forall i,j\in \mathcal{N}.
\end{align}
\end{lemma}
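The geometric mixing in Lemma \ref{lemma_graph} is easy to observe numerically. The Python sketch below uses a toy setting of our own choosing (a fixed lazy random walk on a ring, which is symmetric and hence doubly stochastic, with a strongly connected graph) and tracks the entrywise deviation of $\Phi(k,0)$ from $1/N$.

```python
import numpy as np

N = 5
# lazy random walk on a ring: symmetric, hence doubly stochastic
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i + 1) % N] = 0.25
    W[i, (i - 1) % N] = 0.25

Phi = np.eye(N)
errs = []
for k in range(60):
    Phi = W @ Phi                      # Phi(k, 0) = W_k * ... * W_0 (here W_k = W)
    errs.append(np.max(np.abs(Phi - 1.0 / N)))

# the entrywise deviation from 1/N decays geometrically; over ten steps it
# shrinks by roughly (second-largest eigenvalue of W)^10
decade_ratio = errs[-1] / errs[-11]
```

The observed decay factor matches the second-largest eigenvalue of $W$, which plays the role of $\beta$ in \eqref{geometric} for this static graph.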
\section{Algorithm Design and Main Results }\label{sec:agg}
In this section, we design a distributed projection-free algorithm and provide its convergence performance.
\subsection{Distributed Projection-free Algorithm}
It is worth pointing out that the Frank-Wolfe method for the optimization problem $\min_{x\in C} f(x) $ requires minimizing a linear function over constraint sets \cite{scutari2012monotone}, in contrast to the projected gradient methods which require the minimization of quadratic functions over constraint sets.
Recall that the Frank-Wolfe step is given by
\begin{align}
(F W A)\left\{\begin{array}{l}
y_{k}=\operatorname{argmin}_{y \in C}\left\langle\nabla f\left(x_{k}\right), y\right\rangle, \\
x_{k+1}=\left(1-\alpha_{k}\right) x_{k}+\alpha_{k} y_{k} .
\end{array}\right.
\end{align}
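The linear minimization step is cheap whenever the constraint set has simple vertex structure. As an illustration, the following minimal sketch runs the centralized scheme (FWA) on a toy problem of our own choosing (not from the paper): a quadratic over the box $C=[-1,1]^n$, whose linear minimization oracle simply returns a vertex of the box.

```python
import numpy as np

n = 10
b = np.full(n, 0.5)                 # unconstrained minimizer of f, lies inside C
x = np.ones(n)                      # feasible starting vertex of C = [-1, 1]^n

def f(x):
    return 0.5 * np.sum((x - b) ** 2)

for k in range(2000):
    g = x - b                       # gradient of f at x
    y = -np.sign(g)                 # argmin_{y in C} <g, y>: a vertex of the box
    gamma = 2.0 / (k + 2)           # classical Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * y # convex combination keeps x feasible

gap = f(x)                          # equals f(x_k) - f(x*) here, since f(x*) = 0
```

No projection is ever computed: feasibility is maintained for free by the convex combination, which is precisely the property the distributed algorithm below inherits.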
Next we present the proposed distributed Frank-Wolfe algorithm with gradient tracking (D-FWAGT).
\begin{algorithm}[htbp]
\caption{Distributed Projection-free Method for Aggregative Optimization} \label{alg1}
{\it Initialize:} Set $k=0$, $x_{i,0} \in X_i$, $ v_{i,0}= \phi_{i}(x_{i,0}) $, and $y_{i,0} = \nabla_{v_i}g_{i}(x_{i,0},v_{i,0}) $ for each $i \in\mathcal{N}$.
{\it Iterate until convergence}\\
{\bf Consensus.} Each player computes an intermediate estimate by
\begin{align}
&\hat{v}_{i,k+1} = \sum_{j\in \mathcal{N}_{i,k}} w_{ij,k} v_{j,k} \label{alg-consensus},\\
&\hat{y}_{i,k+1} = \sum_{j\in \mathcal{N}_{i,k}} w_{ij,k} y_{j,k} \label{alg-g}.
\end{align}
{\bf Strategy Update.} Each agent $ i\in \mathcal{N}$ updates its decision variable and its estimate of the average aggregates by
\begin{align}
& s_{i,k} =\mathop{\rm argmin}_{s_{i} \in X_i} \Big( \big \langle \nabla_{1}g_{i}(x_{i,k},\hat{v}_{i,k+1})+\nabla{\phi}_{i}(x_{i,k})\hat{y}_{i,k+1},s_{i} \big \rangle \Big) ,\notag \\
&x_{i,k+1} = (1-\gamma_{k})x_{i,k} + \gamma_k s_{i,k} \label{alg-strategy0},\\
& v_{i,k+1} = \hat{v}_{i,k+1}+\phi_{i}(x_{i,k+1})-\phi_{i}(x_{i,k}) ,\label{alg-average}\\
& y_{i,k+1} = \hat{y}_{i,k+1} + \nabla_{v_{i}}g_{i}(x_{i,k+1},v_{i,k+1})-\nabla_{v_{i}}g_{i}(x_{i,k},v_{i,k}) \label{alg-gra2},
\end{align}
where $\gamma_{k}\in[0,1)$.
\end{algorithm}
Suppose that each agent $i$ at stage $k $ selects a strategy $x_{i,k} \in X_i $ as an estimate of its optimal strategy,
and holds the estimates $v_{i,k}$ and $y_{i,k}$ for the aggregate $\delta(\boldsymbol{x})$ and the gradient term $\frac{1}{N} \sum_{i=1}^{N} \nabla_{z} g_{i}\left(x_{i}, z \right)|_{z=\delta(\boldsymbol{x})}$, respectively.
At stage $k+1$, player $i$ observes or receives its neighbors' information $ v_{j,k},y_{j,k} ,j\in \mathcal{N}_{i,k}$ and updates two
intermediate estimates by the consensus step \eqref{alg-consensus} and \eqref{alg-g}.
Then it computes its gradient estimate and updates its strategy $x_{i,k+1} $ by the projection-free scheme \eqref{alg-strategy0} with a Frank-Wolfe step.
Finally, player $i$ updates the estimate for average aggregate $v_{i,k+1}$ with the renewed strategy $x_{i,k+1} $ by
the dynamic average tracking scheme \eqref{alg-average},
and updates the estimate of gradient by the dynamic tracking scheme \eqref{alg-gra2}. The procedures are summarized in Algorithm \ref{alg1}.
Denote by $\boldsymbol{x}_{k}:=\operatorname{col}(x_{1,k},\dots,x_{N,k})$, with similar notations for $\boldsymbol{s}_{k},~\boldsymbol{v}_{k},~\boldsymbol{y}_{k},~\hat{\boldsymbol{v}}_{k },~\hat{\boldsymbol{y}}_k$. Then Algorithm \ref{alg1} can be written in the compact form
\begin{align}
& \boldsymbol{s}_{k} = \mathop{\rm argmin}_{\boldsymbol{s}\in X} \langle\nabla_{1}g(\boldsymbol{x}_{k},\hat{\boldsymbol{v}}_{k+1})+\nabla{\phi}(\boldsymbol{x}_{k})\hat{\boldsymbol{y}}_{k+1},\boldsymbol{s}\rangle \label{fw_gl} ,\\
& \boldsymbol{x}_{k+1} = (1-\gamma_{k})\boldsymbol{x}_{k} + \gamma_{k}\boldsymbol{s}_{k} \label{fw_update},\\
& \boldsymbol{v}_{k+1} =\hat{\boldsymbol{v}}_{k+1} + \phi(\boldsymbol{x}_{k+1}) - \phi(\boldsymbol{x}_{k}) \label{tr_con},\\
& \boldsymbol{y}_{k+1} =\hat{\boldsymbol{y}}_{k+1} + \nabla_{2}g(\boldsymbol{x}_{k+1},\boldsymbol{v}_{k+1}) - \nabla_{2}g(\boldsymbol{x}_{k},\boldsymbol{v}_{k})\label{tr_gra},
\end{align}
where $\hat{\boldsymbol{v}}_{k+1}= W_{d,k}\boldsymbol{v}_{k}$, $\hat{\boldsymbol{y}}_{k+1} = W_{d,k}\boldsymbol{y}_{k}$, $W_{d,k}=W_k\otimes I_{d}$, and $\phi(\boldsymbol{x}_{k}):=\operatorname{col}(\phi_{1}(x_{1,k}),\dots,\phi_{N}(x_{N,k})).$
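A possible end-to-end simulation of the compact recursion \eqref{fw_gl}--\eqref{tr_gra} is sketched below. All problem data are illustrative choices of ours, not from the paper: $N=5$ scalar agents with $X_i=[0,1]$, $\phi_i(x_i)=x_i$ (so $\delta(\boldsymbol{x})$ is the average decision and $d=1$), $g_i(x_i,z)=(x_i-a_i)^2+z^2$, a fixed doubly stochastic ring matrix $W$, and $\gamma_k = 1/(k+2)$; a centralized projected-gradient run serves as the reference optimum.

```python
import numpy as np

N, T = 5, 3000
a = np.linspace(0.2, 1.0, N)            # local targets (toy data)
W = np.zeros((N, N))                    # fixed doubly stochastic ring
for i in range(N):
    W[i, i] = 0.5
    W[i, (i + 1) % N] = 0.25
    W[i, (i - 1) % N] = 0.25

def f(x):                               # global objective: sum_i g_i(x_i, mean(x))
    return np.sum((x - a) ** 2) + N * np.mean(x) ** 2

x = np.zeros(N)                         # x_{i,0} in X_i = [0, 1]
v = x.copy()                            # v_{i,0} = phi_i(x_{i,0}) = x_{i,0}
y = 2.0 * v                             # y_{i,0} = nabla_z g_i(x_{i,0}, v_{i,0}) = 2 v_{i,0}

for k in range(T):
    v_hat, y_hat = W @ v, W @ y         # consensus steps
    grad = 2.0 * (x - a) + y_hat        # nabla_1 g_i + nabla phi_i * y_hat (phi' = 1)
    s = np.where(grad >= 0.0, 0.0, 1.0) # LMO over [0, 1]: pick an endpoint
    gamma = 1.0 / (k + 2)
    x_new = (1.0 - gamma) * x + gamma * s
    v_new = v_hat + x_new - x           # aggregate tracking, phi = identity
    y = y_hat + 2.0 * (v_new - v)       # gradient tracking
    x, v = x_new, v_new

# centralized projected-gradient reference for f over [0, 1]^N
xr = np.zeros(N)
for _ in range(20000):
    xr = np.clip(xr - 0.05 * (2.0 * (xr - a) + 2.0 * np.mean(xr)), 0.0, 1.0)
gap = f(x) - f(xr)
```

Each agent only touches its own data and its neighbors' $(v_j,y_j)$, yet the objective value approaches the centralized optimum, consistent with Theorem 1.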
We impose the following conditions on the step-length sequence $\left(\gamma_{k}\right)_{k \in \mathbb{N}}$.
\begin{assumption}\label{rk}
i) (nonincreasing) $0 \leq \gamma_{k+1} \leq \gamma_{k} \leq 1$, for all $k \geq 0$;\\
ii) (nonsummable) $\sum_{k=0}^{\infty} \gamma_{k}=\infty$;\\
iii) (square-summable) $\sum_{k=0}^{\infty}\left(\gamma_{k}\right)^{2}<\infty$.
\end{assumption}
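For instance, $\gamma_k = 1/(k+2)$ satisfies all three conditions, as the following quick numerical check (our own example) suggests.

```python
# gamma_k = 1/(k+2): nonincreasing, nonsummable (harmonic-type growth of the
# partial sums), and square-summable (the squares sum to about pi^2/6 - 1).
gammas = [1.0 / (k + 2) for k in range(100000)]
nonincreasing = all(g2 <= g1 for g1, g2 in zip(gammas, gammas[1:]))
partial_sum = sum(gammas)               # grows like log(k): unbounded
square_sum = sum(g * g for g in gammas) # bounded, approximately 0.645
```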
Denote by $\boldsymbol{x}^{\star}$ the optimal solution to the aggregative optimization problem \eqref{Nopt_agg}. We then present the main convergence result of this paper.
\begin{theorem}
Let Algorithm 1 be applied to the problem \eqref{Nopt_agg}, where the step size $\gamma_{k}$ satisfies Assumption \ref{rk}. Suppose that Assumptions \ref{ass-payoff}-\ref{ass-graph} hold. Then
$$
\lim_{k \rightarrow \infty} f(\boldsymbol{x}_{k}) = f(\boldsymbol{x}^{\star}).
$$
\end{theorem}
\section{Proof of Main Result}
We first establish bounds on the consensus error of the aggregate and gradient tracking estimate measured by
$\|\delta(\boldsymbol{x}_k) -\hat{v}_{i,k+1}\| $ and $\|\hat{\boldsymbol{y}}_{k+1} -\mathbf{1} \otimes \bar{y}_{k} \| $, respectively, which will play an important role in the proof of Theorem 1.
Similarly to \cite[Lemma 2]{koshal2016distributed}, we have the following result.
\begin{lemma}\label{lem_ave}
Let Assumption \ref{ass-graph} hold. Then, for all $k\geq 0$,
\begin{align}
& \bar{v}_{k} = \frac{1}{N}\sum_{i=1}^{N}v_{i,k} = \frac{1}{N}\sum_{i=1}^{N}\phi_{i}(x_{i,k}) , \label{average_x}\\
& \bar{y}_{k} = \frac{1}{N}\sum_{i=1}^{N}y_{i,k} = \frac{1}{N}\sum_{i=1}^{N}\nabla_{v_{i}}g_{i}(x_{i,k},v_{i,k}). \label{average_y}
\end{align}
\end{lemma}
\noindent {\bf Proof.} In view of \eqref{tr_con} and the double stochasticity of $W_k$ in Assumption \ref{ass-graph}, multiplying both sides of \eqref{tr_con} by $\frac{1}{N} (\mathbf{1}_{N}^{\top}\otimes I_{d})$ yields
$$
\bar{v}_{k+1}=\bar{v}_{k}+\frac{1}{N} \sum_{i=1}^{N} \phi_{i}\left(x_{i, k+1}\right)-\frac{1}{N} \sum_{i=1}^{N} \phi_{i}\left(x_{i, k}\right)
$$
which further implies that
$$
\bar{v}_{k}-\frac{1}{N} \sum_{i=1}^{N} \phi_{i}\left(x_{i, k}\right)=\bar{v}_{0}-\frac{1}{N} \sum_{i=1}^{N} \phi_{i}\left(x_{i, 0}\right) .
$$
Combining the above equality with $v_{i,0} = \phi_{i}(x_{i,0})$ yields the first assertion of this lemma.
Secondly, we prove \eqref{average_y} by induction. Since the estimates are initialized as $y_{i,0}=\nabla_{v_{i}}g_{i}(x_{i,0},v_{i,0})$ for all $i\in\mathcal{N}$, we have $\bar{y}_{0} = \frac{1}{N}\sum_{i=1}^{N}\nabla_{v_{i}}g_{i}(x_{i,0},v_{i,0})$. At step $k$, we assume that $\bar{y}_{k}= \frac{1}{N}\sum_{i=1}^{N}\nabla_{v_{i}}g_{i}(x_{i,k},v_{i,k}) $. We need to show that relation \eqref{average_y} holds at step $k+1$:
\begin{align}
\bar{y}_{k+1} &= \frac{1}{N}(\mathbf{1}^{\top}\otimes I_{d}) \big((W_{k}\otimes I_{d})\boldsymbol{y}_{k} + \nabla_{2}g(\boldsymbol{x}_{k+1},\boldsymbol{v}_{k+1}) - \nabla_{2}g(\boldsymbol{x}_{k},\boldsymbol{v}_{k})\big) \notag \\
& = \frac{1}{N}(\mathbf{1}^{\top}\otimes I_{d})(W_{k}\otimes I_{d})\boldsymbol{y}_{k} + \frac{1}{N}\sum_{i=1}^{N}\nabla_{v_{i}}g_{i}(x_{i,k+1},v_{i,k+1}) - \frac{1}{N}\sum_{i=1}^{N}\nabla_{v_{i}}g_{i}(x_{i,k},v_{i,k}) \notag \\
& = \bar{y}_{k} + \frac{1}{N}\sum_{i=1}^{N}\nabla_{v_{i}}g_{i}(x_{i,k+1},v_{i,k+1}) - \frac{1}{N}\sum_{i=1}^{N}\nabla_{v_{i}}g_{i}(x_{i,k},v_{i,k}) \notag \\
& = \frac{1}{N}\sum_{i=1}^{N}\nabla_{v_{i}}g_{i}(x_{i,k+1},v_{i,k+1}), \notag
\end{align}
where the first equality follows from the update rule \eqref{tr_gra} and the definition $\bar{y}_{k+1} = \frac{1}{N}(\mathbf{1}^{\top}\otimes I_d)\boldsymbol{y}_{k+1}$, the second from the definition of $\nabla_2 g$, the third from the double stochasticity of $W_k$ (Assumption \ref{ass-graph}(a)), which gives $\frac{1}{N}(\mathbf{1}^{\top}\otimes I_{d})(W_{k}\otimes I_{d})\boldsymbol{y}_{k}=\frac{1}{N}(\mathbf{1}^{\top}\otimes I_{d})\boldsymbol{y}_{k}=\bar{y}_{k}$, and the last from the induction hypothesis. \hfill $\blacksquare$
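The two invariants can also be checked empirically. The Python sketch below uses toy choices of ours ($\phi_i(t)=t^2$, $\nabla_{v_i} g_i(t,v)=t+v$, complete-graph averaging, and arbitrary random strategy updates) and confirms that both averages are preserved exactly, up to floating-point error, at every iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
W = np.full((N, N), 1.0 / N)            # complete-graph averaging, doubly stochastic

phi = lambda t: t ** 2                  # illustrative aggregate map
grad2 = lambda t, v: t + v              # illustrative nabla_{v_i} g_i

x = rng.normal(size=N)
v = phi(x)                              # v_{i,0} = phi_i(x_{i,0})
y = grad2(x, v)                         # y_{i,0} = nabla_{v_i} g_i(x_{i,0}, v_{i,0})

max_err = 0.0
for _ in range(50):
    x_new = x + 0.1 * rng.normal(size=N)               # arbitrary strategy update
    v_hat, y_hat = W @ v, W @ y                        # consensus
    v_new = v_hat + phi(x_new) - phi(x)                # aggregate tracking
    y_new = y_hat + grad2(x_new, v_new) - grad2(x, v)  # gradient tracking
    x, v, y = x_new, v_new, y_new
    max_err = max(max_err,
                  abs(v.mean() - phi(x).mean()),
                  abs(y.mean() - grad2(x, v).mean()))
```

Note that the invariants hold regardless of how $x$ is updated; only the double stochasticity of $W$ and the correction terms in the tracking updates matter.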
Then, we establish a bound on the consensus error $\|\delta(\boldsymbol{x}_k) -\hat{v}_{i,k+1}\|$ of the aggregate. The proof is similar to that of \cite{koshal2016distributed}; we state it here, adapted to Algorithm \ref{alg1}, to keep the paper self-contained.
For each $i\in \mathcal{N},$ we define
\begin{align}
M_i & \triangleq \max_{x_i \in X_i} \|\phi_{i}(x_i)\|,~ M_H\triangleq \sum_{j=1}^N M_j, \label{def-bdst} \\
& {\rho}_i:=\max_{\theta,\theta^{\prime}\in X_i}\|\theta-\theta^{\prime}\|, \quad \rho \triangleq \max_{i\in \mathcal{N}}\rho_{i} .\label{x_bound}
\end{align}
\begin{proposition} \label{prop_agg} Consider Algorithm \ref{alg1}. Let Assumptions \ref{ass-payoff}--\ref{ass-graph} hold. Then
\begin{equation}\label{agg-bd0}
\|\delta(\boldsymbol{x}_k) -\hat{v}_{i,k+1}\|\leq \theta M_H \beta^{k} +\theta N \rho l_3 \sum_{s=1}^k \beta^{k-s} \gamma_{s-1},
\end{equation}
where the constants $\theta, \beta$ are defined in \eqref{geometric}.
\end{proposition}
\noindent {\bf Proof.} From \eqref{average_x} it follows that
\begin{align}\label{equi} \sum_{i=1}^N v_{i,k} =\sum_{i=1}^N \phi_i (x_{i,k}) = N\delta(\boldsymbol{x}_k),\quad \forall k\geq 0.
\end{align}
Akin to \cite[Eqn. (16)]{koshal2016distributed}, we give an upper bound on
$\left \|{ \delta(\boldsymbol{x}_k)} -\hat{v}_{i,k+1} \right \|.$
By combining \eqref{alg-average} with \eqref{alg-consensus}, we have
\begin{align*} v_{i,k+1} &= \sum_{j=1}^N[ \Phi (k,0)]_{ij} v_{j,0 }
+ \phi_{i}(x_{i,k+1})- \phi_{i}(x_{i,k})
+\sum_{s=1}^{k } \sum_{j=1}^N [ \Phi (k,s)]_{ij}( \phi_{j}(x_{j,s})- \phi_{j}(x_{j,s-1}) ) .
\end{align*}
Then by \eqref{alg-consensus}, we have
$$ \hat{v}_{i,k+1 }
=\sum_{j=1}^N[ \Phi (k,0)]_{ij} v_{j,0 } +\sum_{s=1}^{k } \sum_{j=1}^N [ \Phi (k,s)]_{ij}( \phi_{j}(x_{j,s})- \phi_{j}(x_{j,s-1}) ) . $$
By using \eqref{equi}, we have that
\begin{align*}
{ \delta(\boldsymbol{x}_k)}= {\sum_{j=1}^N v_{j,0}\over N}+ \sum_{s=1}^k \sum_{j=1}^N {1\over N}( \phi_{j}(x_{j,s})- \phi_{j}(x_{j,s-1}) ) .\end{align*}
Therefore, we obtain the following bound.
\begin{align*}
\left \|{ \delta(\boldsymbol{x}_k)} -\hat{v}_{i,k+1} \right \| &\leq \sum_{j=1}^N \left| {1\over N}-[\Phi(k,0)]_{ij}\right| \|v_{j,0}\| \notag\\
&+\sum_{s=1}^k \sum_{j=1}^N
\left| {1\over N}-[\Phi(k,s)]_{ij}\right| \big\| \phi_{j}(x_{j,s})- \phi_{j}(x_{j,s-1}) \big \|.
\end{align*}
Then by using \eqref{geometric}, $v_{j,0}= \phi_{j}(x_{j,0}) $, and \eqref{def-bdst}, we obtain that
\begin{equation}\label{bd-consensus1}
\begin{split}
\left \|{ \delta(\boldsymbol{x}_k)} -\hat{v}_{i,k+1} \right \| \leq \theta \beta^{k}\sum_{j=1}^N \| \phi_{j}(x_{j,0}) \| + \theta \sum_{s=1}^k \beta^{k-s} \sum_{j=1}^N \big\| \phi_{j}(x_{j,s})- \phi_{j}(x_{j,s-1}) \big \| .
\end{split} \end{equation}
Note that for any $s \ge 1$ and each $i\in \mathcal{N}$,
\begin{align}\label{phi}
\| \phi_{i}(x_{i,s})- \phi_{i}(x_{i,s-1})\| \leq l_3\| x_{i,s}- x_{i,s-1}\| = l_3 \gamma_{s-1}\| s_{i,s-1}- x_{i,s-1}\| \leq l_3 \gamma_{s-1}\rho,
\end{align}
where the first inequality follows from Assumption \ref{ass-gradient}(d) (the $l_3$-Lipschitz continuity of $\phi_{i}$), the equality follows from \eqref{alg-strategy0}, and the last inequality follows since the constraint sets are convex and bounded, cf. \eqref{x_bound}. Combining \eqref{phi}, \eqref{bd-consensus1} and \eqref{def-bdst} proves \eqref{agg-bd0}.
\hfill $\blacksquare$
Next, we establish a bound on the consensus error $\|\hat{\boldsymbol{y}}_{k+1} -\mathbf{1} \otimes \bar{y}_{k} \| $ of the gradient tracking step, where $\bar{y}_{k}$ is defined by Lemma \ref{lem_ave}.
\begin{proposition} \label{Pro_gra}
Consider Algorithm \ref{alg1}. Suppose that Assumptions \ref{ass-payoff}--\ref{ass-graph} hold. Define $C_k \triangleq \max_{i\in \mathcal{N}}\left \|{ \delta(\boldsymbol{x}_k)} -\hat{v}_{i,k+1} \right\|$. Then the following holds for all $k\in \mathbb{N}$:
\begin{align}\label{average_gra}
\|\hat{\boldsymbol{y}}_{k+1} -\mathbf{1} \otimes \bar{y}_{k} \| \leq & \theta \beta^{k}\|\boldsymbol{y}_{0} \| + \sum_{s=1}^{k}\theta\beta^{k-s}(l_2\bar{\rho } + l_2l_3\sqrt{N}\rho) \gamma_{s-1} \notag \\
& + \sum_{s=1}^{k} l_2 l_3 (\sqrt{N}+1)\rho\theta\beta^{k-s} \gamma_{s-2}+\sum_{s=1}^{k}l_2\sqrt{N}\theta\beta^{k-s}(C_{s-1} +C_{s-2}),
\end{align}
where $\theta, \beta$ are defined in \eqref{geometric} and $\bar{\rho }$ is the diameter of the convex set $X$ given in Assumption \ref{ass-payoff}.
\end{proposition}
\noindent {\bf Proof.}
By recursively expanding \eqref{tr_gra}, we obtain
\begin{align}
\boldsymbol{y}_{k+1} &= W_{d,k}(W_{d,k-1}\boldsymbol{y}_{k-1} + \nabla_2 g(\boldsymbol{x}_{k},\boldsymbol{v}_{k}) - \nabla_2 g(\boldsymbol{x}_{k-1},\boldsymbol{v}_{k-1})) + \nabla_2 g(\boldsymbol{x}_{k+1},\boldsymbol{v}_{k+1}) - \nabla_2 g(\boldsymbol{x}_{k},\boldsymbol{v}_{k}) \notag \\
& =\dots \notag \\
& = (\Phi(k,0) \otimes I_d) \boldsymbol{y}_{0} + \sum_{s=1}^{k}(\Phi(k,s)\otimes I_d)(\nabla_2 g(\boldsymbol{x}_{s},\boldsymbol{v}_{s}) - \nabla_2 g(\boldsymbol{x}_{s-1},\boldsymbol{v}_{s-1})) \notag\\
& + \nabla_2 g(\boldsymbol{x}_{k+1},\boldsymbol{v}_{k+1}) - \nabla_2 g(\boldsymbol{x}_{k},\boldsymbol{v}_{k}). \label{y_inter}
\end{align}
By rearranging \eqref{tr_gra}, we can write $ W_{d,k} \boldsymbol{y}_{k} =\boldsymbol{y}_{k+1} - \nabla_2 g(\boldsymbol{x}_{k+1},\boldsymbol{v}_{k+1}) + \nabla_2 g(\boldsymbol{x}_{k},\boldsymbol{v}_{k})$. Then by exploiting the expansion in \eqref{y_inter}, we have
\begin{align}\label{y_estimate}
W_{d,k} \boldsymbol{y}_{k} = (\Phi(k,0) \otimes I_d) \boldsymbol{y}_{0} + \sum_{s=1}^{k}(\Phi(k,s) \otimes I_d)(\nabla_2 g(\boldsymbol{x}_{s},\boldsymbol{v}_{s}) - \nabla_2 g(\boldsymbol{x}_{s-1},\boldsymbol{v}_{s-1})).
\end{align}
By equation \eqref{average_y}, we have $\bar{y}_{s} = \frac{1}{N}(\mathbf{1}^{\top}\otimes I_d)(\nabla_2 g(\boldsymbol{x}_{s},\boldsymbol{v}_{s})),\forall s\geq 0,$ which leads to:
\begin{align}\label{y_average}
\bar{y}_{k} = \frac{1}{N}(\mathbf{1}^{\top}\otimes I_d)\boldsymbol{y}_0 + \sum_{s=1}^{k}\frac{1}{N}(\mathbf{1}^{\top}\otimes I_d)(\nabla_2 g(\boldsymbol{x}_{s},\boldsymbol{v}_{s})-\nabla_2 g(\boldsymbol{x}_{s-1},\boldsymbol{v}_{s-1})).
\end{align}
From \eqref{y_estimate} and \eqref{y_average}, we have the following:
\begin{align}\label{bd-y1}
&\|W_{d,k}\boldsymbol{y}_{k} -\mathbf{1}\otimes \bar{y}_{k} \| \notag \\ &=\|(\Phi(k,0)-\frac{1}{N}\mathbf{1}\mathbf{1}^{\top})\boldsymbol{y}_{0}+\sum_{s=1}^{k}(\Phi(k,s)-\frac{1}{N}\mathbf{1}\mathbf{1}^{\top})(\nabla_2 g(\boldsymbol{x}_{s},\boldsymbol{v}_{s}) - \nabla_2 g(\boldsymbol{x}_{s-1},\boldsymbol{v}_{s-1}))\| \notag \\
& \leq \|\Phi(k,0)-\frac{1}{N}\mathbf{1}\mathbf{1}^{\top} \| \|\boldsymbol{y}_{0} \|+ \sum_{s=1}^{k}\|\Phi(k,s)-\frac{1}{N}\mathbf{1}\mathbf{1}^{\top}\| \|\nabla_2 g(\boldsymbol{x}_{s},\boldsymbol{v}_{s}) - \nabla_2 g(\boldsymbol{x}_{s-1},\boldsymbol{v}_{s-1})\| \\
& \leq \theta \beta^{k}\|\boldsymbol{y}_{0} \| + \sum_{s=1}^{k}\theta\beta^{k-s}\|\nabla_2 g(\boldsymbol{x}_{s},\boldsymbol{v}_{s}) - \nabla_2 g(\boldsymbol{x}_{s-1},\boldsymbol{v}_{s-1})\| ,\notag
\end{align}
where the first inequality follows from the triangle inequality and properties of the Kronecker product, and the second inequality follows from Lemma \ref{lemma_graph}.
Next, we find an upper bound for $\|\nabla_2 g(\boldsymbol{x}_{s},\boldsymbol{v}_{s}) - \nabla_2g(\boldsymbol{x}_{s-1},\boldsymbol{v}_{s-1})\|$. Note by \eqref{phi} that
\begin{align*}
& \|\phi(\boldsymbol{x}_{s})-\phi(\boldsymbol{x}_{s-1})\| =\sqrt{\sum_{i=1}^N \| \phi_{i}(x_{i,s})- \phi_{i}(x_{i,s-1})\|^2} \leq \sqrt{N} l_3\rho \gamma_{s-1},
\\& \|\delta(\boldsymbol{x}_{s-1})-\delta(\boldsymbol{x}_{s-2})\|= \frac{1}{N}\left \|\sum_{i=1}^{N}
\big(\phi_{i}(x_{i,s-1})- \phi_{i}(x_{i,s-2})\big) \right \| \leq l_3 \rho\gamma_{s-2}.
\end{align*}
Then by recalling that $\bar{\rho}:=\max_{\theta,\theta^{\prime}\in X}\|\theta-\theta^{\prime}\|_{2}$, and using \eqref{fw_update} and \eqref{tr_con}, we have:
\begin{align}\label{bd-g}
&\|\nabla_2 g(\boldsymbol{x}_{s},\boldsymbol{v}_{s}) - \nabla_2g(\boldsymbol{x}_{s-1},\boldsymbol{v}_{s-1})\| \leq l_2\|\boldsymbol{x}_{s}-\boldsymbol{x}_{s-1}\|+l_2\|\boldsymbol{v}_{s}-\boldsymbol{v}_{s-1} \| \notag\\
&= l_2 \| \gamma_{s-1}\boldsymbol{s}_{s-1}-\gamma_{s-1}\boldsymbol{x}_{s-1}\| \notag\\
&+ l_2\|\hat{\boldsymbol{v}}_{s} + \phi(\boldsymbol{x}_{s}) - \phi(\boldsymbol{x}_{s-1})-\mathbf{1}\otimes \delta(\boldsymbol{x}_{s-1})+\mathbf{1}\otimes \delta(\boldsymbol{x}_{s-1})-\mathbf{1}\otimes \delta(\boldsymbol{x}_{s-2})\notag \\
&+\mathbf{1}\otimes \delta(\boldsymbol{x}_{s-2})-(\hat{\boldsymbol{v}}_{s-1} + \phi(\boldsymbol{x}_{s-1}) - \phi(\boldsymbol{x}_{s-2})) \|\notag \\
& \leq l_2 \bar{\rho } \gamma_{s-1} + l_2\|\hat{\boldsymbol{v}}_{s} - \mathbf{1}\otimes\delta(\boldsymbol{x}_{s-1})\|+ l_2\|\phi(\boldsymbol{x}_{s})-\phi(\boldsymbol{x}_{s-1})\| + l_2\|\delta(\boldsymbol{x}_{s-1})-\delta(\boldsymbol{x}_{s-2})\|\notag\\
& + l_2\| \mathbf{1}\otimes\delta(\boldsymbol{x}_{s-2})-\hat{\boldsymbol{v}}_{s-1}\| +l_2\|\phi(\boldsymbol{x}_{s-1})-\phi(\boldsymbol{x}_{s-2})\|\notag\\
& \leq l_2\bar{\rho}\gamma_{s-1} + l_2\sqrt{N} C_{s-1} +l_2l_3\sqrt{N}\rho\gamma_{s-1} +l_2l_3\rho\gamma_{s-2} +l_2\sqrt{N} C_{s-2}+ l_2l_3\sqrt{N}\rho\gamma_{s-2} \notag \\
& = l_2\bar{\rho}\gamma_{s-1} +l_2l_3\sqrt{N}\rho\gamma_{s-1} + l_2l_3(\sqrt{N}+1)\rho\gamma_{s-2} + l_2\sqrt{N} C_{s-1} +l_2\sqrt{N} C_{s-2},
\end{align}
where the first inequality follows from Assumption \ref{ass-gradient}(b) (the $l_2$-Lipschitz continuity of $\nabla_2 g(\boldsymbol{x},\boldsymbol{z})$), and the penultimate inequality holds by the definition $C_k = \max_{i\in \mathcal{N}}\left \|{ \delta(\boldsymbol{x}_k)} -\hat{v}_{i,k+1} \right\|$ together with \eqref{phi}. Finally, by substituting \eqref{bd-g} into \eqref{bd-y1} we derive \eqref{average_gra}. \hfill $\blacksquare$
Now, we state a convergence property of Algorithm \ref{alg1}.
\begin{proposition} \label{prop_FW} Consider Algorithm \ref{alg1}. Let Assumptions \ref{ass-payoff}-\ref{ass-graph} hold. Define $h_{k}=f(\boldsymbol{x}_{k})-f(\boldsymbol{x}^{\star})$ and $c^{\prime}=\max_{i\in \mathcal{N}}c_{i}$. Then
\begin{equation}\label{convergence-pre}
h_{k+1}\leq (1-\gamma_{k})h_k + \frac{L}{2}\gamma_{k}^{2}\bar{\rho}^{2} +{\varepsilon_{1,k}}\gamma_{k} +{\varepsilon_{2,k}}\gamma_{k},
\end{equation}
where
\begin{align}
\varepsilon_{1,k} &= \bar{\rho}l_1\sqrt{N} \Big(\theta M_H \beta^{k} + \theta N \rho l_3 \sum_{s=1}^k \beta^{k-s} \gamma_{s-1}\Big) ,\label{def-varek1} \\
\varepsilon_{2,k} &= \bar{\rho} c^{\prime}\big(\theta \beta^{k}\|\boldsymbol{y}_{0} \| + \sum_{s=1}^{k}\theta\beta^{k-s}(l_2\bar{\rho } + l_2l_3\sqrt{N}\rho) \gamma_{s-1} + \sum_{s=1}^{k}l_2l_3(\sqrt{N}+1) \rho\theta\beta^{k-s} \gamma_{s-2}\notag \\
& +\sum_{s=1}^{k}l_2\sqrt{N}\theta\beta^{k-s}(C_{s-1} +C_{s-2}) +\sqrt{N}l_2 C_{k-1}+2\sqrt{N}l_2l_3 \rho \gamma_{k-1}\big) . \label{def-varek2}
\end{align}
\end{proposition}
\noindent{\bf{Proof:}} Note by \eqref{fw_update} that
$\boldsymbol{x}_{k+1}-\boldsymbol{x}_{k} = \gamma_{k}(\boldsymbol{s}_{k}-\boldsymbol{x}_{k} )$.
Then from the L-smoothness of $f$ and the boundedness of $X$, we have
\begin{align} \label{pro_main}
f(\boldsymbol{x}_{k+1}) &\leq f(\boldsymbol{x}_{k}) +\langle \nabla f(\boldsymbol{x}_{k}),\boldsymbol{x}_{k+1}-\boldsymbol{x}_{k} \rangle +\frac{L}{2}\|\boldsymbol{x}_{k+1}-\boldsymbol{x}_{k}\|^{2} \notag \\
&\leq f(\boldsymbol{x}_{k}) +\gamma_{k}\langle \nabla f(\boldsymbol{x}_{k}),\boldsymbol{s}_{k}-\boldsymbol{x}_{k} \rangle +\frac{L}{2}\gamma_{k}^{2}\bar{\rho}^{2},
\end{align}
where $\bar{\rho}:=\max_{\theta,\theta^{\prime}\in X}\|\theta-\theta^{\prime}\|_{2}$ is the diameter of $X$.
Note that
\begin{align}
\nabla f(\boldsymbol{x}_{k}) = \nabla_{1}g(\boldsymbol{x}_{k},\boldsymbol{\delta}(\boldsymbol{x}_{k}))+\nabla\phi(\boldsymbol{x}_{k})\mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_2g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k})),
\end{align}
where $\boldsymbol{\delta}(\boldsymbol{x}_{k}) \triangleq \mathbf{1}\otimes \delta(\boldsymbol{x}_{k}) $. Thus, we see that for any $\boldsymbol{s}\in X$,
\begin{align*}
\langle \nabla f(\boldsymbol{x}_{k}),\boldsymbol{s} \rangle &= \langle \nabla_{1}g(\boldsymbol{x}_{k},\hat{\boldsymbol{v}}_{k+1})+ \nabla\phi(\boldsymbol{x}_{k})\hat{\boldsymbol{y}}_{k+1}, \boldsymbol{s} \rangle +\langle \nabla_{1}g(\boldsymbol{x}_{k},\boldsymbol{\delta}(\boldsymbol{x}_{k})) - \nabla_{1}g(\boldsymbol{x}_{k},\hat{\boldsymbol{v}}_{k+1}), \boldsymbol{s} \rangle \notag \\
&+ \left \langle \nabla\phi(\boldsymbol{x}_{k})\mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_2g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k})) - \nabla\phi(\boldsymbol{x}_{k})\hat{\boldsymbol{y}}_{k+1}, \boldsymbol{s} \right \rangle.
\end{align*}
Thus, we have
\begin{align}\label{err_k}
\langle \nabla f(\boldsymbol{x}_{k}),\boldsymbol{s}_{k}-\boldsymbol{x}^{\star} \rangle &= \langle \nabla_{1}g(\boldsymbol{x}_{k},\hat{\boldsymbol{v}}_{k+1})+ \nabla\phi(\boldsymbol{x}_{k})\hat{\boldsymbol{y}}_{k+1}, \boldsymbol{s}_{k}-\boldsymbol{x}^{\star}\rangle \notag\\
& +\langle \nabla_{1}g(\boldsymbol{x}_{k},\boldsymbol{\delta}(\boldsymbol{x}_{k})) - \nabla_{1}g(\boldsymbol{x}_{k},\hat{\boldsymbol{v}}_{k+1}), \boldsymbol{s}_{k}-\boldsymbol{x}^{\star}\rangle \notag \\
&+ \left \langle \nabla\phi(\boldsymbol{x}_{k})\mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_2g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k})) - \nabla\phi(\boldsymbol{x}_{k})\hat{\boldsymbol{y}}_{k+1}, \boldsymbol{s}_{k}-\boldsymbol{x}^{\star} \right \rangle \notag \\
&\leq \langle \nabla_{1}g(\boldsymbol{x}_{k},\boldsymbol{\delta}(\boldsymbol{x}_{k})) - \nabla_{1}g(\boldsymbol{x}_{k},\hat{\boldsymbol{v}}_{k+1}), \boldsymbol{s}_{k}-\boldsymbol{x}^{\star}\rangle \notag \\
&+ \left \langle \nabla\phi(\boldsymbol{x}_{k})\mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_2g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k})) - \nabla\phi(\boldsymbol{x}_{k})\hat{\boldsymbol{y}}_{k+1}, \boldsymbol{s}_{k}-\boldsymbol{x}^{\star} \right \rangle
\\& \leq \bar{\rho}\|\nabla_{1} g(\boldsymbol{x}_{k},\boldsymbol{\delta}(\boldsymbol{x}_{k}))-\nabla_{1} g(\boldsymbol{x}_{k},\hat{\boldsymbol{v}}_{k+1})\| \notag \\
&+\bar{\rho} \left \|\nabla \phi(\boldsymbol{x}_k)\mathbf{1}_{N}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_k))-\nabla \phi(\boldsymbol{x}_k)\hat{\boldsymbol{y}}_{k+1} \right \| \notag,
\end{align}
where the first inequality holds since $\langle \nabla_{1}g(\boldsymbol{x}_{k},\hat{\boldsymbol{v}}_{k+1})+ \nabla\phi(\boldsymbol{x}_{k})\hat{\boldsymbol{y}}_{k+1}, \boldsymbol{s}_{k}-\boldsymbol{x}^{\star}\rangle \leq 0$ by the optimality of $\boldsymbol{s}_{k}$ in \eqref{fw_gl}, and the last inequality uses the Cauchy-Schwarz inequality together with the boundedness of $X$ in Assumption \ref{ass-payoff}(a).
Then by adding $\langle \nabla f(\boldsymbol{x}_{k}),\boldsymbol{x}^{\star}-\boldsymbol{x}_{k} \rangle$ to both sides of \eqref{err_k} and using the triangle inequality, we have
\begin{align}\label{h_k}
\langle \nabla f(\boldsymbol{x}_{k}),\boldsymbol{s}_{k}-\boldsymbol{x}_{k} \rangle
& \leq \langle \nabla f(\boldsymbol{x}_{k}),\boldsymbol{x}^{\star}-\boldsymbol{x}_{k} \rangle +\bar{\rho}\|\nabla_{1} g(\boldsymbol{x}_{k},\boldsymbol{\delta}(\boldsymbol{x}_{k}))-\nabla_{1} g(\boldsymbol{x}_{k},\hat{\boldsymbol{v}}_{k+1})\| \notag \\
&+\bar{\rho} \| \nabla \phi(\boldsymbol{x}_k) \| \left \| \mathbf{1}_{N}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_k))- \hat{\boldsymbol{y}}_{k+1} \right \| \notag \\
&\leq \langle \nabla f(\boldsymbol{x}_{k}),\boldsymbol{x}^{\star}-\boldsymbol{x}_{k} \rangle +\bar{\rho}l_{1}\|\boldsymbol{\delta}(\boldsymbol{x}_{k})-\hat{\boldsymbol{v}}_{k+1}\| \notag \\ &+\bar{\rho}c^{\prime}\left \|\mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_k))
-\hat{\boldsymbol{y}}_{k+1}\right \|,
\end{align}
where the last inequality has applied Assumption \ref{ass-gradient}(a), Assumption \ref{ass-gradient}(c), and $c^{\prime}=\max_{i\in \mathcal{N}}c_{i}$.
Next, we bound $\|\mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k}))-\hat{\boldsymbol{y}}_{k+1}\| $. By adding and subtracting $\mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},v_{i,k})$, we have
\begin{align*}
& \left \|\hat{\boldsymbol{y}}_{k+1} - \mathbf{1} \otimes \frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k})) \right \|
\notag \\
& =\left \|\hat{\boldsymbol{y}}_{k+1} -\mathbf{1} \otimes \frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},v_{i,k}) +\mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},v_{i,k})- \mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k})) \right \| \notag \\
& {\eqref{average_y} \atop \leq} \left \|\hat{\boldsymbol{y}}_{k+1} -\mathbf{1} \otimes \bar{y}_{k} \right \| + \left \|\mathbf{1}\otimes \frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},v_{i,k})- \mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_k)) \right \| \notag \\
&\leq \|\hat{\boldsymbol{y}}_{k+1} -\mathbf{1} \otimes \bar{y}_{k} \| + \left \|\mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},v_{i,k})-
\mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k-1}))\right\| \notag \\
& +\left \|\mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k-1}))- \mathbf{1}\otimes\frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k})) \right \| \notag .
\end{align*}
Then by using the triangle inequality and Assumption \ref{ass-gradient}, we obtain that
\begin{align}
&\left \|\hat{\boldsymbol{y}}_{k+1} - \mathbf{1} \otimes \frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k})) \right \| \notag\\
& \leq \|\hat{\boldsymbol{y}}_{k+1} -\mathbf{1} \otimes \bar{y}_{k} \| + \frac{1}{\sqrt{N}}\sum_{i=1}^{N}\|\nabla_{2}g_{i}(x_{i,k},v_{i,k})-\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k-1}))\| \notag \\
&+\frac{1}{\sqrt{N} }\sum_{i=1}^{N} \|\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k}))-\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_{k-1}))\|\notag\\
&\leq \|\hat{\boldsymbol{y}}_{k+1} -\mathbf{1} \otimes \bar{y}_{k} \|+\frac{1}{\sqrt{N}}\sum_{i=1}^{N}l_2\|\delta(\boldsymbol{x}_{k-1}) -v_{i,k} \| + \sqrt{N}l_2\|\delta(\boldsymbol{x}_{k})-\delta(\boldsymbol{x}_{k-1})\| \notag
\\&{\eqref{alg-average} \atop =} \|\hat{\boldsymbol{y}}_{k+1} -\mathbf{1} \otimes \bar{y}_{k} \|+\sqrt{N}l_2\|\delta(\boldsymbol{x}_{k})-\delta(\boldsymbol{x}_{k-1})\|\notag\\
&+\frac{1}{\sqrt{N}}\sum_{i=1}^{N}l_2\|\delta(\boldsymbol{x}_{k-1}) -\hat{v}_{i,k} + \phi_i(x_{i,k-1}) -\phi_i(x_{i,k})\| \notag \\
&\leq \|\hat{\boldsymbol{y}}_{k+1} -\mathbf{1} \otimes \bar{y}_{k} \|+\sqrt{N}l_2 C_{k-1} +\sqrt{N}l_2 l_3\rho\gamma_{k-1} +\sqrt{N}l_2l_3\rho \gamma_{k-1},
\end{align}
where the last inequality holds by the definition $C_k = \max_{i\in \mathcal{N}}\left \|{ \delta(\boldsymbol{x}_k)} -\hat{v}_{i,k+1} \right\|$ and \eqref{phi}. Then by applying Proposition \ref{Pro_gra} and recalling the definition of
$\varepsilon_{2,k}$ in \eqref{def-varek2}, we have
\begin{align}\label{grad_error}
\bar{\rho}c^{\prime}\|\hat{\boldsymbol{y}}_{k+1} - \mathbf{1} \otimes \frac{1}{N}\sum_{i=1}^{N}\nabla_{2}g_{i}(x_{i,k},\delta(\boldsymbol{x}_k))\| \leq \varepsilon_{2,k}.
\end{align}
Note by Proposition \ref{prop_agg} that
\begin{align}\label{N_agg_error}
\bar{\rho}l_1 \|\boldsymbol{\delta}(\boldsymbol{x}_{k})-\hat{\boldsymbol{v}}_{k+1}\| \leq \bar{\rho}l_1 \sqrt{N}\Big(\theta M_H \beta^{k} + \theta N \rho l_3 \sum_{s=1}^k \beta^{k-s} \gamma_{s-1}\Big){\eqref{def-varek1}\atop =}\varepsilon_{1,k}.
\end{align}
Then by substituting \eqref{grad_error} and \eqref{N_agg_error} into \eqref{h_k}, we derive
\begin{align*}
\langle \nabla f(\boldsymbol{x}_{k}),\boldsymbol{s}_{k}-\boldsymbol{x}_{k} \rangle \leq \langle \nabla f(\boldsymbol{x}_{k}),\boldsymbol{x}^{\star}-\boldsymbol{x}_{k} \rangle +\varepsilon_{1,k}+ \varepsilon_{2,k} .\end{align*}
Therefore, by subtracting $f(\boldsymbol{x}^{\star})$ from both sides in the inequality \eqref{pro_main}, and using the above inequality, we obtain that
\begin{align} \label{conv_ineq}
h_{k+1} \leq h_{k} +\gamma_{k}\langle\boldsymbol{x}^{\star}-\boldsymbol{x}_{k}, \nabla f(\boldsymbol{x}_{k}) \rangle + \frac{L}{2}\gamma_{k}^{2}\bar{\rho}^{2} +{\varepsilon_{1,k}}\gamma_{k} +{\varepsilon_{2,k}}\gamma_{k}.
\end{align}
Since $f$ is convex, we observe $
\langle\boldsymbol{x}^{\star}-\boldsymbol{x}_{k}, \nabla f(\boldsymbol{x}_{k}) \rangle \leq -h_k.$
This incorporating with \eqref{conv_ineq} proves the proposition. \hfill $\blacksquare$
Next, we recall the following convergence result for a recursive linear inequality.
\begin{proposition}\label{sequence} \cite[Lemma 3]{polyak1987introduction} Let the scalar sequence $\left\{u_{k}\right\}$ satisfy $u_{k+1} \leq q_{k} u_{k} + \nu_{k}$, where $0 \leq q_{k} \leq 1$ and $\nu_{k} \geq 0$.
Suppose that
\begin{align}\label{sequence_polyak}
\sum_{k=0}^{\infty}\left(1-q_{k}\right)=\infty, \quad \nu_{k} /\left(1-q_{k}\right) \rightarrow 0.
\end{align}
Then $\lim _{k \rightarrow \infty} u_{k} \leq 0$. In particular, if $u_{k} \geq 0$, then $u_{k} \rightarrow 0$.
\end{proposition}
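As a quick numerical illustration of Proposition \ref{sequence} (our sketch, not part of the original argument), take the worst case of equality with $q_k = 1-1/(k+1)$ and $\nu_k = 1/(k+1)^2$, so that $\sum_k(1-q_k)=\infty$ and $\nu_k/(1-q_k)=1/(k+1)\to 0$:

```python
# Worst-case (equality) run of the recursion u_{k+1} <= q_k u_k + nu_k
# with q_k = 1 - 1/(k+1) and nu_k = 1/(k+1)^2: both conditions of the
# proposition hold, so u_k should tend to 0.
def run_recursion(u0: float, n_steps: int) -> float:
    u = u0
    for k in range(n_steps):
        q_k = 1.0 - 1.0 / (k + 1)
        nu_k = 1.0 / (k + 1) ** 2
        u = q_k * u + nu_k
    return u

u_final = run_recursion(u0=10.0, n_steps=100_000)  # decays like log(k)/k
```

In this example one can check by induction that $u_k$ equals the $k$-th harmonic number divided by $k$, i.e., $u_k \sim \ln k / k \to 0$, in agreement with the proposition.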
\begin{lemma}\label{infty} \cite[Lemma 3]{sundhar2010distributed}
Let $\left(\gamma_{k}\right)_{k \in \mathbb{N}}$ be a scalar sequence.\\
a) If $\lim _{k \rightarrow \infty} \gamma_{k}=\gamma$ and $0<\tau<1$, then $\lim _{k \rightarrow \infty}$ $\sum_{\ell=0}^{k} \tau^{k-\ell} \gamma_{\ell}=\gamma /(1-\tau)$;\\
b) If $\gamma_{k} \geq 0$ for all $k, \sum_{k=0}^{\infty} \gamma_{k}<\infty$ and $0<\tau<1$, then $\sum_{k=0}^{\infty} \sum_{\ell=0}^{k} \tau^{k-\ell} \gamma_{\ell}<\infty .$
\end{lemma}
Based on Propositions \ref{prop_agg}, \ref{Pro_gra}, \ref{prop_FW} and \ref{sequence}, we proceed to prove that
the iterates generated by the proposed Algorithm \ref{alg1} converge to the optimal point.
\noindent{\bf{Proof of Theorem 1:}} Set $q_k = 1- \gamma_{k}$ and $\nu_k = \frac{L}{2}\gamma_{k}^{2}\bar{\rho}^{2} +\varepsilon_{1,k}\gamma_{k} + \varepsilon_{2,k}\gamma_{k}$. We will apply Proposition \ref{sequence} to \eqref{convergence-pre} to prove Theorem 1.
Firstly, we prove that \[\lim_{k\rightarrow \infty} \varepsilon_{1,k} + \varepsilon_{2,k} = 0.\]
Since $\lim_{k\rightarrow \infty}\gamma_{k} = 0$ by Assumption \ref{rk} and $0 < \beta < 1 $
by Lemma \ref{lemma_graph}, we derive $\lim_{k\rightarrow \infty}\sum_{s=1}^k \beta^{k-s}\gamma_{s-1} =0 $ and $\lim_{k\rightarrow \infty}\sum_{s=1}^k \beta^{k-s}\gamma_{s-2} =0 $ by Lemma \ref{infty}(a). Thus,
\begin{align}
\lim_{k\rightarrow \infty}\varepsilon_{1,k} = \lim_{k\rightarrow \infty}~\bar{\rho}l_1\sqrt{N} (\theta \beta^{k-1}M_H + \theta N \rho \sum_{s=1}^k \beta^{k-s} r_{s-1}) = 0.
\end{align}
Then by definition $C_k = \max_{i\in \mathcal{N}}\left \|{ \delta(\boldsymbol{x}_k)} -\hat{v}_{i,k+1} \right\|$ and using Proposition \ref{prop_agg}, we have $\lim_{k\rightarrow \infty} C_k = 0$.
Similarly, by using Lemma \ref{infty}(a), we obtain that
\[ \lim_{k\rightarrow \infty}\sum_{s=1}^k \beta^{k-s} C_{s-1} =0 ~{\rm and} ~\lim_{k\rightarrow \infty}\sum_{s=1}^k \beta^{k-s} C_{s-2} = 0.\] Thus,
\begin{align}
\lim_{k\rightarrow \infty}\varepsilon_{2,k} &= \lim_{k\rightarrow \infty} \rho c^{\prime}\big(\theta \beta^{k}\|\boldsymbol{y}_{0} \| + \sum_{s=1}^{k}\theta\beta^{k-s}(l_2\bar{\rho } + l_2l_3\sqrt{N}\rho) \gamma_{s-1} + \sum_{s=1}^{k}l_2l_3(\sqrt{N}+1) \rho\theta\beta^{k-s} \gamma_{s-2}\notag \\
& +\sum_{s=1}^{k}l_2\sqrt{N}\theta\beta^{k-s}(C_{s-1} +C_{s-2}) +\sqrt{N}l_2 C_{k-1}+
(\sqrt{N}+1) l_2l_3 \rho \gamma_{k-1}\big) = 0.
\end{align}
From Assumption \ref{rk} it follows that $\sum_{k=0}^{\infty}(1 - q_{k}) =\sum_{k=0}^{\infty}\gamma_{k} = \infty$ and $\lim_{k\rightarrow \infty} \frac{L\bar{\rho}^2}{2}\gamma_{k} = 0$.
In summary, we have proved the condition \eqref{sequence_polyak} required by Proposition \ref{sequence}. Then by applying Proposition \ref{sequence} to \eqref{convergence-pre}, we obtain $\lim_{k\rightarrow \infty} f(\boldsymbol{x}_{k})-f(\boldsymbol{x}^{\star})=0$. \hfill $\blacksquare$
\section{Numerical Simulation}\label{sec:Num}
In this section, we demonstrate the proposed algorithm by solving an instance of problem (\ref{Nopt_agg}) with $N=5$ agents. Agent $i$'s local cost function is
\begin{align}\label{Simulation}
f_{i}\left(x_{i}, \delta(x)\right)=k_{i}\left(x_{i}-\chi_{i}\right)^{2}+P(\delta(x)) x_{i},
\end{align}
where $k_{i}$ is a constant and $\chi_{i}$ is a fixed quantity for $i=1, \cdots, N$, and $P(\delta(x))=a N \delta(x)+p_{0}$ with $\delta(x)=\frac{1}{N} \sum_{i \in \mathcal{N}} x_{i}$.
In the simulation, the constraint set is an $l_{1}$-norm ball $\Omega_i=\left\{x_i \mid\|x_i\|_{1} \leq R_i \right\}$. Then, $s_{i,k}$ in D-FWAGT admits a closed-form solution at a signed vertex of the ball: writing $e_{j^{*}}$ for the standard basis vector of a coordinate $j^{*} \in \operatorname{argmax}_{j}|[d_{i,k}]_{j}|$,
$$
s_{i,k}=\operatorname{argmin}_{s \in \Omega_i}\left\langle s, d_{i,k}\right\rangle=-R_i \operatorname{sgn}\left([d_{i,k}]_{j^{*}}\right) e_{j^{*}}
$$
with $d_{i,k} \triangleq \nabla_{1}g_{i}(x_{i,k},\hat{v}_{i,k+1})+\nabla \phi_{i}(x_{i,k})\hat{y}_{i,k+1} $ as in Algorithm \ref{alg1}.
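For illustration, a minimal sketch (ours, not the authors' code) of this $\ell_1$-ball linear-minimization step; the minimizer can always be taken at a signed vertex supported on a coordinate of maximal $|d_j|$:

```python
import numpy as np

def lmo_l1_ball(d: np.ndarray, radius: float) -> np.ndarray:
    """Linear minimization oracle over {s : ||s||_1 <= radius}:
    argmin <s, d> is attained at the signed vertex
    -radius * sgn(d[j*]) * e_{j*}, with j* a coordinate of maximal |d[j]|."""
    j = int(np.argmax(np.abs(d)))
    s = np.zeros_like(d, dtype=float)
    s[j] = -radius * np.sign(d[j])
    return s

d = np.array([0.5, -2.0, 1.0])
s = lmo_l1_ball(d, radius=5.0)  # -> [0., 5., 0.], a vertex of the ball
```

Ties in the argmax can be broken arbitrarily; any tied coordinate attains the same objective value $-R_i\max_j|d_j|$.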
The feasible sets are given by $\|x_1\|_{1} \leq 5$, $\|x_2\|_{1} \leq 7$, $\|x_3\|_{1} \leq 9$, $\|x_4\|_{1} \leq 3$, $\|x_5\|_{1} \leq 6$. We set $col(\chi_{i})_{i=1,\dots,5} = [3,5,6,1,2]^{\top}\otimes \mathbf{1}_n$, $a=0.04$, and $p_{0} =5\times\mathbf{1}_n$.
We set an undirected time-varying graph as the communication network. The graph at each iteration is randomly drawn from a set of three graphs whose union graph is connected. The weight matrix is $W=\left[w_{i j}\right]$, where $w_{i j}=\frac{1}{\max \left\{\left|\mathcal{N}_{i }\right|,\left|\mathcal{N}_{j}\right|\right\}}$ for any $i \neq j$ with $(i, j) \in \mathcal{E}$, $w_{i i}=1-\sum_{j \neq i} w_{i j}$, and $w_{i j}=0$ otherwise.
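The weight rule above can be implemented directly from an adjacency matrix; a small sketch (the 5-agent ring below is our hypothetical example, not one of the three graphs used in the paper):

```python
import numpy as np

def metropolis_weights(adj: np.ndarray) -> np.ndarray:
    """Build W with w_ij = 1/max(|N_i|, |N_j|) for edges (i, j),
    w_ii = 1 - sum_{j != i} w_ij, and w_ij = 0 otherwise."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)  # |N_i|: number of neighbours of agent i
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / max(deg[i], deg[j])
        W[i, i] = 1.0 - W[i].sum()
    return W

# A 5-agent ring graph (hypothetical example).
adj = np.zeros((5, 5), dtype=int)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1
W = metropolis_weights(adj)
```

By construction $W$ is symmetric, nonnegative, and doubly stochastic.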
\begin{figure}[htbp]
\centering
\begin{subfigure}[$\gamma_k = 1/k, \gamma_k =1/\sqrt{k}$]
{
\begin{minipage}{0.3\linewidth}
\includegraphics[width=0.9\textwidth, height=0.25\textheight]{rate_fw1.png}
\end{minipage}
}
\end{subfigure}
\centering
\begin{subfigure}[$\gamma_k = 1/k, \gamma_k =1/k^2$]
{
\begin{minipage}{0.3\linewidth}
\includegraphics[width=0.9\textwidth, height=0.25\textheight]{rate_fw.png}
\end{minipage}
}
\end{subfigure}
\caption{Evolution of $f(\boldsymbol{x}_k)$ versus the iteration $k$.}
\label{fig1}
\end{figure}
Fig.~\ref{fig1} displays the convergence of the proposed algorithm and compares the effect of the step size of the Frank-Wolfe type algorithm on convergence by considering $\gamma_k = 1/\sqrt{k}$, $\gamma_k = 1/{k}$, and $\gamma_k = 1/{k}^2$. Although the convergence is almost equally fast in the three decreasing step-size cases, a step size that decreases too quickly can leave the result of the algorithm too far from the optimal solution.
\begin{figure}[H]
\centering
\begin{subfigure}[$n=32$]
{
\begin{minipage}{0.3\linewidth}
\includegraphics[width=0.9\textwidth, height=0.25\textheight]{rate_32.png}
\end{minipage}
}
\end{subfigure}
\centering
\begin{subfigure}[$n=64$]
{
\begin{minipage}{0.3\linewidth}
\includegraphics[width=0.9\textwidth, height=0.25\textheight]{rate_64.png}
\end{minipage}
}
\end{subfigure}
\centering
\begin{subfigure}[$n=128$]
{
\begin{minipage}{0.3\linewidth}
\includegraphics[width=0.9\textwidth, height=0.25\textheight]{rate_128.png}
\end{minipage}
}
\end{subfigure}
\caption{Optimal solution errors with different dimensions $n = 32, 64, 128$.}
\label{fig2}
\end{figure}
To further demonstrate the properties of our algorithm, we compare Algorithm \ref{alg1} with the projection-based algorithm ($x_{i,k+1} =\Pi_{\Omega_i}\big(x_{i,k}-\alpha (\nabla_{1}g_{i}(x_{i,k},\hat{v}_{i,k+1})+\nabla \phi_{i}(x_{i,k})\hat{y}_{i,k+1})\big)$), and select the dimensions of the decision variables as powers of two. In Fig.~\ref{fig2}, the x-axis shows the real running time (CPU time) in seconds, while the y-axis shows the optimal solution error of each algorithm. We observe from Fig.~\ref{fig2} that as the dimension increases, the actual running time (CPU time) of the projection-based algorithm is significantly longer than that of the projection-free algorithm. The reason is that finding an extreme point on the boundary of a high-dimensional constraint set (solving a linear program) is faster than computing the projection onto it (solving a quadratic program).
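For reference, even the specialized Euclidean projection onto an $\ell_1$ ball requires a sort-and-threshold computation per step, whereas the Frank-Wolfe linear subproblem has a closed-form vertex solution. A sketch of the former via the standard sort-based method (our illustration, not the authors' QP solver):

```python
import numpy as np

def project_l1_ball(x: np.ndarray, radius: float) -> np.ndarray:
    """Euclidean projection onto {s : ||s||_1 <= radius} by
    soft-thresholding |x| with a data-dependent threshold theta."""
    if np.abs(x).sum() <= radius:
        return x.copy()
    u = np.sort(np.abs(x))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, x.size + 1)
    rho = int(np.max(ks[u - (css - radius) / ks > 0]))
    theta = (css[rho - 1] - radius) / rho
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

p = project_l1_ball(np.array([3.0, -1.0, 0.5]), radius=2.0)
```

The projection costs $O(n\log n)$ per call (and a generic QP solver considerably more), while the linear-minimization oracle is a single $O(n)$ pass, which is consistent with the timing gap reported in Tab.~\ref{tab1}.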
\begin{table}[H]
\centering
\setlength{\tabcolsep}{5mm}
\begin{tabular}{c|c|c|c|c|c}
\hline \hline
\text { dimensions } & {n}=16 & {n}=32 & {n}=64 & {n}=128 & {n}=256 \\
\hline \text { D-FWAGT }({msec}) & 0.069 & 0.077 & 0.092 & 0.126 & 0.132 \\
\hline \text { PGA (msec) } & 78.5 &95.6 & 164.6 & 242.2 & 366.3 \\
\hline \hline
\end{tabular}
\caption{The average real running time of solving one-stage subproblems.}
\label{tab1}
\end{table}
In addition, in Tab.~\ref{tab1}, we list the average actual running time for solving the single-stage subproblems, i.e., the linear or quadratic programs. When the dimension is low, the difference in the time required to solve the linear program and the quadratic program on such constraint sets may not be too great. However, as the dimension grows, solving the quadratic program becomes expensive, while the time to solve the linear program hardly varies. This is consistent with the advantage of projection-free approaches for large-scale problems.
\section{Conclusions }\label{sec:con}
This paper proposes a distributed projection-free gradient method for the aggregative optimization problem based on the Frank-Wolfe method, and shows that the proposed method converges when the cost function is convex. In addition, empirical results demonstrate that our method indeed brings speed-ups. It is of interest to explore projection-free algorithms with faster convergence rates for distributed aggregative optimization, and to extend the analysis of the Frank-Wolfe method to other classes of network optimization problems in distributed and stochastic settings.
\bibliographystyle{elsarticle-num}
\section{Introduction}
Recently, there has been considerable interest in magnetic skyrmions, \cite{skyrev} particle-like topological spin textures discovered in chiral ferromagnets with Dzyaloshinskii-Moriya interaction and in dipolar ferromagnets with uniaxial anisotropy. These magnetic nanostructures are objects with many internal degrees of freedom and can be driven by ultralow electric current densities. \cite{drive1,drive2} Experimentally, skyrmions with both odd and even azimuthal winding numbers have been observed. \cite{sky1,sky2,bisky} The latter exist in dipolar ferromagnets where a ``spin helicity'' degree of freedom allows for the formation of more complex structures. In thin film samples, magnetic skyrmions can be stabilized over a wide temperature range \cite{yu11,garel82} including near absolute zero.
In this work, we demonstrate the potential application of magnetic skyrmions to topological quantum computation (TQC). \cite{kitaev03,nayak08} We investigate a magnetic skyrmion in proximity to a conventional $s$-wave superconductor
and show that for strong exchange coupling between the itinerant electrons and the skyrmion there exists a zero-energy Majorana bound state (MBS) in the skyrmion core, if the skyrmion has an even azimuthal winding number. These MBSs exhibit non-Abelian statistics, and can be braided using superconducting tri-junctions.
There are two issues concerning TQC with MBSs in magnetic skyrmions. For a skyrmion with a single spin-flip in radial direction, the MBS localization length is comparable to the skyrmion size, leading to hybridization with gapless bulk states. This can be prevented by using skyrmions with multiple spin-flips radially, or by introducing a spin-orbit interaction (SOI), both of which stabilize the MBS.
Second, the MBS is generically accompanied by subgap localized fermionic states. Fortunately, these subgap fermions are spatially separated from the MBS due to a ``centrifugal'' force, and in addition, they respond to electric and magnetic fields, allowing further discrimination from the MBS.
In one dimension, it is known that a helical field or spin order, combined with the proximity-induced superconductivity, leads to the emergence of MBSs. \cite{gangadharaiah11L,klinovaja_stano12,kjaergaard12,klinovaja13L,braunecker13L,vazifeh13L,nadj13,pientka13,nadj14,kim2015:PRB,pawlak15}
This lies in the fact that a helical field is gauge equivalent \cite{braunecker10} to a uniform Zeeman field and a Rashba SOI, the latter two being the essential ingredients of topological phases supporting MBSs.
Qualitatively, our results can be understood through an analogy between a magnetic skyrmion and a 1D helical spin order in radial direction.
A skyrmion lattice in two dimensions was found to induce an effective $p$-wave pairing, resulting in degenerate zero-energy bound states. \cite{nakosai13} Similar pairing was deduced in a toy model of a single skyrmion in Ref.~\onlinecite{chen}, while Shiba states bound to a skyrmion were found in Ref.~\onlinecite{pershoguba}.
This paper is organized as follows. In Sec.~II, we introduce the model which captures the physics at the interface between a magnetic skyrmion and an $s$-wave superconductor. We show that under certain conditions a MBS emerges near the core of the skyrmion of an even azimuthal winding number and obtain the MBS wave function. In Sec.~III, we study the quasiparticle spectrum of the system and discuss the robustness of the MBSs. In Sec.~IV, we perform a tight-binding calculation whose results further support the findings in Secs.~II and III. In Sec.~V, we discuss the realization of non-Abelian statistics of the MBSs. We conclude in Sec.~VI.
\section{Model}
We consider a single magnetic skyrmion with a core of uniformly polarized spins. Outside the core region, the skyrmion spin varies locally in both radial and azimuthal directions until the far asymptotic region, again with uniform spin polarization. We parametrize the skyrmion spin texture as
\begin{equation}
\label{y1}
\hat{N}(\mathbf{r})=\big(\sin f(r)\cos n\theta, \sin f(r)\sin n\theta, \cos f(r)\big)
\end{equation}
in polar coordinates $\mathbf{r}=(r\cos\theta,r\sin\theta)$, where the piecewise function $f(r)$ is defined as: 0 for $r<r_0$, $\pi(r-r_0)/R$ for $r_0\leq r\leq r_0+pR$ and $\pi$ for $r>r_0+pR$, with $p$ a positive integer and $r_0\ll R$. By construction, the skyrmion core has size $2r_0$ and the skyrmion spin flips $p$ times moving radially outwards from the center. The azimuthal winding number $n$ takes integer values.
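For concreteness, the texture of Eq.~(\ref{y1}) can be evaluated numerically; a minimal sketch (ours; parameter values are illustrative, and the radial profile is clipped at its endpoint value $p\pi$ so that it is continuous for any $p$):

```python
import numpy as np

def skyrmion_spin(x, y, n=2, p=1, r0=0.1, R=1.0):
    """Evaluate N_hat(r) of Eq. (1) with the piecewise radial profile
    f(r): 0 for r < r0, pi*(r - r0)/R on [r0, r0 + p*R], constant beyond."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    f = np.clip(np.pi * (r - r0) / R, 0.0, np.pi * p)
    return np.array([np.sin(f) * np.cos(n * theta),
                     np.sin(f) * np.sin(n * theta),
                     np.cos(f)])

N_core = skyrmion_spin(0.05, 0.0)   # inside the core: spin along +z
```

For $r<r_0$ the spin points along $+\hat z$, and each increment of $f$ by $\pi$ corresponds to one radial spin flip.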
The Hamiltonian for itinerant electrons exchange coupled to the skyrmion spin texture is
\begin{equation}
\label{y2}
H_0=\int d\mathbf{r}\, \sum_{\gamma,\delta}\psi^{\dagger}_\gamma(-\frac{\nabla^2}{2m}-\mu+\alpha \hat{N}\cdot \vec{\sigma})_{\gamma\delta}\psi_\delta,
\end{equation}
where $\psi_\gamma$ is the annihilation operator of electron with spin $\gamma=\uparrow, \downarrow$, $m$ is the effective mass, $\mu$ is the chemical potential, $\alpha$ is the exchange coupling constant, and $\vec{\sigma}$ are Pauli matrices acting in spin space. (We set $\hbar=1$ throughout.)
The proximity-induced superconductivity is described by $H_S= \int d\mathbf{r} (\Delta \psi_{\uparrow}^{\dagger} \psi_{\downarrow}^{\dagger} +\textrm{H.c.})$, where $\Delta=\Delta_0 e^{i \varphi}$ is the pairing potential. The full Hamiltonian is $H=H_0+H_S=\int d \mathbf{r}\Psi^{\dagger}\mathcal{H}\Psi/2$, where
\begin{equation}
\label{y3}
\mathcal{H}=(-\frac{\nabla^2}{2m}-\mu)\tau_z+\alpha \hat{N}\cdot \vec{\sigma}+\Delta\tau_++\Delta^*\tau_-
\end{equation}
in Nambu basis $\Psi^{\dagger}=[\psi_{\uparrow}^{\dagger},\psi_{\downarrow}^{\dagger},\psi_{\downarrow},-\psi_{\uparrow}]$. The Pauli matrices $\vec{\tau}$ act in particle-hole space, and $\tau_{\pm}=(\tau_x\pm i \tau_y)/2$.
Quasiparticle excitations above the superconducting ground state are described by operators $\chi^{\dagger}=\int d \mathbf{r} \sum_{\gamma} ( u_{\gamma}\psi_{\gamma}^{\dagger}+v_{\gamma}\psi_{\gamma})$ satisfying the Bogoliubov-de Gennes (BdG) equation $\mathcal{H} \Upsilon(\mathbf{r})=E\Upsilon(\mathbf{r})$, where $\Upsilon(\mathbf{r})=[u_{\uparrow}(\mathbf{r}),u_{\downarrow}(\mathbf{r}),v_{\downarrow}(\mathbf{r}),-v_{\uparrow}(\mathbf{r})]^T$.
We look for zero-energy solutions to the BdG equation satisfying the Majorana condition $\mathcal{C}\Upsilon(\mathbf{r}) = \eta \Upsilon(\mathbf{r})$, where the particle-hole operator $\mathcal{C}=\sigma_y \tau_y K$, with $K$ the complex conjugation, and $\eta$ is some constant. The BdG equation is solved by eigenstates of the angular momentum-like operator $-i \partial_\theta + (n/2) \sigma_z$ commuting with $\mathcal{H}$,
\begin{equation}
\label{y4}
\Upsilon^l(r,\theta)=e^{i(l-\frac{n}{2}\sigma_z)\theta}e^{i\frac{1}{2}\tau_z\varphi} \Upsilon^l(r)
\end{equation}
with eigenvalues $l$, where the radial wave functions $\Upsilon^l(r)=[u_{\uparrow}^l(r),u_{\downarrow}^l(r),v_{\downarrow}^l(r),-v_{\uparrow}^l(r)]^T$. A single-valued wave function requires $l$ be an integer (half-integer) for even (odd) $n$. Under particle-hole transformation, solutions with angular momentum $l$ transform into those with $-l$. The quasiparticle spectrum is thus symmetric with respect to the $l=0$ sector, where non-degenerate zero-energy solutions must reside. The zero-mode wave function is $\Upsilon^0(r,\theta)=e^{-i\frac{n}{2}\sigma_z\theta}e^{i\frac{1}{2}\tau_z\varphi} \Upsilon^0(r)$, where $\Upsilon^0(r)$ is the kernel of the real matrix operator
\begin{equation}
\label{y5}
\mathcal{H}^l(r)=(-\frac{[\nabla^2]_r}{2m} -\mu)\tau_z +\alpha \sigma_z \cos f +\alpha \sigma_x \sin f+\Delta_0 \tau_x
\end{equation}
at $l=0$, with $[\nabla^2]_r=\partial_r^2+\frac{1}{r}\partial_r-\frac{1}{r^2}(l-\frac{n}{2}\sigma_z)^2$.
Without loss of generality, we choose $\Upsilon^0(r)$ to be real. The Majorana condition then translates to $v_{\uparrow, \downarrow}^0=\eta u_{\uparrow, \downarrow}^0$ with $\eta=\pm1$, which allows for the reduction of the radial equation $\mathcal{H}^{l=0}(r)\Upsilon^0(r)=0$ to
\begin{align}
\label{y6}
\left( \begin{array}{cc}
-\frac{[\nabla^2]_r}{2m} -\mu+\alpha \cos f & \alpha \sin f +\eta\Delta_0\\
\alpha \sin f -\eta\Delta_0 & -\frac{[\nabla^2]_r}{2m} -\mu-\alpha \cos f
\end{array} \right) \Phi=0
\end{align}
in terms of the two-spinor $\Phi(r)\equiv[u_{\uparrow}^0(r),u_{\downarrow}^0(r)]^T$. In the following, we solve Eq.~(\ref{y6}) for $n=2$, which is the simplest topologically stable spin configuration for an even $n$.
It is useful to make a rotation of Eq.~(\ref{y6}) by the unitary operator $U(r)=e^{i\frac{1}{2}\sigma_yf(r)}$, which gives
\begin{widetext}
\begin{align}
\label{y7}
\left( \begin{array}{cc}
-\frac{1}{2m} (\partial_r^2+\frac{1}{r}\partial_r-\frac{1}{r^2})-\tilde{\mu}+\alpha & i\frac{f'}{2m} \hat{p}_r+\frac{f''}{4m}+\eta\Delta_0 \\
-i\frac{f'}{2m} \hat{p}_r-\frac{f''}{4m} -\eta\Delta_0 & -\frac{1}{2m} (\partial_r^2+\frac{1}{r}\partial_r-\frac{1}{r^2})-\tilde{\mu}-\alpha
\end{array} \right) \tilde{\Phi}=0,
\end{align}
\end{widetext}
where $\tilde{\Phi}(r)\equiv U(r)\Phi(r)=[\tilde{u}_{\uparrow}(r), \tilde{u}_{\downarrow}(r)]^T$, $\tilde{\mu}=\mu-f'^2/8m$ and the Hermitian radial momentum operator $\hat{p}_r=-i(\partial_r+1/2r)$. \cite{fujikawa08} It is easy to verify that the full wave function after the rotation satisfies the Majorana condition.
Equation~(\ref{y7}) shows that the spatially varying skyrmion spin texture, where $f'\neq 0$, renormalizes the chemical potential, and more importantly, generates an effective SOI, \cite{footnote2} thereby establishing the connection to a 1D system with helical spin order. \cite{gangadharaiah11L,klinovaja_stano12,kjaergaard12,klinovaja13L,braunecker13L,vazifeh13L,nadj13,pientka13,nadj14,kim2015:PRB,pawlak15,braunecker10}
\begin{figure}
\centering
\includegraphics[width=3.2in]{wfdimful.eps}
\caption{(Color online) Radial wave function $\tilde{\Phi}(r)=[\tilde{u}_{\uparrow}(r), \tilde{u}_{\downarrow}(r)]^T$ of the MBS $\chi^{\textrm{in}}$, with the solid (black) and dashed (red) lines denoting $\tilde{u}_{\uparrow}(r)$ and $\tilde{u}_{\downarrow}(r)$, respectively. We set $\mu=0$, $\alpha=1$~meV, $\Delta_0=0.5$~meV, $R=100$~nm, $r_0=10$~nm, $n=2$, and $pR$ much larger than the shown range. The discontinuities in $\tilde{\Phi}'(r)$ at $r=r_0$ \big(less discernible for $\tilde{u}_{\downarrow}(r)$\big) arise from singularities in $f''(r)$, Eq.~(\ref{y7}).}
\end{figure}
We solve Eq.~(\ref{y7}) by exploiting the similarity between our system and a 1D topological superconductor (TSC). We look for two Majoranas $\chi^{\textrm{in}}$ and $\chi^{\textrm{out}}$, one located near the inner boundary $r=r_0$ and the other located near the outer boundary $r=r_0+pR$. These are the analogues of the Majorana end states in a 1D TSC. In Appendix A, we show the detailed construction of the analytical wave functions of $\chi^{\textrm{in}}$ and $\chi^{\textrm{out}}$, as well as a comparison with results from exact numerical diagonalization of the Hamiltonian in Eq.~(\ref{y5}). The analytical and numerical solutions agree well. We find that for $\alpha^2>\tilde{\mu}^2+\Delta_0^2$ a zero-energy MBS $\chi^{\textrm{in}}$ exists, an example of which is plotted in Fig.~1. This MBS is accompanied by a \emph{delocalized} (extending to infinity) Majorana mode $\chi^{\textrm{out}}$. Fixing other parameters and decreasing the exchange energy, $\chi^{\textrm{in}}$ delocalizes and turns into an extended state at the critical point when $\alpha^2=\tilde{\mu}^2+\Delta_0^2$ (we numerically confirm this analytical result for the transition, corresponding also to the closing and reopening of a spectral gap). The inner and outer Majoranas then hybridize into a finite-energy fermion. Away from the critical point, $\chi^{\textrm{in}}$ and $\chi^{\textrm{out}}$ can still have a non-zero overlap in a finite-size skyrmion. The ratio $\xi/pR$, with $\xi$ the decay length of $\chi^{\textrm{in}}$, characterizes the protection of the MBS from hybridization.
A rough upper bound for $\xi$ is given by $v_F/\Delta_{\min}$, where $v_F$ is the Fermi velocity set by the larger of the exchange energy $\alpha$ and $E_{\lambda}=m\lambda^2/2$, where $\lambda=\pi/2mR$ is the strength of the effective SOI generated by the skyrmion, and $\Delta_{\min}=\min\{\Delta_0,|\alpha-\sqrt{\Delta_0^2+\tilde{\mu}^2}|,\Delta_p\}$, where $\Delta_p=2\Delta_0\sqrt{E_{\lambda}(1+\tilde{\mu}/\alpha)/\alpha}$ is the effective $p$-wave pairing gap. \cite{alicea10} We find $\xi\sim R$ in the realistic parameter regime $\alpha>\Delta_0>E_{\lambda}$. Thus, a skyrmion with $p=1$ cannot effectively localize a MBS even in the topological phase when $\alpha^2>\tilde{\mu}^2+\Delta_0^2$. In practice, a pulse laser or a large field gradient may be used to excite skyrmions with $p>1$, \cite{yucomm} hosting well-localized MBSs. Given an even number of such skyrmions, the extended outer Majoranas hybridize with each other and drop out of the ground state degeneracy, leaving only the MBSs which can be used for TQC.
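A back-of-the-envelope evaluation of these scales (our sketch, restoring $\hbar$), for the representative values $m=m_e$, $\mu=0$, $\alpha=1$~meV, $\Delta_0=0.5$~meV, and $R=100$~nm, confirms the hierarchy $\alpha>\Delta_0>E_\lambda$ and the estimate $\xi\sim R$:

```python
import numpy as np

hbar = 1.054571817e-34    # J s
m_e = 9.1093837015e-31    # kg
meV = 1.602176634e-22     # J

alpha, Delta0, mu = 1.0 * meV, 0.5 * meV, 0.0
m, R = m_e, 100e-9

lam = np.pi * hbar / (2 * m * R)      # skyrmion-generated SOI strength
E_lam = m * lam**2 / 2                # comes out near 1e-2 meV here
mu_t = mu - E_lam                     # mu_tilde = mu - (hbar f')^2/(8m), f' = pi/R

Delta_p = 2 * Delta0 * np.sqrt(E_lam * (1 + mu_t / alpha) / alpha)
Delta_min = min(Delta0, abs(alpha - np.sqrt(Delta0**2 + mu_t**2)), Delta_p)

v_F = np.sqrt(2 * alpha / m)          # Fermi velocity set by alpha here
xi = hbar * v_F / Delta_min           # rough upper bound on the decay length
```

With these numbers $\Delta_{\min}$ is set by the effective $p$-wave gap $\Delta_p\approx 0.1$~meV, giving $\xi$ of order $R$; a skyrmion with $p=1$ therefore indeed cannot localize the MBS well.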
\section{Quasiparticle spectrum}
We obtain the quasiparticle spectrum by exact numerical diagonalization of the Hamiltonian in Eq.~(\ref{y5}). For simplicity, we let $r_0= 0$ and consider a skyrmion in a finite region of size $L$, requiring that the wave functions vanish for $r \geq L$. For large $p$, we expect to find two MBSs in the topological phase, one localized near $r=0$, as shown in Fig.~2(d), and the other localized near $r=pR$. The localization length $\xi$ of the MBSs depends on the skyrmion-generated SOI. We have confirmed numerically (not shown), that $\xi \propto \lambda$ in the strong SOI regime where $E_{\lambda}\gg \alpha$ and $\xi \propto 1/\lambda$ in the weak SOI regime with $E_{\lambda} \ll \alpha$. Exactly the same behavior was found for
MBSs in 1D TSCs. \cite{JL12}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{fig_E_MF.eps}
\caption{(Color online) Excitation spectrum (left panels) and the probability density $\rho(r)=r|\Upsilon^0(r)|^2$ of the lowest eigenstate in the $l=0$ sector (right panels, with the $u_{\uparrow}^0,u_{\downarrow}^0,v_{\uparrow}^0$, and $v_{\downarrow}^0$ components shown in yellow, blue, black, and red, respectively), for a system size $L=1\,\mu$m with Dirichlet boundary conditions and for realistic parameters $m=m_e$, $\mu=0$, $\alpha=1$~meV, $\Delta_0=0.5$~meV, $\varphi=0$, $R=100$~nm, $r_0=0$, and $n=2$.
(a-b) Skyrmion with $p=1$. (c-d) Skyrmion with $p=10$. (e-f) Skyrmion with $p=1$ in the presence of a SOI described by $H_{so}$, with $l_{so}=100$~nm. The energy of the lowest eigenstate in the $l=0$ sector returned by numerics is 3240~neV in (a), 20~neV in (c), and 201~neV in (e). Excitation energies for $l<0$ are not plotted, being identical to those for $l>0$. In right panels, wave functions are plotted for $r\leq L/2$, to be separated from the degenerate state localized at the outer boundary. In (d) and (f), $|v_{\uparrow, \downarrow}^0|=| u_{\uparrow, \downarrow}^0|$ and only the hole components are plotted. }
\end{figure}
Figure~2 shows the spectrum in the topological phase. For a skyrmion with a single radial spin-flip, Fig.~2(a), there is no spectral gap and the MBSs hybridize, see Fig.~2(b). For a skyrmion with multiple radial spin-flips, a gap separating the MBSs at $l=0$ and the quasi-continuum levels can be identified. Inside the gap, we find two sets of localized fermionic states with finite angular momenta $l$, \cite{footnote} associated with the two MBSs. The localized states near the outer MBS have nearly zero energies and form an almost flat (yet distinctively quadratic) band, while those near the inner MBS have higher energies and form a more dispersive band. For the purpose of TQC, we are concerned with the subgap states near the inner MBS. Although these localized fermions cannot change the nonlocal fermion parity shared by two spatially separated MBSs (in two different skyrmions), \cite{akhmerov10} they cause dephasing and affect the signal strength at the readout. The level spacing of these states thus sets a bound for the allowed temperature fluctuation when measurement is being carried out. Interestingly, we find that these states have nonzero charge and spin expectation values (see Appendix B), which may allow distinguishing them experimentally from the inner MBS, e.g., by applying electric and/or magnetic fields (and also via transport experiments \cite{ktlaw}). In contrast, the subgap states near the outer MBS have almost zero charge and spin expectations. Our numerics also shows that all subgap states are subject to a ``centrifugal'' force due to their finite angular momenta. At increasing $l$, they move away from the skyrmion core, and thus from the inner MBS. In practice, quasiparticle poisoning may also arise from the electrical driving of skyrmions. However, similar issues occur for the manipulation of MBSs in quantum wires as well, \cite{rainis12,klepj, pedrocchi15} and in principle a continuous error correction is necessary for performing TQC. 
\cite{hutter15} Refs.~\onlinecite{goldstein12,yang14,kells15} discussed general approaches to optimization of TQC in the presence of quasiparticle poisoning.
We now consider a magnetic skyrmion with a single radial spin-flip in the presence of an extrinsic SOI.
For numerical convenience (to be able to reduce the problem to a 1D one), we consider SOI of a special form, $H_{ so}=\int d\mathbf{r}\,\psi^{\dagger}\left[\{\cos\theta, ({\vec\sigma}\times \vec{p})\cdot \hat{z} \}+\{\sin\theta,(\vec{\sigma}\cdot \vec{p})\} \right]\psi/4ml_{so}$, \cite{sato09} where $\{,\}$ is the anticommutator and $l_{so}$ is chosen such that the extrinsic and skyrmion-generated SOIs are comparable in strength. As seen in Figs.~2(e) and (f), the extrinsic SOI opens a spectral gap and stabilizes the MBSs, though also with the appearance of subgap localized fermions.
\section{Tight-binding model}
The existence of MBSs in magnetic skyrmions is further supported by a numerical tight-binding calculation.
Fig.~3 shows the spectrum and the probability density profile of the zero-energy state for a skyrmion with $n=2$ and $p=25$.
The skyrmion induces many fermionic subgap states, but still
the zero-energy MBSs are clearly seen, well separated from the other states, similar to Figs.~2(c) and (e). These results support our previous conclusions based on analytics and 1D numerics.
\begin{figure}
\centering
\includegraphics[width=3.2in]{tightbinding2d.eps}
\caption{(Color online) (a) Energy spectrum of $\mathcal{H}$ found numerically in the tight-binding model with hopping amplitude $t$ and lattice constant $a$, in the presence of a skyrmion with $n=2$ and $p=25$. Here, $m$ labels the eigenstates. The inset shows ten energies closest to the chemical potential.
(b) The probability density profile of the lowest eigenstate at near-zero energy $E_M/t=1.6 \times 10^{-7}$. This electron state and its hole partner at $-E_M$ can be identified with two weakly overlapping MBSs (one at the center and one at the system edge) and are very well separated in energy from the remaining states. We have used $\alpha/t=1.2 $, $\Delta_0/t=0.4$, $\mu/t=0$, and $R/a=3.5$.}
\end{figure}
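A minimal sketch of such a tight-binding BdG construction (our illustration on a small lattice with a single radial spin flip; the calculation behind Fig.~3 uses $n=2$, $p=25$ and a larger system): nearest-neighbour hopping $t$, on-site exchange coupling to the local skyrmion spin, and on-site singlet pairing, with the particle-hole symmetry of the spectrum as a basic consistency check.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bdg_skyrmion(L=8, t=1.0, alpha=1.2, Delta=0.4, mu=0.0, n=2, R=3.5):
    """BdG matrix on an L x L lattice in the Nambu basis
    (c_up, c_dn, c_up^dag, c_dn^dag): H = [[h, D], [D^dag, -h^T]] with
    D = i*sy*Delta per site; h is Hermitian, so -h^T = -h.conj()."""
    N = L * L
    h = np.zeros((2 * N, 2 * N), dtype=complex)
    c = (L - 1) / 2.0
    for i in range(L):
        for j in range(L):
            s = i * L + j
            x, y = i - c, j - c
            r, th = np.hypot(x, y), np.arctan2(y, x)
            f = min(np.pi * r / R, np.pi)      # p = 1 radial profile
            Nx = np.sin(f) * np.cos(n * th)
            Ny = np.sin(f) * np.sin(n * th)
            Nz = np.cos(f)
            h[2*s:2*s+2, 2*s:2*s+2] = (alpha * (Nx*sx + Ny*sy + Nz*sz)
                                       - mu * np.eye(2))
            for i2, j2 in ((i + 1, j), (i, j + 1)):   # nearest neighbours
                if i2 < L and j2 < L:
                    s2 = i2 * L + j2
                    h[2*s:2*s+2, 2*s2:2*s2+2] = -t * np.eye(2)
                    h[2*s2:2*s2+2, 2*s:2*s+2] = -t * np.eye(2)
    D = np.kron(np.eye(N), Delta * 1j * sy)    # on-site singlet pairing
    return np.block([[h, D], [D.conj().T, -h.conj()]])

H = bdg_skyrmion()
E = np.linalg.eigvalsh(H)   # spectrum comes in +/- pairs
```

In this basis $H$ anticommutes with the particle-hole operation $\tau_x K$, so the eigenvalues come in $\pm E$ pairs; at this small size and $p=1$ no isolated zero mode is expected, and the construction merely illustrates the ingredients of the computation behind Fig.~3.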
\section{Non-Abelian statistics}
Non-Abelian statistics does not follow immediately from braiding the MBSs in magnetic skyrmions, which unlike the MBSs in $p+ip$ superconductors \cite{volovik99,read00,ivanov01} are not bound to superconducting vortices. We overcome this difficulty with the help of a superconducting tri-junction, \cite{fu2008} in a spirit similar to TQC in one dimension. \cite{tqc1d} The tri-junction divides the space into three parts, with order-parameter phases $\varphi=\varphi_1,\varphi_2,\varphi_3$. As shown earlier, the $\varphi$-phase enters the MBS wave function as $\Upsilon^0(\mathbf{r})\propto e^{i\frac{1}{2}\tau_z\varphi}$. A branch cut is needed to avoid the multi-valuedness of the MBS wave function. Consider now two MBSs $\chi^A$ and $\chi^B$, initially located in the $\varphi_1$-region and $\varphi_2$-region, respectively. The exchange of $\chi^A$ and $\chi^B$ involves three steps, as sketched in Fig.~4(a): First, $\chi^A$ crosses the $\varphi_1$-$\varphi_3$ junction; second, $\chi^B$ crosses the $\varphi_1$-$\varphi_2$ junction; and third, $\chi^A$ crosses the $\varphi_2$-$\varphi_3$ junction. When a MBS crosses a junction, the $\varphi$-phase locally felt by the MBS changes, giving rise to a rotation of the phase of the wave function. Such a phase rotation may be clockwise or anticlockwise, as represented in the $\varphi$-plane. However, phase rotations with the formation of intermediate $\pi$-junctions do not correspond to physical manipulations of MBSs, since $\pi$-junctions introduce additional zero modes \cite{fu2008,sau10L1} that were not present in the original problem. Let $ \varphi_{ij}=\min\{|\varphi_i-\varphi_j|,2\pi-|\varphi_i-\varphi_j|\}$, where $i,j=1,2,3$.
The case $\{\varphi_{12}+\varphi_{23},\varphi_{23}+\varphi_{31},\varphi_{31}+\varphi_{12} \}>\pi$ corresponds to the situation where the three phases $\varphi_1,\varphi_2,\varphi_3$ form a $Y$-shape in the $\varphi$-plane, as shown in Fig.~4(b), such that all physical phase rotations are in the same direction. This leads to the non-Abelian transformation rule of the MBSs after the exchange: $\chi^A\rightarrow -\nu\chi^B$ and $\chi^B\rightarrow \nu\chi^A$, with $\nu=\pm1$. In practice, arrays of tri-junctions may be constructed to implement TQC.
\begin{figure}
\centering
\includegraphics[width=3.2in]{tqcboth.eps}
\caption{Braiding of two MBSs $\chi^A$ and $\chi^B$ (solid circles): (a)~real-space manipulation, with the hollow circle indicating the temporary position of $\chi^A$, and (b)~corresponding phase rotations in the MBS wave function, with the wavy line denoting a branch cut. The solid and dashed arrows show the trajectories of $\chi^A$ and $\chi^B$, respectively.}
\end{figure}
We note that the above scheme of TQC relies on the existence of a bulk gap in the system, i.e., in the far asymptotic region between skyrmions. Otherwise, braiding of MBSs can lead to non-universal results. Although not explicitly included in the model, we assume that such a bulk gap can be provided by the SOI ubiquitous in the magnetic materials hosting skyrmions.
\section{Conclusions}
We have demonstrated the existence of MBSs in magnetic skyrmions with even azimuthal winding numbers, placed in proximity to an $s$-wave superconductor. The electrical drivability of magnetic skyrmions makes the real-space manipulation of MBSs straightforward. TQC can be performed with the help of superconducting tri-junctions.
\begin{acknowledgments}
We thank X. Z. Yu and Y. Tokura for helpful discussions. We acknowledge partial support from the Swiss NSF and NCCR QSIT. This work was supported by JSPS KAKENHI Grant Number 16H02204.
\end{acknowledgments}
Let $X$ be a smooth projective curve over a field $\mathbf k$. The following additive categories associated with $X$ will be of primary interest to us in this paper: the category of Higgs bundles and the category of vector bundles with connections. The moduli stacks of objects of these categories are Artin stacks locally of finite type. Furthermore, these categories have homological dimensions two. Hence one can apply to them (a version of) the theory of motivic Donaldson--Thomas invariants (DT-invariants for short) developed in~\cite{KontsevichSoibelman08} and ask about explicit formulas for the motivic DT-series (see loc.~cit.\ for the terminology and the details).
Let us denote by $\mathcal M_{r,d}$ the moduli stack of rank $r$ degree $d$ Higgs bundles and by $\mathcal M^{ss}_{r,d}$ its open substack of finite type classifying semistable Higgs bundles. Calculating the DT-series of the category of Higgs bundles is equivalent to calculating the motivic classes of $\mathcal M^{ss}_{r,d}$. In the case when $\mathbf k$ is finite, these motivic classes are closely related to volumes of the stacks, see~\cite{SchiffmannIndecomposable}.
In this paper we consider the case of a field of characteristic zero and calculate the motivic classes of the stacks $\mathcal M_{r,d}^{ss}$. We also show that the motivic class of the moduli stack of rank $r$ bundles with connections is equal to the motivic class of $\mathcal M_{r,0}^{ss}$. We will give precise formulations of our results in the following subsections of the introduction. Our techniques are motivic generalizations of those of~\cite{SchiffmannIndecomposable} and~\cite{MozgovoySchiffmanOnHiggsBundles}. The main new ingredient is a motivic version of a theorem of Harder about Eisenstein series. Furthermore, motivic versions of many results of loc.~cit.\ require a more substantial use of algebraic geometry; in particular, we make systematic use of generic points of schemes. Moreover, several proofs from~\cite{MozgovoySchiffmanOnHiggsBundles} require substantial revision or replacement in the motivic case.
The reader will notice that besides the results of Schiffmann and Mozgovoy--Schiffmann our paper is largely motivated by the general philosophy of motivic DT-invariants developed in~\cite{KontsevichSoibelman08}, which provides a natural framework for many questions about motivic invariants of $3$-dimensional Calabi--Yau categories or categories of homological dimension less than $3$.
\subsection{{Motivic classes of stacks}}\label{sect:IntroMotSt}
All the stacks considered in this paper will be Artin stacks locally of finite type over a field. For an arbitrary field $\mathbf k$ one defines the abelian group $\mathrm{Mot}(\mathbf k)$ as the group generated by isomorphism classes of $\mathbf k$-stacks of finite type modulo the following relations:
(i) $[\mathcal Y_1]=[\mathcal Y_2]+[\mathcal Y_1-\mathcal Y_2]$ whenever $\mathcal Y_2$ is a closed substack of $\mathcal Y_1$;
(ii) $[\mathcal Y_2]=[\mathcal Y_1\times\mathbb A_\mathbf k^r]$ whenever $\mathcal Y_2\to\mathcal Y_1$ is a vector bundle of rank $r$.
Note that $\mathrm{Mot}(\mathbf k)$ is a commutative ring under the product $[\mathcal X][\mathcal Y]=[\mathcal X\times\mathcal Y]$. This material is well-known (see e.g.~\cite[Sect.~1]{Ekedahl09}, \cite{Joyce07},~\cite{KontsevichSoibelman08}).
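As a first illustration of relations (i) and (ii) (standard facts, stated here only as an example): cutting $\mathbb P^n$ into affine cells gives $[\mathbb P^n]=1+\mathbb{L}+\dots+\mathbb{L}^n$, counting ordered bases gives $[\mathrm{GL}_n]=\prod_{i=0}^{n-1}(\mathbb{L}^n-\mathbb{L}^i)$, and in the completed ring $\overline{\Mot}(\mathbf k)$ discussed below one has $[\mathrm B\mathbb G_m]=1/(\mathbb{L}-1)=\sum_{d\ge1}\mathbb{L}^{-d}$. These identities can be checked symbolically (a sympy sketch, not part of the formal development):

```python
import sympy as sp

L = sp.symbols('L')

def proj_space(n):
    # Relation (i): P^n = A^n \sqcup P^{n-1}, hence [P^n] = 1 + L + ... + L^n.
    return sum(L**k for k in range(n + 1))

def GL(n):
    # [GL_n] = prod_{i=0}^{n-1} (L^n - L^i), by counting ordered bases.
    res = sp.Integer(1)
    for i in range(n):
        res *= L**n - L**i
    return sp.expand(res)

assert sp.simplify(proj_space(3) - (L**4 - 1) / (L - 1)) == 0
assert sp.expand(GL(2) - (L**2 - 1) * (L**2 - L)) == 0

# In the completion, 1/(L - 1) is the limit of the partial sums of
# sum_{d >= 1} L^{-d}: the tail L^{-M}/(L - 1) sits ever deeper in the
# dimensional filtration.
M = 8
tail = sp.simplify(1 / (L - 1) - sum(L**(-d) for d in range(1, M + 1)))
assert sp.simplify(tail - L**(-M) / (L - 1)) == 0
```

The last computation is exactly the sense in which infinite sums converge in $\overline{\Mot}(\mathbf k)$: the partial sums stabilize modulo the dimensional filtration.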
We define the dimensional completion of $\mathrm{Mot}(\mathbf k)$ as follows. Let $F^m\mathrm{Mot}(\mathbf k)$ be the subgroup generated by the classes of stacks of dimension $\le-m$. This is a ring filtration and we define the completed ring $\overline{\Mot}(\mathbf k)$ as the completion of $\mathrm{Mot}(\mathbf k)$ with respect to this filtration. In the completed ring one can take infinite sums of motivic classes of stacks of finite type provided these sums are convergent. The completion is necessary, for example, to define the residue of a series whose coefficients are motivic classes. The class~$[\mathcal Y]$ in $\mathrm{Mot}(\mathbf k)$ or $\overline{\Mot}(\mathbf k)$ is called the \emph{motivic class\/} of the stack $\mathcal Y$.
Later, we will also need a relative version of $\mathrm{Mot}(\mathbf k)$ and $\overline{\Mot}(\mathbf k)$. We refer the reader to Section~\ref{sect:MotFun} for details and references.
\subsection{Moduli stacks of Higgs bundles and connections}\label{sect:ModStack}
Fix a smooth geometrically connected projective curve $X$ over $\mathbf k$. By a Higgs bundle on~$X$, we mean a pair $(E,\Phi)$, where $E$ is a vector bundle on~$X$, and $\Phi:E\to E\otimes\Omega_X$ is an $\mathcal O_X$-linear morphism from $E$ to $E$ ``twisted'' by the sheaf of differential $1$-forms $\Omega_X$. By definition the rank of a pair $(E,\Phi)$ is the rank of $E$, the degree is the degree of $E$. Denote by $\mathcal M_{r,d}$ the moduli stack of rank $r$ degree $d$ Higgs bundles on $X$. We define Higgs sheaves similarly, by replacing vector bundles with coherent sheaves in the definition.
We also note for further use that every coherent sheaf $F$ on $X$ can be written as $T\oplus E$, where $T$ is a torsion sheaf, $E$ is a torsion free sheaf (that is, a vector bundle). In this decomposition $T$ is unique, while~$E$ is unique up to isomorphism.
The Higgs bundle $(E,\Phi)$ is called \emph{semistable\/} if for any subbundle $F\subset E$ preserved by $\Phi$ we have
\[
\frac{\deg F}{\rk F}\le\frac{\deg E}{\rk E}.
\]
Semistability is an open condition compatible with field extensions; we denote the open substack of semistable Higgs bundles by $\mathcal M^{ss}_{r,d}\subset\mathcal M_{r,d}$. The stack $\mathcal M^{ss}_{r,d}$ is of finite type. We refer the reader to Section~\ref{Sect:Higgs} for more details. Similarly one can deal with Higgs sheaves. The latter form an abelian category.
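In down-to-earth terms, semistability is the slope inequality $\deg F/\rk F\le\deg E/\rk E$ over the $\Phi$-invariant subbundles. A toy numeric check (illustrative only; the bundle data below are hypothetical):

```python
from fractions import Fraction

def slope(deg, rk):
    # The slope mu = deg / rk, kept as an exact rational number.
    return Fraction(deg, rk)

def is_semistable(E, invariant_subbundles):
    # Toy check of the slope condition: E = (deg, rk); the second argument
    # lists (deg F, rk F) for the Phi-invariant subbundles 0 != F != E.
    d, r = E
    return all(slope(df, rf) <= slope(d, r) for df, rf in invariant_subbundles)

# A rank-2, degree-0 Higgs bundle with a single invariant line subbundle:
assert is_semistable((0, 2), [(0, 1)])       # slope 0 <= 0: semistable
assert not is_semistable((0, 2), [(1, 1)])   # a destabilizing subbundle
```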
Denote by $\mathcal{C}onn_r$ the moduli stack of rank $r$ vector bundles with connections on $X$. That is, $\mathcal{C}onn_r$ is the stack classifying pairs $(E,\nabla)$, where $E$ is a rank $r$ vector bundle on $X$ and $\nabla:E\to E\otimes\Omega_X$ is a $\mathbf k$-linear morphism of sheaves satisfying the Leibniz rule: for any open subset $U$ of $X$, any $f\in H^0(U,\mathcal O_X)$ and any $s\in H^0(U,E)$ we have
\[
\nabla(fs)=f\nabla(s)+s\otimes df.
\]
Assume that $\mathbf k$ is a field of characteristic zero. Then the stack $\mathcal{C}onn_r$ is a stack of finite type. We will reprove this well-known fact in Section~\ref{sect:ConnIsoslopy}. Recall that every vector bundle admitting a connection has degree zero (Weil's theorem). Note also that in the case of bundles on curves every connection is automatically flat as $\wedge^2\Omega_X=0$. Our first main result is the following theorem.
\begin{theorem}\label{th:conn=higgs} If $\mathbf k$ is a field of characteristic zero, then we have the equality of motivic classes in $\mathrm{Mot}(\mathbf k)${\rm:}
\[
[\mathcal{C}onn_r]=[\mathcal M^{ss}_{r,0}].
\]
\end{theorem}
This theorem will be proved in Section~\ref{sect:CompHiggsConn}. The proof is inspired by~\cite{MozgovoySchiffmanOnHiggsBundles}.
To give the reader the flavor of the statement, we sketch a direct proof in the case of $r=2$ in Section~\ref{Sect:Conn=Higgs2}.
\begin{remark}
Note that every bundle with connection $(E,\nabla)$ is semistable in the following sense: if $F$ is a subbundle preserved by $\nabla$, then $\deg F=0=\deg E$. Thus the theorem says that the motivic class of the stack of semistable Higgs bundles equals the motivic class of the stack of (automatically semistable) bundles with connections. In fact, if $\mathbf k$ is the field of complex numbers, then the corresponding categories are equivalent by Simpson's non-abelian Hodge theory. However, we do not see how to derive the equality of motivic classes of stacks from Simpson's result.
\end{remark}
\subsection{Explicit formulas for motivic classes}
Our second main result is the explicit calculation of the motivic class of the stack of semistable Higgs bundles. This problem has some history including the paper by Mozgovoy~\cite{MozgovoyADHM} where the conjecture about the motivic class was made in the case when the rank and the degree are coprime, and the papers by Schiffmann~\cite{SchiffmannIndecomposable} and Mozgovoy--Schiffmann~\cite{MozgovoySchiffmanOnHiggsBundles} devoted to the calculation of the volume of the stack over a finite field. We assume that $\mathbf k$ is a field of characteristic zero.
In order to formulate our result let us recall some standard notions.
\subsubsection{Motivic zeta-functions}\label{sect:zeta}
Here we follow~\cite{KapranovMotivic}. For a variety $Y$ set
\[
\zeta_Y(z):=\sum_{n=0}^\infty[Y^{(n)}]z^n\in\mathrm{Mot}(\mathbf k)[[z]],
\]
where $Y^{(n)}=Y^n/\mathfrak{S}_n$ is the $n$-th symmetric power of $Y$ ($\mathfrak{S}_n$ denotes the group of permutations).
Assume now that $Y=X$ is our smooth curve. For the rest of the introduction, we assume that $X$ has a divisor of degree one defined over $\mathbf k$. (Note that this condition is satisfied if $X$ has a $\mathbf k$-rational point.) Let $g$ be the genus of $X$. Set $\mathbb{L}:=[\mathbb A_\mathbf k^1]$.
\begin{proposition}\label{lm:zetaX}
(i)
\[
\zeta_X(z)=\frac{P_X(z)}{(1-z)(1-\mathbb{L} z)}
\]
for a polynomial $P_X(z)$ with coefficients in $\mathrm{Mot}(\mathbf k)${\rm;}
(ii) $P_X(0)=1$ and the highest term of $P_X$ is $\mathbb{L}^gz^{2g}$.
(iii) We have
\[
\zeta_X(1/\mathbb{L} z)=\mathbb{L}^{1-g}z^{2-2g}\zeta_X(z).
\]
(iv) If $i\ne0,-1$, then $\zeta_X(\mathbb{L}^i)$ is invertible in $\overline{\Mot}(\mathbf k)$.
\end{proposition}
Before giving the proof, we note that part $(i)$ is used to view $\zeta_X(z)$ as a function on $\overline{\Mot}(\mathbf k)$ defined for all $z$ such that $1-z$ and $1-\mathbb{L} z$ are invertible in $\overline{\Mot}(\mathbf k)$. In particular, $\zeta_X(\mathbb{L}^i)$ is defined for $i\ne0,-1$.
\begin{proof}
Statements (i) and (iii) are proved in~\cite[Thm.~1.1.9]{KapranovMotivic}. It is obvious that $P_X(0)=1$; the statement about the highest term of $P_X$ then follows from (iii). Statement (iv) in the case $i\le-2$ follows from the fact that $\zeta_X(\mathbb{L}^i)\in1+F^1\overline{\Mot}(\mathbf k)$, where $F^m\overline{\Mot}(\mathbf k)$ is the dimensional filtration. In the case $i\ge1$ the statement follows from (iii).
\end{proof}
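For $X=\mathbb P^1$ (genus $0$, $P_X=1$) all of this can be seen directly: $(\mathbb P^1)^{(n)}=\mathbb P^n$, so $\zeta_{\mathbb P^1}(z)=\sum_n[\mathbb P^n]z^n=1/((1-z)(1-\mathbb{L} z))$. The following symbolic computation (a sketch, not part of the paper) confirms the rationality and the functional equation (iii) in this case:

```python
import sympy as sp

L, z = sp.symbols('L z')
N = 8  # truncation order

# For X = P^1 the n-th symmetric power is P^n, with motivic class
# 1 + L + ... + L^n, so the motivic zeta function is sum_n [P^n] z^n.
zeta_trunc = sum(sum(L**k for k in range(n + 1)) * z**n for n in range(N))

closed = 1 / ((1 - z) * (1 - L * z))  # P_X(z) = 1 for genus 0
assert sp.simplify(sp.series(closed, z, 0, N).removeO() - zeta_trunc) == 0

# Functional equation (iii) with g = 0: zeta(1/(L z)) = L z^2 zeta(z).
assert sp.simplify(closed.subs(z, 1 / (L * z)) - L * z**2 * closed) == 0
```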
It is convenient to introduce the ``normalized'' zeta-function $\tilde\zeta_X(z):=z^{1-g}\zeta_X(z)$ and the ``regularized'' zeta-function by setting
\[
\zeta_X^*(\mathbb{L}^{-u}z^v)=
\begin{cases}
\zeta_X(\mathbb{L}^{-u}z^v)\text{ if }v>0\text{ or }u>1,\\
\res_{z=\mathbb{L}^{-1}}\zeta_X(z)\frac{dz}z:=\displaystyle\frac{P_X(\mathbb{L}^{-1})}{1-\mathbb{L}^{-1}}\text{ if }(u,v)=(1,0),\\
\res_{z=1}\zeta_X(z)\frac{dz}z:=\displaystyle\frac{P_X(1)}{1-\mathbb{L}}\text{ if }(u,v)=(0,0).
\end{cases}
\]
(Cf.~the definition of the residue in Section~\ref{sect:res}.)
Let $(1+z\overline{\Mot}(\mathbf k)[[z]])^\times$ denote the multiplicative group of power series with constant term 1. One can uniquely extend the assignment $Y\mapsto\zeta_Y$ to a continuous homomorphism of topological groups
\begin{equation}\label{eq:zeta}
\zeta:\overline{\Mot}(\mathbf k)\to(1+z\overline{\Mot}(\mathbf k)[[z]])^\times
\end{equation}
such that for any $A\in\overline{\Mot}(\mathbf k)$ and any $n\in\mathbb{Z}$ we have $\zeta_{\mathbb{L}^nA}(z)=\zeta_A(\mathbb{L}^nz)$. More precisely, any class $A\in\overline{\Mot}(\mathbf k)$ can be written as the limit of a sequence $([Y_i]-[Z_i])/\mathbb{L}^{n_i}$, where $Y_i$ and $Z_i$ are varieties. We define
\[
\zeta_A(z)=\lim_{i\to\infty}\frac{\zeta_{Y_i}(\mathbb{L}^{-n_i}z)}{\zeta_{Z_i}(\mathbb{L}^{-n_i}z)}.
\]
For more details, see Section~\ref{sect:MotVar}. Note that the operation $A\mapsto\zeta_A$ gives a pre-lambda structure on the ring $\overline{\Mot}(\mathbf k)$ (see~\cite{GuzeinZadeEtAlOnLambdaRingStacks}).
Consider the ring of formal power series in two variables $\overline{\Mot}(\mathbf k)[[z,w]]$ (this is, actually, an example of a quantum torus, cf.~Section~\ref{sect:IntroRemarks}). Let $\overline{\Mot}(\mathbf k)[[z,w]]^+$ denote the ideal of power series with vanishing constant term, and let $(1+\overline{\Mot}(\mathbf k)[[z,w]]^+)^\times$ be the multiplicative group of series with constant term equal to~$1$.
We define the \emph{plethystic exponent\/} $\Exp:\overline{\Mot}(\mathbf k)[[z,w]]^+\to(1+\overline{\Mot}(\mathbf k)[[z,w]]^+)^\times$ by
\[
\Exp\left(\sum_{r,d} A_{r,d}w^rz^d\right)=\prod_{r,d}\Exp(A_{r,d}w^rz^d)=
\prod_{r,d}\zeta_{A_{r,d}}(w^rz^d).
\]
One shows easily that this is an isomorphism of abelian groups. Denote the inverse isomorphism by $\Log$ (the \emph{plethystic logarithm\/}).
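On series whose coefficients are polynomials in $\mathbb{L}$ (and in one variable, for simplicity), $\Exp$ and $\Log$ can be made completely explicit: $\Exp(\mathbb{L}^kw^n)=\zeta_{\mathbb{L}^k}(w^n)=1/(1-\mathbb{L}^kw^n)$, and $\Log$ can be recovered by a standard M\"obius-inversion formula using the Adams operations $\psi_m\colon\mathbb{L}\mapsto\mathbb{L}^m$, $w\mapsto w^m$. The following sympy sketch (not part of the paper) implements both and checks, in particular, that $\Exp([\mathbb P^1]w)$ recovers $\zeta_{\mathbb P^1}(w)$:

```python
import sympy as sp
from sympy.ntheory import mobius

L, w = sp.symbols('L w')
N = 6  # work modulo w^(N+1)

def trunc(e):
    return sp.expand(sp.series(sp.expand(e), w, 0, N + 1).removeO())

def Exp(f):
    # Plethystic exponential, determined by Exp(L^k w^n) = 1/(1 - L^k w^n)
    # together with Exp(f + g) = Exp(f) Exp(g); f must have no constant term.
    res = sp.Integer(1)
    for (n, k), a in sp.Poly(sp.expand(f), w, L).terms():
        res = trunc(res * (1 - L**k * w**n) ** (-a))
    return res

def Log(F):
    # Inverse of Exp: Log(F) = sum_{m>=1} (mu(m)/m) psi_m(log F), where the
    # Adams operation psi_m sends L -> L^m and w -> w^m.
    lg = trunc(sp.log(F))
    return trunc(sum(sp.Rational(mobius(m), m) * lg.subs({L: L**m, w: w**m})
                     for m in range(1, N + 1)))

# Round trip: Log is inverse to Exp.
f0 = w + L * w**2
assert sp.expand(Log(Exp(f0)) - f0) == 0

# Exp([P^1] w) = zeta_{P^1}(w): the coefficient of w^n is [Sym^n P^1] = [P^n].
E = Exp((1 + L) * w)
for n in range(N + 1):
    assert sp.expand(E.coeff(w, n) - sum(L**k for k in range(n + 1))) == 0
```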
\subsubsection{Explicit formulas for motivic classes of the stacks of semistable Higgs bundles}\label{sect:explicit} The following is the motivic version of~\cite[Sect.~1.4]{SchiffmannIndecomposable}.
Let $\lambda=(\lambda_1\ge\lambda_2\ge\ldots\ge\lambda_l>0)$ be a partition. We can also write it as $\lambda=1^{r_1}2^{r_2}\ldots t^{r_t}$, where $r_i$ is the number of occurrences of $i$ among $\lambda_j$, $1\le j\le l$. The Young diagram of $\lambda$ is the set of points $(i,j)\in\mathbb{Z}^2$ such that $1\le j\le l$ and $1\le i\le\lambda_j$. For a box $s\in\lambda$, its arm $a(s)$ (resp.\ leg $l(s)$) is the number of boxes lying strictly to the right of (resp.~strictly above) $s$.
For a partition $\lambda$, set
\[
J^{mot}_\lambda(z)=\prod_{s\in\lambda}\zeta_X^*(\mathbb{L}^{-1-l(s)}z^{a(s)})\in\overline{\Mot}(\mathbf k)[[z]],
\]
where the product is over all boxes of the Young diagram corresponding to the partition. In particular, for the empty Young diagram $\lambda$ we get $J^{mot}_\lambda(z)=1$.
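Since arm and leg conventions vary in the literature, here is a small computation (a sketch, not part of the paper) implementing the conventions above for $\lambda=(3,2)$, together with the conjugate partition $\lambda'$ and the pairing $\langle\lambda,\lambda\rangle=\sum_i(\lambda_i')^2$ that appear below:

```python
# Conventions as in the text: the diagram of lambda = (lambda_1 >= ... >= lambda_l)
# is {(i, j) : 1 <= j <= l, 1 <= i <= lambda_j}; the arm a(s) counts boxes
# strictly to the right of s, the leg l(s) boxes strictly above s.
def boxes(lam):
    return [(i, j) for j, part in enumerate(lam, 1) for i in range(1, part + 1)]

def arm(lam, i, j):
    return lam[j - 1] - i

def leg(lam, i, j):
    return sum(1 for jj, part in enumerate(lam, 1) if jj > j and part >= i)

def conjugate(lam):
    return [sum(1 for part in lam if part >= i) for i in range(1, lam[0] + 1)]

lam = [3, 2]
assert len(boxes(lam)) == sum(lam) == 5          # |lambda|
assert conjugate(lam) == [2, 2, 1]               # lambda'
assert sum(c**2 for c in conjugate(lam)) == 9    # <lambda, lambda>
assert arm(lam, 1, 1) == 2 and leg(lam, 1, 1) == 1
```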
Set\footnote{Following~\cite{SchiffmannIndecomposable} we use inverse numeration of variables.}
\[
L^{mot}(z_n,\ldots,z_1)=
\frac1{\prod_{i<j}\tilde\zeta_X\left(\frac{z_i}{z_j}\right)}
\sum_{\sigma\in\mathfrak{S}_n}\sigma\left\{
\prod_{i<j}\tilde\zeta_X\left(\frac{z_i}{z_j}\right)
\frac1{\prod_{i<n}\left(1-\mathbb{L}\frac{z_{i+1}}{z_i}\right)}
\cdot\frac1{1-z_1}
\right\}.
\]
For a partition $\lambda=1^{r_1}2^{r_2}\ldots t^{r_t}$ such that $\sum_i r_i=n$, set $r_{<i}=\sum_{k<i}r_k$ and denote by $\res_\lambda$ the iterated residue along
\[
\begin{matrix}
\frac{z_n}{z_{n-1}}=\mathbb{L}^{-1},&\frac{z_{n-1}}{z_{n-2}}=\mathbb{L}^{-1},&\ldots,&
\frac{z_{2+r_{<t}}}{z_{1+r_{<t}}}=\mathbb{L}^{-1},\\
\vdots & \vdots && \vdots \\
\frac{z_{r_1}}{z_{r_1-1}}=\mathbb{L}^{-1},&
\frac{z_{r_1-1}}{z_{r_1-2}}=\mathbb{L}^{-1},&\ldots,&
\frac{z_2}{z_1}=\mathbb{L}^{-1}.
\end{matrix}
\]
Set
\[
\tilde H_\lambda^{mot}(z_{1+r_{<t}},\ldots,z_{1+r_{<i}},\ldots,z_1):=
\res_\lambda\left[
L^{mot}(z_n,\ldots,z_1)\prod_{\substack{j=1\\j\notin\{r_{<i}\}}}^n\frac{dz_j}{z_j}
\right]
\]
and
\begin{equation}\label{eq:subst}
H_\lambda^{mot}(z):=\tilde H_\lambda^{mot}(z^t\mathbb{L}^{-r_{<t}},\ldots,z^i\mathbb{L}^{-r_{<i}},\ldots,z).
\end{equation}
\begin{remark}
Neither the notion of residue nor the substitution~\eqref{eq:subst} is obvious for rational functions whose coefficients belong to $\overline{\Mot}(\mathbf k)$, because it is not known whether this ring is integral. Let us give the precise definition. For a polynomial $P(z_n,\ldots,z_1)\in\overline{\Mot}(\mathbf k)[z_n,\ldots,z_1]$, let $P_\lambda(z)$ denote the one-variable polynomial obtained from $P$ by substituting (for each $i$) $\mathbb{L}^{1-i}z^{j_i}$ instead of $z_i$, where $j_i$ is the unique number such that $r_{<j_i}<i\le r_{<j_i+1}$. Consider the product
\[
\prod_{\substack{j=1\\j\notin\{r_{<i}\}}}^n
\left(1-\frac{\mathbb{L} z_{j+1}}{z_j}\right)L^{mot}(z_n,\ldots,z_1).
\]
Inspecting the formula for $L^{mot}$ and using Proposition~\ref{lm:zetaX}, we can show that the product can be written in the form $P(z_n,\ldots,z_1)/Q(z_n,\ldots,z_1)$, where $Q_\lambda(0)$ is invertible in $\overline{\Mot}(\mathbf k)$. Then
\[
H_\lambda^{mot}(z):=\frac{P_\lambda(z)}{Q_\lambda(z)}.
\]
We expand this rational function in powers of $z$, so that $H_\lambda^{mot}(z)\in\overline{\Mot}(\mathbf k)[[z]]$. We refer the reader to Section~\ref{sect:res} and especially to Remark~\ref{rm:RationalRes} for the compatibility with the definition of the residue of a power series.
\end{remark}
Let us introduce the elements $B_{r,d}\in\overline{\Mot}(\mathbf k)$ via the formula
\begin{equation*}
\sum_{\substack{r,d\in\mathbb{Z}_{\ge0}\\(r,d)\ne(0,0)}}B_{r,d}w^rz^d=
\mathbb{L}\Log\left(\sum_\lambda\mathbb{L}^{(g-1)\langle\lambda,\lambda\rangle}J_\lambda^{mot}(z) H_\lambda^{mot}(z)w^{|\lambda|}
\right).
\end{equation*}
Here the sum is over all partitions, $\langle\lambda,\lambda\rangle:=\sum_i(\lambda_i')^2$, where $\lambda'=(\lambda_1'\ge\ldots\ge\lambda_i'\ge\ldots)$ is the conjugate partition, $|\lambda|=\sum_i\lambda_i$. Next, let $\tau{\ge}0$ be a rational number. {Define the elements $H_{r,d}\in\overline{\Mot}(\mathbf k)$ by}
\[
\sum_{d/r=\tau}{\mathbb{L}^{(1-g)r^2}}H_{r,d}w^rz^d=
\Exp\left(
\sum_{d/r=\tau}B_{r,d}w^rz^d
\right).
\]
Now we can formulate our second main result.
\begin{theorem}\label{th:ExplAnsw}
Let $\mathbf k$ be a field of characteristic zero.
{(i) The elements $H_{r,d}$ are periodic in $d$ with period $r$ {in the following sense}: for $d>r(r-1)(g-1)$ we have $H_{r,d}=H_{r,d+r}$.}
(ii) For any $r>0$ and any $d$ we have
\[
[\mathcal M_{r,d}^{ss}]=H_{r,d+er},
\]
whenever $e$ is large enough {\rm(}it suffices to take $e>(r-1)(g-1)-d/r${\rm)}.
\end{theorem}
The proof of Theorem~\ref{th:ExplAnsw} will be given in Section~\ref{sect:ExplAnsw}. Note that $H_{r,d}$ can be computed explicitly in terms of the operations in the pre-lambda ring $\overline{\Mot}(\mathbf k)$. Thus, the above theorem gives an explicit answer for the motivic class of the stack $\mathcal M_{r,d}^{ss}$.
A similar statement is not known, and probably not literally true, in the case when $\mathbf k$ is a field of finite characteristic. However, for finite fields one can calculate the volume of the groupoid $\mathcal M_{r,d}^{ss}(\mathbf k)$. This volume has been calculated in~\cite{SchiffmannIndecomposable,MozgovoySchiffmanOnHiggsBundles}. Our answer is very similar to theirs. The proof in our case is also very similar to loc.~cit.~except for a few subtle points. One is a motivic version of a theorem of Harder, which we will discuss in Sections~\ref{sect:IntroHarder} and~\ref{sect:Borel}.
Combining part~(ii) of the above theorem with Theorem~\ref{th:conn=higgs}, we arrive at the following result.
\begin{corollary}
We have
\[
[\mathcal{C}onn_r]=H_{r,er}
\]
for any $e>(r-1)(g-1)$.
\end{corollary}
\subsection{Vector bundles with nilpotent endomorphisms}\label{sect:NilpIntro}
The proof of Theorem~\ref{th:ExplAnsw} is based on the following statement of independent interest. Let the stack $\mathcal E^{\ge0,nilp}$ classify pairs $(E,\Phi)$, where $E$ is a vector bundle on $X$ admitting no non-zero morphisms $E\to F$ with $\deg F<0$, and $\Phi$ is a nilpotent endomorphism of $E$. Then $\mathcal E^{\ge0,nilp}$ decomposes according to the rank and degree of the bundles: $\mathcal E^{\ge0,nilp}=\sqcup_{r,d}\mathcal E_{r,d}^{\ge0,nilp}$. It follows easily from Lemma~\ref{lm:Bun+} below that $\mathcal E_{r,d}^{\ge0,nilp}$ is an Artin stack of finite type.
\begin{theorem}\label{th:NilpEnd}
Let $\mathbf k$ be a field of characteristic zero. We have the following identity in $\overline{\Mot}(\mathbf k)[[z,w]]$.
\[
\sum_{r,d\ge0}[\mathcal E_{r,d}^{\ge0,nilp}]w^rz^d=\sum_{\lambda}
\mathbb{L}^{(g-1)\langle\lambda,\lambda\rangle}J_\lambda^{mot}(z) H_\lambda^{mot}(z)w^{|\lambda|}.
\]
\end{theorem}
This theorem will be proved in Section~\ref{sect:NilpEnd}.
\subsection{Harder's theorem on motivic classes of Borel reductions}\label{sect:IntroHarder}
As we mentioned in Section~\ref{sect:IntroMotSt}, for any stack $\mathcal X$ one can define the relative group of motivic functions $\mathrm{Mot}(\mathcal X)$ and its completion $\overline{\Mot}(\mathcal X)$ (so that $\mathrm{Mot}(\mathbf k)=\mathrm{Mot}(\Spec\mathbf k)$ and $\overline{\Mot}(\mathbf k)=\overline{\Mot}(\Spec\mathbf k)$). If $\mathcal Y$ is a stack of finite type over $\mathcal X$, we have a motivic function $[\mathcal Y\to\mathcal X]$ (and $\mathrm{Mot}(\mathcal X)$ is generated by these functions if $\mathcal X$ is of finite type). In particular, we have the ``constant function'' $\mathbf1_\mathcal X:=[\mathcal X\to\mathcal X]$. We review this standard material in Section~\ref{sect:MotFun}. Note that in a slightly different setting these groups were defined in~\cite{KontsevichSoibelman08}.
The group $\overline{\Mot}(\mathcal X)$ is a topological group.
\subsubsection{The stack of Borel reductions}\label{sect:BorelRed}
In this section $\mathbf k$ is a field of arbitrary characteristic. Denote by $\mathcal{B}un_{r,d}$ the moduli stack of rank $r$ degree $d$ vector bundles on $X$. By a \emph{Borel reduction\/} of a rank $r$ vector bundle $E$ we understand a full flag of subbundles
\[
0=E_0\subset E_1\subset\ldots\subset E_r=E.
\]
In particular, $E_i$ is a vector bundle of rank $i$ and $E/E_i$ is a vector bundle of rank $r-i$. The \emph{degree} of the Borel reduction is given by
\[
(d_1,\ldots,d_r),\text{ where }d_i:=\deg E_i-\deg E_{i-1}.
\]
(In particular, $\deg E_1=d_1$.)
Let $\mathcal{B}un_{r,d_1,\ldots,d_r}$ stand for the stack of rank $r$ vector bundles with a Borel reduction of degree $(d_1,\ldots,d_r)$.
We view $\mathcal{B}un_{r,d_1,\ldots,d_r}$ as a stack over $\mathcal{B}un_{r,d_1+\ldots+d_r}$ via the projection $(E_1\subset\ldots\subset E_r)\mapsto E_r$. In Section~\ref{Sect:ProofBorel} we explain that this projection is of finite type and prove the following theorem:
\begin{theorem}\label{th:IntroHarder}
For any $r>0$ and $d\in\mathbb{Z}$ we have in $\overline{\Mot}(\mathcal{B}un_{r,d})$
\[
\lim_{d_1\to-\infty}\ldots\lim_{d_{r-1}\to-\infty}\frac{[\mathcal{B}un_{r,d_1,\ldots,d_{r-1},d-d_1-\ldots-d_{r-1}}\to\mathcal{B}un_{r,d}]}
{\mathbb{L}^{-(2r-2)d_1-(2r-4)d_2-\ldots-2d_{r-1}}}=
\frac{
\mathbb{L}^{(r-1)\left(d+(1-g)\frac{r+2}2\right)}[\Jac]^{r-1}}
{(\mathbb{L}-1)^{r-1}\prod_{i=2}^r\!\zeta_X(\mathbb{L}^{-i})}\;\mathbf1_{\mathcal{B}un_{r,d}}.
\]
\end{theorem}
Here $\Jac=\Jac_X$ is the Jacobian variety of $X$. We note that $\zeta_X(\mathbb{L}^{-i})$ converges for $i\ge2$.
\begin{remark}
It is very important that the right-hand side is the product of an element of $\overline{\Mot}(\Spec\mathbf k)$ with $\mathbf1_{\mathcal{B}un_{r,d}}$. This may be loosely reformulated as the statement that all vector bundles have approximately equal motivic classes of Borel reductions. Moreover, this holds uniformly over any substack of finite type.
\end{remark}
The generating functions for the motivic classes $[\mathcal{B}un_{r,d_1,\ldots,d_r}\to\mathcal{B}un_{r,d_1+\ldots+d_r}]$ are known as (motivic) Eisenstein series. The above theorem can be interpreted as a statement about the \emph{residue\/} of this Eisenstein series; see Theorem~\ref{th:ResHarder2} and Proposition~\ref{pr:HallFormulas}(vi) below.
\subsection{Relation with results of Kontsevich and Soibelman}\label{sect:IntroRemarks}
In~\cite{KontsevichSoibelman08} and~\cite{KontsevichSoibelman10} the authors developed a general theory of motivic Donaldson--Thomas invariants (DT-invariants for short) of three-dimensional Calabi--Yau categories (3CY categories for short). The categories considered in~\cite{KontsevichSoibelman08} are ind-constructible. Roughly speaking, this means that the objects of such categories are parameterized by unions of Artin stacks of finite type (see loc.~cit.~for the precise definition).
Most of the categories that appear ``in nature'' are ind-constructible. Important examples are given by representations of algebras (or dg-algebras), (derived) categories of coherent sheaves, categories of Higgs sheaves, etc. In the case of coherent sheaves or Higgs sheaves on projective curves, the homological dimension of either of these categories is less than 3. However, one can upgrade them to 3CY categories. For that reason many questions about cohomological and motivic invariants of these categories can be reduced to the general theory developed in~\cite{KontsevichSoibelman08,KontsevichSoibelman10}. This remark explains the appearance of motivic DT-invariants in some questions about the Hodge theory of character varieties (see~\cite{HLRV}).
A recent spectacular example is the main conjecture from~\cite{HLRV}. The categories of Higgs sheaves and of connections on a curve studied in this paper have cohomological dimension 2, and moreover are 2-dimensional Calabi--Yau categories (2CY categories for short).
\subsubsection{Hall algebras and quantum tori}\label{sect:QuTorus}
Let $\mathcal C$ be an ind-constructible abelian (or more generally $A_\infty$-triangulated) category endowed with a homomorphism of abelian groups $\cl: K_0(\mathcal C)\to\Gamma\simeq\mathbb{Z}^n$ (``Chern character''). We also assume that $\Gamma$ is endowed with an integer skew-symmetric form $\langle\bullet,\bullet\rangle$ and $\cl$ intertwines this form and the skew-symmetrization of the Euler form on $K_0(\mathcal C)$.
One associates to this data two associative algebras. The first algebra is the motivic Hall algebra $H(\mathcal C)$. As a $\overline{\Mot}(\mathbf k)$-module, it is equal to a group of stack motivic functions on the stack of objects $\Ob(\mathcal C)$. The other algebra is the quantum torus $R_{\mathcal C}=R_{\Gamma,\mathcal C}:=\bigoplus_{\gamma\in\Gamma}\overline{\Mot}(\mathbf k)e_\gamma$ associated with $(\Gamma,\langle\bullet,\bullet\rangle)$ (see~\cite{KontsevichSoibelman08} for the definitions of the multiplications on both algebras). Note that $R_{\Gamma,\mathcal C}$ is much more explicit but carries less information.
Suppose that the ind-constructible category $\mathcal C$ carries a constructible stability structure with the central charge $Z: \Gamma\to\mathbb C$ (see~\cite{KontsevichSoibelman08} for the details). Identify $\mathbb C$ with $\mathbb{R}^2$. Then for any strict sector $V\subset\mathbb{R}^2$ with vertex at the origin, one can define a full subcategory $\mathcal C(V)\subset\mathcal C$ generated by the semistable objects with central charge in~$V$. Furthermore, the corresponding motivic Hall algebra $H(\mathcal C(V))$ and the quantum torus $R_{\mathcal C(V)}$ admit natural completions $\widehat{H}(\mathcal C(V))$ and $\widehat{R}_{\mathcal C(V)}$. The former contains an element
\[
A_{\mathcal C(V)}^{Hall}:=\sum_{\substack{\gamma\in\Gamma\\ Z(\gamma)\in V}}\mathbf1_{\Ob_\gamma(\mathcal C)}.
\]
Here $\mathbf1_{\Ob_\gamma(\mathcal C)}$ is the identity motivic function on $\Ob_\gamma(\mathcal C)$, where $\Ob_\gamma(\mathcal C)\subset\Ob(\mathcal C)$ is the substack parameterizing objects of class $\gamma$. In the case when ${\mathcal C}$ is a 3CY category there is a homomorphism of algebras $\Phi:=\Phi_V:\widehat{H}({\mathcal C}(V))\to \widehat{R}_{\mathcal C(V)}$. The element $A_{\mathcal C(V)}^{mot}:=\Phi(A_{\mathcal C(V)}^{Hall})$ is called the \emph{motivic DT-series\/} of $\mathcal C(V)$. The homomorphism $\Phi$ is defined in terms of the motivic Milnor fiber of the potential of ${\mathcal C}$, hence it exists literally for 3CY categories only.
In the case when $\mathcal C$ has homological dimension at most two, one can ``upgrade'' it to a 3CY category by introducing a kind of ``Lagrange multipliers''. Then the homomorphism $\Phi$ gives rise to a linear map $\widehat{H}({\mathcal C}(V))\to \widehat{R}_{\mathcal C(V)}$ that partially respects the products.
In particular, if $\mathcal C$ is a 2CY category then for each strict sector $V$ there is a linear map $\Phi:=\Phi_V: \widehat{H}({\mathcal C}(V))\to\widehat{R}_{\mathcal C(V)}$ that satisfies the property
$\Phi([F_1][F_2])=\Phi([F_1])\Phi([F_2])$ as long as $\Arg(Z(F_1))>\Arg(Z(F_2))$ (see e.g.~\cite{RenSoibelman} for details).
Applying $\Phi$ to the element $A_{\mathcal C(V)}^{Hall}$, we arrive at the element
\[
A_{\mathcal C(V)}^{mot}:=\sum_{\substack{\gamma\in\Gamma\\ Z(\gamma)\in V}}w_\gamma[\Ob_\gamma(\mathcal C)] e_\gamma,
\]
where the ``weight'' $w_\gamma$ is derived from the general theory of \cite{KontsevichSoibelman08} (there is an alternative approach in~\cite{KontsevichSoibelman10}). In the Higgs sheaves case, the weight is given by $w_\gamma=w_{(r,d)}=\mathbb{L}^{(1-g)r^2}$.
The above series converge both in the motivic Hall algebra and in the quantum torus, since the choice of a strict sector $V$ forces the summation to run over elements $\gamma$ belonging to a strictly convex cone in $\Gamma\otimes\mathbb{R}$.
In the current paper we work with two different categories $\mathcal C$: the category of coherent sheaves on~$X$ and the category of coherent sheaves with Higgs fields. The latter category is 2CY, which forces the quantum torus to be commutative (because the Euler form is symmetric). In this case $\Gamma=\mathbb{Z}^2$ is generated by rank and degree gradation. Thus $R_{\mathcal C}=\overline{\Mot}(\mathbf k)[z,z^{-1},w,w^{-1}]$, where $z=e_{(0,1)}$, $w=e_{(1,0)}$. So the motivic DT-series $A_{{\mathcal C}(V)}^{mot}$ becomes a generating function in commuting variables.
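The commutativity statement can be illustrated by a toy model of the quantum torus (a sketch, not the construction of~\cite{KontsevichSoibelman08}; the normalization of the twist below is one common choice, and conventions differ by a factor of $1/2$ in the exponent): with product $e_ae_b=\mathbb{L}^{\langle a,b\rangle}e_{a+b}$ on $\Gamma=\mathbb{Z}^2$, the torus is noncommutative exactly when the skew form is nonzero, and commutative when it vanishes, as happens for Higgs sheaves:

```python
import sympy as sp

L = sp.symbols('L')

def e_product(a, b, skew):
    # Product e_a e_b = L^{<a,b>} e_{a+b}, where the skew form is
    # <a,b> = skew * (a1*b2 - a2*b1) on Gamma = Z^2.
    pairing = skew * (a[0] * b[1] - a[1] * b[0])
    return (L**pairing, (a[0] + b[0], a[1] + b[1]))

a, b = (1, 0), (0, 1)
# With a nonzero skew form the torus is noncommutative:
assert e_product(a, b, 1) != e_product(b, a, 1)
# For Higgs sheaves the skew-symmetrized Euler form vanishes, so the torus
# R_C = Mot[z^{±1}, w^{±1}] is commutative:
assert e_product(a, b, 0) == e_product(b, a, 0)
```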
The stability structure comes from the central charge $Z(F)=-\deg F+\sqrt{-1}\rk F$. The strict sector~$V$ is the second quadrant $\{x\le 0, y\ge 0\}$ in the plane $\mathbb{R}^2_{(x,y)}$. Therefore, our generating functions are series in two variables. It seems plausible that the whole theory of this paper can be developed for an arbitrary strict sector.
Following~\cite{SchiffmannIndecomposable}, we also use a slightly different framework, where a stability structure is imposed on the coherent sheaf itself rather than on the pair consisting of a sheaf and a Higgs field. Although in this case we do not have a stability structure on the category of Higgs sheaves, the natural forgetful functor from Higgs sheaves to coherent sheaves allows us to utilize the methods of~\cite{KontsevichSoibelman08}.
\subsection{Further direction of work}
There are several questions that arise naturally in relation to our work.
(1) Generalization to the moduli stacks of $G$-connections (and Higgs $G$-bundles), where $G$ is an arbitrary reductive group. This would require a substantial change in the techniques, since the underlying categories are not additive.
(2) Generalization to the moduli stacks of connections and Higgs bundles with singularities. In the case of regular singularities, one can fix the types of parabolic structures at singular points and look for the motivic class of the moduli stack of parabolic connections and Higgs bundles. The paper~\cite{ChuangDiaconescuDonagiPantev}, although conjectural, contains an alternative approach to the problem via upgrading the computation of the motivic class of Higgs bundles to the problem about refined Pandharipande--Thomas invariants on the non-compact Calabi--Yau 3-fold associated with the spectral curve. The main target of that paper is the HLRV conjecture from~\cite{HLRV} and its generalizations. A different approach to parabolic Higgs bundles on the projective line was suggested (in the case of finite fields) in~\cite{LetellierParabolicHiggs}.
It looks plausible that the techniques of motivic Hall algebras employed in this paper can be used in the parabolic case as well (see e.g.~\cite{LinSpherical} for the case of Hall algebras over a finite field).
The case of irregular singularities is less developed. Although the structure of the moduli stacks of Higgs bundles and connections is understood to some extent (see e.g.~\cite{KontsevichSoibelman13},~\cite{SzaboIrregularHiggs}), the actual computations are not easy (see~\cite{HauselMerebWong} as well as~\cite{Diaconescu},~\cite{DiaconescuDonagiPantev}).
(3) The relation to the HLRV conjecture in the parabolic and (especially) the irregular case is another natural question. So far, the formulas obtained by Mozgovoy and Schiffmann give an answer a priori different from the one expected in~\cite{HLRV}. Since our approach is a motivic version of the one of Mozgovoy and Schiffmann, the same discrepancy is expected for the generalizations as well.
(4) Since the category of Higgs bundles on a curve is an example of a 2-dimensional Calabi--Yau category, one can ask which of our results hold for more general 2CY categories. An interesting class of such categories was proposed in~\cite{RenSoibelman} in relation to semicanonical bases. It seems plausible that Cohomological Hall algebras provide the right framework for many questions arising from the HLRV conjecture. We should note that in~\cite{KontsevichSoibelman10} the authors used motivic groups different from those considered in this paper. However, the motivic Donaldson--Thomas series obtained for the two different versions of motivic groups in~\cite{KontsevichSoibelman08} and~\cite{KontsevichSoibelman10} agree in the end. On the other hand, motivic DT-invariants appear naturally in relation to the HLRV conjecture. This remark explains our optimism concerning the relation of Cohomological Hall algebras and motivic classes of Higgs bundles (and connections) in all the above-mentioned cases.
\subsection{Plan of the paper}
In Section~\ref{sect:MotFun} we discuss motivic classes of stacks. This material is standard and is presented here for the reader's convenience.
In Section~\ref{sect:Conn=Higgs} we introduce various stacks and provide relations between their motivic classes. In particular, we prove Theorem~\ref{th:conn=higgs} and give a relation between the moduli stacks of Higgs bundles, moduli stacks of vector bundles with endomorphisms, and moduli stacks of vector bundles with nilpotent endomorphisms.
In Section~\ref{sect:Borel} we prove Theorem~\ref{th:IntroHarder}. This statement was not known in the motivic setup and, in a sense, was the main stumbling block for rewriting the results of~\cite{SchiffmannIndecomposable} and~\cite{MozgovoySchiffmanOnHiggsBundles} in the motivic situation.
In Section~\ref{sect:Hall} we discuss the motivic Hall algebra of the category of coherent sheaves on a curve. We do some explicit calculations in this Hall algebra. These calculations are used in Section~\ref{sect:Proofs} to prove Theorems~\ref{th:NilpEnd} and~\ref{th:ExplAnsw}.
\subsection{Acknowledgments} We thank D.~Arinkin, L.~Borisov, A.~Braverman, P.~Deligne, E.~Diaconescu, S.~Gusein-Zade, T.~Hausel, J.~Heinloth, O.~Schiffmann, and M.~Smirnov for useful discussions and correspondence. A part of this work was done while R.F.~was an Assistant Professor at Kansas State University; another part was done while R.F.~was visiting the Max Planck Institute for Mathematics in Bonn. The work of R.F.~was partially supported by NSF grant DMS--1406532. Y.S.~thanks IHES for excellent research conditions and hospitality. His work was partially supported by NSF grants.
\section{Stack motivic functions and constructible subsets of stacks}\label{sect:MotFun}
In this section $\mathbf k$ is a field of any characteristic. Recall that in this paper we only work with Artin stacks locally of finite type over a field whose groups of stabilizers are affine. According to~\cite[Prop.~3.5.6, Prop.~3.5.9]{KreschStacks} every such stack has a stratification by global quotients of the form $X/\GL_n$, where $X$ is a scheme. We will often use this result below. In this section, we define the group of motivic functions on such a stack $\mathcal X$ (Notation: $\mathrm{Mot}(\mathcal X)$). Motivic functions on Artin or, more generally, constructible stacks were studied by different authors: see, for example,~\cite{Joyce07}, \cite{KontsevichSoibelman08} (or~\cite[Sect.~1]{Ekedahl09} in the case when $\mathcal X$ is the spectrum of a field), so no results of this section are really new. We have included this section for convenience of the reader and in order to fix the notation.
Recall from~\cite[Ch.~5]{LaumonMoretBailly} the notion of points of a $\mathbf k$-stack $\mathcal S$. Let $K\supset\mathbf k$ be a field extension. By a $K$-point of $\mathcal S$ we mean an object of the groupoid $\mathcal S(K)$. A $K'$-point $\xi'$ of $\mathcal S$ is \emph{equivalent\/} to a $K''$-point $\xi''$ of $\mathcal S$ if there is an extension $K\supset\mathbf k$ and $\mathbf k$-embeddings $K'\hookrightarrow K$, $K''\hookrightarrow K$ such that $\xi'_K$ is isomorphic to $\xi''_K$ (as an object of $\mathcal S(K)$). The set of equivalence classes of points of $\mathcal S$ is denoted by $|\mathcal S|$; this set carries the Zariski topology.\footnote{If $Y\to\mathcal S$ is a surjective 1-morphism, where $Y$ is a scheme, then every point of $\mathcal S$ is equivalent to a $K$-point, where $K$ is a residue field of a point of $Y$. This explains why $|\mathcal S|$ is a set rather than a class.} If $Y$ is a scheme, then $|Y|$ is identified with the underlying set of $Y$. A 1-morphism $F:\mathcal S\to\mathcal S'$ induces a continuous map $|F|:|\mathcal S|\to|\mathcal S'|$.
\subsection{Stack motivic functions}\label{sect:MotFun1} Let $\mathcal X$ be an Artin stack of finite type. The abelian group $\mathrm{Mot}(\mathcal X)$ is the group generated by the isomorphism classes of finite type 1-morphisms $\mathcal Y\to\mathcal X$ modulo relations
(i) $[\mathcal Y_1\to\mathcal X]=[\mathcal Y_2\to\mathcal X]+[(\mathcal Y_1-\mathcal Y_2)\to\mathcal X]$ whenever $\mathcal Y_2$ is a closed substack of $\mathcal Y_1$.
(ii) If $\pi:\mathcal Y_1\to\mathcal X$ is a 1-morphism of finite type and $\psi:\mathcal Y_2\to\mathcal Y_1$ is a vector bundle of rank $r$, then
\[
[\mathcal Y_2\xrightarrow{\pi\circ\psi}\mathcal X]=[\mathcal Y_1\times\mathbb A_\mathbf k^r\xrightarrow{\pi\circ p_1}\mathcal X].
\]
We call $\mathrm{Mot}(\mathcal X)$ the \emph{group of stack motivic functions on $\mathcal X$} (usually we will drop the word ``stack''). We write $[\mathcal Y]$ instead of $[\mathcal Y\to\Spec\mathbf k]$. We write $\mathrm{Mot}(\mathbf k)$ instead of $\mathrm{Mot}(\Spec\mathbf k)$. We denote by $\mathbb{L}\in\mathrm{Mot}(\mathbf k)$ the element $[\mathbb A^1_{\mathbf k}]$ (called the Lefschetz motive). Note that motivic functions `don't feel nilpotents' in the sense that $[\mathcal X\to\mathcal Y]=[\mathcal X_{red}\to\mathcal Y]$.
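For example, relation (i), combined with the multiplicativity $[\mathbb A^n_\mathbf k]=\mathbb{L}^n$ (see the discussion of the ring structure below), already determines the classes of projective spaces: stratifying $\mathbb P^n_\mathbf k$ as the union of a hyperplane $\mathbb P^{n-1}_\mathbf k$ and its affine complement gives
\[
[\mathbb P^n_\mathbf k]=[\mathbb P^{n-1}_\mathbf k]+[\mathbb A^n_\mathbf k],
\qquad\text{whence}\qquad
[\mathbb P^n_\mathbf k]=\mathbb{L}^n+\mathbb{L}^{n-1}+\ldots+\mathbb{L}+1.
\]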
For a 1-morphism $f:\mathcal X\to\mathcal Y$ of stacks of finite type, we have the pullback homomorphism $f^*:\mathrm{Mot}(\mathcal Y)\to\mathrm{Mot}(\mathcal X)$, such that $f^*([\mathcal S\to\mathcal Y])=[\mathcal S\times_\mathcal Y\mathcal X\to\mathcal X]$ (note that $f$ is not necessarily of finite type because $\mathcal X$ and $\mathcal Y$ may be stacks over different fields). For $A\in\mathrm{Mot}(\mathcal Y)$ we sometimes write $A|_\mathcal X$ instead of $f^*A$.
Next, if $f:\mathcal X\to\mathcal Y$ is of finite type, then we have the pushforward homomorphism $f_!:\mathrm{Mot}(\mathcal X)\to\mathrm{Mot}(\mathcal Y)$, such that $f_!([\pi:\mathcal S\to\mathcal X])=[f\circ\pi:\mathcal S\to\mathcal Y]$. For $\mathbf k$-stacks of finite type $\mathcal X$ and $\mathcal Y$ we also have an external product $\boxtimes:\mathrm{Mot}(\mathcal X)\otimes_\mathbb{Z} \mathrm{Mot}(\mathcal Y)\to\mathrm{Mot}(\mathcal X\times\mathcal Y)$.
We have the usual properties of pullbacks and pushforwards: $(fg)^*=g^*f^*$, $(fg)_!=f_!g_!$, and base change; the easy proofs are left to the reader.
If $\mathcal X$ is a stack locally of finite type, denote by $\Opf(\mathcal X)$ the set of finite type open substacks $\mathcal U\subset\mathcal X$ ordered by inclusion. Set
\[
\mathrm{Mot}(\mathcal X):=\lim_{\longleftarrow}\mathrm{Mot}(\mathcal U)
\]
where the limit is taken over the partially ordered set $\Opf(\mathcal X)$. In other words, a motivic function on $\mathcal X$ amounts to a motivic function $A_\mathcal U$ on each $\mathcal U\in\Opf(\mathcal X)$ such that $(A_\mathcal U)|_\mathcal W=A_\mathcal W$, whenever $\mathcal W\subset\mathcal U$.
If $\mathcal X\to\mathcal Y$ is a 1-morphism of finite type (but $\mathcal Y$ is not necessarily of finite type), we write $[\mathcal X\to\mathcal Y]\in\mathrm{Mot}(\mathcal Y)$ for the inverse system given by $\mathcal U\mapsto\mathcal X\times_\mathcal Y\mathcal U$. Put $\mathbf1_\mathcal X:=[\mathcal X\to\mathcal X]\in\mathrm{Mot}(\mathcal X)$.
It is straightforward to check that the pullback extends to any 1-morphism of stacks, while the pushforward extends to any 1-morphism of finite type.
Set
\[
\mathrm{Mot}^{fin}(\mathcal X):=\cup_{\mathcal U\in\Opf(\mathcal X)}(j_\mathcal U)_!\mathrm{Mot}(\mathcal U)\subset\mathrm{Mot}(\mathcal X),
\]
where $j_\mathcal U:\mathcal U\to\mathcal X$ is the open immersion. This is the group of ``motivic functions with finite support''. Note that the pushforward and the pullback preserve $\mathrm{Mot}^{fin}$, provided the 1-morphism is of finite type. In fact, the pushforward $f_!:\mathrm{Mot}^{fin}(\mathcal X)\to\mathrm{Mot}^{fin}(\mathcal Y)$ may be defined for all morphisms locally of finite type. The importance of $\mathrm{Mot}^{fin}(\mathcal X)$ will be clear in Section~\ref{sect:BilinearForm}.
Next, the direct product makes $\mathrm{Mot}(\mathbf k)$ into a commutative associative unital ring. For any $\mathbf k$-stack $\mathcal X$ the group $\mathrm{Mot}(\mathcal X)$ is a module over $\mathrm{Mot}(\mathbf k)$. Moreover, pullbacks and pushforwards are homomorphisms of modules. (In fact, the fiber product makes any $\mathrm{Mot}(\mathcal X)$ into a ring, but we will not use this structure when $\mathcal X$ is not the spectrum of a field.)
Let $\mathcal X$ be an Artin stack of finite type. Let $F^m\mathrm{Mot}(\mathcal X)\subset\mathrm{Mot}(\mathcal X)$ be the subgroup generated by $\mathcal X$-stacks of relative dimension $\le-m$. We denote the completion with respect to this filtration by $\overline{\Mot}(\mathcal X)$ and call it \emph{the completed group of stack motivic functions}. Note that the operations $f^*$ and $f_!$ are continuous with respect to the topology given by the filtrations, so both the pullback and the pushforward extend to completed groups by continuity. We write $\overline{\Mot}(\mathbf k)$ instead of $\overline{\Mot}(\Spec\mathbf k)$.
If $\mathcal X$ is an Artin stack locally of finite type, we define
\[
\overline{\Mot}(\mathcal X):=\lim_{\longleftarrow}\overline{\Mot}(\mathcal U),
\]
where the inverse limit is taken over the partially ordered set $\Opf(\mathcal X)$. Note that $\overline{\Mot}(\mathcal X)$ has the inverse limit topology: a subset of $\overline{\Mot}(\mathcal X)$ is open if and only if it is a preimage of an open subset in $\overline{\Mot}(\mathcal U)$ for some $\mathcal U\in\Opf(\mathcal X)$. It follows that a sequence $A_n\in\overline{\Mot}(\mathcal X)$ converges to $A$ if and only if for all $\mathcal U\in\Opf(\mathcal X)$ we have $\lim_{n\to\infty}(A_n|_\mathcal U)=A|_\mathcal U$.
Everything we said about the groups of stack motivic functions extends to completed groups by continuity. In particular, we can extend the pullbacks to all 1-morphisms of stacks, and the pushforwards to all 1-morphisms of finite type. The product on $\mathrm{Mot}(\mathbf k)$ extends by continuity to $\overline{\Mot}(\mathbf k)$. The $\mathrm{Mot}(\mathbf k)$-module structure on $\mathrm{Mot}(\mathcal X)$ extends to a $\overline{\Mot}(\mathbf k)$-module structure on $\overline{\Mot}(\mathcal X)$. We also define
\[
\overline{\Mot}^{fin}(\mathcal X):=\cup_{\mathcal U\in\Opf(\mathcal X)}(j_\mathcal U)_!\overline{\Mot}(\mathcal U)\subset\overline{\Mot}(\mathcal X)
\]
and note that the pushforward $f_!:\overline{\Mot}^{fin}(\mathcal X)\to\overline{\Mot}^{fin}(\mathcal Y)$ may be defined for all morphisms locally of finite type.
We do not know whether the natural morphism $i:\mathrm{Mot}(\mathcal X)\to\overline{\Mot}(\mathcal X)$ is injective. However, we abuse notation by writing $A$ instead of $i(A)$, that is, by viewing an element of $\mathrm{Mot}(\mathcal X)$ as an element of $\overline{\Mot}(\mathcal X)$ if convenient.
\subsection{Algebraic groups}
\begin{lemma}\label{lm:GLnBun}
Let $\GL_n$ act on an Artin stack $\mathcal X$. Then we have in $\mathrm{Mot}(\mathcal X/\GL_n)$
\[
[\GL_n]\mathbf1_{\mathcal X/\GL_n}=[\mathcal X\to\mathcal X/\GL_n].
\]
{\rm(}We are using the $\mathrm{Mot}(\mathbf k)$-module structure on $\mathrm{Mot}(\mathcal X/\GL_n)${\rm)}.
\end{lemma}
\begin{proof}
Since $\mathcal X$ is locally of finite type, $\mathcal X/\GL_n$ is locally of finite type as well. Thus we may assume that $\mathcal X$ is of finite type. Recall that we have
\begin{equation}\label{eq:MotGL}
[\GL_n]=\prod_{i=0}^{n-1}(\mathbb{L}^n-\mathbb{L}^i).
\end{equation}
Set $\mathcal Y:=\mathcal X/\GL_n$. We use induction on $n$. The case $n=0$ is obvious. Let $\mathcal V:=\mathbb A^n\times^{\GL_n}\mathcal X$ be the rank $n$ vector bundle on $\mathcal Y$ associated with the principal $\GL_n$-bundle $\mathcal X\to\mathcal Y$. More precisely, we have $\mathcal V=(\mathbb A^n\times\mathcal X)/\GL_n$, where $\GL_n$ acts on $\mathbb A^n$ via the standard representation. Set $\mathcal V':=(\mathbb A^n-\{0\})\times^{\GL_n}\mathcal X$ so that $\mathcal V'$ is the complement of the zero section in $\mathcal V$. Thus we have
\begin{equation}\label{eq:VectBun}
\mathbb{L}^n\mathbf1_\mathcal Y=[\mathcal V\to\mathcal Y]=[\mathcal V'\to\mathcal Y]+\mathbf1_\mathcal Y.
\end{equation}
Let $\GL_{n-1}\subset\GL_n$ be the subgroup of block matrices of the form
\[
\begin{bmatrix}
1 & 0\\0 & *
\end{bmatrix}.
\]
We claim that $\mathcal X/\GL_{n-1}$ is a rank $n-1$ vector bundle on $\mathcal V'$. Indeed, consider the 1-morphism $\mathcal X\to(\mathbb A^n-\{0\})\times\mathcal X$ sending $x$ to $((1,0,\ldots,0),x)$. Its composition with the projection to $\mathcal V'$ is $\GL_{n-1}$-invariant because $\GL_{n-1}$ stabilizes $(1,0,\ldots,0)$. Thus we get a 1-morphism $\mathcal X/\GL_{n-1}\to\mathcal V'$, and we need to show that it is a rank $n-1$ vector bundle. It is enough to check this after a smooth base change, so we may assume that $\mathcal X=\GL_n\times\mathcal Y$, where $\GL_n$ acts on the first factor. In this case the statement is standard.
By the induction hypothesis, we get in $\mathrm{Mot}(\mathcal X/\GL_{n-1})$: $[\mathcal X\to\mathcal X/\GL_{n-1}]=[\GL_{n-1}]\mathbf1_{\mathcal X/\GL_{n-1}}$. Applying~$f_!$ to both sides, where $f:\mathcal X/\GL_{n-1}\to\mathcal Y$ is the projection, we get $[\mathcal X\to\mathcal Y]=\mathbb{L}^{n-1}[\GL_{n-1}][\mathcal V'\to\mathcal Y]$. Combining this with~\eqref{eq:VectBun} and~\eqref{eq:MotGL}, we get the statement of the lemma.
\end{proof}
\begin{corollary}\label{cor:GLnTorsor}
Assume that $\mathcal X$ is a stack of finite type over a stack $\mathcal S$ and that the action of $\GL_n$ on $\mathcal X$ commutes with the projection to $\mathcal S$. Then we have in $\mathrm{Mot}(\mathcal S)$
\[
[\mathcal X\to\mathcal S]=[\GL_n][\mathcal X/\GL_n\to\mathcal S].
\]
\end{corollary}
\begin{proof}
Apply $f_!$ to the equality given by the above lemma, where $f:\mathcal X/\GL_n\to\mathcal S$ is the structure 1-morphism.
\end{proof}
Recall that an algebraic $\mathbf k$-group $G$ is called special, if every principal $G$-bundle on a scheme is Zariski locally trivial\footnote{We always assume $\mathbf k$-groups to be smooth, which is automatic if $\mathbf k$ has characteristic zero.}. Note that if $V$ is a $\mathbf k$-vector space, then $V$ (with its additive group structure) can be viewed as an algebraic $\mathbf k$-group; it is easily seen to be special (cf.~\cite[Sect.~2]{Joyce07}).
\begin{lemma}\label{lm:MotBG}
Let $G$ be a special $\mathbf k$-group, $\mathrm BG$ be the classifying stack of $G$. Then we have $[G][\mathrm BG]=1$ in $\mathrm{Mot}(\mathbf k)$.
\end{lemma}
\begin{proof}
First of all, in the case $G=\GL_n$ the statement follows easily from Corollary~\ref{cor:GLnTorsor}. In particular, the class of $\GL_n$ is invertible in $\mathrm{Mot}(\mathbf k)$.
Consider a closed embedding $G\to\GL_n$ and note that the quotient $\GL_n/G$ is a scheme. Since $G$ is special, we have $[\GL_n]=[G][\GL_n/G]$. On the other hand, by Corollary~\ref{cor:GLnTorsor}, we get
\[
[\GL_n/G]=[\GL_n][(\GL_n/G)/\GL_n]=[\GL_n][\mathrm BG].
\]
The lemma follows easily from the two equations and the fact that $[\GL_n]$ is invertible in $\mathrm{Mot}(\mathbf k)$.
\end{proof}
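For example, since $\GL_n$ is special, Lemma~\ref{lm:MotBG} together with~\eqref{eq:MotGL} gives
\[
[\mathrm B\mathbb{G}_m]=(\mathbb{L}-1)^{-1},
\qquad
[\mathrm B\GL_n]=\prod_{i=0}^{n-1}(\mathbb{L}^n-\mathbb{L}^i)^{-1},
\]
where the inverses exist already in $\mathrm{Mot}(\mathbf k)$ by the first paragraph of the proof.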
\subsection{Bilinear form}\label{sect:BilinearForm}
Let $\mathcal Z$ be a $\mathbf k$-stack of finite type. If $\mathcal X$ and $\mathcal Y$ are of finite type over $\mathcal Z$, we set
\[
([\mathcal X\to\mathcal Z]|[\mathcal Y\to\mathcal Z])=[\mathcal X\times_{\mathcal Z}\mathcal Y].
\]
Extending this by bilinearity, we get a symmetric bilinear form $\mathrm{Mot}(\mathcal Z)\otimes\mathrm{Mot}(\mathcal Z)\to\mathrm{Mot}(\mathbf k)$. We extend this by continuity to a symmetric form $\overline{\Mot}(\mathcal Z)\otimes\overline{\Mot}(\mathcal Z)\to\overline{\Mot}(\mathbf k)$.
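Note that for $\mathcal Z=\Spec\mathbf k$ the form recovers the product on $\mathrm{Mot}(\mathbf k)$:
\[
([\mathcal X]|[\mathcal Y])=[\mathcal X\times_{\Spec\mathbf k}\mathcal Y]=[\mathcal X][\mathcal Y].
\]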
Now, let $\mathcal Z$ be a $\mathbf k$-stack locally of finite type. Let $A\in\overline{\Mot}^{fin}(\mathcal Z)$, $B\in\overline{\Mot}(\mathcal Z)$. Write $A=j_!A_\mathcal V$, where $\mathcal V\in\Opf(\mathcal Z)$ and $j:\mathcal V\to\mathcal Z$ is the open immersion. Let $B$ be given by an inverse system $\mathcal U\mapsto B_\mathcal U$, where $\mathcal U$ ranges over $\Opf(\mathcal Z)$. Set $(A|B):=(A_\mathcal V|B_\mathcal V)$. One checks that this does not depend on the choice of $\mathcal V$ (to prove this, one first proves Lemma~\ref{lm:MotBilProd} below in the case when $\mathcal Z$ and $\mathcal Z'$ are of finite type over~$\mathbf k$). In this way, we get a continuous bilinear form
\[
(\bullet|\bullet):\overline{\Mot}^{fin}(\mathcal Z)\otimes\overline{\Mot}(\mathcal Z)\to\overline{\Mot}(\mathbf k).
\]
Note that the restriction of this form to $\overline{\Mot}^{fin}(\mathcal Z)\otimes\overline{\Mot}^{fin}(\mathcal Z)$ is symmetric.
We abuse notation by sometimes writing $(A|B)$ instead of $(B|A)$ when $B\in\overline{\Mot}^{fin}(\mathcal Z)$ but $A\notin\overline{\Mot}^{fin}(\mathcal Z)$.
The following lemma is immediate.
\begin{lemma}\label{lm:MotBilProd} If $f:\mathcal Z\to\mathcal Z'$ is a 1-morphism of finite type, then
(i)
for all $A\in\overline{\Mot}^{fin}(\mathcal Z)$, $B\in\overline{\Mot}(\mathcal Z')$ we have
\[
(A|f^*B)=(f_!A|B).
\]
(ii) for all $A\in\overline{\Mot}^{fin}(\mathcal Z')$, $B\in\overline{\Mot}(\mathcal Z)$ we have
\[
(f^*A|B)=(A|f_!B).
\]
\end{lemma}
\subsection{Constructible subsets of stacks}\label{sect:Constr}
Let $\mathcal S$ be an Artin stack of finite type over $\mathbf k$. A subset $\mathcal X\subset|\mathcal S|$ is called \emph{constructible\/} if it belongs to the Boolean algebra generated by the sets of points of open substacks of $\mathcal S$. A \emph{stratification\/} of a constructible subset $\mathcal X\subset|\mathcal S|$ is a finite collection $\mathcal T_i$ of constructible subsets of $\mathcal S$ such that $\mathcal X=\sqcup_i\mathcal T_i$.
Let $\mathcal X$ be a constructible subset of a finite type stack $\mathcal S$. Consider a stratification $\mathcal X=\sqcup_i|\mathcal Y_i|$, where $\mathcal Y_i$ are locally closed substacks. Set
\[
\mathbf1_{\mathcal X,\mathcal S}:=\sum_i[\mathcal Y_i\to\mathcal S]\in\mathrm{Mot}(\mathcal S).
\]
It is easy to see that $\mathbf1_{\mathcal X,\mathcal S}$ does not depend on the stratification of $\mathcal X$.
If $\mathcal S$ is a stack locally of finite type, we call $\mathcal X\subset|\mathcal S|$ \emph{constructible\/}, if for every $\mathcal U\in\Opf(\mathcal S)$ the set $\mathcal X\cap|\mathcal U|$ is a constructible subset of $\mathcal U$. In this case, we define $\mathbf1_{\mathcal X,\mathcal S}$ via the inverse system $\mathcal U\mapsto\mathbf1_{\mathcal X\cap|\mathcal U|,\mathcal U}$. We sometimes write $\mathbf1_\mathcal X$ instead of $\mathbf1_{\mathcal X,\mathcal S}$, when $\mathcal S$ is clear.
If $g:\mathcal S\to\mathcal T$ is a 1-morphism of finite type and $\mathcal X\subset|\mathcal S|$ is a constructible subset, we use the notation $[\mathcal X\to\mathcal T]:=g_!\mathbf1_{\mathcal X,\mathcal S}$. When $\mathcal T=\Spec\mathbf k$, we write $[\mathcal X]$ for $[\mathcal X\to\Spec\mathbf k]$.
A constructible subset $\mathcal X\subset|\mathcal S|$ is of \emph{finite type\/}, if there is $\mathcal U\in\Opf(\mathcal S)$ such that $\mathcal X\subset|\mathcal U|$. In this case, $\mathbf1_{\mathcal X,\mathcal S}\in\mathrm{Mot}^{fin}(\mathcal S)$, so if $g:\mathcal S\to\mathcal T$ is a 1-morphism locally of finite type, then we can define $[\mathcal X\to\mathcal T]:=g_!\mathbf1_{\mathcal X,\mathcal S}$.
Let $\mathcal S$ and $\mathcal S'$ be stacks of finite type and $\mathcal X\subset|\mathcal S|$, $\mathcal X'\subset|\mathcal S'|$ be their constructible subsets. Let $\mathcal X=\sqcup_i|\mathcal T_i|$, $\mathcal X'=\sqcup_j|\mathcal T'_j|$ be their stratifications by locally closed substacks. We define the product of $\mathcal X$ and $\mathcal X'$ via $\mathcal X\times\mathcal X'=\sqcup_{i,j}|\mathcal T_i\times\mathcal T'_j|$.\footnote{Note that this is not the usual product of sets. The reason is that for stacks (even schemes) $\mathcal T$ and $\mathcal T'$ we have in general $|\mathcal T\times\mathcal T'|\ne|\mathcal T|\times|\mathcal T'|$.} It is easy to check that this product does not depend on the stratifications and that we have $[\mathcal X\times\mathcal X']=[\mathcal X][\mathcal X']$ in $\mathrm{Mot}(\mathbf k)$. It is also easy to extend the definition to any finite number of factors.
\begin{remark}\label{rm:KS}
Our definition of motivic functions is essentially equivalent to that of~\cite[Sect.~4.2]{KontsevichSoibelman08}. In~\cite{KontsevichSoibelman08} a category of constructible stacks is defined. Intuitively, constructible stacks are Artin stacks ``up to stratification''. Precisely, the objects of the category are pairs $(X,G)$, where $X$ is a $\mathbf k$-scheme of finite type and $G$ is a linear algebraic group acting on $X$. We will not spell out the precise definition of morphisms here but note that one can define an equivalent category as a category whose objects are pairs $(\mathcal X,\mathcal S)$, where $\mathcal X$ is a constructible subset of the stack $\mathcal S$. The equivalence of categories is given by $(X,G)\mapsto(|X/G|,X/G)$. In loc.~cit.~the group of stack motivic functions is defined over a constructible stack.
\end{remark}
\subsection{Relation with motivic classes of varieties}\label{sect:MotVar}
Define $\mathrm{Mot}_{var}(\mathbf k)$ as the abelian group generated by the isomorphism classes of $\mathbf k$-varieties (=reduced schemes of finite type over $\mathbf k$) subject to the relation $[Z_1]=[Z_2]+[(Z_1-Z_2)]$ whenever $Z_2$ is a closed subvariety of $Z_1$. The direct product equips $\mathrm{Mot}_{var}(\mathbf k)$ with a ring structure. There is an obvious homomorphism $\mathrm{Mot}_{var}(\mathbf k)\to\mathrm{Mot}(\mathbf k)$. This homomorphism clearly extends to the localization
\[
\mathrm{Mot}_{var}(\mathbf k)[\mathbb{L}^{-1},(\mathbb{L}^i-1)^{-1}|i>0]\to\mathrm{Mot}(\mathbf k).
\]
It is easy to see that the above homomorphism is an isomorphism (see e.g.~\cite[Thm.~1.2]{Ekedahl09}).
Following~\cite[Sect.~2.1]{BehrendDhillon}, we define the dimensional completion of $\mathrm{Mot}_{var}(\mathbf k)$ as follows. Denote by $F^m\mathrm{Mot}_{var}(\mathbf k)[\mathbb{L}^{-1}]$ the subgroup of the localization $\mathrm{Mot}_{var}(\mathbf k)[\mathbb{L}^{-1}]$ generated by $[X]/\mathbb{L}^n$ with $\dim X-n\le-m$. This is a ring filtration and we define the completed ring $\overline{\Mot}_{var}(\mathbf k)$ as the completion of $\mathrm{Mot}_{var}(\mathbf k)[\mathbb{L}^{-1}]$ with respect to this filtration. Obviously, $\mathbb{L}$ and $\mathbb{L}^i-1$ are invertible in $\overline{\Mot}_{var}(\mathbf k)$ so we have a homomorphism $\mathrm{Mot}(\mathbf k)\to\overline{\Mot}_{var}(\mathbf k)$. It is not difficult to show that this extends by continuity to an isomorphism
$\overline{\Mot}(\mathbf k)\xrightarrow{\simeq}\overline{\Mot}_{var}(\mathbf k)$.
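For example, the element $\mathbb{L}-1$ acquires an explicit inverse in the completion: the terms $\mathbb{L}^{-n}$ lie in $F^n\mathrm{Mot}_{var}(\mathbf k)[\mathbb{L}^{-1}]$, so the geometric series below converges, and a telescoping computation gives
\[
(\mathbb{L}-1)\sum_{n=1}^{N}\mathbb{L}^{-n}=1-\mathbb{L}^{-N}\xrightarrow[N\to\infty]{}1,
\qquad\text{so}\qquad
(\mathbb{L}-1)^{-1}=\sum_{n=1}^{\infty}\mathbb{L}^{-n}\in\overline{\Mot}_{var}(\mathbf k).
\]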
Recall that in Section~\ref{sect:zeta} we defined motivic zeta-functions of varieties. Now we can define the zeta-function of a motivic class (see eq.~\eqref{eq:zeta}), when $\mathbf k$ is a field of characteristic zero. Note the following well-known statement.
\begin{lemma}\label{lm:zeta}
(i) If $Z\subset Y$ is a closed subvariety, then $\zeta_Y=\zeta_Z\zeta_{Y-Z}$.
(ii) $\zeta_{\mathbb A^n\times Y}(z)=\zeta_Y(\mathbb{L}^n z)$.
\end{lemma}
Now we can use $(i)$ to extend zeta-functions to $\mathrm{Mot}_{var}(\mathbf k)$ and use $(ii)$ to extend to $\mathrm{Mot}_{var}(\mathbf k)[\mathbb{L}^{-1}]$. It remains to extend $\zeta$ to $\overline{\Mot}_{var}(\mathbf k)=\overline{\Mot}(\mathbf k)$ by continuity.
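As an illustration (recall from Section~\ref{sect:zeta} that the motivic zeta-function of a variety $X$ is the series $\zeta_X(z)=\sum_{n\ge0}[\mathrm{Sym}^n X]z^n$): since $\mathrm{Sym}^n(\Spec\mathbf k)=\Spec\mathbf k$, we have $\zeta_{\Spec\mathbf k}(z)=(1-z)^{-1}$, and applying (i) to a point of $\mathbb P^1$ together with (ii) gives
\[
\zeta_{\mathbb P^1}(z)=\zeta_{\Spec\mathbf k}(z)\,\zeta_{\mathbb A^1}(z)=\frac{1}{(1-z)(1-\mathbb{L}z)}.
\]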
\subsection{Checking equality of motivic functions fiberwise}
The following statement will be our primary way to check that motivic functions are equal.
\begin{proposition}\label{pr:pointwise zero}
Let $A,B\in\mathrm{Mot}(\mathcal X)$ be motivic functions. Assume that for any field $K$ and any point $\xi:\Spec K\to\mathcal X$ we have $\xi^*A=\xi^*B$. Then $A=B$.
\end{proposition}
Viewing $\xi^*A$ as the ``value'' of $A$ at $\xi$, we can reformulate the proposition as the statement that equality of motivic functions can be checked pointwise.
\begin{proof}
We may assume that $B=0$ and that $\mathcal X$ is of finite type. If $\mathcal X=\sqcup_i\mathcal T_i$ is a stratification of $\mathcal X$ by locally closed substacks, and for all $i$ we have $A|_{\mathcal T_i}=0$, then $A=0$. Thus, using~\cite[Prop.~3.5.6, Prop.~3.5.9]{KreschStacks}, we may assume that $\mathcal X=X/\GL_n$ is a global quotient, where $X$ is a scheme of finite type over a field. By Lemma~\ref{lm:GLnBun} it is enough to show that the pullback of $A$ to $X$ is zero, so we may assume that $\mathcal X=X$ is a scheme. We may also assume that $X$ is integral. Let $\xi:\Spec K\to X$ be the generic point. It is enough to show that $\xi^*A=0$ implies that there is an open subset $U\subset X$ such that $A|_U=0$.
Next, multiplying $A$ by an invertible element of $\mathrm{Mot}(\mathbf k)$, we may assume that $A=\sum_i n_i[V_i\to X]$, where $V_i$ are $X$-schemes. Indeed, we may assume that $A$ is a combination of classes of stacks of the form $V/\GL_n$ but we have $[\GL_n][V/\GL_n\to X]=[V\to X]$ by Corollary~\ref{cor:GLnTorsor}.
Now by~\cite[Thm.~1.2]{Ekedahl09}, multiplying once more by an invertible element of $\mathrm{Mot}(\mathbf k)$ if necessary, we may assume that in the free abelian group generated by isomorphism classes of $K$-varieties we have
\[
\sum_i n_i[(V_i)_\xi]=\sum_i m_i([Y_i]-[Z_i]-[Y_i-Z_i]),
\]
where $Y_i$ are affine $K$-varieties and $Z_i$ are their closed subvarieties. Clearing denominators, we see that there is an open subset $W\subset X$, varieties $Y'_i$ over $W$, and their closed subvarieties $Z'_i$ such that $(Y'_i)_\xi\approx Y_i$, and under this isomorphism $(Z'_i)_\xi$ goes to $Z_i$.
Thus $\sum_i n_i[(V_i)_\xi]=\sum_i m_i([(Y'_i)_\xi]-[(Z'_i)_\xi]-[(Y'_i)_\xi-(Z'_i)_\xi])$. It follows that there is an open subset $U\subset W$ such that
\[
\left.\left(\sum_i n_i[V_i\to X]\right)\right|_U=
\left.\left(\sum_i m_i([Y'_i\to W]-[Z'_i\to W]-[(Y'_i-Z'_i)\to W])\right)\right|_U.
\]
We see that $A|_U=0$.
\end{proof}
\begin{corollary}\label{cor:pointwEqual}
Let $f:\mathcal X\to\mathcal Y$ be a finite type 1-morphism of stacks inducing for every $K\supset\mathbf k$ an equivalence of groupoids $\mathcal X(K)\to\mathcal Y(K)$. Then $[\mathcal X\to\mathcal Y]=\mathbf1_\mathcal Y$.
\end{corollary}
\begin{proof}
We would like to apply the previous proposition. Let $\xi:\Spec K\to\mathcal Y$ be a point and let $\mathcal X_\xi$ be the $\xi$-fiber of $f$. We need to show that $[\mathcal X_\xi]=[\Spec K]$ in $\mathrm{Mot}(K)$. This is easy if $\mathcal X$ and $\mathcal Y$ are schemes: then the fiber is a 1-point scheme, so we have $[\mathcal X_\xi]=[(\mathcal X_\xi)_{red}]=[\Spec K]$.
In general, $\mathcal X_\xi$ is a $K$-stack such that for all extensions $K'\supset K$ the groupoid $\mathcal X_\xi(K')$ is equivalent to the trivial one. In particular, $|\mathcal X_\xi|$ consists of a single point. Thus, according to Kresch's result, we have $(\mathcal X_\xi)_{red}=X/\GL_n$, where $X$ is a $K$-scheme. The unique $K$-point of $\mathcal X_\xi$ gives rise to a $\GL_n$-equivariant morphism $\GL_n\to X$. Also, for any extension $K'\supset K$ this morphism induces an isomorphism $\GL_n(K')\to X(K')$. But we already know the statement for schemes, so we have $[\GL_n]=[X]$. It follows that $[\mathcal X_\xi]=[(\mathcal X_\xi)_{red}]=[X]/[\GL_n]=[\Spec K]$. Now we can apply the proposition.
\end{proof}
\section{Moduli stacks of connections, Higgs bundles, and vector bundles with nilpotent endomorphisms}\label{sect:Conn=Higgs}
In this section we introduce various stacks and provide relations between their motivic classes. In particular, we prove Theorem~\ref{th:conn=higgs} in Section~\ref{sect:CompHiggsConn}. We also give a relation between the moduli stacks of Higgs bundles, moduli stacks of vector bundles with endomorphisms (Lemma~\ref{lm:HiggsEnd}), and moduli stacks of vector bundles with nilpotent endomorphisms (Proposition~\ref{pr:NilpEndPow}). In this section $\mathbf k$ is a fixed field of characteristic zero and $K$ denotes an arbitrary field extension of $\mathbf k$.
\subsection{Krull--Schmidt theory for coherent sheaves}\label{App:KrSchm}
The results of this section are well-known but we include them here for the reader's convenience. In this section $X$ is a smooth connected projective variety over $\mathbf k$. For a vector bundle $E$ on $X$ we denote by $\End^{nil}(E)$ the nilradical of the finite dimensional $\mathbf k$-algebra $\End(E)$.
\begin{proposition}\label{pr:KrullSchmidt1}
(i) Let $F$ be an indecomposable vector bundle on $X$ and $\Psi\in\End(F)$. Then either $\Psi$ is nilpotent, or $\Psi$ is an automorphism.
(ii) Write a vector bundle $F$ as $F=\bigoplus_{i=1}^t F_i$, where $F_i\approx E_i^{\oplus n_i}$, and $E_i$ are pairwise non-isomorphic indecomposable bundles, $n_i>0$. Then we have
\[
\bigoplus_{i\ne j}\Hom(F_i,F_j)\subset\End^{nil}(F).
\]
\end{proposition}
\begin{proof}
(i) The increasing sequence of subsheaves $\Ker(\Psi^n)\subset F$ must stabilize at some $n$. Replacing $\Psi$ by $\Psi^n$ we may thus assume that $\Ker(\Psi^2)=\Ker\Psi$, that is, $\Ker\Psi\cap\Im\Psi=0$. Thus the natural morphism $\Ker\Psi\oplus\Im\Psi\to F$ is injective. Since both sheaves have the same Hilbert polynomial, this morphism must be an isomorphism. Since $F$ is indecomposable, either $\Im\Psi=0$, in which case the original endomorphism is nilpotent, or $\Ker\Psi=0$, in which case $\Psi$, and hence the original endomorphism, is an automorphism.
(ii) Let $F'$ be an indecomposable component of $F_i$, and let $F''$ be an indecomposable component of $F_j$. It is enough to show that $\Hom(F',F'')\subset\End^{nil}(F)$. Let $\Psi\in\Hom(F',F'')$. We need to show that for any $\Psi'\in\End(F)$ the composition $\Psi\Psi'$ is nilpotent. Replacing $\Psi'$ by its component with respect to a direct sum decomposition, we may assume that $\Psi'\in\Hom(F'',F')$. By part (i), $\Psi\Psi'\in\End(F'')$ is either nilpotent or an isomorphism. In the second case $\Psi'$ would be a split injection between the indecomposable bundles $F''$ and $F'$, hence an isomorphism, contradicting $E_i\not\approx E_j$.
\end{proof}
The following proposition is~\cite[Thm.~3]{Atiyah-KrullSchmidt} when $\mathbf k$ is algebraically closed. The proof, in fact, goes through for any field. Alternatively, it is easy to derive this proposition from the previous one.
\begin{proposition}\label{pr:KrullSchmidt2}
Let $F$ be a vector bundle on $X$. Write $F=\bigoplus_{i=1}^t F_i$, where $F_i\approx E_i^{\oplus n_i}$, and $E_i$ are pairwise non-isomorphic indecomposable bundles, $n_i>0$. This decomposition of $F$ into the direct sum of indecomposables is unique up to permutation. That is, if $F=\bigoplus_{i=1}^t F'_i$, $F'_i\approx(E'_i)^{\oplus m_i}$, where $E'_i$ are pairwise non-isomorphic indecomposable bundles, $m_i>0$, then after renumeration of summands we get $n_i=m_i$, $E_i\approx E'_i$.
\end{proposition}
\subsection{Stacks of vector bundles and HN-filtrations}\label{sect:HN}
Let $\mathcal{B}un_r$ be the stack of vector bundles of rank~$r$ (over the fixed curve $X$). Let $\mathcal{B}un_{r,d}$ be its connected component classifying bundles of degree $d$. Recall that a vector bundle $E$ on $X_K$ (where, as usual, $K$ is an extension of $\mathbf k$) is \emph{semistable} if for every subbundle $F\subset E$ we have
\[
\frac{\deg F}{\rk F}\le\frac{\deg E}{\rk E}.
\]
According to~\cite[Prop.~3]{Langton75}, if $K'\supset K$ is a field extension, then $E_{K'}$ is semistable if and only if $E$ is semistable. The number $\deg E/\rk E$ is called the \emph{slope\/} of $E$. It is well known that each vector bundle $E$ on $X_K$ possesses a unique filtration
\[
0=E_0\subset E_1\subset\ldots\subset E_t=E
\]
such that for $i=1,\ldots,t$ the sheaf $E_i/E_{i-1}$ is a semistable vector bundle and for $i=1,\ldots,t-1$ the slope of $E_i/E_{i-1}$ is strictly greater than the slope of $E_{i+1}/E_i$ (see ~\cite[Sect.~1.3]{HarderNarasimhan}). This filtration is called the Harder--Narasimhan filtration (or HN-filtration for brevity) on $E$ and the sequence of slopes $(\tau_1>\ldots>\tau_t)$, where $\tau_i=\deg(E_i/E_{i-1})/\rk(E_i/E_{i-1})$, is called the \emph{HN-type\/} of $E$. It follows from~\cite[Prop.~3]{Langton75} that HN-type is compatible with field extensions.
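For instance, if $E=L_1\oplus L_2$ is the direct sum of two line bundles with $\deg L_1>\deg L_2$, then the HN-filtration is
\[
0=E_0\subset E_1=L_1\subset E_2=E,
\]
so that the HN-type of $E$ is $(\deg L_1>\deg L_2)$; if $\deg L_1=\deg L_2$, then $E$ is itself semistable and the filtration is trivial ($t=1$).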
\begin{lemma}\label{lm:Bun+}
(i) There is an open substack $\mathcal{B}un_{r,d}^{\ge\tau}\subset\mathcal{B}un_{r,d}$ classifying vector bundles whose HN-type $(\tau_1>\ldots>\tau_t)$ satisfies $\tau_t\ge\tau$.
(ii) A vector bundle $E\in\mathcal{B}un_{r,d}(K)$ is in $\mathcal{B}un_{r,d}^{\ge\tau}(K)$ if and only if there is no surjective morphism of vector bundles $E\to F$ such that the slope of $F$ is less than $\tau$.
(iii) The stack $\mathcal{B}un_{r,d}^{\ge\tau}$ is of finite type.
(iv) A constructible subset $\mathcal X\subset|\mathcal{B}un_r|$ is of finite type if and only if there are $\tau$ and $d_1$,\ldots,$d_n$ such that
\[
\mathcal X\subset\cup_{i=1}^n|\mathcal{B}un_{r,d_i}^{\ge\tau}|.
\]
\end{lemma}
\begin{proof}
(i) Since the HN-type is compatible with field extensions, we may assume that $\mathbf k$ is algebraically closed. In this case this follows from~\cite[Thm.~3 and Prop.~10]{ShatzHN} (or~\cite[Thm.~1.7]{MaruyamaBoundedness}).
(ii) The `if' direction is obvious. For the `only if', let $0=E_0\subset E_1\subset\ldots\subset E_t=E$ be the HN filtration of $E$ and assume that we have a surjective morphism $E\to F$, where the slope of $F$ is less than $\tau$. Let $0=F_0\subset F_1\subset\ldots\subset F_s=F$ be the HN filtration on $F$. Clearly, the slope of $F_s/F_{s-1}$ is less than $\tau$. Thus, replacing $F$ with $F_s/F_{s-1}$ and the morphism $E\to F$ with its composition with the projection to $F_s/F_{s-1}$, we may assume that $F$ is semistable.
Now, if the slope of $E_t/E_{t-1}$ is greater than or equal to $\tau$, then for all $i$ there are no non-zero morphisms $E_i/E_{i-1}\to F$ (because these bundles are semistable and the slope of $E_i/E_{i-1}$ is greater than the slope of $F$). But then there are no non-zero morphisms from $E$ to $F$, and we come to a contradiction.
(iii) According to~\cite[Lemma~1.7.6]{HuybrechtsLehnModuli}, it is enough to show that all vector bundles $E$ in $\mathcal{B}un_{r,d}^{\ge\tau}$
are $m$-regular for $m>3-2g-\tau$.
Since $X$ is a curve, $m$-regularity just means that $H^1(X,E\otimes\mathcal O_X(m-1))=0$. By Serre duality, this cohomology group is dual to $\Hom(E,\Omega_X^{-1}\otimes\mathcal O_X(1-m))$. The latter space is zero by part (ii), since the slope of $\Omega_X^{-1}\otimes\mathcal O_X(1-m)$ is equal to $3-2g-m$.
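Explicitly, if the polarization $\mathcal O_X(1)$ is taken of degree one (as we may on a curve), then
\[
\frac{\deg\bigl(\Omega_X^{-1}\otimes\mathcal O_X(1-m)\bigr)}{\rk\bigl(\Omega_X^{-1}\otimes\mathcal O_X(1-m)\bigr)}=-(2g-2)+(1-m)=3-2g-m,
\]
which is strictly less than $\tau$ exactly when $m>3-2g-\tau$.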
(iv) The `if' part follows from (i). For the converse note that
\[
\{\mathcal{B}un_{r,d}^{\ge\tau}|d\in\mathbb{Z},\tau\in\mathbb{Z}\}
\]
is an open cover of $\mathcal{B}un_r$. Thus $\mathcal X$, being quasi-compact, is covered by finitely many $\mathcal{B}un_{r,d}^{\ge\tau}$.
\end{proof}
We will be mostly interested in the stack $\mathcal{B}un_{r,d}^{\ge0}$; the vector bundles it classifies, that is, those whose HN-type $(\tau_1>\ldots>\tau_t)$ satisfies $\tau_t\ge0$, will be called `HN-nonnegative'. Note that tensorisation with a line bundle of degree $e$ gives an isomorphism $\mathcal{B}un_{r,d}^{\ge0}\simeq\mathcal{B}un_{r,d+er}^{\ge e}$. It follows from Lemma~\ref{lm:Bun+}(ii) that $E$ is HN-nonnegative if and only if there are no surjective morphisms $E\to F$, where $F$ is a vector bundle such that $\deg F<0$.
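The isomorphism $\mathcal{B}un_{r,d}^{\ge0}\simeq\mathcal{B}un_{r,d+er}^{\ge e}$ simply reflects the fact that tensorisation with a line bundle $\ell$ of degree $e$ shifts all slopes by $e$: for any vector bundle $F$ of positive rank,
\[
\frac{\deg(F\otimes\ell)}{\rk(F\otimes\ell)}=\frac{\deg F+e\rk F}{\rk F}=\frac{\deg F}{\rk F}+e,
\]
so the HN-filtration of $E\otimes\ell$ is obtained by tensoring that of $E$ with $\ell$, and the HN-type is shifted by $e$ termwise.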
\subsubsection{Isoslopy vector bundles}
We will call a vector bundle $E$ on $X_K$ \emph{isoslopy\/} if it cannot be written as the direct sum of two vector bundles of different slope.
\begin{lemma}\label{lm:isoiii}
A vector bundle $E$ on $X$ is isoslopy if and only if its pullback to $X_K$ is isoslopy.
\end{lemma}
\begin{proof}
The `if' direction is obvious. For the `only if' direction we note first that the sum of isoslopy bundles of the same slope is isoslopy because of uniqueness of decomposition (Proposition~\ref{pr:KrullSchmidt2}). Thus it is enough to prove that if $E$ is an indecomposable vector bundle on $X$, then $E_K$ is isoslopy. We follow the strategy of the proof of~\cite[Prop.~3]{Langton75}. We may assume that $K\supset\mathbf k$ is a finitely generated extension. In view of the `if' direction, it is enough to consider two cases: (i) $K$ is an algebraic closure of $\mathbf k$, (ii) $K=\mathbf k(t)$ is purely transcendental of degree~1. In case (i) the statement follows from the fact that the Galois group of $K$ over $\mathbf k$ acts transitively on indecomposable summands of $E_K$.
Finally, if $E_{\mathbf k(t)}$ is the direct sum of two vector bundles of different slopes, then there is an open subset $U\subset\mathbb A^1_\mathbf k$ such that the pullback of $E$ to $X\times U$ is the direct sum of two vector bundles of different slopes (just clear denominators). Restricting this pullback to $X\times u$, where $u\in U$ is a $\mathbf k$-rational point, we come to a contradiction.
\end{proof}
By the above lemma, the equivalence relation from Section~\ref{sect:MotFun} on the points of $\mathcal{B}un_{r,d}$ preserves isoslopy bundles. Thus we have a well-defined set $\mathcal{B}un_{r,d}^{iso}\subset|\mathcal{B}un_{r,d}|$. Set also
$\mathcal{B}un_{r,d}^{\ge0,iso}=|\mathcal{B}un_{r,d}^{\ge0}|\cap\mathcal{B}un_{r,d}^{iso}$.
\begin{lemma}\label{lm:iso}
(i) If $\ell$ is a line bundle on $X$ of degree $N>(r-1)(g-1)-d/r$, then tensorisation with~$\ell$ (which is a 1-morphism $\mathcal{B}un_{r,d}\to\mathcal{B}un_{r,d+Nr}$) induces a bijection
\[
\mathcal{B}un^{iso}_{r,d}\xrightarrow{\simeq}\mathcal{B}un^{\ge0,iso}_{r,d+Nr}.
\]
(ii) $\mathcal{B}un^{iso}_{r,d}\subset|\mathcal{B}un_{r,d}|$ is a constructible subset of finite type.
\end{lemma}
\begin{proof}
(i) Clearly, tensorisation with $\ell$ induces a bijection $\mathcal{B}un^{iso}_{r,d}\xrightarrow{\simeq}\mathcal{B}un^{iso}_{r,d+Nr}$.
It remains to show that $\mathcal{B}un^{\ge0,iso}_{r,d+Nr}=\mathcal{B}un^{iso}_{r,d+Nr}$. By contradiction, assume that $E\in\mathcal{B}un^{iso}_{r,d+Nr}$ but
$E\notin\mathcal{B}un^{\ge0,iso}_{r,d+Nr}$. Then $E$ is decomposable by~\cite[Cor.~4.2]{MozgovoySchiffmanOnHiggsBundles} (formally speaking, that statement is only formulated for curves over finite fields, but the proof works over any field). Let $E_0$ be an indecomposable summand of $E$ whose HN-type $(\tau_1>\ldots>\tau_t)$ satisfies $\tau_t<0$ (it exists by Lemma~\ref{lm:Bun+}(ii)). By the definition of isoslopy bundles, the slope of $E_0$ is equal to $d/r+N$, and clearly $d/r+N>(\rk E_0-1)(g-1)$, so $E_0$ cannot be indecomposable (again by~\cite[Cor.~4.2]{MozgovoySchiffmanOnHiggsBundles}). This contradiction completes the proof of (i).
Let us prove part (ii). By part (i), it is enough to prove the statement for $\mathcal{B}un^{\ge0,iso}_{r,d}$ (just replace $d$ by $d+Nr$ with large $N$). Let $\Pi$ be the set of all quadruples $(r',d',r'',d'')\in\mathbb{Z}_{\ge0}^4$ such that $r',r''>0$, $r'+r''=r$, $d'+d''=d$, and $d'/r'\ne d/r$. Note that this set is finite.
For $\pi=(r',d',r'',d'')\in\Pi$, let $\mathcal B_\pi$ be the image of $\mathcal{B}un^{\ge0}_{r',d'}\times\mathcal{B}un^{\ge0}_{r'',d''}$ under the morphism, sending two vector bundles to their direct sum. Combining Lemma~\ref{lm:Bun+}(iii) with the stacky Chevalley theorem, we see that $\mathcal B_\pi$ is a constructible subset of $\mathcal{B}un^{\ge0}_{r,d}$. One easily checks that
\[
\mathcal{B}un^{\ge0,iso}_{r,d}=|\mathcal{B}un^{\ge0}_{r,d}|-\cup_{\pi\in\Pi}\mathcal B_\pi.
\]
Thus $\mathcal{B}un^{\ge0,iso}_{r,d}$ is constructible. Obviously, it is of finite type.
\end{proof}
\subsection{Higgs bundles whose underlying vector bundle is HN-nonnegative}\label{Sect:Higgs}
Recall from Section~\ref{sect:ModStack} the Artin stack $\mathcal M_{r,d}$ classifying Higgs bundles of rank $r$ and degree $d$. A simple argument similar to the proof of~\cite[Prop.~1]{FedorovIsoStokes} shows that it is an Artin stack locally of finite type and that the forgetful 1-morphism $(E,\Phi)\mapsto E$ is a schematic 1-morphism of finite type $\mathcal M_{r,d}\to\mathcal{B}un_{r,d}$. Set
\[
\mathcal M^{\ge0}_{r,d}:=\mathcal M_{r,d}\times_{\mathcal{B}un_{r,d}}\mathcal{B}un_{r,d}^{\ge0};
\]
by Lemma~\ref{lm:Bun+}(i), it is an open substack of finite type of $\mathcal M_{r,d}$.
On the other hand, recall from Section~\ref{sect:ModStack} that a Higgs bundle $(E,\Phi)\in\mathcal M_{r,d}(K)$ is called semistable if the slope of any subbundle $F\subset E$ is less than or equal to the slope of $E$, provided that $F$ is preserved by $\Phi$. An argument similar to~\cite[Prop.~3]{Langton75} shows that this notion is stable with respect to field extensions. We emphasize that semistability of $(E,\Phi)$ does not in general imply semistability of $E$. According to~\cite[Lemma~3.7]{Simpson1}\footnote{This Lemma is formulated in the case when the field is the field of complex numbers. However, the proof goes through for any field.}, there is an open substack $\mathcal M^{ss}_{r,d}$ classifying semistable Higgs bundles.
We call a Higgs bundle $(E,\Phi)$ on $X_K$ \emph{nonnegative-semistable\/} if $E$ is HN-nonnegative and whenever $F\subset E$ is an HN-nonnegative vector subbundle preserved by $\Phi$, the slope of $F$ is less than or equal to the slope of $E$; an argument similar to~\cite[Prop.~3]{Langton75} shows that this notion is stable with respect to field extensions. Denote the stack of nonnegative-semistable Higgs bundles of rank $r$ and degree $d$ by $\mathcal M_{r,d}^{\ge0,ss}$; an argument similar to~\cite[Lemma~3.7]{Simpson1} shows that this is an open substack of $\mathcal M_{r,d}$.
\begin{remark}
In general, $\mathcal M_{r,d}^{\ge0,ss}\ne\mathcal M_{r,d}^{\ge0}\cap\mathcal M_{r,d}^{ss}$. The reason is that a nonnegative semistable Higgs bundle $(E,\Phi)$ might have a destabilizing subbundle $F$ such that $F$ is not HN-nonnegative, so it is not necessarily semistable in the usual sense.
\end{remark}
\begin{lemma}\label{lm:+ss}
(i) If $\ell$ is a line bundle on $X$ of degree $N>(r-1)(g-1)-d/r$, then tensorisation with $\ell$ induces an isomorphism
\[
\mathcal M^{ss}_{r,d}\xrightarrow{\simeq}\mathcal M^{\ge0,ss}_{r,d+Nr}.
\]
(ii) $\mathcal M^{ss}_{r,d}$ is a stack of finite type.
\end{lemma}
\begin{proof}
(i)
Clearly, tensorisation with $\ell$ induces an isomorphism $\mathcal M^{ss}_{r,d}\xrightarrow{\simeq}\mathcal M^{ss}_{r,d+Nr}$. It remains to notice that, according to~\cite[Cor.~3.3]{MozgovoySchiffmanOnHiggsBundles}, we have $\mathcal M^{ss}_{r,d+Nr}=\mathcal M^{\ge0,ss}_{r,d+Nr}$. (Formally speaking, the statement is only formulated for curves over finite fields, but the proof works over any field.)
Part (ii) is an obvious corollary of part (i).
\end{proof}
\subsection{Connections and isoslopy Higgs bundles}\label{sect:ConnIsoslopy}
Recall that $\mathcal{C}onn_r$ is the moduli stack of rank $r$ vector bundles with connections. An argument similar to the proof of~\cite[Prop.~1]{FedorovIsoStokes} shows that it is an Artin stack locally of finite type and that the forgetful 1-morphism $(E,\nabla)\mapsto E$ is a schematic 1-morphism of finite type $\mathcal{C}onn_r\to\mathcal{B}un_{r,0}$. We are using the well-known fact that a vector bundle admitting a connection must be of degree zero.
Let $\mathcal M^{\ge0,iso}_{r,d}\subset|\mathcal M^{\ge0}_{r,d}|$ be the set of points corresponding to Higgs bundles $(E,\Phi)$ such that $E\in\mathcal{B}un^{\ge0,iso}_{r,d}$. It follows from Lemma~\ref{lm:iso}(ii) that $\mathcal M^{\ge0,iso}_{r,d}$ is a constructible subset of finite type.
\begin{proposition}\label{pr:+iso}
The stack $\mathcal{C}onn_r$ is of finite type and we have in $\mathrm{Mot}(\mathbf k)$
\[
[\mathcal{C}onn_r]=[\mathcal M^{iso}_{r,0}].
\]
\end{proposition}
\begin{proof}
By Weil's theorem the image of $\mathcal{C}onn_r$ in $\mathcal{B}un_{r,0}$ is exactly $\mathcal{B}un^{iso}_{r,0}$. (Note that we only need to use Weil's theorem for an algebraic closure of $\mathbf k$ because we know a priori that this image is constructible. By elementary logic, it is enough to know that the theorem is true for the field of complex numbers.)
It is enough to show that we have in $\mathrm{Mot}(\mathcal{B}un_{r,0})$:
\begin{equation}\label{eq:Conn=HiggsRel}
[\mathcal M^{iso}_{r,0}\to\mathcal{B}un_{r,0}]=[\mathcal{C}onn_r\to\mathcal{B}un_{r,0}].
\end{equation}
We want to apply Proposition~\ref{pr:pointwise zero}. Let $\xi:\Spec K\to\mathcal{B}un_{r,0}$ be a point. It corresponds to a vector bundle $E$ on $X_K$. If $E$ is not isoslopy, then the pullbacks of both sides of~\eqref{eq:Conn=HiggsRel} are zero. If $E$ is isoslopy, then the pullback of the LHS of~\eqref{eq:Conn=HiggsRel} is the class of the vector space $V=H^0(X_K,\END(E)\otimes\Omega_{X_K})$, while the pullback of the RHS is the class of an affine space over this vector space, that is, a principal $V$-bundle. (Note that a priori this affine space only has a section after extending the field, but, as we noted above, a vector space with its additive group structure is a special group, so there are no non-trivial $V$-bundles on $\Spec K$.) Thus the fibers are isomorphic as schemes, so we can apply Proposition~\ref{pr:pointwise zero}, which proves~\eqref{eq:Conn=HiggsRel}. (Here $\END(E)$ denotes the sheaf of endomorphisms of $E$.)
\end{proof}
\subsection{Comparing Higgs fields and Higgs fields with isoslopy underlying vector bundle}\label{sect:Isoslopy}
Consider the following generating series
\[
H^{\ge0}(z,w):=1+\sum_{\substack{r>0\\d\ge 0}}\mathbb{L}^{(1-g)r^2}[\mathcal M^{\ge0}_{r,d}]w^rz^d\in1+w\mathrm{Mot}(\mathbf k)[[w,z]]
\]
and for a rational number $\tau\ge0$
\[
H_\tau^{\ge0,iso}(z,w):=1+\sum_{\substack{r>0\\d/r=\tau}}\mathbb{L}^{(1-g)r^2}[\mathcal M^{\ge0,iso}_{r,d}]w^rz^d\in1+w\mathrm{Mot}(\mathbf k)[[w,z]].
\]
\begin{proposition}\label{pr:IsoProd}
We have
\[
H^{\ge0}(z,w)=\prod_{\tau\ge0}H_\tau^{\ge0,iso}(z,w).
\]
\end{proposition}
\begin{proof}
First of all we would like to reformulate the proposition. Let $\mathcal E_{r,d}$ be the stack classifying the pairs $(E,\Psi)$, where $E$ is a vector bundle of rank $r$ and degree $d$, $\Psi$ is an endomorphism of $E$. Set $\mathcal E^{\ge0}_{r,d}:=\mathcal E_{r,d}\times_{\mathcal{B}un_{r,d}}\mathcal{B}un^{\ge0}_{r,d}$. Let $\mathcal E^{\ge0,iso}_{r,d}\subset|\mathcal E^{\ge0}_{r,d}|$ be the preimage of $\mathcal{B}un_{r,d}^{\ge0,iso}$.
\begin{lemma}\label{lm:HiggsEnd}
We have in $\mathrm{Mot}(\mathbf k)$
\[
[\mathcal M_{r,d}^{\ge0}]=\mathbb{L}^{(g-1)r^2}[\mathcal E^{\ge0}_{r,d}],\qquad
[\mathcal M_{r,d}^{\ge0,iso}]=\mathbb{L}^{(g-1)r^2}[\mathcal E^{\ge0,iso}_{r,d}].
\]
\end{lemma}
\begin{proof}
Let us prove the first equation (the second is analogous). It is enough to show that
\[
[\mathcal M_{r,d}^{\ge0}\to\mathcal{B}un^{\ge0}_{r,d}]=\mathbb{L}^{(g-1)r^2}[\mathcal E_{r,d}^{\ge0}\to\mathcal{B}un^{\ge0}_{r,d}].
\]
We want to apply Proposition~\ref{pr:pointwise zero}. Consider a point $\xi:\Spec K\to\mathcal{B}un^{\ge0}_{r,d}$ given by a vector bundle $E$ on $X_K$. The $\xi$-pullback of the LHS is the class of the vector space $H^0(X_K,\END(E)\otimes\Omega_{X_K})$, while the $\xi$-pullback of the RHS is the class of the vector space $\mathbb A^{(g-1)r^2}\oplus H^0(X_K,\END(E))$. Thus we only need to check that
\[
h^0(X_K,\END(E)\otimes\Omega_{X_K})=(g-1)r^2+h^0(X_K,\END(E)).
\]
This follows from Riemann--Roch Theorem and Serre duality.
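In more detail: $\END(E)$ is self-dual of rank $r^2$ and degree $0$, so $\END(E)\otimes\Omega_{X_K}$ has rank $r^2$ and degree $r^2(2g-2)$, and by Serre duality $h^1(X_K,\END(E)\otimes\Omega_{X_K})=h^0(X_K,\END(E))$. Riemann--Roch then gives
\[
h^0(X_K,\END(E)\otimes\Omega_{X_K})-h^0(X_K,\END(E))=r^2(2g-2)+r^2(1-g)=(g-1)r^2.
\]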
\end{proof}
In view of this lemma we can re-write the proposition as
\[
1+\sum_{\substack{r>0\\d\ge0}}[\mathcal E^{\ge0}_{r,d}]w^rz^d=
\prod_{\tau\ge0}\left(
1+\sum_{\substack{r>0\\d/r=\tau}}[\mathcal E^{\ge0,iso}_{r,d}]w^rz^d
\right).
\]
Let $\Pi_{r,d}$ be the set of all sequences
\[
((r_1,d_1),(r_2,d_2),\ldots,(r_t,d_t))\in(\mathbb{Z}_{>0}\times\mathbb{Z}_{\ge0})^t,
\]
where $\sum_i r_i=r$, $\sum_i d_i=d$ and the sequence $d_i/r_i$ is strictly decreasing. We note that $\Pi_{r,d}$ is a finite set. Now our proposition is equivalent to the following lemma.
\begin{lemma}\label{lm:IsoStrat}
We have in $\mathrm{Mot}(\mathbf k)$
\[
[\mathcal E^{\ge0}_{r,d}]=\sum_{((r_i,d_i))\in\Pi_{r,d}}\prod_{i}[\mathcal E^{\ge0,iso}_{r_i,d_i}].
\]
\end{lemma}
\begin{proof}
For any sequence $\pi=((r_i,d_i))\in\Pi_{r,d}$ consider the 1-morphism
\[
i_\pi:\prod_i\mathcal{B}un_{r_i,d_i}^{\ge0}\to\mathcal{B}un_{r,d},
\]
sending a sequence of vector bundles to their direct sum. It follows from Lemma~\ref{lm:Bun+}(ii) that the image of this 1-morphism is contained in $\mathcal{B}un_{r,d}^{\ge0}$. Consider the constructible subset (recall the definition of product of constructible subsets from Section~\ref{sect:Constr})
\[
\prod_i\mathcal{B}un_{r_i,d_i}^{\ge0,iso}\subset\left|\prod_i\mathcal{B}un_{r_i,d_i}^{\ge0}\right|.
\]
By the stacky Chevalley theorem its image under $i_\pi$ is constructible; denote it by $\mathcal{B}un_\pi$. It follows easily from the fact that the isotypic components of a vector bundle are unique up to isomorphism (see Proposition~\ref{pr:KrullSchmidt2}) that $\{\mathcal{B}un_\pi|\pi\in\Pi_{r,d}\}$ is a stratification of $\mathcal{B}un_{r,d}^{\ge0}$. Let $\mathcal E_\pi$ be the preimage of $\mathcal{B}un_\pi$ in $\mathcal E_{r,d}$. We see that it is enough to show that for all $\pi\in\Pi_{r,d}$ we have
\begin{equation}\label{eq:KrSchm}
[\mathcal E_\pi\to\mathcal{B}un^{\ge0}_{r,d}]=
\left[\prod_i\mathcal E^{\ge0,iso}_{r_i,d_i}\to\mathcal{B}un^{\ge0}_{r,d}\right].
\end{equation}
We want to apply Proposition~\ref{pr:pointwise zero}. Let $\xi:\Spec K\to\mathcal{B}un_{r,d}$ be a point. If it is not in $\mathcal{B}un_\pi$, then the pullbacks of both sides of the equation are zero. Otherwise, let $E$ be the vector bundle on $X_K$ corresponding to $\xi$.
\begin{claim}
The vector bundle $E$ can be written as $\bigoplus_i E_i$, where $E_i$ is an HN-nonnegative isoslopy vector bundle of rank $r_i$ and degree~$d_i$.
\end{claim}
\begin{proof}
Let $\overline K$ be an algebraic closure of $K$. By definition of $\mathcal{B}un_\pi$, there is a $\overline K$-point on the fiber of $i_\pi$ over $\xi$. This means that the base-changed vector bundle $E_{\overline K}$ can be decomposed as $\overline E_1\oplus\ldots\oplus\overline E_t$, where $\overline E_i\in\mathcal{B}un^{\ge0}_{r_i,d_i}(\overline K)$ is isoslopy. We need to show that $E$ can be decomposed similarly. Let us write $E=E'_1\oplus\ldots\oplus E'_s$, where $E'_1$,\ldots,$E'_s$ are indecomposable bundles. By Lemma~\ref{lm:isoiii}, $(E'_i)_{\overline K}$ is isoslopy. Note that the bundles $\overline E_1$, \ldots, $\overline E_t$ cannot have isomorphic indecomposable summands (being isoslopy of different slopes). Now the uniqueness of indecomposable summands (Proposition~\ref{pr:KrullSchmidt2}) shows that there is a partition $\{1,\ldots,s\}=I_1\sqcup\ldots\sqcup I_t$ such that $\overline E_i\approx\oplus_{j\in I_i}(E_j')_{\overline K}$. It remains to set $E_i=\oplus_{j\in I_i}E_j'$.
\end{proof}
Fix a decomposition provided by the claim. Note that the $\xi$-pullback of the LHS of~\eqref{eq:KrSchm} is the class of the vector space $\End(E)$. One checks that the $\xi$-pullback of the RHS of~\eqref{eq:KrSchm} is the class of the algebraic space representing the following functor:
\[
S\mapsto\{(G_1,\ldots,G_t,\Psi_1,\ldots,\Psi_t):E_S=G_1\oplus\ldots\oplus G_t,\Psi_i\in\End(G_i)\},
\]
where the vector bundle $G_i$ on $X\times S$ has rank $r_i$ and degree $d_i$. (Note that if $S$ is a spectrum of a field, then each $G_i$ is isoslopy according to Proposition~\ref{pr:KrullSchmidt2} and Lemma~\ref{lm:isoiii}.) Denote this space by~$Y_E$. We need to show that $[\End(E)]=[Y_E]\in\mathrm{Mot}(K)$. To this end we first construct a map of sets $I:\End(E)\to Y_E(K)$ as follows. For $\Psi\in\End(E)$ let us write $\Psi=(\Psi_{ij})$, where $\Psi_{ij}\in\Hom(E_i,E_j)$. Set $\Psi':=1+\sum_{i\ne j}\Psi_{ij}$. We will use this notation through the end of the subsection.
\begin{claim}
$\sum_{i\ne j}\Psi_{ij}$ belongs to the nilpotent radical of $\End(E)$.
\end{claim}
\begin{proof}
Note that $E_i$ and $E_j$ are isoslopy and their slopes are different, so these bundles cannot have isomorphic indecomposable summands. Now the statement follows from Proposition~\ref{pr:KrullSchmidt1}(ii).
\end{proof}
By the above claim $\Psi'$ is an automorphism of $E$. Define a map $I$ by
\[
I(\Psi)=(\Psi'(E_1),\ldots,\Psi'(E_t),\Psi'\Psi_{11}(\Psi')^{-1},\ldots,\Psi'\Psi_{tt}(\Psi')^{-1}).
\]
\begin{claim}
$I$ is an isomorphism.
\end{claim}
\begin{proof}
Assume that $I(\Psi_1)=I(\Psi_2)$. Then $\Psi'_2=\Psi'_1\Theta$, where $\Theta$ preserves the decomposition $E=E_1\oplus\ldots\oplus E_t$. Then
\[
\Id_{E_i}=(\Psi'_2)_{ii}=(\Psi'_1)_{ii}\Theta_{ii}=\Theta_{ii}.
\]
We see that $\Theta=1$, so that $\Psi'_2=\Psi'_1$. Now we also see that $(\Psi_2)_{ii}=(\Psi_1)_{ii}$, so that $\Psi_2=\Psi_1$, which proves injectivity.
Assume that $E=G_1\oplus\ldots\oplus G_t$, where $G_i$ is of rank $r_i$ and degree $d_i$; let $\Psi_i\in\End(G_i)$ for $i=1,\ldots,t$. By Proposition~\ref{pr:KrullSchmidt2}, we have an isomorphism $\Theta_i:E_i\to G_i$. Then $\Theta:=\bigoplus_i\Theta_i$ is an automorphism of~$E$. Let us write $\Theta=\Theta_1+\Theta_2$, where $\Theta_1\in\bigoplus_i\End(E_i)$, $\Theta_2\in\bigoplus_{i\ne j}\Hom(E_i,E_j)$. We have
\[
\Theta_1=\Theta(1-(\Theta)^{-1}\Theta_2),
\]
so $\Theta_1$ is an automorphism (because $\Theta_2\in\End^{nil}(E)$). Set $\tilde\Theta:=\Theta\Theta_1^{-1}$ and finally
\[
\Psi=\tilde\Theta-1+\sum_i(\tilde\Theta)^{-1}\Psi_i(\tilde\Theta).
\]
Note that $\tilde\Theta(E_i)=G_i$ and $\tilde\Theta_{ii}=1\in\End(E_i)$. It follows that $\Psi'=\tilde\Theta$, so that $\Psi'(E_i)=G_i$. We see that $I(\Psi)=(G_i,\Psi_i)$, which shows surjectivity of $I$.
\end{proof}
Now we complete the proof of Lemma~\ref{lm:IsoStrat}. It is easy to see that the construction of $I$ works in families, so, in fact, $I$ gives a morphism from $\End(E)$ to $Y_E$. If $K'$ is an extension of $K$, then, applying the previous claim to $E_{K'}$, we see that $I(K')$ is a bijection. Thus, by Corollary~\ref{cor:pointwEqual}, we see that $[\End(E)]=[Y_E]$. This proves~\eqref{eq:KrSchm}.
\end{proof}
Lemma~\ref{lm:IsoStrat} completes the proof of Proposition~\ref{pr:IsoProd}.
\end{proof}
\subsection{Kontsevich--Soibelman product}
The main result of this section is a simple corollary of the general formalism of~\cite{KontsevichSoibelman08} (see also ~\cite{RenSoibelman} for the formulas in the case of 2CY categories that is most interesting for us). The general theory relies on the notion of motivic Hall algebra introduced in~\cite{KontsevichSoibelman08}. For the reader not interested in the general framework, we present below a direct proof of the necessary wall-crossing formula. The general approach is outlined in Remark~\ref{rem:KSProduct} below.
Recall that in Section~\ref{sect:Isoslopy} we defined the generating series $H^{\ge0}(z,w)$. For $\tau\ge0$ consider one more generating series
\[
H_\tau^{\ge0,ss}(z,w):=1+\sum_{\substack{r>0\\d/r=\tau}}\mathbb{L}^{(1-g)r^2}[\mathcal M^{\ge0,ss}_{r,d}]w^rz^d\in1+w\mathrm{Mot}(\mathbf k)[[w,z]].
\]
\begin{proposition}\label{pr:KS}
\[
H^{\ge0}(z,w)=\prod_{\tau\ge0}H_\tau^{\ge0,ss}(z,w).
\]
\end{proposition}
\begin{proof}
Let $\Pi_{r,d}$ be as in the proof of Proposition~\ref{pr:IsoProd}. For $\pi=((r_1,d_1),\ldots,(r_t,d_t))\in\Pi_{r,d}$ consider the stack classifying collections
\begin{equation*}
(0\subset E_1\subset\ldots\subset E_t=E,\Phi),
\end{equation*}
where $E_i/E_{i-1}$ is a vector bundle of degree $d_i$ and rank $r_i$, $\Phi$ is a Higgs field on $E$ preserving each $E_i$. Denote by
$\mathcal M^\pi$ its open substack classifying collections such that for all $i$ the Higgs pair $(E_i/E_{i-1},\Phi_i)$, where $\Phi_i$ is induced by $\Phi$, is nonnegative-semistable.
\begin{lemma}\label{lm:VectSpStack}
The stack $\mathcal M^\pi$ is of finite type and we have in $\mathrm{Mot}(\mathbf k)$
\[
[\mathcal M^\pi]=\mathbb{L}^{(g-1)(r^2-r_1^2-\ldots-r_t^2)}\prod_i[\mathcal M^{\ge0,ss}_{r_i,d_i}].
\]
\end{lemma}
\begin{proof}
Set
\[
\pi'=((r_1,d_1),\ldots,(r_{t-1},d_{t-1}))\in\Pi_{(r_1+\ldots+r_{t-1},d_1+\ldots+d_{t-1})}.
\]
We will show that
\begin{equation}\label{eq:KSprod}
[\mathcal M^\pi]=\mathbb{L}^{(2g-2)r_t(r_1+\ldots+r_{t-1})}[\mathcal M^{\pi'}][\mathcal M^{\ge0,ss}_{r_t,d_t}].
\end{equation}
Since $r=r_1+\ldots+r_t$, the lemma will follow by induction on $t$.
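Indeed, once~\eqref{eq:KSprod} is established, iterating it over $t,t-1,\ldots,2$ produces the exponent
\[
\sum_{j=2}^{t}(2g-2)r_j(r_1+\ldots+r_{j-1})=2(g-1)\sum_{i<j}r_ir_j=(g-1)\bigl(r^2-r_1^2-\ldots-r_t^2\bigr),
\]
where the last equality is the expansion $(r_1+\ldots+r_t)^2=\sum_i r_i^2+2\sum_{i<j}r_ir_j$.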
There is a 1-morphism $\Lambda:\mathcal M^\pi\to\mathcal M^{\pi'}\times\mathcal M^{\ge0,ss}_{r_t,d_t}$, sending $(E,\Phi)$ to
\[
((E_1\subset\ldots\subset E_{t-1},\Phi|_{E_{t-1}}),(E_t/E_{t-1},\Phi')),
\]
where $\Phi'$ is the Higgs field induced by $\Phi$ on $E_t/E_{t-1}$. Let
$\xi_1=(E_1\subset\ldots\subset E_{t-1},\Phi_1)$ be a $K$-point of $\mathcal M^{\pi'}$ and let $\xi_2=(F,\Phi_2)$ be a $K$-point of $\mathcal M^{\ge0,ss}_{r_t,d_t}$ (we write $F$ for the underlying bundle of $\xi_2$ to avoid confusion with the total bundle $E$ above). The fiber of $\Lambda$ over $(\xi_1,\xi_2)$ is the quotient
\[
\Ext^1((F,\Phi_2),(E_{t-1},\Phi_1))/\Hom((F,\Phi_2),(E_{t-1},\Phi_1)),
\]
where the Hom space acts on the Ext space trivially. (Here $\Ext$ and $\Hom$ are calculated in the category of Higgs sheaves.) Since the additive group is special, by Lemma~\ref{lm:MotBG} the motivic class of this stack in $\mathrm{Mot}(K)$ is equal to $\mathbb{L}^d$, where $d$ is the dimension of this stack. Using the results of~\cite[Sect.~2.1]{MozgovoySchiffmanOnHiggsBundles}, we see that
\[
d=\deg\Omega_X\,\rk F\,\rk E_{t-1}=(2g-2)r_t(r_1+\ldots+r_{t-1}).
\]
In particular, this dimension is constant, so by Proposition~\ref{pr:pointwise zero} we have
\[
[\mathcal M^\pi\to\mathcal M^{\pi'}\times\mathcal M^{\ge0,ss}_{r_t,d_t}]=
\mathbb{L}^{(2g-2)r_t(r_1+\ldots+r_{t-1})}\mathbf1_{\mathcal M^{\pi'}\times\mathcal M^{\ge0,ss}_{r_t,d_t}}.
\]
Applying pushforward, we get~\eqref{eq:KSprod}.
\end{proof}
Let us return to the proof of the proposition. We have an obvious forgetful 1-morphism $\sqcup_{\pi\in\Pi_{r,d}}\mathcal M^\pi\to\mathcal M^{\ge0}_{r,d}$. It follows from Harder--Narasimhan theory (applied to the category of Higgs bundles whose underlying vector bundle is HN-nonnegative) and Corollary~\ref{cor:pointwEqual} that
\[
\left[\sqcup_{\pi\in\Pi_{r,d}}\mathcal M^\pi\right]=[\mathcal M^{\ge0}_{r,d}].
\]
Combining this with the previous lemma, we get
\[
[\mathcal M^{\ge0}_{r,d}]=\sum_{((r_1,d_1),\ldots,(r_t,d_t))\in\Pi_{r,d}}\mathbb{L}^{(g-1)(r^2-r_1^2-\ldots-r_t^2)}
\prod_i[\mathcal M^{\ge0,ss}_{r_i,d_i}].
\]
This is equivalent to the proposition.
\end{proof}
\begin{remark}\label{rem:KSProduct}
Let us recall the general approach to the wall-crossing formulas from \cite{KontsevichSoibelman08}. Let $\mathcal C$ be an ind-constructible category endowed with a class map $\cl: K_0(\mathcal C)\to\Gamma\simeq\mathbb{Z}^n$. Let $\Gamma$ be endowed with an integer skew-symmetric form $\langle\bullet,\bullet\rangle$ such that $\cl$ intertwines this form and the skew-symmetrization of the Euler form on $K_0(\mathcal C)$. Assume also that we are given a constructible stability structure on $\mathcal C$ and that $V\subset\mathbb{R}^2$ is a strict sector. In Section~\ref{sect:QuTorus} we explained that in this situation one obtains an element $A_{{\mathcal C}(V)}^{Hall}$ of the motivic Hall algebra $H(\mathcal C)$. Then the following factorization formula holds:
\[
A_{{\mathcal C}(V)}^{Hall}=\prod^\to_{l\subset V}A_{{\mathcal C}(l)}^{Hall}.
\]
Here $A_{{\mathcal C}(l)}^{Hall}$ are defined similarly to $A_{{\mathcal C}(V)}^{Hall}$ but for the categories ${\mathcal C}(l)$ associated with each ray $l\subset V$ with the vertex at $(0,0)$. The product is taken in the clockwise order. In general there are countably many factors in the product. In the case of 3CY categories we apply the homomorphism $\Phi=\Phi_V$ and obtain a similar factorization formula for quantum DT-series, which are elements of the corresponding quantum tori. In the case of 2CY categories we apply the linear map $\Phi$ from Section~\ref{sect:QuTorus}, since it respects the product in the clockwise order. Then we obtain a similar factorization formula for quantum DT-series (they are elements of a commutative quantum torus).
In our case, the category $\mathcal C$ is the category of Higgs bundles on $X$ such that the underlying vector bundle is HN-nonnegative. As in Section~\ref{sect:QuTorus}, the stability structure is standard with the central charge $Z(F)=-\deg F+\sqrt{-1}\rk F$ and we take strict sector $V$ to be the second quadrant $\{x\le 0, y\ge 0\}$ in the plane $\mathbb{R}^2_{(x,y)}$. In this case, $\mathcal C(V)=\mathcal C$. Applying the above considerations we obtain Proposition \ref{pr:KS}.
\end{remark}
\subsection{Comparing Higgs bundles and bundles with connections}\label{sect:CompHiggsConn}
We need a simple lemma.
\begin{lemma}\label{lm:EqSlp}
Let $R$ be a commutative ring. For a rational number $\tau\ge0$ define
\[
R_\tau[[z,w]]=\sum_{\substack{r>0\\d/r=\tau}}Rw^rz^d\subset wR[[z,w]].
\]
For $\tau\ge0$ and $i=1,2$, assume that we are given series $H_\tau^i(z,w)\in1+R_\tau[[z,w]]$. Then
\begin{equation}\label{eq:slopes}
\prod_{\tau\ge0} H^1_\tau(z,w)=\prod_{\tau\ge0} H^2_\tau(z,w)
\end{equation}
implies that for all $\tau$ we have $H_\tau^1(z,w)=H_\tau^2(z,w)$.
\end{lemma}
\begin{proof}
For $r>0$, $d\ge0$, let $a^i_{r,d}$ be the coefficient of $H_{d/r}^i$ at $w^rz^d$. We need to show that $a^1_{r,d}=a^2_{r,d}$; we argue by induction on $r+d$, so we may assume that $a^1_{r',d'}=a^2_{r',d'}$ whenever $r'+d'<r+d$. The coefficient of either side of~\eqref{eq:slopes} at $w^rz^d$ is equal to $a^i_{r,d}$ plus a polynomial in the coefficients $a^i_{r',d'}$ with $r'<r$ and $d'\le d$ (hence $r'+d'<r+d$), coming from products of two or more factors. By the induction hypothesis these contributions coincide for $i=1,2$, so equating the coefficients at $w^rz^d$ gives $a^1_{r,d}=a^2_{r,d}$.
\end{proof}
\begin{theorem}\label{th:ss=iso}
We have in $\mathrm{Mot}(\mathbf k)$
\[
[\mathcal M^{ss}_{r,d}]=[\mathcal M^{iso}_{r,d}].
\]
\end{theorem}
\begin{proof}
Combining Proposition~\ref{pr:IsoProd} and Proposition~\ref{pr:KS} we get
\[
\prod_{\tau\ge0}H_\tau^{\ge0,iso}(z,w)=H^{\ge0}(z,w)=\prod_{\tau\ge0}H_\tau^{\ge0,ss}(z,w).
\]
By Lemma~\ref{lm:EqSlp} we get
\[
H_\tau^{\ge0,iso}(z,w)=H_\tau^{\ge0,ss}(z,w),
\]
that is,
\[
[\mathcal M^{\ge0,iso}_{r,d}]=[\mathcal M^{\ge0,ss}_{r,d}].
\]
Next, for $N>(r-1)(g-1)-d/r$, we get using Lemmas~\ref{lm:+ss} and~\ref{lm:iso}
\[
[\mathcal M^{ss}_{r,d}]=[\mathcal M^{\ge0,ss}_{r,d+Nr}]=
[\mathcal M^{\ge0,iso}_{r,d+Nr}]=[\mathcal M^{iso}_{r,d}].
\]
\end{proof}
Now we are ready to prove Theorem~\ref{th:conn=higgs}.
\begin{proof}[Proof of Theorem~\ref{th:conn=higgs}]
Combine Proposition~\ref{pr:+iso} and Theorem~\ref{th:ss=iso}.
\end{proof}
\subsection{Vector bundles with nilpotent endomorphisms}
Recall from Section~\ref{sect:NilpIntro} the stack $\mathcal E_{r,d}^{\ge0,nilp}\subset\mathcal E_{r,d}^{\ge0}$ parameterizing HN-nonnegative vector bundles with nilpotent endomorphisms. Define a power structure on the ring $\overline{\Mot}(\mathbf k)$:
\[
\Pow: \left(1+\overline{\Mot}(\mathbf k)[[w,z]]^+\right)\times\overline{\Mot}(\mathbf k)\to\overline{\Mot}(\mathbf k)[[z,w]]:
(f,A)\mapsto\Exp(A\Log(f)),
\]
where $\Exp$ and $\Log$ were defined in Section~\ref{sect:zeta}. This power structure has been studied in~\cite{GuzeinZadeEtAlOnLambdaRingStacks} in the case when $\mathbf k$ is algebraically closed (it also appeared earlier in the proof of~\cite[Prop.~7]{KontsevichSoibelman10}). Our main result in this section is the following proposition.
\begin{proposition}\label{pr:NilpEndPow}
We have in $\overline{\Mot}(\mathbf k)$
\[
1+\sum_{r,d}[\mathcal E_{r,d}^{\ge0}]w^rz^d=\Pow\left(
1+\sum_{r,d}[\mathcal E_{r,d}^{\ge0,nilp}]w^rz^d,\mathbb{L}
\right).
\]
\end{proposition}
\begin{proof}
Given a collection of $\mathbf k$-stacks $\mathcal X_{r,d}$, where $r,d\in\mathbb{Z}$, we view the stack $\sqcup_{r,d}\mathcal X_{r,d}$ as a $\mathbb{Z}^2$-graded stack. If $K\supset\mathbf k$ is a finite extension and $\phi:\Spec K\to\mathcal X_{r,d}$ is a point, we define $\cl(\phi):=[K:\mathbf k](r,d)\in\mathbb{Z}^2$.
If $T$ is a reduced scheme finite over $\Spec\mathbf k$, and $\phi:T\to\sqcup_{r,d}\mathcal X_{r,d}$ is a 1-morphism, we set $\cl(\phi):=\sum_{x\in T}\cl(\phi|_x)$. The proof of the proposition is based on the following lemma.
\begin{lemma}\label{lm:Pow}
Let $V$ be a variety over $\mathbf k$, and let $\mathcal X_{r,d}$ be stacks of finite type over $\mathbf k$, where $(r,d)$ runs over the pairs of nonnegative integers not both equal to zero. Let $\mathcal Y_{r,d}$ be the stack parameterizing pairs $(T,\phi)$, where $T$ is a finite subset of closed points of $V$ and $\phi:T\to\sqcup_{s,e}\mathcal X_{s,e}$ is a 1-morphism such that $\cl(\phi)=(r,d)$. Then we have
\[
\left(1+\sum_{r,d}[\mathcal X_{r,d}]w^rz^d\right)^{[V]}=1+\sum_{r,d}[\mathcal Y_{r,d}]w^rz^d.
\]
\end{lemma}
\begin{proof}
See~\cite[Sect.~2]{BryanMorrison} in the case $\mathbf k=\mathbb C$, and note that the argument generalizes immediately to any field of characteristic zero (cf.~also~\cite[Prop.~7]{KontsevichSoibelman10}).
\end{proof}
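As a sanity check of Lemma~\ref{lm:Pow}, take $V=\Spec\mathbf k$. A finite subset $T$ of closed points of $V$ is either empty (contributing the constant term $1$) or the single rational point, in which case $\cl(\phi)$ is just the grading of the point $\phi\in\mathcal X_{r,d}(\mathbf k)$. Hence $\mathcal Y_{r,d}\simeq\mathcal X_{r,d}$ for $(r,d)\ne(0,0)$, and the lemma reduces to the identity
\[
\left(1+\sum_{r,d}[\mathcal X_{r,d}]w^rz^d\right)^{[\Spec\mathbf k]}=1+\sum_{r,d}[\mathcal X_{r,d}]w^rz^d,
\]
which also follows from the power structure axiom $f^1=f$.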
Let $\mathcal Y_{r,d}$ be the stack parameterizing pairs $(T,\phi)$, where $T$ is a finite subset of closed points of $\mathbb A^1_\mathbf k$ and $\phi:T\to\sqcup_{s,e}\mathcal E^{\ge0,nilp}_{s,e}$ is a 1-morphism with $\cl(\phi)=(r,d)$. According to Lemma~\ref{lm:Pow} we just need to show that $[\mathcal Y_{r,d}]=[\mathcal E^{\ge0}_{r,d}]$. Define the 1-morphism $\mathcal Y_{r,d}\to\mathcal E^{\ge0}_{r,d}$ as follows. Consider a pair $(T,\phi)\in\mathcal Y_{r,d}$. Write $T=\{x_1,\ldots,x_t\}$, $\phi(x_i)=(E_i,\Psi_i)\in\mathcal E^{\ge0,nilp}_{r_i,d_i}(\mathbf k(x_i))$. The 1-morphism sends $(T,\phi)$ to
\begin{equation*}
\bigoplus_{i=1}^t R_{\mathbf k(x_i)/\mathbf k}(E_i,x_i\Id+\Psi_i).
\end{equation*}
Here $x_i\in\mathbb A^1_\mathbf k$ is viewed as an element of $\mathbf k(x_i)$, and the functor of restriction of scalars $R_{\mathbf k(x_i)/\mathbf k}$ is the pushforward with respect to the finite morphism $X_{\mathbf k(x_i)}\to X$. One checks that this construction works in families, so we get the required 1-morphism.
According to Corollary~\ref{cor:pointwEqual} it remains to prove the following version of Jordan decomposition.
\begin{lemma}\label{lm:Jordan}
(i) Let $(E,\Psi)\in\mathcal E_{r,d}(K)$ be a bundle with an endomorphism. There is a finite set $\{x_1,\ldots,x_t\}$ of closed points of $\mathbb A_K^1$, a sequence of pairs $(E_i,\Psi_i)\in\mathcal E_{r_i,d_i}^{nilp}(\mathbf k(x_i))$, and an isomorphism
\[
(E,\Psi)\xrightarrow{\simeq}\bigoplus_{i=1}^t R_{\mathbf k(x_i)/K}(E_i,x_i\Id+\Psi_i).
\]
(ii) Such a set $\{x_1,\ldots,x_t\}$ is unique and $(E_i,\Psi_i)$ are unique in the following sense: if $(E_i',\Psi_i')$ is another sequence with an isomorphism $(E,\Psi)\xrightarrow{\simeq}\bigoplus_{i=1}^t R_{\mathbf k(x_i)/K}(E_i',x_i\Id+\Psi_i')$, then there are unique isomorphisms $(E_i,\Psi_i)\to(E_i',\Psi_i')$ making the obvious diagram commutative.
\end{lemma}
\begin{proof}
Consider the characteristic polynomial of $\Psi$: $f(x)=\det(x\Id-\Psi)$. The coefficients of this polynomial are global sections of $\mathcal O_{X_K}$, so $f(x)\in K[x]$. Let $T=\{x_1,\ldots,x_t\}\subset\mathbb A_K^1$ be the set of roots of $f(x)$. Then $f(x)=\prod_{i=1}^t f_i(x)^{r_i}$, where $f_i(x)$ is the minimal polynomial of $x_i$ over $K$. The Cayley--Hamilton Theorem (applied at the generic point of $X$) shows that $f(\Psi)=0$, so we have a homomorphism $\pi:K[x]/(f(x))\to\End(E)$, sending the image $\bar x$ of $x$ in $K[x]/(f(x))$ to $\Psi$. Let $\epsilon_i\in K[x]/(f(x))$ be the components of the unity with respect to the decomposition $K[x]/(f(x))=\oplus_{i=1}^t K[x]/(f_i(x)^{r_i})$. Then we have $(E,\Psi)=\bigoplus(E'_i,\Psi'_i)$, where $E'_i:=(\pi(\epsilon_i))(E)$, $\Psi'_i:=\Psi|_{E'_i}$.
Since we have $\Psi'_i=\pi(\bar x\epsilon_i)$, we see that $f_i(\Psi'_i)^{r_i}=0$. By Hensel's Lemma there is $g_i\in x+f_iK[x]$ such that $f_i(g_i)$ is divisible by $f_i^{r_i}$. Set $\Lambda_i=g_i(\Psi'_i)$. Then $f_i(\Lambda_i)=0$, so $\Lambda_i$ gives a $\mathbf k(x_i)$-structure on $E'_i$. Thus $(E'_i,\Psi'_i)=R_{\mathbf k(x_i)/K}(E_i,\tilde\Psi_i)$ for a pair $(E_i,\tilde\Psi_i)$. It is easy to see that $\Psi_i:=\tilde\Psi_i-x_i\Id$ is nilpotent, so that $(E_i,\Psi_i)\in\mathcal E^{nilp}_{r_i,d_i}(\mathbf k(x_i))$ and $(E'_i,\Psi'_i)=R_{\mathbf k(x_i)/K}(E_i,x_i\Id+\Psi_i)$. We have proved the existence part of the lemma. We leave the uniqueness to the reader.
\end{proof}
Lemma~\ref{lm:Jordan} completes the proof of the proposition.
\end{proof}
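\begin{remark}
Although Lemma~\ref{lm:Jordan} is formulated for vector bundles on $X$, its mechanism is pure linear algebra. For instance, let $K=\mathbb R$, let $E$ be the trivial bundle of rank two, and let $\Psi$ be the constant endomorphism
\[
\Psi=\begin{pmatrix}0&-1\\1&0\end{pmatrix},\qquad f(x)=x^2+1.
\]
Then $T$ consists of the single closed point $x_1\in\mathbb A^1_{\mathbb R}$ with residue field $\mathbf k(x_1)=\mathbb C$, the operator $\Lambda_1=\Psi$ satisfies $f_1(\Lambda_1)=0$ and defines a $\mathbb C$-structure on $E$, and $\Psi_1=0$ is the nilpotent part, so that $(E,\Psi)\simeq R_{\mathbb C/\mathbb R}(E_1,x_1\Id)$ with $E_1=\mathcal O_{X_{\mathbb C}}$.
\end{remark}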
\subsection{Example: rank two case of Theorem~\ref{th:conn=higgs}}\label{Sect:Conn=Higgs2} In this section we give a direct proof of Theorem~\ref{th:conn=higgs} for bundles of rank two.
Let us write $\mathcal{B}un_{2,0}=\mathcal B'\sqcup\mathcal B''$, where $\mathcal B'$ is the open substack of semistable vector bundles, $\mathcal B''$ is the complement.
Set $\mathcal M':=\mathcal M^{ss}_{2,0}\times_{\mathcal{B}un_{2,0}}\mathcal B'$, $\mathcal C':=\mathcal{C}onn_2\times_{\mathcal{B}un_{2,0}}\mathcal B'$. Define $\mathcal M''$ and $\mathcal C''$ similarly. Clearly, it is enough to show that $[\mathcal M']=[\mathcal C']$ and $[\mathcal M'']=[\mathcal C'']$.
To show the first equation, note that every Higgs bundle whose underlying vector bundle is semistable is itself semistable. Thus the fibers of the projection $\mathcal M'\to\mathcal B'$ are vector spaces. The fibers of the projection $\mathcal C'\to\mathcal B'$ are affine spaces modeled on these vector spaces: the crucial point is that every semistable vector bundle of degree zero admits a connection, which follows from Weil's Theorem.
Showing that $[\mathcal M'']=[\mathcal C'']$ is more involved. Let $\mathcal B'''$ be the moduli stack of pairs $L\subset E$ where $E$ is a vector bundle in $\mathcal{B}un_{2,0}$, $L$ is a line subbundle of positive degree. Note that $L$ is the unique destabilizing subbundle, so the image of the forgetful 1-morphism $\mathcal B'''\to\mathcal{B}un_{2,0}$ is exactly $\mathcal B''$ and the fibers of the 1-morphism are points. It follows that the motivic classes of $\mathcal M''':=\mathcal M''\times_{\mathcal B''}\mathcal B'''$ and of $\mathcal M''$ are equal. Similarly, we define $\mathcal C''':=\mathcal C''\times_{\mathcal B''}\mathcal B'''$ and show that $[\mathcal C''']=[\mathcal C'']$. It remains to show that $[\mathcal M''']=[\mathcal C''']$.
Let $\mathcal{P}ic$ denote the stack of line bundles on $X$. We have a 1-morphism $\mathcal B'''\to\mathcal{P}ic\times\mathcal{P}ic$, given by $L\subset E\mapsto(L,E/L)$. This gives 1-morphisms $\mathcal M'''\to\mathcal{P}ic\times\mathcal{P}ic$ and $\mathcal C'''\to\mathcal{P}ic\times\mathcal{P}ic$.
Take a $K$-point of $\mathcal{P}ic\times\mathcal{P}ic$, which is represented by a pair of line bundles $(L_1,L_2)$ on $X_K$. Denote the corresponding fibers of $\mathcal M'''$ and $\mathcal C'''$ by $\mathcal M(L_1,L_2)$ and $\mathcal C(L_1,L_2)$. By Proposition~\ref{pr:pointwise zero}, it suffices to show that $[\mathcal M(L_1,L_2)]=[\mathcal C(L_1,L_2)]$.
Now, let $\mathcal N(L_1,L_2)$ be the stack classifying collections $(L_1\hookrightarrow E\twoheadrightarrow L_2,\Phi)$, where $L_1\hookrightarrow E\twoheadrightarrow L_2$ is a short exact sequence, $\Phi\in\Hom(E,E\otimes\Omega_X)$ is a Higgs field. Set $V:=\Ext^1(L_2,L_1)$, let $V^\vee:=\Hom(L_1,L_2\otimes\Omega_X)$ be the dual vector space. The stack classifying extensions $0\to L_1\to E\to L_2\to0$ is the quotient $V/\Hom(L_2,L_1)$. Thus, we get a forgetful morphism $\pi:\mathcal N(L_1,L_2)\to V/\Hom(L_2,L_1)$.
Similarly, we have a 1-morphism $\pi^\vee:\mathcal N(L_1,L_2)\to V^\vee$, sending $(L_1\hookrightarrow E\twoheadrightarrow L_2,\Phi)$ to
\[
L_1\hookrightarrow E\xrightarrow{\Phi}E\otimes\Omega_X\twoheadrightarrow L_2\otimes\Omega_X.
\]
Next, $\mathcal M(L_1,L_2)$ is the open substack of $\mathcal N(L_1,L_2)$ corresponding to semistable Higgs bundles. Since $L_1$ is the only possible destabilizing subbundle, we see that $\mathcal M(L_1,L_2)=\mathcal N(L_1,L_2)-(\pi^\vee)^{-1}(0)$. On the other hand, for $(L_1\hookrightarrow E\twoheadrightarrow L_2,\Phi)\in\mathcal N(L_1,L_2)$, $E$ admits a connection if and only if
\[
\pi(L_1\hookrightarrow E\twoheadrightarrow L_2,\Phi)\ne0
\]
as follows easily from Weil's Theorem. Arguing as in the proof of $[\mathcal M']=[\mathcal C']$, we see that $[\mathcal C(L_1,L_2)]=[\mathcal N(L_1,L_2)]-[\pi^{-1}(0)]$.
Thus we are left with showing that $[\pi^{-1}(0)]=[(\pi^\vee)^{-1}(0)]$. Now, $\pi^{-1}(0)$ is the quotient stack
\[
\Hom(L_1\oplus L_2,(L_1\oplus L_2)\otimes\Omega_X)/\Hom(L_2,L_1).
\]
On the other hand, $(\pi^\vee)^{-1}(0)$ classifies Higgs fields preserving $L_1$. Thus we have a 1-morphism $\psi:(\pi^\vee)^{-1}(0)\to\Hom(L_2,L_2\otimes\Omega_X)$. The 1-morphism $\psi\times\pi$ makes $(\pi^\vee)^{-1}(0)$ into an affine bundle over $\Hom(L_2,L_2\otimes\Omega_X)\times V/\Hom(L_2,L_1)$ with fiber isomorphic to $\Hom(E,L_1\otimes\Omega_X)$. Thus the class of $(\pi^\vee)^{-1}(0)$ is also the class of a quotient of a vector space by an action of a vector space. Using the Riemann--Roch theorem, Serre duality, the exact sequence for $\Hom$, and the fact that $\Hom(L_1,L_2)=0$, one calculates the dimensions of these vector spaces and sees that the two classes are equal. This completes the proof.
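\begin{remark}
One way to organize the dimension count is as follows. Since $\deg L_1>0>\deg L_2$, we have $\Hom(L_1,L_2)=0$, so by Serre duality $\Ext^1(L_2,L_1\otimes\Omega_X)\simeq\Hom(L_1,L_2)^\vee=0$; hence the exact sequence for $\Hom$ applied to $0\to L_1\to E\to L_2\to0$ gives
\[
\dim\Hom(E,L_1\otimes\Omega_X)=\dim\Hom(L_1,L_1\otimes\Omega_X)+\dim\Hom(L_2,L_1\otimes\Omega_X).
\]
Since $\dim V=\dim\Hom(L_1,L_2\otimes\Omega_X)$, the motivic classes of both stacks are equal to $\mathbb{L}^N$ with
\[
N=\sum_{i,j=1}^2\dim\Hom(L_i,L_j\otimes\Omega_X)-\dim\Hom(L_2,L_1).
\]
\end{remark}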
\section{Motivic classes of Borel reductions}\label{sect:Borel}
The goal of this section is to prove Theorems~\ref{th:Harder1} and~\ref{th:ResHarder2} (we will see that in fact these theorems are equivalent). Theorem~\ref{th:ResHarder2} is the motivic analogue of Harder's residue formula~\cite[Thm.~2.2.3]{HarderAnnals} for $\GL_n$. A slightly different form of Theorem~\ref{th:Harder1} appeared in Section~\ref{sect:IntroHarder} as Theorem~\ref{th:IntroHarder}.
In the current section $\mathbf k$ is a field of any characteristic and $X$ is a smooth geometrically connected projective curve over $\mathbf k$. Recall that when $\mathbf k$ is a field of characteristic zero, we set $X^{(i)}=X^i/\mathfrak{S}_i$. In this section, we let $X^{(i)}$ denote the Hilbert scheme of degree $i$ finite subschemes of $X$. When $\mathbf k$ has characteristic zero, this definition agrees with the previous one. We assume that there is a divisor $D$ on $X$ defined over $\mathbf k$ such that $\deg D=1$. We denote by $\Jac$ the Jacobian variety of~$X$. As before, $K$ denotes an arbitrary extension of $\mathbf k$.
\subsection{Limits of motivic classes of Borel reductions}\label{Sect:ProofBorel}
\begin{lemma}\label{lm:neutral}
For all $d\in\mathbb{Z}$ the moduli stack $\mathcal{P}ic^d$ of degree $d$ line bundles on $X$ is a neutral $\mathbf{G_m}$-gerbe over $\Jac$. That is, $\mathcal{P}ic^d\simeq\Jac\times\mathbf{B}\mathbf{G_m}$, where $\mathbf{B}\mathbf{G_m}$ is the classifying stack of $\mathbf{G_m}$.
\end{lemma}
\begin{proof}
First of all, tensorisation with $\mathcal O_X(dD)$ gives an isomorphism $\mathcal{P}ic^0\to\mathcal{P}ic^d$.
Let us write $D=D_1-D_2$, where the $D_i$ are effective divisors on $X$; we view each $D_i$ as a closed subscheme of~$X$ (not necessarily reduced). Let $S$ be a test scheme. By abuse of notation we denote both projections $D_i\times S\to S$ by $p$.
For a scheme $S$ we denote by $Pic(S)$ the abelian group of isomorphism classes of line bundles on $S$. The Picard variety $\Pic(X)$ represents the functor $S\mapsto Pic(X\times S)/Pic(S)$; note that $\Jac$ is just its neutral component.
For a line bundle $\ell$ on $S\times X$, let $\det(\ell|_D)$ denote the line bundle on $S$ given by
\[
\wedge^{\deg D_1}(p_*(\ell|_{S\times D_1}))\otimes\bigl(\wedge^{\deg D_2}(p_*(\ell|_{S\times D_2}))\bigr)^{-1}.
\]
It is easy to see that $\det(\bullet|_D):Pic(X\times S)\to Pic(S)$ is left inverse to the pullback functor. Using this fact, it is easy to see that $\Pic(X)$ represents the functor, sending $S$ to the set of pairs $(\ell,s)$, where $\ell$ is a line bundle on $S\times X$, $s$ is a trivialization of $\det(\ell|_D)$ (cf.~\cite[Lemma~2.9]{KleimanFGA}). Thus we have a universal line bundle $L$ on $\Pic(X)\times X$, whose restriction to $\Jac\times X$ trivializes the $\mathbf{G_m}$-gerbe.
\end{proof}
Recall that in Section~\ref{sect:BorelRed} we defined the stack $\mathcal{B}un_{r,d_1,\ldots,d_r}$ classifying vector bundles on $X$ with Borel reductions of degree $(d_1,\ldots,d_r)$. We view $\mathcal{B}un_{r,d_1,\ldots,d_r}$ as a stack over $\mathcal{B}un_{r,d_1+\ldots+d_r}$ via the projection $(E_1\subset\ldots\subset E_r)\mapsto E_r$. This projection is schematic and of finite type (for the proof, embed the fiber of this projection into a product of Quot schemes). Set
\[
\mathcal{B}un_{r,d_1,\ldots,d_r}^{\ge\tau}:=\mathcal{B}un_{r,d_1,\ldots,d_r}\times_{\mathcal{B}un_{r,d_1+\ldots+d_r}}\mathcal{B}un_{r,d_1+\ldots+d_r}^{\ge\tau}.
\]
\begin{theorem}\label{th:Harder1}
For any $\tau$, any $r\in\mathbb{Z}_{>0}$, and $d\in\mathbb{Z}$ we have in $\overline{\Mot}(\mathcal{B}un_{r,d}^{\ge\tau})$
\[
\lim_{d_1\to-\infty}\ldots\lim_{d_{r-1}\to-\infty}
\frac{[\mathcal{B}un_{r,d_1,\ldots,d_{r-1},d-d_1-\ldots-d_{r-1}}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]}
{\mathbb{L}^{-(2r-2)d_1-(2r-4)d_2-\ldots-2d_{r-1}}}=
\frac{
\mathbb{L}^{(r-1)\left(d+(1-g)\frac{r+2}2\right)}[\Jac]^{r-1}}
{(\mathbb{L}-1)^{r-1}\prod_{i=2}^r\!\zeta_X(\mathbb{L}^{-i})}\;\mathbf1_{\mathcal{B}un_{r,d}^{\ge\tau}}.
\]
\end{theorem}
This theorem will be proved in Section~\ref{sect:ProofHarder}.
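For orientation, note that for $r=2$ the theorem involves a single limit and reads
\[
\lim_{d_1\to-\infty}\frac{[\mathcal{B}un_{2,d_1,d-d_1}^{\ge\tau}\to\mathcal{B}un_{2,d}^{\ge\tau}]}{\mathbb{L}^{-2d_1}}=
\frac{\mathbb{L}^{d+2(1-g)}[\Jac]}{(\mathbb{L}-1)\zeta_X(\mathbb{L}^{-2})}\,\mathbf1_{\mathcal{B}un_{2,d}^{\ge\tau}},
\]
which is precisely the case $r=2$ of Proposition~\ref{pr:l=2} below.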
\begin{proof}[Derivation of Theorem~\ref{th:IntroHarder} from Theorem~\ref{th:Harder1}]
It is enough to show that the statement holds after restriction to any finite type open substack of $\mathcal{B}un_{r,d}$ (see the discussion of topology on $\overline{\Mot}$ in Section~\ref{sect:MotFun1}). It remains to use the previous theorem and Lemma~\ref{lm:Bun+}(iv).
\end{proof}
\subsection{Eisenstein series}\label{sect:Eisenstein} We would like to reformulate the above theorem using residues. Define the Eisenstein series
\[
E_{r,d}^{\ge\tau}(z_1,\ldots,z_r):=
\sum_{\substack{d_1,\ldots,d_r\in\mathbb{Z}\\ d_1+\ldots+d_r=d}}\mathbb{L}^{(r-1)d_1+(r-2)d_2+\ldots+d_{r-1}}
[\mathcal{B}un_{r,d_1,\ldots,d_r}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]z_1^{d_1}\ldots z_r^{d_r}.
\]
\begin{remark}
The reason we restrict to a single degree $d$ and to HN-nonnegative vector bundles is that we want to work with the group of motivic functions on a finite type stack $\mathcal{B}un_{r,d}^{\ge\tau}$, cf.~Lemma~\ref{lm:Bun+}(iv). In Section~\ref{sect:Hall} we will have to work over non-finite type stacks.
\end{remark}
For an abelian group $M$, consider the following group of formal Laurent series
\begin{equation}\label{eq:ring}
M\left(\!\left(z_1,\frac{z_2}{z_1},\ldots,\frac{z_r}{z_{r-1}}\right)\!\right):=
M[z_1^{\pm1},\ldots, z_r^{\pm1}]\otimes_{\mathbb{Z}[z_1,\frac{z_2}{z_1},\ldots,\frac{z_r}{z_{r-1}}]}
\mathbb{Z}\left[\left[
z_1,\frac{z_2}{z_1},\ldots,\frac{z_r}{z_{r-1}}
\right]\right].
\end{equation}
\begin{lemma}\label{lm:LaurSer}
We have
\[
E_{r,d}^{\ge\tau}(z_1,\ldots,z_r)\in \overline{\Mot}(\mathcal{B}un_{r,d}^{\ge\tau})\left(\!\left(z_1,\frac{z_2}{z_1},\ldots,\frac{z_r}{z_{r-1}}\right)\!\right).
\]
\end{lemma}
\begin{proof}
We note that by Lemma~\ref{lm:Bun+}(ii) we have for each point $(E_0\subset\ldots\subset E_r)$ of $\mathcal{B}un_{r,d_1,\ldots,d_r}^{\ge\tau}$ and $i=0,\ldots,r-1$
\begin{equation*}
\frac{\deg(E_r/E_i)}{\rk(E_r/E_i)}=\frac{d_{i+1}+\ldots+d_r}{r-i}\ge\tau.
\end{equation*}
We can rewrite
\begin{multline*}
E_{r,d}^{\ge\tau}(z_1,\ldots,z_r)=\\ \sum_{\substack{d_1,\ldots,d_r\in\mathbb{Z}\\ d_1+\ldots+d_r=d}}\mathbb{L}^{(r-1)d_1+(r-2)d_2+\ldots+d_{r-1}}[\mathcal{B}un_{r,d_1,\ldots,d_r}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]
z_1^{d_1+\ldots+d_r}\left(\frac{z_2}{z_1}\right)^{d_2+\ldots+d_r}\ldots
\left(\frac{z_r}{z_{r-1}}\right)^{d_r}.
\end{multline*}
The statement follows.
\end{proof}
In the next section we will give precise definitions of residues of power series and prove the following theorem. (In fact, we will see that this theorem is just a reformulation of Theorem~\ref{th:Harder1}.)
\begin{theorem}\label{th:ResHarder2}
We have
\[
\res_{\frac{z_2}{z_1}=\frac{z_3}{z_2}=\ldots=\frac{z_r}{z_{r-1}}
=\mathbb{L}^{-1}}E_{r,d}^{\ge\tau}(z_1,\ldots,z_r)\prod_{i=2}^r\frac{dz_i}{z_i}=
z_1^d\frac{
\mathbb{L}^{\frac{(1-g)(r-1)(r+2)}2}[\Jac]^{r-1}}
{(\mathbb{L}-1)^{r-1}\prod_{i=2}^r\!\zeta_X(\mathbb{L}^{-i})}\;\mathbf1_{\mathcal{B}un_{r,d}^{\ge\tau}}.
\]
\end{theorem}
\subsection{Residues of formal series}\label{sect:res}
Let $M$ be a topological abelian group (in our applications we will take $M=\overline{\Mot}(\mathcal{B}un_{r,d}^{\ge\tau})$, $M=\overline{\Mot}(\mathcal{B}un_{r,d})$, etc). Let $A(z)\,dz\in M((z))\,dz$. Plugging $z=x$ into the product $(x-z)A(z)$ we get an infinite series. If it converges in $M$, we define the residue of $A(z)\,dz$ at $z=x$ as the sum of this series:\footnote{Usually the residue is defined with the opposite sign. We follow the conventions of~\cite{HarderAnnals} and~\cite{SchiffmannIndecomposable}. This simplifies our formulas.}
\[
\res_{z=x}A(z)\,dz:=((x-z)A(z))|_{z=x}.
\]
\begin{remark}\label{rm:RationalRes}
Assume that $R(z)$ is a rational function with coefficients in $\overline{\Mot}(\mathbf k)$, that is, an element of the total ring of fractions of $\overline{\Mot}(\mathbf k)[z]$. We say that it has at most a first-order pole at $x\in\overline{\Mot}(\mathbf k)$ if it can be written as
\[
\frac{P(z)}{(x-z)Q(z)},
\]
where $Q(x)$ is not a zero divisor in $\overline{\Mot}(\mathbf k)$. In this case we can define
\[
\res_{z=x}R(z)\,dz=\frac{P(x)}{Q(x)}.
\]
On the other hand, if $Q(x)$ is invertible, we can expand $R(z)$ in powers of $z$. The residue of the corresponding series may or may not exist, but if it exists, it is equal to the residue of the rational function. Similar considerations apply to the case of many variables considered below.
\end{remark}
\begin{lemma}\label{lm:res1dim}
If $A(z)=\sum_{-\infty}^\infty A_dz^d$ with $A_d=0$ for $d\ll0$, then
\[
\res_{z=x}A(z)\,dz=\lim_{d\to\infty}A_dx^{d+1}.
\]
Moreover, the residue exists if and only if the limit exists.
\end{lemma}
\begin{proof}
The $d$-th partial sum of the series $((x-z)A(z))|_{z=x}$ is
\[
\sum_{-\infty}^d(A_ix^{i+1}-A_{i-1}x^i)=A_dx^{d+1}
\]
(note that the infinite sum has only finitely many non-zero terms because $A_d=0$ for $d\ll0$). The statement follows.
\end{proof}
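For instance, if $x$ is invertible (say $M=\overline{\Mot}(\mathbf k)$ and $x=\mathbb{L}^{-r}$) and $A(z)=\sum_{d\ge0}x^{-(d+1)}z^d$ is the expansion of $\frac1{x-z}$, then $(x-z)A(z)=1$, so $\res_{z=x}A(z)\,dz=1$; accordingly, in Lemma~\ref{lm:res1dim} we have $A_dx^{d+1}=1$ for all $d\ge0$, so the limit is $1$ as well.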
Let $\mathcal X$ be a stack of finite type. Note that an infinite series with coefficients in $\overline{\Mot}(\mathcal X)$ converges if and only if its terms tend to zero. Using this fact, it is not difficult to prove the following statement.
\begin{lemma}\label{lm:CauchyProd}
Assume that $A(z)\in\overline{\Mot}(\mathbf k)((z))$ converges at a certain $x\in\overline{\Mot}(\mathbf k)$ and that for $B(z)\in\overline{\Mot}(\mathcal X)((z))$ the residue $\res_{z=x}B(z)\,dz$ exists. Then
\[
\res_{z=x}A(z)B(z)\,dz=A(x)\res_{z=x}B(z)\,dz.
\]
\end{lemma}
Now consider the case of many variables. For a series $A(z_1,\ldots,z_r)$ in~\eqref{eq:ring} and $x\in M$ define
\[
\res_{\frac{z_r}{z_{r-1}}=x}A(z_1,\ldots,z_r)\,\frac{dz_r}{z_r}=
\left.\left(1-\frac{z_r}{xz_{r-1}}\right)A(z_1,\ldots,z_r)\right|_{z_r=xz_{r-1}}.
\]
Note that, by definition, this residue (if it exists) is a series in variables $z_1,\ldots,z_{r-1}$, and, moreover, we have
\[
\res_{\frac{z_r}{z_{r-1}}=x}A(z_1,\ldots,z_r)\,\frac{dz_r}{z_r}\in M\left(\!\left(z_1,\frac{z_2}{z_1},\ldots,\frac{z_{r-1}}{z_{r-2}}\right)\!\right).
\]
Thus we can define the iterated residue
\begin{multline}
\res_{\frac{z_2}{z_1}=x_1,\ldots,\frac{z_r}{z_{r-1}}=x_{r-1}}A(z_1,\ldots,z_r)\prod_{i=2}^r\frac{dz_i}{z_i}:=\\
\res_{\frac{z_2}{z_1}=x_1}
\left(
\ldots
\left(
\res_{\frac{z_{r-1}}{z_{r-2}}=x_{r-2}}\left(\res_{\frac{z_r}{z_{r-1}}=x_{r-1}}A(z_1,\ldots,z_r)\frac{dz_r}{z_r}\right)
\frac{dz_{r-1}}{z_{r-1}}\right)
\ldots
\frac{dz_2}{z_2}\right).
\end{multline}
We see that this iterated residue is a Laurent series in one variable $z_1$.
\begin{lemma}\label{lm:HighDimRes}
Let $A(z_1,\ldots,z_r)=\sum_{d_1,\ldots,d_r}A_{d_1\ldots d_r}z_1^{d_1}\ldots z_r^{d_r}$ be a series in~\eqref{eq:ring} and let
\[
\res_{\frac{z_2}{z_1}=x_1,\ldots,\frac{z_r}{z_{r-1}}=x_{r-1}}A(z_1,\ldots,z_r)\prod_{i=2}^r\frac{dz_i}{z_i}=
\sum_d B_dz_1^d.
\]
Then
\[
B_d=\lim_{d_1\to-\infty}\ldots\lim_{d_{r-1}\to-\infty} A_{d_1,\ldots,d_{r-1},d-d_1-\ldots-d_{r-1}}
x_1^{d-d_1}x_2^{d-d_1-d_2}\ldots x_{r-1}^{d-d_1-\ldots-d_{r-1}}.
\]
Moreover, the iterated residue exists if and only if the limits exist.
\end{lemma}
\begin{proof}
We proceed by induction on $r$. If $r=1$ the statement holds trivially. Assuming that the statement holds for $r-1$, we calculate as in the proof of Lemma~\ref{lm:res1dim}
\begin{multline*}
\res_{\frac{z_r}{z_{r-1}}=x_{r-1}}A(z_1,\ldots,z_r)\frac{dz_r}{z_r}=\\
\sum_{d_1,\ldots,d_r}(A_{d_1,\ldots,d_r}x_{r-1}^{d_r}-A_{d_1,\ldots,d_{r-2},d_{r-1}+1,d_r-1}x_{r-1}^{d_r-1})
z_1^{d_1}\ldots z_{r-2}^{d_{r-2}}z_{r-1}^{d_{r-1}+d_r}.
\end{multline*}
Let us perform a change of variables: $j_1=d_1$, \ldots, $j_{r-2}=d_{r-2}$, $j_{r-1}=d_{r-1}+d_r$, $j=d_{r-1}$. Then we get
\begin{multline*}
\sum_{d_1,\ldots,d_r}(A_{d_1,\ldots,d_r}x_{r-1}^{d_r}-A_{d_1,\ldots,d_{r-2},d_{r-1}+1,d_r-1}x_{r-1}^{d_r-1})
z_1^{d_1}\ldots z_{r-2}^{d_{r-2}}z_{r-1}^{d_{r-1}+d_r}=\\
\sum_{j_1,\ldots,j_{r-1}}\left(
\sum_j\left(A_{j_1,\ldots,j_{r-2},j,j_{r-1}-j}x_{r-1}^{j_{r-1}-j}-A_{j_1,\ldots,j_{r-2},j+1,j_{r-1}-j-1}x_{r-1}^{j_{r-1}-j-1}\right)
\right)
z_1^{j_1}\ldots z_{r-1}^{j_{r-1}}=\\
\sum_{j_1,\ldots,j_{r-1}}\left(
\lim_{j\to-\infty}A_{j_1,\ldots,j_{r-2},j,j_{r-1}-j}x_{r-1}^{j_{r-1}-j}
\right)
z_1^{j_1}\ldots z_{r-1}^{j_{r-1}}.
\end{multline*}
Now by induction hypothesis we have
\begin{multline*}
\res_{\frac{z_2}{z_1}=x_1,\ldots,\frac{z_r}{z_{r-1}}=x_{r-1}}A(z_1,\ldots,z_r)\prod_{i=2}^r\frac{dz_i}{z_i}=\\
\res_{\frac{z_2}{z_1}=x_1,\ldots,\frac{z_{r-1}}{z_{r-2}}=x_{r-2}}
\left(
\sum_{j_1,\ldots,j_{r-1}}\left(
\lim_{j\to-\infty}A_{j_1,\ldots,j_{r-2},j,j_{r-1}-j}x_{r-1}^{j_{r-1}-j}
\right)
z_1^{j_1}\ldots z_{r-1}^{j_{r-1}}
\right)\prod_{i=2}^{r-1}\frac{dz_i}{z_i}=\\
\sum_dz_1^d
\left(
\lim_{j_1,\ldots,j_{r-2},j\to-\infty}A_{j_1,\ldots,j_{r-2},j,d-j_1-\ldots-j_{r-2}-j}
x_1^{d-j_1}\ldots x_{r-2}^{d-j_1-\ldots-j_{r-2}}x_{r-1}^{d-j_1-\ldots-j_{r-2}-j}
\right).
\end{multline*}
\end{proof}
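Note that for $r=2$ Lemma~\ref{lm:HighDimRes} states that if $\res_{\frac{z_2}{z_1}=x_1}A(z_1,z_2)\,\frac{dz_2}{z_2}=\sum_d B_dz_1^d$, then
\[
B_d=\lim_{d_1\to-\infty}A_{d_1,d-d_1}x_1^{d-d_1},
\]
and the residue exists if and only if these limits exist.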
\subsection{Derivation of Theorem~\ref{th:ResHarder2} from Theorem~\ref{th:Harder1}}
Let us write
\[
\res_{\frac{z_2}{z_1}=\frac{z_3}{z_2}=\ldots=\frac{z_r}{z_{r-1}}= \mathbb{L}^{-1}}E_{r,d}^{\ge\tau}(z_1,\ldots,z_r)\prod_{i=2}^r\frac{dz_i}{z_i}=
\sum_n B_n z_1^n.
\]
It follows easily from Lemma~\ref{lm:HighDimRes} that $B_n=0$ if $n\ne d$. On the other hand, by the same lemma, we have
\begin{multline}
B_d=\lim_{d_1\to-\infty}\ldots\lim_{d_{r-1}\to-\infty}
\frac{\mathbb{L}^{(r-1)d_1+\ldots+d_{r-1}}[\mathcal{B}un^{\ge\tau}_{r,d_1,\ldots,d_{r-1},d-d_1-\ldots-d_{r-1}}\to\mathcal{B}un_{r,d}^{\ge\tau}]}
{\mathbb{L}^{d-d_1}\mathbb{L}^{d-d_1-d_2}\ldots\mathbb{L}^{d-d_1-\ldots-d_{r-1}}}=\\ \mathbb{L}^{(1-r)d}\lim_{d_1\to-\infty}\ldots\lim_{d_{r-1}\to-\infty}
\frac{[\mathcal{B}un^{\ge\tau}_{r,d_1,\ldots,d_{r-1},d-d_1-\ldots-d_{r-1}}\to\mathcal{B}un_{r,d}^{\ge\tau}]}{\mathbb{L}^{-(2r-2)d_1-(2r-4)d_2-\ldots-2d_{r-1}}}=\\
\mathbb{L}^{(1-r)d}\cdot\frac{
\mathbb{L}^{(r-1)\left(d+(1-g)\frac{r+2}2\right)}[\Jac]^{r-1}}
{(\mathbb{L}-1)^{r-1}\prod_{i=2}^r\!\zeta_X(\mathbb{L}^{-i})}\;\mathbf1_{\mathcal{B}un_{r,d}^{\ge\tau}}
=\frac{
\mathbb{L}^{(r-1)\left((1-g)\frac{r+2}2\right)}[\Jac]^{r-1}}
{(\mathbb{L}-1)^{r-1}\prod_{i=2}^r\!\zeta_X(\mathbb{L}^{-i})}\;\mathbf1_{\mathcal{B}un_{r,d}^{\ge\tau}}.
\end{multline}
This is exactly the right-hand side of Theorem~\ref{th:ResHarder2}.\qed
\subsection{Stacks of partial flags}
Before we prove Theorem~\ref{th:Harder1}, we need some preliminaries. We introduce the stack $\mathcal{B}un_{r,d_1,\ldots,d_l}^{\ge\tau}$ classifying collections $0=E_0\subset E_1\subset\ldots\subset E_l$, where $E_l$ is a vector bundle in $\mathcal{B}un_{r,d_1+\ldots+d_l}^{\ge\tau}$, $E_i$ is a vector subbundle of $E_l$ of rank $i$ for $i=1,\ldots,l-1$, and $\deg(E_i/E_{i-1})=d_i$ for $i=1,\ldots,l$. We study this stack for $l=2$ first.
\begin{proposition}\label{pr:l=2}
We have in $\overline{\Mot}(\mathcal{B}un_{r,d}^{\ge\tau})$:
\[
\lim_{d_1\to-\infty}\frac{[\mathcal{B}un_{r,d_1,d-d_1}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]}{\mathbb{L}^{-rd_1}}=
\frac{\mathbb{L}^{d+r(1-g)}[\Jac]}{(\mathbb{L}-1)\zeta_X(\mathbb{L}^{-r})}\mathbf1_{\mathcal{B}un_{r,d}^{\ge\tau}}.
\]
\end{proposition}
We will prove this proposition in Section~\ref{sect:pr:l=2}. We first need to introduce more stacks. Let $\mathcal{L}au_{r,d_1,d_2}^{\ge\tau}$ classify collections $0=E_0\subset E_1\subset E_2$, where $E_2$ is a vector bundle in $\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}$, $E_1$ is a subsheaf of rank one and degree $d_1$. Note that $E_1$ is a line bundle but $E_2/E_1$ might have torsion. We view $\mathcal{L}au_{r,d_1,d_2}^{\ge\tau}$ as a stack over $\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}$ via the obvious projection.
\begin{remark}
The stacks $\mathcal{L}au_{r,d_1,d_2}^{\ge\tau}$ are Laumon's relative compactifications of $\mathcal{B}un_{r,d_1,d_2}^{\ge\tau}\to\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}$; hence the notation. The reason we need these stacks is that they are simpler than $\mathcal{B}un_{r,d_1,d_2}^{\ge\tau}$ when $d_1$ is small, see Lemma~\ref{lm:Laumon_l=2} below. On the other hand, their relation with $\mathcal{B}un_{r,d_1,d_2}^{\ge\tau}$ is also quite simple, as we shall see momentarily.
\end{remark}
\begin{lemma}\label{lm:LauStrat}
We have in $\mathrm{Mot}(\mathcal{B}un_{r,d_1+d_2}^{\ge\tau})$
\begin{equation*}
[\mathcal{L}au_{r,d_1,d_2}^{\ge\tau}\to\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}]=
\sum_{i\ge0}[X^{(i)}\times\mathcal{B}un_{r,d_1+i,d_2-i}^{\ge\tau}\to\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}].
\end{equation*}
\end{lemma}
\begin{proof}
Note first that the sum in the RHS is finite. Indeed, $\mathcal{B}un_{r,d_1+i,d_2-i}^{\ge\tau}$ is empty when $i$ is large enough. Recall that $X^{(i)}$ parameterizes length $i$ subschemes of $X$. We have a 1-morphism
\[
\phi:\sqcup_{i\ge0}(X^{(i)}\times\mathcal{B}un_{r,d_1+i,d_2-i}^{\ge\tau})\to
\mathcal{L}au_{r,d_1,d_2}^{\ge\tau}
\]
sending $(D,E_1\subset E_2)$ to $(E_1(-D),E_2)$. We claim that this 1-morphism induces an isomorphism on $K$-points for any field $K$. Indeed, take $(E_1\subset E_2)\in\mathcal{L}au_{r,d_1,d_2}^{\ge\tau}(K)$. The coherent sheaf $E_2/E_1$ can be written as $T\oplus E$, where $T$ is a uniquely defined torsion sheaf and $E$ is a vector bundle. Let $E_1'$ be the inverse image of $T$ under the projection $E_2\to E_2/E_1$. Then $E_1=E_1'(-D)$ for an effective divisor $D\subset X$ (because $E_1'/E_1\approx T$ is torsion). Now $(E_1\subset E_2)=\phi(D,E_1'\subset E_2)$ and it is easy to see that this is the only $K$-point mapping to $(E_1\subset E_2)$. It remains to use Corollary~\ref{cor:pointwEqual}.
\end{proof}
\begin{lemma}\label{lm:Laumon_l=2}
If $d_1<2-2g+\tau$, then we have in $\mathrm{Mot}(\mathcal{B}un_{r,d_1+d_2}^{\ge\tau})$
\[
[\mathcal{L}au^{\ge\tau}_{r,d_1,d_2}\to\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}]=
\frac{\mathbb{L}^{d_1+d_2+r(-d_1-g+1)}-1}{\mathbb{L}-1}[\Jac]\mathbf1_{\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}}.
\]
\end{lemma}
\begin{proof}
Recall that the stack $\mathcal{P}ic^{d_1}$ classifies degree $d_1$ line bundles on $X$. Let $\mathcal L$ be the universal line bundle on $\mathcal{P}ic^{d_1}\times X$. Denote by $\mathcal E$ the universal vector bundle on $\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}\times X$. Denote by $p_{ij}$ the projections from
\[
\mathcal{P}ic^{d_1}\times\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}\times X
\]
to the products of the $i$-th and the $j$-th factors. Set
\[
\mathcal F:=\HOM(p_{13}^*\mathcal L,p_{23}^*\mathcal E)\quad
\text{ and }\mathcal V:=(p_{12})_*\mathcal F,
\]
where $\HOM$ stands for the sheaf of homomorphisms. Note that $\mathcal V$ is a coherent sheaf because $p_{12}$ is a proper 1-morphism.
\emph{Claim.} If $d_1<2-2g+\tau$, then the coherent sheaf $\mathcal V$ is locally free of rank $d_1+d_2+r(-d_1-g+1)$.
\begin{proof}[Proof of the claim]
Let $\xi=(\ell,E)$ be a $K$-point of $\mathcal{P}ic^{d_1}\times\mathcal{B}un^{\ge\tau}_{r,d_1+d_2}$, so that $\ell$ is a line bundle on $X_K$ and $E$ is a vector bundle on $X_K$. The fiber of $\mathcal F$ over $\xi\times X=X_K$ is $\mathcal F_\xi=\HOM(\ell,E)$. According to~\cite[Sect.~5, Cor.~2]{MumfordAbelian} we only need to show that $h^0(X_K,\mathcal F_\xi)=d_1+d_2+r(-d_1-g+1)$ (in particular, this dimension does not depend on $\xi$).
First of all, we claim that $H^1(X_K,\mathcal F_\xi)=0$. Indeed, by Serre duality the vector space $H^1(X_K,\mathcal F_\xi)$ is dual to $\Hom(E,\Omega_{X_K}\otimes\ell)$. The latter space is zero by Lemma~\ref{lm:Bun+}(ii). Now by Riemann--Roch we have
\[
h^0(X_K,\mathcal F_\xi)=h^0(X_K,E\otimes\ell^{-1})=d_1+d_2+r(-d_1-g+1).
\]
The claim is proved.
\end{proof}
Consider the complement of the zero section in the total space of the vector bundle $\mathcal V$. It is clear from the construction that this complement classifies triples $(\ell,E,s)$, where $\ell$ is a degree $d_1$ line bundle on $X$, $E$ is a vector bundle on $X$ of rank $r$ and degree $d_1+d_2$, $s:\ell\to E$ is a non-zero (=injective) morphism. Now it is easy to see that this complement is isomorphic to $\mathcal{L}au_{r,d_1,d_2}^{\ge\tau}$. Thus, by the previous claim and Lemma~\ref{lm:neutral} we have in $\mathrm{Mot}(\mathcal{B}un_{r,d_1+d_2}^{\ge\tau})$:
\begin{multline*}
[\mathcal{L}au_{r,d_1,d_2}^{\ge\tau}\to\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}]=\\
(\mathbb{L}^{d_1+d_2+r(-d_1-g+1)}-1)[\mathcal{P}ic^{d_1}]\mathbf1_{\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}}=
\frac{\mathbb{L}^{d_1+d_2+r(-d_1-g+1)}-1}{\mathbb{L}-1}[\Jac]\mathbf1_{\mathcal{B}un_{r,d_1+d_2}^{\ge\tau}}.
\end{multline*}
\end{proof}
\subsection{Proof of Proposition~\ref{pr:l=2}}\label{sect:pr:l=2}
Consider the generating series
\[
E(z):=\sum_{d_1}[\mathcal{B}un_{r,-d_1,d+d_1}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]z^{d_1}\in\overline{\Mot}(\mathcal{B}un_{r,d}^{\ge\tau})((z))
\]
and
\[
\tilde E(z):=\sum_{d_1}[\mathcal{L}au_{r,-d_1,d+d_1}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]z^{d_1}\in\overline{\Mot}(\mathcal{B}un_{r,d}^{\ge\tau})((z)).
\]
It follows from Lemma~\ref{lm:LauStrat}, by comparing the coefficients at $z^{d_1}$ and recalling that $\zeta_X(z)=\sum_{i\ge0}[X^{(i)}]z^i$, that
\begin{equation*}
\tilde E(z)=\zeta_X(z) E(z).
\end{equation*}
Now we calculate, using Lemma~\ref{lm:res1dim} twice and Lemma~\ref{lm:CauchyProd}.
\begin{multline*}
\lim_{d_1\to-\infty}\frac{[\mathcal{B}un_{r,d_1,d-d_1}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]}{\mathbb{L}^{-rd_1}}=
\mathbb{L}^r\res_{z=\mathbb{L}^{-r}}E(z)\,dz=\\
\frac{\mathbb{L}^r}{\zeta_X(\mathbb{L}^{-r})}\res_{z=\mathbb{L}^{-r}}\tilde E(z)\,dz=
\frac1{\zeta_X(\mathbb{L}^{-r})}\lim_{d_1\to-\infty}\frac{[\mathcal{L}au_{r,d_1,d-d_1}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]}{\mathbb{L}^{-rd_1}}.
\end{multline*}
We also used that $\zeta_X(\mathbb{L}^{-r})$ converges and is invertible in $\overline{\Mot}(\mathbf k)$ for any $r\ge2$ (see Lemma~\ref{lm:zetaX}). Now by Lemma~\ref{lm:Laumon_l=2} we have
\begin{multline*}
\frac1{\zeta_X(\mathbb{L}^{-r})}\lim_{d_1\to-\infty}\frac{[\mathcal{L}au_{r,d_1,d-d_1}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]}{\mathbb{L}^{-rd_1}}=\\
\frac{[\Jac]}{(\mathbb{L}-1)\zeta_X(\mathbb{L}^{-r})}
\lim_{d_1\to-\infty}\frac{\mathbb{L}^{d+r(-d_1-g+1)}-1}{\mathbb{L}^{-rd_1}}
\mathbf1_{\mathcal{B}un_{r,d}^{\ge\tau}}=
\frac{\mathbb{L}^{d+r(1-g)}[\Jac]}{(\mathbb{L}-1)\zeta_X(\mathbb{L}^{-r})}
\mathbf1_{\mathcal{B}un_{r,d}^{\ge\tau}}.
\end{multline*}
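To spell out the limit in the last step: since $\mathbb{L}^{d+r(-d_1-g+1)}=\mathbb{L}^{d+r(1-g)}\mathbb{L}^{-rd_1}$, we have
\[
\frac{\mathbb{L}^{d+r(-d_1-g+1)}-1}{\mathbb{L}^{-rd_1}}=\mathbb{L}^{d+r(1-g)}-\mathbb{L}^{rd_1},
\]
and $\mathbb{L}^{rd_1}\to0$ in $\overline{\Mot}(\mathbf k)$ as $d_1\to-\infty$, its virtual dimension tending to $-\infty$.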
\qed
\begin{remark}
It is an easy consequence of the above calculations that $\tilde E(z)$, $E(z)$, and $E_{2,d}^{\ge\tau}(z_1,z_2)$ are expansions of rational functions. We do not know if $E_{r,d}^{\ge\tau}$ is an expansion of a rational function for $r\ge3$.
\end{remark}
\subsection{Proof of Theorem~\ref{th:Harder1}}\label{sect:ProofHarder}
We will prove for $2\le l\le r$ that
\[
\lim_{d_1\to-\infty}\ldots\lim_{d_{l-1}\to-\infty}
\frac{[\mathcal{B}un_{r,d_1,\ldots,d_{l-1},d-d_1-\ldots-d_{l-1}}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]}
{\mathbb{L}^{-(r+l-2)d_1-(r+l-4)d_2-\ldots-(r-l+2)d_{l-1}}}=\frac{
\mathbb{L}^{(l-1)(d+(1-g)\frac{2r-l+2}2)}[\Jac]^{l-1}}
{(\mathbb{L}-1)^{l-1}\prod_{i=r-l+2}^r\!\zeta_X(\mathbb{L}^{-i})}\;\mathbf1_{\mathcal{B}un_{r,d}^{\ge\tau}}.
\]
Our theorem is equivalent to this statement with $l=r$. We use induction on $l$. For $l=2$ this is Proposition~\ref{pr:l=2} above. Assume that the formula is proved for $l-1$.
\begin{lemma}
We have
\begin{equation*}
\mathcal{B}un_{r,d_1,\ldots,d_l}^{\ge\tau}\simeq\mathcal{B}un_{r,d_1,\ldots,d_{l-2},d_{l-1}+d_l}^{\ge\tau}\times_{\mathcal{B}un_{r-l+2,d_{l-1}+d_l}^{\ge\tau}}
\mathcal{B}un_{r-l+2,d_{l-1},d_l}^{\ge\tau}.
\end{equation*}
\end{lemma}
\begin{proof}
The isomorphism sends $\mathcal E_1\subset\ldots\subset\mathcal E_l$ to the pair
\[
(\mathcal E_1\subset\ldots\subset\mathcal E_{l-2}\subset\mathcal E_l,\mathcal E_{l-1}/\mathcal E_{l-2}\subset\mathcal E_l/\mathcal E_{l-2}).
\]
\end{proof}
Let us return to the proof of the theorem. First, we fix $d,d_1,\ldots,d_{l-2}$. Set $d':=d-d_1-\ldots-d_{l-2}$. Let
$f:\mathcal{B}un_{r,d_1,\ldots,d_{l-2},d'}^{\ge\tau}\to\mathcal{B}un_{r-l+2,d'}^{\ge\tau}$ and
$g:\mathcal{B}un_{r,d_1,\ldots,d_{l-2},d'}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}$ be the projections. These projections together with the 1-morphisms of the previous lemma fit into the diagram:
\[
\begin{CD}
\mathcal{B}un_{r,d_1,\ldots,d_{l-1},d'-d_{l-1}}^{\ge\tau} @>>>
\mathcal{B}un_{r,d_1,\ldots,d_{l-2},d'}^{\ge\tau} @>g>>\mathcal{B}un_{r,d}^{\ge\tau}\\
@VVV @VVfV \\
\mathcal{B}un_{r-l+2,d_{l-1},d'-d_{l-1}}^{\ge\tau} @>>> \mathcal{B}un_{r-l+2,d'}^{\ge\tau}.
\end{CD}
\]
Using the above lemma and Proposition~\ref{pr:l=2}, we calculate
\begin{multline}
\lim_{d_{l-1}\to-\infty}
\frac{[\mathcal{B}un_{r,d_1,\ldots,d_{l-1},d-d_1-\ldots-d_{l-1}}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]}
{\mathbb{L}^{-(r-l+2)d_{l-1}}}=\\
\lim_{d_{l-1}\to-\infty}\frac{g_!f^*
[\mathcal{B}un_{r-l+2,d_{l-1},d'-d_{l-1}}^{\ge\tau}\to\mathcal{B}un_{r-l+2,d'}^{\ge\tau}]}{\mathbb{L}^{-(r-l+2)d_{l-1}}}=\\
g_!f^*\left(
\lim_{d_{l-1}\to-\infty}\frac{[\mathcal{B}un_{r-l+2,d_{l-1},d'-d_{l-1}}^{\ge\tau}\to\mathcal{B}un_{r-l+2,d'}^{\ge\tau}]}{\mathbb{L}^{-(r-l+2)d_{l-1}}}
\right)=\\
g_!f^*\left(
\frac{\mathbb{L}^{d'+(r-l+2)(1-g)}[\Jac]}{(\mathbb{L}-1)\zeta_X(\mathbb{L}^{-(r-l+2)})}\mathbf1_{\mathcal{B}un_{r-l+2,d'}^{\ge\tau}}
\right)=
\frac{\mathbb{L}^{d'+(r-l+2)(1-g)}[\Jac]}{(\mathbb{L}-1)\zeta_X(\mathbb{L}^{-(r-l+2)})}[\mathcal{B}un_{r,d_1,\ldots,d_{l-2},d'}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}].
\end{multline}
It remains to use the induction hypothesis:
\begin{multline}
\lim_{d_1\to-\infty}\ldots\lim_{d_{l-1}\to-\infty}
\frac{[\mathcal{B}un_{r,d_1,\ldots,d_{l-1},d-d_1-\ldots-d_{l-1}}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]}
{\mathbb{L}^{-(r+l-2)d_1-(r+l-4)d_2-\ldots-(r-l+2)d_{l-1}}}=\\
\lim_{d_1\to-\infty}\ldots\lim_{d_{l-2}\to-\infty}
\frac{\mathbb{L}^{d'+(r-l+2)(1-g)}[\Jac][\mathcal{B}un_{r,d_1,\ldots,d_{l-2},d'}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]}
{\mathbb{L}^{-(r+l-2)d_1-(r+l-4)d_2-\ldots-(r-l+4)d_{l-2}}(\mathbb{L}-1)\zeta_X(\mathbb{L}^{-(r-l+2)})}=\\
\frac{\mathbb{L}^{d+(r-l+2)(1-g)}[\Jac]}{(\mathbb{L}-1)\zeta_X(\mathbb{L}^{-(r-l+2)})}\lim_{d_1\to-\infty}\ldots\lim_{d_{l-2}\to-\infty}
\frac{[\mathcal{B}un_{r,d_1,\ldots,d_{l-2},d'}^{\ge\tau}\to\mathcal{B}un_{r,d}^{\ge\tau}]}
{\mathbb{L}^{-(r+l-3)d_1-(r+l-5)d_2-\ldots-(r-l+3)d_{l-2}}}=\\
\frac{\mathbb{L}^{d+(r-l+2)(1-g)}[\Jac]}{(\mathbb{L}-1)\zeta_X(\mathbb{L}^{-(r-l+2)})}\cdot
\frac{\mathbb{L}^{(l-2)(d+(1-g)\frac{2r-l+3}2)}[\Jac]^{l-2}}
{(\mathbb{L}-1)^{l-2}\prod_{i=r-l+3}^r\!\zeta_X(\mathbb{L}^{-i})}\;\mathbf1_{\mathcal{B}un_{r,d}^{\ge\tau}}=
\frac{
\mathbb{L}^{(l-1)(d+(1-g)\frac{2r-l+2}2)}[\Jac]^{l-1}}
{(\mathbb{L}-1)^{l-1}\prod_{i=r-l+2}^r\!\zeta_X(\mathbb{L}^{-i})}\;\mathbf1_{\mathcal{B}un_{r,d}^{\ge\tau}}.
\end{multline}
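The last equality is exponent bookkeeping: the powers of $[\Jac]$ and of $\mathbb{L}-1$ add up as $1+(l-2)=l-1$, the factor $\zeta_X(\mathbb{L}^{-(r-l+2)})$ completes $\prod_{i=r-l+3}^r\zeta_X(\mathbb{L}^{-i})$ to $\prod_{i=r-l+2}^r\zeta_X(\mathbb{L}^{-i})$, and for the exponent of $\mathbb{L}$ one checks that
\[
d+(r-l+2)(1-g)+(l-2)\Big(d+(1-g)\tfrac{2r-l+3}2\Big)=(l-1)\Big(d+(1-g)\tfrac{2r-l+2}2\Big),
\]
both sides being equal to $(l-1)d+(1-g)\tfrac{2rl-2r+3l-l^2-2}2$.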
The theorem is proved.\qed
\section{Some identities in motivic Hall algebras }\label{sect:Hall}
For an ind-constructible abelian (or, more generally, triangulated $A_\infty$) category, Kontsevich and Soibelman define its motivic Hall algebra in~\cite{KontsevichSoibelman08} (see also~\cite{JoyceConfigurationsII}). We will need this construction for the category of coherent sheaves on the curve $X$. In this case, the formulas of~\cite{KontsevichSoibelman08} simplify drastically, so we prefer to give a direct definition, referring the interested reader to~\cite{KontsevichSoibelman08} for the general case.\footnote{For a nice introduction to motivic Hall algebras of categories of coherent sheaves, see~\cite{BridgeleandHallIntro}.}
We also define a version of comultiplication. Note that there is some peculiarity in the motivic case (in particular, coassociativity does not make literal sense). We notice that (in the particular case of the category of sheaves on curves) there is a compatibility between multiplication and comultiplication resembling Green's Theorem. Finally, we do some concrete calculations to be used in Section~\ref{sect:Proofs}.
In this section $\mathbf k$ is a field of arbitrary characteristic except for Section~\ref{sect:torsheaves}, where we need the field to be of characteristic zero. We keep the assumptions from the previous section, in particular: $X$ is a smooth geometrically connected projective curve over the field $\mathbf k$ and we assume that there is a divisor~$D$ on $X$ defined over $\mathbf k$ such that $\deg D=1$. As before, $K$ denotes an extension of $\mathbf k$.
\subsection{Motivic Hall algebra of the category of coherent sheaves}
For any stack $\mathcal X$ we consider the ring
\[
\overline{\Mot}(\mathcal X)[\sqrt\mathbb{L}]:=\overline{\Mot}(\mathcal X)[t]/(t^2-\mathbb{L}).
\]
We can easily extend pullbacks, pushforwards, and products to these rings. We note that $\overline{\Mot}(\mathcal X)\subset\overline{\Mot}(\mathcal X)[\sqrt\mathbb{L}]$, since $t^2-\mathbb{L}$ is a monic polynomial. Thus, when proving an identity in $\overline{\Mot}(\mathcal X)$, we may work in $\overline{\Mot}(\mathcal X)[\sqrt\mathbb{L}]$.
Set
\[
\Gamma:=\mathbb{Z}^2,\qquad\Gamma_+:=\{(r,d)\in\mathbb{Z}_{\ge0}\times\mathbb{Z}\,|\;d\ge0\text{ if }r=0\}
\]
so that $\Gamma_+$ is a subsemigroup of $\Gamma$. If $F$ is a coherent sheaf on $X_K$ of generic rank $r$ and degree $d$, we say that $F$ is of class $(r,d)\in\Gamma_+$ and write $\cl(F)=(r,d)$. Let $\mathcal{C}oh_\gamma$ be the moduli stack of coherent sheaves on $X$ of class $\gamma\in\Gamma_+$. In particular, we have $\mathcal{C}oh_{(0,0)}=\Spec\mathbf k$. We also consider $\mathcal{C}oh_r:=\sqcup_d\mathcal{C}oh_{(r,d)}$; this is the moduli stack of rank $r$ sheaves. Finally, set $\mathcal{C}oh:=\sqcup_{r\ge0}\mathcal{C}oh_r$.
For $(r_i,d_i)\in\Gamma$, $i=1,2$, we set
\[
\langle(r_1,d_1),(r_2,d_2)\rangle=(1-g)r_1r_2+(r_1d_2-r_2d_1)
\]
and
\[
((r_1,d_1),(r_2,d_2))=\langle(r_1,d_1),(r_2,d_2)\rangle+\langle(r_2,d_2),(r_1,d_1)\rangle=(2-2g)r_1r_2.
\]
Note that the symmetrized form only involves $r_1$ and $r_2$.
Next, we note that for coherent sheaves $F_1$ and $F_2$ on $X$ we have
\[
\dim\Hom(F_1,F_2)-\dim\Ext^1(F_1,F_2)=\langle\cl(F_1),\cl(F_2)\rangle.
\]
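For instance, for line bundles $L_1$, $L_2$ of degrees $d_1$, $d_2$ this is the Riemann--Roch theorem: $\Hom(L_1,L_2)=H^0(X,L_1^{-1}\otimes L_2)$ and $\Ext^1(L_1,L_2)=H^1(X,L_1^{-1}\otimes L_2)$, so the left-hand side equals
\[
\chi(L_1^{-1}\otimes L_2)=(d_2-d_1)+1-g=\langle(1,d_1),(1,d_2)\rangle.
\]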
Set
\[
\mathcal H_\gamma:=\overline{\Mot}(\mathcal{C}oh_\gamma)[\sqrt\mathbb{L}]\text{ and }
\mathcal H_\gamma^{fin}:=\overline{\Mot}^{fin}(\mathcal{C}oh_\gamma)[\sqrt\mathbb{L}]
\text{ for }\gamma\in\Gamma_+.
\]
Finally, set
\[
\mathcal H':=\bigoplus_{\gamma\in\Gamma_+}\mathcal H_\gamma,\qquad
\mathcal H'_{fin}:=\bigoplus_{\gamma\in\Gamma_+}\mathcal H_\gamma^{fin},\text{ and }
\widehat\mathcal H':=\prod_{\gamma\in\Gamma_+}\mathcal H_\gamma.
\]
For $\gamma_1,\gamma_2\in\Gamma_+$ let $\mathcal{C}oh_{\gamma_2,\gamma_1}$ be the stack classifying pairs of sheaves $(F_1\subset F)$ such that $\cl(F_1)=\gamma_1$, $\cl(F/F_1)=\gamma_2$. We have a diagram
\[
\mathcal{C}oh_{\gamma_2}\times\mathcal{C}oh_{\gamma_1}\xleftarrow{p}\mathcal{C}oh_{\gamma_2,\gamma_1}\xrightarrow{s}\mathcal{C}oh_{\gamma_1+\gamma_2}.
\]
Here $p(F_1\subset F)=(F/F_1,F_1)$, $s(F_1\subset F)=F$. Note that both $p$ and $s$ are 1-morphisms of finite type.
The multiplication on $\mathcal H'$ is defined as follows: if $f_i\in\mathcal H_{\gamma_i}$ ($i=1,2$), then
\[
f_2f_1:=\mathbb{L}^{\frac12\langle\gamma_2,\gamma_1\rangle}s_!p^*(f_2\boxtimes f_1).
\]
We extend this product to all of $\mathcal H'$ by bilinearity. The above product makes $\mathcal H'$ into a unital associative algebra over $\overline{\Mot}(\mathbf k)[\sqrt\mathbb{L}]$.
One can also define the $n$-fold multiplication on $\mathcal H'$ directly, as follows. Let $\mathcal{C}oh_{\gamma_n,\ldots,\gamma_1}$ denote the stack of filtrations of coherent sheaves $0=F_0\subset F_1\subset\ldots\subset F_n=F$ such that for all $i$ we have $\cl(F_i/F_{i-1})=\gamma_i$. We have a diagram
\[
\mathcal{C}oh_{\gamma_n}\times\ldots\times\mathcal{C}oh_{\gamma_1}
\xleftarrow{p_{(n)}}\mathcal{C}oh_{\gamma_n,\ldots,\gamma_1}\xrightarrow{s_{(n)}}\mathcal{C}oh_{\gamma_1+\ldots+\gamma_n}.
\]
The 1-morphisms $p_{(n)}$ and $s_{(n)}$ are defined similarly to $p$ and $s$; they are also of finite type. Now, for $f_i\in\mathcal H_{\gamma_i}$ we have
\[
f_n\ldots f_1=\mathbb{L}^{\sum_{i>j}\frac12\langle\gamma_i,\gamma_j\rangle}s_{(n)!}p_{(n)}^*(f_n\boxtimes\ldots\boxtimes f_1).
\]
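As a consistency check, for $n=3$ this twist agrees with iterating the binary product: by bilinearity of the Euler form,
\[
\tfrac12\langle\gamma_3,\gamma_2\rangle+\tfrac12\langle\gamma_3+\gamma_2,\gamma_1\rangle
=\tfrac12\langle\gamma_3,\gamma_2\rangle+\tfrac12\langle\gamma_3,\gamma_1\rangle+\tfrac12\langle\gamma_2,\gamma_1\rangle
=\sum_{i>j}\tfrac12\langle\gamma_i,\gamma_j\rangle,
\]
which is the exponent appearing in $(f_3f_2)f_1$.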
Note that $\mathcal H'_{fin}\subset\mathcal H'$ is a subalgebra. On the other hand, $\widehat\mathcal H^{\prime}$ is not an algebra because multiplication would involve infinite summation. However, $\widehat\mathcal H'_{tor}:=\prod_{d\ge0}\mathcal H_{(0,d)}$ is. Moreover, the restriction of the multiplication on $\mathcal H'$ to $\mathcal H'\otimes\left(\bigoplus_d\mathcal H_{(0,d)}\right)$ extends to an action $\widehat\mathcal H'\otimes\widehat\mathcal H'_{tor}\to\widehat\mathcal H'$; this action preserves $\mathcal H'$ and the rank gradation on $\mathcal H'$.
More precisely, the action is defined as follows. Let $\mathcal{C}oh_{tor}:=\sqcup_{d\ge0}\mathcal{C}oh_{0,d}$ be the stack of torsion sheaves and let
$\mathcal{C}oh_{\bullet,tor}$ denote the stack classifying pairs $F_1\subset F$, where $F$ is a coherent sheaf on $X$ and $F_1\subset F$ is a torsion subsheaf. We have projections
\[
\mathcal{C}oh\times\mathcal{C}oh_{tor}\xleftarrow{p}\mathcal{C}oh_{\bullet,tor}\xrightarrow{s}\mathcal{C}oh
\]
defined by $p(F_1\subset F)=(F/F_1,F_1)$ and $s(F_1\subset F)=F$. Both projections are of finite type, so for $f_1\in\mathcal H'_{tor}$ and $f_2\in\widehat\mathcal H'$ we define
\[
f_2f_1:=s_!p^*(T(f_2\boxtimes f_1)),
\]
where $T$ acts on $\mathcal{C}oh_{(r,e)}\times\mathcal{C}oh_{(0,d)}$ via multiplication by $\mathbb{L}^{\frac12dr}=\mathbb{L}^{\frac12\langle(r,e),(0,d)\rangle}$, so that this extends the product defined above.
\begin{remark}
We can define Hall algebras using $\mathrm{Mot}(\mathcal{C}oh_\gamma)[\sqrt\mathbb{L}]$ instead of $\overline{\Mot}(\mathcal{C}oh_\gamma)[\sqrt\mathbb{L}]$. Everything except for Proposition~\ref{pr:HallFormulas}\eqref{Halliv} would work. Thus Proposition~\ref{pr:MainCalc} is also true as a statement about series with coefficients in $\mathrm{Mot}(\mathbf k)[\sqrt\mathbb{L}]$.
\end{remark}
\subsection{Extended Hall algebras}
Set $\Gamma':=\mathbb{Z}$ and let $\mathbb{Z}[\Gamma']$ be the group algebra. We denote the element corresponding to $r\in\Gamma'$ by $k_r$. Thus $\mathbb{Z}[\Gamma']\approx\mathbb{Z}[k_1,k_1^{-1}]$ is the ring of Laurent polynomials. We let $\Gamma'$ act on $\mathcal H'$ via
\[
r\cdot f=\mathbb{L}^{(1-g)rr'}f\text{ whenever }f\in\mathcal H_{r',d}.
\]
This gives a semidirect product
\[
\mathcal H:=\mathcal H'\otimes_\mathbb{Z}\Z[\Gamma'].
\]
Thus, $\mathcal H$ is an associative algebra. Note that $\mathcal H$ is graded by $\Gamma_+$. We view $\mathcal H'$ and $\mathbb{Z}[\Gamma']$ as subalgebras of~$\mathcal H$. We have in $\mathcal H$: $k_r f=\mathbb{L}^{(1-g)rr'}f k_r$ if $f\in\mathcal H_{r',d}$. We define the subalgebra $\mathcal H_{fin}:=\mathcal H_{fin}'\otimes\mathbb{Z}[\Gamma']\subset\mathcal H$, the $\overline{\Mot}(\mathbf k)[\sqrt\mathbb{L}]$-module $\widehat\mathcal H:=\widehat\mathcal H'\otimes\mathbb{Z}[\Gamma']\supset\mathcal H$ and the algebra $\widehat\mathcal H_{tor}:=\widehat\mathcal H'_{tor}\otimes\mathbb{Z}[\Gamma']$ acting on $\widehat\mathcal H$ on the right.
\begin{remark}
One may define a larger algebra $\mathcal H'\otimes\mathbb{Z}[\Gamma]$ as in~\cite[Sect.~4.1]{SchiffmannIndecomposable} by making $\gamma\in\Gamma$ act on $f\in\mathcal H_{\gamma'}$ via $\gamma\cdot f=\mathbb{L}^{\frac12(\gamma,\gamma')}f$. However, the symmetrized bilinear form depends on the ranks only, so the element of $\mathbb{Z}[\Gamma]$ corresponding to $(0,1)\in\Gamma$ is central. The quotient by the ideal generated by $(0,1)$ is isomorphic to $\mathcal H$ (we identify $\Gamma'$ with $\Gamma/(0,1)\mathbb{Z}$).
\end{remark}
\subsection{``Comultiplication'' in the Hall algebra} One would like to define a comultiplication $\widehat\mathcal H\to\widehat\mathcal H\hat\otimes\widehat\mathcal H$, where $\hat\otimes$ is the product completed with respect to the $\Gamma_+$-grading. However, in the motivic case this is not possible because $\overline{\Mot}(\mathcal{C}oh_{\gamma_2}\times\mathcal{C}oh_{\gamma_1})\ne\overline{\Mot}(\mathcal{C}oh_{\gamma_2})\hat\otimes\overline{\Mot}(\mathcal{C}oh_{\gamma_1})$. We will circumvent this problem as follows. Set $\mathcal H_{\gamma_2,\gamma_1}:=\overline{\Mot}(\mathcal{C}oh_{\gamma_2}\times\mathcal{C}oh_{\gamma_1})[\sqrt\mathbb{L}]$ and $\widehat\mathcal H_{(2)}:=\prod_{\gamma_2,\gamma_1\in\Gamma_+}\mathcal H_{\gamma_2,\gamma_1}\otimes\mathbb{Z}[(\Gamma')^2]$. Later, we will also need the space $\mathcal H_{(2),fin}:=\oplus_{\gamma_2,\gamma_1\in\Gamma_+}\mathcal H^{fin}_{\gamma_2,\gamma_1}\otimes\mathbb{Z}[(\Gamma')^2]$, where $\mathcal H^{fin}_{\gamma_2,\gamma_1}:=\overline{\Mot}^{fin}(\mathcal{C}oh_{\gamma_2}\times\mathcal{C}oh_{\gamma_1})[\sqrt\mathbb{L}]$.
We are going to construct a map
\begin{equation*}
\Delta:\widehat\mathcal H\to\widehat\mathcal H_{(2)}.
\end{equation*}
To give such a map, one needs to give for each pair $(\gamma_2,\gamma_1)\in\Gamma_+^2$ a map $\Delta_{\gamma_2,\gamma_1}:\widehat\mathcal H\to\mathcal H_{\gamma_2,\gamma_1}\otimes\mathbb{Z}[(\Gamma')^2]$. This map is given by
\[
\Delta_{\gamma_2,\gamma_1}(f\otimes k_r):=\mathbb{L}^{\frac12\langle\gamma_2,\gamma_1\rangle} p_!s^*f_{\gamma_1+\gamma_2}
\otimes k_{r_1+r}\otimes k_r,
\]
where $f\in\mathcal H'$, $f_{\gamma_1+\gamma_2}$ is the projection of $f$ to $\mathcal H_{\gamma_1+\gamma_2}$, and $\gamma_1=(r_1,d_1)$.
Note that we have a homomorphism of $\overline{\Mot}(\mathbf k)$-modules $\boxtimes:\widehat\mathcal H\hat\otimes\widehat\mathcal H\to\widehat\mathcal H_{(2)}$ given by external product of motivic functions.
\begin{remark}
The coassociativity does not make sense for $\Delta$. However, one has the following replacement. First, one defines the $n$-point completed Hall algebra $\widehat\mathcal H_{(n)}$ and the $n$-th comultiplication $\Delta_{(n)}:\widehat\mathcal H\to\widehat\mathcal H_{(n)}$. Assume that we have $\Delta(f)=\boxtimes(g)$, where $f\in\widehat\mathcal H$, $g\in\widehat\mathcal H\hat\otimes\widehat\mathcal H$. Then for any $n$ and $m$ we have
\[
\Delta_{(m+n)}(f)=(\Delta_{(m)}\boxtimes\Delta_{(n)})(g).
\]
We will not use this coassociativity.
\end{remark}
\begin{proposition}\label{pr:DeltaHom}
Assume that either $f_1,f_2\in\mathcal H_{fin}$, or $f_1\in\widehat\mathcal H$, $f_2\in\widehat\mathcal H_{tor}$. Then
\[
\Delta(f_1f_2)=\Delta(f_1)\Delta(f_2).
\]
In particular, the product on the RHS converges.
\end{proposition}
Note that $\widehat\mathcal H_{(2)}$ is not an algebra because the product involves infinite summation. The convergence part of the proposition means that, under the assumptions of the proposition, for any degree $\delta\in(\Gamma_+)^2$ all but finitely many terms in the corresponding sum are zero. This is easy to check if $f_2\in\widehat\mathcal H_{tor}$; in the case $f_1,f_2\in\mathcal H_{fin}$ this follows from the fact that for any finite type substack $\mathcal X\subset\mathcal{C}oh$ there is $d\in\mathbb{Z}$ such that for any $F\in\mathcal X(K)$ and any quotient $F'$ of $F$ we have $\deg F'\ge d$.
We leave the lengthy proof of the equation to the reader, observing that the argument of~\cite[Sect.~1.5]{SchiffmannLectures} is actually motivic.
\subsection{The bilinear form}
According to Section~\ref{sect:BilinearForm}, we have a bilinear form $\mathcal H_\gamma^{fin}\otimes\mathcal H_\gamma\to\overline{\Mot}(\mathbf k)[\sqrt\mathbb{L}]$; we extend it to $\mathcal H'_{fin}\otimes\widehat\mathcal H'$ by declaring the subspaces $\mathcal H_\gamma$ pairwise orthogonal. We extend it to $\mathcal H_{fin}\otimes\widehat\mathcal H$ by setting $(f\otimes k_r|g\otimes k_{r'})=\mathbb{L}^{(1-g)rr'}(f|g)$. Similarly, we define a bilinear form $\mathcal H_{(2),fin}\otimes\widehat\mathcal H_{(2)}\to\overline{\Mot}(\mathbf k)[\sqrt\mathbb{L}]$.
\begin{lemma}\label{lm:CoProdPairing}
Let $f\in\widehat\mathcal H$, $g_1,g_2\in\mathcal H_{fin}$. Then
\[
(g_1g_2|f)=(g_1\boxtimes g_2|\Delta(f)).
\]
\end{lemma}
\begin{proof}
A simple calculation using Lemma~\ref{lm:MotBilProd}.
\end{proof}
\subsection{``Standard'' objects}
For $\gamma\in\Gamma_+$ set $\mathbf1_\gamma:=\mathbf1_{\mathcal{C}oh_\gamma}\in\mathcal H_\gamma$, $\mathbf1_\gamma^{vec}:=\mathbf1_{\mathcal{B}un_\gamma}\in\mathcal H_\gamma$.
Define the generating series
\[
E_r(z):=\sum_{d\in\mathbb{Z}}\mathbf1_{r,d}z^d\in\prod_{d\in\mathbb{Z}}\mathcal H_{(r,d)}z^d\subset\mathcal H[[z^{-1},z]].
\]
Define also $E_r^{vec}(z):=\sum_{d\in\mathbb{Z}}\mathbf1^{vec}_{r,d}z^d$. Note that $E_0(z)\in\mathcal H_{tor}[[z]]$. Note also that $E_0^{vec}(z)=1$.
\begin{remark}\label{rm:Convergence}
The series $E_r(z)$ is homogeneous in the sense that the coefficient at $z^d$ belongs to $\mathcal H_{r,d}$. Thus, for any $x\in\overline{\Mot}(\mathbf k)[\sqrt\mathbb{L}]$ we can calculate $E_r(x)$ as an element of the completion $\widehat\mathcal H$. Moreover, we can recover $E_r(z)$ from $E_r(1)$ as $E_r(z)=\sum_{d\in\mathbb{Z}}(E_r(1))_{(r,d)}z^d$, where the subscript $(r,d)$ stands for the $(r,d)$-component.
We can use this correspondence between homogeneous series and elements of $\widehat\mathcal H$ to multiply any homogeneous series by a homogeneous series of rank 0 on the right because $\widehat\mathcal H_{tor}$ acts on $\widehat\mathcal H$. Proposition~\ref{pr:HallFormulas}(\ref{Hallv},\ref{Halliii2}) below should be understood in this sense.
\end{remark}
For $r>0$ set
\[
vol_r:=\frac{\mathbb{L}^{(g-1)(r^2-1)}}{\mathbb{L}-1}[\Jac]
\zeta_X(\mathbb{L}^{-2})\ldots\zeta_X(\mathbb{L}^{-r}) \in\overline{\Mot}(\mathbf k).
\]
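For example,
\[
vol_1=\frac{[\Jac]}{\mathbb{L}-1},
\]
since for $r=1$ the exponent $(g-1)(r^2-1)$ vanishes and the product of zeta values is empty; this matches $[\mathcal{P}ic_d]=[\Jac]/(\mathbb{L}-1)$, every line bundle having automorphism group $\mathbb{G}_m$.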
\begin{remark}
The stack $\mathcal{B}un_{r,d}$ is of infinite type for $r>1$. However, one can define its motivic class as
\[
[\mathcal{B}un_{r,d}]:=\lim_{\tau\to-\infty}[\mathcal{B}un_{r,d}^{\ge\tau}]\in\overline{\Mot}(\mathbf k).
\]
(See~\cite[Lemma~3.1]{BehrendDhillon}.) It is an easy consequence of~\cite[Sect.~6]{BehrendDhillon} that $[\mathcal{B}un_{r,d}]=vol_r$. We will never use this in the current paper but the notation $vol_r$ will be convenient when comparing our paper with~\cite{SchiffmannIndecomposable}.
\end{remark}
\begin{proposition}\label{pr:HallFormulas}
We have the following identities.\\ \setcounter{noindnum}{0}
\refstepcounter{noindnum}{\rm(}\thenoindnum\/{\rm)} \label{Hallv}
\[
E_r(z)=E_r^{vec}(z)E_0(\mathbb{L}^{-\frac12r}z).
\]
\refstepcounter{noindnum}{\rm(}\thenoindnum\/{\rm)} \label{Halli}
\[
E_0(z)E_0(w)=E_0(w)E_0(z).
\]
\refstepcounter{noindnum}{\rm(}\thenoindnum\/{\rm)} \label{Hallii}
\[
E_0(z)E_r^{vec}(w)=\left(\prod_{i=0}^{r-1}\zeta_X\left(\mathbb{L}^{-\frac r2+i}\frac zw\right)\right)
E_r^{vec}(w)E_0(z).
\]
\refstepcounter{noindnum}{\rm(}\thenoindnum\/{\rm)} \label{Halliii}
\[
\Delta(E_r(z))=
\sum_{s+t=r}\mathbb{L}^{\frac12st(g-1)}E_s(\mathbb{L}^{\frac t2}z)k_t\boxtimes E_t(\mathbb{L}^{-\frac s2}z).
\]
\refstepcounter{noindnum}{\rm(}\thenoindnum\/{\rm)} \label{Halliii2}
\[
\Delta(E_r^{vec}(z))=
\sum_{s+t=r}\mathbb{L}^{\frac12st(g-1)}
E_s^{vec}(\mathbb{L}^{\frac t2}z)E_0(\mathbb{L}^{\frac{t-s}2}z)E_0^{-1}(\mathbb{L}^{-\frac{t+s}2}z)k_t
\boxtimes E_t^{vec}(\mathbb{L}^{-\frac s2}z).
\]
\refstepcounter{noindnum}{\rm(}\thenoindnum\/{\rm)} \label{Halliv}
\[
E_r^{vec}(\mathbb{L}^{\frac12(1-r)}z_1)=C\cdot\res_{\frac{z_2}{z_1}=\ldots=\frac{z_r}{z_{r-1}}=\mathbb{L}^{-1}}
(E_1^{vec}(z_r)\ldots E_1^{vec}(z_1))\prod_{i=2}^r\frac{dz_i}{z_i},
\]
where
\[
C=\mathbb{L}^{\frac14(1-g)r(r-1)}vol_1^{-r}vol_r.
\]
\end{proposition}
\begin{proof}
We start the proof with~\eqref{Hallv}. In view of Remark~\ref{rm:Convergence} and the definition of the action of $\mathcal H_{tor}$ on~$\mathcal H$, we only need to show that we have in $\overline{\Mot}(\mathcal{C}oh)$:
\[
\mathbf1_{\mathcal{C}oh}=s_!\mathbf1_\mathcal X,
\]
where $\mathcal X$ is the constructible subset of $\mathcal{C}oh_{\bullet,tor}$ corresponding to the pairs $(F_1\subset F)$ such that $F/F_1$ is a vector bundle. This follows from the uniqueness of the torsion subsheaf and Corollary~\ref{cor:pointwEqual}.
\eqref{Halli} is equivalent to the equation
\[
[\mathcal{C}oh_{(0,l_1),(0,l_2)}\to\mathcal{C}oh_{(0,l_1+l_2)}]=
[\mathcal{C}oh_{(0,l_2),(0,l_1)}\to\mathcal{C}oh_{(0,l_1+l_2)}].
\]
We will show a stronger equation in $\mathrm{Mot}(\mathcal{C}oh_{(0,l_1+l_2)}\times\mathcal{C}oh_{(0,l_1)})$:
\[
[\mathcal{C}oh_{(0,l_1),(0,l_2)}\xrightarrow{\phi}\mathcal{C}oh_{(0,l_1+l_2)}\times\mathcal{C}oh_{(0,l_1)}]=
[\mathcal{C}oh_{(0,l_2),(0,l_1)}\xrightarrow{\psi}\mathcal{C}oh_{(0,l_1+l_2)}\times\mathcal{C}oh_{(0,l_1)}].
\]
Here $\phi$ and $\psi$ are defined by $\phi(F'\subset F)=(F,F/F')$, $\psi(F'\subset F)=(F,F')$. Let $F$ and $F'$ be torsion sheaves on $X_K$ representing a point $\xi:\Spec K\to\mathcal{C}oh_{(0,l_1+l_2)}\times\mathcal{C}oh_{(0,l_1)}$. According to Proposition~\ref{pr:pointwise zero} we just need to check that the motivic classes of the $\xi$-fibers of $\phi$ and $\psi$ are equal. These fibers are equal to the space of surjective (resp.~injective) morphisms $\Hom_{sur}(F,F')$ (resp.~$\Hom_{inj}(F',F)$).
Let $Z\subset X$ be the union of scheme-theoretic supports of $F$ and $F'$. We may assume that $Z_{red}=z$ is a single point of $X_K$ because the space of injective (or surjective) morphisms decomposes into the product over the points of $Z_{red}$. Note that the restriction of $F$ to $z$ corresponds to a vector space over $\mathbf k(z)$; the same is true for $F'$. Upon choosing bases in these vector spaces, we identify $\Hom_{sur}(F|_z,F'|_z)$ and~$\Hom_{inj}(F'|_z,F|_z)$ with spaces of matrices of maximum rank (of sizes $\dim F'|_z\times\dim F|_z$ and $\dim F|_z\times\dim F'|_z$ resp.); we see that the motivic classes of these spaces coincide.
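Explicitly, if $m\le n$, then full rank matrices of size $m\times n$ (surjections of an $n$-dimensional space onto an $m$-dimensional one) are parametrized by $m$-tuples of linearly independent rows, while full rank matrices of size $n\times m$ (injections of an $m$-dimensional space into an $n$-dimensional one) are parametrized by $m$-tuples of linearly independent columns; in both cases the motivic class is
\[
(\mathbb{L}^n-1)(\mathbb{L}^n-\mathbb{L})\cdots(\mathbb{L}^n-\mathbb{L}^{m-1}).
\]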
Next, a morphism $F\to F'$ is surjective if and only if its restriction to $z$ is surjective. Now it is easy to see that the fibers of the restriction morphism $\Hom_{sur}(F,F')\to\Hom_{sur}(F|_z,F'|_z)$ are vector spaces. Similarly, the fibers of the morphism
$\Hom_{inj}(F',F)\to\Hom_{inj}(F'|_z,F|_z)$ are vector spaces, easily seen to be of the same dimension. One more application of Proposition~\ref{pr:pointwise zero} completes the proof of \eqref{Halli}.
To prove~\eqref{Hallii}, note first that by Lemma~\ref{lm:zeta} we have
\[
\prod_{i=0}^{r-1}\zeta_X\left(\mathbb{L}^{-\frac r2+i}\frac zw\right)=
\zeta_{X\times\P^{r-1}}\left(\mathbb{L}^{-\frac r2}\frac zw\right).
\]
Thus \eqref{Hallii} is equivalent to the equation for all $d\ge0$ and $e\in\mathbb{Z}$:
\[
\mathbf1_{(0,d)}\mathbf1_{(r,e)}^{vec}=
\sum_{i=0}^d\mathbb{L}^{-\frac{ir}2}[(X\times\P^{r-1})^{(i)}]\mathbf1^{vec}_{(r,e+i)}\mathbf1_{(0,d-i)}.
\]
Unwinding the definition of multiplication in the Hall algebra, we see that this is equivalent to the following equation. Let $\widehat{\mathcal{C}oh}_{(0,d),(r,e)}$ be the open substack of $\mathcal{C}oh_{(0,d),(r,e)}$ classifying pairs of sheaves $(F_1\subset F)$ such that $F_1$ is torsion free. Similarly, let $\widehat{\mathcal{C}oh}_{(r,e),(0,d)}$ be the open substack of $\mathcal{C}oh_{(r,e),(0,d)}$ classifying pairs of sheaves $(F_1\subset F)$ such that $F/F_1$ is torsion free. It is enough to show that in $\mathrm{Mot}(\mathcal{C}oh_{r,d+e})$ we have
\begin{equation}\label{eq:ii}
[\widehat{\mathcal{C}oh}_{(0,d),(r,e)}\to\mathcal{C}oh_{r,d+e}]=
\sum_{i=0}^d\mathbb{L}^{r(d-i)}[(X\times\P^{r-1})^{(i)}][\widehat{\mathcal{C}oh}_{(r,e+i),(0,d-i)}\to\mathcal{C}oh_{r,d+e}].
\end{equation}
To this end, let $F$ be a coherent sheaf on $X_K$ of class $(r,d+e)$. Write $F=T\oplus E$, where $T$ is torsion and $E$ is torsion free. Set $i=\deg E-e$. The fiber $\mathcal X_F$ of $\widehat{\mathcal{C}oh}_{(0,d),(r,e)}\to\mathcal{C}oh_{r,d+e}$ over $F$ is the scheme of subsheaves $F'\subset T\oplus E$ such that $F'$ is locally free of class $(r,e)$ (in particular, it is empty if $i<0$). Let $\pi:F\to E$ be the projection; the assignment $F'\mapsto\pi(F')$ defines a morphism $\mathcal X_F\to\mathcal{M}od_i(E)$, where $\mathcal{M}od_i(E)$ classifies degree $i$ modifications of $E$, that is, subsheaves $E'\subset E$ such that $E/E'$ is torsion of degree $i$. The fibers of this 1-morphism are isomorphic to vector spaces of dimension $\dim\Hom(F',T)=r\deg T=r(d-i)$. Thus
\[
[\mathcal X_F]=\mathbb{L}^{r(d-i)}[\mathcal{M}od_i(E)]=\mathbb{L}^{r(d-i)}[(X\times\P^{r-1})^{(i)}],
\]
where the second equation follows from the proof of~\cite[Prop.~3.6]{GarciaPradaHeinlothSchmitt}.
Now we calculate the fiber of $\widehat{\mathcal{C}oh}_{(r,e+i),(0,d-i)}\to\mathcal{C}oh_{r,d+e}$ over $F$. This is the scheme of subsheaves $T'\subset F=T\oplus E$ such that $T'$ is torsion of degree $d-i$ and such that $F/T'$ is torsion free. But then we necessarily have $T=T'$. Thus, the fiber consists of a unique point if $d-i=d+e-\deg E$ and is empty otherwise. Now we easily derive~\eqref{eq:ii} from Proposition~\ref{pr:pointwise zero}.
Next, \eqref{Halliii} is equivalent to the following statement: for any $\gamma_1,\gamma_2\in\Gamma_+$ we have $\Delta_{\gamma_2,\gamma_1}(\mathbf1_{\gamma_1+\gamma_2})=
\mathbb{L}^{-\frac12\langle\gamma_2,\gamma_1\rangle}\mathbf1_{\gamma_2}\boxtimes\mathbf1_{\gamma_1}$. Unwinding the definition of $\Delta_{\gamma_2,\gamma_1}$, we see that this is equivalent to
\[
[\mathcal{C}oh_{\gamma_2,\gamma_1}\to\mathcal{C}oh_{\gamma_2}\times\mathcal{C}oh_{\gamma_1}]=
\mathbb{L}^{-\langle\gamma_2,\gamma_1\rangle}\mathbf1_{\mathcal{C}oh_{\gamma_2}\times\mathcal{C}oh_{\gamma_1}}.
\]
Let $F_i$ be coherent sheaves on $X_K$ of class $\gamma_i$ ($i=1,2$). According to Proposition~\ref{pr:pointwise zero}, we just need to show that the motivic class of the moduli stack $\mathcal X$ of exact sequences $0\to F_1\to F\to F_2\to0$ is equal to $\mathbb{L}^{-\langle\gamma_2,\gamma_1\rangle}$ in $\mathrm{Mot}(K)$. This follows easily from the fact that we have an affine bundle $\Ext^1(F_2,F_1)\to\mathcal X$ modeled over the additive group $\Hom(F_2,F_1)$. (Recall that $\dim\Ext^1(F_2,F_1)-\dim\Hom(F_2,F_1)=-\langle\gamma_2,\gamma_1\rangle$.)
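Explicitly, since the fibers of this affine bundle are torsors over the vector space $\Hom(F_2,F_1)$, we get
\[
[\mathcal X]=\frac{[\Ext^1(F_2,F_1)]}{[\Hom(F_2,F_1)]}=\mathbb{L}^{\dim\Ext^1(F_2,F_1)-\dim\Hom(F_2,F_1)}=\mathbb{L}^{-\langle\gamma_2,\gamma_1\rangle}.
\]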
For~\eqref{Halliii2}, note first that $E_0(z)$ is invertible in $\mathcal H[[z]]$. By part~\eqref{Hallv} we have
\[
E_r^{vec}(z)=E_r(z)E_0^{-1}(\mathbb{L}^{-\frac12r}z),
\]
where this equation should be understood as explained in Remark~\ref{rm:Convergence}. It remains to apply the comultiplication $\Delta$, and use Proposition~\ref{pr:DeltaHom} and part~\eqref{Halliii} of the current proposition.
For part~\eqref{Halliv}, we have
\begin{equation}\label{eq:E1prod}
E_1^{vec}(z_r)\ldots E_1^{vec}(z_1)=\sum_{d_1,\ldots,d_r}
\mathbb{L}^{\frac{r(r-1)}4(1-g)+\frac{(r-1)d_1+(r-3)d_2+\ldots+(1-r)d_r}2}
[\mathcal{B}un_{r,d_1,\ldots,d_r}\to\mathcal{C}oh_r]z_1^{d_1}\ldots z_r^{d_r},
\end{equation}
where $\mathcal{B}un_{r,d_1,\ldots,d_r}$ is defined in Section~\ref{sect:BorelRed}. Since both sides of our equation are supported on $\mathcal{B}un_r$, it is enough to prove the statement upon restricting to $\mathcal{B}un_r$. Note that convergence on $\overline{\Mot}(\mathcal{B}un_r)$ is convergence on open substacks of finite type (see Section~\ref{sect:MotFun1}), so it is enough to show \eqref{Halliv} upon restricting to $\mathcal{B}un_{r,d}^{\ge\tau}$ (see Lemma~\ref{lm:Bun+}(iv) and the definition of the residue in Section~\ref{sect:res}). We get
\[
E_1^{vec}(z_r)\ldots E_1^{vec}(z_1)|_{\mathcal{B}un_{r,d}^{\ge\tau}}=\mathbb{L}^{\frac{r(r-1)}4(1-g)+\frac{(1-r)d}2}E_{r,d}^{\ge\tau}(z_1,\ldots,z_r),
\]
where $E_{r,d}^{\ge\tau}(z_1,\ldots,z_r)$ is defined in Section~\ref{sect:Eisenstein}. It remains to use Theorem~\ref{th:ResHarder2}.
\end{proof}
\subsection{Truncated generating series}
Note that the slope of a non-zero torsion sheaf is equal to $+\infty$. Thus, if $0=E_0\subset E_1\subset\ldots\subset E_t=E$ is the HN-filtration on a vector bundle $E$ and $T$ is a torsion sheaf, then the HN-filtration on $T\oplus E$ is given by
\[
0\subset T\subset T\oplus E_1\subset\ldots\subset T\oplus E_t=T\oplus E.
\]
We define $\mathcal{C}oh_{r,d}^{\ge0}$ as the constructible (in fact, open) subset of $\mathcal{C}oh_{r,d}$ classifying HN-nonnegative sheaves, that is, sheaves $T\oplus E$ as above such that $E$ is HN-nonnegative. We define $\mathcal{B}un_{r,d}^{<0}$ to be the constructible subset of $\mathcal{C}oh_{r,d}$ classifying sheaves with strictly negative HN-type. The reason for the notation is that every such sheaf is a vector bundle. It follows easily from Lemma~\ref{lm:Bun+}(iii) that these subsets are of finite type. Set
\[
\mathbf1_{r,d}^{\ge0}=\mathbf1_{\mathcal{C}oh_{r,d}^{\ge0}},\qquad
\mathbf1_{r,d}^{vec,\ge0}=\mathbf1_{\mathcal{B}un_{r,d}^{\ge0}},\qquad
\mathbf1_{r,d}^{<0}=\mathbf1_{\mathcal{B}un_{r,d}^{<0}}.
\]
We also define the generating series
\[
E_r^{\ge0}(z):=\sum_{d\in\mathbb{Z}}\mathbf1_{r,d}^{\ge0}z^d\in\mathcal H_{fin}[[z]].
\]
Define similarly $E_r^{vec,\ge0}(z)\in\mathcal H_{fin}[[z]]$ and $E_r^{<0}(z)\in z^{-1}\mathcal H_{fin}[[z^{-1}]]$.
\begin{lemma}\label{lm:HallProduct}We have
\[
\begin{split}
(i)\qquad & E_r(z)=\sum_{\substack{s+t=r \\ s,t\ge0}}\mathbb{L}^{\frac12(g-1)st}E_s^{<0}(\mathbb{L}^{\frac t2}z)E_t^{\ge0}(\mathbb{L}^{-\frac s2}z),\\
(ii)\qquad & E_r^{vec}(z)=\sum_{\substack{s+t=r \\ s,t\ge0}}\mathbb{L}^{\frac12(g-1)st}E_s^{<0}(\mathbb{L}^{\frac t2}z)E_t^{vec,\ge0}(\mathbb{L}^{-\frac s2}z),\\
(iii)\qquad & E_r^{\ge0}(z)=E_r^{vec,\ge0}(z)E_0(\mathbb{L}^{-\frac12r}z).
\end{split}
\]
\end{lemma}
\begin{remark}\label{rm:HallInfinite}
Note that the right-hand sides of (i) and (ii) involve infinite summation. As we will see from the proof, the restrictions of the series to every finite type substack of each $\mathcal{C}oh_\gamma$ have only finitely many non-zero terms (cf.~the discussion of the topology on $\overline{\Mot}$ in Section~\ref{sect:MotFun1}).
\end{remark}
\begin{proof}
Let $\mathcal{C}oh^{\pm}_{\gamma_2,\gamma_1}$ be the constructible subset of $\mathcal{C}oh_{\gamma_2,\gamma_1}$ classifying pairs $F_1\subset F$ such that $F_1$ is HN-nonnegative, $F/F_1$ has strictly negative HN-type.
Let $s_{\gamma_2,\gamma_1}:\mathcal{C}oh_{\gamma_2,\gamma_1}\to\mathcal{C}oh_{\gamma_1+\gamma_2}$ be the forgetful 1-morphism (denoted simply by $s$ above), let $\mathcal{C}oh^{\pm,\prime}_{\gamma_2,\gamma_1}$ be the constructible image of $\mathcal{C}oh^{\pm}_{\gamma_2,\gamma_1}$ under this 1-morphism. Since for every sheaf $F$ there is a unique exact sequence $0\to F^{\ge0}\to F\to F^{<0}\to0$ with HN-nonnegative $F^{\ge0}$, $F^{<0}$ having strictly negative HN-type, we get
\[
\sum_{\gamma_1+\gamma_2=\gamma}(s_{\gamma_2,\gamma_1})_!\mathbf1_{\mathcal{C}oh^{\pm}_{\gamma_2,\gamma_1}}=
\sum_{\gamma_1+\gamma_2=\gamma}\mathbf1_{\mathcal{C}oh^{\pm,\prime}_{\gamma_2,\gamma_1}}=\mathbf1_{\mathcal{C}oh_{\gamma}}.
\]
We note that the sums are finite on each substack of finite type according to Lemma~\ref{lm:Bun+}(iv). Writing $\gamma=(r,d)$, we get the following Hall algebra identity
\[
\mathbf1_{(r,d)}=\sum_{\substack{s+t=r \\ s,t\ge0}}\sum_{e+f=d}
\mathbb{L}^{\frac12((g-1)st+te-sf)}\mathbf1_{s,e}^{<0}\mathbf1_{t,f}^{\ge0}.
\]
This is equivalent to the first formula of the lemma. The second formula is proved similarly. The proof of the third formula is completely similar to the proof of Proposition~\ref{pr:HallFormulas}\eqref{Hallv}.
\end{proof}
\subsection{Torsion sheaves}\label{sect:torsheaves}
Note that $\mathbf1_{0,l}\in\mathcal H_{(0,l)}^{fin}$.
\begin{proposition}\label{pr:torsionsheaves}
We have
\[
\sum_{l\ge0}(\mathbf1_{0,l}|\mathbf1_{0,l})z^l=\Exp\left(\frac{[X]}{\mathbb{L}-1}z\right)=
\prod_{i\ge1}\zeta_X(\mathbb{L}^{-i}z).
\]
\end{proposition}
\begin{proof}
We need some preliminaries. Let $\mathcal N_d$ be the stack of $d$-dimensional vector spaces equipped with a nilpotent endomorphism (later, we will identify $\mathcal N_d$ with the stack of coherent sheaves set-theoretically supported at a point on a curve).
\begin{lemma}\label{cltorspt}
We have
\[
[\mathcal N_d]=\frac{\mathbb{L}^{d(d-1)}}{(\mathbb{L}^d - 1)\cdots(\mathbb{L}^d - \mathbb{L}^{d-1})}.
\]
\end{lemma}
\begin{proof}
Clearly, $[\mathcal N_d] = [Nil_d]/[\GL_d]$, where $Nil_d$ is the nilpotent cone for $\mathfrak{gl}_d$. Thus we only need to show that $[Nil_d]=\mathbb{L}^{d(d-1)}$.
To compute $[Nil_d]$, note that for every $f\in\mathfrak{gl}_d$ the Fitting Decomposition Theorem lets us write $\mathbf k^d=\Ker(f^d)\oplus\Im(f^d)$. We can write $\mathfrak{gl}_d=\bigsqcup_{m=0}^d E_m$ as a disjoint union of subvarieties, where $E_m$ consists of $f\in\mathfrak{gl}_d$ such that $\Ker(f^d)$ in the Fitting decomposition has dimension equal to $m$.
For each $m$, denote by $V_m$ the scheme parameterizing decompositions $\mathbf k^d=L_1\oplus L_2$, where $L_1$ is of dimension $m$. Let $\tilde V_m\subset E_m\times V_m$ be the incidence variety consisting of triples $(f,L_1,L_2)$ such that $\Ker f^d=L_1$, $\Im f^d=L_2$. For every extension $K\supset\mathbf k$ the $K$-fibers of the projection $\tilde V_m\to E_m$ are points, while the $K$-fibers of the projections $\tilde V_m\to V_m$ are easily seen to be isomorphic to $(Nil_m)_K\times(\GL_{d-m})_K$. Now, using Proposition~\ref{pr:pointwise zero}, we get:
\begin{multline*}
[E_m]=[\tilde V_m]=[Nil_m][\GL_{d-m}][V_m]=\\ [Nil_m][\GL_{d-m}]\left[\GL_d/(\GL_m\times\GL_{d-m})\right]
=\frac{[\GL_d][Nil_m]}{[\GL_m]}.
\end{multline*}
Thus
\[
[\mathfrak{gl}_d]=\mathbb{L}^{d^2}=\sum_{m=0}^d \frac{[\GL_d][Nil_m]}{[\GL_m]}.
\]
Now, it is easy to see by induction that $[Nil_d] = \mathbb{L}^{d(d-1)}$.
\end{proof}
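The closed formula $[Nil_d]=\mathbb{L}^{d(d-1)}$ admits a quick sanity check by point counting: under the specialization $\mathbb{L}\mapsto q$, the number of nilpotent $d\times d$ matrices over $\mathbb F_q$ should equal $q^{d(d-1)}$ (the Fine--Herstein count). A minimal brute-force verification for small cases (the function names below are our own):

```python
from itertools import product

def mat_mul(A, B, q):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % q for j in range(n)]
            for i in range(n)]

def is_nilpotent(A, q):
    # Over a field, an n x n matrix A is nilpotent iff A^n = 0.
    n = len(A)
    P = A
    for _ in range(n - 1):
        P = mat_mul(P, A, q)
    return all(x == 0 for row in P for x in row)

def count_nilpotent(n, q):
    # Brute force over all q^(n^2) matrices; only feasible for tiny n, q.
    count = 0
    for entries in product(range(q), repeat=n * n):
        A = [list(entries[i * n:(i + 1) * n]) for i in range(n)]
        if is_nilpotent(A, q):
            count += 1
    return count

for n, q in [(2, 2), (2, 3), (3, 2)]:
    assert count_nilpotent(n, q) == q ** (n * (n - 1))
print("nilpotent count q^{d(d-1)} confirmed for small cases")
```

For instance, over $\mathbb F_2$ the four nilpotent $2\times2$ matrices are the zero matrix, the two strictly triangular ones, and the all-ones matrix.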
\begin{lemma}\label{lm:cNl}
\[
\sum_{l\ge0}[\mathcal N_l]z^l=\Exp\left(\frac z{\mathbb{L}-1}\right).
\]
\end{lemma}
\begin{proof}
\begin{multline*}
\sum_{l\ge0}[\mathcal N_l]z^l=1+\sum_{i\ge1}\frac{\mathbb{L}^{i(i-1)}}{(\mathbb{L}^i-1)\cdots(\mathbb{L}^i-\mathbb{L}^{i-1})}z^i =
1+\sum_{i\ge1}\frac{(\mathbb{L}^{-1}z)^i}{(1 - \mathbb{L}^{-i})\cdots(1-\mathbb{L}^{-1})}\\
=\prod_{k>0}\frac1{1-\mathbb{L}^{-k}z} =\Exp\left(\frac{\mathbb{L}^{-1}z}{1-\mathbb{L}^{-1}}\right)=\Exp\left(\frac z{\mathbb{L}-1}\right).
\end{multline*}
\end{proof}
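Since the computation above only manipulates rational functions in $\mathbb{L}$, it can be checked by exact arithmetic after specializing $\mathbb{L}$ to a rational number $q$. The sketch below (our own naming and ad-hoc truncation depth) compares the closed form for $[\mathcal N_l]$ with the $z^l$-coefficient of Euler's product $\prod_{k>0}(1-q^{-k}z)^{-1}$:

```python
from fractions import Fraction

def nilpotent_stack_class(l, q):
    # [N_l] = q^{l(l-1)} / ((q^l - 1)(q^l - q)...(q^l - q^{l-1})), with L -> q
    num = Fraction(q) ** (l * (l - 1))
    den = Fraction(1)
    for i in range(l):
        den *= Fraction(q) ** l - Fraction(q) ** i
    return num / den

def euler_coefficient(l, q):
    # z^l-coefficient of Exp(z/(q-1)): q^{-l} / ((1-q^{-1})...(1-q^{-l}))
    coeff = Fraction(q) ** (-l)
    for i in range(1, l + 1):
        coeff /= 1 - Fraction(q) ** (-i)
    return coeff

def truncated_product_coeffs(K, N, q):
    # Coefficients up to z^N of prod_{k=1}^{K} 1/(1 - q^{-k} z).
    series = [Fraction(1)] + [Fraction(0)] * N
    for k in range(1, K + 1):
        t = Fraction(q) ** (-k)       # 1/(1 - t z) = sum_m t^m z^m
        new = [Fraction(0)] * (N + 1)
        for n in range(N + 1):
            acc, power = Fraction(0), Fraction(1)
            for m in range(n, -1, -1):
                acc += series[m] * power
                power *= t
            new[n] = acc
        series = new
    return series

# The two closed forms agree exactly ...
for q in (2, 3, 5):
    for l in range(6):
        assert nilpotent_stack_class(l, q) == euler_coefficient(l, q)

# ... and the truncated product converges to them (error ~ q^{-K}).
coeffs = truncated_product_coeffs(60, 4, 3)
for l in range(5):
    assert abs(coeffs[l] - euler_coefficient(l, 3)) < Fraction(1, 3) ** 50
print("[N_l] generating series matches Euler's product")
```

The exact agreement of the two closed forms is precisely the middle equality in the displayed computation; the truncated-product check illustrates the convergence of the infinite product.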
Let us view $\sqcup_{l\ge0}\mathcal N_l$ as a $\mathbb{Z}$-graded stack. Similarly to Lemma~\ref{lm:Pow} consider pairs $(T,\phi)$, where $T\subset X$ is a finite subset of closed points, $\phi:T\to\sqcup_{l\ge0}\mathcal N_l$ is a 1-morphism of degree $d$. We define $\deg(\phi):=\sum_{x\in T}[\mathbf k(x):\mathbf k]\deg\phi(x)$, where $\deg\phi(x)=l$ if $\phi(x)\in\mathcal N_l(\mathbf k(x))$. We let $\mathcal Y_l$ be the stack classifying such pairs $(T,\phi)$ with $\deg\phi=l$.
\begin{lemma}
\[
[\mathcal Y_d]=[\mathcal{C}oh_{0,d}].
\]
\end{lemma}
\begin{proof}
Let $\mathcal Z_d$ be the stack classifying pairs $(T,\mathcal E)$, where $T\subset X$ is as above, $\mathcal E$ is a torsion sheaf on $X$ of degree $d$ set theoretically supported on $T$. We have a forgetful map $\mathcal Z_d\to\mathcal{C}oh_{0,d}$ and an application of Corollary~\ref{cor:pointwEqual} gives $[\mathcal Z_d\to\mathcal{C}oh_{0,d}]=\mathbf1_{\mathcal{C}oh_{0,d}}$ (indeed, the set-theoretical reduced support is uniquely defined) so that $[\mathcal Z_d]=[\mathcal{C}oh_{0,d}]$.
On the other hand, denote by $\mathcal Z_d^{(i)}$ the locally closed substack of $\mathcal Z_d$ corresponding to $T$ such that $\deg T=\sum_{x\in T}[\mathbf k(x):\mathbf k]=i$. Define $\mathcal Y_d^{(i)}$ similarly. We claim that
\begin{equation}\label{eq:Ydi}
[\mathcal Y_d^{(i)}\to X^{(i)}]=[\mathcal Z_d^{(i)}\to X^{(i)}].
\end{equation}
Indeed, if $K\supset\mathbf k$ is a field extension, then a $K$-point of $X^{(i)}$ is given by a finite subset $T\subset X_K$. Choose local coordinates at the points of $T$. The fiber of $\mathcal Z_d^{(i)}\to X^{(i)}$ over $T\subset X_K$, parameterizes all degree $d$ torsion sheaves supported on $T$. Every such sheaf $E$ can be written uniquely as $\bigoplus_{x\in T}E_x$, and each $E_x$ can be identified with a pair consisting of a vector space over $\mathbf k(x)$ and a nilpotent endomorphism. This gives an isomorphism between this fiber and the corresponding fiber of $\mathcal Y_d^{(i)}\to X^{(i)}$. It remains to use Proposition~\ref{pr:pointwise zero}.
Now we derive from~\eqref{eq:Ydi} that
\[
[\mathcal Y_d]=\sum_i[\mathcal Y_d^{(i)}]=\sum_i[\mathcal Z_d^{(i)}]=[\mathcal Z_d].
\]
\end{proof}
Now we prove the proposition using Lemma~\ref{lm:Pow} and Lemma~\ref{lm:cNl}:
\[
\sum_{d\ge0}[\mathcal{C}oh_{0,d}]z^d=\sum_{d\ge0}[\mathcal Y_d]z^d=\Pow\left(\sum_{d\ge0}[\mathcal N_d]z^d,[X]\right)=
\Exp\left(\frac{[X]}{\mathbb{L}-1}z\right).
\]
\end{proof}
\section{Motivic classes of the stacks of vector bundles with filtrations and proofs of Theorems~\ref{th:NilpEnd} and~\ref{th:ExplAnsw}}\label{sect:Proofs}
In this section $\mathbf k$ is a field of characteristic zero. We keep the assumptions from the previous sections: $X$ is a smooth geometrically connected projective curve over the field $\mathbf k$ and there is a divisor $D$ on $X$ defined over $\mathbf k$ such that $\deg D=1$.
Fix $s\in\mathbb{Z}_{>0}$ and put $\underline z=(z_s,\ldots,z_1)$. Let $\underline r=(r_s,\ldots,r_1)$ be an $s$-tuple of positive integers; set $n=\sum_ir_i$. Set
\[
G^{\ge0}_{\underline r}(\underline z,w):=
\left(E_{r_s}(z_s)\ldots E_{r_1}(z_1)\left|E_n^{\ge0}(w)\right.\right)
\]
and
\[
Y^{\ge0}_{\underline r}(\underline z,w):=
\left(E_{r_s}^{vec}(z_s)\ldots E_{r_1}^{vec}(z_1)\left|E_n^{\ge0}(w)\right.\right).
\]
The product is taken in $\mathcal H$. Note that, up to some powers of $\mathbb{L}$, $G^{\ge0}_{\underline r}(\underline z,w)$ (resp.~$Y^{\ge0}_{\underline r}(\underline z,w)$) is the generating series for the motivic classes of the moduli stacks of rank $n$ HN-nonnegative coherent sheaves (resp.~vector bundles) with partial flags of type $(r_1,\ldots,r_s)$.
\subsection{Relating $G^{\ge0}$ with $Y^{\ge0}$}
\begin{proposition}\label{pr:GtoY}
We have an equation for series with coefficients in $\overline{\Mot}(\mathbf k)[\sqrt\mathbb{L}]$:
\[
G^{\ge0}_{\underline r}(\underline z,w)=X^{\ge0}_{\underline r}(\underline z,w)Y^{\ge0}_{\underline r}(\underline z,w),
\]
where
\[
X^{\ge0}_{\underline r}(\underline z,w)=
\Exp\left(\frac{[X]}{\mathbb{L}-1}
\left[
\sum_i\mathbb{L}^{-\frac12(n+r_i)}z_iw+\sum_{i>j}\frac{z_i}{z_j}
\left(\mathbb{L}^{\frac{r_j}2}-\mathbb{L}^{-\frac{r_j}2}\right)\mathbb{L}^{-\frac{r_i}2}
\right]\right).
\]
\end{proposition}
Note that, for this to make sense, we need to extend $\Exp$ to the ideal of the ring
\[
\overline{\Mot}(\mathbf k)[\sqrt\mathbb{L}]\left[\!\left[z_1,\frac{z_2}{z_1},\ldots,\frac{z_n}{z_{n-1}},w\right]\!\right]
\]
consisting of series without constant term; but this is straightforward.
\begin{proof}
The proof repeats that of~\cite[Prop.~5.1]{SchiffmannIndecomposable}. It uses Proposition~\ref{pr:DeltaHom}, Lemma~\ref{lm:CoProdPairing}, Proposition~\ref{pr:HallFormulas}, Lemma~\ref{lm:HallProduct}(iii), and Proposition~\ref{pr:torsionsheaves}.
The only slight difference is that we do not use coassociativity to prove the equation
\[
\left(
\prod_{i=1}^s E_0(\mathbb{L}^{-\frac{r_i}2}z_i)\left|E_0(\mathbb{L}^{-\frac n2}w)
\right.\right)=
\prod_{i=1}^s \left(E_0(\mathbb{L}^{-\frac{r_i}2}z_i)\left|E_0(\mathbb{L}^{-\frac n2}w)\right.\right)
\]
but we apply Proposition~\ref{pr:HallFormulas}\eqref{Halli} and Lemma~\ref{lm:CoProdPairing} $s-1$ times instead.
\end{proof}
\subsection{Calculation of $Y^{\ge0}$}
Our first goal is to calculate $Y^{\ge0}_{\underline1}(\underline z,w)$, where $\underline1=(1,\ldots,1)=1^s$. Set also
\[
Y^{<0}_{\underline1}(\underline z,w)=\left(
E_1^{vec}(z_s)\ldots E_1^{vec}(z_1)\left|E_s^{<0}(w)
\right.\right),\qquad
Y_{\underline1}(\underline z,w)=\left(
E_1^{vec}(z_s)\ldots E_1^{vec}(z_1)\left|E_s(w)
\right.\right).
\]
We note that $E_1^{vec}(z_i)\in\mathcal H_{fin}[[z_i^{-1},z_i]]$, so $Y_{\underline1}(\underline z,w)$ makes sense. We need a simple lemma.
\begin{lemma}\label{lm:Y1}
\[
Y_{\underline1}(\underline z,w)=
\mathbb{L}^{(g-1)\frac{s(s-1)}4}[\Jac]^s
\sum_{d_1,\ldots,d_s\in\mathbb{Z}}z_1^{d_1}\ldots z_s^{d_s}
\mathbb{L}^{\frac12\sum_i d_i(2i-s-1)}w^{\sum_i d_i}.
\]
\end{lemma}
\begin{proof}
According to~\eqref{eq:E1prod}, we have
\[
Y_{\underline1}(\underline z,w)=\mathbb{L}^{(1-g)\frac{s(s-1)}4}
\sum_{d_1,\ldots,d_s\in\mathbb{Z}}
\mathbb{L}^{\frac{(s-1)d_1+(s-3)d_2+\ldots+(1-s)d_s}2}
[\mathcal{B}un_{s,d_1,\ldots,d_s}]z_1^{d_1}\ldots z_s^{d_s}w^{\sum_i d_i}.
\]
Thus it is enough to show that
\[
[\mathcal{B}un_{s,d_1,\ldots,d_s}]=\mathbb{L}^{(g-1)\frac{s(s-1)}2+(s-1)d_s+\ldots+(1-s)d_1}[\Jac]^s.
\]
This is proved by induction on $s$. Consider the morphism
\[
\mathcal{B}un_{s,d_1,\ldots,d_s}\to
\mathcal{B}un_{s-1,d_1,\ldots,d_{s-1}}\times\Pic^{d_s}
\]
sending $(E_1\subset\ldots\subset E_{s-1}\subset E_s)$ to $((E_1\subset\ldots\subset E_{s-1}),E_s/E_{s-1})$. It is enough to show that the motivic classes of its fibers are equal to
\[
\mathbb{L}^{(g-1)(s-1)+(s-1)d_s-d_1-\ldots-d_{s-1}}.
\]
The fibers are the stacks $\Ext^1(E_s/E_{s-1},E_{s-1})/\Hom(E_s/E_{s-1},E_{s-1})$ of dimension
\begin{multline*}
\dim\Ext^1(E_s/E_{s-1},E_{s-1})-\dim\Hom(E_s/E_{s-1},E_{s-1})=
-\langle(1,d_s),(s-1,d_1+\ldots+d_{s-1})\rangle=\\
(g-1)(s-1)+(s-1)d_s-d_1-\ldots-d_{s-1}.
\end{multline*}
This completes the proof. (Cf.~the proof of Lemma~\ref{lm:VectSpStack}.)
\end{proof}
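The exponent bookkeeping in this induction can be machine-checked. Assuming the Euler-form convention on a genus-$g$ curve $\langle(r,d),(r',d')\rangle=rd'-r'd+rr'(1-g)$ (our assumption, chosen to match the dimension count above), peeling off the top quotient step by step must reproduce the closed-form exponent $(g-1)\frac{s(s-1)}2+\sum_i(2i-s-1)d_i$:

```python
import random

def euler_form(a, b, g):
    # Assumed convention: <(r,d),(r',d')> = r d' - r' d + r r' (1 - g)
    (r, d), (r2, d2) = a, b
    return r * d2 - r2 * d + r * r2 * (1 - g)

def exponent_closed_form(ds, g):
    # (g-1) s(s-1)/2 + sum_i (2i - s - 1) d_i, as in the lemma
    s = len(ds)
    return (g - 1) * s * (s - 1) // 2 + sum((2 * i - s - 1) * d
                                            for i, d in enumerate(ds, start=1))

def exponent_by_induction(ds, g):
    # Each step contributes -<(1, d_s), (s-1, d_1 + ... + d_{s-1})>.
    total = 0
    ds = list(ds)
    while len(ds) > 1:
        d_top = ds.pop()
        total += -euler_form((1, d_top), (len(ds), sum(ds)), g)
    return total

random.seed(0)
for _ in range(200):
    s = random.randint(1, 6)
    g = random.randint(0, 5)
    ds = [random.randint(-4, 4) for _ in range(s)]
    assert exponent_by_induction(ds, g) == exponent_closed_form(ds, g)
print("exponent bookkeeping in the induction is consistent")
```

This only checks the combinatorial identity, not the geometric input (the identification of the fibers with the quotient stacks).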
Our next goal is to prove the motivic analogue of~\cite[Prop.~5.3]{SchiffmannIndecomposable}. The proof is very similar to the one given in~\cite{SchiffmannIndecomposable} except for two points. The first is that we do not have an honest comultiplication on $\mathcal H$, but this problem is minor. The more important point is that we do not know a priori that our series are expansions of rational functions. We will see, however, that this follows from the proof.
Recall that we defined the normalized motivic zeta-function $\tilde\zeta_X$ and the regularized motivic zeta-function $\zeta_X^*$ in Section~\ref{sect:zeta}. We will drop the index $X$ from now on.
\begin{proposition}\label{pr:MainCalc}
For any $s\ge1$ we have
\begin{equation}\label{eq:Y1>Formula}
Y^{\ge0}_{\underline1}(\underline z,w)=
\frac{\mathbb{L}^{\frac14(g-1)s(s-1)}[\Jac]^s}{\prod_{i<j}\tilde\zeta\left(\frac{z_i}{z_j}\right)}
\sum_{\sigma\in\mathfrak{S}_s}\sigma
\left[
\prod_{i<j}\tilde\zeta\left(\frac{z_i}{z_j}\right)\cdot
\frac1{\prod_{i<s}\left(1-\mathbb{L}\frac{z_{i+1}}{z_i}\right)}\cdot
\frac1{1-\mathbb{L}^{\frac{1-s}2}z_1w}
\right]
\end{equation}
and
\begin{equation}\label{eq:Y1<Formula}
Y^{<0}_{\underline1}(\underline z,w)=(-1)^s
\frac{\mathbb{L}^{\frac14(g-1)s(s-1)}[\Jac]^s}{\prod_{i<j}\tilde\zeta\left(\frac{z_i}{z_j}\right)}
\sum_{\sigma\in\mathfrak{S}_s}\sigma
\left[
\prod_{i<j}\tilde\zeta\left(\frac{z_i}{z_j}\right)\cdot
\frac1{\prod_{i<s}\left(1-\mathbb{L}^{-1}\frac{z_i}{z_{i+1}}\right)}\cdot
\frac1{1-\mathbb{L}^{\frac{s-1}2}z_sw}
\right],
\end{equation}
where the rational functions are expanded in the regions $z_1\gg\ldots\gg z_s$, $w\ll1$ and $z_1\gg\ldots\gg z_s$, $w\gg1$ respectively.
\end{proposition}
\begin{remark}\label{rm:RatExpand}
The coefficients of the rational functions in the RHS of~\eqref{eq:Y1>Formula} and~\eqref{eq:Y1<Formula} belong to the ring $\overline{\Mot}(\mathbf k)[\sqrt\mathbb{L}]$, and it is not known whether this ring is integral. Thus some care should be taken. Note, however, that the RHS of~\eqref{eq:Y1>Formula} can be written in the form
\[
\frac{P(\underline z,w)}{M(\underline z,w)(1+Q(\underline z,w))},
\]
where $P(\underline z,w)$ is a polynomial, $M(\underline z,w)$ is a monomial in $\underline z$ and $w$, $Q(\underline z,w)$ is a polynomial in $z_{i+1}/z_i$ and $z_iw$ \emph{without constant term} (see Lemma~\ref{lm:zetaX}). Thus we define the expansion as
\[
\frac{P(\underline z,w)}{M(\underline z,w)(1+Q(\underline z,w))}=
\frac{P(\underline z,w)}{M(\underline z,w)}\left(
\sum_{i\ge0}(-Q(\underline z,w))^i
\right).
\]
Similar considerations apply to~\eqref{eq:Y1<Formula} and to the coefficients of $w$-expansions of the RHS of~\eqref{eq:Y1>Formula} and~\eqref{eq:Y1<Formula}.
\end{remark}
\begin{proof}[Proof of Proposition~\ref{pr:MainCalc}]
The proof is analogous to that of~\cite{SchiffmannIndecomposable}, we indicate the places, where some changes are needed. We use induction on $s$. The case $s=1$ is easy (the proof repeats that of~\cite{SchiffmannIndecomposable}). Next, set $E^{vec}_{\underline1}(\underline z)=E_1^{vec}(z_s)\ldots E_1^{vec}(z_1)$. Using Lemma~\ref{lm:HallProduct}(ii) and the fact that $\mathcal H_{fin}$ is a subalgebra of $\mathcal H$, we get (cf.~also Remark~\ref{rm:HallInfinite})
\begin{equation}\label{eq:Y1}
Y_{\underline1}(\underline z,w)=Y^{\ge0}_{\underline1}(\underline z,w)+Y^{<0}_{\underline1}(\underline z,w)+
\sum_{\substack{u+t=s\\u,t>0}}\mathbb{L}^{\frac12(g-1)ut}
\left(
E_{\underline1}^{vec}(\underline z)|
E_u^{<0}(\mathbb{L}^{\frac t2}w) E_t^{\ge0}(\mathbb{L}^{-\frac u2}w)
\right).
\end{equation}
By Proposition~\ref{pr:HallFormulas}\eqref{Halliii2} we get
\[
\Delta(E_1^{vec}(z))=
E_1^{vec}(z)\boxtimes1+
E_0(\mathbb{L}^{\frac12}z)E_0(\mathbb{L}^{-\frac12}z)^{-1}k_1\boxtimes E_1^{vec}(z).
\]
Let $\sigma:\{1,\ldots,s\}\to\{1,2\}$ be a map, and set
\[
X_\sigma=\prod_i^\to C_{\sigma(i)}(z_i),
\]
where $C_1(z)=E_1^{vec}(z)\boxtimes1$, $C_2(z)=E_0(\mathbb{L}^{\frac12}z)E_0(\mathbb{L}^{-\frac12}z)^{-1}k_1\boxtimes E_1^{vec}(z)$. We get by Proposition~\ref{pr:DeltaHom}
\[
\Delta(E_{\underline1}^{vec}(\underline z))=\sum_\sigma X_\sigma.
\]
Thus, denoting by $\Delta_{u,t}$ the component of $\Delta$ of rank $(u,t)$, we get
\[
\Delta_{u,t}(E_{\underline1}^{vec}(\underline z))=\sum_{\sigma\in\Sh_{u,t}} X_\sigma,
\]
where $\Sh_{u,t}$ denotes the set of $(u,t)$-shuffles, that is, maps $\sigma:\{1,\ldots,u+t\}\to\{1,2\}$ such that 1 has exactly $u$ preimages. Now combining~\eqref{eq:Y1} and Lemma~\ref{lm:CoProdPairing} we get
\begin{equation}\label{eq:Y1prelim}
Y_{\underline1}(\underline z,w)=Y^{\ge0}_{\underline1}(\underline z,w)+Y^{<0}_{\underline1}(\underline z,w)+
\sum_{\substack{u+t=s\\u,t>0}}\sum_{\sigma\in\Sh_{u,t}}\mathbb{L}^{\frac12(g-1)ut}
\left(
X_\sigma|
E_u^{<0}(\mathbb{L}^{\frac t2}w)\boxtimes E_t^{\ge0}(\mathbb{L}^{-\frac u2}w)
\right).
\end{equation}
Fix $\sigma\in\Sh_{u,t}$ and set
\[
H_\sigma(\underline z)=\prod_{\substack{(i,j),j>i\\ \sigma(i)=1,\sigma(j)=2}}
\frac{\tilde\zeta\left(\frac{z_j}{z_i}\right)}{\tilde\zeta\left(\frac{z_i}{z_j}\right)}
\]
(We expand $H_\sigma$ in $z_1\gg\ldots\gg z_s$.)
Now, repeating literally the argument from~\cite{SchiffmannIndecomposable} and using Proposition~\ref{pr:HallFormulas}(\ref{Halli}, \ref{Hallii}) and Lemma~\ref{lm:zetaX}(iii), we get
\[
X_\sigma=H_\sigma(\underline z)\cdot\left(
\prod_{i,\sigma(i)=1}^\to E_1^{vec}(z_i)
\prod_{j,\sigma(j)=2}^\to E_0(\mathbb{L}^{\frac12}z_j)E_0(\mathbb{L}^{-\frac12}z_j)^{-1}k_1^t
\right)\boxtimes
\prod_{j:\sigma(j)=2}^\to E_1^{vec}(z_j).
\]
Plugging this into~\eqref{eq:Y1prelim}, we get as in~\cite{SchiffmannIndecomposable}
\[
Y_{\underline1}(\underline z,w)=Y^{\ge0}_{\underline1}(\underline z,w)+Y^{<0}_{\underline1}(\underline z,w)+
\sum_{\substack{u+t=s\\u,t>0}}\sum_{\sigma\in\Sh_{u,t}}Y_\sigma(\underline z,w),
\]
with
\begin{equation}\label{eq:Ysigma}
Y_\sigma(\underline z,w)=\mathbb{L}^{\frac12(g-1)ut}H_\sigma(\underline z)
Y_{\underline1}^{<0}(z_{i_u},\ldots,z_{i_1},\mathbb{L}^{\frac t2}w)
Y_{\underline1}^{\ge0}(z_{j_t},\ldots,z_{j_1},\mathbb{L}^{-\frac u2}w),
\end{equation}
where $(i_u,\ldots,i_1)$ (resp.~$(j_t,\ldots,j_1)$) is the decreasing reordering of the set $\sigma^{-1}(1)$ (resp.~$\sigma^{-1}(2)$).
Now let us write
\[
Y^{\ge0}_{\underline 1}(\underline z,w)=\sum_{n\ge0}y_n^{\ge0}(\underline z)w^n,\qquad
Y^{<0}_{\underline 1}(\underline z,w)=\sum_{n<0}y_n^{<0}(\underline z)w^n,\qquad
Y_\sigma(\underline z,w)=\sum_n y_{\sigma,n}(\underline z)w^n.
\]
As in~\cite[(5.11)]{SchiffmannIndecomposable}, using Lemma~\ref{lm:Y1}, we can re-write~\eqref{eq:Ysigma} as
\begin{equation}\label{eq:yn}
\mathbb{L}^{(g-1)\frac{s(s-1)}4}[\Jac]^s
\sum_{\substack{l_1,\ldots,l_s\in\mathbb{Z}\\ \sum_i l_i=n}}z_1^{l_1}\ldots z_s^{l_s}
\mathbb{L}^{\frac12\sum_i l_i(2i-s-1)}=
y_n^{\ge0}(\underline z)+\sum_{u,\sigma}y_{\sigma,n}(\underline z)
\end{equation}
for $n\ge0$. (And the similar statement is true for $n<0$ if we replace $y_n^{\ge0}$ by $y_n^{<0}$.)
We know from the induction hypothesis that each $y_{\sigma,n}$ is an expansion of a rational function in a certain asymptotic region in the sense of Remark~\ref{rm:RatExpand}. However, we do not know a priori that $y_n^{\ge0}$ and $y_n^{<0}$ are expansions of rational functions. Let us prove this. Note two things.
(*) There is a polynomial $R(\underline z)$ in $z_{i+1}/z_i$ with constant term one such that $R(\underline z)y_n^{\ge0}(\underline z)$ is a Laurent polynomial. This follows from Remark~\ref{rm:RatExpand} and the fact that the LHS of~\eqref{eq:yn} is annihilated by $\prod_i(1-\mathbb{L} z_{i+1}/z_i)$.
(**) $y_n^{\ge0}(\underline z)$ belongs to
\begin{equation}\label{eq:NonZeroDiv}
\overline{\Mot}(\mathbf k)[\sqrt\mathbb{L}]
\left(\!\left(z_1,\frac{z_2}{z_1},\ldots,\frac{z_s}{z_{s-1}}\right)\!\right).
\end{equation}
This is proved similarly to Lemma~\ref{lm:LaurSer}.
\begin{lemma}
Any formal power series satisfying (*) and (**) is an expansion of a rational function in the region $z_1\gg\ldots\gg z_s$.
\end{lemma}
\begin{proof}
Let $Q(\underline z)$ be a formal power series satisfying (*) and (**). Let $R(\underline z)$ be a polynomial in $z_{i+1}/z_i$ with constant term one such that $P(\underline z)=R(\underline z)Q(\underline z)$ is a Laurent polynomial. Subtracting from $Q(\underline z)$ the expansion of $P(\underline z)/R(\underline z)$ in the region $z_1\gg\ldots\gg z_s$, we may assume that $P=0$. However, $R$ is a non-zero divisor in~\eqref{eq:NonZeroDiv}.
\end{proof}
We see that $y_n^{\ge0}(\underline z)$ is an expansion of a rational function in the asymptotic region $z_1\gg\ldots\gg z_s$. Similar considerations show that $y_n^{<0}(\underline z)$ is an expansion of a rational function in the same region. The rest of the proof is completely similar to that of~\cite[Prop.~5.3]{SchiffmannIndecomposable}.
\end{proof}
Recall that in Section~\ref{sect:explicit} for a partition $\lambda=1^{r_1}2^{r_2}\ldots t^{r_t}$ such that $\sum_i r_i=n$, we defined the iterated residue $\res_\lambda$.
\begin{corollary}\label{cor:Y}
We have the following equations of series with coefficients in $\overline{\Mot}(\mathbf k)[\sqrt\mathbb{L}]$.
\begin{multline*}
Y^{\ge0}_{\underline r}(\mathbb{L}^{-\frac12r_t}z_{1+r_{<t}},\ldots,\mathbb{L}^{-\frac12r_i}z_{1+r_{<i}},\ldots,
\mathbb{L}^{-\frac12r_1}z_1,w)=\mathbb{L}^{b(\underline r)}\prod_i vol_{r_i}\\
\cdot\res_\lambda\left[
\frac1{\prod_{i<j}\tilde\zeta\left(\frac{z_i}{z_j}\right)}
\sum_{\sigma\in\mathfrak{S}_n}
\left\{
\prod_{i<j}\tilde\zeta\left(\frac{z_i}{z_j}\right)\cdot
\frac1{\prod_{i<n}\left(1-\mathbb{L}\frac{z_{i+1}}{z_i}\right)}\cdot
\frac1{1-\mathbb{L}^{\frac{-n}2}z_1w}
\right\}
\right]\prod_{\substack{j=1\\j\notin\{r_{<i}\}}}^n\frac{dz_j}{z_j},
\end{multline*}
where
\[
b(\lambda)=\frac12(g-1)\sum_{i<j}r_ir_j.
\]
\end{corollary}
\begin{proof}
Combine the previous proposition with $s=n$ and Proposition~\ref{pr:HallFormulas}\eqref{Halliv} (cf.~\cite[(5.15)]{SchiffmannIndecomposable}).
\end{proof}
\subsection{Proof of Theorem~\ref{th:NilpEnd}}\label{sect:NilpEnd}
Combining Proposition~\ref{pr:GtoY} with Corollary~\ref{cor:Y}, we get a formula for $G^{\ge0}_{\underline r}$. Let $\mathcal E_{r,d}^{coh,nilp}$ be the moduli stack of coherent sheaves on $X$ of class $(r,d)$ with nilpotent endomorphisms. Let $\mathcal E_{r,d}^{\ge0,coh,nilp}$ denote the constructible subset corresponding to HN-nonnegative sheaves.
Repeating almost literally the arguments from~\cite[Sect.~3, Sect.~5.1, Sect.~5.6--5.7]{SchiffmannIndecomposable}, we get
\begin{equation}\label{eq:CohNilp}
\sum_{r,d\ge0}[\mathcal E_{r,d}^{\ge0,coh,nilp}]w^rz^d=\sum_{\lambda}
\mathbb{L}^{(g-1)\langle\lambda,\lambda\rangle}J_\lambda^{mot}(z) H_\lambda^{mot}(z)w^{|\lambda|}
\cdot\Exp\left(\frac{[X]}{\mathbb{L}-1}\cdot\frac{z}{1-z}\right),
\end{equation}
where $\displaystyle\frac z{1-z}$ is expanded in powers of $z$.
\begin{lemma} We have in $\mathrm{Mot}(\mathbf k)[[z,w]]$
\[
\sum_{r,d\ge0}[\mathcal E_{r,d}^{\ge0,coh,nilp}]w^rz^d=
\sum_{r,d\ge0}[\mathcal E_{r,d}^{\ge0,nilp}]w^rz^d\cdot
\sum_{d\ge0}[\mathcal E_{0,d}^{coh,nilp}]z^d.
\]
\end{lemma}
\begin{proof}
Fix $r$ and $d$. Consider the stratification $\mathcal E_{r,d}^{\ge0,coh,nilp}=\sqcup_i\mathcal E_i$, where $\mathcal E_i$ consists of pairs $(F,\Phi)$ such that the torsion part of $F$ has degree $i$. It is enough to show that
\[
[(\mathcal E_{r,d-i}^{\ge0,nilp}\times\mathcal{C}oh_{0,i}^{nilp})\xrightarrow{\phi}\mathcal{C}oh_{r,d}]=
[\mathcal E_i\xrightarrow{\psi}\mathcal{C}oh_{r,d}],
\]
where the 1-morphism $\phi$ is defined as $((E,\Psi),(T,\Phi))\mapsto E\oplus T$. Let $\xi:\Spec K\to\mathcal{C}oh_{r,d}$ be a point represented by a coherent sheaf $F$ over $X_K$. Write $F=T\oplus E$, where $T$ is torsion, $E$ is torsion free. According to Proposition~\ref{pr:pointwise zero}, we need to check that the $\xi$-fibers of $\phi$ and $\psi$ have the same motivic classes. We may assume that $T$ has degree $i$ and that $E$ is HN-nonnegative (otherwise both fibers are empty).
The $\xi$-fiber of $\psi$ is the motivic class of $Nil(T\oplus E)$, where the notation stands for the nilpotent cone of the algebra $\End(T\oplus E)$. The $\xi$-fiber of the direct sum morphism $\mathcal{C}oh_{r,d-i}\times\mathcal{C}oh_{0,i}\to\mathcal{C}oh_{r,d}$ is the additive group $\Hom(T,E)$. Thus the $\xi$-fiber of $\phi$ is equal to
\[
Nil(T)\times Nil(E)\times\Hom(T,E).
\]
Using the fact that every endomorphism of $T\oplus E$ preserves $T$, it is easy to see that the above scheme is isomorphic to $Nil(T\oplus E)$. Proposition~\ref{pr:pointwise zero} completes the proof (cf.~the proof of~\cite[Thm.~1.4]{MozgovoySchiffmanOnHiggsBundles}).
\end{proof}
\begin{lemma}
\[
\sum_{d\ge0}[\mathcal E_{0,d}^{coh,nilp}]z^d=
\Exp\left(\frac{[X]}{\mathbb{L}-1}\cdot\frac{z}{1-z}\right).
\]
\end{lemma}
\begin{proof}
Put $w=0$ in~\eqref{eq:CohNilp}.
\end{proof}
Combining~\eqref{eq:CohNilp} and the last two lemmas, we get Theorem~\ref{th:NilpEnd}.
\subsection{Proof of Theorem~\ref{th:ExplAnsw}}\label{sect:ExplAnsw}
We claim that
\begin{equation}\label{eq:Hrd=Mge0}
[\mathcal M_{r,d}^{\ge0,ss}]=H_{r,d}.
\end{equation}
(We use notation from Sections~\ref{sect:explicit} and~\ref{Sect:Higgs}.) According to Lemma~\ref{lm:EqSlp}, we just need to show that
\begin{equation*}
\prod_{\tau\ge0}\left(1+\sum_{d/r=\tau}{\mathbb{L}^{(1-g)r^2}[\mathcal M_{r,d}^{\ge0,ss}]}w^rz^d\right)=
\prod_{\tau\ge0}\left(\Exp\left(
\sum_{d/r=\tau}B_{r,d}w^rz^d
\right)\right).
\end{equation*}
By Proposition~\ref{pr:KS} the LHS is equal to
\[
1+\sum_{r>0,d\ge 0}\mathbb{L}^{(1-g)r^2}[\mathcal M^{\ge0}_{r,d}]w^rz^d.
\]
The RHS is equal to
\begin{multline*}
\Exp\left(\sum_{d,r\ge0}B_{r,d}w^rz^d\right)=
\Exp\left(\mathbb{L}\Log\left(\sum_\lambda\mathbb{L}^{(g-1)\langle\lambda,\lambda\rangle}J_\lambda^{mot}(z) H_\lambda^{mot}(z)w^{|\lambda|}\right)\right)=\\
\Pow\left(\sum_\lambda\mathbb{L}^{(g-1)\langle\lambda,\lambda\rangle}J_\lambda^{mot}(z) H_\lambda^{mot}(z)w^{|\lambda|},\mathbb{L}\right).
\end{multline*}
(We used a property of $\Exp$ and the definition of $\Pow$.) According to Theorem~\ref{th:NilpEnd} and Proposition~\ref{pr:NilpEndPow}, the last expression is equal to $\sum_{r,d}[\mathcal E^{\ge0}_{r,d}]w^rz^d$. Now Lemma~\ref{lm:HiggsEnd} completes the proof of~\eqref{eq:Hrd=Mge0}.
Finally, Lemma~\ref{lm:+ss}(i) completes the proof of Theorem~\ref{th:ExplAnsw}.
The Tevatron Run-II program officially started in March 2001
after the previous run (Run I)
ended in 1996. Between these years, the Tevatron accelerator and the CDF and D0
detectors have undergone vast upgrades. The accelerator complex
has added
the
Main Injector, replacing the old Main Ring, to inject higher intensity beams
to the Tevatron and to produce more anti-protons to be used for collisions.
Also the Tevatron beam energy has been increased
and it resulted in a center-of-mass energy of 1.96 TeV.
The instantaneous luminosity has improved steadily since the beginning of Run II,
and at the time of the Conference the record value was $8.3 \times 10^{31}$~cm$^{-2}$~s$^{-1}$.
This is about 5 times the Run-I record value, and almost matches the Run-IIa goal
of $8.6 \times 10^{31}$~cm$^{-2}$~s$^{-1}$.
The integrated luminosity delivered to each experiment
has exceeded 500~pb$^{-1}$,
with about 80\% of it recorded by the detectors.
Both the CDF and D0 experiments have a broad physics program being carried out
with the data. In the remainder of this manuscript we summarize the results in the
areas of electroweak physics, top quark physics and bottom quark physics.
Exotic physics, including Higgs and SUSY searches,
is covered by another speaker~\cite{d0-sp}.
\section{Electroweak Physics}
\subsection{Production of single gauge bosons}
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.46\textwidth]{Z_InvMassCCCP_40_130_nocuts.eps}
\includegraphics*[width=0.42\textwidth]{E07F08.eps}
\caption{%
Invariant mass distributions of lepton pairs
for $Z^0 \rightarrow e^+ e^-$ (left, CDF) and
$Z^0 \rightarrow \mu^+ \mu^-$ (right, D0) candidates.
}
\label{fig:z0}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.44\textwidth]{mt_bless_mulike_022604.eps}
\hspace*{3mm}
\includegraphics*[width=0.37\textwidth]{E06F04.eps}
\caption{%
Transverse mass distributions of lepton and missing $E_T$ system
for $W \rightarrow e \nu$ candidates (left: CDF, right: D0).
}
\label{fig:w-mt}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.55\textwidth]{cs_vs_e_withplugW.eps} \hspace*{3mm}
\includegraphics*[width=0.29\textwidth]{gamma_summary_cdf.eps}
\caption{%
Left: Production cross section of weak vector bosons
as a function of collision center-of-mass energy.
Right: $W$ boson width measurements.
}
\label{fig:w-z-cross}
\end{center}
\end{figure}
Since the termination of the operation of the LEP2 collider,
Tevatron is the only accelerator capable of producing $W^\pm$ and $Z^0$ bosons.
CDF and D0 collaborations have performed studies of various aspects
of the weak boson properties.
They are cleanly identified with their decays to
leptons (electrons or muons). Figure~\ref{fig:z0} shows invariant mass spectra
of di-lepton candidates from CDF and D0. The production cross sections are
measured to be~\cite{cdf-z-cross}
\begin{eqnarray*}
\sigma (\bar p p \rightarrow
Z^0 \rightarrow \ell^+ \ell^- ) & = &
254.3 \pm 3.3 \pm 4.3 \pm 15.3\ {\rm pb \ (CDF)} \\
& = &
291.3 \pm 3.0 \pm 6.9 \pm 18.9 \ {\rm pb \ (D0)},
\end{eqnarray*}
in good agreement with a theory prediction of
$250.5 \pm 3.8$~pb (NNLO, MRST)~\cite{w-cross}.
$W$ boson decays are identified
with an energetic lepton and a large missing transverse energy.
Figure~\ref{fig:w-mt} shows transverse mass distributions.
The production cross sections are measured to be~\cite{cdf-z-cross,d0-w-cross}
\begin{eqnarray*}
\sigma (\bar p p \rightarrow
W \rightarrow \ell \nu ) & = &
2777 \pm 10 \pm \ \, 52 \pm 167 \ {\rm pb \ \ (CDF)} \\
& = &
2865.2 \pm 8.3 \pm 62.8 \pm 40.4 \pm 186.2 \ {\rm pb \ \ (D0 \ }e)\\
& = &
3226 \pm 128 \pm 100 \pm 322 \ {\rm pb \ \ (D0 \ \ \mu)},
\end{eqnarray*}
again in good agreement with theory,
$2687 \pm 40 \ {\rm pb \ (NNLO, MRST)}$~\cite{w-cross}.
Figure~\ref{fig:w-z-cross}~(left) shows a summary of those measurements,
along with earlier measurements at the CERN collider, as a function of the
collision center-of-mass energy.
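When comparing such measurements with theory, the quoted statistical, systematic and luminosity uncertainties are conventionally combined in quadrature (a standard convention, not spelled out in the text). A small sketch using the CDF $W\rightarrow\ell\nu$ numbers quoted above:

```python
import math

def combine(*errs):
    # Combine independent uncertainty components in quadrature.
    return math.sqrt(sum(e * e for e in errs))

# CDF W -> l nu: 2777 +- 10 (stat) +- 52 (syst) +- 167 (lumi) pb
meas, err = 2777.0, combine(10.0, 52.0, 167.0)
theory, theory_err = 2687.0, 40.0          # NNLO, MRST

pull = (meas - theory) / combine(err, theory_err)
print(f"sigma_W = {meas:.0f} +- {err:.0f} pb, pull vs theory = {pull:+.2f}")
```

The luminosity term dominates the total uncertainty, and the deviation from theory is well below one standard deviation, consistent with the "good agreement" stated above.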
The ratio $R$ of the $W$ and $Z$ boson production rates,
defined by
\[ R \equiv
\frac { \sigma (\bar p p \rightarrow W^\pm )
\cdot
{\cal B} (W^\pm \rightarrow \ell \nu ) }
{ \sigma (\bar p p \rightarrow Z^0 )
\cdot
{\cal B} (Z^0 \rightarrow \ell^+ \ell^- )} ,
\]
includes the branching fraction of the $W$ boson. Using a theory prediction
for the ratio of production cross sections and measurements of
${\cal B} (Z \rightarrow \ell^+ \ell^- )$,
one can extract
${\cal B} (W^+ \rightarrow \ell^+ \nu )$, or
the total width assuming
the leptonic partial width.
The extracted numbers are
$\Gamma_W = 2.071 \pm 0.040$~GeV (CDF) and
$ 2.187 \pm 0.128$~GeV (D0).
They are shown in Figure~\ref{fig:w-z-cross}~(right).
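The extraction chain described above can be sketched numerically. The specific input values below (the measured $R$, the theory cross-section ratio, the LEP leptonic branching fraction of the $Z$, and the SM leptonic partial width of the $W$) are illustrative assumptions on our part, not numbers quoted in the text:

```python
# Illustrative inputs (assumed for the sketch):
R_exp = 10.73                 # measured sigma_W B_W / (sigma_Z B_Z)
sigma_ratio_theory = 3.35     # (sigma_W / sigma_Z) from QCD theory
B_Z_lept = 0.033658           # B(Z -> l+ l-), LEP
Gamma_W_lept = 0.2270         # SM partial width Gamma(W -> l nu), GeV

# R = (sigma_W / sigma_Z) * B_W / B_Z   =>   B_W = R * B_Z / (sigma_W / sigma_Z)
B_W = R_exp * B_Z_lept / sigma_ratio_theory
Gamma_W = Gamma_W_lept / B_W
print(f"B(W -> l nu) = {B_W:.4f},  Gamma_W = {Gamma_W:.3f} GeV")
```

With these inputs the extracted width lands near 2.1~GeV, in the same range as the CDF and D0 values quoted above.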
\subsection{Pair production of gauge bosons}
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.43\textwidth]{comb_phoet_log.eps} \hspace*{8mm}
\includegraphics*[width=0.4\textwidth]{E01F03norot3.eps}
\caption{%
Transverse momentum distribution of photons produced in association
with the $W$ boson. Left: CDF, right: D0.
}
\label{fig:w-gam}
\end{center}
\end{figure}
The unified electroweak theory has a non-Abelian gauge structure
and results in self-couplings of gauge bosons.
CDF has observed the production of $W^+ W^-$ pairs
for the first time
in hadron colliders.
The leptonic decay channel of the $W$ bosons is used,
resulting in two energetic leptons and a large missing $E_T$, and no extra jet activities.
The production cross section has been measured to be~\cite{cdf-ww}
\[
\sigma (\bar p p \rightarrow
W^+ W^- )
= 14.3 \, ^{+ \, 5.6} _{ - \, 4.9} \pm 1.6 \pm 0.9~{\rm pb} ,
\]
to be compared with a theory prediction of $12.5\pm 0.9$~pb~\cite{theory-ww}.
Associated production of a $W$ or $Z$ boson with a photon is also studied.
Photons are identified in the transverse momentum ranges
above 7 GeV and 8 GeV in CDF and D0, respectively.
Figure~\ref{fig:w-gam} shows transverse momentum spectra of
photons in $W \gamma$ candidate events.
Production cross section is measured to be~\cite{cdf-wgam}
\begin{eqnarray*}
& & \sigma ( \bar p p \rightarrow W \gamma )
\cdot {\cal B} ( W \rightarrow \ell \nu ) \\
& = & 19.7 \pm 1.7 \pm 2.0 \pm 1.1
\ {\rm pb \ \ (CDF)
\ vs.} \ 19.3 \pm 1.4\ {\rm pb} \ \ {\rm (theory) } \\
& = & 19.3 \pm 2.7 \pm 6.1 \pm 1.2
\ {\rm pb \ \ (D0)
\ \ \ \, vs. } \ 16.4 \pm 0.4 \ {\rm pb} \ \ { \rm (theory) } ,
\end{eqnarray*}
where the difference reflects the different kinematic requirements.
A more direct test of the gauge couplings can be performed
by studying the photon angular distributions in these events
and searching directly for the radiation amplitude zero.
\section {Top Quark Physics}
The top quark was discovered by CDF and D0 in Tevatron Run-I data of about 100~pb$^{-1}$.
Tens of events were reconstructed then, and therefore all measurements were
statistically limited.
The expected 20-fold increase in the amount of data in Run~II should allow
more detailed studies of top quark properties.
They include production cross section, mass, production kinematics,
$t \bar t$ spin correlations, helicity of $W$ bosons produced in top decays,
branching fractions of various decay modes and searches for rare decays including those
beyond the standard model.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.45\textwidth]{njet_tt.eps} \hspace*{3mm}
\includegraphics*[width=0.38\textwidth]{T03F01f.eps}
\caption{%
Jet multiplicity distributions of top candidate events
in the di-lepton channel.
Left: CDF, right: D0.
}
\label{fig:top-jet-mul-dilep}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.29\textwidth]{run167551event7969376SVX.eps}
\includegraphics*[width=0.35\textwidth]{plotAllTags_Mar11_ht200.eps}
\includegraphics*[width=0.32\textwidth]{T05F03a.eps}
\caption{%
Left: Example of $b$-tagged top candidate event.
Middle and right:
Jet multiplicity distributions of top candidate events
in the lepton plus jet channel from CDF and D0.
The signal region is $N_{\rm jet} \geq 3$.
}
\label{fig:top-example}
\end{center}
\end{figure}
\subsection{Production cross sections}
The signature of top quark pair production and decays is two $W$ bosons and two $b$ quark jets.
Depending on the $W$ decay modes, the final states can be
two leptons and two jets, one lepton and four jets,
or six jets. The two $b$ quark jets can be identified
by secondary vertices (reflecting detectable $B$-hadron lifetimes) or ``soft'' leptons
from semileptonic decays.
The di-lepton channel is particularly clean and does not usually
require that the $b$-jets be identified.
Figure~\ref{fig:top-jet-mul-dilep} shows the
jet multiplicity distributions of the candidate events in this channel.
The production cross section is measured to be~\cite{cdf-top-cross-dil}
\begin{eqnarray*}
\sigma ( \bar p p \rightarrow t \bar t X )
& = & \ \, 7.0 \, ^{ +2.7} _{ - \, 2.3} \, ^{ +1.5} _{ - \, 1.3} \pm 0.4 \
{\rm pb} \ \ \ {\rm (CDF \ 197 \ pb}^{-1}) \\
& = &
14.3 \, ^{ +5.1} _{ - \, 4.3} \, ^{ +2.6} _{ - \, 1.9} \pm 0.9 \
{\rm pb} \ \ \ {\rm (D0 \ \ \ \ 150 \ pb}^{-1}) .
\end{eqnarray*}
The lepton plus jet final state is less pure,
and it is required that one or both of the $b$-quark
jets be identified.
An example event with secondary vertex tags is shown in Figure~\ref{fig:top-example}~(left).
The jet multiplicity distributions of these candidate events are
also shown in Figure~\ref{fig:top-example}.
The excess of events in the $N_{\rm jet} \geq 3$ bins is well described after the
inclusion of top contributions.
The production cross section is measured to be~\cite{cdf-top-cross}
\begin{eqnarray*}
\sigma ( \bar p p \rightarrow t \bar t X )
& = & \ \, 5.6 \, ^{ +1.2} _{ - \, 1.0} \, ^{ +1.0} _{ - \, 0.7}
\ \ \ \ \ \ \ \ \, {\rm pb \ \ \ (CDF \ 162 \ pb}^{-1}) \\
& = &
8.2 \pm 1.3 \, ^{ +\, 1.9} _{ - \, 1.6} \pm 0.5
\ {\rm pb \ \ \
(D0 \ \ \ 160 \ pb}^{-1}) .
\end{eqnarray*}
There exist many other measurements of this quantity
using various techniques~\cite{top-other}.
\subsection{Top quark mass}
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.40\textwidth]{mw-mt.eps} \hspace*{3mm}
\includegraphics*[width=0.45\textwidth]{mtop0403.eps}
\caption{%
Left: Top quark and $W$ boson mass measurements
and their relation to the Higgs boson mass.
Right: Summary of top quark mass measurements from Run I.
}
\label{fig:mw-mtop}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.42\textwidth]{topquark.eps}
\hspace*{3mm}
\includegraphics*[width=0.40\textwidth]{EvMass_datMC.eps}
\caption{%
Left: Top quark mass likelihood distribution
from D0 in re-analysis of Run-I data.
Right: Distribution of top quark mass reconstructed with DLM at CDF.
}
\label{fig:top-mass}
\end{center}
\end{figure}
The mass of the top quark is an important quantity to
measure, not only in its own right,
but also from other physics perspectives.
When combined with the $W$ boson mass,
it can provide indirect information on the Higgs boson mass.
Figure~\ref{fig:mw-mtop} shows this relation,
together with an expected precision in the measurements
of the $W$ boson and top quark masses in Tevatron Run II~\cite{run1-top}.
The figure also summarizes Run-I measurements of the top quark mass.
The D0 collaboration has re-analyzed Run-I data using a new technique of
determining the top quark mass,
utilizing maximal information from the events, including
matrix elements for $t\bar t$ production and decay and parton
distributions~\cite{d0-nature}.
The extracted top quark mass is
\[
m_t = 180.1 \pm 3.6 \pm 4.0 \ {\rm GeV} / c^2 .
\]
The CDF Collaboration has applied a method called the Dynamical Likelihood Method (DLM),
originally developed in the 1980s~\cite{DLM}, to measure the top quark mass.
Figure~\ref{fig:top-mass} shows the reconstructed top mass
distribution. The extracted mass value is~\cite{mass-dlm}
\[
m_t = 174.9 \, ^{+ \, 4.5} _{- \, 5.0} \pm 6.2 \ {\rm GeV}/ c^2.
\]
There exist many other measurements using various techniques,
including those made public
since the time of this Conference.
However, we will not describe them in this report.
We refer the reader to Ref.~\cite{top-other}.
\subsection{$W$ helicity in top decays}
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.35\textwidth]{T06F01b.eps}
\includegraphics*[width=0.29\textwidth]{combined_lepton_pt-scale99.eps}
\includegraphics*[width=0.33\textwidth]{T06F04.eps}
\caption{%
Left: Lepton angular distributions in the rest frame of the $W$ boson
with respect to the top quark direction
in the cases of longitudinal $(0)$ and left-handed $(-)$ $W$ bosons.
Middle: Transverse momentum distribution of leptons in top candidate events (CDF).
Right: Lepton angular distribution in lepton$+$jet top candidate events (D0).
}
\label{fig:w-angle}
\end{center}
\end{figure}
The $W^-$ boson produced in the top quark decay can be either left-handed or longitudinally
polarized. Their mixture is predicted reliably by the standard model and is
\begin{eqnarray*}
f_0 & \equiv &
\frac { \Gamma ( t \rightarrow b \, W_0 ) }
{ \Gamma ( t \rightarrow b \, W_0 )
+ \Gamma ( t \rightarrow b \, W_L ) }
= \frac { m_t^2 } { m_t^2 + 2 \, m_W^2} \\
& = &
0.70 \ \ \ {\rm for } \ \ m_t = 175 \ \ {\rm GeV/}c^2 ,
\end{eqnarray*}
where $f_0$ is the fraction of the longitudinal component.
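As a quick numerical check of the formula above (the $W$ mass value below is an assumed input for illustration):

```python
# Numerical check of the longitudinal-fraction formula above;
# the W mass value is an assumed input for illustration.
m_t = 175.0  # GeV/c^2
m_W = 80.4   # GeV/c^2 (assumed)
f0 = m_t**2 / (m_t**2 + 2.0 * m_W**2)
print(f"f0 = {f0:.2f}")  # f0 = 0.70
```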
Two different methods have been used to extract the fraction.
The first examines the lepton momentum spectrum.
The different $W$ polarization states
give different lepton angular distributions in the top quark decay.
This is shown in Figure~\ref{fig:w-angle}.
The angle $\theta^*$ is the angle of the lepton momentum direction
in the rest frame of the $W$ boson
with respect to the top quark momentum direction.
The left-handed $W$ boson produces leptons
peaked toward the backward direction,
while for the longitudinal
$W$ boson the distribution is symmetric.
The right-handed $W$ boson, which is absent in the standard model,
would give forward-peaked leptons.
The lepton momentum in the laboratory frame reflects
these angular distributions, and is harder for the leptons from
longitudinal $W$ and softer for those from
the left-handed $W$. The lepton spectrum from CDF is shown
in Figure~\ref{fig:w-angle}~(middle).
The extracted longitudinal fraction $f_0$,
under the assumption that the right-handed component is absent,
is~\cite{d0-helicity}
\[
f_0 = 0.27 \, ^{ + \, 0.35} _{ - \, 0.24} ,
\]
not inconsistent with the standard model prediction.
D0 reconstructs the angle $\theta^*$ and examines its
distribution. It is shown in Figure~\ref{fig:w-angle}~(right).
The fraction of the (non-SM) right-handed polarization ($f_+$)
is fitted for, with the longitudinal fraction $f_0$ fixed to the standard model value,
and is determined to be~\cite{d0-helicity}
\[
f_+ = -0.13 \pm 0.23, \ \ {\rm or} \ \
f_+ < 0.244 \ \ (90\% \ {\rm CL} ) .
\]
\section{Bottom Quark Physics}
Since the confirmation of large CP violation in some $B$-hadron decay modes a few years ago,
the thrust of $B$ physics is now in testing the Kobayashi-Maskawa picture of
CP violation and in particular the consistency of the unitarity triangle,
and in searches for possible effects of new physics such as supersymmetry.
A lot of excitement has emerged since the summer of 2003,
when the Belle Collaboration
announced a possible hint of new physics
in a measurement of CP asymmetry in the
$B^0 \rightarrow \phi K^0_S$ decay.
Within the standard model, the CP asymmetry measured in this decay mode should be
identical to that measured in the (well-established)
$B^0 \rightarrow J/\psi K^0_S$ decay mode.
In both cases
CP asymmetry arises, in the standard model,
from the complex phase of $B^0 \bar B^0$ mixing
and is $\sin 2\beta$.
However, if a new phase exists in their decays,
the asymmetries in the two modes can be different.
The $B^0 \rightarrow \phi K^0_S$ decay
proceeds via a quark-level transition $b \rightarrow s \bar s s$,
which is a loop process in the standard model
and is suppressed
relative to tree-level processes.
The asymmetry in the $B^0 \rightarrow \phi K^0_S$ mode
as of summer 2003 was
$-0.96 \pm 0.51$~\cite{belle}\footnote{
As of ICHEP 2004, the new value of CP asymmetry in
the $\phi K^0$ mode from Belle
is $ +0.06 \pm 0.33 \pm 0.09$ (hep-ex/0409049).
},
which is about 3.5~$\sigma$ away
from $+ 0.731 \pm 0.056$ measured with $b \rightarrow c \bar c s$ modes.
Therefore, if new physics exists, it is in the $b \rightarrow s$
transitions.
The CDF and D0 experiments can provide unique tests of some of the $b \rightarrow s$
processes, taking advantage of decays of the $B^0_s$ meson,
which cannot be produced at the $\Upsilon(4S)$
resonance.
They are:
\begin{itemize}
\item $B^0_s \bar B^0_s$ oscillations.
\item Search for CP violation in $B^0_s \rightarrow J/\psi \, \phi$.
\item Measurement of CP asymmetries in $B^0_d \rightarrow \pi^+ \pi^-$
and $B^0_s \rightarrow K^+ K^-$ modes.
\item Search for rare decays $B^0_{s,d} \rightarrow \mu^+ \mu^-$.
\end{itemize}
We discuss each of them in some detail below.
Before doing that, however, we describe
a benchmark $B$ physics measurement
from D0, which concerns the ratio of the charged and neutral $B$ mesons,
$B^-$ and $\bar B^0$.
\begin{figure}[htb]
\begin{center}
\includegraphics*[width=0.45\textwidth]{B03F02.eps}
\hspace*{3mm}
\includegraphics*[width=0.45\textwidth]{B03F04.eps}
\caption{%
Left: Signal of $\bar B \rightarrow \mu^- \bar \nu D^{*+} X$ decays
reconstructed in D0 data.
Right: Ratio of $\mu^- D^{*+} $ and $\mu^- D^0$ yields
as a function of their estimated proper decay time.
}
\label{fig:d0-dstar}
\end{center}
\end{figure}
The lifetimes of different $B$-hadron species
are of interest,
because they offer probes into $B$-hadron decay mechanisms
beyond the simple spectator model picture.
One way to measure their lifetimes separately is
to use signals of fully reconstructed decays.
Another way is to use semileptonic decays, which can be written as
$\bar B \rightarrow \ell^- \bar \nu {\bf D}$, where
${\bf D}$ is a charm hadron system whose charge is
correlated with the parent $B$ hadron charge.
To be more specific,
the $\ell^- D^{*+}$ final state is dominated by the $\bar B^0$ meson decays,
and
the $\ell^- D^0$ final state is dominated by the $B^-$ meson decays
(provided that those coming from $D^{*+}$ decays are excluded),
allowing us to extract the two lifetimes.
The D0 signal of the $\bar B \rightarrow \mu^- \bar \nu D^{*+} X$ decay
is shown in Figure~\ref{fig:d0-dstar}~(left).
D0 examines the lifetime dependence of
the ratio of the rates of the two final states,
$\mu^- D^{*+}$ and $\mu^- D^0$.
If the lifetimes of the two parent $B$ meson
states are different, the ratio should change as a function
of the decay time.
Figure~\ref{fig:d0-dstar}~(right) shows this dependence,
and the ratio clearly decreases with increasing decay time,
meaning that the $\bar B^0$ meson has
a shorter lifetime than the $B^-$ meson.
The extracted number is~\cite{d0-life}
\[
\tau(B^-) / \tau( \bar B^0 ) = 1.093 \pm 0.021 \pm 0.022,
\]
consistent with recent measurements at the $B$ factory and other experiments.
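The behaviour in Figure~\ref{fig:d0-dstar}~(right) can be sketched with a toy calculation; the lifetime values below are illustrative inputs chosen so that $\tau(B^-)/\tau(\bar B^0)\approx 1.09$, not the D0 fit results:

```python
import math

# Toy model of the mu D*+ (mostly B0bar) to mu D0 (mostly B-) yield ratio
# as a function of proper decay time t; lifetimes are assumed, illustrative
# values with tau(B-)/tau(B0) ~ 1.09.
tau_B0 = 1.53  # ps (assumed)
tau_Bm = 1.67  # ps (assumed)

def yield_ratio(t, r0=1.0):
    """Relative yield ratio at proper time t (ps), normalised to r0 at t=0."""
    return r0 * math.exp(-t / tau_B0) / math.exp(-t / tau_Bm)

# The shorter B0 lifetime makes the ratio fall with increasing decay time:
print(yield_ratio(0.5) > yield_ratio(2.0))  # True
```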
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.43\textwidth]{results_prl2_s-scale.eps}
\hspace*{3mm}
\includegraphics*[width=0.48\textwidth]{D0-bs-mumu.eps}
\caption{%
Dimuon invariant mass distributions near the $B$ meson mass.
Left from CDF and right from D0.
}
\label{fig:bmumu}
\end{center}
\end{figure}
\vspace*{5mm}
Now we discuss $b \rightarrow s$ flavor-changing processes
and related topics
that can be studied at the Tevatron.
\subsection{Search for rare decays $B^0_{s,d} \rightarrow \mu^+ \mu^-$}
In the standard model these decays can proceed
via higher-order, box and loop,
diagrams with weak bosons in the intermediate states.
They are also suppressed by CKM factors, $| V_{ts} | ^2 $ for the $B^0_s$ meson
and $ | V_{td} | ^2 $ for the $B^0_d$ meson.
Furthermore, the initial state is spin zero,
so they are suppressed by helicity conservation.
The standard model predictions for the branching fractions are~\cite{buras}
\begin{eqnarray*}
{\cal B } ( B^0_s \rightarrow \mu^+ \mu^- ) & = &
(3.4 \ \pm \ 0.5 \ ) \times 10^{- \, 9} \\
{\cal B } ( B^0_d \rightarrow \mu^+ \mu^- ) & = &
(1.00 \pm 0.14 ) \times 10^{-10} .
\end{eqnarray*}
The values for the corresponding electron modes are five orders of
magnitude smaller.
These are extremely small values,
and so these decays are a good place to look for
effects of new physics.
Both the CDF and D0 experiments have performed the search.
Figure~\ref{fig:bmumu}~(left) shows an invariant mass spectrum
of CDF dimuon candidate events near the $B$ meson mass.
The shaded regions show the search windows.
One candidate event
is found in the overlap region of $B_d^0$ and $B_s^0$ mesons
in a data sample of 171~pb$^{-1}$,
while the expected number of background events is $1.1 \pm 0.3$.
The following upper limits (95\% CL)
have been placed~\cite{bmumu}
\begin{eqnarray*}
{\cal B} ( B^0_s \rightarrow \mu^+ \mu^- ) & < &
7.5 \times 10^{-7} \\
{ \cal B} ( B^0_d \rightarrow \mu^+ \mu^- ) & < &
1.9 \times 10^{-7} ,
\end{eqnarray*}
which improves the previous CDF limits by a factor of three.
The mass distribution from D0 is also shown in the figure. At the time of
the Conference they had completed a sensitivity study but
had not opened the signal region data box yet.
The estimated sensitivity
with 180~pb$^{-1}$ of data
was
$ {\cal B}(B^0_s \rightarrow \mu^+ \mu^-)
\sim 10.1 \times 10^{-7}
$
at the 95\% CL.
After the Conference, D0 improved the sensitivity further and
obtained a 95\% CL upper limit of~\cite{d0-bsmumu-pub}
\[
{\cal B}(B^0_s \rightarrow \mu^+ \mu^-)
< 5.0
\times 10^{-7} .
\]
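For orientation, the scale of such limits can be sketched with a simple Poisson counting calculation in the spirit of the CLs method, using the CDF inputs quoted above (one observed event over an expected background of 1.1); the actual analyses are more sophisticated, and the efficiency and luminosity normalisation that converts an event count into a branching fraction is omitted here:

```python
import math

# Sketch of a 95% CL upper limit on a Poisson signal count, in the spirit
# of the CLs method: find s_up with P(n <= n_obs | s+b) / P(n <= n_obs | b) = 0.05.
# Inputs are the CDF numbers quoted above; normalisation to a branching
# fraction (efficiency, luminosity) is deliberately omitted.
n_obs, b = 1, 1.1

def pois_cdf(n, mu):
    """P(k <= n) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

lo, hi = 0.0, 50.0
for _ in range(60):  # bisection on the signal strength
    s = 0.5 * (lo + hi)
    if pois_cdf(n_obs, s + b) / pois_cdf(n_obs, b) > 0.05:
        lo = s  # still allowed at 95% CL, push higher
    else:
        hi = s
print(f"s_up ~ {lo:.1f} signal events")  # roughly 4 events for these inputs
```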
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.45\textwidth]{B07F01a.eps}
\hspace*{3mm}
\includegraphics*[width=0.40\textwidth]{bs-psiphi-life-apr04.eps}
\caption{%
$B^0_s \rightarrow J/\psi \phi$ decays reconstructed
(left, D0) and the decay time distribution (right, CDF).
}
\label{fig:bs-psi-phi}
\end{center}
\end{figure}
\subsection{Studies of $B^0_s \rightarrow J/\psi \, \phi$ decays}
The decay mode $B^0_s \rightarrow J/\psi \, \phi$ is
attractive experimentally because
it provides distinctive signatures.
The reconstruction can be done with relative ease.
Figure~\ref{fig:bs-psi-phi}~(left) shows the signal from the
D0 experiment in 220~pb$^{-1}$ of data.
The lifetime of the $B^0_s$ meson is measured to be~\cite{d0-bslife}
\[
\tau(B^0_s) =
1.473 \, ^{ + \, 0.052 } _ {-\, 0.050 } \pm 0.023 \ {\rm ps} .
\]
CDF reconstructs a similar signal (not shown), whose decay time
distribution is shown in Figure~\ref{fig:bs-psi-phi}~(right). CDF extracts~\cite{cdf-bs-mass-life}
\begin{eqnarray*}
m(B^0_s) & = & 5366.01 \pm 0.73 \pm 0.33 \ {\rm MeV}/ c^2 \\
\tau(B^0_s) & = & 1.369 \pm 0.100 \pm 0.010 \ {\rm ps}
\end{eqnarray*}
using a data sample of 240~pb$^{-1}$.
Theory predicts that the lifetime of the $B^0_s$ meson
should be the same as that of the $B^0_d$ meson within
1\%.
However, a sizable width difference,
$\Delta \Gamma_s / \Gamma_s$ of order 10\%~\cite{lenz},
can exist
between the two mass eigenstates of the $B^0_s \bar B^0_s$ system.
The decay mode under discussion has been measured to be
dominated by a CP-even state,
which should correspond roughly to
the shorter lifetime mass eigenstate
of the two~\cite{buchalla}.
Therefore, the lifetime measured with this mode can be shorter if
it is compared to other measurements of the $B^0_s$ meson lifetime,
which come mostly from flavor-specific final states,
or if it is compared to the $B^0_d$ meson lifetime.
As for the CP content of the decay mode,
CDF has measured the polarizations in the decay as well as in the
$B^0 \rightarrow J/\psi K^{*0}$ mode.
The fraction of the transverse helicity state (CP odd)
is measured to be~\cite{dgog}
\begin{eqnarray*}
\Gamma_\perp / \Gamma & = & 0.183 \pm 0.051 \pm 0.054
\ \ \ ( B^0_d \rightarrow J/\psi \, K^{*0} ) \\
& = & 0.232 \pm 0.100 \pm 0.013 \ \ \ ( B^0_s \rightarrow J/\psi \, \phi ) .
\end{eqnarray*}
After the Conference, CDF released~\cite{dgog} a result of
an attempt to measure $\Delta \Gamma_s$,
which gives a value of
\[
\Delta \Gamma_s / \Gamma_s = 0.65 \, ^{+0.25} _ {-0.33} \pm 0.01 ,
\]
though it is not significant yet.
A sizable $\Delta \Gamma$ is important also because it may
allow CP studies of $B^0_s$ meson decays without the necessity
of flavor tagging.
In the future, this decay mode will be used to search for
mixing-induced CP violation.
The mode is a $B_s^0$ equivalent
of the $B^0_d \rightarrow J/\psi K^0_S$ mode, except that the
$B^0_s$ mode is not a pure CP eigenstate.
If the phase of particle-antiparticle oscillations
is non-zero, it can lead to CP asymmetry in time-dependent decay rates
modulated with the oscillation frequency $\Delta m$.
In the standard model,
$B^0_s \bar B^0_s$ mixing receives
very little complex phase, which is given by
arg($V_{ts}$). Therefore, if CP violation is found to be sizable,
it will be an unambiguous signal of new physics.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.40\textwidth]{CDF-bplus-d0pi-245pb.eps}
\hspace*{3mm}
\includegraphics*[width=0.40\textwidth]{CDF-b0-dplus-pi-245pb.eps}
\caption{%
Example of fully reconstructed $B$ meson signals
among CDF data triggered with SVT.
Left: $B^- \rightarrow D^0 \pi^- \rightarrow (K^- \pi^+ ) \pi^-$.
Right: $\bar B^0 \rightarrow D^+ \pi^- \rightarrow ( K^- \pi^+ \pi^+ ) \pi^-$.
}
\label{fig:svt-b-sig}
\end{center}
\end{figure}
\begin{figure}[thb]
\begin{center}
\includegraphics*[width=0.40\textwidth]{bs-svt-265pb.eps}
\hspace*{5mm}
\includegraphics*[width=0.36\textwidth]{D0-blds-250.eps}
\caption{%
Example of reconstructed $B^0_s$ meson signals.
Left: $\bar B^0_s \rightarrow D^+_s \pi^- \rightarrow (\phi \pi^+ ) \pi^-$ (CDF).
Right: $\bar B^0_s \rightarrow \mu^- \bar \nu D^+_s X $ (D0).
}
\label{fig:bs-sig}
\end{center}
\end{figure}
\subsection{$B^0_s \bar B^0_s$ oscillations}
To study the consistency of the unitarity triangle
of the KM matrix, precise determination of the lengths of its sides is crucial.
In particular the determination of $ | V_{td} |$
from the oscillation frequency $\Delta m_d$ of $B^0_d \bar B^0_d$ mixing
currently
suffers from a relatively large theoretical uncertainty,
typically of order 20\%.
Unquenched lattice calculations of $B$ meson decay constants
have become available in recent years,
and they should help reduce the uncertainty.
From the experimental side,
the theory uncertainty can in principle be reduced
once we observe the $B^0_s \bar B^0_s$ oscillations
and use the ratio $\Delta m_s / \Delta m_d$
to extract $| V_{ts} / V_{td} |$.
For example, Ref.~\cite{lattice} calculates the ratio $\xi$,
of the $B^0_s$ and $B^0_d$ decay constants times the bag parameters,
to be
$\xi \equiv ( f_{B_s} \sqrt{ B_{B_s} } ) /
( f_{B_d} \sqrt{ B_{B_d} } )
= 1.14 \, \pm \, 0.03 \, ^{ + \, 0.13} _{-\, 0.02}$,
which involves a smaller uncertainty than
when trying to extract $V_{td}$ using only $\Delta m_d$.
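As an illustration of this strategy, using the standard relation $\Delta m_s/\Delta m_d = \xi^2\,(m_{B_s}/m_{B_d})\,|V_{ts}/V_{td}|^2$ and a purely hypothetical value of $\Delta m_s$ (the oscillation frequency had not yet been measured at the time):

```python
import math

# Hypothetical illustration only: dms below is an assumed value, not a
# measurement; the other inputs are approximate.
dmd  = 0.51    # ps^-1, measured B0d oscillation frequency
dms  = 18.0    # ps^-1, hypothetical B0s oscillation frequency (assumed)
m_Bs = 5.3696  # GeV/c^2
m_Bd = 5.2794  # GeV/c^2
xi   = 1.14    # central value of the lattice ratio quoted above

# dms/dmd = xi^2 * (m_Bs/m_Bd) * |Vts/Vtd|^2  =>  solve for |Vts/Vtd|
vts_over_vtd = math.sqrt((dms / dmd) * (m_Bd / m_Bs)) / xi
print(f"|Vts/Vtd| ~ {vts_over_vtd:.1f}")  # ~5.2 for these inputs
```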
However, the expected very high value of $\Delta m_s$
poses a challenge
to experiments.
In order to measure particle-antiparticle oscillations
with a very high frequency, it is necessary, or at least desirable,
to have precise vertex determinations and
a good proper time resolution.
The former can be achieved by
having a good detector very close to the $B$ meson
production point,
and the latter requires measuring the $B$ meson momentum on an event-by-event
basis, which can be achieved if the decays are fully reconstructed.
Such reconstruction of all-hadronic final states has become possible
in CDF Run-II, by using silicon detector information
at the second level of the trigger (SVT triggers).
Figure~\ref{fig:svt-b-sig} shows an example of such signals.
They will be used as calibration modes, for understanding
flavor tagging and proper time resolution.
The $B^0_s$ meson signals are also reconstructed by CDF with the SVT
trigger. Figure~\ref{fig:bs-sig} shows the signal, of about 340 events in
265~pb$^{-1}$ of data.
The study can in principle be made using
the partially reconstructed semileptonic decay
$\bar B^0_s \rightarrow \ell^- \bar \nu D^+_s X$.
Figure~\ref{fig:bs-sig} (right) shows such a signal from the D0 experiment,
which takes advantage of a large acceptance of the muon detector.
Attempts will be made using these signals
to look for the oscillations
and possibly set a lower limit on $\Delta m_s$ toward winter 2005.
\begin{figure}[thb]
\begin{center}
\includegraphics*[width=0.40\textwidth]{b-hh-180pb.eps}
\hspace*{5mm}
\includegraphics*[width=0.40\textwidth]{mpipi-mc.eps}
\caption{%
Left:
Invariant mass spectrum of two charged particles near the $B$ meson mass by CDF.
Pion masses are assigned to each particle.
Right: Monte Carlo simulation of the mass spectra for four decay modes considered.
}
\label{fig:b-hh}
\end{center}
\end{figure}
\subsection{
Studies of $B^0_{d,s} \rightarrow P P $ decays }
The decay mode $B^0_d \rightarrow \pi^+ \pi^-$,
if it proceeds only through a $b \rightarrow u$ ``tree" transition,
receives the phase of $V_{ub}$ in the decay (which is angle $\gamma$)
and the phase of $V_{td}$ in $B^0_d \bar B^0_d $ mixing (which is $\beta$).
Therefore, its CP asymmetry should allow a determination
of $\sin 2 (\beta + \gamma)$, which should be identical
to $\sin 2 \alpha$ if the triangle closes ($ \alpha + \beta + \gamma = \pi$).
However, the existence of the $b \rightarrow s $ ``penguin" amplitude
complicates the matter, making it less straightforward
to extract
angle $\alpha$ from experimentally observed CP asymmetry.
Various strategies have been proposed to solve this, but they are not necessarily
easy experimentally.
One proposed by Fleischer~\cite{fleischer} is unique in that
it measures
CP asymmetries
in the $B^0_s \rightarrow K^+ K^-$ mode in conjunction with
the $B^0_d \rightarrow \pi^+ \pi^-$ mode,
and extracts angle $\gamma$
as well as tree and penguin decay amplitudes,
taking advantage of the penguin pollution.
It is by no means easy experimentally, because it involves
extraction of CP asymmetries modulated with $\Delta m_s$.
The first step toward those measurements will be
to see the signals of
two-body decays of the $B$ mesons.
CDF uses a data sample triggered using SVT.
Figure~\ref{fig:b-hh} shows the invariant mass
spectrum of two-track pairs, where the pion mass is assumed
for both charged particles.
A clear signal of 900 events
is observed near the $B$ meson mass.
The peak is actually expected to be a mixture of
four decay modes,
$B^0_d \rightarrow K^+ \pi^-$ and $ \rightarrow \pi^+ \pi^-$,
and
$B^0_s \rightarrow K^+ K^-$ and $ \rightarrow K^- \pi^+$.
A Monte Carlo calculation of the mass spectra of these decay modes
is shown in Figure~\ref{fig:b-hh}~(right).
Specific ionization measurements ($dE/dx$) in the main tracking chamber
are used to separate kaons and pions statistically
and, together with the mass distributions,
to estimate the mixture of the four decay modes.
The approximate yields are
509 for $B^0_d \rightarrow K^+ \pi^-$,
134 for $B^0_d \rightarrow \pi^+ \pi^-$,
and
232 for $B^0_s \rightarrow K^+ K^-$.
This represents the first observation of the decay $B^0_s \rightarrow K^+ K^-$.
The ratio of the production fractions times the branching fractions is
measured to be~\cite{b-hh}
\[
\frac { f(\bar b \rightarrow B^0_s ) \, \cdot
{\cal B} ( B^0_s \rightarrow K^+ K^- ) }
{ f(\bar b \rightarrow B^0_d ) \, \cdot \,
{\cal B} ( B^0_d \rightarrow K^+ \pi^- ) }
=
0.48 \pm 0.12 \pm 0.07 .
\]
Also the ratio of the branching fractions for the $B^0_d$ meson is extracted to be
\[
\frac { {\cal B} ( B^0_d \rightarrow \pi^+ \pi^- ) }
{ {\cal B} ( B^0_d \rightarrow K^+ \pi^- ) } = 0.26 \pm 0.11 \pm 0.06 ,
\]
as well as direct CP asymmetry in the $B^0_d \rightarrow K^+ \pi^-$ decay
\[
{\cal A}_{\rm CP}( B^0 \rightarrow K^+ \pi^- )
\equiv \frac { \Gamma ( \bar B^0 \rightarrow K^- \pi^+ ) - \Gamma ( B^0 \rightarrow K^+ \pi^- ) }
{ \Gamma ( \bar B^0 \rightarrow K^- \pi^+ ) + \Gamma ( B^0 \rightarrow K^+ \pi^- ) }
= -0.04 \pm 0.08 \pm 0.01 .
\]
The latter is becoming competitive in precision with the Belle and BaBar results.
In the longer term we hope to measure angle $\gamma$ with the Fleischer method.
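A back-of-envelope cross-check of the branching-fraction ratio quoted above, assuming comparable selection efficiencies for the two modes (which the full analysis corrects for):

```python
# Rough consistency check using the raw yields quoted in the text;
# the real measurement corrects for relative efficiencies.
n_kpi, n_pipi = 509, 134
ratio = n_pipi / n_kpi
print(round(ratio, 2))  # 0.26, matching the measured central value
```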
\begin{figure}[thb]
\begin{center}
\includegraphics*[width=0.40\textwidth]{phik_bmass.eps}
\hspace*{5mm}
\includegraphics*[width=0.40\textwidth]{cdf-bs-phiphi-obs.eps}
\caption{%
Reconstructed CDF signals of $B^+ \rightarrow \phi K^+$ (left)
and $B^0_s \rightarrow \phi \phi$ (right).
}
\label{fig:phik}
\end{center}
\end{figure}
\subsection{Direct check with $B \rightarrow \phi K$ decays}
CDF reconstructs the $B^+ \rightarrow \phi K^+$ decays
using the SVT trigger.
Figure~\ref{fig:phik} shows the signal of about 50 events.
The ratio of branching fractions is measured to be
\[
\frac { {\cal B} ( B^+ \rightarrow \phi K^+ ) }
{ {\cal B} ( B^+ \rightarrow J/ \psi K^+ ) }
= ( 0.72 \pm 0.13 \pm 0.07 ) \times 10^{-2} .
\]
The decay proceeds through the same quark level transition
$ b \rightarrow s \bar s s $ as the $B^0 \rightarrow \phi K^0_S$ mode.
Therefore, if a new physics phase exists, it should show up in this mode
as well. The direct CP asymmetry is measured to be
\[
{\cal A}_{\rm CP} =
-0.07 \pm 0.17 \, ^{ +0.06} _ {-0.05} ,
\]
which does not seem to show a large deviation from zero,
and
again achieves a precision comparable to $B$ factory measurements.
Another $b \rightarrow s \bar s s $ transition mode has been seen at
CDF. It is the $B^0_s \rightarrow \phi \phi$ decay mode, whose
signal is shown in Figure~\ref{fig:phik} as well.
\section{Conclusion}
The Tevatron Run-II program has been in progress since 2001.
Both CDF and D0 experiments have accumulated roughly five times more data than in Run I,
with much improved detectors.
Many physics results have been produced and more are expected in the near future.
They could be summarized as follows.
\begin{itemize}
\item Electroweak physics
Production of weak vector bosons has been measured
at a new center of mass energy, and has provided
opportunities to study electroweak phenomena very precisely.
Pairs of gauge bosons are now being produced in reasonably high statistics,
and interactions among gauge bosons will be studied in detail.
Measurement of the $W$ boson mass is
a high priority in the near term future.
\item Top quark physics
Top quark pair production has been confirmed in Run II data.
New precision in measurements of the production cross section and mass is expected, and in some cases has already been
achieved.
Many new measurements will be performed, taking advantage of the expected 20-fold increase
in the data sample and the 30\% increase in the production cross section.
Combined with $W$ boson mass measurements, top quark mass measurements
will provide indirect information on the Higgs boson mass.
\item Bottom quark physics
CDF has vastly improved its $B$ physics capability by introducing a displaced track trigger,
enabling it to collect $B$ decays into final states consisting only of hadrons.
D0 is also commissioning a trigger based on the same philosophy. \\
CDF and D0 could provide useful and unique measurements of $b \rightarrow s$
transitions, including $B^0_s \bar B^0_s$ oscillations,
searches for $B^0_s \rightarrow \mu^+ \mu^-$ decays
and non-zero width difference
$\Delta \Gamma_s$,
and
studies of decays $B^0_s \rightarrow J/\psi \phi$, $B^0_s \rightarrow K^+ K^-$,
and $B \rightarrow \phi K$.
\end{itemize}
In the near future, each of the CDF and D0 experiments is
expected to collect about
2~fb$^{-1}$ of integrated luminosity.
We hope to see some exciting measurements come out of the data.
\section{Acknowledgements}
I would like to thank the organizers of the Conference,
in particular Professor Kaoru Hagiwara, Dr.~Nobuchika Okada
and Dr.~Junichi Kanzaki
for their help,
and
also for
patiently waiting for me to write this Proceedings contribution.
Many members of the CDF and D0 Collaborations
helped me during the preparation of my talk and this manuscript.
They include Evelyn Thomson,
Marco Verzocchi,
Arnulf Quadt,
Aurelio Juste,
and Ralf Bernhard.
\section{What else can be done?}
\textcolor{cyan}{1. For obtaining the winding number, if we replace the perturbation in the topological response with a phase factor, does it induce some sort of twisted boundary condition for the 1D boundary system? If so, can we obtain the winding number by changing this phase factor?}
\textcolor{cyan}{2. Can we find some other quantities to show that there is a tendency of moving toward one direction along edges?}
\textcolor{cyan}{3. Entanglement entropy between the top row, other boundaries, and the bulk for the trapezoidal case? Search the following key words: multipartite entanglement measure; mutual information. Ask quantum information people?}
\end{comment}
\clearpage
\onecolumngrid
\begin{center}
\textbf{\large Supplementary Materials for ``Non-Hermitian boundary spectral winding''}\end{center}
\setcounter{equation}{0}
\setcounter{figure}{0}
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\cite}[1]{\citep{#1}}
\section{Hexagonal geometries of the non-Hermitian breathing Kagome model}
In the main text, we have taken a non-Hermitian breathing Kagome model as an example, where nontrivial boundary spectral winding emerges with triangle and some trapezoidal geometries.
More generally, boundary spectral winding can be expected also in other geometries or other 2D non-Hermitian models, as long as chiral non-reciprocal pumping dominates its 1D boundary. In this section, we shall demonstrate extra examples of this model with hexagonal geometries, which can be obtained from
the triangle lattice by removing a few rows of lattice sites from each corner. Specifically, we start from a triangle lattice with $L$ unit cells along its bottom row, then remove the top $Q$ rows of unit cells, and the top lattice sites of the unit cells in the $(Q+1)$-th row at each corner.
\begin{figure}[ht]
\includegraphics[width=1\linewidth]{SM_Hex_v5.pdf}
\caption{(a1) to (a3) Energy spectra of hexagonal lattices with $L=30$, and $Q=1,3,5$ respectively, colors indicate the FD of each eigenstate. (b1) to (b3) Summed distribution of bulk states in (a1) to (a3) respectively. (c1) to (c3) Summed distribution of edge states in (a1) to (a3) respectively. (d1) to (d3) Element $|G_{mn}|$ of the Green's function for (a1) to (a3). For each case, reference energy is chosen to be $E_r=-1$, which is enclosed by the boundary loop spectrum in (a1) and (a2).
$n$ is chosen as the top-left corner site for each hexagonal lattice, and $m=(x,y)$ ranges across all lattice sites.
Other parameters are $t_a=0.25,t_b=1$, and $\alpha=0.5$.
}
\label{sm_hex}
\end{figure}
Removing lattice sites on each corner divides 1D boundary of the triangle geometry into three disconnected segments.
Similar to the trapezoidal geometry, the top geometric edge at each corner of a hexagonal lattice acts as a part of its physical bulk,
and bulk states of the system show vanishing distribution only along the other three edges [Figs. \ref{sm_hex}(b1)-(b3)].
Edge states in hexagonal lattices are also seen to exhibit a clear accumulation at one side of each of these edges, i.e. three of the six corners of a hexagonal lattice, [Figs. \ref{sm_hex}(c1)-(c3)].
When $Q$ increases, areas with nontrivial boundary spectral winding in the complex energy plane shrink and eventually become some lines [Fig. \ref{sm_hex}(a3)].
Meanwhile,
localization of these corner states also become stronger, which can be seen from larger values of FD [brighter color from Fig. \ref{sm_hex}(c1) to (c3)].
As discussed in the main text, vanishing boundary spectral winding and strong corner localization in Fig. \ref{sm_hex}(a3) and (c3) indicate the emergence of an OBC type of HSTE.
In the main text, we have argued that in a trapezoidal lattice, the two ends of its physical edge are (weakly) connected when only a few rows (e.g., $M=5$) of lattice sites are removed from the top corner of its parent triangular lattice.
This intuitive picture can be seen more clearly in hexagonal lattices, whose physical edge breaks into three segments.
To see this, we consider the amplitudes of the elements $G_{mn}$ of the Green's function $G=1/(E_r-H)$, with $E_r$ a reference energy chosen to be enclosed by the boundary spectrum. As discussed in the main text, this quantity describes the strength of the response field at site $m$ to a driving field at site $n$~\cite{Wanjura2020,xue2021simple}.
Here we choose $n$ to be the top-left corner of the hexagonal lattice, and illustrate $|G_{mn}|$ for all sites $m=(x,y)$ in the lattice.
As seen in Figs. \ref{sm_hex}(d1) and (d2), in the presence of nontrivial boundary winding, $|G_{mn}|$ presents a peak at the top-right corner of the lattice, suggesting that an input signal travels through all edges and is amplified maximally at this corner site.
On the other hand, when the nontrivial boundary winding vanishes, $|G_{mn}|$ reaches its maximal value at the bottom-left corner, i.e. the other end of the left physical edge, suggesting that this edge is disconnected from the remaining edges and behaves as an OBC non-Hermitian chain [Fig. \ref{sm_hex}(d3)].
The exponential increase of $|G_{mn}|$ along the edge also indicates a directional (chiral) signal amplification, as in an actual 1D system.
The amplification ratio ${\rm Max}[|G_{mn}|]$ is seen to be much smaller in Fig. \ref{sm_hex}(d1),
consistent with the understanding that in this case the edges are closer to a 1D PBC non-Hermitian chain, which does not support directional amplification.
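The directional character of this amplification can be made concrete in a minimal setting. The sketch below replaces the Kagome edge by a single 1D Hatano-Nelson chain under OBC (an illustrative stand-in; the hoppings and chain length are assumptions of this sketch, not parameters of the model above) and shows that $|G_{mn}|$ is exponentially large in one direction only:

```python
import numpy as np

# Hatano-Nelson chain under OBC: a stand-in for one disconnected physical edge.
# J_plus (rightward) and J_minus (leftward) are illustrative asymmetric hoppings.
N, J_plus, J_minus = 30, 1.5, 0.5
H = np.zeros((N, N))
for i in range(N - 1):
    H[i + 1, i] = J_plus   # hop i -> i+1 (amplifying direction)
    H[i, i + 1] = J_minus  # hop i+1 -> i (attenuating direction)

E_r = 0.0  # reference energy enclosed by the PBC spectral loop of this chain
G = np.linalg.inv(E_r * np.eye(N) - H)

# Response at the last site to a drive at the first site is exponentially
# amplified; the reverse orientation is exponentially suppressed.
print(abs(G[N - 1, 0]), abs(G[0, N - 1]))
```

The two printed amplitudes differ by many orders of magnitude, the 1D analogue of the strongly peaked $|G_{mn}|$ seen along the physical edges here.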
\section{Topological boundary response for different geometries}
In the main text, we consider a topological response to detect the boundary spectral winding of our systems,
namely the response at a boundary lattice site to a local driving field on a neighboring site, with a local perturbation introduced to the hopping connecting these two sites.
A one-to-one correspondence is established between the boundary spectral winding and a response quantity defined as
$$\nu_{mn}(\beta)=\partial \ln |G_{mn}(\beta)|/\partial \beta,$$
with $\beta$ a parameter controlling the perturbation to a boundary hopping ($t'\rightarrow e^{-\beta}t'$),
$G_{mn}$ an element of the Green's function $G(\beta)=1/[E_r-H(\beta)]$, $E_r$ a reference energy for defining the boundary spectral winding, and $(m,n)$ labeling the response and driven lattice sites respectively.
Intuitively, these two sites can be chosen arbitrarily along the edges, except for those in the top edge of a trapezoidal lattice, which belongs to the physical bulk of the system.
In Fig. 4 of the main text, the response and driven lattice sites are chosen from the middle unit cell at the bottom of the lattice.
In Fig. \ref{sm_qr5}, we demonstrate the same results as in Fig. 4(c), but with the response and driven lattice sites chosen from the left and top edges respectively.
It is seen that the response quantity $\nu_{mn}$ still manifests the boundary spectral winding accurately in the former case, but not in the latter.
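The quantized value $\nu_{mn}(0)\simeq 1$ inside the winding region can be reproduced in a minimal setting. The sketch below evaluates $\nu_{mn}(0)$ by a central finite difference for a single Hatano-Nelson ring with one weakened bond (an illustrative stand-in for the boundary loop, not the Kagome model itself; all parameter values are assumptions of this sketch):

```python
import numpy as np

def hn_ring(N, J_plus, J_minus, beta):
    """Hatano-Nelson ring with one bond weakened by exp(-beta)."""
    H = np.zeros((N, N))
    for i in range(N - 1):
        H[i + 1, i] = J_plus
        H[i, i + 1] = J_minus
    H[0, N - 1] = J_plus * np.exp(-beta)   # perturbed wrap-around bond
    H[N - 1, 0] = J_minus * np.exp(-beta)
    return H

def nu(E_r, N=40, J_plus=1.5, J_minus=0.5, d=1e-4):
    """nu_mn(0) = d ln|G_mn(beta)|/d beta at beta = 0 (central difference),
    with (m, n) the two sites joined by the perturbed bond."""
    def lnG(beta):
        G = np.linalg.inv(E_r * np.eye(N) - hn_ring(N, J_plus, J_minus, beta))
        return np.log(abs(G[N - 1, 0]))
    return (lnG(d) - lnG(-d)) / (2 * d)

print(nu(0.0))  # E_r inside the spectral loop: nu close to the winding number 1
print(nu(3.0))  # E_r outside the loop: non-positive response
```

For $E_r$ enclosed by the loop the slope is quantized near the spectral winding number, while outside the loop it is non-positive, mirroring the behavior discussed here.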
\begin{figure}[ht]
\includegraphics[width=1.0\linewidth]{SM_QR5.pdf}
\caption{(a) Element $G_{mn}(\beta)$ of the Green's function for the same trapezoidal lattice as in Fig. 3(d) in the main text,
for different reference energies $E_r$ enclosed by the loop-like boundary spectrum (blue dots), and within the gap (red dots) respectively.
The lattice contains $25$ rows of unit cells and $30$ unit cells in the last row, with the top lattice site removed in each unit cell in the top row.
$m$ and $n$ are chosen as the top and bottom-left sites of the first unit cell in the 13th last row, i.e. two edge sites in the middle unit cell of the left edge of the trapezoidal lattice.
(b) Topological response $\nu_{mn}(\beta)$ for the same system and parameters as in (a).
(c) Topological response $\nu_{mn}(\beta)$ at $\beta=0$ for the same system with different $E_r$.
(d) to (f) The same results as in (a) to (c), but with $m$ and $n$ chosen as the bottom-right and -left sites of the third unit cell in the top row of the lattices, i.e. two edge sites in the middle unit cell of the top edge of the trapezoidal lattice.
In panels (d) and (e), both green and blue dots correspond to $E_r$ enclosed by the loop-like boundary spectrum.
Other parameters are $L=30,M=5,t_a=0.25,t_b=1.0,\alpha=0.5$.
}
\label{sm_qr5}
\end{figure}
\section{Boundary spectral winding in non-Hermitian Benalcazar-Bernevig-Hughes model}
\begin{figure}
\includegraphics[width=1\linewidth]{nH_BBH_v3.pdf}
\caption{(a) Sketch of the non-Hermitian BBH model. (b) Energy spectra of non-Hermitian BBH model with $L_x=L_y=20$, colors indicate the FD of each eigenstate. (c) and (d) Summed distribution of bulk states and edge states respectively. Other parameters are $t_x=t_y=0.25, t'=1.0, \alpha_x=\alpha_y=0.5$.
(e) Element $G_{mn}(\beta)$ of the Green's function of the non-Hermitian BBH model, with $m$ and $n$ chosen as the bottom-left and -right sites of the 11th unit cell in the bottom row of the lattices, and a perturbed hopping parameter $t'\rightarrow e^{-\beta}t'$ between these two sites.
Reference energies $E_r$ are chosen to be enclosed by the loop-like boundary spectrum (blue dots), and within the gap (red dots) respectively.
(f) Topological response $\nu_{mn}(\beta)$ for the same system and parameters as in (e).
(g) Topological response $\nu_{mn}(\beta)$ at $\beta=0$.
}
\label{sm_bbh}
\end{figure}
In this section we consider another example, namely the non-Hermitian Benalcazar-Bernevig-Hughes (BBH) model \cite{benalcazar2017HOTI,Benalcazar2017HOTI2} shown in Fig. \ref{sm_bbh}(a), which also supports nontrivial boundary spectral winding in certain parameter regimes.
Its bulk Hamiltonian reads
\begin{equation}
H(\mathbf{k})=\begin{pmatrix}
0 & t'+t_x^- e^{-ik_x} & -t'-t_y^+ e^{ik_y} & 0\\
t'+t_x^+ e^{ik_x} & 0 & 0& t'+t_y^- e^{ik_y}\\
-t'-t_y^- e^{-ik_y} & 0 & 0 & t'+t_x^+ e^{-ik_x}\\
0 & t'+t_y^+ e^{-ik_y} & t'+t_x^- e^{ik_x} & 0\\
\end{pmatrix},
\end{equation}
where $t_x^{\pm}=t_x e^{\pm\alpha_x}$ and $t_y^{\pm}=t_y e^{\pm\alpha_y}$ represent asymmetric intercell hopping parameters along $\hat{x}$ and $\hat{y}$ directions respectively, and $t'$ is the amplitude of intracell Hermitian hopping. In the Hermitian scenario with $\alpha_x=\alpha_y=0$, the BBH model supports both 1st-order edge states and 2nd-order corner states.
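As a cross-check of the matrix above, $H(\mathbf{k})$ can be constructed numerically and its basic symmetries verified. The sketch below (using the parameter values of Fig.~\ref{sm_bbh}) checks the chiral symmetry $\Gamma H\Gamma=-H$ with $\Gamma={\rm diag}(1,-1,-1,1)$, which follows from the block-off-diagonal structure of $H(\mathbf{k})$, and Hermiticity at $\alpha_x=\alpha_y=0$:

```python
import numpy as np

def H_bloch(kx, ky, tx=0.25, ty=0.25, tp=1.0, ax=0.5, ay=0.5):
    """Bloch Hamiltonian of the non-Hermitian BBH model quoted above."""
    txp, txm = tx * np.exp(ax), tx * np.exp(-ax)   # t_x^+ , t_x^-
    typ, tym = ty * np.exp(ay), ty * np.exp(-ay)   # t_y^+ , t_y^-
    ex, exm = np.exp(1j * kx), np.exp(-1j * kx)
    ey, eym = np.exp(1j * ky), np.exp(-1j * ky)
    return np.array([
        [0,               tp + txm * exm, -tp - typ * ey, 0             ],
        [tp + txp * ex,   0,              0,              tp + tym * ey ],
        [-tp - tym * eym, 0,              0,              tp + txp * exm],
        [0,               tp + typ * eym, tp + txm * ex,  0             ],
    ], dtype=complex)

k = (0.7, -1.3)  # an arbitrary momentum
H = H_bloch(*k)

# chiral symmetry Gamma H Gamma = -H  =>  eigenvalues come in (E, -E) pairs
Gamma = np.diag([1, -1, -1, 1])
print(np.linalg.norm(Gamma @ H @ Gamma + H))

# at alpha_x = alpha_y = 0 the model reduces to the Hermitian BBH model
H0 = H_bloch(*k, ax=0.0, ay=0.0)
print(np.linalg.norm(H0 - H0.conj().T))
```

Note that the chiral symmetry survives the non-reciprocity, so the complex eigenvalues still come in $(E,-E)$ pairs.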
The non-Hermitian BBH model we construct can be viewed as a combination of four sets of non-Hermitian SSH chains along two different directions ($\hat{x}, \hat{y}$). In each direction, the two SSH chains are chosen to have opposite non-reciprocity, leading to destructive interference of non-reciprocity in the bulk. The FD is close to 2 for eigenstates in the four bulk bands (orange color) for the system with a rectangular geometry [Fig. \ref{sm_bbh}(b)], and the summed bulk distribution $\rho_\text{bulk}(\mathbf{r})$ is distributed uniformly in the 2D bulk [Fig. \ref{sm_bbh}(c)], indicating the absence of NHSE for bulk states.
Similar to the non-Hermitian breathing Kagome lattice with triangular geometry in the main text, the non-Hermitian BBH model we construct also has nontrivial boundary spectral winding [Fig. \ref{sm_bbh}(b)],
as its four edges support non-reciprocity along the same chiral direction. Numerically, we observe
a weak accumulation of eigenstates toward the corners along the edges of the 2D system [Fig. \ref{sm_bbh}(d)].
To detect the boundary spectral winding, we consider a topological response to an external local driving field, as discussed in the main text.
Specifically, a perturbation is introduced to the amplitude of one intra-cell hopping along the system's 1D boundary,
\begin{equation*}
t'\rightarrow t' e^{-\beta},
\end{equation*}
and we calculate the Green's function as $G(\beta)=1/[E_r-H(\beta)]$.
The topological response associated with the boundary spectral winding can be extracted from an element $G_{mn}$ of the Green's function, with $m$ and $n$ labeling the two lattice sites connected by the perturbed hopping.
As seen in Fig. \ref{sm_bbh}(e), $G_{mn}$ increases with $\beta$ and eventually saturates at a large constant value for a reference energy enclosed by the left loop-like boundary spectrum ($E_r=-1$). In contrast, $G_{mn}$ decreases to a small value for a reference energy outside the boundary spectrum ($E_r=-1+0.5i$). The topological response quantity $\nu_{mn}(\beta)=\partial \ln\vert G_{mn}(\beta)\vert/\partial\beta$ also exhibits distinct behaviors for different $E_r$, as shown in Fig. \ref{sm_bbh}(f), offering a way to detect the boundary spectral winding. We therefore scan $E_r$ over a region covering the system's full spectrum, and demonstrate the value of $\nu_{mn}(\beta)$ at $\beta=0$ in Fig. \ref{sm_bbh}(g). As expected, the region with nontrivial boundary spectral winding is characterized by $\nu_{mn}(0)\simeq 1$, while other regions generally have non-positive $\nu_{mn}(0)$.
\section{Introduction}
Heavy quark production processes provide powerful insight into our
understanding of Quantum Chromodynamics. The large mass of the heavy quark can
make perturbative calculations reliable, even for total cross sections, by
cutting off infrared singularities and by
setting a large scale at which the strong coupling can be
evaluated and, possibly, found small enough. On the experimental side, the
possibility of
tagging heavy flavoured hadrons by means of microvertex detectors can on the other
hand provide accurate measurements.
All these potentialities must of course be matched by sufficiently accurate
theoretical evaluations of the production cross section. In this talk I shall
describe the state of the art of such calculations for heavy quark
photoproduction. I shall first review the next-to-leading order (NLO) QCD
evaluations recently presented by Frixione, Mangano, Nason and Ridolfi.
These calculations, available for total cross
sections, one-particle and two-particle distributions, are now a consolidated
result and provide a benchmark for future developments.
Large logarithms appear in the NLO fixed order calculations and
potentially make them less reliable in some regimes: $\log(S/m^2)$
and $\log(p_T^2/m^2)$ become large when the center of mass energy $\sqrt{S}$ or
the transverse momentum $p_T$ of the
observed quark is much larger than its mass. I shall describe the resummation of
the $\log(p_T^2/m^2)$ terms, leaving the high energy resummation to Marcello Ciafaloni's
talk\cite{ciafaloni}.
The perturbative fragmentation function
technique used in the resummation of the large $\log(p_T^2/m^2)$ terms has a
non-perturbative extension which can be used to describe the
transition from $c$ quarks to $D^*$ mesons. I shall therefore also discuss
the determination
of these non-perturbative fragmentation functions and their inclusion in the
heavy quark photoproduction calculation, showing a comparison with data from
HERA.
\section{Fixed Order NLO Calculation}
Heavy quarks photoproduction at leading order in the strong coupling $\alpha_s$
looks a very simple process: only the tree level diagram $\gamma g \to Q\bar Q$
contributes at the partonic level, and the final answer for the total cross
section
is simple and well behaved, being finite everywhere.
At a deeper thinking, however, problems seem to arise. For instance, one may
ask himself why not to include initial state heavy quarks, coming from the
hadron and to be scattered by the photon, like $\gamma Q \to Q g$. To include
consistently such a diagram is not an easy task, especially if one wants to
keep the quark massive. Taking it massless, on the other hand, would not only be
a bad approximation but would also produce a divergent total cross section.
\begin{figure}
\begin{center}
\epsfig{file=sig_vs_ecm_96.eps,width=10cm,height=6.cm,clip=}
\fcaption{Total cross section for $c\bar c$
photoproduction\protect\cite{fmnr-rev}.}
\label{cctot}
\end{center}
\end{figure}
A way out of this problem was provided by Collins, Soper and Sterman\cite{css},
who argued that
the following factorization formula holds for heavy
quark hadroproduction total cross sections:
\begin{equation}
\sigma(\sqrt{S},m) = \sum_{ij} \int f_{i/H_1} f_{j/H_2} \hat\sigma(ij\to Q\bar
Q;\sqrt{S},m).
\label{QQfact}
\end{equation}
The sum over partons runs only over $i$ and $j$ being gluons or light quarks,
and the heavy quarks are only generated at the perturbative level by gluon
splitting. There is therefore no need to try to accommodate them in the
colliding hadrons, and the relevant kinematics can be kept exact.
Eq.~(\ref{QQfact}) provides the basis for an exact perturbative calculation of heavy
quark production to NLO. For what concerns photoproduction, such a calculation
was first performed by R.K.~Ellis and P.~Nason, and subsequently confirmed
by J.~Smith and W.L.~van Neerven \cite{en}.
When going to order $\alpha\alpha_s^2$
in photon-hadron collisions, however, a new feature appears. The photon can now
couple directly to massless quarks, for instance in processes like $\gamma q
\to Q\bar Q q$, and in a given region of phase space a collinear singularity
will appear. It can be consistently factored out, but this requires the
introduction of {\sl photon} parton distribution functions (PDF)
which, pretty much like the
hadron ones, describe the probability that before the
interaction the photon splits into hadronic components (light quarks or gluons,
in this case). Such a behaviour is sometimes called {\sl resolved photon} (as
opposed to {\sl direct}). A full NLO calculation for heavy quark
photoproduction will therefore also require a NLO calculation for
hadroproduction\cite{nde}, where one of the PDF's will be the
photon's one.
A factorization scale $\mu_\gamma$, related to the subtraction of the
singularity at the photon vertex, will link the two pieces, and the dependence
of the result on it will only cancel when both are taken into account.
Frixione, Mangano, Nason and Ridolfi\cite{fmnr} (FMNR) have recently presented
Monte Carlo integrators for these two calculations, thereby allowing detailed
comparisons with experimental data. A very extensive collection of such
comparisons is presented in a recent review\cite{fmnr-rev}, from which we
select some plots to be shown here.
\begin{figure}
\begin{center}
\epsfig{file=e687pt2.eps,width=7cm,clip=}
\hspace{.5cm}
\epsfig{file=e691pt2.eps,width=7cm,clip=}
\fcaption{Differential $p_T$ distributions for charm production in fixed target
experiments\protect\cite{fmnr-rev}.}
\label{onept}
\end{center}
\end{figure}
A comparison of experimental results and theoretical predictions for the total
$c\bar c$ photoproduction cross section is shown in fig. \ref{cctot}.
Although large uncertainties are present, the comparison suggests agreement
between theory and experiment. The new HERA data, at large center of mass
energy, can be seen to lie above the prediction with pointlike (i.e. direct)
photons only. This suggests the need for a resolved photon component,
but can by no means determine it precisely.
One-particle transverse momentum ($p_T$) distributions are shown in fig.
\ref{onept}. The pure QCD predictions can be seen to be significantly harder
than the data. However, when corrected with two non-perturbative contributions
they can be matched to the data. These non-perturbative additions are meant to
represent a primordial transverse momentum $k_T$ of the colliding partons,
beyond the one already taken into account by the QCD radiative corrections,
and the effect of the fragmentation of the produced heavy quark into the
observed heavy flavoured hadrons, here described by the so-called Peterson
fragmentation function with $\epsilon = 0.06$.
\begin{figure}
\begin{center}
\epsfig{file=na14phi.eps,width=7cm,clip=}
\hspace{.5cm}
\epsfig{file=e687ptqq.eps,width=7cm,clip=}
\fcaption{Two particles correlations in fixed target
experiments\protect\cite{fmnr-rev}.}
\label{twopart}
\end{center}
\end{figure}
Comparisons between data and theory for two-particle correlations, like the
azimuthal difference $\Delta\phi$ or the relative transverse momentum
$p_T(Q\overline{Q})$ of the produced heavy quark pair, are shown in fig.
\ref{twopart}. Distributions like these are trivial in leading order QCD,
since the $Q$ and the $\overline{Q}$ are produced back-to-back. Hence,
$\Delta\phi = \pi$ and $p_T(Q\overline{Q}) = 0$. NLO corrections (as well
as non-perturbative contributions) can broaden these distributions, and one
could think of performing a direct measurement of $O(\alpha_s^3)$
effects. The plots do however show that non-perturbative contributions play a
key role in allowing a good description of the data. One can, however, still
check that the same inputs allow for a good description of both one- and
two-particle distributions, as seems to be the case here.
The overall result of these comparisons can therefore be summarized as
follows. Total cross sections seem to be well reproduced by the calculation
both in the fixed target and in the HERA regimes, but the huge uncertainties present both
on the experimental and the theoretical side do not allow the study of finer
details like, for instance, the determination of the resolved component at
HERA. Transverse momentum distributions at fixed target
can be reproduced after allowing for heavy quark fragmentation effects and for
a primordial transverse momentum of the incoming partons of the order of 1 GeV.
These same non-perturbative corrections also allow for a description of
two-particle correlations, thereby pointing towards a consistent picture.
\begin{figure}[t]
\begin{center}
\vspace{-1cm}
\epsfig{file=fig1.eps,width=13cm,clip=}
\fcaption{Comparison between fixed order (FMNR) and resummed (PFF)
calculation for charm photoproduction $p_T$ distribution\protect\cite{cg}.
It is worth noticing how the two calculations describe
differently the (unphysical) resolved and direct components, but agree on their
sum (a physical observable).}
\label{ptres}
\end{center}
\end{figure}
\section{Large Transverse Momentum Resummation}
Like any perturbative expansion, the NLO calculation for heavy quark
photoproduction is only reliable and accurate as long as the coefficients of
the coupling constant remain small. Large terms of the kind $\log(p_T^2/m^2)$
do however appear in the cross section, and for growing $p_T$ they will
eventually become large enough to spoil the convergence of the series.
Such terms therefore need to be resummed to all orders to allow for a
sensible phenomenological prediction. This resummation has been performed
along the following lines\cite{cg}.
One observes that in the large-$p_T$ limit ($p_T \gg m$) the
only important mass terms are those appearing in the logs, all the others being
power suppressed. This means that an alternative description of heavy quark
production can be achieved by using {\sl massless} quarks and providing at the
same time perturbative distribution and fragmentation functions also for the
heavy quark, describing the logarithmic mass dependence. The factorization
formula becomes
\begin{equation}
d\sigma(p_T) = \sum_{ijk} \int F_{i/H_1}(\mu,[m]) F_{j/H_2}(\mu,[m])
d\hat\sigma(ij\to k; p_T, \mu) D_k^Q(\mu,m),
\label{fact}
\end{equation}
with parton indices $i$, $j$ and $k$ also running over $Q$, taken massless in
$\hat\sigma$, which is now an $\overline{\mathrm MS}$
subtracted cross section for light parton production.
The dependence on $m$ of the parton distribution functions $F_{i/H}$,
shown among square brackets in eq. (\ref{fact}), is only present
when $i$ or $j$ happens to be the heavy quark $Q$.
The key point is that the large mass of the heavy quark allows for
the evaluation in perturbative QCD (pQCD) of its distribution and fragmentation
functions.
Initial state conditions for {$F_{{ Q}/H}(\mu_0={ m})$}\cite{ct}
and {$D_k^{ Q}(\mu_0\simeq{ m})$}\cite{melenason}
can be calculated in pQCD at NLO level in the $\overline{\rm MS}$ scheme:
\begin{eqnarray}
&&F_{Q/H}(x,\mu_0=m) = 0\\
&&D_Q^Q(x,\mu_0) = \delta(1-x) + {{\alpha_s(\mu_0)
C_F}\over{2\pi}}\left[
{{1+x^2}\over{1-x}}\left(\log{{\mu_0^2}\over{m^2}} -2\log(1-x)
-1\right)\right]_+ \\
&&D_g^Q(x,\mu_0) = {{\alpha_s(\mu_0) T_F}\over{2\pi}}
(x^2 + (1-x)^2)
\log{{\mu_0^2}\over{m^2}} \\
&&D_{q,\bar q,\bar Q}^Q(x,\mu_0) = 0
\end{eqnarray}
The massive logs will hence appear only through these functions, which can
then be
evolved with the Altarelli-Parisi equations up to the large scale set by $\mu
\simeq p_T$. This evolution resums to all orders the large logarithms
previously mentioned.
It is important to mention that, due to the neglect of power-suppressed mass
terms, this approach becomes unreliable when $p_T\simeq m$. In this region
only a case-by-case comparison with the full NLO massive calculation, which is
reliable here and is to be taken as a benchmark, can tell
how accurate the resummed result is.
Phenomenological analyses show that the effect of the resummation becomes
sizeable only at very large $p_T$, say greater than 20 GeV for charm
photoproduction. Fig.~\ref{ptres} shows the effect of such a resummation for a
fixed photon energy in HERA-like kinematics. The resummed calculation can
be seen to match the fixed order one at $p_T \sim m$, where
resummation effects are not expected to be important, and to behave more softly
in the large $p_T$ region. This particular theoretical refinement should
therefore not be phenomenologically overly relevant for present-day HERA
physics, data being only available up to $p_T \simeq 12$ GeV.
\section{On the Inclusion of $c\to D^*$ Fragmentation Effects}
When comparing theory with data, one always faces the problem of describing as
closely as possible what the experiments actually observe. With heavy quark
production the problem lies in the experiments seeing the decay
products of heavy flavoured hadrons rather than the heavy quark itself. This is
due to the strong, non-perturbative binding of the heavy quark
into a hadron prior to its decay. This binding involves
the exchange and radiation of low-momentum (order $\Lambda_{QCD}$) gluons, and
typically degrades the momentum of the hadron with respect to that of the
original quark. Such a degradation can be described with the help of a
non-perturbative fragmentation function (FF) which, lacking the theoretical
tools to calculate it, can be extracted by fitting experimental data.
An often employed parametrization for such a function is the so-called
Peterson\cite{pssz} one, which reads
\begin{equation}
D_{np}(z;\epsilon) \sim {1\over{z\left[1-1/z-\epsilon/(1-z)\right]^2}}.
\label{peterson}
\end{equation}
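The dependence of the hardness of this form on $\epsilon$, which is central to the discussion that follows, can be illustrated numerically. In the sketch below the mean momentum fraction $\langle z\rangle$ is used as a convenient (and merely illustrative) measure of hardness:

```python
import numpy as np

def peterson(z, eps):
    """Unnormalized Peterson et al. fragmentation function."""
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

def mean_z(eps, n=100000):
    # midpoint grid on (0, 1); the endpoint behavior is integrable
    z = (np.arange(n) + 0.5) / n
    w = peterson(z, eps)
    return np.sum(z * w) / np.sum(w)

for eps in (0.06, 0.02):
    print(f"eps = {eps}: <z> = {mean_z(eps):.3f}")
```

A smaller $\epsilon$ gives a larger $\langle z\rangle$, i.e. a harder fragmentation function.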
The value of $\epsilon$ is predicted to scale like $\Lambda_{QCD}^2/m^2$. For
charm to $D^*$ fragmentation a global analysis\cite{chrin} based on leading
order Monte Carlo simulations gives the value $\epsilon \simeq 0.06$. This value
has so far usually been taken as the reference one, and used for instance
together
with the NLO fixed order calculation by FMNR in the plots shown in Section 2.
One should however carefully consider how $\epsilon$ has been extracted from
$e^+e^-$ experimental data. Experiments usually report the energy or momentum
fraction ($x_E$ or $x_p$) of the observed hadron with respect to the beam
energy. On the other hand, the fraction which appears as the argument of the
non-perturbative FF is rather to be taken with respect to the fragmenting quark
momentum, and is usually denoted by $z$ (see for instance \cite{chrin} for a
discussion of this point). These two fractions do not coincide, due to hard
radiation processes which lower the momentum of the quark before it fragments
into the hadron. In order to deconvolute these effects one usually runs a
Monte Carlo simulation of the collision process at hand, including both the
perturbative parton showers and the subsequent hadronization of the partons
into the observed hadrons. The latter can be parametrized in the Monte Carlo by
the Peterson fragmentation function, and the value of
$\epsilon$ which best describes the data can be extracted. Clearly this
procedure leads to a resulting value for $\epsilon$ which depends on the
details of the description of the perturbative part. Indeed, the showering
softens the momentum distribution of the heavy quark, producing an effect
qualitatively similar to that of the non-perturbative FF. On the quantitative
level, the amount of softening (and hence the value of $\epsilon$) required of
the non-perturbative FF to describe the data is related to the amount of
softening already performed at the perturbative level. A leading order and a
next-to-leading order description of the showering can therefore produce different
values for $\epsilon$, which is then not a ``unique'' and ``true'' one,
but rather closely interconnected with the details of the description of the
pQCD part of the problem.
\begin{figure}[t]
\begin{center}
\epsfig{file=argus.ps,
bbllx=30pt,bblly=160pt,bburx=540pt,bbury=660pt,
width=7cm,height=6cm,clip=}
\hspace{.5cm}
\epsfig{file=opal.ps,
bbllx=30pt,bblly=160pt,bburx=540pt,bbury=660pt,
width=7cm,height=6cm,clip=}
\fcaption{Distributions of $D^*$ mesons as measured by the
ARGUS and OPAL experiments, together with the theoretical
curves\protect\cite{cgee} fitted to
the same data with
the $(1-x)^\alpha x^\beta$ (full line) and the Peterson (dashed line)
non-perturbative fragmentation functions.}
\label{argusopal}
\end{center}
\end{figure}
In ref. \cite{cgee} fits to $D^*$ data taken by the ARGUS and OPAL experiments
have been performed with NLO accuracy, using a fragmentation description of
heavy quark
production like the one described in Section 3, complemented with the
inclusion of a non-perturbative component via the ansatz
\begin{equation}
D_k^{D^*}(\mu) = D_k^c(\mu) \otimes D_c^{D^*},
\label{ansatz}
\end{equation}
represented by the convolution of a perturbatively calculable fragmentation
function of the parton $k$ into the heavy quark $c$ and the non-perturbative
form $D_c^{D^*}$ describing the $c\to D^*$ transition. This non-perturbative
form is taken to be scale independent, i.e. all scaling effects are assumed to
be described by the Altarelli-Parisi evolution of the perturbative part
$D_k^c(\mu)$. A similar approach had already been introduced in \cite{melenason}.
Results for these fits are shown in fig. \ref{argusopal}. The value of
$\epsilon$ has consistently been found to be of order 0.02, rather than the
customary 0.06 resulting instead from fits with leading order evolution.
Recalling the previous discussion, this comes as no surprise: next-to-leading
order evolution softens the heavy quark spectrum more, and a harder
non-perturbative fragmentation function is therefore needed to provide a
satisfactory description of the data (see \cite{cgee} for a full discussion).
Similar fits to $e^+e^-$ data have also been performed by Binnewies, Kniehl and
Kramer\cite{bkk} (BKK). These authors do instead find, again with NLO
evolution, a value for $\epsilon$ still
close to the usual 0.06. This discrepancy, beyond irrelevant nomenclature
differences, can be traced back to a difference in the implementation of
the factorization scheme. The scheme used in \cite{cgee}, as originally set up
in \cite{melenason}, is the customary $\overline{\rm MS}$ one. Considering for
instance the
dominant non-singlet component only for simplicity, the $e^+e^-\to QX$ momentum
distribution $d\sigma/dx$ can be schematically written as the convolution
(= product in Mellin
moments space) of a short distance coefficient function, an Altarelli-Parisi
evolution kernel $E(\mu,\mu_0)$, a perturbative initial state condition for
the heavy quark perturbative
fragmentation function (PFF) and a fixed non-perturbative FF,
\begin{equation}
d\sigma(\sqrt{S},m) = \Big(1 + \alpha_s(\mu) c(\sqrt{S},\mu)\Big)
E(\mu,\mu_0) \Big(1+\alpha_s(\mu_0) d(\mu_0,m)\Big) D_{np},
\label{cgmn}
\end{equation}
where the perturbative expansions of the coefficient function and the PFF have
been explicitly shown. The factorization scale $\mu$ is taken of the order of
the (large) collision energy $\sqrt{S}$, and the initial scale $\mu_0$ is
taken of the order of the quark mass $m$.
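The statement that a convolution becomes an ordinary product in Mellin moment space can be checked numerically on toy functions (the choices of $f$ and $g$ below are purely illustrative and bear no relation to the actual coefficient and fragmentation functions):

```python
import numpy as np

def f(x): return x * (1.0 - x)       # toy function, stand-in for one factor
def g(x): return (1.0 - x) ** 2      # toy function, stand-in for the other

def mellin(h, N, n=2000):
    """N-th Mellin moment, int_0^1 x^(N-1) h(x) dx, by the midpoint rule."""
    x = (np.arange(n) + 0.5) / n
    return np.mean(x ** (N - 1) * h(x))

def conv(x, n=2000):
    """(f ⊗ g)(x) = int_x^1 dz/z f(z) g(x/z), by the midpoint rule."""
    z = x + (1.0 - x) * (np.arange(n) + 0.5) / n
    return (1.0 - x) * np.mean(f(z) * g(x / z) / z)

N = 3
xs = (np.arange(400) + 0.5) / 400
lhs = np.mean(xs ** (N - 1) * np.array([conv(x) for x in xs]))  # M[f ⊗ g](N)
rhs = mellin(f, N) * mellin(g, N)                               # M[f](N) M[g](N)
print(lhs, rhs)
```

The two numbers agree to the accuracy of the quadrature, illustrating why the schematic products in eq.~(\ref{cgmn}) are to be read in moment space.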
BKK on the other hand, employing a scheme introduced by Kniehl, Kramer and
Spira\cite{kks} (KKS), write $d\sigma(\sqrt{S},m)$ as
\begin{equation}
d\sigma(\sqrt{S},m) = \Big(1 + \alpha_s(\mu) c(\sqrt{S},\mu) + \alpha_s(\mu)
d(\mu_0,m)\Big) E(\mu,\mu_0) D_{np}.
\label{bkks}
\end{equation}
These two expressions can be seen to differ by $O(\alpha_s^2)$ terms. However, one
of these terms is given by
\begin{equation}
\alpha_s(\mu) - \alpha_s(\mu_0) = -b_0 \alpha_s^2 \log\frac{\mu^2}{\mu_0^2}
\end{equation}
and is, therefore, one of the next-to-leading logarithms (NLL)
$\alpha_s^k \log^{k-1}(\sqrt{S}/m)$ we are resumming. Hence the two calculations
differ by a NLL term and cannot possibly {\sl both} implement
correctly a resummation at the NLL level.
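The size of this NLL term can be illustrated with one-loop running (a toy numerical check; the values of $n_f$, $\alpha_s(\mu_0)$ and the scale ratio below are purely illustrative):

```python
import math

nf = 4
b0 = (33 - 2 * nf) / (12.0 * math.pi)  # one-loop beta-function coefficient

def alpha_s(alpha_0, L):
    """One-loop running coupling at scale mu, with L = log(mu^2/mu0^2)."""
    return alpha_0 / (1.0 + b0 * alpha_0 * L)

alpha_0 = 0.35            # alpha_s at mu_0 ~ m (illustrative)
L = 2.0 * math.log(1.1)   # a modest scale ratio mu/mu0 = 1.1

exact = alpha_s(alpha_0, L) - alpha_0   # alpha_s(mu) - alpha_s(mu_0)
leading = -b0 * alpha_0 ** 2 * L        # the NLL term quoted in the text
print(exact, leading)
```

For a modest scale ratio the two expressions agree at the few-percent level; for $\mu\gg\mu_0$ the difference is itself a tower of such logarithms, which is precisely what the evolution resums.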
To better understand the discrepancy, the BKKS scheme can for instance be
rewritten in the form (\ref{cgmn}), with an initial state condition for the
PFF containing the large scale $\mu$ as the argument of $\alpha_s$ rather
than the small one $\mu_0$. This choice of a large scale is however in
contradiction with the factorization theorem hypotheses, which only allow for
small scales in initial conditions, to avoid the appearance of unresummed
large logs. Choosing the large $\mu$ leads at a practical level to the
difference being reabsorbed into a different value for the $\epsilon$
parameter, which happens quite accidentally to be 0.06 rather than 0.02. One
can show that, replacing in the BKKS formula (\ref{bkks}) the $\alpha_s(\mu)
d(\mu_0,m)$ term with $\alpha_s(\mu_0) d(\mu_0,m)$ (or alternatively
appropriately
modifying the NLO splitting vertices in the evolution kernel), $\epsilon =
0.02$ is once again found from the NLL fits within this scheme too.
\begin{figure}[t]
\begin{center}
\epsfig{file=peteps.ps,
bbllx=30pt,bblly=160pt,bburx=540pt,bbury=660pt,
width=7cm,height=6cm,clip=}
\hspace{.5cm}
\epsfig{file=h1_jeff.ps,
bbllx=30pt,bblly=160pt,bburx=540pt,bbury=660pt,
width=7cm,height=6cm,clip=}
\fcaption{Effect on the $D^*$ photoproduction cross section of a decreasing
value for $\epsilon$ (left), and comparison of H1 data with the fixed order
calculation by FMNR (histograms, $\epsilon = 0.06$) and the fragmentation
function approach (smooth lines, $\epsilon = 0.02$).}
\label{h1}
\end{center}
\end{figure}
On the phenomenological side, and making use of the universality argument, one
can now argue that the use of a ``harder'' Peterson form with $\epsilon=0.02$
is probably more suited when combined with a NLO perturbative calculation like
the FMNR one which, albeit only at fixed order, contains NLL gluon radiation.
Decreasing $\epsilon$ increases the cross section at large $p_T$,
since the $p_T$ distribution falls steeply with increasing transverse momentum.
This could help reconcile the HERA experimental data\cite{h1-zeus} with the
perturbative NLO calculation, which was shown to underestimate them slightly
when convoluted with a Peterson form with $\epsilon=0.06$: fig. \ref{h1} shows, on
the left, how the cross section for $D^*$ photoproduction at HERA increases
with decreasing $\epsilon$ and, on the right, a comparison of the H1 data
with the fixed order prediction by FMNR ($\epsilon = 0.06$) and the
fragmentation functions one with $\epsilon = 0.02$. One should notice that the
$p_T$ values involved are still pretty small: this means that the fixed order
calculation is still reliable and the accuracy of the resummed one has to be
assessed first by comparing with the former. In this case they are found to be
in good agreement, the difference in the plot being mainly due to the
different $\epsilon$ values.
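As a hedged numerical illustration (using the standard Peterson et al. form, with no inputs beyond the $\epsilon$ values quoted above; this is not code from the original analysis), one can check directly that lowering $\epsilon$ from 0.06 to 0.02 hardens the non-perturbative fragmentation function, i.e. raises its mean $z$:

```python
# Illustrative sketch: the Peterson et al. form
#   D(z) propto 1 / ( z * (1 - 1/z - eps/(1-z))**2 ).
# A smaller eps gives a harder function (larger mean z), which is what
# raises the predicted cross section at large p_T.
import numpy as np

def peterson(z, eps):
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

def mean_z(eps, n=200001):
    # simple Riemann-sum estimate of <z> on (0, 1)
    z = np.linspace(1e-4, 1.0 - 1e-4, n)
    w = peterson(z, eps)
    return float(np.sum(z * w) / np.sum(w))

assert mean_z(0.02) > mean_z(0.06)  # eps = 0.02 is "harder" than 0.06
```

With this form, $\langle z\rangle$ grows noticeably when moving from $\epsilon=0.06$ to $\epsilon=0.02$, in line with the hardening discussed above.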
Last but not least, it is worth mentioning how, going from LO to NLO analyses,
a similar hardening of the non-perturbative
fragmentation function is also expected for the
$b$ quark. The corresponding increase of the hadroproduction bottom $p_T$
distributions\cite{mlm}
would be welcome in the light of the Tevatron data presently overshooting the
theoretical predictions by at least 30\%.
{
\vspace{.4cm}\noindent
{\bf Acknowledgements.} I wish to thank the Organizers of this Workshop for
the invitation to give this talk,
Mario Greco for his collaboration and Paolo Nason for the many conversations
on the heavy-quark physics topics I have been reviewing here.
}
\section{References}
\newcommand{\zp}[3]{{\it Zeit.\ Phys.\ }{\bf C#1} (19#2) #3}
\newcommand{\pl}[3]{{\it Phys.\ Lett.\ }{\bf B#1} (19#2) #3}
\newcommand{\plold}[3]{{\it Phys.\ Lett.\ }{\bf #1B} (19#2) #3}
\newcommand{\np}[3]{{\it Nucl.\ Phys.\ }{\bf B#1} (19#2) #3}
\newcommand{\prd}[3]{{\it Phys.\ Rev.\ }{\bf D#1} (19#2) #3}
\newcommand{\prl}[3]{{\it Phys.\ Rev.\ Lett.\ }{\bf #1} (19#2) #3}
\newcommand{\prep}[3]{{\it Phys.\ Rep.\ }{\bf C#1} (19#2) #3}
\newcommand{\niam}[3]{{\it Nucl.\ Instr.\ and Meth.\ }{\bf #1} (19#2) #3}
\newcommand{\mpl}[3]{{\it Mod.\ Phys.\ Lett.\ }{\bf A#1} (19#2) #3}
\vspace{-.5cm}
\small
\section{Introduction}\label{intro}
Molecular clouds (MCs) are the densest regions of the interstellar
medium and the birth sites of stars. Nevertheless, despite this
important role in star formation, key aspects of MC evolution remain
unclear: What are the key parameters in determining the star-forming
activity of MCs? How do these parameters change with MC evolution? The
column density distribution of MCs has been found to be sensitive to
the relevant physical processes ~\citep{2012A&ARv..20...55H}. The
study of the density structure of clouds that are at different
evolutionary stages can therefore help to understand which physical
processes are dominating the cloud structure at those stages.
Column density probability density functions (\textit{N}-PDFs) are
useful tools for inferring the role of different physical processes in
shaping the structure of molecular clouds. Observations have shown
that non-star-forming molecular clouds show
\emph{bottom-heavy}\footnote{Most of their mass is in low-column
density material.} \textit{N}-PDFs, while the star-forming
molecular clouds show \emph{top-heavy}\footnote{They have a
significant amount of mass enclosed in high-column density regions.}
\textit{N}-PDFs~\citep{2009A&A...508L..35K,2011A&A...530A..64K,2014Sci...344..183K,2013A&A...549A..53K,2013ApJ...766L..17S}.
It is generally accepted that the \emph{top-heavy} \textit{N}-PDFs
are well described by a power-law function in their high-column
density regimes. The description of the shapes of the low-column
density regimes of both kinds of \textit{N}-PDFs is still a matter
of debate. The papers cited above describe the low-column density
regimes as log-normal functions. In contrast,~\citet{alves-2014}
and~\citet{lombardi-15} argue that a power-law function fits the
observed \textit{N}-PDFs throughout their range. The origin of
these differences is currently unclear.
Simulations predict that turbulence-dominated gas develops
a log-normal \textit{N}-PDF~\citep{2013ApJ...763...51F}; such a form is predicted
for the volume density PDF (hereafter \textit{$\rho$}-PDF) of isothermal,
supersonic turbulent, and non-self-gravitating
gas~\citep{1994ApJ...423..681V,1997ApJ...474..730P,1998ApJ...504..835S,2001ApJ...546..980O,2011ApJ...730...40P,2011MNRAS.416.1436B,2013ApJ...763...51F}.
Log-normal \textit{$\rho$}-PDFs can, however, result also
from processes other than supersonic turbulence such as
gravity opposed only by thermal-pressure forces or
gravitationally-driven ambipolar diffusion~\citep{2010MNRAS.408.1089T}.
The log-normal \textit{N}-PDF is defined as:
\begin{equation}\label{eq:log-normal}
p(s; \mu, \sigma_{s}) = \frac{1}{\sigma_{s}\sqrt{2\pi}}\exp \left(\frac{-(s-\mu)^{2}}{2\sigma_{s}^{2}}\right),
\end{equation}
where $s=\mathrm{ln\,}(A_{V}/\overline{A_{V}})$
is the mean-normalized visual extinction
(tracer of column density, see Section~\ref{sec:NH}), and $\mu$
and $\sigma_{s}$ are respectively the mean and standard
deviation of the distribution. The log-normal
component that is used to describe low column densities
has typically the width of
$\sigma_{s}=0.3-0.4$~\citep{2009A&A...508L..35K}.
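For reference, Eq.~(\ref{eq:log-normal}) can be evaluated directly; the following minimal Python sketch (a numerical check, not part of the original analysis) verifies that the distribution is normalised for a typical width $\sigma_{s}=0.35$:

```python
import numpy as np

def lognormal_pdf(s, mu, sigma_s):
    """Eq. (1): log-normal N-PDF in s = ln(A_V / <A_V>)."""
    norm = 1.0 / (sigma_s * np.sqrt(2.0 * np.pi))
    return norm * np.exp(-(s - mu) ** 2 / (2.0 * sigma_s ** 2))

# Riemann-sum check of the normalisation over s
s = np.linspace(-5.0, 5.0, 20001)
p = lognormal_pdf(s, mu=0.0, sigma_s=0.35)
integral = float(np.sum(p) * (s[1] - s[0]))
assert abs(integral - 1.0) < 1e-3
```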
It has been suggested that the determination
of the width can be affected by issues such as unrelated
dust emission along the line of sight to the cloud~\citep{schneider-15}.
Practically all
star-forming clouds in the Solar neighborhood
show an excess to this component at higher column densities,
following a power-law, or a wider log-normal
function~\citep{2009A&A...508L..35K,2013A&A...549A..53K,2013ApJ...766L..17S},
especially reflecting their ongoing
star formation activity~\citep{2014Sci...344..183K,2014ApJ...787L..18S,2015arXiv150405188S}.
Such behavior agrees with predictions that self-gravitating
systems develop \textit{top-heavy}
\textit{$\rho$}-PDFs~\citep{2000ApJ...535..869K,2000ApJS..128..287K,2011ApJ...727L..20K,2014ApJ...781...91G}.
Another interesting measure of the density structure of
molecular clouds is the dense gas mass fraction (DGMF), which
describes the mass enclosed by regions above a given extinction threshold,
$M(A_{V} \geq A_{V}\arcmin)$, relative to the total mass
of the cloud, $M_{\mathrm{tot}}$:
\begin{equation}\label{dgmf}
dM\arcmin=\frac{M(A_{V} \geq A_{V}\arcmin)}{M_{\mathrm{tot}}}.
\end{equation}
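Given a column density (or extinction) map, Eq.~(\ref{dgmf}) reduces to a ratio of sums over pixels. A minimal sketch (the map and thresholds below are toy values for illustration, not survey data):

```python
import numpy as np

def dgmf(av_map, av_thresholds):
    """Eq. (2): mass fraction above each threshold A_V'.

    At fixed distance and pixel size, mass is proportional to the summed
    column density, so the ratio requires no unit conversions.
    """
    av = np.asarray(av_map, dtype=float).ravel()
    total = av.sum()
    return np.array([av[av >= t].sum() / total for t in av_thresholds])

# Toy 2x2 map in magnitudes of A_V
toy = np.array([[1.0, 2.0], [5.0, 12.0]])
dm = dgmf(toy, [0.0, 3.0, 10.0])
assert dm[0] == 1.0              # everything lies above A_V' = 0
assert np.all(np.diff(dm) <= 0)  # dM' decreases with the threshold
```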
The DGMF has been recently linked to the star-forming
rates of molecular clouds:~\citet{2010ApJ...723.1019H}
and~\citet{2010ApJ...724..687L,2012ApJ...745..190L} showed,
using samples of nearby molecular clouds and external galaxies,
that there is a relation between the mean star-forming rate (SFR)
surface density ($\Sigma_{\mathrm{SFR}}$) and the mean
mass surface density ($\Sigma_{\mathrm{mass}}$) of MCs:
$\overline{\Sigma}_{\mathrm{SFR}}\propto f_{\mathrm{DG}}\overline{\Sigma}_{mass}$, where
$f_{\mathrm{DG}}=\frac{M(A_{V}>7.0\,\mathrm{mag})}{M_{\mathrm{tot}}}$.
Furthermore, in a sample of eight molecular clouds within 1\,kpc,
a correlation $\Sigma_{\mathrm{SFR}}\propto \Sigma_{\mathrm{mass}}^{2}$ was reported
by~\citet{2011ApJ...739...84G}.
Combining these two results suggests $f_{\mathrm{DG}}\propto\Sigma_{\mathrm{mass}}$.
Despite their utility, a complete, global understanding of the
\textit{N}-PDFs and DGMFs of
molecular clouds is still missing. One of the main
problems arises from the fact that the dynamic ranges
of different observational techniques sample the \textit{N}-PDFs
and DGMFs differently.
Previous works have employed various methods:
CO line emission only samples \textit{N}-PDFs between
$A_{V}\approx3-8$\,mag~\citep{2009ApJ...692...91G}.
NIR extinction traces column density over a wider, but still limited,
dynamic range, $A_{V}\approx1-25$\,mag~\citep{2001A&A...377.1023L}.
~\citet{2013A&A...549A..53K} and~\citet{2013A&A...553L...8K} used a novel extinction technique
that combines NIR and MIR data, considerably increasing the observable dynamic
range, $A_{V}=3 - 100$\,mag.~\citet{2013ApJ...766L..17S} and~\citet{lombardi-15} used \textit{Herschel}
FIR data to sample \textit{N}-PDFs at $A_{V}<100$\,mag.
Another observational hindrance
arises from the limited spatial resolution of observations.
The high-column densities typically correspond to small spatial
scales in molecular clouds; to probe the \textit{N}-PDFs at high-column
densities requires spatial resolution that approaches the scale
of dense cores in the clouds ($\sim$0.1\,pc). \textit{Herschel} reaches a resolution
of $\sim$36$\arcsec$, which corresponds to 0.17\,pc at a distance of 1\,kpc.
Extinction mapping using both NIR and MIR wavelengths can
reach arcsecond-scale resolution, but only about ten clouds
have been studied so far with that technique~\citep{2013A&A...549A..53K,2013A&A...557A.120K}.
Born out of the observational limitations above,
the most important weakness in previous studies is that they only
analyze relatively nearby molecular clouds ($d\lesssim1.5\,\mathrm{kpc}$).
Thus, they probe only a very limited range of Galactic
environments, which prohibits the development of a global
picture of the factors that control \textit{N}-PDFs
across different Galactic environments.
Extending \textit{N}-PDF studies to
larger distances is imperative for three principal reasons.
First, studying the more massive and distant MCs will allow us to sample the
entire MC mass range present in the Galaxy. Second, larger numbers of
MCs over all masses provide statistically meaningful samples. Finally,
extending to larger distances is necessary to study the possible
effect of the Galactic structure on the mass distribution statistics.
In this paper, we employ the ATLASGAL~\citep{2009A&A...504..415S,csengeri-2014} survey to study a large
sample of molecular clouds in the Galaxy.
The ATLASGAL survey traces submillimeter dust emission at 870\,$\mu$m.
Submillimeter dust emission is an optically thin tracer of interstellar dust,
and hence a direct tracer of gas if a canonical
dust-gas mass ratio is assumed. The submillimeter observing
technique employed in the ATLASGAL survey filters out diffuse emission
on spatial scales greater than 2.5$\arcmin$,
hence making the survey most sensitive to the densest material of the
interstellar medium in which star formation occurs.
With this data set we can observe the cold dense interiors of
molecular clouds in both the near and far sides of the Galactic plane.
With an angular resolution of $19.2\arcsec$, ATLASGAL improves
by almost a factor of two the resolution of $Herschel$
observations, thus providing a more detailed view of the
dense material inside molecular clouds.
We will use this data sample to study the
\textit{N}-PDFs and DGMFs of molecular clouds in different evolutionary classes.
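As an order-of-magnitude sketch of the dust-to-gas conversion mentioned above (the dust temperature, opacity, and gas-to-dust ratio below are common illustrative assumptions, not values adopted by this paper), the optically thin relation $N(\mathrm{H_2}) = S_\nu R / [B_\nu(T_d)\,\Omega_{\mathrm{beam}}\,\kappa_\nu\,\mu\,m_{\mathrm{H}}]$ gives:

```python
import numpy as np

# cgs constants
h, c, k_B, m_H = 6.626e-27, 2.998e10, 1.381e-16, 1.673e-24

def planck(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu ** 3 / c ** 2 / (np.exp(h * nu / (k_B * T)) - 1.0)

def column_density(S_jy_beam, T_d=15.0, kappa=1.85, gas_to_dust=100.0,
                   beam_fwhm_as=19.2):
    """H2 column density (cm^-2) from an 870 um flux density per beam.

    T_d, kappa (cm^2 per g of dust) and gas_to_dust are assumptions of
    this sketch, not parameters taken from the paper.
    """
    nu = c / 870e-4                       # 870 um in Hz
    fwhm = beam_fwhm_as / 206265.0        # arcsec -> rad
    omega = np.pi * fwhm ** 2 / (4.0 * np.log(2.0))  # Gaussian beam solid angle
    S_cgs = S_jy_beam * 1e-23             # Jy -> erg s^-1 cm^-2 Hz^-1
    mu = 2.8                              # mean molecular weight per H2
    return S_cgs * gas_to_dust / (planck(nu, T_d) * omega * kappa * mu * m_H)
```

With these assumptions the $3\sigma$ level of 0.15\,Jy/beam corresponds to a column density of a few~$\times10^{21}\,\mathrm{cm^{-2}}$.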
\section{Data and methods}\label{sec:data}
We used continuum maps at 870\,$\mu$m from
ATLASGAL to identify MCs in
the Galactic plane region between
$l\in[9\degr,21\degr]$ and $|b|\leq1\degr$, where the
rms of the survey is 50\,mJy/beam.
We selected this area, because extensive auxiliary data
sets were available for it, and specifically, starless clumps
have already been identified by~\citet{2012A&A...540A.113T}.
We classified the identified molecular cloud regions into
three groups based on their evolutionary classes:
starless clumps (SLCs), star-forming clouds (SFCs), and \ion{H}{II}\,\, regions.
In the following, we describe how each
class was defined and how we
estimated the distance to each region.
\subsection{Source selection}\label{sec:def}
We identified molecular cloud regions based primarily on ATLASGAL
dust emission data. As a first step, we defined objects from ATLASGAL data
simply by using $3\sigma$ emission contours (0.15\,Jy/beam) to define the region boundaries.
Then, we used distances available in literature (see Sect.~\ref{sec-dist})
to group together neighbouring objects
located at similar distances (within the assumed distance
uncertainty of 0.5\,kpc), i.e., those that are likely
associated with the same molecular cloud. As a next step,
we expanded the boundaries of the regions down to their 1$\sigma$ level
in the cases in which they show close contours at 1$\sigma$.
Finally, each region created in this manner was classified either as a SLC,
SFC, or \ion{H}{II}\,\, region using information about their stellar content
available in literature.
An example of the region definition is shown in Fig.~\ref{fig:galPlane} (see also Appendix~\ref{ap:maps}).
We identify a total of 615 regions, 330 of them with known distances and classified
either as SLC, SFC, or \ion{H}{II}\,\, regions (Fig.~\ref{fig:dist-hist}).
Throughout this paper we refer to each of the ellipses shown in Fig.~\ref{fig:galPlane} with the
term \emph{region}. In the following we explain the definition of the three evolutionary classes in detail.
\ion{H}{II}\,\, regions are defined as regions hosting previously cataloged \ion{H}{II}\,\,
regions. We used the catalogues of
\citet{1989ApJS...69..831W},~\citet{1989ApJS...71..469L},~\citet{1993ApJ...418..368G},~\citet{1996A&AS..115...81B},
~\citet{1996ApJ...472..173L},~\citet{2000ApJ...530..371F},
and~\citet{2013MNRAS.435..400U}. We identified 114 \ion{H}{II}\,\, regions in the
considered area. Distances are known for 84 of them (74\%). Two thirds
(57) of the \ion{H}{II}\,\, regions with distance estimates lie at near distances
($d<5.5$\,kpc). If we assume the same distribution for the 30 \ion{H}{II}\,\,
regions with unknown distances, 20 of them would be located at near
distances. Nevertheless, we exclude these regions from our analysis.
We summarize the number of regions with and without known distances in
Table~\ref{tab:number} (see Sect.~\ref{sec-dist}).
The star-forming clouds (SFCs) are defined as the subset of regions
devoid of \ion{H}{II}\,\, regions but containing young stellar objects (YSOs) and
protostars. Here the presence of YSOs and protostars is assumed to be a clear indication
of ongoing star formation.
For this purpose, we used the YSO catalogues of~\citet{2011ApJ...731...90D} and~\citet{2014PASJ...66...17T}.
The former searched for signs of active star formation in the Bolocam Galactic Plane Survey~\citep[BGPS;][]{bgps}
using the GLIMPSE Red Source catalogue~\citep{robitaille-2008},
the EGO catalogue~\citep{cyganowski-2008}, and the RMS catalogue~\citep{lumsden-2013}.
They found 1341 YSOs in the area $l\in[9\degr,21\degr]$ and $|b|\leq0.5\degr$;
their catalogue is $>$98\% complete at the 0.4\,Jy level~\citep{2011ApJ...731...90D}.
~\citet{2014PASJ...66...17T} present a catalog of 44001 YSO candidates,
2138 in the area $l\in[9\degr,21\degr]$ and $|b|\leq1\degr$, with a reliability of 90\%
in the YSO classification.
All the regions showing spatially coincident YSOs were classified as SFCs.
Although a single YSO suffices to classify a region as a SFC,
in practice all of our SFCs contain more than one.
The probability of misclassifying a SFC as a
region without YSOs due to completeness issues in the
YSO catalogues is therefore very low.
We identified 184 SFCs, 126 of them with
known distances. Of the SFCs with known
distances, 80\% (99) lie at $d<5.5$\,kpc and are therefore studied in
this paper. Assuming the same distance distribution
for the SFCs with unknown distances, we estimate that 80\% (46) of those with
no distance estimates would be located at near distances.
Finally, we adopted the starless clump catalog
from~\citet{2012A&A...540A.113T} to define our sample of SLCs.
They present a SLC sample with peak
column densities $N>10^{23}\mathrm{\,cm^{-2}}$. The properties of this SLC sample
were specifically chosen in order to detect potential high-mass star progenitors.
~\citet{2012A&A...540A.113T} used uniform criteria to classify their
SLC sample: absence of GLIMPSE and/or 24$\,\mu$m MIPSGAL sources.
~\citet{2012A&A...540A.113T} identified 120 SLCs with
known distances\footnote{We adopt only
regions with solved kinematic distance ambiguity (KDA) as
sources with known distances.}
in the Galactic plane area studied.
All SLCs are located inside our previously defined
\ion{H}{II}\,\, regions or SFCs (see Fig.~\ref{fig:galPlane}).
We note a caveat in the above evolutionary class definition scheme.
Our scheme makes an effort to capture the dominant
evolutionary phase of the region, but it is clear that not all the regions
are straightforward to classify. In principle, the distinction between
\ion{H}{II}\,\, regions and SFCs is well defined; it depends on whether the regions
host an \ion{H}{II}\,\, region or not. However, eight regions harbor only UC\ion{H}{II}\,\, regions
whose extent is tiny compared to the full extent of
those regions (\#34, \#54, \#192, \#195, \#233, \#246, \#247 and \#390).
Since our aim is to capture the dominant evolutionary phase, we classified
these regions as SFCs.
We also note that our evolutionary class
definition is based only on the stellar content of the regions.
The SLCs exhibit no indications of star-forming activity, SFCs host star-forming sources,
and \ion{H}{II}\,\, regions have formed massive stars.
However, we emphasize that we cannot assume that all the SLCs will definitely
form stars. Similarly, we cannot assume that all the star-forming content
within SFCs will become massive enough to create \ion{H}{II}\,\, regions, although some
of them will. Therefore
we do not aim to draw a \textit{sequential} evolutionary link between these three
classes of regions; instead, we estimate
independent evolutionary time-scales for each observational class.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{./fig/atlasgalOverMipsRegions_09_poster.eps}
\caption{MIPSGAL 24\,$\mu$m map of the Galactic plane
between $9\deg < l < 10.5\deg$. Yellow
contours indicate the 3$\sigma$ (0.15\,Jy/beam) emission level of
the ATLASGAL data. Red and blue ellipses show
the \ion{H}{II}\,\, regions and SFCs, respectively.
SLCs are shown with green filled diamonds.
Similar maps for the Galactic plane between
$10.5\deg < l < 21\deg$ and $|b|\leq1\deg$ are shown in
Appendix~\ref{ap:maps}.}
\label{fig:galPlane}
\end{figure*}
\subsection{Distance estimates and convolution to a common spatial resolution}
\label{sec-dist}
We adopted distances to each region from literature.
The two main literature sources used were~\citet{2013ApJ...770...39E}
and~\citet{2012A&A...544A.146W}.
The former catalog measures kinematic distances of molecular clumps
identified with sub-mm dust emission.
They solve the kinematic distance ambiguity (KDA)
using Bayesian distance probability density functions.
They use previous data sets to establish the prior
distance probabilities to be used in the Bayesian analysis.
This method has a 92\% agreement with Galactic Ring Survey based distances.
In total, 68 out of 330 regions have counterparts in~\citet{2013ApJ...770...39E}.
~\citet{2012A&A...544A.146W} measured the kinematic distances to dense
clumps in the ATLASGAL survey using ammonia observations. We obtained
distance estimates for 80 regions from this catalog. We also used
other catalogs based on kinematic distances~\citep{1997MNRAS.291..261W,2006ApJ...639..227S,2006ApJS..162..346R,2009ApJ...699.1153R,2013MNRAS.435..400U,2012A&A...540A.113T},
and in a three-dimensional model of interstellar extinction~\citep{2009ApJ...706..727M}.
A detailed discussion on the methods for distance estimates is
beyond the scope of this paper. We therefore refer to the
cited papers for a detailed discussion on them. Table~\ref{tab:dist-ref} shows the
number of distance estimates adopted from each literature source.
For regions with more than one distance estimate,
we adopted the average of the different values.
For all but six of the studied regions ($\sim96$\%)
the distance ambiguity was solved in at least one of the cited papers.
Regions with different KDA solutions in literature
(i.e. with several clouds along the same line-of-sight) were
removed from our sample to avoid line-of-sight contamination.
For the remaining six regions we used maps from the
GLIMPSE and MIPS surveys to search for dark shadows against
background emission (e.g. Stutz et al. 2009; Ragan et al. 2012).
The near distance was adopted for regions associated with IRDCs.
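To make the origin of the KDA concrete, here is a brief sketch assuming a flat rotation curve with the IAU values $R_0=8.5$\,kpc and $V_0=220$\,km\,s$^{-1}$ (the cited papers use considerably more refined machinery): inside the solar circle a single $v_{\mathrm{lsr}}$ fixes the Galactocentric radius $R$ but allows two heliocentric distances.

```python
import numpy as np

R0, V0 = 8.5, 220.0  # kpc, km/s (IAU values; an assumption of this sketch)

def kinematic_distances(l_deg, v_lsr):
    """Near/far kinematic distances (kpc) for a flat rotation curve."""
    sinl = np.sin(np.radians(l_deg))
    cosl = np.cos(np.radians(l_deg))
    R = R0 * V0 * sinl / (V0 * sinl + v_lsr)   # Galactocentric radius
    root = np.sqrt(R ** 2 - (R0 * sinl) ** 2)
    return R0 * cosl - root, R0 * cosl + root  # the two KDA solutions

# e.g. l = 15 deg, v_lsr = 40 km/s: one radius, two possible distances
near, far = kinematic_distances(15.0, 40.0)
assert 0.0 < near < far
```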
Since all the SLCs of our sample are
embedded in \ion{H}{II}\,\, regions or SFCs (see Section~\ref{sec:def}), we
compared the distance estimates for the SLCs
and for their hosting regions. In every but one case, the
distance estimates of the SLCs and their hosting SFCs or \ion{H}{II}\,\, regions
were in good agreement. In the only inconsistent case, the SLC was
located at the far distance in~\citep{2012A&A...540A.113T} and its hosting
SFC was located at the near distance. Since the KDA solutions of
the SLC and its hosting SFC differ, we removed out both regions
from the final sample (see also previous paragraph).
Figure~\ref{fig:dist-hist} shows the distance distribution of our
sample. A vast majority ($\sim$ 80\%) of our regions is located
within 5\,kpc distance.
There is a gap between 6 and 10\,kpc,
coinciding with the central hole of the Galactic molecular
ring~\citep{1989ApJ...339..919S}\footnote{We
note that the existence of the Galactic Ring has recently been questioned
by~\citet{2012MNRAS.421.2940D}, who proposed a two symmetric spiral
arm pattern for the Milky Way as an explanation of observations.}.
At the far side of the Galaxy, there are three
density enhancements that coincide
with the Sagittarius, Norma, and Perseus spiral arms
(Fig.~\ref{fig:MW-faceon}).
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{./fig/distanceHisto.eps}}
\caption{Distance distribution of the molecular
cloud regions. Black solid line shows the total number of regions.
Red dotted line shows the HII regions,
the blue dashed line the SFCs, and green filled area the SLCs.
Black dashed vertical line at 5\,kpc shows the common distance to
which we have smoothed the data.}
\label{fig:dist-hist}
\end{figure}
\begin{figure}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics{./fig/MWFaceOnPaper_mod.eps}}
\caption{Artist impression of face-on view of the Milky Way
(R. Hurt, SSC-Caltech, MPIA graphic, Tackenberg et al. 2012).
\ion{H}{II}\,\, regions are shown as red circles, star-forming clouds as blue circles and
starless clumps as green circles.
Circle sizes are proportional to region sizes. The right panel shows a zoom
to the region enclosed by the black rectangle in the left panel,
where the source density is highest.}
\label{fig:MW-faceon}
\end{figure}
We study only regions within 5\,kpc since the highest
source density of our sample is located there.
Assuming an error in distance determination of about 0.5\,kpc,
we also included regions located between 5\,kpc and 5.5\,kpc.
We convolved the ATLASGAL data of all closer regions to a
common 5\,kpc distance resolution using
a Gaussian kernel of
$FWHM=19.2\arcsec\sqrt{\left(5\,\mathrm{kpc}/d\right)^{2}-1}$, where $d$ is the distance of the region.
This convolution was done for each region individually.
At the distance of 5\,kpc, the $19.2\arcsec$ resolution of the ATLASGAL
translates to about 0.5\,pc.
We therefore do not resolve the dense cores ultimately
linked to star formation that have typically a size of
$\sim$0.1\,pc~\citep{2009EAS....34..195M,2013A&A...559A..79R}.
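The kernel width follows from the fact that Gaussian beams add in quadrature; a quick numerical check of the formula above (illustration only, not survey code):

```python
import numpy as np

BEAM = 19.2  # native ATLASGAL FWHM in arcsec

def kernel_fwhm(d_kpc, d_ref=5.0):
    """FWHM (arcsec) of the smoothing kernel for a region at distance d < d_ref."""
    return BEAM * np.sqrt((d_ref / d_kpc) ** 2 - 1.0)

# Convolving the native beam with this kernel reproduces the angular beam
# that the same cloud would show at 5 kpc: Gaussians add in quadrature.
fw = kernel_fwhm(2.5)
assert abs(np.hypot(BEAM, fw) - BEAM * 5.0 / 2.5) < 1e-9
```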
When smoothing maps to a common distance,
some of the smaller SLCs
were washed out by strong emission gradients
likely associated with nearby strong sources. This artificially
increases the SLC column densities.
To minimize the effect, we inspected each SLC by eye,
discarding those that were significantly affected by
strong gradients. Appendix~\ref{app:TSL} shows the SLCs
included in the final sample.
The total number of regions studied in this paper,
and the number of regions in each evolutionary class,
are listed in Table~\ref{tab:number}.
\begin{table}
\caption{Completeness of each evolutionary class} %
\centering
\begin{tabular}{c c c c c c}
\hline\hline
&\ion{H}{II}\,\, &SFCs &SLCs &No class \\\hline
Total &114 &184 &210 &107 \\
Known $d$\tablefootmark{a} &84 &126 &120 &--- \\
$d<5.5$\,kpc &57 &99 &111 &--- \\
$d>5.5$\,kpc &27 &27 &9 &--- \\
miss. $d<5.5$\,kpc\tablefootmark{b} &20 &18 &102 &87 \\
miss. $d>5.5$\,kpc\tablefootmark{b} &10 &8 &8 &20 \\
Studied &57 &99 &31\tablefootmark{c} &--- \\
\hline
\label{tab:number}
\end{tabular}
\tablefoot{
\tablefoottext{a}{Only SLCs with KDA solved and, if more than one distance estimate,
agreement between different literature sources.}
\tablefoottext{b}{Number of regions lost due to
lack of distance estimates. We assume homogeneous distribution of the sources
along the Galactic plane area studied.}
\tablefoottext{c}{We only studied isolated SLCs (see Sect.~\ref{sec-dist})}}
\end{table}
\begin{table}
\caption{Literature sources from which distances were obtained} %
\centering
\begin{tabular}{c c c c}
\hline\hline
Reference &\ion{H}{II}\,\, &SFCs &SLCs \\\hline
1 &7 &50 &11 \\
2 &19 &6 &18 \\
3 &23 &1 &7 \\
4 &39 &23 &18 \\
5 &7 &11 &5 \\
6 &14 &26 &5 \\
7&--- &13 &--- \\
8&17 &--- &5 \\
\hline
\label{tab:dist-ref}
\end{tabular}
\tablebib{(1)~\citet{2013ApJ...770...39E};
(2)~\citet{2012A&A...540A.113T}; (3)~\citet{2013MNRAS.435..400U};
(4)~\citet{2012A&A...544A.146W}; (5)~\citet{2009ApJ...699.1153R};
(6)~\citet{2009ApJ...706..727M}; (7)~\citet{2006ApJ...639..227S};
(8)~\citet{1997MNRAS.291..261W}.}
\end{table}
\onllongtab{
\begin{landscape}
\begin{longtable}{ccccccccccc}
\caption{Regions studied in this paper}\\
\hline
\hline
ID & Type & RA (J2000) & DE (J2000) & Radius ($\arcsec$) & Maj. Ax
($\arcsec$) & Min. Ax ($\arcsec$) & Angle ($\degr$) & Distance
(kpc) & Dist. dispersion (kpc) & References \\
\hline
\endfirsthead
\caption{Continued.} \\
\hline
ID & Type & RA (J2000) & DE (J2000) & Radius ($\arcsec$) & Maj. Ax
($\arcsec$) & Min. Ax ($\arcsec$) & Angle ($\degr$) & Distance\tablefootmark{b}
(kpc) & Dist. dispersion\tablefootmark{b} (kpc) & References \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
\label{tab:regions}
2 & SFC & 18:05:35.91 & -20:52:28.00 & 90 & --- & --- & --- & 3.2 & --- & 1 \\
3 & SFC & 18:05:47.64 & -21:00:32.00 & 90 & --- & --- & --- & 4 & --- & 1 \\
4 & HII & 18:06:15.12 & -20:31:36.50 & 150 & --- & --- & --- & 5.5 & 0.4 & 3 \\
5 & SFC & 18:06:46.98 & -20:59:04.10 & 180 & --- & --- & --- & 4.5 & --- & 1 \\
7 & HII & 18:06:52.03 & -21:04:27.35 & 180 & --- & --- & --- & 4.8 & <0.1 & 4,6 \\
8 & SFC & 18:06:53.75 & -21:18:46.50 & 150 & --- & --- & --- & 4.4 & --- & 1 \\
11 & SFC & 18:07:34.17 & -20:26:03.30 & 150 & --- & --- & --- & 2.6 & --- & 1 \\
13 & SFC & 18:07:42.18 & -21:23:02.20 & 200 & --- & --- & --- & 4.7 & 0.3 & 1 \\
16 & SFC & 18:07:55.85 & -20:28:26.60 & 220 & --- & --- & --- & 2.5 & --- & 1 \\
27 & HII & 18:08:53.62 & -18:16:08.10 & 350 & --- & --- & --- & 3 & 0.1 & 4,2 \\
28 & SFC & 18:08:59.71 & -20:11:29.40 & 150 & --- & --- & --- & 2 & --- & 4 \\
29 & HII & 18:09:23.23 & -20:08:38.60 & 150 & --- & --- & --- & 3.6 & 0.1 & 4,3 \\
31 & SFC & 18:09:24.38 & -20:01:35.70 & 220 & --- & --- & --- & 2.3 & 0.4 & 1,4 \\
30 & HII & 18:09:01.33 & -19:48:33.30 & --- & 290 & 250 & -60 & 5.2 & --- & 2 \\
33 & HII & 18:09:05.33 & -19:28:00.80 & 150 & --- & --- & --- & 2.5 & --- & 3 \\
36 & HII & 18:09:37.96 & -20:19:16.10 & 540 & --- & --- & --- & 1.9 & 0.2 & 4 \\
37 & SFC & 18:09:06.10 & -21:03:37.20 & 420 & --- & --- & --- & 3.7 & 0.3 & 4 \\
38 & SFC & 18:09:22.35 & -21:15:35.70 & 300 & --- & --- & --- & 5.2 & --- & 6 \\
44 & HII & 18:09:29.30 & -19:16:19.30 & --- & 620 & 500 & -60 & 3.4 & --- & 8 \\
47 & SFC & 18:09:54.44 & -19:44:46.90 & --- & 450 & 360 & -10 & 3.5 & 0.2 & 1,4,2 \\
49 & SFC & 18:10:02.27 & -18:50:12.00 & 280 & --- & --- & --- & 3.4 & --- & 1 \\
50 & HII & 18:10:05.43 & -20:59:11.90 & 360 & --- & --- & --- & 3.7\tablefootmark{a} & --- & 4,8 \\
51 & HII & 18:10:12.40 & -20:46:22.90 & --- & 800 & 260 & -50 & 3.7 & 0.2 & 4,8 \\
54 & SFC & 18:10:26.70 & -19:20:56.20 & --- & 920 & 380 & -65 & 3.5 & 0.2 & 1,4,3,2,6 \\
55 & HII & 18:10:54.13 & -19:52:35.30 & --- & 860 & 630 & -30 & 5.2 & --- & 6 \\
60 & HII & 18:10:51.72 & -17:55:57.40 & 240 & --- & --- & --- & 2.4 & 0.3 & 1,4,6 \\
61 & SFC & 18:10:54.72 & -20:32:50.30 & 180 & --- & --- & --- & 5.4 & --- & 6 \\
65 & SFC & 18:11:05.69 & -19:37:05.40 & --- & 300 & 180 & -30 & 5.2 & <0.1 & 6,2 \\
72 & HII & 18:11:32.30 & -19:30:42.10 & 300 & --- & --- & --- & 5.2 & <0.1 & 8,6 \\
73 & SFC & 18:11:46.63 & -18:17:40.40 & --- & 240 & 120 & 90 & 3.8 & --- & 1 \\
74 & HII & 18:11:43.26 & -18:16:54.00 & --- & 480 & 300 & -10 & 3.6 & --- & 1 \\
75 & SFC & 18:11:56.19 & -18:48:14.50 & 200 & --- & --- & --- & 3.2 & --- & 1 \\
78 & SFC & 18:11:59.63 & -19:07:48.50 & 400 & --- & --- & --- & 4.6 & --- & 1 \\
79 & SFC & 18:12:00.81 & -19:36:00.90 & 350 & --- & --- & --- & 3.2 & --- & 1 \\
81 & SFC & 18:12:04.07 & -17:52:43.80 & 210 & --- & --- & --- & 4.3 & --- & 6 \\
82 & SFC & 18:12:08.63 & -16:42:41.60 & 360 & --- & --- & --- & 3.4 & --- & 6 \\
83 & HII & 18:12:12.42 & -17:40:52.80 & --- & 450 & 360 & -63 & 2.2 & 0.4 & 4,2,6 \\
86 & SFC & 18:12:14.29 & -18:26:52.00 & 130 & --- & --- & --- & 4.8 & --- & 6 \\
88 & SFC & 18:12:26.51 & -17:32:46.30 & --- & 240 & 90 & -10 & 3.9 & --- & 6 \\
90 & HII & 18:12:32.90 & -18:30:13.00 & 80 & --- & --- & --- & 4.2 & <0.1 & 4,3 \\
91 & SFC & 18:12:32.62 & -17:29:16.80 & --- & 120 & 100 & 80 & 3.6 & --- & 1 \\
94 & SFC & 18:12:41.69 & -18:42:58.80 & 120 & --- & --- & --- & 5 & --- & 4 \\
98 & SFC & 18:12:56.03 & -19:04:23.40 & --- & 700 & 250 & -80 & 3.9 & --- & 6 \\
100 & HII & 18:13:09.20 & -18:07:49.70 & --- & 450 & 250 & -40 & 2.6 & <0.1 & 4,2 \\
102 & SFC & 18:12:51.72 & -18:48:13.50 & --- & 500 & 120 & -50 & 5 & --- & 6 \\
103 & HII & 18:13:12.52 & -18:00:07.20 & 150 & --- & --- & --- & 2.6 & 0.1 & 4,3 \\
104 & SFC & 18:13:12.15 & -16:41:11.40 & --- & 210 & 100 & 30 & 3 & --- & 6 \\
105 & SFC & 18:13:09.52 & -18:15:57.60 & --- & 370 & 240 & -45 & 3.5 & --- & 1 \\
106 & SFC & 18:13:30.90 & -17:18:06.70 & 165 & --- & --- & --- & 2.3 & --- & 1 \\
108 & SFC & 18:13:38 & -18:12:15.20 & --- & 180 & 120 & 35 & 3.6 & --- & 1 \\
110 & SFC & 18:13:35.59 & -17:23:47.20 & --- & 270 & 210 & -10 & 4.5 & --- & 1 \\
111 & SFC & 18:13:54.20 & -17:16:28.50 & --- & 330 & 210 & 90 & 2.4 & --- & 1 \\
114 & HII & 18:13:52.96 & -18:56:44.90 & --- & 450 & 300 & 100 & 3.9\tablefootmark{a} & 0.3 & 4,3,8,6 \\
116 & HII & 18:14:09.82 & -17:27:24.20 & --- & 570 & 220 & -45 & 4.4 & 0.1 & 4,3,2 \\
120 & SFC & 18:14:05.27 & -18:14:49.00 & 300 & --- & --- & --- & 3.5 & --- & 1 \\
121 & SFC & 18:14:13.00 & -18:20:41.80 & --- & 150 & 120 & -30 & 5.1 & --- & 6 \\
122 & HII & 18:14:16.00 & -17:57:18.60 & --- & 780 & 550 & -35 & 4 & 0.6 & 4,8,3,2 \\
123 & SFC & 18:14:16.28 & -17:14:57.00 & --- & 210 & 120 & -60 & 2.8 & --- & 1 \\
134 & HII & 18:14:35.87 & -16:45:31.30 & --- & 960 & 560 & -60 & 4.4 & 0.3 & 4 \\
136 & HII & 18:14:30.50 & -17:38:34.00 & --- & 750 & 400 & -65 & 4.4 & 0.6 & 1,4,2,6 \\
140 & SFC & 18:14:50.37 & -17:20:31.50 & --- & 420 & 210 & -75 & 4.5 & --- & 4 \\
146 & SFC & 18:14:55.69 & -17:46:58.50 & --- & 210 & 120 & 80 & 5.1 & --- & 6 \\
149 & SFC & 18:14:57.28 & -18:27:01.10 & 270 & --- & --- & --- & 5 & --- & 6 \\
152 & SFC & 18:15:01.51 & -17:42:22.80 & 100 & --- & --- & --- & 4.6 & --- & 6 \\
154 & SFC & 18:15:11.03 & -17:47:21.20 & --- & 360 & 130 & 95 & 3.7 & --- & 1 \\
159 & SFC & 18:15:46.15 & -17:34:20.50 & --- & 750 & 540 & 80 & 3.8 & 0.4 & 1,4,2,6 \\
160 & HII & 18:15:37.51 & -17:04:23.50 & 400 & --- & --- & --- & 4.1 & <0.1 & 4,3 \\
162 & SFC & 18:15:38.97 & -18:10:42.50 & 760 & --- & --- & --- & 3.5 & --- & 2,6 \\
163 & HII & 18:15:40.22 & -17:19:46.70 & 260 & --- & --- & --- & 2.6 & --- & 1 \\
165 & HII & 18:15:45.18 & -16:38:58.20 & 240 & --- & --- & --- & 5.4 & --- & 3 \\
170 & SFC & 18:16:00.58 & -16:04:53.20 & 440 & --- & --- & --- & 2.8 & --- & 4 \\
175 & SFC & 18:16:12.68 & -16:43:53.70 & --- & 240 & 150 & -10 & 4.3 & --- & 6 \\
177 & SFC & 18:16:20.00 & -17:05:00.90 & --- & 120 & 70 & 80 & 3.7 & --- & 1 \\
178 & HII & 18:16:31.62 & -16:50:59.60 & --- & 600 & 240 & 240 & 3.7 & <0.1 & 4,2 \\
179 & HII & 18:16:04.39 & -19:42:05.20 & 460 & --- & --- & --- & 1.6 & --- & 8 \\
183 & SFC & 18:16:45.65 & -17:02:26.90 & 300 & --- & --- & --- & 3.7 & --- & 1 \\
185 & HII & 18:16:57.86 & -16:15:33.90 & --- & 600 & 360 & -60 & 2.4 & 0.3 & 4,3,2,8 \\
186 & HII & 18:16:51.62 & -18:41:28.30 & 300 & --- & --- & --- & 4 & --- & 4 \\
187 & HII & 18:16:58.96 & -16:42:08.70 & --- & 380 & 240 & 30 & 3.6 & 0.2 & 4,2,6 \\
190 & HII & 18:17:11.87 & -16:27:00.60 & --- & 630 & 300 & -25 & 3.7\tablefootmark{a} & 0.1 & 4,3,2 \\
192 & SFC & 18:16:18.76 & -14:18:57.50 & 1100 & --- & --- & --- & 2.2 & 0.2 & 4 \\
195 & SFC & 18:17:27.20 & -17:08:58.50 & --- & 480 & 220 & -45 & 2.7 & 0.3 & 1 \\
196 & SFC & 18:17:21.85 & -17:00:48.80 & --- & 330 & 220 & -60 & 2.3 & --- & 6 \\
200 & SFC & 18:17:36.02 & -15:56:27.60 & --- & 360 & 200 & -30 & 2.7 & 0.1 & 4,7 \\
202 & SFC & 18:17:31.86 & -16:16:59.20 & --- & 330 & 140 & -80 & 3.7 & --- & 1 \\
207 & SFC & 18:17:35.93 & -15:47:45.70 & 240 & --- & --- & --- & 3.1 & 0.2 & 1,7 \\
209 & SFC & 18:17:52.47 & -16:29:02.00 & --- & 370 & 210 & -45 & 4.6 & --- & 1 \\
210 & SFC & 18:17:55.42 & -17:13:05.50 & 250 & --- & --- & --- & 2.9 & --- & 6 \\
214 & HII & 18:17:45.73 & -16:01:22.20 & --- & 400 & 200 & -30 & 2.8 & <0.1 & 7,6 \\
217 & SFC & 18:17:59.25 & -16:15:15.20 & --- & 480 & 270 & 100 & 3.7 & 0.2 & 1,2,4 \\
218 & HII & 18:18:09.85 & -16:52:16.80 & --- & 750 & 500 & -70 & 2.5 & 0.4 & 1,2,4 \\
219 & SFC & 18:18:15.15 & -16:04:42.50 & --- & 360 & 200 & -10 & 3.9 & --- & 7a \\
220 & HII & 18:18:16.55 & -15:59:13.50 & --- & 180 & 135 & 80 & 4 & --- & 4,3 \\
227 & SFC & 18:18:46.76 & -16:22:29.30 & --- & 170 & 150 & 80 & 4.6 & --- & 1 \\
228 & HII & 18:18:47.08 & -13:32:56.90 & --- & 2000 & 1200 & -60 & 2 & 0.4 & 4,3,8,2,6 \\
230 & HII & 18:18:54.20 & -16:48:55.70 & --- & 260 & 160 & 90 & 2.7 & 0.3 & 4,2,8 \\
233 & SFC & 18:19:06.85 & -16:33:09.50 & --- & 570 & 280 & 100 & 2.3 & 0.2 & 1,4,2 \\
234 & SFC & 18:19:06.97 & -16:11:20.30 & 300 & --- & --- & --- & 4.7 & <0.1 & 4 \\
236 & SFC & 18:19:11.92 & -16:18:25.60 & --- & 420 & 150 & --- & 2.6 & --- & 1 \\
238 & SFC & 18:19:38.17 & -16:42:43.10 & --- & 650 & 330 & -45 & 2.4 & --- & 1 \\
242 & HII & 18:19:51.20 & -15:55:05.60 & 200 & --- & --- & --- & 2.3 & --- & 4 \\
246 & SFC & 18:20:12.44 & -13:51:05.10 & --- & 450 & 360 & 70 & 2.6 & <0.1 & 7,6 \\
247 & SFC & 18:20:37.42 & -15:37:55.10 & 100 & --- & --- & --- & 3.6 & --- & 1 \\
248 & HII & 18:20:27.49 & -16:07:40.60 & 850 & --- & --- & --- & 2.2 & 0.2 & 4,8,3,2 \\
251 & SFC & 18:20:39.33 & -14:02:41.30 & 450 & --- & --- & --- & 2.6 & --- & 1 \\
253 & SFC & 18:20:56.57 & -15:24:41.80 & 180 & --- & --- & --- & 3.8 & <0.1 & 1,4,6 \\
256 & HII & 18:21:05.67 & -14:17:49.10 & --- & 580 & 340 & 90 & 2.5 & --- & 7 \\
259 & HII & 18:21:09.17 & -14:31:46.20 & --- & 700 & 350 & -40 & 4.3\tablefootmark{a} & 0.5 & 4,3,8 \\
260 & HII & 18:21:10.92 & -15:02:42.00 & --- & 300 & 150 & 80 & 3.3 & --- & 3 \\
261 & SFC & 18:21:12.27 & -16:30:08.20 & 210 & --- & --- & --- & 2.6 & --- & 1 \\
263 & HII & 18:21:04.64 & -14:46:39.40 & --- & 600 & 450 & -10 & 3.9 & <0.1 & 4 \\
265 & SFC & 18:21:44.83 & -14:56:52.90 & --- & 380 & 250 & -60 & 3.9 & --- & 6 \\
270 & HII & 18:21:59.64 & -14:16:11.60 & --- & 360 & 260 & -40 & 3.6\tablefootmark{a} & 0.2 & 4,7 \\
271 & HII & 18:22:04.97 & -14:08:55.40 & --- & 210 & 150 & -30 & 3.6 & --- & 4,3 \\
272 & HII & 18:22:05.14 & -14:48:42.90 & 100 & --- & --- & --- & 3.8 & --- & 1 \\
279 & HII & 18:22:22.95 & -14:35:35.30 & --- & 190 & 120 & -60 & 3.8 & --- & 1 \\
287 & SFC & 18:22:41.01 & -14:27:54.20 & 330 & --- & --- & --- & 2.4 & --- & 7 \\
292 & HII & 18:22:52.78 & -13:59:26.60 & 550 & --- & --- & --- & 4.8 & --- & 6 \\
297 & SFC & 18:23:18.85 & -13:15:50.60 & 240 & --- & --- & --- & 2.2 & <0.1 & 5,4,7 \\
298 & SFC & 18:23:26.37 & -13:49:53.30 & 240 & --- & --- & --- & 4.3 & --- & 7 \\
301 & HII & 18:23:35.90 & -13:09:28.50 & 240 & --- & --- & --- & 6.6 & --- & 5 \\
302 & SFC & 18:23:37.57 & -13:18:52.90 & 210 & --- & --- & --- & 4.2 & --- & 5 \\
313 & HII & 18:24:34.34 & -12:52:02.70 & 300 & --- & --- & --- & 4 & 0.3 & 4,3,8 \\
320 & SFC & 18:25:07.20 & -14:35:12.50 & 400 & --- & --- & --- & 3.4 & --- & 4 \\
322 & SFC & 18:25:09.19 & -12:44:38.70 & --- & 300 & 120 & 100 & 3.5 & <0.1 & 1,4 \\
323 & HII & 18:24:58.83 & -12:01:05.50 & 650 & --- & --- & --- & 4.5 & --- & 6 \\
325 & HII & 18:25:16.95 & -13:14:13.00 & 850 & --- & --- & --- & 4.6 & 0.5 & 4,3,8,2,5 \\
326 & SFC & 18:25:19.61 & -12:53:00.50 & --- & 500 & 230 & 95 & 3.9 & 0.2 & 1,5,7 \\
327 & SFC & 18:25:22.32 & -13:34:38.30 & 320 & --- & --- & --- & 4 & 0.5 & 1,7 \\
328 & HII & 18:25:35.01 & -12:20:58.10 & --- & 660 & 360 & 65 & 4.2 & 0.1 & 4,2,5 \\
330 & SFC & 18:25:28.88 & -13:58:26.00 & 420 & --- & --- & --- & 5.1 & --- & 6 \\
332 & SFC & 18:25:53.36 & -12:06:08.30 & 360 & --- & --- & --- & 2.5 & <0.1 & 1 \\
336 & HII & 18:26:00.56 & -11:52:26.90 & --- & 720 & 440 & -20 & 1.9 & --- & 4,8,3,5 \\
340 & HII & 18:26:20.45 & -12:40:52.20 & 240 & --- & --- & --- & 4.6 & 0.3 & 4,7,6 \\
341 & SFC & 18:26:22.16 & -12:57:16.60 & 180 & --- & --- & --- & 3.6 & --- & 1 \\
342 & HII & 18:26:24.02 & -12:03:09.10 & --- & 200 & 160 & 80 & 2.5 & 0.2 & 8,3 \\
343 & SFC & 18:26:30.07 & -12:32:42.00 & --- & 360 & 300 & 10 & 4.7 & 0.3 & 7,6 \\
348 & SFC & 18:26:46.14 & -12:03:25.70 & 230 & --- & --- & --- & 4.6 & --- & 6 \\
351 & HII & 18:26:44.64 & -12:24:05.30 & --- & 390 & 330 & 200 & 4.5 & 0.2 & 4,8,5 \\
359 & HII & 18:27:03.75 & -12:43:48.30 & 480 & --- & --- & --- & 4.5 & 0.3 & 5,4,2,3 \\
363 & SFC & 18:27:17.69 & -10:36:28.60 & 300 & --- & --- & --- & 2.6 & 0.1 & 1,7 \\
365 & SFC & 18:27:09.45 & -13:19:49.80 & 480 & --- & --- & --- & 4.2 & --- & 5 \\
366 & SFC & 18:27:35.55 & -12:19:14.70 & 290 & --- & --- & --- & 4 & 0.1 & 1 \\
367 & SFC & 18:27:45.09 & -12:46:41.10 & 320 & --- & --- & --- & 4.4 & 0.2 & 5,4 \\
370 & SFC & 18:27:42.08 & -11:36:29.60 & --- & 300 & 180 & -60 & 4.4 & --- & 1 \\
372 & HII & 18:27:52.09 & -12:36:17.50 & 440 & --- & --- & --- & 4.9 & --- & 2,5 \\
373 & SFC & 18:28:06.14 & -11:39:09.90 & --- & 100 & 70 & -60 & 4.2 & --- & 1 \\
376 & SFC & 18:28:20.95 & -11:35:51.60 & --- & 180 & 110 & -50 & 3.5 & <0.1 & 1,4 \\
380 & SFC & 18:28:23.18 & -11:41:19.00 & --- & 260 & 220 & -30 & 4.4 & 0.3 & 1,4,7 \\
381 & SFC & 18:28:23.59 & -11:47:37.90 & 240 & --- & --- & --- & 3.5 & 0.2 & 4,5 \\
386 & SFC & 18:28:48.60 & -11:48:40.00 & --- & 420 & 230 & -60 & 4.6 & --- & 1 \\
389 & SFC & 18:29:18.34 & -12:10:38.60 & --- & 820 & 430 & -10 & 4.2 & --- & 5 \\
390 & SFC & 18:29:14.36 & -11:50:21.70 & 250 & --- & --- & --- & 3.4 & 0.2 & 4,2,5 \\
396 & SFC & 18:29:59.97 & -11:00:30.50 & 330 & --- & --- & --- & 4.2 & 0.2 & 1 \\
399 & SFC & 18:30:30.89 & -11:12:11.90 & --- & 630 & 330 & -30 & 4.7 & --- & 5 \\
400 & SFC & 18:30:30.81 & -11:52:50.40 & 730 & --- & --- & --- & 3.5 & <0.1 & 5,7 \\
406 & HII & 18:18:47.30 & -15:48:58.30 & 220 & --- & --- & --- & 1.9 & --- & 7 \\
407 & HII & 18:19:59.95 & -15:27:06.90 & 560 & --- & --- & --- & 3.7 & --- & 6 \\
26a & SLC & 18:08:49.40 & -19:40:15.00 & 80 & --- & --- & --- & 3.1 & <0.1 & 2,1 \\
30a & SLC & 18:09:08.56 & -19:45:56.20 & 60 & --- & --- & --- & 5.2 & --- & 2 \\
54c & SLC & 18:10:40.07 & -19:10:40.90 & 100 & --- & --- & --- & 3.5 & 0.2 & 1,3,2,4 \\
65a & SLC & 18:11:09.60 & -19:36:32.00 & 96 & --- & --- & --- & 5.2 & <0.1 & 2,6 \\
83c & SLC & 18:12:19.40 & -17:40:12.20 & 70 & --- & --- & --- & 2.2 & 0.4 & 2,4,6 \\
106a & SLC & 18:13:35.58 & -17:18:39.60 & --- & 60 & 35 & -50 & 2.3 & --- & 1 \\
116b & SLC & 18:14:13.22 & -17:25:15.80 & 50 & --- & --- & --- & 4.4 & 0.1 & 3,2,4 \\
122b & SLC & 18:13:43.80 & -17:56:34.00 & --- & 80 & 60 & --- & 4 & 0.6 & 3,2,4,8 \\
185a & SLC & 18:16:50.35 & -16:21:44.80 & --- & 150 & 80 & 60 & 2.4 & 0.3 & 3,2,4,8 \\
203a & SLC & 18:17:32.59 & -17:06:41.60 & 120 & --- & --- & --- & 2.6 & --- & 2 \\
217a & SLC & 18:18:02.99 & -16:17:23.10 & 80 & --- & --- & --- & 3.7 & 0.2 & 1,2,4 \\
218b & SLC & 18:18:03.23 & -16:54:35.50 & --- & 95 & 60 & -80 & 2.5 & 0.4 & 1,2,4 \\
233a & SLC & 18:19:02.22 & -16:39:59.00 & 125 & --- & --- & --- & 2.3 & 0.2 & 1,2,4 \\
233b & SLC & 18:19:13.55 & -16:34:57.10 & 85 & --- & --- & --- & 2.3 & 0.2 & 1,2,4 \\
233e & SLC & 18:19:13.36 & -16:35:04.20 & 100 & --- & --- & --- & 2.3 & 0.2 & 1,2,4 \\
247a & SLC & 18:20:23.82 & -15:38:31.80 & --- & 170 & 80 & 95 & 3.6 & --- & 1 \\
325g & SLC & 18:25:23.77 & -13:19:04.50 & --- & 125 & 80 & -10 & 4.6 & 0.5 & 2,3,4,5,8 \\
325i & SLC & 18:25:22.56 & -13:17:13.40 & 40 & --- & --- & --- & 4.6 & 0.5 & 2,3,4,5,8 \\
328c & SLC & 18:25:22.76 & -12:23:26.90 & 100 & --- & --- & --- & 4.2 & 0.1 & 2,4,5 \\
379a & SLC & 18:28:18.30 & -12:06:26.00 & 170 & --- & --- & --- & 4.4 & --- & 2 \\
390b & SLC & 18:29:26.67 & -11:50:45.60 & 50 & --- & --- & --- & 3.4 & 0.2 & 2,4,5 \\
159a & SLC & 18:16:06.90 & -17:36:27.10 & 65 & --- & --- & --- & 3.8 & 0.4 & 1,2,4,8 \\
162a & SLC & 18:15:40.50 & -18:13:08.00 & 75 & --- & --- & --- & 3.5 & <0.1 & 2,6 \\
160a & SLC & 18:15:25.30 & -17:05:43.70 & 90 & --- & --- & --- & 4.1 & <0.1 & 3,2,4 \\
187f & SLC & 18:16:59.68 & -16:39:26.80 & --- & 90 & 70 & -40 & 4 & 0.2 & 2,4,6 \\
212a & SLC & 18:17:50.60 & -15:53:34.00 & 50 & --- & --- & --- & 2.7 & --- & 2 \\
212e & SLC & 18:17:52.84 & -15:55:16.90 & 50 & --- & --- & --- & 2.7 & --- & 2 \\
253a & SLC & 18:20:55.50 & -15:24:05.00 & 50 & --- & --- & --- & 3.8 & <0.1 & 1,4,6 \\
329a & SLC & 18:25:32.88 & -13:01:51.70 & 120 & --- & --- & --- & 4.5 & --- & 2 \\
372a & SLC & 18:27:43.01 & -12:35:42.70 & 50 & --- & --- & --- & 4.9 & <0.1 & 2,5 \\
397a & SLC & 18:30:02.90 & -12:15:51.00 & 120 & --- & --- & --- & 3.1 & --- & 2 \\
\end{longtable}
\tablebib{(1)~\citet{2013ApJ...770...39E};
(2)~\citet{2012A&A...540A.113T}; (3)~\citet{2013MNRAS.435..400U};
(4)~\citet{2012A&A...544A.146W}; (5)~\citet{2009ApJ...699.1153R};
(6)~\citet{2009ApJ...706..727M}; (7)~\citet{2006ApJ...639..227S};
(8)~\citet{1997MNRAS.291..261W}.}
\tablefoot{
\tablefoottext{a}{These regions had no previous solution for the KDA, but they show shadows against
the NIR background radiation, so we placed them at the near distance solution.}
\tablefoottext{b}{The distance shown in this table is the result of averaging all the distance
estimates available for each source.}
\tablefoottext{c}{The dispersion of all the distance
estimates available for each region, when more than one
distance estimate is available.}}
\end{landscape}}
\subsection{Column density and mass estimation}\label{sec:NH}
Column densities of molecular gas were calculated via
\begin{equation}\label{eq:NH2}
N_{\mathrm{H_{2}}} [\mathrm{cm^{-2}}] =
\frac{RF_{\lambda}}{B_{\lambda}(T_{Dust})\mu
m_{\mathrm{H}}\kappa\Omega},
\end{equation}
where $F_{\lambda}$ and $B_{\lambda}(T)$ are, respectively, the flux and
the blackbody emission as a function of the temperature, $T$, at
870\,$\mu$m. The quantity $\mu$ is the mean molecular weight (assumed
to be 2.8) of the interstellar medium per hydrogen molecule,
$m_{\mathrm{H}}$ is the mass of the hydrogen atom, $\Omega$ is the
beam solid angle, and $R=154$ is the gas-to-dust
ratio~\citep{2011piim.book.....D}. We used a dust absorption
coefficient $\kappa=1.85\mathrm{\,cm^{2}g^{-1}}$ at $870\,\mathrm{\mu}$m, which was calculated by interpolation of
the~\citet{1994A&A...291..943O} dust model of grains with thin ice
mantles and a mean density of $n=10^{6}\mathrm{\,cm^{-3}}$. We
assumed $T=15$\,K for SLCs and SFCs~\citep{2012A&A...544A.146W}, in
agreement with previous dust temperature estimates within infrared
dark clouds~\citep[IRDCs;][]{2010ApJ...723..555P} and in envelopes of
star-forming cores~\citep{2010A&A...518L..87S,2013A&A...551A..98L}.
For \ion{H}{II}\,\, regions we assumed $T=25$\,K. This dust temperature is in
agreement with the average dust temperatures in PDR regions
surrounding \ion{H}{II}\,\, regions ($T=26$\,K), where most of the FIR-submm dust
emission of these objects comes from~\citep{anderson-12}. It also
agrees with the mean temperature found in the central region of
NGC6334 ($T\sim24$\,K), which is an expanding \ion{H}{II}\,\,
region~\citep{2013A&A...554A..42R}. For a better comparison with
previous works, we present the column density data also in units of
visual extinction using a conversion: $N_{\mathrm{H_{2}}} =
0.94\times10^{21}\,A_{V}\,\mathrm{cm^{-2}}\,\mathrm{mag^{-1}}$
~\citep{1978ApJ...224..132B}. The rms noise of the ATLASGAL data
(50\,mJy) corresponds to $A_{V}=4.5$\,mag for both the SFCs and SLCs
and 2.2\,mag\footnote{The difference in the rms values in terms of
$A_{V}$ is due to the temperatures assumed for each evolutionary
class.} for \ion{H}{II}\,\, regions. No saturation problems were found in the
ATLASGAL survey. The optical depth is $\ll1$; therefore, our
measurements do not suffer from optical depth effects in the
high-column density regime~\citep{2009A&A...504..415S}.
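For concreteness, the conversion of Eq.~\ref{eq:NH2} from measured flux to column density can be sketched numerically. The following Python snippet is an illustrative implementation only, not the code used in this work; the Gaussian beam solid angle for an assumed 19.2\,arcsec ATLASGAL beam FWHM and all function names are our own assumptions.

```python
import numpy as np

# Physical constants in cgs units
H = 6.626e-27      # Planck constant [erg s]
C = 2.998e10       # speed of light [cm/s]
KB = 1.381e-16     # Boltzmann constant [erg/K]
MH = 1.674e-24     # hydrogen atom mass [g]

def planck_nu(nu, t):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * t))

def column_density_h2(flux_jy_beam, t_dust, beam_fwhm_arcsec=19.2,
                      wavelength_um=870.0, kappa=1.85, mu=2.8, r_gd=154.0):
    """N(H2) [cm^-2] from Eq. (NH2) for a flux density in Jy/beam."""
    nu = C / (wavelength_um * 1e-4)                    # frequency [Hz]
    theta = beam_fwhm_arcsec / 3600.0 * np.pi / 180.0  # FWHM [rad]
    omega = np.pi * theta**2 / (4.0 * np.log(2.0))     # Gaussian beam [sr]
    flux_cgs = flux_jy_beam * 1e-23                    # [erg s^-1 cm^-2 Hz^-1]
    return r_gd * flux_cgs / (planck_nu(nu, t_dust) * mu * MH * kappa * omega)

# ATLASGAL rms (50 mJy/beam) evaluated at the two assumed dust temperatures
n_cold = column_density_h2(0.05, 15.0)   # SLCs and SFCs
n_warm = column_density_h2(0.05, 25.0)   # HII regions
av_cold = n_cold / 0.94e21               # visual extinction [mag]
```

Since $B_{\lambda}(T)$ grows with $T$, the same flux corresponds to a lower column density at the warmer temperature assumed for the \ion{H}{II}\,\, regions, which is why their rms translates into a smaller $A_{V}$.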
We estimated the total gas mass of each region from
dust continuum emission, assuming that
emission is optically thin:
\begin{equation}\label{mass}
M_{g} = \frac{Rd^{2}F_{\lambda}}{B_{\lambda}(T_{Dust})\kappa},
\end{equation}
where $d$ is the distance to the region. We assume the same values
for the other quantities as in
the column density determination (Eq.~\ref{eq:NH2}).
Masses of the regions cover three orders of magnitude
(Fig.~\ref{fig:mass-hist}).
The masses of the SLCs span $0.2-4\times10^{3}\,\mathrm{M_{\mathrm{\sun}}}$,
the SFCs $0.3-15\times10^{3}\,\mathrm{M_{\mathrm{\sun}}}$,
and the \ion{H}{II}\,\, regions $0.2-200\times10^{3}\,\mathrm{M_{\mathrm{\sun}}}$.
Larger masses for \ion{H}{II}\,\, regions and SFCs are expected since both
have much larger sizes than SLCs (Fig.~\ref{fig:galPlane}).
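A minimal numerical sketch of Eq.~\ref{mass} follows; the 10\,Jy integrated flux and 3\,kpc distance are illustrative values, and the helper names are our own, not part of the original analysis.

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # Planck, c, Boltzmann [cgs]
MSUN = 1.989e33                              # solar mass [g]
KPC = 3.086e21                               # kiloparsec [cm]

def planck_nu(nu, t):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * t))

def gas_mass(flux_jy, d_kpc, t_dust, kappa=1.85, r_gd=154.0,
             wavelength_um=870.0):
    """Total gas mass [M_sun] from Eq. (mass), optically thin emission."""
    nu = C / (wavelength_um * 1e-4)
    mass_g = (r_gd * (d_kpc * KPC)**2 * flux_jy * 1e-23
              / (planck_nu(nu, t_dust) * kappa))
    return mass_g / MSUN

# M scales as d^2, so a fixed distance error of 0.5 kpc propagates into a
# mass error of roughly 2 * (0.5/d): large for nearby, small for far regions.
m_near = gas_mass(10.0, 3.0, 15.0)   # illustrative: 10 Jy at 3 kpc, 15 K
m_far = gas_mass(10.0, 6.0, 15.0)    # same flux at twice the distance
```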
The derived mass and column density values depend on the assumed dust
properties,
specifically on $\kappa_{870\,\mathrm{\mu m}}$, $R$ and $T_{Dust}$.
Both $\kappa_{870\,\mathrm{\mu m}}$ and $R$ are subject to
uncertainties:
$\kappa_{870\,\mathrm{\mu m}}$ values differ by $\sim1$\,dex in different
dust models~\citep{2005ApJ...632..982S,2011ApJ...728..143S}.
Eq.~\ref{eq:NH2} and Eq.~\ref{mass} assume isothermal clouds.
This is clearly an oversimplification,
increasing the uncertainty in the derived masses.
Mass depends also on $d^{2}$, making
uncertainties in distance a major contributor
to the absolute uncertainties.
If we adopt $\Delta d\sim 0.5$\,kpc, nearby regions
are more affected by distance uncertainties (50\% at 1\,kpc)
than the most distant regions (10\% at 5\,kpc).
This assumption agrees with the distance uncertainties
reported by~\citet{2009ApJ...699.1153R}.
We note that the absolute uncertainty in our derived column densities is
very large, potentially larger than a factor of 10.
The relative uncertainties between the evolutionary classes
can be influenced by the different temperature assumptions or
intrinsic differences in the dust properties.
The isothermal assumption introduces differences in the low-column
density regime of the $N$-PDFs, but it has negligible effect in
shaping the column density distribution at high-column densities (see App.~\ref{app:tcomp}).
As for the dust properties, we are not aware of any observational study
suggesting that they change
in molecular clouds at different evolutionary
phases. We therefore assume that the dust properties do not
introduce relative uncertainties between the three molecular cloud
classes defined.
\subsection{Physical properties of the evolutionary classes}
We define and analyze in this work three distinctive evolutionary
classes of objects: SLCs, SFCs and \ion{H}{II}\,\, regions. The objects in these
classes are different in their physical characteristics.
These differences originate predominantly from the fact that the \ion{H}{II}\,\,
regions and SFCs are typically extended regions
(i.e., molecular clouds or even cloud complexes), while SLCs are smaller,
``clump-like'' structures. We quantify here the basic physical properties
of the objects in our three evolutionary classes.
The properties are also listed in Table~\ref{tab:ph-prop}.
Figure~\ref{fig:mass-hist}
shows the mass distribution of our regions. The mass distribution of \ion{H}{II}\,\, regions spans
3\,dex from $10^{2}-10^{5}\mathrm{\,M_{\sun}}$. SFCs have
masses of $10^{2}-10^{4}\mathrm{\,M_{\sun}}$, and SLCs show the narrowest
mass range, $10^{2}-10^{3}\mathrm{\,M_{\sun}}$.
The spread of the distribution of mean
column densities is $\overline{A_{V}}=3-25$\,mag
and it peaks at $\overline{A_{V}}=7$\,mag (see Fig.~\ref{fig:mean-av}).
The $\overline{A_{V}}$ distribution differs in each evolutionary class.
While most SFCs and SLCs have $\overline{A_{V}}\lesssim10$\,mag,
a considerable number of \ion{H}{II}\,\, regions show $\overline{A_{V}}>10$\,mag.
We note that the mean column densities of our sample are overestimated
due to the spatial filtering of ATLASGAL,
and so is the peak of the $\overline{A_{V}}$ distribution.
This effect is more important in \ion{H}{II}\,\, regions and SFCs since
they have larger areas and hence a larger fraction of diffuse
material that is filtered out than the SLCs.
Figure~\ref{fig:mean-av} also shows the size
distribution of each class and of the total sample.
The SLCs have the smallest sizes of the sample with
a mean size of 1.4\,pc and a range of sizes between
1\,pc and 2.5\,pc. The range of sizes of the SFCs
is 2-15\,pc, with a mean of 5.3\,pc.
The \ion{H}{II}\,\, regions have the largest mean size of the three evolutionary
classes, 7\,pc, and also the largest spread, 2-18\,pc.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{./fig/MassHisto_mod.eps}}
\caption{Mass distribution of the molecular cloud regions. Filled green area
shows mass for starless clumps, dashed blue line shows star-forming clouds
and red dashed line shows \ion{H}{II}\,\, regions. Masses are given in solar mass
units.}
\label{fig:mass-hist}
\end{figure}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{./fig/fig5.eps}}
\caption{Mean column density, $\overline{A_{V}}$, and size distribution of
all the regions. In the scatter plot we show
the \ion{H}{II}\,\, regions in red, the SFCs in blue and
the SLCs in green. The histograms show the $\overline{A_{V}}$ and size distributions
of each evolutionary class (same colors) and the whole sample (black).}
\label{fig:mean-av}
\end{figure}
\begin{table}
\caption{Mean physical properties of the evolutionary classes} %
\centering
\begin{tabular}{c c c c}
\hline\hline
&\ion{H}{II}\,\, &SFCs &SLCs \\\hline
Mass [M$_{\sun}$]\tablefootmark{a} &18$\times10^{3}$ &2.7$\times10^{3}$ &1.2$\times10^{3}$\\
$\overline{A_{V}}$ [mag] &10.6 $\pm$ 6.2 &7.0 $\pm$ 2.5 &8.2 $\pm$ 4.2 \\
Size [pc] &7.0 $\pm$ 4.0 &5.3 $\pm$ 2.7 &1.4 $\pm$ 0.6 \\\hline
\label{tab:ph-prop}
\end{tabular}
\end{table}
\section{Results}\label{results}
\subsection{Column density distribution}
We use the column density data to study the column density
distributions of the regions.
In the following, we first analyze the \textit{N}-PDFs and DGMFs.
We then examine the relationship between the
total mass and the column density distribution of the regions.
\subsubsection{\textit{N}-PDFs}\label{sec:pdf}
We first analyze the total \textit{N}-PDFs of the three evolutionary classes.
To construct the \textit{N}-PDFs, we used the mean
normalized column densities $s=\ln\,(A_{V}/\overline{A_{V}})$ (see
Eq.~\ref{eq:log-normal}) of each region. We calculated $\overline{A_{V}}$ as the
mean column density of all the pixels of each region.
The resulting \textit{N}-PDFs were then stacked together
to form the total \textit{N}-PDFs, shown in Fig.~\ref{fig:total-pdfs}
as the black histogram. The three classes show clearly different \textit{N}-PDFs: the \ion{H}{II}\,\, regions
have the widest (or shallowest) \textit{N}-PDF, followed by a slightly narrower (or steeper)
\textit{N}-PDF of SFCs. The SLCs have the narrowest \textit{N}-PDF.
Interpreting the low-column density shape of the \textit{N}-PDFs requires
taking into account two issues.
First, ideally the \textit{N}-PDF should not be affected by how exactly
the field-of-view
toward an individual region is cropped, i.e., it must include all column
density values above a given level.
Second, one must ascertain that the pixels are not dominated by noise or
contamination
from neighbouring regions. To fold these two limitations into one, we
define a
``reliability limit'' of the \textit{N}-PDFs as the minimum column density
value above which all regions of the evolutionary class are well defined by a
closed emission iso-contour (see Appendix~\ref{app:TSL},~\ref{app:HII}
and~\ref{app:SFC}).
These levels are $s=-1.5, -0.75,$ and $0$ for \ion{H}{II}\,\, regions, SFCs, and
SLCs, respectively.
These levels correspond typically to $A_{V} = 2, 4,$ and $9$\,mag,
all at least 1-$\sigma$ above the noise level (50\,mJy; see
Section~\ref{sec:NH}).
The higher reliability limit of the SLCs arises because they
are embedded in \ion{H}{II}\,\, regions and SFCs and are thus surrounded
by emission levels higher than the map noise. The total numbers of pixels
above these
limits are $20\times10^{4}$, $9\times10^{4}$, and $10^{4}$
for \ion{H}{II}\,\, regions, SFCs, and SLCs, respectively.
We note that this definition of the reliability
limit is very conservative; it is set by the lowest iso-contour above
which \emph{all} regions of the evolutionary class show a closed contour.
Most regions, however, have this limit at lower $s$ values. We also
note that systematic uncertainties, such as the dust opacity uncertainty,
are unlikely to affect the relative shapes of the three classes with
respect to each other.
To quantify the shapes of the \textit{N}-PDFs shown in
Fig.~\ref{fig:total-pdfs},
we fit them, equally sampled in the log space, with
a combination of log-normal (see Eq.~\ref{eq:log-normal})
and power-law ($p(s) = c\,s^{p}$) functions.
We used five free parameters in the fit: the width ($\sigma_{s}$)
and mean ($\mu$) of the log-normal function,
the slope ($p$) and constant ($c$) of the power-law, and
the break point between both functions ($s_{t}$).
Furthermore, the total molecular cloud masses
should be recovered when integrating the fitted function, which represents
an additional boundary condition on the fit.
The fitting range was defined as all $s$
values larger than the reliability limit. We
weighted the data points by their Poisson noise.
We obtained the uncertainties of the fitted parameters
by fitting the \textit{N}-PDFs using different bin
sizes~\citep{2014ApJ...787L..18S}.
Results are summarized in Table~\ref{tab:fit-results}.
SLCs are well described by a log-normal \textit{N}-PDF
($\sigma_{s,SLC}=0.5\pm0.1$).
Even though the peak of the \textit{N}-PDF is below the
reliability limit, it is well constrained by the fit
because of the normalization factor in Eq.~\ref{eq:log-normal}.
The \textit{N}-PDFs of \ion{H}{II}\,\, regions and SFCs
are inconsistent with a single log-normal function;
they are better described by a
combination of a log-normal function at low column densities
and a power-law function at high column densities.
The low-column density log-normal portion of \ion{H}{II}\,\,
regions is wider ($\sigma_{s,\ion{H}{II}\,\,}=0.9\pm0.09$)
than that of SFCs ($\sigma_{s,SFC}=0.5\pm0.05$).
The mean-normalized column density at which the
\textit{N}-PDFs transition from log-normal to power-law
is similar in both classes, \ion{H}{II}\,\, regions and SFCs: $s_{t}=1.0\pm0.2$.
We also find differences in the power-law slopes of the \textit{N}-PDFs.
The power-law slope is clearly shallower for \ion{H}{II}\,\, regions ($p=-2.1\pm0.1$)
than for SFCs ($p=-3.8\pm0.3$).
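The piecewise log-normal plus power-law fit described above can be illustrated on synthetic data. The sketch below is our own simplified version (it scans a grid of break points $s_{t}$ and fits the two regimes separately, rather than fitting all five parameters jointly as done in the paper); the function names and synthetic parameters are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal(s, sigma, mu):
    """Normalized log-normal N-PDF."""
    return np.exp(-(s - mu)**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def fit_npdf(s, pdf):
    """Grid over the break point s_t; fit a log-normal below it and a
    power law c*s^p (a straight line in log-log space) above it."""
    best = None
    for s_t in np.linspace(0.3, 2.0, 35):
        lo = s < s_t
        hi = (s >= s_t) & (pdf > 0)
        if lo.sum() < 5 or hi.sum() < 5:
            continue
        try:
            (sigma, mu), _ = curve_fit(lognormal, s[lo], pdf[lo],
                                       p0=[0.7, 0.0], maxfev=5000)
        except RuntimeError:
            continue
        p, lnc = np.polyfit(np.log(s[hi]), np.log(pdf[hi]), 1)
        resid = np.concatenate([lognormal(s[lo], sigma, mu) - pdf[lo],
                                np.exp(lnc) * s[hi]**p - pdf[hi]])
        chi2 = np.sum(resid**2)
        if best is None or chi2 < best[0]:
            best = (chi2, abs(sigma), mu, p, np.exp(lnc), s_t)
    return best[1:]

# Synthetic HII-like N-PDF: sigma = 0.9, break at s_t = 1.0, slope p = -2.1
s = np.linspace(-2.0, 4.0, 80)
c_true = lognormal(1.0, 0.9, 0.0)        # continuity at s_t = 1 (1^p = 1)
pdf = np.where(s < 1.0, lognormal(s, 0.9, 0.0), c_true * np.abs(s)**-2.1)
sigma_f, mu_f, p_f, c_f, st_f = fit_npdf(s, pdf)
```

On this noiseless test the grid search recovers the input width, slope, and break point, which is the basic sanity check one would run before fitting the observed histograms.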
\begin{figure*}
\centering
\includegraphics[width=0.33\textwidth]{./fig/totalepss_new_HII_paper.eps}
\includegraphics[width=0.33\textwidth]{./fig/totalepss_new_PSC_paper.eps}
\includegraphics[width=0.33\textwidth]{./fig/totalepss_new_SLC_paper.eps}
\caption{Total mean normalized column density PDFs of \ion{H}{II}\,\, regions (left),
SFCs (center) and SLCs (right).
All panels: the horizontal axis shows mean normalized column densities,
$s=\ln(A_{V}/\overline{A_{V}})$.
Vertical error bars
show Poisson standard deviation, $\sigma_{poisson} \propto \sqrt{N}$.
The best-fit curves assuming a combination of log-normal and
power-law functions are indicated, respectively, by red and green solid lines,
with the fit errors indicated as shaded regions of the same colors.
The gray shaded regions indicate data below the reliability limit.
These data were excluded from the fit. Blue shaded regions
show the range of values obtained for
the mean-normalized column density value at which \textit{N}-PDFs
deviate from a log-normal to a power-law like function, $s_{t}$.
}
\label{fig:total-pdfs}
\end{figure*}
\subsubsection{Dense Gas Mass Fraction}\label{sec:dgmf}
In Section~\ref{intro} we defined the DGMFs as the gas mass
above a column density threshold, $M(A_{V} \geq A_{V}')$, relative to the
total mass of the cloud (see Eq.~\ref{dgmf}).
Fig.~\ref{fig:dgmf-mean} shows the mean DGMFs of each evolutionary class.
Generally, \ion{H}{II}\,\, regions exhibit larger reservoirs of
high-column density gas than the SLC and SFC regions.
We quantified the shapes of the mean DGMFs by fitting
them with a combination of exponential
($\propto e^{\alpha A_{V}}$)
and power-law ($\propto A_{V}^{\beta}$) functions,
leaving both exponents and the break point as
free parameters and weighting each point by the
Poisson standard deviation.
Fit errors were calculated as in Section~\ref{sec:pdf}, resulting in
parameter value uncertainties of 10\%-15\%.
While the mean DGMF of the SLCs is well fitted by an exponential,
\ion{H}{II}\,\, regions and SFCs transition from an exponential to a power-law
shape at $A_{V}\geq20$\,mag.
This change is evidently linked to the change from log-normal to
power-law shape in the \textit{N}-PDF because the DGMFs are an integral of
the \textit{N}-PDF.
\ion{H}{II}\,\, regions show the shallowest mean DGMF ($\alpha = -0.06$), followed by
SLCs ($\alpha =
-0.11$) and SFCs ($\alpha = -0.14$).
In the power-law portion of the DGMFs, \ion{H}{II}\,\, regions
are also shallower ($\beta=-1.0$) than SFCs ($\beta=-2.1$).
The amount of mass enclosed by the power-law DGMF is 30\% of the
total mass in \ion{H}{II}\,\, regions, and almost a factor of three lower,
10\%, in the SFCs.
The mean DGMF of \ion{H}{II}\,\, regions above $A_{V}=300$\,mag is
dominated by regions \#4, \#55 and \#122 (see Table~\ref{tab:regions}).
This flat tail is built up by less than 1\% of the pixels in
each of the mentioned regions, and hence, is not representative
of the whole \ion{H}{II}\,\, sample.
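The DGMF itself is a cumulative mass fraction and is straightforward to compute from a map's pixel column densities. The sketch below uses synthetic log-normally distributed $A_{V}$ values; the cloud parameters and function names are illustrative assumptions, not our data.

```python
import numpy as np

def dgmf(av_pixels, av_thresholds):
    """dG(Av') = M(Av >= Av') / M_total.  For fixed temperature, distance
    and pixel size each pixel's mass is proportional to its column density,
    so the mass ratio reduces to a ratio of column density sums."""
    total = av_pixels.sum()
    return np.array([av_pixels[av_pixels >= a].sum() / total
                     for a in av_thresholds])

# Synthetic cloud: log-normally distributed column densities around 7 mag
rng = np.random.default_rng(0)
av = np.exp(rng.normal(np.log(7.0), 0.5, 20000))
grid = np.linspace(2.0, 30.0, 29)
dg = dgmf(av, grid)

# Slope of the exponential regime, dG ~ exp(alpha * Av), from a linear
# fit of ln dG against Av
mask = dg > 0
alpha = np.polyfit(grid[mask], np.log(dg[mask]), 1)[0]
```

By construction the DGMF decreases monotonically with the threshold $A_{V}'$, and a shallower (less negative) $\alpha$ means a larger reservoir of high-column density gas.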
\begin{table}
\caption{Results of the best-fit parameters to the total \textit{N}-PDFs
and DGMFs.}
\centering
\begin{tabular}{c | c c c | c c }
\hline\hline
& & \textit{N}-PDFs & & & DGMFs \\
\hline
& $\sigma_{s}$\tablefootmark{a} & $p$\tablefootmark{b}
& $s_{t}$\tablefootmark{c} &
$\alpha$\tablefootmark{d} & $\beta$\tablefootmark{e} \\\hline
\ion{H}{II}\,\, & 0.9$\pm$0.09 & -2.1$\pm$0.1 & 1.0$\pm$0.2
 & -0.06\tablefootmark{f} & -1.0 \\
SFCs & 0.5$\pm$0.05 & -3.8$\pm$0.3 & 1.0$\pm$0.2
& -0.14 & -2.1 \\
SLCs & 0.5$\pm$0.1 & --- & ---
 & -0.11 & --- \\\hline
\label{tab:fit-results}
\end{tabular}
\tablefoot{
\tablefoottext{a}{Standard deviation of the log-normal portion of the
$N$-PDFs.}
\tablefoottext{b}{Slope of the power-law portion of the $N$-PDFs.}
\tablefoottext{c}{Transition from log-normal to power-law portion of the
\textit{N}-PDFs in mean-normalized column densities.}
\tablefoottext{d}{Slope of the exponential portion of the DGMFs.}
\tablefoottext{e}{Slope of the power-law portion of the DGMFs.}
\tablefoottext{f}{Relative errors of the DGMF slopes are $\sim$10\%.}
}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.33\textwidth]{./fig/meanDGMFs_new_HII_both.eps}
\includegraphics[width=0.33\textwidth]{./fig/meanDGMFs_new_PSC.eps}
\includegraphics[width=0.33\textwidth]{./fig/meanDGMFs_new_SLC.eps}
\caption{Mean DGMFs of \ion{H}{II}\,\, regions (left),
SFCs (center) and SLCs (right).
Solid colored lines show mean normalized
DGMFs. DGMFs were normalized to the reliability limit
of each evolutionary class: $A_{V}=2,\,4,\,9$\,mag for
\ion{H}{II}\,\, regions, SFCs and SLCs, respectively.
Colored dashed lines show the
fit of the DGMFs with exponential functions. Dashed-dotted colored
lines show fits with power-law tails in the higher $A_{V}$ range.
Grey shaded regions show
statistical Poisson errors of the DGMFs.
The inset in the left panel shows the whole mean DGMF of \ion{H}{II}\,\, regions
up to $A_{V}=1000$\,mag.}
\label{fig:dgmf-mean}
\end{figure*}
\subsubsection{Relationship between the region's mass and column density
distribution}\label{sec:massDgmf}
Does the dense gas mass fraction of a region depend on its mass
and thereby affect the SFR--cloud mass relation
presented by~\citet{2012ApJ...745..190L}?
To answer this question, we analyze the DGMFs of each evolutionary
class divided into five mass intervals (listed in Table~\ref{tab:dgmf-mass}),
each containing at least nine regions.
\begin{table}
\caption{Mass intervals of each evolutionary class in
Fig.~\ref{fig:dgmf-mass} }%
\centering
\begin{tabular}{c c c c}
\hline\hline
[$\mathrm{M_{\sun}}$]& \ion{H}{II}\,\, & SFCs & SLC \\
\hline
$M_{i.\,1}$ & --- & $<10^{3}$ &$<10^{3}$ \\
$M_{i.\,2}$ & $1-2\times10^{3}$ & $1-2\times10^{3}$ &$1-2\times10^{3}$ \\
$M_{i.\,3}$ & $2-5\times10^{3}$ & $2-5\times10^{3}$ &$2-5\times10^{3}$ \\
$M_{i.\,4}$ & $0.5-1\times10^{4}$ & $0.5-1\times10^{4}$ & --- \\
$M_{i.\,5}$ & $>10^{4}$ & --- & --- \\
\hline
\label{tab:dgmf-mass}
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.33\textwidth]{./fig/MassIntervalDGMFs_new_HII.eps}
\includegraphics[width=0.33\textwidth]{./fig/MassIntervalDGMFs_new_PSC.eps}
\includegraphics[width=0.33\textwidth]{./fig/MassIntervalDGMFs_new_SLC.eps}
\caption{Mass-binned average DGMFs for each evolutionary class. Each line
shows the DGMF for each of the mass intervals listed in the
corresponding panel and defined in Table~\ref{tab:dgmf-mass}. Dotted lines,
dotted-dashed lines, dashed lines, and solid lines progress from the least
to the most massive bins, respectively. The DGMFs were normalized
following the procedure described in Section~\ref{sec:dgmf} and shown in
Fig.~\ref{fig:dgmf-mean}.
}
\label{fig:dgmf-mass}
\end{figure*}
Figure~\ref{fig:dgmf-mass} shows the mean DGMFs of each mass
interval for the three evolutionary classes.
In all evolutionary classes, the most massive regions
have shallower DGMFs
than the less massive regions.
We fit the mean DGMFs with exponential and
power-law functions as described in Sect.~\ref{sec:dgmf}.
Most DGMFs could not be fitted well with the combination of both
functions over their entire column density range.
Only the DGMFs of the most massive SFCs and \ion{H}{II}\,\, regions
required two component functions; DGMFs of less massive regions are well
described by an exponential alone. Exponents derived from
this analysis are shown in Table~\ref{tab:dgmf-mass-exp}.
In all evolutionary classes, the exponent of the
exponential function, $\alpha$, increases
with mass (see Table~\ref{tab:dgmf-mass-exp}).
In order to further investigate this correlation, we repeated the
same fitting procedure for each individual region. Results are shown in
Fig.~\ref{fig:mass-dgmf-fits}.
The top panel of Fig.~\ref{fig:mass-dgmf-fits} shows
the relationship between $\alpha$ and
the mass of each region: $\alpha\propto M^{-0.43\pm0.05}$,
which has a correlation coefficient $r=0.64$ and
a significance value $p=0.18$.
The fit parameters and their errors were obtained
from a Monte-Carlo simulation of 10$^{6}$ cycles. On each cycle
we selected a random sample of points and fitted the resulting data set.
We adopt the average of the best fit parameters obtained on each cycle
as the best fit values and their standard deviation as the
error of the fit.
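The Monte Carlo procedure for the fit uncertainties can be sketched as follows. The generated masses and slopes below are hypothetical stand-ins for the per-region fit results, and the subsample fraction is our own choice; only the resampling scheme itself follows the description above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-region results: masses over 2.5 dex and DGMF slopes
# alpha drawn around a power law in M with 15% multiplicative scatter
mass = 10.0 ** rng.uniform(2.5, 5.0, 60)
alpha = -(mass / 1e3) ** -0.43 * rng.normal(1.0, 0.15, 60)

def mc_powerlaw_fit(x, y, n_cycles=2000, frac=0.8):
    """Fit random subsamples repeatedly; the mean of the fitted slopes is
    taken as the best-fit value and their standard deviation as its error."""
    slopes = np.empty(n_cycles)
    for i in range(n_cycles):
        idx = rng.choice(x.size, int(frac * x.size), replace=False)
        # straight-line fit in log-log space; -y > 0 since alpha < 0
        slopes[i] = np.polyfit(np.log10(x[idx]), np.log10(-y[idx]), 1)[0]
    return slopes.mean(), slopes.std()

slope, slope_err = mc_powerlaw_fit(mass, alpha)
```

The spread of the subsample fits directly measures how sensitive the derived exponent is to individual regions, which is the same question addressed below by removing the most massive \ion{H}{II}\,\, regions.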
It could be argued that the correlation between $\alpha$ and
mass is dominated by the most massive ($M>3\times10^{4}M_{\sun}$) \ion{H}{II}\,\, regions.
To establish whether the correlation strength depends
strongly on these few massive clouds we also explored
this correlation without those extreme points.
The correlation coefficient is somewhat lower in this case, $r=0.56$,
with a significance value $p=0.21$.
However, there is no significant difference in the
resulting fit ($\alpha\propto M^{-0.40\pm0.08}$).
Power-law exponents of DGMFs also exhibit
a correlation with mass: $\beta\propto M^{-0.16\pm0.03}$ (see middle panel
of Fig.~\ref{fig:mass-dgmf-fits}).
The larger scatter seen in the data from the exponential fits
relative to that seen in power-law fits may indicate
that the power-law regimes of DGMFs are much better constrained
than the exponential regimes.
\begin{table}
\caption{Slopes of the exponential and power-law fits to DGMFs, $\alpha$,
$\beta$
for the mass ranges presented in Table~\ref{tab:dgmf-mass}.}
\centering
\begin{tabular}{c c c c c c}
\hline\hline
& $M_{i.\,1}$ & $M_{i.\,2}$ & $M_{i.\,3}$ & $M_{i.\,4}$ & $M_{i.\,5}$\\
\hline
$\alpha$(\ion{H}{II}\,\,) & --- & -0.25 & -0.22 & -0.06 & -0.04 \\
$\alpha$(SFCs) & -0.29 & -0.20 & -0.18 & -0.10 & --- \\
$\alpha$(SLC) & -0.32 & -0.19 & -0.09 & --- & --- \\
$\beta$(\ion{H}{II}\,\,) & --- & --- & --- & -0.59 & -0.61 \\
$\beta$(SFCs) & --- & --- & -1.03 & --- & --- \\
\hline
\label{tab:dgmf-mass-exp}
\end{tabular}
\end{table}
\begin{figure}[h!!]
\centering
\includegraphics[scale = 0.38]{./fig/dgmfExpFits_New.eps}\\
\includegraphics[scale = 0.38]{./fig/dgmfPLfits_New.eps}\\
\includegraphics[scale = 0.38]{./fig/fDG_lada.eps}
\caption{\emph{Top:} relationship between the mass of the
regions [$M_{\sun}$] and
the exponent of the exponential fit to the DGMFs ($\alpha_{exp}$).
Black solid line shows the best fit to the data
and the shaded region shows its $\sigma$ error. The dotted and
dashed lines show the best fit when the most massive
\ion{H}{II}\,\, regions are removed. Colors indicate the
evolutionary class of each point as indicated. \emph{Middle:} relationship
between
mass of the regions and the
slope of the power-law range of the DGMFs.
Black line shows the best fit to the data
and the shaded region shows its $\sigma$ error.
\emph{Bottom: }relationship between
the mean gas mass surface density of the MCs, $\Sigma_{\mathrm{mass}}$,
and the dense gas mass fraction of gas,
$f_{\mathrm{DG}}=\frac{M(A_{V}>7.0\,\mathrm{mag})}{M_{\mathrm{tot}}}$.
The crosses show the $f_{\mathrm{DG}}$ obtained
integrating the exponential regime of the DGMFs of each region
in the range $A_{V}=0-7$\,mag (see third paragraph in Sect.~\ref{sec:4.2}).
Black line shows a linear fit to the data in the range
$\Sigma_{\mathrm{mass}}=50-200\,\mathrm{M_{\sun}pc^{-2}}$.
Vertical dashed line at
$\Sigma_{\mathrm{mass}}=116\,\mathrm{M_{\sun}pc^{-2}}$~\citep{2010ApJ...724..687L,2012ApJ...745..190L}
indicates the threshold for the dense gas.}
\label{fig:mass-dgmf-fits}
\end{figure}
\section{Discussion}\label{disc}
\subsection{\textit{N}-PDFs as a measure of the evolutionary stage of objects.}\label{sec:pdf-evol}
The total \textit{N}-PDFs of different evolutionary classes exhibit clear differences;
these differences can be linked to differences in the
mechanisms that drive the evolution of objects within the various classes.
The \textit{N}-PDF of SLCs is well described by a single log-normal function
(see Fig.~\ref{fig:total-pdfs}). This agrees with previous observations of
starless low-mass clouds~\citep{2009A&A...508L..35K} or starless regions of
star-forming clouds~\citep{2012A&A...540L..11S,2013ApJ...766L..17S,2013A&A...554A..42R}.
In particular, this simple log-normal form agrees with predictions for
turbulence-dominated media from numerical
simulations~\citep{1997ApJ...474..730P,2001ApJ...557..727V}.
In contrast, the total \textit{N}-PDFs of star-forming clouds, i.e.,
\ion{H}{II}\,\, regions and SFCs, show two components that can be described by
log-normal and power-law functions.
The power-law components of the \textit{N}-PDFs of the \ion{H}{II}\,\, regions
are shallower than those of the SFCs. Previous studies have found
that the power-law slopes are within $p=[-1.5, -3.3]$, with shallower
slopes related to most active star-forming regions.
In the only study with a resolution similar to
ours,~\citet{2013A&A...554A..42R}
found a non-star-forming region in NGC\,6334 to have a steep \textit{N}-PDF
slope\footnote{~\citet{2013A&A...554A..42R} quote the equivalent radial
density profile ($\kappa$), which can be related with the slope of
the power-law tail of the \textit{N}-PDF via $p = -2/(\kappa-1)$.}
($p=-5.7$), moderately star-forming regions to have shallower slopes
($p=-3.3, -3.0$), and an \ion{H}{II}\,\, region to have the shallowest slope ($p=-1.5$).
This trend of shallower \textit{N}-PDFs in cloud regions that
contain \ion{H}{II}\,\, regions is similar to what we find in our work.
Theories and simulations that consider turbulent gas under the influence
of gravity predict power-law-like tails for \textit{N}-PDFs with exponents
comparable to what is observed~\citep{2011ApJ...727L..20K,2013ApJ...763...51F},
possibly featuring flattening of the power-law over time-scales relevant for
star formation~\citep{2011MNRAS.416.1436B,2013ApJ...763...51F,2014ApJ...781...91G}.
~\citet{2011ApJ...727L..20K} showed that a collapsing spherical
cloud with a power-law density distribution, $\rho \propto r^{-\kappa}$,
will have a power-law \textit{N}-PDF with a slope of $p = -2/(\kappa-1)$.
The power-law slopes that we observe (see Table~\ref{tab:fit-results})
indicate $\kappa=1.9$ and 1.5 for the \ion{H}{II}\,\, regions and SFCs, respectively.
The former is very close to the value $\kappa=2$ of a collapsing isothermal
sphere~\citep{1987ARA&A..25...23S}, suggesting that the density
distribution of \ion{H}{II}\,\, regions may be dominated by self-gravity.
The value of $\kappa=1.5$ we find for SFCs can also be
indicative of a collapse slowed down by turbulence-driving
effects~\citep{2014ApJ...781...91G}.
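The density-profile exponents quoted above follow from inverting the \citet{2011ApJ...727L..20K} relation, $\kappa = 1 - 2/p$; a minimal helper (the function name is ours):

```python
def kappa_from_pdf_slope(p):
    """Radial density-profile exponent kappa (rho ~ r^-kappa) of a
    collapsing sphere, given the slope p of the N-PDF power-law tail,
    obtained by inverting p = -2 / (kappa - 1)."""
    return 1.0 - 2.0 / p
```

For example, the isothermal-sphere slope $p=-2$ gives $\kappa=2$, and the steep non-star-forming NGC\,6334 tail $p=-5.7$ gives $\kappa\approx1.35$.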
We note a caveat in this analysis. Our SFCs and \ion{H}{II}\,\, regions are unlikely
to be close to spheres and their large sizes make them unlikely to be
under general free-fall collapse.
However, these regions are composed of numerous smaller ATLASGAL clumps (see
Fig.~\ref{fig:galPlane}) that may
be closer to spherical symmetry, and we are averaging the emission of all
these smaller clumps. Indeed, the density profile exponent of our SFCs is
similar to that found by~\citet{2002ApJ...566..945B} in a sample of small, high-mass
star-forming objects, which correspond to our definition of SFCs.
Recent works based on \emph{Herschel} observations have
explored possible effects of other processes (e.g. ionising radiation or shock
compression) on the \textit{N}-PDFs of \ion{H}{II}\,\, regions~\citep{2012A&A...540L..11S,2013ApJ...766L..17S,2014A&A...564A.106T}.
~\citet{2014A&A...564A.106T} report \textit{N}-PDFs with two log-normal components. They relate
the log-normal component at low column-densities to the
turbulent motions of the gas and
the component at high-column densities to ionization pressure.
They also suggest that the presence of these double-peaked \textit{N}-PDFs
depends on the relative importance of ionizing and turbulent pressures.
The total \textit{N}-PDF of our \ion{H}{II}\,\, regions, composed of 60
individual regions, does not exhibit
such behavior. This could originate from a combination
of several factors: \textit{i)} the low-column density component detected by
~\citet{2014A&A...564A.106T} is at column densities of $A_{v}\lesssim6$\, mag.
These column densities are generally filtered out from
the ATLASGAL data; \textit{ii)} the size-scales of the molecular clouds
studied in~\citet{2014A&A...564A.106T} and this work
are different, and it may be that the ionization front
of the \ion{H}{II}\,\, regions is not spatially resolved in our observations.
The above models offer an attractive possibility to link the observed
\textit{N}-PDF tails to self-gravitating gas in molecular clouds. However, it has not yet
been shown observationally that the power-law tails are definitely caused by
self-gravity; an alternative interpretation has been proposed
by~\citet{2011A&A...536A..48K} who suggested that the overall pressure conditions in the clouds may play
a role in producing the observed power-law-like behavior in low-mass molecular
clouds.
\subsection{Dense Gas Mass Fraction in molecular clouds}\label{sec:4.2}
With our cloud sample, we are able to study the DGMFs
of molecular clouds over a relatively wide dynamic range of
column densities and separately in various evolutionary classes.
The continuous DGMF functions (Eq.~\ref{dgmf}) allow for a more complete
census of the dense gas in the
clouds than the analysis of the ratios of two tracers, e.g., of
CO emission and dust emission.
We find that the DGMFs of \ion{H}{II}\,\, regions are shallower
than those of SFCs and SLCs.
This suggests a direct relation between
the star-forming activity of molecular clouds
and their relative dense gas mass fraction.
Similar results have also
been previously found in
nearby regions~\citep{2009ApJ...703...52L,2010ApJ...724..687L,2009A&A...508L..35K,2013A&A...549A..53K}
and filamentary clouds~\citep{2010A&A...518L.102A}.
We detect a clear correlation between the DGMF slope
and cloud mass (see Fig.~\ref{fig:dgmf-mass}).
Previously,~\citet{2014ApJ...780..173B}
found no correlation between molecular cloud mass and the dense
gas fraction in a large sample of molecular clouds.
They defined the dense gas fraction as the ratio of dust
emission-derived mass, traced with
1\,mm flux, above $A_{V}=9.5$\,mag to
CO-derived mass: $f_{\mathrm{DG}}=M_{\mathrm{dust}}/M^{\mathrm{CO}}_{GMC}$.
Their result implies that there is no correlation between the
mass of CO-traced gas ($A_{V}\sim3-8$\,mag) and
the mass of gas at column densities $A_{V}>9$\,mag.
Unfortunately, we do not measure the CO mass
of our MCs and therefore we cannot directly compare
our results with those of~\citet{2014ApJ...780..173B}.
The correlation we find between the molecular cloud masses
and the slope of DGMFs suggests
that the dense gas fraction depends on the mass of moderately dense gas
($A_{V}\gtrsim10$\,mag) rather than the CO mass of the clouds.
~\citet{2010ApJ...724..687L,2012ApJ...745..190L} suggested that
star formation rates depend linearly on the amount of dense gas in
molecular clouds: $\Sigma_{\mathrm{SFR}}\propto f_{\mathrm{DG}}\Sigma_{\mathrm{mass}}$,
with $f_\mathrm{DG} = M(A_\mathrm{V} > 7 \ \mathrm{mag}) / M_\mathrm{tot}$.
Combining this relation with ~\citet{2011ApJ...739...84G},
who derived the relation $\Sigma_{\mathrm{SFR}}\propto\Sigma^{2}_{\mathrm{mass}}$,
suggests $f_{\mathrm{DG}}\propto\Sigma_{\mathrm{mass}}$.
We find that this correlation indeed exists in the
range $\Sigma_{\mathrm{mass}}=50-200\,\mathrm{M_{\sun}pc^{-2}}$
(see Fig.~\ref{fig:mass-dgmf-fits}).
At higher surface densities the relationship flattens at $f_{\mathrm{DG}}\cong0.8$,
suggesting that the maximum amount of dense gas that a MC can harbor is
around 80\% of its total mass. Consequently, the maximum $\Sigma_{\mathrm{SFR}}$ of a molecular
cloud is reached at $f_{\mathrm{DG}}\cong0.8$. This value depends on the definition
of the column density threshold ($A_{V}^{\mathrm{th}}$) of the dense gas,
becoming lower for higher values of $A_{V}^{\mathrm{th}}$.
The spatial filtering of ATLASGAL data (see Appendix~\ref{sec:herComp})
results in overestimated $f_{\mathrm{DG}}$ values. We therefore propose
$f_{\mathrm{DG}}\cong0.8$ as an upper limit to the actual maximum $f_{\mathrm{DG}}$ of a MC.
The overestimation of the $f_{\mathrm{DG}}$ values derived
above can be studied using DGMFs.
In general, DGMFs have been shown to follow an exponential function,
$\propto e^{\alpha A_{V}}$, down to low column densities~\citep{2009A&A...508L..35K,2013A&A...549A..53K}.
We adopted the $\alpha$ values calculated in Section~\ref{sec:massDgmf}
and integrated the exponential DGMF in the range
$A_{V}=0-7$\,mag to obtain an estimate of $f_{\mathrm{DG}}$.
The result is shown with crosses in bottom panel of Fig~\ref{fig:mass-dgmf-fits}.
The mean overestimation factor of $f_{\mathrm{DG}}$ in SFCs and \ion{H}{II}\,\, regions
is $\sim$2 and $\sim$1.3, respectively.
We did not include the SLCs in this experiment
because their reliability limit is $A_{V}=9$\,mag
and therefore they have $f_{\mathrm{DG}}=1$ (i.e.,
all their mass is enclosed in regions with $A_{V}>7$\,mag).
Our data can also help to understand the $SFR$ - dense gas mass relation
suggested by~\citet{2012ApJ...745..190L}. The $SFR$ - dense gas mass relation
shows significant scatter of star formation rates for a given dense gas mass,
about 0.6\,dex (see Fig.~2 of~\citet{2012ApJ...745..190L} and
Fig.~\ref{fig:fDGmass}).
This scatter shows that not all clouds with the same amount of dense gas
form stars with the same rate. To gain insight into this, we calculated
the dense gas mass fractions and star formation rates for our regions as
defined by~\citet{2012ApJ...745..190L},
i.e., $SFR = 4.6 \times 10^{-8} f_\mathrm{DG} M_\mathrm{tot}$\,M$_{\sun}$\,yr$^{-1}$.
Figure~\ref{fig:fDGmass} shows the $SFR$ - dense gas mass relation with
data points
from~\citet{2010ApJ...724..687L}. The figure also shows the mean SFR of
our regions in six mass bins, with error bars showing the relative standard
deviation of $f_\mathrm{DG}$. The standard deviations are also listed in
Table~\ref{tab:scatter}.
The relative standard deviation of $f_\mathrm{DG}$ over the entire mass range
of our regions is 0.71, which is slightly higher than the relative scatter of
SFR in~\citet{2012ApJ...745..190L}, $\sigma/\overline{f_{\mathrm{DG}}}=0.56$. We conclude that
the scatter in star formation rates for a given dense gas mass can
originate
from differences in dense gas fractions, i.e., in the total masses of
clouds for a given
dense gas mass. This, in turn, suggests that the dense gas mass is not the
only ingredient
affecting the star formation rate, but the lower-density envelope of the cloud
also plays
a significant role. However, we note the caveat that ATLASGAL filters out
low column densities, which may make the dense gas fractions we derive not
directly comparable with those of~\citet{2012ApJ...745..190L}, which were
derived using dust extinction data.
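The star formation rates used in this comparison follow directly from the \citet{2012ApJ...745..190L} scaling quoted above; a minimal sketch (the function name is ours):

```python
def sfr_lada(m_tot_msun, f_dg):
    """Star formation rate in Msun/yr from the Lada et al. (2012)
    scaling SFR = 4.6e-8 * f_DG * M_tot, with M_tot in Msun."""
    return 4.6e-8 * f_dg * m_tot_msun
```

For example, with the sample-wide mean $f_{\mathrm{DG}}=0.39$ (Table~\ref{tab:scatter}), a $10^{4}\,\mathrm{M_{\sun}}$ cloud gives $SFR \approx 1.8\times10^{-4}\,\mathrm{M_{\sun}\,yr^{-1}}$.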
\begin{table}
\caption{Statistics of $f_{\mathrm{DG}}$ in this work and
in~\citet{2012ApJ...745..190L}.}
\centering
\begin{tabular}{c c c c}
\hline\hline
$M_{\mathrm{tot}}$ & $\overline{f_{\mathrm{DG}}}$ & $\sigma/(\overline{f_{\mathrm{DG}}})$ & \# of
regions \\
\hline
This work & & & \\\hline
$<0.8\times10^{3}$ & $0.24$ & $0.22$ & 13 \\
$0.8-2.2\times10^{3}$ & $0.29$ & $0.73$ & 42 \\
$2.2-6.0\times10^{3}$ & $0.31$ & $0.64$ & 39 \\
$6.0-17\times10^{3}$ & $0.41$ & $0.50$ & 27 \\
$17-46\times10^{3}$ & $0.58$ & $0.33$ & 10 \\
$>46\times10^{3}$ & $0.83$ & $0.16$ & 4 \\
\textbf{Entire range} & \textbf{0.39 }& \textbf{0.71} & \textbf{135}\\
\hline
\citet{2012ApJ...745..190L} & & & \\\hline
$0.8-100\times10^{3}$ & $0.11$ & $0.56$ & 11\\
\hline
\label{tab:scatter}
\end{tabular}
\end{table}
\begin{figure}[h!!]
\centering
\includegraphics[width = 0.5\textwidth]{./fig/fDG_scatter_check.eps}
\caption{SFR as defined in~\citet{2012ApJ...745..190L}
for different mass ranges of SFCs and \ion{H}{II}\,\, regions.
Red crosses show data from~\citet{2012ApJ...745..190L}.
Solid black vertical lines show the standard deviation, $\sigma$, for
each mass bin for our study. Black dashed line shows the constant value $f_{\mathrm{DG}}=1$.}
\label{fig:fDGmass}
\end{figure}
\subsection{Evolutionary time-scales of the evolutionary classes as indicated by their \textit{N}-PDFs}
If \textit{N}-PDFs evolve during the lives of molecular clouds, could they give us
information about the evolutionary timescales of the clouds in the three classes we have defined?
\citet{2014ApJ...781...91G} have developed an analytical model which
predicts the evolution of the \textit{$\rho$}-PDFs of a system in
free-fall collapse.
They estimate the relative evolution time-scale, $t_{\mathrm{E}}$,
from the free-fall time at the mean
density, $\overline{\rho}$, of the molecular cloud,
$t_{\mathrm{ff}}(\overline{\rho})$,
and the density at which the \textit{$\rho$}-PDFs begin to show a
power-law shape, $\rho_{\mathrm{tail}}$
\begin{equation}\label{giri}
t_{\mathrm{E}} = \sqrt{0.2\frac{\overline{\rho}}{\rho_{\mathrm{tail}}}}t_{\mathrm{ff}}(\overline{\rho}).
\end{equation}
The model also predicts the mass fraction
of gas in regions with $\rho > \rho_{\mathrm{tail}}$.
We denote this mass as $M_{\mathrm{dense}}$.
Since our work is based on column densities
instead of volume densities, we need to write
Eq.~\ref{giri} in terms of column
densities. To this aim, we assume a ratio between 2D and
3D variances, $R=\sigma^{2}_{N/<N>}/\sigma^{2}_{\rho/\overline{\rho}}$.
This relation is also valid for the ratios
$\overline{\rho}/\rho_{\mathrm{tail}}$ and $\overline{A}_{V}/A_{V}^{\mathrm{tail}}=e^{-s_{t}}$,
where $A_{V}^{\mathrm{tail}}$ is the column density value at
which the \textit{N}-PDF becomes a power-law and $s_{t}$ is
the mean normalized $A_{V}^{\mathrm{tail}}$. Then, Eq.~\ref{giri} can be written as
\begin{equation}\label{giri2}
t_{\mathrm{E}} =
\sqrt{\frac{0.2}{\sqrt{R}}e^{-s_{t}}}t_{\mathrm{ff}}(\overline{A}_{V}).
\end{equation}
This equation allows us to estimate
the evolutionary time-scale of a molecular
cloud using two observable quantities, namely $\overline{A}_{V}$ and
$A_{V}^{\mathrm{tail}}$.
The factor $R$ is still not well constrained.
~\citet{2014Sci...344..183K} obtained observationally a value of
$R\sim0.4$ while~\citet{2010MNRAS.403.1507B} obtained
$R=[0.03,0.15]$ in MHD turbulence simulations without gravity. In the following we use
the observationally derived value, $R=0.4$, to estimate
the time-scales of our three evolutionary classes.
We estimate the uncertainty in the time-scales as
the relative uncertainty between $R=0.4$ and $R=0.15$,
which is roughly 30\%.
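Equation~\ref{giri2} is straightforward to evaluate numerically; a minimal sketch (the function name is ours, and the value $s_{t}=1.2$ corresponds to the SLC lower limit discussed in the text):

```python
import numpy as np

def t_evol(s_t, R=0.4):
    """Relative evolutionary time-scale of Eq. (giri2), in units of
    t_ff at the mean column density:
    t_E = sqrt(0.2 / sqrt(R) * exp(-s_t))."""
    return float(np.sqrt(0.2 / np.sqrt(R) * np.exp(-s_t)))

# SLC lower limit quoted in the text, s_t > 1.2, gives t_E < ~0.3 t_ff
t_slc_upper = t_evol(1.2)

# Relative spread between the observational (R = 0.4) and the
# simulation-based (R = 0.15) values of R; the text quotes roughly 30%
spread = t_evol(1.2, R=0.15) / t_evol(1.2, R=0.4) - 1.0
```

Note that $t_{\mathrm{E}}\propto R^{-1/4}$, so the poorly constrained factor $R$ enters the time-scale only weakly.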
We find that our \ion{H}{II}\,\, and SFC classes
have evolutionary time-scales $t_{E,\ion{H}{II}\,\,}=t_{\mathrm{E,SFC}}=(0.4\pm0.1)\,t_{\mathrm{ff}}$,
and their relative masses of gas in regions
with $s>s_{t}$ are $M_{\mathrm{dense,\ion{H}{II}\,\,}}\sim30\pm0.05\%$
and $M_{\mathrm{dense,SFC}}\sim10\pm0.06\%$, where the uncertainties were obtained
from the 1-$\sigma$ uncertainties in $s_{t}$ (see Section~\ref{sec:pdf}).
Since the \textit{N}-PDF of the SLCs has no power-law tail, we calculated
an upper limit of their evolutionary time-scale by using the
largest extinction in their \textit{N}-PDFs as a lower
limit, $s_{t}>1.2$. We obtained $t_{E,SLC}<0.3\pm0.1t_{\mathrm{ff}}$.
\begin{table}
\caption{Evolutionary time-scales}
\centering
\begin{tabular}{c c c c c}
\hline\hline
& $t_E$ [$\mathrm{t_{\mathrm{ff}}}$] & $t_E$ [Myr] & $M_{\mathrm{dense}}$ [\%] &
$\overline{\rho}$ [cm$^{-3}$]\\
\hline
\ion{H}{II}\,\, & $0.4\pm0.1$ & $0.7\pm0.2$ & $30^{+0.05}_{-0.06}$
& $0.3\times10^{3}$ \\
SFCs & $0.4\pm0.1$ & $0.3\pm0.1$ & $10^{+0.06}_{-0.04}$
& $1.5\times10^{3}$ \\
SLCs & $0.3\pm0.1$ & $\lesssim0.1\pm0.03$ & ---
& $4.7\times10^{3}$ \\
\hline
\label{tab:giriTab}
\end{tabular}
\end{table}
The above relative time-scales can be used to estimate absolute
time-scales if the free-fall time is known. We estimate the free-fall
time of each evolutionary class as $t_{\mathrm{ff}}=\sqrt{3\pi/32G\overline{\rho}}$.
The mean density of each class was estimated
using their mean masses and effective radii\footnote{We define the
effective radius as the radius of a
circle with the same area as the projected area of a given cloud.}
and assuming spherical symmetry.
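The free-fall times can be reproduced from the mean densities of Table~\ref{tab:giriTab}; a minimal sketch (the mean molecular weight $\mu=2.33$ per particle is our assumption, since the value used in the paper is not stated here):

```python
import numpy as np

# cgs constants; MU = 2.33 (mean molecular weight per particle) is assumed
G_CGS = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
M_H = 1.673e-24        # hydrogen mass [g]
MU = 2.33
SEC_PER_MYR = 3.156e13

def t_ff_myr(n_cm3):
    """Free-fall time t_ff = sqrt(3*pi / (32 G rho)) for a mean particle
    density n [cm^-3], returned in Myr."""
    rho = MU * M_H * n_cm3
    return float(np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho)) / SEC_PER_MYR)

# Mean densities of the three classes (Table 'tab:giriTab')
t_ff = {label: t_ff_myr(n) for label, n in
        [("HII", 0.3e3), ("SFCs", 1.5e3), ("SLCs", 4.7e3)]}
```

With these inputs, $t_{\mathrm{ff}}\approx2$\,Myr for the \ion{H}{II}\,\, class, so $t_{E}=0.4\,t_{\mathrm{ff}}$ reproduces the quoted $\sim0.7$\,Myr to within the stated uncertainty.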
We find that the mean evolutionary
time-scale for our \ion{H}{II}\,\, regions is $t_{E,\ion{H}{II}\,\,}=0.7\pm0.2$\,Myr,
and the time-scale of SFCs is
$t_{E,\mathrm{SFC}}=0.3\pm0.1$\,Myr.
SLCs have the shortest time-scales,
$t_{E,\mathrm{SLC}}<0.1\pm0.03$\,Myr.
We note that the absolute time-scales are measured
using the onset of the gravitational collapse in the molecular cloud as $t=0$
and that they were specifically estimated independently for each of the three
classes of clouds defined in this work.
Do the above results agree with previous time-scale estimations?
The evolutionary time-scale of our SLC
sample is within the range of collapse life-times
derived in other studies.
For example,~\citet{2012A&A...540A.113T} derived a life-time of
$6\times10^{4}$\,yr and~\citet{2013A&A...559A..79R}, $7-17\times10^{4}$\,yr
for the starless core phase.
Furthermore,~\citet{csengeri-2014} found a time-scale
of $7.5\pm2.5\times10^4$\,yr for massive starless clumps in the Galaxy using
ATLASGAL data. In all these studies, as well as in the present paper,
the starless clumps are massive enough to be able to harbor high-mass
star-forming activity. Similar evolutionary time-scales have also been found
in regions that are more likely to only form low-mass stars,
e.g., in Perseus~\citep{2014AAS...22345433W}.
The SFC evolutionary time-scale
is close to recent age estimates
of Class 0+1 protostars,
$\sim0.4-0.5$\,Myr~\citep[Table~1]{2014prpl.conf..195D}.
\ion{H}{II}\,\, regions are subject to other physical processes
apart from gravity, such as Rayleigh-Taylor (RT) instabilities involved in
the expansion
of the \ion{H}{II}\,\, regions and shocks due to stellar feedback.
These processes make this simple evolutionary
model hardly applicable to them, and we therefore do not
discuss the evolutionary time-scale obtained for \ion{H}{II}\,\, regions further.
Finally, we mention several caveats associated with
the time-scales derived above.
The mean column density used in Eq.~\ref{giri2}
corresponds only to the mean observed column
density and not necessarily to the actual
mean column density that should be used in Eq.~\ref{giri2}.
In addition, the factor, $R$, relating 2D and 3D
variances of mean normalized densities is still not well constrained.
Furthermore, this model assumes a single cloud undergoing
free-fall collapse. While this assumption can be true
for the SLCs, it is unlikely to be the case in SFCs.
As mentioned in Section~\ref{sec:pdf-evol},
we assume that the smaller ATLASGAL clumps which compose
each SFC region are close to spherical symmetry.
With these caveats, we only aim to study
the evolutionary time-scales in terms of orders of magnitude.
Considering these caveats, we conclude that the presented method of
estimating evolutionary time-scales agrees
with independently derived typical ages for SLCs and SFCs.
\section{Conclusions}\label{conc}
We have used ATLASGAL 870\,$\mu$m dust continuum data
to study the column density distribution of
330 molecular clouds that we divide into three
evolutionary classes: starless clumps (SLCs), star-forming clouds (SFCs),
and \ion{H}{II}\,\, regions.
Our large sample of molecular clouds allows us
to study their column density distributions at
Galactic scale for the first time.
We study the column density distributions
of the clouds over a wide dynamic range
$A_{V}\sim3-1000$\,mag, spanning a wide range of
cloud masses ($10^{2}-10^{5}\,\mathrm{M_{\sun}}$).
In the following we summarize the main results obtained.
\begin{itemize}
\item The total \textit{N}-PDF of SLCs is well described by a log-normal
function with a width of about $\sigma_{s}\sim0.5$. The total
\textit{N}-PDFs of SFCs and \ion{H}{II}\,\,
regions show power-law tails at high column
densities, with \ion{H}{II}\,\, regions having a shallower slope.
These observations agree with a picture in which the density distribution
of SLCs is dominated by turbulent motions.
The SFCs are significantly affected by gravity,
although turbulence may still play a role in structuring the clouds.
The density distributions of \ion{H}{II}\,\, regions are consistent with
gravity-dominated media.
Our statistical sample shows that this picture, earlier observed in clouds of
the Solar neighborhood, is relevant also at Galactic scale.
\item DGMFs of SLCs are well described by exponential functions
with exponent $\alpha_{exp}=-0.1$.
The DGMFs of \ion{H}{II}\,\, regions and SFCs are better described by power-laws
with exponents of $\beta=-1.0$ and $\beta=-2.1$ respectively.
The DGMF shape depends on cloud mass, being shallower for
the most massive clouds and steeper for the less massive clouds.
This dependence exists in all evolutionary classes.
\item We find an approximately linear correlation
$f_{\mathrm{DG}}\propto\Sigma_{\mathrm{mass}}$ for
$\Sigma_{\mathrm{mass}}=50-200\,\mathrm{M_{\sun}pc^{-2}}$, valid for all
evolutionary classes. This relation flattens at $f_{\mathrm{DG}}\cong0.8$ in MCs,
suggesting that the maximum
star-forming activity in MCs is reached at $f_{\mathrm{DG}}\cong0.8$.
We also find that the intrinsic scatter of $f_{\mathrm{DG}}$ ($\sim$0.7)
is similar to the scatter seen in the
SFR - dense gas mass relation of~\citet{2010ApJ...724..687L,2012ApJ...745..190L}.
This suggests that both the dense gas mass
and the lower-density envelope of the cloud play
a significant role in setting the star formation rate.
\item We estimate the evolutionary time-scales
of our three classes using an analytical model which
predicts the evolution of the PDF of a cloud in free-fall
collapse~\citep{2014ApJ...781...91G}.
We found $t_{\mathrm{E}}\lesssim0.1$\,Myr,
$t_{\mathrm{E}}\sim0.3$\,Myr, and $t_{\mathrm{E}}\lesssim0.7$\,Myr for SLCs, SFCs, and \ion{H}{II}\,\,
regions, respectively.
The SLC and SFC time-scales agree with previous,
independent age estimates of the corresponding objects,
suggesting that molecular cloud evolution
may indeed be imprinted in the observable \textit{N}-PDF functions.
\ion{H}{II}\,\, regions show a complexity of physical processes that
make this model hard to apply to them.
\end{itemize}
\begin{acknowledgements}
The work of J.A. is supported by the Sonderforschungsbereich (SFB) 881
\textquotedblleft The Milky Way System\textquotedblright \,and
the International Max-Planck Research School (IMPRS) at Heidelberg University.
The work of J.K. and A.S. was supported by the
Deutsche Forschungsgemeinschaft priority
program 1573 (\textquotedblleft Physics of the Interstellar
Medium\textquotedblright).
This research has made use of the SIMBAD
database, operated at CDS, Strasbourg, France.
\end{acknowledgements}
\bibliographystyle{aa}
The specific separation of time scales in the different regimes
allowed us to identify the relevant variables and describe each
regime by a specific \emph{simplified model}. In the tunneling
(high damping) regime the mechanical degree of freedom is almost
frozen and all the features revealed by the Wigner distribution,
the current and the current noise can be reproduced with a
resonant tunneling model with tunneling rates renormalized due to
the movable quantum dot. Most of the features of the shuttling
regime (self-sustained oscillations, charge-position correlation)
are captured by a simple model derived as the zero-noise limit of
the full description. Finally for the coexistence regime we
proposed a dynamical picture in terms of slow dichotomous
switching between the tunneling and shuttling modes. This
interpretation was mostly suggested by the presence in the
stationary Wigner function distributions of both the
characteristic features of the tunneling and shuttling dynamics
and by a corresponding gigantic peak in the Fano factor. We based
the derivation of the simplified model on the fast variables
elimination from the Klein-Kramers equations for the Wigner
function and a subsequent derivation of an effective bistable
potential for the amplitude of the dot oscillation (the relevant
slow variable in this regime).
The comparison of the results obtained using the simplified models
with the full description in terms of Wigner distributions,
current and current-noise proves that the models, at least in the
limits set by the chosen investigation tools, capture the relevant
features of the shuttle dynamics.
The work of T.~N.\ is a part of the research plan MSM 0021620834
that is financed by the Ministry of Education of the Czech
Republic, and A.~D.\ acknowledges financial support from the
Deutsche Forschungsgemeinschaft within the framework of the
Graduiertenkolleg ``Nichtlinearit\"at und Nichtgleichgewicht in
kondensierter Materie'' (GRK 638).
\section{Klein-Kramers Equation}
The shuttle dynamics has an appealing simple classical
interpretation: the name ``shuttle'' suggests the idea of
sequential and periodical loading, mechanical transport and
unloading of electrons between a source and a drain lead.
Motivated by the possibility of observing signatures of quantum
dynamics of the mechanical degree of freedom for a nanoscale
shuttle, we decided, following the suggestion of Armour and
MacKinnon \cite{arm-prb-02}, to explore a system with a quantized
oscillator. We express our results in terms of the Wigner function
because in this way we can simultaneously keep the intuitive
phase-space picture and handle the quantum-classical
correspondence \cite{nov-prl-03}.
The phase space of the shuttle device is spanned by the triplet
charge -- position -- momentum. Correspondingly, the Wigner
function is constructed from the reduced density matrix $\sigma_{ii}$
($i=0,1$ denotes the empty and charged states, respectively):
\begin{equation}
W_{ii}(q,p,t) =
\frac{1}{2 \pi \hbar}
\int_{-\infty}^{+\infty}\!\!\!d \xi
\left\langle q-\frac{\xi}{2}\right| \sigma_{ii}(t) \left|q+\frac{\xi}{2}\right\rangle
\exp\left(\frac{ip\xi}{\hbar}\right)
\label{eq:WF}
\end{equation}
where the reduced density matrix $\sigma$ is defined as the trace
over the mechanical and thermal baths of the full density matrix:
\begin{equation}
\sigma = {\rm Tr}_{\rm B}\{\rho\}
\end{equation}
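As a concrete check of definition \eref{eq:WF}, the transform can be evaluated numerically for a pure harmonic-oscillator ground state and compared with the known Gaussian result (a sketch in natural units $\hbar=m=\omega=1$; all names and grid parameters are ours):

```python
import numpy as np

HBAR = M = OMEGA = 1.0   # natural units; purely illustrative values

def psi0(x):
    """Ground-state wave function of the harmonic oscillator."""
    return (M * OMEGA / (np.pi * HBAR)) ** 0.25 \
        * np.exp(-M * OMEGA * x ** 2 / (2.0 * HBAR))

def wigner(q, p, xi_max=10.0, n=4001):
    """Direct numerical evaluation of the Wigner transform of the pure
    state psi0, following the structure of Eq. (eq:WF):
    W(q,p) = (1/2 pi hbar) Int dxi psi0(q-xi/2) psi0(q+xi/2) e^{i p xi/hbar}.
    """
    xi = np.linspace(-xi_max, xi_max, n)
    integrand = psi0(q - xi / 2.0) * psi0(q + xi / 2.0) \
        * np.exp(1j * p * xi / HBAR)
    return float(integrand.real.sum() * (xi[1] - xi[0])) / (2.0 * np.pi * HBAR)

# Compare with the analytic ground-state result W = exp(-q^2 - p^2) / pi
q, p = 0.5, -0.3
w_num = wigner(q, p)
w_exact = np.exp(-M * OMEGA * q ** 2 / HBAR
                 - p ** 2 / (M * OMEGA * HBAR)) / (np.pi * HBAR)
```

The same quadrature applied to each $\sigma_{ii}$ yields the charge-resolved distributions $W_{ii}(q,p,t)$ used throughout.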
The dynamics of the shuttle device is then completely described by
the equation of motion for the Wigner distribution
\cite{fed-prl-04,andrea}:
\begin{equation}
\eqalign{
\fl \phantom{abba}\frac{\partial W_{00}}{\partial t} =&
\left[m \omega^2 q\frac{\partial}{\partial p}
-\frac{p}{m}\frac{\partial}{\partial q}
+\gamma \frac{\partial}{\partial p}p
+\gamma m \hbar \omega \left(n_B + \frac{1}{2} \right)
\frac{\partial^2}{\partial p^2}\right]W_{00}
\\
\fl &+\Gamma_{R}e^{ 2q/\lambda} W_{11}
-\Gamma_{L}e^{- 2q/\lambda}\sum_{n=0}^{\infty}
\frac{(-1)^n}{(2n)!}
\left(\frac{\hbar}{\lambda}\right)^{2n}
\frac{\partial^{2n}W_{00}}{\partial p^{2n}}
\\
\fl \phantom{abba}\frac{\partial W_{11}}{\partial t} =&
\left[m
\omega^2(q-d)\frac{\partial }{\partial p}
-\frac{p}{m}\frac{\partial}{\partial q}
+\gamma \frac{\partial}{\partial p}p
+\gamma m \hbar \omega \left(n_B + \frac{1}{2} \right)
\frac{\partial^2}{\partial p^2}\right]W_{11}
\\
\fl &+\Gamma_{L}e^{- 2q/\lambda} W_{00}
-\Gamma_{R}e^{2q/\lambda}\sum_{n=0}^{\infty}
\frac{(-1)^n}{(2n)!}
\left(\frac{\hbar}{\lambda}\right)^{2n}
\frac{\partial^{2n}W_{11}}{\partial p^{2n}}}
\label{eq:KleinKramers}
\end{equation}
where $(q,p)$ are the position and momentum coordinates of the
mechanical phase space and $n_B$ is the Bose distribution
calculated at the natural frequency of the harmonic oscillator.
Only the diagonal charge states enter the Klein-Kramers equations
\eref{eq:KleinKramers}: the off-diagonal charge states vanish
given the incoherence of the leads and are thus excluded from the
dynamics.
We distinguish in Eqs.~\eref{eq:KleinKramers} contributions of
different physical origin: the coherent terms that govern the
dynamics of the (shifted) harmonic oscillator, the dissipative
terms proportional to the mechanical damping constant $\gamma$
and, finally, the driving terms proportional to the bare tunneling
rates $\Gamma_{L,R}$.
The ability of the formalism to treat the quantum-classical
correspondence is explicit in Eqs.~\eref{eq:KleinKramers}: given a
length, a mass, and a time scale for the system we can rescale the
phase space coordinates and an expansion in $\hbar/S_{\rm sys}$
will appear where $S_{\rm sys}$ is the typical action of the
system. Classical systems have a large action $S_{\rm sys} \gg
\hbar$ and only the first term ($\hbar/S_{\rm sys} \to 0$) in the
expansion is relevant. In the opposite limit $S_{\rm sys} \approx
\hbar$ the full expansion should be considered.
\section{Introduction}
A generic example of a nanoelectromechanical (NEMS) device is
given by the {\it charge shuttle} (originally proposed by Gorelik
{\it et al.}~\cite{gor-prl-98}): a movable single-electron device,
working in the Coulomb blockade regime, which can exhibit regular
charge transport, where one electron within each mechanical
oscillation cycle is transported from the source to the drain --
see Figure~\ref{fig:SdS_model} for a schematic illustration (see also
Ref.~\cite{sch-njp-02}, which contains an illustrative computer
animation).
\begin{figure}[h]
\begin{center}
\includegraphics[angle=0,width=.7\textwidth]{Figures/SdS_model.eps}
\caption{\small \textit{Schematic representation of the single-dot shuttle: electrons tunnel
from the left lead at chemical potential ($\mu_L$) to the quantum
dot and eventually to the right lead at lower chemical potential $\mu_R$.
The position dependent tunneling amplitudes are indicated.
$X$ is the displacement from the equilibrium position.
The springs represent the harmonic potential in which the central dot
can move.}}
\label{fig:SdS_model}
\end{center}
\end{figure}
The device shown in Fig.~\ref{fig:SdS_model} can exhibit a number
of different charge transport mechanisms, to be discussed below,
and transitions between the various regimes can be induced by
varying a control parameter, such as the applied bias, or the
mechanical damping. Since its introduction, the charge shuttle has
inspired a large number of theoretical papers (see, {\it e.g.},
Refs.
\cite{arm-prb-02,nov-prl-03,fed-prl-04,pis-prb-04,nov-prl-04,fli-prb-04,fli-epl-05,pis-prl-05,fed-prl-05}).
To the best of our knowledge, a clear-cut experimental
demonstration of a shuttling transition is not yet available,
though significant progress has been made, such as the {\it
driven} shuttle of Erbe {\it et al.}~\cite{erb-prl-01}, or the
nanopillars studied by Scheible and Blick \cite{sch-apl-04}. In
the present paper, we study the quantum shuttle, {\it i.e.}\ a
device where the mechanical motion also needs to be quantized (the
physical condition for this to happen is $\lambda\simeq x_{zp}$,
where $\lambda$ is the tunneling length describing the exponential
decay of the wave-functions into vacuum, and $x_{zp}$ is the
quantum-mechanical zero-point amplitude). In contrast to many of
the earlier theoretical papers on quantum shuttles
\cite{arm-prb-02,nov-prl-03,nov-prl-04,fli-prb-04}, where
extensive numerical calculations were employed, the main aim here
is to develop simplified models which allow significant analytic
progress, and hence lead to a transparent physical interpretation.
From the previous numerical studies we know that there are (at
least) three well-defined transport regimes for a quantum
nanomechanical device: (i) the tunneling regime, (ii) the shuttle
regime, and (iii) the coexistence regime. As we show in
subsequent sections, each of these regimes is characterized by
certain inequalities governing the various time-scales, and a
systematic exploitation of these inequalities allows us then to
develop the aforementioned simplified models. In all three cases
we will compare the predictions of the simplified models to the
ones obtained with the full numerics. While in most cases the
comparison is very satisfactory, we do not always observe
quantitative agreement; these discrepancies are analyzed and
directions for future work are indicated.
The paper is organized as follows. In Sections 2, 3 and 4 we
briefly introduce the microscopic model for the quantum shuttle,
introduce the Klein-Kramers equations for the Wigner functions,
and summarize the phenomenology extracted from previous numerical
studies, respectively. Section 5 contains the main results of
this paper, \emph{i.e.}, the derivations and analysis of the
simplified models for the three transport regimes. We end the
paper with a short conclusion of the main results.
\section{The Single-Dot Quantum Shuttle}
The Single-Dot Quantum Shuttle (SDQS) consists of a movable
quantum dot (QD) suspended between source and drain leads (see
Fig.~\ref{fig:SdS_model}). The center of mass of the QD is
confined to a potential that, at least for small displacements
from its equilibrium position, can be considered harmonic. Due to
its small geometric size, the QD has a very small capacitance and
thus a charging energy that may exceed the thermal energy $k_B T$
(with $T$ approaching room temperature in the most recent realizations
\cite{sch-apl-04}). For this reason we assume that only one excess
electron can occupy the device (Coulomb blockade) and we describe
the electronic state of the central dot as a two-level system
(empty/charged). Electrons can tunnel between leads and dot with
tunneling amplitudes that depend exponentially on the position of
the central island, due to the decreasing/increasing overlap of
the electronic wave functions. The Hamiltonian of the model reads:
\begin{equation} H =H_{\rm sys}+H_{\rm leads}+H_{\rm bath}
+H_{\rm tun}+H_{\rm int}
\label{eq:SdS-Ham0}
\end{equation}
where
\begin{equation}
\eqalign{
&H_{\rm sys} =\frac{\hat{p}^2}{2 m}
+\frac{1}{2}m \omega^2 \hat{x}^2
+(\varepsilon_1- e\mathcal{E}
\hat{x})c_1^{\dag}c_1^{\phantom{\dagger}}\\
&H_{\rm leads} = \sum_{k}(\varepsilon_{l_k}
c^{\dagger}_{l_k}c^{\phantom{\dagger}}_{l_k}
+\varepsilon_{r_k}
c^{\dagger}_{r_k}c^{\phantom{\dagger}}_{r_k})\\
&H_{\rm tun} = \sum_{k}[T_{l}(\hat{x}) c^{\dagger}_{l_k}c_1^{\phantom{\dagger}}
+
T_{r}(\hat{x}) c^{\dagger}_{r_k}c_1^{\phantom{\dagger}}] + h.c.\\
&H_{\rm bath} + H_{\rm int }= {\rm generic \; heat \; bath}}
\label{eq:SdS-Ham}
\end{equation}
The hats over the position and momentum ($\hat{x},\hat{p}$) of the
dot indicate that they are operators, since the mechanical degree
of freedom is quantized. Using the language of quantum optics we
call the movable grain alone the \emph{system}. This is then
coupled to two electric \emph{baths} (the leads) and a generic
heat bath. The system is described by a single electronic level of
energy $\varepsilon_1$ and a harmonic oscillator of mass $m$ and
frequency $\omega$. When the dot is charged the electrostatic force
($e \mathcal{E} $) acts on the grain and gives the
\emph{electrical influence} on the mechanical dynamics. The
electric field $\mathcal{E}$ is generated by the voltage drop
between left and right lead. In our model, though, it is kept as
an external parameter, since we will
always assume the potential drop to be much larger than any other
energy scale of the system (with the sole exception of the
charging energy of the dot).
The leads are Fermi seas kept at two different chemical potentials
($\mu_L$ and $\mu_R$) by the external applied voltage ($\Delta V =
(\mu_L - \mu_R)/e$ ) and all the energy levels of the system lie
well inside the bias window. The oscillator is immersed into a
dissipative environment that we model as a collection of bosons
coupled to the oscillator by a weak bilinear interaction:
\begin{equation}
\eqalign{
H_{\rm bath} &= \sum_{\mathbf{q}} \hbar \omega_{\mathbf{q}}{d_{\mathbf{q}}}^{\dagger} d_{\mathbf{q}}\\
H_{\rm int} &= \sum_{\mathbf{q}} \hbar g \sqrt{\frac{2m\omega}{\hbar}} \hat{x}({d_{\mathbf{q}}} +
{d_{\mathbf{q}}}^{\dagger})}
\end{equation}
where the operator $d_{\mathbf{q}}^{\dagger}$ creates a bath boson
with wave number $\mathbf{q}$. The damping rate is given by:
\begin{equation} \gamma(\omega) = 2 \pi g^2 D(\omega)
\end{equation}
where $D(\omega)$ is the density of states for the bosonic bath at the
frequency of the system oscillator. A bath that generates a
frequency-independent $\gamma$ is called Ohmic.
The coupling to the electric baths is introduced with the
tunneling Hamiltonian $H_{\rm tun}$. The tunneling amplitudes
$T_{l}(\hat{x})$ and $T_{r}(\hat{x})$ depend exponentially on the
position operator $\hat{x}$ and represent the \emph{mechanical
feedback} on the electrical dynamics:
\begin{equation} T_{l,r}(\hat{x})=t_{l,r}\exp(\mp\hat{x}/\lambda)
\end{equation}
where $\lambda$ is the tunneling length. The tunneling
rates from and to the leads ($\bar{\Gamma}_{L,R}$) can be expressed
in terms of the amplitudes:
\begin{equation} \bar{\Gamma}_{L,R}=\langle \Gamma_{L,R}(\hat{x})\rangle
=\left \langle \frac{2 \pi}{\hbar}D_{L,R}\exp\left(\mp \frac{2 \hat{x}}{\lambda}\right)
|t_{l,r}|^2 \right\rangle
\end{equation}
where $D_{L,R}$ are the densities of states of the left and right
lead respectively and the average is taken with respect to the quantum
state of the oscillator.
The model has three relevant time scales: the period of the
oscillator $2\pi/\omega$, the inverse of the damping rate $1/\gamma$
and the average injection/ejection time $1/\bar{\Gamma}_{L,R}$. It
is possible also to identify three important length scales: the
zero point uncertainty $x_{zp} =\sqrt{\frac{\hbar}{2m\omega}}$, the
tunneling length $\lambda$ and the displaced oscillator
equilibrium position $d=\frac{e\mathcal{E}}{m \omega^2}$. The ratios
between the time scales and the ratios between length scales
distinguish the different operating regimes of the SDQS.
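For orientation, these scales can be made concrete. The short sketch below (plain Python) evaluates $x_{zp}$ and $d$ from their definitions; the mass, frequency and electrostatic force are assumed illustrative values, not parameters taken from the text or from any experiment.

```python
import math

# Illustrative parameters -- assumptions for the sake of the example,
# not values quoted in the text or measured in any experiment.
hbar = 1.054571817e-34       # J s
m = 1.0e-18                  # kg, mass of the oscillating dot (assumed)
omega = 2 * math.pi * 1.0e6  # rad/s, mechanical frequency (assumed)
eE = 1.0e-14                 # N, electrostatic force e*E (assumed)

x_zp = math.sqrt(hbar / (2 * m * omega))  # zero-point uncertainty
d = eE / (m * omega ** 2)                 # displaced equilibrium position

print(f"x_zp = {x_zp:.3e} m, d = {d:.3e} m")
```

The quantum shuttle condition $\lambda \simeq x_{zp}$ then fixes the order of magnitude of the tunneling length for which the mechanical motion must be treated quantum mechanically.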
\section{Phenomenology}
The stationary solution of the Klein-Kramers equations for the
Wigner distributions \eref{eq:KleinKramers} describes the average
long time behavior of the shuttling device. Information about the
different long time operating regimes can be extracted from the
distribution itself or from the experimentally accessible
stationary current and zero frequency current noise.
The mechanical damping rate $\gamma$ is the control parameter of
our analysis. At high damping rates the total Wigner distribution
is concentrated around the origin of the phase space and
represents the harmonic oscillator in its ground state. Upon
reducing the mechanical damping, a ring develops and, after a
short coexistence, the central ``dot'' eventually disappears
(Figure \ref{fig:WF}). The ring is the noisy representation of the
low damping limit cycle trajectory (shuttling) that develops from
the high damping equilibrium position (tunneling). Equilibrium and
limit cycle dynamics coexist in the intermediate damping bistable
configuration where the system randomly switches between tunneling
and shuttling regimes. The charge resolved Wigner distributions
$W_{00}$ and $W_{11}$ also reveal the charge-position (momentum)
correlation typical of the shuttling regime: for negative
displacements and positive momentum (\emph{i.e.}\ leaving the
source lead) the dot is prevalently charged while it is empty for
positive displacements and negative momentum (coming from the
drain lead).
\begin{figure}[h]
\begin{center}
\includegraphics[width=.6\textwidth]{Figures/WF.eps}
\caption{\small
\textit{Charge resolved Wigner function distributions for different
mechanical damping rates (horizontal axis: coordinate in units of
$x_0=\sqrt{\hbar/m\omega}$; vertical axis: momentum in $\hbar/x_0$).
The rows represent from top to bottom the empty $(W_{00})$, charged
$(W_{11})$ and total $(W_{\rm tot} = W_{00}+W_{11})$ Wigner distribution
respectively. The columns represent from left to right the shuttling
$(\gamma=0.025\,\omega)$, coexistence $(\gamma=0.029\,\omega)$ and tunneling
regime $(\gamma=0.1\,\omega)$ respectively. The Figure is partially reproduced from
\cite{nov-prl-04}.}}
\label{fig:WF}
\end{center}
\end{figure}
The stationary current and the current noise (expressed in
terms of the Fano factor) also show distinctive features for the
different operating regimes. At high damping rates the shuttling
device behaves essentially like the familiar double-barrier system
since the dot is (almost) static and far from both electrodes. The
current is determined essentially by the bare tunneling rates
$\Gamma_{L,R}$ and the Fano factor differs only slightly from the
values found for resonant tunneling devices ($F=1/2$ for a
symmetric device). At low damping rates the current saturates at
one electron per mechanical cycle (corresponding to current
$I/(e\omega)=1/(2\pi)$) since the electrons are shuttled one by one
from the source to the drain lead by the oscillating dot while the
extremely sub-Poissonian Fano factor reveals the deterministic
character of this electron transport regime. The fingerprint of
the coexistence regime, at intermediate damping rates, is a
substantial enhancement of the Fano factor. The current
interpolates smoothly between the shuttling and tunneling limiting
values (Figure \ref{fig:CurrNoiseSDQS}).
\begin{figure}[h]
\begin{center}
\includegraphics[angle=-90, width=.48\textwidth]{Figures/Current.eps}
\includegraphics[angle=-90, width=.48\textwidth]{Figures/Noise.eps}
\caption{\small \textit{Left panel - Stationary current for the SDQS vs.~damping $\gamma$.
The mechanical dissipation rate $\gamma$ and the electrical rate $\Gamma = \Gamma_L = \Gamma_R$ are
given in units of the mechanical frequency $\omega$,
the tunneling length $\lambda$ in terms of $x_0 = \sqrt{\hbar/m\omega}$. The
other parameters are $d = 0.5\,x_0$ and $T=0$. The current
saturates in the shuttling (low damping) regime to one electron
per cycle independently of the parameters, while it is
essentially proportional to the bare electrical rate $\Gamma = \Gamma_L =
\Gamma_R$ in the tunneling regime (high damping).
Right panel - Fano factor for the SDQS vs.~damping $\gamma$. The curves correspond to the same
parameters of the left panel.
The very low noise in the shuttling (low damping) regime is a sign of ordered transport.
The huge super-Poissonian Fano factors correspond to the onset of the coexistence regime.
The Figure is taken from \cite{nov-prl-04}.}}
\label{fig:CurrNoiseSDQS}
\end{center}
\end{figure}
The cross-over damping rate is determined by the effective
tunneling rates of the electrons. We get the following physical
picture: every time an electron jumps on the movable grain the
grain is subject to the electrostatic force $e\mathcal{E}$ that
accelerates it towards the drain lead. Energy is pumped into the
mechanical system and the dot starts to oscillate. If the damping
is high compared to the tunneling rates the oscillator dissipates
this energy into the environment before the next tunneling event:
on average the dot remains in its ground state. On the other hand,
for very small damping the relaxation time of the oscillator is
long and multiple ``forcing events'' occur before the relaxation
takes place. This continuously drives the oscillator away from
equilibrium and a stationary state is reached only when the energy
pumped per cycle into the system is dissipated during the same
cycle in the environment.
\section{Simplified models} \label{sec:simplified_models}
In the previous section we qualitatively described three possible
operating regimes of shuttle devices. The specific separation of
time scales allows us to identify the relevant variables and
describe each regime by a specific simplified model. Models for
the tunneling, shuttling and coexistence regime are analyzed
separately in the three following subsections. We also give a
comparison with the full description in terms of Wigner
distributions, current and current-noise to illustrate how the
models capture the relevant dynamics.
\subsection{Renormalized resonant tunneling}
\label{sec:tunneling}
The electrical dynamics has the longest time-scale in the
tunneling regime since the mechanical relaxation time (which is
much longer than the oscillation period) is much shorter than the
average injection or ejection time. Because of this time-scale
separation, an observation of the device dynamics would, most of
the time, show one of two mechanically frozen states:
\begin{description}
\item[0.] Empty dot in the ground state;
\item[1.] Charged dot moved to the shifted equilibrium position
by the constant electrostatic force $e\mathcal{E}$.
\end{description}
We combine this observation with a quantum description of the
mechanical oscillator and possible thermal noise under the
assumption that the reduced density matrix of the device can be
written in the form:
\begin{equation}
\eqalign{
\sigma_{00}(t) &= p_{00}(t)\sigma_{\rm th}(0)\\
\sigma_{11}(t) &= p_{11}(t)\sigma_{\rm th}(e\mathcal{E})}
\label{eq:tun-ansatz}
\end{equation}
where
\begin{equation} \sigma_{\rm th}(\mathcal{F}) =
\frac{e^{-\beta(H_{\rm osc} -\mathcal{F} x)}}
{{\rm Tr}_{\rm osc}\left[e^{-\beta(H_{\rm osc} -\mathcal{F} x)}\right]}
\end{equation}
is the thermal density matrix of a harmonic oscillator subject to
an external force $\mathcal{F}$. The functions $p_{00}(t)$ and
$p_{11}(t)$ represent the probabilities of finding the system
in state 0 or 1, respectively. The equations of
motion for the probabilities $p_{ii}(t)$ can be derived by
inserting the assumption (\ref{eq:tun-ansatz}) in the definition
\eref{eq:WF} and taking the integral over the mechanical degrees
of freedom in the corresponding Klein-Kramers equations
\eref{eq:KleinKramers}. This results in the rate equations
\begin{equation}
\frac{d}{dt} \left(
\begin{array}{c}
p_{00}\\
p_{11}
\end{array}
\right)
=
\left(
\begin{array}{c}
\tilde{\Gamma}_R\, p_{11} -\tilde{\Gamma}_L\, p_{00} \\
\tilde{\Gamma}_L\, p_{00} - \tilde{\Gamma}_R\, p_{11}
\end{array}
\right) \equiv \mathcal{L} \left(
\begin{array}{c}
p_{00}\\
p_{11}
\end{array}
\right)
\label{eq:ME-tun}
\end{equation}
where
\begin{equation}\eqalign{
\tilde{\Gamma}_L &= \Gamma_L \Tr_{\rm mech}\left\{\sigma_{\rm
th}(0)e^{-\frac{2\hat{x}}{\lambda}}\right\}=
\Gamma_L \int d q\,d p\, e^{-\frac{2q}{\lambda}}W_{\rm th}(q,p)\\
\tilde{\Gamma}_R &= \Gamma_R \Tr_{\rm mech}\left\{\sigma_{\rm
th}(e\mathcal{E})e^{\frac{2\hat{x}}{\lambda}}\right\}=
\Gamma_R\int d q\,d p\, e^{\frac{2q}{\lambda}}W_{\rm th}(q-d,p)}
\label{eq:Rates1}
\end{equation}
are the renormalized injection and ejection rates and $\Tr_{\rm
mech}$ indicates the trace over the mechanical degrees of freedom
of the device. We have also introduced the Liouvillean operator:
\begin{equation} \mathcal{L} =
\left(%
\begin{array}{cc}
-\tilde{\Gamma}_L & \tilde{\Gamma}_R\\
\tilde{\Gamma}_L &-\tilde{\Gamma}_R
\end{array}%
\right) \label{eq:Liouville}
\end{equation}
The thermal equilibrium Wigner function $W_{\rm th}(q-d,p)$ is
defined as the Wigner representation of the thermal equilibrium
density matrix $\sigma_{\rm th}(e\mathcal E)=\sigma_{\rm th}(m\omega^2 d)$:
\begin{equation}
W_{\rm th}(q-d,p) = \frac{1}{ 2 \pi m\omega\ell^2}
\exp\left\{-\frac{1}{2}\left[\left(\frac{q-d}{\ell}\right)^2 +
\left(\frac{p}{\ell m\omega}\right)^2 \right]\right\}
\label{eq:Wth}
\end{equation}
where $\ell = \sqrt{\frac{\hbar}{2m\omega}(2n_B +1)}$ reduces to the
zero point uncertainty length $x_{zp}=\sqrt{\frac{\hbar}{2m\omega}}$
in the zero temperature limit. In the high temperature limit $k_B
T \gg \hbar \omega$, $\ell$ tends to the thermal length
$\lambda_{\rm{th}}=\sqrt{k_B T/(m\omega^2)}$. Using (\ref{eq:Wth}) in
\eref{eq:Rates1} gives the renormalized rates:
\begin{equation} \eqalign{
\tilde{\Gamma}_L &= \Gamma_L e^{2\left(\frac{\ell}{\lambda}\right)^2}\\
\tilde{\Gamma}_R &= \Gamma_R e^{\frac{2d}{\lambda} + 2\left(\frac{\ell}{\lambda}\right)^2}
}
\label{eq:Ratesfinal}
\end{equation}
Equations (\ref{eq:ME-tun}) describe the dynamics of a resonant
tunneling device. All the effects of the movable grain are
contained in the effective rates $\tilde{\Gamma}_L,\tilde{\Gamma}_R$. As
expected, the ejection rate is modified by the ``classical'' shift
$d$ of the equilibrium position due to the electrostatic force on
the charged dot. Note that both rates are also \emph{enhanced} by
the fuzziness in the position of the oscillator due to thermal and
quantum noise. The size of this correction is set by the
ratio between $\ell$ and the tunneling length $\lambda$.
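The Gaussian averages behind Eq.~\eref{eq:Ratesfinal} are easy to verify numerically. The sketch below (plain Python; the values of $\ell$ and $\lambda$ are arbitrary illustrative choices) compares a direct quadrature of $\langle e^{-2q/\lambda}\rangle$ over the thermal Wigner function with the closed form $e^{2(\ell/\lambda)^2}$.

```python
import math

def thermal_average(ell, lam, k, n=200_000, cut=12.0):
    """Midpoint quadrature of <exp(k q / lam)> over a Gaussian of
    standard deviation ell (the momentum integral factors out)."""
    total = 0.0
    dq = 2.0 * cut * ell / n
    norm = 1.0 / (ell * math.sqrt(2.0 * math.pi))
    for i in range(n):
        q = -cut * ell + (i + 0.5) * dq
        total += math.exp(k * q / lam) * norm * math.exp(-0.5 * (q / ell) ** 2) * dq
    return total

ell, lam = 0.7, 2.0                        # illustrative values (assumed)
numeric = thermal_average(ell, lam, k=-2.0)
closed = math.exp(2.0 * (ell / lam) ** 2)  # closed form for the injection rate
print(numeric, closed)
```

The same quadrature with $k=+2$ and the Gaussian centered at $d$ reproduces the ejection rate, including the extra $e^{2d/\lambda}$ factor.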
\subsubsection{Phase-space distribution}
The phase space distribution for the stationary state of the
simplified model for the tunneling regime is built on the Wigner
representation of the thermal density matrix $\sigma_{\rm th}$ and the
stationary solution of the system (\ref{eq:ME-tun}) for the
occupation $p_{ii}$ of the electromechanical states $i$:
\begin{equation}\eqalign{
W_{00}^{\rm stat}(q,p) &=
\frac{\tilde{\Gamma}_R}{\tilde{\Gamma}_L + \tilde{\Gamma}_R} W_{\rm th}(q,p)\\
W_{11}^{\rm stat}(q,p) &=
\frac{\tilde{\Gamma}_L}{\tilde{\Gamma}_L + \tilde{\Gamma}_R} W_{\rm th}(q-d,p)
}
\end{equation}
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=-90,width=.8\textwidth]{Figures/Cutstun.eps}
\caption{\small \textit{Comparison between the numerical and the analytical results
for the Wigner distribution functions. The coordinate (left) or momentum (right) cuts always
cross the maximum of the distribution. The green (blue) circles are numerical results
in the charged (empty) dot configuration, and the full lines
represent the analytical calculations. The parameters are:
$\Gamma_L = \Gamma_R = 0.01\,\omega$, $\gamma=0.25\,\omega$, $d = 0.5\,x_0$, $\lambda = 2\,x_0$, $T=0$.
We also plotted with dots the numerical results for $\Gamma = 0.001\,\omega$.}
\label{fig:Cutstun}}
\end{center}
\end{figure}
The stationary distribution of the tunneling model is determined
by the length $\ell$ and associated momentum $m\omega\ell$, the
equilibrium position shift $d$, the tunneling length $\lambda$ and the
ratio between left and right bare electrical rates
$\Gamma_L/\Gamma_R$. The mechanical relaxation rate $\gamma$ drops
out of the solution and only sets the range of applicability of
the simplified model.
In Figures \ref{fig:Cutstun} and \ref{fig:CutstunTemp} we compare
the Wigner functions calculated both analytically and numerically
in the tunneling regime. They show good overall agreement
(Figure \ref{fig:Cutstun}). The matching is further improved when
reducing the bare injection rate to $\Gamma_{L,R} = 0.001\,\omega$, thus
enlarging the time scale separation $\Gamma_{L,R} \ll \gamma$ typical of
the tunneling regime. The temperature dependence of the stationary
Wigner function distribution (Figure\ \ref{fig:CutstunTemp})
verifies the scaling given by the temperature dependent length
$\ell$.
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=-90,width=.8\textwidth]{Figures/CutstunTemp.eps}
\caption{\small \textit{Tunneling Wigner distributions as a function of the temperature.
The relevant parameters are: $\gamma = 0.25\,\omega$, $\Gamma = 0.001\,\omega$, $n_B = 0,0.75,1.5$ respectively
represented by dots, circles and asterisks. Full lines are the analytical results.}
\label{fig:CutstunTemp}}
\end{center}
\end{figure}
\subsubsection{Current}
Since the effect of the oscillator degree of freedom is entirely
included in the renormalized rates, the system can be treated
formally as a static quantum dot.
The time-dependent currents thus read:
\begin{equation}\eqalign{
I_R(t) &= \tilde{\Gamma}_R p_{11}(t)\\
I_L(t) &= \tilde{\Gamma}_L p_{00}(t)
}
\end{equation}
In the stationary limit they coincide:
\begin{equation}
I^{\rm stat}= \tilde{\Gamma}_R p^{\rm stat}_{11} = \tilde{\Gamma}_L p^{\rm
stat}_{00} = \frac{\tilde{\Gamma}_R
\tilde{\Gamma}_L}{\tilde{\Gamma}_L+\tilde{\Gamma}_R} \label{eq:Currentun}
\end{equation}
We show in Figure \ref{fig:Currentun} the current calculated
numerically and the asymptotic value of the tunneling regime given
by Eq.~\eref{eq:Currentun}.
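As a consistency check, the stationary occupations follow from Eq.~\eref{eq:ME-tun} by setting the time derivatives to zero; the minimal sketch below (plain Python, with arbitrary rate values) confirms that the two junction currents coincide in the stationary limit, as in Eq.~\eref{eq:Currentun}.

```python
GL, GR = 0.4, 1.2                # renormalized rates (arbitrary values)
p00 = GR / (GL + GR)             # stationary occupation of the empty state
p11 = GL / (GL + GR)             # stationary occupation of the charged state
I_left = GL * p00                # current through the left junction
I_right = GR * p11               # current through the right junction
print(I_left, I_right)           # both equal GL*GR/(GL+GR)
```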
\subsubsection{Current-noise}
We start the calculation with the MacDonald formula for the zero
frequency current noise \cite{fli-prb-04,andrea,ela-pla-02}:
\begin{equation} S(0) = \lim_{t \to \infty}\frac{d}{dt}
\Big[\sum_{n = 0}^{\infty} n^2P_n(t) -
\big(\sum_{n= 0}^{\infty}nP_n(t)\big)^2\Big]
\label{eq:Macdonald}
\end{equation}
where $P_n(t)$ is the probability that $n$ electrons have been
collected at time $t$ in the right lead. This probability is
connected to the $n$-resolved probabilities $p_{ii}^{(n)}$ of the
two effective states of the tunneling model by the relation:
\begin{equation}
P_n(t)=p_{00}^{(n)}(t) + p_{11}^{(n)}(t)
\end{equation}
The $n$-resolved probabilities $p_{ii}^{(n)}$ satisfy the equation
of motion:
\begin{equation}
\frac{d}{dt} \left(
\begin{array}{c}
p^{(n)}_{00}\\
p^{(n)}_{11}
\end{array}\right)=
\left(
\begin{array}{c}
\tilde{\Gamma}_R\, p^{(n-1)}_{11} -\tilde{\Gamma}_L\, p^{(n)}_{00} \\
\tilde{\Gamma}_L\, p^{(n)}_{00} - \tilde{\Gamma}_R\, p^{(n)}_{11}
\end{array}
\right)
\end{equation}
that can be derived by tracing the equation of motion for the
total density matrix $\rho$ over bath states with a fixed number
($n$) of electrons collected in the right lead and finally
integrating over the mechanical degrees of freedom. The evaluation
of the different terms of the current-noise \eref{eq:Macdonald}
can be carried out by introducing the generating functions
$F_{ii}(t;z) = \sum_n p_{ii}^{(n)}(t)z^n$
\cite{nov-prl-04,andrea}. The Fano factor is calculated in terms
of the stationary probabilities $p_{ii}^{\rm stat}$ and the
pseudoinverse of the Liouvillean \eref{eq:Liouville}
$\mathcal{QL}^{-1}\mathcal{Q}$:
\begin{equation} F = 1 - \frac{2}{I^{\rm stat}}
\left(\begin{array}{cc} 1 & 1 \end{array}\right)\left(%
\begin{array}{cc}
0 & \tilde{\Gamma}_R\\
0 & 0
\end{array}\right)
\mathcal{QL}^{-1}\mathcal{Q}
\left(%
\begin{array}{cc}
0 & \tilde{\Gamma}_R\\
0 & 0
\end{array}\right)
\left(%
\begin{array}{c}
p_{00}^{\rm stat}\\
p_{11}^{\rm stat}
\end{array}\right)
\label{eq:Fanotun}
\end{equation}
For a detailed evaluation of the formula \eref{eq:Fanotun} we
refer the reader to Section IV~B of \cite{jau-preprint-04}.
The resulting Fano factor
\begin{equation}
F = \frac{\tilde{\Gamma}_L^2 + \tilde{\Gamma}_R^2}
{(\tilde{\Gamma}_L + \tilde{\Gamma}_R)^2}
\end{equation}
assumes the familiar form for a tunneling junction, albeit with
renormalized rates. In Fig.~\ref{fig:Currentun} the value of the
Fano factor given by the above formula is depicted as the
high-damping asymptote of the full calculation.
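The pseudoinverse construction entering Eq.~\eref{eq:Fanotun} can be made concrete for the $2\times2$ Liouvillean \eref{eq:Liouville}. The sketch below (plain Python; the rate values are arbitrary) builds the projector $\mathcal{Q}$ off the stationary state, evaluates $\mathcal{QL}^{-1}\mathcal{Q}$, and recovers the analytic Fano factor.

```python
# Pseudoinverse check for the 2x2 Liouvillean (plain Python, no libraries).

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def act(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

GL, GR = 0.3, 0.7                      # renormalized rates (arbitrary values)
L = [[-GL, GR], [GL, -GR]]             # Liouvillean
p = [GR / (GL + GR), GL / (GL + GR)]   # stationary state, L p = 0

# Projector off the stationary state: Q = 1 - |p><1|.
Q = [[1.0 - p[0], -p[0]], [-p[1], 1.0 - p[1]]]
# L - |p><1| is invertible and coincides with L on the range of Q,
# so Q (L - |p><1|)^{-1} Q realizes the pseudoinverse Q L^{-1} Q.
M = [[L[i][j] - p[i] for j in range(2)] for i in range(2)]
R = mul(mul(Q, inv2(M)), Q)

J = [[0.0, GR], [0.0, 0.0]]            # right-junction current superoperator
I_stat = GR * p[1]
v = act(J, act(R, act(J, p)))
F_num = 1.0 - 2.0 * (v[0] + v[1]) / I_stat
F_ana = (GL ** 2 + GR ** 2) / (GL + GR) ** 2
print(F_num, F_ana)
```

The numerical value agrees with the renormalized-rate formula for the tunneling-junction Fano factor given above.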
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=.45\textwidth]{Figures/Currentun.eps}
\includegraphics[angle=-90,width=.45\textwidth]{Figures/Fanotun.eps}
\caption{\small \textit{Left panel - Current as a function of the damping for the SDQS.
The asymptotic tunneling limit is indicated. The parameters are:
$\Gamma_L = \Gamma_R = 0.01\,\omega$, $\gamma=0.25\,\omega$, $d = 0.5\,x_0$, $\lambda = 2\,x_0$, $T=0$.
Right panel - Current-noise as a function of the damping for the SDQS.
The asymptotic tunneling limit is indicated. The parameters are
the same as the ones reported for the current.
}
\label{fig:Currentun}}
\end{center}
\end{figure}
\subsection{Shuttling: a classical transport regime}
\label{sec:shuttling}
The simplified model for the shuttling dynamics is based on the
observation -- extracted from the full description -- that the
system exhibits in this operating regime extremely low Fano
factors ($F \approx 10^{-2}$): we assume that there is \emph{no
noise at all} in the system. Its state is represented by a point
that moves on a trajectory in the device phase-space spanned by
position, momentum and charge of the oscillating dot. The charge
on the oscillating dot is a stochastic variable governed by
tunnelling processes, however in the shuttling regime the
tunnelling events are effectively deterministic since they are
highly probable only at specific times (or positions) defined by
the mechanical dynamics.
\subsubsection{Equation of motion for the relevant variables}
We implement the zero noise assumption in the set of coupled
Klein-Kramers equations (\ref{eq:KleinKramers}) in two steps: we
first set $T=0$ and then simplify the equations further by
neglecting all the terms of the $\hbar$ expansion beyond the leading one, since we assume
the classical action of the oscillator to be much larger than the
Planck constant. We obtain:
\begin{equation}
\eqalign{
\frac{\partial W_{00}^{\rm cl}}{\partial \tau} =&
\left[X\frac{\partial}{\partial P} - P\frac{\partial}{\partial X} +
\frac{\gamma}{\omega} \frac{\partial}{\partial P}P\right]\,W_{00}^{\rm cl}\\
&-\frac{\Gamma_L}{\omega} e^{-2X} W_{00}^{\rm cl} +\frac{\Gamma_R}{\omega} e^{2X} W_{11}^{\rm cl}\\
\frac{\partial W_{11}^{\rm cl}}{\partial \tau} =&
\left[\left(X - \frac{d}{\lambda}\right)\frac{\partial}{\partial P} - P\frac{\partial}{\partial X} +
\frac{\gamma}{\omega} \frac{\partial}{\partial P}
P\right]\,W_{11}^{\rm cl}\\
&-\frac{\Gamma_R}{\omega} e^{ 2X} W_{11}^{\rm cl}
+\frac{\Gamma_L}{\omega} e^{-2X} W_{00}^{\rm cl}
} \label{eq:FPshuttling}
\end{equation}
where we have introduced the dimensionless variables:
\begin{equation}
\tau = \omega t, \quad X = \frac{q}{\lambda}, \quad P = \frac{p}{m \omega \lambda}
\label{eq:nodimension}
\end{equation}
The superscript ``cl'' indicates that we are dealing with the
classical limit of the Wigner function because of the complete
elimination of the quantum ``diffusive" terms from the
Klein-Kramers equations. In this spirit, it is natural to try an
Ansatz for the Wigner functions, in which the position and
momentum dependencies are separable:
\begin{equation}
\eqalign{
W_{00}^{\rm cl}(X,P,\tau) = p_{00}(\tau)\delta(X-X^{\rm cl}(\tau))\delta(P-P^{\rm cl}(\tau))\\
W_{11}^{\rm cl}(X,P,\tau) = p_{11}(\tau)\delta(X-X^{\rm cl}(\tau))\delta(P-P^{\rm cl}(\tau))
}
\label{eq:separation}
\end{equation}
where the trace over the system phase-space sets the constraint
$p_{00} + p_{11} = 1$. The variables $X^{\rm cl}$ and $P^{\rm cl}$
represent the position and momentum of the center of mass of the
oscillating dot; $p_{11(00)}$ is the probability for the quantum
dot to be charged (empty).
By inserting the Ansatz (\ref{eq:separation}) into equation
(\ref{eq:FPshuttling}) and matching the coefficients of the terms
proportional to $\delta\times\delta$ we obtain the equations of
motion for the charge probabilities $p_{ii}$:
\begin{equation}
\eqalign{
\dot{p}_{00} &=-\frac{\Gamma_L}{\omega}e^{-2X^{\rm cl}}p_{00}
+\frac{\Gamma_R}{\omega}e^{2X^{\rm cl}}p_{11}\\
\dot{p}_{11} &= \frac{\Gamma_L}{\omega}e^{-2X^{\rm cl}}p_{00}
-\frac{\Gamma_R}{\omega}e^{2X^{\rm cl}}p_{11}
}
\label{eq:electrical}
\end{equation}
Matching the coefficients proportional to the distributions
$\delta\times\delta'$ (here $\delta'$ is the derivative of the
delta-function) yields the equations for the mechanical degrees of
freedom:
\begin{equation}
\eqalign{
p_{00}\dot{X}^{\rm cl} &= p_{00} P^{\rm cl}\\
p_{11}\dot{X}^{\rm cl} &= p_{11} P^{\rm cl}\\
p_{00}\dot{P}^{\rm cl} &= p_{00}(-X^{\rm cl} - \frac{\gamma}{\omega} P^{\rm cl})\\
p_{11}\dot{P}^{\rm cl} &= p_{11}(-X^{\rm cl} +\frac{d}{\lambda}- \frac{\gamma}{\omega} P^{\rm cl})\\
}
\label{eq:mechanical1}
\end{equation}
The equations involving $\dot{P}^{\rm cl}$ have a solution only if
\begin{equation}
p_{00}p_{11} = 0
\label{eq:condition}
\end{equation}
combined with the normalization condition $p_{00} + p_{11} = 1$.
Under these conditions the system \eref{eq:mechanical1} is
equivalent to
\begin{equation}
\eqalign{
\dot{X}^{\rm cl} &= P^{\rm cl}\\
\dot{P}^{\rm cl} &= -X^{\rm cl} +\frac{d}{\lambda}p_{11}- \frac{\gamma}{\omega} P^{\rm cl}\\
}
\label{eq:mechanical}
\end{equation}
The condition \eref{eq:condition} also follows by substituting the
Ansatz \eref{eq:separation} into the equations
\eref{eq:FPshuttling} and by using the equations of motion
\eref{eq:mechanical} and \eref{eq:electrical}. This shows that
\eref{eq:condition} also sets the limits of the validity of the
Ansatz \eref{eq:separation}. However, the only differentiable
solution for \eref{eq:condition} is $p_{00} = 0$ or $p_{11} = 0$
for all times, which is not compatible with the equation of motion
\eref{eq:electrical}. Thus, the Ansatz \eref{eq:separation} does
not yield exact solutions to the original equations
\eref{eq:FPshuttling}.
While an exact solution has not been found, we can still proceed
with the following physical argument. Suppose now that the
switching time between the two allowed states, $p_{11}=1;\,
p_{00}=0$ or $p_{00}=1;\, p_{11}=0$, is much shorter than the
shortest mechanical time (the oscillator period $T = 2\pi /\omega$). A
solution of the system of equations (\ref{eq:mechanical}) and
(\ref{eq:electrical}) with this time scale separation would
satisfy the condition (\ref{eq:condition}) ``almost everywhere''
and, when inserted into (\ref{eq:separation}), would represent a
solution of (\ref{eq:FPshuttling}).
\noindent We rewrite the set of equations (\ref{eq:mechanical})
and (\ref{eq:electrical}) as:
\begin{equation}
\eqalign{
\dot{X} &= P\\
\dot{P} &= -X + d^* Q - \gamma^* P\\
\dot{Q} &= \Gamma_L^* e^{-2X}(1-Q) -\Gamma_R^* e^{2X}Q
}
\label{eq:shuttlingfin}
\end{equation}
where we have dropped the ``cl" superscript, renamed $p_{11}
\equiv Q$, used the trace condition $p_{00} = 1-p_{11}$, and
defined the rescaled parameters: $d^* = d/\lambda, \quad \gamma^* = \gamma/\omega,
\quad \Gamma_{L,R}^* = \Gamma_{L,R}/\omega$. In the following section we
analyze the dynamics implied by Eq.~(\ref{eq:shuttlingfin}).
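As an illustration, the system \eref{eq:shuttlingfin} can be integrated numerically with a simple splitting scheme: a fourth-order Runge-Kutta update of the mechanical part with the charge frozen, followed by an exact exponential update of the charge with the position frozen. The exact charge update keeps the scheme stable even where the position-dependent rates $\Gamma^*_{L,R}e^{\pm 2X}$ become large near the turning points. The sketch below is our own illustrative code, not part of the original analysis; it uses the shuttling-regime parameter values $\gamma^*=0.02$, $d^*=0.5$, $\Gamma^*_{L,R}=0.05$ quoted later for Fig.~\ref{fig:rings}.

```python
import math

# Rescaled shuttling-regime parameters (upper row of Fig. rings):
# gamma* = 0.02, d* = 0.5, Gamma*_L = Gamma*_R = 0.05.
DSTAR, GSTAR, GL, GR = 0.5, 0.02, 0.05, 0.05

def step(X, P, Q, h):
    """One splitting step: RK4 for (X, P) with the charge Q frozen,
    then an exact exponential update of Q with X frozen.  The exact
    charge update is unconditionally stable, even where the rates
    Gamma* exp(+-2X) are large near the turning points."""
    def f(x, p):
        return p, -x + DSTAR * Q - GSTAR * p
    k1 = f(X, P)
    k2 = f(X + 0.5 * h * k1[0], P + 0.5 * h * k1[1])
    k3 = f(X + 0.5 * h * k2[0], P + 0.5 * h * k2[1])
    k4 = f(X + h * k3[0], P + h * k3[1])
    X += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    P += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    a, b = GL * math.exp(-2 * X), GR * math.exp(2 * X)  # loading / unloading rates
    qeq = a / (a + b)
    Q = qeq + (Q - qeq) * math.exp(-(a + b) * h)
    return X, P, Q

def trajectory(X=0.0, P=4.0, Q=0.0, h=0.005, steps=200_000):
    """Integrate eq. (shuttlingfin), starting near the expected limit cycle."""
    traj = []
    for _ in range(steps):
        X, P, Q = step(X, P, Q, h)
        traj.append((X, P, Q))
    return traj
```

Plotting the tail of `trajectory()` in the $(X,P)$ plane traces the closed ring of the limit cycle, with the charge jumping between $0$ and $1$ once per mechanical period.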
\subsubsection{Stable limit cycles}
Here we give the results of a numerical solution of
Eq.~(\ref{eq:shuttlingfin}) for different values of the parameters
and different initial conditions. For the parameter values that
correspond to the fully developed shuttling regime, the system has
a limit cycle solution with the desirable time scale separation we
discussed in the previous section. Figure \ref{fig:Limitcycles}
shows the typical appearance of the limit cycle.
\begin{figure}[h]
\begin{center}
\includegraphics[angle=0,width=0.5\textwidth]{Figures/Limitcycles.eps}
\caption{\small \textit{Different representations of the limit cycle
solution of the system of differential equations (\ref{eq:shuttlingfin})
that describes the shuttling regime. For a detailed description see the text.
$X$ is the coordinate in units of the tunneling length $\lambda$,
while $P$ is the momentum in units of $m\omega\lambda$.}
\label{fig:Limitcycles}}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[angle=0,width=.5\textwidth]{Figures/rings.eps}
\caption{\small \textit{Correspondence between the Wigner function representation
and the simplified trajectory limit for the shuttling regime. The white ring is the ($X,P$)
projection of the limit cycle. The $Q=1$ and $Q=0$ portions of the trajectory are visible in the
charged and empty dot graphs respectively. The parameters are $\gamma = 0.02\,\omega$, $d = 0.5\,x_0$,
$\Gamma=0.05\,\omega$, $\lambda = x_0$ in the upper row, $\gamma = 0.02\,\omega$, $d = 0.5\,x_0$,
$\Gamma=0.05\,\omega$, $\lambda = 2\,x_0$ in the central row and $\gamma = 0.02\,\omega$, $d = 0.5\,x_0$,
$\Gamma=0.01\,\omega$, $\lambda = 2\,x_0$ in the lower row.}
\label{fig:rings}}
\end{center}
\end{figure}
In Fig.~\ref{fig:Limitcycles}(a) we show the charge $Q(\tau)$ as a
function of time. The charge jumps periodically from 0
to 1 and back with a period equal to the mechanical period; the
transition itself is almost instantaneous. Panels (b), (c) and
(d) show three different projections of the 3D phase-space
trajectory, with the time evolution along them proceeding
clockwise. The $X,P$ projection shows the characteristic circular
trajectory of harmonic oscillations, while the $X,Q$ ($P,Q$)
projection makes the position(momentum)-charge correlation visible.
The full description of the SDQS in the shuttling regime has a
phase space visualization in terms of a ring shaped
\emph{stationary} total Wigner distribution function, see
Fig.~\ref{fig:WF}. We can interpret this fuzzy ring as the
probability distribution obtained from many different noisy
realizations of (quasi) limit cycles. The stationary solution for
the Wigner distribution is the result of a diffusive dynamics on
an effective ``Mexican hat" potential that involves both amplitude
and phase of the oscillations. In the noise-free semiclassical
approximation we turn off the diffusive processes, and in the
shuttling regime the point-like state describes a single
trajectory with a definite constant amplitude and \emph{periodic}
phase. We expect this trajectory to be the average of the noisy
trajectories represented by the Wigner distribution. The third
column of Fig.~\ref{fig:rings} presents the total Wigner function
corresponding to different parameter realizations of the shuttling
regime. The white circle is the semiclassical
trajectory. In the first two columns the asymmetric sharing of the
ring between the charged and empty states is also compared with
the corresponding $Q=1$ and $Q=0$ portions of the semiclassical
trajectory.
In the semiclassical description we also have direct access to
the current as a function of time. For example, the right-lead
current reads:
\begin{equation}
I_R(\tau) = Q(\tau)\Gamma_R e^{2X(\tau)}
\end{equation}
and is also a periodic function, with peaks corresponding to
the unloading processes. The integral of $I_R(\tau)$ over one
mechanical period is 1 and represents the number of electrons
shuttled per cycle by the oscillating dot, in complete agreement
with the full description.
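This one-electron-per-cycle statement can be checked directly on the semiclassical trajectory: accumulating the charge transferred to the right lead over many periods of the limit cycle and dividing by the number of periods should give a value close to 1. The sketch below is illustrative code of ours, using the same splitting scheme and shuttling-regime parameters as in the previous subsection; the transferred charge per step is integrated exactly for frozen rates.

```python
import math

DSTAR, GSTAR, GL, GR = 0.5, 0.02, 0.05, 0.05  # shuttling-regime values

def step(X, P, Q, h):
    """RK4 for (X, P) with Q frozen, then an exact charge update with X
    frozen; returns the new state and the charge dq transferred to the
    right lead during the step (exact for piecewise-frozen rates)."""
    def f(x, p):
        return p, -x + DSTAR * Q - GSTAR * p
    k1 = f(X, P)
    k2 = f(X + 0.5 * h * k1[0], P + 0.5 * h * k1[1])
    k3 = f(X + 0.5 * h * k2[0], P + 0.5 * h * k2[1])
    k4 = f(X + h * k3[0], P + h * k3[1])
    X += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    P += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    a, b = GL * math.exp(-2 * X), GR * math.exp(2 * X)
    qeq = a / (a + b)
    decay = math.exp(-(a + b) * h)
    dq = b * (qeq * h + (Q - qeq) * (1.0 - decay) / (a + b))
    Q = qeq + (Q - qeq) * decay
    return X, P, Q, dq

def electrons_per_cycle(h=0.005, transient=200_000, steps=100_000):
    """Charge through the right lead per mechanical period 2*pi,
    measured on the limit cycle after discarding a transient."""
    X, P, Q = 0.0, 4.0, 0.0
    for _ in range(transient):
        X, P, Q, _ = step(X, P, Q, h)
    charge = 0.0
    for _ in range(steps):
        X, P, Q, dq = step(X, P, Q, h)
        charge += dq
    return charge / (steps * h / (2.0 * math.pi))
```

On the fully developed limit cycle the result is close to one electron per cycle, consistent with the statement above.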
\subsection{Coexistence: a dichotomous process}
\label{sec:coexistence}
The longest time-scale in the coexistence regime corresponds to
infrequent switching between the shuttling and the tunneling
regime, see Fig.~\ref{fig:time-scales}. The amplitude of the dot
oscillations is the relevant variable that records this slow
dynamics. We analyze this particular operating regime of the SDQS
in four steps. (i) We first explore the consequences of the slow
switching in terms of current and current noise. (ii) Next, we
derive the effective bistable potential which controls the
dynamics of the oscillation amplitude. (iii) We then apply
Kramers' theory for escape rates to this effective potential and
calculate the switching rates between the two amplitude metastable
states corresponding to the local minima of the potential. (iv) We
conclude the section by comparing the (semi)analytical results of
the simplified model with the numerical calculations for the full
model.
\begin{figure}[h]
\begin{center}
\includegraphics[angle=0,width=.6\textwidth]{Figures/dichscheme.eps}
\caption{\small \textit{Schematic representation of the time evolution
of the current in the dichotomous process between current modes in the SDQS coexistence
regime. The relevant currents are the
shuttling ($I_{\rm sh}$) and the tunneling ($I_{\rm tun}$) currents, respectively.
The switching
rates $\Gamma_{\rm in}$ and $\Gamma_{\rm out}$ correspond to switching in and out
of the tunneling mode.}\label{fig:time-scales}}
\end{center}
\end{figure}
\subsubsection{Two current modes}\label{sec:dichot}
Let us consider a bistable system with two different modes that we
call for convenience \emph{shuttling} (sh) and \emph{tunneling} (tun) and
two different currents $I_{\rm sh}$ and $I_{\rm tun}$,
respectively, associated with these modes. The system can switch
between the shuttling and the tunneling mode randomly, but with
definite rates: namely $\Gamma_{\rm in}$ for the process ``shuttling
$\to$ tunneling" and $\Gamma_{\rm out}$ for the opposite, ``tunneling
$\to$ shuttling". We collect this information in the master
equation:
\begin{equation}
\dot{\mathbf{P}} = \frac{d}{dt}\left(\begin{array}{c} P_{\rm sh} \\ P_{\rm tun}\\ \end{array}\right) =
\left(\begin{array}{cc} -\Gamma_{\rm in} & \phantom{-}\Gamma_{\rm out}\\ \phantom{-}\Gamma_{\rm in} & -\Gamma_{\rm out}\\
\end{array}\right)
\left(\begin{array}{c} P_{\rm sh} \\ P_{\rm tun} \\ \end{array}\right) = \mathbf{LP}
\label{eq:MasterEquation}
\end{equation}
For such a system the average current and the Fano factor read
\cite{andrea,jor-prl-04}:
\begin{equation}
\eqalign{
I^{\rm stat} &= \frac{I_{\rm sh} \Gamma_{\rm out} + I_{\rm tun}\Gamma_{\rm in}}{\Gamma_{\rm in}+ \Gamma_{\rm out}}\\
%
F &= \frac{S(0)}{I^{\rm stat}} =
2\frac{(I_{\rm sh}-I_{\rm tun})^2}
{I_{\rm sh} \Gamma_{\rm out} + I_{\rm tun}\Gamma_{\rm in}}
\frac{\Gamma_{\rm in}\Gamma_{\rm out}}{(\Gamma_{\rm in}+\Gamma_{\rm out})^2}
}
\label{eq:dichCurrFano}
\end{equation}
These formulas provide the framework of the simplified model for
the coexistence regime. The task is now to identify the two
modes in the dynamics of the shuttle device and, above all, to
calculate the switching rates. This can be done by using
Kramers' escape rates for a bistable effective potential.
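Equations \eref{eq:dichCurrFano} are straightforward to evaluate once the two currents and rates are known; a minimal sketch (our own code, with hypothetical parameter values) is:

```python
def dichotomous_stats(i_sh, i_tun, g_in, g_out):
    """Average current and zero-frequency Fano factor of a two-mode
    dichotomous process, following eq. (dichCurrFano)."""
    i_stat = (i_sh * g_out + i_tun * g_in) / (g_in + g_out)
    fano = (2.0 * (i_sh - i_tun) ** 2 / (i_sh * g_out + i_tun * g_in)
            * g_in * g_out / (g_in + g_out) ** 2)
    return i_stat, fano
```

For slow switching, i.e. $\Gamma_{\rm in},\Gamma_{\rm out}$ much smaller than the scale set by the current difference, the Fano factor becomes very large, the hallmark of a dichotomous process.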
\subsubsection{Effective potential}
The tunneling to shuttling crossover visualized by the total
Wigner function distribution (Fig.~\ref{fig:WF}) can be understood
in terms of an effective stationary potential in the phase space
generated by the non-linear dynamics of the shuttle device. We
show in Fig.~\ref{fig:hats} the three qualitatively different
shapes of the potential surmised from the observation of the
stationary Wigner functions associated with the three operating
regimes. Recently Fedorets {\it et al.}~\cite{fed-prl-04}
initiated the study of the tunneling-shuttling transition in terms
of an effective radial potential. Taking inspiration from their
work we extend the analysis to the slowest \emph{dynamics} in the
device and use quantitatively the idea of the effective potential
for the description of the coexistence regime.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=.6\textwidth]{Figures/hats.eps}
\caption{\small \textit{Schematic representation of the
effective potentials for the three operating regimes.}
\label{fig:hats}}
\end{center}
\end{figure}
In the process of eliminating the fast variables we start from
the Klein-Kramers equations for the SDQS, which we rewrite in
symmetrized form by shifting the origin of coordinates to $d/2$:
\begin{equation}
\fl
\eqalign{
\frac{\partial W_{00}}{\partial t} =& \left[m \omega^2 \left(q +
\frac{d}{2}\right)\frac{\partial}{\partial p}
-\frac{p}{m}\frac{\partial}{\partial q}
+\gamma \frac{\partial}{\partial p}p
+\gamma m \hbar \omega \left(n_B + \frac{1}{2} \right)
\frac{\partial^2}{\partial p^2}\right]W_{00}\\
& +\Gamma_{R}e^{ 2q/\lambda} W_{11}
-\Gamma_{L}e^{- 2q/\lambda}\sum_{n=0}^{\infty}
\frac{(-1)^n}{(2n)!}
\left(\frac{\hbar}{\lambda}\right)^{2n}
\frac{\partial^{2n}W_{00}}{\partial p^{2n}}\\
\frac{\partial W_{11}}{\partial t} =& \left[m \omega^2 \left(q -
\frac{d}{2}\right)\frac{\partial}{\partial p}
-\frac{p}{m}\frac{\partial}{\partial q}
+\gamma \frac{\partial}{\partial p}p
+\gamma m \hbar \omega \left(n_B + \frac{1}{2} \right)
\frac{\partial^2}{\partial p^2}\right]W_{11}\\
& +\Gamma_{L}e^{- 2q/\lambda} W_{00}
-\Gamma_{R}e^{2q/\lambda}\sum_{n=0}^{\infty}
\frac{(-1)^n}{(2n)!}
\left(\frac{\hbar}{\lambda}\right)^{2n}
\frac{\partial^{2n}W_{11}}{\partial p^{2n}}
}
\label{eq:KlKr}
\end{equation}
where the renormalization of the tunneling rates due to the
coordinate shift has been absorbed in a redefinition of the
$\Gamma$'s. The idea is to eliminate the variables that, due to their
fast dynamics, are not relevant for the description of the
coexistence regime. In equations (\ref{eq:KlKr}) we describe the
electrical state of the dot as empty or charged. We switch to a
new set of variables with the definition:
\begin{equation}
W_{\pm} = W_{00} \pm W_{11}
\end{equation}
In the absence of the harmonic oscillator the state $|+\rangle$ would
be fixed by the trace sum rule and the state $|-\rangle$ would
relax to zero on a time scale fixed by the tunneling rates. We
assume that, also in the presence of the mechanical degree of
freedom, the relaxation dynamics of the $|-\rangle$ state is much
faster than that of the $|+\rangle$ state.
In the dimensionless phase space given by the coordinates $X$ and
$P$ of \eref{eq:nodimension} we switch to the polar coordinates
defined by the relations \cite{fed-prl-04}:
\begin{equation}
X = A \sin \phi \quad P = A \cos \phi
\end{equation}
Since we are interested only in the dynamics of the amplitude in
the phase space (the slowest in the coexistence regime) we
introduce the projector $\mathcal{P}_{\phi}$ that averages over
the phase:
\begin{equation}
\mathcal{P}_{\phi}[\bullet] = \frac{1}{2\pi}\int_0^{2 \pi} d\phi \bullet
\end{equation}
We also need the orthogonal complement $\mathcal{Q}_{\phi} = 1 -
\mathcal{P}_{\phi}$. Using these two operators we decompose the
Wigner distribution function into:
\begin{equation}
W_+ = \mathcal{P}_{\phi} W_+ + \mathcal{Q}_{\phi} W_+ = \bar{W}_+ + \tilde{W}_+
\end{equation}
Finally we make a perturbation expansion of (\ref{eq:KlKr}) in the
small parameters:
\begin{equation}
\frac{d}{\lambda} \ll 1,\quad \left(\frac{x_0}{\lambda}\right)^2 \ll 1, \quad \frac{\gamma}{\omega} \ll 1
\label{eq:smallparameters}
\end{equation}
These three inequalities correspond to the three physical
assumptions:
\begin{enumerate}
\item The external electrostatic force is a small perturbation of
the harmonic oscillator restoring force in terms of the
sensitivity to displacement of the tunneling rates. This justifies
an oscillator-independent treatment of the tunneling regime.
\item The tunneling length is large compared to the zero point
fluctuations. Since the oscillator dynamics for the shuttling
regime (and then partially also for the coexistence regime)
happens on the scale of the tunneling length, this condition
ensures a quasi-classical behaviour of the harmonic oscillator.
\item The coupling of the oscillator to the thermal bath is weak
and the oscillator dynamics is under-damped.
\end{enumerate}
Using these approximations the Klein-Kramers equations
\eref{eq:KlKr} reduce (for details see, \emph{e.g.},
\cite{fed-prl-04,andrea}) to the form:
\begin{equation}
\partial_{\tau} \bar{W}_+(A,\tau)=
\frac{1}{A}\partial_A A[V'(A) + D(A)\partial_A]\bar{W}_+(A,\tau)
\label{eq:KramersquasiA}
\end{equation}
where $V'(A) = \frac{d}{dA}V(A)$ and $D(A)$ are given functions of
$A$. Before calculating the functions $V'$ and $D$ explicitly we
explore the consequences of the formulation of the Klein-Kramers
equations (\ref{eq:KlKr}) in the form (\ref{eq:KramersquasiA}).
The stationary solution of the equation (\ref{eq:KramersquasiA})
reads \cite{fed-prl-04}:
\begin{equation}
\bar{W}_+^{\rm stat}(A) = \frac{1}{\mathcal{Z}}
\exp\left(-\int_0^A
dA' \frac{V'(A')}{D(A')}\right)\label{eq:FedW}
\end{equation}
where $\mathcal{Z}$ is the normalization that ensures the integral
of the phase-space distribution to be unity: $\int_0^{\infty}
dA'2\pi A' \bar{W}_+^{\rm stat}(A') = 1$. Equation
(\ref{eq:KramersquasiA}) is identical to the Fokker-Planck
equation for a particle in the two-dimensional rotationally
invariant potential $V$ (see Fig.~\ref{fig:hats}) with stochastic
forces described by the (position dependent) diffusion coefficient
$D$. All contributions to the effective potential $V$ and
diffusion coefficient $D$ can be grouped according to the power of
the small parameters that they contain.
\begin{equation}
\eqalign{
\fl
V'(A) &=
\frac{\gamma}{\omega} \frac{A}{2}
+\frac{d}{2\lambda} \alpha_0(A)
+\left(\frac{x_0}{\lambda}\right)^4\alpha_1(A)
+\left(\frac{d}{2\lambda}\right)^2\alpha_2(A)
+\frac{\gamma}{\omega}\frac{d}{2\lambda}\alpha_3(A)\\
\fl
D(A) &=
\frac{\gamma}{\omega}\left(\frac{x_0}{\lambda}\right)^2
\frac{1}{2}\left(n_B + \frac{1}{2}\right)
+\left(\frac{x_0}{\lambda}\right)^4\beta_1(A)
+\left(\frac{d}{2\lambda}\right)^2\beta_2(A)
+\frac{\gamma}{\omega}\frac{d}{2\lambda}\beta_3(A)
}
\end{equation}
where the $\alpha$ functions read:
\begin{equation}
\eqalign{
\alpha_0 =&
\mathcal{P}_{\phi}\cos \phi \hat{G}_0 \Gamma_-\\
\alpha_1 =&
-\frac{1}{4}\mathcal{P}_{\phi} \cos \phi \Gamma_-
\partial_P (\hat{G}_0 \Gamma_-)\\
\alpha_2 =&
\mathcal{P}_{\phi} \cos \phi \hat{G}_0 \Gamma_-
\hat{g}_0 \mathcal{Q}_{\phi}
\partial_P (\hat{G}_0 \Gamma_-)\\
\alpha_3 =&
\mathcal{P}_{\phi}\cos \phi \Big[\hat{G}_0^2 \Gamma_- +
A \hat{G}_0 \partial_P (\hat{G}_0 \Gamma_-)
- \frac{A}{2} \sin \phi \partial_P (\hat{G}_0 \Gamma_-)\Big]
}
\end{equation}
and the $\beta$'s can be written as:
\begin{equation}
\eqalign{
\beta_1 =& \frac{1}{4}
\mathcal{P}_{\phi}\cos^2\phi \Big[\Gamma_+ - \Gamma_- \hat{G}_0 \Gamma_-\Big]\\
\beta_2 =&
\mathcal{P}_{\phi} \cos\phi \Big[\hat{G}_0 \cos\phi
+ \hat{G}_0\Gamma_- \hat{g}_0 \mathcal{Q}_{\phi} \cos\phi \hat{G}_0\Gamma_-\Big]\\
\beta_3 =&
A \mathcal{P}_{\phi}\cos \phi \Big[ \hat{G}_0 \cos^2\phi
\hat{G}_0 \Gamma_- + \frac{1}{4} \hat{G}_0 \Gamma_- \sin 2\phi
- \frac{1}{4} \sin 2\phi \hat{G}_0\Gamma_-\Big]\\
}
\end{equation}
where
\begin{equation}
\eqalign{
\Gamma_{\pm}&= \Gamma_L e^{-2A \sin \phi} \pm \Gamma_R e^{2A \sin \phi}\\
\hat{g}_0 &=(\partial_{\phi})^{-1}\\
\hat{G}_0 &=(\partial_{\phi} + \Gamma_+)^{-1}
}
\end{equation}
The $\alpha$ and $\beta$ functions are calculated by isolating in the
Liouvillean for the distribution $\bar{W}_+$ the driving and
diffusive components with generic forms
\begin{equation}
\frac{1}{A}\partial_A A\{ \alpha_i(A) \}
\label{eq:driving}
\end{equation}
and
\begin{equation}
\frac{1}{A}\partial_A A\{ \beta_i(A) \}\partial_A
\label{eq:diffusive}
\end{equation}
respectively. As an example we give the derivation of the
functions $\alpha_3$ and $\beta_3$. We start by rewriting the equation of
motion \eref{eq:KramersquasiA} for the distribution $\bar{W}_+$ in
the form:
\begin{equation}
\partial_{\tau}\bar{W}_+ = \mathcal{L}[\bar{W}_+] \approx
(\mathcal{L}^{I} + \mathcal{L}^{II})[\bar{W}_+]
\end{equation}
where we have distinguished the Liouvilleans of first and second
order in the small parameter expansion \eref{eq:smallparameters}.
The contribution $\frac{\gamma}{\omega}\frac{d}{\lambda}$ of the second order
Liouvillean $\mathcal{L}^{II}$ reads:
\begin{equation}
\fl
\mathcal{L}_{\gamma d} =
\mathcal{P}_{\phi}\partial_P [\hat{G}_0 \partial_P P \hat{G}_0 \Gamma_- +
P \hat{g}_0 \mathcal{Q}_{\phi} \partial_P \hat{G}_0 \Gamma_- +
\hat{G}_0 \Gamma_- \hat{g}_0 \mathcal{Q}_{\phi} \partial_P P]
\end{equation}
and represents the starting point for the calculation of the
functions $\alpha_3$ and $\beta_3$. We then express the differential
operators $\partial_P$ in polar coordinates and take into account
that the Liouvillean is applied to a function $\bar{W}_+$
independent of the variable $\phi$:
\begin{equation}
\fl
\eqalign{
\mathcal{L}_{\gamma d} = &
\frac{1}{A} \partial_A A \mathcal{P}_{\phi} \cos \phi
\Big[\hat{G}_0^2 \Gamma_- + \hat{G}_0 A \cos \phi \partial_P (\hat{G}_0\Gamma_-)
+ \hat{G}_0 A \cos^2 \phi \hat{G}_0 \Gamma_- \partial_A \\
& + A \cos \phi \hat{g}_0 \mathcal{Q}_{\phi} \partial_P(\hat{G}_0\Gamma_-)
+ A \cos \phi \hat{g}_0 \mathcal{Q}_{\phi} \cos \phi \hat{G}_0\Gamma_- \partial_A \\
& +
\hat{G}_0\Gamma_- \hat{g}_0 \mathcal{Q}_{\phi} A \cos^2 \phi \partial_A \Big]
}
\label{eq:quasia3b3}
\end{equation}
Finally, we separate in \eref{eq:quasia3b3} the driving term from
the diffusive contributions and thus identify the functions $\alpha_3$
and $\beta_3$ \footnote{In this step of the derivation we have also
used the projector $\mathcal{P}_{\phi}$ to define a scalar product
$\mathcal{P}_{\phi} f(\phi) g(\phi) \equiv (f,g)$ and the adjoint
relation: $(f, \hat{O}g ) = (\hat{O}^{\dagger}f,g)$.}:
\begin{equation}
\fl
\eqalign{
\mathcal{L}_{\gamma d} = &
\frac{1}{A} \partial_A A
\left\{
\mathcal{P}_{\phi} \cos \phi
\Big[
\hat{G}_0^2 \Gamma_- +
A \hat{G}_0 \partial_P (\hat{G}_0 \Gamma_-)
- \frac{A}{2} \sin \phi \partial_P (\hat{G}_0 \Gamma_-)
\Big]
\right\} +\\
&\frac{1}{A} \partial_A A
\left\{
\mathcal{P}_{\phi} \cos \phi
\Big[
\hat{G}_0 \cos^2\phi
\hat{G}_0 \Gamma_- + \frac{1}{4} \hat{G}_0 \Gamma_- \sin 2\phi
- \frac{1}{4} \sin 2\phi \hat{G}_0\Gamma_-
\Big]
\right\}\partial_A
}
\end{equation}
Some of these results appear also in the work by Fedorets {\it et
al.}~\cite{fed-prl-04}. Since we have projected out the phase
$\phi$ we are effectively working in a one-dimensional phase space
given by the amplitude $A$. Note, however, that
Eq.~(\ref{eq:KramersquasiA}) is \emph{not}, as it stands, a
Kramers equation for a single variable. This is related to the
fact that the distribution $\bar{W}_+$ is also \emph{not} the
amplitude distribution function, but, so to speak, a cut at fixed
phase of a two-dimensional rotationally invariant distribution.
The difference is a geometrical factor $A$. We define the
amplitude probability distribution $\mathcal{W}(A,\tau) = A
\bar{W}_+(A,\tau)$ and insert this definition in equation
(\ref{eq:KramersquasiA}). We obtain:
\begin{equation}
\eqalign{
\partial_{\tau} \mathcal{W}(A,\tau)
&= \partial_A A[V'(A) +
D(A)\partial_A]\frac{1}{A}\mathcal{W}(A,\tau)\\
&= \partial_A [\mathcal{V}'(A) + D(A)\partial_A]\mathcal{W}(A,\tau)
}
\label{eq:KramersA}
\end{equation}
where we have defined the geometrically corrected potential
\begin{equation}
\mathcal{V}(A) = V(A) - \int_{A_0}^{A} \!\! \frac{D(A')}{A'} dA'
\end{equation}
which for an amplitude-independent diffusion coefficient gives a
corrected potential diverging logarithmically at the origin. The
lower limit of integration is arbitrary and reflects the arbitrary
constant in the definition of the potential. The equation
(\ref{eq:KramersA}) is the one-dimensional Kramers equation that
constitutes the starting point for the calculation of the
switching rates that characterize the coexistence regime.
The effective potential $\mathcal{V}$ that we obtained has, for
parameters that correspond to the coexistence regime, a typical
double-well shape (see \emph{e.g.} the left panel of figure
\ref{fig:Codisexample}). For the moment we assume the diffusion
constant to be independent of the amplitude $A$. In this
approximation the stationary solution of the equation
(\ref{eq:KramersA}) reads:
\begin{equation}
\mathcal{W}^{\rm stat}(A)
= \frac{1}{\mathcal{Z}} \exp\left(-\frac{\mathcal{V}(A)}{D}\right)
\label{eq:stationary}
\end{equation}
where $\mathcal{Z}$ is the normalization: $\mathcal{Z} =
\int_0^{\infty} \exp\left(-\frac{\mathcal{V}(A)}{D}\right)dA $.
The probability distribution is concentrated around the minima of
the potential and has a minimum at the potential barrier (see
\emph{e.g.}\ the right panel of Fig.~\ref{fig:Codisexample}). If
this potential barrier is high enough (\emph{i.e.}\
$\mathcal{V}_{\rm max} - \mathcal{V}_{\rm min} \gg D$) we clearly
identify two distinct states with definite average amplitude: the
lower-amplitude state corresponding to the tunneling regime and
the higher-amplitude state to the shuttling regime.
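As a toy illustration of \eref{eq:stationary}, one can normalize $\exp(-\mathcal{V}(A)/D)$ numerically for a bistable quartic potential. The code below is an illustrative sketch of ours; the well positions (at $A=1$ and $A=4$) and the barrier height are hypothetical values chosen only to mimic the coexistence regime.

```python
import math

def w_stat(V, D, a_max=6.0, n=6000):
    """Normalized stationary amplitude distribution exp(-V(A)/D) on
    [0, a_max], with the normalization computed by the trapezoidal rule."""
    h = a_max / n
    w = [math.exp(-V(i * h) / D) for i in range(n + 1)]
    z = h * (0.5 * w[0] + sum(w[1:-1]) + 0.5 * w[-1])
    return [x / z for x in w]

# illustrative double well: minima at A = 1 (tunneling) and A = 4 (shuttling)
V = lambda a: 0.05 * (a - 1.0) ** 2 * (a - 4.0) ** 2
D = 0.05
w = w_stat(V, D)
```

With a barrier roughly five times larger than $D$ the resulting distribution is sharply bimodal, with peaks at the two wells and a deep minimum at the barrier, as in the right panel of Fig.~\ref{fig:Codisexample}.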
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=.48\textwidth]{Figures/ratesscheme.eps}
\includegraphics[angle=0,width=.48\textwidth]{Figures/Codisexample.eps}
\caption{\small \textit{Left panel -- Bistable effective potential for the SDQS coexistence regime.
The important amplitudes for the calculation of the rates are indicated. In red (blue) are indicated
the reflecting (full) and absorbing (dashed) borders for the calculation of the $\Gamma_{\rm out,(in)}$
escape rate.
Right panel -- Example of the stationary distribution $\bar{W}_+^{\rm stat}$ (blue) and the
amplitude distribution $\mathcal{W}^{\rm stat}$ (green) for the SDQS in the coexistence regime.
The tunneling and shuttling states are in both cases well separated.}
\label{fig:Codisexample}}
\end{center}
\end{figure}
The coexistence regime of a SDQS is mapped into a classical model
for a particle moving in a bistable potential $\mathcal{V}$ with
random forces described by the diffusion constant $D$. The
corresponding escape rates from the tunneling to the shuttling
mode ($\Gamma_{\rm out}$) and back ($\Gamma_{\rm in}$) can be calculated
using the standard theory of Mean First Passage Time (MFPT) for a
random variable \cite{risken}:
\begin{equation}
\eqalign{
\Gamma_{\rm out}&= D
\left(
\int_{A_{\rm tun}}^{A_{\rm out}} dB \,
e^{ \frac{{\mathcal V}(B)}{D}}
\int_{A_{\rm min}}^{B} dA \,
e^{-\frac{{\mathcal V}(A)}{D}}
\right)^{-1}\\
\Gamma_{\rm in}&= D
\left(
\int_{A_{\rm in}}^{A_{\rm shut}}dB \,
e^{\frac{{\mathcal V }(B)}{D}}
\int_{B}^{A_{\rm max}}dA\,
e^{-\frac{{\mathcal V}(A)}{D}}
\right)^{-1}\\
}
\label{eq:rates}
\end{equation}
where the integration limits of equation \eref{eq:rates} are
represented graphically in the left panel of
Fig.~\ref{fig:Codisexample}. We can now insert the explicit expressions
for the switching rates $\Gamma_{\rm in}$ and $\Gamma_{\rm out}$ in
Eq.~\eref{eq:dichCurrFano} and obtain in this way the current and
Fano factor for the coexistence regime. They represent, together
with the stationary distribution \eref{eq:stationary}, the main
result of this section and allow for a quantitative comparison
between the simplified model and the full description of the
coexistence regime.
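The double integrals in \eref{eq:rates} are easily evaluated by nested quadrature. The sketch below (illustrative code of ours) applies them to a toy bistable potential with hypothetical wells at $A=1$ and $A=4$ and barrier at $A=2.5$; for a barrier several times larger than $D$ both rates come out small, as required for a well-defined dichotomous process.

```python
import math

def double_int(V, D, b_lo, b_hi, inner, n=300):
    """Midpoint-rule evaluation of the nested integral
    int dB e^{V(B)/D} int dA e^{-V(A)/D} of eq. (rates),
    with the inner limits given by inner(B) -> (lo, hi)."""
    hb = (b_hi - b_lo) / n
    tot = 0.0
    for i in range(n):
        B = b_lo + (i + 0.5) * hb
        lo, hi = inner(B)
        ha = (hi - lo) / n
        s = sum(math.exp(-V(lo + (j + 0.5) * ha) / D) for j in range(n))
        tot += math.exp(V(B) / D) * s * ha * hb
    return tot

# toy bistable potential: tunneling well at A = 1, shuttling well at A = 4
V = lambda a: 0.05 * (a - 1.0) ** 2 * (a - 4.0) ** 2
D = 0.05

# reflecting boundary at A = 0 (resp. A = 6), absorbing at the barrier A = 2.5
gamma_out = D / double_int(V, D, 1.0, 2.5, lambda B: (0.0, B))
gamma_in = D / double_int(V, D, 2.5, 4.0, lambda B: (B, 6.0))
```

Since this toy potential is symmetric about the barrier, the two rates come out nearly equal; in the actual coexistence regime the asymmetry of $\mathcal{V}$ makes them differ.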
\subsubsection{Comparison}
The phase space distribution is the most sensitive object for
comparing the model and the full description. One of the basic
procedures adopted in the derivation of the Kramers equation
(\ref{eq:KramersA}) is the expansion to second order in the small
parameters (\ref{eq:smallparameters}). In order to test the
reliability of the model we simplify the description as much as
possible by reducing it to a classical one, namely by taking the
zero limit for the parameter
$\left(\frac{x_0}{\lambda}\right)$. Physically, we realize this
condition by assuming a large temperature and a tunneling length $\lambda$
of the order of the thermal length $\lambda_{\rm
th}=\sqrt{\frac{k_BT}{m\omega^2}}$. The full description is also
slightly changed, but not qualitatively: the three regimes are
still clearly present with their characteristics. The numerical
calculation of the stationary density matrix is, however, based on a
totally different approach. In the quantum regime we used the
Arnoldi iteration scheme for the numerically demanding calculation
of the null vector of the large ($10^4\times 10^4$) matrix
representing the Liouvillean \cite{nov-prl-03,nov-prl-04}.
Problems concerning the convergence of the Arnoldi iteration, due
to the delicate issue of preconditioning, forced us to abandon this
method in the classical case. We adopted instead the continued
fraction method \cite{risken}.
\begin{figure}[h]
\begin{center}
\includegraphics[angle=0,width=.5\textwidth]{Figures/CompWF.eps}
\caption{\small
\textit{Stationary amplitude probability distribution $\mathcal{W}$
for the SDQS in the coexistence regime. We compare the results obtained from the simplified
model (full line) and from the full description (asterisks).
These results are obtained in the classical high temperature regime $k_BT \gg
\hbar\omega$. The amplitude is measured in units of
$\lambda_{\rm th} = \sqrt{\frac{k_BT}{m\omega^2}}$. The mechanical damping $\gamma$ in units of the mechanical
frequency $\omega$. The other parameter values are $d = 0.05\,\lambda_{\rm th}$ and $\Gamma =
0.015\,\omega$, $\lambda = 2\,\lambda_{\rm th}$.}
\label{fig:CompWF}}
\end{center}
\end{figure}
In Figs.~\ref{fig:CompWF}, \ref{fig:Compcurr} and
\ref{fig:CompNoise} we present the results for the stationary
Wigner function, the current and the Fano factor, respectively, in
the semiclassical approximation and in the full description.
\begin{figure}[h]
\begin{center}
\includegraphics[angle=0,width=.5\textwidth]{Figures/Compcurr.eps}
\caption{\small \textit{Current in the coexistence regime of SDQS.
Comparison between semianalytical and full numerical description.
For the parameter values see Fig.~\ref{fig:CompWF}.}
\label{fig:Compcurr}}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[angle=0,width=.5\textwidth]{Figures/Compnoise.eps}
\caption{\small \textit{Fano factor in the coexistence regime of SDQS.
Comparison between semianalytical and full numerical description.
For the parameter values see Fig.~\ref{fig:CompWF}.}
\label{fig:CompNoise}}
\end{center}
\end{figure}
Deep in the quantum regime the coexistence regime (\emph{e.g.}\
Fig.~\ref{fig:WF} where the amplitude of the shuttling
oscillations is $\approx 7 x_0$) is not captured quantitatively
by the simplified model. Given that the concept of elimination
of the fast dynamics is still valid, we believe that the
discrepancy indicates that the expansion in the small parameters
has not been carried out to sufficiently high order. The
effective potential calculated from a second order expansion still
gives the position of the ring structure with reasonable accuracy
but the overall stationary Wigner function is not fully reproduced
due to an inaccurate diffusion function $D(A)$. One should thus
consider higher order terms in the parameter $(x_0/\lambda)^2$. A
higher order expansion, however, represents a fundamental problem
since it would produce terms with higher order derivatives with
respect to the amplitude $A$ in the Fokker-Planck equation and,
consequently, a straightforward application of the escape time
theory is no longer possible.
It has nevertheless been demonstrated \cite{fli-epl-05} with the
help of the higher cumulants of the current that the description
of the coexistence regime as a dichotomous process is valid also
deep in the quantum regime ($\lambda = 1.5\,x_0$), the only necessary
condition being a separation of the ring and dot structures in the
stationary Wigner function distribution.
We observe that the second order expansion for the effective
potential \eref{eq:smallparameters} is essentially converged, and
able to give the correct position of the shuttling ring also in
the quantum regime. From Eq.~\eref{eq:FedW} it is clear that a
strongly amplitude-dependent diffusion constant $D(A)$ would
destroy this agreement. We conjecture that a higher order
expansion may lead to an effective renormalization of the
diffusion constant. To test this idea we used the diffusion
constant as a fitting parameter, and found that the current and
the Fano factor are very accurately reproduced by using a fitted
diffusion constant, with a value approximately twice as large as
the one calculated at zero temperature and in second order in the
small parameters. Clearly, further work is needed to find out
whether the agreement is due to fortuitous coincidence, or if a
physical justification can be given to this conjecture. Also, an
investigation of whether the results obtained in the classical
limit are extendable to the quantum limit by a controlled
renormalization of the diffusion constant is called for.
\section{Introduction}
The purpose of this note is to present several different types of sum formulas for Schur multiple zeta values, which can be seen as generalizations of classical sum formulas for multiple zeta(-star) values. Schur multiple zeta values are real numbers introduced in \cite{NakasujiPhuksuwanYamasaki2018}, and they can be seen as a simultaneous generalization of the multiple zeta values (MZVs) and multiple zeta-star values (MZSVs), which are defined for an index ${\boldsymbol{k}}=(k_1,\dots,k_d) \in \mathbb{Z}_{\geq 1}^d$ with $k_d\geq 2$ by
\begin{equation} \label{eq:mzv}
\zeta({\boldsymbol{k}})\coloneqq\sum_{0<m_1<\cdots<m_d} \frac{1}{m_1^{k_1}\cdots m_d^{k_d}} \,, \qquad \zeta^\star({\boldsymbol{k}})\coloneqq\sum_{0<m_1\leq\cdots \leq m_d} \frac{1}{m_1^{k_1}\cdots m_d^{k_d}} \,.
\end{equation}
Here the condition $k_d\geq 2$ ensures the convergence of the above sums, and the index ${\boldsymbol{k}}$ is called admissible in this case. For an index ${\boldsymbol{k}} = (k_1,\dots,k_d)$ we write $\wt({\boldsymbol{k}})=k_1+\dots+k_d$ to denote its weight and $\dep({\boldsymbol{k}})=d$ for its depth. A classical result (\cite{Granville1997},\cite{Hoffman92}) is then that the sum of MZ(S)Vs over all admissible indices of fixed weight $w$ and depth $d$ evaluates to (an integer multiple of) $\zeta(w)$, i.e., for $d\geq 1$ and $w\geq d+1$
\begin{align} \label{eq:mzvsumformula}
\sum_{\substack{{\boldsymbol{k}} \text{ admissible}\\\wt({\boldsymbol{k}})=w\\\dep({\boldsymbol{k}})=d}} \zeta({\boldsymbol{k}}) = \zeta(w), \qquad\sum_{\substack{{\boldsymbol{k}} \text{ admissible}\\\wt({\boldsymbol{k}})=w\\\dep({\boldsymbol{k}})=d}} \zeta^\star({\boldsymbol{k}}) = \binom{w-1}{d-1}\zeta(w)\,.
\end{align}
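Both formulas are easy to test numerically with truncated sums. The sketch below (illustrative code of ours) checks the case $w=4$, $d=2$, where the admissible indices are $(1,3)$ and $(2,2)$ and the star-side factor is $\binom{3}{1}=3$; the truncation at $N$ introduces a tail error of order $1/N$ in the slowest-converging terms, so the comparison is only approximate.

```python
def zeta_trunc(k, N=300, star=False):
    """Truncated multiple zeta(-star) value with all summation
    variables cut off at N (cf. eq:mzv)."""
    d = len(k)
    def rec(i, lo):
        # sum over m_i with m_i > (resp. >=) the previous variable
        if i == d:
            return 1.0
        return sum(m ** (-k[i]) * rec(i + 1, m if star else m + 1)
                   for m in range(lo, N + 1))
    return rec(0, 1)

def admissible_indices(w, d):
    """All indices (k_1, ..., k_d) with k_i >= 1, k_d >= 2 and weight w."""
    if d == 1:
        return [(w,)] if w >= 2 else []
    return [(k,) + rest for k in range(1, w)
            for rest in admissible_indices(w - k, d - 1)]
```

For instance, `sum(zeta_trunc(k) for k in admissible_indices(4, 2))` agrees with `zeta_trunc((4,))` up to the truncation error, and the star-side sum with three times that value.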
Schur MZVs generalize MZ(S)Vs by replacing an index ${\boldsymbol{k}}$ by a Young tableau (see Definition \ref{def:schurmzv} for the exact definition). For example, if we have a skew Young diagram $\lambda\slash\mu = (2,2) \slash (1) = \ytableausetup{centertableaux, boxsize=0.6em}\begin{ytableau}
\none & \, \\
\, & \,
\end{ytableau}$ and $k_1, k_3 \geq 1$, $k_2\geq 2$, the Schur MZV of \emph{shape} $\lambda\slash\mu$ for the Young tableau ${\footnotesize \ytableausetup{centertableaux, boxsize=1.3em} {\boldsymbol{k}} = \begin{ytableau}
\none & k_1 \\
k_3 & k_2
\end{ytableau} }\in \YT(\lambda\slash\mu)$ is defined by
\ytableausetup{centertableaux, boxsize=1.3em}
\begin{align}
\zeta({\boldsymbol{k}}) = \zeta\left(\ {\footnotesize \begin{ytableau}
\none & k_1 \\
k_3 & k_2
\end{ytableau}}\ \right) = \sum_{
\arraycolsep=1.4pt\def0.8{0.8}
{\footnotesize \begin{array}{ccc}
& &\,\,\,\,\,\,m_1 \\
& &\,\,\,\,\,\vsmall \\
& m_3 &\leq m_2
\end{array} }} \frac{1}{m_1^{k_1} m_2^{k_2} m_3^{k_3}}\,,
\end{align}
where in the sum $m_1,m_2,m_3\geq 1$.
Schur MZVs generalize MZVs and MZSVs in the sense that we recover these in the special cases $\lambda= (\underbrace{1,\ldots,1}_{d})$ and $\lambda=(d)$, i.e.,
\[ \zeta(k_1,\dots,k_d) = \zeta\left(\ {\footnotesize \ytableausetup{centertableaux, boxsize=1.3em}
\begin{ytableau}
k_1 \\
\svdots \\
k_d
\end{ytableau}}\ \right) \quad \text{ and }\qquad \zeta^\star(k_1,\dots,k_d) =\zeta\left(\ {\footnotesize \ytableausetup{centertableaux, boxsize=1.3em}
\begin{ytableau}
k_1 & \cdots &k_d
\end{ytableau}}\ \right) \,. \]
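These two reductions can be checked directly against the definition by brute force. The sketch below (cutoff, function names, and the depth-two instance are our choices) sums a column and a row filling over all semi-standard fillings with entries up to $N$ and compares them with the truncated MZV and MZSV over the same range; with the same cutoff the identities hold term by term:

```python
from itertools import product

def schur_trunc(cells, N=40):
    """Brute-force truncated Schur MZV.

    cells: dict {(i, j): k_ij} giving the filling of the (skew) diagram;
    sums over all semi-standard fillings (strict down columns, weak along
    rows) with entries <= N."""
    pos = sorted(cells)
    total = 0.0
    for ms in product(range(1, N + 1), repeat=len(pos)):
        m = dict(zip(pos, ms))
        if any((i + 1, j) in m and not m[(i, j)] < m[(i + 1, j)] for (i, j) in pos):
            continue  # column condition violated
        if any((i, j + 1) in m and not m[(i, j)] <= m[(i, j + 1)] for (i, j) in pos):
            continue  # row condition violated
        t = 1.0
        for p in pos:
            t *= m[p] ** (-cells[p])
        total += t
    return total

N = 40
col = schur_trunc({(1, 1): 1, (2, 1): 2}, N)   # one-column shape: MZV zeta(1,2)
row = schur_trunc({(1, 1): 1, (1, 2): 2}, N)   # one-row shape: MZSV zeta*(1,2)
mzv = sum(1 / (a * b**2) for b in range(1, N + 1) for a in range(1, b))
mzsv = sum(1 / (a * b**2) for b in range(1, N + 1) for a in range(1, b + 1))
print(col, mzv, row, mzsv)
```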
The classical sum formulas \eqref{eq:mzvsumformula} state that for the shapes $\lambda=(1,\dots,1)$ or $\lambda=(d)$ the sum of Schur MZVs over all admissible Young tableaux of shape $\lambda$ and weight $w\geq d+1$ evaluates to (a multiple of) $\zeta(w)$. Therefore, it is natural to ask how the situation looks for a general shape $\lambda \slash \mu$, i.e., whether there exist explicit evaluations of
\begin{align}\label{eq:sk}
S_w(\lambda \slash \mu) \coloneqq \sum_{\substack{{\boldsymbol{k}} \in \YT(\lambda \slash \mu) \\ {\boldsymbol{k}} \text{ admissible}\\\wt({\boldsymbol{k}})=w}} \zeta({\boldsymbol{k}})\,.
\end{align}
The optimistic guess that $ S_w(\lambda \slash \mu)$ is always a rational multiple of $\zeta(w)$ seems to be wrong since we will see that, for example, for $\lambda = (2,2)=\ytableausetup{centertableaux, boxsize=0.4em}
\begin{ytableau}
\, & \, \\
\, & \,
\end{ytableau}$ and $\mu =\varnothing$ we have for $w\geq 5$
\begin{equation}\label{eq:22square}\begin{split}
S_w\left(\ {\footnotesize \ytableausetup{centertableaux, boxsize=0.8em}
\begin{ytableau}
\, & \, \\
\, & \,
\end{ytableau}} \ \right)
&= -(w-2) \zeta(1,w-1) + (w-4) \zeta(2,w-2)+2\zeta(3,w-3)\\
& \quad -2\zeta(3)\zeta(w-3)+(w-2)\zeta(2)\zeta(w-2)\,,
\end{split}
\end{equation}
which, by computer experiments, does not seem to evaluate to a rational multiple of $\zeta(w)$ for arbitrary $w$. Nevertheless, we see that \eqref{eq:22square} gives a representation as a sum of products of MZVs, where the number of terms does not depend on $w$. That such an expression exists is not obvious since, a priori, the number of terms in \eqref{eq:sk} increases with the weight $w$. It is, therefore, also reasonable to call \eqref{eq:22square} a sum formula. Since it is of a different type than the sum formula for MZVs, we introduce the following (rough) classification of types of sum formulas for shape $\lambda \slash \mu$:
\begin{enumerate}[leftmargin=4cm]
\item[\emph{single type}:] $S_w(\lambda \slash \mu)$ evaluates to a rational multiple of $\zeta(w)$.
\item[\emph{polynomial type}:] $S_w(\lambda \slash \mu)$ evaluates to a polynomial in $\zeta(m)$ with $m\leq w$.
\item[\emph{bounded type}:] $S_w(\lambda \slash \mu)$ is expressed as a $\mathbb{Q}$-linear combination of MZVs, where the number of terms does not depend\footnote{By this, we mean that for a fixed $\lambda/\mu$ we can find $a( {\boldsymbol{k}} ,w) \in \mathbb{Q}$ for admissible indices ${\boldsymbol{k}}$ and $w\geq 1$ with $S_{w}(\lambda/\mu)=\sum_{ {\boldsymbol{k}} } a({\boldsymbol{k}},w) \zeta({\boldsymbol{k}})$ such that there exists a positive real number $C$ with $\abs{\{ {\boldsymbol{k}} \mid
a({\boldsymbol{k}},w)\neq 0\}} < C$ for all $w\geq 1$.} on $w$, but just on $\lambda \slash \mu$.
\end{enumerate}
With this terminology, the classical sum formulas are of single type, and the formula \eqref{eq:22square} is of bounded type.
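Formula \eqref{eq:22square} can be spot-checked numerically at $w=5$, where the sum on the left-hand side consists of the single admissible weight-$5$ tableau of shape $(2,2)$, namely $k_{1,1}=k_{1,2}=k_{2,1}=1$, $k_{2,2}=2$. The sketch below (cutoffs and the tolerance are our choices) is only a loose check, since the truncated left-hand side converges like $(\log N)^3/N$:

```python
def zeta1_trunc(k, N=20000):
    """Truncated Riemann zeta value."""
    return sum(1.0 / m**k for m in range(1, N + 1))

def zeta2_trunc(k1, k2, N=20000):
    """Truncated double zeta value zeta(k1,k2) = sum_{0<m1<m2} 1/(m1^k1 m2^k2)."""
    s, h = 0.0, 0.0
    for m in range(1, N + 1):
        s += h / m**k2        # h = sum_{m1 < m} 1/m1^{k1}
        h += 1.0 / m**k1
    return s

def schur_square_trunc(N=2000):
    """Truncated Schur MZV of shape (2,2) with entries 1,1,1,2 (weight 5)."""
    H = [0.0] * (N + 1)       # harmonic numbers
    for m in range(1, N + 1):
        H[m] = H[m - 1] + 1.0 / m
    s = 0.0
    for m22 in range(2, N + 1):
        inner = 0.0
        for m11 in range(1, m22):
            # m12 runs over [m11, m22-1], m21 over [m11+1, m22]
            inner += (H[m22 - 1] - H[m11 - 1]) * (H[m22] - H[m11]) / m11
        s += inner / m22**2
    return s

w = 5
rhs = (-(w - 2) * zeta2_trunc(1, w - 1) + (w - 4) * zeta2_trunc(2, w - 2)
       + 2 * zeta2_trunc(3, w - 3) - 2 * zeta1_trunc(3) * zeta1_trunc(w - 3)
       + (w - 2) * zeta1_trunc(2) * zeta1_trunc(w - 2))
lhs = schur_square_trunc()
print(lhs, rhs)
```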
In this note, we will focus on the case when $\lambda \slash \mu$ has either just one corner or $\lambda \slash \mu$ is a ribbon, by which we mean a (skew) Young diagram which is connected and contains no $2\times 2$ block of boxes. For example, if $\lambda$ is just a column or a row, we obtain the single type sum formulas \eqref{eq:mzvsumformula}. As a generalization we will show (Theorem \ref{thm:anti-hook}) that for $\lambda \slash \mu = (\underbrace{d, \dots, d}_{r}) \slash (\underbrace{d-1, \dots, d-1}_{r-1})$, i.e., when $\lambda \slash \mu$ is an anti-hook, we have
\begin{align}
S_w(\lambda \slash \mu) = S_w\left(\ {\footnotesize \ytableausetup{centertableaux, boxsize=1.2em}
\begin{ytableau}
\none & \none & \, \\
\none & \none & \svdots \\
\, & \cdots & \,
\end{ytableau}}\ \right) &= \binom{w-1}{d-1}\zeta(w)\,.
\end{align}
This can be seen as a unification of the two classical sum formulas \eqref{eq:mzvsumformula}. In Theorem \ref{thm:stair 1}, we will show that the ``stairs of tread one'', where the height of each stair is the same, also give rise to single type sum formulas. For example, as special cases, we get the following formulas, valid for all $w\geq 8$
\begin{align}\label{eq:exampletreadone}
S_w\left(\ \ytableausetup{centertableaux, boxsize=0.8em}
\begin{ytableau}
\none & \none & \, \\
\none & \none & \, \\
\none & \, & \, \\
\none & \, & \none \\
\, & \, & \none \\
\end{ytableau} \ \right)= \frac{(w-8)(w-1)}{2}\zeta(w) \,,\quad
S_w\left(\, \ytableausetup{centertableaux, boxsize=0.8em}
\begin{ytableau}
\none & \none &\none & \, \\
\none & \none &\, & \, \\
\none & \, &\, & \none \\
\, & \, &\none & \none \\
\end{ytableau}\, \right)= \frac{(w-9)(w-8)(w-1)}{6}\zeta(w) \,.
\end{align}
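The anti-hook formula of Theorem \ref{thm:anti-hook} can also be spot-checked numerically. Its smallest genuinely skew instance, $d=r=2$ and $w=4$ (our choice of test case), involves the single admissible tableau of shape $(2,2)/(1)$ with $k_1=k_3=1$, $k_2=2$, and asserts that its Schur MZV equals $\binom{3}{1}\zeta(4)$. A sketch with truncated sums (the cutoff is our choice):

```python
N = 100_000
H = 0.0       # harmonic number H_{m-1}
s = 0.0       # sum over m1 < m2 >= m3 of 1/(m1 * m2^2 * m3)
zeta4 = 0.0
for m in range(1, N + 1):
    s += H * (H + 1.0 / m) / m**2   # H_{m-1} * H_m / m^2
    zeta4 += 1.0 / m**4
    H += 1.0 / m
print(s, 3 * zeta4)
```

The tail of the skew sum decays like $(\log N)^2/N$, so the agreement is approximate.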
For arbitrary ribbons with $n$ corners, we will see in Corollary \ref{cor:ncornerribbon} that the corresponding $S_w$ can always be expressed in terms of MZVs of depth $\leq n$.
In particular, in Theorem \ref{thm:ribbon_2_corner} we give an explicit bounded type sum formula
for any ribbon with two corners in terms of double zeta values.
In the last section, we consider shapes that have exactly one corner but which are not necessarily ribbons. We will show that arbitrary shapes with one corner always give rise to sum formulas of bounded type (Theorem \ref{thm:onecornerwithphi}) similar to \eqref{eq:22square}. This will also lead to relations among $S_w(\lambda / \mu)$ for different shapes $\lambda/ \mu$. For example, as a special case of Theorem \ref{thm:S_w rel}, we will see that for any $w\geq 1$
\begin{align*}
\ytableausetup{centertableaux, boxsize=0.8em}
2 S_w\!\!\left(\, \begin{ytableau}
\none & \none & \, \\
\none & \none & \, \\
\none & \, & \, \\
*(lightgray)\, & \, & \,
\end{ytableau}\,\right)
- S_w\!\!\left(\, \begin{ytableau}
\none & \, \\
*(lightgray) \, & \, \\
\, & \, \\
\, & \,
\end{ytableau}\,\right)
-4 S_w\!\!\left(\, \begin{ytableau}
\none & *(lightgray)\, \\
\none & \, \\
\none & \, \\
\, & \, \\
\, & \,
\end{ytableau}\,\right)
= (w-7) S_w\!\!\left(\, \begin{ytableau}
\none & \, \\
\none & \, \\
\, & \, \\
\, & \,
\end{ytableau}\,\right)\,.
\end{align*}
Notice that the diagrams appearing on the left-hand side are obtained by adding one box (the gray one) to the diagram on the right-hand side.
The structure of this paper is as follows: In Section \ref{sec:weightedsum}, we give a recursive formula for certain weighted sums of MZVs, which will be used in later sections. For this, we recall the notion of $2$-posets and their associated integrals. In Section \ref{sec:ribbons}, we consider $S_w(\lambda / \mu)$ where $\lambda / \mu$ is a ribbon. Section \ref{sec:OneCorner} is devoted to the case of shapes with exactly one corner.
\subsection{Notation and definition of Schur MZVs} We will use the following notation in this work. A tuple ${\boldsymbol{k}}=(k_1,\dots,k_d)\in \mathbb{Z}^d_{\geq 1}$ will be called an \emph{index} and we write ${\boldsymbol{k}}=\varnothing$ when $d=0$.
We call ${\boldsymbol{k}}$ \emph{admissible} if $k_d\geq 2$ or if ${\boldsymbol{k}}=\varnothing$. For $n\geq 1$ and $k\in \mathbb{Z}_{\ge 1}$ we use the usual notation $\{k\}^n = \underbrace{k,\dots,k}_{n}$ for the $n$-fold repetition of $k$.
A \emph{partition} of a natural number $n$ is a tuple $\lambda = (\lambda_1,\dots,\lambda_h)$ of positive integers
$\lambda_1 \geq \dots \geq \lambda_h \geq 1$ with $n = |\lambda|= \lambda_1 + \dots + \lambda_h$. We will also use the notation $\lambda=(n^{m_n},\ldots,2^{m_2},1^{m_1})$, where $m_i=m_i(\lambda)$ is the multiplicity of $i$ in $\lambda$.
For another partition $\mu=(\mu_1,\dots,\mu_r)$ we write $\mu \subset \lambda$ if $r\leq h$ and $\mu_i\le\lambda_i$ for $i=1,\dots,r$. In this case, we define the \emph{skew Young diagram} $D(\lambda/\mu)$ of $\lambda \slash \mu$ by
\[D(\lambda \slash \mu) = \left\{(i,j) \in \mathbb{Z}^2 \mid 1 \leq i \leq h\,, \mu_i < j \leq \lambda_i \right\},\]
where $\mu_i = 0$ for $i>r$. In the case where $\mu=\varnothing$ is the empty partition (i.e., the unique partition of zero) we just write $\lambda\slash\mu = \lambda$.
A \emph{Young tableau} ${\boldsymbol{k}} = (k_{i,j})_{(i,j) \in D(\lambda \slash \mu)}$ of shape $\lambda \slash \mu$ is a filling of $D(\lambda\slash\mu)$ obtained by putting $k_{i,j}\in\mathbb{Z}_{\geq 1}$ into the $(i,j)$-entry of $D(\lambda\slash\mu)$. For shorter notation, we will also just write $(k_{i,j})$ in the following if the shape $\lambda \slash \mu$ is clear from the context.
A Young tableau $(m_{i,j})$ is called \emph{semi-standard} if $m_{i,j}<m_{i+1,j}$ and $m_{i,j}\leq m_{i,j+1}$ for all possible $i$ and $j$.
The set of all Young tableaux and all semi-standard Young tableaux of shape $\lambda \slash \mu$ are denoted by $\YT(\lambda \slash \mu)$ and
$\SSYT(\lambda \slash \mu)$, respectively.
An entry $(i,j)\in D(\lambda\slash\mu)$ is called a \emph{corner} of $\lambda \slash \mu$
if $ (i,j+1) \not\in D(\lambda\slash\mu)$ and $ (i+1,j) \not\in D(\lambda\slash\mu)$.
We denote the set of all corners of $\lambda \slash \mu$ by $C(\lambda \slash \mu)$.
For a Young tableau ${\boldsymbol{k}} = (k_{i,j}) \in\YT(\lambda \slash \mu)$ we define its \emph{weight} by $\wt({\boldsymbol{k}}) = \sum_{(i,j) \in D(\lambda \slash \mu)} k_{i,j}$
and we call it \emph{admissible} if $k_{i,j} \geq 2$ for all $(i,j) \in C(\lambda\slash\mu)$.
\begin{defn}\label{def:schurmzv}
For an admissible ${\boldsymbol{k}} = (k_{i,j}) \in \YT(\lambda \slash \mu)$
the \emph{Schur multiple zeta value} (Schur MZV) is defined by
\begin{align}\label{eq:defschurmzv}
\zeta({\boldsymbol{k}}) \coloneqq \sum_{(m_{i,j}) \in \SSYT(\lambda \slash \mu)} \prod_{(i,j) \in D(\lambda \slash \mu)} \frac{1}{m_{i,j}^{k_{i,j}}} \,.
\end{align}
\end{defn}
Note that the admissibility of ${\boldsymbol{k}}$ ensures the convergence of \eqref{eq:defschurmzv} (\cite[Lemma 2.1]{NakasujiPhuksuwanYamasaki2018}). For the empty tableau ${\boldsymbol{k}}=\varnothing$, we have $\zeta(\varnothing)=1$.
Finally, we mention that the convention for the binomial coefficients we use in this work is for $n,k \in \mathbb{Z}$ given by
\begin{align*}
\binom{n}{k} \coloneqq\begin{cases} \frac{n (n-1) \cdots (n-(k-1))}{k!}& \text{if }k>0,\\
1& \text{if }k=0,\\
0& \text{if }k<0.
\end{cases}
\end{align*}
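This convention can be transcribed directly (an illustrative sketch; the function name is ours). Note that for $0\le n<k$ the numerator contains the factor $0$, so the value is $0$, while negative $n$ can give nonzero values such as $\binom{-1}{2}=1$:

```python
from math import factorial

def binom(n, k):
    """Binomial coefficient for arbitrary integers n, k, following the
    convention above: n(n-1)...(n-k+1)/k! for k > 0, 1 for k = 0, 0 for k < 0."""
    if k < 0:
        return 0
    num = 1
    for i in range(k):
        num *= n - i
    # a product of k consecutive integers is always divisible by k!
    return num // factorial(k)
```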
\subsection*{Acknowledgement}
The first author was partially supported by JSPS KAKENHI Grant Numbers JP19K14499, JP21K13771.
The third author was partially supported by JSPS KAKENHI Grant Numbers JP19K23402, JP21K13772.
The fourth author was partially supported by JSPS KAKENHI Grant Numbers JP18H05233, JP18K03221, JP21K03185.
The fifth author was partially supported by JSPS KAKENHI Grant Numbers JP21K03206.
\section{Weighted sum formulas}
\label{sec:weightedsum}
When evaluating sums of Schur MZVs we will often encounter weighted sums of MZVs, which we will discuss in this section. For indices ${\boldsymbol{n}}=(n_1,\ldots,n_d)$, ${\boldsymbol{k}}=(k_1,\ldots,k_d)$ and an integer $l\ge 0$, define
\begin{align}
P_l({\boldsymbol{n}};{\boldsymbol{k}})
&\coloneqq \sum_{\substack{{\boldsymbol{w}}=(w_1,\ldots,w_d): \text{ adm.}\\w_i\ge n_i \ (i=1, \dots, d)\\
\wt({\boldsymbol{w}})=\wt({\boldsymbol{k}})+l}
}
\prod^{d}_{i=1}\binom{w_i-n_i}{k_i-1}\cdot \zeta(w_1,\ldots,w_d)\,.
\end{align}
Notice that by definition
$P_l(\varnothing;\varnothing)=1$ if $l=0$ and $0$ otherwise.
In particular, we put
$P_l({\boldsymbol{k}}) \coloneqq P_l((1,\ldots,1);{\boldsymbol{k}})$
and $Q_l({\boldsymbol{k}}) \coloneqq P_l((1,\ldots,1,2);{\boldsymbol{k}})$.
The aim of this section is to obtain explicit bounded expressions of $P_l({\boldsymbol{k}})$ and $Q_l({\boldsymbol{k}})$, which play important roles throughout the present paper. Here, we say that an expression of $P_l({\boldsymbol{k}})$ or $Q_l({\boldsymbol{k}})$ is {\it bounded} if the number of terms appearing in the expression does not depend on $l$ but only on ${\boldsymbol{k}}$. Notice that, since $\binom{w-2}{k-1}=\sum^{k-1}_{j=0}(-1)^j\binom{w-1}{k-j-1}$, we have
\begin{equation}
\label{for:PtoQ}
Q_l({\boldsymbol{k}})
=\sum^{k_d-1}_{j=0}(-1)^jP_{l+j}(k_1,\ldots,k_{d-1},k_d-j)\,.
\end{equation}
In particular, $Q_l({\boldsymbol{k}})=P_l({\boldsymbol{k}})$ if ${\boldsymbol{k}}$ is non-admissible, whence it is sufficient to study only $P_l({\boldsymbol{k}})$. Moreover, we may assume that $l>0$ because the case $l=0$ is trivial: $P_0({\boldsymbol{k}})=\zeta({\boldsymbol{k}})$ if ${\boldsymbol{k}}$ is admissible and $0$ otherwise and $Q_0({\boldsymbol{k}})=0$. Furthermore, when $d=1$, we have for $k\ge 1$ and $l>0$
\[
P_l(k)
=\binom{l+k-1}{k-1}\zeta(l+k)\,,
\quad
Q_l(k)
=\binom{l+k-2}{k-1}\zeta(l+k)
\]
and therefore we may also assume that $d\ge 2$.
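The definition of $P_l({\boldsymbol{n}};{\boldsymbol{k}})$ and the reduction \eqref{for:PtoQ} can be checked numerically with truncated MZVs. The sketch below (cutoff, function names, and the test instances $P_3(2)$ and $Q_1(1,2)$ are our choices) enumerates the admissible ${\boldsymbol{w}}$ directly:

```python
from math import comb

def mzv_trunc(ks, N=3000):
    """Truncated MZV (strict ordering) via prefix sums."""
    pref = [1.0] * (N + 2)
    for k in ks:
        new, acc = [0.0] * (N + 2), 0.0
        for m in range(1, N + 1):
            acc += pref[m] / m**k
            new[m + 1] = acc
        pref = new
    return pref[N + 1]

def compositions(total, d, mins):
    """All (w_1,...,w_d) with w_i >= mins[i] and w_1 + ... + w_d = total."""
    if d == 1:
        if total >= mins[0]:
            yield (total,)
        return
    for w in range(mins[0], total + 1):
        for rest in compositions(total - w, d - 1, mins[1:]):
            yield (w,) + rest

def P(ns, ks, l, N=3000):
    """P_l(n; k) straight from the definition, with truncated MZVs."""
    total = sum(ks) + l
    mins = list(ns)
    mins[-1] = max(mins[-1], 2)          # admissibility of w
    out = 0.0
    for w in compositions(total, len(ks), mins):
        coef = 1
        for wi, ni, ki in zip(w, ns, ks):
            coef *= comb(wi - ni, ki - 1)
        if coef:
            out += coef * mzv_trunc(w, N)
    return out

# d = 1 closed form: P_3(2) = binom(4,1) * zeta(5)
p_d1 = P((1,), (2,), 3)
closed = comb(4, 1) * mzv_trunc((5,))

# the P-to-Q reduction in the instance k = (1,2), l = 1:
# Q_1(1,2) = P_1(1,2) - P_2(1,1)
q_direct = P((1, 2), (1, 2), 1)
q_via_p = P((1, 1), (1, 2), 1) - P((1, 1), (1, 1), 2)
print(p_d1, closed, q_direct, q_via_p)
```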
The next proposition asserts that $P_l({\boldsymbol{k}})$ satisfies recursive formulas with respect to $l$, which can be described by using the Schur MZVs of anti-hook shape
\begin{align}\begin{split}\label{eq:antihookzeta}
\zeta\begin{varray}
{\boldsymbol{l}} \\ {\boldsymbol{k}}
\end{varray}
={}&
\zeta
\begin{varray}
l_1,\ldots,l_s\\
k_1,\ldots,k_r
\end{varray}\\
\coloneqq{}&
\ytableausetup{boxsize=1.2em}
\zeta\left(\;\begin{ytableau}
\none & \none & \none & k_1 \\
\none & \none & \none & \svdots \\
l_1 & \cdots & l_s & k_r
\end{ytableau}\;\right)
=
\sum_{0<a_1<\cdots<a_r\ge b_s\ge\cdots\ge b_1>0}
\frac{1}{a_1^{k_1}\cdots a_r^{k_r}b_1^{l_1}\cdots b_s^{l_s}}
\end{split}
\end{align}
with ${\boldsymbol{l}}=(l_1,\ldots,l_s)$ being an index and ${\boldsymbol{k}}=(k_1,\ldots,k_r)$ a non-empty admissible index.
\begin{prop}
\label{prop:bwsumformula}
Let $d\ge 2$ and $l>0$.
\begin{enumerate}[label=\textup{(\roman*)}]
\item
If ${\boldsymbol{k}}=(k_1,\ldots,k_d)$ is admissible, then it holds that
\begin{equation}
\label{eq:general weighted sum adm}
P_{l}({\boldsymbol{k}})
=\sum_{i=1}^d\sum_{a_i=0}^{k_i-1} (-1)^{k_1+\cdots+k_{i-1}+a_i}
P_{k_i-1-a_i}(k_{i-1},\ldots,k_1,l+1)\, P_{a_i}(k_{i+1},\ldots,k_d)\,.
\end{equation}
\item
If ${\boldsymbol{k}}=(k_1,\ldots,k_d)$ is non-admissible (i.e., $k_d=1$), then it holds that
\begin{equation}
\label{eq:general weighted sum nonadm}
\begin{aligned}
&P_{l}({\boldsymbol{k}})
=\sum^{d}_{i=1}\sum_{a_{i}=0}^{k_{i}-1}
(-1)^{k_1+\cdots+k_{i-1}+a_{i}}P_{k_{i}-1-a_i}(k_{i-1},\ldots,k_{1},l+1)P_{a_i}(k_{i+1},\ldots,k_{d-1},1)\\
&\ \ +\sum_{i=1}^{d-1}(-1)^{l+d+k_i}
\sum_{\substack{(b_0,\ldots,b_{d-1})\in\mathbb{Z}_{\ge 1}^d\\
b_0\ge 2,\,b_i=2\\
b_0+\cdots+b_{d-1}=\wt({\boldsymbol{k}})+l+1}}(-1)^{b_0+b_1+\cdots+b_{i-1}}
\binom{b_0-1}{l}
\Biggl\{\prod^{d-1}_{\substack{j=1 \\ j\ne i}}\binom{b_j-1}{k_j-1}\Biggr\}\\
&\ \ \times
\sum_{j=i}^{d-1}\sum_{c_j=1}^{b_j-1}(-1)^{c_j+j+b_{j+1}+\cdots +b_{d-1}}
\zeta\begin{varray}
c_j,b_{j+1},\ldots,b_{d-1}\\
b_{i-1},\ldots,b_1,b_0
\end{varray} \zeta(b_{i+1},\ldots,b_{j-1},b_{j}-c_{j}+1)\,.
\end{aligned}
\end{equation}
\end{enumerate}
\end{prop}
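As a sanity check of \eqref{eq:general weighted sum adm}, consider the instance ${\boldsymbol{k}}=(1,2)$, $l=1$ (our choice): the right-hand side reduces to $P_0(2)P_0(2)-P_1(1,2)$, since the term with $i=2$, $a_2=1$ contains the factor $P_1(\varnothing)=0$, so the formula asserts $2P_1(1,2)=\zeta(2)^2$. A numerical sketch with truncated sums (cutoff and names are our choices):

```python
def zeta1(k, N=4000):
    """Truncated Riemann zeta value."""
    return sum(1.0 / m**k for m in range(1, N + 1))

def zeta2(k1, k2, N=4000):
    """Truncated double zeta value zeta(k1,k2)."""
    s, h = 0.0, 0.0
    for m in range(1, N + 1):
        s += h / m**k2     # h = sum_{m1 < m} 1/m1^{k1}
        h += 1.0 / m**k1
    return s

# P_1(1,2) = sum over admissible (w1,w2) with w1+w2 = 4 of
#            binom(w1-1,0) binom(w2-1,1) zeta(w1,w2) = 2 zeta(1,3) + zeta(2,2)
P_1_12 = 2 * zeta2(1, 3) + zeta2(2, 2)
lhs = P_1_12
rhs = zeta1(2) ** 2 - P_1_12   # i=1 term minus the (i=2, a=0) term
print(lhs, rhs)
```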
\begin{remark}\label{rem:plisbounded}
The number of terms in the expression \eqref{eq:general weighted sum nonadm}
is in fact bounded with respect to $l$ for the following reason: due to the binomial coefficient $\binom{b_0-1}{l}$,
the summation variable $b_0$ is restricted to $b_0\ge l+1$,
so that the remaining summation variables are bounded independently of $l$.
Also, by applying the formula \eqref{eq:zeta(bk) via SSD} to an anti-hook, we can expand each term
\[\zeta\begin{varray}
c_j,b_{j+1},\ldots,b_{d-1}\\
b_{i-1},\ldots,b_1,b_0
\end{varray}\]
into a sum of MZVs. The number of appearing MZVs is independent of $b_0,\ldots,b_{d-1}$ and $c_j$.
Finally, by using the harmonic product formula,
we can rewrite the products of MZVs into a sum of MZVs.
The resulting number of terms, after the application of the harmonic product formula, depends only on the original number of entries and so it is independent of $l$.
\end{remark}
To prove Proposition \ref{prop:bwsumformula}, we first recall the notion of 2-posets and the associated integrals introduced by the fourth-named author in \cite{YamamotoIntegral}.
\begin{defn}[{\cite[Definition~2.1]{YamamotoIntegral}}]\leavevmode
\begin{enumerate}[label=\textup{(\roman*)}]
\item A $2$-poset is a pair $(X,\delta_X)$,
where $X=(X,\le)$ is a finite partially ordered set (poset for short)
and $\delta_X$ is a map (called the label map of $X$) from $X$ to $\{0,1\}$.
We often omit $\delta_X$ and simply say ``a $2$-poset $X$''.
Moreover, a 2-poset $X$ is called admissible if $\delta_X(x)=0$ for all maximal elements $x$
and $\delta_X(x)=1$ for all minimal elements $x$.
\item For an admissible $2$-poset $X$, the associated integral $I(X)$ is defined by
\begin{equation}
\label{def:Yintegral}
I(X)=\int_{\Delta_X}\prod_{x\in X}\omega_{\delta_X(x)}(t_x)\,,
\end{equation}
where $\Delta_X=\left\{\left.(t_x)_{x}\in [0,1]^{X}\,\right|\,\text{$t_x<t_y$ if $x<y$}\right\}$ and
$\omega_0(t)=\frac{dt}{t}$ and $\omega_1(t)=\frac{dt}{1-t}$.
\end{enumerate}
\end{defn}
We depict a $2$-poset as a Hasse diagram in which an element $x$ with $\delta_X(x)=0$ (resp. $\delta_X(x)=1$) is represented by $\circ$ (resp. $\bullet$). For example, the diagram
\begin{align}
\label{ex:YI}
\begin{tikzpicture}[thick,x=10pt,y=10pt,baseline=(base)]
\coordinate (base) at (3,2);
\node[vertex_black] (B1) at (0,0) {};
\node[vertex_white] (W11) at (0,1) {};
\node[vertex_black] (B2) at (0,2) {};
\node[vertex_black] (B3) at (0,3) {};
\node[vertex_white] (W31) at (0,4) {};
\node[vertex_black] (B4) at (2,2) {};
\node[vertex_white] (W41) at (3,3) {};
\node[vertex_white] (W42) at (4,4) {};
\node[vertex_black] (B5) at (5,3) {};
\node[vertex_white] (W51) at (6,4) {};
\draw{
(B1)--(W11)--(B2)--(B3)--(W31)--(B4)--(W41)--(W42)--(B5)--(W51)
};
\end{tikzpicture}
\end{align}
represents the $2$-poset $X=\{x_1,\ldots,x_{10}\}$ with order $x_1<x_2<x_3<x_4<x_5>x_6<x_7<x_8>x_9<x_{10}$ and label $(\delta_X(x_1),\ldots,\delta_X(x_{10}))=(1,0,1,1,0,1,0,0,1,0)$.
In \cite{KanekoYamamoto2018}, it is shown that Schur MZVs of anti-hook shape have the following expression as the associated integral of a $2$-poset. This can be regarded as a simultaneous generalization of the integral expressions of MZVs and MZSVs.
\begin{thm}[{\cite[Theorem~4.1]{KanekoYamamoto2018}}]
\label{thm:integral series identity}
For an index ${\boldsymbol{l}}=(l_1,\ldots,l_s)$ and a non-empty admissible index ${\boldsymbol{k}}=(k_1,\ldots,k_r)$, we have
\begin{align}
\zeta\begin{varray}
{\boldsymbol{l}} \\ {\boldsymbol{k}}
\end{varray}
=
I
\left(
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=44pt]
\node[vertex_black] (Bi) at (0,0) {};
\node[vertex_white] (Wi-11) at (0,1) {};
\node[vertex_white] (Wi-12) at (0,3) {};
\node[vertex_black] (B2) at (0,5) {};
\node[vertex_white] (W11) at (0,6) {};
\node[vertex_white] (W12) at (0,8) {};
\node[vertex_black] (B1) at (0,9) {};
\node[vertex_white] (W01) at (0,10) {};
\node[vertex_white] (W02) at (0,12) {};
\node[vertex_black] (Bd) at (3,9) {};
\node[vertex_white] (Wd1) at (4,10) {};
\node[vertex_white] (Wd2) at (6,12) {};
\node[vertex_black] (Bi+2) at (10,9) {};
\node[vertex_white] (Wi+21) at (11,10) {};
\node[vertex_white] (Wi+22) at (13,12) {};
\draw{
(Bi)--(Wi-11) (B2)--(W11) (W12)--(B1)--(W01) (W02)--(Bd)--(Wd1)
(Wd2)--(7,10.5) (9,10.5)--(Bi+2)--(Wi+21) (Wi+22)
};
\draw[densely dotted] {
(Wi-11)--(Wi-12)--(B2) (W11)--(W12) (W01)--(W02) (Wd1)--(Wd2)
(7.3,10.5)--(8.7,10.5) (Wi+21)--(Wi+22)
};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,1)--(0,3) node [midway,xshift=-18pt,yshift=0pt]{\scriptsize $k_{1}-1$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,6)--(0,8) node [midway,xshift=-24pt,yshift=0pt]{\scriptsize $k_{r-1}-1$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,10)--(0,12) node [midway,xshift=-18pt,yshift=0pt]{\scriptsize $k_r-1$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=2pt]
(4,10)--(6,12) node [midway,xshift=-15pt,yshift=4pt]{\scriptsize $l_{s}-1$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=2pt]
(11,10)--(13,12) node [midway,xshift=-15pt,yshift=4pt]{\scriptsize $l_{1}-1$};
\end{tikzpicture}
\ \right)\,.
\end{align}
\end{thm}
For example, for the $2$-poset $X$ given by \eqref{ex:YI}, we have
$\zeta\begin{varray}
2,3 \\ 2,1,2
\end{varray}
=I(X)$.
In our proof of \cref{prop:bwsumformula}, we consider a kind of extension of the integral $I(X)$
to non-admissible $2$-posets $X$. This extension is given by using the notion of the ``admissible part'',
which we define below.
Let $\mathscr{X}$ be the set of isomorphism classes of $2$-posets,
and $\mathbb{Q}\mathscr{X}$ denote the $\mathbb{Q}$-vector space freely generated by this set.
We equip $\mathbb{Q}\mathscr{X}$ with a $\mathbb{Q}$-algebra structure by setting $[X]\cdot[Y]\coloneqq[X\sqcup Y]$.
If we let $\mathscr{X}^0\subset\mathscr{X}$ be the subset consisting of admissible $2$-posets,
its $\mathbb{Q}$-span $\mathbb{Q}\mathscr{X}^0$ becomes a $\mathbb{Q}$-subalgebra of $\mathbb{Q}\mathscr{X}$ and
the integral \eqref{def:Yintegral} defines a $\mathbb{Q}$-algebra homomorphism $I\colon\mathbb{Q}\mathscr{X}^0\to\mathbb{R}$.
Let $\mathscr{T}\subset\mathscr{X}$ be the subset of totally ordered $2$-posets.
Then a $\mathbb{Q}$-linear map $\mathbb{Q}\mathscr{X}\to\mathbb{Q}\mathscr{T}$, which we call the \emph{totally ordered expansion},
is defined by
\[[X]=[X,\le,\delta]\longmapsto [X]^\mathrm{tot}\coloneqq\sum_{\le'}[X,\le',\delta], \]
where $[X]=[X,\le,\delta]$ is the isomorphism class of any $2$-poset $X$
and $\le'$ runs over the total orders on the set $X$
which are refinements of the original partial order $\le$.
We have $[X]^\mathrm{tot}=[X_a^b]^\mathrm{tot}+[X_b^a]^\mathrm{tot}$ for any $2$-poset $X$ and non-comparable elements $a,b\in X$,
where $X^b_a$ denotes the 2-poset obtained from $X$ by adjoining the relation $a<b$.
Note also that the integration map $I\colon\mathbb{Q}\mathscr{X}^0\to\mathbb{R}$ factors through the totally ordered expansion,
i.e., we have $I([X])=I([X]^\mathrm{tot})$ for any $[X]\in\mathscr{X}^0$.
For any $2$-poset $X$, we define its \emph{admissible part} $[X]^\mathrm{adm}$ to be
the partial sum of the totally ordered expansion $[X]^\mathrm{tot}$ consisting of the admissible terms.
For example, if $X=
\begin{tikzpicture}[thick,x=10pt,y=10pt,baseline=10pt]
\node[vertex_white] (W1) at (-1,1) {};
\node[vertex_black] (B1) at (0,0) {};
\node[vertex_white] (W2) at (1,1) {};
\node[vertex_black] (B2) at (1,2) {};
\draw{
(W1)--(B1)--(W2)--(B2)
};
\end{tikzpicture}
\,$, we have
\[
[X]^\mathrm{tot}=
\left[\
\begin{tikzpicture}[thick,x=10pt,y=10pt,baseline=10pt]
\node[vertex_black] (B1) at (0,0) {};
\node[vertex_white] (W1) at (0,1) {};
\node[vertex_black] (B2) at (0,2) {};
\node[vertex_white] (W2) at (0,3) {};
\draw{
(B1)--(W1)--(B2)--(W2)
};
\end{tikzpicture}
\ \right]
+2
\left[\
\begin{tikzpicture}[thick,x=10pt,y=10pt,baseline=10pt]
\node[vertex_black] (B1) at (0,0) {};
\node[vertex_white] (W1) at (0,1) {};
\node[vertex_white] (W2) at (0,2) {};
\node[vertex_black] (B2) at (0,3) {};
\draw{
(B1)--(W1)--(W2)--(B2)
};
\end{tikzpicture}
\ \right]\, \text{ and }\
[X]^{\mathrm{adm}}=
\left[\
\begin{tikzpicture}[thick,x=10pt,y=10pt,baseline=10pt]
\node[vertex_black] (B1) at (0,0) {};
\node[vertex_white] (W1) at (0,1) {};
\node[vertex_black] (B2) at (0,2) {};
\node[vertex_white] (W2) at (0,3) {};
\draw{
(B1)--(W1)--(B2)--(W2)
};
\end{tikzpicture}
\ \right]\,. \]
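This expansion can be reproduced by enumerating linear extensions. The sketch below (element names are ours) lists the bottom-to-top label words of all total orders refining $\le$, and then extracts the admissible part (first label $1$, last label $0$):

```python
from itertools import permutations
from collections import Counter

# The 2-poset X from the example: elements B1, W1, W2, B2 with
# B1 < W1, B1 < W2 < B2; label 1 on bullets (B), 0 on circles (W).
elems = ['B1', 'W1', 'W2', 'B2']
label = {'B1': 1, 'W1': 0, 'W2': 0, 'B2': 1}
rels = [('B1', 'W1'), ('B1', 'W2'), ('W2', 'B2')]

tot = Counter()
for perm in permutations(elems):
    pos = {x: i for i, x in enumerate(perm)}
    if all(pos[a] < pos[b] for a, b in rels):      # linear extension of <=
        tot[tuple(label[x] for x in perm)] += 1    # bottom-to-top label word

# admissible chain: minimal element labeled 1, maximal element labeled 0
adm = Counter({w: c for w, c in tot.items() if w[0] == 1 and w[-1] == 0})
print(tot, adm)
```

The multiplicities reproduce the coefficients $1$ and $2$ in the displayed expansion of $[X]^\mathrm{tot}$.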
Then the $\mathbb{Q}$-linear map
\[\mathbb{Q}\mathscr{X}\longrightarrow\mathbb{R};\ [X]\longmapsto I([X]^\mathrm{adm})\]
is an extension of $I\colon\mathbb{Q}\mathscr{X}^0\to\mathbb{R}$.
Notice that this map is \emph{not} an algebra homomorphism,
unlike the map defined by using the shuffle regularization.
In the following computations, we omit the symbol `$\mathrm{tot}$';
for example, we write $[X]=[X_a^b]+[X_b^a]$ instead of $[X]^\mathrm{tot}=[X_a^b]^\mathrm{tot}+[X_b^a]^\mathrm{tot}$.
In other words, we compute in the quotient space $\mathbb{Q}\mathscr{X}/\Ker([X]\mapsto[X]^\mathrm{tot})$.
\begin{proof}
[Proof of Proposition~\ref{prop:bwsumformula}]
Set $m_i=k_i-1$ for $1\le i\le d$.
Define
\[
X=X_{l}({\boldsymbol{k}})=
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=44pt]
\node[vertex_black] (B1) at (0,0) {};
\node[vertex_white] (W01) at (-2,1) {};
\node[vertex_white] (W02) at (-2,4) {};
\node[vertex_white] (W11) at (2,1) {};
\node[vertex_white] (W12) at (2,3) {};
\node[vertex_black] (B2) at (2,4) {};
\node[vertex_white] (W21) at (2,5) {};
\node[vertex_white] (W22) at (2,7) {};
\node[vertex_black] (Bd) at (2,9) {};
\node[vertex_white] (Wd1) at (2,10) {};
\node[vertex_white] (Wd2) at (2,12) {};
\draw{
(B1)--(W01) (B1)--(W11) (W12)--(B2)--(W21) (Bd)--(Wd1)
};
\draw[densely dotted] {
(W01)--(W02) (W11)--(W12) (W21)--(W22)--(Bd) (Wd1)--(Wd2)
};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,1)--(-2,4) node [midway,xshift=-8pt,yshift=0pt]{\scriptsize $l$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(2,3)--(2,1) node [midway,xshift=12pt,yshift=0pt]{\scriptsize $m_1$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(2,7)--(2,5) node [midway,xshift=12pt,yshift=0pt]{\scriptsize $m_2$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(2,12)--(2,10) node [midway,xshift=12pt,yshift=0pt]{\scriptsize $m_d$};
\end{tikzpicture}
\,.
\]
Then we have
\begin{equation}
\label{for:P is I of X}
P_l({\boldsymbol{k}})=I([X]^{\mathrm{adm}})\,,
\end{equation}
which is the key ingredient of the proof.
On the other hand, we see that
\begin{align}
[X]
&=\sum_{a=0}^{m_1}(-1)^a
\left[
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=30pt]
\node[vertex_black] (B1) at (0,0) {};
\node[vertex_white] (W01) at (-2,1) {};
\node[vertex_white] (W02) at (-2,4) {};
\node[vertex_white] (W11) at (2,1) {};
\node[vertex_white] (W12) at (2,3) {};
\node[vertex_black] (B2) at (11,0) {};
\node[vertex_white] (W1) at (9,1) {};
\node[vertex_white] (W2) at (9,3) {};
\node[vertex_white] (W21) at (13,1) {};
\node[vertex_white] (W22) at (13,3) {};
\node[vertex_black] (Bd) at (13,5) {};
\node[vertex_white] (Wd1) at (13,6) {};
\node[vertex_white] (Wd2) at (13,8) {};
\draw{
(B1)--(W01) (B1)--(W11) (W1)--(B2)--(W21) (Bd)--(Wd1)
};
\draw[densely dotted] {
(W01)--(W02) (W11)--(W12) (W1)--(W2) (W21)--(W22)--(Bd) (Wd1)--(Wd2)
};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,1)--(-2,4) node [midway,xshift=-8pt,yshift=0pt]{\scriptsize $l$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(2,3)--(2,1) node [midway,xshift=18pt,yshift=0pt]{\scriptsize $m_1\!-\!a$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(9,1)--(9,3) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $a$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(13,3)--(13,1) node [midway,xshift=12pt,yshift=0pt]{\scriptsize $m_2$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(13,8)--(13,6) node [midway,xshift=12pt,yshift=0pt]{\scriptsize $m_d$};
\end{tikzpicture}
\ \right]
+(-1)^{k_1}
\left[
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=30pt]
\node[vertex_black] (B2) at (0,0) {};
\node[vertex_white] (W11) at (-2,1) {};
\node[vertex_white] (W12) at (-2,3) {};
\node[vertex_black] (B1) at (-2,4) {};
\node[vertex_white] (W01) at (-2,5) {};
\node[vertex_white] (W02) at (-2,8) {};
\node[vertex_white] (W21) at (2,1) {};
\node[vertex_white] (W22) at (2,3) {};
\node[vertex_black] (Bd) at (2,5) {};
\node[vertex_white] (Wd1) at (2,6) {};
\node[vertex_white] (Wd2) at (2,8) {};
\draw{
(B2)--(W11) (W12)--(B1)--(W01) (B2)--(W21) (Bd)--(Wd1)
};
\draw[densely dotted] {
(W11)--(W12) (W01)--(W02) (W21)--(W22)--(Bd) (Wd1)--(Wd2)
};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=-0pt]
(-2,1)--(-2,3) node [midway,xshift=-11pt,yshift=0pt]{\scriptsize $m_1$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=-0pt]
(-2,5)--(-2,8) node [midway,xshift=-8pt,yshift=0pt]{\scriptsize $l$};
\draw[decorate,decoration={brace,amplitude=5},xshift=2pt,yshift=0pt]
(2,3)--(2,1) node [midway,xshift=12pt,yshift=0pt]{\scriptsize $m_2$};
\draw[decorate,decoration={brace,amplitude=5},xshift=2pt,yshift=0pt]
(2,8)--(2,6) node [midway,xshift=12pt,yshift=0pt]{\scriptsize $m_d$};
\end{tikzpicture}
\ \right]\,.
\end{align}
By repeating similar computations, we have
\begin{align}
[X]
&=\sum_{i=1}^{d-1}\sum_{a_i=0}^{k_i-1}(-1)^{k_1+\cdots+k_{i-1}+a_i}
[X_{i,a_i}]
+(-1)^{k_1+\cdots+k_{d-1}}[X_{d,0}]\,,
\end{align}
where
\[
X_{i,a}
=
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=50pt]
\node[vertex_black] (Bi) at (0,0) {};
\node[vertex_white] (Wi-11) at (-2,1) {};
\node[vertex_white] (Wi-12) at (-2,3) {};
\node[vertex_black] (B2) at (-2,5) {};
\node[vertex_white] (W11) at (-2,6) {};
\node[vertex_white] (W12) at (-2,8) {};
\node[vertex_black] (B1) at (-2,9) {};
\node[vertex_white] (W01) at (-2,10) {};
\node[vertex_white] (W02) at (-2,13) {};
\node[vertex_white] (Wi1) at (2,1) {};
\node[vertex_white] (Wi2) at (2,3) {};
\node[vertex_black] (Bi+1) at (11,0) {};
\node[vertex_white] (W1) at (9,1) {};
\node[vertex_white] (W2) at (9,3) {};
\node[vertex_white] (Wi+11) at (13,1) {};
\node[vertex_white] (Wi+12) at (13,3) {};
\node[vertex_black] (Bd) at (13,5) {};
\node[vertex_white] (Wd1) at (13,6) {};
\node[vertex_white] (Wd2) at (13,8) {};
\draw{
(Bi)--(Wi-11) (B2)--(W11) (W12)--(B1)--(W01) (Bi)--(Wi1)
(Bi+1)--(W1) (Bi+1)--(Wi+11) (Bd)--(Wd1)
};
\draw[densely dotted] {
(Wi-11)--(Wi-12)--(B2) (W11)--(W12) (W01)--(W02) (Wi1)--(Wi2)
(W1)--(W2) (Wi+11)--(Wi+12)--(Bd) (Wd1)--(Wd2)
};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,10)--(-2,13) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $l$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,6)--(-2,8) node [midway,xshift=-12pt,yshift=0pt]{\scriptsize $m_1$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,1)--(-2,3) node [midway,xshift=-18pt,yshift=0pt]{\scriptsize $m_{i-1}$};
\draw[decorate,decoration={brace,amplitude=5},xshift=2pt,yshift=0pt]
(2,3)--(2,1) node [midway,xshift=20pt,yshift=0pt]{\scriptsize $m_i\!-\!a$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(9,1)--(9,3) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $a$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(13,3)--(13,1) node [midway,xshift=16pt,yshift=0pt]{\scriptsize $m_{i+1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(13,8)--(13,6) node [midway,xshift=12pt,yshift=0pt]{\scriptsize $m_d$};
\end{tikzpicture}
\,, \quad
X_{d,0}
=
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=50pt]
\node[vertex_black] (Bd) at (0,0) {};
\node[vertex_white] (Wd-11) at (-2,1) {};
\node[vertex_white] (Wd-12) at (-2,3) {};
\node[vertex_black] (B2) at (-2,5) {};
\node[vertex_white] (W11) at (-2,6) {};
\node[vertex_white] (W12) at (-2,8) {};
\node[vertex_black] (B1) at (-2,9) {};
\node[vertex_white] (W01) at (-2,10) {};
\node[vertex_white] (W02) at (-2,13) {};
\node[vertex_white] (Wd1) at (2,1) {};
\node[vertex_white] (Wd2) at (2,3) {};
\draw{
(Bd)--(Wd-11) (B2)--(W11) (W12)--(B1)--(W01) (Bd)--(Wd1)
};
\draw[densely dotted] {
(Wd-11)--(Wd-12)--(B2) (W11)--(W12) (W01)--(W02) (Wd1)--(Wd2)
};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,10)--(-2,13) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $l$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,6)--(-2,8) node [midway,xshift=-12pt,yshift=0pt]{\scriptsize $m_1$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,1)--(-2,3) node [midway,xshift=-18pt,yshift=0pt]{\scriptsize $m_{d-1}$};
\draw[decorate,decoration={brace,amplitude=5},xshift=2pt,yshift=0pt]
(2,3)--(2,1) node [midway,xshift=12pt,yshift=0pt]{\scriptsize $m_d$};
\end{tikzpicture}
\,,
\]
that is,
\begin{align*}
X_{i,a}
&=X_{k_i-1-a}(k_{i-1},\ldots,k_1,l+1)\sqcup X_{a}(k_{i+1},\ldots,k_d)\,, \\
X_{d,0}
&=X_{k_d-1}(k_{d-1},\ldots,k_1,l+1)\,.
\end{align*}
Notice that $X_{d,0}$ is always admissible because $l>0$.
By taking the admissible parts and forming the integrals associated with these 2-posets, we have
\begin{equation}
\label{for:Pl}
\begin{aligned}
P_l({\boldsymbol{k}})
&=\sum_{i=1}^{d-1}\sum_{a_i=0}^{k_i-1}(-1)^{k_1+\cdots+k_{i-1}+a_i}
I([X_{i,a_i}]^{\mathrm{adm}})\\
&\quad +(-1)^{k_1+\cdots+k_{d-1}}P_{k_d-1}(k_{d-1},\ldots,k_1,l+1)\,.
\end{aligned}
\end{equation}
If $m_d>0$ (i.e., ${\boldsymbol{k}}$ is admissible), then every $X_{i,a}$ is also admissible, and
the formula \eqref{eq:general weighted sum adm} is immediately obtained from \eqref{for:P is I of X} and \eqref{for:Pl}.
If $m_d=0$ (i.e., ${\boldsymbol{k}}$ is non-admissible),
noticing that $X_{i,a}$ is admissible if and only if
$i=d-1$ and $a>0$, we have\footnote{For a condition $P$, we let $\mathbbm{1}_{P}$ denote the indicator function on $P$, that is, $\mathbbm{1}_{P}=1$ if $P$ is satisfied and $0$ otherwise.
We also put $\overline{\mathbbm{1}}_{P}=1-\mathbbm{1}_{P}$.
A condition written on multiple lines
stands for the conjunction of all lines.} from \eqref{for:Pl}
\begin{equation}
\begin{aligned}
&P_l({\boldsymbol{k}})
=\sum_{i=1}^{d-1}\sum_{a_i=0}^{k_i-1}
\overline{\mathbbm{1}}_{\substack{i=d-1\\ a_i\ne 0}}
(-1)^{k_1+\cdots+k_{i-1}+a_i}
I([X_{i,a_i}]^{\mathrm{adm}})\\
&\quad +\sum^{d}_{i=d-1}\sum_{a_{i}=0}^{k_{i}-1}
\overline{\mathbbm{1}}_{\substack{i=d-1\\ a_i=0}}
(-1)^{k_1+\cdots+k_{i-1}+a_{i}}P_{k_{i}-1-a_i}(k_{i-1},\ldots,k_{1},l+1)P_{a_i}(k_{i+1},\ldots,k_{d-1},1)\,.
\end{aligned}
\end{equation}
Now, we compute $I([X_{i,a}]^{\mathrm{adm}})$.
Observe that
\[
[X_{i,a}]^\mathrm{adm}
=[X_{i,a}^{(1)}]+[X_{i,a}^{(2)}]+[X_{i,a}^{(3)}]\,,
\]
where
\begin{align*}
X_{i,a}^{(1)}
&=
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=50pt]
\node[vertex_black] (Bi) at (0,0) {};
\node[vertex_white] (Wi-11) at (-2,1) {};
\node[vertex_white] (Wi-12) at (-2,3) {};
\node[vertex_black] (B2) at (-2,5) {};
\node[vertex_white] (W11) at (-2,6) {};
\node[vertex_white] (W12) at (-2,8) {};
\node[vertex_black] (B1) at (-2,9) {};
\node[vertex_white] (W01) at (-2,10) {};
\node[vertex_white] (W02) at (-2,13) {};
\node[vertex_white] (Wi1) at (2,1) {};
\node[vertex_white] (Wi2) at (2,3) {};
\node[vertex_black] (Bi+1) at (11,0) {};
\node[vertex_white] (W1) at (9,1) {};
\node[vertex_white] (W2) at (9,3) {};
\node[vertex_white] (Wi+11) at (13,1) {};
\node[vertex_white] (Wi+12) at (13,3) {};
\node[vertex_black] (Bd-1) at (13,5) {};
\node[vertex_white] (Wd-11) at (13,6) {};
\node[vertex_white] (Wd-12) at (13,8) {};
\node[vertex_black] (Bd) at (13,9) {};
\draw{
(Bi)--(Wi-11) (B2)--(W11) (W12)--(B1)--(W01) (Bi)--(Wi1) (Wi2)--(W02)
(Bi+1)--(W1) (Bi+1)--(Wi+11) (Bd-1)--(Wd-11) (Wd-12)--(Bd)--(W2) (Bd)--(W02)
};
\draw[densely dotted] {
(Wi-11)--(Wi-12)--(B2) (W11)--(W12) (W01)--(W02) (Wi1)--(Wi2)
(W1)--(W2) (Wi+11)--(Wi+12)--(Bd-1) (Wd-11)--(Wd-12)
};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,10)--(-2,13) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $l$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,6)--(-2,8) node [midway,xshift=-12pt,yshift=0pt]{\scriptsize $m_1$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,1)--(-2,3) node [midway,xshift=-18pt,yshift=0pt]{\scriptsize $m_{i-1}$};
\draw[decorate,decoration={brace,amplitude=5},xshift=2pt,yshift=0pt]
(2,3)--(2,1) node [midway,xshift=20pt,yshift=0pt]{\scriptsize $m_i-a$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(9,1)--(9,3) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $a$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(13,3)--(13,1) node [midway,xshift=16pt,yshift=0pt]{\scriptsize $m_{i+1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(13,8)--(13,6) node [midway,xshift=16pt,yshift=0pt]{\scriptsize $m_{d-1}$};
\end{tikzpicture}
\,,\\
X_{i,a}^{(2)}
&=
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=50pt]
\node[vertex_black] (Bi) at (0,0) {};
\node[vertex_white] (Wi-11) at (-2,1) {};
\node[vertex_white] (Wi-12) at (-2,3) {};
\node[vertex_black] (B2) at (-2,5) {};
\node[vertex_white] (W11) at (-2,6) {};
\node[vertex_white] (W12) at (-2,8) {};
\node[vertex_black] (B1) at (-2,9) {};
\node[vertex_white] (W01) at (-2,10) {};
\node[vertex_white] (W02) at (-2,13) {};
\node[vertex_white] (Wi1) at (2,1) {};
\node[vertex_white] (Wi2) at (2,14) {};
\node[vertex_black] (Bi+1) at (11,0) {};
\node[vertex_white] (W1) at (9,1) {};
\node[vertex_white] (W2) at (9,3) {};
\node[vertex_white] (Wi+11) at (13,1) {};
\node[vertex_white] (Wi+12) at (13,3) {};
\node[vertex_black] (Bd-1) at (13,5) {};
\node[vertex_white] (Wd-11) at (13,6) {};
\node[vertex_white] (Wd-12) at (13,8) {};
\node[vertex_black] (Bd) at (13,9) {};
\draw{
(Bi)--(Wi-11) (B2)--(W11) (W12)--(B1)--(W01) (Bi)--(Wi1) (Wi2)--(W02)
(Bi+1)--(W1) (Bi+1)--(Wi+11) (Bd-1)--(Wd-11) (Wd-12)--(Bd)--(W2) (Bd)--(Wi2)
};
\draw[densely dotted] {
(Wi-11)--(Wi-12)--(B2) (W11)--(W12) (W01)--(W02) (Wi1)--(Wi2)
(W1)--(W2) (Wi+11)--(Wi+12)--(Bd-1) (Wd-11)--(Wd-12)
};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,10)--(-2,13) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $l$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,6)--(-2,8) node [midway,xshift=-12pt,yshift=0pt]{\scriptsize $m_1$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,1)--(-2,3) node [midway,xshift=-18pt,yshift=0pt]{\scriptsize $m_{i-1}$};
\draw[decorate,decoration={brace,amplitude=5},xshift=2pt,yshift=0pt]
(2,14)--(2,1) node [midway,xshift=20pt,yshift=0pt]{\scriptsize $m_i-a$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(9,1)--(9,3) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $a$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(13,3)--(13,1) node [midway,xshift=16pt,yshift=0pt]{\scriptsize $m_{i+1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(13,8)--(13,6) node [midway,xshift=16pt,yshift=0pt]{\scriptsize $m_{d-1}$};
\end{tikzpicture}
\,,
\intertext{and}
X_{i,a}^{(3)}
&=
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=50pt]
\node[vertex_black] (Bi) at (0,0) {};
\node[vertex_white] (Wi-11) at (-2,1) {};
\node[vertex_white] (Wi-12) at (-2,3) {};
\node[vertex_black] (B2) at (-2,5) {};
\node[vertex_white] (W11) at (-2,6) {};
\node[vertex_white] (W12) at (-2,8) {};
\node[vertex_black] (B1) at (-2,9) {};
\node[vertex_white] (W01) at (-2,10) {};
\node[vertex_white] (W02) at (-2,13) {};
\node[vertex_white] (Wi1) at (2,1) {};
\node[vertex_white] (Wi2) at (2,3) {};
\node[vertex_black] (Bi+1) at (11,0) {};
\node[vertex_white] (W1) at (9,1) {};
\node[vertex_white] (W2) at (9,11) {};
\node[vertex_white] (Wi+11) at (13,1) {};
\node[vertex_white] (Wi+12) at (13,3) {};
\node[vertex_black] (Bd-1) at (13,5) {};
\node[vertex_white] (Wd-11) at (13,6) {};
\node[vertex_white] (Wd-12) at (13,8) {};
\node[vertex_black] (Bd) at (13,9) {};
\draw{
(Bi)--(Wi-11) (B2)--(W11) (W12)--(B1)--(W01) (Bi)--(Wi1)
(Bi+1)--(W1) (Bi+1)--(Wi+11) (Bd-1)--(Wd-11) (Wd-12)--(Bd)--(W2)
};
\draw[densely dotted] {
(Wi-11)--(Wi-12)--(B2) (W11)--(W12) (W01)--(W02) (Wi1)--(Wi2)
(W1)--(W2) (Wi+11)--(Wi+12)--(Bd-1) (Wd-11)--(Wd-12)
};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,10)--(-2,13) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $l$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,6)--(-2,8) node [midway,xshift=-12pt,yshift=0pt]{\scriptsize $m_1$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(-2,1)--(-2,3) node [midway,xshift=-18pt,yshift=0pt]{\scriptsize $m_{i-1}$};
\draw[decorate,decoration={brace,amplitude=5},xshift=2pt,yshift=0pt]
(2,3)--(2,1) node [midway,xshift=20pt,yshift=0pt]{\scriptsize $m_i-a$};
\draw[decorate,decoration={brace,amplitude=5},xshift=-2pt,yshift=0pt]
(9,1)--(9,11) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $a$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(13,3)--(13,1) node [midway,xshift=16pt,yshift=0pt]{\scriptsize $m_{i+1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=2pt,yshift=0pt]
(13,8)--(13,6) node [midway,xshift=16pt,yshift=0pt]{\scriptsize $m_{d-1}$};
\end{tikzpicture}
\,.
\end{align*}
Here we understand that $X_{i,m_i}^{(2)}=X_{i,0}^{(3)}=0$.
It is easy to see from \eqref{for:P is I of X} again that
\[
I([X_{i,a}^{(3)}])
=P_{k_i-1-a}(k_{i-1},\ldots,k_1,l+1)\,P_a(k_{i+1},\ldots,k_{d-1},1)
\]
and hence
\begin{equation}
\label{for:Plnonad2}
\begin{aligned}
& P_l({\boldsymbol{k}})
=\sum_{i=1}^{d-1}\sum_{a_i=0}^{k_i-1}\overline{\mathbbm{1}}_{\substack{i=d-1\\ a_i\ne 0}}(-1)^{k_1+\cdots+k_{i-1}+a_i}
I([X^{(1)}_{i,a_i}]+[X^{(2)}_{i,a_i}])\\
&\quad +\sum^{d}_{i=1}\sum_{a_{i}=0}^{k_{i}-1}
(-1)^{k_1+\cdots+k_{i-1}+a_{i}}P_{k_{i}-1-a_i}(k_{i-1},\ldots,k_{1},l+1)P_{a_i}(k_{i+1},\ldots,k_{d-1},1)\,.
\end{aligned}
\end{equation}
Moreover, we see that $[X_{i,a}^{(1)}]+[X_{i,a}^{(2)}]$ can be written in terms of 2-posets
\begin{equation}
\label{def:Y}
Y_i(p_0,\ldots,\check{p}_i,\ldots,p_{d-1})
=
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=44pt]
\node[vertex_black] (Bi) at (0,0) {};
\node[vertex_white] (Wi-11) at (0,1) {};
\node[vertex_white] (Wi-12) at (0,3) {};
\node[vertex_black] (B2) at (0,5) {};
\node[vertex_white] (W11) at (0,6) {};
\node[vertex_white] (W12) at (0,8) {};
\node[vertex_black] (B1) at (0,9) {};
\node[vertex_white] (W01) at (0,10) {};
\node[vertex_white] (W02) at (0,12) {};
\node[vertex_black] (Bi+1) at (5,0) {};
\node[vertex_white] (Wi+11) at (5,1) {};
\node[vertex_white] (Wi+12) at (5,3) {};
\node[vertex_black] (Bd-1) at (5,5) {};
\node[vertex_white] (Wd-11) at (5,6) {};
\node[vertex_white] (Wd-12) at (5,8) {};
\node[vertex_black] (Bd) at (5,9) {};
\draw{
(Bi)--(Wi-11) (B2)--(W11) (W12)--(B1)--(W01)
(Bi+1)--(Wi+11) (Bd-1)--(Wd-11) (Wd-12)--(Bd)--(W02)
};
\draw[densely dotted] {
(Wi-11)--(Wi-12)--(B2) (W11)--(W12) (W01)--(W02)
(Wi+11)--(Wi+12)--(Bd-1) (Wd-11)--(Wd-12)
};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,1)--(0,3) node [midway,xshift=-14pt,yshift=0pt]{\scriptsize $p_{i-1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,6)--(0,8) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $p_1$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,10)--(0,12) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $p_0$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(5,1)--(5,3) node [midway,xshift=-14pt,yshift=0pt]{\scriptsize $p_{i+1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(5,6)--(5,8) node [midway,xshift=-14pt,yshift=0pt]{\scriptsize $p_{d-1}$};
\end{tikzpicture}
\end{equation}
for various values $p_0,\ldots,\check{p}_i,\ldots,p_{d-1}$
($\check{p}_i$ means that $p_i$ is skipped).
Actually, setting $m_0=l$, we have
\begin{align}
[X^{(1)}_{i,a}]
&=\sum_{\substack{{\boldsymbol{b}}_i=(b_0,\ldots,\check{b}_{i},\ldots,b_{d-1})\in\mathbb{Z}^{d-1}_{\ge 0} \\
\wt(b_0,\ldots,b_{i-1})=m_i-a_i \\
\wt(b_{i+1},\ldots,b_{d-1})=a_i}}
\binom{m_0+b_0-1}{m_0-1}\prod^{d-1}_{\substack{j=1 \\ j\ne i}}\binom{m_j+b_j}{m_j} [Y_i({\boldsymbol{b}}_i+{\boldsymbol{m}}_i)]\,,\\
[X^{(2)}_{i,a}]
&=\sum_{\substack{{\boldsymbol{b}}_i=(b_0,\ldots,\check{b}_{i},\ldots,b_{d-1})\in\mathbb{Z}^{d-1}_{\ge 0} \\ \wt(b_0,\ldots,b_{i-1})=m_i-a_i \\
\wt(b_{i+1},\ldots,b_{d-1})=a_i}}
\binom{m_0+b_0-1}{m_0}\prod^{d-1}_{\substack{j=1 \\ j\ne i}}\binom{m_j+b_j}{m_j} [Y_i({\boldsymbol{b}}_i+{\boldsymbol{m}}_i)]\,,
\end{align}
where ${\boldsymbol{m}}_i=(m_0,\ldots,\check{m}_i,\ldots,m_{d-1})$.
Hence, using the identity
$\binom{s-1}{t-1}+\binom{s-1}{t}=\binom{s}{t}$ for $s,t\ge 0$,
we have
\begin{equation}
\label{for:X1X2}
\begin{aligned}
[X^{(1)}_{i,a}]+[X^{(2)}_{i,a}]
&=\sum_{\substack{{\boldsymbol{b}}_i=(b_0,\ldots,\check{b}_{i},\ldots,b_{d-1})\in\mathbb{Z}^{d-1}_{\ge 0} \\ \wt(b_0,\ldots,b_{i-1})=m_i-a_i \\
\wt(b_{i+1},\ldots,b_{d-1})=a_i}}
\prod^{d-1}_{\substack{j=0 \\ j\ne i}}\binom{m_j+b_j}{m_j} [Y_i({\boldsymbol{b}}_i+{\boldsymbol{m}}_i)]\,.
\end{aligned}
\end{equation}
Substituting this into \eqref{for:Plnonad2} and changing the order of summations, we see that
\begin{align*}
& P_l({\boldsymbol{k}})
=\sum_{i=1}^{d-1}
(-1)^{l+k_i}
\sum_{\substack{{\boldsymbol{b}}_i=(b_0,\ldots,\check{b}_{i},\ldots,b_{d-1})\in\mathbb{Z}^{d-1}_{\ge 1}\\ b_0\ge 2 \\ \wt({\boldsymbol{b}}_i)=\wt({\boldsymbol{k}})+l-1}}
(-1)^{b_0+\cdots+b_{i-1}}
\prod^{d-1}_{\substack{j=0 \\ j\ne i}}\binom{b_j-1}{m_j}
I[Y_i({\boldsymbol{b}}_i-\{1\}^{d-1})]\\
&\quad +\sum^{d}_{i=1}\sum_{a_{i}=0}^{k_{i}-1}
(-1)^{k_1+\cdots+k_{i-1}+a_{i}}P_{k_{i}-1-a_i}(k_{i-1},\ldots,k_{1},l+1)P_{a_i}(k_{i+1},\ldots,k_{d-1},1)\,.
\end{align*}
Therefore, one obtains \eqref{eq:general weighted sum nonadm} by employing the expression of $I([Y_i({\boldsymbol{b}}_i-\{1\}^{d-1})])$ given in Lemma \ref{lem:I of Y} below.
This completes the proof.
\end{proof}
\begin{lemma}
\label{lem:I of Y}
For $1\le i\le d-1$ and ${\boldsymbol{p}}=(p_0,\ldots,\check{p}_i,\ldots,p_{d-1})\in\mathbb{Z}_{\ge 0}^{d-1}$
with $p_0>0$, we have
\begin{equation}
\begin{aligned}
I([Y_i({\boldsymbol{p}})])
&=\sum_{j=i}^{d-1}\sum_{c_j=0}^{p_j-1}
(-1)^{c_j+p_{j+1}+\cdots+p_{d-1}}
\zeta\begin{varray}
c_j+1,p_{j+1}+1,\cdots,p_{d-1}+1\\
p_{i-1}+1,\cdots,p_1+1,p_0+1
\end{varray} \\
&\quad \times \zeta(p_{i+1}+1,\ldots,p_{j-1}+1,p_{j}-c_{j}+1)\,,
\end{aligned}
\end{equation}
where we set $p_i=1$.
\end{lemma}
\begin{proof}
We compute as follows:
\begin{align}
[Y_i({\boldsymbol{p}})]
&=\sum_{c_{d-1}=0}^{p_{d-1}-1}(-1)^{c_{d-1}}
\left[
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=44pt]
\node[vertex_black] (Bi) at (0,0) {};
\node[vertex_white] (Wi-11) at (0,1) {};
\node[vertex_white] (Wi-12) at (0,3) {};
\node[vertex_black] (B2) at (0,5) {};
\node[vertex_white] (W11) at (0,6) {};
\node[vertex_white] (W12) at (0,8) {};
\node[vertex_black] (B1) at (0,9) {};
\node[vertex_white] (W01) at (0,10) {};
\node[vertex_white] (W02) at (0,12) {};
\node[vertex_black] (Bd) at (2,9) {};
\node[vertex_white] (Wd1) at (3,10) {};
\node[vertex_white] (Wd2) at (5,12) {};
\node[vertex_black] (Bi+1) at (8,0) {};
\node[vertex_white] (Wi+11) at (8,1) {};
\node[vertex_white] (Wi+12) at (8,3) {};
\node[vertex_black] (Bd-1) at (8,5) {};
\node[vertex_white] (Wd-11) at (8,6) {};
\node[vertex_white] (Wd-12) at (8,8) {};
\draw{
(Bi)--(Wi-11) (B2)--(W11) (W12)--(B1)--(W01) (W02)--(Bd)--(Wd1)
(Bi+1)--(Wi+11) (Bd-1)--(Wd-11)
};
\draw[densely dotted] {
(Wi-11)--(Wi-12)--(B2) (W11)--(W12) (W01)--(W02) (Wd1)--(Wd2)
(Wi+11)--(Wi+12)--(Bd-1) (Wd-11)--(Wd-12)
};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,1)--(0,3) node [midway,xshift=-14pt,yshift=0pt]{\scriptsize $p_{i-1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,6)--(0,8) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $p_1$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,10)--(0,12) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $p_0$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=2pt]
(3,10)--(5,12) node [midway,xshift=-14pt,yshift=4pt]{\scriptsize $c_{d-1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(8,1)--(8,3) node [midway,xshift=-14pt,yshift=0pt]{\scriptsize $p_{i+1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(8,6)--(8,8) node [midway,xshift=-28pt,yshift=0pt]{\scriptsize $p_{d-1}-c_{d-1}$};
\end{tikzpicture}
\ \right]
+(-1)^{p_{d-1}}
\left[
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=44pt]
\node[vertex_black] (Bi) at (0,0) {};
\node[vertex_white] (Wi-11) at (0,1) {};
\node[vertex_white] (Wi-12) at (0,3) {};
\node[vertex_black] (B2) at (0,5) {};
\node[vertex_white] (W11) at (0,6) {};
\node[vertex_white] (W12) at (0,8) {};
\node[vertex_black] (B1) at (0,9) {};
\node[vertex_white] (W01) at (0,10) {};
\node[vertex_white] (W02) at (0,12) {};
\node[vertex_black] (Bd) at (2,9) {};
\node[vertex_white] (Wd1) at (3,10) {};
\node[vertex_white] (Wd2) at (5,12) {};
\node[vertex_black] (Bi+1) at (7,0) {};
\node[vertex_white] (Wi+11) at (7,1) {};
\node[vertex_white] (Wi+12) at (7,3) {};
\node[vertex_black] (Bd-2) at (7,5) {};
\node[vertex_white] (Wd-21) at (7,6) {};
\node[vertex_white] (Wd-22) at (7,8) {};
\node[vertex_black] (Bd-1) at (7,9) {};
\draw{
(Bi)--(Wi-11) (B2)--(W11) (W12)--(B1)--(W01) (W02)--(Bd)--(Wd1)
(Bi+1)--(Wi+11) (Bd-2)--(Wd-21) (Wd-22)--(Bd-1)--(Wd2)
};
\draw[densely dotted] {
(Wi-11)--(Wi-12)--(B2) (W11)--(W12) (W01)--(W02) (Wd1)--(Wd2)
(Wi+11)--(Wi+12)--(Bd-1)
};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,1)--(0,3) node [midway,xshift=-14pt,yshift=0pt]{\scriptsize $p_{i-1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,6)--(0,8) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $p_1$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,10)--(0,12) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $p_0$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=2pt]
(3,10)--(5,12) node [midway,xshift=-8pt,yshift=6pt]{\scriptsize $p_{d-1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(7,1)--(7,3) node [midway,xshift=-14pt,yshift=0pt]{\scriptsize $p_{i+1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(7,6)--(7,8) node [midway,xshift=-14pt,yshift=0pt]{\scriptsize $p_{d-2}$};
\end{tikzpicture}
\ \right] \\
&=\cdots\\
&=\sum_{j=i+1}^{d-1}\sum_{c_j=0}^{p_j-1}(-1)^{c_j+p_{j+1}+\cdots +p_{d-1}}
\left[
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=44pt]
\node[vertex_black] (Bi) at (0,0) {};
\node[vertex_white] (Wi-11) at (0,1) {};
\node[vertex_white] (Wi-12) at (0,3) {};
\node[vertex_black] (B2) at (0,5) {};
\node[vertex_white] (W11) at (0,6) {};
\node[vertex_white] (W12) at (0,8) {};
\node[vertex_black] (B1) at (0,9) {};
\node[vertex_white] (W01) at (0,10) {};
\node[vertex_white] (W02) at (0,12) {};
\node[vertex_black] (Bd) at (2,9) {};
\node[vertex_white] (Wd1) at (3,10) {};
\node[vertex_white] (Wd2) at (5,12) {};
\node[vertex_black] (Bj+2) at (9,9) {};
\node[vertex_white] (Wj+21) at (10,10) {};
\node[vertex_white] (Wj+22) at (12,12) {};
\node[vertex_black] (Bj+1) at (14,9) {};
\node[vertex_white] (Wj+11) at (15,10) {};
\node[vertex_white] (Wj+12) at (17,12) {};
\node[vertex_black] (Bi+1) at (20,0) {};
\node[vertex_white] (Wi+11) at (20,1) {};
\node[vertex_white] (Wi+12) at (20,3) {};
\node[vertex_black] (Bj) at (20,5) {};
\node[vertex_white] (Wj1) at (20,6) {};
\node[vertex_white] (Wj2) at (20,8) {};
\draw{
(Bi)--(Wi-11) (B2)--(W11) (W12)--(B1)--(W01) (W02)--(Bd)--(Wd1)
(Wd2)--(6,10.5) (8,10.5)--(Bj+2)--(Wj+21) (Wj+22)--(Bj+1)--(Wj+11)
(Bi+1)--(Wi+11) (Bj)--(Wj1)
};
\draw[densely dotted] {
(Wi-11)--(Wi-12)--(B2) (W11)--(W12) (W01)--(W02) (Wd1)--(Wd2)
(6.3,10.5)--(7.7,10.5) (Wj+21)--(Wj+22) (Wj+11)--(Wj+12)
(Wi+11)--(Wi+12)--(Bj) (Wj1)--(Wj2)
};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,1)--(0,3) node [midway,xshift=-14pt,yshift=0pt]{\scriptsize $p_{i-1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,6)--(0,8) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $p_1$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,10)--(0,12) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $p_0$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=2pt]
(3,10)--(5,12) node [midway,xshift=-12pt,yshift=4pt]{\scriptsize $p_{d-1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=2pt]
(10,10)--(12,12) node [midway,xshift=-12pt,yshift=4pt]{\scriptsize $p_{j+1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=2pt]
(15,10)--(17,12) node [midway,xshift=-8pt,yshift=4pt]{\scriptsize $c_j$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(20,1)--(20,3) node [midway,xshift=-14pt,yshift=0pt]{\scriptsize $p_{i+1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(20,6)--(20,8) node [midway,xshift=-18pt,yshift=0pt]{\scriptsize $p_j-c_j$};
\end{tikzpicture}
\ \right]\\
&\qquad +(-1)^{p_{i+1}+\cdots+p_{d-1}}
\left[
\begin{tikzpicture}[thick,x=8pt,y=8pt,baseline=44pt]
\node[vertex_black] (Bi) at (0,0) {};
\node[vertex_white] (Wi-11) at (0,1) {};
\node[vertex_white] (Wi-12) at (0,3) {};
\node[vertex_black] (B2) at (0,5) {};
\node[vertex_white] (W11) at (0,6) {};
\node[vertex_white] (W12) at (0,8) {};
\node[vertex_black] (B1) at (0,9) {};
\node[vertex_white] (W01) at (0,10) {};
\node[vertex_white] (W02) at (0,12) {};
\node[vertex_black] (Bd) at (2,9) {};
\node[vertex_white] (Wd1) at (3,10) {};
\node[vertex_white] (Wd2) at (5,12) {};
\node[vertex_black] (Bi+2) at (9,9) {};
\node[vertex_white] (Wi+21) at (10,10) {};
\node[vertex_white] (Wi+22) at (12,12) {};
\node[vertex_black] (Bi+1) at (14,9) {};
\draw{
(Bi)--(Wi-11) (B2)--(W11) (W12)--(B1)--(W01) (W02)--(Bd)--(Wd1)
(Wd2)--(6,10.5) (8,10.5)--(Bi+2)--(Wi+21) (Wi+22)--(Bi+1)
};
\draw[densely dotted] {
(Wi-11)--(Wi-12)--(B2) (W11)--(W12) (W01)--(W02) (Wd1)--(Wd2)
(6.3,10.5)--(7.7,10.5) (Wi+21)--(Wi+22)
};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,1)--(0,3) node [midway,xshift=-14pt,yshift=0pt]{\scriptsize $p_{i-1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,6)--(0,8) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $p_1$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=0pt]
(0,10)--(0,12) node [midway,xshift=-10pt,yshift=0pt]{\scriptsize $p_0$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=2pt]
(3,10)--(5,12) node [midway,xshift=-12pt,yshift=4pt]{\scriptsize $p_{d-1}$};
\draw[decorate,decoration={brace,amplitude=3},xshift=-2pt,yshift=2pt]
(10,10)--(12,12) node [midway,xshift=-12pt,yshift=4pt]{\scriptsize $p_{i+1}$};
\end{tikzpicture}
\ \right]\,.
\end{align}
This together with Theorem~\ref{thm:integral series identity} shows the desired result.
\end{proof}
In the following we give explicit expressions of $P_l({\boldsymbol{k}})$ for the cases $d=2$ and $d=3$ which are valid whether ${\boldsymbol{k}}$ is admissible or not.
\begin{cor}\label{cor:pldep2}
For $k_1,k_2\ge 1$ and $l>0$, we have
\begin{align}
\label{eq:d2 weighted sum any index}
&P_{l}(k_1,k_2)
=
(-1)^{k_2}
\sum_{\substack{w_1,w_2\ge2\\w_1+w_2=k_1+k_2+l}}
(-1)^{w_1}
\binom{w_1-1}{k_2-1}\binom{w_2-1}{l}\zeta(w_1)\zeta(w_2)\\
&\hspace{7mm}
+(-1)^{k_1}
\sum_{\substack{w_1\ge 1, w_2\ge2\\ w_1+w_2=k_1+k_2+l}}
\binom{w_1-1}{k_1-1}\binom{w_2-1}{l}\zeta(w_1,w_2)
+\mathbbm{1}_{k_2=1}\binom{l+k_1-1}{k_1-1}\zeta
\begin{varray}
1 \\ l+k_1
\end{varray}
\,.
\end{align}
\end{cor}
\begin{cor}\label{cor:pldep3}
For $k_1,k_2,k_3\ge 1$ and $l>0$, we have
\begin{align}
& P_{l}(k_1,k_2,k_3)=(-1)^{k_1+k_{2}}
\!\!\!\!
\sum_{\substack{w_1,w_2\ge1,w_3\ge 2\\ w_1+w_2+w_3=k_1+k_2+k_3+l}}
\!\!\!\!
\binom{w_1-1}{k_2-1}\binom{w_2-1}{k_1-1}\binom{w_3-1}{l}
\zeta(w_1,w_2,w_3)\\
&\hspace{10mm}+(-1)^{k_2+k_3}\!\!\!\!\!\!\!
\!\!\!\!\!\!\! \sum_{\substack{
w_1\ge1, w_2\ge 2,w_3\ge 2\\
w_1+w_2+w_3=k_1+k_2+k_3+l
}}\!\!\!\!
(-1)^{w_1+w_2}
\binom{w_1-1}{k_2-1}\binom{w_2-1}{k_3-1}\binom{w_3-1}{l}
\zeta(w_1,w_2)\zeta(w_3)\\
&\hspace{10mm}
+(-1)^{k_1+k_3}
\!\!\!\!
\sum_{\substack{w_1\ge 2, w_2\ge1, w_3\ge2\\ w_1+w_2+w_3=k_1+k_2+k_3+l}}
\!\!\!\!
(-1)^{w_1}
\binom{w_1-1}{k_3-1}\binom{w_2-1}{k_1-1}\binom{w_3-1}{l}
\zeta(w_1)\zeta(w_2,w_3)\\
&\hspace{10mm}
+\mathbbm{1}_{k_3=1}(-1)^{k_2+1}\sum_{\substack{b_0\ge 2,b_2\ge 1\\ b_0+b_2=k_1+k_2+l}}(-1)^{b_2}
\binom{b_0-1}{l}\binom{b_2-1}{k_2-1}\\
&\hspace{60mm}
\times
\left\{
(-1)^{b_{2}}\zeta
\begin{varray}
1,b_2\\
b_0
\end{varray}
+\sum_{c_2=1}^{b_2-1}(-1)^{c_2}\zeta
\begin{varray}
c_2\\
b_0\end{varray}
\zeta(b_2-c_2+1)
\right\}
\\
&\hspace{10mm}
+\mathbbm{1}_{k_3=1}(-1)^{k_1}
\sum_{\substack{b_0\ge 2,b_1\ge 1 \\ b_0+b_1=k_1+k_2+l}}
\binom{b_0-1}{l}\binom{b_1-1}{k_1-1}
\zeta
\begin{varray}
1 \\ b_1,b_0
\end{varray}
\,.
\end{align}
\end{cor}
\section{Ribbons}\label{sec:ribbons}
\subsection{Preparation}
In this section, we study the sums of Schur MZVs for ribbon diagrams.
Recall that a skew Young diagram is called a \emph{ribbon} if it is connected and contains no $2\times 2$ block of boxes.
Explicitly, such a ribbon can be drawn as
\begin{equation}\label{eq:ribbon}
\begin{tikzpicture}[x=12pt,y=12pt,baseline=90pt]
\draw (0,0) rectangle (4,1);
\draw[decorate,decoration={brace,amplitude=5},xshift=0pt,yshift=1pt]
(0,1)--(4,1) node [midway,xshift=0pt,yshift=10pt]{$s_1$};
\draw (4,0) rectangle (5,4);
\draw[decorate,decoration={brace,mirror,amplitude=5},xshift=1pt,yshift=0pt]
(5,0)--(5,4) node [midway,xshift=12pt,yshift=0pt]{$r_1$};
\draw (4,4) rectangle (8,5);
\draw[decorate,decoration={brace,amplitude=5},xshift=0pt,yshift=1pt]
(4,5)--(8,5) node [midway,xshift=0pt,yshift=10pt]{$s_2$};
\draw (8,4) rectangle (9,8);
\draw[decorate,decoration={brace,mirror,amplitude=5},xshift=1pt,yshift=0pt]
(9,4)--(9,8) node [midway,xshift=12pt,yshift=0pt]{$r_2$};
\draw[dashed] (8.5,8.5)--(10.5,10.5);
\draw (10,11) rectangle (14,12);
\draw[decorate,decoration={brace,amplitude=5},xshift=0pt,yshift=1pt]
(10,12)--(14,12) node [midway,xshift=0pt,yshift=10pt]{$s_n$};
\draw (14,11) rectangle (15,15);
\draw[decorate,decoration={brace,mirror,amplitude=5},xshift=1pt,yshift=0pt]
(15,11)--(15,15) node [midway,xshift=12pt,yshift=0pt]{$r_n$};
\end{tikzpicture}
\end{equation}
where the integers $s_1\ge 0$, $s_2,\ldots,s_n,r_1,\ldots,r_n>0$ indicate the numbers of boxes.
\begin{defn}
For integers $w,s_1,\ldots,s_n\ge 0$ and $r_1,\ldots,r_n>0$,
we write
\begin{equation}
S_w\binom{s_1,\ldots,s_n}{r_1,\ldots,r_n}\coloneqq\sum_{\substack{{\boldsymbol{l}}_1,\ldots,{\boldsymbol{l}}_n\\ \dep({\boldsymbol{l}}_i)=s_i\\
{\boldsymbol{k}}_1,\ldots,{\boldsymbol{k}}_n:\text{ adm.}\\ \dep({\boldsymbol{k}}_i)=r_i\\ \sum_i\wt({\boldsymbol{k}}_i)+\sum_i\wt({\boldsymbol{l}}_i)=w}}
\zeta\begin{varray}{\boldsymbol{l}}_1,&\ldots,&{\boldsymbol{l}}_n\\ {\boldsymbol{k}}_1,&\ldots,&{\boldsymbol{k}}_n \end{varray},
\end{equation}
where we define as a generalization of \eqref{eq:antihookzeta}
\begin{equation}\label{eq:zeta(ll/kk)}
\zeta\begin{varray}{\boldsymbol{l}}_1,&\ldots,&{\boldsymbol{l}}_n\\ {\boldsymbol{k}}_1,&\ldots,&{\boldsymbol{k}}_n \end{varray}
\coloneqq \sum_{\substack{0<b_{i,1}\le\cdots\le b_{i,s_i+1}\\ 0<a_{i,1}<\cdots<a_{i,r_i}\\
b_{i,s_i+1}=a_{i,r_i}\;(i=1,\ldots,n)\\
b_{i+1,1}<a_{i,1}\;(i=1,\ldots,n-1)}}
\prod_{i=1}^n\frac{1}{a_{i,1}^{k_{i,1}}\cdots a_{i,r_i}^{k_{i,r_i}}
b_{i,1}^{l_{i,1}}\cdots b_{i,s_i}^{l_{i,s_i}}}
\end{equation}
for indices ${\boldsymbol{l}}_i=(l_{i,1},\ldots,l_{i,s_i})$ of depth $s_i$ and
${\boldsymbol{k}}_i=(k_{i,1},\ldots,k_{i,r_i})$ of depth $r_i$.
Note that the series \eqref{eq:zeta(ll/kk)} is meaningful even if some $s_i$ is zero,
i.e., ${\boldsymbol{l}}_i=\varnothing$.
\end{defn}
\begin{remark} Notice that $S_w\binom{s_1,\ldots,s_n}{r_1,\ldots,r_n}$ gives the sum over all admissible tableaux of shape \eqref{eq:ribbon} as in \eqref{eq:sk} only when $s_2,\dots,s_n>0$.
For example, we have
\[S_w\begin{varray}2 \\ 2 \end{varray}=
\sum_{\substack{a,b,d\ge 1,\,c\ge 2\\ a+b+c+d=w}}
\zeta\left(\;{\footnotesize\begin{ytableau}
\none & \none & d \\
a & b & c
\end{ytableau}\;}\right) = S_w\left(\;{\footnotesize\begin{ytableau}
\none & \none & \, \\
\, & \, & \,
\end{ytableau}\;}\right) \]
but
\[S_w\begin{varray}2, & 0 \\ 1, & 1 \end{varray}=
\sum_{\substack{a,b\ge 1,\,c,d\ge 2\\ a+b+c+d=w}}
\zeta\left(\;{\footnotesize\begin{ytableau}
\none & \none & d \\
a & b & c
\end{ytableau}\;}\right) \ne S_w\left(\;{\footnotesize\begin{ytableau}
\none & \none & \, \\
\, & \, & \,
\end{ytableau}\;}\right) . \]
In the latter, the index $(d)$ is required to be admissible, i.e., $d\geq 2$.
\end{remark}
Note that $S_w\begin{varray}s_1,&\ldots,&s_n\\ r_1,&\ldots,&r_n \end{varray}$ is nonzero
only when
\[w\ge s_1+\cdots+s_n+r_1+\cdots+r_n+n. \]
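The bound follows by counting minimal entries: each of the $s_1+\cdots+s_n$ entries of the ${\boldsymbol{l}}_i$ is at least $1$, and each block ${\boldsymbol{k}}_i$ contributes at least $r_i+1$, since its $r_i$ entries are at least $1$ with the last one at least $2$. A small enumeration confirms that no summand exists below this weight (a sketch with illustrative helper names `compositions` and `num_terms`, not from the paper):

```python
def compositions(total, parts, mins):
    """All tuples of length `parts` with entry j at least mins[j], summing to `total`."""
    if parts == 0:
        if total == 0:
            yield ()
        return
    for first in range(mins[0], total + 1):
        for rest in compositions(total - first, parts - 1, mins[1:]):
            yield (first,) + rest

def num_terms(w, s, r):
    """Count the summands of S_w(s_1,...,s_n / r_1,...,r_n): every l-entry is >= 1,
    and each block k_i has r_i entries >= 1 whose last entry is >= 2 (admissibility)."""
    mins = [1] * sum(s)
    for ri in r:
        mins += [1] * (ri - 1) + [2]
    return sum(1 for _ in compositions(w, len(mins), mins))

s, r = (1, 0), (2, 1)             # a shape with n = 2 corners
bound = sum(s) + sum(r) + len(r)  # minimal weight: 1 + 3 + 2 = 6
assert num_terms(bound - 1, s, r) == 0  # empty below the bound
assert num_terms(bound, s, r) == 1      # exactly one minimal term at the bound
```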
Our basic strategy for computing these sums on ribbons is
to reduce the number of corners $n$ by using the following formula:
\begin{prop}\label{prop:inductive}
Let $s_1,\ldots,s_n\ge 0$ and $r_1,\ldots,r_n>0$ be integers.
For $1\le i\le n-1$ with $r_i\ge 2$, we have
\begin{align}\label{eq:inductive}
S_w\begin{varray}s_1,&\ldots,&s_i,&s_{i+1},&\ldots,&s_n\\
r_1,&\ldots,&r_i,&r_{i+1},&\ldots,&r_n\end{varray}
+S_w\begin{varray}s_1,&\ldots,&s_i,&s_{i+1}+1,&\ldots,&s_n\\
r_1,&\ldots,&r_i-1,&r_{i+1},&\ldots,&r_n\end{varray}\\
=\sum_{w_1+w_2=w}S_{w_1}\begin{varray}s_1,&\ldots,&s_i\\ r_1,&\ldots,&r_i \end{varray}\cdot
S_{w_2}\begin{varray}s_{i+1},&\ldots,&s_n\\ r_{i+1},&\ldots,&r_n \end{varray}.
\end{align}
\end{prop}
\begin{proof}
By switching the inequality $b_{i+1,1}<a_{i,1}$ in \eqref{eq:zeta(ll/kk)} (for the given $i$)
to the opposite $a_{i,1}\le b_{i+1,1}$, we deduce that
\begin{multline}
\zeta\begin{varray}
{\boldsymbol{l}}_1,&\ldots,&{\boldsymbol{l}}_i,&{\boldsymbol{l}}_{i+1},&\ldots,&{\boldsymbol{l}}_n\\
{\boldsymbol{k}}_1,&\ldots,&{\boldsymbol{k}}_i,&{\boldsymbol{k}}_{i+1},&\ldots,&{\boldsymbol{k}}_n
\end{varray}
+\zeta\begin{varray}
{\boldsymbol{l}}_1,&\ldots,&{\boldsymbol{l}}_i,&{\boldsymbol{l}}_{i+1}',&\ldots,&{\boldsymbol{l}}_n\\
{\boldsymbol{k}}_1,&\ldots,&{\boldsymbol{k}}_i',&{\boldsymbol{k}}_{i+1},&\ldots,&{\boldsymbol{k}}_n
\end{varray}\\
=\zeta\begin{varray}{\boldsymbol{l}}_1,&\ldots,&{\boldsymbol{l}}_i\\ {\boldsymbol{k}}_1,&\ldots,&{\boldsymbol{k}}_i \end{varray}\cdot
\zeta\begin{varray}{\boldsymbol{l}}_{i+1},&\ldots,&{\boldsymbol{l}}_n\\ {\boldsymbol{k}}_{i+1},&\ldots,&{\boldsymbol{k}}_n \end{varray},
\end{multline}
where ${\boldsymbol{k}}_i'=(k_{i,2},\ldots,k_{i,r_i})$ and ${\boldsymbol{l}}_{i+1}'=(k_{i,1},l_{i+1,1},\ldots,l_{i+1,s_{i+1}})$.
Then \eqref{eq:inductive} follows.
\end{proof}
By using Proposition \ref{prop:inductive} repeatedly,
one can express the sums on general ribbons in terms of the values
of the type $S_w\begin{varray}s,&0,&\ldots,&0\\ r_1,&r_2,&\ldots,&r_n\end{varray}$.
For example, we have
\begin{align}
S_w\begin{varray} s,&1\\ r_1,&r_2 \end{varray}
&=\sum_{w_1+w_2=w}S_{w_1}\begin{varray} s\\ r_1+1\end{varray} S_{w_2}\begin{varray} 0\\ r_2\end{varray}
-S_w\begin{varray} s,&0\\ r_1+1,&r_2 \end{varray},\\
S_w\begin{varray} s,&2\\ r_1,&r_2 \end{varray}
&=\sum_{w_1+w_2=w}S_{w_1}\begin{varray} s\\ r_1+1\end{varray} S_{w_2}\begin{varray} 1\\ r_2\end{varray}
-S_w\begin{varray} s,&1\\ r_1+1,&r_2 \end{varray}\\
&=\sum_{w_1+w_2=w}S_{w_1}\begin{varray} s\\ r_1+1\end{varray} S_{w_2}\begin{varray} 1\\ r_2\end{varray}\\
&\qquad -\sum_{w_1+w_2=w}S_{w_1}\begin{varray} s\\ r_1+2\end{varray}
S_{w_2}\begin{varray} 0\\ r_2\end{varray}+S_w\begin{varray} s,&0\\ r_1+2,&r_2 \end{varray}
\end{align}
and so on (cf.~Lemma \ref{lem:ribbon_2_corner_prelim}).
For sums of the latter type, the following formula holds:
\begin{thm}\label{thm:s00}
For $w\ge 0$, $s\ge 0$ and $r_1,\ldots,r_n>0$, we have
\begin{equation}\label{eq:s00}
S_w\begin{varray}
s, & 0, & \ldots, & 0\\
r_1, & r_2, & \ldots, & r_n
\end{varray}
=\sum_{\substack{t_1,\ldots,t_n\ge 0\\ t_1+\cdots+t_n=s}}
\sum_{\substack{w_i\ge r_i+t_i+1\\ w_1+\cdots+w_n=w}}
\prod_{i=1}^n\binom{w_i-1}{t_i}\zeta(w_1,\ldots,w_n).
\end{equation}
\end{thm}
\begin{proof}
Put $r\coloneqq r_1+\cdots+r_n$.
Then the left hand side is the sum of the series
\begin{equation}\label{eq:linearize}
\ytableausetup{boxsize=1.2em}
\zeta\left(\;\begin{ytableau}
\none & \none & \none & k_1 \\
\none & \none & \none & \svdots \\
l_1 & \cdots & l_s & k_r
\end{ytableau}\;\right)=\sum_{0<a_1<\cdots<a_r\ge b_s\ge\cdots\ge b_1>0}
\frac{1}{a_1^{k_1}\cdots a_r^{k_r}b_1^{l_1}\cdots b_s^{l_s}},
\end{equation}
where $(l_1,\ldots,l_s)$ runs through indices of depth $s$ and $(k_1,\ldots,k_r)$ runs through
indices of depth $r$ such that $k_p\ge 2$ for $p\in\{r_n,r_n+r_{n-1},\ldots,r\}$,
satisfying $l_1+\cdots+l_s+k_1+\cdots+k_r=w$.
By ``stuffling'', i.e., classifying all possible orders of $a_p$'s and $b_q$'s,
this series is expanded into a certain sum of MZVs $\zeta(w_1,\ldots,w_{r+u})$
of weight $w$ with $0\le u\le s$. Here each entry of $(w_1,\ldots,w_{r+u})$ is
of the form
\begin{equation}\label{eq:decompose}
k_p+l_q+\cdots+l_{q'} \text{ or } l_q+\cdots+l_{q'},
\end{equation}
where $l_q,\ldots,l_{q'}$ are some consecutive members of $l_1,\ldots,l_s$,
possibly zero members in the former case and at least one member in the latter.
Let us fix an index $(w_1,\ldots,w_{r+u})$ of weight $w$ with $0\le u\le s$
and count how many times $\zeta(w_1,\ldots,w_{r+u})$ appears in
$S_w\begin{varray}
s, & 0, & \ldots, & 0\\
r_1, & r_2, & \ldots, & r_n
\end{varray}$
when all series \eqref{eq:linearize} are expanded as above.
First, choose numbers $u_1,\ldots,u_n\ge 0$ satisfying $u_1+\cdots+u_n=u$,
and consider the case in which
$w_{r_n+u_n+\cdots+r_i+u_i}$ contains $k_{r_n+\cdots+r_i}$ as a summand for $i=1,\ldots,n$.
Then the number of possible placements of the other $k_p$'s is
$\prod_{i=1}^n\binom{r_i-1+u_i}{u_i}$. Moreover,
each entry of $(w_1,\ldots,w_{r+u})$ is decomposed into a sum of type \eqref{eq:decompose},
and the total number of plus signs `$+$' is $s-u$.
The number of possible such decompositions is
\begin{align}
\binom{(w_1-1)+\cdots+(w_{r_n+u_n-1}-1)+(w_{r_n+u_n}-2)+\cdots+(w_{r+u}-2)}{s-u}&\\
=\binom{w-n-r-u}{s-u}&.
\end{align}
Therefore we obtain that
\begin{align}
S_w\begin{varray}
s, & 0, & \ldots, & 0\\
r_1, & r_2, & \ldots, & r_n
\end{varray}
=\sum_{\substack{u_1,\ldots,u_n\ge 0\\ u\coloneqq u_1+\cdots+u_n\le s}}
\binom{w-n-r-u}{s-u}\prod_{i=1}^n\binom{r_i+u_i-1}{u_i}&\\
\times\sum_{\substack{w_1,\ldots,w_{r+u}\ge 1\\ w_p\ge 2\;(p=r_n+u_n+\cdots+r_i+u_i)\\
w_1+\cdots+w_{r+u}=w}}\zeta(w_1,\ldots,w_{r+u})&.
\end{align}
By applying Ohno's relation to the last sum, we see that this is equal to
\begin{align}\label{eq:Ohno}
S_w\begin{varray}
s, & 0, & \ldots, & 0\\
r_1, & r_2, & \ldots, & r_n
\end{varray}
=\sum_{\substack{u_1,\ldots,u_n\ge 0\\ u\coloneqq u_1+\cdots+u_n\le s}}
\binom{w-n-r-u}{s-u}\prod_{i=1}^n\binom{r_i+u_i-1}{u_i}&\\
\times\sum_{\substack{w_i\ge r_i+u_i+1\\ w_1+\cdots+w_n=w}}\zeta(w_1,\ldots,w_n)&.
\end{align}
By means of the identity
\[
\binom{w-n-r-u}{s-u}
=\sum_{\substack{v_1,\ldots,v_n\ge 0\\ v_1+\cdots+v_n=s-u}}
\prod_{i=1}^n\binom{w_i-1-r_i-u_i}{v_i},
\]
one can rewrite the expression \eqref{eq:Ohno} as
\begin{align}
&\sum_{\substack{u_1,\ldots,u_n\ge 0\\ v_1,\ldots,v_n\ge 0\\ u_1+v_1+\cdots+u_n+v_n=s}}
\sum_{\substack{w_i\ge r_i+u_i+1\\ w_1+\cdots+w_n=w}}
\prod_{i=1}^n\binom{w_i-1-r_i-u_i}{v_i}\binom{r_i+u_i-1}{u_i}\zeta(w_1,\ldots,w_n)\\
&=\sum_{\substack{t_1,\ldots,t_n\ge 0\\ t_1+\cdots+t_n=s}}
\sum_{\substack{w_i\ge r_i+t_i+1\\ w_1+\cdots+w_n=w}}
\prod_{i=1}^n\Biggl(\sum_{u_i=0}^{t_i}\binom{w_i-1-r_i-u_i}{t_i-u_i}\binom{r_i+u_i-1}{u_i}\Biggr)
\zeta(w_1,\ldots,w_n).
\end{align}
Here we put $t_i=u_i+v_i$ and use that
\[\binom{w_i-1-r_i-u_i}{t_i-u_i}=0\quad \text{ for } r_i+u_i+1\le w_i\le r_i+t_i. \]
Thus the theorem follows from the identity
\begin{align}
\sum_{u_i=0}^{t_i}\binom{w_i-1-r_i-u_i}{t_i-u_i}\binom{r_i+u_i-1}{u_i}
&=\binom{w_i-1}{t_i}. \qedhere
\end{align}
\end{proof}
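Both binomial identities used in the proof (the Vandermonde-type splitting of $\binom{w-n-r-u}{s-u}$ and the final identity) are elementary. A small Python sketch checks them over sample parameter ranges; the convention $\binom{n}{k}=0$ outside $0\le k\le n$ is implemented by the helper \texttt{binom}:

```python
from math import comb

def binom(n, k):
    # binomial coefficient with the convention binom(n, k) = 0 outside 0 <= k <= n
    return comb(n, k) if 0 <= k <= n else 0

# Vandermonde-type identity splitting binom(w - n - r - u, s - u) over the w_i,
# checked here for n = 2 and small sample parameters (r_i, u_i, s)
for r1, r2, u1, u2, s in [(1, 2, 0, 1, 3), (2, 1, 1, 1, 2)]:
    u = u1 + u2
    for w1 in range(r1 + u1 + 1, r1 + u1 + 6):
        for w2 in range(r2 + u2 + 1, r2 + u2 + 6):
            lhs = binom((w1 + w2) - 2 - (r1 + r2) - u, s - u)
            rhs = sum(binom(w1 - 1 - r1 - u1, v1) * binom(w2 - 1 - r2 - u2, s - u - v1)
                      for v1 in range(0, s - u + 1))
            assert lhs == rhs

# final binomial identity of the proof
for r in range(1, 5):
    for t in range(0, 5):
        for w in range(r + t + 1, r + t + 8):
            lhs = sum(binom(w - 1 - r - u, t - u) * binom(r + u - 1, u) for u in range(t + 1))
            assert lhs == binom(w - 1, t)
```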
\begin{cor}\label{cor:ncornerribbon}
For $n\geq 1$ and integers $s_1,\ldots,s_n\ge 0$ and $r_1,\ldots,r_n>0$, the sum
$S_w\binom{s_1,\ldots,s_n}{r_1,\ldots,r_n}$ with $w\geq s_1+\dots+s_n+r_1+\dots+r_n+n$ can be written as a $\mathbb{Q}$-linear combination of MZVs of weight $w$ and depth $\leq n$.
\end{cor}
\begin{proof}
This is a direct consequence of Proposition \ref{prop:inductive} and Theorem \ref{thm:s00}.
\end{proof}
\begin{cor}\label{cor:symmetric sum}
For any $s\ge 0$ and $r_1,\ldots,r_n>0$, the symmetric sum
\[\sum_{\sigma\in\mathfrak{S}_n}
S_w\begin{varray} s,&0,&\ldots,&0\\ r_{\sigma(1)},&r_{\sigma(2)},&\ldots,&r_{\sigma(n)}\end{varray}\]
is a polynomial in single zeta values. In particular,
$S_w\begin{varray} s,&0,&\ldots,&0\\ r,&r,&\ldots,&r\end{varray}$
is a polynomial in single zeta values.
\end{cor}
\begin{proof}
By \eqref{eq:s00}, this symmetric sum is equal to
\[\sum_{\substack{t_1,\ldots,t_n\ge 0\\ t_1+\cdots+t_n=s}}
\sum_{\substack{w_i\ge r_i+t_i+1\\ w_1+\cdots+w_n=w}}
\prod_{i=1}^n\binom{w_i-1}{t_i}\sum_{\sigma\in\mathfrak{S}_n} \zeta(w_{\sigma(1)},\ldots,w_{\sigma(n)}). \]
Thus the claim follows from Hoffman's symmetric sum formula \cite[Theorem 2.2]{Hoffman92}.
\end{proof}
\begin{example}
The case of $n=1$ will be treated in Theorem \ref{thm:anti-hook}.
Here let us consider the case of $n=2$ and $r_1=r_2=r$.
We have
\begin{align}
S_w\begin{varray} s,& 0\\ r,& r \end{varray}
&=\sum_{\substack{t_1,t_2\ge 0\\ t_1+t_2=s}}\sum_{\substack{w_i\ge r+t_i+1\\ w_1+w_2=w}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}\zeta(w_1,w_2)\\
&=\frac{1}{2}\sum_{\substack{t_1,t_2\ge 0\\ t_1+t_2=s}}\sum_{\substack{w_i\ge r+t_i+1\\ w_1+w_2=w}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}\bigl(\zeta(w_1,w_2)+\zeta(w_2,w_1)\bigr)\\
&=\frac{1}{2}\sum_{\substack{t_1,t_2\ge 0\\ t_1+t_2=s}}\sum_{\substack{w_i\ge r+t_i+1\\ w_1+w_2=w}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}\bigl(\zeta(w_1)\zeta(w_2)-\zeta(w)\bigr).
\end{align}
This is a kind of sum formula of polynomial type.
For a sum formula of bounded type, see \S\ref{sec:general_nagoya}.
\end{example}
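The harmonic product used in the last step, in the rearranged form $\zeta(w_1,w_2)+\zeta(w_2,w_1)+\zeta(w_1+w_2)=\zeta(w_1)\zeta(w_2)$, holds exactly at every truncation level. A minimal Python check with exact rational arithmetic ($M$, $w_1$, $w_2$ are sample values):

```python
from fractions import Fraction

def zeta_trunc(M, *ks):
    # truncated MZV: sum over 0 < m_1 < ... < m_d < M of 1 / (m_1^{k_1} ... m_d^{k_d})
    def rec(lo, ks):
        if not ks:
            return Fraction(1)
        return sum(Fraction(1, m**ks[0]) * rec(m, ks[1:]) for m in range(lo + 1, M))
    return rec(0, ks)

# stuffle relation: pairs (m, n) split according to m < n, n < m, m = n
M, w1, w2 = 30, 3, 4
lhs = zeta_trunc(M, w1, w2) + zeta_trunc(M, w2, w1) + zeta_trunc(M, w1 + w2)
assert lhs == zeta_trunc(M, w1) * zeta_trunc(M, w2)
```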
\subsection{Sum formulas of single type}
\label{subsec:single_type}
In this subsection,
we present sum formulas of single type for two special types of ribbons.
The first is a simultaneous generalization of the classical sum formulas
for the MZVs and MZSVs stated in \eqref{eq:mzvsumformula}.
\begin{thm}\label{thm:anti-hook}
For any integers $r\ge 1$, $s\ge 0$ and $w\ge s+r+1$, we have
\begin{equation}\label{eq:anti-hook}
S_w\begin{varray}s \\ r\end{varray}=\binom{w-1}{s}\zeta(w).
\end{equation}
\end{thm}
\begin{proof}
This immediately follows from Theorem \ref{thm:s00}.
\end{proof}
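For instance, for $s=r=1$ and $w=4$ the admissible pairs are $(l_1,k_1)=(1,3)$ and $(2,2)$, so \eqref{eq:anti-hook} asserts $\sum_{0<b\le a}\bigl(a^{-3}b^{-1}+a^{-2}b^{-2}\bigr)=\binom{3}{1}\zeta(4)$. A numerical Python sketch of this instance (the truncation level $M$ is an ad hoc choice):

```python
import math

# check (eq:anti-hook) numerically for s = 1, r = 1, w = 4:
#   sum_{b <= a} ( 1/(a^3 b) + 1/(a^2 b^2) ) = binom(3, 1) * zeta(4)
M = 100_000
h1 = h2 = lhs = 0.0
for a in range(1, M):
    h1 += 1.0 / a      # running sum of 1/b   over b <= a
    h2 += 1.0 / a**2   # running sum of 1/b^2 over b <= a
    lhs += h1 / a**3 + h2 / a**2
assert abs(lhs - 3 * math.pi**4 / 90) < 1e-4
```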
The second formula is for the ``stair of tread one'' shape.
The proof is an application of Proposition \ref{prop:inductive}
and Theorem \ref{thm:s00}.
\begin{thm}\label{thm:stair 1}
For any integers $r\ge 1$, $n\ge 1$ and $w\ge (r+2)n+1$, we have
\begin{equation}\label{eq:stair 1}
S_w\begin{varray}\{1\}^{n-1},&1\\ \{r\}^{n-1},&r+1\end{varray}
=c_{w,r}(n)\zeta(w),
\end{equation}
where
\[
c_{w,r}(n)\coloneqq \frac{w-1}{n}\binom{w-(r+1)n-2}{n-1}.
\]
\end{thm}
See \eqref{eq:exampletreadone} in the introduction for examples in the cases $r=n=2$ and $r=1$, $n=3$.
\begin{remark}
The coefficient $c_{w,r}(n)$ is a positive integer. In fact,
\[
c_{w,r}(n)=(r+1)\binom{w-(r+1)n-2}{n-1}+\binom{w-(r+1)n-1}{n}.
\]
\end{remark}
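A direct Python check of the remark over small parameter ranges (exact rational arithmetic handles the division by $n$):

```python
from fractions import Fraction
from math import comb

def c(w, r, n):
    # c_{w,r}(n) = (w - 1)/n * binom(w - (r+1)n - 2, n - 1)
    return Fraction(w - 1, n) * comb(w - (r + 1) * n - 2, n - 1)

for r in range(1, 4):
    for n in range(1, 5):
        for w in range((r + 2) * n + 1, (r + 2) * n + 8):
            val = c(w, r, n)
            alt = (r + 1) * comb(w - (r + 1) * n - 2, n - 1) + comb(w - (r + 1) * n - 1, n)
            # the two closed forms agree, and the value is a positive integer
            assert val == alt and val.denominator == 1 and val > 0
```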
\begin{proof}[Proof of Theorem \ref{thm:stair 1}]
We prove \eqref{eq:stair 1} by induction on $n$.
For $n=1$, this is a special case of \eqref{eq:anti-hook}.
Let us assume $n>1$ and for $1\le i\le n$, put
\[
S_{w,r}(n,i)\coloneqq S_w\begin{varray}
\{1\}^{i-1}, & 1, &\{0\}^{n-i}\\
\{r\}^{i-1}, & r+1, &\{r+1\}^{n-i}
\end{varray}, \qquad
T_{w,r}(n)\coloneqq S_w\begin{varray}
\{0\}^n\\
\{r+1\}^n
\end{varray}.
\]
Then, for $1\le i\le n-1$, Proposition \ref{prop:inductive} shows that
\[
S_{w,r}(n,i)+S_{w,r}(n,i+1)
=\sum_{\substack{w_1\ge (r+2)i+1\\ w_2\ge (r+2)(n-i)}}S_{w_1,r}(i,i)\cdot T_{w_2,r}(n-i)
\]
(here and in what follows, we omit the ``total weight $=w$'' condition like $w_1+w_2=w$).
The induction hypothesis gives $S_{w_1,r}(i,i)=c_{w_1,r}(i)\zeta(w_1)$ for $1\le i\le n-1$,
while Theorem \ref{thm:s00} shows that
\[
T_{w_2,r}(n-i)=\sum_{\substack{w'_1,\ldots,w'_{n-i}\ge r+2\\ w'_1+\cdots+w'_{n-i}=w_2}}
\zeta(w'_1,\ldots,w'_{n-i}).
\]
Hence we obtain
\begin{equation}\label{eq:A_i+B_i}
\begin{split}
S_{w,r}(n,i)+S_{w,r}(n,i+1)
&=\sum_{\substack{w_1\ge (r+2)i+1\\ w'_1,\ldots,w'_{n-i}\ge r+2}}
c_{w_1,r}(i)\zeta(w_1)\zeta(w'_1,\ldots,w'_{n-i})\\
&=A_i+B_i,
\end{split}
\end{equation}
where $A_i$ and $B_i$ are defined by
\begin{align}
A_i&\coloneqq\sum_{\substack{w_1\ge (r+2)i+1\\ w'_1,\ldots,w'_{n-i}\ge r+2}}
c_{w_1,r}(i)\bigl\{\zeta(w_1,w'_1,\ldots,w'_{n-i})+\cdots+\zeta(w'_1,\ldots,w'_{n-i},w_1)\bigr\}\notag\\
&=\sum_{j=1}^{n-i+1}\sum_{\substack{w_1,\ldots,w_{n-i+1}\ge r+2\\ w_j\ge (r+2)i+1}}
c_{w_j,r}(i)\zeta(w_1,\ldots,w_{n-i+1})\label{eq:A_i}
\end{align}
and
\begin{align}
B_i&\coloneqq\sum_{\substack{w_1\ge (r+2)i+1\\ w'_1,\ldots,w'_{n-i}\ge r+2}}
c_{w_1,r}(i)\bigl\{\zeta(w_1+w'_1,\ldots,w'_{n-i})+\cdots+\zeta(w'_1,\ldots,w_1+w'_{n-i})\bigr\}\\
&=\sum_{j=1}^{n-i}\sum_{\substack{w_1,\ldots,w_{n-i}\ge r+2\\ w_j\ge (r+2)(i+1)+1}}
\sum_{a=(r+2)i+1}^{w_j-(r+2)} c_{a,r}(i)\zeta(w_1,\ldots,w_{n-i}).
\end{align}
Note that the definition of $A_i$ works for $i=n$ and gives $A_n=c_{w,r}(n)\zeta(w)$.
For $1\le i\le n-1$, we have $A_{i+1}=B_i$ since
\begin{align}
\sum_{a=(r+2)i+1}^{w_j-(r+2)}c_{a,r}(i)
&=\sum_{b=0}^k\frac{(r+2)i+b}{i}\binom{b+i-1}{i-1}\qquad (k\coloneqq w_j-(r+2)(i+1)-1)\\
&=(r+1)\sum_{b=0}^k\binom{b+i-1}{i-1}+\sum_{b=0}^k\binom{b+i}{i}\\
&=(r+1)\binom{k+i}{i}+\binom{k+i+1}{i+1}=c_{w_j,r}(i+1).
\end{align}
Thus \eqref{eq:A_i+B_i} says that
\[S_{w,r}(n,i)+S_{w,r}(n,i+1)=A_i+A_{i+1}. \]
This implies inductively that $S_{w,r}(n,i)=A_i$, starting from
\[S_{w,r}(n,1)=\sum_{j=1}^n\sum_{\substack{w_1,\ldots,w_n\ge r+2\\ w_j\ge r+3}}
(w_j-1)\zeta(w_1,\ldots,w_n)=A_1, \]
which is a consequence of \eqref{eq:s00} and \eqref{eq:A_i}.
The final identity $S_{w,r}(n,n)=A_n=c_{w,r}(n)\zeta(w)$ is exactly the desired formula.
\end{proof}
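The telescoping identity $\sum_{a=(r+2)i+1}^{w_j-(r+2)}c_{a,r}(i)=c_{w_j,r}(i+1)$ at the heart of the induction step can also be machine-checked; a small Python sketch over sample ranges:

```python
from fractions import Fraction
from math import comb

def c(w, r, n):
    # c_{w,r}(n) as in Theorem (stair of tread one); Fraction keeps the division exact
    return Fraction(w - 1, n) * comb(w - (r + 1) * n - 2, n - 1)

# telescoping identity: sum_{a=(r+2)i+1}^{W-(r+2)} c_{a,r}(i) = c_{W,r}(i+1)
for r in range(1, 4):
    for i in range(1, 4):
        for W in range((r + 2) * (i + 1) + 1, (r + 2) * (i + 1) + 8):
            total = sum(c(a, r, i) for a in range((r + 2) * i + 1, W - (r + 2) + 1))
            assert total == c(W, r, i + 1)
```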
\subsection{Two corners}
\label{sec:general_nagoya}
We next study ribbon shapes with two corners which give a bounded type sum formula.
The key ingredient is the sum formula weighted by the binomial coefficients
(\cref{prop:bwsumformula}).
For ribbons with two corners, we use only the case $d=2$.
For ease of calculation,
we rewrite \cref{cor:pldep2} in the following form
with $m_1=k_1-1$ and $m_2=k_2-1$:
For integers $m_1,m_2\ge0$ and $w\ge m_1+m_2+3$,
we have
\begin{equation}
\label{eq:weighted_sum_ready_to_apply}
\begin{aligned}
&\sum_{\substack{w_1\ge 1,w_2\ge 2\\w_1+w_2=w}}
\binom{w_1-1}{m_1}\binom{w_2-1}{m_2}
\zeta(w_1,w_2)\\
&=
(-1)^{m_2+1}
\sum_{\substack{w_1,w_2\ge2\\w_1+w_2=w}}
(-1)^{w_1}
\binom{w_1-1}{m_2}\binom{w_2-1}{m_1+m_2+1-w_1}
\zeta(w_1)\zeta(w_2)\\
&\hspace{20mm}
+(-1)^{m_1+1}
\sum_{\substack{w_1\ge 1, w_2\ge2\\ w_1+w_2=w}}
\binom{w_1-1}{m_1}\binom{w_2-1}{m_1+m_2+1-w_1}\zeta(w_1,w_2)\\
&\hspace{30mm}
+\mathbbm{1}_{m_2=0}\binom{w-2}{m_1}\bigl(\zeta(w)+\zeta(1,w-1)\bigr).
\end{aligned}
\end{equation}
We then start with a preliminary calculation:
\begin{lemma}
\label{lem:ribbon_2_corner_prelim}
For $s_1,s_2\ge0$, $r_1,r_2>0$ and $w\ge s_1+s_2+r_1+r_2+2$, we have
\begin{align}
S_w\begin{varray}
s_1, & s_2\\
r_1, & r_2
\end{varray}
&=
\sum_{i=0}^{s_2-1}
(-1)^{s_2-i-1}
\sum_{\substack{w_1\ge s_1+s_2+r_1-i+1\\w_2\ge r_2+i+1\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-1}{i}\zeta(w_1)\zeta(w_2)\\
&\hspace{20mm}
+(-1)^{s_2}
\sum_{\substack{t_1,t_2\ge0\\t_1+t_2=s_1}}
\sum_{\substack{w_1\ge s_2+r_1+t_1+1\\w_2\ge r_2+t_2+1\\w_1+w_2=w}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}\zeta(w_1,w_2).
\end{align}
\end{lemma}
\begin{proof}
By a repeated application of Proposition~\ref{prop:inductive},
we obtain
\begin{align}
S_{w}
\begin{varray}
s_{1}, & s_{2}\\
r_{1}, & r_{2}
\end{varray}
&=
\sum_{w_1+w_2=w}
S_{w_1}
\begin{varray}
s_{1}\\
r_{1}+1
\end{varray}
S_{w_2}
\begin{varray}
s_{2}-1\\
r_{2}
\end{varray}
-
S_{w}\begin{varray}
s_{1}, & s_{2}-1\\
r_{1}+1, & r_{2}
\end{varray}\\
&=\cdots\\
&=
\sum_{i=1}^{s_2}
(-1)^{i-1}
\sum_{w_1+w_2=w}
S_{w_1}
\begin{varray}
s_{1}\\
r_{1}+i
\end{varray}
S_{w_2}
\begin{varray}
s_{2}-i\\
r_{2}
\end{varray}
+(-1)^{s_2}
S_{w}\begin{varray}
s_{1}, & 0\\
s_2+r_{1}, & r_{2}
\end{varray}\\
&=
\sum_{i=0}^{s_2-1}
(-1)^{s_2-i-1}
\sum_{w_1+w_2=w}
S_{w_1}
\begin{varray}
s_{1}\\
s_2+r_{1}-i
\end{varray}
S_{w_2}
\begin{varray}
i\\
r_{2}
\end{varray}
+(-1)^{s_2}
S_{w}\begin{varray}
s_{1}, & 0\\
s_2+r_{1}, & r_{2}
\end{varray}.
\end{align}
By applying Theorem~\ref{thm:anti-hook}
to the first sum, taking care of the admissible range,
and Theorem~\ref{thm:s00}
to the second sum, we obtain the lemma.
\end{proof}
Before proceeding to the general case,
we show that there is indeed a polynomial type sum formula
for a specific class of ribbons with two corners.
With the notation in Lemma~\ref{lem:ribbon_2_corner_prelim},
the following formula is the case $r_2=s_2+r_1$.
\begin{thm}
\label{thm:Yamasaki_polynomial_observation}
For $s_1,s_2\ge0$, $r_1>0$ and $w\ge s_1+2s_2+2r_1+2$, the sum
\[
S_w\begin{varray}
s_1, & s_2\\
r_1, & s_2+r_1
\end{varray}
\]
is a polynomial in single zeta values.
\end{thm}
\begin{proof}
By Lemma~\ref{lem:ribbon_2_corner_prelim} with $r_2=s_2+r_1$,
it suffices to show
\[
\sum_{\substack{t_1,t_2\ge0\\t_1+t_2=s_1}}
\sum_{\substack{w_1\ge s_2+r_1+t_1+1\\w_2\ge s_2+r_1+t_2+1\\w_1+w_2=w}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}\zeta(w_1,w_2)
\]
can be written as a polynomial in single zeta values.
By symmetry, this sum equals
\[
\frac{1}{2}
\sum_{\substack{t_1,t_2\ge0\\t_1+t_2=s_1}}
\sum_{\substack{w_1\ge s_2+r_1+t_1+1\\w_2\ge s_2+r_1+t_2+1\\w_1+w_2=w}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}
\bigl(\zeta(w_1,w_2)+\zeta(w_2,w_1)\bigr).
\]
Then, the result follows by the harmonic product formula.
\end{proof}
The next theorem is a sum formula for general ribbons with two corners, which is of bounded type.
Although the explicit formula itself is rather complicated,
it is a direct consequence of \cref{eq:weighted_sum_ready_to_apply} and \cref{lem:ribbon_2_corner_prelim}.
\begin{thm}
\label{thm:ribbon_2_corner}
For $s_1,s_2\ge0$, $r_1,r_2>0$ and $w\ge s_1+s_2+r_1+r_2+2$, we have
\begin{equation}
\label{thm:ribbon_2_corner:result}
\begin{aligned}
S_w\begin{varray}
s_1, & s_2\\
r_1, & r_2
\end{varray}
&=\binom{w-2}{s_1+s_2}\zeta(w)\\
&\hspace{10mm}
+\sum_{\substack{w_1, w_2\ge2\\w_1+w_2=w}}
A_{w_1,w_2}^{s_1,s_2,r_1,r_2}\zeta(w_1)\zeta(w_2)
+
\sum_{\substack{w_1\ge1, w_2\ge2\\w_1+w_2=w}}
B_{w_1,w_2}^{s_1,s_2,r_1,r_2}\zeta(w_1,w_2),
\end{aligned}
\end{equation}
where the integers
$A_{w_1,w_2}^{s_1,s_2,r_1,r_2}$
and
$B_{w_1,w_2}^{s_1,s_2,r_1,r_2}$
are given by
\begin{align}
A_{w_1,w_2}^{s_1,s_2,r_1,r_2}
&\coloneqq
(-1)^{w_1}C_{w_1,w_2}^{s_1,s_2}\\
&\hspace{10mm}
-\mathbbm{1}_{w_1\le s_1+r_1\ \textup{or}\ w_2\le s_2+r_2-1}
\binom{w_1-1}{s_1}\binom{w_2-2}{s_2-1}\\
&\hspace{10mm}
+\mathbbm{1}_{w_1>s_1+r_1}(-1)^{s_1+r_1+w_1}
\binom{w_1-1}{s_1}
\binom{w_2-2}{s_1+s_2+r_1-w_1}\\
&\hspace{10mm}
+\mathbbm{1}_{r_2<w_2\le s_2+r_2-1}
(-1)^{s_2+r_2+w_2}
\binom{w_1-1}{s_1}\binom{w_2-2}{r_2-1},
\\
B_{w_1,w_2}^{s_1,s_2,r_1,r_2}
&\coloneqq
C_{w_1,w_2}^{s_1,s_2}
-
(-1)^{s_2}
\sum_{\substack{
t_1,t_2\ge0\\
t_1\ge w_1-(s_2+r_1)\ \textup{or}\ t_2\ge w_2-r_2\\
t_1+t_2=s_1}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}
\end{align}
with
\[
C_{w_1,w_2}^{s_1,s_2}
\coloneqq
(-1)^{s_2}
\sum_{\substack{0\le i\le s_1\\1\le j\le s_2\\i+j=w_1}}
\binom{w_1-1}{i}\binom{w_2-1}{s_1-i}
-(-1)^{s_1}
\binom{w_1-1}{s_1}
\binom{w_2-2}{s_1+s_2-w_1}.
\]
\end{thm}
\begin{remark}
By our convention on binomial coefficients,
$A_{w_1,w_2}^{s_1,s_2,r_1,r_2}=B_{w_1,w_2}^{s_1,s_2,r_1,r_2}=0$ unless
\[
w_1\le s_1+s_2+r_1 \quad \text{ or } \quad w_2\le \max(s_2+r_2-1,s_1+r_2)
\]
and so \cref{thm:ribbon_2_corner:result}
is a bounded type sum formula
after expanding the product $\zeta(w_1)\zeta(w_2)$ by the harmonic product formula.
\end{remark}
\begin{proof}[Proof of \cref{thm:ribbon_2_corner}]
Write the equation in Lemma~\ref{lem:ribbon_2_corner_prelim} as
\[
S_w\begin{varray}
s_1, & s_2\\
r_1, & r_2
\end{varray}
=
S_1+S_2.
\]
The sum $S_1$ can be rewritten as
\begin{align}
S_1
&=
\sum_{i=0}^{s_2-1}
(-1)^{s_2-i-1}
\sum_{\substack{w_1,w_2\ge2\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-1}{i}\zeta(w_1)\zeta(w_2)\\
&\hspace{5mm}-
\sum_{i=0}^{s_2-1}
(-1)^{s_2-i-1}
\sum_{\substack{2\le w_1\le s_1+s_2+r_1-i\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-1}{i}\zeta(w_1)\zeta(w_2)\\
&\hspace{5mm}-
\sum_{i=0}^{s_2-1}
(-1)^{s_2-i-1}
\sum_{\substack{2\le w_2\le r_2+i\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-1}{i}\zeta(w_1)\zeta(w_2)\\
&=S_{11}-S_{12}-S_{13},\quad\text{say}.
\end{align}
By the harmonic product formula, we have
$S_{11}=S_{111}+S_{112}+S_{113}$
with
\begin{align}
S_{111}
&\coloneqq
\sum_{i=0}^{s_2-1}
(-1)^{s_2-i-1}
\sum_{\substack{w_1,w_2\ge 2\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-1}{i}\zeta(w_1,w_2),\\
S_{112}
&\coloneqq
\sum_{i=0}^{s_2-1}
(-1)^{s_2-i-1}
\sum_{\substack{w_1,w_2\ge 2\\w_1+w_2=w}}
\binom{w_1-1}{i}\binom{w_2-1}{s_1}\zeta(w_1,w_2),\\
S_{113}
&\coloneqq
\biggl\{
\sum_{i=0}^{s_2-1}
(-1)^{s_2-i-1}
\sum_{\substack{w_1,w_2\ge 2\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-1}{i}
\biggr\}
\zeta(w).
\end{align}
For the sum $S_{111}$,
after supplementing the term with $w_1=1$,
by \cref{eq:weighted_sum_ready_to_apply}, we get
\begin{align}
S_{111}
&=
(-1)^{s_2}
\sum_{i=0}^{s_2-1}
\sum_{\substack{w_1,w_2\ge2\\w_1+w_2=w}}
(-1)^{w_1}
\binom{w_1-1}{i}\binom{w_2-1}{s_1+i+1-w_1}\zeta(w_1)\zeta(w_2)\\
&\hspace{5mm}
+
\sum_{i=0}^{s_2-1}(-1)^{s_1+s_2-i}
\sum_{\substack{w_1\ge 1, w_2\ge2\\ w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-1}{s_1+i+1-w_1}\zeta(w_1,w_2)\\
&\hspace{10mm}
+\mathbbm{1}_{s_2>0}(-1)^{s_2-1}\binom{w-2}{s_1}
\bigl(\zeta(w)+\zeta(1,w-1)\bigr)\\
&\hspace{15mm}
-\mathbbm{1}_{s_1=0}
\sum_{i=0}^{s_2-1}(-1)^{s_2-i-1}\binom{w-2}{i}\zeta(1,w-1),
\end{align}
where we should note that the sum $S_{111}$ is empty if $s_2=0$ and $\mathbbm{1}_{s_2>0}$
is inserted to cover such a degenerate case.
By swapping the order of summation
and changing variables via $i\leadsto w_1-i-1$, we have
\begin{align}
&(-1)^{s_2}
\sum_{i=0}^{s_2-1}
\sum_{\substack{w_1,w_2\ge2\\w_1+w_2=w}}
(-1)^{w_1}
\binom{w_1-1}{i}\binom{w_2-1}{s_1+i+1-w_1}\zeta(w_1)\zeta(w_2)\\
&=
(-1)^{s_2}
\sum_{\substack{w_1,w_2\ge2\\w_1+w_2=w}}
(-1)^{w_1}\zeta(w_1)\zeta(w_2)
\sum_{\substack{0\le i\le s_1\\1\le j\le s_2\\i+j=w_1}}
\binom{w_1-1}{i}\binom{w_2-1}{s_1-i}.
\end{align}
Since $\binom{w_2-1}{s_1+i+1-w_1}=0$ if $i<w_1-s_1-1$,
by changing variable via
$i\leadsto i+w_1-s_1-1$, we have
\begin{align}
&\sum_{i=0}^{s_2-1}(-1)^{s_1+s_2-i}
\sum_{\substack{w_1\ge 1, w_2\ge2\\ w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-1}{s_1+i+1-w_1}\zeta(w_1,w_2)\\
&=
-
(-1)^{s_1}
\sum_{\substack{w_1\ge1,w_2\ge2\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-2}{s_1+s_2-w_1}
\zeta(w_1,w_2).
\end{align}
Combining the above two formulas with
$\sum_{i=0}^{s_2-1}(-1)^{s_2-i-1}\binom{w-2}{i}
=\binom{w-3}{s_2-1}$,
we have
\begin{align}
S_{111}
&=
(-1)^{s_2}
\sum_{\substack{w_1,w_2\ge2\\w_1+w_2=w}}
(-1)^{w_1}\zeta(w_1)\zeta(w_2)
\sum_{\substack{0\le j\le s_1\\1\le i\le s_2\\i+j=w_1}}
\binom{w_1-1}{j}\binom{w_2-1}{s_1-j}\\
&\hspace{5mm}
-(-1)^{s_1}
\sum_{\substack{w_1\ge1,w_2\ge2\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-2}{s_1+s_2-w_1}
\zeta(w_1,w_2)\\
&\hspace{10mm}
-\mathbbm{1}_{s_2>0}(-1)^{s_2}
\binom{w-2}{s_1}\bigl(\zeta(w)+\zeta(1,w-1)\bigr)
-\mathbbm{1}_{s_1=0}
\binom{w-3}{s_2-1}\zeta(1,w-1).
\end{align}
Similarly, for the sum $S_{112}$,
we can apply \cref{eq:weighted_sum_ready_to_apply}
to get
\begin{align}
S_{112}
&=
\sum_{i=0}^{s_2-1}(-1)^{s_1+s_2-i}
\sum_{\substack{w_1,w_2\ge2\\w_1+w_2=w}}
(-1)^{w_1}
\binom{w_1-1}{s_1}\binom{w_2-1}{s_1+i+1-w_1}
\zeta(w_1)\zeta(w_2)\\
&\hspace{10mm}
+(-1)^{s_2}
\sum_{i=0}^{s_2-1}
\sum_{\substack{w_1\ge 1, w_2\ge2\\ w_1+w_2=w}}
\binom{w_1-1}{i}\binom{w_2-1}{s_1+i+1-w_1}\zeta(w_1,w_2)\\
&\hspace{30mm}
+\mathbbm{1}_{s_1=0}\sum_{i=0}^{s_2-1}(-1)^{s_2-i-1}\binom{w-2}{i}
\bigl(\zeta(w)+\zeta(1,w-1)\bigr)\\
&\hspace{40mm}
+\mathbbm{1}_{s_2>0}
(-1)^{s_2}\binom{w-2}{s_1}\zeta(1,w-1).
\end{align}
By a calculation similar to that of $S_{111}$, we have
\begin{align}
S_{112}
&=
-(-1)^{s_1}
\sum_{\substack{w_1,w_2\ge2\\w_1+w_2=w}}
(-1)^{w_1}
\binom{w_1-1}{s_1}\binom{w_2-2}{s_1+s_2-w_1}
\zeta(w_1)\zeta(w_2)\\
&\hspace{5mm}
+(-1)^{s_2}
\sum_{\substack{w_1\ge1,w_2\ge2\\w_1+w_2=w}}
\zeta(w_1,w_2)
\sum_{\substack{0\le i\le s_1\\1\le j\le s_2\\i+j=w_1}}
\binom{w_1-1}{i}\binom{w_2-1}{s_1-i}\\
&\hspace{15mm}
+\mathbbm{1}_{s_1=0}\binom{w-3}{s_2-1}\bigl(\zeta(w)+\zeta(1,w-1)\bigr)
+\mathbbm{1}_{s_2>0}(-1)^{s_2}\binom{w-2}{s_1}\zeta(1,w-1).
\end{align}
For the sum $S_{113}$,
by $\sum_{i=0}^{s_2-1}(-1)^{s_2-i-1}\binom{w_2-1}{i}=\binom{w_2-2}{s_2-1}$ we have
\begin{align}
S_{113}
&=\biggl\{\sum_{i=0}^{s_2-1}
(-1)^{s_2-i-1}
\sum_{\substack{w_1,w_2\ge2\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-1}{i}\biggr\}\zeta(w)\\
&=
\biggl\{
\sum_{\substack{w_1\ge1, w_2\ge2\\w_1+w_2=w}}
\binom{w_1-1}{s_1}
\binom{w_2-2}{s_2-1}
-\mathbbm{1}_{s_1=0}\binom{w-3}{s_2-1}
\biggr\}\zeta(w)\\
&=
\biggl\{
\mathbbm{1}_{s_2>0}\binom{w-2}{s_1+s_2}
-\mathbbm{1}_{s_1=0}\binom{w-3}{s_2-1}
\biggr\}\zeta(w).
\end{align}
We next consider the sums $S_{12}$ and $S_{13}$.
By swapping the summation, we have
\begin{align}
S_{12}
&=
\sum_{\substack{2\le w_1\le s_1+s_2+r_1\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\zeta(w_1)\zeta(w_2)
\sum_{i=0}^{\min(s_2-1,s_1+s_2+r_1-w_1)}
(-1)^{s_2-i-1}\binom{w_2-1}{i}\\
&=
\sum_{\substack{2\le w_1\le s_1+r_1\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\zeta(w_1)\zeta(w_2)
\sum_{i=0}^{s_2-1}
(-1)^{s_2-i-1}\binom{w_2-1}{i}\\
&\hspace{10mm}
+
\sum_{\substack{s_1+r_1<w_1\le s_1+s_2+r_1\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\zeta(w_1)\zeta(w_2)
\sum_{i=0}^{s_1+s_2+r_1-w_1}
(-1)^{s_2-i-1}\binom{w_2-1}{i}\\
&=
\sum_{\substack{2\le w_1\le s_1+r_1\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-2}{s_2-1}
\zeta(w_1)\zeta(w_2)\\
&\hspace{10mm}
-
\sum_{\substack{s_1+r_1<w_1\le s_1+s_2+r_1\\w_1+w_2=w}}
(-1)^{s_1+r_1+w_1}
\binom{w_1-1}{s_1}
\binom{w_2-2}{s_1+s_2+r_1-w_1}
\zeta(w_1)\zeta(w_2).
\end{align}
Similarly, we have
\begin{align}
S_{13}
&=
\sum_{\substack{2\le w_2\le s_2+r_2-1\\w_1+w_2=w}}
\binom{w_1-1}{s_1}\binom{w_2-2}{s_2-1}\zeta(w_1)\zeta(w_2)\\
&\hspace{10mm}
-\sum_{\substack{r_2<w_2\le s_2+r_2-1\\w_1+w_2=w}}
(-1)^{s_2+r_2+w_2}
\binom{w_1-1}{s_1}\binom{w_2-2}{r_2-1}\zeta(w_1)\zeta(w_2).
\end{align}
For the sum $S_{2}$, we dissect it as
$S_{2}=S_{21}-S_{22}-S_{23}$
by writing
\begin{align}
S_{21}
&\coloneqq
(-1)^{s_2}
\sum_{\substack{t_1,t_2\ge0\\t_1+t_2=s_1}}
\sum_{\substack{w_1\ge 1,w_2\ge 2\\w_1+w_2=w}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}\zeta(w_1,w_2),\\
S_{22}
&\coloneqq
(-1)^{s_2}
\sum_{\substack{t_1,t_2\ge0\\t_1+t_2=s_1}}
\sum_{\substack{1\le w_1\le s_2+r_1+t_1\\w_1+w_2=w}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}\zeta(w_1,w_2),\\
S_{23}
&\coloneqq
(-1)^{s_2}
\sum_{\substack{t_1,t_2\ge0\\t_1+t_2=s_1}}
\sum_{\substack{2\le w_2\le r_2+t_2\\w_1+w_2=w}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}\zeta(w_1,w_2).
\end{align}
For $S_{21}$, we can calculate the sum over binomial coefficients and get
\begin{align}
S_{21}
=
(-1)^{s_2}\binom{w-2}{s_1}
\sum_{\substack{w_1\ge 1,w_2\ge 2\\w_1+w_2=w}}
\zeta(w_1,w_2)
=
(-1)^{s_2}\binom{w-2}{s_1}\zeta(w)
\end{align}
by the usual sum formula.
For $S_{22},S_{23}$, note that
\begin{align}
S_{22}
&=
(-1)^{s_2}
\sum_{\substack{w_1\ge1,w_2\ge2\\w_1+w_2=w}}
\zeta(w_1,w_2)
\sum_{\substack{t_1,t_2\ge0\\t_1\ge w_1-(s_2+r_1)\\t_1+t_2=s_1}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2},
\\
S_{23}
&=
(-1)^{s_2}
\sum_{\substack{w_1\ge1,w_2\ge2\\w_1+w_2=w}}
\zeta(w_1,w_2)
\sum_{\substack{t_1,t_2\ge0\\t_2\ge w_2-r_2\\t_1+t_2=s_1}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}.
\end{align}
Since the ranges of $(t_1,t_2)$ for $S_{22}$ and $S_{23}$ are disjoint, we have
\begin{align}
S_{22}
+
S_{23}
=
(-1)^{s_2}
\sum_{\substack{w_1\ge1,w_2\ge2\\w_1+w_2=w}}
\zeta(w_1,w_2)
\sum_{\substack{t_1,t_2\ge0\\
t_1\ge w_1-(s_2+r_1)\ \text{or}\ t_2\ge w_2-r_2\\
t_1+t_2=s_1}}
\binom{w_1-1}{t_1}\binom{w_2-1}{t_2}.
\end{align}
By combining the above calculations, we obtain the theorem.
\end{proof}
As a special case of Theorem~\ref{thm:ribbon_2_corner},
we obtain the following sum formula of bounded type
for the hook shape. Note that the left hand side of the next corollary corresponds to the hook shape $(s,1^{r})$ when $s\ge2$.
\begin{cor}
\label{cor:hook_sum_formula}
For $s,r\ge1$ and $w\ge s+r+2$, we have
\begin{align}
S_w
\begin{varray}
0, & s-1\\
r, & 1
\end{varray}
&=\binom{w-2}{s-1}\zeta(w)
-\sum^{s-1}_{k=1}\binom{w-k-2}{s-k-1}\zeta(k,w-k)
+(-1)^{s}\sum^{s+r-1}_{k=s}\zeta(k,w-k)\\
&\ \ \ -\sum^{s-1}_{k=2}(-1)^k\binom{w-k-2}{s-k-1}\zeta(k)\zeta(w-k)
-\sum^{r}_{k=2}\binom{w-k-2}{s-2}\zeta(k)\zeta(w-k)\\
&\ \ \ +(-1)^{r}\sum^{s+r-1}_{k=r+1}(-1)^k\binom{w-k-2}{r+s-1-k}\zeta(k)\zeta(w-k).
\end{align}
\end{cor}
\begin{proof}
This follows from \cref{thm:ribbon_2_corner}
after some rearrangement of terms.
\end{proof}
\begin{example}
For the shape $\lambda = (3,1,1)$ we obtain for $w\ge 7$ the sum formula
\begin{align}
S_{w}\left(\,
{\footnotesize
\ytableausetup{centertableaux, boxsize=0.6em}
\begin{ytableau}
\, & \, &\, \\
\, \\
\, \\
\end{ytableau}}\,
\right) &= S_w
\begin{varray}
0, & 2\\
2, & 1
\end{varray}\\
&=\binom{w-2}{2}\zeta(w)-(w-3)\zeta(2)\zeta(w-2)-(w-5)\zeta(3)\zeta(w-3)+\zeta(4)\zeta(w-4)
\\
&\hspace{0.2cm}-(w-3)\zeta(1,w-1)-\zeta(2,w-2)-\zeta(3,w-3)-\zeta(4,w-4)
\end{align}
from \cref{cor:hook_sum_formula} by taking $s=3$ and $r=2$.
\end{example}
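The corollary can also be tested numerically. For $s=2$, $r=1$ and $w=5$, the hook $(2,1)$ admits the unique admissible filling $(k_{11},k_{12},k_{21})=(1,2,2)$, and the corollary predicts $\sum_{m_{11}\le m_{12},\,m_{11}<m_{21}}m_{11}^{-1}m_{12}^{-2}m_{21}^{-2}=3\zeta(5)-\zeta(1,4)+\zeta(2,3)-\zeta(2)\zeta(3)$. A Python sketch with truncated series (the truncation level is an ad hoc choice):

```python
import math

M = 100_000  # truncation level

def z1(k):
    # truncated single zeta value
    return sum(1.0 / n**k for n in range(1, M))

def z2(k1, k2):
    # truncated double zeta value, summed over 0 < m < n < M
    s = p = 0.0
    for n in range(1, M):
        s += p / n**k2
        p += 1.0 / n**k1
    return s

# right-hand side of the corollary for s = 2, r = 1, w = 5
rhs = 3 * z1(5) - z2(1, 4) + z2(2, 3) - (math.pi**2 / 6) * z1(3)

# left-hand side: the unique admissible filling (1, 2, 2) of the hook (2, 1)
tail = [0.0] * (M + 1)  # tail[a] = sum over a <= n < M of 1/n^2
for n in range(M - 1, 0, -1):
    tail[n] = tail[n + 1] + 1.0 / n**2
lhs = sum(tail[a] * tail[a + 1] / a for a in range(1, M))

assert abs(lhs - rhs) < 1e-3
```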
\section{Diagrams with one corner}
\newcommand{\mathfrak{H}}{\mathfrak{H}}
\label{sec:OneCorner}
In this section, we consider general shapes with one corner. Recall that in Section \ref{sec:weightedsum} we defined for an index ${\boldsymbol{k}}=(k_1,\dots,k_d)$ and $l\geq 0$
\begin{align}\label{eq:defQl}
Q_l({\boldsymbol{k}})&=
\sum_{\substack{{\boldsymbol{w}}=(w_1,\ldots,w_d):\text{ adm.}\\ \wt({\boldsymbol{w}})=\wt({\boldsymbol{k}})+l}}
\binom{w_1-1}{k_1-1}\cdots \binom{w_{d-1}-1}{k_{d-1}-1}\binom{w_{d}-2}{k_{d}-1}
\zeta({\boldsymbol{w}})\,.
\end{align}
We will show (Theorem \ref{thm:onecornerwithphi}) that any $S_{w}(\lambda/\mu)$, where $\lambda/\mu$ has one corner, can be expressed explicitly in terms of $Q_{w-|\lambda/\mu|}$. This will follow from a purely combinatorial argument, and we will see that the statement already holds for truncated Schur MZVs. For an integer $M\geq 1$ and an arbitrary, not necessarily admissible, Young tableau ${\boldsymbol{k}}$, these are defined by
\begin{align}\label{eq:deftruncschurmzv}
\zeta_M({\boldsymbol{k}}) = \sum_{\substack{(m_{i,j}) \in \SSYT(\lambda \slash \mu)\\m_{i,j}<M}} \prod_{(i,j) \in D(\lambda \slash \mu)} \frac{1}{m_{i,j}^{k_{i,j}}} \,.
\end{align}
In particular, for an integer $M\geq 1$ and an index ${\boldsymbol{k}} = (k_1,\dots,k_d)$ with $k_1,\dots,k_d\geq 1$ the truncated MZVs are given by
\begin{align}
\zeta_M(k_1,\dots,k_d) = \sum_{0 < m_1 < \dots < m_d < M} \frac{1}{m_1^{k_1}\dots m_d^{k_d}}\,.
\end{align}
To give the precise statement of the above-mentioned theorem, we need to introduce some algebraic setup following \cite{Hoffman97}.
Denote by $\mathfrak{H}^1=\mathbb{Q}\langle z_k \mid k\geq 1\rangle$ the non-commutative polynomial ring in the variables $z_k$ for $k\geq1$. A monic monomial in $\mathfrak{H}^1$ is called a word and the empty word will be denoted by ${\bf 1}$.
For $k\geq 1$ and $n\in \mathbb{Z}$, we define
\begin{align}
z_k^n = \left\{\begin{array}{cl}
\underbrace{z_k\cdots z_k}_{n}&\text{if $n>0$},\\[7mm]
\mathbf{1}&\text{if $n=0$},\\[2mm]
0&\text{if $n<0$}.
\end{array}\right.
\end{align}
There is a one-to-one correspondence between indices and words;
to each index ${\boldsymbol{k}}=(k_1,\ldots,k_d)$ corresponds the word
$z_{\boldsymbol{k}}\coloneqq z_{k_1}\cdots z_{k_d}$.
Thus we can extend various functions on indices to $\mathbb{Q}$-linear maps on $\mathfrak{H}^1$.
For example, we define $Q_l\colon\mathfrak{H}^1\to\mathbb{R}$ by setting $Q_l(z_{\boldsymbol{k}})=Q_l({\boldsymbol{k}})$ and
extending it linearly.
We define the stuffle product $\ast$ and the index shuffle product $\mathbin{\widetilde{\shuffle}}$ on $\mathfrak{H}^1$ as the $\mathbb{Q}$-bilinear products satisfying ${\bf 1} \ast w = w \ast {\bf 1} = w$ and ${\bf 1} \mathbin{\widetilde{\shuffle}} w = w \mathbin{\widetilde{\shuffle}} {\bf 1} = w$ for any word $w\in \mathfrak{H}^1$, and, for any $i,j \geq 1$ and words $w_1,w_2 \in \mathfrak{H}^1$,
\begin{align}
z_{i} w_1 \ast z_{j} w_2 &= z_i (w_1 \ast z_{j} w_2) + z_j (z_i w_1 \ast w_2) + z_{i+j} (w_1 \ast w_2)\,, \\
z_{i} w_1 \mathbin{\widetilde{\shuffle}} z_{j} w_2 &= z_i (w_1 \mathbin{\widetilde{\shuffle}} z_{j} w_2) + z_j (z_i w_1 \mathbin{\widetilde{\shuffle}} w_2)\,.
\end{align}
By \cite[Theorem 2.1]{Hoffman97} we obtain a commutative $\mathbb{Q}$-algebra $\mathfrak{H}^{1}_{\ast}$.
\begin{lemma}\label{lem:SSD}
Let $D_1,\ldots,D_r$ be non-empty subsets of $D(\lambda/\mu)$ which give a disjoint decomposition
of $D(\lambda/\mu)$, i.e., $D(\lambda/\mu)=D_1\sqcup\dots\sqcup D_r$.
Then the following conditions are equivalent:
\begin{enumerate}[label=\textup{(\roman*)}]
\item If we set $t_{ij}=a$ for $(i,j)\in D_a$ with $a=1,\ldots,r$,
then $(t_{ij})$ is a semi-standard Young tableau of shape $\lambda/\mu$.
\item There exists a semi-standard Young tableau $(m_{ij})$ of shape $\lambda/\mu$ such that
\[m_{ij}<m_{kl}\iff a<b\]
holds for any $(i,j)\in D_a$ and $(k,l)\in D_b$.
\end{enumerate}
\end{lemma}
\begin{proof}
Obviously, (i) implies (ii). Conversely, assume that a semi-standard tableau $(m_{ij})$ satisfies
the condition in (ii). If $(t_{ij})$ is defined as in (i), one has
\[m_{ij}<m_{kl}\iff a<b\iff t_{ij}<t_{kl}\]
for any $(i,j)\in D_a$ and $(k,l)\in D_b$, which shows that $(t_{ij})$ is also semi-standard.
Thus (ii) implies (i).
\end{proof}
We call a tuple $(D_1,\ldots,D_r)$ of non-empty subsets of $D(\lambda/\mu)$
satisfying the conditions of Lemma \ref{lem:SSD} a \emph{semi-standard decomposition}.
Let $\mathrm{SSD}(\lambda/\mu)$ denote the set of all semi-standard decompositions of $D(\lambda/\mu)$.
Then we define an element $\varphi(\lambda/\mu)$ of $\mathfrak{H}^1$ by
\[\varphi(\lambda/\mu)\coloneqq \sum_{(D_1,\ldots,D_r)\in\mathrm{SSD}(\lambda/\mu)}
z_{\abs{D_1}}\cdots z_{\abs{D_r}}, \]
where $\abs{D_i}$ denotes the number of elements of $D_i$.
This element is related to the sum formula as follows.
\begin{thm}\label{thm:onecornerwithphi}
When $\lambda/\mu$ has only one corner, we have for $w>\abs{\lambda/\mu}$
\[S_w(\lambda/\mu)=Q_{w-\abs{\lambda/\mu}}(\varphi(\lambda/\mu)). \]
\end{thm}
\begin{proof}
For any admissible Young tableau ${\boldsymbol{k}}=(k_{ij})$ of shape $\lambda/\mu$, we see that
\begin{equation}\label{eq:zeta(bk) via SSD}
\zeta({\boldsymbol{k}})=\sum_{(D_1,\ldots,D_r)\in\mathrm{SSD}(\lambda/\mu)}
\zeta\Biggl(\sum_{(i,j)\in D_1}k_{ij},\ldots,\sum_{(i,j)\in D_r}k_{ij}\Biggr)
\end{equation}
by classifying the semi-standard tableaux $(m_{ij})$ of shape $\lambda/\mu$
according to the semi-standard decompositions $(D_1,\ldots,D_r)$ determined as in Lemma \ref{lem:SSD} (ii).
Then, from the definition of $S_w(\lambda/\mu)$, we have
\begin{align}
&S_w(\lambda/\mu)
=\sum_{\substack{k_{ij}\ge 1,\,(i,j)\in D(\lambda/\mu)\\ k_{ij}\ge 2,\,(i,j)\in C(\lambda/\mu)\\
\sum_{(i,j)}k_{ij}=w}}
\sum_{(D_1,\ldots,D_r)\in\mathrm{SSD}(\lambda/\mu)}
\zeta\Biggl(\sum_{(i,j)\in D_1}k_{ij},\ldots,\sum_{(i,j)\in D_r}k_{ij}\Biggr)\\
&=\sum_{(D_1,\ldots,D_r)\in\mathrm{SSD}(\lambda/\mu)}
\sum_{\substack{w_1,\ldots,w_{r-1}\ge 1\\ w_r\ge 2\\ w_1+\cdots+w_r=w}}
\binom{w_1-1}{\abs{D_1}-1}\cdots\binom{w_{r-1}-1}{\abs{D_{r-1}}-1}\binom{w_r-2}{\abs{D_r}-1}
\zeta(w_1,\ldots,w_r).
\end{align}
Here note that, for any $(D_1,\ldots,D_r)\in\mathrm{SSD}(\lambda/\mu)$, the unique corner belongs to $D_r$.
The last expression above is equal to
\[\sum_{(D_1,\ldots,D_r)\in\mathrm{SSD}(\lambda/\mu)}
Q_{w-\abs{\lambda/\mu}}(\abs{D_1},\ldots,\abs{D_r})
=Q_{w-\abs{\lambda/\mu}}(\varphi(\lambda/\mu)).\]
Thus the proof is complete.
\end{proof}
By Remark \ref{rem:plisbounded} we see that Theorem \ref{thm:onecornerwithphi} gives a sum formula of bounded type for shapes with one corner. In order to evaluate the sum $S_w(\lambda/\mu)$ in the one corner case,
one therefore needs to find an expression of $\varphi(\lambda/\mu)$.
For this purpose, the following expression is useful:
\begin{prop}\label{prop:jacobitrudiphi}
For any skew shape $\lambda/\mu$, let $\lambda'=(\lambda'_1,\ldots,\lambda'_s)$
and $\mu'=(\mu'_1,\ldots,\mu'_s)$ be the conjugates of $\lambda$ and $\mu$, respectively.
Then we have the identity
\[\varphi(\lambda/\mu)=\det\nolimits_*\bigl[z_1^{\lambda'_i-\mu'_j-i+j}\bigr]_{1\le i,j\le s}, \]
where $\det\nolimits_*$ denotes the determinant performed in the stuffle algebra $\mathfrak{H}^1_*$.
\end{prop}
\begin{proof}
For any integer $M>0$, we have
\[\zeta_M(\varphi(\lambda/\mu))
=\sum_{(D_1,\ldots,D_r)\in\mathrm{SSD}(\lambda/\mu)}\zeta_M(\abs{D_1},\ldots,\abs{D_r})
=\zeta_M(\mathbf{O}_{\lambda/\mu}), \]
where $\mathbf{O}_{\lambda/\mu}$ denotes the tableau of shape $\lambda/\mu$ all entries of which are $1$.
Indeed, the first equality is obvious from the definition of $\varphi(\lambda/\mu)$,
and the second follows in the same way as \eqref{eq:zeta(bk) via SSD}.
On the other hand, by the Jacobi-Trudi type formula for truncated Schur MZVs
(\cite[Theorem 1.1]{NakasujiPhuksuwanYamasaki2018}, \cite[Theorem 4.7]{Bachmann2018}), we have
\[\zeta_M(\mathbf{O}_{\lambda/\mu})
=\det\bigl[\zeta_M(z_1^{\lambda'_i-\mu'_j-i+j})\bigr]_{1\le i,j\le s}
=\zeta_M\Bigl(\det\nolimits_*\bigl[z_1^{\lambda'_i-\mu'_j-i+j}\bigr]_{1\le i,j\le s}\Bigr).\]
Since the map $\mathfrak{H}^1 \rightarrow \prod_{M\geq 1} \mathbb{Q}$ given by $w \mapsto (\zeta_M(w))_{M\geq 1}$ is
injective (\cite[Theorem 3.1]{Yamamoto13}), we obtain the desired identity in $\mathfrak{H}^1$.
\end{proof}
\ytableausetup{boxsize=5pt}
In some cases, one can compute $\varphi(\lambda/\mu)$ explicitly
by using Proposition \ref{prop:jacobitrudiphi}.
\begin{thm}
For $n\ge 1$ and $k\ge 0$, it holds that
\begin{align}
\varphi
\bigl(\,(2^{n+k})/(1^k)\,\bigr)
&=
\sum^{n}_{l=0}
\frac{k+1}{l+k+1}\binom{2l+k}{l} z_2^{n-l}\mathbin{\widetilde{\shuffle}} z_1^{2l+k}.
\end{align}
\end{thm}
\begin{proof}
By \cref{prop:jacobitrudiphi}, we have
\begin{align}
\varphi\bigl(\,(2^{n+k})/(1^k)\,\bigr)
=
\begin{vmatrix}
z^n_1 & z^{n+k+1}_1 \\
z^{n-1}_1 & z^{n+k}_1
\end{vmatrix}
=z^{n+k}_1 \ast z^{n}_1 - z^{n+k+1}_1 \ast z^{n-1}_1.
\end{align}
By \cite[Lemma 1]{Chen15} we get for $m\ge n\ge 1$
\begin{align}
z^m_1\ast z^n_1
&=\sum^{n}_{l=0}\binom{m+n-2l}{m-l}\sum_{{\boldsymbol{k}}\in G^{m+n-2l}_{l}}z_{\boldsymbol{k}}
=\sum^{n}_{l=0}\binom{m+n-2l}{n-l}z_2^{l} \mathbin{\widetilde{\shuffle}} z_1^{m+n-2l},
\end{align}
where $G^a_b$ denotes the set of all indices containing the entry $1$ exactly $a$ times and the entry $2$ exactly $b$ times. This gives
\begin{align}
&\varphi\bigl(\,(2^{n+k})/(1^k)\,\bigr)\\
&=z^{n+k}_1 \ast z^{n}_1 - z^{n+k+1}_1 \ast z^{n-1}_1\\
&=
\sum^{n}_{l=0}\binom{2n+k-2l}{n-l}
z_2^{l}\mathbin{\widetilde{\shuffle}} z_1^{2n+k-2l}
-
\sum^{n-1}_{l=0}\binom{2n+k-2l}{n-l-1}
z_2^{l}\mathbin{\widetilde{\shuffle}} z_1^{2n+k-2l}\\
&=\sum^{n}_{l=0}
\left(\binom{2n+k-2l}{n-l}-\binom{2n+k-2l}{n-l-1}\right)
z_2^{l}\mathbin{\widetilde{\shuffle}} z_1^{2n+k-2l}\\
&=\sum^{n}_{l=0}
\left(\binom{2l+k}{l}-\binom{2l+k}{l-1}\right)
z_2^{n-l}\mathbin{\widetilde{\shuffle}} z_1^{2l+k}\\
&=\sum^{n}_{l=0}
\frac{k+1}{l+k+1}\binom{2l+k}{l} z_2^{n-l}\mathbin{\widetilde{\shuffle}} z_1^{2l+k}.\qedhere
\end{align}
\end{proof}
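The identity in the theorem can also be verified by direct computation for small $n$ and $k$. The sketch below (in Python, with the stuffle product implemented recursively on index words; the function names are ours) compares the determinant expression $z_1^{n+k}\ast z_1^{n}-z_1^{n+k+1}\ast z_1^{n-1}$ with the closed form above, where $z_2^{n-l}\mathbin{\widetilde{\shuffle}} z_1^{2l+k}$ is read as the sum over all words with $n-l$ twos and $2l+k$ ones.

```python
from itertools import combinations
from math import comb

def stuffle(u, v):
    """Stuffle (harmonic) product of index words u, v (tuples),
    returned as a dict mapping words to integer coefficients."""
    if not u:
        return {v: 1}
    if not v:
        return {u: 1}
    out = {}
    for head, (uu, vv) in [((u[0],), (u[1:], v)),
                           ((v[0],), (u, v[1:])),
                           ((u[0] + v[0],), (u[1:], v[1:]))]:
        for w, c in stuffle(uu, vv).items():
            out[head + w] = out.get(head + w, 0) + c
    return out

def phi_det(n, k):
    # z_1^{n+k} * z_1^n - z_1^{n+k+1} * z_1^{n-1}
    a = stuffle((1,) * (n + k), (1,) * n)
    for w, c in stuffle((1,) * (n + k + 1), (1,) * (n - 1)).items():
        a[w] = a.get(w, 0) - c
    return {w: c for w, c in a.items() if c}

def phi_formula(n, k):
    # sum_l (C(2l+k,l) - C(2l+k,l-1)) * (words with n-l twos and 2l+k ones)
    out = {}
    for l in range(n + 1):
        coef = comb(2*l + k, l) - (comb(2*l + k, l - 1) if l > 0 else 0)
        length = (n - l) + (2*l + k)
        for pos in combinations(range(length), n - l):
            w = tuple(2 if i in pos else 1 for i in range(length))
            out[w] = out.get(w, 0) + coef
    return {w: c for w, c in out.items() if c}
```

For instance, for $n=1$, $k=0$ both sides give $z_1z_1+z_2$.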
For some shapes $\lambda/\mu$, the element $\varphi(\lambda/\mu)$ contains sums over all indices of a fixed weight and depth. The corresponding sums of $Q$ applied to these indices can be evaluated by using the following lemma.
\begin{lemma}
\label{lem:binomprodsum}
For $k\ge d\ge1, w\ge d+1$ and an index ${\boldsymbol{n}}=(n_1,\dots,n_d)$, we have
\begin{align}
\sum_{\substack{{\boldsymbol{k}}=(k_1,\dots,k_d)\\\wt({\boldsymbol{k}})=k}}P_{w-k}({\boldsymbol{n}};{\boldsymbol{k}})
=\binom{w-\wt({\boldsymbol{n}})}{k-d}\zeta(w).
\end{align}
In particular,
\[
\sum_{\substack{{\boldsymbol{k}}=(k_1,\dots,k_d)\\\wt({\boldsymbol{k}})=k}}P_{w-k}({\boldsymbol{k}})
=\binom{w-d}{k-d}\zeta(w), \qquad
\sum_{\substack{{\boldsymbol{k}}=(k_1,\dots,k_d)\\\wt({\boldsymbol{k}})=k}}Q_{w-k}({\boldsymbol{k}})
=\binom{w-d-1}{k-d}\zeta(w).
\]
\end{lemma}
\begin{proof}
This follows directly by using the Chu-Vandermonde identity and the usual sum formula for MZVs.
\end{proof}
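Concretely, after exchanging the order of summation and applying the usual sum formula $\sum\zeta(w_1,\ldots,w_d)=\zeta(w)$, the $Q$-case of the lemma reduces to the purely binomial identity
\[\sum_{\substack{k_1+\cdots+k_d=k\\ k_i\ge 1}}\binom{w_1-1}{k_1-1}\cdots\binom{w_{d-1}-1}{k_{d-1}-1}\binom{w_d-2}{k_d-1}=\binom{w-d-1}{k-d},\]
an instance of Chu-Vandermonde. A brute-force numerical check of this identity (a sketch in our notation):

```python
from itertools import product
from math import comb

def lhs(ws, k):
    """Sum of C(w_1-1,k_1-1)...C(w_{d-1}-1,k_{d-1}-1)*C(w_d-2,k_d-1)
    over all compositions (k_1,...,k_d) of k into d positive parts."""
    d = len(ws)
    total = 0
    for ks in product(range(1, k + 1), repeat=d):
        if sum(ks) != k:
            continue
        term = 1
        for i, (w, kk) in enumerate(zip(ws, ks)):
            term *= comb(w - 2 if i == d - 1 else w - 1, kk - 1)
        total += term
    return total
```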
\begin{remark} With the same combinatorial argument as in the proof of Theorem \ref{thm:s00} one can show that for $s\ge 0$ and $r\ge 1$, we have
\[
\varphi\big(((s+1)^{r})/(s^{r-1})\big)
=\sum^{s}_{l=0}\binom{r+l-1}{l}
\sum_{\substack{{\boldsymbol{k}}=(k_1,\ldots,k_{r+l})\\ \wt({\boldsymbol{k}})=r+s}}z_{\boldsymbol{k}}.
\]
Using Theorem \ref{thm:onecornerwithphi} and Lemma~\ref{lem:binomprodsum} this gives another way of proving the anti-hook sum formula in Theorem \ref{thm:anti-hook}
\begin{align}
S_{w}\big(((s+1)^{r})/(s^{r-1})\big)&=Q_{w-(r+s)}\left(\varphi\big(((s+1)^{r})/(s^{r-1})\big)\right)\\
&=\sum^{s}_{l=0}\binom{r+l-1}{l}
\sum_{\substack{{\boldsymbol{k}}=(k_1,\ldots,k_{r+l})\\ \wt({\boldsymbol{k}})=r+s}}Q_{w-(r+s)}({\boldsymbol{k}})\\
&=\sum^{s}_{l=0}\binom{r+l-1}{l}
\binom{w-r-l-1}{s-l}\zeta(w)\\
&=\binom{w-1}{s}\zeta(w)\,.
\end{align}
\end{remark}
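The final step in the remark rests on the identity $\sum_{l=0}^{s}\binom{r+l-1}{l}\binom{w-r-l-1}{s-l}=\binom{w-1}{s}$, which is again of Vandermonde type. A quick numerical check (sketch, with names of our choosing):

```python
from math import comb

def anti_hook_sum(r, s, w):
    # sum_{l=0}^{s} C(r+l-1, l) * C(w-r-l-1, s-l), valid for w >= r+s+1
    return sum(comb(r + l - 1, l) * comb(w - r - l - 1, s - l)
               for l in range(s + 1))
```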
We can summarize the general strategy to give a bounded expression of $S_{w}(\lambda/\mu)$ for the case when $\lambda/\mu$ has one corner as follows:
\begin{enumerate}[label=\textup{(\roman*)}]
\item Give an expression for $\varphi(\lambda/\mu)$, by evaluating the determinant in Proposition \ref{prop:jacobitrudiphi} by the stuffle product. Then use Theorem \ref{thm:onecornerwithphi} to get $S_{w}(\lambda/\mu)=Q_{w-|\lambda/\mu|}(\varphi(\lambda/\mu))$.
\item If sums of $Q_l$ over all indices of a fixed weight and depth appear, use Lemma \ref{lem:binomprodsum} to write them in terms of Riemann zeta values.
\item For other terms involving $Q$ write them in terms of $P$, by using \eqref{for:PtoQ}, i.e.
\begin{equation}
Q_l(k_1,\dots,k_d)
=\sum^{k_d-1}_{j=0}(-1)^jP_{l+j}(k_1,\ldots,k_{d-1},k_d-j).
\end{equation}
Then use Proposition \ref{prop:bwsumformula} to get recursive (bounded) expressions of $P_{l+j}$ in terms of MZVs. For depth $2$ and $3$ explicit expressions are given by Corollary \ref{cor:pldep2} and \ref{cor:pldep3}.
\end{enumerate}
\begin{example} Using above strategy we get formula \eqref{eq:22square} for $ S_w\left(\,{\footnotesize \ytableausetup{centertableaux, boxsize=0.8em}
\begin{ytableau}
\, & \, \\
\, & \,
\end{ytableau}}\,\right)$ in the introduction and the following examples:
\begin{enumerate}[label=\textup{(\roman*)}]
\item For $w\ge 6$ we have
\begin{align}
S_w\left(\
\begin{ytableau}
\none & \, & \, \\
\,& \, & \,
\end{ytableau}\ \right)
&=\binom{w-2}{2}\zeta(2) \zeta(w-2)-\frac{5}{4}\zeta(4)\zeta(w-4)+\zeta(2) \zeta(2,w-4)
\\
&\ \ \ -\zeta(2) \zeta(1,w-3)- \binom{w-2}{2}\zeta(1,w-1)
+\binom{w-3}{2} \zeta(2,w-2)\\
&\ \ \ +(w-3) \zeta(3,w-3)+(w-3) \zeta(1,1,w-2)
-(w-5) \zeta(1,2,w-3)\\
&\ \ \ -2 \zeta(1,3,w-4)+\zeta(2,1,w-3)-\zeta(2,2,w-4).\\
\end{align}
\item For $w\ge 6$ we have
\begin{align}
S_w&\left(\
\begin{ytableau}
\none & \, \\
\, & \, \\
\, & \,
\end{ytableau}\ \right)
=(w-2) \zeta(2) \zeta(w-2)+(w-5)\zeta(3)\zeta(w-3)-\frac{5}{4}\zeta(4)\zeta(w-4)\\
&\ \ \ -\zeta(2) \zeta(1,w-3)+\zeta(2) \zeta(2,w-4)+(2-w)\zeta(1,w-1)+(w-4) \zeta(2,w-2)\\
&\ \ \ +2 \zeta(3,w-3)+(w-3) \zeta(1,1,w-2)-(w-5) \zeta(1,2,w-3)\\
&\ \ \ -2 \zeta(1,3,w-4)+\zeta(2,1,w-3)-\zeta(2,2,w-4).
\end{align}
\end{enumerate}
Comparing (i) and (ii) with \eqref{eq:22square}, we see that for all $w \geq 1$ we have
\begin{align}\label{eq:examplerelationofs}
2 S_w\left(\
\begin{ytableau}
\none & \, & \, \\
\,& \, & \,
\end{ytableau}\ \right) - 2S_w\left(\
\begin{ytableau}
\none & \, \\
\, & \, \\
\, & \,
\end{ytableau}\ \right)
&= (w-5) S_w\left(\,{\footnotesize
\begin{ytableau}
\, & \, \\
\, & \,
\end{ytableau}}\,\right)\,.
\end{align}
\end{example}
We will now show that the relation \eqref{eq:examplerelationofs} among $S_w$ for different shapes is a special case of a more general family of relations. For this we first notice that any skew Young diagram $D(\lambda/\mu)$ with one corner can be written as
\[\lambda=(n^m)=(\underbrace{n,\ldots,n}_m),\quad \mu=(\mu_1,\ldots,\mu_m)\]
with $\mu_1=n$, $\mu_m>0$ (for example, the one-box diagram is represented as $(2,2)/(2,1)$).
Then we write $I$ for the set of $i=1,\ldots,m$ such that
\[\mu[i]\coloneqq(\mu_1,\ldots,\mu_i-1,\ldots,\mu_m)\]
is non-increasing.
Then, for $i\in I$, $D(\lambda/\mu[i])=D(\lambda/\mu)\cup\{(i,\mu_i)\}$
is still a skew Young diagram with one corner. As a generalization of \eqref{eq:examplerelationofs} we obtain the following.
\begin{thm}\label{thm:S_w rel} If $\lambda/\mu$ is a skew Young diagram with one corner then for all $w\geq 1$
\begin{equation}\label{eq:S_w rel}
\sum_{i\in I}\bigl((i-\mu_i)-(m-n)\bigr)S_w(\lambda/\mu[i])
=(w-\lvert\lambda/\mu\rvert-1)S_w(\lambda/\mu).
\end{equation}
\end{thm}
To prove this, we need the following Lemma.
Define a linear map $\partial\colon\mathfrak{H}^1\to\mathfrak{H}^1$ by
\[\partial(z_{k_1}\cdots z_{k_d})\coloneqq
\sum_{a=1}^d k_a z_{k_1}\cdots z_{k_a+1}\cdots z_{k_d}.\]
In particular, $\partial(1)=0$.
\begin{lemma}\label{lem:partial}\leavevmode
\begin{enumerate}[label=\textup{(\roman*)}]
\item\label{lem:partial:derivation}
$\partial$ is a derivation with respect to the stuffle product.
\item\label{lem:partial:power}
For any $N\in\mathbb{Z}$, we have
\[\partial(z_1^N)=z_1*z_1^N-(N+1)z_1^{N+1}.\]
\item We have
\[(l-1)Q_l(v)=Q_{l-1}(\partial(v))\]
for any $v\in\mathfrak{H}^1$.
\end{enumerate}
\end{lemma}
\begin{proof}
\cref{lem:partial:derivation} is verified, e.g., by induction on the depth.
It is also easy to show (ii) from the definition.
Finally, the identity in (iii) follows from the definition of $Q_l$ and the identity
\begin{align*}
&(w-k-1)\binom{w_1-1}{k_1-1}\cdots\binom{w_d-2}{k_d-1}\\
&=\bigl((w_1-k_1)+\cdots+(w_d-1-k_d)\bigr)
\binom{w_1-1}{k_1-1}\cdots\binom{w_d-2}{k_d-1}\\
&=\sum_{a=1}^d
k_a\binom{w_1-1}{k_1-1}\cdots\binom{w_a-1}{k_a}\cdots\binom{w_d-2}{k_d-1},
\end{align*}
where $w=w_1+\cdots+w_d$ and $k=k_1+\cdots+k_d$.
\end{proof}
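The displayed identity is a purely binomial statement and can be checked numerically; the sketch below (our notation, with $w_d\ge 2$ so that all upper indices are non-negative) compares both sides for given tuples $(w_1,\ldots,w_d)$ and $(k_1,\ldots,k_d)$:

```python
from math import comb

def check_partial_identity(ws, ks):
    """(w-k-1) * prod_a B_a == sum_a k_a * (B with a-th lower index raised),
    where B_a = C(w_a-1, k_a-1) for a < d and B_d = C(w_d-2, k_d-1)."""
    d = len(ws)
    w, k = sum(ws), sum(ks)
    tops = [ws[a] - 1 if a < d - 1 else ws[a] - 2 for a in range(d)]

    def prod(lowers):
        out = 1
        for t, l in zip(tops, lowers):
            out *= comb(t, l) if 0 <= l <= t else 0
        return out

    lhs = (w - k - 1) * prod([kk - 1 for kk in ks])
    rhs = sum(ks[a] * prod([ks[b] - 1 + (b == a) for b in range(d)])
              for a in range(d))
    return lhs == rhs
```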
\begin{proof}[Proof of Theorem \ref{thm:S_w rel}]
By \cref{thm:onecornerwithphi} and \cref{prop:jacobitrudiphi}, we have
\[S_w(\lambda/\mu)=Q_{w-\abs{\lambda/\mu}}
\Bigl(\det\nolimits_*\bigl[z_1^{m-\mu'_j-i+j}\bigr]_{1\le i,j\le n}\Bigr). \]
Here $(\mu'_1,\ldots,\mu'_n)$ denotes the transpose of the partition $\mu=(\mu_1,\ldots,\mu_m)$.
Note that $\mu'_1=m$ and $\mu'_n>0$.
By \cref{lem:partial} (iii) and (i), we see that
\begin{align}\label{eq:S_w rel RHS1}
(w-\lvert\lambda/\mu\rvert-1)S_w(\lambda/\mu)
&=(w-\lvert\lambda/\mu\rvert-1)Q_{w-\abs{\lambda/\mu}}
\Bigl(\det\nolimits_*\bigl[z_1^{m-\mu'_j-i+j}\bigr]_{1\le i,j\le n}\Bigr)\\
&=Q_{w-\abs{\lambda/\mu}-1}
\Bigl(\partial\det\nolimits_*\bigl[z_1^{m-\mu'_j-i+j}\bigr]_{1\le i,j\le n}\Bigr)\\
&=\sum_{k=1}^n Q_{w-\abs{\lambda/\mu}-1}
\Bigl(\det\nolimits_*\bigl[\partial^{\delta_{jk}}(z_1^{m-\mu'_j-i+j})\bigr]_{1\le i,j\le n}\Bigr).
\end{align}
Here $\partial^{\delta_{jk}}$ means the operator $\partial$ if $j=k$ and the identity operator
otherwise. Moreover, since the identity
\begin{align*}
\partial (z_1^{m-\mu'_k-i+k})
&=z_1*z_1^{m-\mu'_k-i+k}-(m-\mu'_k-i+k+1)z_1^{m-\mu'_k-i+k+1}\\
&=\bigl((\mu'_k-k)-(m-n)\bigr)z_1^{m-(\mu'_k-1)-i+k}\\
&\qquad +z_1*z_1^{m-\mu'_k-i+k}-(n+1-i)z_1^{m-(\mu'_k-1)-i+k}
\end{align*}
holds by \cref{lem:partial} (ii), we have
\begin{align}\label{eq:S_w rel RHS2}
&\det\nolimits_*\bigl[\partial^{\delta_{jk}}(z_1^{m-\mu'_j-i+j})\bigr]\\
&=\bigl((\mu'_k-k)-(m-n)\bigr)\det\nolimits_*\bigl[z_1^{m-(\mu'_j-\delta_{jk})-i+j}\bigr]\\
&\qquad+z_1*\det\nolimits_*\bigl[z_1^{m-\mu'_j-i+j}\bigr]
-\det\nolimits_*\bigl[(n+1-i)^{\delta_{jk}}z_1^{m-(\mu'_j-\delta_{jk})-i+j}\bigr].
\end{align}
On the other hand, the left-hand side of \eqref{eq:S_w rel} is equal to
\begin{equation}\label{eq:S_w rel LHS}
\sum_{k=1}^n\bigl((\mu'_k-k)-(m-n)\bigr)
Q_{w-\abs{\lambda/\mu}-1}
\Bigl(\det\nolimits_*\bigl[z_1^{m-(\mu'_j-\delta_{jk})-i+j}\bigr]\Bigr).
\end{equation}
Here, a priori, $k$ runs only over the indices such that $\mu'_k>\mu'_{k+1}$.
However, if $\mu'_k=\mu'_{k+1}$, the $k$-th and $(k+1)$-st columns of
the matrix $\bigl(z_1^{m-(\mu'_j-\delta_{jk})-i+j}\bigr)_{i,j}$ are equal,
so the determinant is zero.
Comparing \eqref{eq:S_w rel RHS1}, \eqref{eq:S_w rel RHS2} and \eqref{eq:S_w rel LHS},
it suffices to prove the equality
\begin{equation}\label{eq:S_w rel reduced}
nz_1*\det\nolimits_*\bigl[z_1^{m-\mu'_j-i+j}\bigr]
\overset{?}{=}
\sum_{k=1}^n\det\nolimits_*\bigl[(n+1-i)^{\delta_{jk}}z_1^{m-(\mu'_j-\delta_{jk})-i+j}\bigr].
\end{equation}
Let us compute the right-hand side by the cofactor expansion with respect to the $k$-th column.
\begin{align*}
&\sum_{k=1}^n\det\nolimits_*\bigl[(n+1-i)^{\delta_{jk}}z_1^{m-(\mu'_j-\delta_{jk})-i+j}\bigr]\\
&=\sum_{k=1}^n\sum_{l=1}^n(-1)^{k+l}(n+1-l)
z_1^{m-(\mu'_k-1)-l+k}*\det\nolimits_*\bigl[z_1^{m-\mu'_j-i+j}\bigr]_{i\ne l,j\ne k}\\
&=\sum_{l=1}^n(n+1-l)\sum_{k=1}^n(-1)^{k+l}
z_1^{m-(\mu'_k-1)-l+k}*\det\nolimits_*\bigl[z_1^{m-\mu'_j-i+j}\bigr]_{i\ne l,j\ne k}.
\end{align*}
Then, by the cofactor expansion with respect to the $l$-th row, we have
\[\sum_{k=1}^n(-1)^{k+l}
z_1^{m-(\mu'_k-1)-l+k}*\det\nolimits_*\bigl[z_1^{m-\mu'_j-i+j}\bigr]_{i\ne l,j\ne k}
=\det\nolimits_*\bigl[z_1^{m-\mu'_j-(i-\delta_{il})+j}\bigr].\]
This is zero for $l=2,\ldots,n$ since the $l$-th and $(l-1)$-st rows are equal.
Thus we have shown that the right-hand side of \eqref{eq:S_w rel reduced} is
$n\det\nolimits_*\bigl[z_1^{m+\delta_{i1}-\mu'_j-i+j}\bigr]$,
and now it is enough to prove
\begin{equation}\label{eq:S_w rel last}
z_1*\det\nolimits_*\bigl[z_1^{m-\mu'_j-i+j}\bigr]
\overset{?}{=}
\det\nolimits_*\bigl[z_1^{m+\delta_{i1}-\mu'_j-i+j}\bigr].
\end{equation}
But this is obvious since the two matrices here are of the form
\[\begin{pmatrix}
1 & * \\
\textbf{0} & Z
\end{pmatrix}
\quad \text{and} \quad
\begin{pmatrix}
z_1 & * \\
\textbf{0} & Z
\end{pmatrix},\]
respectively, with the common $(n-1)\times(n-1)$ matrix
$Z=\bigl[z_1^{m-\mu'_j-i+j}\bigr]_{2\le i,j\le n}$.
Hence the proof is complete.
\end{proof}
\section{Introduction}
Inflation has become the most widely accepted paradigm describing the physics of the very early universe. Besides solving most of the shortcomings of the hot big-bang scenario, such as the horizon, the flatness, and the
monopole problems \cite{R1,R106,R103,R104,R105,Linde:1983gd}, inflation also generates a causal mechanism to explain the large-scale structure (LSS) of the universe \cite{R2,R202,R203,R204,R205}
and the origin of the anisotropies observed in the cosmic microwave background (CMB) radiation \cite{astro,astro2,astro202,Hinshaw:2012aka,Ade:2013zuv,Ade:2013uln,Ade:2015xua,Ade:2015lrj}, since primordial density perturbations may be sourced from quantum fluctuations of the inflaton scalar field during the inflationary expansion.
Several representative inflationary models have been studied within the framework of the so-called slow-roll approximation \cite{Lyth:2009zz},
where the kinetic term of the inflaton field
is much smaller than the potential energy, i.e. $\dot{\phi}^2\ll V(\phi)$, together with the approximation $\left|\ddot{\phi}\right|\ll H \left|\dot{\phi}\right|$. Moreover, in this approach the full shape of the inflaton potential is considered in order to identify the value of the scalar field at the end of inflation and hence the value of the scalar field when the largest scales observable today cross the Hubble radius. Upon comparison to the current cosmological and astronomical observations, specially those related with the CMB
temperature anisotropies, it is possible to constrain several inflation models. Particularly, the constraints in the $n_s-r$ plane
give us the predictions of a number of representative inflationary
potentials. Recently, the Planck
collaboration has published new data of enhanced precision of
the CMB anisotropies \cite{Ade:2015lrj}. Here, the Planck full mission
data has improved the upper bound on the tensor-to-scalar ratio
$r_{0.002} < 0.11$ ($95\%$ CL), which is similar to that obtained from \cite{Ade:2013uln}, in which
$r < 0.12$ ($95\%$ CL). From the particle physics point of view, it is natural to
begin by specifying the functional form of the potential.
However, even for simple choices, such as exponential \cite{Lucchin:1984yf}, constant \cite{R1}
or power-law potentials \cite{Linde:1983gd}, it is not possible to go further analytically. An alternative way is
to specify the time-dependence of the scale factor $a(t)$. Following Refs.\cite{Barrow:1993zq, Rendall:2005if, Barrow:2006dh}, exact solutions can also be found in the scenario of intermediate inflation. In this inflationary model the scale factor evolves as $a(t)\sim \exp\left(At^f\right)$, where $A$ and $f$ are two constant parameters such that $A>0$ and $0<f<1$. The expansion rate of this scale factor is slower than in de Sitter inflation \cite{R1}, for which $a(t)\sim \exp(H t)$, where the Hubble rate $H$ is a constant, but faster than in power-law inflation, $a(t)\sim t^n$ \cite{Lucchin:1984yf}, where $n>1$.
As an alternative to the slow-roll approximation, there is another method for studying inflation, known as the
Hamilton-Jacobi approach \cite{Salopek:1990jq, Kinney:1997ne}. This formulation is a powerful way of rewriting the equations of motion for single-field inflation. It can be derived by considering the scalar field itself to be the time variable, which is possible during any epoch in which the scalar field evolves monotonically with time. It allows us to consider the Hubble rate or Hubble function $H(\phi)$ (not to be confused with the Hamiltonian function $H$), rather than the inflaton scalar potential $V(\phi)$, as the fundamental quantity to be specified. Because $H(\phi)$, unlike $V(\phi)$, is a geometric quantity, inflation is described more naturally in that language. The advantage of such an approach is that the form of the potential is readily deduced. As it was suggested in Refs.\cite{Lidsey:1991zp, Lidsey:1991dz, Hawkins:2000dq}, $H(\phi)$ should be viewed as the solution-generating
function when analysing inflationary cosmologies. For instance, $H(\phi) \sim \exp(\phi)$ gives the power-law inflation model \cite{Lucchin:1984yf}. Furthermore, this formalism has been considered by the Planck collaboration in order to reconstruct the inflaton potential beyond the slow-roll approximation \cite{Ade:2013uln, Ade:2015lrj}. For a representative list
of recent inflation models studied under Hamilton-Jacobi formalism where several expressions for $H(\phi)$ have been considered, see Refs.\cite{delCampo:2012qb,Pal:2011dt,Aghamohammadi:2014aca,Villanueva:2015ypa,Villanueva:2015xpa,Sheikhahmadi:2016wyz}.
Following Ref.\cite {Pal:2011dt}, a phenomenological quasi-exponential Hubble rate $H(\phi)$ yielding an inflationary
solution was proposed to be
\begin{equation}
H(\phi)=H_{inf}\exp \left[\frac{\frac{\phi}{m_p}}{p\left(1+\frac{\phi}{m_p}\right)}\right],\label{Hexp}
\end{equation}
where $p$ is a dimensionless parameter, $m_p$ denotes the Planck mass, and $H_{inf}$ is a parameter
with dimensions of Planck mass. It is interesting to mention that this model presents an improvement in comparison to the power-law inflation model \cite{Lucchin:1984yf}, because the former addresses the graceful-exit problem of inflation, and the value it predicted for the tensor-to-scalar ratio was compatible with the Seven-Year WMAP data \cite{astro202}, being
supported by the observational data available at that time.
The main goal of the present work is to study the realization of inflation by reconsidering the expression for the Hubble rate given
by Eq.(\ref{Hexp}), in the light of the recent Planck
results. We stress that our work differs from the previous work \cite{Pal:2011dt} in three ways. Firstly, in this work we restrict ourselves only to the inflationary predictions of this model. Secondly, in the previous paper the authors did not use contour plots in the $n_s-r$ and $n_s-d n_s/d\ln k$ planes to constrain the parameters of the model they studied. Finally, in our work here we make use of the latest data from Planck, not available at that time, to put bounds on the parameters of the model. We will show that our results are modified
compared to \cite{Pal:2011dt} using the Planck results. By comparing the
theoretical predictions of the model with the allowed
contour plots in the $n_s-r$ plane, we find that the model predicts a value for the tensor-to-scalar ratio $r$ detectable
by Planck, and we conclude that the model is viable.
We organize our work as follows: After this introduction, in the next section
we summarize the dynamics of
inflation in the Hamilton-Jacobi formalism. In the third section we analyze the inflation dynamics of the Hubble rate given by Eq.(\ref{Hexp}) in the
Hamilton-Jacobi framework, obtaining expressions for the scalar power spectrum, scalar spectral index, and tensor-to-scalar ratio
in terms of the free parameters characterizing the model which are constrained by considering the Planck 2015 results, through the allowed contour plots in the $r-n_s$ plane and the amplitude of the scalar power spectrum. In section \ref{wi} we discuss a little further how in the warm inflation scenario the radiation-dominated phase is achieved without introducing the reheating phase for this quasi-exponential Hubble function. In the last section we finish with
our conclusions. We choose units so that $c=\hbar=1$.
\section{Hamilton-Jacobi approach to inflation}\label{branerew}
\subsection{Dynamics of inflation}
The simplest model of inflation in Einstein's General Relativity is that of a classical homogeneous scalar field $\phi=\phi(t)$, named the inflaton field, which is introduced into the action. The properties of the scalar potential determine how inflation evolves. For a flat Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) metric,
the Friedmann and acceleration equations become
\begin{equation}
H^2 =\frac{8 \pi}{3 m^2_p}\left(\frac{\dot{\phi}^2}{2}+V(\phi)\right),\label{H2}
\end{equation}
and
\begin{equation}
\frac{\ddot{a}}{a} =-\frac{4 \pi}{3 m^2_p}\left(\dot{\phi}^2-V(\phi)\right),\label{aa}
\end{equation}
respectively, where $m_p=1/\sqrt{G}$ corresponds to the Planck mass.
Besides the Einstein equations, the field satisfies the Klein-Gordon equation in this FLRW universe
\begin{equation}
\ddot{\phi}+3H\dot{\phi}+V^{\prime}=0,\label{KG}
\end{equation}
where prime indicates derivative with respect to $\phi$, and dot a derivative with respect to cosmic time.
The Friedmann and Klein-Gordon equations are the basis for constructing the Hamilton-Jacobi formulation. By combining Eqs.(\ref{H2}) and (\ref{KG}), we obtain the following expression
\begin{equation}
\dot{\phi}=-\left(\frac{m^2_p}{4 \pi}\right) H^{\prime}(\phi),\label{dotphi}
\end{equation}
which gives the relation between $\phi$ and cosmic time $t$. This allows us to write the Friedmann
equation in a first-order form, from which the inflaton potential $V(\phi)$ becomes
\begin{equation}
V(\phi)=\left(\frac{3m^2_p}{8\pi}\right)\left[H(\phi)^2-\frac{m^2_p}{12\pi}\left[H^{\prime}(\phi)\right]^2\right].\label{HJ}
\end{equation}
This last equation is the Hamilton-Jacobi equation \cite{Lyth:2009zz}. It allows us to consider $H(\phi)$, rather than $V(\phi)$, as the fundamental quantity to be specified. On the other hand, combining the relation $\frac{d a}{d \phi}=a \frac{H}{\dot{\phi}}$ with Eq.(\ref{dotphi})
yields a differential equation for $a(\phi)$, whose integration becomes
\begin{equation}
a(\phi)=\exp\left[-\frac{4\pi}{m^2_p}\int \,\frac{H(\phi)}{H^{\prime}(\phi)}\,d\phi\right]. \label{aphi}
\end{equation}
This equation implies that, once the functional form of a geometrical quantity $H(\phi)$ has been specified, the cosmological dynamics is determined. The advantage of such an approach is that the form of the
potential is readily deduced from Eq.(\ref{HJ}).
We can use the Hamilton-Jacobi formalism to write down a slightly different version of the slow-roll approximation, defining the Hubble hierarchy parameters $\epsilon_H$ and $\eta_H$ as \cite{Lyth:2009zz}
\begin{eqnarray}
\epsilon_H &\equiv & -\frac{d \ln H}{d \ln a} =\left(\frac{m^2_p}{4\pi}\right)\left(\frac{H^{\prime}(\phi)}{H(\phi)}\right)^2,\label{epsilon}\\
\eta_H & \equiv & -\frac{d \ln H^{\prime}}{d \ln a}= \frac{m^2_p}{4\pi}\frac{H^{\prime \prime}(\phi)}{H(\phi)}.\label{eta}
\end{eqnarray}
In the slow-roll limit, $\epsilon_H\,\,\rightarrow\,\,\epsilon$ and $\eta_H \,\,\rightarrow\,\, \eta -\epsilon$, where $\epsilon$ and $\eta$ are
the usual slow-roll parameters. By using Eq.(\ref{epsilon}), the acceleration equation (\ref{aa}) is rewritten as
\begin{equation}
\frac{\ddot{a}}{a}=H^2\left(1-\epsilon_H\right).\label{newacc}
\end{equation}
During inflation $\epsilon_H$
satisfies the condition $\epsilon_H<1$, and the inflationary expansion ends when $\epsilon_H$ becomes one.
On the other hand, the number of $e$-folds between Hubble-radius crossing and the end of inflation is given by
\begin{equation}
N(\phi) \equiv \int_{t_{*}}^{t_{end}}\,H\,dt=\left(\frac{4\pi}{m^2_p}\right)\int_{\phi_{end}}^{\phi_{*}}\,\frac{H(\phi)}{H^{\prime}(\phi)}\,d\phi=\int_{\phi_{end}}^{\phi_{*}}\,\frac{1}{\epsilon_H}\frac{H^{\prime}(\phi)}{H(\phi)}\,d\phi\label{Nfolds}
\end{equation}
where $\phi_{*}$ and $\phi_{end}$ are the values of the scalar field when the cosmological scales cross the Hubble-radius and at the end of inflation, respectively. The last value is found by $\epsilon_H(\phi_{end})=1$.
\subsection{Attractor behavior}
The Hamilton-Jacobi approach is useful to show that all possible inflationary trajectories will rapidly converge to a common attractor solution, if they are sufficiently close to each other initially. This is exactly the behaviour that one expects within the slow-roll approximation, but the proof does not make use of that approximation. Suppose that $H_0(\phi)$ is any solution to Eq.(\ref{HJ}), inflationary or not. If we add to this solution a linear homogeneous perturbation $\delta H(\phi)$, the attractor behaviour will be satisfied if $\frac{\delta H(\phi)}{H_0(\phi)}$ tends quickly to zero as $\phi$ evolves \cite{Liddle:1994dx}. Replacing $H(\phi)=H_0(\phi)+\delta H(\phi)$ in Eq.(\ref{HJ}) and linearizing, we have that
\begin{equation}
\delta H(\phi)\simeq \frac{1}{3}\left(\frac{m^2_p}{4\pi}\right)\frac{H^{\prime}_0(\phi)}{H_0(\phi)}\,\delta H^{\prime}(\phi).
\end{equation}
Integrating the last expression we get
\begin{equation}
\delta H(\phi)=\delta H(\phi_i) \exp\left(\int_{\phi_i}^{\phi}\,\frac{3}{\epsilon_H}\frac{H^{\prime}_0(\phi)}{H_0(\phi)}\,d\phi\right), \label{attractor}
\end{equation}
where $\delta H(\phi_i)$ is the initial value of the perturbation at $\phi=\phi_i$. Knowing $H(\phi)$, it is possible to study the behaviour
of the perturbation $\delta H(\phi)$.
In the next section we will give a review of cosmological perturbations and use Hubble hierarchy parameters for describing scalar and tensor perturbations.
\subsection{Cosmological perturbations}
We consider the gauge invariant quantity $\zeta=-\psi-H\frac{\delta \rho}{\dot{\rho}}$. Here, $\zeta$ is defined on slices of uniform density and reduces to the curvature perturbation $\mathcal{R}$ at super-horizon scales. A fundamental
feature of $\zeta$ is that it is nearly constant on super-horizon scales \cite{Riotto:2002yw}, and in fact this property does not depend on the gravitational field equations \cite{Wands:2000dp}. Therefore, at super-horizon scales we have that $\mathcal{R}=H\frac{\delta \phi}{\dot{\phi}}$, where $\left|\delta \phi\right|=H/2\pi$. In this way, the power spectrum
of scalar perturbations is given by \cite{Lyth:2009zz, Bassett:2005xm}
\begin{equation}
\mathcal{P}_{\mathcal{R}}(k)=\frac{H^2}{\dot{\phi}^2}\left(\frac{H}{2\pi}\right)^2_{k=aH}.\label{AS}
\end{equation}
This perturbation is evaluated at Hubble radius crossing $k = aH$ during inflation.
Important observational quantities are not only the amplitude of the primordial curvature perturbations but also the scalar spectral index, which represents the scale dependence of the power spectrum, defined by
\begin{equation}
n_s-1\equiv \frac{d \ln \mathcal{P}_{\mathcal{R}} }{d \ln k}.
\end{equation}
Thus, the scalar spectral index of the power spectrum (\ref{AS}) is given by
\begin{equation}
n_s-1=2\eta_H-4\epsilon_H, \label{ns}
\end{equation}
where $\epsilon_H$ and $\eta_H$ are the Hubble hierarchy parameters, given by Eqs.(\ref{epsilon}) and (\ref{eta}), respectively.
We also introduce the running of the scalar spectral index, which represents the scale dependence of the spectral index, by
$n_{run}=\frac{d n_s}{d \ln k}$, yielding
\begin{equation}
n_{run}=10 \epsilon_H \eta_H-8 \epsilon^2_H-2 \xi^2_H,\label{nrun}
\end{equation}
where $\xi^2_H$ is a third Hubble hierarchy parameter, defined by \cite{Lidsey:1995np}
\begin{equation}
\xi^2_H\equiv \frac{m^2_p}{4 \pi}\left(\frac{H^{\prime \prime \prime}(\phi)H^{\prime}(\phi)}{H(\phi)^2}\right).\label{xi2}
\end{equation}
On the other hand, the power spectrum of tensor perturbations generated from inflation is given by \cite{Lyth:2009zz, Bassett:2005xm}
\begin{equation}
\mathcal{P}_{\mathcal{T}}=\frac{16\pi}{m^2_p}\left(\frac{H}{2\pi}\right)^2_{k=aH}.\label{TS}
\end{equation}
As the cosmological parameter related to the primordial tensor perturbation, the ratio between the amplitude of the primordial tensor perturbation and that of the primordial curvature perturbation, the so-called tensor-to-scalar ratio, is defined by $r\equiv \frac{\mathcal{P}_{\mathcal{T}}}{\mathcal{P}_{\mathcal{R}}}$ and becomes
\begin{equation}
r=4\epsilon_H.\label{rH}
\end{equation}
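The relation $r=4\epsilon_H$ follows directly from Eqs.(\ref{dotphi}), (\ref{AS}), (\ref{TS}) and (\ref{epsilon}). The sketch below checks it numerically for a sample generating function $H(\phi)=e^{\lambda\phi}$ (the value of $\lambda$ is arbitrary and we set $m_p=1$; the helper names are ours):

```python
from math import exp, pi

m_p = 1.0     # Planck mass set to unity for the check
lam = 0.3     # arbitrary sample parameter for H(phi) = exp(lam * phi)

def H(phi):
    return exp(lam * phi)

def Hprime(phi):
    return lam * exp(lam * phi)

def tensor_to_scalar_and_eps(phi):
    phidot = -(m_p**2 / (4 * pi)) * Hprime(phi)                 # Eq. (dotphi)
    P_R = (H(phi)**2 / phidot**2) * (H(phi) / (2 * pi))**2      # Eq. (AS)
    P_T = (16 * pi / m_p**2) * (H(phi) / (2 * pi))**2           # Eq. (TS)
    eps_H = (m_p**2 / (4 * pi)) * (Hprime(phi) / H(phi))**2     # Eq. (epsilon)
    return P_T / P_R, eps_H
```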
Additionally, by combining Eqs.(\ref{dotphi}) and (\ref{rH}), we obtain the Lyth bound \cite{Lyth:1996im}, which relates the tensor-to-scalar ratio and
the evolution of the scalar inflaton field
\begin{equation}
\frac{\Delta \phi}{m_p}=\frac{1}{4\sqrt{\pi}}\int_{0}^{N}\,\sqrt{r}\,dN.\label{lyth}
\end{equation}
This means that the tensor-to-scalar ratio measures (up to order-one constants) the distance that the inflaton
field $\phi$ traveled in field space during inflation. For detectable $r$, this implies $\Delta \phi \sim m_p$.
Up to now, the basis of the Hamilton-Jacobi formalism has been presented. In the next section, in order to get a specific result, we are going to introduce the quasi-exponential form for the Hubble rate given by Eq.(\ref{Hexp}).
\section{Hamilton-Jacobi approach for quasi-exponential inflation}\label{natbra}
\subsection{Dynamics of inflation}
In this section we describe an inflationary model by using the quasi-exponential generating function $H(\phi)$ given by Eq.(\ref{Hexp}). By combining Eqs.(\ref{Hexp}) and (\ref{dotphi}) we get that
\begin{eqnarray}
&\exp\left[-\frac{\frac{\phi}{m_p}}{p\left(1+\frac{\phi}{m_p}\right)}\right] p\left(1+\frac{\phi}{m_p}\right)\left[1+p+2p^2+p(1+4p)\frac{\phi}{m_p}+2p^2\left(\frac{\phi}{m_p}\right)^2\right] \nonumber\\
&-\exp\left[-\frac{1}{p}\right] \textup{Ei}\left[x(\phi)\right]=-\frac{3p^2}{2\pi}H_{inf}t,\label{phisol}
\end{eqnarray}
where $\textup{Ei}\left[x(\phi)\right]$ denotes the Exponential Integral function \cite{arfken}, given by the integral
\begin{equation}
\textup{Ei}(x)=-\int_{-x}^{\infty}\,\frac{\exp(-z)}{z}\,dz,
\end{equation}
with $x(\phi)=\frac{1}{p\left(1+\frac{\phi}{m_p}\right)}$. This latter expression yields the inflaton field as a function of cosmic time. Additionally, the scale factor $a(t)$ turns out to be
\begin{equation}
a(\phi)=a_i \exp\left[-4\pi p\left(\left[\frac{\phi}{m_p}+\left(\frac{\phi}{m_p}\right)^2+\frac{1}{3}\left(\frac{\phi}{m_p}\right)^3\right]-\left[\frac{\phi_i}{m_p}+\left(\frac{\phi_i}{m_p}\right)^2+\frac{1}{3}\left(\frac{\phi_i}{m_p}\right)^3\right]\right)\right],\label{asolphi}
\end{equation}
where $a_i$ denotes the value of the scale factor when the inflaton field has the value $\phi_i$, i.e., $a_i=a(\phi_i)$. In order to have
an inflationary solution, the condition $\phi<\phi_i$ must be satisfied, which means that the inflaton starts rolling down the
potential at large values of $\phi_i$.
The form of the potential is readily deduced from Eqs.(\ref{Hexp}) and (\ref{HJ}), and turns out to be
\begin{eqnarray}
&V(\phi)=V_0\frac{\exp\left[\frac{\frac{2\phi}{m_p}}{p\left(1+\frac{\phi}{m_p}\right)}\right] }{\left(1+\frac{\phi}{m_p}\right)^4}\bigg[\left(12 \pi p^2-1\right)+48\pi p^2\frac{\phi}{m_p}+72\pi p^2\left(\frac{\phi}{m_p}\right)^2+48\pi p^2\left(\frac{\phi}{m_p}\right)^3\nonumber\\
&+12\pi p^2\left(\frac{\phi}{m_p}\right)^4\bigg],\label{VVphi}
\end{eqnarray}
where $V_0=\frac{H_{inf}^2m^2_p}{32 \pi^2 p^2}$. For the sake of comparison, in the slow-roll approximation, $\dot{\phi}^2\ll V(\phi)$ and $\left|\ddot{\phi}\right|\ll H \left|\dot{\phi}\right|$, the inflaton potential becomes
\begin{equation}
V(\phi)\simeq \frac{3 H^2_{inf}m^2_p}{8\pi} \exp\left[\frac{\frac{2\phi}{m_p}}{p\left(1+\frac{\phi}{m_p}\right)}\right].
\end{equation}
For this model the Hubble hierarchy parameters $\epsilon_H$ and $\eta_H$ become
\begin{equation}
\epsilon_H(\phi)=\frac{1}{4\pi p^2\left(1+\frac{\phi}{m_p}\right)^4},\label{eh}
\end{equation}
and
\begin{equation}
\eta_H(\phi)=-\frac{\left(-1+2p+2p\frac{\phi}{m_p}\right)}{4\pi p^2\left(1+\frac{\phi}{m_p}\right)^4},\label{etah}
\end{equation}
respectively.
From the condition $\epsilon_H(\phi_{end})=1$, we obtain the value of the inflaton field at the end of the inflationary expansion, yielding
\begin{equation}
\phi_{end}=\left(\frac{1}{\sqrt{2p}\,\pi^{1/4}}-1\right)m_p.\label{phiend}
\end{equation}
Restricting ourselves to positive excursions of the inflaton field through the potential, the allowed range for $p$ becomes $0 < p < \frac{1}{2\sqrt{\pi}}\approx 0\textup{.}282$.
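As a side remark, Eqs.(\ref{eh}) and (\ref{phiend}) are simple enough to be verified numerically. The following Python sketch (ours, not part of the derivation; the function names are illustrative) checks that $\epsilon_H(\phi_{end})=1$ holds identically and reproduces the upper bound $p<1/(2\sqrt{\pi})$:

```python
import math

def eps_H(x, p):
    # Hubble hierarchy parameter of Eq. (eh), with x = phi/m_p
    return 1.0 / (4.0 * math.pi * p**2 * (1.0 + x)**4)

def x_end(p):
    # phi_end/m_p from the end-of-inflation condition eps_H = 1, Eq. (phiend)
    return 1.0 / (math.sqrt(2.0 * p) * math.pi**0.25) - 1.0

# the upper bound on p follows from requiring phi_end > 0
p_max = 1.0 / (2.0 * math.sqrt(math.pi))
```

For $p$ above $p_{max}$ the field value $\phi_{end}$ becomes negative, which is the origin of the quoted bound.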
The number of inflationary $e$-folds between the values of the scalar field when a given perturbation scale leaves the Hubble-radius and at the end of inflation, can be computed from Eqs.(\ref{Hexp}), (\ref{Nfolds}), and (\ref{phiend}), resulting in
\begin{equation}
N=\frac{4\pi p}{3}-\frac{\sqrt{2}\pi^{1/4}}{3\sqrt{p}}+4\pi p \frac{\phi_{*}}{m_p}+4\pi p \left(\frac{\phi_{*}}{m_p}\right)^2+\frac{4\pi p}{3}\left(\frac{\phi_{*}}{m_p}\right)^3.\label{efoldsV}
\end{equation}
By solving Eq.(\ref{efoldsV}) for $\phi_{*}$, we may obtain the value of the scalar field at the time of Hubble-radius crossing, giving
\begin{equation}
\phi_{*}=\left[\frac{\left(3N \pi^2p^2+\sqrt{2}\pi^{9/4}p^{3/2}\right)^{1/3}}{2^{2/3}\pi p}-1\right]m_p.\label{ycmb}
\end{equation}
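As a consistency check (ours, not part of the original analysis), one can verify numerically that the closed form of Eq.(\ref{ycmb}) inverts Eq.(\ref{efoldsV}):

```python
import math

def N_of_x(x, p):
    # number of e-folds, Eq. (efoldsV), with x = phi_*/m_p
    c = 4.0 * math.pi * p
    return (c / 3.0 - math.sqrt(2.0) * math.pi**0.25 / (3.0 * math.sqrt(p))
            + c * x + c * x**2 + (c / 3.0) * x**3)

def x_star(N, p):
    # closed-form inversion, Eq. (ycmb)
    num = (3.0 * N * math.pi**2 * p**2
           + math.sqrt(2.0) * math.pi**2.25 * p**1.5)**(1.0 / 3.0)
    return num / (2.0**(2.0 / 3.0) * math.pi * p) - 1.0
```

The roundtrip $N\to\phi_*\to N$ holds because the $e$-fold sum is a perfect cube, $N\propto(1+\phi/m_p)^3$ up to the constant fixed by $\phi_{end}$.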
As we shall see later on, the several inflationary observables will be evaluated at the value of the inflaton field given by Eq.(\ref{ycmb}).
\subsection{Attractor behavior}
\begin{figure}[th]
{\hspace{-2
cm}\includegraphics[width=3.3 in,angle=0,clip=true]{deltaH1}
}
{\vspace{-0.5 cm}\caption{Plot of the perturbation $\delta H(\phi)/\delta H(\phi_i)$ as a function of the inflaton field $\phi$ for the
quasi-exponential Hubble rate. For this plot we
have used 3 different values for the number of $e$-folds $N$: the solid, dashed, and dotted lines correspond to $N=55,\,60$, and $65$, respectively. Additionally, we have used the values $p=0\textup{.}15$ and $\alpha=1\textup{.}5$.
\label{delta}}}
\end{figure}
As the final step in the analysis of the background dynamics of this model, the attractor behavior of the solution is
considered. From Eqs.(\ref{Hexp}) and (\ref{attractor}), the solution for the perturbation $\delta H(\phi)$ yields
\begin{equation}
\frac{\delta H(\phi)}{\delta H(\phi_i)}=\exp\left(4\pi p \left[\left(1+\frac{\phi}{m_p}\right)^3-\left(1+\frac{\phi_i}{m_p}\right)^3\right]\right).\label{deltahs}
\end{equation}
In order to determine the attractor behaviour quantitatively, we consider the initial value of the inflaton field to be $\phi_i=\alpha \phi_{*}$, with $\alpha >1$ and $\phi_{*}$ given by Eq.(\ref{ycmb}). Then, replacing $\phi_i$ into Eq.(\ref{deltahs}), the perturbation $\frac{\delta H(\phi)}{\delta H(\phi_i)}$
becomes
\begin{equation}
\frac{\delta H(\phi)}{\delta H(\phi_i)}=\exp\left(4\pi p \left[\left(1+\frac{\phi}{m_p}\right)^3-\left(1+\alpha \left[\frac{\left(3N \pi^2p^2+\sqrt{2}\pi^{9/4}p^{3/2}\right)^{1/3}}{2^{2/3}\pi p}-1\right]\right)^3\right]\right).\label{deltahN}
\end{equation}
Fig.\ref{delta} shows the plot of the perturbation $\delta H(\phi)/\delta H(\phi_i)$ as a function of the inflaton field $\phi$. For this plot we
have used 3 different values for the number of $e$-folds $N$: the solid, dashed, and dotted lines correspond to $N=55,\,60$, and $65$, respectively. Additionally, we have used the values $p=0\textup{.}15$ and $\alpha=1\textup{.}5$.
For this quasi-exponential form of the Hubble function, the inflaton field decreases as time increases; therefore, the exponential term
on the right-hand side of Eq.(\ref{deltahN}) decreases with time and tends to zero rapidly. Then, the
perturbation of the Hubble function vanishes, and the model exhibits attractor behavior.
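The decay of the perturbation can also be checked directly from Eq.(\ref{deltahs}); a minimal numerical sketch (ours):

```python
import math

def deltaH_ratio(x, x_i, p):
    # Eq. (deltahs): delta H(phi) / delta H(phi_i), with x = phi/m_p
    return math.exp(4.0 * math.pi * p * ((1.0 + x)**3 - (1.0 + x_i)**3))
```

For $\phi<\phi_i$ the exponent is negative, so the ratio is driven towards zero as the field rolls down, in agreement with the behavior seen in Fig.\ref{delta}.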
\subsection{Cosmological perturbations}
Regarding the cosmological perturbations, the amplitude of the primordial curvature perturbation, using Eqs.(\ref{Hexp}) and (\ref{AS}), is found to be
\begin{equation}
\mathcal{P}_{\mathcal{R}}=\frac{4H^2_{inf}p^2}{m^2_p}\exp\left[\frac{2\frac{\phi}{m_p}}{p\left(1+\frac{\phi}{m_p}\right)}\right]\left(1+\frac{\phi}{m_p}\right)^4.\label{ASV}
\end{equation}
The scalar spectral index, using Eqs.(\ref{ns}), (\ref{eh}), and (\ref{etah}), becomes
\begin{equation}
n_s=1-\frac{\left(1+2p+2p\frac{\phi}{m_p}\right)}{2\pi p^2 \left(1+\frac{\phi}{m_p}\right)^4}.\label{nsV}
\end{equation}
Additionally, the running of the scalar spectral index $n_{run}$ is found to be
\begin{equation}
n_{run}=-\frac{\left(2+3p+3p\frac{\phi}{m_p}\right)}{4\pi^2 p^3 \left(1+\frac{\phi}{m_p}\right)^7}.\label{nrunV}
\end{equation}
Finally, the tensor-to-scalar ratio can be obtained from Eqs.(\ref{rH}) and (\ref{eh}), yielding
\begin{equation}
r=\frac{1}{\pi p^2\left(1+\frac{\phi}{m_p}\right)^4}.\label{rrV}
\end{equation}
After evaluating these inflationary observables at the value of the scalar field when a given perturbation scale leaves the Hubble-radius, given by (\ref{ycmb}), we may compare the theoretical predictions of our model with the observational data in order to obtain constraints on the parameters that characterize it.
\begin{figure}[th]
{\hspace{-3
cm}\includegraphics[width=4.8 in,angle=0,clip=true]{Hrns21}}
{\vspace{-1.5cm}\caption{Plot of the tensor-to-scalar ratio $r$ versus the scalar spectral index $n_s$ for the
quasi-exponential Hubble rate. Here, we have considered the
two-dimensional marginalized joint confidence contours for $(n_s
,r)$, at the $68\%$ and $95\%$ CL, from the latest Planck data \cite{Ade:2015lrj}. In this plot we
have used 3 different values for the number of $e$-folds $N$: the solid, dashed, and dotted lines correspond to $N=55,\,60$, and $65$, respectively.
\label{rns}}}
\end{figure}
The amplitude of the primordial curvature perturbation, the scalar spectral index, the running of the scalar spectral index, and the tensor-to-scalar ratio, evaluated at the Hubble-radius crossing $k=aH$, become
\begin{eqnarray}
\label{AsN}
\mathcal{P}_{\mathcal{R}} &=& \frac{H^2_{inf}}{2^{2/3}\pi^{4/3}m^2_p}\exp\left[\frac{2}{p}-\frac{2^{5/3}\pi^{1/3}}{\sqrt{p}\left(3N\sqrt{p}+\sqrt{2}\pi^{1/4}\right)^{1/3}}\right]\left(3N\sqrt{p}+\sqrt{2}\pi^{1/4}\right)^{4/3},\\
\label{nsN}
n_s &=& 1-\frac{4\sqrt{p}}{3N\sqrt{p}+\sqrt{2}\pi^{1/4}}-\frac{2^{5/3}\pi^{1/3}}{\left(3N\sqrt{p}+\sqrt{2}\pi^{1/4}\right)^{4/3}} ,\\
\label{runn}
n_{run} &=& -\frac{4p}{\left(3N\sqrt{p}+\sqrt{2}\pi^{1/4}\right)^2}\left[3+\frac{2^{5/3}\pi^{1/3}}{\sqrt{p}\left(3N\sqrt{p}+\sqrt{2}\pi^{1/4}\right)^{1/3}}\right],\\
\label{rN}
r &=& \frac{2^{8/3}\pi^{1/3}}{\left(3N\sqrt{p}+\sqrt{2}\pi^{1/4}\right)^{4/3}}.
\end{eqnarray}
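Equations (\ref{nsN}) and (\ref{rN}) can be cross-checked against the direct evaluation of Eqs.(\ref{nsV}) and (\ref{rrV}) at $\phi_{*}$. A Python sketch of this check (ours; the function names are illustrative):

```python
import math

def A(N, p):
    # shorthand appearing in the closed forms at Hubble-radius crossing
    return 3.0 * N * math.sqrt(p) + math.sqrt(2.0) * math.pi**0.25

def x_star(N, p):
    # phi_*/m_p, equivalent to Eq. (ycmb)
    return A(N, p)**(1.0 / 3.0) / (2.0**(2.0 / 3.0) * math.pi**(1.0 / 3.0) * math.sqrt(p)) - 1.0

def ns_phi(x, p):
    # scalar spectral index as a function of the field, Eq. (nsV)
    return 1.0 - (1.0 + 2.0 * p + 2.0 * p * x) / (2.0 * math.pi * p**2 * (1.0 + x)**4)

def r_phi(x, p):
    # tensor-to-scalar ratio as a function of the field, Eq. (rrV)
    return 1.0 / (math.pi * p**2 * (1.0 + x)**4)

def ns_N(N, p):
    # closed form, Eq. (nsN)
    a = A(N, p)
    return 1.0 - 4.0 * math.sqrt(p) / a - 2.0**(5.0 / 3.0) * math.pi**(1.0 / 3.0) / a**(4.0 / 3.0)

def r_N(N, p):
    # closed form, Eq. (rN)
    return 2.0**(8.0 / 3.0) * math.pi**(1.0 / 3.0) / A(N, p)**(4.0 / 3.0)
```

Both closed forms agree with the field-space expressions evaluated at $\phi_*$, as they must.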
The first constraint on the parameters of this model can easily be found from Eq.(\ref{AsN}), since we may write the $H_{inf}$ parameter in terms of the amplitude of the scalar power spectrum, obtaining
\begin{equation}
\label{HN}
H_{inf}=\frac{2^{1/3}\pi^{2/3}\sqrt{\mathcal{P}_{\mathcal{R}}}\,m_p}{\left(3N\sqrt{p}+\sqrt{2}\pi^{1/4}\right)^{2/3}}\exp\left[-\frac{1}{p}+\frac{2^{2/3}\pi^{1/3}}{\sqrt{p}\left(3N\sqrt{p}+\sqrt{2}\pi^{1/4}\right)^{1/3}}\right].
\end{equation}
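Since $\mathcal{P}_{\mathcal{R}}$ in Eq.(\ref{AsN}) scales as $H_{inf}^2$, this inversion can also be performed numerically without any closed form; a sketch (ours) that round-trips the amplitude:

```python
import math

def P_R(H_inf, N, p, m_p=1.0):
    # amplitude of the primordial curvature perturbation, Eq. (AsN)
    A = 3.0 * N * math.sqrt(p) + math.sqrt(2.0) * math.pi**0.25
    expo = 2.0 / p - 2.0**(5.0 / 3.0) * math.pi**(1.0 / 3.0) / (math.sqrt(p) * A**(1.0 / 3.0))
    return (H_inf**2 / (2.0**(2.0 / 3.0) * math.pi**(4.0 / 3.0) * m_p**2)
            * math.exp(expo) * A**(4.0 / 3.0))

def H_inf_from_PR(PR, N, p, m_p=1.0):
    # invert Eq. (AsN) for H_inf, exploiting P_R proportional to H_inf^2
    return math.sqrt(PR / P_R(1.0, N, p, m_p))
```

Feeding the observational amplitude $\mathcal{P}_{\mathcal{R}}\simeq 2\times10^{-9}$ into this inversion is how the allowed ranges of $H_{inf}$ quoted below can be reproduced for each pair $(N,p)$.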
The trajectories in the $n_s-r$ plane for the model studied here may be generated by plotting Eqs.(\ref{nsN}) and (\ref{rN}) parametrically. In particular, we have obtained three different curves by fixing the number of $e$-folds to $N=55,\,60$, and $65$, and plotting with respect to the parameter $p$ in the range $0 < p < \frac{1}{2\sqrt{\pi}}$, obtained by considering a positive excursion of the inflaton field through the potential, which gives an upper bound for $p$. Fig.~\ref{rns} shows the plot of the tensor-to-scalar ratio $r$ versus the scalar spectral index $n_s$ for the quasi-exponential Hubble rate. Here, we have considered the
two-dimensional marginalized joint confidence contours for $(n_s
,r)$, at the $68\%$ and $95\%$ CL, from the latest Planck data \cite{Ade:2015lrj}. We can determine numerically from Eq.(\ref{rN}) that, fixing $N$, the tensor-to-scalar ratio decreases as the parameter $p$ increases. In this way, the allowed contour plots in the $n_s-r$ plane impose a strong constraint on the lower bound for $p$. This lower bound on the dimensionless parameter $p$, for each $r(n_s)$ curve, may be inferred by finding the points where the trajectory enters the $95\%$ CL region from Planck.
The trajectory for $N=55$ enters the joint $95\%$ CL region in the $n_s$ - $r$ plane for $p>0\textup{.}104$. On the other hand,
for $N=60$, the trajectory enters the $95\%$ CL region for $p>0\textup{.}078$. Finally, for $N=65$ it enters the joint $95\%$ CL region
for $p>0\textup{.}061$. In addition, from Eq.(\ref{HN}), the constraints on $p$ already obtained, and the observational value for the amplitude of the scalar power spectrum $\mathcal{P}_{\mathcal{R}}\simeq 2 \times 10^{-9}$ \cite{Ade:2015lrj}, we may obtain the allowed range of $H_{inf}$ for each value of $N$. For $N=55$, this constraint becomes $3\text{.}883\times 10^{-9}\,m_p< H_{inf} < 4\text{.}866\times 10^{-7}\,m_p$, for $N=60$ we have that $2\text{.}279\times 10^{-10}\,m_p < H_{inf} < 4\text{.}472\times 10^{-7}\,m_p$,
and finally, for $N=65$ the allowed range becomes $9\text{.}117\times 10^{-12}\,m_p < H_{inf} < 4\text{.}141\times 10^{-7}\,m_p$. Table \ref{T1} summarizes the constraints obtained on $p$ and $H_{inf}$ using the latest Planck data.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
$N$ & constraint on $p$ & constraint on $H_{inf}$ \\
\hline
55 & $0\textup{.}104<p<0\textup{.}282$ & $3\text{.}883\times 10^{-9}\,m_p< H_{inf} < 4\text{.}866\times 10^{-7}\,m_p$ \\
\hline
60 & $0\textup{.}078<p<0\textup{.}282$ & $2\text{.}279\times 10^{-10}\,m_p < H_{inf} < 4\text{.}472\times 10^{-7}\,m_p$ \\
\hline
65 & $0\textup{.}061<p<0\textup{.}282$ & $9\text{.}117\times 10^{-12}\,m_p < H_{inf} < 4\text{.}141\times 10^{-7}\,m_p$ \\
\hline
\end{tabular}
\caption{Results for the constraints on the parameters $p$ and
$H_{inf}$ for the quasi-exponential form of the Hubble rate, using the latest Planck data.} \label{T1}
\end{table}
\begin{figure}[th]
{\hspace{-2
cm}\includegraphics[width=3.8 in,angle=0,clip=true]{Vphi1}}
{\vspace{-1 cm}\caption{Plot of the scalar potential $V$ as function of the inflaton field. For this plot we
have used 3 different values for the number of $e$-folds $N$: the solid, dashed, and dotted lines correspond to $N=55,\,60$, and $65$, respectively. For all three cases, we have used the value $p=0\textup{.}15$, which lies in the allowed range already obtained for each value of $N$, and $m_p=1$.
\label{V}}}
\end{figure}
As we can see, using the latest Planck results, through the allowed contour plots in the $n_s-r$ plane and the amplitude of the scalar power spectrum, we were able to find the allowed ranges for $p$ and $H_{inf}$. In particular, the allowed contour plots in the $n_s-r$ plane impose a strong constraint on the lower bound for $p$, which was not considered by the authors in the previous work \cite{Pal:2011dt}.
After replacing Eq.(\ref{HN}) into Eq.(\ref{VVphi}), we can plot the scalar potential $V$ as a function of the inflaton field, as shown in
Fig.~\ref{V}. We have plotted the inflaton potential for 3 different values of the number of $e$-folds $N$: the solid, dashed, and dotted lines correspond to $N=55,\,60$, and $65$, respectively, fixing the value $p=0\textup{.}15$, which lies in the allowed range for each value of $N$ already obtained by using the Planck data. It is interesting to mention that this quasi-exponential form
of the Hubble rate presents a graceful exit from inflation; however, the inflaton potential does not present a minimum, which raises the issue of how to address the problem of reheating in this model. A way to address this problem may be to study this model in the warm inflation scenario \cite{Berera:2008ar, BasteroGil:2009ec,Ramos:2016coz}, which has the attractive feature that it avoids the reheating period at the end of the accelerated expansion. In such a scenario, the dissipative effects are important, and radiation production
takes place at the same time as the expansion of the universe. When the universe heats up and becomes radiation dominated, inflation ends and the universe smoothly enters the radiation Big Bang phase. In section \ref{wi} we discuss a little further how, in the warm inflation scenario, the radiation-dominated phase is achieved without introducing a reheating phase for this quasi-exponential Hubble function.
\begin{figure}[th]
{\hspace{-2
cm}\includegraphics[width=3.3 in,angle=0,clip=true]{nrun21}
\includegraphics[width=3.3 in,angle=0,clip=true]{N60run1}}
{\vspace{-1 cm}\caption{Left panel shows the plot of the running of the scalar spectral index $d n_s/ d\ln k$ versus the scalar spectral index $n_s$ for the
quasi-exponential Hubble rate. In this plot we
have used 3 different values for the number of $e$-folds $N$: the solid, dashed, and dotted lines correspond to $N=55,\,60$, and $65$, respectively. The right panel shows the two-dimensional marginalized joint confidence contours for $(n_s
,dn_s / d\ln k)$, at the $68\%$ and $95\%$ CL, in the presence of a non-zero tensor contribution, from the latest Planck data \cite{Ade:2015lrj} and the $\frac{d n_s}{d\ln k}(n_s)$ curve for $N=60$ (red-dashed line).
\label{run}}}
\end{figure}
In order to determine the prediction of this model for the running of the spectral index, the trajectories in the $n_s-d n_s/d\ln k$ plane may be generated by plotting Eqs.(\ref{nsN}) and (\ref{runn}) parametrically. In particular, we have obtained three different curves by fixing the number of $e$-folds to $N=55,\,60$, and $65$, and plotting with respect to the parameter $p$ in the allowed range obtained for each value of $N$, which are shown in the left panel of Fig.~\ref{run}. In order to compare the previous predictions with the observational data, the right panel of Fig.~\ref{run} shows the
two-dimensional marginalized joint confidence contours for $(n_s,
d n_s/d\ln k)$, at the $68\%$ and $95\%$ CL, in the presence of a non-zero tensor contribution, from the latest Planck data \cite{Ade:2015lrj}. Given the indistinguishability of the curves in the left panel, for the right panel we have only considered the curve corresponding to $N=60$ (red-dashed line) to compare with the Planck data. The thin black line in the right panel shows the prediction of single-field monomial inflation models with $50<N<60$. From both panels, we observe that all three $\frac{d n_s}{d\ln k}(n_s)$ curves lie inside the $68\%$ as well as the $95\%$ CL regions from Planck.
Finally, by replacing the expression
found for the tensor-to-scalar ratio $r(N)$, given by Eq.(\ref{rN}), into Eq.(\ref{lyth}), the excursion of the inflaton field is found to be
\begin{equation}
\frac{\Delta \phi}{m_p}=\frac{1}{2^{2/3}\sqrt{p}\pi^{1/3}}\left[\left(3N\sqrt{p}+\sqrt{2}\pi^{1/4}\right)^{1/3}-\left(\sqrt{2}\pi^{1/4}\right)^{1/3}\right].
\end{equation}
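As a numerical cross-check of the last expression (a sketch of ours, using a simple trapezoidal quadrature), the Lyth integral of Eq.(\ref{lyth}) with $r$ taken along the trajectory from Eq.(\ref{rN}) reproduces the closed form and the bounds quoted below:

```python
import math

def sqrt_r(n, p):
    # square root of the tensor-to-scalar ratio along the trajectory, Eq. (rN)
    A = 3.0 * n * math.sqrt(p) + math.sqrt(2.0) * math.pi**0.25
    return math.sqrt(2.0**(8.0 / 3.0) * math.pi**(1.0 / 3.0) / A**(4.0 / 3.0))

def dphi_closed(N, p):
    # closed-form field excursion in units of m_p
    A = 3.0 * N * math.sqrt(p) + math.sqrt(2.0) * math.pi**0.25
    c = math.sqrt(2.0) * math.pi**0.25
    return (A**(1.0 / 3.0) - c**(1.0 / 3.0)) / (2.0**(2.0 / 3.0) * math.sqrt(p) * math.pi**(1.0 / 3.0))

def dphi_lyth(N, p, steps=20000):
    # direct trapezoidal evaluation of the Lyth integral, Eq. (lyth)
    h = N / steps
    s = 0.5 * (sqrt_r(0.0, p) + sqrt_r(N, p))
    for i in range(1, steps):
        s += sqrt_r(i * h, p)
    return s * h / (4.0 * math.sqrt(math.pi))
```

The quadrature agrees with the closed form, and evaluating the endpoints of the allowed range of $p$ for $N=55$ recovers the interval quoted in the text.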
In particular, by considering the constraint on $p$ for each value of $N$ already obtained, we get that
$2\textup{.}623<\frac{\Delta \phi}{m_p}<3\textup{.}428$, $2\textup{.}727<\frac{\Delta \phi}{m_p}<3\textup{.}852$, and $2\textup{.}826<\frac{\Delta \phi}{m_p}<4\textup{.}267$, for $N=55$, $N=60$, and $N=65$, respectively. We note that the
excursion of the inflaton field decreases as $p$ increases.
\section{A first approach to warm inflation with a quasi-exponential Hubble function}\label{wi}
As we mentioned in the previous section, the warm
inflation scenario, as opposed to standard cold inflation, has the attractive feature that it
avoids the reheating period at the end of the accelerated expansion. During the evolution
of warm inflation dissipative effects are important, and radiation production takes place at
the same time as the expansion of the universe. The dissipative effects arise from a friction
term $\Gamma$ which accounts for the processes of the scalar field dissipating into a thermal bath. In addition,
in the warm inflation scenario the density perturbations arise
from thermal fluctuations of the inflaton and dominate over the quantum ones. In this form,
an essential condition for warm inflation to occur is the existence of a radiation component
with temperature $T>H$, since the thermal and quantum fluctuations are proportional to
$T$ and $H$, respectively \cite{Berera:2008ar, BasteroGil:2009ec,Ramos:2016coz}. When the universe heats up and becomes radiation dominated,
inflation ends and the universe smoothly enters the radiation Big Bang phase.
We start by considering a spatially flat FLRW universe
containing a self-interacting inflaton scalar field $\phi$ with energy density and pressure
given by $\rho_{\phi}=\dot{\phi}^2/2+V(\phi)$ and $P_{\phi}=\dot{\phi}^2/2-V(\phi)$, respectively,
and a radiation field with energy density $\rho_{\gamma}$. The corresponding Friedmann equation reads
\begin{equation}
H^2=\frac{8\pi}{3m^2_p}(\rho_{\phi}+\rho_{\gamma}), \label{Freq}
\end{equation}
and the dynamics of $\rho_{\phi}$ and $\rho_{\gamma}$ is described by the equations \cite{Berera:2008ar, BasteroGil:2009ec,Ramos:2016coz}
\begin{equation}
\dot{\rho_{\phi}}+3\,H\,(\rho_{\phi}+P_{\phi})=-\Gamma \dot{\phi}^{2},
\label{key_01}
\end{equation}
and
\begin{equation}
\dot{\rho}_{\gamma}+4H\rho_{\gamma}=\Gamma \dot{\phi}^{2}, \label{key_02}
\end{equation}
where the dissipative coefficient $\Gamma>0$ produces the decay of the scalar
field into radiation. Recall that this
decay rate can be assumed to be a function of the
temperature of the thermal bath, $\Gamma(T)$, a function of the
scalar field, $\Gamma(\phi)$, a function of both, $\Gamma(T,\phi)$, or
simply a constant \cite{Berera:2008ar, BasteroGil:2009ec,Ramos:2016coz}.
During warm inflation, the energy density related to
the scalar field predominates over the energy density of the
radiation field, i.e.,
$\rho_\phi\gg\rho_\gamma$ \cite{Berera:2008ar, BasteroGil:2009ec,Ramos:2016coz}; however, even if the radiation energy density is small compared to that of the inflaton,
it can be larger than the expansion rate, with $\rho_{\gamma}^{1/4}>H$. Assuming thermalization, this translates roughly
into $T>H$, which is the condition for warm inflation to occur.
When $H$, $\phi$, and $\Gamma$ are slowly varying, which is a good
approximation during inflation, the production of radiation becomes quasi-stable, i.e., $\dot{\rho}_{\gamma}\ll4H\rho_{\gamma}$ and $\dot{\rho}_{\gamma}\ll\Gamma\dot{\phi}^{2}$, see Refs.\cite{Berera:2008ar, BasteroGil:2009ec,Ramos:2016coz}. Then, the energy density of the radiation field becomes
\begin{equation}
4H\rho_{\gamma}\simeq \Gamma\,\dot{\phi}^{2}. \label{key_02n}
\end{equation}
If we consider thermalization, then the energy density of the radiation field can be written as $\rho_{\gamma}=C_{\gamma}\,T^{4}$, where the constant $C_{\gamma}=\pi^{2}\,g_{\ast}/30$. Here, $g_{\ast}$ represents the number
of relativistic degrees of freedom.
By combining Eqs.(\ref{Freq}) and (\ref{key_01}), the time derivative of the Hubble function is given by
\begin{equation}
\dot{H}(\phi)=-\left(\frac{4\pi}{m_p^2}\right)(1+Q)\dot{\phi}^2, \label{hdot}
\end{equation}
where $Q$ is the dissipative ratio, defined as $Q\equiv \Gamma/3H$. In warm inflation, we can distinguish between two possible scenarios, namely the weak and strong dissipative regimes, defined as $Q\ll 1$ and $Q\gg 1$, respectively. In the weak dissipative regime, the Hubble damping is still the dominant term, however, in the strong dissipative regime, the dissipative coefficient $\Gamma$ controls the damped evolution of the inflaton field.
By expressing the time derivative of the Hubble function in terms of the inflaton field derivative, the time derivative of the
inflaton field becomes
\begin{equation}
\dot{\phi}=-\left(\frac{m_p^2}{4\pi}\right)\frac{H^{\prime}(\phi)}{(1+Q)}, \label{fdot}
\end{equation}
which is the same expression found in \cite{Sayar:2017pam}, where warm inflation under the Hamilton-Jacobi formalism has been studied recently.
Introducing the dimensionless Hubble hierarchy parameter $\epsilon_H$, we write
\begin{equation}
\epsilon_H=\frac{1}{(1+Q)}\left(\frac{m_p^2}{4\pi}\right)\frac{H^{\prime\,2}(\phi)}{H^2(\phi)}.\label{ehw}
\end{equation}
It is possible to find a relation between the energy densities $\rho_{\gamma}$ and $\rho_{\phi}$ by combining Eqs.(\ref{key_02n}), (\ref{fdot}), and (\ref{ehw}), so that
\begin{equation}
\rho_{\gamma}=\frac{Q}{2(1+Q)}\epsilon_H \rho_{\phi}.\label{rhos}
\end{equation}
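This relation can be verified numerically from its ingredients, Eqs.(\ref{key_02n}), (\ref{fdot}), and (\ref{ehw}), taking $\rho_{\phi}\simeq 3H^2m_p^2/8\pi$ when the radiation is subdominant. A sketch (ours), assuming the quasi-exponential Hubble rate $H=H_{inf}\exp\left[\frac{\phi/m_p}{p(1+\phi/m_p)}\right]$ used throughout:

```python
import math

def warm_check(p, x, H_inf, Gamma, m_p=1.0):
    # quasi-exponential Hubble rate and its field derivative, with x = phi/m_p
    H = H_inf * math.exp(x / (p * (1.0 + x)))
    dH = H / (m_p * p * (1.0 + x)**2)   # since d/dx [x/(p(1+x))] = 1/(p(1+x)^2)
    Q = Gamma / (3.0 * H)               # dissipative ratio
    phidot = -(m_p**2 / (4.0 * math.pi)) * dH / (1.0 + Q)       # Eq. (fdot)
    rho_gamma = Gamma * phidot**2 / (4.0 * H)                   # Eq. (key_02n)
    eps = (m_p**2 / (4.0 * math.pi)) * (dH / H)**2 / (1.0 + Q)  # Eq. (ehw)
    rho_phi = 3.0 * H**2 * m_p**2 / (8.0 * math.pi)             # Friedmann, radiation subdominant
    # left- and right-hand sides of Eq. (rhos)
    return rho_gamma, Q / (2.0 * (1.0 + Q)) * eps * rho_phi
```

The identity holds for any value of $\Gamma$, i.e., in both the weak and strong dissipative regimes.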
Warm inflation takes place when the parameter $\epsilon_H$ satisfies $\epsilon_H<1$. This condition
implies that during inflation the energy density of the inflaton field satisfies $\rho_{\phi}>\frac{2(1+Q)}{Q}\rho_{\gamma}$. Then,
at the end of inflation, when $\epsilon_H=1$, we have that $\rho_{\gamma}=\frac{Q}{2(1+Q)}\rho_{\phi}$. The universe stops inflating and heats up
to become radiation dominated at the time when $\rho_{\gamma}=\rho_{\phi}$. This is one of the most attractive features of warm inflation, since
it provides a smooth transition to the radiation-dominated epoch without introducing a reheating epoch. Given that for the quasi-exponential Hubble function
the inflaton potential does not present a minimum, the dynamics of this model in the warm inflation scenario provides a solution to
the problem of reheating. The perturbation dynamics of warm inflation with the quasi-exponential Hubble function deserves a further analysis, which goes beyond the scope of this work.
\section{Conclusions}\label{conclu}
To summarize, in this article we have studied an inflationary model in which the Hubble rate has a
quasi-exponential dependence on the inflaton field. We have studied the
inflationary dynamics in the Hamilton-Jacobi formalism, in which the scalar field itself is taken to be the time variable, which is possible during any epoch in which the scalar field evolves monotonically with time. It allows us to consider the Hubble function $H(\phi)$, rather than the inflaton potential $V(\phi)$, as the fundamental quantity to be specified. Because $H(\phi)$, unlike $V(\phi)$, is a geometric quantity, inflation is described more naturally in that language. This model is characterized by the dimensionless parameter $p$ and by $H_{inf}$. In order to constrain our model, we have considered the amplitude of the primordial scalar perturbations, as well as the allowed contour plots in the $n_s-r$ and $n_s-dn_s/ d\ln k$ planes from the Planck 2015 data. First, in the $n_s-r$ plane we show the theoretical predictions of the model
for three different values of $e$-folds $N=55$, 60, and 65. By finding the points where each $r(n_s)$ curve enters the joint
$95\%$ CL region, the allowed range for the $p$ parameter may be determined. After that, using the constraint
for the amplitude of scalar perturbations we determined the allowed range for $H_{inf}$. In addition, in the $n_s-d n_s/ d\ln k$ plane we show that all the three $\frac{d n_s}{d\ln k}(n_s)$ curves for $N=55$, 60, and 65 lie inside the $68\%$ as well as $95\%$ CL regions from Planck. This model predicts values for the tensor-to-scalar ratio $r$ and for the running of the scalar spectral index consistent with the current bounds imposed by Planck 2015, and we conclude
that the model is viable. As we mentioned before, this quasi-exponential form
of the Hubble rate presents a graceful exit from inflation; however, the inflaton potential does not present a minimum, which raises the issue of how to address the problem of reheating. In order to address this last problem, and as a first approach to further research, in section \ref{wi} we discussed how, in the warm inflation scenario, the radiation-dominated phase is achieved without introducing a reheating phase for this quasi-exponential Hubble function. We hope to return to this point in the near future.
\begin{acknowledgments}
N.V. was funded by Comisi\'on Nacional
de Ciencias y Tecnolog\'ia of Chile through FONDECYT Grant N$^{\textup{o}}$
3150490.
\end{acknowledgments}
\section{Introduction}
An azimuthal anisotropic flow, which describes collectivity among
particles produced in heavy-ion collisions, is recognized as
a key observable used to infer information about
the early time evolution of the nuclear interaction.
These FAIRNESS 2012 Conference proceedings highlight recent results
by the LHC experiments from the anisotropic flow
measurements in relativistic heavy-ion collisions at the TeV energy scale.
Experimental findings from the search for effects of
the local parity violation in strong interaction
using the charge dependent azimuthal correlations
with respect to the reaction plane at the LHC energies
are also discussed.
\section{Anisotropic flow fluctuations}
It is commonly understood that the event-by-event
fluctuations in the initial energy density
of a heavy-ion collision play an important role
in the development of the azimuthal asymmetries
in the momentum distribution of the produced particles.
Since fluctuations are mapped by the interaction among constituents
of the expanding medium into the final momentum space asymmetry,
the measurements of the anisotropic flow
provide a unique experimental information about
the properties of the created medium, its evolution,
and profile of the initial conditions in a heavy-ion collision.
Recently a number of new high statistics experimental results from LHC
which help to understand details of flow fluctuations become available.
Figure \ref{fig:1}(a) shows the results of the multi-particle correlation analysis
of the directed $v_1$, elliptic $v_2$, and triangular $v_3$, flow measured
for Pb-Pb collisions at \mbox{$\sqrt{s_{\rm NN}}$ = 2.76~TeV}
by the ALICE Collaboration \cite{Bilandzic:2012an}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.5\textwidth]{Figure_1a}
\includegraphics[width=.46\textwidth]{Figure_1b}
{\mbox{}\\\vspace{-0.2cm}
\hspace{1cm}\mbox{~} \bf (a)
\hspace{+6.7cm}\mbox{~} \bf (b)}
{\mbox{}\\\vspace{-0.1cm}}
\caption{Charged particle $v_1$, $v_2$, and $v_3$
measured by ALICE \cite{Bilandzic:2012an,Hansen:2012ur} in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = $~2.76~TeV.
(a) Multi-particle estimates of $v_1$, $v_2$, and $v_3$ at midrapidity vs. centrality.
(b) Two- and four-particle estimates of $v_2$ vs. pseudorapidity.
}
\label{fig:1}
\end{center}
\end{figure}
Observed non-zero odd flow harmonics represent a significant effect of
the initial energy density fluctuations, especially for the most central collisions,
where all harmonics become comparable to each other.
Agreement among the anisotropic flow estimates with 4, 6, and 8-particle correlations
puts stringent constraints on the shape of the event-by-event flow
distribution.
Differential measurements of the anisotropic flow versus pseudorapidity $\eta$ and transverse momentum $p_{\rm T}$
provide additional information on the extent of flow fluctuations at forward rapidity and higher transverse momenta.
Such measurements performed by the ATLAS~\cite{ATLAS:2012at}, CMS~\cite{Chatrchyan:2012ta}, and ALICE~\cite{Abelev:2012di} Collaborations
are shown in Fig.~\ref{fig:1}(b) and Fig.~\ref{fig:2}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.47\textwidth]{Figure_2a}
\includegraphics[width=.47\textwidth]{Figure_2b}
{\mbox{}\\\vspace{-0.2cm}
\hspace{1cm}\mbox{~} \bf (a)
\hspace{+6.7cm}\mbox{~} \bf (b)}
{\mbox{}\\\vspace{-0.1cm}}
\caption{(a) Charged particle $v_2$, $v_3$, and $v_4$ measured for Pb-Pb collisions at $\sqrt{s_{\rm NN}} = $~2.76~TeV by the ATLAS \cite{ATLAS:2012at}, CMS \cite{Chatrchyan:2012ta}, and ALICE \cite{Abelev:2012di} Collaborations. (b) Estimate of the relative $v_2$ flow fluctuations as a function of the transverse momentum for different collision centrality classes.}
\label{fig:2}
\end{center}
\end{figure}
The flow estimates obtained with the event plane, two and four-particle measurement techniques
\cite{Chatrchyan:2012ta,Abelev:2012di,Hansen:2012ur}
suggest that relative flow fluctuations weakly depend on $p_{\rm T}$ and $\eta$.
Figure \ref{fig:2}(b) shows that fluctuations extend to higher $p_{\rm T}$ up to about 8~GeV/c,
while Fig. \ref{fig:1}(b) indicates a similar amount of relative fluctuations up to $|\eta|\sim 5$.
Further constraints on the effects of the initial energy fluctuations are provided by the measurements
of the multi-particle mixed harmonic correlations and the correlations between the different order symmetry planes.
Figure~\ref{fig:3} shows corresponding results by the ATLAS~\cite{Jia:2012sa} and ALICE~\cite{Bilandzic:2012an} Collaborations in comparison with the expectations from the event-by-event hydrodynamic model calculations using
two different profiles of the initial condition~\cite{Qiu:2012uy}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.335\textwidth]{Figure_3a}~~
\includegraphics[width=.54\textwidth]{Figure_3b}
{\mbox{}\\\vspace{-0.4cm}
\hspace{-1cm}\mbox{~} \bf (a)
\hspace{+6.7cm}\mbox{~} \bf (b)}
{\mbox{}\\\vspace{-0.1cm}}
\caption{(a)
2nd and 4th order event plane correlation vs. the number of participants (centrality)
measured by ATLAS \cite{Jia:2012sa} for
Pb-Pb collisions at $\sqrt{s_{\rm NN}} = $~2.76~TeV.
Results are compared with the event-by-event hydrodynamic model calculations.
(b) Three-particle azimuthal correlation proposed in \cite{Teaney:2010vd}
measured by ALICE \cite{Bilandzic:2012an} for the same collision system.
}
\label{fig:3}
\end{center}
\end{figure}
Observed correlations between the two and three planes of symmetry,
and their qualitative agreement with the hydrodynamic model calculations,
point to their strong sensitivity to the details of the fluctuating initial energy profile.
Among other promising developments in the fluctuation studies are the shape measurements
of the event-by-event flow distributions (see e.g. \cite{Jia:2012ve}), and
the measurements of physics observables for event classes selected based on their azimuthal shapes.
This new type of study suggested in \cite{Schukraft:2012ah} was successfully applied in the real data analysis
by the ALICE Collaboration \cite{Dobrin:2012zx,Milano:2012qm}.
\section{Elliptic flow of identified particles}
Viscous hydrodynamics is considered to be a relevant model to describe a thermalized phase
in the time evolution of the system created in a heavy-ion collision.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.44\textwidth]{Figure_4a}\mbox{~~}
\includegraphics[width=.43\textwidth]{Figure_4b}
{\mbox{}\\\vspace{-0.4cm}
\hspace{1cm}\mbox{~} \bf (a)
\hspace{+6.7cm}\mbox{~} \bf (b)}
{\mbox{}\\\vspace{-0.1cm}}
\caption{Identified particle $v_2(p_{\rm T})$ measured by ALICE \cite{Noferini:2012ps} for Pb-Pb collisions at $\sqrt{s_{\rm NN}} = $~2.76~TeV.
Charged pion, kaon, and proton $v_2$ compared to that of (a) $\Xi$ and $\Omega$ and the hydrodynamic model calculations \cite{Shen:2011eg}, and (b) with $v_2$ of $\phi$-meson, $K^0_s$ and $\Lambda$/$\bar\Lambda$ particles.}
\label{fig:4}
\end{center}
\end{figure}
Applicability of the hydrodynamic description at the LHC energies
can be tested by measuring the particle mass dependence of the elliptic
and triangular flow at small transverse momenta, $p_{\rm T}<2-3$~GeV/$c$.
Figure~\ref{fig:4} shows $v_2(p_{\rm T})$ of pions and anti-protons
in comparison to that of strange
($K^0_s$, $\Lambda$/$\bar\Lambda$) and multi-strange ($\phi$, $\Xi$, and $\Omega$) particles.
The main trends of the observed mass splitting of $v_2$ at low transverse momenta are
reproduced by viscous hydrodynamic model calculations \cite{Shen:2011eg}
with a color glass condensate initial condition (see Fig.~\ref{fig:4}(a)).
The flow of heavier particles (proton, $\phi$-meson, $\Xi$ and $\Omega$)
is more sensitive to the hadronic rescattering phase
and the agreement with data improves when adding
this additional phase into the model calculations (see Fig. 2 in \cite{Noferini:2012zz}).
The $v_2$ of the $\phi$-meson, shown in Fig.~\ref{fig:4}(b),
follows the mass splitting expected from hydrodynamics at lower transverse momenta,
while its magnitude is close to the $v_2$ of other mesons (pions)
in the intermediate region of $p_{\rm T} \sim 3-6$~GeV/$c$.
Such behavior suggests the constituent quark number ($n_q$) scaling of $v_2$,
which is expected for mesons and baryons produced via quark coalescence
in a phase of deconfined quarks and gluons. This scaling
may indeed hold at the LHC within 20\% at $p_{\rm T}/n_q \sim 1.2$~GeV/$c$
(see Figs.~4 and 5 in~\cite{Noferini:2012ps}).
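The constituent quark number scaling test amounts to a simple rescaling of both axes of the $v_2(p_{\rm T})$ curve. The sketch below illustrates this; the sample values in the usage note are purely illustrative and not ALICE data.

```python
def ncq_scale(pt_values, v2_values, n_q):
    """Rescale a v2(pT) curve by the constituent quark number n_q.
    Under coalescence, mesons (n_q = 2) and baryons (n_q = 3) should
    approximately collapse onto one curve of v2/n_q versus pT/n_q."""
    return [(pt / n_q, v2 / n_q) for pt, v2 in zip(pt_values, v2_values)]
```

For example, a baryon point at $p_{\rm T}=3.6$~GeV/$c$ with $v_2=0.24$ and $n_q=3$ maps to the scaled point $(1.2,\,0.08)$.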
\section{Probes of local parity violation in strong interaction}
Possible violation of parity symmetry by the strong interaction
remains one of the open fundamental questions of quantum chromodynamics.
It is argued \cite{Kharzeev:2007jp} that parity
symmetry can be locally violated in heavy-ion collisions,
which would result in an experimentally observable separation of charges
along the extreme magnetic field generated by the moving ions,
the so-called chiral magnetic effect (CME).
As an experimental probe of the CME it was proposed \cite{Voloshin:2004vk} to
use the azimuthal correlations with respect to the collision reaction plane
which is perpendicular to the magnetic field generated in the collision.
The STAR experiment~\cite{Abelev:2009ac} at RHIC and ALICE~\cite{Abelev:2012pa} at the LHC observed
a clear charge dependence of the two-particle correlation with respect to the reaction plane.
The measured observable is parity even and is thus also sensitive to effects unrelated to parity symmetry violation.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.53\textwidth]{Figure_5a}
\includegraphics[width=.38\textwidth]{Figure_5b}
{\mbox{}\\\vspace{-0.3cm}
\hspace{2cm}\mbox{~} \bf (a)
\hspace{+6.7cm}\mbox{~} \bf (b)}
{\mbox{}\\\vspace{-0.1cm}}
\caption{
Charge dependent correlations measured by ALICE~\cite{Hori:2012hi} for Pb-Pb collisions at $\sqrt{s_{\rm NN}} = $~2.76~TeV.
(a) The same minus the opposite charge correlations compared to the local charge conservation model \cite{Hori:2012kp}.
(b)~Correlation $\langle \cos(\phi_{\alpha}-3\phi_\beta+2\Psi_{2})\rangle$
vs. rapidity separation~\cite{Hori:2012hi}.
}
\label{fig:5}
\end{center}
\end{figure}
Recently the ALICE Collaboration extended \cite{Hori:2012hi} the measurements
with a new set of mixed harmonic correlators
$\langle \cos(\phi_{\alpha}-(m+1)\phi_\beta+m\Psi_{m})\rangle$ (for $m=\pm 2, \pm 4$)
and the double harmonic correlator $\langle \cos (2\phi_{\alpha}+2\phi_\beta-4\Psi_{4})\rangle$,
where $\phi_{\alpha,\beta}$ are the azimuthal angles, $\alpha,\beta$ denote the particle charges, and $\Psi_{m}$ is the $m$-th order collision symmetry plane angle.
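For a single event with known symmetry plane angle, the mixed harmonic correlator defined above is simply a pair average over the two charge samples. The following is a minimal sketch of that definition, not the ALICE analysis code; the angles used in the test are hypothetical.

```python
import math

def mixed_harmonic_correlator(phi_alpha, phi_beta, psi_m, m):
    """<cos(phi_alpha - (m+1)*phi_beta + m*Psi_m)> averaged over all
    particle pairs drawn from the two charge samples of one event."""
    total = 0.0
    pairs = 0
    for pa in phi_alpha:
        for pb in phi_beta:
            total += math.cos(pa - (m + 1) * pb + m * psi_m)
            pairs += 1
    return total / pairs
```

For $m=-2$ the argument reduces to $\cos(\phi_{\alpha}+\phi_\beta-2\Psi_{2})$, the correlator quoted above.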
Figure~\ref{fig:5}(a) shows the correlation difference $\Delta\langle \cos(\phi_{\alpha}-(m+1)\phi_\beta+m\Psi_{m})\rangle$
between the same and the opposite charge combinations.
The expected CME contribution to the correlator $\langle \cos(\phi_{\alpha}+\phi_\beta-2\Psi_{2})\rangle$ ($m=-2$ in Fig.~\ref{fig:5}) at the LHC energies is based on the extrapolation of the RHIC results.
Depending on the model assumptions, it varies from about 20\% \cite{Toneev:2010xt}
up to the full strength \cite{Kharzeev:2007jp,Zhitnitsky:2010zx,Zhitnitsky:2012im} of the correlations
observed at the LHC, if one assumes that the magnetic field flux is independent of the collision energy.
It is noted \cite{Pratt:2010gy} that effects of local charge conservation
modulated by the anisotropic flow can be responsible
for a significant part of the measured correlations.
The statement is also supported by model calculations \cite{Hori:2012kp} shown in Fig.~\ref{fig:5}(a).
The charge independent part of the correlations can be dominated
by effects of flow fluctuations~\cite{Teaney:2010vd}, although the
differential study of the charge dependent correlation
shown in Fig.~\ref{fig:5}(b) reveals strong dependence
on the pair separation in pseudorapidity which is not typical for flow fluctuations.
At the moment none of the models is able to reproduce simultaneously the charge
dependence and the charge insensitive baseline of the measured correlations.
\section{Summary and outlook}
The anisotropic flow and the multi-particle azimuthal correlation measurements at the LHC
help to clarify the role of the initial energy fluctuations in
the multiple particle production in a heavy-ion collision.
The particle type and mass dependence of the anisotropic flow suggest that
the system produced in a heavy-ion collision
is strongly coupled and evolves through a deconfined phase of quarks and gluons.
The new measurements of the mixed harmonic charge dependent azimuthal correlations help
to constrain possible background sources in the search for
effects of parity symmetry violation in QCD.
\section*{Acknowledgements}
Supported by the Helmholtz Alliance
Program of the Helmholtz Association, contract HA216/EMMI ``Extremes of
Density and Temperature: Cosmic Matter in the Laboratory".
\section*{References}
\section{Introduction}
\label{intro}
A political party, according to \cite{ACE2012}, is an organized group of people who exercise their legal rights to identify with a set of similar political aims and opinions, and one that seeks to influence public policy by getting its candidates elected to public office. Political parties gain control over the government by winning elections with candidates they officially sponsor or nominate for positions in government. They also coordinate political campaigns and mobilize voters. Political parties exist to win elections in order to influence public policy. This requires them to build coalitions across a wide range of voters who share similar preferences. Even though the presentation of candidates and the electoral campaign are the functions most visible to the electorate(s), political parties fulfill many other vital roles which could directly or indirectly influence the people that are registered to vote (the voters)\footnote{Lumen. https://courses.lumenlearning.com/americangovernment/chapter/introduction-9/}. The appearance of candidates in the electoral campaign is also widely accepted to influence election outcomes \cite{HOEGG2011}.
The popularity of social networking, micro-blogging and blogging websites has turned them into a varied channel through which people express their thoughts and feelings, and as a result a huge quantity of data is generated. For example, the nature of micro-blogging allows users to post realtime messages about their opinions on a variety of topics, discuss current issues, complain, and express sentiments (positive/negative) about things that influence their daily life. Recently, people have been sounding off online like never before, for example checking the reviews or ratings of movies or products before watching the movies in theatres or buying the products. In fact, manufacturing companies and politicians have started to use micro-blogs as a medium to get a sense of general sentiment about the way people view their products, views or personalities. Hence, this paper investigates whether the sentiment analysis of political data can discover insights that show the influence political parties have on their candidates, or vice versa, which may lead to their winning or losing an election. This is certainly interesting as it guides political parties/candidates to know whether people support their program or not.
Social media and other information and computer technologies have changed the dynamics of politics and political participation in Nigeria since 2011. Political actors, likewise political parties, now reach out to a wider political space in millions across climes with their manifestoes and ideologies without embarking on distance tours. Social media has added colouration to the techniques of political campaigning even in developing countries. A large percentage of our population has access to and is also consciously connected to social media for idea and news sharing, information and entertainment \cite{Bettina2009,nwagbosnc2016}. \cite{nwagbosnc2016} further maintained that social media grants many people the chance to participate actively in political discourses by adding their views to issues under discussion.
In this study, we try to understand empirically from Twitter discussions if political parties or their candidates could influence winning or losing an election. For this purpose, we use Twitter data collected on the election day of the Anambra State Gubernatorial Election held on November 18, 2017. To do this, we use a Natural Language Processing (NLP) method called Sentiment Analysis (SA) to conduct data analysis experiments on the election Twitter data. The experiments involve the polarity sentiment analysis (PSA) and the subjectivity sentiment analysis (SSA) on all the tweets considering time as a useful dimension of SA.
Our purpose in PSA and SSA is to find the attitudes of the people towards the political actors and to evaluate whether Twitter users were tweeting facts during the election or whether most of their messages were emotional subjective opinions in a given time frame. Furthermore, using \textit{word frequency} and a topic modeling algorithm, we find the words most associated with the political actors and the most talked about topics, and how they are related to each other per political actor in a given time frame.
Thus, to analyze tweets in terms of polarity and subjectivity, we propose the following research questions:
\textbf{Research Question 1}: \textit{How does the sentiment of the tweets for a particular candidate/party behave across a given time frame, and what does it reveal about the attitudes of the public towards the political actors?}
\textbf{Research Question 2}: \textit{How did the subjectivity scores for each candidate/party vary across time, and which candidate/party's mention alone received a high frequency score in more subjective tweets?}
Time has been considered in the literature as a useful dimension for sentiment analysis \cite{Giachanou20162,nguyen2012predicting}. Tracking opinion over time is a powerful tool that can be used for sentiment prediction or to detect the possible reasons for a sentiment change. In particular, understanding topic and sentiment evolution during an election allows the government, election observers or the public to capture sentiment changes and act promptly. For example, understanding the sentiment change on a particular candidate during an election can reveal possible topic trends that show people's attitude towards the candidate.
To evaluate the stated research questions, we performed the following experimental analyses on the Twitter data we collected (see Table \ref{tab:dataset}):
\begin{itemize}
\item Polarity and subjectivity analyses considering a two-hourly time granularity attribute. The averages of the two analyses' scores are then calculated. Every two hours generally means tracking topic changes eight times a day from \textit{06:00 to 23:59}.
\item Find the most talked about topics. In each topic, we investigate a political actor's name with the highest frequency of occurrence. Investigation includes
\begin{enumerate}
\item which of the political actor whose mention alone in a tweet got a high frequency score in more polarity or subjectivity tweets.
\item How important are the words most associating to a political actor in a given topic?
\end{enumerate}
\end{itemize}
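The two-hourly averaging described in the first step above can be sketched as follows. This is a simplified, standard-library illustration under our own naming assumptions, not the analysis code itself; the timestamps in the test are hypothetical.

```python
from collections import defaultdict
from datetime import datetime

def two_hour_window(ts):
    """Start hour of the two-hour window a timestamp falls into."""
    return (ts.hour // 2) * 2

def average_polarity_per_window(scored_tweets):
    """scored_tweets: iterable of (datetime, polarity score) pairs.
    Returns {window start hour: mean polarity} over 06:00-23:59."""
    sums, counts = defaultdict(float), defaultdict(int)
    for ts, polarity in scored_tweets:
        if 6 <= ts.hour <= 23:          # election-day window 06:00-23:59
            w = two_hour_window(ts)
            sums[w] += polarity
            counts[w] += 1
    return {w: sums[w] / counts[w] for w in sums}
```

The same binning is reused for the subjectivity scores.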
The experiments started with preprocessing the tweets and performing initial investigations on them to discover the most common co-occurring words and the number of tweets per candidate and political party. Furthermore, we group the tweets based on the names of interest (top five political parties and candidates) and perform sentiment analysis using TextBlob's Naive Bayes Classifier (NBC)\footnote{https://textblob.readthedocs.io/en/dev/} and SENTIWORDNET \cite{Esuli2007} on the set of tweets in each group to determine the polarity of each tweet. For finding the most talked about topics and the most frequently associated words, we use Latent Dirichlet Allocation (LDA) \cite{blei2003} and \textit{word frequency} respectively.
\section{Related Work}
\label{relatedlit}
In modern politics, Twitter has been at the forefront of political discourse, with politicians choosing it as their platform for disseminating information to their constituents. This has prompted parties and their candidates to maintain an online presence, which is usually managed by dedicated social media coordinators.
In this section, we present some previous works related to sentiment analysis of Twitter discussions on politics. Sentiment analysis is a Natural Language Processing task, where the system has to test the sentiments of texts based on the training data, which obviously sounds like a machine learning problem. Starting from being a document level classification task \cite{turney2002thumbs,pang2004sentimental}, it has been handled at the sentence level \cite{hu2004mining,kim2004determining} and more recently used in the analysis of political texts, especially from the Twitter collection. \cite{tumasjan2010predicting} validate Twitter as a forum for political deliberation and validly mirror offline political sentiment based on the context of the German federal election. \cite{Heredia2011} explores the effectiveness of social media as a resource for both polling and predicting the election outcome. \cite{wang2012system} analyze public sentiment toward presidential candidates in the 2012 U.S. election as expressed on Twitter. \cite{vilares2015megaphone} ranks political leaders, parties and personalities for popularity by analysing Spanish political tweets. \cite{conover2011political} analysis of political polarization on twitter demonstrates that the network of political retweets exhibits a highly segregated partisan structure, with extremely limited connectivity between left- and right-leaning users. \cite{razzaq2014prediction} experimental results validate social media content as an effective indicator for capturing political behaviours of different parties. In other words, positive, negative and neutral behaviour of the party followers as well as the party's campaign impact can be predicted. \cite{Dahal2019} research shows that social media websites can be used as a data source for mining public opinion on a variety of subjects and LDA was applied for topic modeling to infer the different topics of discussion. 
\cite{Boutet2013} researches on the usefulness of analyzing Twitter messages to identify both the characteristics of political parties and the political leaning of users. \cite{Makazhanov2014} reveals in their work that the political preference of users can be predicted from their Twitter behaviour towards political parties. Furthermore, \cite{Pak2010TwitterAA} has shown that Twitter, a microblogging platform, is valid for building a corpus for sentiment analysis and opinion mining. \cite{deFran2018} proposes a method to segment the Twitter users into groups such as popular, activists and observers to help filter out information and give a more detailed analysis of the important events.
These researches demonstrated that political insight is a phenomenon present on Twitter, hence, this paper presents a comprehensive sentiment analysis considering the common co-occurring tweet words and polarized tweets connections among such groups as a political party, candidate and political party cum its candidates.
\section{Methodology}
Figure \ref{fig:methodsteps} shows the steps of the methodology followed in this work. In the first step, we collect Twitter data and describe the process of tweet collection that produced the data for this work. In the second step, we perform some clean-up on the collected data; this process is discussed in detail in section \ref{preprocessing}. The third step is the political groups' analyses in section \ref{preprocessing}, which are the groups of data recorded in the \textit{collection} column of Table \ref{tab:dataset}. The fourth and fifth steps of the methodology are analysed in sections \ref{explysis} and \ref{sentilysis} respectively. We perform the exploratory and sentiment analyses of the tweet texts for three distinct groups: the individual parties, the candidates, and the individual parties cum their candidates.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\textwidth]{method_flow.png}
\caption{\scriptsize Methodology steps}
\label{fig:methodsteps}
\end{figure}
\subsection{Data Collection}
\label{datacll}
This section presents information about Anambra State and its gubernatorial election of 18 November 2017, and about the Twitter social network and its features. The method of collecting the Twitter data for the November 18, 2017, Anambra State gubernatorial election is discussed in the following subsections.
\subsubsection{November 18, 2017 Anambra State Gubernatorial Election}
\label{AGEactors}
Anambra is a state in southeastern Nigeria\footnote{https://en.wikipedia.org/wiki/Anambra\_State} with 21 Local Government Areas (LGAs). The State Gubernatorial Election (SGE) is conducted every 4 years, just like in every other state in the country. The November 18, 2017, SGE is significant in that the state became the first in the nation to have 37 political parties and candidates participate in a governorship election. In this paper, we only looked at the five major parties and their candidates: Willie Obiano (the incumbent Governor) of the state ruling All Progressive Grand Alliance (APGA), Tony Nwoye of the national ruling All Progressives Congress (APC), Oseloka Obaze of the People's Democratic Party (PDP), Osita Chidoka of the United Progressive Party (UPP), and Godwin Ezeemo of the People's Progressive Alliance (PPA). The APGA candidate swept the entire 21 LGAs in the state according to the election results, pulling a total vote of 234,071 to finish ahead of the candidate of the APC, who got 98,752\footnote{http://saharareporters.com/2017/11/19/governor-willie-obiano-wins-anambra-gubernatorial-election}$^,$\footnote{https://www.vanguardngr.com/2017/11/anambra-election-results-obiano-wins-21-lgas/}. This is of considerable significance in this research since the candidates of the other political parties involved are from some of the 21 LGAs.
\subsubsection{Twitter}
Twitter\footnote{http://twitter.com} is a social network classified as a microblog with which users can share messages, links to external websites, images, or videos that are visible to other users subscribed to the service. Messages that are posted on microblogs are short in contrast to traditional blogs. Blogging becomes 'micro' by shrinking it down to its bare essence, relaying the heart of the message and communicating what is necessary as quickly as possible in realtime. Twitter, as of 2016, limited its messages to 140 characters \cite{Giachanou2016}. There are other microblogging platforms such as Tumblr\footnote{https://www.tumblr.com/}, FourSquare\footnote{https://foursquare.com/}, Google+\footnote{http://plus.google.com.}, and LinkedIn\footnote{http://inkedin.com/}, of which Twitter is the most popular microblog, launched in 2006 and since then having attracted a large number of users. Research, as presented in the \textit{Related Work} section, has shown that Twitter data is well suited as a corpus for sentiment analysis and opinion mining.
\subsubsection{Collecting data from Twitter}
\label{tweetcoll}
As discussed in Crawling Twitter Data of \cite{Shamanth2014}, data collection was done using the Twitter Streaming Application Programming Interface (API) and Python. An API is a tool that makes the interaction with computer programs and web services easy, and it enables real-time collection of tweets. Many web services provide APIs for developers to interact with their services and to access data in a programmatic way. For this work, we use the Twitter Streaming API to download tweets related to 3 keywords: ``\#anambradecides2017'', ``\#anambraelections'' and ``\#anambradecides'' on the day of the election. The objective of the real-time collection was to collect only tweets about the election published on the same day. We base this on the hypothesis that \textit{if there is a tweet about the Anambra State election that same day, then that tweet could be making a reference to what the user is experiencing at the moment about the election}. The Twitter data that we collected is stored in JSON format to make it easy for humans to read and for computers to parse.
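The keyword filter applied to the stream can be sketched as below. The streaming client itself is omitted, and the tweet texts in the test are hypothetical; only the three tracked hashtags come from the study.

```python
# the three hashtags tracked on election day
TRACKED = {"#anambradecides2017", "#anambraelections", "#anambradecides"}

def is_election_tweet(text):
    """True if the tweet text contains any of the tracked hashtags,
    ignoring case and trailing punctuation."""
    tokens = text.lower().split()
    return any(tok.strip(".,!?:;") in TRACKED for tok in tokens)
```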
\subsection{Preprocessing Tweets Data}
\label{preprocessing}
Figure \ref{fig:preprocessingsteps} shows five basic steps we took in preprocessing the dataset of tweets we collected as discussed in \ref{tweetcoll} section.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\textwidth]{preprocessing_steps.png}
\caption{\scriptsize Basic steps of preprocessing standard tweets}
\label{fig:preprocessingsteps}
\end{figure}
\begin{table}[!htb]
\footnotesize
\caption{\scriptsize Dataset used in this study.}
\label{tab:dataset}
\centering
\begin{tabular}{l|l|r|r}
\hline
SN & Political Actors & Total of & Total of \\
& Tweets Collection & Tweets Before & Tweets After \\
& & Preprocessing & Preprocessing \\
\hline
1 & willie\_obiano & 1056 & 535 \\
2 & oseloka\_obaze & 249 & 147 \\
3 & nwoye\_tony & 602 & 235 \\
4 & godwin\_ezeemo & 118 & 77 \\
5 & osita\_chidoka & 277 & 118 \\
6 & apga & 1728 & 668 \\
7 & pdp & 1134 & 412 \\
8 & apc & 1980 & 394 \\
9 & ppa & 94 & 39 \\
10 & upp & 2 & 2 \\
11 & willie\_obiano\_apga & 2543 & 1079 \\
12 & oseloka\_obaze\_pdp & 1258 & 489 \\
13 & nwoye\_tony\_apc & 2464 & 577 \\
14 & godwin\_ezeemo\_ppa & 160 & 92 \\
15 & osita\_chidoka\_upp & 279 & 120 \\
\hline
16 & Total Tweets & 33502 & 7430 \\
\hline
\end{tabular}
\end{table}
We did preprocessing in two ways: \textit{Method 1} involves using tweet-preprocessor, a preprocessing library for tweet data written in Python, to clean and tokenize the tweets. Tokenization involves converting a sentence into a list of words. In \textit{Method 2}, we manually defined a function to double-check our tweet preprocessing and remove other unwanted tweets such as retweets. This is to be sure that our data is reasonably clean. Moreover, spelling correction is one of the unique functionalities of the TextBlob library; with the correct method of the TextBlob object, we corrected all the spelling mistakes in our tweets. The final steps involve removing stopwords and punctuation, and stemming, which transforms any form of a word to its root word. Also included in the lists of stopwords are the party and candidate names, especially when we want to generate a word cloud image for any of the political parties or candidates, since they are the targets. This is to enable meaningful words to be displayed rather than having party/candidate names all over the word cloud image.
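The manual cleaning function of \textit{Method 2} can be sketched along these lines. This is a simplified stand-in for the actual implementation; the stopword list below is illustrative and, as described above, also contains the party names.

```python
import re

# illustrative stopword list; the real one also includes candidate names
STOPWORDS = {"the", "a", "an", "is", "rt", "apga", "apc", "pdp", "upp", "ppa"}

def clean_tweet(text):
    """Lower-case, strip URLs/mentions/punctuation, tokenize, drop stopwords."""
    text = text.lower()
    text = re.sub(r"https?://\S+", "", text)  # remove URLs
    text = re.sub(r"@\w+", "", text)          # remove mentions
    text = re.sub(r"[^a-z\s]", " ", text)     # remove punctuation, digits, '#'
    return [w for w in text.split() if w not in STOPWORDS]
```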
Table \ref{tab:dataset} shows the total number of tweets we collected and the tweets associated with each political party and candidate before and after preprocessing. Rows 1 to 15 show the names we are interested in investigating in this study; we added columns of Booleans that indicate whether a name of interest was in the tweet or not. The \textit{Total of Tweets After Preprocessing} column shows that the names of interest for this work formed 67.08\% of the 7430 total tweets. The remaining percentage is unclassified tweets. This experiment focuses on investigating whether the political actors stated in section \ref{AGEactors} can influence winning or losing an election. The preprocessed tweets are stored in a CSV file, which enables data storage in columns of variables and rows of observations.
\subsection{Experimental Tools}
\label{exptools}
In this experimental analysis, we use SentiWordNet \cite{Esuli2007} to compare the overall analysis scores of TextBlob's Naive Bayes Classifier (NBC). The two sentiment classifiers are used to determine the overall polarity scores for the sake of comparison, while TextBlob is further used to perform detailed polarity analysis on the political actors and to determine the subjectivity of tweets. LDA (short for Latent Dirichlet Allocation) and word frequency are used for topic modeling to infer the different topics of discussion and to find the most commonly occurring words, respectively.
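For word frequency, a standard-library sketch suffices; the token lists in the test are hypothetical examples, not the actual corpus.

```python
from collections import Counter

def most_common_words(token_lists, n=10):
    """Aggregate word frequency over preprocessed tweets (lists of tokens)
    and return the n most common (word, count) pairs."""
    counts = Counter()
    for tokens in token_lists:
        counts.update(tokens)
    return counts.most_common(n)
```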
TextBlob is an extremely powerful NLP library for Python for processing textual data. It provides a consistent API for diving into common Natural Language Processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, sentiment analysis, and more. NBC is a classification technique based on Bayes' theorem with an assumption of independence among predictors. In simple terms, an NBC assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. NBC is based on Bayes' theorem:
\begin{center}
$P(A|B) = \frac{P(B|A)~*~P(A)}{P(B)}$
\end{center}
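A minimal sketch of how an NBC combines the prior with per-word likelihoods under the independence assumption is shown below. The probabilities used in the test are made-up illustrations, not TextBlob's trained model.

```python
def nbc_posterior(priors, word_probs, words, floor=1e-6):
    """Compute P(class | words) via Bayes' theorem, treating each word as
    conditionally independent: P(c | w1..wn) proportional to P(c) * prod P(wi | c)."""
    scores = {}
    for cls, prior in priors.items():
        p = prior
        for w in words:
            p *= word_probs[cls].get(w, floor)  # small floor for unseen words
        scores[cls] = p
    total = sum(scores.values())                # normalisation (the P(B) term)
    return {cls: s / total for cls, s in scores.items()}
```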
SENTIWORDNET \cite{Esuli2007} is the result of the automatic annotation of all the synsets\footnote{A special kind of a simple interface that is present in NLTK to look up words in WordNet.} of WORDNET according to the notions of `positivity', `negativity', and `neutrality'. Each synset \textit{s} is associated with three numerical scores \textit{Pos(s)}, \textit{Neg(s)}, and \textit{Obj(s)} which indicate how positive, negative, and ``neutral'' the terms contained in the synset are. Different senses of the same term may thus have different opinion-related properties. For example, in SENTIWORDNET 1.0 the synset [estimable(J,3)], corresponding to the sense ``may be computed or estimated'' of the adjective estimable, has an Obj score of 1.0 (and Pos and Neg scores of 0.0), while the synset [estimable(J,1)], corresponding to the sense ``deserving of respect or high regard'', has a Pos score of 0.75, a Neg score of 0.0, and an Obj score of 0.25. Each of the three scores ranges in the interval [0.0, 1.0], and their sum is 1.0 for each synset. This means that a synset may have nonzero scores for all three categories, which would indicate that the corresponding terms have, in the sense indicated by the synset, each of the three opinion-related properties to a certain degree.
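The score triple can be illustrated as follows. Collapsing it to a single polarity as Pos(s) - Neg(s) is a common simplification and an assumption of ours, not part of the SENTIWORDNET definition itself.

```python
def synset_polarity(pos, neg, obj):
    """Given SentiWordNet-style scores (which must sum to 1.0),
    return a single polarity value in [-1, 1] as Pos - Neg."""
    assert abs(pos + neg + obj - 1.0) < 1e-9, "scores must sum to 1"
    return pos - neg
```

For the sense [estimable(J,1)] above, synset_polarity(0.75, 0.0, 0.25) gives 0.75.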
LDA \cite{blei2003} is an unsupervised machine-learning model that takes documents as input and finds topics as output. The model also says in what percentage each document talks about each topic. Hence, a topic is represented as a weighted list of words.
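To make the idea concrete, here is a toy collapsed Gibbs sampler for LDA in pure Python. This is a didactic sketch only; the study itself applies a standard LDA implementation to the tweets, and the documents in the test are invented.

```python
import random
from collections import defaultdict

def toy_lda(docs, n_topics, n_iter=100, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed Gibbs sampler for LDA. Returns each topic as a
    weighted word list: a dict word -> probability (summing to 1)."""
    rng = random.Random(seed)
    vocab = sorted({w for doc in docs for w in doc})
    V = len(vocab)
    doc_topic = [[0] * n_topics for _ in docs]                # n(d, k)
    topic_word = [defaultdict(int) for _ in range(n_topics)]  # n(k, w)
    topic_total = [0] * n_topics                              # n(k)
    z = []  # random initial topic assignment for every token
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:
            k = rng.randrange(n_topics)
            zs.append(k)
            doc_topic[d][k] += 1
            topic_word[k][w] += 1
            topic_total[k] += 1
        z.append(zs)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # remove the token from the counts, then resample its topic
                doc_topic[d][k] -= 1
                topic_word[k][w] -= 1
                topic_total[k] -= 1
                weights = [(doc_topic[d][j] + alpha) *
                           (topic_word[j][w] + beta) / (topic_total[j] + beta * V)
                           for j in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                doc_topic[d][k] += 1
                topic_word[k][w] += 1
                topic_total[k] += 1
    return [{w: (topic_word[k][w] + beta) / (topic_total[k] + beta * V)
             for w in vocab} for k in range(n_topics)]
```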
\subsection{Exploratory Twitter Data Analysis - EDA}
\label{explysis}
We use the EDA approach to analyse the \textit{Total Tweets} in Table \ref{tab:dataset} after preprocessing, summarizing their main characteristics with visualizations. The EDA process is a necessary step prior to sentiment analysis or building a model, in order to unravel various insights that will become important later.
\subsection{Sentiment Analysis}
\label{sentilysis}
Sentiment analysis of tweets involves understanding the attitudes, opinions, views and emotions from tweets using Natural Language Processing (NLP) techniques. In this section, we look at sentiment analysis involving subjectivity and polarity.
\subsubsection{Polarity and Subjectivity Analyses}
Polarity is a sentiment analysis that determines whether a tweet expresses a positive, negative or neutral opinion. This enables the determination of the attitude of Twitter users towards topics under discussion by quantifying the sentiment of texts.
Subjectivity is a sentiment analysis that classifies a text as opinionated or not opinionated. Terms such as adjectives, adverbs and some group of verbs and nouns are used to identify a subjective opinion. Speech patterns such as the use of adjectives along with nouns are used as an indicator for the subjectivity of a statement \cite{kharde2016sentiment,yaqub2018analysis}. Thus subjectivity analysis is the classification of sentences as subjective opinions or objective facts.
In this study, we have used tools as described in section \ref{exptools} on our dataset. Hence from the dataset in Table \ref{tab:dataset}, we classify tweets as positive, negative or neutral opinions and further identify the ones that are subjective from those that are objective. Furthermore, we compute word frequency to find most talked about words, identify most discussed topics using LDA and look at how these most frequent words contribute to the importance of the LDA topics. Finally, we describe how the most frequent words and most discussed topics are related to the computed sentiments per political actor at a given time.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.4\textwidth]{polarity_subjectivity.png}
\caption{\scriptsize Polarity and subjectivity metrics}
\label{fig:polmetrics}
\end{figure}
Figure \ref{fig:polmetrics} briefly describes the polarity and subjectivity metrics. Algorithm 1 shows the subjectivity and polarity calculations. As explained in section \ref{exptools}, for polarity we are looking at how positive or negative a tweet is: -1 is very negative while +1 is very positive. For subjectivity, we are looking at how subjective or opinionated a tweet is: 0 is a fact while +1 is very much an opinion.
\begin{algorithm}[H]
\small
\SetAlgoLined
polarity\_vals = []\;
subjectivity\_vals = []\;
\While{length of tweets dataframe is not zero}{
senti = SentimentClassifier(tweet)\;
polarity\_vals.append(senti.sentiment.polarity)\;
subjectivity\_vals.append(senti.sentiment.subjectivity)\;
}
\caption{ \scriptsize Algorithm to calculate the polarity and subjectivity of each tweet.}
\end{algorithm}
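Algorithm 1 can be rendered as a short Python loop. The lexicon-based classifier below is a toy stand-in used purely to make the sketch self-contained and runnable; in the actual analysis, the TextBlob sentiment object plays the role of \texttt{SentimentClassifier}, and the lexicon entries are invented.

```python
# Toy lexicon standing in for the trained classifier (illustrative values).
LEXICON = {"win": (0.8, 0.9), "rigged": (-0.7, 0.8), "polling": (0.0, 0.0)}

def sentiment(tokens):
    """Return (polarity, subjectivity) for one tweet, averaged over lexicon hits."""
    hits = [LEXICON[w] for w in tokens if w in LEXICON]
    if not hits:
        return 0.0, 0.0
    return (sum(h[0] for h in hits) / len(hits),
            sum(h[1] for h in hits) / len(hits))

def score_corpus(tweets):
    """Algorithm 1: accumulate per-tweet polarity and subjectivity values."""
    polarity_vals, subjectivity_vals = [], []
    for tokens in tweets:
        pol, subj = sentiment(tokens)
        polarity_vals.append(pol)
        subjectivity_vals.append(subj)
    return polarity_vals, subjectivity_vals
```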
\section{Results}
This section presents the results of our experiments based on the research questions stated in the introduction. While we used sentiment analysis to study people's attitudes towards political candidates and parties and whether such attitudes are subjective or not, the use of Exploratory Data Analysis gives us quantitative clues about our Twitter dataset.
\subsection{Exploratory Data Analysis (EDA)}
\begin{figure}[!htb]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{plot_names_interest.png}
\caption{}
\label{fig:namecounts1}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{plot_partynames_interest.png}
\caption{}
\label{fig:namecounts2}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{plot_party_candidate_names_interest.png}
\caption{}
\label{fig:namecounts3}
\end{subfigure}
\caption{\scriptsize A plot of counts for names of interest. (a) is the count for candidates' names of interest. (b) is the count for political parties' names of interest. (c) is the count for both parties' and candidates' names of interest.}
\label{fig:namecounts}
\end{figure}
This process, as explained earlier in \ref{explysis}, is an important step usually performed before sentiment analysis to quantify the dataset in terms of frequency. We start by counting names of interest by adding columns of Booleans to our Pandas data frame (a two-dimensional size-mutable, potentially heterogeneous tabular data structure with labelled axes (rows and columns)) to indicate whether a name of interest was in the tweet or not. See the results in Table \ref{tab:dataset}, rows 1 to 15, and Figure \ref{fig:namecounts}.
\begin{figure}[htb]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{wordcloud_obiano.png}
\caption{}
\label{fig:wordcl_obiano}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{wordcloud_obianoapga.png}
\caption{}
\label{fig:wordcl_obianoapga}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{wcloudobaze.png}
\caption{}
\label{fig:wordcl_obaze}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{wcloudobazepdp.png}
\caption{}
\label{fig:wordcl_obazepdp}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{wcloudtony.png}
\caption{}
\label{fig:wordcl_tony}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{wcloudtonyapc.png}
\caption{}
\label{fig:wordcl_tonyapc}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{wcloudgodwin.png}
\caption{}
\label{fig:wordcl_godwin}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{wcloudgodwinppa.png}
\caption{}
\label{fig:wordcl_godwinppa}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{wcloudchidioka.png}
\caption{}
\label{fig:wordcl_chidioka}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{wcloudchidiokaudp.png}
\caption{}
\label{fig:wordcl_chidiokaudp}
\end{subfigure}
\caption{\scriptsize Words that co-occurred most frequently with the candidates and their parties. (a)-- Most frequent words associated with \textit{Willie Obiano} - the incumbent governor and winner of the election. (b)-- Most frequent words associated with \textit{Willie Obiano cum his party, APGA}. (c)-- Most frequent words associated with \textit{Obaze Oseloka}. (d)-- Most frequent words associated with \textit{Obaze Oseloka cum his party PDP}. (e)-- Most frequent words associated with \textit{Tony Nwoye}. (f)-- Most frequent words associated with \textit{Tony Nwoye cum his party APC}. (g)-- Most frequent words associated with \textit{Godwin Ezeemo}. (h)-- Most frequent words associated with \textit{Godwin Ezeemo cum his party PPA}. (i)-- Most frequent words associated with \textit{Osita Chidioka}. (j)--Most frequent words associated with \textit{Osita Chidioka cum his party UPP}.}
\label{fig:wordfreqCloud}
\end{figure}
Furthermore, we explore the most frequent words associated with the political actors. We added token columns to our Pandas data frame, removed stopwords, punctuation and the names of the political parties and candidates, and then generated a word cloud image of the frequent words. For example, words that co-occurred most frequently with the political actors\footnote{Political candidates and their parties} are shown in Figure \ref{fig:wordfreqCloud}.
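The preprocessing for the word clouds can be sketched as below: tokenize, drop stopwords, punctuation and actor names, then count frequencies. The stopword and actor-name lists here are small illustrative assumptions; in the study the resulting counts were fed to a word-cloud generator.

```python
# Sketch of the word-cloud preprocessing: tokenize, drop stopwords,
# punctuation and actor names, then count word frequencies.
# The word lists below are illustrative, not the study's actual lists.
import re
from collections import Counter

STOPWORDS = {"the", "of", "to", "and", "a", "is", "in"}
ACTOR_NAMES = {"obiano", "apga", "willie"}

def tokens(text):
    words = re.findall(r"[a-z']+", text.lower())   # strips punctuation
    return [w for w in words if w not in STOPWORDS | ACTOR_NAMES]

corpus = [
    "Willie Obiano celebrates victory, APGA coasting to victory!",
    "Obiano addresses breaches of the electoral law.",
]
freq = Counter(w for t in corpus for w in tokens(t))
top_words = freq.most_common(3)   # most frequent co-occurring words
```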
\subsection{Sentiment Analysis}
At the first step of the sentiment analysis, we analyzed tweets using two different sentiment classifiers, TextBlob and SentiWordNet, with the aim of looking at their overall scores. The sole purpose of using these sentiment classifiers was to compare their scores and to determine which classifier to use.
Tweets gathered from public accounts numbered 33,502. However, after pre-processing only 7,430 tweets remained. Of the two sentiment analyzers compared in this research, we found that SentiWordNet had the highest rate of tweets with positive sentiment, 2,916 tweets (39.25\%), while TextBlob had the highest neutral sentiment rate, 3,971 tweets (53.44\%), as can be viewed in Table \ref{tab:sentiscores} and Figure \ref{fig:classifiers_c}.
\begin{table}[!htb]
\caption{\scriptsize Percentage/number of polarity calculations of different sentiment classifiers.}
\label{tab:sentiscores}
\centering
\begin{tabular}{l|c|c|c}
\hline
Sentiment Classifier & Positive & Neutral & Negative \\
\hline
TextBlob & 2447 (32.93\%) & 3971 (53.44\%) & 1012 (13.62\%) \\
SentiWordNet & 2916 (39.25\%) & 3085 (41.52\%) & 1429 (19.23\%) \\
\hline
\end{tabular}
\end{table}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{classifiers_sc.png}
\caption{\scriptsize Polarity calculation with each sentiment classifier.}
\label{fig:classifiers_c}
\end{figure}
We use TextBlob sentiment tools beyond this point because of its popularity. We conducted two kinds of analysis: Polarity Sentiment Analysis (PSA) and Subjectivity Sentiment Analysis (SSA). Firstly, we applied both PSA and SSA to all the tweets regardless of the time of tweeting. Secondly, we considered time as a useful dimension of sentiment analysis. The second analysis addresses research question 1, using time-series topic tracking to find whose name is most mentioned in each topic. In both sentiment analyses, we want to find the attitude of the public towards the political actors and, possibly, the reason(s).
\subsubsection{Polarity Sentiment Analysis (PSA)}
\label{psa}
\textbf{First PSA}: Regardless of the time of the tweets, we compute the sentiment polarity for each tweet in Table \ref{tab:dataset} and aggregate the summary statistics per collection. This analysis includes all the political actors mentioned in section \ref{AGEactors} and in Table \ref{tab:dataset}. Using Algorithm 1, we compute polarity scores between $-1$ and $+1$. A tweet is classified \textit{positive} if its polarity score is $> 0$, \textit{negative} if its polarity score is $< 0$, and \textit{neutral} otherwise. To visualize the overall public opinions or feelings about the election, we compute the sentiment frequency distribution on the overall tweets and per category as recorded in our dataset in Table \ref{tab:dataset}.
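The labelling rule stated above can be sketched directly. The scores below are illustrative stand-ins for classifier output; only the thresholding rule (positive if score is above 0, negative if below 0, neutral otherwise) comes from the text.

```python
# Sketch of the polarity labelling rule: positive if score > 0,
# negative if score < 0, otherwise neutral. Scores are illustrative
# stand-ins for TextBlob output.
from collections import Counter

def label(score):
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

scores = [0.4, 0.0, -0.2, 0.1, 0.0]
distribution = Counter(label(s) for s in scores)   # frequency distribution
```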
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{sentipol_overall.png}
\caption{\scriptsize The distribution of polarity in our Twitter dataset}
\label{fig:sentianalysis1}
\end{figure}
Figure \ref{fig:sentianalysis1} shows the frequency distribution of sentiment polarity in our dataset. From this Figure, it is evident that most of the tweets in our dataset are positive and have polarity between 0 and 0.5.
\begin{figure}[htb]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{sentifreq_cand.png}
\caption{}
\label{fig:sentianalysis2}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{sentifreq_parties.png}
\caption{}
\label{fig:sentianalysis3}
\end{subfigure}
\caption{\scriptsize The polarity sentiment frequency distributions (FD). This shows the FD of all the tweets in Table \ref{tab:dataset} where the political actors are mentioned. We used the top 5 political candidates and their parties. (a) is the polarity sentiment frequency distribution per candidate and (b) is the polarity sentiment frequency distribution per political party.}
\label{fig:sentilysis}
\end{figure}
Figures \ref{fig:sentianalysis2} and \ref{fig:sentianalysis3} show the polarity sentiment frequency distribution for the political actors in the dataset categories of Table \ref{tab:dataset}. Figures \ref{fig:namecounts}, \ref{fig:sentianalysis2} and \ref{fig:sentianalysis3} show the political actors in the Anambra State gubernatorial election conducted on November 18, 2017, and their various scores in frequency and sentiment polarity. The frequency distributions in these experiments considered tweets in which the political actors are mentioned, regardless of whether more than one actor is mentioned in the same tweet. For example, Figure \ref{fig:sentianalysis3} is a count of tweets whose polarity has been identified; the counts include every tweet in which the names of the political actors appear, irrespective of how many of them appear in it. For example, the tweet ``\textit{Anambra Poll: Election observers, APGA, UPP commend timely distribution of materials}''\footnote{referring to election materials.} from our dataset has positive polarity and is counted for both the APGA and UPP political actors.
\noindent \\
\textbf{Second PSA}: Time is considered a useful dimension of sentiment analysis. To answer \textit{research question 1}, we used our Twitter dataset, grouped according to the time of tweeting, to perform polarity analysis of tweets mentioning each of the political actors. In this phase, we selected the top three candidates and their parties mentioned in section \ref{AGEactors} to constitute our set of political actors: \textit{Willie Obiano (the incumbent Governor) of the state ruling All Progressive Grand Alliance (APGA), Tony Nwoye of the national ruling All Progressives Congress (APC)} and \textit{Oseloka Obaze of the People's Democratic Party (PDP)}. This selection is based on the number of tweets mentioning the political actors and on their popularity. For each polarized tweet computed using Algorithm 1, we find the time it was tweeted and whose name is mentioned ``solely'' in the tweet at that time, and finally compute the average polarity scores of the collection against the political actor's name mentioned. This tracks how people's attitude towards a political actor changes over time during the election. Time is arranged in two-hour granularity starting from 06:00 to 23:59 on the election day.
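The time-bucketed averaging described above can be sketched as follows: tweets are grouped into two-hour windows and the mean polarity per actor per window is taken. The timestamps, actor labels and scores below are illustrative, not drawn from the dataset.

```python
# Sketch of the two-hour time-bucketed averaging of polarity scores.
# Timestamps, actors and scores are illustrative stand-ins.
from collections import defaultdict
from datetime import datetime

def window(ts):
    """Two-hour window label, e.g. '14-16'."""
    start = (ts.hour // 2) * 2
    return f"{start}-{(start + 2) % 24}"

tweets = [  # (time, actor solely mentioned, polarity score)
    (datetime(2017, 11, 18, 14, 5), "tony_nwoye", -0.3),
    (datetime(2017, 11, 18, 15, 40), "tony_nwoye", -0.1),
    (datetime(2017, 11, 18, 14, 30), "willie_obiano", 0.5),
]

grouped = defaultdict(list)
for ts, actor, score in tweets:
    grouped[(actor, window(ts))].append(score)

# Average polarity per (actor, window), as plotted in the time series
avg = {key: sum(v) / len(v) for key, v in grouped.items()}
```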
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\textwidth]{time_series_pol.png}
\caption{ \scriptsize Polarity scores for each political actor based on time, scaled by a factor of 100. Time is arranged in two-hour granularity. This graph is used to reveal the attitude of the people towards the political actors in a given time window.}
\label{fig:polaritySc}
\end{figure}
\noindent Figure \ref{fig:polaritySc} is a graph of the polarity scores for each of the political actors, scaled by a factor of 100. It reveals how people feel about the political actors in a given time window, what topics are being discussed in each window and whose name is mentioned in those topics. Here we can observe interesting patterns: for instance, between \textit{6-8 -- 8-10} there were no tweets specifically mentioning the political actors, but there are general tweets such as
\begin{quote}
\textit{Ndi Anambra, the next 4 years is critical. It will either be more development for the State or statue building leaders. Please vote wisely. \#AnambraDecides2017}
\end{quote}
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.8\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{wo_topics.png}
\caption{}
\label{fig:wotopics1}
\end{subfigure}
\begin{subfigure}{0.8\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{oo_topics.png}
\caption{}
\label{fig:wotopics2}
\end{subfigure}
\begin{subfigure}{0.8\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{tn_topics.png}
\caption{}
\label{fig:wotopics3}
\end{subfigure}
\caption{\scriptsize Most frequent words at different times of the election. (a) is for \textit{Willie Obiano}, \textit{APGA} and \textit{Obiano combined with APGA}. (b) is for \textit{Oseloka Obaze}, \textit{PDP} and \textit{Obaze combined with PDP}. (c) is for \textit{Tony Nwoye}, \textit{APC} and \textit{Nwoye combined with APC}. Time is categorized into two-hour granularity starting from \textit{06:00 to 23:59}. This heatmap is used to discover insights revealing the most frequent words for each of the political actors across different times during the election, which fit into the topics in Figure \ref{fig:ldatopics}. For example, \textit{Willie Obiano 12-14} shows the most frequent keywords associated with \textit{Willie Obiano} between 12 pm and 2 pm.}
\label{fig:wotopics}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.9\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{topicslda1.png}
\caption{}
\label{fig:ldatopics1}
\end{subfigure}
\begin{subfigure}{0.9\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{topicslda2.png}
\caption{}
\label{fig:ldatopics2}
\end{subfigure}
\caption{ \scriptsize The top 5 topics computed by the LDA model from tweets where political actors are mentioned, and the topics inferred from the keywords. The number of keywords in each topic is 10. (a) is for the political candidates. (b) is for the political parties.}
\label{fig:ldatopics}
\end{figure}
\noindent The data points in Figure \ref{fig:polaritySc}, from \textit{8-10 -- 20-00}, are the sentiment polarity scores of the tweets mentioning the political actors. They are often inferred as positive, neutral or negative, and from the sign of the polarity score a tweet is defined as either positive or negative feedback. Thus, they can be used to show the attitude of the public towards the political actors. Generally, from the figure, the tweets mentioning the political actors \{\textit{Willie Obiano, APGA, Obaze Oseloka, PDP}\} stay above the zero bar throughout the period from \textit{8-10 -- 20-00}, while the \{\textit{Tony Nwoye, APC}\} political actors fall below the zero bar, signifying negative comments, at \textit{14-16} for \{\textit{APC, Tony Nwoye}\} and at \textit{12-14} and \textit{18-20} for \textit{Tony Nwoye}. Compare \textit{Tony Nwoye 14-16} in Figure \ref{fig:wotopics} with the \textit{Tony Nwoye} data point at \textit{14-16} in Figure \ref{fig:polaritySc}.
Figure \ref{fig:wotopics} shows the most frequent words associated with the political actors at a given time, giving insight into what people are saying about them at that time. Figure \ref{fig:ldatopics} shows how important those most frequent words from Figure \ref{fig:wotopics} are in a given topic. Here, topics are computed by a topic model called LDA. It shows the keywords for each topic and the weightage (importance) of each keyword.
Figure \ref{fig:ldatopics} comprises the top 5 topics per political actor built with an LDA model, where each topic is a combination of keywords and each keyword contributes a certain weightage to the topic. Adopting the methodology explained in section \ref{exptools}, we built the LDA model from the corpus and dictionary generated by grouping our tweet data collections according to the following political actors: \textit{Willie Obiano (the incumbent Governor) of the state ruling All Progressive Grand Alliance (APGA), Tony Nwoye of the national ruling All Progressives Congress (APC)} and \textit{Oseloka Obaze of the People's Democratic Party (PDP)}. This grouping is based on tweets where these political actors are mentioned. Looking at \textbf{T0 (topic 0)} under \textbf{Willie Obiano} in the figure, the top 10 keywords that contribute to the topic ``Voting Hindrances'' are: `failing', `complains', `card', `reader', ..., and the weight of `failing' in \textbf{T0} is 0.065. The weights reflect how important a keyword is to that topic. Looking at these keywords, we guessed what the topic could be by summarising it as ``Voting Hindrances'' associated with the political actor \textbf{Willie Obiano}.
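The topic/keyword/weight structure described above can be read programmatically. In the study the (keyword, weight) pairs came from the trained LDA model; the pairs and the topic label below are illustrative, echoing the T0 example.

```python
# Sketch of reading an LDA topic as (keyword, weight) pairs, as in
# the T0 example above. The pairs and label here are illustrative;
# in the study they came from the trained LDA model.

topic_t0 = [("failing", 0.065), ("complains", 0.052), ("card", 0.048),
            ("reader", 0.041), ("voting", 0.030)]

def top_keywords(topic, n=3):
    """Return the n highest-weighted keywords of a topic."""
    return [w for w, _ in sorted(topic, key=lambda kw: -kw[1])[:n]]

label = "Voting Hindrances"   # human summary guessed from the keywords
summary = f"{label}: " + ", ".join(top_keywords(topic_t0))
```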
Comparing Figures \ref{fig:wotopics} and \ref{fig:ldatopics}, the x-axis and y-axis markers of Figure \ref{fig:wotopics} show words like \textit{ojukwu} and \textit{bianca} occurring for \textit{Willie Obiano} and \textit{APGA}, with \textit{APGA} showing a higher frequency of occurrence than \textit{Willie Obiano}. Also, in Figure \ref{fig:ldatopics}, these two words are very important, considering their weights, in the formation of topics \textbf{T2} and \textbf{T3} of \textit{Willie Obiano} and \textit{APGA} respectively. \textit{Ojukwu} was a hero in Igbo land and from Anambra State; he was \textit{Bianca}'s husband and the founder of APGA. The same can be said of \textit{Oseloka Obaze} and \textit{PDP}, where words such as \textit{agulu, jonathan, obi} in Figure \ref{fig:wotopics} contribute to the formation of topics in Figure \ref{fig:ldatopics} (see \textbf{T2} under both \textit{Oseloka Obaze} and \textit{PDP}). The high-frequency occurrence of the word \textit{agulu} (also see Figure \ref{fig:wordcl_obazepdp}), just like \textit{ojukwu}, reveals the connection between the candidate and one of PDP's `kingpins', who hails from Agulu, a town within Anambra State.
Comparing Figures \ref{fig:polaritySc} and \ref{fig:wotopics}, the sentiment polarity of \textit{Oseloka Obaze} reveals more positive sentiment than the other political actors between \textit{18-20} and \textit{20-00}. Words like \textit{confident}, \textit{ran}, \textit{hard}, \textit{winning}, \textit{credible}, \textit{news}, \textit{win}, \textit{says}, etc., can be observed at \textit{18-20} and \textit{20-00} in Figure \ref{fig:wotopics}, showing people's thoughts concerning \textit{Oseloka Obaze}. Also, in Figure \ref{fig:wotopics} at \textit{14-16} and \textit{16-18}, we observed words such as \textit{questionable}, \textit{going}, \textit{incidents}, \textit{spokesman}, \textit{police}, \textit{candidate}, \textit{average}, \textit{governorship}, \textit{inec}, etc., associated with \textit{Tony Nwoye}. And words such as \textit{decides}, \textit{law}, \textit{electoral}, \textit{breaches}, \textit{addresses}, \textit{victory}, \textit{bianca}, \textit{coasting}, \textit{celebrates}, \textit{ojukwu}, etc., are found associated with \textit{Willie Obiano} at \textit{16-18}, \textit{18-20} and \textit{20-00} in Figure \ref{fig:wotopics}. Most of these words formed the top 10 important words used in the formation of topics in Figure \ref{fig:ldatopics} for the political actors \textit{Willie Obiano, Tony Nwoye, Oseloka Obaze, APGA, APC, PDP}.
\subsubsection{Subjective Sentiment Analysis (SSA)}
An objective sentence expresses some factual information about the world, while a subjective sentence expresses some personal feelings or beliefs. For example, the sentence ``\textit{This past Saturday, I bought a Nokia phone and my girlfriend bought a Motorola phone}'' does not express any opinion hence is objective, while ``\textit{The voice on my phone was not so clear, worse than my previous phone}'' sentence is subjective. Subjective expressions come in many forms, for example, opinions, allegations, desires, beliefs, suspicions, and speculations \cite{Riloffetal2006,Wiebe2000}. Thus, a subjective sentence may not contain an opinion. For example, ``\textit{I wanted a phone with good voice quality}'' is subjective but it does not express a positive or negative opinion on any specific phone. Similarly, we should also note that not every objective sentence contains no opinion as in ``\textit{The voice quality of this phone is amazing}'' \cite{liu2010sentiment}. The issue of subjectivity has been extensively studied in the literature \cite{Hatzivassiloglou1997,Hatzivassiloglou2000,Riloffetal2006,Wiebe2000,Riloffetal2003,liu2010sentiment}.
\begin{figure}[htb]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{subj_candidates.png}
\caption{}
\label{fig:sentianalysis2subj}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{subj_party.png}
\caption{}
\label{fig:sentianalysis3subj}
\end{subfigure}
\caption{\scriptsize The subjectivity sentiment frequency distributions (FD). This shows the FD of all the tweets in Table \ref{tab:dataset} where candidates, political parties or both are mentioned. We used the top 5 political candidates and parties. (a) The subjectivity sentiment frequency distribution per candidate. (b) The subjectivity sentiment frequency distribution per political party.}
\label{fig:sentilysissubj}
\end{figure}
\textbf{First SSA}: We perform SSA on our Twitter data in Table \ref{tab:dataset} without time consideration. This is to evaluate whether the overall Twitter texts tweeted during the Anambra State gubernatorial election conducted on November 18, 2017 are factual or emotional subjective opinions. The evaluation involves all the candidates and their political parties mentioned in section \ref{AGEactors}. While sentiment polarity in section \ref{psa} determines the positive or negative connotation of a tweet in our Twitter dataset, SSA tries to discern whether the tweet is subjective, in the form of an opinion, belief, emotion or speculation, or objective, as a fact. Thus, we investigate tweets mentioning the political candidates, parties or both to know their subjectivity scores and which of them is higher. This is illustrated in Figure \ref{fig:sentilysissubj}. We can observe that tweets mentioning \textit{willie\_obiano} and \textit{apga} are more non-subjective than others. A similar case can be seen for \textit{godwin\_ezeemo} and \textit{ppa}, except for \textit{godwin\_ezeemo\_ppa}, tweets mentioning both names, where subjectivity and non-subjectivity are equal. \textit{Oseloka Obaze} and his party \textit{PDP} tweets are more subjective, as revealed in all the bar plots. A dissimilar trend is observed for \textit{tony\_nwoye} and his party, \textit{APC}, where the tweets mentioning the candidate's name are more subjective, while the tweets mentioning the party, and the tweets mentioning both names, are more non-subjective. The non-subjectivity of the latter can be viewed as an influence from the party's non-subjectivity results.
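The subjective versus non-subjective split discussed above can be sketched as below. TextBlob subjectivity lies in [0, 1]; the 0.5 cut-off used here is an assumed threshold for illustration, not a value taken from the paper, and the scores are illustrative.

```python
# Sketch of the subjective / non-subjective split. TextBlob
# subjectivity lies in [0, 1]; the 0.5 threshold is an assumption
# for illustration, and the scores are illustrative stand-ins.
from collections import Counter

def subjectivity_label(score, threshold=0.5):
    return "subjective" if score > threshold else "non-subjective"

scores = [0.9, 0.1, 0.6, 0.0, 0.45]
split = Counter(subjectivity_label(s) for s in scores)
```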
\noindent \\
\textbf{Second SSA}: In this analysis, we considered time as a useful dimension of sentiment analysis. As with the polarity analysis (\textit{PSA}) above, we used the same data and time arrangement to answer \textit{research question 2}. The results of the SSA are shown in Figure \ref{fig:subjectivitySc}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\textwidth]{time_series_subj.png}
\caption{\scriptsize Subjectivity scores for each political actor based on time. Time is formatted in two-hour granularity from \textit{06:00 to 23:59}.}
\label{fig:subjectivitySc}
\end{figure}
We observed from the figure that tweets `solely' mentioning \textit{Tony Nwoye} are most subjective in the \textit{14-16} time window. A similar case is observed for \textit{Oseloka Obaze}, but in the \textit{18-20} time group. \textit{Willie Obiano} and his party, \textit{APGA}, started with subjectivity scores a bit lower than the others, and by the end of the day they had the lowest scores, showing that tweets mentioning their names are less opinionated. Again, their results are closely knitted in the morning, between \textit{10-12}, and in the afternoon, between \textit{14-16 -- 16-18}. Furthermore, it could be envisaged that the personalities of the candidates \textit{Oseloka Obaze} and \textit{Tony Nwoye} drove people to be emotionally subjective in their tweets about them, leading to the huge differences between their subjectivity scores and those of their parties.
A high subjectivity score does not indicate a higher propensity for a voter to vote. What it does indicate however is that Twitter users mentioning \textit{Tony Nwoye} and \textit{Oseloka Obaze} in their tweets are more opinionated, and those mentioning \textit{Willie Obiano} are less opinionated.
\section{Discussion}
From the above figures and analyses, we deduce that even though a political party serves as a platform that sells the personality of a political actor or contestant in the struggle for power, the credibility of a political actor may nonetheless add strength to the spread of the party. For example, Figures \ref{fig:sentilysis}, \ref{fig:polaritySc} and \ref{fig:wotopics} reveal the attitudes of the public towards the political actors. The political actors \textit{Oseloka Obaze} and \textit{Osita Chidioka} are better accepted than their political parties in both positive polarity and popularity. The variable \textit{oseloka\_obaze} in Figure \ref{fig:sentianalysis2} shows a negative polarity score that is almost zero (also see \textit{osita\_chidioka}). Most frequent words such as \textit{confident}, \textit{credible}, \textit{news}, \textit{win}, \textit{says}, \textit{peacefully}, etc., can be observed in Figure \ref{fig:wotopics2}, showing people's thoughts concerning \textit{Oseloka Obaze}. However, the viability and acceptability of a given political party to the electorate have a greater effect on the victory of the party and the candidate it presents. Individual efforts by political actors to promote their political manifestoes through a political party that does not win the sympathy of the electorate usually have less effect in actualizing political victory, especially in a developing nation like Nigeria. This is illustrated in the case of \textit{Willie Obiano} and his party, the \textit{All Progressive Grand Alliance (APGA)}, as explained in the paragraphs below.
The political behaviour of the electorate during an election has a connection to the political party that has an ideological link to their beliefs, culture and values. For instance, that the PDP could rule Nigeria for 16 years (1999-2015) was not only because of the power of incumbency but also a result of its acceptability to the people, notwithstanding who its flagbearer was. Muhammadu Buhari contested the presidential elections in 2003, 2007 (All Nigerian People's Party, ANPP) and 2011 (Congress for Progressive Change, CPC) but lost, majorly because of the poor spread of his party as a result of its unacceptability to the electorate, not minding his personality. This resembles the case of the political actor \textit{Godwin Ezeemo} and the political parties \textit{PPA} and \textit{UPP}, as shown in their sentiment scores and tweet frequencies in Figures \ref{fig:sentianalysis2}, \ref{fig:sentianalysis3} and \ref{fig:namecounts}. In 2013, Buhari formed an alliance with other political parties (Action Congress of Nigeria, ACN; Congress for Progressive Change, CPC; All Nigerian People's Party, ANPP; a faction of the All Progressive Grand Alliance, APGA; and aggrieved members of the Peoples Democratic Party, PDP). This alliance gave rise to the All Progressives Congress (APC), with wider coverage and acceptability in the North-East, North-West, North-Central and South-West of Nigeria. In view of this, Buhari, who had lost the election three consecutive times (2003, 2007 and 2011), won the 2015 general election against the incumbent president (Goodluck Jonathan of the PDP).
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\textwidth]{avemeans.png}
\caption{\scriptsize The average sentiment polarity for each political party cum candidate. Numbers 1, 2, 3, 4 and 5 represent APGA, PDP, APC, PPA and UPP respectively.}
\label{fig:sentianalysis5}
\end{figure}
Empirical observations on the political actor \textit{Willie Obiano} and his party \textit{APGA} from Figures \ref{fig:sentianalysis2}, \ref{fig:sentianalysis3} and \ref{fig:wordfreqCloud} reveal that political-ideological links such as the people's beliefs, culture, values and acceptability contributed to \textit{Willie Obiano} winning the November 18, 2017 Anambra State gubernatorial election. From Figure \ref{fig:wordfreqCloud}, the most frequent outstanding words associated with \textit{Willie Obiano} when combined with \textit{APGA} can be connected to the ideology of the Igbo nation and Chukwuemeka Odimegwu Ojukwu, signifying their usual slogan \textit{nke a b\d{u} nke any\d{i}} `this is our own'. Moreover, topic \textbf{T4} (political marketing) in Figure \ref{fig:ldatopics2} can further be explained, based on the topic keywords, as \textit{indigenous acceptance ``Nke a b\d{u} any\d{i}'' (This is our own)}. Also, from Figures \ref{fig:sentianalysis2} and \ref{fig:sentianalysis3}, the sentiment polarity frequency distribution associated with \textit{Willie Obiano} shows more negative sentiment compared to his party \textit{APGA}, while Figure \ref{fig:sentianalysis5} shows all-positive average sentiment polarity scores for both \textit{Willie Obiano} and \textit{APGA}. The variable \textit{willie Obiano+APGA} at \textit{18-20} and \textit{20-00} in Figure \ref{fig:wotopics1} displays more positive words, such as \textit{celebrates, victory, results, bianca, coasting, guber, early, ojukwu}, etc., than when only \textit{willie Obiano} is used. Since 2008 till date, APGA has been winning the gubernatorial elections of Anambra State, Nigeria, due to the acceptability of the party to the people and their belief in the party as an Igbo party. APGA survived the influence of the national incumbent parties (PDP and APC) in the 2010, 2014 and 2018 gubernatorial elections not because of the qualities of its candidates but because of the party's influence on the people.
The founder of the party APGA (Chukwuemeka Odimegwu Ojukwu) is a generally accepted personality among the Igbo nation, especially in his state, Anambra (see Figure \ref{fig:wordfreqCloud} for the most frequent words with \textit{Willie Obiano} and \textit{APGA}). The objective of the party, as the founder always advocated, is to promote Igbo ideology, to unite the Igbo nation within the Nigerian state and to provide a political umbrella to advance Igbo interests. This made the majority of Anambrarians, especially the masses (electorate), support the party in every election, notwithstanding who its flagbearer is.
In furtherance of our discussion of Figure \ref{fig:subjectivitySc} about objective and subjective opinions, APGA has now been the ruling political party in Anambra State for 12 years. It has been able to design policies and execute infrastructure that outwit other political parties like the PDP, which was in power for 7 years (1999-2003), and the APC, UDP and PPA, which have not been in power. This singular opportunity made tweets associated with APGA as a party, and with its candidate, less subjective, unlike the other political actors. However, other political parties like the APC, PDP, PPA and UDP and their candidates have not had such an opportunity in their political adventure in the state. In view of this, people's connection to these parties is not concrete, and there are no concrete policies and projects linked to them in the state. This may be the likely reason why the tweets associated with these parties' candidates are more subjective. See also Figure \ref{fig:sentianalysis2subj} for the subjectivity sentiment frequency distribution for the candidates.
Finally, variables like personality influence occur repeatedly in Figure \ref{fig:ldatopics}. This shows that the people's tweets indicate that personality influence plays a significant role in winning an election. A political party with an acceptable personality, coupled with its own influence as a party, does better than a party with a person of no or little influence in the society or among the people. Therefore, personality influence and political party influence are very important underlying factors in electoral victory.
\section{Conclusion}
In this research, we investigated how candidates and their political parties can influence winning or losing an election using Twitter data. We stated research questions that enabled us to evaluate our dataset to gain insight into the political phenomena that could help us in our research.
We tested our research questions using Twitter data collected during the 2017 Anambra gubernatorial election as a case study. We analyzed over 7k Twitter messages streamed during the election day only. Since Twitter users tweet in real time, we believe that tweets during the election day are based on what the users were experiencing at that moment, and that political insight can be gained from them. The tweets collected were analyzed exploratively and sentimentally to answer our two research questions. In the explorative experiment, we gained overall insights into our data, such as \textit{Willie Obiano} and his party, APGA, recording the highest number of tweets mentioning their names. This reveals the likely reasons behind him/his party having the highest number of positive and negative tweets in the frequency distributions of the sentiment `polarized and subjective' tweets (see Figures \ref{fig:sentilysis} and \ref{fig:sentilysissubj}). In the sentiment analysis, we found people's attitudes towards the political actors across a given set of times and whether these attitudes are subject to facts or opinions. The tweets collected were segmented into two-hour time groups, forming 8 groups starting from \textit{06:00} to \textit{23:59}. The primary purpose of this study was to utilize this time-based information contained in the message metadata, as a useful dimension for sentiment analysis, to create a more detailed analysis of Twitter users' subjectivity and polarity during the elections. At this stage, a polarized tweet must contain one of the selected candidates'/political parties' names in order to be considered. For each time group, we filtered the tweets tweeted within the set time range and grouped them based on the names they uniquely mention. We then found the average polarity and subjectivity scores of each group within the said time.
Generally, the average polarity scores for all selected cases are positive, as shown in Figure \ref{fig:polaritySc}, with only \textit{Oseloka Obaze} exceeding 30\% (in the 20--00 time group); \textit{Tony Nwoye} and \textit{APC} are the only cases with negative scores. Furthermore, we observed that tweets mentioning \textit{Tony Nwoye} and \textit{Oseloka Obaze} were more subjective in nature than those mentioning \textit{Willie Obiano} and his party. This is likely because APGA has been the ruling political party in Anambra state for 12 years. We also used an LDA model to build the topics in Figure \ref{fig:ldatopics}, identifying the various topics being discussed, and compared the polarity and subjectivity analyses with these findings. Again, we used Figure \ref{fig:ldatopics} to check how important a word in Figure \ref{fig:wotopics} is based on its weight.
A high subjectivity score does not indicate a higher propensity for a voter to vote. However, it indicates that Twitter users mentioning \textit{Tony Nwoye} and \textit{Oseloka Obaze} in their tweets during the lunch and supper times are more emotionally subjective in their messages, whereas those mentioning \textit{Willie Obiano} at the same times are not.
Finally, the Twitter analysis and visualization of \#AnambraDecides2017 shows that political actors leverage the impact of social media (Twitter, Facebook, WhatsApp, YouTube and other blogs) to define and determine the political behaviour of the electorate in order to win elections. It also helps validate that political insights are a phenomenon present on social media.
\section*{Conflict of interest}
On behalf of all authors, the corresponding author states that there is no conflict of interest.
\section{Introduction}
Schedule pressure demands continuous improvements to productivity in all semiconductor design phases. Verification and test is often a critical bottleneck, and a key factor is the choice of specification language. For datapath designs, ANSI-C, C++ and SystemC are often the best languages to capture intent. Functional datapath specifications in these languages yield a higher level of abstraction, reduce code size, and speed up simulation and regression.
In this work, we focus on the scenario in which a designer maintains a high-level specification written in ANSI-C and the design under test is a low-level or synthesizable implementation in some standard HDL. In industrial practice, this scenario is often encountered with designs such as binary floating point units~\cite{Jones:2001:PFV,Mukherjee:2016:ECF} and GPUs. Our work is, however, also applicable to circuits with non-trivial control.
We have developed a tool for this setting called CREST, a prototype front-end intended as an add-on to commercial formal verification tools. It works by translating high-level ANSI-C specifications---potentially very large ones---into a low-level logical representation that is especially tuned to analysis by formal proof engines, such as SMT and SAT solvers. To be as general as possible, CREST uses very simple Verilog code as the intermediate language for communication of processed specifications to downstream EDA tools. The idea is to make high-level C specifications, expressed by almost arbitrary ANSI-C code, available for use as \textit{reference models} or \textit{specifications} in the formal verification of RTL designs in an arbitrary commercial verification tool.
There are, of course, many existing compilers that map C to RTL, including several commercial C to RTL formal verification tools. There are also commercial and academic high-level synthesis tools that generate Verilog code---intended to represent a viable hardware circuit design---from higher-level C specifications. The novelty of our approach is that we handle full ANSI-C, without artificial restriction to a synthesizable or other limited subset. This includes:
\begin{itemize}
\item arithmetic conversions, including conversions to and from floating-point types
\item typecasting, including typecasting to and from pointer types
\item deeply nested composite datatypes, such as structs of unions of structs
\item pointer dereferencing
\item passing of pointers as function arguments
\item dynamically-assigned function pointers
\end{itemize}
\noindent Our aim is to make it possible to integrate simulation models and other high-level models written in an unconstrained and natural style of ANSI-C into an RTL formal verification flow.
\section{Architecture and Implementation}
As a software implementation, CREST is an adaptation of the CBMC bounded model checker for C~\cite{cbmc}, a substantial Oxford research tool developed by Professor Daniel Kroening that features a highly accurate, fully scalable bit-level logical semantics of full ANSI-C. CBMC is widely used by leading technology companies, including Bosch, General Electric, TATA and AWS~\cite{aws} and is also the core technology of Diffblue, an Oxford spinout company using AI to provide software synthesis and automated testing.
In CBMC, verification is done by unwinding program loops, symbolically executing the program, and passing the resulting equations to a decision procedure. The algorithms that do this have been highly engineered for efficiency, accuracy, and scalability through more than a decade of intensive use in industry and academic research.
Our solution exploits this capability by tapping into the CBMC internal data flow just after symbolic execution. From this representation, it then generates a low-level Verilog model that exactly captures the bit-level semantics of the effect of the symbolic program execution. This can then be read by a downstream RTL verification tool, including commercial tools from EDA vendors. It is important to note that the resulting Verilog is not intended to be a synthesised circuit design, but is instead a low-level representation that is tuned for analysis by SAT, SMT and other proof engines.
The result of CBMC's symbolic execution is a set of variable assignments that satisfy the single static assignment rule. All the high-level constructs of C will have been interpreted away, leaving a relatively simple representation of the program's semantics. Along the way, CBMC does a very significant amount of optimization and simplification---all aimed at optimising the results of symbolic simulation for processing by formal engines. CBMC also maintains full backannotation information, connecting each expression in the resulting equations to the file and line number of its C source.
CREST then translates each variable in this representation to a Verilog bitvector and each C expression on the right-hand side of the assignments to a corresponding Verilog expression. Where possible, the
translation is done in the most straightforward way, mapping each C operator to an equivalent Verilog operator. This is not, however, always possible. For example, Verilog does not allow the extraction of a bit range from an expression, but only from a variable. Our tool introduces auxiliary variables to solve this problem. CREST outputs the resulting Verilog code, and also makes available full backannotation information derived from CBMC.
The primary benefit of this approach is that it leverages CBMC's established and scalable capability to handle full ANSI-C, making a broad spectrum of high-level C models accessible as specifications for RTL verification.
\section{Case Studies}
To give a flavour of the range of specifications that CREST can process, and the types of downstream verification enabled, we sketch five case studies using the tool.
\subsection{Softfloat vs VGM Floating Point Add}
SoftFloat~\cite{softfloat} is an architectural, IEEE-conformant software implementation in C of floating-point operations. It is owned and maintained by John Hauser (UC Berkeley) and widely considered a gold standard in high-performance software models of FP operations. CREST can take any SoftFloat interface function (e.g.~\texttt{f32{\_}add}, \texttt{f64{\_}mul}) as a starting point and generate Verilog that is equivalent to the function execution. This can then be used in a downstream EDA tool as a specification for equivalence verification against another reference model in RTL or, with appropriate datapath verification techniques deployed in the downstream tool~\cite{Seger:2005:IEE}, an RTL circuit implementation.
The SoftFloat function \texttt{f32{\_}add} implements 32 bit floating-point addition. To process this with CREST, we first create a C function that acts as a wrapper around the SoftFloat function and also defines the inputs and outputs of the resulting Verilog module. This function looks as follows:
\begin{verbatim}
#include <SoftFloat.h>
void f32_add_wrapper()
{
uint32_t x, y;
C2V_SAMPLE_INPUT(uint32_t, x);
C2V_SAMPLE_INPUT(uint32_t, y);
float32_t xf, yf, rf;
xf.v = x;
yf.v = y;
rf = f32_add(xf, yf);
uint32_t res = rf.v;
C2V_DRIVE_OUTPUT(uint32_t, res);
}
\end{verbatim}
\noindent The macros \texttt{C2V{\_}SAMPLE{\_}INPUT} and \texttt{C2V{\_}DRIVE{\_}OUTPUT} define the interface of the Verilog module created by our tool. Its inputs are bitvectors of width 32. These are interpreted as floating point representations, not as integers, and passed to the SoftFloat function \texttt{f32{\_}add}. We then use the bit pattern of the result as the output of the Verilog module.
From this C code, CREST generates a Verilog module with the following interface:
\begin{verbatim}
module f32_add(
input logic unsigned [31:0] x,
input logic unsigned [31:0] y,
output logic unsigned [31:0] res
);
\end{verbatim}
\noindent The logic of the module is semantically equal to the C code of \texttt{f32{\_}add{\_}wrapper}, which calls SoftFloat's \texttt{f32{\_}add} function.
This generated Verilog can now be compared by standard equivalence checking tools to an existing RTL reference model or implementation of the same 32-bit floating point operation. In our work, we are benchmarking with the reference models provided by the Verilog Golden Model library (VGM). This is a high-quality reference library for floating point operations created by Warren Ferguson and Flemming Andersen and available under an open-source license. It implements standard arithmetic operations and fused multiply-add, and we have added a certain number of conversion operations. The VGM modules are parameterized in terms of the widths of the exponent and significand of floating-point values, so the library covers more than the standard IEEE precisions. It supports all conventional rounding modes and some of the recently-introduced non-IEEE rounding modes. Customization options allow for the definition of architecture-specific handling of unspecified behaviour involving NaNs.
Using any off-the-shelf sequential equivalence checking tool, it is straightforward to verify the equivalence of the floating point addition reference models of SoftFloat and VGM. The verifications go through fully automatically for half-, single- and double-precision. In academic experiments at Oxford with the SEC App in JasperGold version v2018.12, the runtimes are all very reasonable:
\medskip
\begin{tabular}{@{}r|r@{}}
\multicolumn{1}{l|}{\textbf{Width}} &
\multicolumn{1}{l}{\textbf{Runtime}} \\
\hline
16 bits & 6.11 sec. \\
32 bits & 59.68 sec. \\
64 bits & 2068.38 sec. \\
\end{tabular}
\medskip
\noindent These runtimes are for a 3.07\,GHz 4-core Intel Xeon X5667 machine with 48\,GB memory running Linux kernel 4.13.
\subsection{Floating Point Multiplication}
Equivalence of the SoftFloat and VGM reference models for floating
point addition can be verified end-to-end by a state of the art
equivalence verification tool. But for floating point multiplication,
a problem decomposition is needed. This is very typical of challenging
data path proofs~\cite{OLeary:2013:RST, Mukherjee:2016:ECF, Jones:2001:PFV}.
For multiplication, SoftFloat and VGM take different approaches to handling subnormal
operands. SoftFloat normalizes each operand prior to performing an integer multiplication of the significands.
VGM performs the multiplication directly on the (possibly subnormal) significands and applies a corrective
shift to the product afterwards. Furthermore, the widths of the integers holding the significands
differ between the two models: 64 bits in SoftFloat (modelling C's '{\tt *}' operator) versus 48 bits in VGM.
These differences make automatic equivalence verification difficult, but they can easily be tackled
with an appropriate proof decomposition.
The decomposition is done in the equivalence checking step between VGM and CREST-generated
Verilog from the SoftFloat model. The following cases must be considered:
\begin{enumerate}
\item Special operands: infinities, NaNs, and the like.
\item Neither operand is special and at least one operand is zero.
\item Both operands are non-zero normal or subnormal floats.
\end{enumerate}
\noindent Equivalence of the multipliers in cases 1 and 2 is easily proved automatically in an
RTL formal verification tool, as is the exhaustiveness of this case analysis.
Equivalence of the models in case 3 requires establishing some
invariants linking the models. In the subcase when both operands are
normal floats, we first prove correspondences between the fraction
fields of the VGM operands and inputs to SoftFloat's significand
multiplier:
\begin{verbatim}
SoftFloat.mul_in_1[63:31] = 33'b0
SoftFloat.mul_in_1[30:7] = {1'b1,Vgm.fp1[22:0]}
SoftFloat.mul_in_1[6:0] = 7'b0
SoftFloat.mul_in_2[63:32] = 32'b0
SoftFloat.mul_in_2[31:8] = {1'b1,Vgm.fp2[22:0]}
SoftFloat.mul_in_2[7:0] = 8'b0
\end{verbatim}
\noindent Assuming these correspondences, we then establish the correspondence between the significands resulting from
integer multiplications in the two models:
\begin{verbatim}
SoftFloat.mul_out[63] = 1'b0
SoftFloat.mul_out[62:15] = vgm.mul_out[47:0]
SoftFloat.mul_out[14:0] = 15'b0
\end{verbatim}
\noindent Finally, we complete the proof of the subcase by proving that
SoftFloat and VGM compute identical products, assuming correspondence
of their multiplications. The remaining subcases, when one or
both operands are subnormal, require additional case splitting based
on the magnitude of the corrective shift applied in the VGM.
The case analyses, including proofs of each property and
exhaustiveness of each case split, were scripted in Tcl and executed
in a standard EDA vendor formal verification tool.
\subsection{Approximate Reciprocal}
The C model in this case study is publicly distributed by
Intel~\cite{rcp14}. It is a software model of the behaviour of the VRCP14\{P,S\}S
machine instructions, members of the Intel$^\textrm{\small\textregistered}$ AVX-512
instruction family that compute approximate reciprocals of single
precision floating point values, with relative error of less than
\(2^{-14}\). The code was written by numerics architects without
consideration of the needs of high level synthesis or formal
verification. This code is known to be problematic for existing C to
Verilog solutions, because it passes pointers as function arguments
and employs various `type punning' techniques such as the use of
typecasting to access bit patterns in a float variable:
\begin{verbatim}
float x;
int *xp = (int*)&x;
*xp = *xp & 0x003fffff;
\end{verbatim}
CBMC handles such tricks with aplomb, though they are often not
accepted as within the synthesizable subset by commercial C to
Verilog solutions.
Another challenge arose because the authors of the C model wrote the
code recursively. The reciprocal approximation is computed by table
lookup and interpolation, and the approximation requires that the
approximand lies in the interval
\([1.0,2.0)\).
Other floating point inputs are first scaled to lie within the
interval, the approximation function is called recursively on the
scaled float, and the result of the recursive call is `de-scaled'
accordingly. We chose to unroll the recursion by hand, and used CBMC
as a model checker to prove the assertions that justify the
correctness of the unrolling transformation.
The Verilog code specification generated by CREST has been
successfully checked against a proprietary, hand-coded reference and
against an optimised RTL design from a shipping microprocessor, using
both EDA vendor tooling and proprietary symbolic simulation engines.
\subsection{Google's WebM VP9 Codec}
We translate an implementation of matrix transformations used in the WebM VP9 encoder implementation by Google~\cite{webm:libvpx}. The transformations act on a 16 $\times$ 16 matrix of bits represented as an int16{\_}t array of size 16. Each transformation is represented by a tuple of function pointers, which implement the one-dimensional transforms acting on matrix rows and columns. A primary input to the top level transformation function is an indicator variable that selects the specific transformation to be used. In order to translate this code to Verilog, a tool must be able to handle function pointers that cannot be resolved statically.
The code also contains several user-written assertions, which are translated into SVA assertions by our tool. These assertions are located in a low-level function that implements a transformation done in two passes, and state that after the first pass an intermediate result has been calculated. We formally verify these assertions in the original C code using the native CBMC prover. We also verify the translation of the assertions in the generated Verilog using a leading commercial RTL verification tool.
\subsection{Sequential Floating-Point Adder}
To demonstrate the ability to use specifications generated by our tool for verification of sequential designs, we take SoftFloat's \texttt{f32{\_}add} function as the specification for a synthesizable implementation of a 32-bit floating point adder that is optimized for area~\cite{dawson}.
The circuit implementation executes add operations one after the other, not in parallel. The next execution starts only after the previous execution has finished. Internally, it uses a finite state machine to define the current execution stage of the add operation. The design is clocked and changes of the execution stage happen on the rising edge.
\smallskip
The execution stages are:
\begin{itemize}
\item \texttt{get}: read and store inputs
\item \texttt{unpack}: unpack the inputs into sign, exponent and significand
\item \texttt{special{\_}cases}: handle the special cases of inputs being not a number (NaN), infinity or zero
\item \texttt{align}: align significands to the larger exponent
\item \texttt{add}: add significands and set rounding bits
\item \texttt{normalize}: normalize the result
\item \texttt{round}: round the result to nearest even
\item \texttt{pack}: pack sign, exponent and significand of the result into a bitvector
\item \texttt{put}: write the result to output
\end{itemize}
The implementation executes one operation per clock cycle. This includes the shifts during normalization and alignment, and as a result a single add operation can take up to 108 clock cycles. The implementation is therefore very different in structure to the Verilog generated by our tool from the SoftFloat C specification, which is combinatorial. Nevertheless we are able to show automatically, using a leading formal verification tool, that the implementation is standard compliant if and only if the specification is standard compliant.
\section{Conclusion: Benefits and Prospects}
Building the CREST front-end on CBMC brings two substantial benefits: a precise, bit-level interpretation of the semantics of full ANSI-C, and a built-in best-in-class model checker for C assertions. This enables us to support several usage scenarios aimed at RTL design verification against high-level C reference models.
\medskip
\noindent \textbf{RTL verification against ANSI-C specifications.}
This is the primary usage scenario: a C reference specification is translated into its low-level semantics, in a form tuned to processing by SAT and SMT engines. This is made available as very simple Verilog code, so that it can be used as a specification of intended behaviour for verification of a circuit using any downstream EDA verification tool.
\medskip
\noindent \textbf{Independent verification of HLS.} CREST provides a C to RTL path that could be used as an independent check on the results of high level synthesis (HLS). Starting from the same C source, an RTL circuit can be generated by HLS and a reference specification in Verilog generated by CREST. Comparing the two by equivalence checking would provide additional confidence in the correctness of circuit synthesis.
\medskip
\noindent \textbf{Proving properties about C specifications.} Since
CBMC is part of our solution, its model checking capability can be
used to verify properties of the reference specification. This
includes user written assertions as well as pointer safety and buffer
overflow checks. We exploited this capability to justify source
transformations in the approximate reciprocal example. Our tool
translates C assertions into SVA, so it is also possible to verify them
with the verification engines of a downstream EDA tool.
\medskip
\noindent \textbf{Using established C properties as helper assertions.}
Once properties of a C specification have been established by CBMC, they can in principle be used as helper assertions in downstream formal verification of a circuit against the specification. The hope is that the relatively high level of the properties that can be proved in C would, when translated by CREST, give the downstream verification engines helpful information. Experiments are needed to determine how useful this can be in practice.
\medskip
Current work on CREST is focussed on further case studies, including industrial C specifications, as well as exploring ways to extend support for C++ beyond the current capabilities of CBMC for specifications in that language.
\bibliographystyle{IEEEtran}
\section*{Introduction}
Let $K$ be a number field of degree $n$ and $\mathbb Z_K$ its ring of integers. An essential task in Algorithmic Number Theory is to construct the prime ideals of $K$ in terms of a defining equation of $K$, usually given by a monic and irreducible polynomial $f(x)\in\mathbb Z[x]$. The standard approach to do this, followed by most of the algebraic manipulators like {\tt Kant}, {\tt Pari}, {\tt Magma} or {\tt Sage}, is based on the previous computation of
an integral basis of $\mathbb Z_K$. This approach has a drawback: one needs to factorize the discriminant, $\op{disc}(f)$, of $f(x)$, which can be a heavy task, even in number fields of low degree, if $f(x)$ has large coefficients.
In this paper we present a direct construction of the prime ideals, that avoids the computation of the maximal order of $K$ and the factorization of $\op{disc}(f)$. The following tasks concerning fractional ideals can be carried out using this construction:
\begin{enumerate}
\item Compute the $\mathfrak{p}$-adic valuation, \ $v_\mathfrak{p}\colon K^*\to \mathbb Z$, for any prime ideal $\mathfrak{p}$ of $K$.
\item Obtain the prime ideal decomposition of a fractional ideal.
\item Compute a two-element representation of a fractional ideal.
\item Add, multiply and intersect fractional ideals.
\item Compute the reduction maps, $\mathbb Z_K\to \mathbb Z_K/\mathfrak{p}$.
\item Solve Chinese remainders problems.
\end{enumerate}
Moreover, along the construction of a prime ideal $\mathfrak{p}$, lying over a prime number $p$, a $\mathbb Z_p$-basis of the ring of integers of the local field $K_\mathfrak{p}$ is obtained as a by-product. Hence, from the prime ideal decomposition of the ideal $p\mathbb Z_K$ we are also able to derive the resolution of another task:\medskip
\begin{enumerate}
\item[(7)] Compute a $p$-integral basis of $K$.
\end{enumerate}
For a given prime number $p$, the prime ideals of $K$ lying above $p$ are in one-to-one correspondence with the irreducible factors of $f(x)$ in $\mathbb Z_p[x]$ \cite{hensel}. In \cite{HN} we proved a series of recurrent generalizations of Hensel lemma, leading to a constructive procedure to obtain a family of \emph{$f$-complete types}, that parameterize the irreducible factors of $f(x)$ in $\mathbb Z_p[x]$. A type is an object that gathers combinatorial and arithmetic data attached to Newton polygons of $f(x)$ of higher order, and an $f$-complete type contains enough information to single out a $p$-adic irreducible factor of $f(x)$.
In \cite{GMNalgorithm} we described Montes algorithm, which optimizes the construction of the $f$-complete types; it outputs a list of $f$-complete and optimal types that parameterize the prime ideals of $K$ lying above $p$, and contain valuable arithmetic information on each prime ideal. All these results were based on the PhD thesis of the second author \cite{montes}. The algorithm is extremely fast in practice; its complexity has been recently estimated to be $O(n^{3+\epsilon}\delta+n^{2+\epsilon}\delta^{2+\epsilon})$, where $\delta=\log(\op{disc}(f))$ \cite{FV}.
In \cite{GMNokutsu} we reinterpreted the invariants stored by the types in terms of the Okutsu polynomials attached to the $p$-adic irreducible factors of $f(x)$ \cite{Ok}. Suppose $\mathbf{t}$ is the $f$-complete and optimal type attached to a prime ideal $\mathfrak{p}$, corresponding to a monic irreducible factor $f_\mathfrak{p}(x)\in\mathbb Z_p[x]$; then, the arithmetic information stored in $\mathbf{t}$ is synthesized by two invariants of $f_\mathfrak{p}(x)$: an \emph{Okutsu frame} $[\phi_1(x),\dots,\phi_r(x)]$ and a \emph{Montes approximation} $\phi_\mathfrak{p}(x)$
(cf. loc.cit.). The monic polynomials $\phi_1,\dots,\phi_r,\phi_\mathfrak{p}$ have integer coefficients and they are all irreducible over $\mathbb Z_p[x]$; the polynomial $\phi_\mathfrak{p}(x)$ is ``sufficiently close" to $f_\mathfrak{p}(x)$. We say that
$$
\mathfrak{p}=[p;\phi_1,\dots,\phi_r,\phi_\mathfrak{p}],
$$
is the \emph{Okutsu-Montes representation} of the prime ideal $\mathfrak{p}$. Thus, from the computational point of view, $\mathfrak{p}$ is structured in $r+1$ levels and at each level one needs to compute (and store) several Okutsu invariants that are omitted in this notation. This computational representation of $\mathfrak{p}$ is essentially canonical: the Okutsu inva\-riants of $\mathfrak{p}$, distributed along the different le\-vels, depend only on the defining equation $f(x)$. These invariants provide a rich and exhaustive source of information about the arithmetic properties of $\mathfrak{p}$, which is crucial in the computational treatment of $\mathfrak{p}$.
From a historical perspective, the quest for a constructive representation of ideals goes back to the very foundation of algebraic number theory. Kummer had the
insight that the prime numbers factorize in number fields
into the product of prime ``ideal numbers", and
he tried to construct them as symbols $[p\,;\phi]$, where
$\phi(x)$ is a monic lift to $\mathbb Z[x]$ of an irreducible factor of $f(x)$ modulo $p$. Dedekind
showed that these ideas led to a coherent theory only in the case
that $p$ does not divide the index $i(f):=(\mathbb Z_K\colon
\mathbb Z[x]/(f(x)))$.
This constructive approach could not be universally
used because there are number fields in which $p$ divides the
index of all defining equations \cite{D}. Fortunately, this
obstacle led Dedekind to invent ideal theory as the only way to
perform a decent arithmetic in number fields. Ore, in his PhD
thesis \cite{ore}, tried to regain the constructive approach to
ideal theory. He generalized and improved the classical tool of Newton polygons
and showed that under the assumption that the defining
equation is \emph{$p$-regular} (a much weaker condition than
Dedekind's condition $p\nmid i(f)$), the prime ideals dividing $p$ can
be parameterized as $\mathfrak{p}=[p\,;\phi,\phi_\mathfrak{p}]$ (in our notation), where
$\phi_\mathfrak{p}(x)\in\mathbb Z[x]$ is certain polynomial whose $\phi$-Newton polygon
is one-sided and the residual polynomial attached to this side is irreducible (cf. section \ref{secMontes}). The contribution of
\cite{montes} was to extend Ore's ideas in order to obtain a similar construction of the prime ideals in the general case.
The aim of this paper is to show how to use this constructive representation of the prime ideals to carry out the above mentioned tasks (1)-(6) on fractional ideals and to compute $p$-integral bases.
The outline of the paper is as follows. In section \ref{secMontes} we recall the structure of types, we describe their invariants, and we review the process of construction of the Okutsu-Montes representations of the prime ideals. In section \ref{secPadic} we
show how to compute the $\mathfrak{p}$-adic valuation of $K$ with respect to a prime ideal $\mathfrak{p}$; this is the key ingredient to obtain the factorization of a fractional ideal as a product of prime ideals (with integer exponents). The operations of sum, multiplication and intersection of fractional ideals are trivially based on these tasks. In section \ref{secGenerators} we show how to find integral elements $\alpha_\mathfrak{p}\in\mathbb Z_K$ such that $\mathfrak{p}$ is the ideal of $\mathbb Z_K$ generated by $p$ and $\alpha_\mathfrak{p}$; this leads to the computation of a two-element representation of any fractional ideal. In section \ref{secCRT}, we show how to compute residue classes modulo prime ideals and we design a Chinese remainder theorem routine. Section \ref{secBasis} is devoted to the construction of a $p$-integral basis.
We have implemented a package in {\tt Magma} that performs all the above mentioned tasks; in section \ref{secKOM} we present several examples showing the excellent performance of the package in cases that the standard packages cannot deal with. Our routines work extremely fast as long as we deal only with fractional ideals whose norm may be factorized. Even in cases where $\op{disc}(f)$ may be factorized and an integral basis of $\mathbb Z_K$ is available, our methods work faster than the standard ones if the degree of $K$ is not too small. Mainly, this is due to the fact that we avoid the use of linear algebra routines (computation of $\mathbb Z$-bases of ideals, Hermite and Smith normal forms of $n\times n$ matrices, etc.), that dominate the complexity when the degree $n$ grows. Finally, in section \ref{secConclusion} we make some comments on the apparent limits of these Montes' techniques: they are not yet able to test if a fractional ideal is principal.
We also briefly mention how to extend the results of this paper to the function field case and the similar challenges that arise in this geometric context.\bigskip
\noindent{\bf Notations. }
Throughout the paper we fix a monic irreducible polynomial $f(x)\in\mathbb Z[x]$ of degree $n$, and a root $\theta\in\overline{\mathbb{Q}}$ of $f(x)$. We let $K=\mathbb Q(\theta)$ be the number field generated by $\theta$, and $\mathbb Z_K$ its ring of integers.
\section{Okutsu-Montes representations of prime ideals}\label{secMontes}
Let $p$ be a prime number. In this section we recall Montes algorithm and we describe the structure of the $f$-complete and optimal types that parameterize the prime ideals of $K$ lying over $p$. The results are mainly extracted from \cite{HN} (HN standing for ``Higher Newton") and \cite{GMNalgorithm}.
Given a field $F$ and two polynomials $\varphi(y),\,\psi(y)\in F[y]$, we write $\varphi(y)\sim \psi(y)$ to indicate that there exists a constant $c\in F^*$ such that $\varphi(y)=c\psi(y)$.
\subsection{Types and their invariants}
Let $v\colon \overline{\mathbb{Q}}_p^{\,*}\to \mathbb Q$ be the canonical extension of the $p$-adic valuation of $\mathbb{Q}_p$ to a fixed algebraic closure. We extend $v$ to the discrete valuation $v_1$ on the field $\mathbb{Q}_p(x)$, determined by:
$$
v_1\colon \mathbb Q_p[x]\longrightarrow \mathbb Z\cup\{\infty\},\quad v_1(b_0+\cdots+b_rx^r):=\min\{v(b_j),\,0\le j\le r\}.
$$
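For polynomials with integer coefficients, this definition is immediate to evaluate. A minimal Python sketch (the function names are ours; the polynomial is given by its list of integer coefficients):

```python
def pval(n, p):
    # p-adic valuation of a nonzero integer n
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def v1(coeffs, p):
    # v_1(b_0 + b_1*x + ... + b_r*x^r) = min_j v(b_j),
    # for a nonzero polynomial given by its integer coefficients
    return min(pval(b, p) for b in coeffs if b != 0)
```

For instance, $v_1(18+9x+27x^2)=2$ for $p=3$.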
Denote by $\ff0:=\operatorname{GF}(p)$ the prime field of characteristic $p$, and consider the $0$-th \emph{residual polynomial} operator
$$
R_0\colon \Z_p[x]\longrightarrow \ff0[y],\quad g(x)\mapsto \overline{g(y)/p^{v_1(g)}},
$$
where $^{\raise.8ex\hbox to 8pt{\hrulefill }}\colon \mathbb Z_p[y]\to \ff0[y]$ is the natural reduction map.
A \emph{type of order zero}, $\mathbf{t}=\psi_0(y)$, is just a monic irreducible polynomial $\psi_0(y)\in\ff0[y]$. A \emph{representative} of $\mathbf{t}$ is any monic polynomial $\phi_1(x)\in\mathbb Z[x]$ such that $R_0(\phi_1)=\psi_0$. The pair $(\phi_1,v_1)$ can be used to attach a Newton polygon to any nonzero polynomial $g(x)\in \mathbb{Q}_p[x]$. If $g(x)=\sum_{s\ge0}a_s(x)\phi_1(x)^s$ is the $\phi_1$-adic development of $g(x)$, then
$N_1(g):=N_{\phi_1,v_1}(g)$ is the lower convex envelope of the set of points of the plane with coordinates $(s,v_1(a_s(x)\phi_1(x)^s))$ \cite[Sec.1]{HN}.
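The lower convex envelope of the finite cloud of points $(s, v_1(a_s\phi_1^s))$ can be computed with a standard monotone-chain sweep. A minimal Python sketch (assuming one point per abscissa, and omitting the points with $a_s=0$, where the valuation is $\infty$):

```python
def lower_newton_polygon(points):
    # Vertices of the lower convex envelope of a finite set of
    # points (s, v) in the plane, swept from left to right.
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # keep hull[-1] only if (hull[-2], hull[-1], p) is a left turn
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) > 0:
                break
            hull.pop()
        hull.append(p)
    return hull
```

Collinear interior points are discarded, so the routine returns only the vertices of the polygon.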
Let $\lambda_1\in\mathbb Q^-$ be a negative rational number, $\lambda_1=-h_1/e_1$, with $h_1,e_1$ po\-sitive coprime integers. The triple $(\phi_1,v_1,\lambda_1)$ determines a discrete valuation $v_2$ on $\mathbb{Q}_p(x)$, constructed as follows: for any nonzero polynomial $g(x)\in\Z_p[x]$, take a line of slope $\lambda_1$ far below $N_1(g)$ and let it shift upwards till it touches the polygon for the first time; if $H$ is the ordinate at the origin of this line, then $v_2(g(x))=e_1 H$, by definition. Also, the triple $(\phi_1,v_1,\lambda_1)$ determines a residual polynomial operator
$$
R_1:=R_{\phi_1,v_1,\lambda_1}\colon \Z_p[x]\longrightarrow \ff1[y],\quad \ff1:=\ff0[y]/(\psi_0(y)),
$$
which yields a kind of first-order reduction of $g(x)$ \cite[Def.1.9]{HN}.
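The value $v_2(g)$ can be computed without shifting a line explicitly: a line of slope $\lambda_1=-h_1/e_1$ through a point $(s,u)$ has ordinate $u+s\,h_1/e_1$ at the origin, so the touching line has ordinate $H=\min_s\,(u_s+s\,h_1/e_1)$ over the points attached to $g$, and $v_2(g)=e_1H$. A small Python illustration of this formula, using exact arithmetic (the function name is ours):

```python
from fractions import Fraction

def v2_from_points(points, h1, e1):
    # points: the cloud (s, v_1(a_s * phi_1^s)); the line of slope
    # -h1/e1 first touches the polygon at ordinate H at the origin,
    # and v_2(g) = e1 * H.
    H = min(Fraction(u) + Fraction(s * h1, e1) for s, u in points)
    return e1 * H
```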
Let $\psi_1(y)\in\ff1[y]$ be a monic irreducible polynomial, $\psi_1(y)\ne y$. The triple $\mathbf{t}=(\phi_1(x);\lambda_1,\psi_1(y))$ is called a \emph{type of order one}. Given any such type, one can compute a representative of $\mathbf{t}$; that is, a monic polynomial $\phi_2(x)\in\mathbb Z[x]$ of degree $e_1\deg\psi_1\deg\phi_1$, satisfying
$R_1(\phi_2)(y)\sim\psi_1(y)$. Now we may start over with the pair $(\phi_2,v_2)$ and repeat all constructions in order two.
The iteration of this procedure leads to the concept of \emph{type of order $r$} \cite[Sec.2]{HN}. A type of order $r\ge 1$ is a chain:
$$
\mathbf{t}=(\phi_1(x);\lambda_1,\phi_2(x);\cdots;\lambda_{r-1},\phi_r(x);\lambda_r,\psi_r(y)),
$$
where $\phi_1(x),\dots,\phi_r(x)$ are monic polynomials in $\mathbb Z[x]$ that are irreducible in $\Z_p[x]$, $\lambda_1,\dots,\lambda_r$ are negative rational numbers, and $\psi_r(y)$ is a polynomial over a certain finite field $\ff{r}$ (to be specified below), that satisfy the following recursive properties:
\begin{enumerate}
\item $\phi_1(x)$ is irreducible modulo $p$. We define $\psi_0(y):=R_0(\phi_1)(y)\in \ff0[y]$, $\ff1=\ff0[y]/(\psi_0(y))$.
\item For all $1\le i<r$, $N_i(\phi_{i+1}):=N_{\phi_i,v_i}(\phi_{i+1})$ is one-sided of slope $\lambda_i$, and $R_i(\phi_{i+1})(y):=R_{\phi_i,v_i,\lambda_i}(\phi_{i+1})(y)\sim \psi_i(y)$, for some monic irreducible polynomial $\psi_i(y)\in \ff{i}[y]$. We define $\ff{i+1}=\ff{i}[y]/(\psi_i(y))$.
\item $\psi_r(y)\in\ff{r}[y]$ is a monic irreducible polynomial, $\psi_r(y)\ne y$.
\end{enumerate}
Thus, a type of order $r$ is an object structured in $r$ levels. In the computational representation of a type, several invariants are stored at each level, $1\le i\le r$. The most important ones are:
$$
\begin{array}{ll}
\phi_i(x), & \mbox{monic polynomial in }\mathbb Z[x], \mbox{ irreducible over }\Z_p[x],\\
m_i, & \deg \phi_i(x),\\
v_i(\phi_i), & \mbox{non-negative integer},\\
\lambda_i=-h_i/e_i, & \mbox{$h_i,e_i$ positive coprime integers},\\
\ell_i,\ell'_i, & \mbox{a pair of integers satisfying }\ell_ih_i-\ell'_ie_i=1,\\
\psi_i(y), & \mbox{monic irreducible polynomial in }\ff{i}[y],\\
f_i, & \deg \psi_i(y),\\
z_i, & \mbox{the class of $y$ in $\ff{i+1}$, so that }\psi_i(z_i)=0.
\end{array}
$$
Take $f_0:=\deg \psi_0$, and let $z_0\in\ff{1}$ be the class of $y$, so that $\psi_0(z_0)=0$. Note that $m_i=(f_0f_1\cdots f_{i-1})(e_1\cdots e_{i-1})$, $\ff{i+1}=\ff{i}[z_i]$, and $\dim_{\ff0}\ff{i+1}=f_0f_1\cdots f_i$. The discrete valuations $v_1,\dots,v_{r+1}$ on the field $\mathbb{Q}_p(x)$ are essential invariants of the type.
\begin{definition}\label{defs}
Let $g(x)\in\Z_p[x]$ be a monic separable polynomial, and $\mathbf{t}$ a type of order $r\ge 1$.
(1) \ We say that $\mathbf{t}$ \emph{divides} $g(x)$ (and we write $\mathbf{t} \,|\, g(x)$), if $\psi_r(y)$ divides $R_r(g)(y)$ in $\ff{r}[y]$.
(2) \ We say that $\mathbf{t}$ is \emph{$g$-complete} if $\operatorname{ord}_{\psi_r}(R_r(g))=1$. In this case, $\mathbf{t}$ singles out a monic irreducible factor $g_\mathbf{t}(x)\in\Z_p[x]$ of $g(x)$, uniquely determined by the property $R_r(g_\mathbf{t})(y)\sim\psi_r(y)$. If $K_\mathbf{t}$ is the extension of $\mathbb{Q}_p$ determined by $g_\mathbf{t}(x)$, then $$e(K_\mathbf{t}/\mathbb{Q}_p)=e_1\cdots e_r,\qquad f(K_\mathbf{t}/\mathbb{Q}_p)=f_0f_1\cdots f_r.$$
(3) \ A \emph{representative} of $\mathbf{t}$ is a monic polynomial $\phi_{r+1}(x)\in\mathbb Z[x]$, of degree $m_{r+1}=e_rf_rm_r$ such that $R_r(\phi_{r+1})(y)\sim \psi_r(y)$. This polynomial is necessarily irreducible in $\Z_p[x]$. By the definition of a type, each $\phi_{i+1}(x)$ is a representative of the truncated type of order $i$
$$
\operatorname{Trunc}_{i}(\mathbf{t}):=(\phi_1(x);\lambda_1,\phi_2(x);\cdots;\lambda_{i-1},\phi_i(x);\lambda_i,\psi_i(y)).
$$
(4) \ We say that $\mathbf{t}$ is \emph{optimal} if $m_1<\cdots<m_r$, or equivalently, if $e_if_i>1$, for all $1\le i< r$.
\end{definition}
\begin{lemma}\label{vjphii}
Let $\mathbf{t}$ be a type of order $r$. Then, $v_j(\phi_i)=(m_i/m_j)v_j(\phi_j)$, for all $j<i\le r$.
\end{lemma}
\begin{proof}
Let $\phi_i(x)=\sum_{s\ge 0} a_s(x)\phi_j(x)^s$ be the $\phi_j$-adic development of $\phi_i$. By \cite[Lem.2.17]{HN}, $v_j(\phi_i)=\min_{s\ge 0}\{v_j(a_s\phi_j^s)\}$. Now, $N_j(\phi_i)$ is one-sided of slope $\lambda_j$, because $\phi_i$ is a polynomial of type $\operatorname{Trunc}_j(\mathbf{t})$ \cite[Def.2.1+Lem.2.4]{HN}. Since the principal term of the development is $\phi_j^{m_i/m_j}$, we get
$v_j(\phi_i)=v_j(\phi_j^{m_i/m_j})=(m_i/m_j)v_j(\phi_j)$.
\end{proof}
\subsection{Certain rational functions}\label{subsecratfunctions}
Let $\mathbf{t}$ be a type of order $r$. We attach to $\mathbf{t}$ seve\-ral rational functions in $\mathbb Q(x)$ \cite[Sec.2.4]{HN}. Note that $v_i(\phi_i)$ is always divisible by $e_{i-1}$
\cite[Thm.2.11]{HN}.
\begin{definition}\label{ratfracs}
Let $\,\pi_0(x)=1$, $\pi_1(x)=p$. We define recursively for all $1\le i\le r$:
$$
\Phi_i(x)=\dfrac{\phi_i(x)}{\pi_{i-1}(x)^{v_i(\phi_i)/e_{i-1}}},\qquad
\gamma_i(x)=\dfrac{\Phi_i(x)^{e_i}}{\pi_i(x)^{h_i}},\qquad
\pi_{i+1}(x)=\dfrac{\Phi_i(x)^{\ell_i}}{\pi_i(x)^{\ell'_i}}.
$$
These rational functions can be written as a product of powers of $p,\phi_1(x),\dots,\phi_r(x)$, with integer exponents.
\end{definition}
\noindent{\bf Notation. }Let $\Psi(x)=p^{n_0}\phi_1(x)^{n_1}\cdots\phi_s(x)^{n_s}\in \mathbb Q(x)$ be a rational function which is a product of powers of $p,\phi_1,\dots,\phi_s$, with integer exponents. We denote:
$$
\log \Psi=(n_0,\dots,n_s)\in \mathbb Z^{s+1}.
$$
The next result is inspired by \cite[Cor.4.26]{HN}.
\begin{lemma}\label{prodgammas}
Let $F(x)\in\mathbb Z_p[x]$ be a monic irreducible polynomial divisible by $\mathbf{t}$, and let $\alpha\in\overline{\mathbb{Q}}_p$ be a root of $F(x)$. For some $1\le s\le r$, let $\Psi(x)=p^{n_0}\phi_1(x)^{n_1}\cdots\phi_s(x)^{n_s}$ be a rational function in $\mathbb Q(x)$, such that $v(\Psi(\alpha))=0$. Then,
$$\Psi(x)=\gamma_1(x)^{t_1}\cdots\gamma_s(x)^{t_s},
$$
for certain integer exponents $t_1,\dots,t_s\in\mathbb Z$, which can be computed by the following recursive procedure:
{\tt
\qquad vector=$(n_0,\dots,n_s)$
\nopagebreak
\qquad for i=s to 1 by -1 do
\nopagebreak
\qquad\qquad$t_i$=vector[i]/$e_i$
\nopagebreak
\qquad\qquad vector=vector-$t_i\log\gamma_i$
\qquad end for}
\end{lemma}
\begin{proof}
By \cite[(17)]{HN} and \cite[Cor.3.2]{HN}:
$$\log\Phi_s=(\dots\dots,1),\ \log\pi_s=(\dots\dots,0),\ \log \gamma_s=(\dots\dots,e_s)\in\mathbb Z^{s+1},$$
$$
v(\phi_s(\alpha))=\sum_{i=1}^s e_if_i\cdots e_{s-1}f_{s-1}\dfrac{h_i}{e_1\cdots e_i},
\qquad v(\gamma_s(\alpha))=0.
$$
Since $v(\Psi(\alpha))=0$, the formula for $v(\phi_s(\alpha))$ shows that $e_s| n_s$. Thus, we can replace $\Psi(x)$ by $\Psi(x)\gamma_s^{-n_s/e_s}$ and iterate the argument. Since $v(\gamma_s(\alpha))=0$, the new $\Psi(x)$ satisfies $v(\Psi(\alpha))=0$ as well, and the $s$-th coordinate of $\log\Psi$ is zero. At the last step ($s=1$), we get $\Psi(x)=p^{n'_0}\phi_1^{n'_1}$, with $n'_0+n'_1(h_1/e_1)=0$.
Then, clearly $\Psi(x)=\gamma_1(x)^{n'_1/e_1}$.
\end{proof}
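The recursive procedure of the lemma is directly executable once the vectors $\log\gamma_i$ (padded with zeros to length $s+1$) and the indices $e_i$ are given. A Python sketch, with synthetic data of our own invention in the example below (the function name is ours):

```python
def recover_gamma_exponents(log_psi, log_gammas, e):
    # log_psi = (n_0,...,n_s); log_gammas[i-1] = log gamma_i padded
    # with zeros to length s+1; e[i-1] = e_i.
    vec = list(log_psi)
    s = len(vec) - 1
    t = [0] * s
    for i in range(s, 0, -1):
        assert vec[i] % e[i - 1] == 0      # guaranteed by v(Psi(alpha)) = 0
        t[i - 1] = vec[i] // e[i - 1]
        vec = [a - t[i - 1] * b for a, b in zip(vec, log_gammas[i - 1])]
    assert all(a == 0 for a in vec)        # Psi is a product of the gamma_i
    return t
```

For instance, with $\log\gamma_1=(-1,2,0)$, $\log\gamma_2=(-6,-1,3)$, $e_1=2$, $e_2=3$ and $\log\Psi=(-8,3,3)$, the procedure returns the exponents $(t_1,t_2)=(2,1)$, i.e.\ $\Psi=\gamma_1^2\gamma_2$.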
\subsection{Montes algorithm and the secondary invariants}
Let $f(x)\in\mathbb Z[x]$ be a monic irreducible polynomial. At the input of the pair $(f(x),p)$, Montes algorithm computes a family $\mathbf{t}_1,\dots,\mathbf{t}_s$ of $f$-complete and optimal types in one-to-one correspondence with the irreducible factors $f_{\mathbf{t}_1}(x),\dots,f_{\mathbf{t}_s}(x)$ of $f(x)$ in $\Z_p[x]$. This one-to-one correspondence is determined by:
\begin{enumerate}
\item For all $1\le i\le s$, the type $\mathbf{t}_i$ is $f_{\mathbf{t}_i}$-complete.
\item For all $j\ne i$, the type $\mathbf{t}_j$ does not divide $f_{\mathbf{t}_i}(x)$.
\end{enumerate}
The algorithm starts by computing the order zero types determined by the irreducible factors of $f(x)$ modu\-lo $p$, and then proceeds to enlarge them in a convenient way till the whole list of $f$-complete optimal types is obtained \cite{GMNalgorithm}.
With regard to the computation of generators of the prime ideals and Chinese remainder multipliers, the algorithm is slightly modified to compute and store some further (secondary) invariants at each level of every type $\mathbf{t}$ considered by the algorithm:
$$\begin{array}{ll}
\operatorname{Refinements}_i, & \mbox{ a list of pairs $[\phi(x),\lambda]$, where $\phi$ is a representative of}\\& \qquad\operatorname{Trunc}_{i-1}(\mathbf{t})\mbox{ and $\lambda$ a negative slope},\\
u_i, & \mbox{ a nonnegative integer called the \emph{height}},\\
\operatorname{Quot}_i,& \mbox{ a list of $e_i$ polynomials in }\mathbb Z[x], \\
\log \Phi_i,& \mbox{ a vector }(n_0,\dots,n_i)\in\mathbb Z^{i+1},\\
\log \pi_i,& \mbox{ a vector }(n_0,\dots,n_{i-1},0)\in\mathbb Z^{i+1},\\
\log \gamma_i,& \mbox{ a vector }(n_0,\dots,n_i)\in\mathbb Z^{i+1}.
\end{array}
$$
Let us briefly explain the flow of the algorithm and the computation of these invariants. Suppose a type of order $i-1$ dividing $f(x)$ is considered,
$$\mathbf{t}=(\phi_1(x);\lambda_1,\phi_2(x);\cdots;\lambda_{i-2},\phi_{i-1};\lambda_{i-1},\psi_{i-1}(y)).$$
A representative $\phi_i(x)$ is constructed. Suppose that either $i=1$ or $m_1<\cdots<m_i$. Let $\ell=\op{ord}_{\psi_{i-1}}R_{i-1}(f)$. If $\mathbf{t}$ is not $f$-complete ($\ell>1$), it may ramify and produce new types, which will be germs of distinct $f$-complete types. To carry out this ramification process, we simultaneously compute the first $\ell+1$ coefficients of the $\phi_i$-adic development of $f(x)$ and the corresponding quotients:
\begin{equation}\label{quotients}
\begin{array}{rcl}
f(x)&=&\phi_i(x)q_1(x)+a_0(x),\\
q_1(x)&=&\phi_i(x)q_2(x)+a_1(x),\\
\cdots&&\cdots\\
q_\ell(x)&=&\phi_i(x)q_{\ell+1}(x)+a_\ell(x).
\end{array}
\end{equation}
The Newton polygon of $i$-th order of $f(x)$, $N_i(f)$, is the lower convex envelope of the set of points $(s,v_i(a_s\phi_i^s))$ of the plane, for all $s\ge 0$. However, we need to build up only the \emph{principal part} of this polygon, $N^-_i(f)=N^-_{\phi_i,v_i}(f)$, formed by the sides of negative slope of $N_i(f)$. By \cite[Lem.2.17]{HN}, this latter polygon is the lower convex envelope of the set of points $(s,v_i(a_s\phi_i^s))$ of the plane, for $0\le s\le \ell$. For each side of slope (say) $\lambda$ of $N^-_i(f)$, the residual polynomial
$R_{\lambda}(f)(y)=R_{\phi_i,v_i,\lambda}(f)(y)\in\ff{i}[y]$ is computed and factorized into a product of irreducible factors. The type $\mathbf{t}$ branches in principle into as many types as pairs $(\lambda,\psi(y))$, where $\lambda$ runs on the negative slopes of $N^-_i(f)$ and $\psi(y)$ runs on the different irreducible factors of $R_{\lambda}(f)(y)$. If one of these branches
$$
\mathbf{t}_{\lambda,\psi}:=(\phi_1(x);\lambda_1,\phi_2(x);\cdots;\lambda_{i-1},\phi_i(x);\lambda,\psi(y)),
$$ is $f$-complete, we store this type in a specific list and go on with the analysis of the other branches. Otherwise, we compute a representative $\phi_{\lambda,\psi}(x)$ of $\mathbf{t}_{\lambda,\psi}$. Let $e$ be the least positive denominator of $\lambda$ and $f=\deg \psi$. Then we proceed in a different way according to whether $ef=1$ or $ef>1$.
If $ef>1$, then $\deg \phi_{\lambda,\psi}>m_i$, so that $\phi_{i+1}:=\phi_{\lambda,\psi}$ may be used to enlarge $\mathbf{t}_{\lambda,\psi}$ into several optimal types of order $i+1$. We store the invariants
$$
\begin{array}{l}
\phi_i, \ m_i=\deg \phi_i, \ v_i(\phi_i),\ \lambda_i=\lambda, \ h_i,\ e_i=e,\ \ell_i,\ \ell'_i,\ \psi_i=\psi, \ f_i=f,\ z_i, \\ u_i,\ \operatorname{Quot}_i,\ \log\Phi_i,\ \log\pi_i,\ \log\gamma_i
\end{array}
$$
at the $i$-th level of $\mathbf{t}_{\lambda,\psi}$, and then we proceed to enlarge the type.
The invariant $v_i(\phi_i)$ is recursively computed by using \cite[Prop.2.7+Thm.2.11]{HN}:
$$
v_i(\phi_i)=\left\{\begin{array}{ll}
0,&\mbox{ if }i=1,\\
e_{i-1}f_{i-1}(e_{i-1}v_{i-1}(\phi_{i-1})+h_{i-1}),&\mbox{ if }i>1.
\end{array}
\right.
$$
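This recursion is immediate to implement. A minimal Python sketch (argument names are ours; $e_i,f_i,h_i$ are passed for the levels $1\le i<r$):

```python
def v_phi(r, e, h, f):
    # Returns [v_1(phi_1), ..., v_r(phi_r)] via the recursion above:
    # v_1(phi_1) = 0 and
    # v_i(phi_i) = e_{i-1} f_{i-1} (e_{i-1} v_{i-1}(phi_{i-1}) + h_{i-1}).
    v = [0]
    for i in range(1, r):
        v.append(e[i - 1] * f[i - 1] * (e[i - 1] * v[i - 1] + h[i - 1]))
    return v
```

For instance, $e_1=2$, $f_1=2$, $h_1=1$, $e_2=3$, $f_2=1$, $h_2=1$ give $v_2(\phi_2)=4$ and $v_3(\phi_3)=39$.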
Let $s_i$ be the abscissa of the right end point of the side of $N^-_i(f)$ of slope $\lambda$; the secondary invariants are computed as:
\begin{equation}\label{secondaryinvariants}
\as{1.2}
\begin{array}{l}
u_i=v_i(a_{s_i}),\\
\operatorname{Quot}_i=[1,\,q_{s_i-1}(x),\,\dots,\,q_{s_i-e+1}(x)],\\
\log \Phi_i=(n_0,\dots,n_{i-1},1), \mbox{ where }(n_0,\dots,n_{i-1})=-(v_i(\phi_i)/e_{i-1})\log \pi_{i-1},\\
\log \pi_i=\ell_{i-1}\log\Phi_{i-1}-\ell'_{i-1}\log \pi_{i-1},\\
\log \gamma_i=e_i\log\Phi_i-h_i\log \pi_i.
\end{array}
\end{equation}
For $i=1$ we take $\log \Phi_1=(0,1)$ and $\log \pi_1=(1,0)$.
Note that all these secondary invariants depend only on $\lambda$ and not on $\psi$. They are computed only once for each side of $N^-_i(f)$ and then stored in the different optimal branches that share the same slope.
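The recurrences (\ref{secondaryinvariants}), together with the base case $\log \Phi_1=(0,1)$, $\log \pi_1=(1,0)$, determine all the log vectors from the primary invariants alone. A Python sketch (names are ours; a level-$i$ vector lives in $\mathbb Z^{i+1}$):

```python
def log_vectors(vphi, e, h, ell, ellp):
    # vphi[i-1] = v_i(phi_i), e[i-1] = e_i, h[i-1] = h_i; the Bezout
    # pairs satisfy ell[i-1]*h[i-1] - ellp[i-1]*e[i-1] = 1.
    r = len(e)
    logPhi, logpi = [[0, 1]], [[1, 0]]
    for i in range(1, r):
        assert vphi[i] % e[i - 1] == 0   # v_i(phi_i) is divisible by e_{i-1}
        c = vphi[i] // e[i - 1]
        # log Phi_i = (-(v_i(phi_i)/e_{i-1}) log pi_{i-1}, 1)
        logPhi.append([-c * x for x in logpi[i - 1]] + [1])
        # log pi_i = l_{i-1} log Phi_{i-1} - l'_{i-1} log pi_{i-1}, padded
        logpi.append([ell[i - 1] * a - ellp[i - 1] * b
                      for a, b in zip(logPhi[i - 1], logpi[i - 1])] + [0])
    # log gamma_i = e_i log Phi_i - h_i log pi_i
    loggamma = [[e[i] * a - h[i] * b for a, b in zip(logPhi[i], logpi[i])]
                for i in range(r)]
    return logPhi, logpi, loggamma
```

With $v_2(\phi_2)=4$, $e=(2,3)$, $h=(1,1)$ and $(\ell_1,\ell'_1)=(1,0)$, this yields $\log\Phi_2=(-2,0,1)$, $\log\pi_2=(0,1,0)$ and $\log\gamma_2=(-6,-1,3)$.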
The type $\mathbf{t}_{\lambda,\psi}$ of order $i$ is then ready for further analysis.
If $ef=1$ then $\deg \phi_{\lambda,\psi}=m_i$, so that the enlargements of $\mathbf{t}_{\lambda,\psi}$ that would result from building up an $(i+1)$-th level from $\phi_{\lambda,\psi}$ would not be optimal. In this case, we replace $\mathbf{t}_{\lambda,\psi}$ by all types
$$
\mathbf{t}'_{\lambda',\psi'}:=(\phi_1(x);\lambda_1,\phi_2(x);\cdots;\lambda_{i-1},\phi'_i(x);\lambda',\psi'(y)),
$$
obtained by enlarging $\mathbf{t}$ with $i$-th levels deduced from the consideration of $\phi_{\lambda,\psi}$ as a new (and better) representative of $\mathbf{t}$: $\phi_i'(x):=\phi_{\lambda,\psi}(x)$, taking into account only the slopes $\lambda'$ of $N_{\phi'_i,v_i}(f)$ satisfying $\lambda'<\lambda$; this is called a \emph{refinement step} \cite[Sect.3.2]{GMNalgorithm}. If the total number of pairs $(\lambda,\psi)$ is greater than one, we append the pair $[\phi_i(x),\lambda]$ to the list $\operatorname{Refinements}_i$ of all types $\mathbf{t}'_{\lambda',\psi'}$.
Some of these new branches $\mathbf{t}'_{\lambda',\psi'}$ of order $i$ may be $f$-complete, some may lead to optimal enlargements and some may lead to further refinement at the $i$-th level. Thus, in general, the list $\operatorname{Refinements}_i$ of a type $\mathbf{t}$ is an ordered sequence of pairs:
$$
\operatorname{Refinements}_i=[[\phi_i^{(1)},\lambda_i^{(1)}], \cdots ,[\phi_i^{(s)},\lambda_i^{(s)}]],
$$
reflecting the fact that, along the construction of $\mathbf{t}$, $s$ or more successive refinement steps occurred at the $i$-th level. All polynomials $\phi_i^{(k)}$ are representatives of $\operatorname{Trunc}_{i-1}(\mathbf{t})$, and the slopes $\lambda_i^{(k)}$ increase strictly in absolute value: $|\lambda_i^{(1)}|<\cdots <|\lambda_i^{(s)}|$. We recall that we store only the pairs $[\phi_i^{(k)},\lambda_i^{(k)}]$ corresponding to a refinement step that occurred simultaneously with some branching. For instance, if $N_{\phi_i^{(k)},v_i}^-(f)$ has only one side, with integer slope $\lambda_i^{(k)}\in\mathbb Z$ (i.e. $e=1$), and the corresponding residual polynomial is the power of an irreducible polynomial of degree one ($f=1$), then the pair $[\phi_i^{(k)},\lambda_i^{(k)}]$ is not included in the list $\operatorname{Refinements}_i$.
After a finite number of branching, enlargement and/or refinement steps, all types become $f$-complete and optimal. If $\mathbf{t}$ is an $f$-complete and optimal type of order $r$, then the \emph{Okutsu depth} of $f_\mathbf{t}(x)$ is
\cite[Thm.4.2]{GMNokutsu}:
\begin{equation}\label{depth}
R=\left\{\begin{array}{ll}
r,&\mbox{ if }e_rf_r>1, \\r-1,&\mbox{ if }e_rf_r=1.
\end{array}\right.
\end{equation}
The invariants $v_{i+1},\,h_i,\,e_i,\,f_i$ at each level $1\le i\le R$ are canonical (depend only on $f(x)$) \cite[Cor.3.7]{GMNokutsu}. On the other hand, the polynomials $\phi_i(x),\psi_i(y)$ depend on several choices, some of them caused by the lifting of elements of a finite field to rings of characteristic zero. However, the sequence $[\phi_1,\dots,\phi_R]$ is an \emph{Okutsu frame} of $f_\mathbf{t}(x)$ \cite[Sec.2]{GMNokutsu}; in the original terminology of Okutsu, the polynomials $\phi_1,\dots,\phi_R$ are \emph{primitive divisor polynomials} of $f_\mathbf{t}(x)$ \cite{Ok}.
\subsection{Montes approximations to the irreducible $p$-adic factors}\label{subsecApprox}
Once an $f$-complete and optimal type $\mathbf{t}$ of order $r$ is computed, Montes algorithm attaches to it an $(r+1)$-level that carries only the invariants:
\begin{equation}\label{lastlevel}\as{1.2}
\begin{array}{l}
\phi_{r+1},\ m_{r+1},\ v_{r+1}(\phi_{r+1}),\ \lambda_{r+1}=-h_{r+1},\ e_{r+1}=1,\ \psi_{r+1},\ f_{r+1}=1,\\
z_{r+1},\ \log\Phi_{r+1},\ \log\pi_{r+1},\ \log\gamma_{r+1}.
\end{array}
\end{equation}
The polynomial $\phi_{r+1}(x)$ is a representative of $\mathbf{t}$. The invariants $\lambda_{r+1}$, $\psi_{r+1}$ are deduced from the computation of the principal part of the Newton polygon of $(r+1)$-th order of $f(x)$ (which is a single side of length one) and the corresponding resi\-dual polynomial $R_{r+1}(f)(y)\in\ff{r+1}[y]$ (which has degree one).
If $\mathfrak{p}$ is the prime ideal corresponding to $\mathbf{t}$, we denote
$$\as{1.3}
\begin{array}{l}
f_\mathfrak{p}(x):=f_\mathbf{t}(x)\in\Z_p[x],\quad \phi_\mathfrak{p}(x):=\phi_{r+1}(x)\in\mathbb Z[x],\quad \ff{\mathfrak{p}}:=\ff{r+1},\\
\mathbf{t}_\mathfrak{p}:=(\phi_1;\lambda_1,\phi_2;\cdots,\phi_r;\lambda_r,\phi_{\mathfrak{p}};\lambda_{r+1},\psi_{r+1}),
\end{array}
$$
and we say that $\mathfrak{p}=[p;\phi_1,\dots,\phi_r,\phi_\mathfrak{p}]$ is the \emph{Okutsu-Montes representation} of $\mathfrak{p}$.
Note that $\mathbf{t}_\mathfrak{p}$ is still an $f$-complete type of order $r+1$, but it may fail to be optimal, because $m_{r+1}=m_r$ whenever $e_rf_r=1$.
The polynomial $\phi_\mathfrak{p}(x)$ is a \emph{Montes approximation} to the $p$-adic irreducible factor $f_\mathfrak{p}(x)$ \cite[Sec.4.1]{GMNokutsu}. Several arithmetic tasks involving prime ideals (tasks (1), (3), (5), (6) and (7) from the list given in the Introduction) require the computation of a Montes approximation with a sufficiently large value of the last slope $|\lambda_{r+1}|=h_{r+1}$. This can be achieved by applying a finite number of refinement steps at the $(r+1)$-th level, to the type $\mathbf{t}_\mathfrak{p}$ of order $r+1$, as described in \cite[Sec.4.3]{GMNokutsu}. This procedure converges only linearly. In \cite{GNP} a more efficient \emph{single-factor lift} algorithm is developed, which is able to improve the Montes approximations to $f_\mathfrak{p}(x)$ with quadratic convergence.
\section{$\mathfrak{p}$-adic valuation and factorization}\label{secPadic}
In section \ref{subsecPadic} we compute the $\mathfrak{p}$-adic valuation, $v_\mathfrak{p}\colon K^*\to \mathbb Z$, determined by a prime ideal $\mathfrak{p}$, in terms of the data contained in the Okutsu-Montes representation of $\mathfrak{p}$. In section \ref{subsecFactorization} we describe a procedure to find the prime ideal decomposition of any fractional ideal of $K$. From the computational point of view, this procedure is based on three ingredients:
\begin{enumerate}
\item The factorization of integers.
\item Montes algorithm to find the prime ideal decomposition of a prime number.
\item The computation of $v_\mathfrak{p}$ for some prime ideals $\mathfrak{p}$.
\end{enumerate}
The routines (2) and (3) run extremely fast in practice (see section \ref{secKOM}).
It is well-known how to add, multiply and intersect fractional ideals once their prime ideal factorization is available. We omit the description of the routines that carry out these tasks.
From now on, to any prime ideal $\mathfrak{p}$ of $K$ we attach the data $\mathbf{t}_\mathfrak{p}$, $f_\mathfrak{p}(x)$, $\phi_\mathfrak{p}(x)$, $\ff{\mathfrak{p}}$, described in section \ref{subsecApprox}. Also, we choose a root
$\theta_\mathfrak{p}\in\overline{\mathbb{Q}}_p$ of $f_\mathfrak{p}(x)$, we consider the local field $K_\mathfrak{p}=\mathbb{Q}_p(\theta_\mathfrak{p})$, and we denote by $\mathbb Z_{K_\mathfrak{p}}$ the ring of integers of $K_\mathfrak{p}$.
\subsection{Computation of the $\mathfrak{p}$-adic valuation}\label{subsecPadic}
Let $p$ be a prime number and $v\colon \overline{\mathbb{Q}}_p^{\,*}\to \mathbb Q$, the canonical $p$-adic valuation. Let $\mathfrak{p}$ be a prime ideal of $K$ lying above $p$, corresponding to an $f$-complete type $\mathbf{t}_\mathfrak{p}$ with an added $(r+1)$-th level, as indicated in section \ref{subsecApprox}.
We shall freely use all invariants of $\mathbf{t}_\mathfrak{p}$ described in section \ref{secMontes}. By item 2 of Definition \ref{defs} we know that
$$
e(\mathfrak{p}/p)=e_1\cdots e_r,\qquad f(\mathfrak{p}/p)=f_0f_1\cdots f_r.
$$
The residue field $\mathbb Z_{K_\mathfrak{p}}/\mathfrak{p}\mathbb Z_{K_\mathfrak{p}}$ can be identified with the finite field $$\ff{\mathfrak{p}}:=\ff{r+1}=\ff{0}[z_0,z_1,\dots,z_r].$$
More precisely, in \cite[(27)]{HN} we construct an explicit isomorphism
\begin{equation}\label{embedding}
\gamma\colon \ff{\mathfrak{p}}\ \lower.3ex\hbox{\as{.08}$\begin{array}{c}\longrightarrow\\\mbox{\tiny $\sim\,$}\end{array}$}\ \mathbb Z_{K_\mathfrak{p}}/\mathfrak{p}\mathbb Z_{K_\mathfrak{p}}, \qquad z_0\mapsto \overline{\theta_\p}, \ z_1\mapsto \gb1,\ \dots,\ z_r\mapsto \gb{r},
\end{equation}
where a bar indicates the canonical reduction map, $\mathbb Z_{K_\mathfrak{p}}\longrightarrow \mathbb Z_{K_\mathfrak{p}}/\mathfrak{p}\mathbb Z_{K_\mathfrak{p}}$. We denote by $\operatorname{lred}_\mathfrak{p} \colon\mathbb Z_{K_\mathfrak{p}}\longrightarrow \ff{\mathfrak{p}}$ the reduction map obtained by composing the canonical reduction map with the inverse of the isomorphism (\ref{embedding}).
\begin{equation}\label{lred}
\operatorname{lred}_\mathfrak{p}\colon \mathbb Z_{K_\mathfrak{p}}\longrightarrow \mathbb Z_{K_\mathfrak{p}}/\mathfrak{p}\mathbb Z_{K_\mathfrak{p}} \stackrel{\gamma^{-1}}\longrightarrow \ff{\mathfrak{p}}.
\end{equation}
Consider the topological embedding $\iota_\mathfrak{p}\colon K\hookrightarrow K_\mathfrak{p}$ determined by sending $\theta$ to $\theta_\mathfrak{p}$. We have: $v_\mathfrak{p}(\alpha)=e(\mathfrak{p}/p)v(\iota_\mathfrak{p}(\alpha))$, for all $\alpha\in K$. In particular, for any polynomial $g(x)\in\mathbb Z[x]$,
\begin{equation}\label{tautology}
v_\mathfrak{p}(g(\theta))=e(\mathfrak{p}/p)v(g(\theta_\mathfrak{p})).
\end{equation}
Any $\alpha\in K^*$ can be expressed as $\alpha=(a/b)g(\theta)$, for some coprime positive integers $a,b$ and some primitive polynomial $g(x)\in\mathbb Z[x]$. By (\ref{tautology}), $$v_\mathfrak{p}(\alpha)=e(\mathfrak{p}/p)(v(g(\theta_\mathfrak{p}))+v(a/b)).$$ Thus, it is sufficient to learn to compute $v(g(\theta_\mathfrak{p}))$. The condition $v(g(\theta_\mathfrak{p}))=0$ is easy to check \cite[Lem.2.2]{GMNokutsu}:
\begin{equation}\label{v=0}
v(g(\theta_\mathfrak{p}))=0 \ \,\Longleftrightarrow\,\ \psi_0\nmid R_0(g).
\end{equation}
If $\psi_0\mid R_0(g)$, the computation of $v(g(\theta_\mathfrak{p}))$ can be based on the following proposition, which is easily deduced from \cite[Prop.3.5]{HN} and \cite[Cor.3.2]{HN}.
\begin{proposition}\label{vgt}Let $\mathfrak{p},\,f_\mathfrak{p}(x),\,\theta_\mathfrak{p}$ be as above. Let $\mathbf{t}$ be a type of order $R$ dividing $f_\mathfrak{p}(x)$,
and let $g(x)\in\mathbb Z[x]$ be a nonzero polynomial. For any $1\le i\le R$, take a line $L_{\lambda_i}$ of slope $\lambda_i$ far below $N_i(g)$, and let it shift upwards till it touches the polygon for the first time. Let $S$ be the intersection of this line with $N_i(g)$, let $(s,u)$ be the coordinates of the left end point of $S$, and let $H=u+s|\lambda_i|$ be the ordinate at the origin of this line. Then,
\begin{enumerate}
\item $v(g(\theta_\mathfrak{p}))\ge H/e_1\cdots e_{i-1}$, and equality holds if and only if $\operatorname{Trunc}_i(\mathbf{t})\nmid g(x)$.
\item If equality holds, then $v(g(\theta_\mathfrak{p}))=v(\Phi_i(\theta_\mathfrak{p})^s\pi_i(\theta_\mathfrak{p})^u)$ and
$$
\operatorname{lred}_\mathfrak{p}\left(\dfrac{g(\theta_\mathfrak{p})}{\Phi_i(\theta_\mathfrak{p})^s\pi_i(\theta_\mathfrak{p})^u}\right)=R_i(g)(z_i)\ne0.
$$\qed
\end{enumerate}
\end{proposition}
Figure 1 shows that the segment $S$ may degenerate to a single point. In this case, the residual polynomial $R_i(g)(y)$ is a constant \cite[Def.2.21]{HN}, so that $\operatorname{Trunc}_i(\mathbf{t})\nmid g(x)$ holds automatically.
\begin{center}
\setlength{\unitlength}{5.mm}
\begin{picture}(16,6)
\put(2.85,1.85){$\bullet$}\put(1.85,2.85){$\bullet$}
\put(-1,0){\line(1,0){7}}\put(0,-1){\line(0,1){6}}
\put(3,2){\line(-1,1){1}}\put(3.02,2){\line(-1,1){1}}
\put(3,2){\line(3,-1){1}}\put(3.02,2){\line(3,-1){1}}
\put(2,3){\line(-1,2){1}}\put(2.02,3){\line(-1,2){1}}
\put(6,.5){\line(-2,1){7}}
\put(5.2,1){\begin{footnotesize}$L_{\lambda_i}$\end{footnotesize}}
\multiput(3,-.1)(0,.25){9}{\vrule height2pt}
\multiput(-.1,2)(.25,0){12}{\hbox to 2pt{\hrulefill }}
\put(1.8,4.5){\begin{footnotesize}$N_i(g)$\end{footnotesize}}
\put(-.6,1.85){\begin{footnotesize}$u$\end{footnotesize}}
\put(-.6,3.1){\begin{footnotesize}$H$\end{footnotesize}}
\put(2.9,2.3){\begin{footnotesize}$S$\end{footnotesize}}
\put(2.8,-.6){\begin{footnotesize}$s$\end{footnotesize}}
\put(14.8,1.35){$\bullet$}\put(12.85,2.35){$\bullet$}
\put(9,0){\line(1,0){8}}\put(10,-1){\line(0,1){6}}
\put(15,1.5){\line(-2,1){2}}\put(15.02,1.5){\line(-2,1){2}}
\put(15,1.5){\line(3,-1){1}}\put(15.02,1.5){\line(3,-1){1}}
\put(13,2.5){\line(-1,2){1.3}}\put(13.02,2.5){\line(-1,2){1.3}}
\put(17,.5){\line(-2,1){8}}
\put(16.6,.8){\begin{footnotesize}$L_{\lambda_i}$\end{footnotesize}}
\put(14,2.1){\begin{footnotesize}$S$\end{footnotesize}}
\put(12.85,-.6){\begin{footnotesize}$s$\end{footnotesize}}
\multiput(13,-.1)(0,.25){11}{\vrule height2pt}
\multiput(9.9,2.55)(.25,0){12}{\hbox to 2pt{\hrulefill }}
\put(12.5,4.5){\begin{footnotesize}$N_i(g)$\end{footnotesize}}
\put(9.5,2.35){\begin{footnotesize}$u$\end{footnotesize}}
\put(9.4,3.6){\begin{footnotesize}$H$\end{footnotesize}}
\end{picture}
\end{center}\bigskip
\begin{center}
Figure 1
\end{center}\bigskip
We may compute $v_\mathfrak{p}(g(\theta))=e(\mathfrak{p}/p)v(g(\theta_\mathfrak{p}))$ by applying Proposition \ref{vgt} to the type $\mathbf{t}_\mathfrak{p}$. If for some $1\le i\le r+1$, the truncated type $\operatorname{Trunc}_i(\mathbf{t}_\mathfrak{p})$ does not divide $g(x)$, we compute $v(g(\theta_\mathfrak{p}))$ as indicated in item 1 of this proposition. Nevertheless, it may occur that $\operatorname{Trunc}_i(\mathbf{t}_\mathfrak{p})$ divides $g(x)$ for all $1\le i\le r+1$ (for instance, if $g(x)$ is a multiple of $\phi_\mathfrak{p}(x)=\phi_{r+1}(x)$). In this case, we compute an improvement of the Montes approximation $\phi_\mathfrak{p}(x)$ by applying the single-factor lift routine \cite{GNP}; then, we replace the $(r+1)$-th level of $\mathbf{t}_\mathfrak{p}$ by the invariants (\ref{lastlevel}) determined by the new choice of $\phi_{r+1}(x)=\phi_\mathfrak{p}(x)$, and we test again if $\mathbf{t}_\mathfrak{p}=\operatorname{Trunc}_{r+1}(\mathbf{t}_\mathfrak{p})$ divides $g(x)$.
If $\mathbf{t}_\mathfrak{p}$ divides $g(x)$, then $\phi_\mathfrak{p}(x)$ is simultaneously close to a $p$-adic irreducible factor of $f(x)$ and to a $p$-adic irreducible factor of $g(x)$; hence, if $f(x)$ and $g(x)$ do not have a common $p$-adic irreducible factor, after a finite number of steps the renewed type $\mathbf{t}_\mathfrak{p}$ will not divide $g(x)$. On the other hand, if $f(x)$ and $g(x)$ have a common $p$-adic irreducible factor, they must have a common irreducible factor in $\mathbb Z[x]$ too; since $f(x)$ is irreducible, necessarily $f(x)$ divides $g(x)$ and $g(\theta)=0$.
We may summarize the routine to compute $v_\mathfrak{p}(\alpha)$ as follows. \medskip
\noindent{\bf Input: } $\alpha\in K^*$ and a prime ideal $\mathfrak{p}$ determined by a type $\mathbf{t}_\mathfrak{p}$ of order $r+1$.
\nopagebreak
\noindent{\bf Output: } $v_\mathfrak{p}(\alpha)$.\medskip
\nopagebreak
\noindent{\bf1. }Write $\alpha=\frac ab g(\theta)$, with $a,b$ coprime integers and $g(x)\in\mathbb Z[x]$ primitive.
\nopagebreak
\noindent{\bf2. }Compute $\nu=v(a/b)$.
\noindent{\bf3. }if $\psi_0\nmid R_0(g)$ then return $v_\mathfrak{p}(\alpha)=e(\mathfrak{p}/p)\nu$.
\noindent{\bf4. }for $i=1$ to $r+1$ do
\qquad compute $N_i^-(g)$, $R_i(g)$, and the ordinate $H$ of Proposition \ref{vgt}.
\qquad if $\psi_i\nmid R_i(g)$ then return $v_\mathfrak{p}(\alpha)=e(\mathfrak{p}/p)((H/e_1\cdots e_{i-1})+\nu)$.
\noindent\hphantom{\bf4. }end for.
\noindent{\bf5. }while $\psi_{r+1}\mid R_{r+1}(g)$ do
\qquad improve $\phi_\mathfrak{p}$ and compute the new values $\lambda_{r+1}$, $\psi_{r+1}$.
\qquad compute $N_{r+1}^-(g)$, $R_{r+1}(g)$, and the ordinate $H$ of Proposition \ref{vgt}.
\noindent\hphantom{\bf5. }end while.
\noindent{\bf6. }return $v_\mathfrak{p}(\alpha)=e(\mathfrak{p}/p)((H/e_1\cdots e_r)+\nu)$.
\subsection{Factorization of fractional ideals}\label{subsecFactorization}
For any $\alpha\in K^*$, the factorization of the principal ideal generated by $\alpha$ is
$$\alpha\mathbb Z_K=\prod_\mathfrak{p}\p^{v_\mathfrak{p}(\alpha)}.$$
Let $\alpha=(a/b)g(\theta)$, for some positive coprime integers $a,b$ and some primitive polynomial $g(x)\in\mathbb Z[x]$. Then, $v_\mathfrak{p}(\alpha)=0$ for all prime ideals $\mathfrak{p}$ whose underlying prime number $p$ does not divide the product $ab\operatorname{N}_{K/\mathbb Q}(g(\theta))=ab\operatorname{Resultant}(f,g)$.
Also, if $\mathfrak{p}$ is a prime ideal of $K$ and $\mathfrak{a}$, $\mathfrak{b}$ are fractional ideals, we have
$$
v_\mathfrak{p}(\mathfrak{a}+\mathfrak{b})=\min\{v_\mathfrak{p}(\mathfrak{a}),v_\mathfrak{p}(\mathfrak{b})\}.
$$
Thus, the $\mathfrak{p}$-adic valuation of the fractional ideal $\mathfrak{a}$ generated by $\alpha_1,\dots,\alpha_m\in K^*$ is:
$v_\mathfrak{p}(\mathfrak{a})=\min_{1\le i\le m}\{v_\mathfrak{p}(\alpha_i)\}$.
After these considerations, it is straightforward to derive a factorization routine for fractional ideals from the routines that compute the prime ideal decomposition of a prime number and the $\mathfrak{p}$-adic valuation of elements of $K^*$ with respect to prime ideals $\mathfrak{p}$.\medskip
\noindent{\bf Input: } a family $\alpha_1,\dots,\alpha_m\in K^*$ of generators of a fractional ideal $\mathfrak{a}$.
\noindent{\bf Output: } the prime ideal decomposition $\mathfrak{a}=\prod_{\mathfrak{p}}\mathfrak{p}^{a_\mathfrak{p}}$.\medskip
\noindent{\bf1. }For each $1\le i\le m$, write $\alpha_i=(a_i/b_i) g_i(\theta)$, with $a_i,b_i$ coprime integers and $g_i(x)\in\mathbb Z[x]$ primitive; then compute $N_i=\operatorname{N}_{K/\mathbb Q}(g_i(\theta))$.
\noindent{\bf2. }Compute $N=\operatorname{gcd}(a_1N_1,\dots,a_mN_m)$ and $M=\operatorname{lcm}(b_1,\dots,b_m)$.
\noindent{\bf3. }Factorize $N$ and $M$ and store all their prime factors in a list ${\mathcal P}$.
\noindent{\bf4. }For each $p\in{\mathcal P}$ apply Montes algorithm to obtain the prime ideal decomposition of $p$, and
for each $\mathfrak{p}|p$, take $a_\mathfrak{p}=\min_{1\le i\le m}\{v_\mathfrak{p}(\alpha_i)\}$.
\noindent{\bf5. }Return the list of pairs $[\mathfrak{p},a_\mathfrak{p}]$ for all $\mathfrak{p}$ with $a_\mathfrak{p}\ne0$. \medskip
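The integer arithmetic in steps 1--2 reduces to gcd/lcm computations. A minimal sketch, assuming the pairs $(a_iN_i,b_i)$ have already been computed (the function name and input format are ad hoc to this illustration):

```python
from math import gcd
from functools import reduce

def support_bound(generators):
    """Steps 1-2 of the routine above: 'generators' is a list of integer
    pairs (a_i*N_i, b_i), where N_i = N_{K/Q}(g_i(theta)).  Returns
    N = gcd(a_1*N_1, ..., a_m*N_m) and M = lcm(b_1, ..., b_m); every prime
    ideal in the support of the fractional ideal lies over a prime dividing
    N*M, so only N and M need to be factorized in step 3."""
    N = reduce(gcd, (abs(aN) for aN, _ in generators))
    M = reduce(lambda x, y: x * y // gcd(x, y), (abs(b) for _, b in generators), 1)
    return N, M
```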
The bottleneck of this routine is step 3. We obtain a fast facto\-rization routine in the number field $K$ as long as the integers $N$, $M$ attached to the ideal $\mathfrak{a}$ can be factorized easily.
\section{Computation of generators}\label{secGenerators}
In \cite[Sec.4]{GMNalgorithm} we gave an algorithm to compute generators of the prime ideals as certain rational functions of the $\phi$-polynomials. Some inversions in $K$, one for each prime ideal, were needed. These inversions dominated the complexity of the algorithm, and they were a bottleneck that prevented the computation of generators for number fields of large degree.
In sections \ref{subsectPseudo} and \ref{subsectgenerators} we construct a two-element representation of prime ideals, which does not need any inversion in $K$. As a consequence, this construction works extremely fast in practice even for number fields of large degree (see section \ref{secKOM}). In section \ref{subsectwogen} we easily derive two-element representations of fractional ideals.
For any prime ideal $\mathfrak{p}$ of $K$ we keep the notations for $\mathbf{t}_\mathfrak{p}$, $f_\mathfrak{p}(x)$, $\phi_\mathfrak{p}(x)$, $\ff{\mathfrak{p}}$, $\theta_\mathfrak{p}$, $K_\mathfrak{p}$, $\mathbb Z_{K_\mathfrak{p}}$, as introduced in section \ref{secPadic}.
\subsection{Local generators of the prime ideals}\label{subsectPseudo}
\begin{definition}
A pseudo-generator of a prime ideal $\mathfrak{p}$ of $K$ is an integral element $\pi\in\mathbb Z_K$ such that $v_\mathfrak{p}(\pi)=1$.
\end{definition}
Let $p$ be a prime number, and let $\mathfrak{p}=[p;\phi_1,\dots,\phi_r,\phi_\mathfrak{p}]$ be a prime ideal factor of $p\mathbb Z_K$, corresponding to an $f$-complete type $\mathbf{t}_\mathfrak{p}$ with an added $(r+1)$-th level, as indicated in section \ref{subsecApprox}. In this section we show how to compute a pseudo-generator of $\mathfrak{p}$ from the secondary invariants $u_i$,
$\operatorname{Quot}_i$, of $\mathbf{t}_\mathfrak{p}$, for $1\le i\le r$, computed along the flow of Montes algorithm as indicated in (\ref{secondaryinvariants}).
For each level $1\le i\le r$, let us denote:
$$\operatorname{Quot}_i=[Q_{i,0}(x),\dots, Q_{i,e_i-1}(x)].
$$Recall that $Q_{i,0}(x)=1$, and for $0<j<e_i$, the polynomial $Q_{i,j}(x)\in\mathbb Z[x]$ is the $(s_i-j)$-th quotient of the $\phi_i$-adic development of $f(x)$ (cf. (\ref{quotients})), where $s_i$ is the abscissa of the right end point of the side of slope $\lambda_i$ of $N^-_i(f)$. Also, let us define
$$H_{i,0}=0,\qquad H_{i,j}=\dfrac{u_i+j(|\lambda_i|+v_i(\phi_i))}{e_1\cdots e_{i-1}},
\quad \forall\,0<j<e_i.
$$
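Since the $H_{i,j}$ are rational numbers that must later be compared and summed exactly, exact rational arithmetic is advisable. A sketch with ad-hoc argument names (the slope is passed as $|\lambda_i|$, a `Fraction`, and `e_prefix` stands for $e_1\cdots e_{i-1}$):

```python
from fractions import Fraction

def H(u_i, lam_i, vphi_i, e_prefix, j):
    """The ordinate H_{i,j} defined above, computed exactly:
    H_{i,0} = 0 and H_{i,j} = (u_i + j(|lambda_i| + v_i(phi_i))) / (e_1...e_{i-1})
    for 0 < j < e_i.  lam_i is |lambda_i| as a Fraction."""
    if j == 0:
        return Fraction(0)
    return (u_i + j * (lam_i + vphi_i)) / e_prefix
```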
\begin{proposition}\label{quotientvalue}
For each level $1\le i\le r$ and subindex $\,0\le j<e_i$:
\begin{enumerate}
\item $v_\mathfrak{q}(Q_{i,j}(\theta))\ge e(\mathfrak{q}/p)H_{i,j}$, for all prime ideals $\mathfrak{q}\mid p$.
\item $v_\mathfrak{p}(Q_{i,j}(\theta))=e(\mathfrak{p}/p)H_{i,j}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Item 1 is proved in \cite[Prop.10]{GMNbasis}; let us prove item 2. Fix a level $1\le i\le r$ and a subindex $\,0\le j<e_i$. Let
$\ell_i=\op{ord}_{\psi_{i-1}}R_{i-1}(f)$, and let $f(x)=\sum_{s\ge 0}a_s\phi_i^s$ be the $\phi_i$-adic development of $f(x)$.
The Newton polygon of $i$-th order of $f(x)$, $N_i(f)$, is the lower convex envelope of the cloud of points $(s,v_i(a_s\phi_i^s))$, for all $s\ge0$. The principal part $N_i^-(f)$ is equal to $N_i(f)\cap\left([0,\ell_i]\times\mathbb R\right)$; the typical shape of this polygon is illustrated in Figure 2. Let $S_{\lambda_i}$ be the side of slope $\lambda_i$ of this polygon, and $s_i$ the abscissa of the right end point of $S_{\lambda_i}$.
\begin{center}
\setlength{\unitlength}{5.mm}
\begin{picture}(20,11)
\put(-.15,8.85){$\bullet$}\put(1.85,5.85){$\bullet$}
\put(10.85,2.85){$\bullet$}\put(9.85,3.85){$\bullet$}
\put(8.85,3.5){$\circ$}\put(16.85,1.85){$\bullet$}
\put(0,-1.5){\line(0,1){12}}\put(-1,0){\line(1,0){19}}
\put(2,6.03){\line(-2,3){2}}\put(2,6){\line(-2,3){2}}
\put(2,6){\line(3,-1){9}}\put(2,6.03){\line(3,-1){9}}
\put(11,3){\line(6,-1){6}}\put(11,3.03){\line(6,-1){6}}
\multiput(2,6)(-.15,.05){15}{\mbox{\begin{scriptsize}.\end{scriptsize}}}
\multiput(11,3)(-.1,.1){10}{\mbox{\begin{scriptsize}.\end{scriptsize}}}
\put(8.5,6){\begin{footnotesize}$N_i^-(f)$\end{footnotesize}}
\put(5,5.2){\begin{footnotesize}$S_{\lambda_i}$\end{footnotesize}}
\multiput(17,-.6)(0,.25){11}{\vrule height2pt}
\multiput(11,-.1)(0,.25){13}{\vrule height2pt}
\multiput(10,-.6)(0,.25){19}{\vrule height2pt}
\multiput(9,-.1)(0,.25){15}{\vrule height2pt}
\put(10.15,.15){\begin{footnotesize}$t$\end{footnotesize}}
\put(11.2,.2){\begin{footnotesize}$s_i$\end{footnotesize}}
\put(17.2,.2){\begin{footnotesize}$\ell_i$\end{footnotesize}}
\put(8,-.6){\begin{footnotesize}$s_i-e_i$\end{footnotesize}}
\multiput(-.1,2)(.25,0){68}{\hbox to 2pt{\hrulefill }}
\put(-1.6,1.9){\begin{footnotesize}$v_i(f)$\end{footnotesize}}
\put(-.6,6.6){\begin{footnotesize}$H$\end{footnotesize}}
\put(.2,-.6){\begin{footnotesize}$0$\end{footnotesize}}
\put(13,-.6){\vector(-1,0){3}}\put(13,-.6){\vector(1,0){4}}
\put(12,-1.4){\begin{footnotesize}$N_i^-(q_t\phi_i^t)$\end{footnotesize}}
\end{picture}
\end{center}\vskip.4cm
\begin{center}
Figure 2
\end{center}
For any $0\le t\le \ell_i$, let $q_t(x)$ be the $t$-th quotient of the $\phi_i$-adic development (see (\ref{quotients})). We have $f(x)=q_t(x)\phi_i(x)^t+r_t(x)$, with
$$
r_t(x)=\sum_{0\le s< t}a_s(x)\phi_i(x)^s,\qquad q_t(x)\phi_i(x)^t=\sum_{t\le s}a_s(x)\phi_i(x)^s.
$$
Hence, if $t_0$ is the smallest abscissa of a vertex of $N_i(f)$ such that $t_0\ge t$, we have
$$
N_i(q_t\phi_i^t)\cap \left([t_0,\infty)\times \mathbb R\right)=
N_i(f)\cap \left([t_0,\infty)\times \mathbb R\right).$$
Recall that $Q_{i,j}(x)=q_t(x)$, for $t=s_i-j$; for this value of $t$ we have $t_0=s_i$ (see Figure 2). On the other hand, all points in the cloud $(s,v_i(a_s\phi_i^s))$, for $s_i-e_i<s<s_i$, lie strictly above $S_{\lambda_i}$, because the point on $S_{\lambda_i}$ with integer coordinates closest to the right end point has abscissa $s_i-e_i$. Hence, the line of slope $\lambda_i$ that first touches $N_i(q_{s_i-j}\phi_i^{s_i-j})$ from below is the line containing $S_{\lambda_i}$.
This line has ordinate at the origin (see Figure 2):
$$H=v_i\left(a_{s_i}\phi_i^{s_i}\right)+s_i|\lambda_i|=u_i+s_i(v_i(\phi_i)+|\lambda_i|).$$
On the other hand, this line touches $N_i(q_{s_i-j}\phi_i^{s_i-j})$ only at the point
$(s_i,v_i(a_{s_i}\phi_i^{s_i}))$, so that $R_i(q_{s_i-j}\phi_i^{s_i-j})(y)$ is a constant and $\operatorname{Trunc}_i(\mathbf{t}_\mathfrak{p})$ does not divide $q_{s_i-j}\phi_i^{s_i-j}$. Therefore, Proposition \ref{vgt} shows that
$$
v(q_{s_i-j}(\theta_\mathfrak{p})\phi_i(\theta_\mathfrak{p})^{s_i-j})=\dfrac {u_i+s_i(v_i(\phi_i)+|\lambda_i|)}{e_1\cdots e_{i-1}}.
$$
By the Theorem of the polygon \cite[Thm.3.1]{HN},
\begin{equation}\label{thmpolygon}
v(\phi_i(\theta_\mathfrak{p}))=\dfrac{v_i(\phi_i)+|\lambda_i|}{e_1\cdots e_{i-1}},
\end{equation} so that
$$
v(q_{s_i-j}(\theta_\mathfrak{p}))=\dfrac {u_i+j(v_i(\phi_i)+|\lambda_i|)}{e_1\cdots e_{i-1}}=H_{i,j}.
$$
By (\ref{tautology}), $v_\mathfrak{p}(q_{s_i-j}(\theta))=e(\mathfrak{p}/p)H_{i,j}$.
\end{proof}
If $e(\mathfrak{p}/p)=1$, then $\pi_\mathfrak{p}:=p$ is a pseudo-generator of $\mathfrak{p}$. If $e(\mathfrak{p}/p)>1$, we can always find a pseudo-generator of $\mathfrak{p}$ by computing a suitable product of quotients in the lists $\operatorname{Quot}_i$, divided by a suitable power of $p$.
\begin{corollary}\label{products} \mbox{\null}
\begin{enumerate}
\item Let $j_1,\dots,j_r$ be subindices satisfying, $0\le j_i<e_i$, for all $1\le i\le r$. Then, the following element belongs to $\mathbb Z_K$:
$$\pi_{j_1,\dots,j_r}:=Q_{1,j_1}(\theta)\cdots Q_{r,j_r}(\theta)/p^{\lfloor H_{1,j_1}+\cdots+H_{r,j_r}\rfloor}.$$
\item If $e(\mathfrak{p}/p)>1$, there is a unique family $j_1,\dots,j_r$ as above, for which
$v_\mathfrak{p}(\pi_{j_1,\dots,j_r})=1$. This family may be recursively computed as follows:
$$\as{1.2}
\begin{array}{l}
j_r\equiv h_r^{-1} \md{e_r},\\ \operatorname{res}_r:=(j_rh_r-1)/e_r,\\
j_{r-1}\equiv -h_{r-1}^{-1}(u_r+j_rv_r(\phi_r)+\operatorname{res}_r) \md{e_{r-1}},\\ \operatorname{res}_{r-1}:=(j_{r-1}h_{r-1}+u_r+j_rv_r(\phi_r)+\operatorname{res}_r)/e_{r-1},\\
\qquad \cdots\qquad \cdots\\
j_1\equiv -h_1^{-1}(u_2+j_2v_2(\phi_2)+\operatorname{res}_2) \md{e_1}.
\end{array}
$$
\end{enumerate}
\end{corollary}
\begin{proof}
Item 1 is an immediate consequence of item 1 of Proposition
\ref{quotientvalue}.
Also, by Proposition \ref{quotientvalue},
$$
v_\mathfrak{p}(\pi_{j_1,\dots,j_r})=e(\mathfrak{p}/p)\left(H_{1,j_1}+\cdots+H_{r,j_r}-\lfloor H_{1,j_1}+\cdots+H_{r,j_r}\rfloor\right).
$$
Thus, item 2 states that there is a unique family $j_1,\dots,j_r$ such that
$$
H_{1,j_1}+\cdots+H_{r,j_r}\equiv \dfrac 1{e(\mathfrak{p}/p)} \md{\mathbb Z}.
$$
Since $e(\mathfrak{p}/p)=e_1\cdots e_r$ and $|\lambda_i|=h_i/e_i$, this is equivalent to:
\begin{align*}
u_1+j_1v_1(\phi_1)+&\dfrac{j_1h_1+u_2+j_2v_2(\phi_2)}{e_1}+\cdots\\
&\cdots+\dfrac{j_{r-1}h_{r-1}+u_r+j_rv_r(\phi_r)}{e_1\cdots e_{r-1}}+\dfrac{j_rh_r}{e_1\cdots e_r}\equiv \dfrac{1}{e_1\cdots e_r} \md{\mathbb Z}.
\end{align*}
Clearly this congruence has a unique solution $j_1, \dots, j_r$ satisfying $0\le j_i<e_i$, for all $1\le i\le r$, and this solution may be recursively obtained by the procedure described in item 2.
\end{proof}
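The recursion of item 2 is straightforward to implement with modular inverses; the divisions defining the residues $\operatorname{res}_i$ are exact by construction. A sketch with ad-hoc, $0$-based argument lists (`e[i-1]`$=e_i$, `h[i-1]`$=h_i$ with $|\lambda_i|=h_i/e_i$ in lowest terms, `u[i-1]`$=u_i$, `vphi[i-1]`$=v_i(\phi_i)$):

```python
def uniformizer_indices(e, h, u, vphi):
    """Compute the unique indices [j_1, ..., j_r] of item 2 above."""
    r = len(e)
    j = [0] * r
    j[-1] = pow(h[-1], -1, e[-1])           # j_r = h_r^{-1} mod e_r
    res = (j[-1] * h[-1] - 1) // e[-1]      # res_r, an exact division
    for i in range(r - 2, -1, -1):
        # level i+1 in the text; uses the level-(i+2) data u, v(phi)
        t = u[i + 1] + j[i + 1] * vphi[i + 1] + res
        j[i] = (-pow(h[i], -1, e[i]) * t) % e[i]
        res = (j[i] * h[i] + t) // e[i]     # res_i, again exact
    return j
```

For example, with $r=2$, $e_1=2$, $e_2=3$, $h_1=h_2=1$, $u_2=1$, $v_1(\phi_1)=1$, $v_2(\phi_2)=2$ one finds $j_1=j_2=1$, and one checks directly that the corresponding sum of the $H_{i,j_i}$ is congruent to $1/6$ modulo $\mathbb Z$.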
The only property of $\mathbf{t}_\mathfrak{p}$ that we used in Corollary \ref{products} is: $e_1\cdots e_r=e(\mathfrak{p}/p)$. Thus, we do not need to use all levels of $\mathbf{t}_\mathfrak{p}$ to compute a pseudo-generator of $\mathfrak{p}$; in practice we take $r$ to be the minimum level such that
$e_1\cdots e_r=e(\mathfrak{p}/p)$.
\subsection{Generators of the prime ideals}\label{subsectgenerators}
Let $p$ be a prime number, and $\mathfrak{P}$ the set of prime ideals of $K$ lying over $p$. Once we have pseudo-generators $\pi_\mathfrak{p}\in\mathfrak{p}$ of all $\mathfrak{p}\in\mathfrak{P}$, in order to find generators we need only compute a family of integral elements, $\{b_\mathfrak{p}\in \mathbb Z_K\}_{\mathfrak{p}\in\mathfrak{P}}$, satisfying:
\begin{equation}\label{bp}
v_\mathfrak{p}(b_\mathfrak{p})=0,\ \forall\,\mathfrak{p}\in\mathfrak{P},\qquad v_\mathfrak{q}(b_\mathfrak{p})>1,\ \forall\,\mathfrak{q},\mathfrak{p}\in\mathfrak{P},\ \mathfrak{q}\ne\mathfrak{p}.
\end{equation}
Then, for each $\mathfrak{p}\in\mathfrak{P}$, the integral element:
$$
\alpha_\mathfrak{p}:=b_\mathfrak{p}\pi_\mathfrak{p}+\sum_{\mathfrak{q}\in\mathfrak{P}, \mathfrak{q}\ne\mathfrak{p}} b_\mathfrak{q}\in\mathbb Z_K
$$
clearly satisfies: $v_\mathfrak{p}(\alpha_\mathfrak{p})=1$, $v_\mathfrak{q}(\alpha_\mathfrak{p})=0$, for all $\mathfrak{q}\ne \mathfrak{p}$. Therefore, $\mathfrak{p}$ is the ideal generated by $p$ and $\alpha_\mathfrak{p}$.
The rest of this section is devoted to the construction of these multipliers $\{b_\mathfrak{p}\}$.
Let $(\mathbf{t}_\mathfrak{p})_{\mathfrak{p}\in\mathfrak{P}}$ be the parameterization of the set $\mathfrak{P}$ by a family of $f$-complete types obtained by an application of Montes algorithm. As usual, we suppose that each $\mathbf{t}_\mathfrak{p}$ has been conveniently enlarged with an $(r_\mathfrak{p}+1)$-th level, as indicated in section \ref{subsecApprox}. From now on we provide the invariants of $\mathbf{t}_\mathfrak{p}$ with a subscript $\mathfrak{p}$ to distinguish the prime ideal they belong to: $r_\mathfrak{p}, \phi_{i,\mathfrak{p}}, m_{i,\mathfrak{p}}, \lambda_{i,\mathfrak{p}},$ etc.
The integral elements $b_\mathfrak{p}$ will be constructed as suitable products of $\phi$-polynomials divided by suitable powers of $p$. The crucial ingredient is Proposition \ref{vpq}, which computes $v_\mathfrak{p}(\phi_{i,\mathfrak{q}}(\theta))$ for all $\mathfrak{p}\ne\mathfrak{q}$ in $\mathfrak{P}$, and all $1\le i\le r_\mathfrak{q}+1$.
\begin{definition}
For any pair $\mathfrak{p},\mathfrak{q}\in\mathfrak{P}$, we define the \emph{index of coincidence} between the types $\mathbf{t}_\mathfrak{p}$
and $\mathbf{t}_\mathfrak{q}$ as:
$$
i(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})=\left\{\begin{array}{ll}
0,&\mbox{if }\psi_{0,\mathfrak{p}}\ne\psi_{0,\mathfrak{q}},\\
\min\left\{j\in\mathbb Z_{>0}\,\,|\,\, (\phi_{j,\mathfrak{p}},\lambda_{j,\mathfrak{p}},\psi_{j,\mathfrak{p}})\ne
(\phi_{j,\mathfrak{q}},\lambda_{j,\mathfrak{q}},\psi_{j,\mathfrak{q}})\right\},&\mbox{if }\psi_{0,\mathfrak{p}}=\psi_{0,\mathfrak{q}}.
\end{array}
\right.
$$
Alternatively, $i(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})$ is the least subindex $j$ for which $\operatorname{Trunc}_j(\mathbf{t}_\mathfrak{p})\ne \operatorname{Trunc}_j(\mathbf{t}_\mathfrak{q})$.
\end{definition}
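Computationally, the index of coincidence is a plain prefix comparison of the level data of the two types. A sketch, assuming a type is encoded as a hypothetical list with `t[0]` $=\psi_0$ and `t[j]` $=(\phi_j,\lambda_j,\psi_j)$ for $j\ge1$:

```python
def index_of_coincidence(tp, tq):
    """i(t_p, t_q): 0 if psi_0 differs, otherwise the least level j at which
    the triples (phi_j, lambda_j, psi_j) of the two types differ.  For types
    attached to distinct prime ideals the loop stops before either list is
    exhausted (Lemma 'lessthanr' in the text)."""
    if tp[0] != tq[0]:
        return 0
    j = 1
    while j < min(len(tp), len(tq)) and tp[j] == tq[j]:
        j += 1
    return j
```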
\begin{remark}\label{coincide}\mbox{\null}
By definition,
$$\phi_{i,\mathfrak{p}}=\phi_{i,\mathfrak{q}},\quad \lambda_{i,\mathfrak{p}}=\lambda_{i,\mathfrak{q}},\quad \psi_{i,\mathfrak{p}}=\psi_{i,\mathfrak{q}}, \quad\forall\,i< i(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q}).$$ Hence, by the definition of the $p$-adic valuations $v_{i,\mathfrak{p}}$, $v_{i,\mathfrak{q}}$, and by \cite[Thm. 2.11]{HN}, we get:
$$v_{i,\mathfrak{p}}=v_{i,\mathfrak{q}}, \quad m_{i,\mathfrak{p}}=m_{i,\mathfrak{q}}, \quad
v_{i,\mathfrak{p}}(\phi_{i,\mathfrak{p}})=v_{i,\mathfrak{q}}(\phi_{i,\mathfrak{q}}),\quad \forall\,i\le i(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q}).
$$
\end{remark}
\begin{lemma}\label{lessthanr}
If $\mathfrak{p},\mathfrak{q}\in\mathfrak{P}$, and $\mathfrak{p}\ne\mathfrak{q}$, then $i(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})\le\min\{r_\mathfrak{p},r_\mathfrak{q}\}$.
\end{lemma}
\begin{proof}
Suppose $r_\mathfrak{p}\le r_\mathfrak{q}$ and $i(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})=r_\mathfrak{p}+1$. Then, $\operatorname{Trunc}_{r_\mathfrak{p}}(\mathbf{t}_\mathfrak{p})=\operatorname{Trunc}_{r_\mathfrak{p}}(\mathbf{t}_\mathfrak{q})$. Since the type $\operatorname{Trunc}_{r_\mathfrak{p}}(\mathbf{t}_\mathfrak{p})$ is $f$-complete, it singles out a unique $p$-adic irreducible factor of $f(x)$ (item 2 of Definition \ref{defs}). Hence, $\operatorname{Trunc}_{r_\mathfrak{p}}(\mathbf{t}_\mathfrak{q})$ is also $f$-complete and it singles out the same $p$-adic irreducible factor of $f(x)$. This implies that $\mathfrak{p}=\mathfrak{q}$.
\end{proof}
By (\ref{tautology}) and (\ref{thmpolygon}), for all $1\le i\le r_\mathfrak{p}+1$ we have
\begin{equation}\label{p=q}
v_\mathfrak{p}(\phi_{i,\mathfrak{p}}(\theta))=e(\mathfrak{p}/p)\,\dfrac{v_{i,\mathfrak{p}}(\phi_{i,\mathfrak{p}})+|\lambda_{i,\mathfrak{p}}|}{e_{1,\mathfrak{p}}\cdots e_{i-1,\mathfrak{p}}}.
\end{equation}
In order to compute the values of $v_\mathfrak{p}(\phi_{i,\mathfrak{q}}(\theta))$, for $\mathfrak{p}\ne\mathfrak{q}$, we need still another definition.
\begin{definition}\label{gcphi}
Let $\mathfrak{p},\mathfrak{q}\in\mathfrak{P}$, $\mathfrak{q}\ne\mathfrak{p}$, and $j=i(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})$. Let $s_\mathfrak{p}=\#\operatorname{Refinements}_{j,\mathfrak{p}}$, and consider the list $\operatorname{Ref}_\mathfrak{p}$ obtained by extending the list $\operatorname{Refinements}_{j,\mathfrak{p}}$ by adding the pair $[\phi_{j,\mathfrak{p}},\lambda_{j,\mathfrak{p}}]$ at the last position:
$$
\operatorname{Ref}_\mathfrak{p}=\left[\left[\phi_{j,\mathfrak{p}}^{(1)},\lambda_{j,\mathfrak{p}}^{(1)}\right], \cdots ,\left[\phi_{j,\mathfrak{p}}^{(s_\mathfrak{p})},\lambda_{j,\mathfrak{p}}^{(s_\mathfrak{p})}\right],\left[\phi_{j,\mathfrak{p}}^{(s_\mathfrak{p}+1)},\lambda_{j,\mathfrak{p}}^{(s_\mathfrak{p}+1)}\right]:=\left[\phi_{j,\mathfrak{p}},\lambda_{j,\mathfrak{p}}\right]\right].
$$
Let $\operatorname{Ref}_\mathfrak{q}$ be the analogous list for the prime ideal $\mathfrak{q}$.
We define the \emph{greatest common $\phi$-polynomial} of the pair $(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})$ to be the most advanced common $\phi$-polynomial in the two lists $\operatorname{Ref}_\mathfrak{p}$, $\operatorname{Ref}_\mathfrak{q}$. We denote it by:
$$
\phi(\mathfrak{p},\mathfrak{q}):=\phi_{j,\mathfrak{p}}^{(k)}=\phi_{j,\mathfrak{q}}^{(k)},
$$
for the maximum index $k$ such that $\phi_{j,\mathfrak{p}}^{(k)}=\phi_{j,\mathfrak{q}}^{(k)}$.
We define the \emph{hidden slopes} of the pair $(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})$ to be: $\lambda_\mathfrak{p}^\mathfrak{q}:=\lambda_{j,\mathfrak{p}}^{(k)}$, $\lambda_\mathfrak{q}^\mathfrak{p}:=\lambda_{j,\mathfrak{q}}^{(k)}$.
\end{definition}
\noindent{\bf Remarks. }\medskip
(1) \ By the concrete way the processes of branching, enlarging and/or refining were defined, this polynomial $\phi(\mathfrak{p},\mathfrak{q})$ always exists. In fact, let us show that we must have $\phi_{j,\mathfrak{p}}^{(1)}=\phi_{j,\mathfrak{q}}^{(1)}$.
Since $\operatorname{Trunc}_{j-1}(\mathbf{t}_\mathfrak{p})=\operatorname{Trunc}_{j-1}(\mathbf{t}_\mathfrak{q})$, this type had some original representative (say) $\phi_j$. By considering $N_{\phi_j,v_j}^-(f)$ and the irreducible factors of all residual polynomials of all sides, we had different branches $(\lambda,\psi)$ to analyze; if there was only one branch, the algorithm necessarily performed a refinement step, because otherwise we would have $i(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})>j$. After possibly a finite number of these unibranch refinement steps (which were not stored in the list $\operatorname{Refinements}_j$), we considered some representative, let us call it $\phi_j$ again, leading to several branches. One of these branches later led to the type $\mathbf{t}_\mathfrak{p}$ and one of them (maybe still the same) to the type $\mathbf{t}_\mathfrak{q}$. If the $\mathfrak{p}$-branch underwent refinement, the list $\operatorname{Refinements}_{j,\mathfrak{p}}$ had $\phi_{j,\mathfrak{p}}^{(1)}=\phi_j$ as its initial $\phi$-polynomial; if the $\mathfrak{p}$-branch was $f$-complete or had to be enlarged, then the list $\operatorname{Refinements}_{j,\mathfrak{p}}$ remained empty and we had $\phi_{j,\mathfrak{p}}=\phi_j$. In any case, $\phi_j$ is the first $\phi$-polynomial of the list $\operatorname{Ref}_\mathfrak{p}$.\medskip
(2) \ All $\phi_{j,\mathfrak{p}}^{(\ell)}$, $\phi_{j,\mathfrak{q}}^{(\ell)}$ are representatives of $\mathbf{t}_{j-1}:=\operatorname{Trunc}_{j-1}(\mathbf{t}_\mathfrak{p})=\operatorname{Trunc}_{j-1}(\mathbf{t}_\mathfrak{q})$; in particular, all these polynomials have degree $m_j$. With the obvious meaning for $\psi_{j,\mathfrak{p}}^{(\ell)}$, we have necessarily:
$$
\left[\phi_{j,\mathfrak{p}}^{(k)},\lambda_{j,\mathfrak{p}}^{(k)},\psi_{j,\mathfrak{p}}^{(k)}\right]\ne
\left[\phi_{j,\mathfrak{q}}^{(k)},\lambda_{j,\mathfrak{q}}^{(k)},\psi_{j,\mathfrak{q}}^{(k)}\right],\quad
\left[\phi_{j,\mathfrak{p}}^{(\ell)},\lambda_{j,\mathfrak{p}}^{(\ell)},\psi_{j,\mathfrak{p}}^{(\ell)}\right]=
\left[\phi_{j,\mathfrak{q}}^{(\ell)},\lambda_{j,\mathfrak{q}}^{(\ell)},\psi_{j,\mathfrak{q}}^{(\ell)}\right],
$$
for all $1\le \ell<k$. Thus, $\phi(\mathfrak{p},\mathfrak{q})$ is the first representative of $\mathbf{t}_{j-1}$ for which the branches of $\mathbf{t}_\mathfrak{p}$ and $\mathbf{t}_\mathfrak{q}$ are different.\medskip
(3) \ Caution: we may have $\operatorname{Ref}_\mathfrak{p}=\operatorname{Ref}_\mathfrak{q}$. In this case $\phi(\mathfrak{p},\mathfrak{q})=\phi_{j,\mathfrak{p}}= \phi_{j,\mathfrak{q}}$ and $\lambda_\mathfrak{p}^\mathfrak{q}=\lambda_{j,\mathfrak{p}}=\lambda_{j,\mathfrak{q}}=\lambda_\mathfrak{q}^\mathfrak{p}$; the branches of $\mathbf{t}_\mathfrak{p}$ and $\mathbf{t}_\mathfrak{q}$ are distinguished by $\psi_{j,\mathfrak{p}}\ne\psi_{j,\mathfrak{q}}$. \medskip
\begin{proposition}\label{vpq}
Let $\mathfrak{p},\mathfrak{q}\in\mathfrak{P}$, $\mathfrak{p}\ne \mathfrak{q}$, and $j=i(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})$. Let $\phi(\mathfrak{p},\mathfrak{q})$ be the greatest common $\phi$-polynomial of the pair $(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})$ and $\lambda_\mathfrak{p}^\mathfrak{q}$, $\lambda_\mathfrak{q}^\mathfrak{p}$ the hidden slopes. For any $1\le i\le r_\mathfrak{q}+1$,
$$
\as{2.2}
\dfrac{v_\mathfrak{p}(\phi_{i,\mathfrak{q}}(\theta))}{e(\mathfrak{p}/p)}=\left\{
\begin{array}{ll}
0,&\mbox{if }j=0,\\
\dfrac{v_i(\phi_i)+|\lambda_i|}{e_1\cdots e_{i-1}}
,&\mbox{if }i<j,\\
\dfrac{v_j(\phi_j)+|\lambda_\mathfrak{p}^\mathfrak{q}|}{e_1\cdots e_{j-1}},&\mbox{if $i=j$ and }\, \phi_{j,\mathfrak{q}}=\phi(\mathfrak{p},\mathfrak{q}),\\
\dfrac{v_j(\phi_j)+\min\{|\lambda_\mathfrak{p}^\mathfrak{q}|,|\lambda_\mathfrak{q}^\mathfrak{p}|\}}{e_1\cdots e_{j-1}},&\mbox{if $i=j$ and }\phi_{j,\mathfrak{q}}\ne \phi(\mathfrak{p},\mathfrak{q}),\\
\dfrac{m_{i,\mathfrak{q}}}{m_j}\cdot\dfrac{v_j(\phi_j)+\min\{|\lambda_\mathfrak{p}^\mathfrak{q}|,|\lambda_\mathfrak{q}^\mathfrak{p}|\}}{e_1\cdots e_{j-1}},&\mbox{if }i>j>0.
\end {array}
\right.
$$
In these formulas we omit the subscripts $\mathfrak{p}$, $\mathfrak{q}$ when the invariants of the two types coincide (cf. Remark \ref{coincide}).
\end{proposition}
\begin{proof}
The case $j=0$ was seen in (\ref{v=0}). The cases $i<j$ and $i=j$, $\phi_{j,\mathfrak{q}}=\phi(\mathfrak{p},\mathfrak{q})$, are a consequence of (\ref{p=q}).
Suppose $i>j>0$ and $\phi_{j,\mathfrak{p}}=\phi_{j,\mathfrak{q}}$; we have then, $\phi(\mathfrak{p},\mathfrak{q})=\phi_{j,\mathfrak{p}}=\phi_{j,\mathfrak{q}}$ and
$\lambda_\mathfrak{p}^\mathfrak{q}=\lambda_{j,\mathfrak{p}}$, $\lambda_\mathfrak{q}^\mathfrak{p}=\lambda_{j,\mathfrak{q}}$. We compute $v_\mathfrak{p}(\phi_{i,\mathfrak{q}}(\theta))$ by applying Proposition \ref{vgt} to the polynomial
$g(x)=\phi_{i,\mathfrak{q}}(x)$ and the type $\mathbf{t}_\mathfrak{p}$. Since $v_{j,\mathfrak{p}}=v_{j,\mathfrak{q}}$ and $\phi_{j,\mathfrak{p}}=\phi_{j,\mathfrak{q}}$, we have $N_{j,\mathfrak{p}}(\phi_{i,\mathfrak{q}})=N_{j,\mathfrak{q}}(\phi_{i,\mathfrak{q}})$.
On the other hand, we saw in the proof of Lemma \ref{vjphii} that $N_{j,\mathfrak{q}}(\phi_{i,\mathfrak{q}})$ is one-sided of slope $\lambda_{j,\mathfrak{q}}$. Figure 3 shows the three possibilities for the line $L_{\lambda_{j,\mathfrak{p}}}$ of slope $\lambda_{j,\mathfrak{p}}$ that first touches $N_{j,\mathfrak{p}}(\phi_{i,\mathfrak{q}})$ from below.
\begin{center}
\setlength{\unitlength}{5.mm}
\begin{picture}(22,5.5)
\put(.8,3.8){$\bullet$}\put(3.8,.8){$\bullet$}
\put(0,0){\line(1,0){6}}\put(1,-1){\line(0,1){6}}
\put(4,1){\line(-1,1){3}}\put(4.02,1){\line(-1,1){3}}
\multiput(.45,4.85)(.05,-.1){30}{\mbox{\begin{scriptsize}.\end{scriptsize}}}
\multiput(4,-.1)(0,.25){5}{\vrule height1pt}
\multiput(.9,1)(.25,0){12}{\hbox to 2pt{\hrulefill }}
\put(.55,-.6){\begin{footnotesize}$0$\end{footnotesize}}
\put(-1.3,.9){\begin{footnotesize}$v_j(\phi_{i,\mathfrak{q}})$\end{footnotesize}}
\put(3,-.6){\begin{footnotesize}$m_{i,\mathfrak{q}}/m_j$\end{footnotesize}}
\put(2.6,2.6){\begin{footnotesize}$\lambda_{j,\mathfrak{q}}$\end{footnotesize}}
\put(1.65,1.6){\begin{footnotesize}$L_{\lambda_{j,\mathfrak{p}}}$\end{footnotesize}}
\put(.1,3.8){\begin{footnotesize}$H$\end{footnotesize}}
\put(1.2,-1.6){$\lambda_{j,\mathfrak{p}}<\lambda_{j,\mathfrak{q}}$}
\put(11.8,.8){$\bullet$}\put(8.85,3.8){$\bullet$}
\put(8,0){\line(1,0){6}}\put(9,-1){\line(0,1){6}}
\put(12,1){\line(-1,1){3}}\put(12.02,1){\line(-1,1){3}}
\multiput(9,3.9)(-.1,.1){7}{\mbox{\begin{scriptsize}.\end{scriptsize}}}
\multiput(12,.9)(.1,-.1){7}{\mbox{\begin{scriptsize}.\end{scriptsize}}}
\multiput(12,-.1)(0,.25){5}{\vrule height1pt}
\multiput(8.9,1)(.25,0){12}{\hbox to 2pt{\hrulefill }}
\put(8.55,-.6){\begin{footnotesize}$0$\end{footnotesize}}
\put(6.7,.9){\begin{footnotesize}$v_j(\phi_{i,\mathfrak{q}})$\end{footnotesize}}
\put(11,-.6){\begin{footnotesize}$m_{i,\mathfrak{q}}/m_j$\end{footnotesize}}
\put(10.6,2.6){\begin{footnotesize}$\lambda_{j,\mathfrak{q}}$\end{footnotesize}}
\put(12.8,.5){\begin{footnotesize}$L_{\lambda_{j,\mathfrak{p}}}$\end{footnotesize}}
\put(8.2,3.8){\begin{footnotesize}$H$\end{footnotesize}}
\put(9.2,-1.6){$\lambda_{j,\mathfrak{p}}=\lambda_{j,\mathfrak{q}}$}
\put(19.8,.8){$\bullet$}\put(16.85,3.8){$\bullet$}
\put(16,0){\line(1,0){6}}\put(17,-1){\line(0,1){6}}
\put(20,1){\line(-1,1){3}}\put(20.02,1){\line(-1,1){3}}
\multiput(21,.4)(-.1,.05){47}{\mbox{\begin{scriptsize}.\end{scriptsize}}}
\multiput(20,-.1)(0,.25){5}{\vrule height1pt}
\multiput(16.9,1)(.25,0){12}{\hbox to 2pt{\hrulefill }}
\put(16.55,-.6){\begin{footnotesize}$0$\end{footnotesize}}
\put(14.7,.9){\begin{footnotesize}$v_j(\phi_{i,\mathfrak{q}})$\end{footnotesize}}
\put(19,-.6){\begin{footnotesize}$m_{i,\mathfrak{q}}/m_j$\end{footnotesize}}
\put(18.6,2.6){\begin{footnotesize}$\lambda_{j,\mathfrak{q}}$\end{footnotesize}}
\put(20.6,.9){\begin{footnotesize}$L_{\lambda_{j,\mathfrak{p}}}$\end{footnotesize}}
\put(16.2,2.2){\begin{footnotesize}$H$\end{footnotesize}}
\put(16.9,2.5){\line(1,0){.2}}
\put(17.2,-1.6){$\lambda_{j,\mathfrak{p}}>\lambda_{j,\mathfrak{q}}$}
\end{picture}
\end{center}\bigskip
\begin{center}
Figure 3
\end{center}
A glance at Figure 3 shows that
$$
H=v_j(\phi_{i,\mathfrak{q}})+\dfrac{m_{i,\mathfrak{q}}}{m_j}\,\min\{|\lambda_{j,\mathfrak{p}}|,|\lambda_{j,\mathfrak{q}}|\}=
\dfrac{m_{i,\mathfrak{q}}}{m_j}\,(v_j(\phi_j)+\min\{|\lambda_{j,\mathfrak{p}}|,|\lambda_{j,\mathfrak{q}}|\}),
$$
the last equality by Lemma \ref{vjphii}. Now, if $\lambda_{j,\mathfrak{p}}\ne \lambda_{j,\mathfrak{q}}$, the line $L_{\lambda_{j,\mathfrak{p}}}$ touches the polygon only at one point, and the residual polynomial $R_{j,\mathfrak{p}}(\phi_{i,\mathfrak{q}})(y)$ is a cons\-tant. If $\lambda_{j,\mathfrak{p}}=\lambda_{j,\mathfrak{q}}$, then
$L_{\lambda_{j,\mathfrak{p}}}$ contains $N_{j,\mathfrak{p}}(\phi_{i,\mathfrak{q}})$ and $R_{j,\mathfrak{p}}(\phi_{i,\mathfrak{q}})=
R_{j,\mathfrak{q}}(\phi_{i,\mathfrak{q}})$ is a power of $\psi_{j,\mathfrak{q}}$, up to a multiplicative constant. In this case, necessarily $\psi_{j,\mathfrak{q}}\ne\psi_{j,\mathfrak{p}}$, by the definition of $j=i(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})$. Therefore, $\operatorname{Trunc}_j(\mathbf{t}_\mathfrak{p})$ never divides $\phi_{i,\mathfrak{q}}$, and Proposition \ref{vgt} shows that $v(\phi_{i,\mathfrak{q}}(\theta_\mathfrak{p}))=H/(e_1\cdots e_{j-1})$. By (\ref{tautology}), we get the desired expression for $v_\mathfrak{p}(\phi_{i,\mathfrak{q}}(\theta))$.
Suppose now $i>j>0$, $\phi_{j,\mathfrak{p}}\ne\phi_{j,\mathfrak{q}}$, or $i=j$, $\phi_{j,\mathfrak{q}}\ne\phi(\mathfrak{p},\mathfrak{q})$.
Consider a new type $\tilde{\mathbf{t}}_\mathfrak{p}$, constructed as follows: if $\phi_{j,\mathfrak{p}}=\phi(\mathfrak{p},\mathfrak{q})$, we take $\tilde{\mathbf{t}}_\mathfrak{p}=\mathbf{t}_\mathfrak{p}$, and if $\phi_{j,\mathfrak{p}}\ne\phi(\mathfrak{p},\mathfrak{q})$, we take
\begin{multline*}
\tilde{\mathbf{t}}_\mathfrak{p}=(\phi_1;\lambda_1,\phi_2;\cdots,\phi_{j-1};\lambda_{j-1},\phi(\mathfrak{p},\mathfrak{q});\lambda^\mathfrak{q}_\mathfrak{p},\phi_{j,\mathfrak{p}};\\\lambda_{j,\mathfrak{p}}-\lambda^\mathfrak{q}_\mathfrak{p},\phi_{j+1,\mathfrak{p}};\lambda_{j+1,\mathfrak{p}},\cdots,\phi_{r_\mathfrak{p}+1,\mathfrak{p}};\lambda_{r_\mathfrak{p}+1,\mathfrak{p}},\psi_{r_\mathfrak{p}+1,\mathfrak{p}}).
\end{multline*}
By \cite[Cor.3.6]{GMNalgorithm}, $\tilde{\mathbf{t}}_\mathfrak{p}$ is a type, and it is also $f$-complete. If $\phi_{j,\mathfrak{p}}\ne\phi(\mathfrak{p},\mathfrak{q})$ then $\tilde{\mathbf{t}}_\mathfrak{p}$ is not optimal because $\deg\phi(\mathfrak{p},\mathfrak{q})=\deg \phi_{j,\mathfrak{p}}$, but optimality is not necessary to apply Proposition \ref{vgt}.
We consider an analogous construction for $\tilde{\mathbf{t}}_\mathfrak{q}$. The new types satisfy $i(\tilde{\mathbf{t}}_\mathfrak{p},\tilde{\mathbf{t}}_\mathfrak{q})=j$ and they have the same $j$-th $\phi$-polynomial; finally, if $i=j$, the polynomial $\phi_{i,\mathfrak{q}}$ is the $(j+1)$-th $\phi$-polynomial of $\tilde{\mathbf{t}}_\mathfrak{q}$.
Therefore, the value of $v_\mathfrak{p}(\phi_{i,\mathfrak{q}}(\theta))$ follows by the same arguments as above.
\end{proof}
We are ready to construct the family $\{b_\mathfrak{p}\}_{\mathfrak{p}\in\mathfrak{P}}$. Consider the following equivalence relation in the set $\mathfrak{P}$:
$$
\mathfrak{p}\sim\mathfrak{q}\,\Longleftrightarrow\, \psi_{0,\mathfrak{p}}=\psi_{0,\mathfrak{q}},
$$and denote by $[\mathfrak{p}]$ the class of any $\mathfrak{p}\in\mathfrak{P}$.
For each class $[\mathfrak{p}]$, let $\phi_{1,[\mathfrak{p}]}(x)\in\mathbb Z[x]$ be the first $\phi$-polynomial in any list $\operatorname{Ref}_\mathfrak{q}$ for some $\mathfrak{q}\in[\mathfrak{p}]$; we saw in the first remark following Definition \ref{gcphi} that all lists $\operatorname{Ref}_\mathfrak{q}$, for $\mathfrak{q}\in[\mathfrak{p}]$, have the same initial $\phi$-polynomial. Now, for each $\mathfrak{q}\in[\mathfrak{p}]$, denote by $\lambda_{1,\mathfrak{q}}^0$
the first slope in the list $\operatorname{Ref}_\mathfrak{q}$. In other words, for any $\mathfrak{q}\in[\mathfrak{p}]$, we have
$$\as{1.6}
(\phi_{1,[\mathfrak{p}]},\lambda_{1,\mathfrak{q}}^0)=
\left\{
\begin{array}{ll}
(\phi_{1,\mathfrak{q}},\lambda_{1,\mathfrak{q}}),&\mbox { if }\operatorname{Refinements}_{1,\mathfrak{q}}\mbox{ is empty},\\
\left(\phi_{1,\mathfrak{q}}^{(1)},\lambda_{1,\mathfrak{q}}^{(1)}\right),&\mbox { if }\operatorname{Refinements}_{1,\mathfrak{q}}\mbox{ is not empty}.
\end{array}
\right.
$$
By (\ref{v=0}) and (\ref{thmpolygon}),
\begin{equation}\label{vqphip}\as{1.2}
v_\mathfrak{q}(\phi_{1,[\mathfrak{p}]}(\theta))=\left\{
\begin{array}{ll}
0,&\mbox { if }\mathfrak{q}\not\in[\mathfrak{p}],\\
e(\mathfrak{q}/p)|\lambda^0_{1,\mathfrak{q}}|,&\mbox { if }\mathfrak{q}\in[\mathfrak{p}].
\end{array}
\right.
\end{equation}
Consider now, for each class $[\mathfrak{p}]$:
$$
B_{[\mathfrak{p}]}(x):=\prod_{[\mathfrak{q}]\ne[\mathfrak{p}]}\phi_{1,[\mathfrak{q}]}(x).
$$
Fix a prime ideal $\mathfrak{p}\in\mathfrak{P}$. If $\#[\mathfrak{p}]=1$, the element $b_\mathfrak{p}=(B_{[\mathfrak{p}]}(\theta))^2$ satisfies (\ref{bp})
already. Suppose now $\#[\mathfrak{p}]>1$. For all $\mathfrak{l}\in[\mathfrak{p}]$, $\mathfrak{l}\ne\mathfrak{p}$, let $\phi_{\mathfrak{l}}=\phi_{r_{\mathfrak{l}}+1,\mathfrak{l}}$ be the Montes approximation to $f_{\mathfrak{l}}(x)$ contained in $\mathbf{t}_{\mathfrak{l}}$; write the rational number $v_\mathfrak{p}(\phi_{\mathfrak{l}}(\theta))/e(\mathfrak{p}/p)$ (computed in Proposition \ref{vpq}) in lowest terms:
$$\dfrac{v_\mathfrak{p}(\phi_{\mathfrak{l}}(\theta))}{e(\mathfrak{p}/p)}=\dfrac{n_{\mathfrak{l}}}{d_{\mathfrak{l}}},\quad \operatorname{gcd}(n_{\mathfrak{l}},d_{\mathfrak{l}})=1.
$$
We look for an integral element of the form:
\begin{equation}\label{finalbp}
b_\mathfrak{p}=\dfrac{(B_{[\mathfrak{p}]}(\theta))^m\prod_{\mathfrak{l}\in[\mathfrak{p}],\,\mathfrak{l}\ne\mathfrak{p}}\phi_{\mathfrak{l}}(\theta)^{d_{\mathfrak{l}}}}{p^N},
\end{equation}
where the exponents $N,m$ are given by
$$N=\sum_{\mathfrak{l}\in[\mathfrak{p}],\,\mathfrak{l}\ne\mathfrak{p}}n_{\mathfrak{l}},\qquad m=\left\lceil\max_{\mathfrak{q}\in\mathfrak{P},\,\mathfrak{q}\not\in[\mathfrak{p}]}\left\{\dfrac{Ne(\mathfrak{q}/p)+2}{e(\mathfrak{q}/p)|\lambda^0_{1,\mathfrak{q}}|}\right\}\right\rceil.$$
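The exponents $N$ and $m$ above involve only exact rational data already at hand; a sketch of their computation (function and argument names are ad hoc: `numerators` lists the $n_{\mathfrak{l}}$, and `outside_data` lists pairs $(e(\mathfrak{q}/p),|\lambda^0_{1,\mathfrak{q}}|)$ with the slope as a `Fraction`):

```python
from fractions import Fraction
from math import ceil

def multiplier_exponents(numerators, outside_data):
    """Compute N = sum of the n_l over the primes l in [p], l != p, and
    m = ceil(max over q outside [p] of (N*e(q/p) + 2)/(e(q/p)*|lambda^0_{1,q}|)),
    as in the displayed formula above."""
    N = sum(numerators)
    m = max(ceil(Fraction(N * e + 2) / (e * lam)) for e, lam in outside_data)
    return N, m
```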
Take $\mathfrak{q}\not\in[\mathfrak{p}]$. By (\ref{v=0}) and (\ref{vqphip}), we have $$v_\mathfrak{q}\left(\prod_{\mathfrak{l}\in[\mathfrak{p}],\,\mathfrak{l}\ne\mathfrak{p}}\phi_{\mathfrak{l}}(\theta)^{d_{\mathfrak{l}}}\right)=0,\qquad
v_\mathfrak{q}(B_{[\mathfrak{p}]}(\theta))=v_\mathfrak{q}(\phi_{1,[\mathfrak{q}]}(\theta))=e(\mathfrak{q}/p)|\lambda^0_{1,\mathfrak{q}}|.$$
Hence, $v_\mathfrak{q}(p^Nb_\mathfrak{p})=m\,e(\mathfrak{q}/p)|\lambda^0_{1,\mathfrak{q}}|\ge Ne(\mathfrak{q}/p)+2$,
so that $v_\mathfrak{q}(b_\mathfrak{p})>1$, as desired.
For the prime $\mathfrak{p}$ itself, we have $v_\mathfrak{p}(B_{[\mathfrak{p}]}(\theta))=0$ and, by construction,
$$
v_\mathfrak{p}(b_\mathfrak{p})=\left(\sum_{\mathfrak{l}\in[\mathfrak{p}],\,\mathfrak{l}\ne\mathfrak{p}}d_{\mathfrak{l}}v_\mathfrak{p}(\phi_{\mathfrak{l}}(\theta))\right)-Ne(\mathfrak{p}/p)=0.
$$
Finally, for a prime $\mathfrak{l}\in[\mathfrak{p}]$, $\mathfrak{l}\ne\mathfrak{p}$, we have $v_{\mathfrak{l}}(B_{[\mathfrak{p}]}(\theta))=0$ and
$$
v_{\mathfrak{l}}(b_\mathfrak{p})=V_1+V_2-Ne(\mathfrak{l}/p),\quad V_1:=\sum_{\mathfrak{l}'\in[\mathfrak{p}],\,\mathfrak{l}'\ne\mathfrak{p},\mathfrak{l}}v_{\mathfrak{l}}(\phi_{\mathfrak{l}'}(\theta)^{d_{\mathfrak{l}'}}),\quad V_2:=v_{\mathfrak{l}}(\phi_{\mathfrak{l}}(\theta)^{d_{\mathfrak{l}}}).
$$
Lemma \ref{lessthanr} and Proposition \ref{vpq} show that $V_1$ (like all the invariants used so far) depends only on the numerical invariants of the types $\mathbf{t}_{\mathfrak{l}}, \mathbf{t}_{\mathfrak{l}'}$, of level $1\le i\le i(\mathbf{t}_{\mathfrak{l}}, \mathbf{t}_{\mathfrak{l}'})\le \min\{r_{\mathfrak{l}},r_{\mathfrak{l}'}\}$, and not on the quality of the Montes approximations $\phi_{\mathfrak{l}'}$. On the other hand, $V_2$ depends on the choice of $\phi_{\mathfrak{l}}$ as a Montes approximation of $f_{\mathfrak{l}}(x)$; by (\ref{p=q}):
$$
V_2=v_{\mathfrak{l}}(\phi_{\mathfrak{l}}(\theta)^{d_{\mathfrak{l}}})=d_{\mathfrak{l}}\,e(\mathfrak{l}/p)\dfrac{v_{r_{\mathfrak{l}}+1,\mathfrak{l}}(\phi_{\mathfrak{l}})+h_{r_{\mathfrak{l}}+1,\mathfrak{l}}}{e_{1,\mathfrak{l}}\cdots e_{r_{\mathfrak{l}},\mathfrak{l}}}=d_{\mathfrak{l}}(v_{r_{\mathfrak{l}}+1,\mathfrak{l}}(\phi_{\mathfrak{l}})+h_{r_{\mathfrak{l}}+1,\mathfrak{l}}).
$$
Hence, for all $\mathfrak{l}\in[\mathfrak{p}]$, $\mathfrak{l}\ne\mathfrak{p}$, we improve the Montes approximation $\phi_{\mathfrak{l}}$ until we get
$$
h_{r_{\mathfrak{l}}+1,\mathfrak{l}}\ge \dfrac{2+Ne(\mathfrak{l}/p)-V_1}{d_\mathfrak{l}}-v_{r_{\mathfrak{l}}+1,\mathfrak{l}}(\phi_{\mathfrak{l}}).
$$
This ensures that $v_{\mathfrak{l}}(b_\mathfrak{p})>1$, as desired.
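The exponents in (\ref{finalbp}) involve only elementary rational arithmetic. As an illustration (not part of the package described later), here is a minimal Python sketch with hypothetical input data, namely the numerators $n_{\mathfrak{l}}$ and, for each $\mathfrak{q}\not\in[\mathfrak{p}]$, the pair $(e(\mathfrak{q}/p),|\lambda^0_{1,\mathfrak{q}}|)$:

```python
from fractions import Fraction
from math import ceil

def exponents_N_m(n_l, outside):
    """Exponents N, m of (finalbp).

    n_l     : list of the numerators n_l, one per prime l in [p], l != p
    outside : list of pairs (e(q/p), |lambda^0_{1,q}|), one per prime
              ideal q outside the class [p]
    """
    N = sum(n_l)
    # m = ceil( max_q (N e(q/p) + 2) / (e(q/p) |lambda^0_{1,q}|) )
    m = ceil(max(Fraction(N * e + 2, e * lam) for e, lam in outside))
    return N, m
```

Working with \texttt{Fraction} keeps the maximum exact, so the ceiling is never spoilt by rounding.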
\subsection{Two-element representation of a fractional ideal}\label{subsectwogen}
Any fractional ideal $\mathfrak{a}$ of $K$ admits a two-element representation: $\mathfrak{a}=(\ell,\alpha)$, where $\alpha\in K$ and $\ell=\ell(\mathfrak{a})\in\mathbb Q$ is the least positive rational number contained in $\mathfrak{a}$. It is straightforward to obtain such a representation from the two-element representation of the prime ideals obtained in the last section.
For the sake of completeness we briefly describe the routine. For each prime ideal $\mathfrak{p}$ of $K$, we have computed an integral element $\alpha_\mathfrak{p}\in\mathbb Z_K$ such that
$$
v_\mathfrak{p}(\alpha_\mathfrak{p})=1,\quad v_\mathfrak{q}(\alpha_\mathfrak{p})=0, \ \forall\,\mathfrak{q}|p, \,\mathfrak{q}\ne\mathfrak{p}.
$$
These elements are of the form: $\alpha_\mathfrak{p}=p^{\nu_\mathfrak{p}}h_\mathfrak{p}(\theta)$, where $\nu_\mathfrak{p}\in\mathbb Z$ and $h_\mathfrak{p}(x)\in\mathbb Z[x]$ is a primitive polynomial. Generically, $\nu_\mathfrak{p}\le0$, except for the special case $\mathfrak{p}=p\mathbb Z_K$, where $\nu_\mathfrak{p}=1$, $h_\mathfrak{p}(x)=1$. Let us write
$$
\operatorname{N}_{K/\mathbb Q}(h_\mathfrak{p}(\theta))=p^{\mu_\mathfrak{p}}N_\mathfrak{p}, \mbox{ with }p\nmid N_\mathfrak{p}.
$$
Suppose first that $\mathfrak{a}=\prod_{\mathfrak{p}|p}\mathfrak{p}^{a_\mathfrak{p}}$ has support only in prime ideals dividing $p$. We take then:
$$
\alpha=\prod_{\mathfrak{p}|p}\alpha_\mathfrak{p}^{a_\mathfrak{p}}\prod_{a_\mathfrak{p}<0}N_\mathfrak{p}^{|a_\mathfrak{p}|},\qquad H:=\left\lceil\max_{\mathfrak{p}|p}\left\{ \dfrac{a_\mathfrak{p}}{e(\mathfrak{p}/p)}\right\}\right\rceil.
$$
One checks easily that $\ell(\mathfrak{a})=p^H$ and $\mathfrak{a}=(p^H,\alpha)$.
In the general case, we write $\mathfrak{a}=\prod_{p\in{\mathcal P}}\mathfrak{a}_p$, where ${\mathcal P}$ is a finite set of prime numbers and $\mathfrak{a}_p$ is divided only by prime ideals lying over $p$. For each $\mathfrak{a}_p$ we find a two-element representation $\mathfrak{a}_p=(p^{H_p},\alpha_p)$; then, the two-element representation of $\mathfrak{a}$ is:
$$
\mathfrak{a}=\left(\prod_{p\in{\mathcal P}}p^{H_p},\,\sum_{p\in{\mathcal P}}\left(\prod_{q\in{\mathcal P},\,q\ne p}
q^{H_q+1}\right)\alpha_p\right).
$$
Note that the second generator $\alpha$ constructed in this way satisfies: $v_\mathfrak{p}(\alpha)=v_\mathfrak{p}(\mathfrak{a})$, for all $\mathfrak{p}$ with $v_\mathfrak{p}(\mathfrak{a})\ne 0$, which is slightly stronger than the condition $\mathfrak{a}=(\ell,\alpha)$.
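The computation of $\ell(\mathfrak{a})$ and of the integer multipliers $\prod_{q\ne p}q^{H_q+1}$ appearing above is plain arithmetic. A hypothetical Python sketch, where the support of $\mathfrak{a}$ over each rational prime $p$ is encoded as a list of pairs $(a_\mathfrak{p},e(\mathfrak{p}/p))$:

```python
from fractions import Fraction
from math import ceil, prod

def first_generator(support):
    """ell(a) = prod_p p^{H_p}, with H_p = ceil(max_p a_p / e(p/p)).

    support : dict  p -> list of pairs (a_p, e(p/p)), one per prime
              ideal of K above p, where a_p = v_p(a)
    """
    H = {p: ceil(max(Fraction(a, e) for a, e in vals))
         for p, vals in support.items()}
    ell = Fraction(1)
    for p, Hp in H.items():
        ell *= Fraction(p) ** Hp        # H_p may be negative
    return ell, H

def multipliers(H):
    """Integer multipliers prod_{q != p} q^{H_q+1} used to glue the
    local second generators alpha_p into a single global one."""
    return {p: prod(q ** (Hq + 1) for q, Hq in H.items() if q != p)
            for p in H}
```

The exponents $H_p$ may be negative for a genuinely fractional ideal, so the first generator is kept as an exact rational.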
\section{Residue classes and Chinese remainder theorem}\label{secCRT}
In this section we show how to compute residue classes modulo prime ideals, and we design a Chinese remainder theorem routine. As in the previous sections, this will be done without constructing (a basis of) the maximal order of $K$ and without needing to invert elements in the number field. Only some inversions in the finite residue fields are required.
For any prime ideal $\mathfrak{p}$ of $K$ we keep the notations for $\mathbf{t}_\mathfrak{p}$, $f_\mathfrak{p}(x)$, $\phi_\mathfrak{p}(x)$, $\ff{\mathfrak{p}}$, $\theta_\mathfrak{p}$, $K_\mathfrak{p}$, $\mathbb Z_{K_\mathfrak{p}}$, as introduced in section \ref{secPadic}.
\subsection{Residue classes modulo a prime ideal}
\label{redmap}
Let $\mathfrak{p}$ be a prime ideal of $K$, corresponding to an $f$-complete type $\mathbf{t}_\mathfrak{p}$ with an added $(r+1)$-th level, as indicated in section \ref{subsecApprox}.
The finite field $\ff{\mathfrak{p}}:=\ff{r+1}$ may be considered as a computational representation of the residue field $\mathbb Z_K/\mathfrak{p}$. In fact, fix the topological embedding, $\iota_\mathfrak{p}\colon K\hookrightarrow K_\mathfrak{p}$, determined by sending $\theta$ to $\theta_\mathfrak{p}$, and consider the reduction modulo $\mathfrak{p}$ map obtained by composition of the embedding
$\mathbb Z_K\hookrightarrow \mathbb Z_{K_\mathfrak{p}}$ with the local reduction map constructed in (\ref{lred}):
$$
\operatorname{red}_\mathfrak{p}\colon \mathbb Z_K\hookrightarrow \mathbb Z_{K_\mathfrak{p}}\stackrel{\operatorname{lred}_\mathfrak{p}}\longrightarrow \ff{\mathfrak{p}}.
$$
The commutative diagram:
$$
\begin{array}{ccccc}
\mathbb Z_K&\hookrightarrow&\mathbb Z_{K_\mathfrak{p}}&\stackrel{\operatorname{lred}_\mathfrak{p}}\longrightarrow&\ff{\mathfrak{p}}\\
\downarrow&&\downarrow&&\parallel\\
\mathbb Z_K/\mathfrak{p}&\ \lower.3ex\hbox{\as{.08}$\begin{array}{c}\longrightarrow\\\mbox{\tiny $\sim\,$}\end{array}$}\ &\mathbb Z_{K_\mathfrak{p}}/\mathfrak{p}\mathbb Z_{K_\mathfrak{p}}&\stackrel{\gamma^{-1}}\ \lower.3ex\hbox{\as{.08}$\begin{array}{c}\longrightarrow\\\mbox{\tiny $\sim\,$}\end{array}$}\ &\ff{\mathfrak{p}}
\end{array}
$$
shows that our reduction map $\operatorname{red}_\mathfrak{p}$ coincides with the canonical reduction map, $\mathbb Z_K\longrightarrow \mathbb Z_K/\mathfrak{p}$, up to a certain isomorphism $\mathbb Z_K/\mathfrak{p}\ \lower.3ex\hbox{\as{.08}$\begin{array}{c}\longrightarrow\\\mbox{\tiny $\sim\,$}\end{array}$}\ \ff{\mathfrak{p}}$.
The problem now takes a computational form: we want a routine that computes $\operatorname{red}_\mathfrak{p}(\alpha)\in \ff{\mathfrak{p}}$ for any given integral element $\alpha\in\mathbb Z_K$. To this end, it is sufficient to have a routine that computes $\operatorname{lred}_\mathfrak{p}(\alpha)\in\ff{\mathfrak{p}}$, for any $\mathfrak{p}$-integral $\alpha\in K$. Let us show that this latter routine can be based on item 2 of Proposition \ref{vgt} and Lemma \ref{prodgammas}.
Any $\alpha\in\mathbb Z_K$ can be written in a unique way as:
$$
\alpha=\dfrac ab\,\dfrac{g(\theta)}{p^N},
$$
where $a,b$ are positive coprime integers not divisible by $p$ and $g(x)\in\mathbb Z[x]$ is a primitive polynomial. Clearly, $$\operatorname{red}_\mathfrak{p}(\alpha)=\operatorname{lred}_\mathfrak{p}(\iota_\mathfrak{p}(\alpha))=\operatorname{lred}_\mathfrak{p}(a/b)\operatorname{lred}_\mathfrak{p}(g(\theta_\mathfrak{p})/p^N),$$ and $\operatorname{lred}_\mathfrak{p}(a/b)\in\ff{\mathfrak{p}}$ is the element in the prime field determined by the quotient of the classes modulo $p$ of $a$ and $b$. Thus, we need only to compute $\operatorname{lred}_\mathfrak{p}(g(\theta_\mathfrak{p})/p^N)$.
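Extracting the data $a,b,N,g(x)$ of this normal form from the standard representation of $\alpha$ is routine. A Python sketch (ignoring signs, and with a hypothetical encoding of $\alpha$ as the list of \texttt{Fraction} coefficients of its standard representation as a polynomial in $\theta$):

```python
from fractions import Fraction
from math import gcd

def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def normal_form(coeffs, p):
    """Write alpha = (a/b) * g(theta)/p^N with p coprime to a and b and
    g in Z[x] primitive, from the Fraction coefficients of alpha."""
    D = 1                                   # lcm of the denominators
    for c in coeffs:
        D = D * c.denominator // gcd(D, c.denominator)
    G = [int(c * D) for c in coeffs]        # alpha = G(theta)/D, G in Z[x]
    content = 0
    for g in G:
        content = gcd(content, g)
    g_prim = [g // content for g in G]      # primitive part of G
    # content/D = (a/b) * p^{-N}, with content and D coprime
    N = vp(D, p) - vp(content, p)
    a = content // p ** vp(content, p)
    b = D // p ** vp(D, p)
    return a, b, N, g_prim
```

Note that the content of $G$ is automatically coprime to $D$, so only the powers of $p$ have to be separated.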
If $N=0$, then $\operatorname{lred}_\mathfrak{p}(g(\theta_\mathfrak{p}))=\operatorname{red}_\mathfrak{p}(g(\theta))$ is just the class of $g(x)$ modulo the ideal $(p,\phi_{1,\mathfrak{p}}(x))$. In other words, if $\overline{g}(x)\in\ff{0}[x]$ is the polynomial obtained by reduction of the coefficients of $g(x)$ modulo $p$, then $\operatorname{red}_\mathfrak{p}(g(\theta))=\overline{g}(z_0)\in\ff{1,\mathfrak{p}}\subseteq \ff{\mathfrak{p}}$.
If $N>0$, we look for the first index, $1\le i\le r+1$, for which the truncation $\operatorname{Trunc}_i(\mathbf{t})$ does not divide $g(x)$. In the paragraph following Proposition \ref{vgt} we showed that this always occurs, possibly (for $i=r+1$) only after improving the Montes approximation $\phi_\mathfrak{p}=\phi_{r+1}$. By Proposition \ref{vgt}, there is a computable point $(s,u)\in N_i(g)$ such that
$v(g(\theta_\mathfrak{p}))=(sh_i+ue_i)/(e_1\cdots e_i)=v(\Phi_i(\theta_\mathfrak{p})^s\pi_i(\theta_\mathfrak{p})^u)$, and
$$
\operatorname{lred}_\mathfrak{p}\left(\dfrac{g(\theta_\mathfrak{p})}{\Phi_i(\theta_\mathfrak{p})^s\pi_i(\theta_\mathfrak{p})^u}\right)=R_i(g)(z_i)\ne0.
$$
Now, if $(sh_i+ue_i)/(e_1\cdots e_i)>N$, we have $\operatorname{lred}_\mathfrak{p}(g(\theta_\mathfrak{p})/p^N)=0$; on the other hand, if $(sh_i+ue_i)/(e_1\cdots e_i)=N$, we have
\begin{equation}\label{locred}
\operatorname{lred}_\mathfrak{p}\left(\dfrac{g(\theta_\mathfrak{p})}{p^N}\right)= R_i(g)(z_i)\cdot \operatorname{lred}_\mathfrak{p}\left(\dfrac{\Phi_i(\theta_\mathfrak{p})^s\pi_i(\theta_\mathfrak{p})^u}{p^N}\right)
= R_i(g)(z_i)\, z_1^{t_1}\cdots z_i^{t_i},
\end{equation}
where $p^{-N}\Phi_i(\theta_\mathfrak{p})^s\pi_i(\theta_\mathfrak{p})^u=\gamma_1^{t_1}\cdots \gamma_i^{t_i}$, and the vector $(t_1,\dots,t_i)\in\mathbb Z^i$ can be found by the procedure of Lemma \ref{prodgammas}, applied to the input vector $$\log \left(p^{-N}\Phi_i(x)^s\pi_i(x)^u\right)=
(-N,0,\dots,0)+s\log \Phi_i+u\log \pi_i.$$
Since $\log \Phi_i$, $\log \pi_i$ have been stored as secondary invariants of $\mathbf{t}_\mathfrak{p}$,
in this way we obtain a very fast computation of $\operatorname{lred}_\mathfrak{p}(p^{-N}g(\theta_\mathfrak{p}))$.
This ends the computation of the reduction modulo $\mathfrak{p}$ map $\operatorname{red}_\mathfrak{p}$.
\subsection{Chinese remainder theorem}
It is straightforward to design a Chinese remainder routine once the following problem is solved.\medskip
\noindent{\bf Problem. }{\it Let $p$ be a prime number, $\mathfrak{P}$ the set of prime ideals of $K$ lying above $p$, and $(a_\mathfrak{p})_{\mathfrak{p}\in\mathfrak{P}}$ a family of non-negative integers. Find a family $(c_\mathfrak{p})_{\mathfrak{p}\in\mathfrak{P}}$ of integral elements $c_\mathfrak{p}\in\mathbb Z_K$ such that,
$$
c_\mathfrak{p}\equiv 1 \md{\mathfrak{p}^{a_\mathfrak{p}}},\quad c_\mathfrak{p}\equiv 0\md{\mathfrak{q}^{a_\mathfrak{q}}},\ \forall\,\mathfrak{q}\in\mathfrak{P},\ \mathfrak{q}\ne\mathfrak{p},
$$
for all $\mathfrak{p}\in\mathfrak{P}$.}\medskip
There is an easy solution to this problem: take the element $b_\mathfrak{p}\in\mathbb Z_K$ satis\-fying (\ref{bp}), constructed in section \ref{subsectgenerators}, and consider $c_\mathfrak{p}=(b_\mathfrak{p})^{p^t(q_\mathfrak{p}-1)}$, where
$q_\mathfrak{p}=\operatorname{N}_{K/\mathbb Q}(\mathfrak{p})=\#\ff{\mathfrak{p}}$, and $t$ is sufficiently large. However, the element $c_\mathfrak{p}$ constructed in this way is not useful for practical purposes because it may have a huge norm and a huge height (very large numerators or denominators of the coefficients of its standard representation as a polynomial in $\theta$), if $q_\mathfrak{p}$ or $t$ are large. Instead, we shall refine the construction of the element $b_\mathfrak{p}$ to get a small size solution $c_\mathfrak{p}$ to the above problem.
First we deal with the particular case $a_\mathfrak{p}=1$. The idea is to get an element $b_\mathfrak{p}\in\mathbb Z_K$ satisfying
$$
v_\mathfrak{p}(b_\mathfrak{p})=0,\quad v_\mathfrak{q}(b_\mathfrak{p})\ge a_\mathfrak{q},\ \forall\,\mathfrak{q}\in\mathfrak{P},\ \mathfrak{q}\ne\mathfrak{p},
$$
and then find $\beta\in K$ such that $v_\mathfrak{p}(\beta)=0$, $c_\mathfrak{p}:=b_\mathfrak{p} \beta$ is integral, and $c_\mathfrak{p}\equiv 1\md{\mathfrak{p}}$. Since we know the two-element representation $\mathfrak{q}=(p,\alpha_\mathfrak{q})$ of the prime ideals, we could take $b_\mathfrak{p}=\prod_{\mathfrak{q}\in\mathfrak{P},\,\mathfrak{q}\ne\mathfrak{p}}(\alpha_\mathfrak{q})^{a_\mathfrak{q}}$. Again, this might lead to a $b_\mathfrak{p}$ with large size, so that a direct construction of $b_\mathfrak{p}$ is preferable in order to keep its size as small as possible.
Thus, we consider an element $b_\mathfrak{p}\in\mathbb Z_K$ as in (\ref{finalbp}):
$$
b_\mathfrak{p}=\dfrac{(B_{[\mathfrak{p}]}(\theta))^m\prod_{\mathfrak{l}\in[\mathfrak{p}],\,\mathfrak{l}\ne\mathfrak{p}}\phi_{\mathfrak{l}}(\theta)^{d_{\mathfrak{l}}}}{p^N},
$$
with $N=\sum_{\mathfrak{l}\in[\mathfrak{p}],\ \mathfrak{l}\ne\mathfrak{p}} d_\mathfrak{l} v(\phi_\mathfrak{l}(\theta_\mathfrak{p}))=\sum_{\mathfrak{l}\in[\mathfrak{p}],\ \mathfrak{l}\ne\mathfrak{p}} n_\mathfrak{l}$. By construction, $v_\mathfrak{p}(b_\mathfrak{p})=0$.
Let $i=\max_{\mathfrak{q}\in\mathfrak{P}, \,\mathfrak{q}\ne\mathfrak{p}}\{i(\mathbf{t}_\mathfrak{p},\mathbf{t}_\mathfrak{q})\}$; Lemma \ref{lessthanr} shows that $i\le r_\mathfrak{p}$. From now on, we use only invariants of the type $\mathbf{t}_\mathfrak{p}$ and we drop the subscript $\mathfrak{p}$ from the notation. Let
$$
M=\left\{\begin{array}{ll}
0,& \mbox{ if }i=0,\\
\left\lceil \dfrac{v_{i+1}(\phi_{i+1})}{e_1\cdots e_i}\right\rceil,&\mbox{ if }i>0.
\end{array}
\right.
$$
Arguing as in section \ref{subsectgenerators}, we can take $m$ sufficiently large, and each $\phi_\mathfrak{l}(x)$ sufficiently close to the $p$-adic irreducible factor $f_{\mathfrak{l}}(x)$, so that
\begin{equation}\label{plusM}
v_\mathfrak{q}(b_\mathfrak{p})\ge a_\mathfrak{q}+Me(\mathfrak{q}/p), \ \forall \,\mathfrak{q}\in\mathfrak{P},\ \mathfrak{q}\ne\mathfrak{p},
\end{equation}
while keeping the denominator $p^N$ and the condition $v_\mathfrak{p}(b_\mathfrak{p})=0$.
In particular, $b_\mathfrak{p}$ belongs to $\mathbb Z_K$. The idea is to multiply $b_\mathfrak{p}$ by some element in $K$ that conveniently modifies its residue class modulo $\mathfrak{p}$.
We split this task into two parts, that may be considered as a kind of respective inversion modulo $\mathfrak{p}$ of $(B_{[\mathfrak{p}]}(\theta))^m$ and $\,p^{-N}\prod_{\mathfrak{l}\in[\mathfrak{p}],\,\mathfrak{l}\ne\mathfrak{p}}\phi_{\mathfrak{l}}(\theta)^{d_{\mathfrak{l}}}$.
Let $h(x)=(B_{[\mathfrak{p}]}(x))^m$. Then, $\zeta:=\operatorname{red}_\mathfrak{p}(h(\theta))\in\ff{1}\subseteq\ff{\mathfrak{p}}$
is just the class of $h(x)$ modulo the ideal $(p,\phi_1(x))$. We invert $\zeta$ in $\ff{1}$ and represent the inverse $\zeta^{-1}=\overline{P}(z_0)$ as a polynomial in $z_0$ of degree less than $f_0$, with coefficients in the prime field $\ff{0}$. Take $\beta_0=P(\theta)$, where $P(x)\in\mathbb Z[x]$ is an arbitrary lift of $\overline{P}(x)$; clearly, $h(\theta)\beta_0\equiv 1\md{\mathfrak{p}}$.
Let now $\,g(x)=\prod_{\mathfrak{l}\in[\mathfrak{p}],\,\mathfrak{l}\ne\mathfrak{p}}\phi_{\mathfrak{l}}(x)^{d_{\mathfrak{l}}}$, so that $b_\mathfrak{p}=h(\theta)\,p^{-N}g(\theta)$.
If $[\mathfrak{p}]=\{\mathfrak{p}\}$, then $i=0$, $N=M=0$, $g(x)=1$ and we are done. Suppose $[\mathfrak{p}]\varsupsetneq \{\mathfrak{p}\}$, so that $1\le i\le r_\mathfrak{p}$. By the definition of the index of coincidence, we have $\operatorname{Trunc}_i(\mathbf{t}_\mathfrak{p})\nmid \phi_\mathfrak{l}(x)$, for all $\mathfrak{l}\ne\mathfrak{p}$; by the Theorem of the product \cite[Thm.2.26]{HN}, $\operatorname{Trunc}_i(\mathbf{t}_\mathfrak{p})\nmid g(x)$. Therefore, as we saw in the last section,
$$
\xi:=\operatorname{lred}_\mathfrak{p}(p^{-N}g(\theta_\mathfrak{p}))=R_i(g)(z_i)\,z_1^{t_1}\cdots z_i^{t_i}\in\ff{i+1},
$$
for some easily computable sequence of integers $(t_1,\dots,t_i)$. Let $V=e_1\cdots e_iM$; by \cite[Cor.3.2]{HN}, $v\left(\pi_{i+1}(\theta_\mathfrak{p})^V\right)=M$. Compute a vector $(t'_1,\dots,t'_i)\in\mathbb Z^i$ such that
$p^{-M}\pi_{i+1}(x)^V=\gamma_1^{t'_1}\cdots \gamma_i^{t'_i}$, as indicated in Lemma \ref{prodgammas}, and take
$$
\xi':=z_1^{t'_1}\cdots z_i^{t'_i}\in\ff{i+1}.
$$
Let $\varphi(y)\in\ff{i}[y]$ be the unique polynomial of degree less than $f_i$, such that
$\varphi(z_i)=z_i^{\ell_iV/e_i}(\xi\xi')^{-1}$. Let $\nu=\op{ord}_y\varphi(y)$.
Clearly,
$$
V\ge v_{i+1}(\phi_{i+1})=e_if_iv_{i+1}(\phi_i),
$$
the last equality by \cite[Thm.2.11]{HN}. Therefore, we can apply the constructive method described in \cite[Prop.2.10]{HN} to compute a polynomial $P(x)\in\mathbb Z[x]$ satisfying the following properties:
$$\op{deg} P(x)<m_{i+1},\qquad v_{i+1}(P)=V, \qquad y^{\nu}R_i(P)(y)=\varphi(y).
$$
A look at the proof of \cite[Prop.2.10]{HN} shows that $N_i(P)$ is one-sided of slope $\lambda_i$ and its endpoints have abscissas $e_i\nu$ and $e_i\deg\varphi$ (cf. Figure 4).
\begin{center}
\setlength{\unitlength}{5.mm}
\begin{picture}(10,5.4)
\put(5.85,1.85){$\bullet$}\put(2.85,3.35){$\bullet$}
\put(0,0){\line(1,0){10}}
\put(9,.5){\line(-2,1){8}}
\put(3,3.53){\line(2,-1){3}}
\put(.9,4.5){\line(1,0){.2}}
\put(1,-1){\line(0,1){6.5}}
\put(4.5,3){\begin{footnotesize}$N_i(P)$\end{footnotesize}}
\put(8.5,1){\begin{footnotesize}$L_{\lambda_i}$\end{footnotesize}}
\put(-.5,4.3){\begin{footnotesize}$V/e_i$\end{footnotesize}}
\multiput(3,-.1)(0,.25){15}{\vrule height2pt}
\multiput(6,-.1)(0,.25){9}{\vrule height2pt}
\put(5.2,-.6){\begin{footnotesize}$e_i\deg\varphi $\end{footnotesize}}
\put(2.6,-.6){\begin{footnotesize}$e_i\nu $\end{footnotesize}}
\end{picture}
\end{center}\medskip
\begin{center}
Figure 4
\end{center}
\noindent{\bf Claim: }$\operatorname{lred}_\mathfrak{p}(p^{-M}P(\theta_\mathfrak{p}))=\xi^{-1}$.\medskip
In fact, since $\op{deg} P(x)<m_{i+1}$ and $v_{i+1}(P)=V$, the Newton polygon $N_{i+1}(P)$ is the single point $(0,V)$; in particular, $\operatorname{Trunc}_{i+1}(\mathbf{t}_\mathfrak{p})$ does not divide $P(x)$ and Proposition \ref{vgt}
shows that $v(P(\theta_\mathfrak{p}))=M$ and
$$
\operatorname{lred}_\mathfrak{p}\left(P(\theta_\mathfrak{p})/\pi_{i+1}(\theta_\mathfrak{p})^V\right)=R_{i+1}(P)(z_{i+1})\ne 0.
$$
Actually, $R_{i+1}(P)(y)$ has degree $0$ and it represents a constant in $\ff{i+1}$ that we denote simply by $R_{i+1}(P)$. By (\ref{locred}),
$$
\operatorname{lred}_\mathfrak{p}\left(p^{-M}P(\theta_\mathfrak{p})\right)=R_{i+1}(P)\,\operatorname{lred}_\mathfrak{p}(p^{-M}\pi_{i+1}(\theta_\mathfrak{p})^V)=R_{i+1}(P)\,\xi'.
$$
By the very definition of the residual polynomial \cite[Defs.2.20+2.21]{HN}, we have
$$
R_{i+1}(P)=z_i^{(e_i\nu -\ell_iV)/e_i}R_i(P)(z_i)=z_i^{-\ell_iV/e_i}\varphi(z_i)=(\xi\xi')^{-1},
$$
and the Claim is proven.\medskip
Finally, consider $$c_\mathfrak{p}=b_\mathfrak{p}\beta_0(p^{-M}P(\theta)).$$ The condition (\ref{plusM}) ensures that $c_\mathfrak{p}$ belongs to $\mathbb Z_K$ (although $p^{-M}P(\theta)$ might not be integral) and satisfies $v_\mathfrak{q}(c_\mathfrak{p})\ge a_\mathfrak{q}$, for all $\mathfrak{q}\in\mathfrak{P}$, $\mathfrak{q}\ne\mathfrak{p}$. By construction, $\operatorname{red}_\mathfrak{p}(c_\mathfrak{p})=\operatorname{lred}_\mathfrak{p}(\iota_\mathfrak{p}(c_\mathfrak{p}))=1$.
It remains to solve our Problem when $a_\mathfrak{p}>1$. In this case, we find $c\in\mathbb Z_K$ such that
$$
c\equiv 1\md{\mathfrak{p}},\qquad c\equiv 0 \md{\mathfrak{q}^{a_\mathfrak{q}}},\ \forall\,\mathfrak{q}\in\mathfrak{P},\ \mathfrak{q}\ne\mathfrak{p},
$$
and we take $c_\mathfrak{p}=(c-1)^m+1$, where $m$ is the least odd integer that is greater than or equal to $a_\mathfrak{p}/v_\mathfrak{p}(c-1)$.
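The parity of $m$ is the key: modulo any $\mathfrak{q}\ne\mathfrak{p}$ one has $c-1\equiv-1$, so $(c-1)^m+1\equiv0$, while $v_\mathfrak{p}((c-1)^m)=m\,v_\mathfrak{p}(c-1)\ge a_\mathfrak{p}$ gives $c_\mathfrak{p}\equiv1\md{\mathfrak{p}^{a_\mathfrak{p}}}$. A toy Python check of the trick over $\mathbb Z$, with hypothetical data $p=2$, $q=3$, $c=27$:

```python
from fractions import Fraction
from math import ceil

def least_odd_at_least(x):
    """Least odd integer >= x, for a positive rational x."""
    k = ceil(x)
    return k if k % 2 == 1 else k + 1

# c = 27 satisfies c = 1 (mod 2) and c = 0 (mod 27); v_2(c - 1) = 1.
a_p, v = 3, 1                     # we want c_p = 1 (mod 2^3)
m = least_odd_at_least(Fraction(a_p, v))
c = 27
c_p = (c - 1) ** m + 1            # = 1 (mod 8) and = 0 (mod 27)
```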
\section{$p$-integral bases}\label{secBasis}
Let $p$ be a prime number and let $\ff{p}$ be the prime field of characteristic $p$. A \emph{$p$-integral basis of $K$} is a family of $n$ $\mathbb Z$-linearly independent integral elements $\alpha_1,\dots,\alpha_n\in\mathbb Z_K$ such that
$$
p\nmid \left(\mathbb Z_K\colon \gen{\alpha_1,\dots,\alpha_n}_\mathbb Z\right),
$$
or equivalently, such that the family $\alpha_1\otimes1,\,\dots,\,\alpha_n\otimes1$ is $\ff{p}$-linearly independent in the $\ff{p}$-algebra $\mathbb Z_K\otimes_\mathbb Z\ff{p}$.
If the discriminant $\op{disc}(f)$ of $f(x)$ can be factorized, the computation of an integral basis of $K$ (a $\mathbb Z$-basis of $\mathbb Z_K$) is based on the computation of $p$-integral bases for the different primes $p$ dividing $\op{disc}(f)$.
In any case, even when $\op{disc}(f)$ cannot be factorized, the computation of a $p$-integral basis of $K$ for a given prime $p$ is an interesting task in its own right. In this section we show how to carry out this task from the data captured by the Okutsu-Montes representations of the prime ideals $\mathfrak{p}$ lying over $p$.
For any such prime ideal we keep the notations for $f_\mathfrak{p}(x)$, $\phi_\mathfrak{p}(x)$, $\ff{\mathfrak{p}}$, $\theta_\mathfrak{p}$, $K_\mathfrak{p}$, $\mathbb Z_{K_\mathfrak{p}}$, as introduced in section \ref{secPadic}.
\subsection{Local exponent of a prime ideal}Let $\mathfrak{P}$ be the set of prime ideals lying over $p$. For any $\mathfrak{p}\in\mathfrak{P}$ we fix the topological embedding $K\hookrightarrow K_\mathfrak{p}$ determined by sending $\theta$ to $\theta_\mathfrak{p}$.
\begin{definition}
We define the local exponent of $\mathfrak{p}\in\mathfrak{P}$ to be the least positive integer $\op{exp}(\mathfrak{p})$ such that
$$
p^{\op{exp}(\mathfrak{p})}\mathbb Z_{K_\mathfrak{p}}\subseteq \mathbb Z_p[\theta_\mathfrak{p}].
$$
Note that $\op{exp}(\mathfrak{p})$ is an invariant of the irreducible polynomial $f_\mathfrak{p}(x)\in\mathbb Z_p[x]$, but it is not an intrinsic invariant of $\mathfrak{p}$.
\end{definition}
The computation of $\op{exp}(\mathfrak{p})$ is easily derived from the results of \cite{GMNokutsu}.
Let $\mathfrak{p}=[p;\phi_1,\dots,\phi_r,\phi_\mathfrak{p}]$ be the Okutsu-Montes representation of $\mathfrak{p}$, as indicated in section \ref{subsecApprox}. By \cite[Lem.4.5]{GMNokutsu}:
$$f_\mathfrak{p}(x)\equiv \phi_\mathfrak{p}(x)\md{{\mathfrak m}^{\lceil\nu\rceil}}, \quad \mbox{ where }\nu=\nu_\mathfrak{p}+(h_{r+1}/e(\mathfrak{p}/p)),
$$
and $\nu_\mathfrak{p}$ is the rational number
$$\nu_\mathfrak{p}:=\dfrac{h_1}{e_1}+\dfrac{h_2}{e_1e_2}+\cdots+\dfrac{h_r}{e_1\cdots e_r}.
$$
Caution: this number is not always an invariant of $f_\mathfrak{p}(x)$. If $e_rf_r=1$, the Okutsu depth of $f_\mathfrak{p}(x)$ is $R=r-1$ (cf. (\ref{depth})), and $h_r$ depends on the choice of $\phi_r$.
Denote $\phi_0(x)=x$, $m_0=1$, $m_{r+1}=n_\mathfrak{p}:=e(\mathfrak{p}/p)f(\mathfrak{p}/p)$. Any integer $0\le m<n_\mathfrak{p}$ can be written in a unique way as:
$$
m=\sum_{i=0}^ra_im_i, \quad 0\le a_i<\dfrac{m_{i+1}}{m_i}.
$$
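This decomposition is just a mixed-radix expansion of $m$ with radices $m_{i+1}/m_i$, obtained by successive divisions. A hypothetical Python sketch, with the input encoded as $\mathtt{ms}=[m_0,\dots,m_r]$:

```python
def mixed_radix_digits(m, ms):
    """Digits a_0,...,a_r of m = sum_i a_i m_i with 0 <= a_i < m_{i+1}/m_i,
    where ms = [m_0 = 1, m_1, ..., m_r] and each m_i divides m_{i+1}."""
    digits = []
    for mi in reversed(ms):
        digits.append(m // mi)
        m %= mi
    return digits[::-1]          # a_0, ..., a_r
```

For $0\le m<n_\mathfrak{p}$ the top digit automatically satisfies $a_r<m_{r+1}/m_r$.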
Then, $g_m(x):=\prod_{i=0}^r\phi_i(x)^{a_i}$ is a divisor polynomial of degree $m$ of $f_\mathfrak{p}(x)$ \cite[Thm.2.15]{GMNokutsu}. Also, if $\nu_m:=\lfloor v(g_m(\theta_\mathfrak{p}))\rfloor $, the family
\begin{equation}\label{basis}
1,\,\dfrac{g_1(\theta_\mathfrak{p})}{p^{\nu_1}},\,\dots,\,\dfrac{g_{n_\mathfrak{p}-1}(\theta_\mathfrak{p})}{p^{\nu_{n_\mathfrak{p}-1}}}
\end{equation}
is a $\mathbb Z_p$-basis of $\mathbb Z_{K_\mathfrak{p}}$ \cite[I,Thm.1]{Ok}. Since the numerators $g_m(x)$ have strictly increasing degree and $\nu_1\le \cdots\le\nu_{n_\mathfrak{p}-1}$, it is clear that
$\op{exp}(\mathfrak{p})=\nu_{n_\mathfrak{p}-1}$.
On the other hand, since $m_{i+1}/m_i=e_if_i$, we have
$$\nu_{n_\mathfrak{p}-1}=\left\lfloor \sum_{i=1}^r(e_if_i-1)v(\phi_i(\theta_\mathfrak{p}))
\right\rfloor.
$$
Now, in the proof of \cite[Lem.4.5]{GMNokutsu} it was shown that:
\begin{equation}\label{above}
\sum_{i=1}^r(e_if_i-1)v(\phi_i(\theta_\mathfrak{p}))=v(\phi_{r+1}(\theta_\mathfrak{p}))-\nu_\mathfrak{p}-\dfrac{h_{r+1}}{e(\mathfrak{p}/p)}=\dfrac{v_{r+1}(\phi_{r+1})}{e(\mathfrak{p}/p)}-\nu_\mathfrak{p}.
\end{equation}
Thus, if we combine
(\ref{above}) with the explicit formula:
$$
v_{r+1}(\phi_{r+1})=\sum_{i=1}^re_{i+1}\cdots e_r(e_if_i\cdots e_rf_r)h_i,
$$
given in \cite[Prop.2.15]{HN}, we get the following explicit computation of $\op{exp}(\mathfrak{p})$ in terms of the invariants of the Okutsu-Montes representation of $\mathfrak{p}$.
\begin{theorem}\label{theoexp}For all $\mathfrak{p}\in\mathfrak{P}$, we have $\op{exp}(\mathfrak{p})=\lfloor\mu_\mathfrak{p}\rfloor$, where
$$
\mu_\mathfrak{p}:=\dfrac{v_{r+1}(\phi_{r+1})}{e(\mathfrak{p}/p)}-\nu_\mathfrak{p}=\sum_{i=1}^r(e_if_i\cdots e_rf_r-1)\dfrac{h_i}{e_1\cdots e_i}.
$$
\end{theorem}
Note that $\mu_\mathfrak{p}$ is an Okutsu invariant, because it depends only on $e_i,f_i,h_i$, for $1\le i\le R$, where $R$ is the Okutsu depth of $f_\mathfrak{p}(x)$. If $R=r-1$, then $e_rf_r=1$ and the summand of $\mu_\mathfrak{p}$ corresponding to $i=r$ vanishes.
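Theorem \ref{theoexp} reduces the local exponent to exact rational arithmetic on the data $e_i,f_i,h_i$ of the type. A hypothetical Python sketch:

```python
from fractions import Fraction
from math import floor, prod

def local_exponent(e, f, h):
    """mu_p = sum_i (e_i f_i ... e_r f_r - 1) h_i / (e_1 ... e_i) and
    exp(p) = floor(mu_p), from the lists e, f, h of the type (0-indexed)."""
    r = len(e)
    mu = Fraction(0)
    for i in range(r):
        tail = prod(e[j] * f[j] for j in range(i, r))   # e_i f_i ... e_r f_r
        denom = prod(e[: i + 1])                        # e_1 ... e_i
        mu += Fraction((tail - 1) * h[i], denom)
    return mu, floor(mu)
```

If $e_rf_r=1$ the last summand vanishes, in accordance with the remark above on the Okutsu depth.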
\subsection{Computation of a $p$-integral basis}
During the computation of the prime ideal decomposition of the ideal $p\mathbb Z_K$ (a single call to the Montes algorithm) we can easily store the local exponents $\op{exp}(\mathfrak{p})$, and the numerators $g_m(x)$ and denominators $p^{\nu_m}$ of all the $\mathbb Z_p$-bases of $\mathbb Z_{K_\mathfrak{p}}$ given in (\ref{basis}), for all $\mathfrak{p}\in\mathfrak{P}$.
It is well-known how to derive a $p$-integral basis from all these local bases. Let us briefly describe a concrete procedure to do this, taken from \cite{ore2}.
We apply the method described at the end of section \ref{subsectgenerators} to compute multipliers $\{b_\mathfrak{p}\}_{\mathfrak{p}\in\mathfrak{P}}$ satisfying:
\begin{equation}\label{multipliers}
v_\mathfrak{p}(b_\mathfrak{p})=0,\quad v_\mathfrak{q}(b_\mathfrak{p})\ge (\op{exp}(\mathfrak{p})+1)\,e(\mathfrak{q}/p),\ \forall\mathfrak{q}\in\mathfrak{P},\,\mathfrak{q}\ne\mathfrak{p}.
\end{equation}
Consider the family obtained by multiplying each local basis (\ref{basis}) by its corresponding multiplier:
$$
{\mathcal B}_\mathfrak{p}:=\left[b_\mathfrak{p},\,b_\mathfrak{p}\,\dfrac{g_1(\theta)}{p^{\nu_1}},\,\dots,\,b_\mathfrak{p}\,\dfrac{g_{n_\mathfrak{p}-1}(\theta)}{p^{\nu_{n_\mathfrak{p}-1}}}\right].
$$
Then, ${\mathcal B}:=\bigcup_{\mathfrak{p}\in\mathfrak{P}}{\mathcal B}_\mathfrak{p}$ is a $p$-integral basis of $K$.
In fact, although the elements $g_m(\theta)/p^{\nu_m}$ are not (globally) integral, (\ref{multipliers}) shows that the products $\alpha_{m,\mathfrak{p}}:=b_\mathfrak{p}\,(g_m(\theta)/p^{\nu_m})$ belong all to $\mathbb Z_K$ and satisfy
$$
v_\mathfrak{q}(\alpha_{m,\mathfrak{p}})\ge e(\mathfrak{q}/p),\quad \forall \mathfrak{q}\in\mathfrak{P},\, \mathfrak{q}\ne\mathfrak{p}.
$$
It is easy to deduce from this fact that the family of all $\alpha_{m,\mathfrak{p}}$ determines an $\ff{p}$-linearly independent family of the algebra $\mathbb Z_K\otimes_\mathbb Z\ff{p}$.
\section{Some examples}\label{secKOM}
We have implemented the algorithms described above in a package for Magma. Number field arithmetic in any computer algebra system has to face two problems: the factorization of the discriminant and the memory requirements for large degrees. Our package allows the user to skip the first problem, and it uses very little memory, expanding Magma's capabilities by far. The package, which can be downloaded from its web page ({\tt http:/ma4-upc.edu/$\sim$guardia/+Ideals.html}), is described in detail in the accompanying paper \cite{GMNPackage}. We include here a few examples which exhibit the power of the package in different situations. More exhaustive tests of the Montes algorithm have been presented in \cite{GMNalgorithm}, \cite{GMNbasis}.
The computations in these examples have been done with Magma v2.15-11 on a Linux server with two Intel Quad Core processors running at 3.0 GHz, with 32 GB of RAM.
\subsection{Large degree} Consider the number field $K=\mathbb Q(\theta)$ given by a root $\theta\in\overline{\mathbb Q}$ of the polynomial $f(x)=x^{1000} + 2^{50} x^{50} + 2^{60}$. The factorization of the discriminant of $f$ is
$$
\mbox{Disc}(f)=
2^{53940}3^{50}5^{2000}127^{50} 313^{50} 743^{50} 4886229527^{50}p^{50},
$$
with $p=337572698551220494882323528404563236947916489629537$.
The large degree of $f$ makes it impossible to work in this number field using the standard functions of Magma, even after factorizing the discriminant, since these functions require the computation of an integral basis. Our algorithms avoid this computation, so that we can work with ideals in $K$. For instance, the table below shows the local index of the primes dividing $\operatorname{Disc}(f)$ and the time taken to decompose them in $K$.
$$
\begin{array}{|c|r|r|}
\hline
\rm{Ideal} &\rm{Index} &\rm{Time}\\
\hline\hline
2\mathbb Z_K & 26235 &0.36s \\
\hline
3\mathbb Z_K & 0& 0.61s\\
\hline
5\mathbb Z_K &20& 0.63s\\
\hline
127\mathbb Z_K & 0& 1.29s\\
\hline
313\mathbb Z_K &0& 3.69s\\
\hline
743\mathbb Z_K &0& 6.47s \\
\hline
4886229527\mathbb Z_K&0 & 6.96s \\
\hline
p\mathbb Z_K &0& 60s\\
\hline
\end{array}
$$
Thus, we need less than 90 seconds to see that the discriminant of $K$ is
$$
\mbox{Disc}(K)=
2^{1470}3^{50}5^{1960}127^{50} 313^{50} 743^{50} 4886229527^{50}p^{50}.
$$
The running times in the table clearly show that the cost of the factorizations grows mainly with the size of the numbers involved, and that the index does not have a serious impact on them. The largest type appearing in these computations has order 3, and it appears in the factorization of the ideal $2\mathbb Z_K$, which is
$$
2\mathbb Z_K=\mathfrak{p}_1^{10}({\mathfrak{p}_1'})^{38}\mathfrak{p}_4^{10}({\mathfrak{p}_4'})^{38}\mathfrak{p}_{20}^{38},
$$
where $\mathfrak{p}_f^e$ stands for a prime ideal with residual degree $f$ and ramification index $e$. While we cannot expect to factor the ideals $I=(\theta^3+50)\mathbb Z_K,$ $J=(\theta+10)\mathbb Z_K$ in a reasonable time, it takes 0.03 seconds to compute the factorization of their sum:
$$
I+J=\mathfrak{p}_1^{2}({\mathfrak{p}_1'})^{2}\mathfrak{p}_4^{2}({\mathfrak{p}_4'})^{2}\mathfrak{p}_{20}^{2}.
$$
The decomposition of 5 in the maximal order $\mathbb Z_K$ is
$$
5\mathbb Z_K=\mathfrak{p}_2^5({\mathfrak{p}'}_2)^5\mathfrak{p}_{2}^{20}({\mathfrak{p}'}_2)^{20}\mathfrak{p}_2^{25}\mathfrak{p}_4^{25}\mathfrak{p}_{15}^{25}({\mathfrak{p}_{15}'})^{25}.
$$
With the residue map computation explained in subsection \ref{redmap}, we may check very quickly that
$$
\theta\equiv \zeta\left(\operatorname{mod}{\mathfrak{p}_4}\right),
$$
where $\mathbb Z_K/\mathfrak{p}_4\simeq \mathbb{F}_5[\zeta]$, with $\zeta^4+2\zeta^2+3=0$. The Chinese remainder algorithm also works very fast in this number field.
\subsection{Small degree, large coefficients}
The space of cusp forms of level 1 and weight 76 has dimension 6. The newforms in this space are defined over the number field $K=\mathbb Q(\theta)$, where $\theta\in\overline{\mathbb Q}$ is a root of the polynomial:
$$
\begin{array}{l}
f(x)=x^6 + 57080822040x^5 - 198007918566571424544768x^4 \\
\qquad- 11405115067164354385292006554337280x^3 \\
\qquad + 9757628454131691442128845013041495838774263808x^2 \\
\qquad +290013995562379500498435975003716024800114593761580810240x\\
\qquad- 92217203874207784163935379997152082331434364841943058919508374716416.
\end{array}
$$
The discriminant of $f(x)$ is
$$
\begin{array}{rl}
\operatorname{Disc}(f)=&
2^{264} 3^{72} 5^{16} 7^{16} 11^2 13^2 17^4 19^2 43^2 59\cdot 193^2\cdot \\
&\qquad\qquad 293\cdot391987^2 4759427^2 137679681521^2M,
\end{array}
$$
where $M$ is a composite integer of 135 decimal digits which we have not been able to factorize. J. Rasmussen \cite{Rasmussen} asked us for a test to check certain divisibility conditions on the ring of integers of $K$, related to his work on congruences satisfied by the coefficients of certain modular forms. The time to find the decomposition of the primes in the set
$$
S:=\{
2,3,5,7,11,13,17,19,43,59,193,293,391987,4759427,137679681521
\}
$$
is almost negligible, since it involves only types of order at most 1. The table below shows the local indices of these primes:
$$
\begin{array}{|c|r|}
\hline
\rm{Ideal} &\rm{Index} \\
\hline
\hline
2 \mathbb Z_K & 132 \\ \hline
3 \mathbb Z_K & 36 \\ \hline
5 \mathbb Z_K & 8 \\ \hline
7 \mathbb Z_K & 8 \\ \hline
11 \mathbb Z_K & 1 \\ \hline
13 \mathbb Z_K & 1 \\ \hline
17 \mathbb Z_K & 2 \\ \hline
19 \mathbb Z_K & 1 \\ \hline
43 \mathbb Z_K & 1 \\ \hline
59 \mathbb Z_K & 0 \\ \hline
193 \mathbb Z_K & 1 \\ \hline
293 \mathbb Z_K & 0 \\ \hline
391987 \mathbb Z_K & 1 \\ \hline
4759427 \mathbb Z_K & 1 \\ \hline
137679681521 \mathbb Z_K & 1 \\ \hline
\end{array}
$$
Hence the discriminant of $K$ is $\operatorname{Disc}(K)=59\cdot 293N$, where $N$ is divisible by at least one of the prime factors of $M$, since $M$ is not a square.
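Since $\operatorname{Disc}(f)=\operatorname{ind}(f)^2\operatorname{Disc}(K)$, the table of local indices determines the exact power of each prime of $S$ dividing $\operatorname{Disc}(K)$. The following snippet (illustrative only, in Python rather than Magma) carries out this bookkeeping:

```python
# v_p(Disc K) = v_p(Disc f) - 2 * ind_p for each prime p in S, with the
# valuations of Disc(f) and the local indices taken from the text above.
v_disc_f = {2: 264, 3: 72, 5: 16, 7: 16, 11: 2, 13: 2, 17: 4, 19: 2,
            43: 2, 59: 1, 193: 2, 293: 1, 391987: 2, 4759427: 2,
            137679681521: 2}
index = {2: 132, 3: 36, 5: 8, 7: 8, 11: 1, 13: 1, 17: 2, 19: 1,
         43: 1, 59: 0, 193: 1, 293: 0, 391987: 1, 4759427: 1,
         137679681521: 1}
v_disc_K = {q: v - 2 * index[q] for q, v in v_disc_f.items()}

# Among the primes of S, only 59 and 293 survive in Disc(K):
assert {q for q, v in v_disc_K.items() if v > 0} == {59, 293}
assert all(v == 0 for q, v in v_disc_K.items() if q not in (59, 293))
```

This recovers the factor $59\cdot 293$ of $\operatorname{Disc}(K)$ stated above.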
The ideal prime decomposition of 3 in $\mathbb Z_K$ is
$$
3\mathbb Z_K=\mathfrak{p}_2\mathfrak{p}_1\mathfrak{p}_1'\mathfrak{p}_1''\mathfrak{p}_1'''.
$$
The algorithm explained in section \ref{secGenerators} provides generators for all these ideals:
$$
\begin{array}{l}
\mathfrak{p}_2=3\mathbb Z_K\!+3^{-12}(4\theta^5\! + 4311\theta^4\! + 1717038\theta^3\! + 2900691\theta^2 \!+ 820125\theta\!+ 2834352)\mathbb Z_K \\
\mathfrak{p}_1=3\mathbb Z_K\!+3^{-11}(2\theta^5 + 1815\theta^4 + 586980\theta^3 + 732159\theta^2 + 658287\theta + 1535274)\mathbb Z_K\\
\mathfrak{p}_1'=3\mathbb Z_K\!+3^{-11}(2\theta^5\! + 2031\theta^4 \!+ 662796\theta^3\! + 1123632\theta^2\! + 1071630\theta + 295245)\mathbb Z_K\\
\mathfrak{p}_1''=3\mathbb Z_K\!+3^{-11}(2\theta^5 + 2307\theta^4 + 910872\theta^3 + 847584\theta^2 + 398034\theta + 1121931)\mathbb Z_K\\
\mathfrak{p}_1'''=3\mathbb Z_K\!+3^{-11}(2\theta^5 \!+ 2091\theta^4\! + 708696\theta^3 \!+ 646380\theta^2\! + 634230\theta + 1121931)\mathbb Z_K\\
\end{array}
$$
Applying the algorithm described in section \ref{secCRT}, we can compute without much effort an element $\alpha\in K$ satisfying
$$
\begin{array}{lll}
\alpha\equiv 1(\operatorname{mod}\,{\mathfrak{p}_2}),
\quad & \alpha\equiv \theta\left(\operatorname{mod}\,{\mathfrak{p}_1}\right),\\\\
\alpha\equiv \theta^2\left(\operatorname{mod}\,({\mathfrak{p}_1'})^2\right),
\quad &\alpha\equiv \theta^3\left(\operatorname{mod}\,({\mathfrak{p}_1''})^3\right),
\quad &\alpha\equiv \theta^4\left(\operatorname{mod}\,({\mathfrak{p}_1'''})^4\right).
\end{array}
$$
We may take, for instance:
$$
\alpha=3^{-9}(786086\theta^5 + 445989\theta^4 + 196857\theta^3 + 1159353\theta^2 + 649539\theta + 354294).
$$
Following the algorithm for $p$-adic valuations introduced in section \ref{secPadic}, we can check this result by computing the valuations of the differences $\alpha-\theta^j$ at the prime ideals dividing 3:
$$
v_{\mathfrak{p}_2}(\alpha-1)=1,\
v_{\mathfrak{p}_1}(\alpha-\theta)=7,\
v_{\mathfrak{p}_1'}(\alpha-\theta^2)=4,\
v_{\mathfrak{p}_1''}(\alpha-\theta^3)=4,\
v_{\mathfrak{p}_1'''}(\alpha-\theta^4)=4.
$$
All these computations are almost immediate. Even the computation of an $S$-integral basis takes only 0.06 seconds.
\subsection{Medium degree}
Consider the polynomials:
$$\as{1.2}
\begin{array}{ll}
\phi_0= x+1,&\phi_1=\phi_0^{2}+2, \\
\phi_{21}=\phi_1^{2}+8, &\phi_{22}= \phi_1^{4}+4\phi_0\phi_1^{2}+32, \\
\phi_3=\phi_{22}^{2}+256\phi_1^{2},&
f=\phi_3\phi_{21}+2^{30}.
\end{array}
$$
Let $K=\mathbb Q(\theta)$ be the number field of degree 20 determined by a root $\theta\in\overline{\mathbb Q}$ of $f$. For the prime $p=2$,
the polynomial $f$ has two complete types, with associated Okutsu frames $[\phi_1,\phi_{21}]$ and $[\phi_1,\phi_{22},\phi_3]$, which give rise to the two prime ideals of $\mathbb Z_K$ over $2$.
The concrete decomposition is $2\mathbb Z_K=\mathfrak{p}_1^4\mathfrak{p}_2^8$, where $f(\mathfrak{p}_i/2)=i$, $e(\mathfrak{p}_i/2)=4i$.
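As a sanity check, one may verify in a few lines of pure Python (our package itself is written in Magma) that $f$ has degree 20 and satisfies $f\equiv (x+1)^{20} \pmod 2$; this congruence is the starting point of the analysis at $p=2$:

```python
# Polynomials are represented as integer coefficient lists, lowest degree first.

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def ppow(a, k):
    out = [1]
    for _ in range(k):
        out = pmul(out, a)
    return out

phi0 = [1, 1]                                        # x + 1
phi1 = padd(ppow(phi0, 2), [2])                      # phi0^2 + 2
phi21 = padd(ppow(phi1, 2), [8])                     # phi1^2 + 8
phi22 = padd(padd(ppow(phi1, 4),
                  pmul([4], pmul(phi0, ppow(phi1, 2)))), [32])
phi3 = padd(ppow(phi22, 2), pmul([256], ppow(phi1, 2)))
f = padd(pmul(phi3, phi21), [2**30])

assert len(f) - 1 == 20                              # deg f = 20
# f == (x+1)^20 mod 2: all coefficients of the difference are even.
assert all((c - d) % 2 == 0 for c, d in zip(f, ppow(phi0, 20)))
```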
The discriminant of $f$ is
$$
\begin{array}{rl}
\operatorname{Disc}(f)=&2^{268}\cdot 3^2\cdot 19927\cdot 43691^2\cdot 211039\cdot 6059454913\cdot\\
&512920919154157817\cdot
25506978885046388417449\cdot\\
& 149169795543042282387542317948232968678925571739.
\end{array}
$$
In this example we may compare the performance of the standard Magma functions and that of our package, since Magma can determine the ring of integers of $K$. Once the factorization of $\operatorname{Disc}(f)$ is known, Magma takes 5.8 seconds to determine $\mathbb Z_K$, and 0.08 seconds to find the decomposition of the prime 2 in $\mathbb Z_K$.
Our package takes 0.3 seconds to see that $\operatorname{Disc}(K)=2^{-234}\operatorname{Disc}(f)$, and during this computation already finds the decomposition of all the primes dividing the discriminant. Our program can also compute a 2-integral basis of $K$, which is already a global integral basis, in 0.02 seconds.
\section{Conclusions}\label{secConclusion}
\subsection{Challenges}\label{first}
We described routines to perform the basic tasks concerning fractional ideals of a number field, based on the Okutsu-Montes representations of the prime ideals \cite{montes}, \cite{HN}. This avoids the factorization of the discriminant of a defining equation and the construction of the maximal order. These routines are very fast in practice, as long as one deals with fractional ideals whose norm may be factorized.
A big challenge arises: is it possible to combine these techniques with some kind of LLL reduction to test if a fractional ideal is principal?
Also, the generators of the prime ideals constructed in this paper have small height as vectors in $\mathbb Q^n$ (the coefficients of their standard representations as polynomials in $\theta$). This may have some advantages, but in many applications it is preferable to have generators of small norm. A solution to the above mentioned challenge would probably lead to a procedure to find generators of small norm too.
\subsection{Comparison with the standard methods}
Suppose the discriminant of the defining equation of the number field may be factorized. Most of the methods to compute a $\mathbb Z$-basis of the maximal order are based on variants of the Round 2 and Round 4 algorithms of Zassenhaus.
The procedure of section \ref{secBasis} yields a much faster computation of an integral basis and the discriminant of the field.
Once the maximal order is constructed, we can compare our routines for the manipulation of fractional ideals with the standard ones. The routines based on the Okutsu-Montes representations of the prime ideals are faster, mainly because they avoid the usual linear algebra techniques (computation of bases of the ideals, Hermite and Smith normal forms, etc.), which become slow if the degree of the number field grows.
\subsection{Curves over finite fields}
The results of this paper extend easily to function fields. If $C$ is a curve over a finite field, there is a natural identification of rational prime divisors of $C$ with prime ideals of the integral closures of certain subrings of the function field \cite{hess}, \cite{hess2}. The Montes algorithm may be applied as well to construct these prime ideals, and the routines of this paper lead to parallel routines to find the divisor of a function, or to construct a function with zeros and poles of prescribed orders at a finite number of places.
The results of section \ref{secBasis} may be used to efficiently compute bases of the above mentioned integral closures too. However, the big challenge of section \ref{first} has its parallel in the geometric situation: we hope that the techniques of this paper may be used to find better routines to compute bases of the Riemann-Roch spaces and to deal with reduced divisors. This would open the door to operate in the group $\operatorname{Pic}^0(C)$ of rational points of the Jacobian of $C$, for curves with plane models of very large degree.
\section{Introduction}
A possible signature of the formation of the quark-gluon
plasma (QGP) in relativistic heavy ion collisions
is the suppression of J/$\psi$ due to color Debye screening \cite{MS}.
In the $Pb$-$Pb$ collisions at CERN-SPS,
the J/$\psi$ suppression beyond the normal nuclear absorption
has been discovered \cite{NA50}. However, the data
may be described either by the color Debye screening due to deconfined
quarks and gluons or
by absorption/dissociation due to comoving light hadrons.
Recent BNL-RHIC data indicate that the J/$\psi$ suppression is
almost independent of the collision energy between 62 GeV and 200 GeV.
The magnitude of the suppression is less than those predicted
from color Debye screening or from the absorption by comovers, which
may be understood by the recombination of $c$ and $\bar{c}$ \cite{QM}.
To understand the mechanism of J/$\psi$ suppression in those experiments,
we need more precise knowledge on the J/$\psi$-hadron interactions.
In this report,
we show our recent results of J/$\psi$-hadron scattering lengths calculated
in quenched lattice QCD simulations. (See \cite{AH,WE,TO,SV,SL} and references therein
for other approaches.)
\section{Formulation}
Suppose we have two hadrons in a finite box. The effect of
their interaction appears as an energy shift $\Delta E$ relative to the
non-interacting case. $\Delta E$ may be
extracted from the correlator ratio $R(t)$ for large $t$;
\begin{eqnarray}
R(t) = \frac{G_{{\rm J}/\psi{\textit-}H}(t)}{G_{{\rm J}/\psi}(t)G_H(t)}
\sim {\rm e}^{-(E-m_{{\rm J}/\psi}-m_H)\cdot t} = {\rm e}^{-\Delta E \cdot t},
\end{eqnarray}
where $G_{{\rm J}/\psi \textit{-}H}(t)$ and $G_{H}(t)$ are
the J/$\psi$-hadron four-point function and the hadron two-point function,
respectively.
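As a toy illustration (with synthetic data, not actual lattice correlators), the extraction of $\Delta E$ from the plateau of $R(t)$ amounts to taking an effective-mass logarithm:

```python
# In the plateau region R(t) ~ exp(-dE * t), so dE can be read off from
# log(R(t)/R(t+1)). The value of dE below is purely hypothetical.
import numpy as np

dE_true = 0.05                     # hypothetical energy shift in lattice units
t = np.arange(10, 20)
R = np.exp(-dE_true * t)           # idealized plateau of the correlator ratio

dE_eff = np.log(R[:-1] / R[1:])    # effective energy shift at each t
assert np.allclose(dE_eff, dE_true)
```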
In our calculation, we consider three scattering processes,
J/$\psi$-$\pi$, J/$\psi$-$\rho$ and J/$\psi$-$N$(nucleon),
which are most important for J/$\psi$ absorption by comoving hadrons
in relativistic heavy ion collisions.
Since each hadron carries spin, we need to project onto states of good total
spin of the J/$\psi$-hadron two-body system.
For example, the J/$\psi$-$N$ case reads
\begin{eqnarray}
G_{{\rm J}/\psi\textit{-}N}(t) = G_{1/2}(t)\hat{P}^{1/2}+G_{3/2}(t)\hat{P}^{3/2},
\end{eqnarray}
where $G_{1/2(3/2)}$ denotes
the four-point function with a good spin quantum number ($J=1/2$ or 3/2)
and $\hat{P}^{1/2(3/2)}$ is the spin projection operator \cite{KSSS}.
L\"uscher's formula relates $\Delta E$ to scattering observables such as the scattering length and the scattering phase shift.
For the S-wave scattering phase shift, it reads
\cite{ML}
\begin{eqnarray}
\tan\delta_0(q) = \frac{\pi^{3/2}\sqrt{q}}{Z_{00}(1,q)},
\label{tan-d}
\end{eqnarray}
with the sign convention that a negative phase shift corresponds to repulsion.
Here $Z_{00}$ is a generalized zeta function defined by
\begin{eqnarray}
Z_{00}(s,q)=\frac{1}{\sqrt{4\pi}}\sum_{\bf{n}}\frac{1}{(n^2-q)^s},\qquad
q=\left( \frac{pL}{2\pi} \right)^2,
\end{eqnarray}
where $p$ and $L$ are the relative momentum of the two hadrons and the spatial size of the box,
respectively. Here it is assumed that the interaction range is finite.
Outside the interaction range,
the total energy of the system is related to the relative momentum $p$ as
\begin{eqnarray}
\sqrt{m^2_{J/\psi}+p^2}+\sqrt{m_H^2+p^2} = E.
\end{eqnarray}
Here positive (negative) $p^2$ corresponds to repulsion (attraction).
If there are no interactions between the hadrons,
$p$ takes the discrete values
$p^2=(2\pi /L)^2\cdot n\;$ ($n=0,1,2,\cdots$) in the finite box.
If there are interactions, $p$ receives an extra contribution $p_{\rm int}$,
and the momentum squared divided by $(2\pi /L)^2$ is no longer an integer.
The S-wave scattering length is defined as
$a_0\equiv \lim_{p\to 0} \tan\delta_0(p)/p$ and is related to the zeta function
through Eq.(\ref{tan-d}) as
\begin{eqnarray}
a_0 = \left. \frac{L\sqrt{\pi}}{2Z_{00}(1,q)} \right|_{n=0},
\label{a0-zeta}
\end{eqnarray}
where the scattering length is assigned to be negative (positive) for
repulsion (weak attraction). The hadron scattering
lengths based on the formulas Eqs.(\ref{tan-d}) and (\ref{a0-zeta})
have been extensively studied for the $\pi$-$\pi$ and $N$-$N$ systems
in Refs.~\cite{Krms,SRB}.
It is important here to discuss the asymptotic behavior of Eq.(\ref{a0-zeta})
for large $L$, in order to analyze systems with attractive
interactions. The large $L$ expansion of the right hand side of Eq.(\ref{a0-zeta})
at $q \sim 0$ leads to \cite{ML}
\begin{eqnarray}
\Delta E = -\frac{2\pi a_0}{M_{\rm res}L^3} \left(
1+c_1 \left( \frac{a_0}{L} \right) +c_2 \left( \frac{a_0}{L} \right)^2
\right) +O(L^{-6}),
\label{largeL}
\end{eqnarray}
with $c_1=-2.837297$ and $c_2=6.375183$; here $M_{\rm res}$ denotes the reduced mass of the two-hadron system.
Let us try to solve Eq.(\ref{largeL}) in terms of $a_0$ for
given $\Delta E$.
In the case that $\Delta E >0$, both the expansion
up to $O(L^{-4})$ and that up to $O(L^{-5})$ always have real and negative
solutions. On the other hand, in the case that $\Delta E <0$,
the expansion up to $O(L^{-4})$ gives no real solution
for
\begin{eqnarray}
\Delta E < -\frac{\pi}{2|c_1|M_{\rm res}L^2},
\label{condition}
\end{eqnarray}
although the expansion
up to $O(L^{-5})$ always has a real solution.
This observation implies that some care must be taken when using the
expansion, especially for relatively strong attraction.
Indeed, our lattice data show that the J/$\psi$ interaction with $\pi$, $\rho$
and $N$ are all attractive and the condition
Eq.(\ref{condition}) is met for J/$\psi$-$\rho$ and J/$\psi$-$N$ cases.
Therefore, in our study,
we use Eq.(\ref{a0-zeta}) directly without the large $L$ expansion
to extract $a_0$.
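The truncations discussed above can also be explored numerically. The following Python sketch (with purely hypothetical values for $\Delta E$, $M_{\rm res}$ and $L$) solves Eq.(\ref{largeL}) for $a_0$ at both truncation orders and exhibits the loss of real solutions below the threshold of Eq.(\ref{condition}):

```python
# Solve dE = -(2 pi a0 / (M_res L^3)) * (1 + c1 a0/L [+ c2 (a0/L)^2]) for a0
# with numpy.roots. All parameter values below are hypothetical.
import numpy as np

c1, c2 = -2.837297, 6.375183
M_res, L = 2.0, 24.0             # hypothetical mass parameter and box size
dE_threshold = -np.pi / (2 * abs(c1) * M_res * L**2)   # Eq. (condition)

def a0_solutions(dE, order):
    """Real roots a0 of the large-L expansion truncated at the given order."""
    pref = -2 * np.pi / (M_res * L**3)
    if order == 4:               # truncation at O(L^-4): quadratic in a0
        coeffs = [pref * c1 / L, pref, -dE]
    else:                        # truncation at O(L^-5): cubic in a0
        coeffs = [pref * c2 / L**2, pref * c1 / L, pref, -dE]
    roots = np.roots(coeffs)
    return roots[np.abs(roots.imag) < 1e-12].real

# Weak attraction: both truncations have real solutions.
assert len(a0_solutions(0.5 * dE_threshold, 4)) == 2
# Below the threshold the quadratic truncation loses its real solutions,
# while the cubic truncation still has at least one.
assert len(a0_solutions(1.1 * dE_threshold, 4)) == 0
assert len(a0_solutions(1.1 * dE_threshold, 5)) >= 1
```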
\section{Results}
In our simulation, we employed the unimproved Wilson gauge action and Wilson
fermions. We take $\beta=6.2$
on $L^3\times T=24^3\times 48$ and $32^3\times48$ lattices with
$\kappa$(charm) $=$ 0.1360 and $\kappa$(light) $=$ 0.1520, 0.1506, 0.1489.
In physical units, the lattice sizes are $L\sim$1.6, 2.1 fm,
the lattice spacing is $a\sim$0.067 fm, and
$m_{J/\psi}\sim$3.0 GeV and $m_\pi\sim$0.6-1.2 GeV.
The number of quenched gauge configurations is 161 for the smaller lattice
and 169 for the larger lattice.
Our error estimates are all based on the Jackknife method.
\begin{figure}
\epsfig{file=dEPI.eps, width=.48\textwidth}
\epsfig{file=dEN.eps, width=.48\textwidth}
\caption{The left (right) panel shows
the quark mass dependence of the energy shift for
the J/$\psi$-$\pi$ (J/$\psi$-$N$) interaction in $L=32$ lattice.
The horizontal axis is the pion mass squared $(m_\pi a)^2$ and
the vertical axis is the energy shift $\Delta E$.
The open circles are the energy shifts extracted at different $\kappa$.
The circles are the values linearly extrapolated to the physical point.
The open squares are evaluated under the assumption
that the energy shift is independent of the quark mass.
}
\label{fig1}
\end{figure}
In Figure 1, we show the quark mass dependence of the energy shift $\Delta E$ in
the J/$\psi$-$\pi$ channel (the left panel) and the J/$\psi$-$N$ channel
(the right panel).
The open circles are the energy shifts extracted from the correlator ratio
at different quark masses.
The circles and the open squares are the results of a
linear fit in quark mass and of a simple average over the data with
different quark masses, respectively.
The open squares are the estimate of $\Delta E$ without quark mass dependence as a reference.
In both channels, we found that the interactions are attractive.
\begin{figure}
\epsfig{file=dEPI_V.eps, width=.48\textwidth}
\epsfig{file=dEN_V.eps, width=.48\textwidth}
\caption{The left (right) panel shows
the volume dependence of the energy shift from
the J/$\psi$-$\pi$(J/$\psi$-$N$) interaction.
The horizontal axis is the spatial size $L$ and the vertical axis is the energy shift $\Delta E$.
The circles indicate the energy shifts extrapolated to the physical point
assuming a linear quark mass dependence.
The open squares show the energy shifts estimated assuming no quark mass dependence.
}
\label{fig2}
\end{figure}
In Figure 2, we show the volume dependence of the energy shift in
the J/$\psi$-$\pi$ channel (the left panel) and the J/$\psi$-$N$ channel
(the right panel).
The circles are the energy shifts from linear quark mass extrapolation
to the physical point, and the open squares are the results of assuming constant quark mass dependence.
Although the error bars are large in both channels, one can see the following tendency:
In the J/$\psi$-$\pi$ channel, the absolute value of $\Delta E$ decreases as $L$ increases.
On the other hand, $\Delta E$ for J/$\psi$-$N$ shows the opposite tendency.
However, to make firm conclusions on this, we need to increase statistics
and also collect the data for larger $L$.
If it turns out to be true in high statistics data,
one may conclude that J/$\psi$-$\pi$ channel is attractive without a bound state,
while J/$\psi$-$N$ may have a bound state \cite{SS}.
\begin{figure}
\epsfig{file=sIPIV.eps, width=.48\textwidth}
\epsfig{file=sINV.eps, width=.47\textwidth}
\caption{
The left (right) panel shows
the volume dependence of the scattering length in
the J/$\psi$-$\pi$ (J/$\psi$-$N$) channel.
The horizontal axis is the spatial size $L$ and the vertical axis is the S-wave scattering length $a_0$.
The circles (open squares) indicate the scattering lengths obtained
assuming a linear (constant) quark mass dependence.
The crosses show the sign-flipped empirical value for the $\pi$-$\pi$ scattering lengths
in the $I=2$ channel as reference points.
}
\label{fig3}
\end{figure}
Finally, in Figure 3, we show the volume dependence of the scattering lengths in
the J/$\psi$-$\pi$ (left panel) and the J/$\psi$-$N$ (right panel) channels.
The circles and the open squares are the results of linear quark mass extrapolation
to the physical point and of a fit assuming constant quark mass dependence, respectively.
To compare the magnitudes of the J/$\psi$-hadron scattering lengths
with that in the $I=2$ $\pi$-$\pi$ channel,
we show the empirical value of the $\pi$-$\pi$ scattering length as crosses.
Note that $I=2$ $\pi$-$\pi$ scattering is repulsive and we show its
absolute value in the figure for comparison.
The J/$\psi$-$\pi$ scattering length is positive (attractive) and small in magnitude compared to $\pi$-$\pi$.
This is partly because J/$\psi$ is smaller in size than the pion,
and partly because only gluonic exchange is allowed in the J/$\psi$-$\pi$ case.
On the other hand, the scattering length for J/$\psi$-$N$ could be an order of
magnitude larger than that for J/$\psi$-$\pi$, although the error bar is still quite large.
\section{Summary}
In summary, we have studied the J/$\psi$-hadron scattering lengths in
quenched lattice QCD simulations.
We found attractive interactions in all J/$\psi$-$\pi$, J/$\psi$-$\rho$ and J/$\psi$-$N$
channels. Furthermore, the J/$\psi$-$N$ scattering length is considerably larger than
the J/$\psi$-meson scattering length.
Also, we found a sizable volume dependence of scattering lengths in all three channels.
There is an opposite tendency of the volume dependence
between J/$\psi$-$\pi$ and J/$\psi$-$N$.
There are several future problems to be examined further:
To study whether the attractive J/$\psi$-$N$ interaction could form a bound state,
we need to have better statistics and simulations with larger lattice volumes, which
is now under way.
To confirm the validity of using the L\"uscher's formula,
we need to check whether the potential range is small enough in comparison to
the lattice size.
Moreover, we need a more careful analysis of inelastic contributions to our correlator ratio,
such as the $D$-$\bar{D}$ contribution in the J/$\psi$-$\rho$ channel.
\subsection*{Acknowledgement}
This work was supported by the Supercomputer Projects No.110 (FY2004)
and No.125 (FY2005) of High Energy Accelerator Research
Organization (KEK). S.S. and T.H. were also supported by
Grants-in-Aid of MEXT, No. 15540254 and No.15740137.
\section{Introduction}
Let $M$ be a matrix over the ring of integral Laurent polynomials in some (finite) number of variables. We say that $M$ is Perron--Frobenius if there is a power $M^k$ of $M$ such that every entry of $M^k$ is a nontrivial Laurent polynomial with positive coefficients. In this note, we shall establish the following fact:
\begin{prop}\label{p:pf}
Let $M$ be a square Perron--Frobenius matrix over $\mathbb{Z}[t,t^{-1}]$ and let $\phi\in\Hom(\mathbb{Z},S^1)$ be a non--torsion point. Then the spectral radius of $\phi(M)$ is strictly smaller than that of $\phi_0(M)$, where $\phi_0$ is the trivial representation of $\mathbb{Z}$.
\end{prop}
In general, we will write $H$ for a finitely generated, torsion--free abelian group, and $\mathbb{Z}[H]$ for the ring of integral Laurent polynomials over $H$.
We remark that if $M$ is a Perron--Frobenius matrix over the ring of Laurent polynomials $\mathbb{Z}[H]$ then applying the trivial representation of $H$ to $M$ gives us a usual integral Perron--Frobenius matrix, which in turn has a unique real eigenvalue of maximal modulus.
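A minimal numerical illustration of Proposition \ref{p:pf}: the matrix $M(t)=\begin{pmatrix} t & 1\\ 1 & t^{-1}\end{pmatrix}$ (a sample matrix chosen here for illustration) is Perron--Frobenius, since every entry of $M(t)^2$ is a nontrivial Laurent polynomial with positive coefficients, and its spectral radius drops strictly away from the trivial representation:

```python
# At t = e^{i theta}, M(t) = [[t, 1], [1, 1/t]] has trace 2 cos(theta) and
# determinant 0, so its eigenvalues are 0 and 2 cos(theta).
import numpy as np

def spectral_radius(theta):
    t = np.exp(1j * theta)                 # phi maps the generator of Z to t
    M = np.array([[t, 1.0], [1.0, 1 / t]])
    return max(abs(np.linalg.eigvals(M)))

rho_trivial = spectral_radius(0.0)         # trivial representation, t = 1
assert np.isclose(rho_trivial, 2.0)

# theta = 1 is an irrational multiple of 2 pi, hence a non-torsion point:
assert spectral_radius(1.0) < rho_trivial - 0.5
```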
Let $M$ be a fibered hyperbolic manifold with first Betti number $h+1$, where $h=\rk H\geq 1$. Choosing a fibration of $M$, we can decompose $H_1(M,\mathbb{Z})/(Torsion)$ as $\mathbb{Z}\oplus H$. The corresponding fibered face of $M$ has an associated {\bf Teichm\"uller polynomial} $\theta(u,t)\in\mathbb{Z}[\mathbb{Z}\oplus H]$, with $t\in H$. Proposition \ref{p:pf} has the following corollary:
\begin{cor}\label{c:teich}
Let $K$ be the largest root of $\theta(u,\phi_0(t))$, and let $\phi\in\Hom(H,S^1)$ be a non--torsion point. The largest root of the polynomial $\theta(u,\phi(t))$ has modulus strictly less than $K$.
\end{cor}
Since we are interested in the actions of pseudo-Anosov homeomorphisms on the homology of finite covers of the base surface, it would be useful to relate Corollary \ref{c:teich} to such homological actions.
Let $\Sigma_{g,n}$ be a surface of genus $g$ with $n$ punctures and let $\psi\in\Mod_{g,n}$, the mapping class group of $\Sigma_{g,n}$. Let $\Sigma_{h,k}\to\Sigma_{g,n}$ be a finite cover to which $\psi$ lifts. Each lift of $\psi$ acts on the homology of $\Sigma_{h,k}$. This action can be very complicated, especially as we allow the cover itself to vary (cf. \cite{KobGeom} or \cite{nilpcover}, for instance). In this note, we will be interested in the spectrum of these actions as the cover is allowed to vary over a certain class of abelian covers.
Let $\psi$ be a pseudo-Anosov homeomorphism. Recall that $\psi$ stabilizes a canonical quadratic differential $q\in \mathcal{Q}(\Sigma_{g,n})$. The quadratic differential $q$ gives rise to two foliations which, away from the zeros of $q$, furnish local charts to $\mathbb{C}$ together with a horizontal and vertical foliation. The action of $\psi$ stretches the horizontal foliation by a factor of $K$ and contracts the vertical foliation by a factor of $K$, where $K$ is the {\bf dilatation} of $\psi$.
When $q=\omega^2$ for some $1$--form $\omega$, the mapping class $\psi$ is termed a {\bf homological pseudo-Anosov}. In this case, the foliations of $q$ admit global orientations which are preserved by $\psi$. It turns out that $\omega$ gives rise to an eigenclass for the action of $\psi$ on $H^1(\Sigma_{g,n},\mathbb{R})$ whose corresponding eigenvalue is also $K$.
It is known (cf. \cite{KS} and \cite{LT}) that if $\psi$ is a homological pseudo-Anosov homeomorphism then $K$ is a simple eigenvalue of largest modulus. We will be interested in the size of the second largest eigenvalue of the action of $\psi$ on the homology of certain abelian covers of $\Sigma_{g,n}$.
The connection between Teichm\"uller polynomials and homology is achieved by appealing to the theory of the {\bf Alexander polynomial} $A(u,t)$. The Alexander polynomial can be defined for any CW complex (see \cite{ctm4}), and in the case of a fibered hyperbolic $3$--manifold, the Alexander polynomial is very useful for describing the action of a mapping class on the homology of certain finite abelian covers of the base surface. The following result is not particularly new, but will allow us to see the connection between Teichm\"uller polynomials and homology of finite covers more clearly:
\begin{prop}\label{p:alexteich}
Let $\Sigma\to M\to S^1$ be a fibered hyperbolic $3$--manifold with monodromy $\psi$ and let $\theta(u,t)$ and $A(u,t)$ be the associated Teichm\"uller and Alexander polynomials, viewed as polynomials in one variable over $\mathbb{Z}[H]$.
\begin{enumerate}
\item
If the monodromy $\psi$ is a homological pseudo-Anosov homeomorphism then $A(u,t)$ divides $\theta(u,t)$.
\item
For each irreducible character $\chi$ of $H$, the polynomial $A(u,\chi(t))$ is the characteristic polynomial of the action of $\psi$ on the twisted homology group $H_1(\Sigma_{g,n},\mathbb{C}_{\chi})$.
\end{enumerate}
\end{prop}
Suppose that the action of $\psi$ on $H_1(\Sigma_{g,n},\mathbb{Z})$ has a nonzero fixed vector. The fixed vectors of the $\psi$--action give an infinite family of finite abelian covers to which $\psi$ lifts and commutes with the deck group. We will write $H$ for the fixed subgroup of $H_1(\Sigma_{g,n},\mathbb{Z})$ and we will write $h$ for its rank. If $\Sigma_{h,k}\to\Sigma_{g,n}$ is such a cover with deck group $A$ then the complex homology of $\Sigma_{h,k}$ splits into eigenspaces according to the representations of $A$. Furthermore, $\psi$ acts on each of these eigenspaces. As $\Sigma_{h,k}$ varies over all finite abelian covers given by $H$, we can parametrize the eigenspaces of the deck group by torsion points on the torus $\mathbb{U}^h\cong\Hom(H,S^1)$, where $\mathbb{U}\cong S^1$ denotes the unit complex numbers. We will usually not need to choose a basis for $H$; accordingly, the isomorphism $\mathbb{U}^h\cong\Hom(H,S^1)$ is not canonical.
We thus obtain a continuous function $\rho$ on $\Hom(H,S^1)$ which assigns to a point the spectral radius of the action of $\psi$ on the corresponding eigenspace, which is to say the largest eigenvalue of the action of $\psi$ on the homology group $H_1(\Sigma_{g,n},\mathbb{C}_{\phi})$.
We will establish the following result as a corollary of Corollary \ref{c:teich}:
\begin{thm}\label{t:main}
Let $N$ be an arbitrary open neighborhood of the identity in $\Hom(H,S^1)$. Then there is a $\delta=\delta(N)>0$ such that $\rho(\phi)\leq K-\delta$ for each $\phi\in\Hom(H,S^1)\setminus N$.
\end{thm}
Note that the continuity of $\rho$ implies that $\rho(\phi)$ will tend to $K$ as $\phi$ tends to the identity. This last observation should be contrasted with McMullen's result in \cite{ctm1}. There, he proves that if $\psi$ is a non--homological pseudo-Anosov which remains non--homological on every finite cover of $\Sigma_{g,n}$, then the largest eigenvalue of the action of $\psi$ on $H_1(\Sigma_{h,k},\mathbb{C})$ is bounded away from $K$ as $\Sigma_{h,k}$ varies over all finite covers of $\Sigma_{g,n}$.
To see the contrast more clearly, suppose that $\psi$ is a non--homological pseudo-Anosov homeomorphism with nonzero invariant cohomology on a surface $\Sigma$, and suppose that $\psi$ lifts to a homological pseudo-Anosov homeomorphism (which we also call $\psi$) on a double branched cover $\Sigma'$ of $\Sigma$. Theorem \ref{t:main} implies that there are $\psi$--modules occurring in the homology of finite abelian covers of $\Sigma'$ which are never found as submodules of the homology of any finite cover of $\Sigma$.
The continuity of the function $\rho$ in Theorem \ref{t:main}, combined with Proposition \ref{p:alexteich} has the following immediate consequence. Let $\psi\in\Mod_{g,n}$ be a homological pseudo-Anosov with nonzero invariant cohomology and let $\Sigma_{h,k}\to\Sigma_{g,n}$ be an abelian cover to which $\psi$ lifts. Write $\gamma$ for the absolute value of the difference between the absolute values of the top eigenvalue (the dilatation) of $\psi$ on $H_1(\Sigma_{h,k},\mathbb{C})$ and the second largest eigenvalue of $\psi$ on $H_1(\Sigma_{h,k},\mathbb{C})$.
\begin{cor}\label{c:gap}
For each $\epsilon>0$ there exists a finite, abelian cover $\Sigma_{h,k}\to\Sigma_{g,n}$ to which $\psi$ lifts and for which $\gamma<\epsilon$.
\end{cor}
In other words, as we consider larger and larger abelian covers of the base, new homological eigenvalues appear which get as close to the dilatation as we like.
\section{Acknowledgements}
The author thanks A. Eskin, B. Farb, C. McMullen and A. Zorich for various helpful discussions.
\section{Train tracks and the Teichm\"uller polynomial}
Let $\psi\in\Mod_{g,n}$ be a pseudo-Anosov homeomorphism which fixes a nontrivial integral cohomology class. As is standard, the suspended $3$--manifold $M_{\psi}$ is hyperbolic and admits infinitely many non--equivalent fibrations which are parametrized by the Thurston unit norm ball (see \cite{FLP}, \cite{ctm2} and \cite{T}). The Teichm\"uller polynomial was developed in \cite{ctm3} by McMullen in order to study fibered faces of the Thurston unit norm ball. In this section we will recall some of the background on the Teichm\"uller polynomial.
Recall that a {\bf train track} is a branched, compact $1$--submanifold of $\Sigma_{g,n}$. We can think of a train track $\tau$ as an embedded graph such that there is a smooth continuation of each path entering a vertex. Train tracks are required to have global topological properties. Indeed, the complement $\Sigma_{g,n}\setminus\tau$ is required to be a union of (cusped) subsurfaces of $\Sigma_{g,n}$ such that the double of each component has negative Euler characteristic. In particular, smooth annuli, monogons and bigons are disallowed as complementary regions. A {\bf transverse measure} on a train track is an assignment of a nonnegative weight to each edge of $\tau$ in such a way that the sum of the weights entering a vertex is equal to the sum of the weights exiting the vertex. See \cite{PH} for general background on train tracks.
Train tracks are finite combinatorial objects used to study measured foliations on surfaces. To each measured foliation, one can associate a measured train track, and conversely one can recover a measured foliation from a measured train track. Both of these associations are up to some standard notion of equivalence which we will not discuss here but can be found in \cite{PH}.
A pseudo-Anosov homeomorphism $\psi$ has two associated measured foliations, and as such can be canonically assigned two measured train tracks, $\tau$ and $\tau^*$, corresponding to the expanding and contracting foliation respectively. We will restrict our attention to $\tau$. It is standard to call $\tau$ an {\bf invariant train track}, since there is an isotopy of $\Sigma_{g,n}$ which sends $\psi(\tau)$ onto $\tau$ in a way which sends vertices to vertices. We will fix such an isotopy, thus obtaining a well-defined map $P_E$ from the free abelian group on the edges to itself. The map $P_E$ is defined by taking an edge $e$ of $\tau$ and recording which edges of $\psi(\tau)$ hit $e$, with multiplicity. It is a standard result that $P_E$ is a Perron--Frobenius matrix, which is to say that for each sufficiently large $n$, each entry of $P_E^n$ is a positive integer.
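For concreteness, the Perron--Frobenius property of an integer transition matrix can be tested by inspecting its powers; the matrix below is a toy example, not the transition matrix of an actual invariant train track:

```python
# A matrix is Perron--Frobenius if some power has all entries strictly positive.
import numpy as np

def is_perron_frobenius(M, k_max=10):
    P = np.eye(len(M), dtype=np.int64)
    for _ in range(k_max):
        P = P @ M
        if (P > 0).all():
            return True
    return False

M = np.array([[1, 1], [1, 0]], dtype=np.int64)   # M^2 is already positive
assert is_perron_frobenius(M)
# A diagonal matrix never mixes coordinates, so it is not Perron--Frobenius:
assert not is_perron_frobenius(np.diag([2, 3]).astype(np.int64))
```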
It is a standard fact that the measure on $\tau$ is positive on each edge and that $\tau$ {\bf fills} $\Sigma_{g,n}$, in the sense that each complementary region of $\tau$ is topologically a (possibly punctured) disk (see \cite{FLP} and \cite{PH}). This fact gives us the following simple but important observation:
\begin{lemma}
Let $\tau$ be an invariant train track for a pseudo-Anosov homeomorphism. Then the edges of $\tau$ form the $1$--cells of a CW structure on $\Sigma_{g,n}$.
\end{lemma}
\begin{proof}
This is a general fact about filling train tracks. If a complementary region is not punctured, it can be used as a $2$--cell. If a complementary region is punctured, it can be deformation retracted onto its boundary.
\end{proof}
Let $M=M_{\psi}$ be the suspended $3$--manifold of $\Sigma_{g,n}$. Since $\psi$ has invariant homology, the rank of $H_1(M,\mathbb{Z})$ is at least two. We obtain an infinite abelian cover $S$ of $\Sigma_{g,n}$ by restricting the map \[\pi_1(M)\to H_1(M,\mathbb{Q})\] to $\pi_1(\Sigma_{g,n})$. Write $H$ for the corresponding deck group. We may lift both $\tau$ and $\psi$ to $S$. Notice that $\psi$ acts trivially on $H$, so that $\psi$ commutes with the deck group and we can lift the carrying map $\psi(\tau)\to\tau$ in a $\psi$--equivariant way.
The edges and vertices of the total lift of $\tau$ are just edges and vertices of $\tau$ labelled by elements of $H$. The total lift of $\tau$ still fills $S$. The carrying map gives us a well--defined map $P_E$ of the free $\mathbb{Z}[H]$--module on the edges, and it furnishes a matrix with coefficients in $\mathbb{Z}[H]$. We can construct a similar map $P_V$ for the vertices, and the ratio of the two characteristic polynomials is the {\bf Teichm\"uller polynomial} of the fibered face determined by $\psi$. We denote the Teichm\"uller polynomial by $\theta$. If $\phi:H\to\mathbb{C}$ is any homomorphism, we obtain a polynomial in one variable with complex coefficients, which we call a {\bf specialization} of $\theta$ and write as either $\theta_{\phi}$ or as $\theta(u,\phi(t))$.
We now recall some facts about the Teichm\"uller polynomial which can all be found in \cite{ctm3}. The Teichm\"uller polynomial is usually specialized at real cohomology classes (i.e. homomorphisms $\phi:H\to\mathbb{R}$) in the fibered face. This way, we obtain real polynomials. Each rational point in the fibered face corresponds to another fibration of $M$. The largest root of the Teichm\"uller polynomial at such a rational point (suitably rescaled) is the dilatation of the monodromy. The function which assigns to each point on the fibered face the largest root of the specialization of $\theta$ is a concave analytic function which tends to infinity on the boundary of the fibered face.
Note that if $\phi$ is an integral cohomology class which arises from a homomorphism $H\to\mathbb{Z}$, we get an infinite cyclic cover of $\Sigma_{g,n}$ and an associated specialization $\theta_{\phi}$ of the Teichm\"uller polynomial, which is now a polynomial over the ring of Laurent polynomials in one variable. Since the dilatation blows up at the boundary of the fibered face, it follows that there is no unit $u$ such that the coefficients of $u\cdot \theta_{\phi}$ are all constant Laurent polynomials. This observation has the following immediate consequence:
\begin{lemma}\label{l:spread1}
Choose any basis $\{t_1,\ldots,t_h\}$ for $H$ and let $u$ be any unit in $\mathbb{Z}[H]$. Then for each $i$, the coefficients of $u\cdot \theta$ are not all independent of $t_i$.
\end{lemma}
We finally make a few observations about homological pseudo-Anosov homeomorphisms. If $\psi$ is homological then $\tau$ admits an orientation. By this, we mean that we can orient each edge of $\tau$ in such a way that every smooth path in $\tau$ has a coherent orientation. Furthermore, the carrying map $\psi(\tau)\to\tau$ is orientation preserving. It follows that $P_E$ not only records the edges which hit a particular edge $e$ of $\tau$ with multiplicity, but with signed multiplicity. The oriented edges of $\tau$ now provide a basis for $C_1(\Sigma_{g,n},\mathbb{Z})$, the one--chains with respect to some cell decomposition of $\Sigma_{g,n}$, and the adjoint of $P_E$ is the map $\psi_*$ induced on $C_1(\Sigma_{g,n},\mathbb{Z})$ by $\psi$. On the cover $S$ of $\Sigma_{g,n}$, the orientation of $\tau$ lifts, as does $\psi$, and $P_E$ is replaced by the corresponding matrix of Laurent polynomials. This is the fundamental connection between train tracks, pseudo-Anosov homeomorphisms, and homology.
Now suppose that $\Sigma_{h,k}\to\Sigma_{g,n}$ is a finite abelian cover with deck group $A$, and suppose $\psi$ lifts to this cover. The $1$--chains $C_1(\Sigma_{h,k},\mathbb{C})$ form a module with commuting actions of $\psi$ and $A$. The $A$--action decomposes $C_1(\Sigma_{h,k},\mathbb{C})$ into eigenspaces, and $\psi$ preserves these eigenspaces. The action of $\psi$ on each eigenspace is described by $P_E$, except that the Laurent polynomial indeterminates now act by roots of unity according to a particular representation of $A$.
Since we can build finite abelian covers by writing down homomorphisms from $H$ to various finite groups, it becomes clear that the action of $\psi$ on train tracks on various finite abelian covers of $\Sigma_{g,n}$ is determined by finite image homomorphisms $\phi:H\to S^1$. The action of $\psi$ on one--chains is obtained by specializing $P_E$ at $\phi$.
Write $p_e$ for the characteristic polynomial of $P_E$. Notice that taking the largest root of $p_e$ yields a continuous function of the coefficients of $p_e$. We thus obtain a continuous function $\rho$ on $\Hom(H,S^1)\cong\mathbb{U}^h$ which assigns to a point $z$ the modulus of the largest eigenvalue of the specialization $\theta_z$ of the Teichm\"uller polynomial. Note that we lose no information by specializing $\theta$ as opposed to $p_e$, since the roots of the characteristic polynomial of $P_V$ are all roots of unity and morally we are only interested in eigenvalues which lie off the unit circle.
\section{Twisted homology, Alexander polynomials, representations and covers}
Let $X$ be a CW complex with nontrivial fundamental group $\pi$. Recall that one can consider twisted homology of $X$. Indeed, let $\rho:\pi\to GL(V)$ be a representation of $\pi$. One can consider the twisted homology of $X$ with coefficients in $V$, written $H_*(X,V_{\rho})$. Twisted homology is related to covers in an essential way. Let $\widetilde{X}$ be the cover of $X$ corresponding to the kernel of $\rho$, equipped with a lifted CW structure. One first considers the complex $C_*(\widetilde{X},\mathbb{Z})$. This complex is equipped with a natural action of the group ring $\mathbb{Z}[\pi]$. We define the complex \[C_*(X,V_{\rho})=C_*(\widetilde{X},\mathbb{Z})\otimes_{\mathbb{Z}[\pi]} V.\] The twisted homology $H_*(X,V_{\rho})$ is just the homology of this complex.
We are primarily interested in $H_1(\Sigma_{g,n},V_{\chi})$, where $\Sigma_{g,n}$ is a surface and $\chi$ is a finite, one--dimensional representation of $\pi_1(\Sigma_{g,n})$. Any such representation factors through a map \[\chi:H_1(\Sigma_{g,n},\mathbb{Z})\to\mathbb{C}^*\] whose image is finite. Write $\Sigma_{h,k}$ for the covering space corresponding to the kernel of $\chi$. The twisted homology group is then $H_1(\Sigma_{g,n},\mathbb{C}_{\chi})$ and can be identified with the subspace of $H_1(\Sigma_{h,k},\mathbb{C})$ where the action of $\pi_1(\Sigma_{g,n})$ is given by $\chi$, i.e. the $\chi$--eigenspace of $H_1(\Sigma_{h,k},\mathbb{C})$.
\subsection{The Teichm\"uller polynomial and the Alexander polynomial}
We now make some further remarks about the relationship between train tracks and homology via the Alexander polynomial (see \cite{ctm4} for more details). The Alexander polynomial of a $3$--manifold $M$ is an element of $\mathbb{Z}[H_1(M,\mathbb{Z})/(torsion)]$. One way to define the Alexander polynomial $A$ is to consider the cover $M'\to M$ corresponding to $H_1(M,\mathbb{Z})/(torsion)$. Let $p$ be a basepoint of $M$ and $p'\subset M'$ its preimage. The {\bf Alexander module} of $M$ is the $\mathbb{Z}[H_1(M,\mathbb{Z})/(torsion)]$--module $H_1(M',p',\mathbb{Z})$. The {\bf Alexander ideal} is the first elementary ideal of the Alexander module, and the {\bf Alexander polynomial} is the greatest common divisor of the elements of the Alexander ideal.
Returning to the situation at hand, after choosing a fibration $M\to S^1$, we can write \[H_1(M,\mathbb{Z})/(torsion)\cong\mathbb{Z}\oplus H.\] Thus, the Alexander polynomial can be written as a polynomial $A(u,t)$, where $t\in H$. The relationship between cohomology of finite abelian covers and Alexander polynomials is given by the following result in \cite{ctm4}:
\begin{prop}
An Alexander polynomial in more than one variable defines the maximal hypersurface in the character variety such that $\dim H^1(M,\mathbb{C}_{\chi})>0$ whenever $A(\chi)=0$.
\end{prop}
The relationship between Alexander polynomials and the action of $\psi$ on the homology groups of finite covers is the following observation, which is well--known and appears in \cite{ctm3}, for instance:
\begin{prop}\label{p:1}
Let $\chi:H\to S^1$ be a character of $H$. Then the polynomial $A(u,\chi(t))$ is the characteristic polynomial of the action of $\psi$ on $H_1(\Sigma,\mathbb{C}_{\chi})$.
\end{prop}
The relationship between the Teichm\"uller polynomial for a fibration with monodromy $\psi$ and an orientable invariant foliation is given as follows (see \cite{ctm3} for a proof):
\begin{prop}\label{p:2}
The Alexander polynomial $A(u,t)$ divides the Teichm\"uller polynomial $\theta(u,t)$.
\end{prop}
On the other hand, the Teichm\"uller polynomial gives more information than just the action of $\psi$ on the homology of certain finite abelian covers of $\Sigma_{g,n}$. Indeed, suppose that $\psi$ acts on the homology $H_1(\Sigma_{g,n},\mathbb{Z})$ with trace smaller than zero. Then there is no basis for $H_1(\Sigma_{g,n},\mathbb{Z})$ with respect to which the action of $\psi$ is Perron--Frobenius.
\section{Perron--Frobenius matrices over Laurent polynomial rings}
In this short section, we prove a result about matrices with entries in Laurent polynomial rings. If $p\in\mathbb{Q}[t,t^{-1}]$, define the {\bf spread} of $p$ to be the absolute value of the difference between the highest and lowest exponent of $t$ occurring in $p$.
\begin{lemma}\label{l:spread}
Suppose $\{p_1,\ldots,p_k\}$ and $\{q_1,\ldots,q_k\}$ are two collections of nonzero rational Laurent polynomials with positive coefficients. Then the spread of \[\sum_{i=1}^k p_iq_i\] is at least as large as the maximum of the spreads of the $\{p_i\}$ and the $\{q_i\}$.
\end{lemma}
\begin{proof}
Since each $p_i$ and $q_i$ is nonzero and has only positive coefficients, it is clear that no cancellation can occur, so that the spread can only increase.
\end{proof}
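The no-cancellation argument above is easy to see in a small computation. The sketch below represents a Laurent polynomial as a dictionary from exponents to coefficients; the particular polynomials are arbitrary examples with positive coefficients, chosen only for illustration.

```python
def spread(p):
    """Spread of a nonzero Laurent polynomial {exponent: coefficient}."""
    exps = [e for e, c in p.items() if c != 0]
    return max(exps) - min(exps)

def multiply(p, q):
    """Product of two Laurent polynomials."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return r

def add(p, q):
    """Sum of two Laurent polynomials."""
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return r

# p = 1 + t and q = t^{-1} + 2: all coefficients positive, so no
# cancellation can occur in any sum of products.
p = {0: 1, 1: 1}
q = {-1: 1, 0: 2}
s = add(multiply(p, p), multiply(p, q))   # p*p + p*q
assert spread(s) >= max(spread(p), spread(q))
print(spread(s))  # here the spread actually grows to 3
```

With positive coefficients the top exponent of the sum is the maximum of the top exponents of the summands, and likewise for the bottom, which is exactly why the spread cannot decrease.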
\begin{lemma}
Let $M\in M_n(\mathbb{Q}[t,t^{-1}])$ be Perron--Frobenius, and suppose that there is an entry of $M$ whose spread is at least one. Then there is a $k$ such that each entry of $M^k$ has spread at least one.
\end{lemma}
\begin{proof}
Passing to a power of $M$ if necessary, each entry of $M$ is a nonzero Laurent polynomial with positive coefficients. Without loss of generality, $a_{1,1}$ is not a unit. In particular, the spread of $a_{1,1}$ is nonzero. By Lemma \ref{l:spread}, the spread of each entry in the first row of $M^2$ is at least one. By the same argument, the spread of each entry of $M^3$ is at least one.
\end{proof}
In the language of this section, we have a convenient rephrasing of Lemma \ref{l:spread1}:
\begin{lemma}\label{l:spread2}
Choose a basis $\{t_1,\ldots,t_h\}$ for $H$. There is a $k$ such that each entry of $P_E^k$ has spread at least $1$ in each of the variables $\{t_1,\ldots,t_h\}$.
\end{lemma}
\section{The point--wise spectral gap}
We are now ready to prove the results leading up to Theorem \ref{t:main}.
\begin{proof}[Proof of Proposition \ref{p:pf}]
The fundamental observation for the proof of the proposition is the following: if $\{z_1,\ldots,z_n\}$ are complex numbers of modulus one and $\{a_1,\ldots,a_n\}$ are positive integers then the modulus of \[\sum_ia_iz_i\] is maximized when $\{z_1,\ldots,z_n\}$ all have the same argument.
Since $M$ is Perron--Frobenius, we may assume that each entry $m_{i,j}$ of $M$ is a nonconstant Laurent polynomial in one variable with spread at least one. Write $K>1$ for the spectral radius of $\phi_0(M)$. Consider the matrices $\phi(M)^n=(a_{i,j,n})$ and $\phi_0(M)^n=(b_{i,j,n})$. Since $\phi(t)$ is an irrational point on the circle, it follows that \[C=\max_{i,j}\frac{|a_{i,j,1}|}{b_{i,j,1}}<1.\]
Consider the entries of $\phi(M)^{n+1}$. We have the following inequalities: \[|a_{i,j,n+1}|=\Big|\sum_{k=1}^ma_{i,k,n}a_{k,j,1}\Big|\leq\sum_{k=1}^m|a_{i,k,n}||a_{k,j,1}|\leq C\sum_{k=1}^m|a_{i,k,n}|b_{k,j,1},\] where $m$ is the dimension of $M$ and the last step uses $|a_{k,j,1}|\leq C\, b_{k,j,1}$ for each $k$. By induction, each entry of $\phi(M)^n$ satisfies $|a_{i,j,n}|\leq C^n\, b_{i,j,n}$. It follows that in the $\ell_1$--norm, we have \[||\phi(M)^n||_1\leq C^n\cdot ||\phi_0(M)^n||_1.\]
It follows that the spectral radius of $\phi(M)$ is at most $C\cdot K$.
\end{proof}
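The spectral gap in this proof can be observed numerically on a toy matrix over $\mathbb{Z}[t,t^{-1}]$: specializing at $t=1$ (the trivial character $\phi_0$) yields a strictly larger spectral radius than specializing at an irrational point of the circle. The matrix entries below are made up for illustration and are not the $P_E$ of any actual train track.

```python
import numpy as np

# Toy Perron--Frobenius matrix over Z[t, t^{-1}]: each entry is a Laurent
# polynomial with positive coefficients and spread >= 1 (hypothetical data).
# An entry is encoded as {exponent: coefficient}.
M = [[{0: 1, 1: 1}, {-1: 1, 0: 1}],
     [{0: 1, 1: 1}, {0: 1, 1: 1}]]

def specialize(M, z):
    """Evaluate each Laurent-polynomial entry at t = z."""
    return np.array([[sum(c * z**e for e, c in p.items()) for p in row]
                     for row in M], dtype=complex)

def spectral_radius(A):
    return max(abs(np.linalg.eigvals(A)))

rho_0 = spectral_radius(specialize(M, 1.0))                       # t -> 1
rho_irr = spectral_radius(specialize(M, np.exp(2j * np.pi * np.sqrt(2))))

print(rho_0, rho_irr)
assert rho_irr < rho_0  # strict gap at the irrational specialization
```

At $t=1$ all the positive coefficients add with the same argument, which is exactly the "fundamental observation" of the proof; at an irrational point on the circle the arguments disagree and the moduli of the entries, hence the spectral radius, strictly drop.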
\begin{proof}[Proof of Corollary \ref{c:teich}]
Choose a basis $\{t_1,\ldots,t_h\}$ for $H$. Since $\phi$ is a non--torsion point of the representation variety $\Hom(H,S^1)$, there is at least one basis element for $H$, say $t_1$, which is sent to an irrational point on $S^1$. By Lemma \ref{l:spread2}, we may assume that each entry of $P_E$ has spread at least one in $t_1$, possibly after passing to a power. By Proposition \ref{p:pf}, we have that the spectral radius of $\phi_0(P_E)$ is strictly larger than that of $\phi(P_E)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{t:main}]
For each nonidentity $\phi\in\Hom(H,S^1)$, we need to show that the spectral radius of the action of $\psi$ on $H_1(\Sigma_{g,n},\mathbb{C}_{\phi})$ is strictly smaller than the dilatation of $\psi$. The conclusion of the theorem will follow from the continuity of the function $\rho$ and the compactness of $\Hom(H,S^1)$.
For nonidentity torsion points in $\Hom(H,S^1)$, this follows from the fact that torsion points of $\Hom(H,S^1)$ give rise to finite abelian covers of $\Sigma_{g,n}$ to which $\psi$ lifts. The dilatation of $\psi$ is a simple eigenvalue of the action of $\psi$ on the homology of that cover, whence it follows that the spectral radius on $H_1(\Sigma_{g,n},\mathbb{C}_{\phi})$ is strictly smaller than the dilatation.
For a non--torsion point $\phi$, we have that the largest root of $\theta(u,\phi(t))$ is strictly smaller than the dilatation of $\psi$, by Corollary \ref{c:teich}. Since the Alexander polynomial $A(u,\phi(t))$ is the characteristic polynomial of the action of $\psi$ on $H_1(\Sigma_{g,n},\mathbb{C}_{\phi})$ and since $A(u,\phi(t))$ divides $\theta(u,\phi(t))$, we have that the spectral radius is in fact smaller than the dilatation.
\end{proof}
\begin{proof}[Proof of Corollary \ref{c:gap}]
Since the Alexander polynomial varies continuously on $\Hom(H,S^1)$ as does its largest root, and since its largest root at $\phi_0$ is the dilatation of $\psi$, the result follows.
\end{proof}
\section{Comparison with other homological representations}
When $g=0$, mapping class groups of the surfaces $\{\Sigma_{g,n}\}$ are essentially just braid groups. We will assume in this case that the surface $\Sigma_{g,n}$ is just the $n$--times punctured disk $D_n$, so that the mapping class group is identified with the braid group $B_n$. The braid group $B_n$ has the pure braid group $P_n$ as a subgroup of finite index, and it arises as the kernel of the permutation action of $B_n$ on the punctures of $D_n$. The groups $B_n$ and $P_n$ have two well--studied homological representations, namely the Burau representation and the Gassner representation (see \cite{Bir} for more details).
The Burau representation is given by considering the infinite cyclic cover of $D_n$ obtained by sending a small loop about each puncture of $D_n$ to a fixed generator of $\mathbb{Z}$. The Burau representation is the associated homology representation. The Gassner representation is the analogous representation of $P_n$, except that we consider the universal abelian cover of $D_n$.
These representations take a braid $\beta$ and return a matrix over a Laurent polynomial ring. It can be shown that specializations of these matrices, given by sending the indeterminates to roots of unity, describe the actions of braid groups (resp. pure braid groups) on the homologies of all finite abelian covers with equal branching over all the punctures (resp. of all finite abelian covers of the disk). For more details, consult \cite{KobGeom}.
The Burau and the Gassner representations are useful in that one can easily associate a characteristic polynomial to each braid, and the roots of the specializations give eigenvalues of the actions of the braid on various deck group eigenspaces of the homology of covers.
\subsection{The simplest pseudo-Anosov braid}
Consider the group $B_3$, which is generated by the two braids $\sigma_1$ and $\sigma_2$, which interchange the first two and second two strands respectively, in the same direction. It is well--known that the braid $\beta=\sigma_1\sigma_2^{-1}$ is pseudo-Anosov. In fact, there is a double cover of $D_3$ which is homeomorphic to a torus with three punctures and one boundary component, and a lift of $\beta$ acts by the matrix \[\begin{pmatrix}2&1\\1&1\end{pmatrix}.\] Under the Burau representation, we have that \[\sigma_1\mapsto\begin{pmatrix}t&t\\0&1\end{pmatrix},\] and \[\sigma_2\mapsto\begin{pmatrix}1&0\\t^{-1}&t^{-1}\end{pmatrix}.\]
Computing the characteristic polynomial, we obtain $p(u,t)=1-u(1+t+t^{-1})+u^2$, up to a unit in $\mathbb{Z}[t,t^{-1}]$. We can write a program which computes the largest root of $p(u,\zeta)$ as $\zeta$ ranges over roots of unity. We can see that there is a unique maximum of $(3+\sqrt{5})/2$ at $\zeta=1$. This follows from the fact that the function $1+t+t^{-1}$ on the unit circle is just the function $1+2\Re(t)$, which achieves a unique maximum on the circle.
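A short script along these lines (our sketch, not the authors' program) evaluates $p(u,\zeta)=1-u(1+\zeta+\zeta^{-1})+u^2$ at roots of unity and confirms that the largest root peaks at $\zeta=1$ with value $(3+\sqrt{5})/2$.

```python
import numpy as np

def largest_root(zeta):
    """Largest-modulus root of p(u, zeta) = 1 - u(1 + zeta + zeta^{-1}) + u^2."""
    s = 1 + zeta + 1 / zeta
    return max(abs(r) for r in np.roots([1, -s, 1]))

# Scan the N-th roots of unity for a range of N.
best = max(
    (largest_root(np.exp(2j * np.pi * k / N)), N, k)
    for N in range(1, 30) for k in range(N)
)
print(best)  # maximum attained at zeta = 1, i.e. (N, k) with k = 0
print(abs(best[0] - (3 + np.sqrt(5)) / 2) < 1e-9)  # True
```

Since $1+\zeta+\zeta^{-1}=1+2\Re(\zeta)$ on the unit circle, the scan simply confirms that the unique maximum of $1+2\Re(\zeta)$ at $\zeta=1$ forces the unique maximum of the largest root there.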
\section{Introduction}
Observations of the diffuse $\gamma$-ray emission during the last
twenty years \footnote{A comprehensive account of these matters as well as
of their theoretical explanations can be found in \cite{ft}.}
have been successfully interpreted in terms of a two-component structure \\
$\star$ a highly anisotropic component strongly concentrated along the
galactic disk, \\
$\star$ an apparently isotropic component. \\
While the former is evidently galactic in nature - being actually
accounted for by cosmic ray (CR) interactions in the interstellar medium (ISM)
\cite{hunter} -
the origin of the latter still remains an open problem
in high-energy astrophysics (see e.g. \cite{dixon,pohl,smr}).
We will restrict our attention to the latter component throughout the
present paper.
We begin by recalling that EGRET observations have
detected a diffuse $\gamma$-ray flux \cite{sreekumar}
\begin{equation}
\Phi_{\gamma}(E_{\gamma}>0.1 {\rm GeV}) = (1.45 \pm 0.05) \times
10^{-5}~\gamma~{\rm cm^{-2}~s^{-1}~sr^{-1}}~, \label{2}
\end{equation}
with a spectral slope of $-2.10 \pm 0.03$, which -
for $E_{\gamma}>1$ GeV - gives
\begin{equation}
\Phi_{\gamma}(E_{\gamma}>1 {\rm GeV}) = (1.14 \pm 0.04) \times
10^{-6}~\gamma~{\rm cm^{-2}~s^{-1}~sr^{-1}}~.
\label{eq:3}
\end{equation}
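As a quick consistency check (our arithmetic, not taken from the EGRET papers): for a differential spectrum $\propto E^{-2.10}$, the integral flux scales as $\Phi_{\gamma}(>E)\propto E^{-1.10}$, so the two quoted numbers should differ by a factor $10^{-1.10}$.

```python
# Differential slope -2.10 implies integral flux Phi(>E) ~ E^{-1.10}.
slope = -2.10
phi_above_01GeV = 1.45e-5    # gamma cm^-2 s^-1 sr^-1, E_gamma > 0.1 GeV
phi_above_1GeV = phi_above_01GeV * (1.0 / 0.1) ** (slope + 1)
print(phi_above_1GeV)        # ~1.15e-6, consistent with (1.14 +/- 0.04)e-6
```

The extrapolated value agrees with eq. (\ref{eq:3}) within the quoted uncertainties.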
A question naturally arises. Where does the $\gamma$-ray emission in
question come from? No doubt, its characteristic isotropy calls for an
extragalactic origin - an option which is further supported by the fact
that it fits remarkably well with the extragalactic hard X-ray background
\cite{thompson}.
The next question to address is whether the considered $\gamma$-ray
background arises from a truly diffuse process or rather from the
contribution of very many unresolved point sources. Either option has
received considerable attention. Among the theories of diffuse origin are a
baryon-symmetric Universe \cite{smb},
primordial black hole evaporation
\cite{ph,cline}, early
collapse of supermassive black holes \cite{go}, a new
population of Geminga-like pulsars \cite{hart} and WIMP (Weakly
Interacting Massive Particle) annihilation
(see e.g. \cite{jkg}).
Models based on discrete source contribution include a variety of
possibilities. What is clear since a long time is that normal galaxies
fail to account for the observed isotropic background -
at least as long as their disk emission is considered
\cite{fichtel}-\cite{gal3} - since the
corresponding intensity falls shorter by a factor $\sim 10$ with respect to
the detected flux.
A more realistic option is provided by active galaxies
\cite{active1,active2}.
Indeed, blazars seem to yield a successful explanation of the isotropic
$\gamma$-ray emission \cite{blazars1}-\cite{blazars5}.
Finally, a somewhat hybrid model has recently been proposed, in which the
isotropic $\gamma$-ray background is produced in clusters of galaxies
through the interaction of CRs with the hot intracluster gas \cite{dar}.
However, this model has been severely criticized
\cite{ss,bbp}.
In fact, it gives rise to a $\gamma$-ray spectral index in disagreement
with the observed one
and relies upon a value for the CR density in the
intracluster space which is too high to be plausible.
More generally, it has been shown that the contribution to the
isotropic $\gamma$-ray emission from clusters of galaxies is
negligible \cite{bbp}.
Recently, Dixon et al. \cite{dixon}
have re-analyzed the EGRET data concerning the diffuse $\gamma$-ray flux
with a wavelet-based technique,
using the expected (galactic plus isotropic) emission as a null hypothesis.
Although the wavelet approach does not allow for a good estimate of the errors,
they find a statistically significant diffuse emission from
an extended halo surrounding the Milky Way.
This emission
traces a somewhat flattened halo and its
intensity at high-galactic latitude is \cite{dixon}
\begin{equation}
\Phi_{\gamma}(E_{\gamma}>1 {\rm GeV}) \simeq 10^{-7}-10^{-6}
~\gamma~{\rm cm^{-2}~s^{-1}~sr^{-1}}~.
\label{eq:4}
\end{equation}
Clearly, the comparison of eqs. (\ref{eq:3}) and (\ref{eq:4}) entails
that the newly discovered halo $\gamma$-ray flux is a relevant fraction
of the standard isotropic diffuse emission (at least for $E_{\gamma} >1$ GeV).
Our aim is to show that the observed halo
$\gamma$-ray emission naturally arises
within a previously-proposed model for baryonic dark matter,
according to which dark clusters of brown dwarfs and cold
self-gravitating $H_2$ clouds populate the outer
galactic halo and can show up in microlensing observations
\cite{depaolis1}-\cite{depaolisapj}.
Basically, CR protons in the galactic halo scatter on the clouds
clumped into dark clusters,
giving rise to the newly discovered $\gamma$-ray flux.
Although we already pointed out that a signature of the model is a diffuse
$\gamma$-ray emission from the galactic halo \cite{depaolis1,depaolis2}, a
more thorough study is
required to compare the predicted intensity
distribution with the observed one. A short account of these results has
been presented elsewhere \cite{depaolisapjl}. In the present paper, we provide
a more exhaustive analysis. In addition,
we estimate the $\gamma$-ray emission from the nearby M31
galaxy.
The paper is organized as follows.
In Section 2 we recall the main
features of our model for baryonic dark matter in the galactic halo.
In Section 3 we address the CR confinement
in the galactic halo and we estimate the CR energy density.
In Section 4 we compute the halo
$\gamma$-ray flux - produced by the clouds
clumped into dark clusters through proton-proton scattering -
as detected on Earth.
Section 5 is devoted to the study of the $\gamma$-ray flux due to
Inverse Compton (IC) scattering of electrons off background photons.
In Section 6 we present $\gamma$-ray intensity maps, pertaining to both
proton-proton scattering and IC scattering, and discuss their interplay.
Finally, in Section 7 we address future prospects to test our predictions.
\section{Dark clusters in the galactic halo}
Ever since the discovery that standard big-bang nucleosynthesis correctly
accounts for the light element abundances, a lesson has become clear: most of
the baryons in the Universe happen to be in nonluminous form, thereby
making a strong case for baryonic dark matter.
In order to see how this comes
about, we recall that the fraction of critical density contributed by luminous
matter is estimated to be $\Omega_L \sim 0.005$~\cite{bld}
\footnote{We are using throughout the presently favoured value
of the Hubble constant
$H_0 \simeq 70$ km s$^{-1}$ Mpc$^{-1}$.}.
Yet, agreement between the predicted and observed abundances
of nucleosynthetic yields is achieved only provided that the corresponding
contribution from baryons - in whatever form - lies in the
range $0.01 \ut <\Omega_B \ut <0.05$~\cite{Schr98}. Actually, this conclusion has recently been sharpened
by deuterium measurements in Quasi Stellar Object (QSO) absorption
spectra, which probe regions of
space much farther away than previously explored and give $\Omega_B \simeq
0.05$~\cite{Tyt96}. So, about $90\%$ of the baryonic matter in the Universe is
expected to be dark.
Needless to say, one is naturally led to wonder about the
distribution and form of baryonic dark matter.
Several possibilities have been contemplated over the last few years. Although
no logically compelling reason in favour of any particular option has emerged
so far, it looks intriguing that a naturalness argument
strongly suggests that the galactic dark halos should be
predominantly baryonic.
Basically, the idea is as
follows. As is well known, both optical and HI observations have shown that all
galactic rotation curves exhibit a universal qualitative behaviour: after a
steep rise corresponding to the bulge, they stay approximately constant out to
the last measured point. This feature -- namely the lack of a keplerian
fall-off -- provides a stark evidence in favour of a spheroidal dark halo
surrounding the luminous part of any galaxy. This is however not the end of the
story. For, rotation curves trace the luminous -- hence baryonic -- matter
within the optical disk, but are dominated by the halo dark matter at larger
galactocentric distances. Yet, both contributions invariably turn out to match
smoothly and exactly, thereby signalling a striking visible-invisible
conspiracy (also called disk-halo conspiracy). Before proceeding further, a
point should be stressed. With only a rather limited sample of available
rotation curves, that conspiracy was initially understood as a fine-tuning
whereby the disk and the halo of spiral galaxies manage to produce
a flat rotation
curve~\cite{Alb86}. Further studies have shown that such a flatness is only
approximate: brighter galaxies tend to have slightly falling rotation curves,
whereas fainter ones possess slightly rising rotation curves~\cite{Per91}.
Still, what really matters for the visible-invisible conspiracy (as
stated above) is the lack of any jump in the rotation curve within the
disk-halo transition region, besides the approximate flatness.
A priori, only a mysterious fine-tuning could
justify the conspiracy in question if the halo dark matter were
different in nature from luminous matter, that is to say if it were
nonbaryonic. So, baryonic dark matter looks
like a natural constituent of galactic halos.
Incidentally, this situation is very reminiscent of the case of grand unified
theories in particle physics, where supersymmetry has been invoked as a
successful way out of a similar, mysterious fine-tuning needed to stabilize the
gauge hierarchy against radiative corrections~\cite{Maiani79}. Thus, we are led
to the conclusion that -- much in the same way as fundamental interactions
ought to be supersymmetric -- galactic halos ought to be predominantly
baryonic!
Remarkably enough, a specific model of baryonic dark halos emerges naturally
from the present-day understanding of globular clusters. Indeed, a few years
ago we have realized~\cite{depaolis1,depaolis2} that the Fall-Rees
theory for the formation of globular clusters~\cite{fall}-\cite{vietri}
automatically
predicts -- without any further physical assumption -- that dark clusters made
of brown dwarfs~\footnote{Although we concentrate our attention on brown
dwarfs, it should be mentioned that red dwarfs as well can be accommodated
within the considered setting.} and cold $H_2$ clouds should lurk in
the galactic halo at
galactocentric distances larger than $10-20$ kpc. Accordingly, the inner halo
is populated by globular clusters, whereas the outer halo chiefly consists
of dark clusters. \footnote{Similar ideas have been proposed by Ashman
and Carr~\cite{ac}, Ashman~\cite{ashman}, Fabian and Nulsen~\cite{fn1,fn2}, and
Kerins~\cite{kerins1,kerins2}. Moreover, a scenario almost identical to the one
investigated here has been put forward by Gerhard and Silk~\cite{gs}.
Somewhat different baryonic pictures have been worked out by
Pfenniger, Combes and Martinet \cite{pcm}, Sciama \cite{sciama}, and
Gibson and Schild \cite{gibson} (see also \cite{wwt}).}
Below, we summarize the main features of our model.
Although the mechanism of galaxy formation is not yet fully understood, the
theory for the origin of globular clusters seems to be fairly well established
- thanks to the pioneering work of Fall and Rees~\cite{fall} - and can be
summarized as follows. After its initial collapse, the proto-galaxy is expected
to be shock heated up to its virial temperature $\sim 10^6$ K. Because of
thermal instability, density enhancements rapidly grow as the gas cools.
Actually, overdense regions cool more rapidly than average, and so
proto-globular-cluster (PGC) clouds form in pressure equilibrium with the hot
diffuse gas. When the PGC cloud temperature drops to $\sim 10^4$ K, hydrogen
recombination occurs: at this stage, the PGC cloud mass and size are
$\sim 10^5 (R/{\rm kpc})^{1/2} ~M_{\odot}$ and $\sim 10 (R/{\rm kpc})^{1/2}$
pc, respectively
($R$ being the galactocentric distance). Below $\sim 10^4$ K, an efficient
cooling can be brought about only by photon emission from roto-vibrational
transitions in $H_2$. Whether this mechanism is actually operative or not
crucially depends on the intensity of the environmental ultraviolet (UV)
radiation field, as we are now going to discuss.
In fact, in the central region of the proto-galaxy an AGN (Active Galactic
Nucleus)
along with a first population of massive stars are expected to form, which act
as strong sources of UV radiation that dissociates the $H_2$
molecules. It is not difficult to estimate that the $H_2$ destruction
should occur
for galactocentric distances smaller than $10-20$ kpc. As a consequence,
cooling is heavily suppressed in the inner halo, and so here the PGC clouds
remain for a long time in quasi-hydrostatic equilibrium at temperature $\sim
10^4$ K, resulting in the imprinting of a characteristic mass $\sim 10^6
M_{\odot}$. Eventually, the UV flux decreases, thereby allowing for
the formation and survival of $H_2$. Accordingly, the PGC clouds can further
cool, collapse and fragment, ultimately producing ordinary stars clumped into
globular clusters.
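The quoted PGC scalings can be evaluated directly. The sketch below uses the relations $M\sim 10^5 (R/{\rm kpc})^{1/2}\,M_{\odot}$ and $r\sim 10\,(R/{\rm kpc})^{1/2}$ pc from above; the sample radius of $20$ kpc is the transition distance between the globular-cluster and dark-cluster regions.

```python
# PGC cloud mass and size at hydrogen recombination, from the scalings
# M ~ 1e5 (R/kpc)^{1/2} M_sun and r ~ 10 (R/kpc)^{1/2} pc quoted above.
def pgc_mass_msun(R_kpc):
    return 1e5 * R_kpc ** 0.5

def pgc_radius_pc(R_kpc):
    return 10.0 * R_kpc ** 0.5

# At the inner edge of the dark-cluster region, R ~ 20 kpc:
print(pgc_mass_msun(20.0))   # ~4.5e5 M_sun
print(pgc_radius_pc(20.0))   # ~45 pc
```

These order-of-magnitude values set the scale of the dark clusters discussed below.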
What is most relevant for the present considerations is that in the outer halo
-- namely for galactocentric distances larger than $10-20$ kpc -- no
substantial $H_2$ destruction should take place, owing to the distance
suppression of the UV flux. Therefore, here the PGC clouds
monotonically cool, collapse and fragment. When their number density exceeds
$\sim 10^8$ cm$^{-3}$, virtually all hydrogen gets converted to molecular form
by three-body reactions ($H + H + H \to H_2 +H$ and $H + H + H_2 \to H_2 +
H_2$), which makes in turn the cooling efficiency increase
dramatically~\cite{palla}. As a result, no imprinting of a characteristic mass
on the PGC clouds shows up, and the fragment Jeans mass can drop to values
considerably smaller than $\sim 1 M_{\odot}$. The fragmentation process stops
when the PGC clouds become optically thick to their own line emission -- this
happens for a fragment Jeans mass as low as $\sim 10^{-2}
M_{\odot}$ ~\cite{palla}.
In this manner, dark clusters containing brown dwarfs
in the mass range $10^{-2} - 10^{-1}~M_{\odot}$ should form in the outer
halo. Typical values of the dark cluster radius are $\sim 10$ pc.
In spite of the fact that the dark clusters resemble in many respects
globular clusters, an important difference exists. Since practically no nuclear
reactions occur in the brown dwarfs, strong stellar winds are presently
lacking. Therefore the leftover gas - which is ordinarily expected to exceed
60\% of the original amount - is not expelled from the dark clusters but
remains confined inside them. Thus, also cold gas clouds are clumped into the
dark clusters. Although these clouds are primarily made of $H_2$, they
should be surrounded by an atomic layer and a photo-ionized ``skin''. Typical
values of the cloud radius are $\sim 10^{-5}$ pc.
Besides accounting for the halo dark matter in a natural fashion - without
demanding any new physical assumption - this model
elegantly explains the visible-invisible conspiracy: whether ordinary matter
is luminous or dark ultimately depends on the intensity of the environmental UV
radiation field during the proto-galactic epoch - no fine-tuning is
involved!
Moreover, the UV field in question is expected to be stronger for brighter
galaxies. Accordingly, brighter galaxies should have the dark clusters
lying farther away from the galactic centre than fainter galaxies,
thereby making the
contribution of dark matter to the rotation curve of brighter galaxies less
significant than for fainter ones: this circumstance precisely agrees with
the above-mentioned
observed pattern of rotation curves~\cite{Per91}.
Observationally, the present model makes a crucial prediction: very high-energy
cosmic ray proton scattering on the clouds should give rise to a detectable
diffuse gamma-ray flux from the halo of our galaxy. This topic will be dealt
with in great detail in the next Sections.
Further support in favour of the baryonic scenario
in question comes from the understanding of the Extreme Scattering Events:
dramatic flux changes over several weeks during monitoring of compact radio
quasars~\cite{fiedler}. It is generally agreed that ESEs are not intrinsic
variations, but rather apparent flux changes caused by refraction when a
(partially) ionized cloud crosses the line of sight. Recently,
Walker and Wardle~\cite{ww} pointed out that the first consistent
explanation of ESEs requires the refracting clouds to have precisely the
same properties as the
cold $H_2$ clouds predicted by the present model
(it is their photo-ionized ``skin'' that causes the radio wave refraction).
Last but not least is the issue of MACHOs (Massive Astrophysical Compact Halo
Objects), detected since 1993 in microlensing experiments towards the
Magellanic Clouds.
Regrettably, their origin remains controversial. Although the events
detected towards the SMC (Small Magellanic Cloud) seem to be a self-lensing
phenomenon \cite{st, gyuk}, a similar interpretation of all the events
discovered towards the LMC (Large Magellanic Cloud) looks
unlikely~\cite{alcock2}.
Yet -- even if most of the MACHOs are dark matter candidates lying in the
galactic halo -- their physical nature is unclear, since their average mass
strongly depends on the still uncertain galactic model, ranging from
$\sim 0.1~M_{\odot}$ for a maximal disk up to $\sim 0.5~M_{\odot}$ for a
standard isothermal sphere.
Superficially, white dwarfs look like the best explanation,
but the resulting excessive metallicity of the halo makes this option
untenable, unless their contribution to the halo dark matter is small
(see \cite{gm,binney}).
So, some variations on the theme of brown dwarfs have been
explored.
One option is that the galactic halo resembles a minimal halo (maximal disk)
more closely than an isothermal sphere, in which case MACHOs can
still be brown dwarfs.~\footnote{Notice that the $H_2$ clouds can also give
rise to microlensing events~\cite{draine}.}
In this connection, two points should be stressed. First, a large fraction
(up to $50\%$ in mass) can be binary systems - much like ordinary stars -
thereby counting as objects twice as massive~\cite{depaolismnras}.
Second, within our model brown dwarfs can actually be beige dwarfs - with
mass substantially larger than $\simeq 0.1~M_{\odot}$ - as suggested by
Hansen~\cite{HANSEN}, since a slow accretion mechanism from cloud gas is
likely to occur~\cite{lcs}.
An alternative
possibility has been pointed out by Kerins and Evans~\cite{ke}. Since in
the present model the initial mass function obviously changes with the
galactocentric distance,~\footnote{Evidence for a spatially varying initial
mass function in the galactic disk has been reported~\cite{taylor}.}
it can well happen that brown dwarfs dominate the halo
mass density without however dominating the optical depth for microlensing.
What, then, are MACHOs? Quite recently, faint blue objects discovered by the
Hubble Space Telescope have been interpreted as old halo white dwarfs lying
closer than $\sim 2$ kpc from the Sun \cite{hansen88}-\cite{ibata}:
they look like good
candidates for MACHOs within this context.
Finally, we remark that recently ISO observations \cite{valentijn} of
the nearby NGC891 galaxy have detected
a huge amount of molecular hydrogen, which
might account for almost all dark matter, at least within its optical radius.
Other observations suggest that similar clouds are also present farther away
\cite{lopez}.
In addition, Sciama \cite{sciama} has argued that a known excess in the
far-infrared emissivity of our galaxy (over that expected from a standard
warm interstellar dust model) would be naturally accounted for by a
population of cold $H_2$ clouds building up a thick galactic disk.
\section{Cosmic ray confinement in the galactic halo}
Neither theory nor observation at present allows us to
make sharp statements about the propagation of CRs in the galactic
halo~\footnote{We stress
that - contrary to the practice used in the CR community -
by halo we mean the (almost) spherical galactic component which extends beyond
$\sim$ 10 kpc.}.
Therefore, the only way to gain some insight into this issue
is to extrapolate from what is known about CR propagation in the
disk. Actually, this strategy looks sensible, since the leading effect is CR
scattering on inhomogeneities of the magnetic field over scales
from $10^2$ pc down to less than $10^{-6}$ pc \cite{berezinskii}
and - according to our model - inhomogeneities
of this kind are expected to be present in the halo as well,
because of the existence of molecular clouds - with a photo-ionized
``skin'' - clumped into dark clusters. Indeed, typical values of
the dark cluster radius are $\sim 10$ pc, whereas typical values of the
cloud radius are $\sim 10^{-5}$ pc \cite{depaolisapj}.
As is well known, CRs up to energies of
$\sim 10^6$ GeV are confined in the galactic disk for $\sim 10^7$ yr
\cite{berezinskii}.
It can be shown that in the diffusion model for the propagation of
CRs, the escape time $\tau_{\it esc}$ is
given by \cite{berezinskii}
\begin{equation}
\tau_{\it esc}\simeq\frac{R_h^2}{3D(E)}
\frac{1-
\displaystyle{ \frac{1}{2} \left(\frac{h_d}{R_h}\right)^2 } +\frac{1}{8}
\left(
\frac{h_d}{R_h}\right)^3}{1- \displaystyle{
\frac{h_d}{2R_h}}}~,
\label{tau1}
\end{equation}
where $D(E)$ is the diffusion coefficient, while $h_d$ and $R_h$ are the
half-thickness of the disk and the radius of the confinement region,
respectively. We recall that -
for CR propagation in the disk - the diffusion coefficient is
$D(E) \simeq D_0~(E/7~{\rm GeV})^{0.3}$ cm$^2$ s$^{-1}$ in the ultra-relativistic
regime, whereas it reads $D(E) \simeq D_0 \simeq 3 \times 10^{28}$ cm$^2$ s$^{-1}$ in
the non-relativistic regime \cite{berezinskii}.
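As a quick sanity check (ours, not part of the original analysis), the geometric factor multiplying $R_h^2/3D(E)$ in eq. (\ref{tau1}) tends to unity for a thin disk, $h_d \ll R_h$; the disk half-thickness of $0.2$ kpc used below is an illustrative assumption:

```python
def escape_time(R_h, h_d, D):
    """Escape time of eq. (tau1): R_h^2/(3D) times a geometric factor
    that depends only on the ratio h_d/R_h (all inputs in CGS units)."""
    r = h_d / R_h
    geom = (1.0 - 0.5 * r**2 + 0.125 * r**3) / (1.0 - 0.5 * r)
    return R_h ** 2 / (3.0 * D) * geom

KPC = 3.086e21   # cm per kpc
D0 = 3e28        # cm^2/s, non-relativistic disk value quoted in the text

# For a thin disk (h_d << R_h) the geometric factor is close to 1:
tau_thin = escape_time(10 * KPC, 0.2 * KPC, D0)
tau_limit = (10 * KPC) ** 2 / (3 * D0)
```

For $h_d/R_h \simeq 0.02$ the correction is below $2\%$, which is why eq. (\ref{tau2}) below drops it altogether.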
CRs escaping from the disk will further diffuse in the galactic
halo, where they can be retained for a long time,
owing to the scattering on the above-mentioned
small inhomogeneities of the halo magnetic field
\footnote{
A similar idea has been proposed with a somewhat different motivation in
\cite{wdowczyk}.}.
Indirect evidence that CRs are in fact trapped in a low-density
halo has recently been reported. For example, Simpson \& Connell
\cite{simpson} argue - based on measurements of the isotopic ratio
$^{26}$Al/$^{27}$Al in CRs - that CR lifetimes are perhaps a
factor of four larger than previously thought, thereby implying that CRs
traverse an average density smaller than that of the galactic disk.
A straightforward extension of the diffusion model
implies that the CR escape time $\tau_{\rm esc}^{~H}$ from the halo
(of size $R_H\equiv R_h\sim 100$ kpc, much larger than the disk half-thickness)
is given by
\begin{equation}
\tau_{\rm esc}^{~H} \simeq \frac{R_H^2}{3D_H(E)}~,
\label{tau2}
\end{equation}
where $D_H(E)$ is the diffusion coefficient in the galactic halo.
As a matter of fact, radio observations of clusters of galaxies
yield for the corresponding diffusion constant $D_0$
a value similar to that found in the galactic
disk \cite{sst}
\footnote{Moreover, we note that average magnetic field
values in galactic halos
are expected to be close to those of galaxy clusters,
i.e. between 0.1 $\mu$G and 1 $\mu$G \cite{hillas}.}.
So, it looks plausible that a similar value for $D_0$
also holds on intermediate length scales, namely within the galactic halo.
In the absence of any further information on the energy-dependence of $D_H(E)$,
we assume the same dependence as that established for the disk.
Hence, from eq. (\ref{tau2})
we find that for energies $E \ut < 10^3$ GeV the escape
time of CRs from the halo is greater than the age of the Galaxy
$t_0 \simeq 10^{10}$ yr
(notice that below the ultra-relativistic regime
$\tau_{\rm esc}^{~H}$ gets even longer). As a consequence - since the CR flux
scales like $E^{-2.7}$ (see next Section) - protons with $E \ut < 10^3$
GeV turn out to give the leading contribution to the CR flux.
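These statements are easy to check numerically with the round values quoted above (the unit conversions are ours):

```python
KPC = 3.086e21   # cm per kpc
YR = 3.156e7     # s per yr

def D_halo(E_GeV, D0=3e28):
    """Halo diffusion coefficient, assumed to follow the disk law
    D(E) = D0 (E / 7 GeV)^0.3 in the ultra-relativistic regime."""
    return D0 * (E_GeV / 7.0) ** 0.3

def tau_esc_halo_yr(E_GeV, R_H_kpc=100.0):
    """Halo escape time of eq. (tau2), R_H^2 / (3 D_H(E)), in years."""
    R_H = R_H_kpc * KPC
    return R_H ** 2 / (3.0 * D_halo(E_GeV)) / YR

t0 = 1e10   # age of the Galaxy, yr
```

At $E \simeq 10^2$ GeV the escape time exceeds $t_0$, while at $10^3$ GeV it is still of the same order, in line with the estimate in the text.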
We are now in a position to evaluate the CR energy density in the galactic
halo, getting
\begin{equation}
\rho_{CR}^{~H} \simeq \frac{3 t_0 L_G }{4 \pi R_H^3} \simeq 0.12
~~~~~{\rm eV~cm^{-3}}~,
\label{hcrd}
\end{equation}
where
$L_G \simeq 10^{41}$ erg s$^{-1}$
is the galactic CR luminosity (see, e.g., \cite{breitschwerdt}).
Notice, for comparison, that $\rho_{CR}^{~H}$ turns out to be about
one-tenth of the disk value \cite{gaisser}.
In fact, this value is consistent with the EGRET upper bound on the CR
density in the halo near the SMC \cite{sreekumar2}.
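The estimate in eq. (\ref{hcrd}) is straightforward to reproduce (the conversion factors are ours; with these round inputs one lands in the $0.1$-$0.2$ eV cm$^{-3}$ range, bracketing the quoted value):

```python
import math

KPC = 3.086e21        # cm per kpc
YR = 3.156e7          # s per yr
ERG_TO_EV = 6.242e11

t0 = 1e10 * YR        # age of the Galaxy, s
L_G = 1e41            # galactic CR luminosity, erg/s
R_H = 100 * KPC       # halo radius, cm

# eq. (hcrd): CR energy injected over t0, spread over the halo volume
rho_CR_eV = 3 * t0 * L_G / (4 * math.pi * R_H ** 3) * ERG_TO_EV
```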
We remark that we have taken specific realistic values for the various
parameters entering the above equations in order to make a quantitative
estimate.
However, somewhat different values can be used. For instance,
$R_H$ may range up to $\sim 200$ kpc \cite{bld},
whereas $D_0$ might be slightly larger than the above value, e.g.
$\simeq 10^{29}$ cm$^2$ s$^{-1}$
consistently with our assumptions. Moreover, $L_G$ can be as large
as $3 \times 10^{41}$ erg s$^{-1}$ \cite{volk}. It is easy to see
that these variations do not substantially affect our previous conclusions.
\section{Proton-proton scattering in the galactic halo}
We proceed to estimate the halo $\gamma$-ray flux produced by the
clouds clumped into dark clusters through the interaction
with high-energy CR protons.
CR protons scatter on cloud protons giving rise (in particular) to neutral
pions, which subsequently decay into photons.
A highly nontrivial question concerns the opacity effects in the clouds.
Quite recently, Kalberla et al.~\cite{kalberla} have addressed precisely this
issue, showing that optical-depth effects for both protons and photons are
negligible within our model.
Finally, we
expect the absorption of high-energy ($\geq$ 100 MeV) $\gamma$-ray photons
outside the clouds to be irrelevant,
since the mean free path is orders of magnitude larger than the
halo size.
As far as the energy-dependence of the halo CRs
is concerned, we adopt the same power-law as in the galactic disk (see below)
\cite{gaisser}
\begin{equation}
\Phi^{H}_{CR}(E) \simeq \frac{A}{{\rm GeV}}
\left(\frac{E}{{\rm GeV}}\right)^{-\alpha}~~~
{\rm particles~cm^{-2}~s^{-1}~sr^{-1}}~. \label{eqno:42}
\end{equation}
The constant $A$ is fixed by the requirement that the integrated
energy flux agrees with the above value of $\rho^H_{CR}$. Explicitly
\begin{equation}
\int d\Omega~ dE~ E~ \Phi^{H}_{CR}(E) \simeq 5.7\times 10^{-3}~~~
{\rm erg~cm^{-2}~s^{-1}}~,
\label{eqno:43}
\end{equation}
where for definiteness we take the integration range to be
$1~ {\rm GeV} \leq E \leq 10^3~ {\rm GeV}$.
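The constant $A$ is not spelled out in the text. For $\alpha = 2.7$ the integral in eq. (\ref{eqno:43}) is elementary, and a short calculation (ours) gives $A \simeq 0.2$, about one-tenth of the standard disk normalisation - consistent with the energy-density ratio found in the previous Section:

```python
import math

GEV_TO_ERG = 1.602e-3
alpha = 2.7
E1, E2 = 1.0, 1e3     # integration range, GeV

# \int_{E1}^{E2} E^{1-alpha} dE = (E1^{2-alpha} - E2^{2-alpha}) / (alpha - 2)
energy_integral = (E1 ** (2 - alpha) - E2 ** (2 - alpha)) / (alpha - 2)  # GeV

target = 5.7e-3       # erg cm^-2 s^-1, eq. (43)
A = target / (4 * math.pi * energy_integral * GEV_TO_ERG)
```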
A nontrivial point concerns the choice of $\alpha$. As a guide, the observed
spectrum of primary CRs at Earth would yield
$\alpha \simeq 2.7$. However, this conclusion cannot be extrapolated
to an arbitrary region in the halo (and in the disk), since
$\alpha$ crucially depends on the diffusion processes undergone by
CRs. For instance, the best fit to EGRET data
in the disk towards
the galactic centre yields $\alpha \simeq 2.45$ \cite{mori},
thereby showing that $\alpha$ gets increased by diffusion.
In the absence of any direct information,
we conservatively take $\alpha \simeq 2.7$
even in the halo,
but in Table 1 we report some results for different values
of $\alpha$ for comparison. At any rate, the flux does not vary substantially.
\begin{table}
\caption{
Halo $\gamma$-ray intensity at high-galactic latitude
for a spherical halo, evaluated for $R_{min}= 10$ and 15 kpc
at energies above 0.1 GeV and 1 GeV and for different values of
the CR spectral index $\alpha$. Values are given in units of
$10^{-7}$ $\gamma$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$.}
\begin{tabular}{cccc}
\br
$R_{min} $ & $E_{\gamma}$ & $\alpha$ & $\Phi_{\gamma}^{~\rm DM}
(b=90^0)$ \\
\hline
(kpc) & (GeV) & & \\
\hline
\hline
$10$ & $>0.1$ & 2.45& $62 $ \\
& & 2.70& $59 $ \\
& & 3.00& $49 $ \\
\hline
$10$ & $>1.0$ & 2.45& $11$ \\
& & 2.70& $6.7 $ \\
& & 3.00& $3.3 $ \\
\hline
\hline
$15$ & $>0.1$ & 2.45& $37 $ \\
& & 2.70& $35 $ \\
& & 3.00& $29 $ \\
\hline
$15$ & $>1.0$& 2.45& $6.5$ \\
& & 2.70& $4.0$ \\
& & 3.00& $1.9$ \\
\br
\end{tabular}
\label{table3}
\end{table}
Let us next turn our attention to the evaluation of the $\gamma$-ray flux
produced in halo clouds
through the reactions $pp \rightarrow \pi^0 \rightarrow \gamma
\gamma$. Accordingly, the source function
$q_{\gamma}(>E_{\gamma},\rho,l,b)$ -
yielding the photon number density at distance
$\rho$ from Earth with energy greater than $E_{\gamma}$ - is
\cite{gaisser}
\begin{equation}
\begin{array}{ll}
q_{\gamma}(>E_{\gamma},\rho,l,b)=
\displaystyle{\frac{4\pi}{m_p}} \rho_{H_2}(\rho,l,b)~\times \\ \\
\sum_{n} \int_{E_p(E_{\gamma})}^{\infty}
d{E}_p \int dE_{\pi}~ \Phi^H_{CR}
({E}_p)
\displaystyle{\frac{d \sigma^n_{p \rightarrow \pi}
(E_{\pi})}{dE_{\pi}}}
n_{\gamma}({E}_p)~,
\label{eqno:46}
\end{array}
\end{equation}
where
the lower integration limit $E_p(E_{\gamma})$ is the minimal proton
energy necessary to produce a photon with energy $>E_{\gamma}$,
$\sigma^n_{p \rightarrow \pi}(E_\pi)$ is the cross-section for the
reaction $pp \rightarrow n \pi^0$ ($n$ is the $\pi^0$ multiplicity),
$\rho_{H_2}(\rho,l,b)$ is the halo gas density profile
and $n_{\gamma}({E}_p)$ is
the photon multiplicity.
Unfortunately, it would be exceedingly difficult to keep track of the
clumpiness of the actual gas distribution in the halo, and
so we assume that its
density is smooth and goes like the dark matter density - anyhow, the very low
angular resolution of $\gamma$-ray detectors would not allow one to
distinguish between the two situations (evidently this strategy would be
meaningless if optical-depth effects were not negligible).
Accordingly, the halo gas
density profile reads
\begin{equation}
\rho_{H_2}(x,y,z) = f~ {\rho_0 (q)} ~ \frac{\tilde a^2+R_0^2}{\tilde a^2+x^2+y^2+(z/q)^2}~,
\label{eqno:29}
\end{equation}
for $\sqrt{x^2+y^2+z^2/q^2} > R_{min}$,
where $R_{min} \simeq 10$ kpc is the minimal galactocentric distance of the dark clusters
in the galactic halo.
We recall that $f$ denotes the fraction of halo dark matter in the form of gas,
$\rho_0(q)$ is the local dark matter density,
$\tilde a = 5.6$ kpc is the core radius and $q$ measures
the halo flattening. For
the standard spherical halo model
$\rho_0(q=1) \simeq 0.3$ GeV cm$^{-3}$,
whereas it turns out that e.g.
$\rho_0(q=0.5) \simeq 0.6$ GeV cm$^{-3}$.
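A minimal transcription of the profile in eq. (\ref{eqno:29}) (ours; $R_0 = 8.5$ kpc for the solar galactocentric distance is an assumption, as $R_0$ is not specified in this excerpt) makes the normalisation explicit - at the solar circle the profile reduces to $f\rho_0$ by construction:

```python
def rho_H2(x, y, z, f=0.5, q=1.0, rho0=0.3, a_tilde=5.6, R0=8.5):
    """Halo gas density of eq. (29) in GeV cm^-3; coordinates in kpc.
    rho0 is the local dark matter density and f the gas fraction."""
    return (f * rho0 * (a_tilde ** 2 + R0 ** 2)
            / (a_tilde ** 2 + x ** 2 + y ** 2 + (z / q) ** 2))

# At the solar circle (x = R0, y = z = 0) the profile equals f * rho0:
local = rho_H2(8.5, 0.0, 0.0)
```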
In order to proceed further, it is convenient
to re-express
$q_{\gamma}(>E_{\gamma},\rho,l,b)$ in terms of the inelastic pion production
cross-section $\sigma_{in}(p_{lab})$. Since
\begin{equation}
\sigma_{in}(p_{lab})<n_{\gamma}(E_p)>~ = \sum_n \int dE_{\pi}~
\frac{d\sigma^n_{p \rightarrow \pi}(E_{\pi})}{dE_{\pi}}~ n_{\gamma}(E_p)~,
\label{eqno:48}
\end{equation}
eq. (\ref{eqno:46}) becomes
\begin{equation}
\begin{array}{ll}
q_{\gamma}(>E_{\gamma},\rho,l,b)=
\displaystyle{\frac{4\pi}{m_p}}
\rho_{H_2}(\rho,l,b) ~\times \\ \\
\int_{E_p(E_{\gamma})}^{\infty}
d{E}_p~ \Phi^H_{CR}({E}_p)~
\sigma_{in}(p_{lab}) <n_{\gamma}({E}_p)>~, \label{eqno:49}
\end{array}
\end{equation}
where $\rho_{H_2}(\rho,l,b)$ is given by eq. (\ref{eqno:29}) with
$x = -\rho \cos b \cos l +R_0$,
$y = -\rho \cos b \sin l$ and
$z = \rho \sin b$.
For the inclusive cross-section of the reaction
$pp \rightarrow \pi^{0} \rightarrow \gamma \gamma$
we adopt the Dermer \cite{dermer} parameterization
\begin{equation}
\begin{array}{ll}
\sigma_{in}(p) < n_{\gamma}(E_p) > = 2 \times 1.45 \times
10^{-27} ~\times \\ \\
~~~\left\lbrace
\begin{array}{llll}
0.032 \eta^2 + 0.040 \eta^6 + 0.047 \eta^8 & ~~~0.78 \leq p \leq 0.96 \\
32.6 (p - 0.8)^{3.21} & ~~~0.96 \leq p \leq 1.27 \\
5.40 (p - 0.8)^{0.81} & ~~~1.27 \leq p \leq 8.0 \\
32 \ln p + 48.5 p^{-1/2} - 59.5 & ~~~p \geq 8.0 ~,
\end{array}
\right.
\end{array}
\end{equation}
where $p$ is the proton laboratory momentum in GeV/c,
the factor 2 comes from the fact that each pion decays into two photons,
whereas 1.45 accounts for the CR composition \cite{dermer},
which also includes heavy nuclei.
The quantity
\begin{equation}
\eta \equiv \frac{[(s- m_{\pi}^2 - 4m_p^2)^2 - 16 m_{\pi}^2 m_p^2]^{1/2}}
{2 m_{\pi} s^{1/2} }~,
\end{equation}
is the maximum $\pi^0$ momentum in the centre-of-mass frame in units of
$m_{\pi}$, expressed in terms of the Mandelstam variable $s$; $m_{\pi}$ and
$m_p$ are the pion and the proton mass, respectively.
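For reference, a direct transcription of the parameterisation (ours; masses in GeV, with $\eta$ computed as the maximum $\pi^0$ centre-of-mass momentum in units of $m_{\pi}$, i.e. from the $pp \to pp\pi^0$ phase space) shows that the four branches join to within roughly $10\%$ at the quoted momenta:

```python
import math

M_P = 0.9383    # proton mass, GeV
M_PI = 0.1350   # neutral pion mass, GeV

def eta(p):
    """Maximum pi0 centre-of-mass momentum in units of m_pi for a
    laboratory proton momentum p (GeV/c)."""
    s = 2 * M_P ** 2 + 2 * M_P * math.sqrt(p ** 2 + M_P ** 2)
    lam = (s - M_PI ** 2 - 4 * M_P ** 2) ** 2 - 16 * M_PI ** 2 * M_P ** 2
    return math.sqrt(max(lam, 0.0)) / (2 * M_PI * math.sqrt(s))

def dermer_mb(p):
    """Piecewise bracket of the Dermer parameterisation, in mb (1 mb =
    10^-27 cm^2); the full sigma_in<n_gamma> multiplies this by the
    prefactor 2 (photons per pi0) x 1.45 (CR composition)."""
    if p < 0.78:
        return 0.0
    if p <= 0.96:
        e = eta(p)
        return 0.032 * e ** 2 + 0.040 * e ** 6 + 0.047 * e ** 8
    if p <= 1.27:
        return 32.6 * (p - 0.8) ** 3.21
    if p <= 8.0:
        return 5.40 * (p - 0.8) ** 0.81
    return 32.0 * math.log(p) + 48.5 / math.sqrt(p) - 59.5
```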
Because $dV=\rho^2 d\rho d\Omega$, it follows that the observed
$\gamma$-ray flux per unit solid angle is
\begin{equation}
\Phi_{\gamma}^{~ \rm DM}
(>E_{\gamma},l,b)=\frac{1}{4\pi}
\int^{\rho_2(l,b)}_{\rho_1(l,b)} d\rho~ q_{\gamma}(>E_{\gamma},\rho,l,b)
~. \label{eqno:51}
\end{equation}
So, we find
\begin{equation}
\Phi_{\gamma}^{~ \rm DM}
(>E_{\gamma},l,b) =
f ~ \frac{\rho_0(q)}{m_p}~ {I}_1(l,b)~
{I}_2(>E_{\gamma})~,
\label{eqno:52}
\end{equation}
where
${I}_1(l,b)$ and ${I}_2(>E_{\gamma})$ are
defined as
\begin{equation}
{I}_1(l,b) \equiv \int^{\rho_2(l,b)}_{\rho_1(l,b)} d\rho
\left(\frac{\tilde a^2 + R_0^2}
{\tilde a^2 + x^2 + y^2 + (z/q)^2 } \right)~, \label{eqno:A5}
\label{eqno:39}
\end{equation}
\begin{equation}
{I}_2(>E_{\gamma}) \equiv \int_{E_p(E_{\gamma})}^{\infty}
d{E}_p~\Phi^H_{CR}({E}_p)~\sigma_{in}(p_{lab})
<n_{\gamma}({E}_p)>~,
\label{eqno:35}
\end{equation}
and $m_p$ is the proton mass.
According to the discussion in Sections 2 and 3,
typical values of $\rho_1(l,b)$ and $\rho_2(l,b)$ in eqs.
(\ref{eqno:51}) and (\ref{eqno:39})
are 10 kpc and 100 kpc, respectively.
Numerical values for $\Phi_{\gamma}^{~\rm DM}$ in the
cases $\alpha = 2.45,~ 2.7$ and $3.0$ are reported in Table 1.
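As an illustration (our own evaluation; $R_0 = 8.5$ kpc assumed), towards the galactic pole - where $x = R_0$, $y = 0$ and $z = \rho$ - the geometric integral of eq. (\ref{eqno:39}) for a spherical halo evaluates to $I_1 \simeq 7$ kpc:

```python
A_TILDE = 5.6               # kpc, core radius
R0 = 8.5                    # kpc, solar galactocentric distance (assumed)
RHO1, RHO2 = 10.0, 100.0    # kpc, line-of-sight limits quoted in the text

def I1_pole(n=200000):
    """I_1(l, b=90deg) of eq. (A5) for a spherical halo (q=1), where
    x = R0, y = 0, z = rho; midpoint integration, result in kpc."""
    d = (RHO2 - RHO1) / n
    total = 0.0
    for i in range(n):
        rho = RHO1 + (i + 0.5) * d
        total += d * (A_TILDE ** 2 + R0 ** 2) / (A_TILDE ** 2 + R0 ** 2 + rho ** 2)
    return total
```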
\section{Inverse-Compton scattering}
Another mechanism whereby $\gamma$-ray photons are produced is
IC scattering of high-energy CR electrons off galactic background photons.
Here we estimate the resulting flux, while the interplay between
proton-proton scattering and IC scattering will be discussed in the next Section.
The electron injection spectrum which best fits the locally observed
electron spectrum is given by the following power law,
valid for $E_e \ut > 10$ GeV
(see e.g. \cite{porter})
\begin{equation}
I_e(E_e;\rho,l,b) = K(\rho,l,b) E_e^{-a}~~
{\rm e^-~cm^{-2}~s^{-1}~sr^{-1}~GeV^{-1}}~,
\label{11}
\end{equation}
with $a \simeq 2.4$ and
$K_0 \equiv K(0) \simeq 6.3 \times 10^{-3}$
e$^-$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$ GeV$^{a-1}$ (the value of $K_0$ is
obtained by normalizing eq. (\ref{11}) with the observed local CR
electron spectrum at 10 GeV).
Since $a$ is somewhat model-dependent (in particular it depends
on the diffusion
processes), its actual value is not well determined, and indeed it could
be as low as $a \simeq 2$ \cite{pohl} or even $a\simeq 1.8$ \cite{smr}.
However, what is relevant is the electron spectrum where
the $\gamma$-ray production occurs and - due to diffusion processes -
the value of $a$ is expected to increase with the distance from the
galactic plane where the electrons are mostly produced.
In order to estimate the
galactic radiation field, we adopt the model of Mazzei,
Xu \& De Zotti \cite{mazzei}
for the photometric evolution of disk galaxies. This model reproduces well
the present broad-band spectrum of the Galaxy over about four decades
in frequency, from UV to far-IR. Accordingly, the two main
contributions to the galactic radiation field come from stars at wavelength
$\lambda$ $\sim 1 \mu$m and diffuse dust at $\lambda \sim 100 \mu$m.
The total stellar luminosity of the Galaxy is
$L_\star \sim 3.5 \times 10^{10}~L_{\odot}$
and the amount of starlight absorbed and re-emitted by dust is
$L_{\rm d} \sim 1.2 \times 10^{10}~L_{\odot}$
(see e.g., \cite{mazzei,cox}).
As regards the photon energy distribution, we can roughly approximate
the emission spectrum (see Fig. 4 in \cite{mazzei})
with the sum of two Planck functions
with temperature $T_{\star} \sim 2900$ K and $T_{\rm d} \sim 29$ K,
respectively.
According to the previous assumptions,
the source function
$q_{\rm ph} (E_{\gamma})$ for $\gamma$-ray
production through IC scattering is given by
\cite{berezinskii}
\begin{equation}
\begin{array}{ll}
q_{\rm ph} (E_{\gamma}) =
\frac{1}{2} \sigma_T
\left( \frac{4}{3} <\epsilon_{\rm ph}(T_{\star,{\rm d}})> \right)^{(a-1)/2}
~\times \\ \\
(mc^2)^{1-a} K_0 E_{\gamma}^{-(a+1)/2}
~~~{\rm \gamma~ s^{-1}~sr^{-1}~GeV^{-1}}.
\label{la}
\end{array}
\end{equation}
Here $<\epsilon_{\rm ph}(T_{\star,{\rm d}})>
\simeq 8kT_{\star,{\rm d}}/3$
is the average energy of background photons emitted by stars or dust and
$\sigma_T$ is the Thomson cross-section. In deriving eq. (\ref{la}), use
is made of the fact that the $\gamma$-ray energy is related to
the electron and background photon energies
according to
\begin{equation}
<E_{\gamma}> = \frac{4}{3} <\epsilon_{\rm ph}(T_{\star, d})> \left( \frac{ E_e}{mc^2} \right)
^2~,
\label{egammaav}
\end{equation}
so that very high-energy electrons are needed in order to produce
$\gamma$-rays.
For example, a $\gamma$-ray with $E_{\gamma} \simeq 1$ GeV produced by this
mechanism requires $E_e \simeq 170$ GeV for a target photon emitted by dust,
while $E_e \simeq 17$ GeV is required for starlight.
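These two benchmark energies follow from inverting eq. (\ref{egammaav}); a short check (ours):

```python
import math

K_B = 8.617e-5    # Boltzmann constant, eV/K
MC2 = 0.511e6     # electron rest energy, eV

def electron_energy_GeV(E_gamma_eV, T_K):
    """Electron energy (GeV) needed so that <E_gamma> of eq. (egammaav)
    equals E_gamma_eV, for target photons of mean energy <eps> = (8/3) k T."""
    eps = 8.0 * K_B * T_K / 3.0
    return MC2 * math.sqrt(E_gamma_eV / ((4.0 / 3.0) * eps)) / 1e9

E_dust = electron_energy_GeV(1e9, 29.0)      # dust photons, T_d ~ 29 K
E_star = electron_energy_GeV(1e9, 2900.0)    # starlight,    T_* ~ 2900 K
```

The ratio of the two energies is exactly $\sqrt{T_{\star}/T_{\rm d}} = 10$.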
The intensity of diffuse galactic $\gamma$-rays of energy
$>E_{\gamma}$ produced in this way and coming to Earth along
the line-of-sight
$(l,b)$ turns out to be
\begin{equation}
\begin{array}{ll}
\Phi_{\gamma}^{~\rm IC} (>E_{\gamma},l,b) =
\int_{0}^{\infty} d\rho <n_{\rm ph}~(\rho,l,b)>f_e(\rho,l,b) ~ \times \\ \\
\int_{E_{\gamma}}^{\infty}
q_{\rm ph }({E}_{\gamma})~ d{E}_{\gamma}
{\rm ~~~\gamma~cm^{-2}~s^{-1}~sr^{-1}}~,
\label{eqno:intensityph}
\end{array}
\end{equation}
where we have introduced the function
$f_{e}(\rho,l,b) \equiv K(\rho,l,b)/K_0$
as the ratio of the electron CR intensity
relative to the local intensity, while
$<n_{\rm ph}(\rho,l,b)>$ is
the average density of background photons.
Let us next focus our attention on
the functions $f_{e}(\rho,l,b)$ and $<n_{\rm ph}(\rho,l,b)>$.
The electron component of CRs is galactic in
origin, mainly produced by supernovae and pulsars located
inside the disk.
Electrons diffuse through the Galaxy and their distribution is
energy-dependent and not uniform, namely,
the characteristic diffusion length scale gets smaller
for higher electron energy. This feature
cannot be described in the framework
of the widely used Leaky Box Model, and in order to obtain the electron
density at an arbitrary point in the Galaxy
one has to resort to the transport equation (see e.g.
\cite{pohl,berezinskii}).
Unfortunately, several poorly known parameters enter this equation,
like the electron diffusion coefficient,
the rate at which electrons lose energy, the density of sources and the
electron spectrum.
An alternative approach relies upon the experimental evidence of the thick
disk
\footnote{Often defined as ``halo'' by the CR community.},
in which high-energy electrons
may be retained for a long time before escaping into the galactic halo.
Indeed, the observed characteristics of the radio emission
spectra of our and other galaxies lead to a relative density distribution of
electrons $f_e(R_0,z)\equiv n_e(z)/n_e(0)$ extending up to $5-12$ kpc
perpendicularly to the galactic plane, as shown
in Figure 5.29 of \cite{berezinskii}.
These numerical results can be
approximated by $f_e(R_0,z) = \exp[-(z/z_e)^{3/2}]$,
with the parameter $z_e$ depending on the electron energy. From
eq.~(\ref{egammaav}) and the ensuing discussion, it turns out that
$z_e \simeq 2.5$ kpc for $E_e \simeq 170$ GeV
while
$z_e \simeq 3.5$ kpc for $E_e \simeq 17$ GeV.
As far as the radial dependence
of the electron distribution is concerned,
we assume that $f_{e}(R,0)$ follows the same
$R$-dependence as the CRs, which can be obtained by
using a best fit procedure to the data in
Figure 11 of \cite{bloemen}. This yields
\begin{equation}
f_{e}(R,0) = e^{[0.48-0.36(R/R_0)-0.12(R/R_0)^2]}~.
\label{eqno:4}
\end{equation}
However, following Bloemen \cite{bloemen} - who
suggested a stronger radial gradient for the electron component of the CRs
- we also tested the effect of using a
steeper radial electron distribution on the IC $\gamma$-ray flux.
We anticipate that the corresponding results show that the IC
$\gamma$-ray flux does not change
significantly for galactic longitudes $|l| \leq 90^0$ (irrespective of
the latitude values) while it increases up to a factor of two at
$l = 180^0$ for $|b| \leq 30^0$.
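Putting the radial fit of eq. (\ref{eqno:4}) together with the vertical profile (a sketch of ours), one can verify that $f_e = 1$ at the solar position, as it must be since $f_e$ is defined relative to the local intensity:

```python
import math

R0 = 8.5   # kpc, solar galactocentric distance (assumed)

def f_e(R, z, z_e=3.5):
    """Relative CR electron density: the radial best fit of eq. (4)
    times the vertical profile exp[-(z/z_e)^(3/2)]; R, z, z_e in kpc."""
    radial = math.exp(0.48 - 0.36 * (R / R0) - 0.12 * (R / R0) ** 2)
    vertical = math.exp(-((z / z_e) ** 1.5))
    return radial * vertical

local = f_e(R0, 0.0)   # exponent is 0.48 - 0.36 - 0.12 = 0, so f_e = 1
```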
The last quantity to be specified
in eq. (\ref{eqno:intensityph}) is the average
background photon density $<n_{ph}(\rho, l, b)>$ or, equivalently,
the background photon flux $\Phi_{ph}(\rho, l, b)$ emitted by stars and dust
\begin{equation}
<n_{\rm ph}(\rho,l,b)>~ = \frac{\Phi_{\rm ph}(\rho,l,b)}{c} ~~~
{\rm \gamma~cm^{-3}}~.
\label{eqno:16}
\end{equation}
Note that the photon flux $d \Phi_{\rm ph}(\rho,l,b)$ at a point
$P(\rho, l, b)$ from
the solid angle $d\Omega$
subtended by an infinitesimal area $d{a'}$ centered in
$P'(R',\phi', z'=0)$ on the galactic plane is given by
\begin{equation}
d\Phi_{\rm ph}(\rho,l,b) =
I_{*,d}(R')~\left(\frac{d\Omega}{4\pi}\right)\cos{\alpha}~~~
{\rm ~~~\gamma~cm^{-2}~s^{-1}}~,
\label{eqno:17}
\end{equation}
where $\alpha$ is the angle between the normal to the area $da'$ and
the direction ${\bf PP'}$.
We can trace the surface brightness $I_{\star,{\rm d}}(R')$ to
the stellar/dust distribution. Assuming that visible matter makes up an
exponential disk, we set
\begin{equation}
I_{*,d} (R') = A_{*,d} e^{-(R'-R_0)/h_{*,d}}~~~~~~
{\rm \gamma ~ cm^{-2}~ s^{-1}} ~,
\label{22}
\end{equation}
where $h_{*,d}\simeq 3.5$ kpc is the scale length for the visible matter
and the constant $A_{*,d}$ is fixed by the total disk luminosity as
\begin{equation}
\int_0^{R_d} I_{*,d}(R') 2 \pi R' dR' =
\frac {L_{\star,d}} {2<\epsilon_{\rm ph}(T_{\star,d})>}
~~~~ {\rm \gamma ~s^{-1}}~.
\end{equation}
In this way, we get
$A_{\star} = 4.71 \times 10^{20}$ $\gamma$ cm$^{-2}$ s$^{-1}$
and
$A_{d} = 1.64 \times 10^{22}$ $\gamma$ cm$^{-2}$ s$^{-1}$,
with $R_d \simeq 15$ kpc.
By integrating eq. (\ref{eqno:17}) on the galactic disk, we find
\begin{equation}
\Phi_{\rm ph}(\rho,l,b)=
\int_0^{R_d}
\int_0^{2\pi}
I_{*, d}(R')~R' dR' d\phi '~
\left(\displaystyle{\frac{\cos\alpha}{4 \pi |{\bf PP'}|^2}}\right) ~~~~
{\rm \gamma~cm^{-2}~s^{-1}} ~.
\label{24}
\end{equation}
Finally, by using eqs. (\ref{eqno:16}), (\ref{22}) and (\ref{24}) - and
recalling eq. (\ref{la}) - eq. (\ref{eqno:intensityph}) can be rewritten in the form
\begin{equation}
\Phi_{\gamma}^{~\rm IC}(>E_{\gamma},l,b) =
{J}_1(l,b)~{J}_2(>E_{\gamma})
~~~~{\rm \gamma~cm^{-2}~s^{-1}~sr^{-1}}~,
\end{equation}
where we have set
\begin{equation}
\begin{array}{ll}
{J}_1(l,b) \equiv
\int_{0}^{\infty} f_e(\rho,l,b) d\rho ~\times \\ \\
\int_0^{R_d}~\int_0^{2 \pi}
\left(\displaystyle{
\frac{\cos\alpha} {4 \pi |{\bf PP'}|^2}} \right) ~R' dR' d\phi '
e^{-(R'-R_0)/h_{*,d}}~~~{\rm ~cm}~,
\end{array}
\end{equation}
and
\begin{equation}
\begin{array}{ll}
{J}_2(>E_{\gamma}) \equiv
\displaystyle{\frac{A_{*,d}}{2c}}
\sigma_T~ [ 4/3 <\epsilon_{\rm ph}(T_{\star,d})>]^{(a-1)/2}~\times
\\ \\
~(mc^2)^{1-a}~K_0~
\int_{E_{\gamma}}^{\infty} {E}_{\gamma}^{-(a+1)/2}d{E}_{\gamma}
~~~~~~~{\rm \gamma~cm^{-3}~s^{-1}~sr^{-1} }~.
\end{array}
\label{eqno:25}
\end{equation}
Numerical values of $\Phi_{\gamma}^{~\rm IC}(>E_{\gamma},l,b)$ at
high-galactic latitude are exhibited in Table 2 and plotted in Figure 3.
\begin{table}
\caption{
The galactic diffuse $\gamma$-ray intensity due to IC scattering
of high-energy electrons on background photons from stars and dust
is given (in units of $10^{-7}$ $\gamma$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$)
for $a=2.0,~2.4$ and $2.8$. The results for $a=2.0$ and $2.8$ are reported
for illustrative purposes.
We adopt the following values: $T_* = 2900$ K,
$L_* = 3.5 \times 10^{10}~L_{\odot}$ and
$T_d = 29$ K, $L_d= 1.5 \times 10^{10}~L_{\odot}$.}
\begin{tabular}{cccccc}
\br
& $z_e$ & $E_{\gamma}$ &
$\Phi_{\gamma}^{~\rm IC}(90^0)$&
$\Phi_{\gamma}^{~\rm IC}(90^0)$&
$\Phi_{\gamma}^{~\rm IC}(90^0)$ \\
\hline
& (kpc) & (GeV) & & & \\
\hline
\hline
& & & $a=2.0$ & $a=2.4$ & $a=2.8$ \\
\hline
stars & 3.5& $>0.1$ &$3.8 $ & $3.5 $& $3.4 $ \\
\hline
& & $>1.0$ &$1.2 $ & $0.7 $& $0.4$ \\
\hline
\hline
dust & 2.5& $>0.1$ &$12$ & $4.4 $& $1.7 $ \\
\hline
& & $>1.0$ &$3.8 $ & $0.9$& $0.2$ \\
\br
\end{tabular}
\label{table2}
\end{table}
\section{Discussion}
\begin{figure*}
\vspace{15.cm}
\special{psfile=fig2.eps vscale=76.5 hscale=67.5 voffset=-120. hoffset=-20.}
\caption{Contour values for the $\gamma$-ray flux due
to the DM at $E_{\gamma} > 1$ GeV are given for the indicated values
in units of $10^{-7}$ $~\gamma$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$,
in the cases: (a) spherical halo, (b) flattened halo with $q=0.5$.}
\label{fig2}
\end{figure*}
Our main results are maps of the intensity distribution
of the $\gamma$-ray emission from
baryonic dark matter (DM) in the galactic halo and from IC
processes in the galactic disk. In order to make the discussion definite,
we take the fraction of halo dark matter in the form of molecular clouds
$f \simeq 0.5$. As far as the IC emission is concerned, the
standard electron spectral index $a=2.4$ is used.
We stress that the shape of the IC maps does not depend on the value of $a$.
In Figure 1 we exhibit the contour plots in the first quadrant of the
sky ($0^0 \le l \le 180^0$, $0^0 \le b \le 90^0$) for
the halo $\gamma$-ray flux
$\Phi_{\gamma}^{~\rm DM} (E_{\gamma}> 1
{~\rm GeV})$.
Corresponding contour plots for $E_{\gamma}>0.1$ GeV are identical,
up to an overall constant factor equal to 8.74, as follows from
eq. (\ref{eqno:52}).
Figure 1a refers to a spherical halo, whereas Figure 1b
pertains to a $q=0.5$ flattened halo.
Regardless of the adopted value for $q$,
$\Phi_{\gamma}^{~\rm DM}(E_{\gamma}>1{\rm ~GeV})$ lies in the range
$\simeq 6-8 \times 10^{-7}$ $\gamma$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$
at high-galactic latitude.
However, the shape of the contour lines strongly depends on
the flatness parameter.
Indeed, for $q \ut > 0.9$ there are two contour lines (for
each flux value) approximately symmetric with respect to $l=90^0$
(see Figure 1a).
On the other hand, for
$q \ut < 0.9$ there is a single contour line (for each value of the flux)
which varies much less with the longitude (see Figure 1b).
As we can see from Table 1 and Figure 1, the predicted
value for the halo $\gamma$-ray flux at high-galactic latitude
is close to that found by Dixon et al. \cite{dixon} (see also Table 3).
This conclusion holds almost
irrespective of the flatness parameter.
Moreover, the comparison of the overall shape of the contour lines in our
Figures 1a and 1b with the corresponding ones in Figure 3 of Ref.
\cite{dixon} entails that models
with flatness parameter
$q \ut < 0.8$ are in better agreement with the data,
thereby implying that most likely
the halo dark matter is not spherically distributed. This result has
also recently been confirmed by the analysis of \cite{kalberla}.
In Figure 2 we present contour plots for the $\gamma$-ray flux
due to the IC scattering, for $E_{\gamma}>1$ GeV.
The corresponding contour plots for $E_{\gamma}>0.1$ GeV are identical,
up to an overall constant factor equal to 5
(this follows from eq. (\ref{eqno:25})).
The contour lines decrease with increasing longitude.
\begin{figure*}
\vspace{8.cm}
\special{psfile=fig3.eps vscale=76.5 hscale=67.5 voffset=-120. hoffset=-20.}
\caption{Contour levels for the $\gamma$-ray flux due to IC scattering
at $E_{\gamma} > 1$ GeV,
in units of $10^{-7}$ $~\gamma$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$.}
\label{fig3}
\end{figure*}
We remark that eq. (\ref{eqno:52}) yields
$\Phi_{\gamma}^{~DM}(E_{\gamma}>0.1{~\rm GeV}) \simeq 5.9 \times 10^{-6}$
$~\gamma$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$
at high-galactic latitude (for a spherical halo). This value is roughly
40\% of the diffuse $\gamma$-ray emission of
$(1.45 \pm 0.05) \times
10^{-5}~\gamma~{\rm cm^{-2}~s^{-1}~sr^{-1}}$
found by the EGRET team \cite{sreekumar} and in agreement with
the conclusion of Dixon et al. \cite{dixon} that
the halo $\gamma$-ray emission is a sizable fraction of the
standard isotropic diffuse flux also for $E_{\gamma} > 0.1$ GeV.
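As a quick arithmetic cross-check (an illustrative sketch, not part of the original analysis), the quoted fraction follows directly from the two fluxes:

```python
phi_halo = 5.9e-6    # predicted halo flux above 0.1 GeV, gamma cm^-2 s^-1 sr^-1
phi_egret = 1.45e-5  # EGRET diffuse emission, same units
ratio = phi_halo / phi_egret
print(f"halo fraction of diffuse flux ~ {ratio:.2f}")  # roughly 40%
```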
\begin{table}
\caption{
Rough values of the measured residual $\gamma$-ray flux at
$E_{\gamma}\geq 1$ GeV (after subtraction of both the isotropic background
and the standard galactic diffuse component)
are given for different galactic latitudes and longitudes
(interpolated from Fig. 3a in [1]).
Fluxes are given in units of $10^{-6}$ $\gamma$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$.
}
\begin{tabular}{ccc}
\br
$b$ & $l=0^0$ & $l=60^0$ \\
\hline
\hline
$45^0$ & 1 & 1 \\
\hline
$30^0$ & 2 & 1.5 \\
\hline
$15^0$ & 5.5 & 2 \\
\br
\end{tabular}
\label{table_obs}
\end{table}
Nevertheless, given the large
uncertainties both in the data and in the model parameters
(such as the electron scale height and
the electron spectral index $a$), one might also explain the
observations with a nonstandard IC
mechanism \cite{smr}.
Our calculation, however, suggests that the IC
contour lines in Figure 2
decrease much more rapidly than the observed ones for the halo
$\gamma$-ray emission (see Figure 3 in \cite{dixon}).
More precise measurements with a next generation
of satellites are certainly required in order to settle the issue.
\section{Gamma rays from the halo of M31}
As M31 resembles our galaxy, the discovery of Dixon et al. \cite{dixon}
naturally leads to the expectation that the halo of M31 should give rise to
a $\gamma$-ray emission as well. Below, we will try to address this issue
in a quantitative manner, assuming that the halo of M31 is structurally
similar to that of our galaxy and that our model for baryonic
dark matter is correct.
We suppose that the various parameters entering the calculations in
Sections 3 and 4 take similar values for M31 and for the Galaxy, apart
from the M31 central dark matter density
$\rho(0) \simeq 2.5 \times 10^{-24}$ g cm$^{-3}$
and the M31 core radius $\tilde a \simeq 5$ kpc.
Accordingly, the evaluation of the corresponding flux
$\Phi^{~M31}_{\gamma~~halo}$ proceeds as before,
with only minor modifications.
Specifically, we can use again eq. (\ref{eqno:52}) - with
${I}_2$ still given by eq. (\ref{eqno:35}) - but now
${I}_1$ is to be replaced by
${L}_1$ (see below), in order to account for the different geometry.
Notice that $f$ in eq. (\ref{eqno:52}) presently denotes the fraction of
halo dark matter of M31 in the form of $H_2$ clouds.
Consider a generic point $P$ in the halo of M31, and let
$R$ and $r$ denote its distance from the centre $O$ of M31 and from Earth,
respectively.
Since the distance of $O$ from Earth is $D \simeq 650$ kpc, we have
$R(r) = (r^2 + D^2 - 2 r D \cos \theta)^{1/2}$, where $\theta$
is the angular separation between $P$ and $O$ as seen from Earth.
For simplicity, we suppose that the M31 halo is described by an isothermal
sphere with radius $R_H$ and density profile
\begin{equation}
\rho(R) = \frac{\rho(0)}{1+(R/\tilde a)^2}~.
\end{equation}
Note that the ensuing amount of dark matter in M31 turns out to be about twice
as large as that of the Galaxy.
According to the discussion in Section 2, the dark clusters should populate
only the outer halo of M31. So, we compute $\Phi^{~M31}_{\gamma~~halo}$ from
regions of the M31 halo with
$R_{min} < R < R_H$, with $R_{min} \simeq 10$ kpc and $R_H \simeq 100$ kpc,
for definiteness. As is easy to see, the values of $\theta$
corresponding to $R_{min}$ and $R_H$ are $\theta_{min} \simeq 1^0$ and
$\theta_{H} \simeq 9^0$, respectively.
We are now in a position to compute ${L}_1$, which reads
\begin{equation}
{L}_1 = 2\pi
\int_{\theta_{min}}^{\theta_{H}}
\sin \theta ~d\theta
\int_{r_{min}(\theta)}^{r_{max}(\theta)}
dr
\left( \displaystyle{\frac{\tilde a^2}{\tilde a^2 +
R^2(r)}} \right ) \simeq
~1.9 \times 10^{20}~ {\rm cm~sr}~,
\label{eqno:m3144}
\end{equation}
with $r_{max (min)}(\theta) \equiv D \cos \theta + (-)
(R^2_H - D^2 \sin^2 \theta)^{1/2}$.
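As a numerical cross-check of eq. (\ref{eqno:m3144}) (an illustrative sketch using the parameter values quoted above; the midpoint-rule grid sizes are arbitrary choices), a direct evaluation of the double integral reproduces the quoted value to within a few percent:

```python
import math

KPC_CM = 3.0857e21                        # cm per kpc
D, a_tilde, R_H = 650.0, 5.0, 100.0       # distances in kpc
th_min, th_max = math.radians(1.0), math.radians(9.0)

def L1(n_th=400, n_r=400):
    dth = (th_max - th_min) / n_th
    total = 0.0
    for i in range(n_th):
        th = th_min + (i + 0.5) * dth
        s2 = (D * math.sin(th)) ** 2
        if s2 >= R_H ** 2:
            continue                      # line of sight misses the halo
        half = math.sqrt(R_H ** 2 - s2)
        r_lo, r_hi = D * math.cos(th) - half, D * math.cos(th) + half
        dr = (r_hi - r_lo) / n_r
        inner = 0.0
        for j in range(n_r):
            r = r_lo + (j + 0.5) * dr
            R2 = r * r + D * D - 2.0 * r * D * math.cos(th)
            inner += a_tilde ** 2 / (a_tilde ** 2 + R2) * dr
        total += 2.0 * math.pi * math.sin(th) * inner * dth
    return total * KPC_CM                 # convert kpc sr to cm sr

print(f"L1 ~ {L1():.2e} cm sr")           # close to the quoted 1.9e20
```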
Recalling eqs. (\ref{eqno:52}) and (\ref{eqno:35}),
we get
\begin{equation}
\Phi^{~M31}_{\gamma~~halo}(>E_{\gamma}) =
1.9 \times 10^{20} f \frac{\rho(0)}{m_p}
{I}_2(>E_{\gamma})
~~~{\rm \gamma~ cm^{-2}~s^{-1}}~.
\label{eqno:m3145}
\end{equation}
Observe that regions of the M31 halo with angular separation less than
$\theta_{min}$ from $O$ do not contribute to eqs. (\ref{eqno:m3144})
and (\ref{eqno:m3145}), and so $\Phi^{~M31}_{\gamma~~halo}$
should be regarded as a lower bound on the total $\gamma$-ray flux from the
M31 halo.
Specifically, eq. (\ref{eqno:m3145})
yields
\begin{equation}
\Phi^{~M31}_{\gamma~~halo}(E_{\gamma}>0.1{~\rm GeV})
\simeq 3.5 \times 10^{-7} f
~~~{\rm \gamma~ cm^{-2}~s^{-1}}~.
\label{m31}
\end{equation}
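As a consistency sketch (eq. (\ref{eqno:35}) for ${I}_2$ lies outside this section, so we treat ${I}_2$ as the unknown), the prefactor of eq. (\ref{eqno:m3145}) fixes the value of ${I}_2(>0.1{\rm ~GeV})$, in the units implied by eq. (\ref{eqno:52}), that reproduces eq. (\ref{m31}):

```python
L1 = 1.9e20          # cm sr, quoted value of the geometry integral
rho0 = 2.5e-24       # M31 central dark-matter density, g cm^-3
m_p = 1.6726e-24     # proton mass, g
prefactor = L1 * rho0 / m_p          # ~2.8e20
phi = 3.5e-7                         # quoted flux above 0.1 GeV (for f = 1)
I2 = phi / prefactor
print(f"implied I2(>0.1 GeV) ~ {I2:.2e}")   # ~1.2e-27
```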
This value has to be compared both with the $\gamma$-ray flux
from M31 disk
and with the $\gamma$-ray emission from the halo of the Galaxy.
The former quantity has been estimated to be $\simeq 0.2 \times 10^{-7}$
$\gamma$ cm$^{-2}$ s$^{-1}$ for $E_{\gamma}>0.1$ GeV \cite{ob,of}
within a field of view of $1.5^0\times 6^0$,
whereas the latter quantity, integrated over the entire field of view of
M31 halo, is $\simeq 4.3 \times 10^{-7}$
$\gamma$ cm$^{-2}$ s$^{-1}$ for $E_{\gamma}>0.1$ GeV, according to our
results in Sections 4 and 6.\footnote{For simplicity, we suppose here that
the halo of the Galaxy is spherical and we employ eq. (\ref{eqno:52})
with $f=1/2$.}
As far as observation is concerned, no $\gamma$-ray flux from M31 has been
detected by EGRET. Accordingly, the EGRET team has derived the upper bound
\cite{sreekumar94}
\begin{equation}
\Phi^{~M31}_{\gamma}(E_{\gamma}>0.1{~\rm GeV}) \ut <0.8 \times 10^{-7}
~~~{\rm \gamma~ cm^{-2}~s^{-1}}~.
\label{m31obs}
\end{equation}
Unfortunately, a direct comparison between eqs. (\ref{m31}) and
(\ref{m31obs}) is hindered by the fact that eq. (\ref{m31obs}) is derived
under the assumption of a point-like source.
Clearly, an angular resolution of about one degree or better is
necessary in order to discriminate between the halo and disk emission from M31.
So, the next generation of $\gamma$-ray satellites,
like AGILE and GLAST, will be able to test
our predictions.
\ack{We would like to thank G. Bignami, P. Caraveo, D. Dixon,
T. Gaisser, M. Gibilisco,
G. Kanbach, T. Stanev, A. Strong and M. Tavani for useful discussions.}
\section*{References}
\section{Introduction}
The single band Hubbard model is widely accepted
as the simplest starting point
of a microscopic description of correlated electron systems \cite{and}.
More recently it has been realized that the inclusion of a $t'$ coupling \cite{ttp}
can fit the phenomenology of some cuprates. Recent photoemission
experiments seem to provide Fermi surfaces compatible with
the dispersion relation of the model at moderate values of $t'$ and of the density
\cite{arpes}, while it has been argued that it can fit some
features of ruthenium compounds \cite{rut} at higher values of $t'$ and
of the doping.\\
The study of inhomogeneous charge and spin phases in the Hubbard model has been a
subject of interest since the discovery of the high-$T_c$ compounds, which
were seen to have a very inhomogeneous electronic structure,
at least in the underdoped regime. However, most of the work was done
before the importance of $t'$ was realized \cite{su,pr,il,paco1}.
A sufficiently large value of the ratio $t'/t$, compatible with
the values suggested for the cuprates ($t'/t \sim -0.3$),
leads to a significant change in the magnetic properties of the model,
as a ferromagnetic phase appears at low doping. This phase
has been found by numerical and analytical
methods \cite{ferro1,ferro3,phases}, and
it is a very robust feature of the model.\\
The purpose of this work is to study the influence of $t'$ on the
magnetism of the Hubbard model at moderate values of U and density.
We will make use of an unrestricted
Hartree-Fock approach in real space, which allows us to visualize the
charge and spin configurations. We believe that the method is
well suited for the present purposes as: i) it gives a reasonable
description of the N\'eel state, with a consistent description of
the charge gap and spin waves, when supplemented with the RPA.
ii) It becomes exact if the ground state of the model is
a fully polarized ferromagnet, as, in this case, the interaction
plays no role. A fully polarized ground state is, indeed, compatible
with the available Monte Carlo \cite{ferro1} and t-matrix
calculations\cite{ferro3}. iii) It is a variational technique, and it should
give a reasonable approximation to the ground state energy.
This is the only ingredient required in analyzing the issue
of phase separation. iv) It describes the doped antiferromagnet,
for $t'=0$ as a dilute gas of spin polarons. The properties of
such a system are consistent with other numerical calculations
of the same model \cite{letal98,letal99}.\\
On the other hand, the method used here does not allow us to treat
possible superconducting instabilities of the model, which have been
shown to be present, at least in weak coupling approaches \cite{phases}.
The study of these phases requires extensions of the present
approach, and will be reported elsewhere.\\
The main new feature introduced by a finite $t'$, in terms of simple
condensed matter concepts, is the destruction of the perfect
nesting of the Fermi surface at half filling, and the existence
of a second interesting filling factor, at which the
Fermi surface includes the saddle points in the dispersion
relations. At this filling, the density of states at the
Fermi level becomes infinite, and the metallic phase
becomes unstable, even for infinitesimal values of the
interaction. For sufficiently large values of $t'/t$, the
leading instability at this filling is towards a ferromagnetic
state.\\
In the following section, we present the model and the method.
Then, we discuss the results. As the system shows a rich variety
of behaviors, we have classified the different regimes
into an antiferromagnetic region, dominated by short range
antiferromagnetic correlations, a ferromagnetic one,
and an intermediate situation, where the method suggest
the existence of phase separation. The last section presents
the main conclusions of our work.
\section{The model and the method}
The t-t' Hubbard model is defined on the two-dimensional
square lattice by the Hamiltonian
\begin{equation}
H=-t\sum_{<i,j>,s} c^+_{i,s}
c_{j,s}
\;-t'\;\sum_{<<i,j>>,s} c^+_{i,s} c_{j,s}
\;+\;U\sum_i n_{i,\uparrow} n_{i,\downarrow}
\;\;\;,
\label{ham}
\end{equation}
with the dispersion relation
$$
\varepsilon({\bf k}) = 2t\;[ \cos(k_x a)+\cos(k_y a)]
+4t'\cos(k_x a)\cos(k_y a)
\;\;\;.
$$
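As an illustration (a sketch; the grid size is an arbitrary choice, and the sign convention is the one stated below, $t>0$, $t'<0$, with $|t'|$ matching the representative value $0.3$ used later), evaluating $\varepsilon({\bf k})$ on a Brillouin-zone grid confirms the bandwidth $W=8t$ quoted below and locates the saddle points at $(\pi,0)$ and $(0,\pi)$, whose energy $-4t'$ marks the Van Hove level:

```python
import numpy as np

t, tp = 1.0, -0.3                        # t > 0, t' < 0
k = np.linspace(-np.pi, np.pi, 201)      # grid includes 0 and +/-pi exactly
kx, ky = np.meshgrid(k, k)
eps = 2*t*(np.cos(kx) + np.cos(ky)) + 4*tp*np.cos(kx)*np.cos(ky)

W = eps.max() - eps.min()                # full bandwidth
E_vh = 2*t*(np.cos(np.pi) + np.cos(0.0)) + 4*tp*np.cos(np.pi)*np.cos(0.0)
print(f"bandwidth W = {W:.2f} t,  Van Hove (saddle-point) energy = {E_vh:.2f} t")
```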
We have adopted the convention widely used to describe the phenomenology
of some hole-doped cuprates \cite{ttp}: $t>0$, $t'< 0$, $2|t'|/t < 1$.
With this choice of parameters the bandwidth is $W=8t$ and
the Van Hove singularity
is approached by doping the half-filled
system with holes. Throughout this study we will fix the value of $t=1$
so that energies will be expressed in units of $t$. Unless otherwise stated,
we will work in a $12\times 12$ lattice with periodic boundary conditions.
We have chosen the $12\times 12$
lattice because it is the minimal size for which
finite size effects are almost irrelevant
\cite{pr,pilar,capone}. \\
The unrestricted Hartree-Fock approximation
minimizes the expectation value of
the hamiltonian (\ref{ham}) in the space of Slater determinants. These are
ground states of a single particle many-body system in a potential
defined by the electron occupancy of each site. This potential is
determined selfconsistently
$$
H=-\sum_{i,j,s} t_{ij} c^\dag_{i,s}c_{j,s}
-\sum_{i,s,s'}\frac{U}{2} {\vec m}_i c^\dag_{i,s} {\vec \sigma}_{s,s'} c_{i,s'}
+\sum_{i}\frac{U}{2} q_i(n_{i\uparrow}+n_{i\downarrow}) +
{\displaystyle c.c.}\;\;,
$$
(where $t_{ij}$ equals $t$ for nearest and $t'$ for next-nearest neighbors),
and the self-consistency conditions are
$${\vec m_i}=\sum_{s,s'}<c^\dag_{i,s} {\vec \sigma}_{s,s'} c_{i,s'}> \;\;\;,\;\;\;
q_i=< n_{i\uparrow}+n_{i\downarrow}-1> \;,$$
where ${\vec \sigma}$ are the Pauli matrices. \\
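The self-consistency loop above can be sketched as follows (an illustrative collinear version on a small $4\times 4$ cluster at half filling, not the production code of this work; the cluster size, damped mixing, and N\'eel-like starting guess are choices of the sketch):

```python
import numpy as np

L, U, t, tp = 4, 8.0, 1.0, -0.3   # small cluster; the paper uses 12x12
N = L * L

def site(x, y):
    return (x % L) * L + (y % L)

# single-particle part: nearest (t) and next-nearest (t') hopping, PBC
H0 = np.zeros((N, N))
for x in range(L):
    for y in range(L):
        i = site(x, y)
        for dx, dy, amp in [(1, 0, -t), (0, 1, -t), (1, 1, -tp), (1, -1, -tp)]:
            j = site(x + dx, y + dy)
            H0[i, j] += amp
            H0[j, i] += amp

# Neel-like starting guess; N/2 electrons per spin = half filling
n_up = np.array([0.5 + 0.4 * (-1) ** (x + y) for x in range(L) for y in range(L)])
n_dn = 1.0 - n_up
n_occ = N // 2

for _ in range(500):
    out = []
    for n_other in (n_dn, n_up):          # H_sigma sees the opposite-spin density
        e, v = np.linalg.eigh(H0 + np.diag(U * n_other))
        out.append((v[:, :n_occ] ** 2).sum(axis=1))
    err = max(np.abs(out[0] - n_up).max(), np.abs(out[1] - n_dn).max())
    n_up = 0.5 * n_up + 0.5 * out[0]      # damped mixing for stability
    n_dn = 0.5 * n_dn + 0.5 * out[1]
    if err < 1e-7:                        # convergence criterion as in the text
        break

m_stag = np.mean([(n_up[site(x, y)] - n_dn[site(x, y)]) * (-1) ** (x + y)
                  for x in range(L) for y in range(L)])
print(f"staggered magnetization ~ {m_stag:.2f}")
```

At $U=8$ the loop converges to a strongly polarized N\'eel-like solution, illustrating the iteration; the full calculation of course works with the complete spin vector ${\vec m}_i$ and charge $q_i$.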
We have established a very restrictive criterion
for the convergence of a solution. The iteration ends when
the effective potential of the hamiltonian and the one deduced from the
solution are equal up to $E\;<\;10^{-7}$.
When different configurations converge for a given value of the parameters,
their relative stability is found by comparing their total energies.
\section{The results}
The results of this work are summarized in fig. 1,
which represents the energy of the
ground state configurations versus doping from $x=0$ to $x=0.34$
(where $x$ is the ratio of the number of doped holes to
the total number of sites)
for the representative
values $t'=0.3$ and $U=8$. As, in most cases, a variety
of selfconsistent solutions can be found, we have tried to avoid an initial
bias by starting with random spin and charge configurations. Once
the system has evolved to a stable final configuration, this has been used
as the initial condition for the nearby dopings. Hence, most of the
configurations discussed in the text are robust in the sense that they
were not forced by a choice of initial
conditions and are stable under
small changes of the initial values. Exceptions
are the diagonal commensurate domain walls
and the stripes. These configurations were set
as initial conditions and found to be self-consistent. Even though
there are many possible solutions,
the system cannot be ``forced'' to converge to a given solution
by appropriately choosing the initial conditions.
In particular homogeneous solutions, such as a pure AF solution, do not
converge near half filling, as will be discussed later. Fig. 2 and fig. 3
show a comparison of the energies of different configurations converging
in the same range of dopings. Fig. 1 shows only minimal energy
configurations. Once a configuration converges, we have checked its
stability under changes in $U$ and $t'$.\\
The most remarkable feature of fig. 1 is the smooth transition
from insulating antiferromagnetism to
metallic ferromagnetism. The antiferromagnetic
region extends in a range of hole
doping from
$x=0$ to $x=0.125$ and the ferromagnetic region from $x=0.125$ to $x=0.34$.
In the antiferromagnetic region the
predominant configurations are fully polarized antiferromagnetism (AF),
polarons (POL),
diagonal commensurate domain-walls(dcDW), and noncollinear solutions ($S_x$).
In the ferromagnetic region the phases are ferromagnetic domains (fm DOM),
ferromagnetic non collinear solutions
(fm SDW) and the fully polarized state or Nagaoka configuration (Ng).\\
Most of the AF configurations are known as solutions of the Hubbard model
with $t'=0$ \cite{su,pr,il,paco1,zaan,Sc90}. We will here comment
on the changes induced by $t'$. The FM configurations are totally new
and due to the presence of $t'$, as well as
the zone of coexistence of both magnetic orderings.
In addition, we have also analyzed in
detail some striped configurations, due to their
possible experimental relevance.\\
In what follows we analyze the antiferromagnetic and ferromagnetic regions
and discuss the possibility of phase separation.
\subsection{The antiferromagnetic region}
The study of the motion of a few holes in an antiferromagnetic background
has been one of the main subjects in the literature related to the cuprates
as these are doped AF insulators. The region of the Hubbard model
at and close to half filling is also the area where the
metal-insulator transition \cite{imada} occurs, and where the well-established
spin polarons or spin bags \cite{bag}
coexist with domain walls and, possibly, striped
configurations.
The diagonal hopping $t'$ has a strong influence over this region as
it destroys the perfect nesting
of the Hubbard model at half filling and the particle-hole symmetry
which leads to AFM order in weak coupling approaches \cite{lin,vollhardt}.\\
The AF region is formed by fully polarized antiferromagnetism,
polarons, diagonal commensurate
domain-walls and non collinear solutions. We also have found stripes
as excited
states. The configurations and the density of states are shown in
fig. 4, fig. 5, fig. 6 and fig. 7. We will give a brief
discussion of these configurations.
\vspace{0.5 true cm}
{\bf Antiferromagnetism}
\vspace{0.3 true cm}
For the reference values of $U=8$ and $t'=0.3$ fully polarized
antiferromagnetism (AF) is the lowest energy configuration
only at half filling. For the range of dopings
$0.007\leq x\leq 0.027$, ($1\leq h\leq 4$, where h denotes the
number of holes), AF converges but
POL are energetically
more favourable. Above four holes a purely AF initial configuration
evolves to polarons. \\
AF is the minimal energy configuration for lower values
of U in a wider range of dopings.
For example for $U=4$ and $t'=0.3$ AF is the lowest energy
configuration in the range
$0\leq x\leq 0.03$. This result is almost insensitive to changes in $t'$. \\
We can then conclude that, in the presence of $t'$,
the homogeneous fully polarized antiferromagnetic configuration (N\'eel state)
is not the dominant solution near half filling.
This result is to be contrasted
with what happens with electron doping, where AF dominates a larger region
of the doping space. The reason for this asymmetry will become clear in the
following discussion of the polaronic configuration. For $t'\neq 0$
inhomogeneous solutions are clearly energetically more favourable.
\vspace{0.5 true cm}
{\bf Polarons}
\vspace{0.3 true cm}
Magnetic polarons have been discussed at length in the literature
\cite{bag,paco1,seib}. For $t'=0$ the magnetization points along the
same direction everywhere in the cluster and the extra charge is localized
in regions that can be of either cigar or diamond shape. These regions
define a core where the magnetization is reduced. \\
In the present case, $t' \neq 0$, this picture changes substantially.
The two Hubbard bands in the N\'eel state are no longer equivalent,
with bandwidths given, approximately, by $8 | t' | \pm 4 t^2 / U$.
Polarons are found at the edges of the narrower band at all values of $t'$.
This
situation corresponds to hole doping for our choice of sign
of $t'$ ( $t' / t < 0$). The doping of the wider
electron band leads usually
to stable homogeneous metallic AF solutions,
where the extra carriers are delocalized throughout the lattice.
We have found polarons in the electron region
only when $U$ is large ($U > 6$) and $t'$ small
($t' < 0.15$). On the other hand, the localization of the polarons
induced by hole
doping increases with increasing $|t' / t |$. This reflects the
fact that these polarons are derived from a narrower Hubbard band.
Qualitatively, this fact can be understood
in terms of the asymmetric tendency of the system towards
phase separation when $t' \neq 0$ \cite{Getal99}.
The polarons also can be understood as an incipient form of phase
separation, as the core shows strong ferromagnetic correlations.\\
Polarons converge in a wide region of the phase diagram,
coexisting with AF and dcDW as shown in fig. 2. They have lowest energy
in the doping range
$0.007\leq x\leq 0.035$ ($1\leq h\leq 5$). They do not
converge in the range $0.076\leq x\leq 0.097$ ($11\leq h \leq 13$) where
the noncollinear solution has lower energy.
This is also different from
the situation with $t'=0$ where polarons converge and have lower
energy in the full range of dopings $2\leq h\leq 30$ \cite{paco1}.
The DOS of polarons is shown in fig. 5b for five holes.
Fig. 5a shows the reference AF state at half filling. In the DOS for
polarons the localized states appear in the antiferromagnetic gap.
As doping increases, they form a mid-gap subband but the shape of the
antiferromagnetic spectrum
is still clearly seen (see fig. 5c). \\
\vspace{0.5 true cm}
{\bf Diagonal commensurate domain-walls}
\vspace{0.3 true cm}
The dcDW configuration is formed by polarons arranged along the diagonal
direction, creating an almost one-dimensional charged wall that forms
a ferromagnetic domain (see
fig. 4b). We stress ``commensurate'' because they do not separate
different AF domains as
stripes do.
The density of states of these solutions differ from
that of usual domain walls in that the one dimensional
band where the holes are located is wider, and the Fermi level
lies inside it. We do not find a size independent one particle gap
in these solutions, unlike for conventional domain walls.
Within the numerical precision of our calculations, these
structures are metallic, while antiphase domain walls are
insulating.\\
The dcDW are the predominant configuration in the
AF region. They converge in the range of dopings $0.014\leq x\leq 0.118$
($2\leq h\leq 17$)
and are the minimal energy configuration in most of the doping range as can be
seen in fig. 2. These configurations resemble an array of
the polarons discussed earlier, along the (1,1) direction.
Thus, we can say that individual polarons have a tendency towards
aligning themselves along the diagonals.
This may be due to the
fact that $t'$ favors the hopping along these directions.
This tendency can be also seen in the density of states.
We have also
checked that at a lower value of $t'$ (also for $U=8$)
these configurations are less favored.
In particular, for $U=8$, dcDW are not
formed at low $t'$ while they do form at $t'= 0.3$.
Moreover, we have tried
vertical domain walls: they are not formed for $U=8$ and $t'=0.3$,
but they are for lower $t'$ ($t'=0.1$). Summarizing, we have checked that $t'$
favours dcDW against POL and disfavours vertical DW.
This behaviour is also found with the striped configurations
(see below).
\vspace{0.5 true cm}
{\bf Non collinear solution $S_x$}
\vspace{0.3 true cm}
The structure denoted by $S_x$
in the phase diagram consists of a special configuration
with spin components noncollinear with the $z$ axis. It appears in the range of dopings
$0.076\leq x\leq 0.097$,
($11\leq h\leq 13$), in competition with
dcDW and has lower energy. We have checked that this structure is
never seen in the absence of $t'$ and is also found at $U=4$ and $t'=0.3$.\\
The configuration is shown in fig. 4c. We see that there are polarons, but there
is a contribution of the $x$ spin component at some random sites.
The convergence in this
configuration is very slow. It is interesting to point out
that we obtain this configuration when using conventional polarons as
the initial condition.\\
We do not have a complete understanding of why this configuration is preferred
in this region, although it is interesting to note that it happens for the
commensurate value of twelve holes
(in our $12 \times 12 $ lattice) and the two neighboring values $h=11$ and
$h=13$.\\
Its density of states, shown in fig. 5d, is very similar to the polaronic DOS.
\vspace{0.5 true cm}
{\bf Stripes}
\vspace{0.3 true cm}
The striped configurations are
similar to the domain-walls, but the one-dimensional
arrangement of charge separates antiferromagnetic domains with a
phase shift of $\pi$. Two
typical striped configurations are shown in
fig. 6a and fig. 6b. Recently, stripes have attracted
a lot of interest, as half-filled vertical
stripes (one hole every second site) are found in cuprates, while diagonal
stripes are found
in nickelates \cite{strexperim}.\\
We have obtained stripes as higher-energy configurations,
and we have not found half-filled vertical stripes, in agreement
with other works using mean field approximations
for $U=8$ \cite{su,pr,il,paco1,zaan}.
It is known that the addition
of a long range Coulomb interaction could stabilize the vertical stripes
as ground
states for large $U$ \cite{zaanen}, and that, applying a slave-boson version
of the Gutzwiller
approach, the half-filled vertical stripes can be ground states depending
on parameters \cite{seibold}.\\
We have studied the filled vertical stripe and the diagonal stripe obtained
at values of the doping
commensurate with the lattice.
Our main interest is the role played by $t'$ in these configurations.
We have seen that
$t'$ has a strong influence on them: $t'$ reduces
significantly the basin of attraction of the vertical stripes, which agrees
with recent calculations in the $tt'-J$ model \cite{tt'-J},
while it favors diagonal stripes. The evolution of the vertical stripe
with $t'$
can be seen in fig. 6b and fig. 6c for the values $U=4$, $t'=0$ and $t'=0.2$.
These stripes do not converge for higher values of $t'$.
We have instead found that diagonal stripes are favored by $t'$ much as
the dcDW were. \\
The density of states of the stripes is very similar to that of polarons.
We can conclude that they are insulating states (see fig.
7), unlike the similar commensurate domain walls, where a
more metallic character is apparent.
\subsection{The ferromagnetic region}
The existence of metallic ferromagnetism in the Hubbard model
remains one of the most
controversial issues in the subject \cite{ferro4}.
Large regions of ferromagnetism in doping-parameter
space were found in the earliest works on
the $t-t'$ Hubbard model within the mean field
approximation \cite{lin}, and were often assumed to be
an artifact of the approximation that
would be destroyed by quantum corrections.
There are two main regions where ferromagnetism
is likely to be the dominant configuration.
One is the region close to half filling, in particular at one hole doping where
the Nagaoka theorem
ensures a fully polarized ferromagnetic state
in a bipartite lattice at $U=\infty$. The other
is the region around the Van Hove fillings
where there is a very flat lower band and
where ferromagnetism was found for large values of $t'$ close to $t'= 0.5$
with quantum Monte Carlo techniques \cite{ferro1}
and in the T-matrix approximation \cite{ferro3}.
FM is also found to be the dominant
instability for small $U$ and large $t'$ in
analytical calculations based on the renormalization
group \cite{phases}, and at intermediate
values of U with a mixture of analytical and mean field
calculations \cite{japon}.
Finally, there is a controversy on whether Nagaoka ferromagnetism
is stabilized at the bottom of the band, $\rho\rightarrow 0$ \cite{botton}.\\
Most of the previous calculations rely on the study of the divergences of
the magnetic susceptibility,
pointing to either a symmetry-breaking ground state or to the formation of spin
density waves as low energy excitations of the system. In many cases
it is not possible in this type of analysis to
discern the precise nature of the magnetic phases, and,
discern on the precise nature of the magnetic phases, and,
in particular, whether they correspond
to fully polarized states (long range order) or
to inhomogeneous configurations with
an average magnetization.
A complete study of the magnetic transitions as a function
of the electronic density is also a difficult issue. \\
We have studied the stability of ferromagnetic configurations in the full
range of dopings discussed previously. Two main issues can be addressed
within the method of the present paper. One is the existence
of the fully polarized ferromagnetic state (Nagaoka state),
and its stability not only towards the state with one spin flip,
but against any weakly polarized or paramagnetic configuration.
The other is the specific symmetry of the partially polarized
ferromagnetic configurations.\\
In the region close to half filling, our results indicate
that the Nagaoka theorem probably does hold in the
presence of $t'$ (which spoils the bipartite character of the lattice),
since the
Nagaoka state appears when doping with one hole at such
large values of $U$ as to make the kinetic
term quite irrelevant. We found Nagaoka FM
at values of $U$ such as $U=128$.
No FM configurations are found doping with two holes even at $U=128$.\\
The region of low to intermediate electron density has been analyzed for
various values of $U$ and $t'$.
This region includes dopings close to the Van Hove singularity
where FM should be enhanced due to the large degeneracy of states in the
lower band. The position
of the Van Hove singularity
for a given value of $U$ and $t'$ can be read off from the undoped DOS;
it has been determined in \cite{vh}.
Our results are the following: \\
Nagaoka FM is not found for $t'= 0.1$ at any filling for $U\leq 8$. For
$t'= 0.3$, two types of
FM configurations are the most stable in the range of dopings shown in
fig. 1. Ferromagnetic spin density waves (fm SDW), depicted in
fig. 8b, dominate the phase diagram at
densities close to the AFM transition $0.146 \leq x\leq 0.194$
($21\leq h\leq 28$), and in $x\geq 0.264$ ($h\geq 37$).
In the region in between, ferromagnetic domains
(fm DOM) as the one shown in fig. 8c are the most stable.
Both types of configurations show a strong charge segregation and are clearly
metallic. The excitation spectrum of these configurations can be seen
in fig. 9.\\
Fully polarized FM metallic states
(Nagaoka) shown in fig. 8a, are found
at all values of $h$ corresponding to closed shell configurations,
from a critical value
$h_c (U)$, which depends on $t'$, down to the bottom of the band. They are
shown as vertical solid lines in fig. 1. They are also metallic with a
higher DOS at the Fermi level than the partially polarized configurations.
Larger values of $t'$ or $U$ push down the critical $h$
in agreement with previous works \cite{lin,ferro3}.
Some values of $h_c (U)$ are, for $t'=0.3$,
$h_c(6)=37 , h_c(8)=29 , h_c(10)=21\;.$
For $t'=0.4 , h_c(8)=25\;.$ As mentioned before,
no FM is found for $t'=0.1$. \\
The former results show a large region of
ferromagnetic configurations whose upper
boundary coincides with previous
estimates \cite{ferro3} but which extends to the
bottom of the band.
We have found paramagnetic configurations to converge at the bottom
of the band, but their energies are higher than the ferromagnetic ones.
Comparing our solutions with the corresponding results obtained with
the same method in the case $t'=0$ \cite{paco1},
we find that the inclusion of $t'$ favors
ferromagnetism for intermediate to large dopings.
\subsection{Phase separation}
Although the issue of phase separation (PS) in the Hubbard model is quite
old \cite{visscher}, it has become the object of
very active research work \cite{allps}
following the experimental
observation of charge segregation
in some cuprates \cite{psexp}. Despite the effort,
the theoretical situation is quite controversial,
although recent calculations rule out PS in the 2D
Hubbard model \cite{nops,capone}. In the $t-J$ model, PS seems to occur
above some value of $J$ \cite{someps,Cetal98,CBS98}, although other
work suggests that it is likely
for all values of $J$ \cite{tj}.
PS has also been invoked in connection with
the striped phase of the cuprates \cite{tranq}. \\
The theoretical study of PS is a difficult subject.
While it is a clear concept in statistical mechanics
dealing with homogeneous systems in thermodynamical equilibrium,
the characterization
of PS in discrete systems is much more involved. It is assumed to occur
in those density regions where the energy as a function of density is
not a convex function. This behavior is difficult to achieve in
finite systems where the indication of PS is a
line, $E(x)$, of zero curvature, i.e.\ of
infinite compressibility.
Even this characterization, which should be correct if it refers to
uniform phases of the system, is problematic when many
inhomogeneous phases compete in the same region of parameter space.
On the other hand, simple thermodynamic arguments suggest that
it should be a general phenomenon near magnetic phase
transitions \cite{phasesep}.\\
PS is also very hard to observe numerically as
demonstrated by the results cited previously.
Exact results such as the one obtained in \cite{nops} are very restrictive and
hence of limited utility.\\
Our work supports the evidence for
phase separation of the model in several ways.
The first is through the plot of the total energy
of the minimal energy configuration as a function
of the doping $x$ shown in fig. 1.
There we can see that the dominant feature follows
a straight line. As mentioned before, this characterization has the problem
of comparing the energies of different types of configurations. \\
The evidence is
clearer if we observe the same plot
for a given fixed configuration in the AF region
where phase separation occurs (fig. 2).
The polaronic configurations in fig. 2 follow a straight line
while negative curvature is clearly seen in the plot of the
commensurate domain walls, the most abundant solution in this region.
A Maxwell construction
applied to this region of the curve interpolates rather well to half filling. \\
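The Maxwell construction used here amounts to taking the lower convex envelope of the $E(x)$ data: densities whose points are dropped from the envelope belong to the phase-separated region. A minimal numerical sketch of this procedure on toy energy data (not our Hartree-Fock output):

```python
import numpy as np

def lower_envelope(x, E):
    """Indices of the points on the lower convex envelope of (x_i, E_i).
    Points dropped from the envelope lie in the phase-separated region."""
    hull = [0]
    for i in range(1, len(x)):
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            # cross product <= 0: the turn (j, k, i) is not convex from below
            if (x[k] - x[j]) * (E[i] - E[j]) - (E[k] - E[j]) * (x[i] - x[j]) <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

# toy energy curve with a concave bump at x = 2
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
E = np.array([0.0, -0.5, 0.3, -0.5, 0.0])
env = lower_envelope(x, E)  # -> [0, 1, 3, 4]: the bump is skipped
```

For actual data one would, in addition, read off the coexistence densities as the envelope points bracketing the dropped region.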
The best evidence is provided by the comparison between
the plots corresponding to the two uniform configurations
existing in the system. In the case of the N\'eel state (AF of fig. 2)
we can see a straight line in the
region of densities where it is a self-consistent solution.
This plot should be compared with the one in fig. 10 corresponding
to the uniform Nagaoka states. In the large region where this
homogeneous state is found, the plot
follows very closely a standard quadratic curve. \\
Finally we have looked at the charge and spin
configurations of minimal energy. Apart
from the AFM configuration at half filling and the Nagaoka FM, all inhomogeneous
configurations show the same pattern: regions with an
accumulation of holes and ferromagnetic order coexisting with
regions of lower density and AFM order. The charge
segregation is obvious in configurations like the ones shown in fig. 4 and
in fig. 8c.\\
We have found fully polarized solutions in closed shell
configurations down to the lowest
electron occupancies allowed in our $12 \times 12$ cluster
(5 electrons). However, we cannot rule out the existence of
paramagnetic solutions at even lower fillings.\\
With all the previous hints we reach the conclusion that
the $t-t'$ Hubbard model tends to phase
separate into an antiferromagnetic and a ferromagnetic fully polarized
state with different densities for any doping away from half
filling up to the Van Hove filling where FM sets in.
It is interesting to note that this result was
predicted in a totally different context by Markiewicz in ref. \cite{mark}.
Phase separation has also been predicted in the same range of dopings in ref.
\cite{ferro3} but between a paramagnetic and a ferromagnetic state.
\section{Conclusions}
In this paper we have analyzed the charge and
spin textures of the ground state
of the $t-t'$ Hubbard model in two dimensions as a
function of the parameters $U$, $t'$ and
the electron density $x$ in a range from half filling to intermediate
hole doping with the aim of elucidating the role of $t'$ on
some controversial issues. These include
the existence and stability of ordered configurations such as domain walls
or stripes, and the magnetic behavior in the
region of intermediate to large doping where the lower band becomes very flat.\\
We have used an unrestricted Hartree-Fock approximation in real space
as the best suited method to study the inhomogeneous configurations of
the system. \\
Our results are summarized in the representative phase diagram of fig. 1
obtained for the standard values of the parameters $U=8$, $t'=0.3$.
There we can see
that the system undergoes a transition from
generalized antiferromagnetic insulating
configurations including
spin polarons and domain walls, to metallic ferromagnetic configurations.
For the values of the
parameters cited, the transition occurs at an electron density $x = 0.125$
($h = 18$).
Both types of magnetic configurations converge
in the intermediate region
indicating that the transition is smooth, more like a crossover.\\
The generalized antiferromagnetic configurations are
characterized by a large peak in the
density of states of
the lower band and by the presence of an antiferromagnetic
gap with isolated polarons
for very small doping that evolves into a mid-gap subband
for larger dopings. Ferromagnetic configurations have a
metallic character with a
DOS at the Fermi level that increases for configurations
with increasing total magnetization. Fully polarized Nagaoka states
are found at all closed shell configurations in the
ferromagnetic zone of the phase diagram. They have the
highest DOS at the Fermi level. \\
Apart from the homogeneous N\'eel and Nagaoka states, all inhomogeneous
configurations show the existence of the two magnetic orders associated with
charge segregation. AF is found in the regions of low charge density and
FM clusters are formed in the localized regions where the extra charge tends
to accumulate. \\
Our main conclusion is that the only stable
homogeneous phases of the system consist
of the purely antiferromagnetic N\'eel configuration at half filling,
and Nagaoka ferromagnetism, which appears
around the Van Hove filling.
We find the system is unstable towards phase
separation for all intermediate densities. \\
We have reached this conclusion through a careful study
of the curves representing the total energy versus doping
of the various configurations. Besides, the approach used allows us to
visualize the inhomogeneous configurations.
In all of them we find regions with an accumulation of holes
and ferromagnetic order coexisting with regions of lower density
and antiferromagnetic order.\\
As the ferromagnetic phase is metallic while the N\'eel state is insulating,
we expect the transport properties of the model in the
intermediate region to resemble those of a percolating network, a system
which has attracted much attention lately \cite{SA97,SAK99}.\\
Finally, our study does not exclude the existence of other non magnetic
instabilities, most notably d-wave superconductivity. This can be,
however, a low energy phenomenon, so that the main magnetic properties
at intermediate energies or temperatures are well described
by the study presented here.
\vspace{1cm}
We thank R. Markiewicz for a critical reading of the
manuscript with very useful comments.
Conversations held with R. Hlubina, E. Louis, and M. P. L\'opez Sancho
are also gratefully acknowledged. This work has been supported by the
CICYT, Spain, through grant PB96-0875 and by CAM, Madrid, Spain.
\newpage
\section{Introduction}
L\'evy processes form the prototype of continuous-time processes with a continuous diffusion and a jump part. In applications, there is a high interest to disentangle these parts based on discrete observations. While A\"it-Sahalia and Jacod \cite{AJ} among many others propose an asymptotically (as the observation distances become smaller) consistent test on the presence of jumps for general semimartingale models, Neumann and Rei\ss\ \cite{NR} argue that, already inside the class of $\alpha$-stable processes with $\alpha\in(0,2]$, no uniformly consistent test exists. The subtle, but important, difference is the uniformity over the class of processes. Mathematically, the difference is that on the Skorokhod path space $D([0,T])$ $\alpha$-stable processes, $\alpha\in(0,2)$, induce laws singular to that of Brownian motion ($\alpha=2$), while their respective marginals at $t_k=kT/n$, $k=0,\ldots,n$ for $n$ fixed, have equivalent laws, which even converge in total variation distance as $\alpha\to 2$ to those of Brownian motion. It is our aim here to shed some light on the geometry of the marginal laws of one-dimensional L\'evy processes and to quantify the distance of the marginal laws non-asymptotically as a function of the respective L\'evy characteristics $(b,\sigma^2,\nu)$. The marginals form, of course, infinitely divisible distributions, but we prefer here the process point of view which is sometimes more intuitive.
Let us recall the fundamental result by Gnedenko and Kolmogorov \cite{GK}.
\begin{theorem}[\cite{GK}]
Marginals of L\'evy processes $X^n=(X^n_t)_{t\ge 0}$ with characteristics $(b_n,\sigma_n^2,\nu_n)$ converge weakly to marginals of a L\'evy process $X=(X_t)_{t\ge 0}$ with characteristics $(b,\sigma^2,\nu)$ if and only if
\[ b_n\to b \text{ and } \sigma_n^2\delta_0+(x^2\wedge 1)\nu_n(dx)\xrightarrow{w} \sigma^2\delta_0+(x^2\wedge 1)\nu(dx),\]
where $\delta_0$ is the Dirac measure in $0$ and $\xrightarrow{w}$ denotes weak convergence of finite measures.
\end{theorem}
As a particular example, consider the compound Poisson process with L\'evy measure $\frac{\delta_{-\varepsilon}+\delta_{\varepsilon}}{2\varepsilon^2}$ that has jumps of size $\varepsilon$ and $-\varepsilon$ both at intensity $\frac{1}{2\varepsilon^2}$. Then as $\varepsilon\downarrow 0$, the marginals converge to those of a standard Brownian motion, which can also be derived from Donsker's Theorem. Below, we shall be able to quantify this rate of convergence for general L\'evy processes in terms of the (stronger) $p$-Wasserstein distances $\Wi_p$. The derived Gaussian approximation of the small jump part relies on the fine analysis by Rio \cite{Rio09} of the approximation error in Wasserstein distance for the central limit theorem. This is the subject of Theorem \ref{teo:smalljumps}, of which the following is a simplified statement:
\begin{result*}
Let $X^S(\varepsilon)$ be a Lévy process with characteristics $(0, 0, \nu_\varepsilon)$ where $\nu_\varepsilon$ is a L\'evy measure with support in $[-\varepsilon,\varepsilon]$. Introducing $\bar\sigma^2(\varepsilon) = \int_{-\varepsilon}^{\varepsilon} x^2 \nu_\varepsilon(dx)$, there exists a constant $C$ depending only on $p$ such that:
$$
\Wi_p\big(\Li(X_t^{S}(\varepsilon)),\No(0,t\bar\sigma^2(\varepsilon)) \big) \leq C\min\big(\sqrt t\bar\sigma(\varepsilon) ,\varepsilon\big) \leq C \varepsilon.
$$
\end{result*}
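As a purely illustrative numerical check of this bound (our own sketch, not part of the proof), consider again the introductory example with jumps $\pm\varepsilon$, each at intensity $\frac{1}{2\varepsilon^2}$, so that $\bar\sigma^2(\varepsilon)=1$; the distance $\Wi_1\big(\Li(X_1^{S}(\varepsilon)),\No(0,1)\big)$ can then be estimated by the sorted (quantile) coupling of two large samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def w1_jumps_vs_gaussian(eps, t=1.0, m=100_000):
    """Monte Carlo estimate of W1 between X_t^S(eps) and N(0, t) for the
    compound Poisson process with jumps +-eps, each at intensity 1/(2 eps^2),
    using the sorted-sample (quantile) coupling."""
    n_jumps = rng.poisson(t / eps**2, size=m)   # total number of jumps
    n_plus = rng.binomial(n_jumps, 0.5)         # how many of them are +eps
    x = eps * (2 * n_plus - n_jumps)            # X_t^S(eps)
    g = rng.normal(0.0, np.sqrt(t), size=m)
    return np.mean(np.abs(np.sort(x) - np.sort(g)))

d_coarse, d_fine = w1_jumps_vs_gaussian(0.5), w1_jumps_vs_gaussian(0.1)
# the estimated distance shrinks with eps, in line with the O(eps) bound
```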
A Gaussian approximation of the small jumps of Lévy processes has already been employed, for example when simulating trajectories of Lévy processes with infinite Lévy measure (see e.g. \cite{tankov}).
The above result is actually an intermediate step for the more general Corollary \ref{res:tensorizationlevy}, that bounds the $p$-Wasserstein distance $\Wi_p$ in $\ell^r(\R^n)$ as follows:
\begin{result*}
Let $X^j$, $j=1,2$, be two Lévy processes with characteristics $(b_j,\sigma_j^2,\nu_j)$, $j=1,2$. Then for all $\varepsilon\geq 0$, $T>0$ and $n\in\N$ we have
\begin{align*}
\Wi_p\Big(&(X_{kT/n}^1-X_{(k-1)T/n}^1)_{k=1}^n,(X_{kT/n}^2-X_{(k-1)T/n}^2)_{k=1}^n\Big)\\
&\leq T n^{\frac{1}{r}-1} \big|b_1(\varepsilon)-b_2(\varepsilon)\big|
+ T^{1/2} n^{\frac{1}{r}-\frac12}\big|\sigma_1+\bar\sigma_1(\varepsilon)-\sigma_2-\bar\sigma_2(\varepsilon)\big|\\
&\quad +
C\sum_{j=1}^2\min\big(T^{1/2}n^{\frac{1}{r}-\frac12}\bar\sigma_j(\varepsilon),n^{\frac{1}{r}}\varepsilon\big)
+n^{\frac{1}{r}}\Wi_p\big(X_{T/n}^{1,B}(\varepsilon),X_{T/n}^{2,B}(\varepsilon)\big),
\end{align*}
where $b_j(\varepsilon):=b_j-\int_{\varepsilon<|x|\leq 1}x\nu_j(dx)$, $\bar\sigma_j^2(\varepsilon):=\int_{|x|\leq\varepsilon}x^2\nu(dx)$ and $C$ is a constant depending only on $p$. The term $\Wi_p(X_{T/n}^{1,B}(\varepsilon),X_{T/n}^{2,B}(\varepsilon))$, involving the jumps larger $\varepsilon$, can be bounded as in Theorem \ref{teo:CPP}.
\end{result*}
Sometimes we can even obtain bounds on the total variation distance, which for statistical purposes, especially testing, is particularly meaningful. The currently available bound in the literature is by Liese \cite{liese}.
\begin{theorem}[{\cite[Cor. 2.7]{liese}}]
For L\'evy processes $X^1$ and $X^2$ with characteristics $(b_1,\sigma_1^2,\nu_1)$ and $(b_2,\sigma_2^2,\nu_2)$, respectively, introduce the squared Hellinger distance of the L\'evy measures (put $\nu_0=\nu_1+\nu_2$):
\[ H^2(\nu_1,\nu_2):= \int_{\R} \bigg(\sqrt{\frac{d\nu_1}{d\nu_0}(x)}-\sqrt{\frac{d\nu_2}{d\nu_0}(x)}\bigg)^2\nu_0(dx).
\]
Then the total variation distance between the laws of $X_t^1$ and $X_t^2$ is bounded as:
\begin{align*}&\|\Li(X_t^1)-\Li(X_t^2)\|_{TV}\\
&\leq 2\sqrt{1-\Big(1-\frac12 H^2\Big(\No(\tilde b_1t,\sigma_1^2t),\No(\tilde b_2t,\sigma_2^2t)\Big)\Big)^2\exp\Big(- t H^2(\nu_1,\nu_2)\Big)}
\end{align*}
with $\tilde b_1=b_1-\int_{-1}^1x\nu_1(dx)$, $\tilde b_2=b_2-\int_{-1}^1x\nu_2(dx)$.
\end{theorem}
Note that the bound is very loose or even trivial in the case $\nu_2=0$ and $\lambda_1=\nu_1(\R)>1/t$ because then $tH^2(\nu_1,\nu_2)=t\lambda_1>1$. So, this bound does not allow one to deduce a total variation approximation of Brownian motion by jump processes of infinite jump activity like $\alpha$-stable processes with $\alpha\uparrow 2$. In fact, for pure jump L\'evy processes these bounds are analogous to the bounds by Mémin and Shiryayev \cite{MS} in the path space $D([0,T])$, where pure jump processes and Brownian motion have singular laws (for other results on distances on $D([0,T])$ see e.g. \cite{etore14, g13,jacod,Ku}). Our main idea is to use the convolutional structure of the laws to transfer bounds from Wasserstein to total variation distance. This strategy is implemented for L\'evy processes with a non-zero Gaussian component (but without any restriction on the Lévy measures, which can be infinite, and even with infinite variation) and yields Theorem \ref{th:MainTV}:
\begin{result*}
For L\'evy processes $X^1$ and $X^2$ with characteristics $(b_j,\sigma_j^2,\nu_j)$ and $\sigma_j>0$, $j=1,2$, we have for all $t>0$, $\varepsilon\in [0,1]$:
\begin{align*}
\big\|\Li(X_t^1)-\Li(X_t^2)&\big\|_{TV} \leq \frac{\sqrt{\frac{t}{2\pi}} \Big|b_1(\varepsilon)-b_2(\varepsilon)\Big|+\sqrt 2\Big|\sqrt{\sigma_1^2+\bar\sigma_1^2(\varepsilon)}-\sqrt{\sigma_2^2+\bar\sigma_2^2(\varepsilon)}\Big|}{\sqrt{\sigma_1^2+\bar\sigma_1^2(\varepsilon)}\vee \sqrt{\sigma_2^2+\bar\sigma_2^2(\varepsilon)}}\\
&\quad+ \sum_{j=1}^2\sqrt{\frac{2}{\pi t \sigma_j^2}}
\min\Big(2\sqrt{t\bar \sigma_j^2(\varepsilon)}, \frac{\varepsilon}{2}\Big)\\
&\quad
+t\big|\lambda_1(\varepsilon)-\lambda_2(\varepsilon)\big|+t\big(\lambda_1(\varepsilon)\wedge\lambda_2(\varepsilon)\big)
\bigg\|\frac{\nu_1^\varepsilon}{\lambda_1(\varepsilon)}-\frac{\nu_2^\varepsilon}{\lambda_2(\varepsilon)}\bigg\|_{TV},
\end{align*}
with the above notation, $\nu_j^\varepsilon=\nu_j(\cdot\setminus(-\varepsilon,\varepsilon))$ and $\lambda_j(\varepsilon)=\nu_j^\varepsilon(\R)$.
\end{result*}
The results proven in this paper provide further insight in the geometry of the space of discretely observed Lévy processes. At the same time, their nonasymptotic character finds fruitful applications in nonparametric statistics, when proving general lower bounds in a minimax sense. The technology is shown at work in Section \ref{subsec:JR}, making the original proof by Jacod and Rei\ss\ \cite{JR} for volatility estimation under high activity jumps simpler and much more transparent.
The results are stated in dimension one. After the first version of this paper was completed, however, new results on the non-asymptotic multidimensional central limit theorem in Wasserstein distances have appeared (see e.g. \cite{TB}). Since a (special form of a) central limit theorem was the main technical tool in our proof of Theorem \ref{teo:smalljumps}, this makes a multidimensional extension of our findings a promising future research direction that seems worth investigating. Another potentially fruitful line of research would be to go beyond the independence structure of the increments and consider the general framework of semimartingales. Lévy processes are the basic building blocks for these more general processes and it is common to use this easier setting as a first step towards a more general proof; however, the techniques that were used in this paper heavily depend on the independence structure and do not directly extend to this more general framework.
The paper is organized as follows. In Section 2 we review basic properties of the Wasserstein distances and discuss their relationship with the Zolotarev and Toscani-Fourier distances. Then we recall the main non-asymptotic bounds for the Wasserstein distances in the CLT and introduce L\'evy processes.
Section 3 derives bounds between marginals of L\'evy processes in Wasserstein distance. The main focus is on the small jump part, which is treated in Theorem \ref{teo:smalljumps} and for which the tightness of the bounds is discussed in detail, first for concrete examples and then more generally using a lower bound via the Toscani-Fourier distance. Main results are presented in Section 3.3. Section 4 introduces properties of the total variation distance and then shows how bounds in Wasserstein or Toscani-Fourier distance transfer under convolution to total variation bounds, see e.g. Proposition \ref{prop:TV} and Proposition \ref{prop:CTMR}. For Gaussian convolutions the different bounds are first compared and then applied to the marginals of L\'evy processes. Section 5 is devoted to the application of the total variation bounds for proving the minimax-optimality of integrated volatility estimators in the presence of jumps proposed in \cite{JR}.
\section{Preliminaries}
\subsection{The Wasserstein distances}
Let $(\mathcal X,d)$ be a Polish metric space. Given $p\in[1,\infty)$, let $\mathcal P_p(\X)$ denote the space of all Borel probability measures
$\mu$ on $\X$ such that the moment bound
\begin{equation*}
\E_\mu[d(X,x_0)^p]<\infty
\end{equation*}
holds for some (and hence all) $x_0\in \X$.
\begin{definition}
Given $p\geq 1$, for any two probability measures $\mu,\nu\in\mathcal P_p(\X)$, the \emph{Wasserstein distance of order $p$}
between $\mu$ and $\nu$ is defined by
\begin{equation}\label{eq:defw}
\Wi_p(\mu,\nu)=\inf\Big\{\big[\E [d(X',Y')^p]\big]^{\frac{1}{p}},\ \Li(X')=\mu, \ \Li(Y')=\nu\Big\},
\end{equation}
where the infimum is taken over all random variables $X'$ and $Y'$ having laws $\mu$ and $\nu$, respectively. We abbreviate $\Wi_p(X,Y)=\Wi_p(\Li(X),\Li(Y))$ for random variables $X,Y$ with laws $\Li(X),\Li(Y)\in\mathcal P_p(\X)$.
\end{definition}
The following lemma introduces some properties of the Wasserstein distances that we will use throughout the paper. For a proof, the reader is
referred to \cite{villani09}, Chapter 6.
\begin{lemma}\label{lemma:w1}
The Wasserstein distances have the following properties:
\begin{enumerate}[(1)]
\item For all $p\geq 1$, $\Wi_p(\cdot,\cdot)$ is a metric on $\mathcal P_p(\X)$.
\item If $1\leq p\leq q$, then $\mathcal P_q(\X)\subseteq\mathcal P_p(\X)$, and $\Wi_p(\mu,\nu)\leq \Wi_q(\mu,\nu)$ for every
$\mu,\nu\in\mathcal P_q(\X)$.
\item Given a sequence $(\mu_n)_{n\geq 1}$ and a probability measure $\mu$ in $\mathcal P_p(\X)$
$$\lim_{n\to\infty}\Wi_p(\mu_n,\mu)=0$$
if and only if $(\mu_n)_{n\geq 1}$ converges to $\mu$ weakly and for some (and hence all) $x_0\in\X$
$$\lim_{n\to\infty}\int_{\X}d(x,x_0)^p\mu_n(dx)=\int_{\X}d(x,x_0)^p\mu(dx).$$
\item \label{eq:coupling} The infimum in \eqref{eq:defw} is actually a minimum; i.e., there exists a pair $(X^*,Y^*)$ of jointly distributed $\X$-valued random
variables with $\Li(X^*)=\mu$ and $\Li(Y^*)=\nu$, such that
\begin{equation*}
\Wi_p(\mu,\nu)^p=\E[d(X^*,Y^*)^p].
\end{equation*}
\end{enumerate}
\end{lemma}
Following the terminology used in \cite{zolbook}, we can say that the Wasserstein distances are \emph{ideal metrics} since they possess the following
two properties.
\begin{lemma}\label{lemma:proprieta}
Let $\X$ be a separable Banach space. For any three $\X$-valued random variables $X,Y, Z$, with $Z$ independent of $X$ and $Y$, the inequality
$$\Wi_p(X+Z,Y+Z)\leq\Wi_p(X,Y)$$
holds. Furthermore, for any real constant $c$, we have
\begin{equation}\label{eq:hom}
\Wi_p(cX,cY)=|c|\Wi_p(X,Y).
\end{equation}
\end{lemma}
\begin{proof}
Lemma \ref{lemma:w1} guarantees the existence of two random variables $X^*,Y^*$, independent of $Z$, such that
$$\Wi_p(X,Y)=\big(\E[ d(X^*,Y^*)^p]\big)^{1/p}.$$ We have:
\begin{align*}
\Wi_p(X+Z,Y+Z)&\leq\big(\E\big[d(X^*+Z,Y^*+Z)^p\big]\big)^{\frac{1}{p}}=\big(\E\big[d(X^*,Y^*)^p\big]\big)^{\frac{1}{p}}=\Wi_p(X,Y).
\end{align*}
The equality \eqref{eq:hom} follows by homogeneity of the expectation.
\end{proof}
An immediate corollary of Lemma \ref{lemma:proprieta} is the subadditivity of the metric $\Wi_p$ under independence (or equivalently under convolution of laws).
\begin{corollary}\label{cor:subadd}
If $X_1,\dots,X_n$ are independent random variables as well as $Y_1,\dots,Y_n$, then
$$\mathcal W_p(X_1+\dots +X_n, Y_1+\dots +Y_n)\leq \sum_{i=1}^n\mathcal W_p(X_i,Y_i).$$
\end{corollary}
\begin{proof}
By induction, it suffices to prove the case $n=2$.
Let $\tilde X_2$ be a random variable equal in law to $X_2$ and independent of $Y_1$ and of $X_1$. By means of Lemma \ref{lemma:proprieta} we have
\begin{align*}
&\mathcal W_p(Y_1+\tilde X_2, Y_1+Y_2)\leq \mathcal W_p(\tilde X_2,Y_2)=\mathcal W_p(X_2,Y_2),\\
&\mathcal W_p(X_1+ X_2, Y_1+\tilde X_2)\leq \mathcal W_p(X_1, Y_1).
\end{align*}
Hence, by triangle inequality $\mathcal W_p(X_1+X_2, Y_1+Y_2)\leq \mathcal W_p(X_1,Y_1)+\mathcal W_p(X_2,Y_2)$ follows.
\end{proof}
A useful property of the Wasserstein distances is their good behaviour with respect to products of measures.
\begin{lemma}[Tensorisation]\label{property:tensorization}
Let $(\X,d)$ be $\R^n$ endowed with the distance
$d(x,y)=(\sum_{i=1}^n{|x_i-y_i|}^r)^{1/r}$, $r \geq 1$, for all $x=(x_1,\dots,x_n)$, $y=(y_1,\dots,y_n)$ and let $\mu=\bigotimes_{i=1}^n\mu_i$
and $\nu=\bigotimes_{i=1}^n\nu_i$ be two product measures on $\R^n$.
Then,
\begin{equation*}
\Wi_p(\mu, \nu)^p \leq \max(n^{\frac{p}{r}-1},1)\sum_{i=1}^n \Wi_p(\mu_i, \nu_i)^p.
\end{equation*}
In the special case where $\mu_1=\dots=\mu_n$ and $\nu_1=\dots=\nu_n$, the following stricter inequality holds:
$$
\Wi_p(\mu, \nu)^p \leq n^{\frac{p}{r}} \Wi_p(\mu_1,\nu_1)^p.
$$
\end{lemma}
\begin{proof}
Thanks to Point \eqref{eq:coupling} in Lemma \ref{lemma:w1}, we can always find two random vectors $X^{*,n}=(X_1^*,\dots,X_n^*)$,
$Y^{*,n}=(Y_1^*,\dots,Y_n^*)$
with independent coordinates such that $\mu_i=\Li(X_i^*)$, $\nu_i=\Li(Y_i^*)$ and
$\Wi_p(\mu_i,\nu_i)=\E[|X_i^*-Y_i^*|^p]^{1/p}$. In particular, we have:
\begin{align}\label{eq:prodotto}
\Wi_p(\mu,\nu)^p&\leq\E\big[ d((X_i^*)_i,(Y_i^*)_i)^p\big]=
\E \bigg[\bigg(\sum_{i=1}^n|X_i^*-Y_i^*|^r\bigg)^{p/r}\bigg].
\end{align}
If $p\geq r$, by means of the elementary inequality $(z_1+\dots +z_n)^q\leq n^{q-1}(z_1^q+\dots+z_n^q)$, $q\geq 1$, we deduce from \eqref{eq:prodotto} that
$$\Wi_p(\mu,\nu)^p\leq n^{p/r-1}\sum_{i=1}^n\E[|X_i^*-Y_i^*|^p]=n^{p/r-1}\sum_{i=1}^n\Wi_p(\mu_i,\nu_i)^p.$$
Similarly, if $p<r$, the proof follows by the inequality $(|z_1|+\dots+|z_n|)^{1/q}\leq|z_1|^{1/q}+\dots+|z_n|^{1/q}$, $q\geq 1$.
In the case where $\mu_1=\dots=\mu_n$ and $\nu_1=\dots=\nu_n$, one may choose $X_1^*=\dots=X_n^*$ and $Y_1^*=\dots=Y_n^*$. The conclusion readily follows.
\end{proof}
The distance $\Wi_1$ is commonly called the \emph{Kantorovich-Rubinstein distance} and it can be characterized in many different ways.
Some useful properties of the distance $\Wi_1$ are the following.
\begin{proposition}[See \cite{GS}]\label{pro:lip}
Let $X$ and $Y$ be integrable real random variables. Denote by $\mu$ and $\nu$ their laws and by $F$ and $G$ their cumulative distribution functions, respectively. Then the following characterizations of the Wasserstein distance of order $1$ hold:
\begin{enumerate}
\item $\displaystyle{\Wi_1(X,Y)=\int_{\R}|F(x)-G(x)|dx},$
\item $\displaystyle{\Wi_1(X,Y)=\int_0^1|F^{-1}(t)-G^{-1}(t)|dt},$
\item $\Wi_1(X,Y)=\sup_{\|\psi\|_{\textnormal{Lip}}\leq 1}\bigg(\int_\R\psi d\mu-\int_\R\psi d\nu\bigg),$
the supremum being taken over all $\psi$ satisfying the Lipschitz condition $|\psi(x)-\psi(y)|\leq |x-y|$, for all $x,y\in\R$. This property is generally called Kantorovich-Rubinstein formula.
\end{enumerate}
\end{proposition}
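The first two characterizations can be cross-checked numerically on empirical measures, for which the quantile functions in characterization (2) are simply the sorted samples. A small sketch (toy atoms, chosen by us):

```python
import numpy as np

def w1_quantile(x, y):
    """Characterization (2): for empirical measures with equally many atoms,
    the quantile functions are the sorted samples."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def w1_cdf(x, y):
    """Characterization (1): area between the two empirical CDFs."""
    grid = np.sort(np.concatenate([x, y]))
    total = 0.0
    for a, b in zip(grid[:-1], grid[1:]):
        total += abs(np.mean(x <= a) - np.mean(y <= a)) * (b - a)
    return total

x, y = np.array([0.0, 1.0, 2.0]), np.array([1.0, 2.0, 4.0])
# both characterizations give W1 = 4/3 for these two empirical measures
```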
\subsection{Wasserstein, Zolotarev and Toscani-Fourier distances}
Let $\mu$, $\nu$ be two probability measures on $\R$ endowed with the distance $d(x,y)=|x-y|$, $x,y\in\R$.
Writing $p>0$ as $p=m+\alpha$ with $m\in\N_0$ and $0<\alpha\leq 1$, denote by $\F_p$ the H\"older class of
real-valued bounded functions $f$ on $\R$ which are $m$-times differentiable with
$$\big|f^{(m)}(x)-f^{(m)}(y)\big|\leq |x-y|^\alpha.$$
\begin{definition}
The \emph{Zolotarev distance} $Z_p$ between $\mu$ and $\nu$ is defined by
\begin{equation*}
Z_p(\mu,\nu)=\sup_{f\in\F_p}\bigg(\int_\R fd\mu-\int_\R fd\nu\bigg).
\end{equation*}
\end{definition}
\begin{remark}
It is easy to see that the functional $Z_p$ is a metric. For $p=0$ the metric $Z_p$ is defined by the relation $Z_0=\lim_{p\to0}Z_p$ and
$\F_0$ is the set of Borel functions satisfying the condition $|f(x)-f(y)|\leq \I_{x\neq y}$. Thanks to the characterisation of the total variation
given in Property \ref{property:tv} below, it follows that $Z_0(\mu,\nu)=\|\mu-\nu\|_{TV}$. Also, by means of the Kantorovich-Rubinstein formula, recalled in Proposition \ref{pro:lip}, we have
$Z_1(\mu,\nu)=\Wi_1(\mu,\nu).$
\end{remark}
The following result shows that the Wasserstein distance of order $p$ is bounded by the $p$-th root of the Zolotarev distance $Z_p$. This fact, together with Theorem \ref{teo:zolotarevCPP} below, will be a useful tool to control the Wasserstein distances between the increments of compound Poisson processes.
\begin{theorem}[See \cite{Rio09}, Theorem 3.1]\label{teo:Rio09}
For any $p\geq 1$ there exists a positive constant $c_p$ such that for any pair $(\mu,\nu)$ of laws on the real line with finite absolute moments
of order $p$
$$\big(\Wi_p(\mu,\nu)\big)^p\leq c_pZ_p(\mu,\nu).$$
\end{theorem}
\begin{theorem}[See \cite{zolbook}, Theorem 1.4.3]\label{teo:zolotarevCPP}
Let $(X_i)_{i\geq 1}$ and $(Y_i)_{i\geq 1}$ be sequences of independent random variables and $N$ be an integer-valued random variable independent of
the random variables from both sequences. Then,
\begin{equation*}
Z_p\bigg(\sum_{i=1}^N X_i, \sum_{i=1}^N Y_i\bigg)\leq \sum_{k=1}^{\infty}\p(N\geq k)Z_p(X_k,Y_k).
\end{equation*}
\end{theorem}
\begin{theorem}[See \cite{zolbook}, Theorem 1.4.2.]
Let $X$ and $Y$ be integrable real random variables with laws $\mu$ and $\nu$, respectively. Then the following characterization of the Zolotarev distance holds: for any $p\geq 1$
\begin{equation*}
Z_p(X,Y)=\int \bigg|\int_{-\infty}^x \frac{(x-u)^{p-1}}{\Gamma(p)}(\mu-\nu)(du)\bigg|dx,
\end{equation*}
where $\Gamma$ denotes the Gamma function.
\end{theorem}
Let $P_1$ and $P_2$ be two probability measures on the real line. We will denote by $\varphi_1$ (resp. $\varphi_2$) the characteristic function of $P_1$ (resp. $P_2$), i.e.
$$\varphi_1(u)=\int_{\R} e^{iux}P_1(dx).$$
Also, denote by $\Li^b(\R)$ (resp. $\Li^b(\C)$) the class of real-valued (resp. complex-valued) bounded functions on $\R$ with Lipschitz norm bounded by $1$.
\begin{definition}\label{def:toscani}
For $s>0$, the \emph{Toscani-Fourier distance of order} $s$, denoted by $T_s$, is defined as:
$$T_s(P_1,P_2)=\sup_{u\in \R\setminus\{0\}}\frac{|\varphi_1(u)-\varphi_2(u)|}{|u|^s}.$$
\end{definition}
The distance introduced in Definition \ref{def:toscani} first appeared in \cite{GT}, under the name ``Fourier-based metrics'', to study the trend to equilibrium for solutions of the space-homogeneous Boltzmann equation for Maxwellian molecules. After that, it has been used in several other works, especially in connection with kinetic theory; see \cite{CT07} for an overview. In \cite{villani09}, $T_2$ is called the ``Toscani distance''.
\begin{proposition}\label{prop:toscani}
For all $p\geq 1$
$$\Wi_p(P_1,P_2)\geq \frac{1}{\sqrt 2}T_1(P_1,P_2).$$
\end{proposition}
\begin{proof}
Thanks to Lemma \ref{lemma:w1} and Proposition \ref{pro:lip}
\begin{align*}
\Wi_p(P_1,P_2)&\geq \Wi_1(P_1,P_2)=\sup_{\psi \in \Li^b(\R)}\bigg(\int_\R\psi dP_1-\int_\R\psi dP_2\bigg)\\
&\geq \frac{1}{\sqrt 2}\sup_{\psi\in \Li^b(\C)}\bigg|\int_\R\psi dP_1-\int_\R\psi dP_2\bigg|.
\end{align*}
For all $u\in \R\setminus\{0\}$, let us consider the function $\Psi_u(x)=\frac{e^{iux}}{u}$ and observe that the Lipschitz norm of $\Psi_u$ is $1$. It immediately follows that
$$\sup_{\psi\in \Li^b(\C)}\bigg|\int_\R\psi dP_1-\int_\R\psi dP_2\bigg|\geq \sup_{u\in\R\setminus\{0\}}\bigg|\int_\R\Psi_u dP_1-\int_\R\Psi_u dP_2\bigg|= T_1(P_1,P_2).$$
\end{proof}
\subsection{Wasserstein distances in the central limit theorem}\label{CLT}
The class of Wasserstein metrics proves to be very useful in estimating the convergence rate in the central limit theorem. We recall some results.
Let $(Y_i)_{i\geq 1}$ be a sequence of centred i.i.d. random variables with finite and positive variance $\sigma^2$. We denote by
$\mu_n$ the law of $\frac{1}{\sqrt{n\sigma^2}}\sum_{i=1}^n Y_i$.
For i.i.d. centred random variables with finite absolute third moment, Esseen \cite{esseen} proved the following result.
\begin{theorem}[See e.g. \cite{petrov}, Theorem 16]\label{teo:zolotarev}
For any $n\geq 1$,
$$\Wi_1\big(\mu_n,\No(0,1)\big)\leq \frac{1}{2\sqrt n} \frac{\E|Y_1|^3}{\big(\var[Y_1]\big)^{3/2}}.$$
The constant $\frac{1}{2}$ in this inequality cannot
be improved.
\end{theorem}
A bound for the Wasserstein distances of order $r\in(1,2]$ is due to Rio \cite{Rio09}:
\begin{theorem}[See \cite{Rio09}, Theorem 4.1]\label{teo:rio}
For any $n\geq 1$ and any $r\in(1,2]$, there exists some positive constant $C$ depending only on $r$ such that
$$\Wi_r(\mu_n,\No(0,1))\leq C \frac{\Big(\E\big[|Y_1|^{r+2}\big]\Big)^{1/r}}{\sqrt n\big(\var[Y_1]\big)^{\frac{r+2}{2r}}} .$$
\end{theorem}
For $r>2$ and i.i.d. random variables with a finite absolute moment of order $r$, we have the following:
\begin{theorem}[See \cite{sak}]\label{teo:sak}
For any $n\geq 1$ and $r>2$, there exists some positive constant $C$, depending only on $r$, such that
$$\Wi_r(\mu_n,\No(0,1))\leq C \frac{\Big(\E\big[|Y_1|^{r}\big]\Big)^{1/r}}{\sqrt{\var[Y_1]}}n^{\frac{1}{r}-\frac{1}{2}}.$$
\end{theorem}
If one only assumes a finite absolute moment of order $r$, this rate cannot be improved. In particular, under this assumption, the classical rate of convergence $\frac{1}{\sqrt n}$ cannot be recovered for $r>2$. For that reason, from now on, we will only focus on the case $r\in[1,2]$.
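The $1/\sqrt n$ rate for $\Wi_1$ can also be observed numerically. The sketch below (our illustration, not part of the theory) takes centred $\mathrm{Exp}(1)$ summands, for which the partial sums are Gamma distributed and $\E|Y_1|^3=\frac{12-2e}{e}\approx 2.41$, and estimates $\Wi_1(\mu_n,\No(0,1))$ by the sorted coupling of large samples:

```python
import numpy as np

rng = np.random.default_rng(1)

def w1_clt(n, m=200_000):
    """Monte Carlo estimate of W1 between the standardized sum of n i.i.d.
    centred Exp(1) variables and N(0,1); the sum of n Exp(1)'s is Gamma(n, 1)."""
    z = (rng.gamma(shape=n, size=m) - n) / np.sqrt(n)  # mean n, variance n
    g = rng.standard_normal(m)
    return np.mean(np.abs(np.sort(z) - np.sort(g)))

d25, d400 = w1_clt(25), w1_clt(400)
# d25 is roughly four times d400 (rate 1/sqrt(n)), and both stay below
# Esseen's bound E|Y_1|^3 / (2 sqrt(n))
```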
\subsection{Lévy processes}\label{sec:notationlevy}
Let us denote by $P_t^{(b,\sigma,\nu)}$ the marginal law at time $t\ge 0$ of a Lévy process $X$ with characteristics $(b,
\sigma^2,\nu)$, i.e. (see Theorem 8.1 in \cite{sato})
\begin{align*}
\E\big(e^{iuX_t}\big)&=\exp\bigg(t\bigg(iub-\frac{u^2\sigma^2}{2}+\int_\R\big(e^{iux}-1-iux\I_{|x|\leq 1}\big)
\nu(dx)\bigg)\bigg)\\
&=\exp\bigg(t\bigg(iub(\varepsilon)-\frac{u^2\sigma^2}{2}+\int_\R\big(e^{iux}-1-iux\I_{|x|\leq \varepsilon}\big)
\nu(dx)\bigg)\bigg),
\end{align*}
where $b(\varepsilon):=b-\int_{\varepsilon< |x|\leq 1}x\nu(dx)$, for all $\varepsilon\in(0,1]$. Equivalently,
$P_t^{(b,\sigma,\nu)}$ denotes the infinitely divisible law with characteristics $(bt,
\sigma^2t,\nu t)$.
$X$ can be characterised via the Lévy-Itô decomposition (see \cite{sato}), that is via a canonical representation with independent components
$X=X^{(1)}+X^{(2)}(\varepsilon)+X^{S}(\varepsilon)+X^{B}(\varepsilon)$: For all $\varepsilon\in(0,1]$
\begin{align*}
X_t&=\sigma W_t+b(\varepsilon)t+ \lim_{\eta\to 0}\bigg(\sum_{0< s\leq t}\Delta X_s\I_{(\eta,\varepsilon]}(|\Delta X_s|)-t
\big(b(\varepsilon)-b(\eta)\big)\bigg)\\
&\quad +\sum_{0< s\leq t}\Delta X_s\I_{(\varepsilon,+\infty)}(|\Delta X_s|),
\end{align*}
where $W$ is a standard Brownian motion, $\Delta X_s:=X_s-\lim_{r\uparrow s}X_r$ is the jump at time $s$ of $X$, $X^{S}(\varepsilon)$ is a pure jump martingale containing only small jumps and $X^{B}(\varepsilon)$ is a finite variation part containing jumps larger in absolute value than $\varepsilon$. Thus
$X^{B}(\varepsilon)$ is a compound Poisson process with intensity $\lambda_\varepsilon:=\nu(\R\setminus(-\varepsilon,\varepsilon)
)$ and jump distribution $F_{\varepsilon}(dx)=\frac{\nu(dx)}{\nu(\R\setminus(-\varepsilon,\varepsilon))}\I_{(\varepsilon,+\infty)}(|x|)$.
In the following we will sometimes write $X^{B}_t(\varepsilon)$ as $\sum_{i=1}^{N_t}Y_i$, where $N$ is a Poisson process of intensity $\lambda_\varepsilon$ independent of the sequence $(Y_i)_{i\geq 1}$ of i.i.d. random variables with distribution $F_\varepsilon.$
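As a numerical aside (not part of the argument), the compound Poisson representation $X^{B}_t(\varepsilon)=\sum_{i=1}^{N_t}Y_i$ can be simulated directly once one can sample $N_t\sim\mathrm{Poisson}(\lambda_\varepsilon t)$ and the jumps $Y_i\sim F_\varepsilon$. The sketch below uses a hypothetical symmetric two-point jump law $Y_i=\pm1$, for which $\E[X^{B}_t(\varepsilon)]=0$ and $\var[X^{B}_t(\varepsilon)]=\lambda_\varepsilon t$.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's method: count uniform draws until their product drops below e^{-lam}
    threshold = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

def compound_poisson(t, lam, sample_jump, rng):
    # X_t^B = sum_{i=1}^{N_t} Y_i with N_t ~ Poisson(lam * t), Y_i i.i.d.
    n = sample_poisson(lam * t, rng)
    return sum(sample_jump(rng) for _ in range(n))

rng = random.Random(0)
lam, t = 5.0, 1.0
jump = lambda r: 1.0 if r.random() < 0.5 else -1.0  # hypothetical jump law: +-1 w.p. 1/2
samples = [compound_poisson(t, lam, jump, rng) for _ in range(50_000)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean ** 2
# theory: mean = lam * t * E[Y_1] = 0 and variance = lam * t * E[Y_1^2] = 5
```

The empirical mean and variance of the Monte Carlo sample should match the theoretical values up to statistical error.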
Also, for a given Lévy process $X$ we define an auxiliary characteristic $\bar \sigma:\R_+\to\R$ capturing the variance induced by small jumps:
\begin{equation*}
\bar \sigma^2(\varepsilon):=\int_{|x|\leq\varepsilon}x^2\nu(dx).
\end{equation*}
\section{Wasserstein distances for Lévy processes}\label{sec:uppb}
Let $X^j$, $j=1,2$, be two Lévy processes with characteristics $(b_j,\sigma_j^2,\nu_j)$. As we will see later, thanks to Corollary \ref{cor:subadd} and the Lévy-Itô decomposition, in order to control $\Wi_p(X_t^1,X_t^2)$ it is enough to separately control the Wasserstein distance between two Gaussian random variables as well as $\Wi_p\big(\Li(X_{t}^{j,S}(\varepsilon)),\No\big(0,t\bar \sigma_j^2(\varepsilon)\big)\big)$ and $\Wi_p\big(X_{t}^{1,B}(\varepsilon), X_{t}^{2,B}(\varepsilon)\big)$. A bound for the Wasserstein distance between Gaussian distributions is given by:
\begin{lemma}\label{lemma:gaussiane}[See \cite{Givens1984}, Prop. 7]
$$\Wi_2(\No(m_1,\sigma_1^2),\No(m_2,\sigma_2^2))=\sqrt{(m_1-m_2)^2+(\sigma_1-\sigma_2)^2}.$$
\end{lemma}
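Since for real-valued laws $\Wi_p$ coincides with the $L^p((0,1))$ distance between the quantile functions, the closed form in Lemma \ref{lemma:gaussiane} is easy to check numerically. The following sketch (an illustration only, relying on Python's standard-library \texttt{statistics.NormalDist}) approximates the quantile-coupling integral by a midpoint rule.

```python
import math
from statistics import NormalDist

def w2_gaussians_numeric(m1, s1, m2, s2, grid=20_000):
    # In dimension one, W_2(mu, nu)^2 = int_0^1 (F_mu^{-1}(u) - F_nu^{-1}(u))^2 du
    a, b = NormalDist(m1, s1), NormalDist(m2, s2)
    total = 0.0
    for k in range(grid):
        u = (k + 0.5) / grid          # midpoint rule on (0, 1)
        total += (a.inv_cdf(u) - b.inv_cdf(u)) ** 2
    return math.sqrt(total / grid)

m1, s1, m2, s2 = 0.3, 1.0, -0.2, 1.7  # arbitrary illustrative parameters
closed_form = math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)   # formula of the lemma
numeric = w2_gaussians_numeric(m1, s1, m2, s2)
```

The quadrature error comes only from the truncated Gaussian tails, so the two values agree to a few decimal places.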
Upper bounds for $\Wi_p\big(\Li(X_{t}^{j,S}(\varepsilon)),\No\big(0,t\bar \sigma_j^2(\varepsilon)\big)\big)$ and $\Wi_p\big(X_{t}^{1,B}(\varepsilon), X_{t}^{2,B}(\varepsilon)\big)$ will be the subject of Sections \ref{sss:smalljumps} and \ref{sss:bigjumps}, respectively.
\subsection{Distances between marginals of small jump Lévy processes }\label{sss:smalljumps}
Let $X$ be a Lévy process with Lévy measure $\nu$ and denote by $X^{S}(\varepsilon)$ the Lévy process associated with the small jumps of $X$,
following the notation introduced in Section \ref{sec:notationlevy}.
\begin{theorem}\label{teo:smalljumps}
For any $p\in[1,2]$, there exists a positive constant $C$ such that
\begin{align}
\Wi_p\big(\Li\big(X_t^{S}(\varepsilon)\big),\No(0,t\bar\sigma^2(\varepsilon)) \big)&\leq C\min\bigg(\sqrt t\bar\sigma(\varepsilon) ,
\bigg(\frac{\int_{-\varepsilon}^{\varepsilon}|x|^{p+2}\nu(dx)}{\bar\sigma^2(\varepsilon)}\bigg)^{1/p}\bigg)\nonumber\\
&\leq C\min\Big(\sqrt t\bar\sigma(\varepsilon) ,\varepsilon\Big) \label{eq:sj}.
\end{align}
In particular, for $p=1$ the bound is $\min(2\sqrt t\bar\sigma(\varepsilon),\frac12\varepsilon)$.
\end{theorem}
\begin{remark}
The inequality
$$\Wi_p\big(\Li\big(X_t^{S}(\varepsilon)\big),\No(0,t\bar\sigma^2(\varepsilon)) \big)\leq\Wi_2\big(\Li\big(X_t^{S}(\varepsilon)\big),\No(0,t\bar\sigma^2(\varepsilon)) \big)\leq 2 \sqrt t\bar\sigma(\varepsilon)$$
is clear from the definition of $\Wi_2$, noting that $t\bar\sigma^2(\varepsilon)$ is the second moment of both arguments. The interest of Theorem \ref{teo:smalljumps} lies in the bound
\begin{equation}\label{eq:sjbis}
\Wi_p\big(\Li\big(X_t^{S}(\varepsilon)\big),\No(0,t\bar\sigma^2(\varepsilon)) \big)\leq 2\varepsilon,
\end{equation}
which after renormalisation yields
$$ \Wi_p\bigg(\Li\Big(\frac{X_t^{S}(\varepsilon)}{\sqrt t\bar\sigma(\varepsilon)}\Big),\No(0,1) \bigg)\leq \frac{C\varepsilon}{\sqrt t\bar\sigma(\varepsilon)}.
$$
Thus, not surprisingly in view of the central limit theorem, the Gaussian approximation improves as $t$ grows. Also, whenever $\bar\sigma^2(\varepsilon)$ is much larger than $\varepsilon^2$, a Gaussian approximation is valid, e.g. for $\alpha$-stable processes with $\alpha>0$ and $\varepsilon$ small; see Example \ref{ex:stable} below.
\end{remark}
\begin{remark}\label{rmk:ex}
The upper bound \eqref{eq:sj} gives in general the right order. Indeed,
let us consider for $\varepsilon>0$ the Lévy measure $\nu_\varepsilon=\frac{\delta_{-\varepsilon}+\delta_\varepsilon}{2\varepsilon^2}$
and denote by
$Y(\varepsilon)$ the corresponding (centred) pure jump Lévy process, i.e. $Y_t(\varepsilon)=\varepsilon(N_t^1(\varepsilon)-N_t^2(\varepsilon))$ is the rescaled difference of two independent Poisson processes of intensity
$\lambda=\frac{1}{2\varepsilon^2}$ each. In particular, observe that
$\bar\sigma^2(\varepsilon)=1$ and $\int_{-\varepsilon}^{\varepsilon}|x|^3\nu_\varepsilon(dx)=\varepsilon$.
Let us develop the case $p=1$. Applying the scheme of proof proposed in \cite{Rio09}, see proof of
Theorem 5.1, we show that there exists a constant $K$ such that
$$\Wi_1\big(\Li( Y_t(\varepsilon)),\No(0,t)\big)\geq K\min(\sqrt t,\varepsilon).$$
To see that, we consider the cases where $t\leq \varepsilon^2$ and $t>\varepsilon^2$ separately.
\begin{itemize}
\item $t\leq \varepsilon^2$: From the definition of the Wasserstein distance of order $1$ it follows that
\begin{align*}
\Wi_1\big(\Li( Y_t(\varepsilon)),\No(0,t)\big)&\geq \E[|\No(0,t)|]\p(N_t^1(\varepsilon)+N_t^2(\varepsilon)=0)\\
&=\sqrt{\frac{2t}{\pi}}e^{-\frac{t}{\varepsilon^2}}\geq
\sqrt{\frac{2}{\pi}}\frac{1}{e}\sqrt t.
\end{align*}
\item $t> \varepsilon^2$: Again, by the definition of the Wasserstein distance of order $1$, we find that
\begin{align*}
\Wi_1\big(\Li( Y_t(\varepsilon)), \No(0,t)\big) &\geq \E\Big[\min_{n\in\Z}|\sqrt t N-n\varepsilon |\Big]\\
& \geq \frac{\varepsilon}{4}\p\Big((\sqrt t/\varepsilon) N\in \bigcup_{n\in\Z} [n+1/4,n+3/4]\Big)
\end{align*}
with $N\sim \No(0,1)$. Since in this case $(\sqrt t/\varepsilon) N$ has variance at least one,
there exists a constant $K$ such that
$$\Wi_1\big(\Li( Y_t(\varepsilon)),\No(0,t)\big)\geq K\varepsilon.$$
\end{itemize}
In the case $p\in(1,2]$, $\Wi_p$ is even larger than $\Wi_1$. For the case $p=2$ see also \cite{fournier}.
\end{remark}
\begin{example}\label{ex:stable}
Let us illustrate Theorem \ref{teo:smalljumps} for the class of $\alpha$-stable Lévy processes with a Lévy density proportional to $\frac{1}{|x|^{1+\alpha}}$, $\alpha\in[0,2)$. For all $\varepsilon\in (0,1]$, let us denote by $X^S(\varepsilon)$ the Lévy process describing the small jumps and by $\nu\I_{[-\varepsilon,\varepsilon]}$ its Lévy measure, i.e.
$$X^S(\varepsilon)\sim\big(0,0,\nu\I_{[-\varepsilon,\varepsilon]}\big),\quad \nu(dx)=\frac{C_\alpha}{|x|^{1+\alpha}}dx,$$
for some constant $C_\alpha$. In particular, we have
$$\bar\sigma^2(\varepsilon)=\int_{-\varepsilon}^\varepsilon x^2\nu(dx)=2C_\alpha\frac{\varepsilon^{2-\alpha}}{2-\alpha}.$$
Therefore an application of Theorem \ref{teo:smalljumps} guarantees the existence of a constant $C$, possibly depending on $p$ and $\alpha$, such that:
\begin{equation}\label{eq:alpha}
\Wi_p\Big(\Li\Big(\frac{X_t^S(\varepsilon)}{\bar\sigma(\varepsilon)}\Big),\No(0,t)\Big)\leq C\min\big(\sqrt t,\varepsilon^{\frac{\alpha}{2}}\big),\quad \forall t>0, \ \forall \varepsilon\in(0,1], \ \forall p\in[1,2].
\end{equation}
Equation \eqref{eq:alpha} validates the intuition that the Gaussian approximation of the small jumps improves as the small jumps become more active: indeed, the bound in \eqref{eq:alpha} is smaller when $\alpha$ is larger.
\end{example}
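For a concrete check of the computation above, the closed form $\bar\sigma^2(\varepsilon)=2C_\alpha\varepsilon^{2-\alpha}/(2-\alpha)$ can be compared with a direct quadrature of $\int_{-\varepsilon}^{\varepsilon}x^2\nu(dx)$; the values of $\alpha$, $C_\alpha$ and $\varepsilon$ below are arbitrary choices for illustration.

```python
import math

def sigma_bar_sq_numeric(eps, alpha, c_alpha, n=200_000):
    # midpoint quadrature of int_{-eps}^{eps} x^2 * c_alpha / |x|^{1+alpha} dx
    # = 2 * int_0^{eps} c_alpha * x^{1-alpha} dx  (the integrand is even)
    h = eps / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += c_alpha * x ** (1.0 - alpha)
    return 2.0 * total * h

alpha, c_alpha, eps = 1.5, 1.0, 0.1
closed_form = 2.0 * c_alpha * eps ** (2.0 - alpha) / (2.0 - alpha)
numeric = sigma_bar_sq_numeric(eps, alpha, c_alpha)
rate = eps ** (alpha / 2.0)   # the epsilon-part of the bound in (eq:alpha)
```

The midpoint rule handles the integrable singularity of $x^{1-\alpha}$ at the origin with a small, controllable error.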
Let us now prove Theorem \ref{teo:smalljumps}. For that we need to recall the following lemma:
\begin{lemma} [See \cite{Rusch2002}, Lemma 6.]\label{lemma:momenti}
Let $X$ be a Lévy process with Lévy measure $\nu$. If a Borel function $f:\R\to\R$ satisfies $\int_{|x|\geq 1}f(x)\nu(dx)<\infty$, $\lim_{x\to 0}\frac{f(x)}{x^{2}}=0$ and
$f(x)(|x|^2\wedge1)^{-1}$ is bounded, then
$$\lim_{t\to 0}\frac{1}{t}\E[f(X_t)]=\int f(x)\nu(dx).$$
\end{lemma}
\begin{proof}[Proof of Theorem \ref{teo:smalljumps}]
Let us introduce $n$ random variables defined by
$Y_j=\sqrt n(X^{S}_{tj/n}(\varepsilon)-X^{S}_{t(j-1)/n}(\varepsilon))$. The $Y_j$'s are i.i.d.
centred random variables
with variance equal to $t\bar\sigma^2(\varepsilon)$ and such that $X_{t}^{S}(\varepsilon)=\frac{1}{\sqrt n}\sum_{j=1}^n Y_j$.
An application of Theorems \ref{teo:rio} and \ref{teo:zolotarev} (using the fact that $Y_j$ has the same law as $\sqrt nX_{t/n}^{S}(\varepsilon)$ and the homogeneity
property of the Wasserstein distances stated in Lemma \ref{lemma:proprieta}) gives
\begin{align*}
\Wi_1(\Li\big(X_{t}^{S}(\varepsilon)\big),\No(0,t\bar\sigma^2(\varepsilon))\big)\leq \frac{n\E[|X_{t/n}^{S}(\varepsilon)|^3]}
{2t\bar\sigma^2(\varepsilon)}.
\end{align*}
Let us now argue that
$$\limsup_{t\to 0}\frac{\E[|X_t^S(\varepsilon)|^3]}{t}\leq \int_{|x|<\varepsilon}|x|^3\nu(dx).$$
Indeed, applying Lemma \ref{lemma:momenti} to the family $f(x)=f_R(x)=|x|^3 \I_{[-R, R]}(x)$ for $R>\varepsilon$, we deduce that
$$\lim_{t\to 0}\frac{\E\big[|X_t^S(\varepsilon)|^3\I_{|X_t^S(\varepsilon)|\leq R}\big]}{t}=\int_{|x|<\varepsilon}|x|^3\nu(dx).$$
Thus, using the fact that $\E\big[(X_t^S(\varepsilon))^4\big]=t\int_{|x|<\varepsilon}x^4 \nu(dx)+3t^2\bar\sigma^4(\varepsilon)$, we get
\begin{align*}
\E\big[|X_t^S(\varepsilon)|^3\big]&\le \E\big[|X_t^S(\varepsilon)|^3\I_{|X_t^S(\varepsilon)|\leq R}\big]+\E\big[\big((X_t^S(\varepsilon))^4/R\big)\I_{|X_t^S(\varepsilon)|> R}\big]\\
&\leq \E\big[|X_t^S(\varepsilon)|^3\I_{|X_t^S(\varepsilon)|\leq R}\big]+ \frac{1}{R} \bigg(t \int_{|x|<\varepsilon}x^4 \nu(dx)+3t^2\bar\sigma^4(\varepsilon)\bigg).
\end{align*}
Therefore, for any $R>\varepsilon$,
$$\limsup_{t\to 0}\frac{\E[|X_t^S(\varepsilon)|^3]}{t}\leq \int_{|x|<\varepsilon}|x|^3\nu(dx)+\frac{ \int_{|x|<\varepsilon}x^4 \nu(dx)}{R}.$$
Taking the limit as $R\to\infty$, we conclude.
It follows that
$$\Wi_1\big(\Li\big(X_{t}^{S}(\varepsilon)\big),\No(0,t\bar\sigma^2(\varepsilon))\big)\leq\limsup_{n\to\infty}\frac{n\E[|X_{t/n}^{S}(\varepsilon)|^3]}
{2t\bar\sigma^2(\varepsilon)}\leq \frac{\int_{-\varepsilon}^{\varepsilon}|x|^3\nu(dx)}{2\bar\sigma^2(\varepsilon)}.$$
Moreover, by definition of the Wasserstein distance of order $1$ and denoting by $N$ a centered Gaussian random variable with variance $t\bar\sigma^2(\varepsilon)$, we have
$$\Wi_1(\Li\big(X_{t}^{S}(\varepsilon)\big),\No(0,t\bar\sigma^2(\varepsilon)))\leq \E[|X_{t}^{S}(\varepsilon)|]+\E[|N|]
\leq 2\sqrt{t\bar\sigma^2(\varepsilon)}.$$
We deduce
$$\Wi_1\big(\Li\big(X_{t}^{S}(\varepsilon)\big),\No(0,t\bar\sigma^2(\varepsilon))\big)\leq \min\bigg(2\sqrt{t\bar\sigma^2(\varepsilon)},
\frac{\int_{-\varepsilon}^{\varepsilon}|x|^3\nu(dx)}{2\bar\sigma^2(\varepsilon)}\bigg).$$
Similarly, by means of Theorem \ref{teo:rio}, for $p\in(1,2]$
\begin{align*}
\Wi_p\big(\Li\big(X_{t}^{S}(\varepsilon)\big),\No(0,t\bar\sigma^2(\varepsilon))\big)&\leq \limsup_{n\to\infty}\frac{C}{\sqrt n}\bigg(\frac{\E\big[|\sqrt nX_{t/n}^{S}(\varepsilon)|^{p+2}\big]}
{t\bar\sigma^2(\varepsilon)}\bigg)^{1/p}\\
&\leq C\bigg(\frac{\int_{-\varepsilon}^{\varepsilon}|x|^{p+2}\nu(dx)}{\bar\sigma^2(\varepsilon)}\bigg)^{1/p}
\end{align*}
and also
$$\Wi_p\big(\Li\big(X_{t}^{S}(\varepsilon)\big),\No(0,t\bar\sigma^2(\varepsilon))\big)\leq \Big(\E\big[|X_{t}^{S}(\varepsilon)|^p\big]\Big)^{1/p}+\Big(\E\big[
|N|^p\big]\Big)^{1/p}
\leq 2\sqrt{t\bar\sigma^2(\varepsilon)}.$$
The upper bound \eqref{eq:sj} follows from the fact that $\frac{\int_{-\varepsilon}^{\varepsilon}|x|^{p+2}\nu(dx)}{\bar\sigma^2(\varepsilon)}\leq \varepsilon^p$.
\end{proof}
Theorem \ref{teo:smalljumps} can be used to bound the Wasserstein distances between the increments of the small jumps of two Lévy processes.
\begin{corollary}\label{cor:smalljumps}
For all $\varepsilon\in(0,1]$ let $X^j(\varepsilon)\sim(-\int_{\varepsilon<|x|\leq 1}x\nu_j(dx),\sigma_j^2,\nu_j\I_{[-\varepsilon,\varepsilon]})$ be two Lévy processes with $\bar \sigma_j^2(\varepsilon)\neq 0$ and $\sigma_j\geq 0$, $j=1,2.$ Then, for all $p\in[1,2]$, there exists a constant $C$, only depending on $p$, such that
\begin{align*}
\Wi_p\big(X_t^{1}(\varepsilon),X_t^{2}(\varepsilon)\big)&\leq\sum_{j=1}^2C\min\bigg(\sqrt t\bar\sigma_j(\varepsilon) ,
\bigg(\frac{\int_{-\varepsilon}^{\varepsilon}|x|^{p+2}\nu_j(dx)}{\bar\sigma_j^2(\varepsilon)}\bigg)^{1/p}\bigg)\\&\quad +\sqrt t\,\Big|\sqrt{\bar\sigma_1^2(\varepsilon)+\sigma_1^2}-\sqrt{\bar\sigma_2^2(\varepsilon)+\sigma_2^2}\Big|.
\end{align*}
\end{corollary}
\begin{proof}
This is a consequence of Theorem \ref{teo:smalljumps} and Lemma \ref{lemma:gaussiane}.
\end{proof}
\subsection{Distances between random sums of random variables}\label{sss:bigjumps}
\begin{theorem}\label{teo:CPP}
Let $(X_i)_{i\geq 1}$ and $(Y_i)_{i\geq 1}$ be sequences of i.i.d. random variables with $Y_i\in L_2$ and $N$, $N'$ be two positive integer-valued random variables with $N$ (resp. $N'$) independent
of $(X_i)_{i\geq 1}$ (resp. $(Y_i)_{i\geq 1}$). Then, for $1\le p\le 2$,
\begin{align*}
\Wi_p\bigg(\sum_{i=1}^{N} X_i, \sum_{i=1}^{N'} Y_i\bigg)&\leq \min\bigg( \big(c_p\E[N]Z_p(X_1,Y_1)\big)^{1/p},\E[N^{p}]^{1/p}\Wi_p(X_1,Y_1)\bigg)\\
&\quad +
\Wi_p(N,N')\E\big[|Y_1|^p\big]^{1/p}
\end{align*}
with the constant $c_p$ from Theorem \ref{teo:Rio09}.
\end{theorem}
\begin{proof}
By the triangle inequality,
\begin{align}\label{eq:triangular}
\Wi_p\bigg(\sum_{i=1}^N X_i, \sum_{i=1}^{N'} Y_i\bigg)\leq \Wi_p\bigg(\sum_{i=1}^{N} X_i, \sum_{i=1}^{N''} Y_i\bigg)+
\Wi_p\bigg(\sum_{i=1}^{N''} Y_i, \sum_{i=1}^{N'} Y_i\bigg),
\end{align}
where $N''$ is independent of $(Y_i)_{i\geq 1}$ and with the same law as $N$.
Thanks to Theorems \ref{teo:Rio09} and \ref{teo:zolotarevCPP}, the first summand in \eqref{eq:triangular} is bounded by
$\big(c_p\E[N]Z_p(X_1,Y_1)\big)^{1/p}$. Alternatively, this summand can be estimated via Jensen's inequality combined with the fact that $\Wi_p\big(\sum_{i=1}^{N} X_i, \sum_{i=1}^{N''} Y_i\big)^p\leq \E\big[\big|\sum_{i=1}^{\tilde N} (X_i-Y_i)\big|^p\big]$, where $\tilde N$ is independent of $(X_i,Y_i)_{i\geq 1}$ and $\Li(\tilde N)=\Li(N)=\Li(N'')$, as follows:
\[ \E\bigg[\bigg|\sum_{i=1}^{\tilde N} (X_i-Y_i)\bigg|^p\bigg] \le \E\bigg[\tilde N^{p-1}\sum_{i=1}^{\tilde N} |X_i-Y_i|^p\bigg]\le \E\big[N^{p}\big]\E\big[|X_1-Y_1|^p\big].
\]
Therefore,
\begin{align*}
\Wi_p\bigg(\sum_{i=1}^{N} X_i, \sum_{i=1}^{N''} Y_i\bigg)^p&\leq \inf\bigg\{\E\bigg[\bigg|\sum_{i=1}^{\tilde N} (X_i-Y_i)\bigg|^p\bigg],
\, \tilde N\text{ independent of }(X_i,Y_i)_{i\ge 1}\bigg\}\\
&\leq \E\big[N^{p}\big]\Wi_p(X_1,Y_1)^p.
\end{align*}
To control the second summand, we proceed similarly
\begin{align*}
&\Wi_p\bigg(\sum_{i=1}^{N''} Y_i, \sum_{i=1}^{N'} Y_i\bigg)^p\\
&\leq\inf\bigg\{\E\bigg[\bigg|\sum_{i=1}^{N''} Y_i-\sum_{i=1}^{N'} Y_i\bigg|^p\bigg],
\, N'',N'\text{ independent of }(Y_i)_{i\ge 1}\bigg\}\\
&\leq\E[|Y_1|^p]\Wi_p(N'',N')^p,
\end{align*}
which, by noting $\Li(N'')=\Li(N)$, concludes the proof.
\end{proof}
In the preceding theorem one term is bounded alternatively by the Zolotarev or the Wasserstein distance between $X_1$ and $Y_1$. The difference is the factor in front which is either the first or the $p$th moment of $N$. If $N$ is likely to be large, then better bounds can be obtained by profiting from the variance stabilisation for centred sums. Since the larger jumps are not our main issue, this is not pursued further.
In the Poisson case the moments and the Wasserstein distances can be easily analysed.
\begin{proposition}\label{prop:wpoisson}
Let $N$ and $N'$ be two Poisson random variables of mean $\lambda$ and $\lambda'$, respectively. Let us denote by
$m_{(p,\ell)}$ the moment of order $p$ of a Poisson random variable of mean $\ell$, i.e.
$$m_{(p,\ell)}:=\sum_{i=1}^p\ell^i{p\brace i}, \text{ where } {p\brace i}:=\frac{1}{i!}\sum_{j=0}^{i}(-1)^{i-j}{i\choose j}j^p.$$
Then the following upper
bound holds for $p\geq1:$
\begin{align*}
\Wi_p(N,N')^p\leq m_{(p,|\lambda-\lambda'|)}.
\end{align*}
In particular,
\begin{align}
\Wi_1(N,N')&\leq |\lambda-\lambda'| \label{eq:wass1poisson}\\
\Wi_p(N,N')^p&\leq |\lambda-\lambda'|+ |\lambda-\lambda'|^p,\quad 1<p\le 2\label{eq:wass2poisson}.
\end{align}
\end{proposition}
\begin{proof}
Without loss of generality, let us suppose $\lambda\geq \lambda'$ and let $N''$ be a Poisson random variable with mean $\lambda-\lambda'$, independent of $N'$. Thanks to Lemma \ref{lemma:proprieta} we have
\begin{align*}
\Wi_p(N,N')^p=\Wi_p(N'+N'',N')^p\leq \Wi_p(0,N'')^p\leq \E\big[(N'')^p\big]=m_{(p,\lambda-\lambda')}.
\end{align*}
To deduce \eqref{eq:wass1poisson} and \eqref{eq:wass2poisson} we use the fact that $m_{(1,\ell)}=\ell$, $m_{(2,\ell)}=\ell+\ell^2$ and $\E[(N'')^p]\le \E[N'']^{2-p}\E[(N'')^2]^{p-1}$ for $p\in(1,2]$ by H\"older's inequality.
\end{proof}
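The combinatorial quantities in Proposition \ref{prop:wpoisson} are easy to evaluate. The sketch below (illustrative only) computes $m_{(p,\ell)}$ from the Stirling-number formula for integer $p$ and compares it with the expectation computed directly from the Poisson probability mass function.

```python
import math

def stirling2(p, i):
    # Stirling number of the second kind {p brace i}, via the explicit alternating sum
    s = sum((-1) ** (i - j) * math.comb(i, j) * j ** p for j in range(i + 1))
    return s // math.factorial(i)   # the division is exact

def poisson_moment(p, ell):
    # Touchard's formula: E[N^p] = sum_{i=1}^p ell^i {p brace i} for N ~ Poisson(ell)
    return sum(ell ** i * stirling2(p, i) for i in range(1, p + 1))

def poisson_moment_direct(p, ell, kmax=60):
    # truncated expectation sum_k k^p * e^{-ell} * ell^k / k!, pmf updated recursively
    pmf, total = math.exp(-ell), 0.0
    for k in range(kmax + 1):
        total += (k ** p) * pmf
        pmf *= ell / (k + 1)
    return total

ell = 1.3
m1 = poisson_moment(1, ell)   # = ell
m2 = poisson_moment(2, ell)   # = ell + ell^2
```

This confirms in particular the identities $m_{(1,\ell)}=\ell$ and $m_{(2,\ell)}=\ell+\ell^2$ used in the proof.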
\subsection{First main result}
We will use the notation introduced in Section \ref{sec:notationlevy}. In accordance with that, for any given Lévy process $X^j$ with characteristics $(b_j,\sigma_j^2,\nu_j)$, $X^{j,B}(\varepsilon)$ will be a compound Poisson process with Lévy measure $\nu_j(dx)\I_{(\varepsilon,\infty)}(|x|)$, i.e.
$$X_t^{j,B}(\varepsilon)=\sum_{i=1}^{N_t^j}Y_i^{(j)}$$
where $N^j$ is a Poisson process of intensity $\lambda_j(\varepsilon):=\nu_j(\R\setminus (-\varepsilon,\varepsilon))$ independent of the sequence of i.i.d. random variables $(Y_i^{(j)})_{i\geq 1}$ having distribution $F_\varepsilon^j(dx)=\frac{\I_{(\varepsilon,\infty)}(|x|)}{\lambda_j(\varepsilon)}\nu_j(dx)$.
Recall from Proposition \ref{prop:wpoisson} that $m_{(p,\ell)}$ denotes the moment of order $p$ of a Poisson random variable of mean $\ell$.
\begin{theorem}\label{teow1}
Let $X^j$, $j=1,2$, be two Lévy processes with characteristics $(b_j,\sigma_j^2,\nu_j)$. For all $p\in[1,2]$, for all $\varepsilon\in [0,1]$ and all $t\geq 0$, the following estimate holds
\begin{align*}
\Wi_p\big(X_t^1,X_t^2\big)&\leq \Big(t^2\big(b_1(\varepsilon)-b_2(\varepsilon)\big)^2+t\big(\sigma_1+\bar\sigma_1(\varepsilon)-\sigma_2-\bar\sigma_2(\varepsilon)\big)^2\Big)^{1/2} \\
&\quad +C\sum_{j=1}^2\min\Big(\sqrt t\bar \sigma_j(\varepsilon) ,\varepsilon\Big)+
\Wi_p\big(X_{t}^{1,B}(\varepsilon),X_t^{2,B}(\varepsilon)\big),
\end{align*}
for some constant $C$, only depending on $p$.
Introducing $L_t(\varepsilon):=t|\lambda_1(\varepsilon)-\lambda_2(\varepsilon)|$, we have
\begin{align*}
\Wi_p\big(X_{t}^{1,B}(\varepsilon),X_t^{2,B}(\varepsilon)\big)&\leq
\big((t\lambda_1(\varepsilon))^{1/p}+t\lambda_1(\varepsilon)\big)\Wi_p\big(Y_1^{(1)},Y_1^{(2)}\big)\\
&\quad + \big(L_t(\varepsilon)^{1/p}+L_t(\varepsilon)\big) \E\big[\big|Y_1^{(2)}\big|^p\big]^{1/p}.
\end{align*}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{teow1}]
By a slight abuse of notation, let $\No(\mu,\sigma^2)$ also denote a random variable with this distribution. Then, thanks to the Lévy-Itô decomposition, we have
$$X_t^i=\No(t b_i(\varepsilon),t\sigma_i^2)+X_{t}^{i,S}(\varepsilon)+X_{t}^{i,B}(\varepsilon)$$
with independent summands.
Hence, by subadditivity we get
\begin{align*}
\Wi_p\big(X^1_t,X^2_t\big)&\leq \Wi_p\big(\No(t b_1(\varepsilon),t\sigma_1^2)+X_{t}^{1,S}(\varepsilon),\No(t b_2(\varepsilon), t\sigma_2^2)+X_{t}^{2,S}(\varepsilon)\big)
\\ &\quad+\Wi_p\big(X_{t}^{1,B}(\varepsilon), X_{t}^{2,B}(\varepsilon)\big).
\end{align*}
Observe that
\begin{align*}
&\Wi_p\big(\No(t b_1(\varepsilon),t\sigma_1^2)+X_{t}^{1,S}(\varepsilon), \No(t b_2(\varepsilon),t\sigma_2^2)+
X_{t}^{2,S}(\varepsilon)\big)\\
&\leq \Wi_p\big(\No(t b_1(\varepsilon),t\sigma_1^2)+X_t^{1,S}(\varepsilon), \No(t b_1(\varepsilon),t(\sigma_1^2+\bar\sigma_1^2(\varepsilon)))\big)\\
&\quad + \Wi_p\big(\No(tb_2(\varepsilon),t\sigma_2^2)+X_{t}^{2,S}(\varepsilon), \No(t b_2(\varepsilon),t(\sigma_2^2+
\bar\sigma_2^2(\varepsilon)))\big)\\
&\quad +\Wi_p\big(\No(t b_1(\varepsilon),t(\sigma_1^2+\bar\sigma_1^2(\varepsilon))), \No(t b_2(\varepsilon),
t(\sigma_2^2+\bar\sigma_2^2(\varepsilon)))\big)\\
&\leq \Wi_p\big(X_{t}^{1,S}(\varepsilon),\No(0,t\bar\sigma_1^2(\varepsilon))\big)
+\Wi_p\big(X_{t}^{2,S}(\varepsilon),\No(0,t\bar\sigma_2^2(\varepsilon))\big)\\
&\quad +\Wi_p\big(\No(t b_1(\varepsilon),t(\sigma_1^2+\bar\sigma_1^2(\varepsilon))),
\No(t b_2(\varepsilon),t(\sigma_2^2+\bar\sigma_2^2(\varepsilon)))\big),
\end{align*}
where in the second inequality we used again Lemma \ref{lemma:proprieta}.
An application of Theorem \ref{teo:smalljumps} together with Point (2) in Lemma \ref{lemma:w1} and Lemma \ref{lemma:gaussiane} allows us to bound
$$\Wi_p\big(\No(t b_1(\varepsilon),t\sigma_1^2)+X_{t}^{1,S}(\varepsilon), \No(t b_2(\varepsilon),t\sigma_2^2)
+X_{t}^{2,S}(\varepsilon)\big)$$
by the quantity
$$\Big(t^2\big(b_1(\varepsilon)-b_2(\varepsilon)\big)^2
+t\big(\sigma_1+\bar\sigma_1(\varepsilon)-\sigma_2-\bar\sigma_2(\varepsilon)\big)^2\Big)^{1/2}
+C\sum_{j=1}^2\min\Big(\sqrt t\bar \sigma_j(\varepsilon) ,\varepsilon\Big),$$
for some constant $C$ only depending on $p$.
Finally, $\Wi_p\big(X_{t}^{1,B}(\varepsilon), X_{t}^{2,B}(\varepsilon)\big)$ is bounded by means of Theorem \ref{teo:CPP} and Proposition \ref{prop:wpoisson}.
\end{proof}
We now address the problem of how to compute the Wasserstein distance between $n$ given increments of two Lévy processes.
To that end, fix a time span $T>0$,
a sample size $n\in\N$ and consider the sample $(X_{kT/n}^1-X_{(k-1)T/n}^1,X_{kT/n}^2-X_{(k-1)T/n}^2)_{k=1}^n$. From Lemma \ref{property:tensorization} we know that
we can measure the distance between the random vectors $(X_{kT/n}^1-X_{(k-1)T/n}^1)_{k=1}^n$ and
$(X_{kT/n}^2-X_{(k-1)T/n}^2)_{k=1}^n$ in terms of the Wasserstein distance between the marginals. This observation combined with Theorem \ref{teow1}
allows us to obtain an upper bound for the Wasserstein distance of order $p$ between the increments of these Lévy processes.
\begin{corollary}\label{res:tensorizationlevy}
Let $X^j$, $j=1,2$, be two Lévy processes with characteristics $(b_j,\sigma_j^2,\nu_j)$. Then, with respect to the $\ell^r$-metric on $\R^n$ given by $d(x,y):=\big(\sum_{i=1}^n|x_i-y_i|^r\big)^{1/r}$, $r \geq 1$, for all $p\in[1,2]$, $\varepsilon\geq 0$, $T>0$, $n\in\N$ we have
\begin{align*}
\Wi_p\Big(&(X_{kT/n}^1-X_{(k-1)T/n}^1)_{k=1}^n,(X_{kT/n}^2-X_{(k-1)T/n}^2)_{k=1}^n\Big)\\
&\leq T n^{\frac{1}{r}-1} \big|b_1(\varepsilon)-b_2(\varepsilon)\big|
+ T^{1/2} n^{\frac{1}{r}-\frac12}\big|\sigma_1+\bar\sigma_1(\varepsilon)-\sigma_2-\bar\sigma_2(\varepsilon)\big|\\
&\quad +
C\sum_{j=1}^2\min\big(T^{1/2}n^{\frac{1}{r}-\frac12}\bar\sigma_j(\varepsilon),n^{\frac{1}{r}}\varepsilon\big)
+n^{\frac{1}{r}}\Wi_p\big(X_{T/n}^{1,B}(\varepsilon),X_{T/n}^{2,B}(\varepsilon)\big),
\end{align*}
where $C$ is a constant depending only on $p$. The term $\Wi_p(X_{T/n}^{1,B}(\varepsilon),X_{T/n}^{2,B}(\varepsilon))$ can be bounded
as in Theorem \ref{teo:CPP} with $t=T/n$.
\end{corollary}
In the Euclidean case $r=2$ we see that in the bound for the Wasserstein distance the drift part disappears as $n\to\infty$ ($T$ fixed), while the Gaussian part remains invariant and the Gaussian approximation of small jumps gives an error of order $\min(\bar\sigma_j(\varepsilon),n^{1/2}\varepsilon)$. The bound on the larger jumps scales as $n^{1/2}(T/n+(T/n)^{1/2})$ (for $p=1$ even as $T/n^{1/2}$) so that the entire bound on the Wasserstein distance remains bounded as $n\to\infty$.
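The qualitative behaviour just described can be checked numerically. The sketch below evaluates the four summands of Corollary \ref{res:tensorizationlevy} for $r=2$ with arbitrary (hypothetical) characteristic differences, absorbing the constant $C$ and using the $p=1$, linear-in-$t$ flavour of the big-jump term.

```python
import math

def tensorized_terms(n, T=1.0, eps=0.05, r=2.0,
                     db=0.4, dsig=0.2, sig_bar=(0.3, 0.25), dlam=0.5):
    # the four summands of the corollary, constants absorbed (illustrative values)
    drift = T * n ** (1.0 / r - 1.0) * db
    gauss = math.sqrt(T) * n ** (1.0 / r - 0.5) * dsig
    small = sum(min(math.sqrt(T) * n ** (1.0 / r - 0.5) * s, n ** (1.0 / r) * eps)
                for s in sig_bar)
    big = n ** (1.0 / r) * (T / n) * dlam   # p = 1: the big-jump term is O(T/n)
    return drift, gauss, small, big

d1, g1, s1, b1 = tensorized_terms(1)
dN, gN, sN, bN = tensorized_terms(10_000)
# r = 2: drift and big-jump terms vanish as n grows, the Gaussian term is constant,
# and the small-jump term saturates at sigma_bar_1 + sigma_bar_2 = 0.55
```

This reproduces the discussion above: for fixed $T$ the overall bound stays bounded as $n\to\infty$.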
\subsection{Lower bounds}
Applying the general lower bound established in Proposition \ref{prop:toscani} to Lévy processes, we get the following result:
\begin{corollary}\label{cor:lb}
Let $W$ be a Brownian motion and $X^\varepsilon$ be a pure jump Lévy process with jumps of absolute value less than $\varepsilon\in(0,1]$. This means that $X^\varepsilon$ has L\'evy triplet $(0,0,\nu_\varepsilon)$, $\text{supp}(\nu_\varepsilon)\subset[-\varepsilon,\varepsilon]$ and
characteristic function
$$\varphi^\varepsilon_t(u)=\E[e^{iuX_t^\varepsilon}]=\exp\Big(t \int_{-\varepsilon}^\varepsilon \big(e^{iux}-1-iux\big)\nu_\varepsilon(dx)\Big).$$
Let $\bar\sigma^2(\varepsilon)=\int x^2\nu_\varepsilon(dx)$. Then for $p\ge 1$
\begin{align*}
\Wi_p(X_t^\varepsilon,\bar\sigma(\varepsilon)W_t) &\geq \sup_{u\in\R}\frac{\Big|\exp \Big(t\int\big(e^{iux}-1-iux\big)\nu_\varepsilon(dx)\Big)-\exp \Big(t\int \frac{(iux)^2}{2} \nu_\varepsilon(dx)\Big)\Big|}{\sqrt 2|u|}.
\end{align*}
\end{corollary}
By the bound given in Proposition \ref{prop:toscani} we usually do not lose in approximation order as the following lower bound examples demonstrate.
Let us start with the general case that at $\varepsilon=1$ we have a standardised pure jump process $X^1$ with $\E[X^1_t]=0$, $\var[X^1_t]=t$ as for Brownian motion, which means $\int x\nu_1(dx)=0$, $\int x^2\nu_1(dx)=1$. Then rescaling as in Donsker's Theorem we consider $X_t^\varepsilon:=\varepsilon X^1_{\varepsilon^{-2}t}$ such that $\nu_\varepsilon(B)=\varepsilon^{-2}\nu_1(\varepsilon^{-1}B)$ for Borel sets $B$, $\bar\sigma^2(\varepsilon)=1$ and
$$\varphi_t^\varepsilon(u)=\exp\Big(t\varepsilon^{-2} \int_{-1}^1\big(e^{i\varepsilon ux}-1\big)\nu_1(dx)\Big).$$
Let us further assume that $q_3:=\int x^3\nu_1(dx)\not=0$. Then, taking into account the first two moments, a Taylor expansion yields
\[ t\varepsilon^{-2}\int_{-1}^1\big(e^{i\varepsilon ux}-1\big)\nu_1(dx)=-\frac{tu^2}{2}-\frac{it\varepsilon u^3}{3!}q_3+{ O}(t\varepsilon^2 u^4).
\]
For $t>\varepsilon^2$ we thus obtain at $u_0=t^{-1/2}$
\[ \frac{|\varphi_t^\varepsilon(u_0)-e^{-tu_0^2/2}|}{u_0}=t^{1/2}e^{-1/2}\Big|e^{-iq_3\varepsilon t^{-1/2}/3!+{ O}(\varepsilon^2/t)}-1\Big| =\varepsilon\Big(\frac{|q_3|}{\sqrt e\, 3!}+{ O}(\varepsilon/\sqrt{t})\Big).
\]
Hence, by Corollary \ref{cor:lb} there are constants $M>0$ and $c>0$ such that for all $t\ge M\varepsilon^2$
\[ \Wi_p(X_t^\varepsilon,W_t) \geq c\varepsilon.\]
For $t\le M\varepsilon^2$ we obtain at $u_0=(8M\lambda_1/t)^{1/2}$ with $\lambda_1=\nu_1(\R)$
\begin{align*}
\frac{|\varphi_t^\varepsilon(u_0)-e^{-tu_0^2/2}|}{u_0}&\ge \frac{t^{1/2}}{(8M\lambda_1)^{1/2}}\Big|e^{t\varepsilon^{-2}\int(\cos(\varepsilon u_0x)-1)\nu_1(dx)}-e^{-4M\lambda_1}\Big|
\\ &\ge \frac{e^{-2M\lambda_1}-e^{-4M\lambda_1}}{(8M\lambda_1)^{1/2}}t^{1/2}.
\end{align*}
We conclude
\[\forall t>0,\,\varepsilon\in(0,1]:\;\Wi_p(X_t^\varepsilon,W_t)\geq K\min(\sqrt t,\varepsilon)\]
for some positive constant $K$, depending on $\nu_1$, but independent of $t$ and $\varepsilon$, whenever $q_3=\int x^3\nu_1(dx)\not=0$.
Even for symmetric L\'evy measures, not inducing skewness of the distribution, we can attain the order $\min(\sqrt t,\varepsilon)$. If we consider $\nu_1=\frac12(\delta_{-1}+\delta_1)$, $\nu_\varepsilon=\frac{\delta_{-\varepsilon}+\delta_{\varepsilon}}{2\varepsilon^2}$, we arrive at the same conclusion by computing the distance $T_1$ between $Y_t(\varepsilon)$ and $\No(0,t)$ as in Remark \ref{rmk:ex}:
\begin{align*}
T_1\big(\Li(Y_t(\varepsilon)),\No(0,t)\big)
=\sup_{u\in\R}\bigg|\frac{\exp\Big(-t\Big(\frac{1-\cos(u\varepsilon)}{\varepsilon^2}\Big)\Big)-\exp\Big(-\frac{tu^2}{2}\Big)}{u}\bigg|.
\end{align*}
If $t\geq \varepsilon^2$, we choose $u=\frac{2\pi}{\varepsilon}$ and get
\begin{align*}
T_1\big(\Li(Y_t(\varepsilon)),\No(0,t)\big)\geq \bigg|\frac{\varepsilon}{2\pi}\Big(1-\exp\Big(-\frac{2\pi^2t}{\varepsilon^2}\Big)\Big)\bigg|\geq \Big(\frac{1-e^{-2\pi^2}}{2\pi}\Big)\varepsilon.
\end{align*}
If $t<\varepsilon^2$, the choice $u=\frac{3}{\sqrt t}$ gives
$$ T_1\big(\Li(Y_t(\varepsilon)),\No(0,t)\big)\geq \sqrt t\frac{\Big(e^{-2}-e^{-\frac{9}{2}}\Big)}{3}.$$
We conclude that also in this case $\Wi_1(\Li(Y_t(\varepsilon)),\No(0,t))\geq K\min(\sqrt t,\varepsilon)$ holds for some positive constant $K$, independent of $t$ and $\varepsilon$.
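The two test frequencies used above can be evaluated numerically; the following sketch (an illustration only) confirms the two displayed inequalities for one arbitrary parameter choice in each regime.

```python
import math

def T1_integrand(u, t, eps):
    # |exp(-t(1 - cos(u*eps))/eps^2) - exp(-t*u^2/2)| / u
    return abs(math.exp(-t * (1.0 - math.cos(u * eps)) / eps ** 2)
               - math.exp(-t * u ** 2 / 2.0)) / u

# regime t >= eps^2, test frequency u = 2*pi/eps
t, eps = 0.25, 0.1
lhs_large_t = T1_integrand(2.0 * math.pi / eps, t, eps)
rhs_large_t = (1.0 - math.exp(-2.0 * math.pi ** 2)) / (2.0 * math.pi) * eps

# regime t < eps^2, test frequency u = 3/sqrt(t)
t2, eps2 = 0.0025, 0.1
lhs_small_t = T1_integrand(3.0 / math.sqrt(t2), t2, eps2)
rhs_small_t = math.sqrt(t2) * (math.exp(-2.0) - math.exp(-4.5)) / 3.0
```

In both regimes the evaluated characteristic-function ratio dominates the stated lower bound, as claimed.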
\section{Total variation bounds via convolution}\label{sec:conv}
\subsection{Notation and some useful properties}
Let $(\X,\mathscr F)$ be a measurable space and let $\mu$ and $\nu$ be two probability measures on $(\X,\mathscr F)$.
\begin{definition}
The \emph{total variation distance} between $\mu$ and $\nu$ is defined as
$$\|\mu-\nu\|_{TV}=\sup_{A\in\F}\big|\mu(A)-\nu(A)\big|.$$
\end{definition}
\begin{lemma}\label{property:tv}
The total variation distance has the following properties.
\begin{enumerate}
\item $\|\mu-\nu\|_{TV}=\frac{1}{2}\sup_{\|\Psi\|_{\infty}\leq 1}\big|\int_\X \Psi(x)(\mu-\nu)(dx)\big|$.
\item \label{pro:couplingtv} $\|\mu-\nu\|_{TV}=\inf\big(\p(X\neq Y):\Li(X)=\mu,\ \Li(Y)=\nu\big)$.
\end{enumerate}
\end{lemma}
\begin{remark}
Let $\X$ be a discrete set, equipped with the Hamming metric $d(x,y)=\I_{x\neq y}$. In this case, thanks to Property \ref{pro:couplingtv} above,
for any probability measures $\mu$ and $\nu$ on $\X$ we have
$$\Wi_1(\mu,\nu)=\|\mu-\nu\|_{TV}.$$
\end{remark}
The total variation distance does not always bound the Wasserstein distance, because the latter is also influenced by large distances.
However, thanks to the following classical result, one can get some control on $\Wi_p$ given a bound on the total variation distance.
\begin{theorem}\label{teo:villani}[See \cite{villani09}, Theorem 6.13]
Let $\mu$ and $\nu$ be two probability measures on a Polish space $(\X,d)$. Let $p\in[1,\infty)$ and $x_0\in\X$. Then
$$\Wi_p(\mu,\nu)\leq 2^{\frac{1}{p'}}\bigg(\int d(x_0,x)^p|\mu-\nu|(dx)\bigg)^{\frac{1}{p}},\quad \frac{1}{p}+\frac{1}{p'}=1.$$
In particular, if $p=1$ and the diameter of $\X$ is bounded by $D$, then
$$\Wi_1(\mu,\nu)\leq 2 D\|\mu-\nu\|_{TV}.$$
\end{theorem}
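For distributions on a bounded grid, the comparison $\Wi_1\leq 2D\|\mu-\nu\|_{TV}$ of Theorem \ref{teo:villani} is easy to verify numerically, using the one-dimensional identity $\Wi_1(\mu,\nu)=\int|F_\mu-F_\nu|$; the two probability vectors below are arbitrary illustrations.

```python
def w1_line(p, q):
    # W_1 on the integer grid {0, ..., len(p)-1}: sum of |F_p(k) - F_q(k)| over cells
    Fp = Fq = 0.0
    total = 0.0
    for a, b in zip(p, q):
        Fp += a
        Fq += b
        total += abs(Fp - Fq)
    return total

def tv(p, q):
    # total variation distance = (1/2) * L1 distance of the probability vectors
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

p = [0.5, 0.2, 0.2, 0.1]   # laws on {0, 1, 2, 3}, so the diameter is D = 3
q = [0.1, 0.3, 0.3, 0.3]
w1, tv_dist, D = w1_line(p, q), tv(p, q), 3
# here W_1 = 0.9 and 2 * D * TV = 2.4, consistent with the theorem
```

Of course the factor $2D$ is crude; the point is only that a total variation bound yields some control on $\Wi_1$ on bounded spaces.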
In Proposition \ref{prop:TV} we will show an inequality that can be thought of as an inverse of the one above. Namely, the total variation distance between two measures convolved with a common measure can be bounded by a multiple of the Wasserstein distance of order $1$.
\subsection{Wasserstein distance of order $1$ and total variation distance}
Recall that a real function $g$ is of \emph{bounded variation} if its total variation norm is finite, i.e.
$$\|g\|_{BV}=\sup_{ P \in\mathscr P}\sum_{i=0}^{n_P-1}|g(x_{i+1})-g(x_i)|<\infty,$$
where the supremum is taken over the set $\mathscr{P}=\{P=(x_0,\dots,x_{n_P}):x_0<x_1<\dots<x_{n_P}\}$ of all finite ordered subsets of $\R$. We will denote by $BV(\R)$ the space of functions of bounded variation.
We now state a lemma that will be useful in the following.
\begin{lemma}\label{lemma:bv}
Let $g$ be a real function of bounded variation and $\mathcal{F} \subseteq \{\phi \colon \R \to \R : \|\phi\|_\infty \leq 1\} \cap L^{1}(\R)$ a functional class. Suppose that for any $\phi\in \mathcal{F}$
$$h_\phi(t)=\int_\R \phi(y)\Big(g(t-y)-\lim_{x\to -\infty} g(x)\Big)dy$$
is well defined. Then,
\begin{equation}\label{eq:Lipnorm}
\sup_{\phi \in \mathcal{F}}\|h_\phi\|_{Lip}\leq \|g\|_{BV}.
\end{equation}
\end{lemma}
\begin{proof}
The proof is an easy consequence of the following classical results on Lebesgue-Stieltjes measures:
\begin{enumerate}
\item For every right-continuous function $g\colon \R\to\R$ of bounded variation there exists a unique signed measure $\mu$ such that
\begin{equation}\label{eq:signedmeasure}
\mu(]-\infty,x])=g(x)-\lim_{y\to-\infty} g(y).
\end{equation}
\item Let $\phi\in L^\infty(\R)$ and let $g\in BV(\R)$ be a right-continuous function. Let $\mu$ be the finite signed measure associated to $g$ as in \eqref{eq:signedmeasure}. Then $\int \phi(t-y)\mu(dy)$ is well defined, measurable in $t\in\R$ and bounded in absolute value by $\|\phi\|_\infty\|g\|_{BV}$. \label{eq:2}
\end{enumerate}
More precisely, let $\mu$ be the finite signed measure associated to $g$. It is enough to prove that $\int \phi(t-y)\mu(dy)$ is the weak derivative of $h_\phi$ since then, using Point \ref{eq:2} above, we deduce that $\|h_\phi\|_{Lip}=\|\int \phi(\cdot-y)\mu(dy)\|_\infty\leq \|\phi\|_\infty\|g\|_{BV}$ and hence \eqref{eq:Lipnorm}. The claim follows by Fubini's Theorem: for all $T>0$ \begin{align*}
\int_0^T \int \phi(t-y)\mu(dy)dt &= \int\int \I_{[0,T]}(u+y)\phi(u)du\,\mu(dy)\\
&=\int \phi(u)(g(T-u)-g(-u))du\\
&=\int \phi(u)\Big(g(t-u)-\lim_{x\to-\infty} g(x)\Big)du\Big|_{t=0}^T.
\end{align*}
Hence, $ \int \phi(t-y)\mu(dy)$ is the weak derivative of $\int \phi(u)(g(t-u)-\lim_{x\to-\infty} g(x))du$ as desired.
\end{proof}
\begin{proposition}\label{prop:TV}
Let $\mu$ and $\nu$ be two measures on $(\R,\B(\R))$ and $G$ be an absolutely continuous measure with respect to the Lebesgue measure admitting
a density $g$ of bounded variation.
Then the total variation distance between the convolution measures $\mu*G$ and $\nu*G$ is bounded by
$$\|\mu*G-\nu*G\|_{TV}\leq \frac{\|g\|_{BV}}{2} \Wi_1(\mu,\nu).$$
\end{proposition}
\begin{proof}
\begin{align*}
\|\mu*G-\nu*G\|_{TV}&=\frac{1}{2}\sup_{\|\phi\|_{\infty}\leq 1}\bigg|\int_\R \phi(x)(\mu*G-\nu*G)(dx)\bigg|\\
&=\frac{1}{2}\sup_{\|\phi\|_{\infty}\leq 1}\bigg|\int_\R \bigg(\int_\R \phi(x)g(x-t)(\mu-\nu)(dt)\bigg)dx\bigg|\\
&=\frac{1}{2}\sup_{\|\phi\|_{\infty}\leq 1}\bigg|\int_\R \bigg(\int_\R \phi(x)g(x-t)dx\bigg)(\mu -\nu)(dt)\bigg|,
\end{align*}
the supremum being taken over compactly supported functions $\phi$.
Denote by $h_\phi(t)=\int_\R \phi(x)g(x-t)dx$. From the last equality it follows that
$$\|\mu*G-\nu*G\|_{TV}\leq \frac{1}{2}\sup_{\|\phi\|_{\infty}\leq 1}\sup_{\|\psi \|_{Lip}\leq \|h_\phi\|_{Lip}}\bigg|\int_\R \psi(t)(\mu -\nu)(dt)\bigg|,$$
hence, applying Lemma \ref{lemma:bv} to $\mathcal F=\{\phi \colon \R \to \R : \|\phi\|_\infty \leq 1 \textnormal{ with compact support}\}$ and Proposition \ref{pro:lip}, we deduce that
\begin{align*}
\|\mu*G-\nu*G\|_{TV}\leq \frac{\|g\|_{BV}}{2}\sup_{\|\psi\|_{Lip}\leq 1}\bigg|\int_\R \psi(t)(\mu -\nu)(dt)\bigg|=\frac{\|g\|_{BV}}{2}\Wi_1(\mu,\nu).
\end{align*}
\end{proof}
The upper bound established in Proposition \ref{prop:TV} is sharp. To see this, let us consider the following example.
\begin{example}
Let $\mu=\delta_0$, $\nu=\delta_{\varepsilon}$ and $G=\mathcal N(0,1)$ for some $\varepsilon>0$. Denoting by $\varphi$ the density of a random variable $N\sim\No(0,1)$ and by $\Phi$ its cumulative distribution function, we have
\begin{align*}
\|\nu*G-\mu*G\|_{TV}&=\frac{1}{2}\int_\R|\varphi(x)-\varphi(x-\varepsilon)|dx = \Phi\Big(\frac{\varepsilon}{2}\Big) - \Phi\Big(-\frac{\varepsilon}{2}\Big) = 2\Phi\Big(\frac{\varepsilon}{2}\Big)-1\\
&= \frac{\varepsilon}{\sqrt{2\pi}} + O(\varepsilon^2).
\end{align*}
At the same time it is easy to see that $\Wi_1(\mu,\nu)=\varepsilon$ and $\|g\|_{BV}=\sqrt \frac{2}{\pi}$. Therefore, the upper bound established in Proposition \ref{prop:TV}
$$ \|\nu*G-\mu*G\|_{TV}\leq \frac{1}{\sqrt{2\pi}}\Wi_1(\mu,\nu)=\frac{\varepsilon}{\sqrt{2\pi}}$$
is exactly the correct estimate up to the first order.
\end{example}
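Indeed, the constant $\|g\|_{BV}=\sqrt{2/\pi}$ appearing in the example can be checked directly: the standard Gaussian density $\varphi$ increases on $(-\infty,0]$ and decreases on $[0,\infty)$, so its total variation equals twice its maximum value.

```latex
\[
\|\varphi\|_{BV}
=\Big(\varphi(0)-\lim_{x\to-\infty}\varphi(x)\Big)
+\Big(\varphi(0)-\lim_{x\to+\infty}\varphi(x)\Big)
=2\varphi(0)=\frac{2}{\sqrt{2\pi}}=\sqrt{\frac{2}{\pi}}.
\]
```

Combined with $\Wi_1(\delta_0,\delta_\varepsilon)=\varepsilon$, this recovers the bound $\frac{\|g\|_{BV}}{2}\Wi_1(\mu,\nu)=\frac{\varepsilon}{\sqrt{2\pi}}$ of the example.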
\subsection{Total variation distance and Toscani-Fourier distances}
For any Lebesgue density $f$ introduce its Fourier transform $\F f(u)=\int e^{iux}f(x)dx$.
A first elementary result linking the total variation distance between convolution measures to Toscani-Fourier metrics is the following.
\begin{proposition}\label{prop:CS}
Let $\mu,\nu$ and $G$ be probability measures and suppose that their characteristic functions $\varphi_\mu$, $\varphi_\nu$, $\varphi_G$ are differentiable. Assume that $G$ has a Lebesgue density $g$ with $m$th weak derivative $g^{(m)}$. Then, for all $k,j,r\in\{1,\ldots,m\}$, we have
\begin{align*}
&\|\mu*G-\nu*G\|_{TV}\\
&\leq C\bigg(T_{k}(\mu,\nu) \|g^{(k)}\|_2+\sqrt 2T_{r}(\mu,\nu) \|(xg(x))^{(r)}\|_2+\sqrt 2\sup_{u\in\R}\frac{|\varphi'_\mu(u)-\varphi'_\nu(u)|}{|u|^{j}}\|g^{(j)}\|_2\bigg)
\end{align*}
for some numerical constant $C>0.$
\end{proposition}
\begin{proof}
First of all, remark that if any one among the $\|g^{(\bullet)}\|_2$, $\|(xg(x))^{(\bullet)}\|_2$, $T_{\bullet}(\mu,\nu)$, or $\sup_{u\in\R}\frac{|\varphi'_\mu(u)-\varphi'_\nu(u)|}{|u|^{\bullet}}$ appearing above is infinite, then there is nothing to prove. Therefore, from now on, we will assume that they are all finite.
Since $G$ admits a density $g$ with respect to Lebesgue measure, $\mu*G$ and $\nu*G$ have densities $g*\mu$ and $g*\nu$.
Using the Cauchy-Schwarz inequality we have
\begin{align*}
\|\mu*G-\nu*G\|_{TV}&=\frac{1}{2}\int \frac{1}{\sqrt{1+x^2}}\sqrt{1+x^2}|g*\mu(x)-g*\nu(x)|dx\\
&\leq C(\|g*\mu-g*\nu\|_2+\|x(g*\mu-g*\nu)\|_2),
\end{align*}
for some numerical constant $C>0.$
For all $k>0$ an application of the Plancherel identity yields
\begin{align*}
\|g*\mu-g*\nu\|_2^2&=\frac{1}{2\pi}\|\varphi_G(\varphi_\mu-\varphi_\nu)\|_2^2=\frac{1}{2\pi}\int \frac{|\varphi_\mu(u)-\varphi_\nu(u)|^2}{u^k} |\varphi_G(u)|^2 u^k du.
\end{align*}
Hence,
$$\|g*\mu-g*\nu\|_2\leq \sqrt{\frac{1}{2\pi}}\sup_{u\in\R}\frac{|\varphi_\mu(u)-\varphi_\nu(u)|}{|u|^{k/2}}\|u^{k/2}\varphi_G\|_2.$$
In the same way we also have
$$\|x(g*\mu(x)-g*\nu(x))\|_2^2\leq \frac{1}{\pi}\|\varphi_G' (\varphi_\mu-\varphi_\nu)\|_2^2+\frac{1}{\pi}\|\varphi_G (\varphi_\mu'-\varphi_\nu')\|_2^2$$
and we conclude as before that for all $r,j>0$
\begin{align*}
&\|x(g*\mu(x)-g*\nu(x))\|_2\\
&\quad \leq\sqrt{\frac{1}{\pi}} \sup_{u\in\R}\frac{|\varphi_\mu(u)-\varphi_\nu(u)|}{|u|^{r/2}}\|u^{r/2}\varphi_G'\|_2
+\sqrt{\frac{1}{\pi}}\sup_{u\in\R}\frac{|\varphi'_\mu(u)-\varphi'_\nu(u)|}{|u|^{j/2}}\|u^{j/2}\varphi_G\|_2.
\end{align*}
It remains to apply the Plancherel identity, which gives $\|u^{m}\varphi_G\|_2=\sqrt{2\pi}\|g^{(m)}\|_2$ and $\|u^{m}\varphi_G'\|_2=\sqrt{2\pi}\|(xg(x))^{(m)}\|_2$, and to rename the exponents $k/2$, $r/2$ and $j/2$ as $k$, $r$ and $j$.
\end{proof}
Using a different set of hypotheses, one can also establish the following relation between the total variation distance and the Toscani-Fourier distance.
\begin{proposition}\label{prop:N}
Let $\mu$, $\nu$ and $G$ be real probability measures absolutely continuous with respect to the Lebesgue measure. Let $f_\mu$, $f_\nu$ and $g$ denote their densities and $F_\mu$ and $F_\nu$ denote the cumulative distribution functions of $\mu$ and $\nu$. Suppose that $\F g \in L_1$ and that $F_\mu - F_\nu \in L_1$. Further suppose that the graphs of $f_\mu*g$ and $f_\nu*g$ intersect in at most $N$ points. Then,
$$\|\mu*G-\nu*G\|_{TV}\leq \frac{N}{2\pi}T_1(\mu,\nu)\int|\F g(u)|du.$$
\end{proposition}
\begin{proof}
As in the proof of Proposition \ref{prop:TV}, let us introduce the function
$$h_\phi(t):=\int_{\R}\phi(x)g(t-x)dx$$
and recall that
$$\|\mu*G-\nu*G\|_{TV}=\frac{1}{2}\sup_{\|\phi\|_{\infty}\leq 1}\bigg|\int_\R h_\phi(t)(\mu-\nu)(dt) \bigg|.$$
Using integration by parts and the Plancherel identity, we get
\begin{align}
\int_\R h_\phi(t)(\mu-\nu)(dt) &=\int_\R h_{\phi}'(t)(F_\nu(t)-F_\mu(t))dt\nonumber\\&=\frac{1}{2\pi}\int \overline{\F h_{\phi}'(u)}\F(F_\nu-F_\mu)(u) du\nonumber \\
&=\frac{1}{2\pi}\int \overline{\F h_{\phi}'(u)}\frac{\varphi_\mu(u)-\varphi_\nu(u)}{iu} du\nonumber\\
&\leq T_1(\mu,\nu) \frac{1}{2\pi}\int |\F h_{\phi}'(u)|du. \label{eq:TVT}
\end{align}
Also observe that
$$\sup_{\|\phi\|_{\infty}\leq 1}\bigg|\int_\R h_\phi(t)(\mu-\nu)(dt) \bigg|=\int_\R h_{\tilde\phi}(t)(\mu-\nu)(dt)$$
where
$$\tilde \phi(x)=\begin{cases}
-1\quad & \text{ if }\quad f_\mu*g(x)<f_\nu*g(x),\\
1\quad & \text{ if }\quad f_\mu*g(x)\geq f_\nu*g(x).\\
\end{cases}$$
Let us denote by $-\infty=x_0<x_1<\dots<x_N<x_{N+1}=+\infty$ the points of intersection between the graphs of $f_\mu*g$ and $f_\nu*g$. In particular $\tilde\phi(u)=\pm\sum_{i=0}^N(-1)^i\I_{[x_i,x_{i+1})}(u)$ with the sign depending on the sign of $f_{\mu}*g-f_\nu*g$ on $(-\infty,x_1)$. Thus,
$$h_{\tilde \phi}'(u)=\pm2\sum_{j=1}^N (-1)^{j}g(u-x_j).$$
In particular, we get that
$$\F h_{\tilde \phi}'(u)=\pm 2\sum_{j=1}^N (-1)^{j}\F g(u)e^{iux_j},$$
hence $|\F h_{\tilde \phi}'(u)|\leq 2N |\F g(u)|.$ This fact, together with \eqref{eq:TVT}, concludes the proof.
\end{proof}
Let us observe that another way to link the total variation distance between convolution measures to the Toscani-Fourier distance is offered by Theorem 2.21 in \cite{CT07} combined with Proposition \ref{prop:TV}. More precisely, Theorem 2.21 in \cite{CT07} states that, under appropriate hypotheses on $\mu$ and $\nu$,
$$\Wi_1(\mu,\nu)\leq \bigg(\frac{18 M}{\pi}\bigg)^{1/3}T_2(\mu,\nu)^{1/6},$$
with
$M=\max\big\{\E[X^2],\E[Y^2]\big\},$ $X\sim \mu$ and $Y\sim \nu.$
Therefore, from Proposition \ref{prop:TV}, it follows that
$$\|\mu*G-\nu*G\|_{TV}\leq \|g\|_{BV}\bigg(\frac{9M}{4\pi}\bigg)^{1/3}T_2(\mu,\nu)^{1/6},$$
where $g$ denotes the density of $G$.
Using some ideas from the proof of Theorem 2.21 in \cite{CT07} we will be able to prove the following general result.
\begin{proposition}\label{prop:CTMR}
Let $\mu,\nu\in\mathcal P_j(\R)$, $j\geq 1$, and $G$ be a measure, absolutely continuous with respect to the Lebesgue measure. Suppose that the density $g$ of $G$ is $j$-times weakly differentiable with $j$th derivative $g^{(j)}\in L_2$. Then,
$$\|\mu*G-\nu*G\|_{TV}\le C_j^{1/(2j+1)}\| g^{(j)}\|_2^{2j/(2j+1)} T_j(\mu,\nu)^{2j/(2j+1)},$$
where $C_j=\max\big(\E\big[|X+Z|^j\big],\E\big[|Y+Z|^j\big]\big)$ with $X\sim \mu$, $Y\sim \nu$, $Z\sim G$ and $Z$ independent of $X$ and $Y$.
\end{proposition}
\begin{proof}
Using the same notation as in Proposition \ref{prop:N}, we have for all $R>0$
\begin{align*}
&\|\mu*G-\nu*G\|_{TV}=\frac{1}{2}\int |g*\mu(x)-g*\nu(x)|dx\\&\leq \frac{1}{2}\bigg(\int_{-R}^{R} |g*\mu(x)-g*\nu(x)|dx+\frac{1}{R^j}\int_{|x|>R}|x|^j|g*\mu(x)-g*\nu(x)|dx\bigg)\\
&\leq \frac{1}{2}\bigg(\int_{-R}^{R} |g*\mu(x)-g*\nu(x)|dx+\frac{C_j}{R^j}\bigg).
\end{align*}
By Cauchy-Schwarz inequality,
$$\int_{-R}^{R} |g*\mu(x)-g*\nu(x)|dx\leq \sqrt{2R} \|g*\mu-g*\nu\|_2$$
holds. Taking $R=\big(\frac{ C_j}{\sqrt 2\|g*\mu-g*\nu\|_2}\big)^{2/(2j+1)}$ we get
$$\|\mu*G-\nu*G\|_{TV}\le \frac{1}{2} C_j^{1/(2j+1)}(\sqrt 2\|g*\mu-g*\nu\|_2)^{2j/(2j+1)}.$$
Using Plancherel identity and the properties of the Fourier transform, we deduce that
\begin{align*}
\|g*\mu-g*\nu\|_2^2&=\frac{1}{2\pi}\|\F(g*\mu)-\F(g*\nu)\|_2^2
=\frac{1}{2\pi}\|\F g(\varphi_{\mu}-\varphi_{\nu})\|_2^2\\
&=\frac{1}{2\pi}\int |\F g(u)|^2 u^{2j}\frac{|\varphi_{\mu}(u)-\varphi_{\nu}(u)|^2}{u^{2j}}du\\
&\leq \frac{1}{2\pi}T_j(\mu,\nu)^2 \int |\F g(u)|^2 u^{2j}du\\
&= \frac{1}{2\pi}T_j^2(\mu,\nu)\|\F g^{(j)}\|_2^2= T_j^2(\mu,\nu)\| g^{(j)}\|_2^2.
\end{align*}
It follows that
$$\|\mu*G-\nu*G\|_{TV}\le C_j^{1/(2j+1)}(T_j(\mu,\nu)\| g^{(j)}\|_2)^{2j/(2j+1)}.$$
\end{proof}
\begin{remark}
To better understand the upper bounds presented above, let us specialise to the case $G=\mathcal N(0,\sigma^2)$. In order to compare the results presented in Propositions \ref{prop:CS}--\ref{prop:CTMR} let us start by observing that the following equalities hold.
\begin{itemize}
\item If $g(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^2}{2\sigma^2}}$, then $\F g(u)=e^{-\frac{u^2\sigma^2}{2}}$. Therefore, $\int|\F g(u)|du=\frac{\sqrt{2\pi}}{\sigma}.$
\item Also, $g'(x)=\frac{-x}{\sqrt{2\pi}\sigma^3}e^{-\frac{x^2}{2\sigma^2}}$ and $g''(x)=\frac{1}{\sqrt{2\pi}\sigma^3}e^{-\frac{x^2}{2\sigma^2}}(-1+\frac{x^2}{\sigma^2})$. It follows that $\|g'\|_2^2=\frac{1}{4\sqrt{\pi}\sigma^3}$, $\|g''\|_2^2=\frac{1}{\pi\sigma^5}\int_0^\infty e^{-y^2}(y^2-1)^2 dy=\frac{1}{4\sqrt \pi\sigma^5}$ and $\|g\|_{BV}=\sqrt{\frac{2}{\sigma^2\pi}}$.
\end{itemize}
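For the reader's convenience, we sketch the computation behind the first of these norms; it reduces to the Gaussian moment $\int_\R x^2e^{-x^2/\sigma^2}dx=\frac{\sqrt\pi}{2}\sigma^3$, and the values of $\|g''\|_2^2$ and $\|g\|_{BV}$ follow from analogous one-line computations.

```latex
\[
\|g'\|_2^2
=\int_\R \frac{x^2}{2\pi\sigma^6}\,e^{-x^2/\sigma^2}\,dx
=\frac{1}{2\pi\sigma^6}\cdot\frac{\sqrt\pi}{2}\,\sigma^3
=\frac{1}{4\sqrt\pi\,\sigma^3}.
\]
```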
We are now able to compare the previous results for independent random variables $Z\sim\mathcal N(0,\sigma^2)$, $X\sim \mu$, $Y\sim \nu$.
\begin{description}
\item {\it Proposition \ref{prop:CS} for $T_1$:} With a numerical constant $C>0$, independent of the laws of $X,Y,Z$,
\begin{multline*}
\|\Li(X+Z)-\Li(Y+Z)\|_{TV}\leq C\bigg( T_1(X,Y)\bigg(\frac{1}{\sqrt{\sigma^3}}+\frac{1}{\sigma}\bigg)\\+\frac{1}{\sqrt{\sigma^3}}\sup_{u\in\R}\frac{|\varphi_X'(u)-\varphi_Y'(u)|}{|u|}\bigg).
\end{multline*}
\item {\it Proposition \ref{prop:N}:} Let $N$ be the number of intersections between the graphs of the densities of $X+Z$ and $Y+Z$. Then,
$$\|\Li(X+Z)-\Li(Y+Z)\|_{TV}\leq \frac{N}{\sqrt{2\pi}}\frac{T_1(X,Y)}{\sigma}.$$
\item {\it Proposition \ref{prop:CTMR} for $T_1$:}
\begin{multline*}
\|\Li(X+Z)-\Li(Y+Z)\|_{TV}\\\leq \Big(\frac{\max\big(\E[|X+Z|],\E[|Y+Z|]\big)}{4\sqrt\pi}\Big)^{1/3}\frac{\big(T_1(X,Y)\big)^{2/3}}{\sigma}.
\end{multline*}
\item {\it Proposition \ref{prop:CTMR} for $T_2$:}
\begin{multline*}
\|\Li(X+Z)-\Li(Y+Z)\|_{TV}\\ \leq \bigg(\frac{\max\big(\E\big[(X+Z)^2\big],\E\big[(Y+Z)^2\big]\big)}{16\pi}\bigg)^{1/5}\frac{(T_2(X,Y))^{4/5}}{\sigma^2}.
\end{multline*}
\item {\it Proposition \ref{prop:TV} + Theorem 2.21 in \cite{CT07}:}
$$\|\Li(X+Z)-\Li(Y+Z)\|_{TV}\leq \bigg( \frac{9\max\big(\E[X^2],\E[Y^2]\big)}{\sqrt{2\pi^5}}\bigg)^{1/3}\frac{(T_2(X,Y))^{1/6}}{\sigma}.$$
\end{description}
We see that Proposition \ref{prop:CTMR} gives a much tighter bound than Proposition \ref{prop:TV} + Theorem 2.21 in \cite{CT07} when $T_2(X,Y)$ is small.
\end{remark}
\subsection{Main total variation results}\label{subsec:TV}
As it was the case in Section \ref{sec:uppb}, in order to obtain an upper bound for the total variation distance between the marginals $X_t^1$ and $X_t^2$ of two Lévy processes it is enough to separately control the total variation distance between Gaussian distributions, between the small jumps and the corresponding Gaussian component and, finally, between the big jumps. The latter can be controlled by means of the following result.
\begin{theorem}\label{teo:CPPTV}
Let $(X_i)_{i\geq 1}$ and $(Y_i)_{i\geq1}$ be sequences of i.i.d. random variables a.s. different from zero and $N$, $N'$ be two Poisson random variables with $N$ (resp. $N'$) independent of $(X_i)_{i\geq 1}$ (resp. $(Y_i)_{i\geq 1}$). Denote by $\lambda$ (resp. $\lambda'$) the mean of $N$ (resp. $N'$). Then,
$$\Big\|\Li\Big(\sum_{i=1}^N X_i\Big)-\Li\Big(\sum_{i=1}^{N'}Y_i\Big)\Big\|_{TV}\leq (\lambda\wedge\lambda')\|\Li(X_1)-\Li(Y_1)\|_{TV}+1-e^{-|\lambda-\lambda'|}.$$
\end{theorem}
\begin{proof}
Without loss of generality, let us suppose that $\lambda\geq \lambda'$ and write $\lambda=\alpha+\lambda'$, $\alpha\geq 0$.
By triangle inequality,
\begin{align}
\Big\|\Li\Big(\sum_{i=1}^N X_i\Big)-\Li\Big(\sum_{i=1}^{N'}Y_i\Big)\Big\|_{TV} &\leq \Big\|\Li\Big(\sum_{i=1}^N X_i\Big)-\Li\Big(\sum_{i=1}^{N''}X_i\Big)\Big\|_{TV}\nonumber\\
&\quad + \Big\|\Li\Big(\sum_{i=1}^{N''} X_i\Big)-\Li\Big(\sum_{i=1}^{N'}Y_i\Big)\Big\|_{TV},\label{eq:ti}
\end{align}
where $N''$ is a random variable independent of $(X_i)_{i\geq 1}$ and with the same law as $N'$.
The first term in \eqref{eq:ti} can be bounded as follows. Let $P$ be a Poisson random variable independent of $N''$ and $(X_i)_{i\geq 1}$ with mean $\alpha$. Then,
\begin{align*}\Big\|\Li\Big(\sum_{i=1}^N X_i\Big)-\Li\Big(\sum_{i=1}^{N''}X_i\Big)\Big\|_{TV} &=\Big\|\Li\Big(\sum_{i=1}^{N''+P} X_i\Big)-\Li\Big(\sum_{i=1}^{N''}X_i\Big)\Big\|_{TV}\\ &\leq \Big\|\delta_0-\Li\Big(\sum_{i=1}^{P}X_i\Big)\Big\|_{TV}
\end{align*}
where the last bound follows by subadditivity of the total variation distance.
By definition, it is easy to see that
$$\Big\|\delta_0-\Li\Big(\sum_{i=1}^{P}X_i\Big)\Big\|_{TV}= \p\Big(\sum_{i=1}^{P}X_i\neq 0\Big)\leq 1-e^{-\alpha}.$$
In order to bound the second term in \eqref{eq:ti} we condition on $N'$ and again use the subadditivity of the total variation distance, together with the fact that $\Li(N')=\Li(N'')$:
\begin{align*}
\Big\|\Li\Big(\sum_{i=1}^{N''} X_i\Big)-\Li\Big(\sum_{i=1}^{N'}Y_i\Big)\Big\|_{TV}&\leq\sum_{n\geq 0} \Big\|\Li\Big(\sum_{i=1}^{n} X_i\Big)-\Li\Big(\sum_{i=1}^{n}Y_i\Big)\Big\|_{TV}\p(N'=n)\\&\leq \sum_{n\geq0} n \|\Li(X_1)-\Li(Y_1)\|_{TV}\p(N'=n)\\&=\lambda'\|\Li(X_1)-\Li(Y_1)\|_{TV}.
\end{align*}
\end{proof}
The treatment of the small jumps is the subject of the following result:
\begin{proposition}\label{prop:TVprocesses}
Let $X$ be a pure jump Lévy process with Lévy measure $\nu$. Introduce $\nu_\varepsilon=\nu\I_{|x|\leq \varepsilon}$. Then, for all $\Sigma>0$ and $\varepsilon\in(0,1]$, we have
\begin{align*}
\Big\|P_t^{(0,\Sigma,\nu_\varepsilon)}-P_t^{(0,\sqrt{\Sigma^2+\bar\sigma^2(\varepsilon)},0)}\Big\|_{TV}&\leq
\sqrt{\frac{2}{\pi t\Sigma^2}}\Wi_1\Big(P_t^{(0,0,\nu_\varepsilon)},P_t^{(0,\bar\sigma(\varepsilon),0)}\Big)\\
&\leq \sqrt{\frac{2}{\pi t\Sigma^2}}\min\Big(2\sqrt{t\bar\sigma^2(\varepsilon)},\frac{\varepsilon}{2}\Big).
\end{align*}
\end{proposition}
\begin{proof}
This follows by applying first Proposition \ref{prop:TV} and then Theorem \ref{teo:smalljumps}.
\end{proof}
As a consequence of the above estimates on the Wasserstein distances, we obtain a bound for the total variation distance of the marginals of L\'evy processes with non-zero Gaussian components.
\begin{theorem}\label{th:MainTV}
With the same notation used in Theorem \ref{teow1} and Section \ref{sec:notationlevy}, for all $t>0$, $\varepsilon\in (0,1]$ and for all $\sigma_i>0$, $i=1,2$, we have:
\begin{align*}
\Big\|P_t^{(b_1,\sigma_1,\nu_1)}&-P_t^{(b_2,\sigma_2,\nu_2)}\Big\|_{TV} \\&\leq \frac{\sqrt{\frac{t}{2\pi}} \Big|b_1(\varepsilon)-b_2(\varepsilon)\Big|+\sqrt 2\Big|\sqrt{\sigma_1^2+\bar\sigma_1^2(\varepsilon)}-\sqrt{\sigma_2^2+\bar\sigma_2^2(\varepsilon)}\Big|}{\sqrt{\sigma_1^2+\bar\sigma_1^2(\varepsilon)}\vee \sqrt{\sigma_2^2+\bar\sigma_2^2(\varepsilon)}}\\
&\quad+ \sum_{i=1}^2\sqrt{\frac{2}{\pi t \sigma_i^2}}
\min\Big(2\sqrt{t\bar \sigma_i^2(\varepsilon)}, \frac{\varepsilon}{2}\Big)\\
&\quad
+t\big|\lambda_1(\varepsilon)-\lambda_2(\varepsilon)\big|
+t\big(\lambda_1(\varepsilon)\wedge\lambda_2(\varepsilon)\big)
\big\|\frac{\nu_1^\varepsilon}{\lambda_1(\varepsilon)}-\frac{\nu_2^\varepsilon}{\lambda_2(\varepsilon)}\big\|_{TV},
\end{align*}
with $\nu_j^\varepsilon=\nu_j(\cdot\cap (\R\setminus(-\varepsilon,\varepsilon)))$ and $\lambda_j(\varepsilon)=\nu_j^\varepsilon(\R)$.
\end{theorem}
\begin{proof}
By subadditivity of the total variation distance and by the triangle inequality,
\begin{align*}
\Big\|P_t^{(b_1,\sigma_1,\nu_1)}-P_t^{(b_2,\sigma_2,\nu_2)}\Big\|_{TV}& \leq \sum_{i=1}^2\Big\|P_t^{(0,\sigma_i,\nu_i(\varepsilon))}-P_t^{(0,\sqrt{\sigma_i^2+\bar\sigma_i^2(\varepsilon)},0)}\Big\|_{TV}\\
&\quad +\Big\|P_t^{(b_1(\varepsilon),\sqrt{\sigma_1^2+\bar\sigma_1^2(\varepsilon)},0)}-P_t^{(b_2(\varepsilon),\sqrt{\sigma_2^2+\bar\sigma_2^2(\varepsilon)},0)}\Big\|_{TV}\\
&\quad + \big\|\Li\big(X_t^{1,B}(\varepsilon)\big)-\Li\big(X_t^{2,B}(\varepsilon)\big)\big\|_{TV}.
\end{align*}
The proof follows from Proposition \ref{prop:TVprocesses}, the classical bound $$\|\No(\mu_1,\sigma_1^2)-\No(\mu_2,\sigma_2^2)\|_{TV}\leq \frac{\frac{1}{\sqrt{2\pi}}|\mu_1-\mu_2|+\sqrt 2|\sigma_1-\sigma_2|}{\sigma_1\vee\sigma_2},
$$
and Theorem \ref{teo:CPPTV}.
\end{proof}
Another useful result follows directly from Proposition \ref{prop:CTMR} with $j=1$ and allows us to bound the total variation distance for L\'evy processes with positive Gaussian part by the Toscani-Fourier distance for the same L\'evy processes, but with a smaller Gaussian part.
\begin{theorem}\label{teo:TVToscani}
Let $X^i\sim(b_i,\sigma_i^2,\nu_i)$, with $\sigma_i>0$, $i=1,2$, be two Lévy processes. For any $\Sigma\in(0,\sigma_1\wedge\sigma_2)$ consider the Lévy processes $\widetilde X^i\sim(b_i,\sigma_i^2-\Sigma^2,\nu_i)$, $i=1,2$. Then,
$$\big\|\Li(X_t^1)-\Li(X_t^2)\big\|_{TV}\leq \frac{\max\big(\E[| X_t^1|], \E[| X_t^2|]\big)^{1/3} \big(T_1(\widetilde X_t^1,\widetilde X_t^2)\big)^{2/3}}{( 16\pi)^{1/6}\Sigma\sqrt t}.
$$
\end{theorem}
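Although the proof is immediate from Proposition \ref{prop:CTMR}, let us record the computation: writing $X_t^i=\widetilde X_t^i+Z_t$ with $Z_t\sim\No(0,\Sigma^2t)$ independent of $\widetilde X_t^i$, Proposition \ref{prop:CTMR} with $j=1$ and $G=\No(0,\Sigma^2t)$ gives the claim, since by the Gaussian computations in the remark above the density $g$ of $G$ satisfies

```latex
\[
\|g'\|_2^{2/3}
=\Big(\frac{1}{4\sqrt\pi\,(\Sigma\sqrt t)^{3}}\Big)^{1/3}
=\frac{1}{(16\pi)^{1/6}\,\Sigma\sqrt t},
\qquad
C_1=\max\big(\E[|X_t^1|],\E[|X_t^2|]\big),
\]
```

where we used that $\widetilde X_t^i+Z_t$ has the same law as $X_t^i$.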
\section{A statistical application}
\subsection{Lower bounds in the minimax sense}\label{subsec:minimax}
One of the main goals in statistics is to estimate a quantity of interest from the data. There are different criteria that can be used to judge the quality of an estimator. In nonparametric statistics it is common to use a minimax approach. Let us recall the classical setting. From the data $(X_1,\dots,X_n)$ one wants to recover a quantity of interest $\theta$ (e.g. $\theta$ is the density of the observations, or the regression function, or the Lévy density, or the diffusion coefficient, etc.). In practice $\theta$ is unknown (but supposed to belong to a certain \emph{parameter space} $\Theta$) and one needs to estimate it via an estimator (a measurable function of the data) $\hat \theta_n=\hat \theta_n(X_1,\dots,X_n)$. To measure the accuracy of the estimator one computes the \emph{minimax risk}
$$\mathcal R_n^*:=\inf_{T_n}\sup_{\theta\in\Theta}\E\big[d^2(\theta,T_n)\big],$$
where the infimum is taken over all possible estimators $T_n$ of $\theta$ and $d$ is a semi-distance on $\Theta$. Furthermore, one says that a positive sequence $(\psi_n)_{n\geq 1}$ is an \emph{optimal rate of convergence} of estimators on $(\Theta,d)$ if there exist constants $C<\infty$ and $c>0$ such that
\begin{equation}\label{eq:ub}
\limsup_{n\to\infty}\psi_n^{-2}\mathcal R_n^*\leq C\quad \quad \text{ (upper bound)}
\end{equation}
and
\begin{equation}\label{eq:lb}
\liminf_{n\to\infty}\psi_n^{-2}\mathcal R_n^*\geq c, \quad \quad \text{ (lower bound)}.
\end{equation}
The goal is then to construct an estimator $\theta_n^*$ such that
$$\sup_{\theta\in\Theta} \E\big[d^2(\theta_n^*,\theta)\big]\leq C'\psi_n^2$$
where $(\psi_n)_{n\geq 1}$ is the optimal rate of convergence and $C'<\infty$ is a constant.
The usual way to proceed is to build an estimator $\hat\theta_n$ of $\theta$ and to start the investigation of its performance via an upper bound like \eqref{eq:ub}. This is important since the first thing to check is that the considered estimator is at least \emph{consistent}, which is automatically implied if $\sup_{\theta\in\Theta}\E\big[d^2(\theta,\hat \theta_n)\big]\to0$. After that, a natural question is whether one could construct a better estimator (in terms of the rate of convergence in the class $(\Theta,d)$). To ensure that no better estimator than the one already constructed exists, one has to prove a lower bound, that is, one needs to show that the rate of convergence of any other possible estimator of $\theta$ is not faster than the rate obtained in the upper bound. This is in general a difficult task and we refer to Chapter 2 in \cite{T} for general techniques to prove lower bounds. Without recalling all the steps needed to prove a lower bound following \cite{T}, let us stress here that one of the fundamental ingredients is a fine upper bound for the total variation distance or other distances between measures. To that end, the estimates in Section \ref{subsec:TV} can be of general interest for proving lower bounds in the minimax sense.
One situation when this general procedure applies is the following, where we show how to simplify the arguments used in \cite{JR} in order to prove the desired lower bound for an estimator of the integrated volatility.
\subsection{How to simplify the proof of the lower bound in \cite{JR}}\label{subsec:JR}
In \cite{JR}, the authors consider a one-dimensional Itô-semimartingale
$X$ with characteristics $(B,C,\nu)$:
$$B_t=\int_0^t b_sds,\quad C_t=\int_0^t c_sds,\quad \nu(dt,dx)=dtF_t(dx).$$
They assume that $X$ belongs to the class $\mathcal S_A^r$ of all Itô-semimartingales that satisfy
$$|b_t|+c_t+\int (|x|^r\wedge 1)F_t(dx)\leq A \quad \forall t\in[0,1].$$
Their goal is to estimate the integrated volatility $C$ at time $1$, $C(X)_1$, from high-frequency observations $X_{\frac{i}{n}}$, $i=0,\dots,n$. They have an upper bound for an estimator of $C(X)_1$ and they want to prove that the rate of convergence attained by that estimator is optimal. To that aim they need to prove that any uniform rate $\psi_n$ for estimating $C(X)_1$ satisfies
\begin{equation}\label{eq:JRlb}
\psi_n\geq (n\log n)^{-\frac{2-r}{2}}\quad \text{if }\ r>1.
\end{equation}
Following \cite{T}, their strategy consists in finding two Lévy processes $X^i\sim(b_i,\sigma_i^2,F_i)$, $i=1,2$, such that
\begin{enumerate}
\item $\sigma_1^2-\sigma_2^2=a_n:=(n\log n)^{-\frac{2-r}{2}}$, $r\in(0,2)$,
\item $\int (|x|^r\wedge 1)F_i(dx)\leq K$,
\item $\|\mathscr L((X_{i/n}^1)_{1\leq i\leq n})-\mathscr L((X_{i/n}^2)_{1\leq i\leq n})\|_{TV}\to 0$ as $n\to \infty$.
\end{enumerate}
The construction in \cite{JR} of the Lévy processes, as well as the proof of the convergence in total variation stated in Point 3 above, is very involved. Let us now see how to use Theorem \ref{teo:TVToscani} to prove \eqref{eq:JRlb} more easily.
To that aim consider two sequences of Lévy processes $X^{1,n}\sim(0,1+a_n,F^1_{n})$ and $X^{2,n}\sim(0,1,F^2_{n})$
with Lévy measures $F^1_n$ and $F^2_n$ satisfying the following conditions:
\begin{itemize}
\item $\int_{\R}(|x|^r\wedge 1)F^i_n(dx)\leq K$, $i=1,2.$
\item Define
$\Psi_{i,n}:=\int_{\R}(e^{iux}-1-iux\I_{|x|\leq 1})F^i_n(dx)$, $i=1,2.$
Then $\Psi_{1,n}$ and $\Psi_{2,n}$ are real positive functions such that
\begin{equation}\label{eq:psi}
\Psi_{2,n}(u)=\frac{a_n}{2}u^2 + \Psi_{1,n}(u), \quad \forall |u|<u_n:=2\sqrt{n\log n}.
\end{equation}
\end{itemize}
It is not difficult to see that Lévy measures $F^1_n$ and $F^2_n$ satisfying such conditions always exist.
In particular, it follows from \eqref{eq:psi} that the $X^{i,n}$, $i=1,2$, have the same characteristic function for all $|u|<u_n$, i.e.:
\begin{align*}
\E\Big[e^{iuX^{1,n}_{1/n}}\Big]&=\exp\Big(-\frac{u^2}{2n}(1+a_n)-\frac{\Psi_{1,n}(u)}{n}\Big),\\
\E\Big[e^{iuX^{2,n}_{1/n}}\Big]&=\exp\Big(-\frac{u^2}{2n}-\frac{\Psi_{2,n}(u)}{n}\Big).
\end{align*}
In order to apply Theorem \ref{teo:TVToscani} let us observe that $X^{1,n}_{1/n}$ (resp. $X^{2,n}_{1/n}$) is equal in law to the convolution between a Gaussian distribution $\mathcal N\big(0,\frac{1}{8n}\big)$ and $\widetilde X^{1,n}_{1/n}$ (resp. $\widetilde X^{2,n}_{1/n}$), where $\widetilde X^{1,n}\sim\big(0,\frac{7}{8}+a_n, F^1_n\big)$ (resp. $\widetilde X^{2,n}\sim\big(0,\frac{7}{8}, F^2_n\big)$).
We obtain
\begin{align*}
\|\mathscr L(X^{1,n}_{1/n})-\mathscr L(X^{2,n}_{1/n})\|_{TV}\leq \Big(\frac{32}{\pi}\Big)^{1/6} \big(C_n n^{3/4}T_1(\mu_n, \nu_n)\big)^{2/3},
\end{align*}
where $C_n^2=\max\big(\E[| X^{1,n}_{1/n}|],\E[| X^{2,n}_{1/n}|]\big)$ and $\mu_n$, $\nu_n$ and $G_n$ denote the laws of $\widetilde X^{1,n}_{1/n}$, $\widetilde X^{2,n}_{1/n}$ and $\mathcal N\big(0,\frac{1}{8n}\big)$, respectively.
We are therefore left to compute $T_1(\mu_n, \nu_n)$ and show that $n\|\mathscr L(X_{1/n}^{1,n})-\mathscr L(X_{1/n}^{2,n})\|_{TV}\to 0$.
\begin{align*}
T_1(\mu_n, \nu_n)&=\sup_{u\in\R}\frac{\Big|\exp\big(-\frac{u^2}{2n}\big(\frac{7}{8}+a_n\big)-\frac{\Psi_{1,n}(u)}{n}\big)-\exp\big(-\frac{7u^2}{16n}-\frac{\Psi_{2,n}(u)}{n}\big)\Big|}{|u|}\\
&=\sup_{|u|>u_n}\frac{\exp\big(-\frac{7u^2}{16n}\big)\Big|\exp\big(-\frac{u^2a_n}{2n}-\frac{\Psi_{1,n}(u)}{n}\big)-\exp\big(-\frac{\Psi_{2,n}(u)}{n}\big)\Big|}{|u|}\\
&\leq \frac{\exp\big(-\frac{7u_n^2}{16n}\big)}{u_n}=\frac{\exp\big(-\frac{7\times4n\log n}{16n}\big)}{2\sqrt{n\log n}}=\frac{n^{-7/4}}{2\sqrt{n\log n}}=\frac{n^{-9/4}}{2\sqrt{\log n}}.
\end{align*}
Hence,
$$\|\mu_n*G_n-\nu_n*G_n\|_{TV}\leq \Big(\frac{32}{\pi}\Big)^{1/6}\bigg(C_n n^{3/4}\frac{n^{-9/4}}{2\sqrt{\log n}}\bigg)^{2/3}\leq\Big(\frac{32}{\pi}\Big)^{1/6}\frac{C_n^{2/3} n^{-1}}{(\log n)^{1/3}}.$$
Therefore,
\begin{align*}
\|\mathscr L((X_{i/n}^{1,n})_{1\leq i\leq n})-\mathscr L((X_{i/n}^{2,n})_{1\leq i\leq n})\|_{TV}&\leq \sqrt{n\|\mathscr L(X^{1,n}_{1/n})-\mathscr L(X^{2,n}_{1/n})\|_{TV}}
\\&\leq \Big(\frac{32}{\pi}\Big)^{1/12}\Big(\frac{C_n}{\sqrt {\log n}}\Big)^{1/3}\to 0,
\end{align*}
as desired.
\section{Introduction}
Emergency response to incidents such as accidents, medical calls, urban crimes, poaching, and fires is one of the most pressing problems faced by communities across the globe. Emergency response must be fast and efficient in order to minimize loss of lives~\cite{jaldell2017important,jaldell2014time}. Significant attention in the last several decades has been devoted to studying emergency incidents and response. The broader goal of designing ERM systems is to enable communities to deal with emergency response in a manner that is principled, proactive, and efficient. ERM systems based on data-driven models can help reduce both human and financial losses. Insights from principled approaches can also be used to improve policies and safety measures. Although such models are increasingly being adopted by government agencies, emergency incidents still cause thousands of deaths and injuries and also result in losses worth billions of dollars directly or indirectly each year~\cite{crimeUS}.
ERM can be divided into five major components: 1) mitigation, 2) preparedness, 3) detection, 4) response, and 5) recovery. The stages of ERM systems are heavily interlinked~\cite{mukhopadhyay2020review}. Mitigation involves sustained efforts to reduce long-term risks to people and property. It also involves creating forecasting models to understand the spatial and temporal characteristics of incidents. Preparedness involves creating policy and allocating resources that enable emergency response management. The third phase seeks to use automated techniques to detect incidents as they happen in order to expedite response. The dispatch phase, the most critical phase in the field, involves responding to incidents when they occur. Finally, the recovery phase seeks to provide support and sustenance to communities and individuals affected by the emergency. While most of the prior work in ERM has studied these problems independently, these stages are interlinked. Frequently, the output of one stage serves as the input for another. For example, predictive models learned in the \textit{preparedness} stage are used in planning \textit{response} strategies. Therefore, it is crucial that ERM pipelines are designed such that intricate interdependencies are considered. In this paper, we highlight some of the challenges that we have faced while designing ERM pipelines and lessons that we have learned through the process. A large portion of the insights has come from our colleagues at the Nashville Fire Department and the Tennessee Department of Transportation, who have provided invaluable domain expertise to us.
\section{Challenges and Lessons}
While ERM is a crucial function of cities and governments, designing and deploying principled approaches to ERM is challenging. We consider the following to be the main challenges in the design and deployment of ERM pipelines.
\begin{enumerate}
\item \textbf{How to forecast incident occurrence?} It has been noted that emergency incidents are generally difficult to predict due to the inherent random nature of such incidents and spatially varying factors~\cite{shankar1995effect,mukhopadhyayAAMAS17}. Incidents are highly sporadic, making it particularly difficult to learn models of incident occurrence. For example, well-known regression models such as Poisson regression and negative binomial regression have been shown to perform poorly on accident data due to the prevalence of zero counts~\cite{mukhopadhyay2020review}. Incident data has also been shown to be particularly sensitive to the scale of spatial and temporal resolutions, thereby making it difficult to perform meaningful inference. There are several approaches that have shown to alleviate these concerns. First, identifying clusters of incidents (both spatial and non-spatial) has shown to balance model variance and spatial heterogeneity particularly well~\cite{sasidharan2015exploring,mukhopadhyayAAMAS17}. Second, dual state models like zero-inflated Poisson models can be used to address the issue of a high number of zero counts in data~\cite{qin2004selecting}.
\item \textbf{When to optimize?} Arguably, the most important component of ERM pipelines is to dispatch responders when incidents occur. While resource allocation and dispatch to emergency incidents evolve in highly uncertain and dynamic environments, the expectation is that response is very timely~\cite{felder2002spatial,mukhopadhyayAAMAS18}. Approaches to optimize dispatch typically focus on decision-making \textit{after} an incident occurs~\cite{toro2013joint,mukhopadhyayAAMAS18,keneally2016markov}. However, our conversations and collaborations with first responders revealed that there is limited applicability of such approaches in practice for two important reasons. First, response to incidents occurs almost instantaneously after a report is received. Although optimizing dispatch can minimize response times in the long run, time spent to optimize dispatch \textit{after} incidents occur is perceived as costly in the field. Second, it is almost impossible to judge the severity of an incident from a call for service. Consequently, it is imperative for first responders to follow a \textit{greedy} strategy and dispatch the closest available responder to incidents. An alternative approach is to periodically optimize the spatial distribution of responders \textit{between} incidents. \citeauthor{pettet2020algorithmic}~\cite{pettet2020algorithmic} introduced this idea recently for emergency response. While there are challenges with respect to scalability of such an approach, planning between incidents is much more applicable in the field, as it does not violate constraints under which first responders operate.
\item \textbf{How to model communication?} Approaches to tackle emergency response typically assume that the agents can observe the world completely and communicate with centralized servers and each other. This assumption is usually satisfied in practice. However, in scenarios that involve natural disasters (like floods, earthquakes, or wildfires), communication mechanisms can break down and power failures are common. In such scenarios, it is important that agents can optimize response based on information gathered locally. One way to approach such a problem is to design distributed approaches for ERM, in which agents can optimize their own decisions~\cite{pettet2020algorithmic}. This is feasible since modern agents (ambulances, for example) are equipped with laptops. While distributed approaches perform worse than their centralized counterparts, they provide the benefit of remaining operational in scenarios where communication is challenging.
\item \textbf{How to model the environment?} A problem in designing approaches to ERM is that the environmental conditions under which response takes place are dynamic. Consider a decision-theoretic model for dispatching responders, for example the semi-Markov decision process formulation for minimizing expected response times to accidents~\cite{mukhopadhyayAAMAS18}. An approach to solve such large-scale decision problems is to use a simulator to find a policy that picks the optimal action given the state of the world. However, in urban areas, events like road closures, constructions, or increased traffic due to a public gathering can drastically alter the distribution of incidents. Further, ambulances can be unavailable due to breakdowns or maintenance. In such cases, it is crucial that the actual state of the environment is taken into account when making allocation and dispatch decisions. One approach is to create high-fidelity models of covariates such as traffic and weather. Such models can then be used in online forecasting models that can accommodate incoming streams of updated information~\cite{MukhopadhyayICCPS}. Similarly, decision-theoretic approaches that can quickly compute promising actions for the current state of the world can be more valuable in emergency response than approaches that find policies for the entire state space of the problem~\cite{pettet2020algorithmic}.
\end{enumerate}
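To make the dual-state models mentioned above concrete, the following is a minimal sketch (our illustration, not tied to any specific ERM system; parameter names are ours) of the zero-inflated Poisson probability mass function: with probability $\pi$ an observation is a structural zero, otherwise it follows a Poisson distribution.

```python
import math

def zip_pmf(k, lam, pi):
    # Zero-inflated Poisson: mixture of a point mass at 0 (weight pi)
    # and a Poisson(lam) distribution (weight 1 - pi).
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson
```

The extra mixture weight inflates the probability of observing zero incidents relative to a plain Poisson model with the same rate, which is exactly the feature needed for sparse incident counts.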
The principled design of ERM systems is an important problem faced by communities. As smart and connected communities evolve, they present both opportunities and challenges to manage ERM systems. In this short paper, we highlight common problems and lessons learned through our experience in designing ERM systems over the last few years.
\section{Acknowledgement}
We would like to acknowledge the National Science Foundation and the Center for Automotive Research at Stanford for funding this research. We would also like to thank the Nashville Fire Department (NFD) and the Tennessee Department of Transportation (TDOT) for collaborating with us and providing invaluable knowledge about the intricacies of emergency response.
{\small
\bibliographystyle{aaai}
\section{Introduction}
In Figure \ref{fig:Adap} we see the possible paths for two different stochastic processes. We will think of each of the drawn paths as having the same probability $1/2$. The process on the left only branches at the final time $2$, while the one on the right already branches at time $1$, but its branches do not move very far apart. The processes on the left and on the right are very close in Wasserstein distance, but their \enquote{information structure} is very different. For the process on the right we already know at time $1$ what is going to happen at time $2$; for the one on the left we do not.
\begin{figure}
\includegraphics[trim={0 1cm 0 4.5cm},clip,width=.8\linewidth]{lib/Adap.pdf}%
\caption{Two processes which are very close in Wasserstein distance, but whose information structure is very different.}
\label{fig:Adap}
\end{figure}
A number of authors have introduced topologies and/or metrics which respect this information structure of processes -- topologies for which, in particular, the two processes in Figure \ref{fig:Adap} are \emph{not} \enquote{close} to each other.
These are: Hellwig's information topology \cite{He96}, the nested distance of Pflug, Pichler and co-authors \cite{PfPi12, Pi13, PfPi14, PfPi15, PfPi16, GlPfPi17} and the extended weak topology of Aldous \cite{Al81}. Lassalle's notion of a causal transference plan, \cite{Las18}, can also be utilized to define a metric by restricting the transference plans in the definition of the Wasserstein metric to be causal and then symmetrizing.
In a parallel paper \cite{AllTopologiesAreEqual} we show that all these topologies are in fact equal in the finite discrete time setting.
Already by looking at the pictures in Figure \ref{fig:Adap} one can see that all of these topologies will necessarily lack a feature which is often very useful -- namely the characterization of compactness by something akin to Prokhorov's theorem.
Let us imagine a sequence of laws of processes $\mu_n$ described by pictures similar to the one on the right, only with the size of the gap at time $1$ going to zero.
We have just argued that, if the topology is to respect the \enquote{information structure} of processes, then the sequence $(\mu_n)_n$ cannot converge to the measure $\mu$ described by the picture on the left.
If the topology is also finer than the weak topology (which is a feature that all of the cited topologies share) then $(\mu_n)_n$, and any of its subsequences have nowhere to converge to.
This is even though the $\mu_n$ remain bounded in all of the usual senses, so that under any putative generalization of Prokhorov's theorem to this new topology they should be relatively compact.
One \enquote{fix} for this problem, which has already seen some use for example in \cite{BaBePa18,BaPa19}, is to pass to a larger space which (among other things) contains an extra element for $(\mu_n)_n$ to converge to.
But we are also interested in finding out what the (relatively) compact sets in the original space are.
We now give a rigorous definition of the information topology as introduced by Hellwig, as this is the formulation that is easiest to work with for the purposes of this paper (see \cite{AllTopologiesAreEqual} for all the equivalent ways of describing this topology) and then state our main theorem, Theorem \ref{thm:relconested}, which gives a characterization of relatively compact sets in the information topology. We would like to emphasize the parallels between this theorem and the theorem of Arzelà-Ascoli describing compact sets in spaces of continuous functions.
Let $\mathcal Z$ be a Polish space. In fact, let us fix a compatible complete bounded metric, so that we are viewing $\mathcal Z$ as a Polish metric space with a bounded metric $\relax\ifmmode D\else \error\fi_Z$. We are interested in probability measures on $\mathcal Z^N$, where $N \in \mathbb{N}$. We denote by $Z_t : \mathcal Z^N \rightarrow \mathcal Z$ the projection on the $t$-th coordinate, i.e.\ $(Z_t)_t$ is the canonical process on $\mathcal Z^N$.
\newcommand{\mathcal L}{\mathcal L}
Building on the idea alluded to above, namely that we want to capture what we may predict about the future evolution of a process from its behaviour up to the current time $t$, we introduce maps
\begin{align*}
\mathcal I_t : \Pr {\mathcal Z^N} \rightarrow \Pr {\mathcal Z^t \times \Pr {\mathcal Z^{N-t}}}
\end{align*}
which send a measure $\mu$ to the joint law of
\begin{align*}
Z_1, \dots, Z_t, \mathcal L^{\mu}(Z_{t+1},\dots,Z_N | Z_1, \dots, Z_t)
\end{align*}
under $\mu$. $\mathcal L^{\mu}(Z_{t+1},\dots,Z_N | Z_1, \dots, Z_t)$ denotes the conditional law of $Z_{t+1},\dots,Z_N$ given $Z_1, \dots, Z_t$ under $\mu$.
\begin{definition}
Hellwig's \emph{information topology} on $\Pr {\mathcal Z^N}$ is the initial topology w.r.t.\ $\set{\mathcal I_t}[1 \leq t < N]$.
\end{definition}
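To make the role of $\mathcal I_1$ concrete, the following Python sketch (ours, purely illustrative and not part of the formal development) models the two two-step processes of Figure \ref{fig:Adap} with branch gap $\varepsilon$: the left process starts at $0$ and branches to $\pm 1$; the right process starts at $\pm\varepsilon$ and then moves deterministically to $\pm 1$. It computes both the plain $1$-Wasserstein distance between the two laws (with the sum metric on paths) and the distance between their images under $\mathcal I_1$, on $\mathcal X \times \Pr{\mathcal Y}$ with the sum metric. The former tends to $0$ with $\varepsilon$, while the latter tends to $1$.

```python
def w1_1d(mu, nu):
    # W1 between finitely supported measures on R, given as lists of
    # (point, weight) pairs, via the CDF formula W1 = int |F_mu - F_nu|.
    pts = sorted({p for p, _ in mu} | {p for p, _ in nu})
    total = 0.0
    for a, b in zip(pts, pts[1:]):
        f_mu = sum(w for p, w in mu if p <= a)
        f_nu = sum(w for p, w in nu if p <= a)
        total += abs(f_mu - f_nu) * (b - a)
    return total

def distances(eps):
    # Left process: paths (0, +1) and (0, -1), prob. 1/2 each.
    # Right process: paths (eps, +1) and (-eps, -1), prob. 1/2 each.
    a = [((0.0, 1.0), 0.5), ((0.0, -1.0), 0.5)]
    b = [((eps, 1.0), 0.5), ((-eps, -1.0), 0.5)]
    d = lambda u, v: abs(u[0] - v[0]) + abs(u[1] - v[1])
    # Plain W1 on path space: brute-force the one-parameter family of
    # couplings of two 2-point measures (t = mass from a-atom 0 to b-atom 0).
    def cost(t):
        return (t * d(a[0][0], b[0][0]) + (0.5 - t) * d(a[0][0], b[1][0])
                + (0.5 - t) * d(a[1][0], b[0][0]) + t * d(a[1][0], b[1][0]))
    weak = min(cost(i / 1000 * 0.5) for i in range(1001))
    # I_1 of the left process is a single atom (0, law of Z_2), so the
    # adapted distance is an average over the right process' two atoms.
    cond_left = [(1.0, 0.5), (-1.0, 0.5)]
    adapted = (0.5 * (abs(eps) + w1_1d(cond_left, [(1.0, 1.0)]))
               + 0.5 * (abs(-eps) + w1_1d(cond_left, [(-1.0, 1.0)])))
    return weak, adapted
```

For small $\varepsilon$ the first returned value is $\varepsilon$ and the second is $1 + \varepsilon$, matching the discussion of Figure \ref{fig:Adap}.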
In Definition \ref{def:mocprel} we introduce the central notion used in characterizing relative compactness in the information topology. First we need a little more notation.
For any Polish space $\mathcal X$ call $\Pr \mathcal X$ the set of probability measures and $\SubP \mathcal X$ the set of subprobability measures on $\mathcal X$.
\begin{definition}[Modulus of Continuity]
\label{def:mocprel}
Let $\mathcal X$ and $\mathcal Y$ be Polish metric spaces and let $\mu \in \Pr { \mathcal X \times \mathcal Y }$.
The \emph{modulus of continuity} $\mocf \mu : \mathbb{R}_+ \rightarrow \mathbb{R}_+$ of $\mu$ is given by
\begin{align*}
\moc \mu \delta & := \sup_{\gamma \in \Sh \mu \delta} \relax\ifmmode D\else \error\fi^\mathcal Y(\gamma)
\end{align*}
where
\begin{align*}
\relax\ifmmode D\else \error\fi^\mathcal X(\gamma) & := {\textstyle \int} \relax\ifmmode D\else \error\fi_\mathcal X(x_1,x_2) \d \gamma(x_1,y_1,x_2,y_2) \text{, } &
\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma) & := {\textstyle \int} \relax\ifmmode D\else \error\fi_\mathcal Y(y_1,y_2) \d \gamma(x_1,y_1,x_2,y_2)
\end{align*}
and
\begin{multline*}
\Sh \mu \delta := \set<\big>{ \gamma \in \SubP { \mathcal X \times \mathcal Y \times \mathcal X \times \mathcal Y } }[{ \\ \text{both $\mathcal X \times \mathcal Y$-marginals of $\gamma$ are $\leq \mu$} \text{ and } \relax\ifmmode D\else \error\fi^\mathcal X(\gamma) \leq \delta }]
\end{multline*}
is the set of measures describing \enquote{perturbations} of $\mu$ that (on average) shift the $\mathcal X$-coordinate by at most $\delta$.
\end{definition}
\begin{remark}
In the definition of $\Sh \mu \delta$ we might as well have said $\gamma \in \Pr { \mathcal X \times \mathcal Y \times \mathcal X \times \mathcal Y }$ instead of $\gamma \in \SubP { \mathcal X \times \mathcal Y \times \mathcal X \times \mathcal Y }$ without changing the definition of $\moc \mu \delta$, see Lemma \ref{lem:mocSubPvsPr}. For our purposes the definition given here is more convenient.
\end{remark}
Note that $\mathcal L^{\mu}(Z_{t+1},\dots,Z_N | Z_1, \dots, Z_t)$, being a conditional law, is a function of $Z_1, \dots, Z_t$.
Setting $\mathcal X := \mathcal Z^t$ and $\mathcal Y := \Pr {\mathcal Z^{N-t}}$ we see that $\mathcal I_t(\mu)$ is a probability measure on $\mathcal X \times \mathcal Y$ which is concentrated on the graph of a measurable function $\mathcal X \rightarrow \mathcal Y$.
$\mathcal X$ can be equipped with the $\ell^1$-metric $\relax\ifmmode D\else \error\fi_\mathcal X((z_i)_i, (z'_i)_i) := \sum_{i=1}^t \relax\ifmmode D\else \error\fi_\mathcal Z(z_i,z'_i)$, which is a bounded compatible complete metric and $\mathcal Y$ can be equipped with the $1$-Wasserstein metric built from the sum metric on $\mathcal Z^{N-t}$, which is a complete metric inducing the usual weak topology on $\Pr {\mathcal Z^{N-t}}$.
In the following, when we write $\mocf {\mathcal I_t(\mu)}$ this is how we want $\mathcal X$ and $\mathcal Y$ in the definition of the modulus of continuity to be understood.
\begin{theorem}
\label{thm:relconested}
$K \subseteq \Pr {\mathcal Z^N}$ is relatively compact in the information topology iff
\begin{enumerate}
\item \label{it:weaklycompact}
$K$ is relatively compact in the weak topology and
\item \label{it:kmalgleichgradigstetig}
$\displaystyle \lim_{\delta \searrow 0} \sup_{\mu \in K} \moc {\mathcal I_t(\mu)} \delta = 0$ for all $t \in \set{ 1, \dots, N-1 }$.
\end{enumerate}
\end{theorem}
\section{Properties of the Modulus of Continuity}
We will see in the proof of Theorem \ref{thm:relconested} that if we understand relative compactness in the information topology in the case of two timepoints, there is not much difficulty in passing to the $N$-timepoint case. So we will first focus on the two-timepoint case.
Here the information topology is the topology that we get on $\Pr {\mathcal Z^2}$ when we embed it into $\Pr {\mathcal Z \times \Pr \mathcal Z}$ via $\mathcal I_1$.
In fact $\Pr {\mathcal Z^2}$ with the information topology is homeomorphic to the subspace of $\Pr {\mathcal Z \times \Pr \mathcal Z}$ whose elements are all probability measures which are concentrated on the graph of a Borel function $\mathcal Z \rightarrow \Pr \mathcal Z$, equipped with the subspace topology.
So this is the setting in which we will begin studying the problem. We have two Polish metric spaces $\mathcal X$ and $\mathcal Y$ and we are interested in the relatively compact sets in $\FunP \mathcal X \mathcal Y \subseteq \Pr {\mathcal X \times \mathcal Y}$, the space of measures on $\mathcal X \times \mathcal Y$ which are concentrated on the graph of some Borel function from $\mathcal X$ to $\mathcal Y$.
\subsection{From \texorpdfstring{$1$}{1}-Wasserstein to \texorpdfstring{$p$}{p}-Wasserstein}
\label{sec:wassersteinp}
At this point we would like to clarify a small detail that we have tried to mostly gloss over up to now.
In the introduction we have been switching between talking about topological spaces and talking about metric spaces.
This was for expositional purposes, because we wanted to show how our results connect to the literature on \enquote{adapted weak topologies}, more specifically the information topology, which has only been defined as a topology -- not a metric -- by Hellwig.
As can be seen from Definition \ref{def:mocprel} of the modulus of continuity, our methods make direct use of a metric.
By choosing a compatible complete bounded metric on $\mathcal Z$ (and $\mathcal Z^{N-t}$) we get the $1$-Wasserstein metric (or really any $p$-Wasserstein metric) to induce the usual weak topology on $\Pr {\mathcal Z^{N-t}}$ and are thus able to recover topological results about the weak topology and the information topology.
\newcommand{\mathscr P_{\!\!p}}{\mathscr P_{\!\!p}}
\RenewDocumentCommand{\Pr}{d<>m}{\mathscr P_{\!\!p}\IfValueTF{#1}{#1}{\left}(#2\IfValueTF{#1}{#1}{\right})}
\NewDocumentCommand{\PrN}{d<>m}{\mathscr P\IfValueTF{#1}{#1}{\left}(#2\IfValueTF{#1}{#1}{\right})}
The methods themselves do not rely on the assumption that the metrics are bounded, though.
They work for any Polish metric space and provide statements about compact sets in the topology induced by the $1$-Wasserstein distance.
In fact, they are also easily generalized to $p$-Wasserstein distances for $p \geq 1$.
Therefore, in the sequel let us make the following conventions, which we will be using unless noted otherwise.
$1 \leq p < \infty$ can be chosen now and is kept fixed throughout the paper.
All spaces $\mathcal X, \mathcal Y, \mathcal Z$ etc.\ denoted by calligraphic letters are Polish metric spaces. The metric on $\mathcal X$ will be called $\relax\ifmmode D\else \error\fi_{\mathcal X}$, etc. If clear from the context we may omit the subscript. For any two Polish metric spaces $\mathcal X$ and $\mathcal Y$ their product space $\mathcal X \times \mathcal Y$ will be regarded as a Polish metric space with the metric
\begin{align*}
\relax\ifmmode D\else \error\fi_{\mathcal X \times \mathcal Y}\pa<\big>{(x_1,y_1),(x_2,y_2)} := \pa<\big>{ \relax\ifmmode D\else \error\fi_\mathcal X(x_1,x_2)^p + \relax\ifmmode D\else \error\fi_\mathcal Y(y_1,y_2)^p }^{\frac 1 p} \text{ .}
\end{align*}
Note that this construction is associative so that there is no confusion about what the metric on for example $\mathcal X \times \mathcal Y \times \mathcal Z$ should be, as both groupings $(\mathcal X \times \mathcal Y) \times \mathcal Z$ and $\mathcal X \times (\mathcal Y \times \mathcal Z)$ give the same result. So for example the metric on $\mathcal Z^t$ is
\begin{align*}
\relax\ifmmode D\else \error\fi_{\mathcal Z^t}\pa<\big>{(z_i)_i, (z'_i)_i} = \pa<\big>{ {\textstyle \sum_i} \, \relax\ifmmode D\else \error\fi_\mathcal Z(z_i,z'_i)^p }^{\frac 1 p} \text{ .}
\end{align*}
For any Polish metric space $\mathcal X$, $\Pr \mathcal X$ will denote the space of probability measures $\mu$ on $\mathcal X$ with finite $p$-th moment, i.e.\ satisfying
\begin{align*}
\int \relax\ifmmode D\else \error\fi_\mathcal X(x_0,x)^p \d\mu(x) < \infty
\end{align*}
for any (and therefore all) $x_0$ and will carry the $p$-Wasserstein metric
\begin{align*}
\relax\ifmmode D\else \error\fi_{\Pr \mathcal X} (\mu_1, \mu_2) := \mathcal W_p(\mu_1,\mu_2) = \pa{\inf_{\gamma \in \Cpl {\mu_1} {\mu_2}} \int \relax\ifmmode D\else \error\fi_\mathcal X(x_1,x_2)^p \d\gamma(x_1,x_2)}^{\frac 1 p}
\end{align*}
where $\Cpl {\mu_1} {\mu_2}$ is the set of couplings between $\mu_1$ and $\mu_2$, i.e.\ the set of measures $\gamma \in \PrN {\mathcal X \times \mathcal X}$ with first marginal $\mu_1$ and second marginal $\mu_2$.
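For measures on the real line, the infimum in the display above is attained by the monotone coupling, so for two empirical measures with equally many, equally weighted atoms $\mathcal W_p$ reduces to matching sorted atoms. A minimal illustrative sketch (ours, not part of the paper):

```python
def wp_empirical(xs, ys, p):
    # p-Wasserstein distance between the empirical measures of xs and ys
    # (equal weights, equal length): the monotone (sorted) coupling is
    # optimal for measures on the real line.
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return (sum(abs(a - b) ** p for a, b in zip(xs, ys)) / len(xs)) ** (1 / p)
```

This one-dimensional shortcut is only used here to illustrate the definition; the general case requires solving the optimal transport problem over all couplings.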
\RenewDocumentCommand{\FunP}{d<>mm}{\mathscr F_{\!p} \IfValueTF{#1}{#1}{\left}( #2 \rightsquigarrow #3 \IfValueTF{#1}{#1}{\right})}
$\FunP \mathcal X \mathcal Y$ is the space of $\mu \in \Pr {\mathcal X \times \mathcal Y}$ which are concentrated on the graph of some Borel function from $\mathcal X$ to $\mathcal Y$.
We also amend Definition \ref{def:mocprel}.
\begin{definition}[$p$-Modulus of Continuity]
\label{def:moc}
Let $\mathcal X$ and $\mathcal Y$ be Polish metric spaces and let $\mu \in \Pr { \mathcal X \times \mathcal Y }$.
The \emph{modulus of continuity} $\mocf \mu : \mathbb{R}_+ \rightarrow \mathbb{R}_+$ of $\mu$ is given by
\begin{align*}
\moc \mu \delta & := \sup_{\gamma \in \Sh \mu \delta} \relax\ifmmode D\else \error\fi^\mathcal Y(\gamma)
\end{align*}
where
\begin{align*}
\relax\ifmmode D\else \error\fi^\mathcal X(\gamma) & := \pa<\Big>{{\textstyle \int} \relax\ifmmode D\else \error\fi_\mathcal X(x_1,x_2)^p \d \gamma(x_1,y_1,x_2,y_2)}^{\frac 1 p} \text{,}\\
\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma) & := \pa<\Big>{{\textstyle \int} \relax\ifmmode D\else \error\fi_\mathcal Y(y_1,y_2)^p \d \gamma(x_1,y_1,x_2,y_2)}^{\frac 1 p}
\end{align*}
and
\begin{multline*}
\Sh \mu \delta := \set<\big>{ \gamma \in \SubP { \mathcal X \times \mathcal Y \times \mathcal X \times \mathcal Y } }[{ \\ \text{both $\mathcal X \times \mathcal Y$-marginals of $\gamma$ are $\leq \mu$} \text{ and } \relax\ifmmode D\else \error\fi^\mathcal X(\gamma) \leq \delta }]
\end{multline*}
\end{definition}
\begin{remark}
\label{rem:DXprop}
There are two main properties of $\relax\ifmmode D\else \error\fi^\mathcal X$ and $\relax\ifmmode D\else \error\fi^\mathcal Y$ that we will be making use of in our proofs.
The first is that for $r \geq 0$
\begin{align}
\label{eq:DXhomog}
\relax\ifmmode D\else \error\fi^\mathcal X(r \gamma) = r^{1/p} \, \relax\ifmmode D\else \error\fi^\mathcal X(\gamma) \text{ .}
\end{align}
The second is that $\relax\ifmmode D\else \error\fi^\mathcal X(\gamma)$ is really the $L^p(\gamma)$-norm of $(x_1,y_1,x_2,y_2) \mapsto \relax\ifmmode D\else \error\fi_\mathcal X(x_1,x_2)$. If we can decompose this function as a sum of functions, or bound it by a sum of functions, then we may apply the triangle inequality of $L^p(\gamma)$.
\end{remark}
\subsection{Basic properties of the modulus of continuity}
Now we start listing basic properties of $\moc \mu \delta$.
First we show that in the definition of $\moc \mu \delta$ it does not matter whether we talk about probabilities or subprobabilities.
\begin{definition}
Let $\gamma \in \SubP {\mathcal X \times \mathcal Y \times \mathcal X \times \mathcal Y}$. The mirrored version, or \emph{inverse}, $\gamma^{-1}$ of $\gamma$ is the pushforward of $\gamma$ under the map $(x_1,y_1,x_2,y_2) \mapsto (x_2,y_2,x_1,y_1)$.
\end{definition}
\begin{lemma}
\label{lem:mocSubPvsPr}
Let $\mu \in \Pr {\mathcal X \times \mathcal Y}$.
For any $\gamma' \in \SubP {\mathcal X \times \mathcal Y \times \mathcal X \times \mathcal Y}$, both of whose $\mathcal X \times \mathcal Y$-marginals are $\leq \mu$, there is a $\gamma \in \PrN {\mathcal X \times \mathcal Y \times \mathcal X \times \mathcal Y}$ both of whose $\mathcal X \times \mathcal Y$-marginals are equal to $\mu$, which satisfies $\relax\ifmmode D\else \error\fi^\mathcal X(\gamma) = \relax\ifmmode D\else \error\fi^\mathcal X(\gamma')$, $\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma) = \relax\ifmmode D\else \error\fi^\mathcal Y(\gamma')$ and which is symmetric in the sense that $\gamma = \gamma^{-1}$.
\end{lemma}
\begin{proof}
Given $\gamma' \in \SubP {\mathcal X \times \mathcal Y \times \mathcal X \times \mathcal Y}$ we first symmetrize by setting $\gamma_2 := \frac 1 2 \pa{\gamma' + \gamma'^{-1}}$. Because metrics are symmetric, $\relax\ifmmode D\else \error\fi^\mathcal X(\gamma_2) = \relax\ifmmode D\else \error\fi^\mathcal X(\gamma')$ and $\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma_2) = \relax\ifmmode D\else \error\fi^\mathcal Y(\gamma')$. Now the first and the second $\mathcal X \times \mathcal Y$-marginal of $\gamma_2$ are both equal to some common measure $\mu' \leq \mu$. If we add the identity coupling of $\mu-\mu'$, i.e.\ the measure $\push{(x,y)\mapsto (x,y,x,y)}\pa{\mu - \mu'}$, to the measure $\gamma_2$ we get a measure $\gamma$ which is still symmetric, still satisfies $\relax\ifmmode D\else \error\fi^\mathcal X(\gamma) = \relax\ifmmode D\else \error\fi^\mathcal X(\gamma')$, $\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma) = \relax\ifmmode D\else \error\fi^\mathcal Y(\gamma')$ and which has both marginals equal to $\mu' + (\mu - \mu') = \mu$ and therefore must be a probability measure.
\end{proof}
\begin{lemma}
$\mocf \mu$ is monotone, i.e.\ $\delta_1 \leq \delta_2$ implies $\moc \mu {\delta_1} \leq \moc \mu {\delta_2}$.
\end{lemma}
\begin{proof}
Obvious.
\end{proof}
\begin{lemma}
$\restr{\mocf \mu}{(0,\infty)}$ is continuous.
\end{lemma}
\begin{proof}
Let $0 < \delta_1 < \delta_2$. Let $\gamma \in \Sh \mu {\delta_2}$.
By \eqref{eq:DXhomog} we have $r \gamma \in \Sh \mu {\delta_1}$, if we set $r := \pa{\frac {\delta_1} {\delta_2}}^p$.
So $\moc \mu {\delta_1} \geq \relax\ifmmode D\else \error\fi^\mathcal Y(r \gamma) = \frac {\delta_1} {\delta_2} \relax\ifmmode D\else \error\fi^\mathcal Y(\gamma)$.
As $\gamma \in \Sh \mu {\delta_2}$ was arbitrary we have
\begin{align}
\label{eq:moccontindelta}
\moc \mu {\delta_1} \geq \frac {\delta_1} {\delta_2} \moc \mu {\delta_2} \text{ .}
\end{align}
Let $\delta > 0$ and $\varepsilon > 0$, and let $|\delta' - \delta| < \varepsilon'$ where $\varepsilon' > 0$ is small enough that both
\begin{align*}
\pa{ 1 - \frac {\delta - \varepsilon'} \delta } \moc \mu \delta & < \varepsilon &
\pa{ \frac {\delta + \varepsilon'} \delta - 1} \moc \mu \delta & < \varepsilon \text{ .}
\end{align*}
If $\delta' < \delta$ then subtracting \eqref{eq:moccontindelta} with $\delta_2 = \delta$, $\delta_1 = \delta'$ from $\moc \mu \delta$ we get
\begin{align*}
| \moc \mu \delta - \moc \mu {\delta'} | = \moc \mu \delta - \moc \mu {\delta'} \leq \pa{1 - \frac {\delta'} \delta} \moc \mu \delta < \varepsilon \text{ .}
\end{align*}
If $\delta < \delta'$ then similarly multiplying \eqref{eq:moccontindelta} by $\frac {\delta_2} {\delta_1}$, substituting $\delta_2 = \delta'$, $\delta_1 = \delta$ and subtracting $\moc \mu \delta$ from it we get
\begin{align*}
| \moc \mu \delta - \moc \mu {\delta'} | = \moc \mu {\delta'} - \moc \mu \delta \leq \pa{ \frac {\delta'} \delta - 1 } \moc \mu \delta < \varepsilon \text{ .}
\end{align*}
\end{proof}
The following lemma shows how the analogy hinted at by calling $\mocf \mu$ the modulus of continuity is to be understood. While the classical modulus of continuity recognizes \emph{continuous functions} $f$ as those for which $\lim_{\delta \searrow 0} \moc f \delta = 0$, our modulus of continuity for measures recognizes \emph{measures concentrated on the graph of a function} as those $\mu$ for which $\lim_{\delta \searrow 0} \moc \mu \delta = 0$.
\begin{lemma}
\label{lem:FunPCharacterisation}
Let $\mu \in \Pr { \mathcal X \times \mathcal Y }$. Then $\mu \in \FunP \mathcal X \mathcal Y$ iff $\lim_{\delta \searrow 0} \moc \mu \delta = 0$.
\end{lemma}
\begin{proof}
By monotonicity of $\mocf \mu$, $\lim_{\delta \searrow 0} \moc \mu \delta = 0$ implies $\moc \mu 0 = 0$. We first show that this in turn implies $\mu \in \FunP \mathcal X \mathcal Y$.
For any $\mu \in \Pr { \mathcal X \times \mathcal Y } $ we can always construct the following $\gamma \in \Sh \mu 0 \subseteq \PrN { \mathcal X \times \mathcal Y \times \mathcal X \times \mathcal Y} $.
Let $(\mu_x)_{x \in \mathcal X}$ be a disintegration of $\mu$ w.r.t.\ the first coordinate.
\begin{align*}
\gamma(f) & := \iiint f(x,y_1,x,y_2) \d \mu_x(y_2) \d \mu_x(y_1) \d \mu(x, y) \\ &\phantom{:}= \iint f(x,y_1,x,y_2) \d \pa{ \mu_x \otimes \mu_x }(y_1,y_2) \d \mu(x, y)
\end{align*}
$\moc \mu 0 = 0$ implies that
\begin{align*}
0 = \relax\ifmmode D\else \error\fi^\mathcal Y(\gamma)^p = \iint \relax\ifmmode D\else \error\fi_\mathcal Y(y_1,y_2)^p \d \pa{ \mu_x \otimes \mu_x }(y_1,y_2) \d \mu(x, y) \text{ .}
\end{align*}
This means that for $\marg \mu \mathcal X$-a.a.\ $x$ we have $\int \relax\ifmmode D\else \error\fi_\mathcal Y(y_1,y_2)^p \d \pa{ \mu_x \otimes \mu_x }(y_1,y_2) = 0$. This implies that $\mu_x$ is concentrated on a single point, and there is a measurable map $b$ sending measures concentrated on a single point to that point. $b \circ (x \mapsto \mu_x)$ is then the function on whose graph $\mu$ is concentrated. This concludes the first half of the proof.
We now show that $\mu \in \FunP \mathcal X \mathcal Y$ implies $\lim_{\delta \searrow 0} \moc \mu \delta = 0$.
Let $f: \mathcal X \rightarrow \mathcal Y$ be a measurable function such that $\int g(x,y) \d \mu(x,y) = \int g(x,f(x)) \d \mu(x, y)$ for all bounded measurable $g$.
Let $\epsilon > 0$. Fix $y_0 \in \mathcal Y$ and let $\theta > 0$ be such that every measurable $g : \mathcal X \times \mathcal Y \rightarrow [0,1]$ with $\mu(g) < \theta$ satisfies
\begin{align}
\label{eq:abscont}
\int \relax\ifmmode D\else \error\fi(y_0,y)^p g(x,y) \d \mu(x,y) < \epsilon^{p} \text{ .}
\end{align}
This is possible because the finite measure which has density $(x,y) \mapsto \relax\ifmmode D\else \error\fi(y_0,y)^p$ w.r.t.\ $\mu$ is absolutely continuous w.r.t.\ $\mu$.
Because $\mathcal X$ is Polish and $\mathcal Y$ is second countable we can apply Lusin's theorem to get a compact set $K \subseteq \mathcal X$ such that $\restr f K$ is uniformly continuous and $\marg \mu \mathcal X (K^C) < \frac \theta 3$.
Let $\eta > 0$ be such that for $x_1, x_2 \in K$, $\relax\ifmmode D\else \error\fi(x_1,x_2) < \eta$ implies $\relax\ifmmode D\else \error\fi(f(x_1),f(x_2)) < \epsilon$.
Let $\delta < (\frac \theta 3)^{1/p} \cdot \eta$.
Let $\gamma \in \Sh \mu \delta$. $\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma)$ is the $L^p(\gamma)$-norm of the function $(x_1,y_1,x_2,y_2) \mapsto \relax\ifmmode D\else \error\fi(y_1,y_2)$ which, setting
\begin{align*}
R(x_1,x_2) & := \indicator{(K \times K)^C}(x_1,x_2) + \indicator{K \times K}(x_1,x_2) \, \indicator{\lcro{\eta,\infty}}(\relax\ifmmode D\else \error\fi(x_1,x_2))
\end{align*}
we may bound as follows
\begin{multline*}
\relax\ifmmode D\else \error\fi(y_1,y_2) = \relax\ifmmode D\else \error\fi(y_1,y_2) \, R(x_1,x_2) + \relax\ifmmode D\else \error\fi(y_1,y_2) \, \indicator{K \times K}(x_1,x_2) \, \indicator{\lcro{0,\eta}}(\relax\ifmmode D\else \error\fi(x_1,x_2)) \\
\leq \pa<\big>{\relax\ifmmode D\else \error\fi(y_1,y_0) + \relax\ifmmode D\else \error\fi(y_0,y_2)} \, R(x_1,x_2)
+ \relax\ifmmode D\else \error\fi(y_1,y_2) \, \indicator{K \times K}(x_1,x_2) \, \indicator{\lcro{0,\eta}}(\relax\ifmmode D\else \error\fi(x_1,x_2))
\end{multline*}
Using the triangle inequality in $L^p(\gamma)$ and the fact that $\mu$ is concentrated on the graph of $f$ we get that
\begin{multline*}
\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma) \leq
\pa<\bigg>{\int \relax\ifmmode D\else \error\fi(y_0,y_1)^p R(x_1,x_2) \d\gamma(x_1,y_1,x_2,y_2)}^{1/p} + \\
\pa<\bigg>{\int \relax\ifmmode D\else \error\fi(y_0,y_2)^p R(x_1,x_2) \d\gamma(x_1,y_1,x_2,y_2)}^{1/p} + \\
\pa<\bigg>{\int \relax\ifmmode D\else \error\fi(f(x_1),f(x_2))^p \, \indicator{K \times K}(x_1,x_2) \, \indicator{\lcro{0,\eta}}(\relax\ifmmode D\else \error\fi(x_1,x_2))\d \gamma(x_1,y_1,x_2,y_2) }^{1/p}
\end{multline*}
The first two integrals are of the form as in \eqref{eq:abscont} and as
\begin{align*}
\theta > \int R(x_1,x_2) \d\gamma(x_1,y_1,x_2,y_2) = \iint R(x_1,x_2) \d\gamma_{x_1,y_1}(x_2,y_2) \d\mu(x_1,y_1)
\end{align*}
by the choice of $K$ (the two marginal conditions contribute at most $\frac \theta 3$ each) and by Markov's inequality, since $\gamma \in \Sh \mu \delta$ and $\delta < (\frac \theta 3)^{1/p} \eta$ give $\gamma\pa<\big>{\relax\ifmmode D\else \error\fi(x_1,x_2) \geq \eta} \leq \pa{\frac \delta \eta}^p < \frac \theta 3$, they can each be bounded by $\epsilon^p$. In the last integral, whenever the integrand is nonzero, $\relax\ifmmode D\else \error\fi(f(x_1),f(x_2)) < \epsilon$ by our choice of $K$ and $\eta$.
Overall we get
\begin{align*}
\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma) < 3 \epsilon \text{ .}
\end{align*}
\end{proof}
\subsection{Composition of measures}
In the proof of Lemma \ref{lem:moccontinmu} below we will be \enquote{composing} measures on product spaces to get new measures. A useful intuition may be to think of the operation $\mcmp$ below as a generalization of the composition of functions or relations. From a probabilistic point of view $\gamma \mathbin{\dot\otimes} \gamma'$ below should be called the conditionally independent product (at least when both $\gamma$ and $\gamma'$ are probability measures).
\begin{definition}
\label{def:mcmp}
For $\gamma \in \Pr {\mathcal X \times \mathcal Y}$ and $\lambda \in \Pr {\mathcal Y \times \mathcal Z}$ with $\marg \gamma \mathcal Y = \marg {\lambda} \mathcal Y$ define
\begin{align*}
\gamma \mathbin{\dot\otimes} \lambda & := f \mapsto \int f(x,y,z) \d \lambda_y(z) \d \gamma(x,y) \\
\gamma \mcmp \lambda & := f \mapsto \int f(x,z) \d \lambda_y(z) \d \gamma(x,y)
\end{align*}
where $y \mapsto \lambda_y$ is a disintegration of $\lambda$ w.r.t.\ the first variable, that is $\int f \d\lambda = \iint f(y,z) \d \lambda_y(z) \d \pa{\marg \lambda \mathcal Y}(y)$.
\end{definition}
The asymmetry in the definition is only apparent, in the sense that we may as well have disintegrated $\gamma$ instead of $\lambda$, getting the same result.
Both $\mathbin{\dot\otimes}$ and $\mcmp$ are associative operations.
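For finitely supported measures the composition $\mcmp$ is easy to write down explicitly: representing $\gamma$ and $\lambda$ as matrices of atom weights, the disintegration $\lambda_y$ is the $y$-th row of $\lambda$ normalized by the $\mathcal Y$-marginal. A small sketch (ours, for illustration only):

```python
def mcmp(G, L):
    # Discrete version of the composition:
    #   G[i][j] = gamma({(x_i, y_j)}),  L[j][k] = lambda({(y_j, z_k)}).
    # The Y-marginals of G and L are assumed to agree; rows of L are
    # normalized by the Y-marginal to obtain the disintegration lambda_y.
    ymarg = [sum(row) for row in L]
    return [[sum(G[i][j] * L[j][k] / ymarg[j]
                 for j in range(len(L)) if ymarg[j] > 0)
             for k in range(len(L[0]))]
            for i in range(len(G))]
```

When $\gamma$ is the identity coupling of the common $\mathcal Y$-marginal this recovers $\lambda$ itself, and when each $\lambda_y$ is a point mass the operation reduces to the composition of functions, matching the intuition above.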
\begin{lemma}
\label{lem:moccontinmu}
Let $\delta > 0$. Then
\begin{align*}
\mu \mapsto \moc \mu \delta
\end{align*}
is continuous on $\Pr{ \mathcal X \times \mathcal Y}$, i.e.\ in the $p$-Wasserstein metric.
\end{lemma}
\begin{proof}
Let $\mu, \nu \in \Pr{\mathcal X \times \mathcal Y}$ and let $\mathcal W_p(\mu,\nu) < \epsilon$. We will show that then \eqref{eq:moccontinmuconc} below holds. As both sides of \eqref{eq:moccontinmuconc} converge to $\moc \mu \delta$ as $\epsilon$ goes to $0$ this shows that $\mu \mapsto \moc \mu \delta$ is continuous at $\mu$.
$\mathcal W_p(\mu,\nu) < \epsilon$ implies that there is $\psi \in \Cpl \mu \nu$ s.t.\ $\relax\ifmmode D\else \error\fi^\mathcal X(\psi) \vee \relax\ifmmode D\else \error\fi^\mathcal Y(\psi) < \epsilon$.
We want to bound $\moc \mu \delta$ in terms of $\moc \nu \delta$, so let $\gamma \in \Sh \mu \delta$ be arbitrary. By Lemma \ref{lem:mocSubPvsPr} we may as well assume that $\gamma$ is a probability measure. Then
{
\begin{multline*}
\relax\ifmmode D\else \error\fi^\mathcal X(\psi \mcmp \gamma \mcmp \psi^{-1}) = \pa{\int \relax\ifmmode D\else \error\fi(x_1,x_4)^p \d \pa{ \psi \mcmp \gamma \mcmp \psi^{-1} }(x_1,{y_1},x_4,{y_4})}^{1/p} \leq \\
\pa{\int \pa<\big>{\relax\ifmmode D\else \error\fi(x_1,x_2) + \relax\ifmmode D\else \error\fi(x_2,x_3) + \relax\ifmmode D\else \error\fi(x_3,x_4)}^p \d \pa{ \psi \mathbin{\dot\otimes} \gamma \mathbin{\dot\otimes} \psi^{-1} }(x_1,{y_1},x_2,{y_2},x_3,{y_3},x_4,{y_4})}^{1/p} \leq \\
\relax\ifmmode D\else \error\fi^\mathcal X(\psi) + \relax\ifmmode D\else \error\fi^\mathcal X(\gamma) + \relax\ifmmode D\else \error\fi^\mathcal X(\psi^{-1}) < \relax\ifmmode D\else \error\fi^\mathcal X(\gamma) + 2 \epsilon < \delta + 2 \epsilon \text{ .}
\end{multline*}%
}
Scaling down, we get that $r \cdot \psi \mcmp \gamma \mcmp \psi^{-1} \in \Sh \nu \delta$, where $r := \pa{\frac \delta {\delta + 2 \epsilon}}^p$. By definition of $\moc \nu \delta$
\begin{align*}
\relax\ifmmode D\else \error\fi^\mathcal Y\pa{r \cdot \psi \mcmp \gamma \mcmp \psi^{-1}} \leq \moc \nu \delta
\intertext{or equivalently, since scaling a measure by $r$ scales $\relax\ifmmode D\else \error\fi^\mathcal Y$ by $r^{1/p} = \frac \delta {\delta + 2 \epsilon}$,}
\relax\ifmmode D\else \error\fi^\mathcal Y\pa{\psi \mcmp \gamma \mcmp \psi^{-1}} \leq \pa{ 1 + \frac {2 \epsilon} \delta } \moc \nu \delta \text{ .}
\end{align*}
We can bound $\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma)$ in terms of $\relax\ifmmode D\else \error\fi^\mathcal Y\pa{\psi \mcmp \gamma \mcmp \psi^{-1}}$:
\begin{multline*}
\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma) = \pa{\int \relax\ifmmode D\else \error\fi(y_2,y_3)^p \d \pa{\psi \mathbin{\dot\otimes} \gamma \mathbin{\dot\otimes} \psi^{-1}}(x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4)}^{1/p} \leq \\
\pa{\int \pa<\big>{\relax\ifmmode D\else \error\fi(y_2,y_1) + \relax\ifmmode D\else \error\fi(y_1,y_4) + \relax\ifmmode D\else \error\fi(y_4,y_3)}^p \d \pa{\psi \mathbin{\dot\otimes} \gamma \mathbin{\dot\otimes} \psi^{-1}}(x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4)}^{1/p} \leq \\
\relax\ifmmode D\else \error\fi^\mathcal Y(\psi) + \relax\ifmmode D\else \error\fi^\mathcal Y\pa{ \psi \mcmp \gamma \mcmp \psi^{-1} } + \relax\ifmmode D\else \error\fi^\mathcal Y\pa{\psi^{-1}} < \\
\pa{ 1 + \frac {2 \epsilon} \delta } \moc \nu \delta + 2 \epsilon \text{ .}
\end{multline*}
As $\gamma$ was arbitrary this implies
\begin{align*}
\moc \mu \delta < \pa{ 1 + \frac {2 \epsilon} \delta } \moc \nu \delta + 2 \epsilon \text{ .}
\end{align*}
Rearranging terms gives the left side of \eqref{eq:moccontinmuconc}, while repeating the argument with the roles of $\mu$ and $\nu$ swapped gives the right side of \eqref{eq:moccontinmuconc}.
\begin{align}
\label{eq:moccontinmuconc}
\frac { \moc \mu \delta - 2 \epsilon } { 1 + \frac {2 \epsilon} \delta } < \moc \nu \delta < \pa{ 1 + \frac {2 \epsilon} \delta } \moc \mu \delta + 2 \epsilon
\end{align}
\end{proof}
\begin{theorem}
\label{thm:relco}
Let $K \subseteq \FunP \mathcal X \mathcal Y$. Then $K$ is relatively compact in $\FunP \mathcal X \mathcal Y$ (equipped with the $p$-Wasserstein metric) iff
\begin{enumerate}
\item \label{it:relcoinPr}
$K$ is relatively compact in $\Pr { \mathcal X \times \mathcal Y }$ (equipped with the $p$-Wasserstein metric) and
\item \label{it:gleichgradigstetig}
$ \displaystyle\lim_{\delta \searrow 0} \sup_{\mu \in K} \moc \mu \delta = 0 $.
\end{enumerate}
\end{theorem}
\begin{proof}
We first show that \itref{it:relcoinPr} and \itref{it:gleichgradigstetig} together imply that $K$ is relatively compact in $\FunP \mathcal X \mathcal Y$.
To that end we show that every sequence in $K$ has a subsequence which converges to a point in $\FunP \mathcal X \mathcal Y$. So let $(\mu_n)_n$ be a sequence in $K$. By \itref{it:relcoinPr} there is a subsequence $(\mu_{n_k})_k$ which converges to a point $\mu \in \Pr { \mathcal X \times \mathcal Y }$.
By continuity of the modulus of continuity in its measure argument, i.e.\ by Lemma \ref{lem:moccontinmu}, and by assumption \itref{it:gleichgradigstetig}
\begin{align*}
\lim_{\delta \searrow 0} \moc \mu \delta = \lim_{\delta \searrow 0} \lim_{k \to \infty} \moc {\mu_{n_k}} \delta \leq \lim_{\delta \searrow 0} \sup_{\nu \in K} \moc {\nu} \delta = 0 \text{ .}
\end{align*}
By Lemma \ref{lem:FunPCharacterisation} this implies $\mu \in \FunP \mathcal X \mathcal Y$.
The implication from \enquote{$K$ relatively compact in $\FunP \mathcal X \mathcal Y$} to \itref{it:relcoinPr} is trivial. To show that \enquote{$K$ relatively compact in $\FunP \mathcal X \mathcal Y$} implies \itref{it:gleichgradigstetig} we show its contrapositive.
So assume that \itref{it:gleichgradigstetig} is false. Then there is an $\epsilon > 0$ and for every $n \in \mathbb{N}$ a measure $\mu_n \in K$ with $\moc {\mu_n} {\frac 1 n} \geq \epsilon$. Because $\delta \mapsto \moc {\mu_n} \delta$ is monotone this means that $\restr{\pa{\mocf {\mu_n}}}{\lcro{\frac 1 n,\infty}} \geq \epsilon$. For any subsequence $(\mu_{n_k})_k$ of $(\mu_n)_n$ which converges to some $\mu \in \Pr { \mathcal X \times \mathcal Y }$ we have, again by Lemma \ref{lem:moccontinmu},
\begin{align*}
\lim_{\delta \searrow 0} \moc \mu \delta = \lim_{\delta \searrow 0} \lim_{k \to \infty} \moc {\mu_{n_k}} \delta \geq \epsilon
\text{ .}
\end{align*}
This means that $\mu \notin \FunP \mathcal X \mathcal Y$. Hence no subsequence of $(\mu_n)_n$ converges in $\FunP \mathcal X \mathcal Y$, i.e.\ $K$ is not relatively compact in $\FunP \mathcal X \mathcal Y$.
\end{proof}
\begin{remark}
We have not investigated in detail how the versions of the modulus of continuity for the different $p$-Wasserstein metrics, $p \in \set{0} \cup \lcro{1,\infty}$, relate to each other. It appears that the \enquote{$0$-Wasserstein} version of the modulus of continuity could have been used in the theorem above as well. This suggests that the exponent $p$ plays only a minor role in the part of the modulus of continuity that matters here, namely its asymptotic behaviour at $0$.
\end{remark}
\section{Relative Compactness in the Nested Weak Topology}
\label{sec:relconested}
We are now ready to prove Theorem \ref{thm:relconested}. We restate it below as Theorem \ref{thm:relconested2}, generalizing from the weak topology to the one induced by the $p$-Wasserstein metric.
\begin{definition}
The $\mathcal W_p$-information topology is the initial topology with respect to the maps $\mathcal I_t$, $t \in \set{1, \dots, N-1}$, with the target spaces $\Pr {\mathcal Z^t \times \Pr {\mathcal Z^{N-t}}}$ equipped with the topology which arises when we use the $p$-Wasserstein metric throughout as per our convention introduced at the beginning of Section \ref{sec:wassersteinp}.
\end{definition}
For this to make sense we need to check that $\mathcal I_t(\mu) \in \Pr { \mathcal Z^t \times \Pr {\mathcal Z^{N-t}}}$, i.e.\ that
\begin{align*}
\int \relax\ifmmode D\else \error\fi(\hat z_0, \hat z)^p \d\pa{\mathcal I_t(\mu)}(\hat z) < \infty
\end{align*}
for some $\hat z_0 \in \mathcal Z^t \times \Pr {\mathcal Z^{N-t}}$. Let $z_0 \in \mathcal Z^t$, $z_0' \in \mathcal Z^{N-t}$, and set $\hat z_0 := (z_0, \delta_{z_0'})$. Then one easily checks
\begin{align*}
\int \relax\ifmmode D\else \error\fi(\hat z_0, \hat z)^p \d\pa{\mathcal I_t(\mu)}(\hat z) = \int \relax\ifmmode D\else \error\fi((z_0,z_0'), z)^p \d \mu(z) < \infty \text{ .}
\end{align*}
At this point we would also like to add another minor generalization, which is to allow the process to take its values in different spaces for different times. Let $\mathcal Z_t$, $t \in \set{1, \dots, N}$ be Polish spaces. The role of $\mathcal Z^N$ is now played by $\prod_{t=1}^N \mathcal Z_t$ and the process at time $t$ takes values in $\mathcal Z_t$. We introduce the shorthands
\begin{align*}
\overline X_s^t & := \prod_{i=s}^t \mathcal Z_i &
\overline X & := \overline X_1^N &
\overline X^t & := \overline X_1^t &
\overline X_t & := \overline X_t^N \text{ .}
\end{align*}
\begin{theorem}
\label{thm:relconested2}
$K \subseteq \Pr {\overline X}$ is relatively compact in the $\mathcal W_p$-information topology iff
\begin{enumerate}
\item \label{it:weaklycompact2}
$K$ is relatively compact in $\Pr {\overline X}$, i.e.\ in the topology induced by the $p$-Wasserstein metric, and
\item \label{it:kmalgleichgradigstetig2}
$\displaystyle \lim_{\delta \searrow 0} \sup_{\mu \in K} \moc {\mathcal I_t(\mu)} \delta = 0$ for all $t \in \set{ 1, \dots, N-1 }$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:relconested2} (and therefore also Theorem \ref{thm:relconested})]
That $K$ being relatively compact in the $\mathcal W_p$-information topology implies \itref{it:weaklycompact2} and \itref{it:kmalgleichgradigstetig2} is clear, because when $\Pr {\overline X}$ is equipped with the $\mathcal W_p$-information topology both the identity map to $\Pr {\overline X}$ equipped with the usual topology and all of the $\mathcal I_t$ are continuous, and therefore map relatively compact sets to relatively compact sets.
To show the reverse implication we need to show that
\begin{enumerate}[label=(\alph*)]
\item \label{it:PPandPcompactness} $K$ relatively compact in the usual topology on $\Pr {\overline X}$ implies that $\mathcal I_t\br[]{K}$ is relatively compact in $\Pr {\overline X^t \times \Pr {\overline X_{t+1}}}$, and
\item \label{it:IXclosed} $\mathcal I[\Pr {\overline X}]$ is a closed subset of $\prod_{t=1}^{N-1} \mathcal I_t[\Pr {\overline X}]$, where $\mathcal I(\mu) := (\mathcal I_t(\mu))_t$,
\end{enumerate}
because then \itref{it:weaklycompact2}, \itref{it:kmalgleichgradigstetig2}, \itref{it:PPandPcompactness} and Theorem \ref{thm:relco} imply that $\prod_{t=1}^{N-1} \mathcal I_t[K]$ is relatively compact in $\prod_{t=1}^{N-1} \FunP {\overline X^t} {\Pr {\overline X_{t+1}}}$, i.e.\ that there is a compact subset $K'$ of $\prod_{t=1}^{N-1} \FunP {\overline X^t} {\Pr {\overline X_{t+1}}}$ which contains $\mathcal I[K]$. The set $K' \cap \mathcal I[\Pr {\overline X}]$ is still compact by \itref{it:IXclosed} and still contains $\mathcal I[K]$, showing that $\mathcal I[K]$ is relatively compact in $\mathcal I[\Pr {\overline X}]$. As the $\mathcal W_p$-information topology is the initial topology with respect to the maps $\mathcal I_t$, this is the same as $K$ being relatively compact in the $\mathcal W_p$-information topology.
Showing \itref{it:IXclosed} is relatively simple.
For two Polish spaces $\mathcal X$ and $\mathcal Y$ we define a map
\begin{align*}
\und \mathcal X \mathcal Y & : \Pr {\mathcal X \times \Pr \mathcal Y} \rightarrow \Pr {\mathcal X \times \mathcal Y} \\
\intertext{which sends $\nu \in \Pr {\mathcal X \times \Pr \mathcal Y}$ to the probability $\nu'$ satisfying}
\int f \d \nu' & = \iint f(x,y) \d \hat y (y) \d \nu (x, \hat y) \text{ .}
\end{align*}
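For a finitely supported example: if
\begin{align*}
\nu = \tfrac 1 2 \delta_{(x_1,\, \frac 1 2 \delta_{y_1} + \frac 1 2 \delta_{y_2})} + \tfrac 1 2 \delta_{(x_2,\, \delta_{y_1})} \text{ ,}
\end{align*}
then averaging out the inner measures gives
\begin{align*}
\und \mathcal X \mathcal Y (\nu) = \tfrac 1 4 \delta_{(x_1,y_1)} + \tfrac 1 4 \delta_{(x_1,y_2)} + \tfrac 1 2 \delta_{(x_2,y_1)} \text{ .}
\end{align*}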
$\und \mathcal X \mathcal Y$ is easily seen to be Lipschitz-continuous with constant $1$ by writing out the definition
\begin{multline*}
\relax\ifmmode D\else \error\fi_{\Pr{ \mathcal X \times \Pr \mathcal Y}}^p(\mu, \nu) = \\ \inf_{\gamma \in \Cpl \mu \nu} \int \relax\ifmmode D\else \error\fi(x_1,x_2)^p + \inf_{\hat \gamma \in \Cpl {\hat y_1} {\hat y_2}} \int \relax\ifmmode D\else \error\fi(y_1,y_2)^p \d \hat\gamma(y_1,y_2) \d \gamma(x_1,\hat y_1, x_2,\hat y_2)
\end{multline*}
and employing a measurable selector for the inner transport plans $\hat\gamma$ to create from a transport plan $\gamma$ between $\mu$ and $\nu$ a transport plan between $\und \mathcal X \mathcal Y (\mu)$ and $\und \mathcal X \mathcal Y (\nu)$ with the same cost as $\gamma$.
The set $\mathcal I[\Pr {\overline X}]$ is the preimage of the diagonal $\set{ (\mu)_{t \in \set{ 1, \dots, N-1 }}}[\mu \in \Pr {\overline X}] \subseteq {\Pr {\overline X}}^{N-1}$ under the map which sends $(\mu_t)_t$ to $\pa{\und {\overline X^t} {\overline X_{t+1}}(\mu_t)}_t$.
This last map is continuous and the diagonal is closed.
\itref{it:PPandPcompactness} is a special case of Lemma \ref{lem:undAndRelCo} below.
\end{proof}
\begin{lemma}
\label{lem:undAndRelCo}
$K \subseteq \Pr { \mathcal X \times \Pr \mathcal Y }$ is relatively compact iff $\und \mathcal X \mathcal Y [K]$ is relatively compact.
\end{lemma}
\begin{proof}
As $\und \mathcal X \mathcal Y$ is continuous, the implication from left to right is clear.
The other direction is also not hard using Lemmata \ref{lem:compinPrXtimesY} and \ref{lem:compinPrPrX} below, whose proofs we postpone:
If $\und \mathcal X \mathcal Y [K]$ is relatively compact, then $\set{ \marg \mu \mathcal X }[\mu \in K] = \set{ \marg \nu \mathcal X }[{\nu \in \und \mathcal X \mathcal Y[K]}]$ is relatively compact.
$\set{ \marg \mu \mathcal Y }[\mu \in K]$ is also relatively compact by Lemma \ref{lem:compinPrPrX} because $\avg \mathcal Y \big[ \set{ \marg \mu \mathcal Y }[\mu \in K] \big] = \set{\marg \nu \mathcal Y}[{\nu \in \und \mathcal X \mathcal Y [K]}]$ is relatively compact. Therefore, by Lemma \ref{lem:compinPrXtimesY}, $K$ is relatively compact.
\end{proof}
\begin{lemma}
\label{lem:compinPrXtimesY}
Let $K \subseteq \Pr {\mathcal X \times \mathcal Y}$. $K$ is relatively compact iff $K_\mathcal X := \set{\marg \mu \mathcal X}[\mu \in K]$ and $K_\mathcal Y := \set{\marg \mu \mathcal Y}[\mu \in K]$ are relatively compact.
\end{lemma}
In analogy to the above definition of $\und \mathcal X \mathcal Y$ we define
\begin{align*}
\avg \mathcal X & : \Pr {\Pr \mathcal X} \rightarrow \Pr \mathcal X \\
\intertext{by requiring}
\int f \d(\avg \mathcal X(\mu)) & = \iint f(x) \d \nu(x) \d \mu(\nu) \text{ .}
\end{align*}
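For a finitely supported example: if $\mu = \frac 1 2 \delta_{\delta_{x_1}} + \frac 1 2 \delta_{\frac 1 2 \delta_{x_1} + \frac 1 2 \delta_{x_2}}$, then $\avg \mathcal X(\mu) = \frac 3 4 \delta_{x_1} + \frac 1 4 \delta_{x_2}$.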
\begin{lemma}
\label{lem:compinPrPrX}
Let $K \subseteq \Pr{ \Pr \mathcal X }$. Then $K$ is relatively compact iff $\avg \mathcal X[K]$ is relatively compact.
\end{lemma}
Lemmata \ref{lem:undAndRelCo}, \ref{lem:compinPrXtimesY}, and \ref{lem:compinPrPrX} have been proved elsewhere.
Lemma \ref{lem:compinPrXtimesY} is very well known for the weak topology --- i.e.\ in the case where the metrics on the base spaces are bounded. In the current setting the proof is only a little more intricate.
Lemma \ref{lem:compinPrPrX} can be found for example in \cite[p. 178, Ch. II]{Sz91} for the weak topology and in \cite{BaBePa18} for our setting. Lemma \ref{lem:undAndRelCo} is also proved there.
For completeness we also provide their proofs here.
We make use of the following variant of Prokhorov's theorem.
\begin{lemma}
\label{lem:WpProkhorov}
Let $\mathcal X$ be a Polish metric space, let $x_0 \in \mathcal X$ be fixed.
$K \subseteq \Pr \mathcal X$ is relatively compact iff for all $\epsilon > 0$ there is a compact set $L \subseteq \mathcal X$ with
\begin{align*}
\int_{L^c} \pa{1 + \relax\ifmmode D\else \error\fi(x_0, x)^p} \d \mu(x) < \epsilon
\end{align*}
for all $\mu \in K$.
\end{lemma}
The integrand above will pop up a few times. Let us therefore fix, for each Polish metric space $\mathcal X$ we will be talking about, a point $x_0 \in \mathcal X$, and let us agree to do this in a compatible manner: if $x_0$ is the point we have chosen in $\mathcal X$ and $y_0$ is the point we have chosen in $\mathcal Y$, then in $\mathcal X \times \mathcal Y$ we choose $(x_0,y_0)$. Similarly, in $\Pr \mathcal X$ we choose $\delta_{x_0}$, the Dirac measure at $x_0$.
With this convention, define for any Polish metric space $\mathcal X$
\renewcommand{\phi}{\varphi}
\begin{align*}
\phi_\mathcal X(x) := 1 + \relax\ifmmode D\else \error\fi(x_0,x)^p \text{ .}
\end{align*}
Note that
\begin{align*}
\phi_{\mathcal X \times \mathcal Y}(x,y) & = \phi_\mathcal X(x) + \phi_\mathcal Y(y) - 1 &
\phi_{\Pr \mathcal X}(\nu) & = \int \phi_\mathcal X \d \nu \text{ .}
\end{align*}
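Both identities are immediate once we recall that, as elsewhere in this paper, the metric on a product space is the $p$-sum of the coordinate metrics and the metric on $\Pr \mathcal X$ is $\mathcal W_p$:
\begin{align*}
\phi_{\mathcal X \times \mathcal Y}(x,y) & = 1 + \relax\ifmmode D\else \error\fi(x_0,x)^p + \relax\ifmmode D\else \error\fi(y_0,y)^p = \phi_\mathcal X(x) + \phi_\mathcal Y(y) - 1 \text{ ,} \\
\phi_{\Pr \mathcal X}(\nu) & = 1 + \mathcal W_p(\delta_{x_0}, \nu)^p = 1 + \int \relax\ifmmode D\else \error\fi(x_0,x)^p \d\nu(x) = \int \phi_\mathcal X \d \nu \text{ ,}
\end{align*}
where the second line uses that the product measure $\delta_{x_0} \otimes \nu$ is the only coupling of $\delta_{x_0}$ and $\nu$.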
\begin{proof}[Proof of Lemma \ref{lem:WpProkhorov}]
As is well known, the topology induced by $\mathcal W_p$ is equal to the initial topology w.r.t.\ the map $\psi$ which sends $\mu \in \Pr \mathcal X$ to the measure which has density $\phi_\mathcal X$ w.r.t.\ $\mu$, when the target space of finite positive measures is equipped with the weak topology. (This can be found for example in \cite[Definition 6.8 (iv) and Theorem 6.9]{Vi09}.) $\psi$ is injective, and surjective onto the closed set of all finite positive measures $\nu$ satisfying
\begin{align*}
\int \frac 1 {\phi_\mathcal X(x)} \d \nu(x) = 1 \text{ .}
\end{align*}
$\Pr \mathcal X$ is therefore homeomorphic to this set. Translating Prokhorov's theorem for finite positive measures to $\Pr \mathcal X$ via $\psi$ gives that $K \subseteq \Pr \mathcal X$ is relatively compact iff
\begin{enumerate}
\item $\exists M \in \mathbb{R}_+$ s.t.\ $\int \phi_\mathcal X \d \mu < M$ for all $\mu \in K$
\item $\forall \epsilon > 0$ there is a compact set $L \subseteq \mathcal X$ s.t.\ $\int_{L^c} \phi_\mathcal X \d \mu < \epsilon$ for all $\mu \in K$.
\end{enumerate}
(1) is redundant because we may apply (2) for $\epsilon = 1$ to find a compact set $L$ s.t.\ $\int_{L^c} \phi_\mathcal X \d \mu < 1$ for all $\mu \in K$. $\phi_\mathcal X$ is continuous and therefore bounded on the compact set $L$, say by $M'$, so that
\begin{align*}
\int \phi_\mathcal X \d \mu = \int_{L} \phi_\mathcal X \d \mu + \int_{L^c} \phi_\mathcal X \d \mu \leq M' + 1 =: M \text{ .}
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:compinPrXtimesY}]
$\mu \mapsto \marg \mu \mathcal X$ and $\mu \mapsto \marg \mu \mathcal Y$ are continuous, so one direction is clear.
If $K_\mathcal X$ and $K_\mathcal Y$ are relatively compact, then for any $\epsilon > 0$ there are compact sets $M \subseteq \mathcal X$ and $N \subseteq \mathcal Y$ s.t.\
\begin{align}
\label{eq:MN}
\int_{M^c} \phi_\mathcal X \d (\marg \mu \mathcal X) & < \frac \epsilon 4 &
\int_{N^c} \phi_\mathcal Y \d (\marg \mu \mathcal Y) & < \frac \epsilon 4
\end{align}
for all $\mu \in K$.
Because $\phi_\mathcal X, \phi_\mathcal Y \geq 1$ we also find compact $\bar M \subseteq \mathcal X$, $\bar N \subseteq \mathcal Y$ s.t.\
\begin{align}
\label{eq:bMN}
\marg \mu \mathcal X (\bar M^c) & \leq \frac 1 {\sup_{N} \phi_\mathcal Y} \cdot \frac \epsilon 4 &
\marg \mu \mathcal Y (\bar N^c) & \leq \frac 1 {\sup_{M} \phi_\mathcal X} \cdot \frac \epsilon 4 \text{ .}
\end{align}
We show that for $L := M \times \bar N \cup \bar M \times N$ and for all $\mu \in K$
\begin{align*}
\int_{L^c} \phi_{\mathcal X \times \mathcal Y} \d\mu \leq \epsilon \text{ .}
\end{align*}
$\phi_{\mathcal X \times \mathcal Y}(x,y) < \phi_\mathcal X(x) + \phi_\mathcal Y(y)$, so we show
\begin{align*}
\int_{L^c} \phi_\mathcal X(x) \d\mu(x,y) \leq \frac \epsilon 2 \text{ .}
\end{align*}
$\int_{L^c} \phi_\mathcal Y(y) \d\mu(x,y) \leq \frac \epsilon 2$ will follow by symmetry.
$L^c \subseteq (M \times \bar N)^c = M^c \times \mathcal Y \cup M \times \bar N^c$ and therefore
\begin{align*}
\int_{L^c} \phi_\mathcal X(x) \d\mu(x,y) \leq \int_{M^c \times \mathcal Y} \phi_\mathcal X(x) \d\mu(x,y) + \int_{M \times \bar N^c} \phi_\mathcal X(x) \d \mu(x,y) \text{ .}
\end{align*}
The first summand is $\leq \frac \epsilon 4$ by \eqref{eq:MN}, while the second term is bounded by
\begin{align*}
\sup_M \phi_\mathcal X \cdot \int_{M \times \bar N^c} 1 \d\mu \leq \sup_M \phi_\mathcal X \cdot \mu(\mathcal X \times \bar N^c) \leq \frac \epsilon 4
\end{align*}
by \eqref{eq:bMN}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:compinPrPrX}]
The left-to-right direction is again obvious because $\avg \mathcal X$ is continuous.
For the other direction we show that for all $\epsilon > 0$ there is a compact set $N \subseteq \Pr \mathcal X$ such that for all $\mu \in K$ we have $\int_{N^c} \phi_{\Pr \mathcal X} \d \mu \leq \epsilon$.
Because $\avg \mathcal X[K]$ is relatively compact there is for each $n \in \mathbb{N}_+$ a compact set $L_n \subseteq \mathcal X$ such that for all $\mu \in K$
\begin{align}
\label{eq:Ln}
\int_{L_n^c} \phi_\mathcal X \d(\avg \mathcal X(\mu)) \leq \frac \epsilon 2 \cdot 2^{-n} \text{ .}
\end{align}
We also find for each $n \in \mathbb{N}_+$ a compact set $M_n \subseteq \mathcal X$ such that for all $\mu \in K$ we even have
\begin{align}
\label{eq:Mn}
\int_{M_n^c} \phi_\mathcal X \d(\avg \mathcal X(\mu)) \leq \frac \epsilon 2 \cdot \frac 1 {\sup_{L_n} \phi_\mathcal X} \cdot \frac 1 n \cdot 2^{-n} \text{ .}
\end{align}
Define
\begin{align*}
N & := \set { \nu \in \Pr \mathcal X }[ \smallint_{M_n^c} \phi_\mathcal X \d\nu \leq \frac 1 n \,\, \forall n ] \text{ ,}
\end{align*}
i.e.\ $N = \bigcap_{n \geq 1} N_n$, where
\begin{align}
\label{eq:Nn}
N_n & := \set { \nu \in \Pr \mathcal X }[ \smallint_{M_n^c} \phi_\mathcal X \d\nu \leq \frac 1 n ] \text{ .}
\end{align}
Clearly $N$ is compact, again by Lemma \ref{lem:WpProkhorov}.
We show that for each $\mu \in K$ and for all $n \geq 1$ we have $\int_{N_n^c} \phi_{\Pr \mathcal X} \d\mu \leq \epsilon \cdot 2^{-n}$, because then $\int_{N^c} \phi_{\Pr \mathcal X} \d\mu = \int_{\pa{\bigcup_{n \geq 1} N_n^c}} \phi_{\Pr \mathcal X} \d\mu \leq \sum_{n \geq 1} \int_{N_n^c} \phi_{\Pr \mathcal X} \d\mu \leq \epsilon$.
\begin{multline*}
\int_{N_n^c} \phi_{\Pr \mathcal X} \d \mu = \int_{N_n^c} \int \phi_\mathcal X \d\nu \d\mu(\nu) =
\int_{N_n^c} \int_{L_n^c} \phi_\mathcal X \d\nu \d\mu(\nu) + \int_{N_n^c} \int_{L_n} \phi_\mathcal X \d\nu \d\mu(\nu)
\end{multline*}
The first summand is $\leq \frac \epsilon 2 \cdot 2^{-n}$ by \eqref{eq:Ln}.
The second summand we may bound by
\begin{align*}
\sup_{L_n} \phi_\mathcal X \cdot \int_{N_n^c} 1 \d\mu(\nu) \leq \sup_{L_n} \phi_\mathcal X \cdot n \cdot \int_{N_n^c} \int_{M_n^c} \phi_\mathcal X \d\nu \d\mu(\nu) \leq \frac \epsilon 2 \cdot 2^{-n} \text{ .}
\end{align*}
Here we used first \eqref{eq:Nn} and then \eqref{eq:Mn}.
\end{proof}
\section{Other Applications of the Modulus of Continuity}
In this section we give a new proof for Theorem \ref{thm:dprocont} below. \cite{barbiegupta} gave a different proof for the weak topology, i.e.\ for what in our setting corresponds to the case when the metrics on our base spaces are bounded.
The proof uses Lemma \ref{lem:helper} below, which is also used in the companion paper to this one, \cite{AllTopologiesAreEqual}, as an important ingredient in proving that the information topology of Hellwig is equal to the nested weak topology.
\begin{theorem}
\label{thm:dprocont}
Let $\mu \in \Pr { \mathcal X \times \mathcal Y }$, $\nu \in \FunP {\mathcal Y} {\mathcal Z}$. Then $\mathbin{\dot\otimes}$ is continuous at $(\mu, \nu)$.
\end{theorem}
\begin{lemma}
\label{lem:helper}
Let $\mu \in \FunP \mathcal X \mathcal Y$. For any $\epsilon > 0$ there is a $\delta > 0$ s.t. if
\begin{align*}
\nu \in \Pr { \mathcal X \times \mathcal Y } & \text{ with } \W \mu \nu < \delta \text{ and} \\
\gamma \in \Cpl \mu \nu & \text{ with } \relax\ifmmode D\else \error\fi^\mathcal X(\gamma) < \delta
\end{align*}
then
\begin{align*}
\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma) < \epsilon \text{ .}
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:helper}]
By Lemma \ref{lem:FunPCharacterisation} we can find $\delta' > 0$ such that $\moc \mu {\delta'} < \frac \epsilon 2$. Set $\delta := \frac {\delta'} 2 \wedge \frac \epsilon 2$.
Let $\mathcal W_p(\mu, \nu) = \mathcal W_p(\nu, \mu) < \delta$ and let $\gamma \in \Cpl {\mu} {\nu}$ with $\relax\ifmmode D\else \error\fi^\mathcal X(\gamma) < \delta$. The former implies that there is an $\eta \in \Cpl {\nu} {\mu}$ with $\relax\ifmmode D\else \error\fi^\mathcal X(\eta) \vee \relax\ifmmode D\else \error\fi^\mathcal Y(\eta) < \delta$.
Then $\gamma \mcmp \eta \in \Cpl {\mu} {\mu}$ and $\relax\ifmmode D\else \error\fi^\mathcal X(\gamma \mcmp \eta) \leq \relax\ifmmode D\else \error\fi^\mathcal X(\gamma) + \relax\ifmmode D\else \error\fi^\mathcal X(\eta) < 2 \delta \leq \delta' $. This means that $\gamma \mcmp \eta \in \Sh \mu {\delta'}$ and therefore that $\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma \mcmp \eta) < \frac \epsilon 2$.
\begin{multline*}
\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma) = \pa{\int \relax\ifmmode D\else \error\fi(y_1,y_2)^p \d\pa{\gamma \mathbin{\dot\otimes} \eta}(x_1,y_1,x_2,y_2,x_3,y_3)}^{\frac 1 p} \leq \\
\pa{\int \pa{\relax\ifmmode D\else \error\fi(y_1,y_3) + \relax\ifmmode D\else \error\fi(y_3,y_2)}^p \d\pa{\gamma \mathbin{\dot\otimes} \eta}(x_1,y_1,x_2,y_2,x_3,y_3)}^{\frac 1 p} \leq \\
\relax\ifmmode D\else \error\fi^\mathcal Y(\gamma \mcmp \eta) + \relax\ifmmode D\else \error\fi^\mathcal Y(\eta) < \frac \epsilon 2 + \delta \leq \epsilon \text{ .}
\end{multline*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:dprocont}]
Let $\epsilon > 0$. Find by Lemma \ref{lem:helper} $\delta > 0$ s.t.\ for all $\nu'$ with $\mathcal W_p(\nu,\nu') < \delta$ and all $\kappa \in \Cpl {\nu} {\nu'}$ satisfying $\relax\ifmmode D\else \error\fi^\mathcal Y(\kappa) < \delta$ we have $\relax\ifmmode D\else \error\fi^\mathcal Z(\kappa) < \epsilon$.
Let $\nu' \in \Pr { \mathcal Y \times \mathcal Z }$ s.t.\ $\mathcal W_p(\nu,\nu') < \delta$ and let $\mu' \in \Pr { \mathcal X \times \mathcal Y }$ s.t.\ $\mathcal W_p(\mu,\mu') < \delta \wedge \epsilon$, witnessed by $\gamma \in \Cpl {\mu} {\mu'}$ with $\relax\ifmmode D\else \error\fi^\mathcal X(\gamma) \vee \relax\ifmmode D\else \error\fi^\mathcal Y(\gamma) < \delta \wedge \epsilon$.
From $\gamma$, $\nu$ and $\nu'$ we may use $\mathbin{\dot\otimes}$ twice to define a measure $\chi \in \Cpl {\mu \mathbin{\dot\otimes} \nu} {\mu' \mathbin{\dot\otimes} \nu'}$ which has marginals as shown in the picture below.
\begin{center}
\begin{tikzpicture}[%
mymatrix/.style={matrix of nodes,
column sep=5em,
row sep=1em},
node distance=0em and 0em
]
\matrix[mymatrix] (mx) {%
$\mathcal X$ & $\mathcal Y$ & $\mathcal Z$ \\
$\mathcal X$ & $\mathcal Y$ & $\mathcal Z$ \\
};
\node[fit=(mx-1-1) (mx-1-2), draw, rounded corners, inner sep=0em] (m) {};
\node[above=-0.5em of m, fill=white, inner sep=0] (mu) {$\mu$};
\node[fit=(mx-2-1) (mx-2-2), draw, rounded corners, inner sep=0em] (m') {};
\node[below=-0.5em of m', fill=white, inner sep=0] (mu') {$\mu'$};
\node[fit=(mx-1-1) (mx-1-2) (mx-2-1) (mx-2-2) (mu) (mu'), draw, rounded corners, inner xsep=.9em, inner ysep=1em] (g) {};
\node[left=-0.5em of g, fill=white, inner sep=0.2em] (gamma) {$\gamma$};
\node[fit=(mx-1-2) (mx-1-3), draw, rounded corners, inner sep=.3em] (n) {};
\node[above=-0.4em of n, fill=white, xshift=1em, inner sep=.1em] (nu) {$\nu$};
\node[fit=(mx-2-2) (mx-2-3), draw, rounded corners, inner sep=.3em] (n') {};
\node[below=-0.5em of n', fill=white, xshift=1em, inner sep=.1em] (nu') {$\nu'$};
\end{tikzpicture}
\end{center}
In other words, with $(\nu_y)_y$ a disintegration of $\nu$ w.r.t.\ $\mathcal Y$, and similarly for $\nu'$,
\begin{align*}
\int f \d\chi = \iiint f(x,y,z,x',y',z') \d\nu'_{y'}(z') \d\nu_y(z) \d\gamma(x,y,x',y') \text{ .}
\end{align*}
Setting $\kappa := \marg \chi {\mathcal Y \times \mathcal Z \times \mathcal Y \times \mathcal Z}$ we have $\relax\ifmmode D\else \error\fi^\mathcal Y(\kappa) = \relax\ifmmode D\else \error\fi^\mathcal Y(\gamma) < \delta$, and by our choice of $\delta$ and $\nu'$, $\relax\ifmmode D\else \error\fi^\mathcal Z(\kappa) < \epsilon$.
Now $\relax\ifmmode D\else \error\fi^\mathcal X(\chi) = \relax\ifmmode D\else \error\fi^\mathcal X(\gamma)$, $\relax\ifmmode D\else \error\fi^\mathcal Y(\chi) = \relax\ifmmode D\else \error\fi^\mathcal Y(\gamma)$, $\relax\ifmmode D\else \error\fi^\mathcal Z(\chi) = \relax\ifmmode D\else \error\fi^\mathcal Z(\kappa)$ and therefore
\begin{align*}
\mathcal W_p(\mu \mathbin{\dot\otimes} \nu, \mu' \mathbin{\dot\otimes} \nu') \leq \relax\ifmmode D\else \error\fi^\mathcal X(\chi) + \relax\ifmmode D\else \error\fi^\mathcal Y(\chi) + \relax\ifmmode D\else \error\fi^\mathcal Z(\chi) \leq 3 \epsilon \text{ .}
\end{align*}
\end{proof}
\bibliographystyle{abbrv}
\section{Introduction}
\label{sec:intro}
The nearest neighbor classifier (see, e.g., \citep{cover1967nearest,friedman2001elements,duda2000pattern}) is arguably the simplest and most popular nonparametric classifier in the statistics and machine learning literature. It classifies a test case ${\bf x}$ to the class which has the maximum representation of training data points in a neighborhood of ${\bf x}$. So, when the numbers of training observations from the competing classes are not comparable, it tends to classify more observations in favor of the larger classes. Many other popular classifiers (e.g., classification trees, random forests, support vector machines, artificial neural networks) have similar problems with imbalanced data.
One possible way of dealing with such imbalanced data sets is to assign different weights or costs to observations belonging to different classes (see, e.g., \citep{huang2005weighted,dubey2013class,zang2016improved}). But these choices of weights are somewhat ad hoc, and the performance of the resulting classifier may depend heavily on the choice. Some methods involving under-sampling from the majority class \citep{Domingos1999} or oversampling from the minority class \citep{chawla2002smote} have also been proposed in the machine learning literature. The under-sampling approach removes some data points from the majority class, but this can lead to the removal of important observations and negatively affect the fit of the majority class. The oversampling techniques, on the other hand, synthetically generate observations from the minority class; they are known as Synthetic Minority Oversampling TEchniques (SMOTE). Depending on the sample generation method, we have different versions of SMOTE (see, e.g., \citep{chawla2002smote,han2005borderline,Nguyen2011svmsmote,menardi2014training,last2017oversampling,rsmote}). Some of these methods need to generate data every time we want to classify a new observation, and some of them need more memory to store all over-sampled points. Clearly, this is not desirable for large datasets, especially when the proportion of observations in the minority class is small. Further, two different persons using the same oversampling method on the same data set may get widely different results due to the randomness involved in sample generation.
In this article, we take a statistical approach towards nearest neighbor classification based on imbalanced data. Our method does not need any ad hoc weight or cost function for its implementation. Also, unlike SMOTE, it does not need to generate additional observations from the minority class. This method is based on a probabilistic model, and its results are exactly reproducible. In the next section, we describe this method for binary classification. We establish its consistency under appropriate regularity conditions, and analyze some simulated and real datasets to compare its performance with the usual nearest neighbor classifier, a weighted nearest neighbor classifier and various SMOTE algorithms. In Section~\ref{sec:multi-class}, we extend our method to classification problems involving more than two classes. Consistency of the resulting classifiers is derived and some real data sets are analyzed to investigate their empirical performance. Finally, Section~\ref{sec:concluding_remarks} provides a brief summary of the work and some concluding remarks.
\section{The proposed binary classifier}
\label{sec:method}
One major problem with the nearest neighbor classification based on imbalanced data is the sparsity of the minority class observations. Sometimes, one may not get any training data point from the minority class in a neighborhood of a query point even if the true posterior probability of that class is high at that location. Assignments of different weights or costs to observations belonging to different classes are not of much help in such situations. To take care of this problem, unlike the usual nearest neighbor classifier, here we do not consider the same number of neighbors for all test cases. For a test point ${\bf x}$, we keep on considering its neighbors one by one according to their distances until we get a fixed number of neighbors ($k$, say) from the minority class. Let $p({\bf x})$ be the probability (which can be assumed to be constant over a small neighborhood of ${\bf x}$) that a randomly chosen neighbor of ${\bf x}$ comes from the minority class. If $N_k({\bf x})$ denotes the minimum number of neighbors needed to get $k$ neighbors from the minority class, one can see that $N_k({\bf x})$ follows a negative binomial distribution (see, e.g., \citep{johnson2005univariate}) with the probability mass function (p.m.f.) given by
\[
P(N_k({\bf x})=n)=\binom{n-1}{k-1} (p({\bf x}))^k (1-p({\bf x}))^{n-k},
\]
for $n=k,k+1,\ldots$. Note that if the majority class and the minority class have identical probability distributions, and there are $n_1$ and $n_2$ observations from these two classes, respectively, we have $p({\bf x})=n_2/(n_1+n_2)$ ($=p_0$, say) for all ${\bf x}$. In such a case, the p.m.f.\ is given by
\[
f_k(n)=\binom{n-1}{k-1} p_0^k(1-p_0)^{n-k}.
\]
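For concreteness, the null p.m.f.\ $f_k$ can be evaluated directly from this formula. The sketch below is a plain standard-library illustration; the function name \texttt{null\_pmf} is ours, not part of any package.

```python
from math import comb

def null_pmf(n, k, p0):
    """P(N_k = n) under H0: the negative binomial p.m.f. f_k(n),
    i.e. the chance that exactly n neighbors must be inspected to
    collect k minority-class neighbors, when each neighbor is from
    the minority class with probability p0, independently."""
    if n < k:
        return 0.0
    return comb(n - 1, k - 1) * p0**k * (1.0 - p0)**(n - k)
```

As a sanity check, these probabilities sum to one over the support $n \ge k$.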
The random variable $N_k({\bf x})$ is expected to take a small (respectively, large) value if $p({\bf x})$ is large (respectively, small). If $N_0$ is an independent random variable with p.m.f.\ $f_k$, the probability $P(N_0< N_k({\bf x}))=\sum_{n<N_k({\bf x})}f_k(n)$ can be viewed as the $p$-value \citep[see, e.g.,][]{casella2021statistical} for testing the null hypothesis $H_0:p({\bf x})=p_0$ against the alternative hypothesis $H_1:p({\bf x})>p_0$. Clearly, lower $p$-values indicate stronger evidence in favor of the minority class. Similarly, $P(N_0> N_k({\bf x}))=\sum_{n>N_k({\bf x})}f_k(n)$ can be viewed as the $p$-value for testing $H_0:p({\bf x})=p_0$ against $H_1:p({\bf x})<p_0$, and lower values of it indicate stronger evidence in favor of the majority class. Because of the discrete nature of the random variable $N_k({\bf x})$, here we make a slight adjustment and use
\[
e_k({\bf x})=\sum_{n<N_k({\bf x})}f_k(n) +\frac{1}{2}f_k(N_k({\bf x}))
\]
and $1-e_k({\bf x})=\sum_{n>N_k({\bf x})}f_k(n) +\frac{1}{2}f_k(N_k({\bf x}))$ as adjusted $p$-values for these two cases. Under $H_0:p({\bf x})=p_0$, $e_k({\bf x})$ follows a distribution symmetric about $1/2$ (i.e., $e_k({\bf x})$ and $1-e_k({\bf x})$ have the same distribution), while $e_k({\bf x})>1/2$ (respectively, $e_k({\bf x})<1/2$) gives evidence in favor of the majority (respectively, minority) class, and a larger value of $|e_k({\bf x})-1/2|$ indicates stronger evidence. We compute $e_k({\bf x})$ for several choices of $k \le k_{\max}$ (a user-specified value), and consider ${\cal E}_1= \max_{k} e_k({\bf x})$ as the strongest evidence in favor of the majority class (Class-1, say). Similarly, ${\cal E}_2= \max_{k} (1-e_k({\bf x}))=1-\min_{k} e_k({\bf x})$ can be viewed as the strongest evidence in favor of the minority class (Class-2, say).
We classify the observation ${\bf x}$ to Class-1 (respectively, Class-2) if ${\cal E}_1 >{\cal E}_2$ (respectively, ${\cal E}_1 <{\cal E}_2$).
Note that unlike SMOTE, no randomness is involved in this proposed method, and the results are completely reproducible.
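In code, the adjusted (mid-)$p$-value $e_k({\bf x})$ and the two evidence values can be obtained as follows. This is a minimal standard-library sketch, and all names in it are ours.

```python
from math import comb

def adjusted_p(n_obs, k, p0):
    """e_k = P(N0 < n_obs) + 0.5 * P(N0 = n_obs), where N0 is
    negative binomial with parameters k and p0 (support n >= k)."""
    f = lambda n: comb(n - 1, k - 1) * p0**k * (1 - p0)**(n - k)
    return sum(f(n) for n in range(k, n_obs)) + 0.5 * f(n_obs)

def evidences(n_obs_list, p0):
    """Given observed N_k(x) for k = 1, ..., k_max, return (E1, E2):
    the strongest evidence for the majority and minority classes."""
    e = [adjusted_p(n, k, p0) for k, n in enumerate(n_obs_list, start=1)]
    return max(e), 1.0 - min(e)
```

The point ${\bf x}$ is then assigned to Class-1 when ${\cal E}_1 > {\cal E}_2$, and to Class-2 otherwise.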
\begin{algorithm}[ht]
\begin{mdframed}
\caption{Algorithm for binary classification}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE ${\bf x}$ -- Observation to be classified. \\ $\{{\bf x}_{11},\ldots,{\bf x}_{1n_1}$\}: Training data from majority class.\\
$\{{\bf x}_{21},\ldots,{\bf x}_{2n_2}$\}: Training data from minority class.
\ENSURE $\delta({\bf x})$: Predicted class label of ${\bf x}$.
\\
\STATE Sort the $n_1+n_2$ training data points in increasing order of distances from ${\bf x}$.\\
\textit{Initialization} : $e_{\min}=e_{\max}=0.5$\\
\FOR {$k=1$ to $k_{max}$}
\STATE Compute $n_0$, the minimum number of neighbors of ${\bf x}$ needed to get $k$ neighbors from the minority class.
\STATE Calculate $e_k=\sum_{n<n_0}f_k(n) +\frac{1}{2}f_k(n_0)$.
\IF {($e_k<e_{\min}$)}
\STATE $e_{\min} = e_k$.
\ENDIF
\IF {($e_k>e_{\max}$)}
\STATE $e_{\max} = e_k$.
\ENDIF
\ENDFOR
\STATE Compute ${\cal E}_1=e_{\max}$ and ${\cal E}_2=1-e_{\min}$.
\RETURN $\delta({\bf x})=1+{\mathbb I}_{\{{\cal E}_1<{\cal E}_2\}}$
\end{algorithmic}
\end{mdframed}
\vspace{-0.1in}
\end{algorithm}
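Algorithm~1 can be sketched end-to-end as follows. This is a plain-Python illustration under Euclidean distance; the function and variable names are ours, not part of any library.

```python
from math import comb, dist

def classify(x, majority, minority, k_max=45):
    """Binary decision rule of Algorithm 1: returns 1 (majority)
    or 2 (minority) for the query point x."""
    pts = [(p, 1) for p in majority] + [(p, 2) for p in minority]
    pts.sort(key=lambda t: dist(x, t[0]))              # neighbors by distance
    p0 = len(minority) / len(pts)                      # null probability p0

    def f(n, k):                                       # null p.m.f. f_k(n)
        return comb(n - 1, k - 1) * p0**k * (1 - p0)**(n - k)

    # 1-based position of the k-th minority neighbor equals N_k(x)
    minority_pos = [i + 1 for i, (_, lab) in enumerate(pts) if lab == 2]
    e_min = e_max = 0.5
    for k in range(1, min(k_max, len(minority_pos)) + 1):
        n0 = minority_pos[k - 1]
        e_k = sum(f(n, k) for n in range(k, n0)) + 0.5 * f(n0, k)
        e_min, e_max = min(e_min, e_k), max(e_max, e_k)
    return 1 + int(e_max < 1.0 - e_min)                # delta = 1 + I{E1 < E2}
```

With a well-separated majority cluster and minority cluster, the rule recovers the obvious labels near either cluster.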
\subsection{Large sample consistency}
\label{sec:binary_consistency}
Now, we investigate the large sample behavior of the proposed classifier. For this investigation, we assume that the density functions of the competing classes are continuous, and the sample proportions of the two classes are asymptotically non-negligible. Our method needs the value of $k_{\max}$ to be specified. We assume that $k_{\max}$ grows with $n$ in such a way that $k_{\max}/n \rightarrow 0$ as $n$ diverges. This ensures that the neighborhood around any query point ${\bf x}$ shrinks as $n$ increases, so that we can capture the local behavior of the underlying densities. These assumptions are quite common in the literature on nearest neighbor methods \citep[see, e.g.,][]{loftsgaarden1965nonparametric,cover1967nearest,ghosh2007nearest}.
Under these assumptions, we have the following result.
\begin{theorem}\label{thm:binary_consistency}
Suppose that two competing classes have continuous probability density functions $f_1$ and $f_2$. Also assume that as $n \rightarrow \infty$, (i) $n_i/n$ converges to some $\pi_i \in (0,1)$ for $i=1,2$ ($\pi_1\ge \pi_2,~\pi_1+\pi_2=1$) and (ii) $k_{\max} \rightarrow \infty$ in such a way that $k_{\max}/n \rightarrow 0$. If $f_i({\bf x})>f_j({\bf x})$ ($i,j=1,2,~i\neq j$), then the proposed method classifies the observation ${\bf x}$ to the $i$-th class with probability tending to $1$ as $n$ tends to infinity.
\end{theorem}
\begin{proof} Let ${\bf x}$ be the observation to be classified. First consider a sequence $\{k_n:n\ge 1\}$, where $k_n \rightarrow \infty$ and $k_n/n \rightarrow 0$ as $n \rightarrow \infty$. Define $r_{k_n}$ as the distance between ${\bf x}$ and its $k_n$-th nearest neighbor from Class-2. Let $B({\bf x},r_{k_n})$ be the closed ball (neighborhood) of radius $r_{k_n}$ around ${\bf x}$ and $N_{k_n}$ be the total number of observations in $B({\bf x}, r_{k_n})$. Also, define $\pi_1^{(n)}$ (respectively, $\pi_2^{(n)}$) as the probability that a random observation in $B({\bf x},r_{k_n})$ is from the first (respectively, second) class. One can see that $N_{k_n}$ follows the negative binomial distribution \citep[see, e.g.,][]{feller2008introduction} with parameters $k_n$ and $\pi_2^{(n)}$. So, the evidence in favor of the first class is given by
\[
e_{k_n}=\sum_{m<N_{k_n}} f_{k_n}(m) +\frac{1}{2}f_{k_n}(N_{k_n}).
\]
Clearly, $e_{k_n}$ is a function of $N_{k_n}$, and it can be viewed as the conditional probability $$\psi(N_{k_n})=P(N_0 < N_{k_n} \mid N_{k_n}) + \frac{1}{2}P(N_0 = N_{k_n}\mid{N_{k_n}}),$$ where $N_0$ is an independent negative binomial random variable with parameters $k_n$ and $p_0={n_2}/{n}$.
Now, define $T_n=(N_0-N_{k_n})/k_n$. The mean and the variance of $T_n$ are
$\mu_n = \frac{n_1}{n_2} - \frac{\pi_1^{(n)}}{\pi_2^{(n)}}$ and $\sigma^2_n = \frac{1}{k_n}\left[\frac{n n_1}{n_2^2} + \frac{\pi_1^{(n)}}{\left(\pi_2^{(n)}\right)^2}\right]$, respectively (since $N_0$ and $N_{k_n}$ are independent, their variances add). Under the condition $k_n/n \rightarrow 0$, since $r_{k_n} \stackrel{P}{\rightarrow} 0$ \citep[see][]{loftsgaarden1965nonparametric}, using the continuity of $f_1$ and $f_2$,
for $j=1,2$, we have $\left|\pi_j^{(n)} - \frac{\pi_j f_j({\bf x})}{\pi_1 f_1({\bf x}) + \pi_2 f_2({\bf x})}\right| \stackrel{P}{\rightarrow} 0$ as $n\rightarrow \infty$. Again, $n_j/n \rightarrow \pi_j$ for $j=1,2$. So, $\mu_n$ converges to $\frac{\pi_1}{\pi_2}-\frac{\pi_1 f_1({\bf x})}{\pi_2f_2({\bf x})}$ and $\sigma^2_n$ converges to $0$. This implies $T_n \xrightarrow{P} \frac{\pi_1}{\pi_2}-\frac{\pi_1 f_1({\bf x})}{\pi_2f_2({\bf x})}$. Note that this limiting value is positive (respectively, negative) if $f_1({\bf x}) < f_2({\bf x})$ (respectively, $f_1({\bf x}) > f_2({\bf x})$). Therefore,
\[
P(N_0 < N_{k_n})=P(T_n<0) \to
\begin{cases}
1 & f_1({\bf x}) > f_2({\bf x})\\
0 & f_1({\bf x}) < f_2({\bf x}).
\end{cases}
\]
In both of these cases, $P(N_{k_n}=N_0) \rightarrow 0$ as $n\rightarrow \infty$. Note that $P(N_0<N_{k_n})+\frac{1}{2}P(N_0=N_{k_n})=E(\psi(N_{k_n}))$. So, $E(\psi(N_{k_n}))$ converges to $1$ (respectively, $0$) if $f_1({\bf x}) > f_2({\bf x})$ (respectively,
$f_1({\bf x}) < f_2({\bf x})$). But $\psi(N_{k_n})$ is bounded between 0 and 1. So, $e_{k_n}=\psi(N_{k_n})$ converges in probability to $1$ and $0$ in the respective cases.
From the above discussion, it is clear that if $f_1({\bf x})>f_2({\bf x})$, the strongest evidence in favor of Class-1, ${\cal E}_1 = \max_{k\le k_{\max}} e_k\ge e_{k_n}=\psi(N_{k_n})\rightarrow 1$ as $n \rightarrow \infty$. Now, we need to show that ${\cal E}_2=1-\min_{k\le k_{\max}} e_k$, the strongest evidence in favor of Class-2, remains bounded away from $1$ as $n$ diverges to infinity. If possible, suppose that this is not true. Then there exists a sequence $\{k^*_n : n \ge 1\}$ such that $e_{k_n^*} \rightarrow 0$ as $n \rightarrow \infty$. Clearly, this is not possible if $k_n^{*}$ remains bounded (in that case, $f_{k^*_n}(t)$ remains bounded away from $0$ for all $t$). On the other hand, we have proved that if $k_n^{*}\rightarrow \infty$, then $e_{k_n^*} \rightarrow 1$ (note that $k_n^* \le k_{\max}$ and hence $k_n^*/n \rightarrow 0$ as $n \rightarrow \infty$). So, ${\cal E}_2$ remains bounded away from $1$. As a result, we classify ${\bf x}$ to Class-1 with probability tending to one. Similarly, for $f_1({\bf x})<f_2({\bf x})$, we classify ${\bf x}$ to Class-2 with probability converging to $1$ as $n$ increases.
\end{proof}
\subsection{Illustrative examples}
\label{sec:binary_simulation}
We consider some simulated examples to demonstrate the utility of the proposed method. For each example, we consider training and test samples of size 1000. While the training samples have a proportion $\alpha$ ($0<\alpha<1/2$) of observations from the minority class, the test samples have equal numbers of observations from the two classes. To evaluate the performance of our method, we compute the precision and the recall of the proposed classifier on the test set. Suppose that the classifier gives the allocation matrix shown in Table~\ref{tab:allocation_matrix} for the test data. Then the precision and the recall of the classifier for Class-$i$ ($i=1,2$) are defined as ${\cal P}(i)=n_{ii}/{n_{0i}}$ and ${\cal R}(i)=n_{ii}/{n_{i0}}$, respectively. The harmonic mean of precision and recall, called the $F_1$ score, is often used as another measure of performance. The $F_1$ score for the $i$-th class ($i=1,2$) is given by ${\cal F}_1(i)=2n_{ii}/(n_{i0}+n_{0i})$. We compute these scores for each of the two classes, and the overall precision, recall and $F_1$ score of the classifier are given by ${\cal P}=\sum_{i=1}^{2}{\cal P}(i)/2$, ${\cal R}=\sum_{i=1}^{2}{\cal R}(i)/2$ and ${\cal F}_1=\sum_{i=1}^{2}{\cal F}_1(i)/2$, respectively. Note that if the test samples from the two competing classes are of the same size, the overall recall coincides with the accuracy of the classifier, $(n_{11} + n_{22})/n_{00}$.
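Given the allocation matrix of Table~\ref{tab:allocation_matrix}, these overall scores are straightforward to compute. The helper below is our own sketch, written for an arbitrary number of classes so it also covers the multi-class setting later.

```python
def macro_scores(conf):
    """conf[i][j] = number of class-(i+1) test points predicted as
    class-(j+1). Returns the macro-averaged (precision, recall, F1),
    i.e. the overall P, R and F1 defined in the text."""
    J = len(conf)
    col = [sum(conf[i][j] for i in range(J)) for j in range(J)]  # n_{0j}
    row = [sum(r) for r in conf]                                 # n_{i0}
    P  = sum(conf[i][i] / col[i] for i in range(J)) / J
    R  = sum(conf[i][i] / row[i] for i in range(J)) / J
    F1 = sum(2 * conf[i][i] / (row[i] + col[i]) for i in range(J)) / J
    return P, R, F1
```

For equal-sized test classes, the returned recall equals the plain accuracy, as noted above.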
\begin{table}[t]
\centering
\caption{Allocation matrix of a classifier\label{tab:allocation_matrix}}
\begin{tabular}{|cc|c|c|c|}
\hline
& &\multicolumn{2}{c|}{Predicted}& \\ & &Class-1 &Class-2 & Total\\ \hline
\multirow{2}{*}{Actual} & Class-1 & $n_{11}$ & $n_{12}$ &$n_{10}$\\ \cline{2-5}
& Class-2 & $n_{21}$ & $n_{22}$ & $n_{20}$\\ \hline
& Total & $n_{01}$ & $n_{02}$ & $n_{00}$ \\ \hline
\end{tabular}
\end{table}
\begin{table*}[t]
\centering
\caption{Performance (in \%) of different methods in normal location problems\label{tab:binary_normal_location}}
{\setlength{\tabcolsep}{0.065in}
\small
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
& $\alpha$ & $k$-NN & WNN & L-Smote & B-Smote & SVM-Smote & kM-Smote & Boot-Smote & Proposed \\
\hline
& {0.05} & 67.66 (.077) & 67.66 (.077) & 68.62 (.069) & 71.02 (.067) & 71.21 (.062) & 70.82 (.071) & 67.66 (.077) & \textbf{74.59} (.053) \\
${\cal P}$& {0.10} & 72.94 (.059) & 72.52 (.052) & 72.85 (.055) & 71.06 (.056) & 73.04 (.054) & 74.17 (.055) & 72.65 (.054) & \textbf{74.58 }(.049) \\
& {0.20} & 74.14 (.045) & 72.72 (.051) & 73.82 (.050) & 70.62 (.053) & 74.24 (.048) & 74.83 (.043) & 73.94 (.048) & \textbf{75.01} (.045) \\
& {0.40} & 75.19 (.046) & 75.33 (.044) & 75.01 (.046) & 74.53 (.047) & 74.96 (.046) & 75.22 (.046) & 75.01 (.045) & \textbf{75.78} (.043) \\
\hline \
& {0.05} & 57.07 (.047) & 57.07 (.047) & 66.24 (.065) & 66.64 (.065) & 65.02 (.064) & 65.39 (.068) & 57.07 (.047) & \textbf{73.76} (.057) \\
${\cal R}$& {0.10} & 59.16 (.063) & 71.56 (.053) & 72.51 (.055) & 68.89 (.057) & 70.80 (.060) & 71.26 (.098) & 72.41 (.054) & \textbf{73.77} (.051) \\
& {0.20} & 66.35 (.057) & 72.65 (.051) & 73.59 (.050) & 69.98 (.054) & 73.65 (.050) & 69.36 (.075) & 73.71 (.048) & \textbf{74.54} (.046) \\
& {0.40} & 74.30 (.048) & 75.25 (.044) & 74.77 (.046) & 74.41 (.047) & 74.86 (.046) & 74.47 (.048) & 74.78 (.046) & \textbf{75.70} (.043) \\
\hline
& {0.05} & 49.44 (.075) & 49.44 (.075) & 65.11 (.072) & 64.79 (.078) & 62.23 (.085) & 62.94 (.087) & 49.44 (.075) & \textbf{73.53} (.061) \\
${\cal F}_1$& {0.10} & 51.85 (.105) & 71.25 (.055) & 72.41 (.056) & 68.06 (.063) & 70.05 (.069) & 70.26 (.132) & 72.34 (.055) & \textbf{73.55} (.053) \\
& {0.20} & 63.36 (.077) & 72.63 (.051) & 73.52 (.051) & 69.74 (.055) & 73.48 (.052) & 67.51 (.099) & 73.65 (.049) & \textbf{74.42} (.048) \\
& {0.40} & 74.07 (.050) & 75.23 (.044) & 74.71 (.046) & 74.38 (.047) & 74.84 (.046) & 74.28 (.050) & 74.72 (.046) & \textbf{75.68} (.043) \\
\hline
\end{tabular}%
}
\end{table*}
\begin{table*}[t]
\centering
\caption{Performance (in \%) of different methods in normal scale problems\label{tab:binary_normal_scale}}
{\setlength{\tabcolsep}{0.065in}
\small
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
& $\alpha$ & $k$-NN & WNN & L-Smote & B-Smote & SVM-Smote & kM-Smote & Boot-Smote & Proposed \\
\hline
\multicolumn{10}{|c|}{Majority class: $N({\bf 0},{\bf I})$, Minority class: $N({\bf 0}, 2{\bf I})$} \\ \hline
& {0.1} & 56.69 (.085) & 58.05 (.069) & 55.88 (.067) & 57.45 (.074) & 59.73 (.088) & 58.79 (.072) & 56.69 (.085) & \textbf{61.96} (.074) \\
${\cal P}$& {0.2} & 56.19 (.064) & 57.26 (.055) & 58.87 (.062) & 58.86 (.062) & 60.88 (.076) & 60.69 (.070) & 59.13 (.063) & \textbf{62.37} (.065) \\
& {0.4} & 59.55 (.058) & 60.98 (.057) & 61.13 (.059) & 60.23 (.057) & 60.72 (.058) & 60.54 (.056) & 60.33 (.057) & \textbf{62.81} (.055) \\
\hline
& {0.1} & 52.84 (.038) & 56.54 (.058) & 54.79 (.056) & 55.54 (.057) & 56.06 (.063) & 55.63 (.053) & 52.84 (.038) & \textbf{58.73} (.057) \\
${\cal R}$& {0.2} & 54.27 (.045) & 57.24 (.055) & 58.17 (.057) & 58.37 (.058) & 57.92 (.058) & 58.38 (.061) & 58.34 (.057) & \textbf{59.89} (.052) \\
& {0.4} & 58.17 (.050) & 60.32 (.053) & 59.80 (.052) & 59.41 (.053) & 59.93 (.053) & 59.46 (.052) & 59.72 (.053) & \textbf{61.14} (.048) \\
\hline
& {0.1} & 44.84 (.056) & 54.39 (.066) & 52.56 (.064) & 52.47 (.067) & 51.41 (.107) & 51.18 (.082) & 44.84 (.056) & \textbf{55.74} (.077) \\
${\cal F}_1$& {0.2} & 50.42 (.053) & 57.21 (.055) & 57.34 (.060) & {57.68 (.061)} & 54.88 (.090) & 56.02 (.098) & 57.43 (.060) & {\bf 57.78} (.063) \\
& {0.4} & 56.59 (.056) & 59.71 (.056) & 58.57 (.057) & 58.57 (.057) & 59.18 (.056) & 58.39 (.058) & 59.12 (.055) & \textbf{59.84} (.055) \\
\hline \multicolumn{10}{|c|}{Majority class: $N({\bf 0},2{\bf I})$, Minority class: $N({\bf 0}, {\bf I})$} \\ \hline
& {0.1} & 53.86 (.078) & 53.86 (.078) & 54.69 (.060) & 55.32 (.063) & 56.37 (.067) & 55.18 (.082) & 53.86 (.078) & \textbf{57.34} (.060) \\
${\cal P}$ & {0.2} & 53.95 (.061) & 57.37 (.057) & 54.87 (.059) & 55.23 (.058) & 57.06 (.066) & 56.57 (.073) & \textbf{57.65} (.058) & 57.43 (.058) \\
& {0.4} & 58.14 (.056) & 60.30 (.053) & 59.01 (.054) & 59.05 (.056) & 58.97 (.054) & 59.63 (.058) & 58.54 (.055) & \textbf{60.64} (.052) \\
\hline
& {0.1} & 51.54 (.032) & 51.54 (.032) & 53.50 (.045) & 53.61 (.044) & 54.42 (.053) & 52.80 (.047) & 51.54 (.032) & \textbf{56.67} (.056) \\
${\cal R}$&{0.2} & 52.69 (.042) & 57.30 (.056) & 53.95 (.048) & 54.22 (.047) & 55.47 (.062) & 55.58 (.066) & \textbf{57.56} (.058) & 56.94 (.054) \\
&{0.4} & 57.20 (.053) & 60.07 (.052) & 58.96 (.054) & 59.01 (.055) & 58.90 (.053) & 59.47 (.058) & 58.45 (.055) & \textbf{60.45} (.051) \\
\hline
& {0.1} & 42.96 (.045) & 42.96 (.045) & 50.36 (.052) & 49.52 (.055) & 50.50 (.100) & 46.51 (.075) & 42.96 (.045) & \textbf{55.62} (.064) \\
${\cal F}_1$& {0.2} & 48.54 (.051) & 57.20 (.056) & 51.66 (.054) & 51.87 (.054) & 52.63 (.106) & 53.71 (.086) & \textbf{57.43} (.059) & 56.18 (.062) \\
& {0.4} & 55.89 (.063) & 59.86 (.052) & 58.90 (.055) & 58.97 (.056) & 58.82 (.053) & 59.30 (.060) & 58.34 (.056) & \textbf{60.28} (.051) \\
\hline
\end{tabular}%
}
\end{table*}%
For each example, we consider $1000$ simulation runs, and the average values of ${\cal P}$, ${\cal R}$ and ${\cal F}_1$ over these $1000$ trials are computed for the proposed classifier. These average values are reported in Tables~\ref{tab:binary_normal_location} and \ref{tab:binary_normal_scale} along with their corresponding standard errors. For a proper evaluation of the proposed method, we compare its performance with the usual $k$-nearest neighbor classifier ($k$-NN), a weighted $k$-nearest neighbor classifier (WNN) and some of the SMOTE algorithms available in the literature. WNN puts weight $1/n_i$ on each observation from the $i$-th class ($i=1,2$). Among the SMOTE algorithms, we consider Linear SMOTE \citep{chawla2002smote}, Borderline SMOTE \citep{han2005borderline}, SVM SMOTE \citep{Nguyen2011svmsmote}, $k$-Means SMOTE \citep{last2017oversampling} and SMOTE based on smooth bootstrap \citep{menardi2014training}. We refer to them as L-Smote, B-Smote, SVM-Smote, kM-Smote and Boot-Smote, respectively. For $k$-NN and WNN, the number of neighbors is chosen using cross-validation. A pilot study is carried out to choose the tuning parameters for the SMOTE algorithms (the configuration with the best $F_1$ score is chosen). All algorithms are implemented in \texttt{Python} using \texttt{scikit-learn} \citep{scikit-learn} and \texttt{Jupyter} notebooks. The codes for the SMOTE methods are available in the package \texttt{imbalanced-learn} \citep{imblearn}. For our method, we tried several choices of $k_{\max}$ ranging from $15$ to $45$, but the results were very similar. Throughout this section, we report the results for $k_{\max}=45$.
We begin with an example where two bivariate normal distributions $N({\bf 0}, {\bf I})$ and $N((1,1)^{\top},{\bf I})$ differ only in their locations. We consider the second one as the minority class, and a proportion $\alpha$ of the training sample observations are generated from it. For different choices of $\alpha$ ($0.05$, $0.1$, $0.2$ and $0.4$), the results are reported in Table~\ref{tab:binary_normal_location}. We can see that the proposed method outperformed all other competitors in all cases. The differences with the other methods were higher for small values of $\alpha$. As $\alpha$ increased (i.e., the training sample became more balanced), all methods tended to perform better and the differences became smaller. Note that in this example, the Bayes (oracle) classifier has ${\cal P}={\cal R}={\cal F}_1= 76.025\%$. The performance of our method was close to that even for small values of $\alpha$. For small $\alpha$, as expected, $k$-NN had low precision for the minority class and low recall for the majority class. As a result, it had a low $F_1$ score. WNN and some SMOTE algorithms, particularly Boot-Smote, also had similar problems.
\begin{table*}[t]
\centering
\caption{Performance (in \%) of different classifiers on benchmark data sets involving two classes\label{tab:binary_benchmark}}
{\setlength{\tabcolsep}{0.05in}
\scriptsize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}\hline
& \multicolumn{2}{c|}{Sample size} & & $k$-NN & WNN & L-Smote & B-Smote & SVM-Smote & kM-Smote & Boot-Smote & Proposed\\ \cline{2-3}
& Train & Test & & & & & & & & &\\ \hline
Asteroids$^\ast$&3743, &189, &${\cal P}$ &79.18 (.059) &85.06 (.057)&80.78 (.061)&80.80 (.061)&80.46 (.059)&80.94 (.072)&79.18 (.059)&{\bf 86.58} (.054)\\
$p=20$&566 &189 &${\cal R}$ &75.76 (.064)&84.91 (.057)&79.34 (.065)&79.40 (.064)&78.72 (.063) &79.89 (.090)&75.76 (.064)&{\bf 86.32} (.055)\\
& & &${\cal F}_1$ &75.01 (.070)&84.90 (.057)&79.09 (.067)&79.15 (.066)&78.40 (.066) & 79.69 (.096)&75.01 (.070)&{\bf 86.30} (.055)\\ \hline
Astroseis- &641 &72 & ${\cal P}$ & 96.02 (.043) & 95.62 (.045) & 96.11 (.045) & 95.91 (.048) & \textbf{96.12} (.046) & 96.03 (.043) & 96.12 (.045) & 95.71 (.049) \\
mology & 216&72 & ${\cal R}$ & 95.83 (.046) & 95.47 (.047) & 96.02 (.046) & 95.86 (.048) & \textbf{96.07} (.046) & 95.83 (.046) & 96.05 (.046) & 95.64 (.050) \\
$p=3$& & & ${\cal F}_1$ & 95.82 (.046) & 95.47 (.047) & 96.02 (.046) & 95.86 (.048) & \textbf{96.07} (.046) & 95.83 (.046) & 96.04 (.046) & 95.64 (.050) \\ \hline
Breast &304 &53 &${\cal P}$ & 95.71 (.052)& {\bf 96.31} (.052) &96.12 (.053) &95.36 (.058) &95.55 (.056) &95.78 (.054) &95.86 (.055) &94.64 (.063) \\
Cancer &159 &53 &${\cal R}$ &95.40 (.058)& {\bf 96.20} (.054) &96.01 (.054) &95.29 (.058) &95.45 (.057) &95.56 (.058) &95.66 (.059) &94.27 (.070)\\
$p=30$& & &${\cal F}_1$ & 95.39 (.058)& {\bf 96.19} (.054) & 96.01 (.054) & 95.28 (.058) &95.45 (.057) &95.56 (.058)& 95.65 (.059) &94.25 (.070)\\ \hline
Fetal & 1538,& 117,&${\cal P}$ & 88.82 (.050) & 88.82 (.050) & 89.65 (.051) & \textbf{89.80} (.052) & 89.64 (.052) & 88.97 (.050) & 88.82 (.050) & 89.54 (.060) \\
Health &354 &117 & ${\cal R}$ & 87.53 (.058) & 87.53 (.058) & 88.83 (.057) & 89.08 (.057) & 88.85 (.057) & 87.74 (.058) & 87.53 (.058) & \textbf{89.32} (.062) \\
$p=21$& & & ${\cal F}_1$ & 87.42 (.059) & 87.42 (.059) & 88.76 (.058) & 89.03 (.058) & 88.79 (.058) & 87.64 (.059) & 87.42 (.059) & \textbf{89.30} (.062)\\ \hline
Haberman& 204,&20, & ${\cal P}$ &55.97 (.265)& 62.97 (.234)& 61.69 (.228) &61.22 (.230)& 64.97 (.242)& {\bf 66.48} (.250) &62.80 (.235)& 65.59 (.238)\\
$p=3$& 61& 20&${\cal R}$ & 54.22 (.186)& 61.18 (.203)& 60.47 (.204)& 60.33 (.212)& 61.30 (.184)& 60.93 (.179) &61.20 (.206) &{\bf 61.88} (.188)\\
& & & ${\cal F}_1$ &50.73 (.210)& 59.76 (.219)& 59.43 (.214)& 59.59 (.217)& 58.79 (.208)& 57.23 (.221)& {\bf 59.92} (.221)& 59.40 (.215)\\ \hline
Loan$^{\ast}$&4400, &120, & ${\cal P}$ &86.89 (.041) & 90.08 (.043)& 89.87 (.045)&88.23 (.042)& 89.21 (.048)& 87.20 (.041)& 86.89 (.041)& {\bf 90.10} (.047)\\
$p=11$ &360 &120 &${\cal R}$ & 84.42 (.052) & {\bf 89.86} (.044)& 89.41 (.048)& 86.81 (.049)& 88.61 (.052)& 84.97 (.051)& 84.42 (.052)& 89.80 (.048)\\
& & & ${\cal F}_1$ & 84.14 (.055) & {\bf 89.84} (.045)& 89.38 (.048)& 86.68 (.050)& 88.56 (.053)& 84.73 (.054)& 84.14 (.055)& 89.78 (.049)\\ \hline
Machine & 9577, & 84,& ${\cal P}$ & 80.36 (.057) & 80.36 (.057) & 82.40 (.069) & 82.95 (.063) & \textbf{83.98} (.062) & 81.29 (.059) & 82.34 (.062) & 82.31 (.093)\\
Failure & 255 & 84 & ${\cal R}$ & 70.52 (.078) & 70.52 (.078) & 77.64 (.083) & 77.12 (.083) & 79.30 (.080) & 73.26 (.081) & 76.05 (.080) & \textbf{81.73} (.094) \\
$p=8$ & & & ${\cal F}_1$ & 67.86 (.099) & 67.86 (.099) & 76.76 (.092) & 76.02 (.095) & 78.53 (.089) & 71.37 (.098) & 74.79 (.092) & \textbf{81.64} (.095)
\\ \hline
Paris &8419, &316, & ${\cal P}$ & 96.72 (.018) & 95.89 (.020) & 97.75 (.018) & 96.86 (.020) & 96.39 (.020) & 96.89 (.020) & 95.89 (.020) & \textbf{98.07} (.016) \\
Housing &949 &316 & ${\cal R}$ & 96.48 (.021) & 95.62 (.023) & 97.72 (.018) & 96.75 (.021) & 96.21 (.022) & 96.79 (.022) & 95.62 (.023) & \textbf{97.99} (.018)\\
$p=17$ & & & ${\cal F}_1$ & 96.48 (.021) & 95.62 (.023) & 97.72 (.018) & 96.74 (.021) & 96.20 (.022) & 96.78 (.022) & 95.62 (.023) & \textbf{97.99} (.018) \\ \hline
Pima& 433,& 67, &${\cal P}$ & 72.62 (.116) & 72.97 (.111)& 72.91 (.113) &72.91 (.113) &71.66 (.115) &71.7 (.120) &72.72 (.110) &{\bf 73.05} (.111)\\
Indian& 201,& 67 &${\cal R}$ & 67.55 (.104)& {\bf 72.75} (.111) &72.72 (.113) &72.65 (.113) &71.24 (.113) &70.27 (.116) &72.29 (.111) &72.47 (.109)\\
$p=8$& & &${\cal F}_1$ & 65.58 (.121)& {\bf 72.68} (.111) &72.66 (.114) &72.57 (.114) &71.10 (.114) &69.76 (.122) &72.15 (.111) &72.30 (.110)\\ \hline
Rice & 7935,& 2050,& ${\cal P}$ & 98.93 (.004) & 98.94 (.004) & 98.93 (.004) & 98.38 (.006) & 98.51 (.005) & 98.91 (.004) & 98.93 (.004) & \textbf{98.95} (.004) \\
$p=10$ & 6150& 2050& ${\cal R}$ & 98.93 (.004) & 98.94 (.004) & 98.93 (.004) & 98.37 (.006) & 98.51 (.005) & 98.91 (.004) & 98.92 (.004) & \textbf{98.95} (.004) \\
& & & ${\cal F}_1$ & 98.93 (.004) & 98.94 (.004) & 98.93 (.004) & 98.37 (.006) & 98.51 (.005) & 98.91 (.004) & 98.92 (.004) & \textbf{98.95} (.004) \\
\hline
Stroke& 4799,& 62,& ${\cal P}$ & 61.92 (.259) & 63.50 (.154)& 60.95 (.172)& 64.67 (.149)& 64.30 (.163)& 67.30 (.143)& 62.51 (.170)& {\bf 73.87} (.115)\\
$p=8$&187 &62 &${\cal R}$ & 53.00 (.065) & 59.00 (.104)& 56.16 (.099)& 59.36 (.100)& 57.78 (.094)& 62.24 (.112)& 56.88 (.095)& {\bf 73.70} (.114)\\
& & & ${\cal F}_1$ & 42.23 (.097) &55.28 (.122)& 50.75 (.126)& 55.30 (.124)& 52.31 (.127)& 59.23 (.138)& 51.42 (.119)& {\bf 73.66} (.115)\\ \hline
Water&1679, &319, & ${\cal P}$ &57.78 (.059) & 61.08 (.055)& 59.90 (.059)& 59.69 (.056)& 60.40 (.059)&60.30 (.068)& 59.87 (.057)& {\bf 62.72} (.058)\\
$p=9$& 959& 319& ${\cal R}$ & 57.18 (.054) & 60.34 (.051)& 59.23 (.055)& 59.31 (.054)& 59.66 (.054)& 58.67 (.056)& 59.69 (.056)& {\bf 61.39} (.053)\\
& & & ${\cal F}_1$ & 56.33 (.056) & 59.67 (.053)& 58.53 (.056)& 58.91 (.054)& 58.94 (.056)& 56.99 (.069)& 59.50 (.056)& {\bf 60.36} (.057)\\ \hline
\end{tabular}
\vspace{0.025in}
\footnotesize{~$^\ast$ Some redundant features were removed before our analysis.}}
\end{table*}
\raggedbottom
Next, we consider a scale problem involving two bivariate normal distributions $N({\bf 0}, {\bf I})$ and $N({\bf 0},2{\bf I})$. Unlike the previous example, here the results may vary depending on the choice of the minority class. We consider both cases, and in each case, results are reported in Table~\ref{tab:binary_normal_scale} for $\alpha=0.1,0.2$ and $0.4$. This table clearly shows that, except for one instance (when $N({\bf 0}, {\bf I})$ is the minority class and $\alpha=0.2$), the proposed method had the best overall performance.
In this example, the Bayes classifier has $F_1$ score $62.5\%$ (${\cal P}=63.3\%$ and ${\cal R}= 61.9\%$). In both problems, for $\alpha=0.4$, the proposed method had $F_1$ score close to 60\%.
We carried out our experiments for other distributions as well, but the results were more or less similar. Barring a few cases, the proposed method outperformed all its competitors considered here. Therefore, to save space, we do not report them.
In the case of a balanced training sample, WNN and all SMOTE algorithms coincide with the usual nearest neighbor classifier $k$-NN, but the proposed method leads to a slightly different classifier. In the normal location problem, while $k$-NN had average precision, recall and $F_1$ score of $75.11\%$, $75.02\%$ and $75\%$, respectively, the corresponding figures for the proposed method were $75.28\%$, $75.19\%$ and $75.16\%$. These two methods had comparable performance in the normal scale problem as well. In that example, the average precision, recall and $F_1$ score for $k$-NN were $60.49\%$, $60.12\%$ and $59.78\%$, respectively. The proposed method performed slightly better, with respective figures of $62.67\%$, $61.18\%$ and $60.02\%$.
\subsection{Analysis of Benchmark Datasets}
\label{sec:binary_realdata}
We analyze 12 benchmark data sets for further evaluation of the proposed method. Breast Cancer, Haberman, Predictive Maintenance and Pima Indian data sets are taken from the UCI Machine Learning Repository (\url{https://archive.ics.uci.edu/ml/datasets.php}).
The rest of the data sets are available at Kaggle (\url{https://www.kaggle.com/datasets}). For the Asteroids and Loan data sets, we remove all redundant variables before carrying out our experiment. The Fetal Health data set has observations from three classes: `Normal', `Suspect' and `Pathological'. However, the distinction between the last two classes is not very clear. So, we consider it as a two-class problem, where `Suspect' and `Pathological' together form one class and `Normal' forms the other. In most of these data sets, the measurement variables are not of comparable units and scales. So, we standardize each of the measurement variables and work with the standardized data sets. We divide each data set randomly into two groups to form the training and test sets. We first divide the minority class observations into two groups containing nearly 75\% and 25\% of the observations. An equal number of observations from the majority class are added to the smaller group to form the test set, while the rest of the majority class observations are added to the larger group to form the training set. For each data set, this random partitioning is done 1000 times, and the average overall performance of the different classifiers (computed over these 1000 trials) is reported in Table~\ref{tab:binary_benchmark}.
Table~\ref{tab:binary_benchmark} clearly shows that the performance of the proposed method was highly satisfactory. In 5 out of 12 data sets (Asteroids, Paris Housing, Rice, Stroke and Water), our method had the best precision, recall and $F_1$ score. It had the best recall and $F_1$ score on the Fetal Health and Machine Failure (Predictive Maintenance) data sets as well. On the Haberman data, it had the best recall, while its precision was the second best. On the Loan data, it had the best precision. In this case, its recall and $F_1$ scores were the second best (after WNN) and very close to the best ones. It had satisfactory performance on the Pima Indian and Astroseismology data sets as well. Only in the case of the Breast Cancer data did it have slightly inferior performance.
\begin{figure}[htp]
\centering
\includegraphics[height=2.250in, width=0.5\textwidth]{Box1A.png}
\caption{Precision efficiency of 1. $k$-NN, 2. WNN, 3. L-Smote, 4. B-Smote, 5. SVM-Smote, 6. $k$M-Smote, 7. Boot-Smote \& 8. the proposed classifier on benchmark $2$-class data sets.\label{fig:binary_precision_efficiency}}
\end{figure}
To compare the overall performance of the different methods in a comprehensive way, following the ideas of \citet{chaudhuri2008classification,ghosh2012hybrid}, we introduce the notion of efficiency of a classifier. If ${\cal P}_1,{\cal P}_2,\ldots,{\cal P}_T$ are the precisions of $T$ classifiers on a data set, the precision-efficiency of the $i$-th ($i=1,2,\ldots,T$) classifier is defined as ${\cal P}_i/\max_{1\le j\le T} {\cal P}_j$. So, for a data set, the best classifier has efficiency $1$, and a small value indicates a lack of efficiency of the classifier. For each data set, we compute these ratios for all the classifiers, and they are graphically represented by boxplots in Figure~\ref{fig:binary_precision_efficiency}. This figure clearly shows that the overall performance of the proposed method was much better than that of its competitors.
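The efficiency computation itself is a one-liner; for transparency, a sketch (the function name is ours):

```python
def precision_efficiency(precisions):
    """Efficiency of each of T classifiers on one data set:
    P_i divided by max_j P_j, so the best classifier scores 1.0."""
    best = max(precisions)
    return [p / best for p in precisions]
```

One such list per data set then feeds the boxplots of Figure~\ref{fig:binary_precision_efficiency}.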
Similarly, one can compute the recall-efficiency and the $F_1$-efficiency of different classifiers and construct the corresponding boxplots. But those boxplots were similar to the ones given in Figure~\ref{fig:binary_precision_efficiency}. So, we do not report them here.
\section{Classification involving multiple classes}
\label{sec:multi-class}
The proposed method can be generalized to $J$-class problems with $J>2$. One simple way of generalization is to adopt the one-vs-one approach, where we perform $\binom{J}{2}$ binary classifications taking one pair of classes at a time and then use majority voting. In some rare cases, this leads to ties, where more than one class has the maximum number of votes. One can break these ties by considering a classification problem involving those classes only, and if the problem is not resolved even after that, the class having the maximum evidence in terms of the $p$-value (as described in the case of binary classification) can be chosen. However, to reduce the computing cost, we can also adopt a slightly different strategy. First, we arrange the classes in non-increasing order of sample sizes (ties can be resolved arbitrarily). Next, we perform $J-1$ binary classifications taking the $J$-th class as the minority class and one of the rest as the majority class. For an observation ${\bf x}$, let $S_{\bf x}(J)$ be the collection of all classes winning over the minority class and $J_0=|S_{\bf x}(J)|$ be the cardinality of $S_{\bf x}(J)$. If this set is empty (i.e., $J_0=0$), we classify ${\bf x}$ to Class-$J$. If $J_0=1$, we assign ${\bf x}$ to the single member of $S_{\bf x}(J)$. Otherwise, we repeat the procedure for a $J_0$-class problem considering only the classes in $S_{\bf x}(J)$. In our numerical study, there was no visible difference between the performance of these two methods, and here we report the results for the second one (referred to as the OvO$+$ method). A consistency result similar to Theorem~\ref{thm:binary_consistency} can be proved for this method, which is stated in Theorem~\ref{thm:consistency_multiclass}.
Instead of one-vs-one, one can also use the one-vs-rest method, where each time we consider a binary classification problem between one class and a combined class containing observations from all the remaining classes. After $J$ binary classifications, majority voting is used to arrive at a final decision. Here also, we may have ties in some cases, and the method based on the maximum $p$-value type evidence can be used to resolve them. Another option is to consider a classification problem among the classes having the maximum number of votes and repeat the procedure until we get a winner. It can be shown that as the sample size increases, the probability of getting a unique winner by this method converges to $1$, and the resulting classifier has large sample consistency as well. This result is stated in Theorem~\ref{thm:consistency_multiclass}. In practice, however, there may be some rare cases where the ties cannot be resolved in this way, and the method based on maximum $p$-value type evidence can be used there. In our numerical studies, both of these methods led to very similar results on all data sets. So, here we report the results for the second method (referred to as the OvR$+$ method) only.
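The OvR$+$ scheme admits a similar sketch. Here `beats_rest(x, c, rest)` is a hypothetical stand-in that returns `True` when the binary classifier prefers class `c` over the combined class formed from the labels in `rest`, and `fallback` models the maximum $p$-value-type evidence used for the rare unresolvable ties.

```python
def ovr_plus(x, classes, beats_rest, fallback=None):
    """OvR+ scheme (sketch).  `beats_rest(x, c, rest)` stands in for
    the binary classifier between class c and the combined class of
    the labels in `rest`; it returns True when c wins.  `fallback`
    resolves the rare ties that voting cannot break."""
    if len(classes) == 1:
        return classes[0]
    votes = {c: int(beats_rest(x, c, [d for d in classes if d != c]))
             for c in classes}
    top = max(votes.values())
    tied = [c for c in classes if votes[c] == top]
    if len(tied) == 1:
        return tied[0]
    if len(tied) == len(classes):
        # No strict subset to recurse on: an unresolvable tie.
        return fallback(x, tied) if fallback else tied[0]
    # Repeat among the classes with the maximum number of votes.
    return ovr_plus(x, tied, beats_rest, fallback)
```

The two guards above make each recursive call strictly smaller, so the procedure always terminates.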
\begin{table*}[t]
\centering
\caption{Performance of different classifiers on benchmark data sets involving more than two classes\label{tab:benchmark_multiclass}}
{\setlength{\tabcolsep}{0.025in}
\scriptsize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline & \multicolumn{2}{c|}{Sample size} & & \multicolumn{1}{c|}{{$k$-NN}} & \multicolumn{1}{c|}{{WNN}} & \multicolumn{1}{c|}{{L-Smote}} & \multicolumn{1}{c|}{{B-Smote}} & \multicolumn{1}{c|}{{SVM-Smote}} & \multicolumn{1}{c|}{{Boot-Smote}} & \multicolumn{2}{c|}{{Proposed}} \\
\cline{2-3} \cline{11-12}
& Train & Test & & & & & & & &OvR$+$ &OvO$+$ \\ \hline
Vehicle &163,168 &49,49 & ${\cal P}$ & \textbf{71.12} (.120) & 70.90 (.125) & 71.04 (.119) & 71.05 (.124) & 71.02 (.119) & 70.99 (.121) & 70.85 (.125) & 70.92 (.140) \\
$p=18$ & 169,150 &49,49 & ${\cal R}$ & 71.19 (.114) & \textbf{71.76} (.112) & 71.22 (.113) & 71.35 (.119) & 71.27 (.114) & 71.15 (.116) & 71.67 (.113) & 70.36 (.110) \\
$J=4$ & & & ${\cal F}_1$ & \textbf{71.03} (.116) & 70.94 (.117) & 71.01 (.115) & 71.08 (.121) & 71.02 (.116) & 70.96 (.117) & 70.96 (.118) & 68.53 (.120)\\
\hline
Dry Bean& {1192,392,1500} &130,130 & ${\cal P}$ & 93.27 (.035) & 93.42 (.036) & 92.82 (.036) & 91.93 (.039) & 92.29 (.037) & 92.47 (.039) & \textbf{93.67} (.033) & 93.37 (.033) \\
$p=16$&3416,1798, &130,130, & ${\cal R}$ & 93.14 (.036) & 93.33 (.036) & 92.75 (.036) & 91.86 (.040) & 92.23 (.037) & 92.40 (.039) & \textbf{93.52} (.035) & 93.07 (.035) \\
$J=7$ & 1897,2506 &{130,130,130} &${\cal F}_1$ & 93.16 (.036) & 93.35 (.036) & 92.75 (.036) & 91.86 (.040) & 92.23 (.037) & 92.41 (.039) & \textbf{93.55} (.034) & 93.11 (.035) \\
\hline
Wine Recog.& 47,& 12, &${\cal P}$ & 96.37 (.127) & 97.06 (.114) & 96.56 (.116) & 96.06 (.133) & 96.62 (.116) & 96.76 (.113) & 96.99 (.114) & \textbf{97.98} (.096) \\
$p=13$& 59,& 12, & ${\cal R}$ & 95.99 (.141) & 96.73 (.129) & 96.12 (.135) & 95.64 (.149) & 96.19 (.135) & 96.35 (.131) & 96.61 (.132) & \textbf{97.79} (.105) \\
$J=3$ & 12 & 12 & ${\cal F}_1$ & 95.94 (.145) & 96.68 (.133) & 96.03 (.141) & 95.57 (.153) & 96.11 (.141) & 96.28 (.137) & 96.55 (.137) & \textbf{97.77} (.106) \\
\hline
SVM Guide 2& 208,& 13, &${\cal P}$ & 69.58 (.303) & 69.58 (.303) & 69.46 (.326) & 69.50 (.311) & 69.88 (.314) & 69.25 (.330) & 74.47 (.244) & \textbf{76.39} (.249) \\
$p=20$ &104, & 13, & ${\cal R}$ & 65.03 (.302) & 65.03 (.302) & 67.78 (.314) & 67.80 (.303) & 67.27 (.307) & 66.52 (.310) & 68.09 (.259) & \textbf{72.59} (.257) \\
$J=3$ & 40 & 13 & ${\cal F}_1$ & 64.02 (.324) & 64.02 (.324) & 67.43 (.322) & 67.46 (.310) & 66.62 (.322) & 65.77 (.329) & 66.82 (.290) & \textbf{71.66} (.275) \\
\hline
Satimage &969,376, &103,103, & ${\cal P}$ & 88.67 (.052) & 88.67 (.052) & \textbf{89.08} (.052) & 88.67 (.054) & 88.72 (.056) & 88.67 (.052) & 87.03 (.056) & 86.40 (.053) \\
$p=36$ &858,312, & 103,103, & ${\cal R}$ & 88.44 (.053) & 88.44 (.053) & \textbf{88.94} (.053) & 88.57 (.055) & 88.64 (.056) & 88.44 (.053) & 86.86 (.057) & 82.86 (.058) \\
$J=6$ &367,935 &103,103 & ${\cal F}_1$ & 88.33 (.054) & 88.33 (.054) & \textbf{88.96} (.053) & 88.57 (.054) & 88.62 (.056) & 88.33 (.054) & 86.83 (.057) & 82.36 (.062) \\
\hline
SVM Guide 4& 66,96, &20,20, &${\cal P}$ & 67.45 (.179) & 67.87 (.177) & 67.56 (.178) & 67.64 (.178) & 67.46 (.175) & 67.45 (.179) & 68.52 (.168) & \textbf{68.62} (.179)\\
$p=10$ & 99,79, &20,20, & ${\cal R}$ & 66.23 (.177) & \textbf{66.65} (.175) & 66.48 (.177) & 66.59 (.178) & 66.37 (.176) & 66.23 (.177) & 66.20 (.169) & 62.91 (.172) \\
$J=6$& 90,62 &20,20 & ${\cal F}_1$ & 65.89 (.180) & \textbf{66.54} (.175) & 66.27 (.178) & 66.38 (.179) & 66.13 (.177) & 65.89 (.180) & 65.87 (.171) & 62.25 (.182) \\
\hline
E-Coli& 138,& 5,5, &${\cal P}$& 80.40 (.352) & 82.14 (.322) & 80.73 (.340) & 77.20 (.366) & 79.69 (.362) & 77.74 (.359) & \textbf{86.10} (.296) & 84.25 (.301) \\
$p=7$ &72,30,&5,5, & ${\cal R}$ & 77.96 (.307) & 80.61 (.308) & 78.99 (.315) & 75.46 (.330) & 77.56 (.322) & 75.38 (.337) & \textbf{84.61} (.298) & 81.77 (.315) \\
$J=5$&15,47 &5&${\cal F}_1$ & 76.79 (.336) & 79.99 (.324) & 78.30 (.335) & 74.51 (.347) & 76.69 (.345) & 74.46 (.356) & \textbf{84.20} (.311) & 81.17 (.334) \\
\hline
Shuttle&10022, & 187, &${\cal P}$ & 99.76 (.010) &99.81 (.008) &99.81 (.009) &99.82 (.008) &99.83 (.008) &99.81 (.008)& {\bf 99.93} (.005) &99.90 (.005)\\
$p=9$&1851, & 187, &${\cal R}$ & 99.76 (.010) &99.81 (.008) &99.81 (.009) &99.82 (.008) &99.83 (.008) &99.81 (.008)& {\bf 99.93} (.005) &99.90 (.005)\\
$J=3$ & 561 &187 & ${\cal F}_1$ & 99.76 (.010) &99.81 (.008) &99.81 (.009) &99.82 (.008) &99.83 (.008) &99.81 (.008)& {\bf 99.93} (.005) &99.90 (.005)\\
\hline
\end{tabular}%
}
\end{table*}%
\begin{figure*}[ht]
\vspace{-0.1in}
\centering
\includegraphics[height=2.20in, width=\textwidth]{Box2.png}
\caption{Efficiency of different classifiers (1. $k$-NN, 2. WNN, 3. L-SMOTE, 4. B-SMOTE, 5. SVM-SMOTE, 6. Boot-SMOTE, 7. Proposed OvR$+$ and 8. Proposed OvO$+$) on multi-class benchmark data sets.\label{fig:efficiency_multiclass}}
\end{figure*}
\begin{theorem}\label{thm:consistency_multiclass}
Suppose that the $J$ competing classes have continuous probability density functions $f_1,\ldots,f_J$, respectively. Assume that for each $i=1,\ldots,J$, $n_i/n \to \pi_i \in (0,1)$ as $n \to \infty$ ($\sum_{i=1}^{J} \pi_i=1$). Also assume that for each binary classification (both in one-vs-one and one-vs-rest methods), $k_{\max}$, the maximum number of neighbors from the minority class diverges to infinity in such a way that $k_{\max}/n \rightarrow 0$ as $n \to \infty$. Then the proposed methods OvO$+$ and OvR$+$ classify an observation ${\bf x}$ to the class $i_0=\arg\max_{1 \le i\le J} f_i({\bf x})$ with probability tending to $1$ as $n$ tends to infinity.
\end{theorem}
\begin{proof} ($i$) \underline{OvO$+$ method}: At the first step, we consider $J-1$ binary classifications, taking the $J$-th class and one of the other classes at a time. Let $S=\{i: f_i({\bf x})>f_J({\bf x})\}$. Note that for each of these binary classification problems, the conditions of Theorem~\ref{thm:binary_consistency} are satisfied. Since $J$ is finite, following the proof of Theorem~\ref{thm:binary_consistency}, one can show that $P(S_{\bf x}(J)=S) \rightarrow 1$ as $n \rightarrow \infty$. So, for $J_0=|S_{\bf x}(J)|=0$ or $1$, the proof follows immediately. For $J_0>1$, we have $P(i_0 \in S_{\bf x}(J)) \rightarrow 1$ and the result is obtained by repeating the same argument for the classification problem involving $J_0$ ($J_0 \le J-1$) classes in ${S_{\bf x}(J)}$.
\vspace{0.05in}
\noindent
($ii$) \underline{OvR$+$ method}: Consider the classification problem between Class-$i$ ($i=1,\ldots,J$) and the combined class containing the rest. Note that this combined class, being a mixture of $J-1$ classes, has density $f_{-i}({\bf x})=\sum_{j\neq i}\pi_jf_j({\bf x})/\sum_{j \neq i}\pi_j$. Now, from Theorem~\ref{thm:binary_consistency}, it is easy to see that for an observation ${\bf x}$, our binary classifier will prefer Class-$i$ with probability tending to $1$ if and only if $f_i({\bf x})>f_{-i}({\bf x}) \Leftrightarrow f_i({\bf x})>\sum_{j=1}^{J}\pi_jf_j({\bf x})=f_0({\bf x})$, say. Now, define $S^{\ast}=\{j: f_j({\bf x})>f_0({\bf x})\}$. So, if $i \in S^{\ast}$ (respectively, $i \notin S^{\ast}$), Class-$i$ wins over (respectively, loses to) the rest with probability tending to $1$ as $n$ tends to infinity. From the definition of $i_0$, it is clear that $i_0 \in S^{\ast}$. Therefore, if $J_*=|S^{\ast}|=1$, the result follows immediately. If $J_*>1$, we need to consider a classification problem involving $J_*$ classes in $S^{\ast}$ (note that $J_*<J$). Since $J$ is finite, repeated application of the same argument leads to the proof.
\end{proof}
We analyze 8 benchmark data sets to evaluate the performance of our multi-class classifiers. The Dry Bean data set is taken from Kaggle and the E-Coli data set is taken from the UCI Machine Learning Repository. The rest of the data sets are available at LIBSVM (\url{https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/}). For each data set, we form the training and the test sets (sizes are reported in Table~\ref{tab:benchmark_multiclass}) by randomly partitioning the data as before. First, the observations from the smallest class are divided into two groups in a nearly 3:1 ratio. An equal number of observations from each class is added to the smaller group to form the test set, while the training set is formed by the rest of the observations. This random partitioning is done 500 times, and the performance of different classifiers over these 500 partitions is reported in Table~\ref{tab:benchmark_multiclass}. We could not use the $k$M-Smote algorithm for these multi-class problems due to technical errors in the existing code.
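The random partitioning described above can be sketched as follows; this is our own illustration, and the `ratio` argument and rounding rule are assumptions, since the text specifies only a nearly 3:1 split of the smallest class.

```python
import numpy as np

def split_imbalanced(y, rng, ratio=0.75):
    """One random train/test partition in the spirit of the text: the
    smallest class is split in a nearly ratio:(1 - ratio) proportion,
    the smaller part fixes the common per-class test-set size, and all
    remaining observations form the training set.  `y` is the label
    vector; returns boolean masks (train, test)."""
    labels, counts = np.unique(y, return_counts=True)
    n_test = int(round(counts.min() * (1.0 - ratio)))  # per-class test size
    test = np.zeros(len(y), dtype=bool)
    for lab in labels:
        idx = np.flatnonzero(y == lab)
        test[rng.choice(idx, size=n_test, replace=False)] = True
    return ~test, test
```

Repeating this 500 times with fresh random draws reproduces the averaging scheme used for the reported results.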
In 3 out of these 8 data sets (Dry Bean, E-Coli and Shuttle), the proposed OvR$+$ method had the best performance in terms of all the measures considered here. In 2 data sets (SVM Guide 2 and Wine Recognition), the proposed OvO$+$ method outperformed the others. Both methods also performed well on the Vehicle data. The OvO$+$ method had the highest precision on the SVM Guide 4 data set, but its recall and $F_1$ score were slightly lower; the OvR$+$ method, however, performed well on this data set. Only on the Satimage data did our proposed methods, especially OvO$+$, have relatively poor performance.
To compare the overall performance of different methods on these $8$ data sets, we construct the box plots of efficiency scores as before (see Figure~\ref{fig:efficiency_multiclass}). These plots show that the overall performance of OvR$+$ was better than its competitors. The OvO$+$ method had good overall performance. As far as precision-efficiency is concerned, its performance was comparable to OvR$+$, but its recall-efficiency and $F_1$-efficiency were comparatively lower.
\section{Concluding Remarks}
\label{sec:concluding_remarks}
This article presents a statistical method for nearest neighbor classification based on imbalanced training data. While the usual nearest neighbor classifier uses the same value of $k$ (the number of neighbors) for classification of all observations, here the choice of $k$
is case-specific, and it is obtained by maximizing a $p$-value-based evidence. Unlike the existing methods, our proposed classifiers do not need any ad hoc weight functions or pseudo observations for their implementation, so the results are exactly reproducible. The proposed method is constructed under a probabilistic framework, and the resulting classifiers have large sample consistency under mild assumptions. As we have amply demonstrated in this article by analyzing several simulated and real data sets, they can outperform the usual nearest neighbor classifier, the weighted nearest neighbor classifier and various SMOTE algorithms in a wide variety of classification problems.
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:intro}
Convection plays an important and well-known role in the transport of
energy in stellar interiors. It has also been argued that convection
is important in a number of low-density astrophysical plasmas, such as
the intracluster medium in clusters of galaxies (Chandran \& Rasera
2007) and accretion flows onto compact objects (Quataert \& Gruzinov
2000). Although convection in stellar interiors has been thoroughly
studied over the course of several decades, the theory of convection
in low-density plasmas is still being developed, and investigations
carried out during the last several years have led to some interesting
surprises.
For many years, it was widely assumed that the convective stability
criterion for a low-density, non-rotating, weakly magnetized plasma is
the Schwarzschild criterion, $ds/dz>0$, where $s$ is the specific
entropy of the plasma and the gravitational acceleration is in
the~$-z$ direction. However, Balbus~(2000, 2001) showed that even
weak magnetic fields strongly modify the convective stability
criterion by causing heat to be conducted almost exclusively along
magnetic field lines. This anisotropy in the thermal conductivity
arises when the electron gyroradius is much less than the electron
mean free path, a condition that is easily satisfied for realistic
magnetic fields in most cases of interest. Balbus considered an
equilibrium in which the magnetic field is in the~$xy$-plane, and in
which $\beta = 8\pi p/B^2 \gg 1$, where $p$ is the pressure and~$B$ is
the magnetic field strength. He showed that near marginal stability,
the temperature of a rising fluid parcel is almost constant,
essentially for two reasons. First, the parcel remains magnetically
connected to material at its initial height. Second, near marginal
stability a fluid parcel rises very slowly, so thermal conduction has
enough time to approximately equalize the temperature along the
perturbed magnetic field lines. As a result, the stability criterion
becomes~$dT/dz>0$. When this criterion is satisfied, a rising fluid
parcel is cooler than its surroundings, and hence denser at the same
pressure, so that it falls back down to its initial height.
Parrish \& Stone (2005, 2007) carried out numerical
simulations that validated Balbus's analysis and extended it to
the nonlinear regime.
An immediate question arises, namely, why doesn't Balbus's stability
criterion apply to stars?
Although stellar plasmas are magnetized, heat is conducted in
stellar interiors primarily by photons. As discussed by Balbus (2000,
2001), the conductivity is thus almost isotropic, so it is the
Schwarzschild criterion that applies. The reason that the Schwarzschild
criterion applies even if the isotropic conductivity is large is
somewhat subtle. If $ds/dz<0$, then adiabatic expansion would cause a
slowly rising fluid parcel to be hotter and lighter than its
surroundings, and hence buoyant. The effect of isotropic conductivity
is then to relax the temperature in the parcel towards that of the
immediately surrounding fluid. However, because the conductivity is
finite, the rising fluid parcel's temperature is never decreased all
the way to the temperature of its surroundings. The fluid parcel thus
remains slightly hotter than its surroundings, and hence slightly
lighter at the same pressure, and the fluid is convectively unstable.
Although isotropic conductivity does not modify the Schwarzschild
stability criterion, it does reduce the convective heat flux and the
``efficiency of convection'' in a convectively unstable fluid by
decreasing the temperature difference $\delta T$ between rising fluid
parcels and their surroundings (Cox \& Guili 1968).
Balbus's analysis has been extended in two ways by recent
studies. First, Chandran \& Dennis (2006), (hereafter referred to as
[CD06]), investigated how the stability criterion is affected by
the presence of cosmic rays that diffuse primarily along magnetic
field lines. Like Balbus (2000, 2001), they assumed that~$\beta \gg 1$
and took the equilibrium magnetic field to be in the~$xy$-plane. They
showed that near marginal stability, the cosmic-ray pressure is nearly
constant within a rising fluid element. This is because the fluid element
remains connected to material at its initial height, and because fluid
elements rise very slowly near marginal stability, so that there is plenty
of time for cosmic-ray diffusion to approximately equalize the cosmic-ray
pressure~$p_{\rm cr}$ along the perturbed magnetic field lines.
[CD06]\ showed analytically that the stability criterion in the presence
of cosmic rays is $n k_B dT/dz + dp_{\rm cr}/dz > 0$, where $n$ is the
total number density of thermal particles.
More recently, Quataert (2007) considered buoyancy instabilities in a
low-density, high-$\beta$ plasma in the absence of cosmic rays, but
allowing the equilibrium magnetic field to have a component in the~$z$
direction, parallel or antiparallel to the direction of gravity. Since
the temperature is a function of~$z$, the $z$ component of the
equilibrium magnetic field leads to an equilibrium heat flux.
Quataert (2007) showed that this heat flux causes the plasma to become
convectively unstable even if~$dT/dz>0$, so that the plasma is always
convectively unstable if the magnetic field has a nonzero $z$
component and a nonzero component in the $xy$ plane. This
heat-flux-buoyancy instability arises because of the geometry of the
perturbed magnetic field lines in the plasma. For example, when the
magnetic field is in the~$z$ direction and a fluid element is
displaced upwards at a 45-degree angle with respect to the $z$ axis,
field lines converge as they enter the fluid element from ``above''
(i.e., from larger~$z$). As a result, if $dT/dz>0$ then the parallel
heat flux converges within the fluid element, causing the fluid
element to become hotter than its surroundings, and thus less dense at
the same pressure. Buoyancy forces then cause the upwardly displaced
fluid element to rise unstably. (Quataert~2007) The nonlinear
development of this instability was investigated numerically by
Parrish \& Quataert (2007).
One of the open questions in this area of research is whether the
buoyancy instabilities identified in these previous studies for
the~$\beta \gg 1$ regime still operate when the magnetic field
strength is increased to the point that~$\beta \lesssim 1$. We address
this question in this paper. We consider the equilibrium geometry
investigated by Balbus (2000, 2001) and [CD06], in
which the magnetic field is in the~$xy$-plane; in particular, we set
${\mathbf B}_0 = B_0 \hat{\mathbf y}$. We also allow for cosmic rays that diffuse
along magnetic field lines, but now we allow~$\beta$ to take any
value. We focus on wave vectors in the ``quasi-interchange'' limit,
in which $|k_x| \gg |k_y|$, $|k_x| \gg |k_z|$, and $|k_x H| \gg 1$,
where~$H$ is the density scale height. This is the most unstable
wave-vector regime for stratified adiabatic plasmas, because a
small~$k_y$ reduces the stabilizing effects of magnetic tension and a
large~$k_x$ allows a rising fluid element to easily get out of the way
of the next rising element beneath it by moving just a small distance
in the~$x$ direction. (Parker 1967, Shu 1974, Ferri\`ere et al. 1999)
We show analytically that the stability criterion in this limit is
\begin{equation}
nk_B \frac{dT}{dz} +
\frac{d p_{\rm cr}}{d z}
+ \frac{1}{8\pi} \frac{dB^2}{dz} > 0,
\label{eq:stabcrit0}
\end{equation}
and we present a heuristic derivation of this stability criterion from
physical arguments. We also derive approximate analytical solutions
to the dispersion relation for small-amplitude perturbations to the
equilibrium in different parameter regimes, and compare these
solutions to numerical solutions of the full dispersion relation.
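Although the analysis below is analytic, the criterion in equation~(\ref{eq:stabcrit0}) is straightforward to check numerically for given equilibrium profiles. The following sketch (our own illustration, in Gaussian cgs units) estimates the left-hand side by finite differences:

```python
import numpy as np

k_B = 1.380649e-16  # Boltzmann constant [erg/K], Gaussian cgs units

def stability_lhs(z, n, T, p_cr, B):
    """Finite-difference estimate of the left-hand side of the
    quasi-interchange stability criterion
        n k_B dT/dz + dp_cr/dz + (1/8 pi) d(B^2)/dz > 0 ;
    the equilibrium is stable where the returned array is positive.
    All profiles are sampled on the grid z."""
    dT_dz   = np.gradient(T, z)
    dpcr_dz = np.gradient(p_cr, z)
    dB2_dz  = np.gradient(B**2, z)
    return n * k_B * dT_dz + dpcr_dz + dB2_dz / (8.0 * np.pi)
```

For instance, an isothermal atmosphere with outwardly decreasing cosmic-ray pressure and constant $B$ is unstable everywhere by this criterion.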
Our results are important for determining the convective stability of
galaxy-cluster plasmas, in which cosmic-rays are often produced by
central radio sources. Convection in intracluster plasmas is of
interest because it may provide a mechanism for regulating the
temperature profiles of galaxy-cluster plasmas and offsetting
radiative cooling, thereby solving the so-called ``cooling-flow
problem.'' (Chandran 2004, 2005; Parrish \& Stone 2005, 2007;
Chandran \& Rasera 2007). Our results are also important for
determining the conditions under which the Parker instability can
operate in the interstellar medium (ISM). Previous treatments of the
Parker instability assume an adiabatic thermal plasma (Parker 1966,
1967, Shu 1974, Ryu 2003). Our results show that anisotropic thermal
conductivity makes the ISM more unstable to the Parker instability, so
that the instability can operate under a wider range of equilibrium
profiles than was previously recognized.
The remainder of this paper is organized as follows. In section
\ref{sec:disprel} we outline the derivation of the general form of the
dispersion relation. In section \ref{sec:qil} we specialize to the
quasi-interchange limit, present our derivation of the necessary and
sufficient condition for convective stability, and describe the
properties of the unstable eigenmodes in plasmas that are
very close to marginal stability. In section~\ref{sec:physical} we
present a heuristic, physical derivation of the stability criterion.
We discuss the implications of our work for galaxy-cluster plasmas and
the interstellar medium in sections~\ref{sec:gc} and~\ref{sec:parker},
respectively. In section~\ref{sec:conc} we summarize our results, and
in appendix~\ref{ap:eigenmodes} we present approximate analytic
solutions and numerical solutions to the dispersion relation.
\section{The general dispersion relation}
\label{sec:disprel}
We begin with a standard set of two-fluid equations
(Drury \& Volk 1981, Jones~\&~Kang 1990), which we modify
to include thermal conduction along the magnetic field:
\begin{eqnarray}
\label{eq:masscon}
\frac{d\rho}{dt}&=&-\rho\del\cdot\mathbf v, \\
\label{eq:momconv}
\frac{d\mathbf v}{dt}&=&-\frac{1}{\rho}\del
\left(
p+p_{\rm cr}+\frac{B^2}{8\pi}
\right) +
\frac{1}{4\pi\rho}{\mathbf B}\cdot\del{\mathbf B}
+ \mathbf g, \\
\label{eq:induc}
\frac{d{\mathbf B}}{dt}&=&-{\mathbf B}\del\cdot\mathbf v
+ {\mathbf B}\cdot\del\mathbf v, \\
\label{eq:pgas}
\frac{dp}{dt}&=&-
\gamma p\del\cdot\mathbf v +
\left(
\gamma-1
\right)\del\cdot
\left[
\hat {\mathbf b}\kappa_\parallel
\left(
\hat {\mathbf b}\cdot\del T
\right)
\right], \\
\label{eq:pcosmic}
\frac{dp_{\rm cr}}{dt}&=&
-\gamma_{\rm cr} p_{\rm cr}\del\cdot\mathbf v+\del\cdot
\left[
\hat {\mathbf b} D_\parallel
\left(
\hat {\mathbf b}\cdot\del p_{\rm cr}
\right)
\right],
\end{eqnarray}
where $d/dt=\partial/\partial t + \mathbf v\cdot\nabla$, and
where $\rho$ is the plasma mass density, $\mathbf v$ is the
velocity, $p$ is the plasma pressure, $p_{\rm cr}$ is
the cosmic-ray pressure, $\gamma$ is the ratio of specific
heats of the plasma, $\gamma_{\rm cr}$ is the effective ratio of
specific heats for the cosmic rays, ${\mathbf B}$ is the magnetic
field, $\hat {\mathbf b}$ is a unit vector in the direction of the
magnetic field, $\kappa_\parallel$ is the thermal conductivity along
the direction of the magnetic field, $D_\parallel$ is the
cosmic-ray diffusivity along the direction of the magnetic
field, and $\mathbf g$ is the gravitational acceleration. We have
ignored cross-field conduction and diffusion in
equations~(\ref{eq:pgas})~and~(\ref{eq:pcosmic}) since the gyroradii
of the thermal particles are small compared to their Coulomb
mean free path, and the gyroradii of the cosmic rays are
small compared to the mean free path for cosmic-ray
scattering. Equations~(\ref{eq:masscon})$-$(\ref{eq:pcosmic})
are closed via the equation of state for an ideal gas:
\begin{equation}
\label{eq:eqstate}
p=C_V
\left(
\gamma-1
\right)\rho T.
\end{equation}
We take
\begin{equation}
\label{eq:equilg}
\mathbf g=-g\hat{\mathbf z},
\end{equation}
and consider an equilibrium in which
\begin{equation}
\label{eq:equilB}
{\mathbf B}_0=B_0\hat{\mathbf y}.
\end{equation}
All equilibrium quantities (denoted with a ``0'' subscript)
are taken to be functions of $z$
only, and we set $\mathbf v_0=0$. These assumptions lead to a
condition for hydrostatic equilibrium of the form:
\begin{equation}
\label{eq:hydroeq}
\frac{d}{dz}
\left(
p_0+p_{{\rm cr,}0}+\frac{B_0^2}{8\pi}
\right)
=-\rho_0 g.
\end{equation}
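For concreteness, equation~(\ref{eq:hydroeq}) can be integrated numerically for a given density profile; the sketch below (our own illustration, in arbitrary code units with a constant, user-supplied $g$) builds the total pressure downward from a boundary value:

```python
import numpy as np

def hydrostatic_ptot(z, rho, g, ptot_top):
    """Total pressure p_tot = p + p_cr + B^2/(8 pi) obeying
    d(p_tot)/dz = -rho * g, integrated downward (trapezoid rule)
    from the boundary value p_tot(z[-1]) = ptot_top."""
    ptot = np.empty_like(z)
    ptot[-1] = ptot_top
    for i in range(len(z) - 2, -1, -1):
        dz = z[i + 1] - z[i]
        ptot[i] = ptot[i + 1] + 0.5 * (rho[i] + rho[i + 1]) * g * dz
    return ptot
```

How $p_{\rm tot}$ is then apportioned among $p_0$, $p_{{\rm cr,}0}$, and $B_0^2/8\pi$ is not fixed by hydrostatic balance alone; it is part of the choice of equilibrium.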
To introduce perturbations, we represent the variables in
our two-fluid equations as sums of an equilibrium value and
a small fluctuating quantity as follows:
\begin{eqnarray}
\label{eq:rhoperturb}
\rho &=& \rho_0+\delta\rho, \\
\label{eq:pgasperturb}
p &=& p_0+\delta p, \\
\nonumber\cdots
\end{eqnarray}
We employ a local analysis, in which we take the fluctuating
quantities to be proportional to
$e^{i\left(\mathbf k\cdot\mathbf r-\omega t\right)}$, with
\begin{equation}
kH \gg 1,
\label{eq:local1}
\end{equation}
where
\begin{equation}
H \equiv \left|\frac{d \ln \rho_0}{dz}\right|^{-1}
\label{eq:defH}
\end{equation}
is the density scale height, which we take to be comparable
to the length scales over which each of the equilibrium quantities
varies. Substituting
equations (\ref{eq:rhoperturb}) and (\ref{eq:pgasperturb}),
and analogous expressions for
$p_{\rm cr}$, ${\mathbf B}$, $\mathbf v$, $\hat {\mathbf b}$, and $T$ into
equations~(\ref{eq:masscon})$-$(\ref{eq:pcosmic}), and into
equation~(\ref{eq:eqstate}), we obtain the following
equations for the fluctuating quantities:
\begin{equation}
\label{eq:linmasscon}
-i\omega\frac{\delta\rho}{\rho_0}
+i\mathbf k\cdot\delta\mathbf v
+\delta v_z\frac{d\ln\rho_0}{dz}=0,
\end{equation}
\begin{equation}
\label{eq:linmomcon}
-i\omega\delta\mathbf v=-g\frac{\delta\rho}{\rho_0}\hat{\mathbf z}
-i\mathbf k\frac{\delta p_{\rm tot}}{\rho_0}
+v_{\rm A}^2
\left[
\frac{d\ln B_0}{dz}\frac{\delta B_z}{B_0}\hat{\mathbf y}
+ik_y\frac{\delta{\mathbf B}}{B_0}
\right],
\end{equation}
\begin{equation}
\label{eq:lininduc}
-i\omega\frac{\delta{\mathbf B}}{B_0}
=ik_y\delta\mathbf v-\delta v_z\frac{d\ln B_0}{dz}
-\hat{\mathbf y}\left(i\mathbf k\cdot\delta\mathbf v\right),
\end{equation}
\begin{equation}
\label{eq:linpgas}
-i\omega
\left[
\frac{\delta p}{p_0}
-\gamma\frac{\delta\rho}{\rho_0}
\right]+
\delta v_z \frac{d\ln
\left(
p_0\rho_0^{-\gamma}
\right)}{dz} = D_{\rm cond}
\left[
ik_y\frac{d\ln T_0}{dz}\frac{\delta B_z}{B_0}
- k_y^2\frac{\delta T}{T_0}
\right],
\end{equation}
\begin{equation}
\label{eq:linpcosmic}
-i\omega\frac{\delta p_{\rm cr}}{p_{{\rm cr,}0}}
+ \delta v_z\frac{d\ln p_{{\rm cr,}0}}{dz}
+ i\gamma_{\rm cr}\mathbf k\cdot\delta\mathbf v =D_\parallel
\left[
ik_y\frac{d\ln p_{{\rm cr,}0}}{dz}
\frac{\delta B_z}{B_0}
- k_y^2\frac{\delta p_{\rm cr}}{p_{{\rm cr,}0}}
\right],
\end{equation}
\begin{equation}
\label{eq:lineqstate}
\frac{\delta p}{p_0}
=\frac{\delta\rho}{\rho_0}
+\frac{\delta T}{T_0},
\end{equation}
where,
\begin{equation}
\label{eq:Dcond}
D_{\rm cond}=\frac{\left(\gamma-1\right)\kappa_\parallel T_0}{p_0},
\end{equation}
and
\begin{equation}
\label{eq:ptotdef}
p_{\rm tot}=p+p_{\rm cr}+\frac{B^2}{8\pi}.
\end{equation}
Equations (\ref{eq:linmasscon})--(\ref{eq:lineqstate})
may be reduced to an expression of the form:
${\mathsf M}\cdot\delta\mathbf v=0,$
where $\mathsf M$ is a $3\times3$ matrix. For non-trivial
solutions of this equation, we require
$|{\sf M}|=0$, whence we obtain the dispersion relation:
\begin{equation}
\label{eq:disprel}
A_0\omega^6+A_2\omega^4+A_4\omega^2+A_6=0,
\end{equation}
where,
\begin{eqnarray}
\label{eq:bigA0}
A_0 &=& 1, \\
\label{eq:bigA2}
A_2&=& -k^2
\left(
u^2+v_{\rm A}^2
\right)
-k_y^2v_{\rm A}^2+g\frac{d\ln\rho_0}{dz}, \\
\label{eq:bigA4}
A_4&=& k_y^2k^2v_{\rm A}^2
\left(
2u^2+v_{\rm A}^2
\right)-
\left(
k_x^2+k_y^2
\right)
\left[
g^2+
\left(
u^2+v_{\rm A}^2
\right) g\frac{d\ln\rho_0}{dz}
\right], \\
\label{eq:bigA6}
A_6&=&k_y^2v_{\rm A}^2
\left[
-k^2k_y^2v_{\rm A}^2 u^2 +
\left(
k_x^2+k_y^2
\right)
\left(
g^2+u^2g\frac{d\ln\rho_0}{dz}
\right)
\right],
\end{eqnarray}
and
\begin{equation}
\label{eq:udef}
u^2=\frac{1}{\rho_0}
\left[
p_0
\left(
\frac{\gamma\omega+i\eta}{\omega+i\eta}
\right)+
p_{{\rm cr,}0}\frac{\gamma_{\rm cr}\omega}{\omega+i\nu}
\right],
\end{equation}
and where in equation~(\ref{eq:udef}) we have introduced the
quantities
\begin{eqnarray}
\eta&=&k_y^2D_{\rm cond}, \\
\nu&=&k_y^2D_\parallel, \\
v_{\rm A}^2 &=&\frac{B_0^2}{4\pi\rho_0},
\end{eqnarray}
where $v_{\rm A}$ is the Alfv\'en speed, and $\eta$ and $\nu$ are,
respectively, the rates at which temperature fluctuations and
cosmic-ray-pressure fluctuations are smoothed out along the magnetic
field. Equations (\ref{eq:disprel})--(\ref{eq:udef}) represent the
same result as that presented in equations~(26)~and~(27) of
[CD06]\ and we shall henceforth refer to this result as the ``general
dispersion relation.'' As we shall see, this relation constitutes an
eighth-order polynomial equation in $\sigma=-i\omega$ (where the
change of variables is made so as to make all of the polynomial
coefficients real). It is worthwhile to note that in the absence of
cosmic rays and thermal conductivity,
\begin{equation}
\label{eq:limu}
u^2
\stackrel{\displaystyle\longrightarrow}
{\scriptstyle \nu,\eta,p_{\rm cr}\rightarrow 0}
{}\frac{\gamma p_0}{\rho_0}=c_s^2,
\end{equation}
where $c_s$ is the adiabatic sound speed, and that if we
take this limit, together with the limit of no
stratification and $g\rightarrow 0$, the general dispersion relation
reduces to the well-known dispersion relation obtained in
ideal MHD. Thus the normal modes described by equation~(\ref{eq:disprel})
may be viewed as modifications of the
Alfv\'en mode and the fast and slow magnetosonic modes
of ideal MHD.
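This limit also provides a convenient numerical check. The sketch below (our own illustration) forms the coefficients of equations~(\ref{eq:bigA2})--(\ref{eq:bigA6}) with $u^2 \to c_s^2$ and solves the resulting cubic in $\omega^2$; a negative root $\omega^2<0$ signals an exponentially growing mode.

```python
import numpy as np

def omega_squared_roots(kx, ky, kz, cs2, vA2, g, dlnrho_dz):
    """Roots omega^2 of the dispersion relation
       A0 w^6 + A2 w^4 + A4 w^2 + A6 = 0
    in the adiabatic, cosmic-ray-free limit (eta = nu = p_cr = 0,
    so u^2 -> cs^2), using the coefficients A2, A4, A6 of the text."""
    k2 = kx**2 + ky**2 + kz**2
    kperp2 = kx**2 + ky**2          # k_x^2 + k_y^2
    A2 = -k2 * (cs2 + vA2) - ky**2 * vA2 + g * dlnrho_dz
    A4 = (ky**2 * k2 * vA2 * (2.0 * cs2 + vA2)
          - kperp2 * (g**2 + (cs2 + vA2) * g * dlnrho_dz))
    A6 = ky**2 * vA2 * (-k2 * ky**2 * vA2 * cs2
                        + kperp2 * (g**2 + cs2 * g * dlnrho_dz))
    return np.roots([1.0, A2, A4, A6])
```

For example, with $k_x=k_y=1$, $k_z=0$, $c_s^2=v_{\rm A}^2=1$, and $g=0$, the three roots are $\omega^2=1$ (the Alfv\'en branch, $k_y^2 v_{\rm A}^2$) and $\omega^2=2\pm\sqrt{2}$ (the fast and slow branches), as expected for ideal MHD.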
We now present the definitions of a number of frequencies
that allow us to write the polynomial form of the dispersion
relation more compactly. These are:
\begin{eqnarray}
\omega_{\rm s}^2&=&\frac{k^2p_0}{\rho_0},
\label{eq:omegassqd} \\
\omega_{\rm A}^2&=&k_y^2v_{\rm A}^2, \\
\omega_0^2&=&\frac{\rho_0}{p_0}g^2
\sin^2\theta, \\
\omega_1^2&=&\frac{g\sin^2\theta}{\gamma}
\frac{d}{dz}\ln
\left(p_0\rho_0^{-\gamma}
\right), \\
\omega_2^2&=&g\sin^2\theta\frac{d}{dz}\ln T_0,\label{eq:omegatwosqd} \\
\omega_3^2&=&\frac{g\sin^2\theta}{\gamma_{\rm cr}}
\frac{d}{dz}\ln
\left(p_{{\rm cr,}0}\rho_0^{-\gamma_{\rm cr}}
\right), \\
\omega_4^2&=&
g\sin^2\theta\frac{d}{dz}\ln p_{{\rm cr,}0}, \label{eq:omegafoursqd} \hspace{0.3cm} \mbox{ and}\\
\omega_5^2&=&
g\sin^2\theta\frac{d}{dz}\ln B_0^2,
\label{eq:omegafivesqd}
\end{eqnarray}
where we have defined
\begin{equation}
\label{eq:sinsqd}
\sin^2\theta=\frac{k_x^2+k_y^2}{k^2}.
\end{equation}
The quantities $\omega_{\rm A}^2$ and $\omega_{\rm s}^2$ are the
squares of the Alfv\'en and isothermal sound-wave
frequencies, respectively. The quantity $\omega_1^2$ is
the square of the usual Brunt-V\"ais\"al\"a frequency for
buoyancy oscillations in the limit of vanishing
cosmic-ray pressure and magnetic field. As we shall
see below, the quantities $\omega_3^2$ and
$\omega_5^2$ serve to modify the frequency of these
oscillations when the cosmic-ray pressure and magnetic
field are non-vanishing. The quantities
$\omega_2^2$ and $\omega_4^2$ are related to $\omega_1^2
$ and $\omega_3^2$ through the identities:
\begin{eqnarray}
\label{eq:ident12}
\omega_2^2&=&\gamma\omega_1^2+
\left(
\gamma-1
\right)g\sin^2\theta\frac{d\ln\rho_0}{dz}, \\
\label{eq:ident34}
\omega_4^2&=&\gamma_{\rm cr}\omega_3^2
+\gamma_{\rm cr} g\sin^2\theta\frac{d\ln\rho_0}{dz}.
\end{eqnarray}
We also define the quantities $W^2$ and $\mathcal C$:
\begin{equation}
\label{eq:dubyadef}
W^2=\omega_2^2
+\chi\omega_4^2
+\frac{1}{\beta}\omega_5^2,
\end{equation}
and,
\begin{equation}
\label{eq:defcrit}
\mathcal C=\omega_{\rm A}^2+ W^2,
\end{equation}
where in equation (\ref{eq:dubyadef}), $\chi=p_{{\rm cr,}0}/p_0$.
Finally, noting the identity
\begin{equation}
\label{eq:critident}
-g\sin^2\theta\frac{d\ln\rho_0}{dz}=\omega_0^2
-\omega_{\rm A}^2+\mathcal C,
\end{equation}
we find that
the general dispersion relation may be written
\begin{equation}
\label{eq:gendisp}
a_0\sigma^8+a_1\sigma^7+\cdots+a_7\sigma+a_8=0,
\end{equation}
where
\begin{eqnarray}
\label{eq:smallazero}
a_0&=&\omega_{\rm s}^{-2}, \\
\label{eq:smallaone}
a_1&=&
\left(
\nu+\eta
\right)\omega_{\rm s}^{-2}, \\
\label{eq:smallatwo}
a_2&=&
\left(
\gamma+\chi\gamma_{\rm cr}+\frac{2}{\beta}
\right)+
\left[
\nu\eta+\omega_{\rm A}^2-g\frac{d\ln\rho_0}{dz}
\right] \omega_{\rm s}^{-2}, \\
\label{eq:smallathree}
a_3&=&
\left[
\nu
\left(
\gamma+\frac{2}{\beta}
\right)
+\eta
\left(
1+\chi\gamma_{\rm cr}+\frac{2}{\beta}
\right)
\right] +
\left[
\left(
\nu+\eta
\right)
\left(
\omega_{\rm A}^2-g\frac{d\ln\rho_0}{dz}
\right)
\right]
\omega_{\rm s}^{-2}, \\
\label{eq:smallafour}
a_4&=& \nu\eta
\left[
\left(
1+\frac{2}{\beta}
\right)+
\left(
\omega_{\rm A}^2-g\frac{d\ln\rho_0}{dz}
\right)\omega_{\rm s}^{-2}
\right] + \nonumber \\
&{}&\quad
\left[
\left(
\left[
\left(
\gamma-1
\right)+\chi\gamma_{\rm cr}
\right]+\frac{2}{\beta}
\right)\omega_0^2+
\left(
\gamma+\chi\gamma_{\rm cr}
\right)\omega_{\rm A}^2+
\left(
\gamma+\chi\gamma_{\rm cr}+\frac{2}{\beta}
\right)\mathcal C
\right], \\
\label{eq:smallafive}
a_5&=& \nu
\left[
\left(
\gamma+\frac{2}{\beta}
\right)
\left(
\omega_0^2+\mathcal C
\right)-\omega_0^2+\gamma\omega_{\rm A}^2
\right]+ \nonumber \\
&{}&\qquad\qquad\eta
\left[
\left(
1+\chi\gamma_{\rm cr}+\frac{2}{\beta}
\right)
\left(
\omega_0^2+\mathcal C
\right)-\omega_0^2 +
\left(
1+\chi\gamma_{\rm cr}
\right)\omega_{\rm A}^2
\right], \\
\label{eq:smallasix}
a_6&=&\nu\eta
\left[
\left(
1+\frac{2}{\beta}
\right)
\left(
\omega_0^2+\mathcal C
\right)-
\left(
\omega_0^2-\omega_{\rm A}^2
\right)
\right] + \omega_{\rm A}^2
\left[
\left(
\gamma+\chi\gamma_{\rm cr}
\right)
\left(
\omega_0^2+\mathcal C
\right)-\omega_0^2
\right], \\
\label{eq:smallaseven}
a_7&=& \omega_{\rm A}^2
\biggl(
\left[
\nu
\left(
\gamma-1
\right)+\chi\gamma_{\rm cr}\eta
\right]\omega_0^2+
\left[
\nu\gamma+\eta
\left(
1+\chi\gamma_{\rm cr}
\right)
\right]\mathcal C
\biggr), \hspace{0.3cm} \mbox{ and} \\
\label{eq:smallaeight}
a_8&=&\nu\eta\omega_{\rm A}^2\mathcal C.
\end{eqnarray}
We assume that $|d\ln p_0/dz|$, $|d\ln p_{{\rm cr,}0}/dz|$, and
$|d\ln B_0^2/dz|$ are of order $H^{-1}$. We may thus conclude from
equation~(\ref{eq:hydroeq}) that $g\sim p_{\rm tot,0} H^{-1}/\rho_0$. We
also assume that $p_{\rm tot}$ is not much greater than~$p$. Our
assumption that $|kH| \gg 1$ then allows us to write that
\begin{equation}
\label{eq:dropterm}
\omega_{\rm s}^{-2} g\frac{d\ln\rho_0}{dz}\sim
\left(
k^2H^2
\right)^{-1}\ll1.
\end{equation}
This inequality enables us to drop the $\omega_{\rm s}^{-2}g\,d\ln\rho_0/dz$
terms in equations (\ref{eq:smallatwo}), (\ref{eq:smallathree}), and
(\ref{eq:smallafour}).
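The identity in equation~(\ref{eq:critident}), which underlies the
coefficient algebra above, can also be verified symbolically. The
sketch below assumes sympy, takes $T\propto p_0/\rho_0$, and uses the
hydrostatic condition $d(p_0+p_{{\rm cr,}0}+B_0^2/8\pi)/dz=-\rho_0 g$
to eliminate $dp_0/dz$; since $\mathcal C=\omega_{\rm A}^2+W^2$, the
identity reduces to $-g\sin^2\theta\, d\ln\rho_0/dz=\omega_0^2+W^2$:

```python
import sympy as sp

z = sp.symbols('z')
g, th = sp.symbols('g theta', positive=True)
p0 = sp.Function('p0')(z)
rho0 = sp.Function('rho0')(z)
pcr0 = sp.Function('pcr0')(z)
B0sq = sp.Function('B0sq')(z)  # stands in for B_0^2

s2 = g * sp.sin(th)**2
w0sq = (rho0 / p0) * g**2 * sp.sin(th)**2
w2sq = s2 * sp.diff(sp.log(p0 / rho0), z)   # T taken proportional to p0/rho0
w4sq = s2 * sp.diff(sp.log(pcr0), z)
w5sq = s2 * sp.diff(sp.log(B0sq), z)
W2 = w2sq + (pcr0/p0)*w4sq + w5sq/(8*sp.pi*p0/B0sq)

# hydrostatic equilibrium fixes dp0/dz in terms of the other gradients
dp0dz = -rho0*g - sp.diff(pcr0, z) - sp.diff(B0sq, z)/(8*sp.pi)

# with C = omega_A^2 + W^2, eq. (critident) reduces to
#   -g sin^2(theta) dln(rho0)/dz = omega_0^2 + W^2
resid = -s2*sp.diff(sp.log(rho0), z) - (w0sq + W2)
assert sp.simplify(resid.subs(sp.diff(p0, z), dp0dz)) == 0
```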
As a check on the results of this section, we show in appendix
\ref{ap:pslimit} that equation (\ref{eq:gendisp}) properly reduces,
when the cosmic-ray diffusivity is taken to be infinite and thermal
conduction is negligible, to the short-wavelength limit of the results
obtained by Parker (1966, 1967) and Shu (1974).
\section{The quasi-interchange limit}
\label{sec:qil}
The most unstable modes in a gravitationally
stratified adiabatic plasma threaded by a horizontal magnetic
field are those for which $|k_x|$ is very large, so that
\begin{eqnarray}
|k_x H| & \gg & 1,\\
|k_x| & \gg & |k_y|, \\
|k_x| & \gg & |k_z|, \mbox{ \hspace{0.3cm} and}\\
\sin^2\theta & \rightarrow & 1
\end{eqnarray}
(Parker 1967, Shu 1974, Ferri\`ere et al. 1999).
We conjecture that the same is true when thermal conduction
is taken into account, at least when the equilibrium magnetic
field is horizontal, and thus we focus on this limit, which
we call the ``quasi-interchange limit.''
For very large~$|k_x|$, one set of modes consists of high-frequency
magnetosonic-like waves. In the $\beta \gg 1$~limit, these waves are
stable [CD06], and we assume they are stable here
as well. [We note, however, that in the presence of an equilibrium
heat flux (i.e. $B_{0z} \neq 0$), anisotropic conduction can cause
magnetosonic waves to become overstable (Socrates, Parrish, \& Stone
2007).] To filter out these high-frequency waves, we assume that
\begin{equation}
\sigma \ll \omega_{\rm s}.
\label{eq:balassumps1}
\end{equation}
We also assume that $|k_x/k_y|$ is sufficiently large that
\begin{eqnarray}
\frac{\omega_{\rm A}}{\omega_{\rm s}}
&\ll& 1,
\label{eq:balassumps2} \mbox{ \hspace{0.3cm} and}\\
\frac{\nu}{\omega_{\rm s}}
\sim \frac{\eta}{\omega_{\rm s}}&\ll&1,
\label{eq:balassumps3}
\end{eqnarray}
and that $|k_x H|$ is sufficiently large that
\begin{eqnarray}
\frac{\omega_i}{\omega_{\rm s}} &\ll &
1;\hskip 1cm i = 0,\ldots, 5.
\label{eq:balassumps4}
\end{eqnarray}
Using these inequalities and equation~(\ref{eq:dropterm}),
we can rewrite the general dispersion relation as a
6th-degree polynomial equation,
\begin{equation}
\label{eq:ballim}
b_0\sigma^6
+ b_1\sigma^5
+ b_2\sigma^4
+ b_3 \sigma^3
+ b_4\sigma^2
+ b_5\sigma
+ b_6=0,
\end{equation}
where
\begin{eqnarray}
\label{eq:balloonb0}
b_0&=&\gamma+\chi\gamma_{\rm cr}+\frac{2}{\beta}, \\
\label{eq:balloonb1}
b_1&=& \nu
\left(
\gamma+\frac{2}{\beta}
\right)+\eta
\left(
1+\chi\gamma_{\rm cr}+\frac{2}{\beta}
\right), \\
\label{eq:balloonb2}
b_2&=&
\left(
\gamma+\chi\gamma_{\rm cr}
\right)
\left(
\omega_{\rm A}^2+\omega_0^2+\mathcal C
\right)
-\omega_0^2+\frac{2}{\beta}
\left(
\omega_0^2+\mathcal C
\right)+\nu\eta
\left(
1+\frac{2}{\beta}
\right), \\
\label{eq:balloonb3}
b_3&=&
\left[
\eta
\left(
1+\chi\gamma_{\rm cr}
\right)
+\nu\gamma
\right]
\left(
\omega_{\rm A}^2+\omega_0^2+\mathcal C
\right)
+
\left(
\nu+\eta
\right)
\left[
\frac{2}{\beta}
\left(
\omega_0^2+\mathcal C
\right)
-\omega_0^2
\right],\qquad \\
\label{eq:balloonb4}
b_4&=&\omega_{\rm A}^2
\biggl[
\left(
\gamma+\chi\gamma_{\rm cr}
\right)
\left(
\omega_0^2+\mathcal C
\right)-\omega_0^2
\biggr]
+\nu\eta
\left[
\omega_{\rm A}^2+\mathcal C+\frac{2}{\beta}
\left(
\omega_0^2+\mathcal C
\right)
\right], \\
\label{eq:balloonb5}
b_5&=&
\omega_{\rm A}^2
\biggl[
\left(
\omega_0^2+\mathcal C
\right)
\left[
\nu\gamma+\eta
\left(
1+\chi\gamma_{\rm cr}
\right)
\right]
-\left(
\nu+\eta
\right)
\omega_0^2
\biggr], \hspace{0.3cm} \mbox{ and} \\
\label{eq:balloonb6}
b_6&=&\nu\eta\omega_{\rm A}^2\mathcal C.
\end{eqnarray}
\subsection{Stability
criterion}
\label{sec:stabcrit}
To obtain the stability criterion for the modes
described by equation~(\ref{eq:ballim}), we use
the Routh-Hurwitz theorem [see, for example, Levinson \& Redheffer (1970)].
To apply this theorem, we construct the
matrix~${\mathcal R}$ from the (real) coefficients of the
polynomial in
equation~(\ref{eq:ballim}), where
\begin{equation}
{\mathcal R}=\left(
\begin{array}{cccccc}
b_1 & b_3 & b_5 & 0 & 0 & 0 \cr
b_0 & b_2 & b_4 & b_6 & 0 & 0 \cr
0 & b_1 & b_3 & b_5 & 0 & 0 \cr
0 & b_0 & b_2 & b_4 & b_6 & 0 \cr
0 & 0 & b_1 & b_3 & b_5 & 0 \cr
0 & 0 & b_0 & b_2 & b_4 & b_6
\end{array}
\right).
\end{equation}
The Routh-Hurwitz theorem then states that for the real parts of the
roots of equation~(\ref{eq:ballim}) to all take on negative values, it
is a necessary and sufficient condition that the determinants of the
leading principal submatrices $\mathcal M_i$ of ${\mathcal R}$ all be
positive. This necessary and sufficient condition is the
stability criterion for our plasma. The determinants of the principal
submatrices of ${\mathcal R}$ are:
\begin{equation}
\det(1)=b_1,
\end{equation}
\begin{equation}
\det(2)=\left|
\begin{array}{cc}
b_1 & b_3 \cr
b_0 & b_2
\end{array}
\right|,
\end{equation}
\begin{equation}
\det(3)=\left|
\begin{array}{ccc}
b_1 & b_3 & b_5 \cr
b_0 & b_2 & b_4 \cr
0 & b_1 & b_3
\end{array}
\right|,
\end{equation}
\begin{equation}
\det(4)=\left|
\begin{array}{cccc}
b_1 & b_3 & b_5 & 0 \cr
b_0 & b_2 & b_4 & b_6 \cr
0 & b_1 & b_3 & b_5 \cr
0 & b_0 & b_2 & b_4
\end{array}
\right|,
\end{equation}
\begin{equation}
\det(5)=\left|
\begin{array}{ccccc}
b_1 & b_3 & b_5 & 0 & 0 \cr
b_0 & b_2 & b_4 & b_6 & 0 \cr
0 & b_1 & b_3 & b_5 & 0 \cr
0 & b_0 & b_2 & b_4 & b_6 \cr
0 & 0 & b_1 & b_3 & b_5
\end{array}
\right|,
\end{equation}
and
\begin{equation}
\det(6)=\left|{\mathcal R}\right|.
\end{equation}
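The construction of ${\mathcal R}$ and its leading principal minors is
mechanical and convenient to automate. The sketch below (assuming
numpy; the coefficient values are illustrative and do not correspond
to a physical equilibrium) builds the Hurwitz matrix of a degree-6
polynomial with positive leading coefficient and checks that
positivity of all leading principal minors coincides with every root
having negative real part:

```python
import numpy as np

def hurwitz_matrix(b):
    """Hurwitz matrix of b[0]*s^n + b[1]*s^(n-1) + ... + b[n]."""
    n = len(b) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2*(j + 1) - (i + 1)  # entry (i, j) holds coefficient b_k
            if 0 <= k <= n:
                H[i, j] = b[k]
    return H

def routh_hurwitz_stable(b):
    """True iff all leading principal minors are positive (requires b[0] > 0)."""
    H = hurwitz_matrix(b)
    return all(np.linalg.det(H[:m, :m]) > 0 for m in range(1, len(b)))

# illustrative coefficients (assumptions, not a physical equilibrium)
b = [1.0, 3.0, 5.0, 7.0, 6.0, 2.0, 0.5]
assert routh_hurwitz_stable(b) == all(r.real < 0 for r in np.roots(b))
```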
After some algebra, we find that these determinants may be
expressed as
\begin{eqnarray}
\label{eq:rhdet1}
\det(1)&=&\nu
\left(
\gamma+\frac{2}{\beta}
\right)+\eta
\left(
1+\chi\gamma_{\rm cr}+\frac{2}{\beta}
\right), \\
\label{eq:rhdet2}
\det(2)&=&
b_1\nu\eta
\left(
1+\frac{2}{\beta}
\right)+J
\left(
\omega_0^2+\frac{2}{\beta}\omega_{\rm A}^2
\right), \\
\label{eq:rhdet3}
\det(3)&=&
b_1J\omega_0^2
\left(
\omega_{\rm A}^2+W^2
\right)+JK
\left(
\omega_0^4+\frac{2}{\beta}\omega_{\rm A}^4
\right)
+\nu\eta Kb_1
\left(
\omega_0^2+\frac{2}{\beta}\omega_{\rm A}^2
\right) \nonumber \\
&{}&\qquad\qquad\qquad+\frac{2}{\beta} J
\left(
\nu+\eta
\right)
\left(
\omega_0^2-\omega_{\rm A}^2
\right)^2, \\
\label{eq:rhdet4}
\det(4)&=&\frac{2}{\beta}
J^2\omega_{\rm A}^2\omega_0^2
\left[
W^2+\omega_0^2
\right]^2+ \nonumber \\
&{}&J\nu\eta
\Biggl\{
(\nu+\eta)\omega_0^2\left[\mathcal C + \frac{2}{\beta}(W^2+\omega_0^2)\right]^2
+ K(\mathcal C^2\omega_0^2 + \mathcal C \omega_0^4)
+ \nonumber \\
&{}& \frac{2}{\beta}(\nu+\eta)\omega_{\rm A}^2(\omega_{\rm A}^2-\omega_0^2)^2
+ \frac{2}{\beta} K\left[ (W^2 + \omega_0^2)^2\omega_0^2 + \mathcal C \omega_0^2\omega_{\rm A}^2 + \omega_{\rm A}^2 (\omega_{\rm A}^2-\omega_0^2)^2\right]
\Biggr\} \nonumber \\
&{}&+
\left(
\nu\eta
\right)^2b_1K
\left[
\left(
1+\frac{2}{\beta}
\right)\omega_0^2\mathcal C+\frac{2}{\beta}
\left(
\omega_0^2-\omega_{\rm A}^2
\right)^2
\right], \\
\label{eq:rhdet5}
\det(5)&=&
\frac{2}{\beta}\omega_0^2\omega_{\rm A}^2
\left[
W^2+\omega_0^2
\right]^2 \times\nonumber \\
&{}&
\biggl\{
\left(
\nu\eta
\right)^2
b_1K^2+
\left(
\nu\eta
\right)JK
\left[
b_1\mathcal C+\omega_{\rm A}^2
\left(
\nu+\eta
\right)+K
\left(
\omega_0^2+\omega_{\rm A}^2
\right)+\frac{2}{\beta}\omega_0^2
\left(
\nu+\eta
\right)
\right]\nonumber \\
&{}&\qquad+J^2\omega_{\rm A}^2
\left[
K\omega_0^2+
\left(
K+\nu+\eta
\right)\mathcal C
\right]
\biggr\}, \hspace{0.3cm} \mbox{ and}\\
\det(6)&=&
\left|
{\mathcal R}
\right|=b_6\det
\left(
5
\right). \label{eq:det6eq}
\end{eqnarray}
The quantities $J$ and $K$ appearing in the above
expressions are defined as
\begin{equation}
\label{eq:defJ}
J =\left(\gamma-1\right)\eta+\chi\gamma_{\rm cr}\nu,
\end{equation}
and
\begin{equation}
\label{eq:defK}
K =\left(\gamma-1\right)\nu+\chi\gamma_{\rm cr}\eta,
\end{equation}
and are always positive.
We first consider the case $k_y \neq 0$.
In this case, $J$, $K$, and the first two
determinants are seen to be composed of sums of
positive quantities and so are themselves
positive. By inspection,
$\det(3)$ through $\det(6)$ are positive if $\mathcal C>0$, and
thus $\mathcal C>0$ is a sufficient condition for stability.
On the other hand, equation~(\ref{eq:det6eq}) shows that
if $\mathcal C<0$, then either $\det(5)$ or $\det(6)$ is negative.
Therefore, $\mathcal C>0$ is also a necessary condition for stability.
If we fix the wavevector~$\mathbf k$, taking~$k_y \neq 0$,
the necessary and sufficient condition for modes at that~$\mathbf k$ to
be stable is then
\begin{equation}
\mathcal C >0.
\label{eq:stabcritky}
\end{equation}
Since $\mathcal C = W^2 + k_y^2 v_A^2$,
the smallest value of~$\mathcal C$ is obtained in the limit~$k_y\rightarrow 0$.
The necessary and sufficient condition for the plasma
to be stable at all wavevectors in the quasi-interchange limit is thus
\begin{equation}
W^2 > 0.
\label{eq:stabcritgen}
\end{equation}
Using the definition of $W^2$ given in
equation~(\ref{eq:dubyadef}), and the definitions
of the frequencies $\omega_2^2$,
$\omega_4^2$, and $\omega_5^2$
given in equations (\ref{eq:omegatwosqd}),
(\ref{eq:omegafoursqd}), and (\ref{eq:omegafivesqd})
respectively, we can rewrite equation~(\ref{eq:stabcritgen}) as
\begin{equation}
nk_B \frac{dT}{dz} +
\frac{d p_{\rm cr}}{d z}
+ \frac{1}{8\pi} \frac{dB^2}{dz} > 0,
\label{eq:stabcritgen2}
\end{equation}
where we have dropped the zero subscripts on the equilibrium
quantities. Equation~(\ref{eq:stabcritgen2}) shows that an ``upwardly
decreasing'' temperature, cosmic-ray pressure, or magnetic pressure is
destabilizing.
We next consider the special case, $k_y=0$, which corresponds to
pure interchanges. In this case,
equation~(\ref{eq:ballim}) leads to the two non-trivial solutions,
\begin{equation}
\sigma=\pm\sqrt{-\frac{b}{b_0}},
\end{equation}
where $b$ is what remains of the coefficient $b_2$ at $k_y=0$. By
inspection we see that the necessary and sufficient condition for
these modes to be stable is~$b>0$.
For a vanishing cosmic-ray pressure, the condition~$b>0$
reduces to
\begin{equation}
\label{eq:tserk}
-\frac{d\rho}{dz}>\frac{\rho^2g}{\gamma p+B^2/4\pi},
\end{equation}
where we have again dropped the zero subscripts on the equilibrium
quantities. Equation~(\ref{eq:tserk}) is the result of
Tserkovnikov~(1960) for the pure interchanges as quoted by
Newcomb~(1961). The criterion $W^2>0$ obtained above for the
case $k_y\ne0$ is more restrictive than the condition~$b>0$,
since~$\gamma > 1$. Thus, $W^2>0$ is the necessary and
sufficient condition for the plasma to be stable to all modes in the
quasi-interchange limit, including those with~$k_y=0$.
\subsection{Eigenmodes near marginal stability}
\label{sec:ems}
In this section we consider the properties of unstable
modes very near to the limit of marginal stability. We assume that
$k_y\neq 0$, but take the limit~$k_y H \ll 1$---that is, the parallel
wavelength is much longer than the scale height. Near marginal
stability, the quantity~$b_6$ in equation~(\ref{eq:ballim}) approaches
zero. There thus exists a solution to the dispersion relation in
which~$\sigma$ also approaches zero, for which the terms proportional
to $\sigma^2$ through~$\sigma^6$ in equation~(\ref{eq:ballim}) can be
neglected. This solution satisfies the approximate equation
\begin{equation}
\sigma \simeq - \frac{b_6}{b_5}.
\label{eq:dispappr}
\end{equation}
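This approximation is easy to test numerically; the sketch below
(assuming numpy, with illustrative coefficients in which $b_6$ is made
small) compares the smallest-magnitude root of the sextic with
$-b_6/b_5$:

```python
import numpy as np

# illustrative coefficients b0..b6 (assumptions, not a physical
# equilibrium); b6 -> 0 near marginal stability
b = np.array([1.0, 2.0, 4.0, 3.0, 2.5, 1.5, 1e-4])

roots = np.roots(b)
sigma_small = roots[np.argmin(np.abs(roots))]  # the root that -> 0 as b6 -> 0
sigma_approx = -b[6] / b[5]

# the small root is well approximated by -b6/b5
assert abs(sigma_small - sigma_approx) < 1e-2 * abs(sigma_approx)
```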
Making use of the fact that~$\sigma \rightarrow 0$ for this
mode, we can return to the results of section~\ref{sec:disprel}
(in particular, the equation ${\mathsf M}\cdot\delta\mathbf v=0$)
and show that to leading order in~$k_y H$
\begin{equation}
\frac{k_x \delta v_x}{k_y \delta v_y} \simeq - \frac{i k_z p}{\rho g},
\end{equation}
and
\begin{equation}
\frac{k_z \delta v_z}{k_y \delta v_y} \simeq \frac{i k_z p}{\rho g},
\end{equation}
so that
\begin{equation}
|k_x \delta v_x + k_z \delta v_z| \ll |k_y \delta v_y|.
\label{eq:compr}
\end{equation}
Thus, for modes with $|k_y H| \ll 1$ near marginal stability, most of
the compression or expansion of the plasma occurs in the direction of
the magnetic field rather than perpendicular to the magnetic field,
despite the fact that~$k_y$ is very small. We discuss the importance
of this result further in the next section.
\section{Heuristic derivation of stability criterion}
\label{sec:physical}
In this section, we present a way of understanding the stability
criterion in equation~(\ref{eq:stabcrit0}) in physical terms. We
consider the same equilibrium discussed in section~\ref{sec:disprel},
in which~$\mathbf g = -g \hat{\mathbf z}$ and ${\mathbf B}_0 = B_0 \hat{\mathbf y}$, and we again
take the plasma to be perfectly conducting, so that magnetic field
lines are frozen-in to the fluid. However, we now assume that the
equilibrium is very close to marginal stability. We then imagine some
mode in the plasma that causes a long and narrow magnetic flux tube to
rise upwards, as depicted in Figure~\ref{fig:f1}. For simplicity, we
assume that the ends of the flux tube are anchored at the flux tube's
initial height. We take the flux tube to be very long, so that
magnetic tension forces are very weak. Because the medium is
arbitrarily close to marginal stability, the growth time or
oscillation time for the mode is arbitrarily long. Thus, even though
the flux tube is long, there is plenty of time for conduction and
diffusion to equalize~$T$ and $p_{\rm cr}$ along the perturbed
magnetic field lines. We assume that the total pressure, $p_{\rm tot}
= p + p_{\rm cr} + B^2/8\pi$, at each point along the flux tube is
equal to the total pressure just outside the flux tube at that
point.\footnote{Total-pressure variations are associated with
high-frequency magnetosonic waves. These waves are stable at $\beta
\gg 1$ [CD06], and we assume they are stable here
as well. However, we note that Socrates, Parrish, \& Stone (2007)
have shown that magnetosonic waves can become unstable in the
presence of an equilibrium heat flux, when $B_{0z} \neq 0$.}
\begin{figure}[h]
\includegraphics[width=2.5in]{f1.eps}
\caption{\footnotesize An upwardly displaced flux tube.
\label{fig:f1}}
\end{figure}
We define $\Delta n$, $\Delta T$, $\Delta B^2$ and $\Delta p_{\rm
cr}$, respectively, as the difference between the density,
temperature, field-strength-squared, and cosmic-ray pressure at the
highest point in our flux tube and the immediately surrounding medium,
at a point in time when the top of the flux tube is a small
distance~$\Delta z$ above the flux tube's initial height. The
constancy of~$T$ and $p_{\rm cr}$ along the flux tube yields the
relations (accurate to first-order in~$\Delta z/H$)
\begin{equation}
\Delta T = - \Delta z \frac{dT}{dz}
\label{eq:dt1}
\end{equation}
and
\begin{equation}
\Delta p_{\rm cr} = - \Delta z \frac{dp_{\rm cr}}{dz},
\label{eq:dpcr1}
\end{equation}
where $dT/dz$ and $dp_{\rm cr}/dz$ are the gradients of the
equilibrium temperature and cosmic-ray pressure evaluated at the
initial height of the flux tube.
Equations~(\ref{eq:dt1}) and (\ref{eq:dpcr1}) tell us how to evaluate
$T$ and $p_{\rm cr}$ in our flux tube. Evaluating $B^2$ in the flux
tube is a little more involved. Assuming that the total pressure
decreases with height, the fluid in the flux tube has to expand in
order to achieve total-pressure balance. However, the manner in which
the flux tube expands is not obvious. If the plasma expands primarily
along the magnetic field, the cross sectional area of the flux tube
will be constant along the flux tube, and thus so will the magnetic
field strength. On the other hand, if the plasma expands perpendicular
to the field, the magnetic field strength will decrease. Which type of
expansion does the plasma favor? We answer this question analytically
in section~\ref{sec:ems}, where we show that, near marginal stability,
$|k_y \delta v_y| \gg |k_x \delta v_x + k_z \delta v_z|$ for the
low-frequency long-parallel-wavelength buoyancy instability in the
large-$|k_x|$ limit. Thus, for this mode, most of the expansion
$\nabla \cdot \mathbf v$ arises from the parallel motion. We note that
this statement is stronger than the statement that $|v_y| \gg |v_x|$,
because we take $|k_x| \gg |k_y|$.
How can we understand this result in physical terms? One way is by
analogy to the $\delta W$ analysis of the stability of ideal MHD
plasmas, in the absence of thermal conduction. (Bernstein, Frieman,
Kruskal, \& Kulsrud 1958; Freidberg~1987) In this analysis, it is
shown that if a mode expands in the direction perpendicular to the
magnetic field, additional work must be done on the surrounding
magnetic field. This requirement makes the mode more stable. To find
the stability criterion, we must seek out the most unstable mode,
which in this case is a mode that keeps the cross-sectional area of
the flux tube constant.
Taking the cross-sectional area of the flux tube to be constant,
we can treat $B^2$ as constant along the flux tube. This allows
us to write that
\begin{equation}
\Delta B^2 = - \Delta z \frac{dB^2}{dz}.
\label{eq:dB21}
\end{equation}
The condition that the total pressure inside the flux tube matches the
total pressure outside the flux tube can be written as
\begin{equation}
k_B T \Delta n + n k_B \Delta T
+ \Delta p_{\rm cr} + \frac{\Delta B^2}{8\pi} = 0.
\label{eq:peq}
\end{equation}
Together, equations~(\ref{eq:dt1}) through (\ref{eq:peq}) imply that
\begin{equation}
k_B T \Delta n = \Delta z\left( n k_B \frac{dT}{dz} + \frac{dp_{\rm cr}}{dz}
+ \frac{1}{8\pi} \frac{dB^2}{dz} \right).
\label{eq:stabcritf}
\end{equation}
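The algebra leading from equations~(\ref{eq:dt1}) through
(\ref{eq:peq}) to equation~(\ref{eq:stabcritf}) is elementary but easy
to get a sign wrong in; a symbolic check (assuming sympy):

```python
import sympy as sp

kB, T, n, dz = sp.symbols('k_B T n Delta_z', positive=True)
dTdz, dpcrdz, dB2dz, dn = sp.symbols('dTdz dpcrdz dB2dz Delta_n')

dT = -dz * dTdz        # eq. (dt1)
dpcr = -dz * dpcrdz    # eq. (dpcr1)
dB2 = -dz * dB2dz      # eq. (dB21)

# total-pressure balance along the flux tube, eq. (peq)
peq = sp.Eq(kB*T*dn + n*kB*dT + dpcr + dB2/(8*sp.pi), 0)
sol = sp.solve(peq, dn)[0]

# eq. (stabcritf): k_B T Delta_n = Delta_z (n k_B dT/dz + dp_cr/dz + (1/8pi) dB^2/dz)
expected = dz*(n*kB*dTdz + dpcrdz + dB2dz/(8*sp.pi))
assert sp.simplify(kB*T*sol - expected) == 0
```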
The stability criterion, equation~(\ref{eq:stabcrit0}), is thus the
condition that the material inside an upwardly displaced,
long, and narrow flux tube be denser than the surrounding
medium.
\section{Convection in galaxy cluster plasmas}
\label{sec:gc}
In many galaxy-cluster cores, the radiative cooling time is much
shorter than the cluster's likely age (Fabian 1994). Nevertheless,
high-spectral-resolution X-ray observations show that very little
plasma actually cools to low temperatures. (B\"{o}hringer et~al~2001;
David et al 2001; Molendi \& Pizzolato~2001; Peterson et~al~2001,
2003; Tamura et~al~2001; Blanton, Sarazin, \& McNamara 2003). This
finding, sometimes referred to as the ``cooling-flow problem,''
strongly suggests that plasma heating approximately balances radiative
cooling in cluster cores.
A heating mechanism for cluster cores
that has been studied extensively is heating by a
central active galactic nucleus (AGN). The importance of such ``AGN
feedback'' is suggested by the observation that almost all clusters
with strongly cooling cores possess active central radio sources
(Burns 1990; Ball, Burns, \& Loken 1993; Eilek~2004) and by the
correlation between the X-ray luminosity from within a cluster's
cooling radius and the mechanical luminosity of a cluster's central
AGN (B\^{\i}rzan et al 2004; Eilek 2004). One of the main unsolved
problems regarding AGN feedback is to understand how AGN power is
transferred to the diffuse ambient plasma. A number of mechanisms
have been investigated, including Compton heating (Binney \& Tabor
1995; Ciotti \& Ostriker 1997, 2001; Ciotti, Ostriker, \&
Pellegrini~2004, Sazonov et al 2005), shocks (Tabor \& Binney 1993,
Binney \& Tabor 1995), magnetohydrodynamic (MHD) wave-mediated plasma
heating by cosmic rays (B\"{o}hringer \& Morfill~1988; Rosner \&
Tucker~1989; Loewenstein, Zweibel, \& Begelman~1991), and cosmic-ray
bubbles produced by the central AGN (Churazov et al~2001, 2002;
Reynolds 2002; Br\"{u}ggen~2003; Reynolds et~al~2005), which can heat
intracluster plasma by generating turbulence (Loewenstein \& Fabian
1990, Churazov et~al~2004, Cattaneo \& Teyssier 2007) and sound waves
(Fabian et~al~2003; Ruszkowski, Br\"{u}ggen, \& Begelman 2004a,b) and
by doing $pdV$ work (Begelman 2001, 2002; Ruszkowski \& Begelman~2002;
Hoeft \& Br\"{u}ggen~2004).
Another way in which central AGNs may heat the
intracluster medium is by accelerating cosmic rays that mix
with the intracluster plasma and cause the
intracluster medium to become convectively unstable. A steady-state,
spherically symmetric, mixing-length model based on this idea was
developed by Chandran (2004) and subsequently refined by Chandran
(2005) and Chandran \& Rasera (2007). In this model, a central
supermassive black hole accretes hot intracluster plasma at the Bondi
rate, and converts a small fraction of the accreted rest-mass energy
into cosmic rays that are accelerated by shocks within some
distance~$r_{\rm source}$ of the center of the cluster. The resulting
cosmic-ray pressure gradient leads to convection, which in turn heats
the thermal plasma in the cluster core by advecting internal energy
inwards and allowing the cosmic rays to do~$pdV$ work on the thermal
plasma. The model also includes thermal conduction, cosmic-ray
diffusion, and radiative cooling. By adjusting a single parameter in
the model ($r_{\rm source}$), Chandran
\& Rasera (2007) were able to achieve a good match to the observed
density and temperature profiles in a sample of eight clusters.
The treatment of convective stability in the work of
Chandran (2004, 2005) and Chandran \& Rasera~(2007) was based on the
assumption that~$\beta = 8\pi p/B^2 \gg 1$. The
present paper investigates convective stability for arbitrary~$\beta$. One
of the motivations for this work is the possibility
that some clusters with short central cooling times (``cooling-core
clusters'') may
be in the $\beta \sim 1$ regime. For a fully ionized plasma with a
hydrogen mass fraction~$\,X=0.7$ and helium mass fraction $\,Y=0.29$,
\begin{equation}
\beta = 6.3 \times \left(\frac{n_e}{10^{-2}\mbox{ cm}^{-3}}\right) \times
\left(\frac{k_B T}{\mbox{3 keV}}\right) \times
\left(\frac{B}{10 \mu \mbox{G}}\right)^{-2}.
\label{eq:beta}
\end{equation}
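For quick estimates it is convenient to wrap equation~(\ref{eq:beta})
in a small function; the sketch below simply encodes the quoted
scaling, with $n_e$ in ${\rm cm^{-3}}$, $k_BT$ in keV, and $B$ in
$\mu$G:

```python
def plasma_beta(n_e, kT_keV, B_muG):
    """Scaling of eq. (beta): n_e in cm^-3, k_B T in keV, B in microgauss."""
    return 6.3 * (n_e / 1e-2) * (kT_keV / 3.0) * (B_muG / 10.0)**-2

# fiducial values reproduce the normalization of eq. (beta)
assert abs(plasma_beta(1e-2, 3.0, 10.0) - 6.3) < 1e-12

# Hydra A core values quoted in the text (n_e ~ 0.01 cm^-3,
# k_B T ~ 3.4 keV, B ~ 30 muG) give beta of order unity
beta_hydra = plasma_beta(0.01, 3.4, 30.0)
assert 0.5 < beta_hydra < 1.0
```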
Although many studies of the magnetic field strength in clusters of
galaxies find $B$ in the range of $1-5 \mu$G (see, e.g., Kronberg
1994, Eilek \& Owen 2002), some studies of Faraday rotation in
cooling-core clusters find much stronger magnetic fields (Taylor \&
Perley 1993; Kronberg 1994; Taylor, Fabian, \& Allen 2002). In the
case of Hydra~A, Taylor \& Perley~(1993) found a tangled magnetic
field of~$\sim 30\mu$G, and Taylor, Fabian, \& Allen~(2002) found a
tangled magnetic field of $\sim 35\mu$G. The analysis of X-ray
observations of Hydra~A carried out by Kaastra et al (2004), when
converted to a $\Lambda$CDM cosmology (see Chandran \& Rasera 2007),
indicate that $n_e \simeq 0.01 \mbox{ cm}^{-3}$ and $k_B T \simeq
3.4$~keV in Hydra~A at $r=50$~kpc. Equation~(\ref{eq:beta}) thus
shows that if $B$ is indeed as large as~$30 \mu$G in the core of
Hydra~A, then $\beta$ is of order unity. Values of $\beta \sim 0.1-1$
for cluster cores in several other galaxy clusters were reported by
Eilek \& Owen~(2002). Although these studies suggest that $\beta \sim
1$ magnetic fields could be common in cooling-core clusters, some
caution is warranted here. Vogt \& Ensslin~(2005) have reanalyzed the
Faraday-rotation data for Hydra~A using an updated plasma
density profile, and found an rms magnetic field of~$7 \mu$G, which
corresponds to~$\beta =15$ at~$r=50$~kpc in Hydra~A.
In the remainder of this section, we explore the implications of
the condition~$\beta \lesssim 1$ on convective instability in clusters,
but the above uncertainty in the value of~$\beta$ in cooling-core
clusters should be borne in mind.
In section~\ref{sec:qil} we showed that the
necessary and sufficient condition for stability for a mode with fixed
nonzero~$k_y$ in the quasi-interchange limit ($|k_x|$ much larger than
$|k_y|$, $|k_z|$, and $H^{-1}$) is
\begin{equation}
k_y^2 v_A^2 + g \left(\frac{d\ln T}{dz}
+ \frac{p_{\rm cr}}{p}\frac{d\ln p_{\rm cr}}{dz}
+ \frac{B^2}{8\pi p}\frac{d\ln B^2}{dz}
\right) > 0.
\label{eq:stabcrit2}
\end{equation}
This equation shows that the magnetic field has two competing effects
on convective stability. First, if the field strength decreases
``upwards'' (i.e., $dB^2/dz < 0$), the $g\beta^{-1}\, d\ln B^2/dz$
``magnetic-buoyancy term'' in equation~(\ref{eq:stabcrit2}) is
destabilizing. On the other hand, the $k_y^2 v_A^2$ ``magnetic-tension
term'' is stabilizing. We can estimate the relative importance of the
different terms in equation~(\ref{eq:stabcrit2}) by defining the
length scales~$H_f$, $H_B$, and $H_p$ via the equations
\begin{equation}
H_f^{-1} = \left|\frac{d\ln T}{dz} +
\frac{p_{\rm cr}}{p}\frac{d\ln p_{\rm cr}}{dz}\right|,
\end{equation}
\begin{equation}
H_B^{-1} = \left|\frac{d\ln B}{dz}\right|,
\end{equation}
and
\begin{equation}
H_p^{-1} = \frac{\rho g}{p}.
\end{equation}
The ratio of the magnetic-tension term to the magnetic-buoyancy
term is then
\begin{equation}
\frac{k_y^2 v_A^2}{2g\beta^{-1} H_B^{-1}} = k_y^2 H_B H_p,
\label{eq:ratio1}
\end{equation}
while the ratio of the magnetic-tension term to the
``fluid terms,'' $g[d\ln T/dz + (p_{\rm cr}/p)d\ln p_{\rm cr}/dz]$ is
\begin{equation}
\frac{k_y^2v_A^2}{g H_f^{-1}} = 2\beta^{-1} k_y^2 H_p H_f.
\label{eq:ratio2}
\end{equation}
The magnetic field turns off the buoyancy instability at wavevectors
for which the magnetic tension term dominates over both the magnetic
buoyancy term and the fluid terms. At $\beta\sim 1$, this happens for
$k_y^2 H_p H_B \gg 1$ and $k_y^2 H_p H_f\gg 1$. If we take all the
scale lengths to be comparable to the density scale height~$H$, then
at $\beta \sim 1$ magnetic tension turns off the instability for~$k_yH
\gg 1$, but is negligible for $k_yH \ll 1$. At $\beta\sim 1$ and $k_y
H \sim 1$, the tension, buoyancy, and fluid terms are all comparable,
and magnetic buoyancy and magnetic tension to some extent cancel out.
When $\beta \ll 1$, magnetic tension dominates for~$k_y H \gg 1$,
magnetic buoyancy dominates for $k_y H \ll 1$, and the two are
comparable at $k_y H \sim 1$, again assuming that $H_p \sim H_B \sim
H_f \sim H$.
To apply our results to galaxy-cluster plasmas, we imagine some
hypothetical spherical equilibrium, and consider local modes at a
radius~$r$ at some location where the radial component of the magnetic
field vanishes, and where all the scale lengths are of order~$r$. Our
local analysis of a slab-symmetric equilibrium is strictly applicable
only to modes with $k_y r \gg 1$, that is, to modes with parallel
wavelengths much less than the scale height. Our results show that
such modes are stable when~$\beta \lesssim 1$ because of the stabilizing
effects of magnetic tension.
\section{The Parker instability in the interstellar medium}
\label{sec:parker}
The Parker instability is an unstable mode in a gravitationally
stratified plasma that is driven by the buoyancy of the magnetic field
and/or cosmic rays. (Parker 1966, 1967) The Parker instability is
thought to be important for the interstellar medium for several
reasons. It has been argued that this mode, acting alone or in
concert with the thermal instability (Field 1965), contributes to the
formation of molecular clouds (Blitz \& Shu 1980; Parker \& Jokipii
2000; Kosinski \& Hanasz 2005, 2006, 2007). It has also been suggested that
the Parker instability is a mechanism for regulating the transport of
magnetic fields and cosmic rays in the direction perpendicular to the
galactic plane, and for driving the Galactic dynamo. (See, e.g.,
Parker 1992, Hanasz \& Lesch 2000, Hanasz et~al~2004).
The Parker instability is very similar to the instability that we have
investigated in this paper. Standard analyses of the Parker
instability consider an equilibrium in which $\mathbf g = - g\hat{\mathbf z}$,
${\mathbf B}$ is in the $xy$-plane, $\rho \propto \exp(-z/H)$, and $H$, $T$,
$\beta$, and $p_{\rm cr}/p$ are constant. (Parker 1966, 1967; Shu
1974, Ryu et~al~2003). In early studies, the parallel cosmic-ray
diffusion coefficient~$D_\parallel$ was taken to be infinite, since
$p_{\rm cr}$ was assumed to be constant along magnetic field
lines. (Parker 1966, 1967; Shu 1974) On the other hand, Ryu
et~al~(2003) considered the effects of finite~$D_\parallel$, as well
as cosmic-ray diffusion perpendicular to magnetic field lines. All of
these studies took the thermal plasma to be adiabatic.
The analysis of the present paper extends our understanding of the
Parker instability in two ways. First, we allow the equilibrium
values of $T$, $\beta$,
and $p_{\rm cr}/p$ to vary with~$z$. Second, we consider the effects of
anisotropic thermal conduction. By doing so, we show that the
condition~$dT/dz<0$ makes a plasma more unstable to the Parker
instability than when the equilibrium is isothermal.
We also show that even if the equilibrium is isothermal,
anisotropic thermal conduction makes a
stratified plasma more unstable to the Parker instability than when
the plasma is treated as adiabatic. The Parker stability criterion
in the limit~$|k_x| \rightarrow \infty$
for the equilibrium described above can be obtained from
the $k_y \rightarrow 0$ limit of equation~(73)
of Shu~(1974):
\begin{equation}
\frac{B^2}{8\pi} + p_{\rm cr} < (\gamma-1) p .
\label{eq:parker1}
\end{equation}
Multiplying this equation by~$-1/H$ and making use of the assumptions
that $dB^2/dz = - B^2/H$,
$dp_{\rm cr}/dz = - p_{\rm cr}/H$, and $dp/dz = - p/H$,
we can rewrite equation~(\ref{eq:parker1}) as
\begin{equation}
\frac{d}{dz} \left(\frac{B^2}{8\pi} + p_{\rm cr}\right) > -\frac{(\gamma-1)p}{H}.
\label{eq:parker2}
\end{equation}
On the other hand, when anisotropic thermal conduction is taken into
account, the stability criterion for this constant-temperature
equilibrium from equation~(\ref{eq:stabcritgen2}) is
\begin{equation}
\frac{d}{dz} \left(\frac{B^2}{8\pi} + p_{\rm cr}\right) > 0.
\label{eq:parker3}
\end{equation}
Since $\gamma >1$, equation~(\ref{eq:parker3}) is more restrictive
than equation~(\ref{eq:parker2}), and anisotropic thermal conduction
allows for instability under a larger range of equilibria than when
the plasma is taken to be adiabatic. The reason for this is that as a
fluid parcel rises and expands, anisotropic thermal conduction allows
heat to flow up along the magnetic field lines into the rising fluid
parcel. This heat flow increases the temperature of the rising fluid
parcel relative to the adiabatic case and thereby lowers the density,
making the fluid parcel more buoyant, as in the high-$\beta$
zero-$p_{\rm cr}$ limit considered by Balbus~(2000, 2001).
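As an illustrative numerical check (the equilibrium values below are hypothetical, chosen only for demonstration), the following sketch evaluates the adiabatic criterion~(\ref{eq:parker2}) and the conduction-modified criterion~(\ref{eq:parker3}) for an equilibrium that satisfies the former but violates the latter:

```python
# Hypothetical equilibrium values in arbitrary units (not from the paper):
B2_8pi = 0.3        # magnetic pressure B^2 / 8 pi
p_cr   = 0.2        # cosmic-ray pressure
p      = 1.0        # thermal pressure
H      = 1.0        # scale height
gamma  = 5.0 / 3.0

# Assumed profiles: dB^2/dz = -B^2/H and dp_cr/dz = -p_cr/H, so
grad = -(B2_8pi + p_cr) / H   # d/dz (B^2/8pi + p_cr)

# Adiabatic Parker stability, eq. (parker2): grad > -(gamma - 1) p / H
stable_adiabatic = grad > -(gamma - 1.0) * p / H

# With anisotropic conduction, eq. (parker3): grad > 0
stable_conduction = grad > 0.0

# Since gamma > 1, the conduction threshold (0) always exceeds the
# adiabatic one (-(gamma - 1) p / H < 0): criterion (parker3) is stricter.
assert -(gamma - 1.0) * p / H < 0.0
print(stable_adiabatic, stable_conduction)   # True False for these values
```

For these values the plasma is Parker-stable when treated adiabatically but unstable once conduction along field lines is included, which is the sense in which anisotropic thermal conduction enlarges the unstable region of parameter space.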
\section{Conclusion}
\label{sec:conc}
In this paper we derive the stability criterion for local buoyancy
instabilities in a stratified plasma, with the equilibrium magnetic
field in the~$\hat{\mathbf y}$ direction and gravity in the~$-\hat{\mathbf z}$ direction.
We take into account cosmic-ray diffusion and thermal conduction along
magnetic field lines and focus on the large-$|k_x|$ limit, which is
the most unstable limit for adiabatic plasmas. Our work extends the
earlier work of Balbus (2000, 2001) and [CD06]\ by allowing for
arbitrarily strong magnetic fields. Applying our work to
galaxy-cluster plasmas, we find that increasing the magnetic field to
the point that~$\beta = 8\pi p/B^2 \lesssim 1$ would shut off buoyancy
instabilities at wavelengths along the magnetic field that are much
shorter than the equilibrium scale height. Our analysis also extends
our understanding of the Parker instability by allowing the
equilibrium values of $T$, $\beta$, and $p_{\rm cr}/p$ to vary
with~$z$, and by accounting for anisotropic thermal conduction. We
find that the interstellar medium is more unstable to the Parker
instability than was predicted by earlier studies, which treated the
thermal plasma as adiabatic.
\acknowledgements
We thank Eliot Quataert for helpful discussions.
This work was partially supported by NASA's Astrophysical
Theory Program under grant NNG 05GH39G and by NSF under
grant AST 05-49577.
\section{Introduction}
The binary pseudo-alloy of titanium-tungsten (Ti$_x$W$_{1-x}$, $x\leq0.3$) is a well-established, effective diffusion barrier and adhesion enhancer within silicon-based semiconductor devices.~\cite{NICOLET1978415, Wang_SQ_1993, ROSHANGHIAS2014386} It is designed to prevent the interdiffusion between adjacent metallisations and the underlying dielectric and semiconductor materials. TiW is compatible with various metallisations (Al, Au, Ag, In and Cu) and has remarkable thermal stability at elevated temperatures ($\leq$850$\degree$C).~\cite{Cunningham_1970, Harris_1976, GHATE1978117, NOWICKI1978195, Olowolafe1985, OPAROWSKI1987313, DIRKS1990201, Misawa_1992, OLOWOLAFE199337, Chiou_1995, BHAGAT20061998, Chang_2000, FUGGER20142487, LePriol2014, Souli_2017, Kalha_TiW_Cu_2022} Consequently, TiW diffusion barriers are now being widely implemented in next-generation SiC-based power semiconductor technologies with copper metallisation schemes,~\cite{Baeri_2004, Behrens_SiC_2013, Liu_2014} and more recently within electrodes for GaAs photoconductive semiconductor switches (PCSSs),~\cite{GaAs} and gate metal stacks in GaN-based high electron mobility transistor (HEMT) devices.~\cite{GaN}\par
Diffusion barriers are needed as Cu and Si readily react at relatively low temperatures to form intermetallic copper-silicide compounds at the interface, which seriously hamper the performance and reliability of devices.~\cite{Corn_1988, Harper_1990, Shacham_Diamand_1993, Liu_1993, Sachdeva_2001, Souli_2017} Studies have shown that TiW films are capable of retarding and limiting this interdiffusion and subsequent reaction.~\cite{Wang_SQ_1993, Souli_2017} However, when subjected to a high thermal budget, a depletion of Ti within the TiW grains has been observed, leading to the accumulation of Ti at grain boundaries.~\cite{CHOOKAJORN2014128} The segregated Ti is then able to diffuse out of the barrier and through the metallisation via grain boundary diffusion.~\cite{Olowolafe1985} This depletion of Ti is thought to lead to a greater defect density within the TiW layer, potentially allowing Cu and Si to bypass the barrier and react. Fugger~\textit{et al.} cite this out-diffusion process as an ``essential factor'' in the failure of this barrier,~\cite{FUGGER20142487} and others have also documented the segregation of Ti during high-temperature annealing.~\cite{OLOWOLAFE199337, Baeri_2004, Plappert_2012, CHOOKAJORN2014128, Kalha_TiW_Cu_2022}\par
Given the importance of the TiW barrier to the overall device performance, reliability and its application in future SiC technologies and beyond, this Ti diffusion degradation process must be better understood, including how it impacts the stability of the TiW/Cu structure. The common thread across the vast majority of past experimental studies on TiW and diffusion barriers in general, including the present authors' previous work,~\cite{Kalha_TiWO, Kalha_TiW_Cu_2022} is that ex-situ samples are used to track the evolution of the diffusion process and to determine the temperature at which the barrier fails. Such studies also often focus on one Ti concentration and are therefore unable to address the effect of the titanium concentration of the film on the degradation mechanism.\par
\begin{figure*}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.85\linewidth]{Figures/Schematic.png}
\caption{Schematic representation of the samples and experimental approach (not drawn to scale). (a) Device stack on a sample holder being annealed in-situ to 673~K and the expected Ti diffusion represented by grey vertical arrows. (b) A magnified view of the copper surface showing the Ti accumulation and the two photon energies used for SXPS and HAXPES measurements to excite the Ti~2\textit{p} and Ti~1\textit{s} electrons from the same depth. (c) SXPS laboratory-based Ar\textsuperscript{+} sputtering depth profile used to quantify the elemental distribution across the TiW/Cu bilayer after in-situ annealing (i.e. post-mortem).}
\label{fig:Schematic}
\end{figure*}
Although ex-situ prepared samples give a good representation of the device \emph{after} stress events, it is difficult to correlate the results directly with what a device is experiencing \emph{during} the applied stress.~\cite{Olowolafe1985, Baeri_2004} Therefore, it is crucial to develop new characterisation strategies that can probe the degradation mechanism dynamically under realistic conditions while allowing for changes to the chemical states across the device stack to be monitored.\par
To the best of our knowledge, only Le Priol~\textit{et al.} and Siol~\textit{et al.} provide in-situ monitoring measurements on TiW, both employing in-situ X-ray diffraction (XRD). Le Priol~\textit{et al.} studied the efficiency of a TiW barrier deposited from a 70:30~at.\% W:Ti alloy target against indium diffusion at temperatures between 573 and 673~K under vacuum.~\cite{LePriol2014} The authors were able to correlate the TiW barrier efficiency with its microstructure and determine the diffusion coefficient of In in TiW. Siol~\textit{et al.} were interested in understanding the oxidation of TiW alloy precursors, and observed oxygen dissolution and the formation and decomposition of mixed (W,Ti)-oxide phases when ramping the temperature from 303 to 1073~K in air.~\cite{SIOL202095}\par
The scarcity of in-situ/operando experiments in the field, which stands in contrast to the importance of these material interfaces in both novel and commercial device applications, can be explained by the challenges associated with performing such experiments. These include the extensive periods of time required to collect sufficient data, the limited availability of instruments with in-situ capability, and difficulties in sample preparation and interfacing.\par
The present work combines soft and hard X-ray photoelectron spectroscopies (SXPS and HAXPES) with in-situ annealing to study the effect of annealing temperature, annealing duration, and Ti:W ratio on the thermal stability of TiW/Cu bilayers in real-time, considerably expanding on the existing ex-situ work, including the present authors' previous studies.~\cite{Kalha_TiWO, Kalha_TiW_Cu_2022} Si/SiO\textsubscript{2}/Ti$_x$W$_{1-x}$(300~nm)/Cu(25~nm) device stacks (see Fig.~\ref{fig:Schematic}(a) for a schematic of the stack) are annealed up to a maximum temperature of 673~K (400$\degree$C) and held there for 5~h. At the same time, soft and hard X-ray photoelectron spectra are continuously recorded to capture the Ti diffusion process and changes to the chemical state across the copper surface (see Fig.~\ref{fig:Schematic}(b) for a schematic). The target temperature of 673~K is selected as it is in a common temperature regime employed during device fabrication to obtain desired grain growth and texture of the copper metallisation.~\cite{Harper_2003, Plappert_2012} Additionally, it is a temperature that can occur at short circuit events during the operation of potential devices.~\cite{NELHIEBEL20111927}\par
A major benefit of combining the two variants of X-ray photoelectron spectroscopy (XPS) is that SXPS is more surface-sensitive, whereas HAXPES enables access to the Ti~1\textit{s} core line. The Ti~1\textit{s} offers an alternative to the commonly measured Ti~2\textit{p} with soft X-ray sources. The Ti~1\textit{s} compared to the Ti~2\textit{p} has the added benefits of covering a smaller binding energy (BE) range and consequently necessitating a shorter collection time, the absence of spin-orbit splitting (SOS), no additional broadening to consider from the Coster-Kronig effect that influences the Ti~2\textit{p}\textsubscript{1/2} peak, and the absence of underlying satellites. For these reasons, the exploitation of the 1\textit{s} core level over the 2\textit{p} is becoming increasingly popular for transition metals, especially for the disentanglement of charge transfer satellite structures in the X-ray photoelectron spectra of metal oxides.~\cite{Woicik_2015, Miedema2015, Ghiasi2019, Woicik_2020, HAXPES_Big_Boy}\par
HAXPES is typically employed as it offers a larger probing depth than conventional SXPS.~\cite{HAXPES_Big_Boy} However, here, it is strategically used to obtain comparable probing depths of the Ti~2\textit{p} and Ti~1\textit{s} core lines, collected with SXPS and HAXPES, respectively. Using this combination, the more widely studied Ti~2\textit{p} spectra can be used to understand the Ti~1\textit{s} spectra better. In addition to the synchrotron-based XPS experiments, quantitative laboratory-based SXPS depth profiles were also conducted on the samples following the in-situ experiment (i.e. post-mortem) to ascertain the quantitative distribution of Ti across the Cu metallisation (see Fig.~\ref{fig:Schematic}(c) for a schematic of the depth profiling).\par
\section{Methodology}
\subsection{Samples}\label{Samples}
Three as-deposited Si/SiO\textsubscript{2}/Ti$_x$W$_{1-x}$/Cu thin film stacks with varying Ti:W composition were prepared through an established industrial route. The stack consists of a 50~nm SiO\textsubscript{2} layer on an un-patterned Si (100) substrate, above which a 300~nm thick TiW layer was deposited via magnetron sputtering. The TiW films were deposited from composite targets with a nominal atomic concentration of 30:70 Ti:W, determined by X-ray fluorescence spectroscopy (XRF). By varying the deposition parameters, three samples with average Ti concentrations $x$ across the entire film thickness of 5.4$\pm$0.3, 11.5$\pm$0.3, and 14.8$\pm$0.6~at.\% relative to W were realised (i.e.\ $x =$~(Ti/(Ti+W))$\times$100). These concentrations were determined using laboratory-based SXPS and depth profiling across the entire film thickness (further details regarding the quantification of the TiW films can be found in Supplementary Information I). These samples will be referred to as 5Ti, 10Ti and 15Ti, respectively, for the remainder of the manuscript. Finally, a 25~nm Cu capping layer was deposited via magnetron sputtering on top of the TiW barrier. Deposition of both TiW and Cu was conducted in an argon discharge with no active substrate heating or vacuum break between successive depositions. The deposition chamber operated under a base pressure of 10\textsuperscript{-8}-10\textsuperscript{-7}~mbar. Further details regarding the deposition process have been reported in Refs.~\cite{Plappert_2012, SAGHAEIAN2019137576}.
\subsection{Dynamic synchrotron-based SXPS/HAXPES}
\subsubsection{Beamline optics and end station details}
SXPS and HAXPES measurements were conducted at beamline I09 of the Diamond Light Source, UK,~\cite{Beam2018} at photon energies of 1.415~keV and 5.927~keV, respectively (these will be abbreviated as 1.4~keV and 5.9~keV throughout the remaining manuscript). 1.4~keV was selected using a 400~lines/mm plane grating monochromator, achieving a final energy resolution of 330~meV at room temperature. 5.9~keV was selected using a double-crystal Si~(111) monochromator (DCM) in combination with a post-monochromator Si~(004) channel-cut crystal, achieving a final energy resolution of 290~meV at room temperature. The total energy resolution was determined by extracting the 16/84\% width of the Fermi edge of a clean polycrystalline gold foil (see Supplementary Information II for further information on determining the resolution).~\cite{ISO} The end station of beamline I09 is equipped with an EW4000 Scienta Omicron hemispherical analyser, with a $\pm$28$\degree$ acceptance angle. The base pressure of the analysis chamber was 3.5$\times$10\textsuperscript{-10}~mbar. To maximise the efficiency in the collection of spectra, the measurements were conducted in grazing incidence and at near-normal emission geometry.\par
\subsubsection{Annealing} \label{methods_annealing}
Samples were individually annealed in-situ to a sample target temperature of 673~K (400$\degree$C) using a tungsten filament heater, and held at the temperature for approximately 5~h. The sample plate used for the experiment consisted of a copper disk (3~mm thick, 8~mm diameter) fixed to the centre of a flat tantalum plate, on which the sample was placed and secured using clips. Good thermal contact was made between the copper disk and the sample using a thin silver foil. This allowed the sample temperature to be inferred by attaching an N-type thermocouple to the centre of the copper disc. The thermocouple was also connected to a Lakeshore temperature controller, which was programmed to ramp the sample temperature at a constant rate under a closed-loop control (see Supplementary Information III for an image of the sample plate holder).\par
Prior to in-situ annealing, all samples were gently sputter cleaned in-situ for 10~minutes using a 0.5~keV de-focused argon ion (Ar\textsuperscript{+}) source, operating with a 6~mA emission current and 5$\times$10\textsuperscript{-5}~mbar pressure. This was necessary to remove the native copper oxide that had formed on the sample surface during sample transport.\par
The process of in-situ annealing encourages the purging of adsorbed gases and organic species within the sample and on the sample surface (i.e. degassing). Therefore, annealing in a UHV environment will increase the chamber pressure, which is undesired, especially during the collection of photoelectron spectra. To account for sample degassing, the annealing process was conducted step-wise to ensure a good analysis chamber pressure was maintained throughout the measurements. Fig.~\ref{fig:Temp_Profile} displays a representative temperature profile acquired for sample 5Ti and the related pressure profile within the analysis chamber (see Supplementary Information IV for the temperature profiles collected for all three samples). The temperature profile consists of three stages. Additionally, as seen in the pressure profile in Fig.~\ref{fig:Temp_Profile}, with every increasing step in temperature, a temporary increase in pressure resulted due to the degassing of the sample.\par
Prior to annealing in the analysis chamber, the samples were first heated in a subsidiary sample preparation chamber to remove the majority of adsorbed molecules. This stage of annealing involves a fast ramp from room temperature to 523~K and will be referred to as Stage \textbf{1} of the annealing process. The Ti diffusion process was assumed to be insignificant in this temperature range. Next, the sample was moved to the main analysis chamber, where the temperature was ramped step-wise from 523 to the target temperature of 673~K while maintaining on average a pressure of 7$\times$10\textsuperscript{-10}~mbar (referred to as Stage \textbf{2}). The temperature was then held at the 673~K target temperature for 5~h (referred to as Stage \textbf{3}). The spectra were continuously collected using SXPS and HAXPES from the start of Stage \textbf{2} until the end of Stage \textbf{3} of the annealing process. The period where the spectra were collected will be referred to as the ``measurement window''. Across the measurement window, the same group of spectra were collected iteratively, which will be referred to as the ``spectral cycle''. Each spectral cycle took approximately 15~minutes to collect, and details on which spectra were selected will be discussed in the following section. During Stage~\textbf{2}, the temperature was increased once a spectral cycle was completed, which coincidentally allows sufficient time for the analysis chamber pressure to recover below 8$\times$10\textsuperscript{-10}~mbar. \par
For completeness, we note that during the initial stages of annealing, sample 10Ti degassed more than samples 5Ti and 15Ti, and therefore the temperature ramp of Stage \textbf{2} for sample 10Ti was paused to allow the pressure to recuperate. This meant that sample 10Ti was held at 543~K for four spectral cycles rather than one. Therefore, the total time of annealing of sample 10Ti was extended by approximately 1~h compared to the annealing time of samples 5Ti and 15Ti. This is not expected to affect the diffusion process significantly or the resultant accumulation profiles, as the Ti diffusion at this temperature is minimal.\par
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures/TandP_profile.png}
\caption{Representative temperature profile acquired from the Lakeshore temperature controller during the measurements on sample 5Ti. The temperature profile consists of three stages. Stage \textbf{1}: a quick ramp to 523~K in a subsidiary chamber. Stage \textbf{2}: a 10~K/[spectral cycle] ramp in the main analysis chamber, which was then decreased to a 5~K/[spectral cycle] ramp once 653~K was reached. The temperature was ramped step-wise in Stage \textbf{2} to allow the pressure in the analysis chamber to recover to $<$7$\times$10\textsuperscript{-10}~mbar after each temperature step (see inset for the pressure profile). Stage \textbf{3}: holding period at 673~K for 5~h. The dotted line at $t$ = 0~h indicates the start of the measurement window.}
\label{fig:Temp_Profile}
\end{figure}
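The step-wise set-point schedule of Stages \textbf{2} and \textbf{3} described above can be sketched in a few lines (one set point per $\approx$15~minute spectral cycle; the function name and structure are illustrative, not the actual controller program):

```python
def setpoints(start=523, target=673, coarse=10, fine=5, switch=653):
    """Yield the Stage 2 temperature set points (in K): a 10 K/cycle
    ramp up to 653 K, then 5 K/cycle steps to the 673 K target."""
    t = start
    while t < switch:
        yield t
        t += coarse
    while t < target:
        yield t
        t += fine
    yield target  # Stage 3: the target is then held for ~5 h

schedule = list(setpoints())
# 523, 533, ..., 653 in 10 K steps, then 658, 663, 668, 673
assert schedule == list(range(523, 654, 10)) + [658, 663, 668, 673]
print(schedule)
```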
\subsubsection{Core level selection}\label{Decision}
The spectral cycle, which was run in an iterative loop during the experiment, included the following core level spectra: Cu~2\textit{p}\textsubscript{3/2}, Ti~2\textit{p} and W~4\textit{d} collected with SXPS, and Ti~1\textit{s} collected with HAXPES. The W~4\textit{d} core level was selected over the commonly measured W~4\textit{f} line as the former does not overlap with the core levels of Cu or Ti in this region, whereas the latter overlaps with the Ti~3\textit{p} core level. The Cu Fermi edge was also included in the spectral cycle and was collected with both SXPS and HAXPES throughout the measurement window to (a) provide an intrinsic method of calibrating the BE scale and (b) monitor any change to the total energy resolution as a consequence of raising the sample temperature. Based on 16/84\% fits of the collected Fermi edges across all measurements, the effect of thermal broadening is negligible under the experimental conditions used, and further information can be found in Supplementary Information V. All spectra were aligned to the intrinsic Cu Fermi energy (E\textsubscript{F}) and the spectral areas were obtained using the Thermo Avantage v5.9925 software package. The BE values quoted in this work are considered to have an estimated error of $\pm$0.1~eV.\par
The SXPS photon energy was set to 1.4~keV so that the kinetic energy (KE) of excited Ti~2\textit{p} electrons at this photon energy matches the KE of Ti~1\textit{s} electrons excited with the HAXPES photon energy (KE\textsubscript{Ti~1\textit{s}} $\approx$ KE\textsubscript{Ti~2\textit{p\textsubscript{3/2}}} $\approx$ 961~eV). Using the QUASES software package,~\cite{Shinotsuka_2015} the inelastic mean free path (IMFP) of Ti~2\textit{p} and Ti~1\textit{s} electrons in Cu metal at the SXPS and HAXPES photon energies were calculated. The IMFP for the Ti~1\textit{s} and Ti~2\textit{p}\textsubscript{3/2} is approximately 1.50~nm, and so the estimated probing depth (3$\lambda$) is 4.50~nm. Therefore, a direct comparison between the two Ti core levels will be possible as they originate from very similar probing depths.
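The kinetic-energy matching underlying this choice can be verified with nominal binding energies (the metallic-Ti BE values below are indicative literature/in-text values, not fitted results):

```python
hv_sxps, hv_haxpes = 1415.0, 5927.0   # photon energies (eV)
be_ti_2p32 = 454.0                    # nominal metallic Ti 2p3/2 BE (eV)
be_ti_1s   = 4965.0                   # metallic Ti 1s BE (eV), cf. Sec. on Ti

ke_2p = hv_sxps - be_ti_2p32          # ~961 eV
ke_1s = hv_haxpes - be_ti_1s          # ~962 eV
assert abs(ke_2p - ke_1s) < 5.0       # matched to within a few eV

imfp = 1.50                           # nm, for ~961 eV electrons in Cu
print(ke_2p, ke_1s, 3 * imfp)         # probing depth 3*lambda = 4.5 nm
```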
\subsection{Laboratory-based SXPS}\label{SXPS_methods}
SXPS depth profile measurements were conducted on the samples that were annealed at I09 using a laboratory-based Thermo K-Alpha+ instrument (i.e. the in-situ annealed samples were removed and kept for a post-mortem analysis). The instrument operates with a monochromated Al~K$\alpha$ photon source ($h\nu$ = 1.4867~keV) and consists of a 180$\degree$ double-focusing hemispherical analyser, a two-dimensional detector that integrates intensity across the entire angular distribution range, and operates at a base pressure of 2$\times$10\textsuperscript{-9}~mbar. A 400~$\mu$m spot size was used for all measurements, achieved using an X-ray anode emission current of 6~mA and a cathode voltage of 12~kV. A flood gun with an emission current of 100~$\mu$A was used to achieve the desired level of charge compensation. The total energy resolution of the spectrometer was determined to be 400~meV. Survey and core level (W~4\textit{f}, Ti~2\textit{p}, O~1\textit{s} and Cu~2\textit{p}\textsubscript{3/2}) spectra were collected with pass energies of 200 and 20~eV, respectively. Depth profiles were conducted using a focused Ar\textsuperscript{+} ion source, operating at 500~eV energy and 10~mA current, rastering over a 2$\times$2~mm\textsuperscript{2} area with a 30$\degree$ sputtering angle. A total of 17 sputter or etch cycles, each lasting 180~s, was carried out with survey and core level spectra collected after each etch cycle. The data were analysed using the Thermo Avantage v5.9925 software package. The error associated with the quantification values is estimated to be $\pm$0.3~at.\% owing to the complexity of the W~4\textit{f} core level and the low quantities of Cu and Ti/W in the TiW and Cu layers, respectively. \par
\section{Results and Discussion}
Reference room temperature survey and core level spectra (Ti~1\textit{s}, Cu~2\textit{p}, Ti~2\textit{p} and W~4\textit{d}) were collected for the three samples after the in-situ sputter cleaning process, and prior to annealing, with the results displayed in Supplementary Information VI. From the survey spectra, the sample surfaces appear clean and are dominated by signals from Cu. Virtually no carbon is detected, and only a trace quantity of oxygen is present when measured with SXPS. The Cu~2\textit{p}\textsubscript{3/2} core level spectra are near identical for the three samples, and the position and line shape are commensurate with metallic copper.~\cite{SCHON197396, SCROCCO197952, Miller_1993} A low-intensity satellite is observed between 943-948~eV in the Cu~2\textit{p}\textsubscript{3/2} core level spectra, but comparing the spectra to reference measurements of a polycrystalline Cu foil and an anhydrous Cu\textsubscript{2}O powder, the satellite intensity is in agreement with the Cu foil. This confirms that the Cu surface of these samples can be considered metallic and the native oxide contribution is minimised after in-situ sputtering.\par
Importantly no Ti or W is observed in these room temperature measurements. This confirms both that the Cu layer is sufficiently thick so that even with SXPS the underlying TiW cannot be probed, and that the surfaces are consistent across all samples. The reference measurements show that the Cu~L\textsubscript{1}M\textsubscript{1}M\textsubscript{4,5} Auger line overlaps with the Ti~1\textit{s} core line but its intensity is vanishingly small.~\cite{COGHLAN1973317, Liu_SpeedyAuger_2021} Nevertheless, care was taken to remove this contribution when we quantified the Ti~1\textit{s} region to accurately determine the relative change in Ti concentration at the surface.\par
The following sections present the Cu, Ti and W core level spectra and associated accumulation profiles as a function of annealing duration/temperature across the three samples, with a focus on the initial stages of annealing and the 673~K holding period.
\subsection{In-situ annealing profiles}
\subsubsection{Copper}\label{Cu}
Fig.~\ref{fig:CLs_673K_Cu2p} displays the Cu~2\textit{p}\textsubscript{3/2} core level spectra collected over the 5~h holding period at 673~K for all three samples, i.e. Stage \textbf{3} (with \textit{t}~=~0~h in Fig.~\ref{fig:CLs_673K_Cu2p} referring to the start of the 5~h holding period). The spectra across all samples confirm that Cu still remains in its metallic state during annealing, with a BE position of approximately 932.5~eV. Additionally, the narrow full width at half maximum (FWHM), found to be 0.8~eV, and the lack of significant satellite features in the 943-948~eV region give further confirmation of the metallic nature of the Cu surface.~\cite{SCHON197396, SCROCCO197952, Miller_1993} From Fig.~\ref{fig:CLs_673K_Cu2p} it can be observed that after annealing and within the 673~K holding period, sample 5Ti has the highest Cu~2\textit{p}\textsubscript{3/2} signal intensity (Fig.~\ref{fig:CLs_673K_Cu2p}(a)), followed by samples 10Ti (Fig.~\ref{fig:CLs_673K_Cu2p}(b)) and 15Ti (Fig.~\ref{fig:CLs_673K_Cu2p}(c)). Moreover, within the 5~h holding period, the signal intensity is continually decreasing with annealing duration and this effect is most notable in Fig.~\ref{fig:CLs_673K_Cu2p}(c) for the sample with the highest Ti concentration.\par
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth,keepaspectratio]{Figures/673K_Composite_Cu2p.png}
\caption{Cu~2\textit{p}\textsubscript{3/2} core level spectra collected during the 673~K holding period (Stage \textbf{3}) for sample (a) 5Ti, (b) 10Ti, and (c) 15Ti. Spectra for each sample are plotted over the same $y$-axis scale to show the differences in intensity across the three samples. The spectra have not been normalised but a constant linear background has been removed. To avoid congestion of this figure, spectra collected every other spectral cycle are presented (i.e. $\approx$30~minutes) rather than at every spectral cycle (i.e. $\approx$15~minutes). The legend displayed in (b) also applies to (a) and (c). Here, $t$~=~0~h refers to the start of the 5~h holding period.}
\label{fig:CLs_673K_Cu2p}
\end{figure*}
To determine the change in concentration of Cu at the sample surface across the measurement window, peak fit analysis of the Cu~2\textit{p}\textsubscript{3/2} core level was conducted to determine the change in area, with the resultant profile displayed in Fig.~\ref{fig:Composite_Quant}(a). In Fig.~\ref{fig:Composite_Quant}, time, $t$ = 0~h is redefined as the first measurement point of the measurement window (i.e. at the start of Stage \textbf{2} at a temperature of 523~K (250$\degree$C)). Note, $t$ = 0~h in the context of Fig.~\ref{fig:Composite_Quant} is not the same as $t$ = 0~h in Fig.~\ref{fig:CLs_673K_Cu2p}. The same is also true for Fig.~\ref{fig:CLs_673K_Ti1s} and Fig.~\ref{fig:CLs_673K_W4d}, which present the equivalent spectra to Fig.~\ref{fig:CLs_673K_Cu2p} for the Ti~1\textit{s} and W~4\textit{d} core levels, respectively.\par
The Cu~2\textit{p}\textsubscript{3/2} intensity profile in Fig.~\ref{fig:Composite_Quant}(a) reflects what is observed in the core level spectra collected across the 673~K holding period shown in Fig.~\ref{fig:CLs_673K_Cu2p}, in that the Cu~2\textit{p}\textsubscript{3/2} signal intensity decreases as a function of time and annealing temperature across both Stages \textbf{2} and \textbf{3} of the annealing process. The decrease in intensity of the Cu~2\textit{p}\textsubscript{3/2} signal with time is a consequence of the diffusion of Ti out of the TiW layer during annealing. The accumulation of Ti leads to a displacement of Cu atoms and the formation of a Ti-rich surface layer, consequently attenuating the Cu signal. Additionally, when the TiW is more Ti-rich, Fig.~\ref{fig:Composite_Quant}(a) shows that the Cu signal diminishes more extensively, suggesting a greater out-diffusion of Ti. As expected based on this interpretation, sample 15Ti shows the largest decay rate in the Cu~2\textit{p}\textsubscript{3/2} signal, followed by sample 10Ti and then 5Ti. At the end of the measurement window, the Cu~2\textit{p}\textsubscript{3/2} signal intensity has decreased by approximately 2.8, 8.8 and 32.3~\% for samples 5Ti, 10Ti and 15Ti, respectively. \par
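For a rough sense of scale, the observed Cu~2\textit{p}\textsubscript{3/2} attenuation can be converted into an effective Ti overlayer thickness under a simple uniform-layer model, $I/I_0 = \exp(-d/\lambda)$. The following is an order-of-magnitude sketch under assumed conditions (uniform coverage, $\lambda \approx 1.5$~nm), not the analysis performed here:

```python
import math

imfp = 1.5  # nm, assumed IMFP of Cu 2p3/2 electrons in the Ti overlayer

for label, loss_percent in [("5Ti", 2.8), ("10Ti", 8.8), ("15Ti", 32.3)]:
    ratio = 1.0 - loss_percent / 100.0   # I/I0 at the end of the window
    d = -imfp * math.log(ratio)          # effective overlayer thickness
    print(f"{label}: I/I0 = {ratio:.3f} -> d ~ {d:.2f} nm")
```

Even for sample 15Ti this crude estimate yields a sub-nanometre effective Ti layer, consistent with the Cu signal remaining dominant in the spectra.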
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.95\linewidth,keepaspectratio]{Figures/Quant_Composite_DC_Correct_ADD.png}
\caption{Relative area intensities measured as a function of time, \textit{t} collected across the measurement window for all three samples, including (a) Cu, (b) Ti, and (c) W profiles, determined from peak fitting the Cu~2\textit{p}\textsubscript{3/2}, Ti~1\textit{s} and W~4\textit{d} core level spectra, respectively, at each spectral cycle. Here, $t$~=~0~h refers to the start of the measurement window. The yellow-filled marker for each dataset refers to the time when the 673~K holding period commences (i.e. data points before and after the marker refer to Stage~\textbf{2} and Stage~\textbf{3} of the annealing process). Vertical guidelines are also in place to mark this point for each sample. For Cu, the measured total Cu~2\textit{p}\textsubscript{3/2} areas are normalised relative to the initial raw area (I\textsubscript{0}) of their respective sample (i.e. I/I\textsubscript{0}). For Ti, the measured total raw Ti~1\textit{s} signal area for each sample is first normalised relative to the raw area of the Cu~2\textit{p}\textsubscript{3/2} core level measured during the same spectral cycle and then afterwards the resultant Ti~1\textit{s}/Cu~2\textit{p}\textsubscript{3/2} area is normalised relative to the final raw intensity of sample 15Ti (i.e.~I/I\textsubscript{F}). The W accumulation profile was determined by normalising the measured total raw W~4\textit{d} spectral areas following the method used for the Ti~1\textit{s} normalisation (i.e.~I/I\textsubscript{F}).}
\label{fig:Composite_Quant}
\end{figure*}
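The normalisation scheme described in the caption of Fig.~\ref{fig:Composite_Quant} can be summarised in a few lines (the arrays below are hypothetical peak areas, one entry per spectral cycle, used only to illustrate the arithmetic):

```python
# Hypothetical raw peak areas, one value per spectral cycle
cu_areas = [100.0, 95.0, 80.0]     # Cu 2p3/2
ti_areas = [0.0, 2.0, 10.0]        # Ti 1s

# Cu profile: each area normalised to the first raw area, I/I0
cu_profile = [a / cu_areas[0] for a in cu_areas]

# Ti profile: per-cycle Ti 1s / Cu 2p3/2 ratio, then the whole curve
# normalised to the final ratio of the reference (15Ti) data set, I/I_F
ti_ratio = [t / c for t, c in zip(ti_areas, cu_areas)]
i_final = ti_ratio[-1]             # stand-in for 15Ti's final ratio
ti_profile = [r / i_final for r in ti_ratio]

assert cu_profile[0] == 1.0 and ti_profile[-1] == 1.0
print(cu_profile, ti_profile)
```

Normalising the Ti areas to the simultaneously measured Cu areas compensates for cycle-to-cycle intensity fluctuations before the samples are compared on a common scale.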
\subsubsection{Titanium} \label{sec:Ti}
The Ti~1\textit{s} core level spectra collected across the 5~h 673~K holding period (Stage \textbf{3}) are displayed in Fig.~\ref{fig:CLs_673K_Ti1s}, with the BE positions of the main signals annotated (see Supplementary Information VII and VIII for the equivalent Ti~2\textit{p} core level spectra and heat maps of the Ti~1\textit{s} spectra collected across the measurement window, respectively). \par
Fig.~\ref{fig:CLs_673K_Ti1s} shows that by the time the 673~K holding period starts, a Ti~1\textit{s} peak is observed across all three samples and the intensity continually increases during the 5~h holding period. This confirms that the onset of diffusion occurs prior to Stage \textbf{3} of the annealing process as assumed during the discussion of the Cu profile. Significant differences in intensity of the Ti~1\textit{s} spectra as a function of Ti concentration are observed, with sample 15Ti showing a considerably more intense peak than samples 10Ti and 5Ti (note the $\times$30 magnification of the 5Ti spectra). Notably, the spectral line shape also appears different across the samples, indicating a change in the chemical state of the accumulated Ti. \par
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth,keepaspectratio]{Figures/673K_Composite_Ti1s.png}
\caption{Ti~1\textit{s} core level spectra collected during the 673~K holding period (Stage \textbf{3}) for sample (a) 5Ti, (b) 10Ti, and (c) 15Ti. Spectra for each core level are plotted over the same $y$-axis scale to show the differences in intensity across the three samples. The spectra have not been normalised, but a constant linear background has been removed. Additionally, spectra recorded every other spectral cycle are displayed to aid with the interpretation of the data. For sample 5Ti, the spectra are also shown magnified by $\times$30 to aid with viewing. The legend displayed in (b) also applies to (a) and (c). Here, $t$~=~0~h refers to the start of the 5~h holding period.}
\label{fig:CLs_673K_Ti1s}
\end{figure*}
All spectra exhibit a feature at lower BEs (4964.2-4965.1~eV), corresponding to metallic Ti in varying environments (labelled as Ti(0)). As the Ti~1\textit{s} core level is not as widely studied as Ti~2\textit{p} due to the need for hard X-ray sources, only a handful of publications exist, with reported BEs varying considerably.~\cite{hagstrom1964extension, Nordberg1966, Diplas_2001, diplas2001electron, MOSLEMZADEH2006129, Woicik_2015, Renault_2018, RISTERUCCI201778, Regoutz_2018} The BE positions of the Ti(0)~1\textit{s} peak observed in the present work fall within the literature range of metallic Ti, and the asymmetric line shape of the peak, which can be clearly observed in Fig.~\ref{fig:CLs_673K_Ti1s}(b) and (c), is commensurate with this assignment. An asymmetric line shape is a hallmark of the core level spectra of many transition metals.~\cite{HUFNER1975417}\par
The 10Ti and 15Ti samples show a small BE difference of 0.2~eV, which could be attributed to the differences in the Ti:Cu and/or Ti:O ratio at the evolving surface. In contrast, the BE position in the 5Ti spectra is considerably lower, with a $-$0.9~eV shift relative to the BE position observed in the spectra of sample 10Ti. This shift can be attributed to the distinctly different surface configuration of this sample due to the dominance of Ti-O environments and the co-diffusion of tungsten, both of which will be discussed later. Moreover, the quantity of Ti diffused to the surface is extremely small for sample 5Ti, and therefore, the shift could be due to strong surface effects, with far fewer nearest neighbours being Ti, leading to a negative shift in BE position.~\cite{CHOPRA1986L311, Kuzmin_2011} \par
During the 673~K holding period, the nature of the accumulated Ti for samples 10Ti and 15Ti is predominantly metallic, given that a single asymmetric peak is visible (see Figs.~\ref{fig:CLs_673K_Ti1s}(b) and (c)). The accumulated Ti for sample 5Ti, shown in Fig.~\ref{fig:CLs_673K_Ti1s}(a), is strikingly different, as the intensity of the lower BE metallic peak is overshadowed by a large, fairly symmetric peak at approximately +4.5~eV from the Ti(0)~1\textit{s} peak. This peak, labelled as Ti(IV)~1\textit{s}, is attributed to Ti-O environments in the Ti 4+ oxidation state (i.e. TiO\textsubscript{2} like). Renault~\textit{et al.} report the Ti~1\textit{s} BE position of the TiO\textsubscript{2} environment on a TiN film at 4968.8~eV,~\cite{Renault_2018} which agrees well with the value reported here. Therefore, unlike samples 10Ti and 15Ti, the Ti accumulated at the surface of sample 5Ti is not predominantly metallic but oxidic. Additionally, there is a shoulder on the lower BE side of this Ti(IV)~1\textit{s} peak (marked with an asterisk, *), which is attributed to lower valence states of Ti (i.e. 2+, 3+) that may also form due to the limited quantity of oxygen expected to be present (see Supplementary Information IX for a peak fit analysis of the spectra highlighting the presence of such environments). This shoulder increases in intensity with increasing annealing duration, and at the end of the 5~h period, a distinct Ti(0)~1\textit{s} peak is difficult to observe.\par
To aid with the interpretation of the Ti~1\textit{s} spectra, as well as validate the chemical state assignments made so far, the Ti~2\textit{p} spectra are used in parallel (see Supplementary Information VII). The Ti~2\textit{p} spectra for samples 10Ti and 15Ti show a doublet peak with an asymmetric line shape at 454.5 and 460.6~eV (SOS = 6.1~eV), in agreement with metallic Ti.~\cite{TANAKA1990429, Kuznetsov_1992} For sample 5Ti, three peaks are identified at 453.8, 459.0, and 464.8~eV. The lowest BE peak corresponds to Ti~2\textit{p}\textsubscript{3/2} of Ti(0), whereas the other two correspond to the doublet of Ti oxide in the 4+ oxidation state (SOS = 5.8~eV), labelled as Ti(IV) (with the Ti(IV)~2\textit{p}\textsubscript{3/2} peak overlapping the Ti(0)~2\textit{p}\textsubscript{1/2} peak). These BE positions and the SOS of the Ti(IV) oxide doublet match well with literature values.~\cite{Diebold_1996, Regoutz_2016}\par
A shift of the lower BE Ti(0)~2\textit{p}\textsubscript{3/2} peak between the three samples is observed, with the peak positioned at 453.8, 454.7 and 454.4~eV for sample 5Ti, 10Ti and 15Ti, respectively. The relative shifts are similar to those observed in the Ti~1\textit{s} spectra. Moreover, the Ti~2\textit{p} spectra recorded for sample 5Ti also display a shoulder on the lower BE side of the main Ti(IV)~2\textit{p}\textsubscript{3/2}, again reflecting what has been observed in the Ti~1\textit{s} spectra, suggesting the presence of lower valence oxidation states that may form during the reaction between Ti and oxygen.~\cite{POUILLEAU1997235, MCCAFFERTY199992} Overall, this confirms the peak assignments made using the Ti~1\textit{s} core level are valid and shows the importance of using multiple core levels to have confidence in the assignment of chemical states.\par
The observation of almost completely oxidised Ti on the surface of sample 5Ti is of interest, given that these measurements were conducted under ultra-high vacuum (UHV) conditions and the samples were annealed in-situ. The level of observed oxidation cannot be explained by Ti gettering residual oxygen from the analysis chamber as the quantity present in the chamber is insufficient to promote oxidation of Ti to the extent observed. Furthermore, as the sample is heated during the measurement, the sticking coefficients for adsorbed gases are greatly reduced. An alternative source of oxygen is residual oxygen within the Cu film, whether that be intrinsic to the film (i.e. incorporated during deposition) or that the sputtering process prior to annealing did not fully remove the native oxide layer that formed during the exposure of the samples to the atmosphere. In the room temperature reference survey spectra found in Supplementary Information VI, a weak O~1\textit{s} signal is present. Laboratory-based SXPS depth profiling on the as-deposited samples was conducted to determine the oxygen level within the starting (i.e. pre-annealed) films and to validate this assumption further. Three sputter cycles (or etch steps) were completed before the underlying TiW signal became strong (see Supplementary Information X for the collected spectra). The profiles showed that within the Cu bulk, less than 2~rel. at.\% of O is present, i.e., $<$2~at.\% O, $>$98~at.\% Cu. Within the errors of the performed quantification, this amount would be enough to facilitate the observed Ti oxidation.\par
Overall, it is apparent that the oxidation of Ti is dependent on both the quantity and rate of accumulation of Ti metal at the surface. Given the significant Ti oxidation observed for sample 5Ti, owing to the low concentration of accumulated Ti, it would be expected that during the early stages of annealing for the higher concentration samples, when an equally low concentration of Ti is expected to accumulate, oxidation should also occur. To confirm this and explore the oxidation of accumulated Ti further, Fig.~\ref{fig:01} displays the Ti~1\textit{s} core level spectra collected across the measurement window for sample 10Ti (equivalent figures for samples 5Ti and 15Ti can be viewed in Supplementary Information XI and XII, respectively). Fig.~\ref{fig:01}(a) shows that during the initial stages of annealing sample 10Ti ($\leq$603~K), the intensity first increases within the region of 4966-4970~eV. After 603~K, the intensity increases below 4966~eV, where the metallic Ti(0)~1\textit{s} peak is located, and this peak quickly becomes the dominant contribution to the total line shape and consequently masks the intensity of the environments seen on the higher BE side.\par
\begin{figure*}[ht]
\centering
\includegraphics[width=0.65\linewidth,keepaspectratio]{Figures/Stage_2_20Ti_arrow.png}
\caption{Initial stages of annealing (523-673~K) described by the Cu~2\textit{p}\textsubscript{3/2} and Ti~1\textit{s} core level spectra. (a) Raw Ti~1\textit{s} core level spectra collected (i.e. with no intensity normalisation) at each temperature increment, with +5~h referring to the data collected at the end of the 5~h 673~K holding period. (b) A magnified view of the raw Ti~1\textit{s} core level spectra collected between 523-623~K and a room temperature reference measurement on the same sample (i.e. before annealing) to highlight the Cu Auger contribution. (c) Normalised (0-1) Ti~1\textit{s} core level spectra to emphasise the change in line shape as a function of temperature. (d) Normalised (0-1) Cu~2\textit{p}\textsubscript{3/2} spectra taken at selected temperatures. (a) and (b), and (c) and (d) are plotted on the same $y$-axis scale, respectively (note the $\times$12.5 magnification of the $y$-axis scale of (b)).}
\label{fig:01}
\end{figure*}
From Fig.~\ref{fig:CLs_673K_Ti1s}(a), we know that the 4966-4970~eV region corresponds to Ti-O environments, namely the Ti(IV) oxidation environment, suggesting that even for sample 10Ti, during the initial stages of annealing when the accumulated Ti concentration is low, oxidation of Ti metal occurs. This region will be referred to as Ti-O environments in the following discussion. Fig.~\ref{fig:01}(b) further emphasises the development of Ti-O environments by focusing on the spectra collected between 523-623~K. From this, it is clear that Ti-O environments evolve first and that, after 603~K, the Ti(0)~1\textit{s} peak appears due to the continuing diffusion of Ti metal from the TiW layer. It should be noted that the Cu~LMM Auger peak is also present in this region; however, given that the main Cu~2\textit{p}\textsubscript{3/2} core level peak decreases with annealing duration and temperature, the observed increase in spectral intensity in this region cannot be explained by any interference from the Auger peak.\par
The transition from predominantly Ti oxide to metal is evident in Fig.~\ref{fig:01}(c), showing the Ti~1\textit{s} spectra normalised to the maximum peak height. This figure shows that the main peak shifts towards lower BEs across the temperature range of 623-673~K (highlighted with an arrow), and this is accompanied by a decrease in the relative intensity of the Ti-O region. The observed shift is due to the emergence of the Ti(0)~1\textit{s} metal peak and the overall reduction of the Ti-O contribution to the total spectral line shape. Lastly, Fig.~\ref{fig:01}(d) displays the Cu~2\textit{p}\textsubscript{3/2} spectrum recorded at different temperatures across the measurement window, and no discernible change is observed in the spectra. Additionally, Supplementary Information XIII shows that the same observation is true when comparing the Cu~2\textit{p}\textsubscript{3/2} line shape across all three samples. This indicates that only the Ti, not the Cu, is undergoing changes to its chemical state at the developing interface.\par
Therefore, oxidation of the surface accumulated Ti is also observed in sample 10Ti but is more evident during the initial stages of annealing where the rate of metal Ti diffusion and quantity of accumulated Ti is small. The same holds true for sample 15Ti as seen in Supplementary Information XII.
Beyond the qualitative analysis of the Ti~1\textit{s}/2\textit{p} spectra, an accumulation profile of Ti at the Cu surface across the measurement window can be obtained. The Ti accumulation profiles for the samples were extracted from the Ti~1\textit{s} core level spectral areas and are displayed in Fig.~\ref{fig:Composite_Quant}(b) (the equivalent Ti~2\textit{p} profile can be found in Supplementary Information XIV). Before discussing these profiles, it is important to reiterate that they represent changes in the quantity of surface-accumulated Ti with respect to time rather than temperature, although during the ramping stage the temperature also rises with time.\par
The temperature at which Ti is first observed at the Cu surface (i.e.~the onset) is difficult to identify with full confidence as the signal is very small, especially for samples 5Ti and 10Ti. For these two samples, the temperature range between 553-563~K (i.e. within the first two hours of Stage \textbf{2}) is when a Ti signal is clearly detectable. The detection of these small Ti signals was only possible through analysing the Ti~1\textit{s} core level as it was much more intense and sharper than the Ti~2\textit{p} (Supplementary Information XV provides a comparison of the Ti~2\textit{p} and Ti~1\textit{s} measured at the same point to highlight this issue). In contrast, for sample 15Ti, it is obvious from Fig.~S15(b) in Supplementary Information XII that Ti is observed from the start of the measurement window (i.e.~523~K) and may have even begun to accumulate during Stage \textbf{1} of the annealing process. \par
The Ti profile displayed in Fig.~\ref{fig:Composite_Quant}(b) shows that as the concentration of Ti within the TiW film increases, greater out-diffusion of Ti is observed and thus greater accumulation of Ti on the Cu surface occurs. From the profile, it is apparent that the rate of diffusion and the quantity of accumulated Ti differ significantly across the three samples. Focusing on the last data point in the Ti profile at the end of the 673~K holding period, the Ti~1\textit{s}/Cu~2\textit{p}\textsubscript{3/2} area ratios of samples 5Ti and 10Ti are 3.7$\pm$0.5 and 18.2$\pm$0.5~\%, respectively, of that of sample 15Ti. This indicates that a linear relationship between the Ti concentration in the film and the quantity of accumulated Ti on the Cu surface does not exist (i.e. they do not scale proportionally).\par
Sinojiya~\textit{et al.} studied similar Ti$_x$W$_{1-x}$ films across a composition range and observed that above a certain Ti concentration threshold, segregation of Ti toward the grain boundaries was favoured, and this enrichment increased with increasing Ti concentration.~\cite{Sinojiya_2022} Additionally, they observed that the change in Ti concentration not only enhances the segregation of Ti but is also accompanied by a change in stress, microstructure, and grain boundary density within the TiW films. A columnar grain boundary structure was also observed at higher concentrations with a relatively higher grain boundary density. Therefore, in our case, for sample 15Ti it is possible that a greater quantity of Ti was already segregated from the TiW grains within the as-deposited films or that annealing promoted a greater segregation compared to samples 5Ti and 10Ti, and consequently that this led to the differences observed in the Ti accumulation profile between the three samples. Furthermore, based on the work of Sinojiya~\textit{et al.}, the expected differences in the microstructure across samples 5, 10 and 15Ti will also contribute to the changes observed in the Ti diffusion profile as properties such as grain boundary density will affect the rate of diffusion. \par
The Ti accumulation profiles displayed in Fig.~\ref{fig:Composite_Quant}(b), collected across the measurement window for all three samples, exhibit two different diffusion regimes. The first regime occurs before the 673~K target is reached (i.e. during Stage \textbf{2}), wherein a rapid exponential increase in intensity occurs when ramping the temperature. Once the 673~K target is reached (i.e. during Stage \textbf{3}), the second regime occurs wherein the diffusion rate begins to decelerate and starts to plateau. A plateau is observed for sample 5Ti, and signs of a plateau are present for sample 10Ti by the end of the measurement window. In contrast, the profile for sample 15Ti does not show signs of plateauing, indicating that Ti continues to accumulate at the Cu surface under the temperature and measurement window tested in this experiment. By fitting the linear portions of the Ti~1\textit{s} profile collected during Stages \textbf{2} and \textbf{3} of annealing, the rate of increase in the Ti~1\textit{s} signal intensity relative to sample 15Ti can be determined. The results of the linear fits of Stage \textbf{2} for samples 5Ti, 10Ti and 15Ti were found to be 0.7, 4.9 and 16.5, respectively, and for Stage \textbf{3} were found to be 0.2, 1.4 and 9.7, respectively (error estimated to be $\pm$20\%). These values highlight the dramatic decrease in the Ti accumulation rate during Stage \textbf{3} of annealing. Multiple processes could be responsible for these changes in the accumulation rate. For instance, only a finite quantity of Ti may be available to segregate from the TiW grains, therefore, after annealing for several hours, a plateau is reached as no more Ti is available to diffuse.~\cite{Kalha_TiW_Cu_2022} Additionally, the accumulation appears to decelerate after the 673~K mark is reached.
This deceleration may imply that, once the sample is held at a constant temperature rather than subjected to a temperature ramp, the diffusion rate levels off as the system approaches a steady state under the constant thermal input. \par
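The stage-wise rates quoted above follow from straight-line fits to the near-linear portions of the accumulation profiles. A minimal sketch of such a fit is given below; the time and intensity arrays in the usage are hypothetical, not the measured data.

```python
import numpy as np

def accumulation_rate(t, intensity):
    """Slope of a degree-1 least-squares fit to the (approximately linear)
    portion of an accumulation profile."""
    slope, _intercept = np.polyfit(np.asarray(t, dtype=float),
                                   np.asarray(intensity, dtype=float), 1)
    return slope

def relative_rate(t, intensity, reference_rate):
    """Rate expressed relative to a reference sample's rate for the same
    annealing stage (e.g. the 15Ti slope)."""
    return accumulation_rate(t, intensity) / reference_rate
```

Fitting Stage \textbf{2} and Stage \textbf{3} separately, as done in the text, then yields one slope per regime, and the ratio of slopes quantifies the deceleration across the stage boundary.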
\subsubsection{Tungsten}
Fig.~\ref{fig:CLs_673K_W4d} displays the collected W~4\textit{d} core level spectra for all samples during the 5~h 673~K holding period (Stage \textbf{3}). W is not observed within this period for the 10Ti and 15Ti samples; however, it is detected for sample 5Ti, whose TiW film contains the lowest Ti concentration. This confirms that W co-diffuses to the surface only for sample 5Ti, and given that it is already detected at $t$ = 0~h of the holding period, the diffusion likely occurred prior to Stage \textbf{3}. The BE position of the W~4\textit{d}\textsubscript{5/2} peak is at 243.2~eV, in good agreement with metallic W.~\cite{Kalha_W_2022} Within the 5~h period, the surface-accumulated W signal does not increase in intensity with annealing duration, suggesting that the accumulation has plateaued and the diffusion has subsided. The presence of W at the Cu surface may also influence the oxidation behaviour of the accumulated Ti as observed in the previous section.\par
\begin{figure}
\centering
\includegraphics[width=0.33\linewidth,keepaspectratio]{Figures/W4d_673K_15Ti_slim.png}
\caption{W~4\textit{d} core level spectra collected during the 673~K holding period (Stage \textbf{3}) for samples (a) 5Ti, (b) 10Ti, and (c) 15Ti. Spectra for each core level are plotted over the same $y$-axis scale to show the differences in intensity across the three samples. The spectra have not been normalised, but a constant linear background has been removed. Additionally, spectra recorded every other spectral cycle are displayed to aid with the interpretation of the data. For sample 5Ti (a), the inset shows a $\times$10 magnification of the spectra to aid with viewing. The legend is the same as that used in Fig.~\ref{fig:CLs_673K_Cu2p}(b) and Fig.~\ref{fig:CLs_673K_Ti1s}(b). Here, $t$~=~0~h refers to the start of the 5~h holding period.}
\label{fig:CLs_673K_W4d}
\end{figure}
Fig.~\ref{fig:Composite_Quant}(c) displays the relative accumulation profile of W at the Cu surface across the measurement window for all three samples. Due to the poor signal-to-noise ratio (SNR) of the W~4\textit{d} spectra, it is difficult to have complete confidence in determining the exact temperature at which W is first observed for sample 5Ti. However, the signal becomes apparent at 553-563~K, similar to when Ti was observed at the surface of the same sample. The poor SNR is also responsible for the large scatter in the accumulation profile, leading to an area change greater than 100~rel.\%. Fitting the data points with an asymptotic curve shows that a plateau is reached when crossing from Stage \textbf{2} to Stage \textbf{3} of the annealing process, with the 673~K holding period profile flattening, similar to what was observed for the Ti profile. The observed plateau indicates that a finite quantity of W is able to migrate from the barrier and that a steady state is reached within the measurement window explored. \par
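The asymptotic behaviour noted above can be illustrated with a saturating exponential, $I(t) = A\,(1 - \mathrm{e}^{-t/\tau})$. Both this functional form and the routine below are illustrative assumptions, not the exact fitting procedure used in this work; the sketch scans candidate time constants and solves for the amplitude in closed form, avoiding external fitting libraries.

```python
import numpy as np

def fit_asymptote(t, y, taus=np.linspace(0.1, 20.0, 400)):
    """Least-squares fit of y = A * (1 - exp(-t / tau)): for each candidate
    time constant tau, the optimal amplitude A has a closed form; keep the
    (A, tau) pair with the smallest residual."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    best = None
    for tau in taus:
        basis = 1.0 - np.exp(-t / tau)
        amp = (basis @ y) / (basis @ basis)
        resid = float(np.sum((y - amp * basis) ** 2))
        if best is None or resid < best[0]:
            best = (resid, amp, tau)
    return best[1], best[2]  # plateau amplitude A and time constant tau
```

The fitted amplitude $A$ then estimates the plateau level, i.e. the finite quantity of W able to migrate before the steady state is reached.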
The diffusion of W is surprising as the vast majority of studies on TiW only report the out-diffusion of Ti. For example, even studies on pure W diffusion barriers,~\cite{Shen_1986, mercier1997, GUPTA1975362, wang_1994} or on a TiW barrier with a relatively low Ti concentration (4.9~at.\%)~\cite{Evans_1994} do not report any mobility of W. However, some studies observe W diffusion from a W or TiW barrier within thin film stacks at temperatures below 600$\degree$C, although no explanation is offered as to why this occurs.~\cite{Christou_1975, Palmstrom_1985, ASHKENAZI1993746} \par
Based on the present results, it is hypothesised that the Ti concentration of the TiW film dictates the overall stability of the diffusion barrier. If it is too low (i.e. in the 5Ti sample), a small amount of W becomes mobile and is free to migrate through the Cu overlayer alongside Ti and accumulate at the surface. This suggests that Ti plays an active role in stabilising the barrier and achieving the desired microstructure necessary for good barrier performance. Therefore, tuning the Ti concentration to an optimum value can significantly improve the barrier performance.
\subsection{Elemental distribution across the in-situ annealed TiW/Cu bilayer} \label{DP}
From the in-situ annealing results, it is clear that under the conditions tested, the out-diffusion of Ti from TiW and through the Cu metallisation is observed for the two samples with higher Ti concentrations (10Ti and 15Ti), whereas for the lowest Ti concentration sample (5Ti), both Ti and W diffuse through the Cu metallisation. To quantify the elemental ratio of Cu, Ti, and W across the metallisation, depth profiling using laboratory-based SXPS was conducted on the in-situ annealed samples (i.e. post-mortem). Survey spectra collected at each etch cycle for all three samples can be found in Supplementary Information XVI, showing the change in composition and transition between the Cu overlayer and TiW sublayer.
\begin{figure*}[ht]
\centering
\includegraphics[keepaspectratio, width=0.95\linewidth]{Figures/Depth_Profiles.png}
\caption{Post-mortem laboratory-based SXPS sputter depth profiles collected across samples (a) 5Ti, (b) 10Ti and (c) 15Ti after in-situ annealing at beamline I09. Etch Cycle 0 refers to the spectra collected on the as-received sample (i.e. before any sputtering). Horizontal guidelines are added to show the final Ti~at.\% for each sample, with the dotted, dashed and solid orange lines referring to samples 5, 10 and 15Ti, respectively.}
\label{fig:DP}
\end{figure*}
The depth profiles for the three samples displayed in Fig.~\ref{fig:DP} highlight the distribution of Ti across the Cu layer and confirm what was observed in the in-situ measurements, in that at the Cu surface, the quantity of accumulated Ti increases in intensity as the Ti concentration of the film increases. The profiles further confirm that Ti is found throughout the Cu film after annealing. However, its distribution is not uniform, with more Ti observed at the Cu/air and TiW/Cu interfaces. Despite the strong out-diffusion, distinct Cu and TiW zones are still observable in the depth profiles, showing that the TiW/Cu bilayer has not failed when stressed under these conditions.\par
Several studies on Cu/Ti bilayer films have identified that a reaction between the two films can occur as low as 325$\degree$C, leading to the formation of intermetallic CuTi and Cu\textsubscript{3}Ti compounds at the interface.~\cite{Liotard_1985, Li_1992, Apblett_1992} As shown in Fig.~\ref{fig:01}(c), the shifts observed for the Ti~1\textit{s} core line are representative of a changing oxide to metal ratio rather than the formation of an intermetallic compound, whereas the Cu~2\textit{p}\textsubscript{3/2} spectra displayed in Fig.~\ref{fig:01}(d) show no change in the line shape. If an intermetallic compound were to form, one would expect some systematic change to the spectra with increasing annealing duration and temperature or for samples with a higher Ti concentration in the TiW film, as these will cause the greatest surface enrichment of Ti on the Cu. The possibility of such a reaction is difficult to assess from the core level spectra alone. The depth profiles can aid with this discussion. At etch cycle 0 (i.e. as-received surface), the Ti:Cu ratio for sample 15Ti is 7.5:92.5. This value may be slightly skewed, as the surface is oxidised, and so there may be additional diffusion of Ti across the metal/oxide interface, but also a carbon surface layer is present which will affect the quantification. Nevertheless, this ratio is insufficient to form stoichiometric CuTi or Cu\textsubscript{3}Ti intermetallic phases that were reported in previous studies on the Ti/Cu interface.~\cite{Liotard_1985} Therefore, based on this literature, the presented spectra and the quantified Ti:Cu ratio, a reaction between Cu and Ti at the developing Cu/Ti interface does not occur due to the relatively small amount of diffused Ti, which again may explain why no systematic shifts in the core level spectra commensurate with a Cu-Ti reaction were observed.
However, it should be noted that it may not be possible to observe intermetallic compounds as (a) the quantity of diffused Ti is very small, and (b) the Cu~2\textit{p}\textsubscript{3/2} core line is known to have small chemical shifts.~\cite{Chawla1992DiagnosticSF}\par
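The stoichiometry argument above amounts to a simple feasibility check: CuTi requires 50~at.\% Ti and Cu\textsubscript{3}Ti requires 25~at.\% Ti, both far above the 7.5~at.\% quantified at the surface. A trivial sketch of this check (not part of the original analysis) is:

```python
def can_form_phase(ti_at_pct, phase_ti_at_pct):
    """True if the measured Ti atomic percentage reaches the stoichiometric
    Ti fraction of a candidate Cu-Ti intermetallic phase."""
    return ti_at_pct >= phase_ti_at_pct

# Measured surface ratio for sample 15Ti at etch cycle 0: Ti:Cu = 7.5:92.5
surface_ti = 7.5
phases = {"CuTi": 50.0, "Cu3Ti": 25.0}  # stoichiometric Ti content in at.%
feasible = {name: can_form_phase(surface_ti, pct) for name, pct in phases.items()}
```

Neither phase is feasible at the measured ratio, consistent with the absence of intermetallic-related shifts in the spectra.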
In terms of W, the depth profiles shown in Fig.~\ref{fig:DP} confirm that W is only observed at the Cu surface for sample 5Ti and is not present at the surface or within the Cu bulk for samples 10Ti and 15Ti. Fig.~\ref{fig:DP}(a) shows that for sample 5Ti, the W profile is fairly constant across etch cycles 0-3, suggesting that W is homogeneously distributed throughout the Cu metallisation and is not accumulated at the Cu/air interface like Ti. Quantification of the Cu, Ti and W signals reveals that at the surface of sample 5Ti (etch cycle 0), the composition is 97.9 (Cu), 0.9 (Ti), and 1.2 (W)~rel. at.\%, showing that significant W diffusion has occurred. \par
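Relative atomic percentages such as those above are conventionally obtained by dividing each core level peak area by a relative sensitivity factor (RSF) and renormalising. The sketch below is a generic illustration with hypothetical areas and RSF values, assuming a homogeneous probed volume; it is not the exact quantification routine used here.

```python
import numpy as np

def relative_at_pct(areas, rsfs):
    """Relative at.% from core-level peak areas and relative sensitivity
    factors (hypothetical values; homogeneous probed volume assumed)."""
    corrected = np.asarray(areas, dtype=float) / np.asarray(rsfs, dtype=float)
    return 100.0 * corrected / corrected.sum()
```

Because the result is renormalised per measurement point, the compositions at each etch cycle always sum to 100~rel.~at.\%, which is why the profiles in Fig.~\ref{fig:DP} are expressed in relative rather than absolute terms.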
Fig.~\ref{fig:DP} shows that the Cu signal tends towards 0~rel. at.\% for all samples when the interface is reached. However, Cu is still detected at the deepest point of the depth profile, with a composition at etch cycle 17 calculated to be 0.1 (Cu) 99.9 (Ti + W), 0.7 (Cu) 99.3 (Ti + W), and 1.4 (Cu) 98.6 (Ti + W) rel. at.\%, for samples 5Ti, 10Ti and 15Ti, respectively. Moreover, with increasing Ti concentration, the element profiles broaden, and their gradients toward the zone labelled ``interface'' decrease. This provides evidence that there is a degree of intermixing at the TiW/Cu interface, and for films with higher Ti concentrations, a greater intermixing is observed due to the larger rate of atomic flux of Ti across the interface during annealing. Therefore, the out-diffusion of Ti from the TiW also promotes the down-diffusion of Cu into the TiW layer, and consequently, the TiW and Cu layers bleed into each other. \par
To summarise, the depth profiles show that clear TiW and Cu zones remain across all samples despite the diffusion and intermixing that occurs during annealing. Although the concentration of Cu observed at the deepest point of the depth profiles increases when the concentration of Ti in the TiW increases, it is difficult to determine how deep the Cu diffuses, as the measurement point of the last depth profile etch cycle is still very much at the surface of the 300~nm thick TiW film. However, given the low concentration of Cu detected at this point ($\leq$1.4~at.\%), and the fact that distinct Cu and TiW zones still remain, one can be confident that under the conditions tested, the TiW barrier has not failed, and the majority of Cu is held above the barrier.\par
\section{Conclusion}
The thermal stability of the TiW barrier in conjunction with a Cu metallisation overlayer was evaluated in real-time using a combination of SXPS and HAXPES, and annealing the sample in-situ to a target temperature of 673~K. The primary mode of degradation was the segregation of Ti from the TiW barrier and its diffusion to the copper surface to form a surface overlayer. The concentration of Ti in TiW was shown to have a significant influence on the thermal stability of the TiW barrier. Two thresholds are observed when moving across the TiW composition window tested here: (I) below a certain concentration of Ti, W gains mobility, suggesting that the incorporation of Ti stabilises W, and (II) above a certain concentration of Ti, the diffusion drastically increases, suggesting that at higher concentrations grain boundary segregation of Ti from the TiW grains is favoured, resulting in significantly more out-diffusion of Ti. The post-mortem depth profiles validate the effectiveness of TiW diffusion barriers as despite the degradation observed during annealing, the Ti depletion is not significant enough to lead to the failure of the barrier, as distinct Cu and TiW zones are still present. Overall, it is clear that the composition heavily dictates the stability of TiW, but under the conditions tested, all three barrier compositions remain effective at suppressing the permeation of copper. Based on this, the TiW alloy can cement itself as an excellent diffusion barrier to incorporate into future device technologies.
\medskip
\textbf{Supporting Information} \par
The Supplementary Information includes room temperature reference spectra, heat maps of the Ti~1\textit{s} spectra collected across the measurement window, and the Ti~2\textit{p} spectra collected for all samples during the 673~K holding period. Additionally, core level spectra collected for samples 5Ti and 15Ti during the 523-673~K annealing period, survey spectra from the laboratory-based SXPS depth profile, information on the residual level of oxygen within the Cu films from laboratory-based SXPS, and a comparison of the Ti~2\textit{p} and Ti~1\textit{s} core levels can be found in the Supplementary Information. Information on the peak fitting procedures used and the method to determine and monitor the thermal broadening is also available in the Supplementary Information.
\medskip
\textbf{Acknowledgements} \par
C.K. acknowledges the support from the Department of Chemistry, UCL. A.R. acknowledges the support from the Analytical Chemistry Trust Fund for her CAMS-UK Fellowship. This work was carried out with the support of Diamond Light Source, instrument I09 (proposal NT29451-1 and NT29451-2). The authors would like to thank Dave McCue, I09 beamline technician, for his support of the experiments.
\medskip
\bibliographystyle{MSP}
\section{Peak fit analysis of as-deposited TiW spectra}
To determine the Ti:W ratio of the as-deposited samples, the Ti~2\textit{p} and W~4\textit{f} core level spectra were collected with laboratory-based SXPS, after both ex-situ and in-situ preparation of the Si/SiO\textsubscript{2}/TiW/Cu samples. Samples were first cleaved to 5$\times$5~mm\textsuperscript{2} pieces using a diamond-tipped pen, after which they were submerged in a dilute solution of HNO\textsubscript{3} (5:1 65~\% conc. HNO\textsubscript{3}: Milli-Q water) for 10~min. This was carried out to selectively remove the copper metallisation layer without affecting the TiW layer. The samples were then sputter cleaned in-situ to remove contamination introduced during the ex-situ preparation stages and any oxide formation. The survey spectra collected after the in-situ preparation are displayed in Fig.~\ref{fig:Sputter_Surv}.\par
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.8\linewidth]{Figures_SI/TIW_Quant_Surveys_SI.png}
\caption{SXP survey spectra collected before and after in-situ preparation of samples, including (a) survey spectra collected for sample 10Ti after each etch step, and (b) survey spectra collected for all three samples at the end of the in-situ preparation method. Spectra are normalised (0-1) to the height of the most intense peak and are vertically offset. VB = valence band.}
\label{fig:Sputter_Surv}
\end{figure}
Once the sputter cleaning was complete, a depth profile using a focused Ar\textsuperscript{+} source was conducted for each sample to determine the Ti:W concentration profile across the film. The depth profile consisted of six etching cycles, each lasting 30~min, with the Ar\textsuperscript{+} ion gun operated at an ion energy of 500~eV and an emission current of 10~mA. After six etch steps, the SiO\textsubscript{2} layer was detectable. The Ti~2\textit{p} and W~4\textit{f} core level spectra were collected at each etch step. Representative Ti~2\textit{p} and W~4\textit{f} spectra, along with representative peak fits, are displayed in Fig.~\ref{fig:W4f_Ti2p}. Spectra were aligned to the intrinsic Fermi energy ($E_F$) of the respective sample. A systematic shift toward higher binding energy (BE) is observed in the W~4\textit{f} spectra with decreasing Ti content, a trend that is also observed in the Ti~2\textit{p} spectra. \par
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.6\linewidth]{Figures_SI/01_03_05_Quant.png}
\caption{SXP core level spectra collected for all samples after the in-situ removal of the copper capping layer and oxide layer, and representative peak fits of the (a) W~4\textit{f} and (b) Ti~2\textit{p} core level spectra. Spectra are normalised to the W~4\textit{f}\textsubscript{7/2} peak height of the respective sample. Peak fits of the W~4\textit{f} and Ti~2\textit{p} core levels for spectra collected on the 10Ti sample are displayed in (c) and (d), respectively.}
\label{fig:W4f_Ti2p}
\end{figure}
To determine the Ti:W ratio, the Ti~2\textit{p} core level spectra collected across the entire depth profile were first fitted with the Smart-type background implemented in the Avantage software package, which is a Shirley-type background with the additional constraint that the background must not exceed the data points. The Smart background was chosen because, at lower Ti concentrations, the background on the lower binding energy (BE) side of the Ti~2\textit{p} region begins to rise due to the increasing intensity of the close neighbouring W~4\textit{p}\textsubscript{3/2} plasmon, which hampers the effective use of a Shirley-type background, as it would cut the data points. Due to the complexity of the Ti~2\textit{p} core level, the total area was fitted rather than attempting to isolate the contributions from the two spin states. The average Ti~2\textit{p} relative atomic sensitivity factor (RASF) was applied to the resultant fitted area to quantify the region. For W~4\textit{f}, a Shirley-type background was implemented and three peaks were added for the W~4\textit{f}\textsubscript{7/2}, W~4\textit{f}\textsubscript{5/2} and W~5\textit{p}\textsubscript{3/2} core lines. It is assumed that after sputtering only the metallic tungsten environment is present. The W~4\textit{f} peaks were given asymmetry to account for core-hole coupling with conduction band states and were constrained to have the same full width at half maximum (FWHM) and line shape as each other.~\cite{HUFNER1975417} The Avantage software package uses a least-squares fitting procedure to determine a suitable Lorentzian/Gaussian (L/G) mix, tail mix, FWHM, and tail exponent for the peaks. Additionally, the area ratio of the 4\textit{f} doublet peaks was set so that the lower spin state peak had an area 0.75 times that of the higher spin state peak (i.e.\ a 3:4 area ratio). 
The same line shape (FWHM, L/G mix, tail mix, tail exponent and area ratio) was applied to all W~4\textit{f} spectra across the depth profile. Additionally, the W~5\textit{p}\textsubscript{3/2} peak was fitted with a pseudo-Voigt profile with a fixed L/G mix of 30\% Lorentzian and a variable FWHM constraint. The BE range of the backgrounds, the line shapes, and the FWHM constraints of the peaks were then applied to all spectra to be consistent across the sample set and the depth profiles. However, if the line shape was not constrained, the same value within error ($\pm$0.3~at.\%) was achieved. To determine the relative Ti:W ratio in at.\%, the RASF-corrected Ti~2\textit{p} spectral area was compared to the RASF-corrected W~4\textit{f}\textsubscript{7/2} spectral area. Fig.~\ref{fig:DP_Quant} displays the quantification results from the depth profiles along with a standard deviation across the film thickness. The three samples have an average Ti~at.\% relative to W of 5.4$\pm$0.3 (5Ti), 11.5$\pm$0.3 (10Ti) and 14.8$\pm$0.6~at.\% (15Ti). Furthermore, Fig.~\ref{fig:01_DPS} displays the spectra collected across the depth profile of sample 10Ti; the W~4\textit{f} line shape remains fairly constant across the first five etch steps, and only subtle changes are observed in the W~4\textit{f}/Ti~2\textit{p} area ratio, consistent with the quantification results. The survey spectra displayed in Fig.~\ref{fig:01_DPS}(a) also show clearly how the depth profile penetrates through the TiW and into the substrate, as in the last three etches Si-O peaks first emerge, followed by Si peaks.
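The quantification step described above reduces to dividing each fitted peak area by its sensitivity factor and taking the Ti fraction of the corrected intensities. The following is a minimal stand-alone sketch of that arithmetic; the function name, arguments, and the numerical values are purely illustrative assumptions, not the actual Avantage output or the RASFs used in this work.

```python
def ti_w_atomic_percent(area_ti2p, area_w4f72, rasf_ti2p, rasf_w4f72):
    """Relative Ti concentration (at.%) from RASF-corrected peak areas.

    Each raw peak area is divided by its relative atomic sensitivity
    factor (RASF); the Ti fraction is then corrected Ti / (Ti + W).
    """
    corrected_ti = area_ti2p / rasf_ti2p
    corrected_w = area_w4f72 / rasf_w4f72
    return 100.0 * corrected_ti / (corrected_ti + corrected_w)

# Illustrative (made-up) areas and sensitivity factors:
ti_at_pct = ti_w_atomic_percent(30.0, 405.0, 2.0, 3.0)  # -> 10.0 at.%
```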
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=\linewidth]{Figures_SI/AD_Quant_DPs.png}
\caption{Ti:W relative quantification as a function of etching time (also referred to as sputter duration) across the three TiW films. The depth profiles of samples 5Ti, 10Ti and 15Ti are displayed in (a), (b), and (c), respectively. 0~min etch time refers to the first measured point in the depth profile. This was collected after the sample surface was in-situ sputter cleaned to remove the remnants from the ex-situ cleaning process but before the first etching cycle of the depth profile. This measurement point is referred to as Etch 0.}
\label{fig:DP_Quant}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.66\linewidth]{Figures_SI/AD_DPs_spectra_01.png}
\caption{Spectra collected during the first five etch steps of the depth profile for sample 10Ti, including (a) survey, (b) W~4\textit{f}, and (c) Ti~2\textit{p} spectra. The survey spectra are normalised to the height of the maximum intensity peak, whereas the W~4\textit{f} and Ti~2\textit{p} spectra are normalised to the sum of the total W~4\textit{f}/5\textit{p}\textsubscript{3/2} and Ti~2\textit{p} areas. The dotted grey line in the survey spectra refers to the Etch 0 spectrum, and the survey spectra have been offset vertically. Etch 0 refers to the first measurement at sputtering time 0~min (i.e. after the in-situ cleaning but before the first depth profile etching cycle). As no Fermi edge or C~1\textit{s} was measured during the depth profiles, the BE scale is not calibrated and is plotted as recorded.}
\label{fig:01_DPS}
\end{figure}
\cleardoublepage
\section{Room temperature energy resolution}
The room temperature total energy resolution of the SXPS and HAXPES experiments at the synchrotron was determined by extracting the 16/84\% width of the Fermi edge of a polycrystalline gold foil. Fig.~\ref{fig:Au_Ef} displays the Fermi edges of the foil measured with SXPS and HAXPES at room temperature and fitted with a Boltzmann curve.
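The 16/84\% width extraction amounts to locating the energies at which the normalised edge crosses 84\% and 16\% of its plateau intensity and taking their separation. The sketch below demonstrates this on a synthetic sigmoidal edge; the function names, grid, and broadening parameter are assumptions for illustration, not the beamline analysis code.

```python
import math

def edge_width_16_84(energies, intensities):
    """16/84% width of a normalised, monotonically decreasing edge.

    Finds the energies where the intensity crosses 84% and 16% of the
    plateau value by linear interpolation and returns their separation.
    """
    def crossing(level):
        for (e0, i0), (e1, i1) in zip(zip(energies, intensities),
                                      zip(energies[1:], intensities[1:])):
            if (i0 - level) * (i1 - level) <= 0 and i0 != i1:
                return e0 + (level - i0) * (e1 - e0) / (i1 - i0)
        raise ValueError("level not crossed")
    return abs(crossing(0.16) - crossing(0.84))

# Synthetic edge: sigmoid with broadening parameter w (in eV)
w = 0.05
energies = [-0.5 + 0.001 * k for k in range(1001)]
edge = [1.0 / (1.0 + math.exp(e / w)) for e in energies]
width = edge_width_16_84(energies, edge)
# Analytic 16/84% width of this sigmoid: 2 * w * ln(0.84/0.16)
```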
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures_SI/Au_Width.png}
\caption{Fermi edge (E\textsubscript{F}) spectra collected with (a) SXPS and (b) HAXPES on a polycrystalline gold foil at room temperature. The energy resolution is determined by extracting the 16/84\% width (i.e.\ one standard deviation on either side of the Fermi energy).}
\label{fig:Au_Ef}
\end{figure}
\cleardoublepage
\section{Sample Plate Holder}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures_SI/Sample_Holder.png}
\caption{Annotated image of the sample plate holder used for the in-situ annealing experiment at beamline I09.}
\label{fig:Sample_Plate}
\end{figure}
\cleardoublepage
\section{Temperature Profiles}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.42\linewidth]{Figures_SI/Combined_Profile.png}
\caption{Temperature profiles for all three samples. The start of the measurement window is indicated by the vertically dotted grey line, whereas the red dotted and dashed lines indicate the end of the measurement cycle for samples 5Ti/15Ti and 10Ti, respectively. The temperature profile for samples 5Ti and 10Ti are near-identical and so overlap.}
\label{fig:T_profile}
\end{figure}
\cleardoublepage
\section{Energy resolution as a function of temperature}
In order to assess the effect of temperature on the thermal broadening of the collected spectra, the intrinsic Fermi edge of the sample (i.e. copper) was captured with SXPS and HAXPES at each spectral cycle. By extracting the 16/84\% width of the Fermi edge (as shown in Fig.~\ref{fig:Au_Ef}), the change in total energy resolution could be monitored with respect to temperature. According to M\"{a}hl~\textit{et al.} the thermal broadening ($\gamma_f$) of a Fermi edge at temperature $T$ measured with XPS can be described by:
\begin{equation}
{\gamma}_f = 4{\ln}(\sqrt{2}+1)k_b{T}\;{\approx}\;{\frac{7}{2}}k_b{T} ,
\end{equation}
where $k_b$ is the Boltzmann constant; approximating $k_b$ as $\frac{1}{11600}~\frac{\textrm{eV}}{\textrm{K}}$ gives thermal broadenings of 90~meV and 200~meV at 300~K and 673~K, respectively.~\cite{MAHL1997197} Therefore, a change of 110~meV in the total energy resolution of this experiment is expected. Fig.~\ref{fig:Reso_T}(a) displays the change in Fermi edge width with respect to annealing temperature and duration during preliminary test measurements.\par
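The quoted values follow directly from the expression above; a quick numerical check using the same $1/11600$~eV/K approximation for $k_b$ (the function name is ours, chosen for illustration):

```python
import math

K_B_EV_PER_K = 1.0 / 11600.0  # approximation for the Boltzmann constant in eV/K

def thermal_broadening_ev(temperature_k):
    """Fermi-edge thermal broadening gamma_f = 4*ln(sqrt(2)+1)*k_b*T, in eV."""
    return 4.0 * math.log(math.sqrt(2.0) + 1.0) * K_B_EV_PER_K * temperature_k

g_300 = thermal_broadening_ev(300.0)  # ~0.091 eV  (~90 meV)
g_673 = thermal_broadening_ev(673.0)  # ~0.205 eV  (~200 meV)
delta = g_673 - g_300                 # ~0.113 eV, i.e. the ~110 meV quoted above
```

Note that the prefactor $4\ln(\sqrt{2}+1) \approx 3.525$, which is the origin of the $\approx 7/2$ approximation in the equation above.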
It can be seen in Fig.~\ref{fig:Reso_T}(a) that across the measured temperature range, on average the change in 16/84\% Fermi edge width is less than 60~meV. Considering everything remains constant during the measurement (i.e. pass energy, dwell time, analyser, geometry, sample) except for temperature, this change is representative of the thermal broadening. This value is slightly lower than the theoretical value, but this can be attributed to the assumptions made in the theoretical model and the error associated with the 16/84\% method. Additionally, Fig.~\ref{fig:Reso_T}(c) and (d) display the Fermi edge spectra at key temperatures measured in this experiment for sample 15Ti. The changes observed are minimal, with the hard X-ray-collected Fermi edges appearing more sensitive to temperature than the soft X-ray-collected edges.\par
Overall, the change in resolution is insignificant for the core level spectra as it falls below the energy resolution of the spectrometer. Therefore, when analysing the changes to the core level spectra for all samples, thermal broadening effects are negligible. Moreover, Fig.~\ref{fig:Reso_T}(b) displays the Cu~2\textit{p}\textsubscript{3/2} core level spectrum collected at selected temperatures. The room temperature spectrum is slightly broader than the higher temperature spectra, but the high-temperature spectra FWHM remain reasonably constant, falling in line with the changes observed when tracking the Fermi edge width. The reason for the broader room temperature spectrum and slight asymmetry on the lower binding energy side can be attributed to surface contamination (i.e. remnant oxide contributions) but when heated, the surface is cleaned, leading to a narrowing of the FWHM.
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=\linewidth]{Figures_SI/Resolution_Composite.png}
\caption{Energy resolution measurements as a function of annealing temperature and duration, including (a) the Fermi edge width collected with both soft (SX) and hard (HX) X-rays for sample 10Ti as a function of temperature during preliminary measurements, (b) selected Cu~2\textit{p}\textsubscript{3/2} core level spectra collected with SXPS on sample 15Ti as a function of annealing temperature, collected during this experiment, plotted on a relative BE scale and normalised to the maximum intensity to emphasise the change in peak FWHM as a function of annealing temperature and duration. (c) and (d) display the selected Fermi edge spectra collected as a function of annealing temperature measured with soft and hard X-rays, respectively. (c) and (d) are normalised to the maximum height (accounting for noise) of the Fermi edge and plotted on the same \textit{y}-axis scale. RT Ref. refers to the room temperature reference spectrum.}
\label{fig:Reso_T}
\end{figure}
\cleardoublepage
\section{Room temperature reference spectra}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.7\linewidth]{Figures_SI/Room_Temperature_Refs.png}
\caption{SXPS and HAXPES room-temperature reference spectra collected for as-deposited samples 5Ti, 10Ti and 15Ti after the surface was in-situ cleaned via argon sputtering, including (a) survey, (b) Cu~2\textit{p}\textsubscript{3/2}, (c) W~4\textit{d}, (d) Ti~2\textit{p} and (e) Ti~1\textit{s}, with the Ti~1\textit{s} collected with HAXPES and the others with SXPS. Spectra are normalised to the maximum height of the Cu~2\textit{p}\textsubscript{3/2} signal. Spectra collected on reference copper compounds (Cu, Cu\textsubscript{2}O) are also included, which were measured using the laboratory-based SXPS instrument.}
\label{fig:Refs_RoomT}
\end{figure}
To have confidence in the interpretation of the Cu~2\textit{p}\textsubscript{3/2} spectra, reference measurements were conducted using a laboratory-based SXPS instrument ($h\nu$ = 1.4867~keV) on a polycrystalline Cu foil (Alfa Aesar, 99.9985\% metals basis, 0.25~mm thick) and an anhydrous Cu\textsubscript{2}O powder (Cu\textsubscript{2}O, Sigma Aldrich, $\geq$99.99\% metals basis). The foil reference was sputter cleaned in-situ using a focused argon ion beam for 10~min, with the ion gun operating at a beam energy of 2~keV. The Cu\textsubscript{2}O powder was received in a sealed ampule under an argon atmosphere, and to minimise further oxidation (i.e.\ the formation of CuO) the sample was prepared in a glovebag under argon. The recorded Cu~2\textit{p}\textsubscript{3/2} spectra of these reference materials are overlaid on the room temperature reference spectra of samples 5Ti, 10Ti and 15Ti, displayed in Fig.~\ref{fig:Refs_RoomT}(b). The binding energy scale was calibrated to the intrinsic Fermi energy for the TiW/Cu samples and the Cu foil reference, whereas for Cu\textsubscript{2}O the scale was calibrated to adventitious carbon (284.8~eV).\par
It can be observed that there is good agreement between the Cu foil reference and the spectra recorded for the TiW/Cu samples. A very weak satellite is observed between 942-948~eV for the TiW/Cu samples; however, this is also present in the Cu foil reference, indicating that the native oxide contribution has been minimised as far as possible. The slight differences in Cu~2\textit{p}\textsubscript{3/2} FWHM between the foil reference and the TiW/Cu samples can be explained by the differences in total energy resolution between the synchrotron ($h\nu$ = 1.4~keV) and laboratory-based measurements, which were determined to be 330~meV and 600~meV, respectively. The laboratory-based SXPS instrument used for the collection of reference spectra was not the same as that used for the depth profiles described in the manuscript, hence the different energy resolution. \par
Cu Auger peaks are identified to overlap with the measured Ti~2\textit{p} and Ti~1\textit{s} core levels when measured with $h\nu$ = 1.4 and 5.9~keV, respectively. The Auger peak appears at a BE position of $\approx$4967.0~eV in the Ti~1\textit{s} region and $\approx$457.0~eV in the Ti~2\textit{p} region, equating to a kinetic energy (KE) of $\approx$959.0~eV for both Auger peaks. Both peaks appear at the same kinetic energy because the photon energies were deliberately tuned so that the Ti~1\textit{s} and Ti~2\textit{p} probing depths match. Possible Auger transition energies have been calculated and tabulated by Coghlan~\textit{et al.},~\cite{COGHLAN1973317} and the position of the Auger in the Ti~1\textit{s} spectra correlates with the Cu~L\textsubscript{1}M\textsubscript{1}M\textsubscript{4,5} transition calculated at 962~eV (KE). It is clear that these peaks are not due to titanium, as they possess neither the attributes of a core level peak nor the expected BE position of titanium metal/oxide in either the 2\textit{p} or 1\textit{s} spectrum. Aside from the Cu Auger peaks, the Ar~2\textit{p} core level peak is visible in the W~4\textit{d} region at approximately 241.0~eV, corresponding to argon implanted during the sputtering process. However, this peak is very weak and does not affect the analysis of any changes to the W~4\textit{d} spectrum that may develop during annealing.
\cleardoublepage
\section{In-situ annealing Ti~2\textit{p}~core level spectra}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=\linewidth]{Figures_SI/673K_Composite_Ti2p.png}
\caption{Ti~2\textit{p} core level spectra collected during the 673~K holding period (Stage \textbf{3}) for sample (a) 5Ti, (b) 10Ti, and (c) 15Ti. Spectra for each core level are plotted over the same $y$-axis scale to show the differences in intensity across the three samples. The spectra have not been normalised but a constant linear background has been removed. Additionally, spectra recorded every other spectral cycle are displayed to aid with the interpretation of the data. The 5Ti spectra have been magnified by $\times$15 to aid with viewing. The legend displayed in (b) also applies to (a) and (c). Ti(0) and Ti(IV) refer to metallic Ti and titanium oxide in the 4+ oxidation state, respectively.}
\label{fig:Ti2p_core_levels}
\end{figure}
\cleardoublepage
\section{Heat map of Ti~1\textit{s} spectra collected over the measurement window}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=\linewidth]{Figures_SI/Ti1_Colour_Map.png}
\caption{HAXPES maps of the Ti~1\textit{s} core level collected across the entire measurement window, for sample (a) 5Ti, (b) 10Ti and (c) 15Ti. The spectra are aligned to the intrinsic Fermi energy of the respective sample, and their intensity is not normalised but plotted as-collected (after the subtraction of a constant linear background). The top panel displays the median spectrum collected across the measurement window and the right panel displays the point-by-point temperature profile as a function of time. Due to the large variation in spectral intensity between sample 5Ti and 15Ti, the spectra displayed here are on independent intensity scales and so the intensities should not be directly compared. Ti(0) and Ti(IV) refer to metallic Ti and titanium oxide in the 4+ oxidation state, respectively.}
\label{fig:Ti1s_heat}
\end{figure}
\cleardoublepage
\section{5Ti Ti~1\textit{s} peak fit analysis}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures_SI/v2_peak_fit_3ox.png}
\caption{Peak fit analysis of the Ti~1\textit{s} core level for sample 5Ti. The oxide peaks are constrained to have the same FWHM (2.2~eV) and Lorentzian/Gaussian mix (50), whereas the metal peak line shape was derived from peak fitting the 673~K spectra of sample 15Ti with one asymmetric line shape. A Shirley-type background was used, and the Cu~L\textsubscript{1}M\textsubscript{1}M\textsubscript{4,5} contribution was not removed.}
\label{fig:Ti1s_pf_5Ti}
\end{figure}
\cleardoublepage
\section{Residual oxygen within the as-deposited Cu film}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=\linewidth]{Figures_SI/O_traces.png}
\caption{Depth profile results across the three as-deposited TiW/Cu samples to determine the level of O within the bulk Cu film. Samples were sputtered using a focused Ar\textsuperscript{+} ion beam at an energy of 500~eV for 6~min per cycle, rastering over a 2$\times$2~mm\textsuperscript{2} area and measuring at the centre of the sputter crater. Three cycles of sputtering were conducted, equating to 18~min total sputtering time. (a) and (b) show the Cu~2\textit{p}\textsubscript{3/2} and O~1\textit{s} spectra collected after the first, second and third etch steps for sample 5Ti only, respectively. Etch 0 refers to the as-received measurement (i.e. before any sputtering) and is not included here as the samples were stored and handled in air so a thin native oxide and adventitious carbon layer were present. The quantification results of the O/(Cu+O) ratio at each of the three etch steps for all three samples are shown in (c). The spectra are aligned to the ISO standard BE value of metallic Cu~2\textit{p}\textsubscript{3/2} (932.62~eV)~\cite{Cu_ISO} and normalised to the Cu~2\textit{p}\textsubscript{3/2} total spectral area. After Etch 3, the TiW layer is reached and the Ti and W signals become dominant.}
\label{fig:O_traces}
\end{figure}
\cleardoublepage
\section{Early Stages of Annealing for Sample 5Ti}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.67\linewidth]{Figures_SI/Stage_2_15Ti.png}
\caption{Initial stages of annealing (523-673~K) described by the Cu~2\textit{p}\textsubscript{3/2} and Ti~1\textit{s} core level spectra for sample 5Ti. (a) Ti~1\textit{s} core level spectra collected (with no intensity normalisation) at each temperature increment, with +5~h referring to the data collected at the end of the 5~h 673~K holding period. (b) A magnified view of the Ti~1\textit{s} core level spectra collected between 523-623~K as well as a room temperature reference measurement on the same sample (prior to annealing) to highlight the Cu Auger contribution. (c) Normalised (0-1) Ti~1\textit{s} core level spectra to emphasise the change in line shape. (d) Normalised (0-1) Cu~2\textit{p}\textsubscript{3/2} spectra taken at selected temperatures. All data have been aligned to the intrinsic Fermi energy. (a) and (b), and (c) and (d) are plotted on the same $y$-axis scale.}
\label{fig:5Ti_early}
\end{figure}
\cleardoublepage
\section{Early Stages of Annealing for Sample 15Ti}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.67\linewidth]{Figures_SI/Stage_2_30Ti.png}
\caption{Initial stages of annealing (523-673~K) described by the Cu~2\textit{p}\textsubscript{3/2} and Ti~1\textit{s} core level spectra for sample 15Ti. (a) Ti~1\textit{s} core level spectra collected (with no intensity normalisation) at each temperature increment, with +5~h referring to the data collected at the end of the 5~h 673~K holding period. (b) A magnified view of the Ti~1\textit{s} core level spectra collected between 523-623~K as well as a room temperature reference measurement on the same sample (prior to annealing) to highlight the Cu Auger contribution. (c) Normalised (0-1) Ti~1\textit{s} core level spectra to emphasise the change in line shape. (d) Normalised (0-1) Cu~2\textit{p}\textsubscript{3/2} spectra taken at selected temperatures. All data have been aligned to the intrinsic Fermi energy. (a) and (b), and (c) and (d) are plotted on the same $y$-axis scale.}
\label{fig:15Ti_early}
\end{figure}
\cleardoublepage
\section{Cu~2\textit{p}\textsubscript{3/2} line shape changes}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures_SI/Comparison_01_03_05_Cu2p_rel.png}
\caption{Comparison of the Cu~2\textit{p}\textsubscript{3/2} spectral line shape of the three samples. The spectra presented were captured at the end of the 673~K holding period (i.e. 673~K + 5~h). The spectra are normalised (0-1) and aligned to the position of maximum intensity to make it easier to observe changes in the line shape.}
\label{fig:Cu2p}
\end{figure}
\cleardoublepage
\section{In-situ annealing Ti~2\textit{p} concentration profile}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures_SI/Ti2p_Quant.png}
\caption{Relative Ti concentration profile as a function of time, \textit{t}, collected across the measurement window for all three samples, determined from peak fitting the Ti~2\textit{p} core level spectra. The yellow-filled marker for each dataset refers to the time when the 673~K holding period commences. Vertical guidelines are also in place to mark this point for each sample. The measured Ti~2\textit{p} signal intensity for each sample is first normalised relative to the area of the Cu~2\textit{p}\textsubscript{3/2} core level measured during the same spectral cycle and then afterwards the resultant Ti~2\textit{p}/Cu~2\textit{p}\textsubscript{3/2} area is normalised relative to the final intensity of sample 15Ti (I\textsubscript{F}).}
\label{fig:Ti2p_Quant}
\end{figure}
\cleardoublepage
\section{Ti~2\textit{p}/1\textit{s} comparison}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.8\linewidth]{Figures_SI/Ti1s_Ti2p_early.png}
\caption{A comparison of the (a) Ti~1\textit{s}, and (b) Ti~2\textit{p} core level spectra recorded at 573~K (\textit{t} = 2~h) for sample 10Ti. Spectra are normalised to the signal-to-noise ratio. Guidelines are marked for the positions of the expected peaks. It is clear that the Ti~1\textit{s} is more sensitive to smaller concentrations of titanium than the Ti~2\textit{p}. Additionally, the nature of the secondary background for the Ti~2\textit{p} region means that quantification of this area is incredibly difficult and cannot be done reliably, whereas a standard XPS background can easily be applied to the Ti~1\textit{s} region.}
\label{fig:Ti1s_2p}
\end{figure}
\cleardoublepage
\section{Depth Profile Survey Spectra}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.6\linewidth]{Figures_SI/Depth_Profiles_Surveys.png}
\caption{Survey spectra collected after each etch cycle during the post-mortem depth profile measurements for (a) 5Ti, (b) 10Ti, and (c) 15Ti samples. The top spectrum displayed in each sub-figure is taken on the as-received sample (i.e. no etch) and then the spectra collected after each cycle are stacked vertically below (going from blue to grey to black). Spectra coloured in blue are Cu-rich, black are W-rich and red is termed the ``interface'' as it marks the point where the Cu and W signals cross over in the depth profiles.}
\label{fig:DP_surveys}
\end{figure}
\cleardoublepage
\section{References}
\section{Introduction}
The binary pseudo-alloy of titanium-tungsten (Ti$_x$W$_{1-x}$, $x\leq0.3$) is a well-established, effective diffusion barrier and adhesion enhancer within silicon-based semiconductor devices.~\cite{NICOLET1978415, Wang_SQ_1993, ROSHANGHIAS2014386} It is designed to prevent the interdiffusion between adjacent metallisations and the underlying dielectric and semiconductor materials. TiW is compatible with various metallisations (Al, Au, Ag, In and Cu) and has remarkable thermal stability at elevated temperatures ($\leq$850$\degree$C).~\cite{Cunningham_1970, Harris_1976, GHATE1978117, NOWICKI1978195, Olowolafe1985, OPAROWSKI1987313, DIRKS1990201, Misawa_1992, OLOWOLAFE199337, Chiou_1995, BHAGAT20061998, Chang_2000, FUGGER20142487, LePriol2014, Souli_2017, Kalha_TiW_Cu_2022} Consequently, TiW diffusion barriers are now being widely implemented in next-generation SiC-based power semiconductor technologies with copper metallisation schemes,~\cite{Baeri_2004, Behrens_SiC_2013, Liu_2014} and more recently within electrodes for GaAs photoconductive semiconductor switches (PCSSs),~\cite{GaAs} and gate metal stacks in GaN-based high electron mobility transistor (HEMT) devices.~\cite{GaN}\par
Diffusion barriers are needed as Cu and Si readily react at relatively low temperatures to form intermetallic copper-silicide compounds at the interface, which seriously hamper the performance and reliability of devices.~\cite{Corn_1988, Harper_1990, Shacham_Diamand_1993, Liu_1993, Sachdeva_2001, Souli_2017} Studies have shown that TiW films are capable of retarding and limiting this interdiffusion and subsequent reaction.~\cite{Wang_SQ_1993, Souli_2017} However, when subjected to a high thermal budget, a depletion of Ti within the TiW grains has been observed, leading to the accumulation of Ti at grain boundaries.~\cite{CHOOKAJORN2014128} The segregated Ti is then able to diffuse out of the barrier and through the metallisation via grain boundary diffusion.~\cite{Olowolafe1985} This depletion of Ti is thought to lead to a greater defect density within the TiW layer, consequently allowing for the potential of Cu and Si to bypass the barrier and react. Fugger~\textit{et al.} state that this out-diffusion process is an ``essential factor'' in the failure of this barrier,~\cite{FUGGER20142487} and others have also documented the segregation of Ti during high-temperature annealing.~\cite{OLOWOLAFE199337, Baeri_2004, Plappert_2012, CHOOKAJORN2014128, Kalha_TiW_Cu_2022}\par
Given the importance of the TiW barrier to the overall device performance, reliability and its application in future SiC technologies and beyond, this Ti diffusion degradation process must be better understood, including how it impacts the stability of the TiW/Cu structure. The common thread across the vast majority of past experimental studies on TiW and diffusion barriers in general, including the present authors' previous work,~\cite{Kalha_TiWO, Kalha_TiW_Cu_2022} is that ex-situ samples are used to track the evolution of the diffusion process and to determine the temperature at which the barrier fails. Such studies also often focus on one Ti concentration and are therefore unable to address the effect of the titanium concentration of the film on the degradation mechanism.\par
\begin{figure*}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.85\linewidth]{Figures/Schematic.png}
\caption{Schematic representation of the samples and experimental approach (not drawn to scale). (a) Device stack on a sample holder being annealed in-situ to 673~K and the expected Ti diffusion represented by grey vertical arrows. (b) A magnified view of the copper surface showing the Ti accumulation and the two photon energies used for SXPS and HAXPES measurements to excite the Ti~2\textit{p} and Ti~1\textit{s} electrons from the same depth. (c) SXPS laboratory-based Ar\textsuperscript{+} sputtering depth profile used to quantify the elemental distribution across the TiW/Cu bilayer after in-situ annealing (i.e. post-mortem).}
\label{fig:Schematic}
\end{figure*}
Although ex-situ prepared samples give a good representation of the device \emph{after} stress events, it is difficult to correlate the results directly with what a device is experiencing \emph{during} the applied stress.~\cite{Olowolafe1985, Baeri_2004} Therefore, it is crucial to develop new characterisation strategies that can probe the degradation mechanism dynamically under realistic conditions while allowing for changes to the chemical states across the device stack to be monitored.\par
To the best of our knowledge, only Le Priol~\textit{et al.} and Siol~\textit{et al.} provide in-situ monitoring measurements on TiW, both employing in-situ X-ray diffraction (XRD). Le Priol~\textit{et al.} studied the efficiency of a TiW barrier deposited from a 70:30~at.\% W:Ti alloy target against indium diffusion at temperatures between 573 and 673~K under vacuum.~\cite{LePriol2014} The authors were able to correlate the TiW barrier efficiency with its microstructure and to determine the diffusion coefficient of In in TiW. Siol~\textit{et al.} were interested in understanding the oxidation of TiW alloy precursors, and observed oxygen dissolution and the formation and decomposition of mixed (W,Ti)-oxide phases when ramping the temperature from 303 to 1073~K in air.~\cite{SIOL202095}\par
The scarcity of in-situ/operando experiments in the field, which stands in contrast to the importance of these material interfaces in both novel and commercial device applications, can be explained by the challenges associated with performing such experiments. These include the extensive periods of time required to collect sufficient data, the limited availability of instruments with in-situ capability, and difficulties in sample preparation and interfacing.\par
The present work combines soft and hard X-ray photoelectron spectroscopies (SXPS and HAXPES) with in-situ annealing to study the effect of annealing temperature, annealing duration, and Ti:W ratio on the thermal stability of TiW/Cu bilayers in real-time, considerably expanding on the existing ex-situ work, including the present authors' previous studies.~\cite{Kalha_TiWO, Kalha_TiW_Cu_2022} Si/SiO\textsubscript{2}/Ti$_x$W$_{1-x}$(300~nm)/Cu(25~nm) device stacks (see Fig.~\ref{fig:Schematic}(a) for a schematic of the stack) are annealed up to a maximum temperature of 673~K (400$\degree$C) and held there for 5~h. At the same time, soft and hard X-ray photoelectron spectra are continuously recorded to capture the Ti diffusion process and changes to the chemical state across the copper surface (see Fig.~\ref{fig:Schematic}(b) for a schematic). The target temperature of 673~K is selected as it is in a common temperature regime employed during device fabrication to obtain desired grain growth and texture of the copper metallisation.~\cite{Harper_2003, Plappert_2012} Additionally, it is a temperature that can occur at short circuit events during the operation of potential devices.~\cite{NELHIEBEL20111927}\par
A major benefit of combining the two variants of X-ray photoelectron spectroscopy (XPS) is that SXPS is more surface-sensitive, whereas HAXPES enables access to the Ti~1\textit{s} core line. The Ti~1\textit{s} offers an alternative to the Ti~2\textit{p} commonly measured with soft X-ray sources. Compared to the Ti~2\textit{p}, the Ti~1\textit{s} has the added benefits of covering a smaller binding energy (BE) range and consequently requiring a shorter collection time, the absence of spin-orbit splitting (SOS), no additional broadening from the Coster-Kronig effect that influences the Ti~2\textit{p}\textsubscript{1/2} peak, and the absence of underlying satellites. For these reasons, the exploitation of the 1\textit{s} core level over the 2\textit{p} is becoming increasingly popular for transition metals, especially for the disentanglement of charge transfer satellite structures in the X-ray photoelectron spectra of metal oxides.~\cite{Woicik_2015, Miedema2015, Ghiasi2019, Woicik_2020, HAXPES_Big_Boy}\par
HAXPES is typically employed as it offers a larger probing depth than conventional SXPS.~\cite{HAXPES_Big_Boy} However, here, it is strategically used to obtain comparable probing depths of the Ti~2\textit{p} and Ti~1\textit{s} core lines, collected with SXPS and HAXPES, respectively. Using this combination, the more widely studied Ti~2\textit{p} spectra can be used to understand the Ti~1\textit{s} spectra better. In addition to the synchrotron-based XPS experiments, quantitative laboratory-based SXPS depth profiles were also conducted on the samples following the in-situ experiment (i.e. post-mortem) to ascertain the quantitative distribution of Ti across the Cu metallisation (see Fig.~\ref{fig:Schematic}(c) for a schematic of the depth profiling).\par
\section{Methodology}
\subsection{Samples}\label{Samples}
Three as-deposited Si/SiO\textsubscript{2}/Ti$_x$W$_{1-x}$/Cu thin film stacks with varying Ti:W composition were prepared through an established industrial route. The stack consists of a 50~nm SiO\textsubscript{2} layer on an un-patterned Si~(100) substrate, above which a 300~nm thick TiW layer was deposited via magnetron sputtering. The TiW films were deposited from composite targets with a nominal atomic concentration of 30:70 Ti:W, determined by X-ray fluorescence spectroscopy (XRF). By varying the deposition parameters, three samples with an average Ti concentration $x$ across the entire film thickness of 5.4$\pm$0.3, 11.5$\pm$0.3, and 14.8$\pm$0.6~at.\% relative to W were realised (i.e. $x$~=~(Ti/(Ti+W))$\times$100). These concentrations were determined using laboratory-based SXPS and depth profiling across the entire film thickness (further details regarding the quantification of the TiW films can be found in Supplementary Information I). These samples will be referred to as 5Ti, 10Ti, and 15Ti, respectively, for the remainder of the manuscript. Finally, a 25~nm Cu capping layer was deposited via magnetron sputtering on top of the TiW barrier. Deposition of both TiW and Cu was conducted in an argon discharge with no active substrate heating and no vacuum break between successive depositions. The deposition chamber operated under a base pressure of 10\textsuperscript{-8}-10\textsuperscript{-7}~mbar. Further details regarding the deposition process have been reported in Refs.~\cite{Plappert_2012, SAGHAEIAN2019137576}.
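The relative Ti concentrations quoted above normalise over the two metals only, averaged across the depth profile. A minimal sketch of this calculation, using hypothetical per-etch-step atomic fractions (the actual profile data are in Supplementary Information I):

```python
def relative_ti_concentration(ti_at, w_at):
    """Relative Ti concentration x in at.% vs W, i.e. Ti/(Ti+W)*100,
    ignoring all elements other than Ti and W."""
    return 100.0 * ti_at / (ti_at + w_at)

# hypothetical per-etch-step Ti/W atomic fractions from a depth profile
ti_steps = [5.1, 5.6, 5.5]
w_steps = [94.9, 94.4, 94.5]

x_per_step = [relative_ti_concentration(t, w) for t, w in zip(ti_steps, w_steps)]
x_avg = sum(x_per_step) / len(x_per_step)  # average over the film thickness
```

With these illustrative fractions the average works out to 5.4 at.%, i.e. the composition reported for sample 5Ti.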
\subsection{Dynamic synchrotron-based SXPS/HAXPES}
\subsubsection{Beamline optics and end station details}
SXPS and HAXPES measurements were conducted at beamline I09 of the Diamond Light Source, UK,~\cite{Beam2018} at photon energies of 1.415~keV and 5.927~keV, respectively (these will be abbreviated as 1.4~keV and 5.9~keV throughout the remaining manuscript). 1.4~keV was selected using a 400~lines/mm plane grating monochromator, achieving a final energy resolution of 330~meV at room temperature. 5.9~keV was selected using a double-crystal Si~(111) monochromator (DCM) in combination with a post-monochromator Si~(004) channel-cut crystal, achieving a final energy resolution of 290~meV at room temperature. The total energy resolution was determined by extracting the 16/84\% width of the Fermi edge of a clean polycrystalline gold foil (see Supplementary Information II for further information on determining the resolution).~\cite{ISO} The end station of beamline I09 is equipped with an EW4000 Scienta Omicron hemispherical analyser, with a $\pm$28$\degree$ acceptance angle. The base pressure of the analysis chamber was 3.5$\times$10$\textsuperscript{-10}$~mbar. To maximise the efficiency in the collection of spectra, the measurements were conducted in grazing incidence and at near-normal emission geometry.\par
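The 16/84\% width used to quote the total energy resolution can be illustrated with a purely Gaussian-broadened step edge, for which the 16/84\% width is close to $2\sigma$ and the equivalent FWHM is $2.3548\sigma$. This is a conceptual sketch only (the beamline analysis fits the measured Au Fermi edge, and the $\sigma$ below is a hypothetical value chosen to give a FWHM near the quoted 290~meV):

```python
from math import erf, sqrt

sigma = 0.125  # eV, hypothetical Gaussian broadening of the edge
n = 20001
energies = [-2.0 + 4.0 * i / (n - 1) for i in range(n)]  # eV relative to E_F
# Gaussian-broadened step: intensity falls from 1 to 0 across the Fermi energy
intensity = [0.5 * (1.0 - erf(e / (sigma * sqrt(2)))) for e in energies]

def crossing(level):
    """Energy at which the edge intensity is closest to `level`."""
    idx = min(range(n), key=lambda i: abs(intensity[i] - level))
    return energies[idx]

width_16_84 = crossing(0.16) - crossing(0.84)  # ~2*sigma for a Gaussian edge
fwhm = 2.3548 * (width_16_84 / 2.0)           # equivalent Gaussian FWHM, eV
```

For $\sigma = 0.125$~eV this gives a FWHM of roughly 0.29~eV, the order of the resolutions quoted above.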
\subsubsection{Annealing} \label{methods_annealing}
Samples were individually annealed in-situ to a sample target temperature of 673~K (400$\degree$C) using a tungsten filament heater, and held at the temperature for approximately 5~h. The sample plate used for the experiment consisted of a copper disk (3~mm thick, 8~mm diameter) fixed to the centre of a flat tantalum plate, on which the sample was placed and secured using clips. Good thermal contact was made between the copper disk and the sample using a thin silver foil. This allowed the sample temperature to be inferred by attaching an N-type thermocouple to the centre of the copper disc. The thermocouple was also connected to a Lakeshore temperature controller, which was programmed to ramp the sample temperature at a constant rate under a closed-loop control (see Supplementary Information III for an image of the sample plate holder).\par
Prior to in-situ annealing, all samples were gently sputter cleaned in-situ for 10~minutes using a 0.5~keV de-focused argon ion (Ar\textsuperscript{+}) source, operating with a 6~mA emission current and 5$\times$10\textsuperscript{-5}~mbar pressure. This was necessary to remove the native copper oxide that had formed on the sample surface during sample transport.\par
The process of in-situ annealing encourages the purging of adsorbed gases and organic species within the sample and on the sample surface (i.e. degassing). Annealing in a UHV environment therefore raises the chamber pressure, which is undesirable, especially during the collection of photoelectron spectra. To account for sample degassing, the annealing process was conducted step-wise to ensure that a good analysis chamber pressure was maintained throughout the measurements. Fig.~\ref{fig:Temp_Profile} displays a representative temperature profile acquired for sample 5Ti and the related pressure profile within the analysis chamber (see Supplementary Information IV for the temperature profiles collected for all three samples). The temperature profile consists of three stages. Additionally, as seen in the pressure profile in Fig.~\ref{fig:Temp_Profile}, each step up in temperature produced a temporary increase in pressure due to degassing of the sample.\par
Prior to annealing in the analysis chamber, the samples were first heated in a subsidiary sample preparation chamber to remove the majority of adsorbed molecules. This stage of annealing involves a fast ramp from room temperature to 523~K and will be referred to as Stage \textbf{1} of the annealing process. The Ti diffusion process was assumed to be insignificant in this temperature range. Next, the sample was moved to the main analysis chamber, where the temperature was ramped step-wise from 523~K to the target temperature of 673~K while maintaining an average pressure of 7$\times$10\textsuperscript{-10}~mbar (referred to as Stage \textbf{2}). The temperature was then held at the 673~K target temperature for 5~h (referred to as Stage \textbf{3}). Spectra were continuously collected using SXPS and HAXPES from the start of Stage \textbf{2} until the end of Stage \textbf{3} of the annealing process. The period during which the spectra were collected will be referred to as the ``measurement window''. Across the measurement window, the same group of spectra was collected iteratively; each iteration will be referred to as a ``spectral cycle''. Each spectral cycle took approximately 15~minutes to collect, and details of which spectra were selected are discussed in the following section. During Stage~\textbf{2}, the temperature was increased once a spectral cycle was completed, which also allowed sufficient time for the analysis chamber pressure to recover below 8$\times$10\textsuperscript{-10}~mbar. \par
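The Stage \textbf{2} setpoint schedule (one temperature step per spectral cycle: 10~K steps up to 653~K, then 5~K steps to the 673~K target, as also noted in the temperature-profile figure caption) can be sketched as follows. This is a minimal illustration of the schedule only; the pressure-recovery gating and closed-loop control of the Lakeshore controller are omitted:

```python
def stepwise_ramp(start_k=523, target_k=673, coarse_step=10,
                  fine_step=5, fine_from=653):
    """Return the sequence of Stage 2 temperature setpoints (K),
    one per spectral cycle: coarse steps up to `fine_from`,
    then fine steps until `target_k` is reached."""
    setpoints, t = [], start_k
    while t < target_k:
        t += coarse_step if t < fine_from else fine_step
        setpoints.append(min(t, target_k))
    return setpoints

schedule = stepwise_ramp()  # [533, 543, ..., 653, 658, 663, 668, 673]
```

At roughly 15 minutes per spectral cycle, the 17 setpoints of this schedule imply a Stage \textbf{2} duration on the order of 4 hours, consistent with the measurement-window description above.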
For completeness, we note that during the initial stages of annealing, sample 10Ti degassed more than samples 5Ti and 15Ti, and therefore the temperature ramp of Stage \textbf{2} for sample 10Ti was paused to allow the pressure to recuperate. This meant that sample 10Ti was held at 543~K for four spectral cycles rather than one. Therefore, the total time of annealing of sample 10Ti was extended by approximately 1~h compared to the annealing time of samples 5Ti and 15Ti. This is not expected to affect the diffusion process significantly or the resultant accumulation profiles, as the Ti diffusion at this temperature is minimal.\par
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures/TandP_profile.png}
\caption{Representative temperature profile acquired from the Lakeshore temperature controller during the measurements on sample 5Ti. The temperature profile consists of three stages. Stage \textbf{1}: a quick ramp to 523~K in a subsidiary chamber. Stage \textbf{2}: a 10~K/[spectral cycle] ramp in the main analysis chamber, which was then decreased to a 5~K/[spectral cycle] ramp once 653~K was reached. The temperature was ramped step-wise in Stage \textbf{2} to allow the pressure in the analysis chamber to recover to $<$7$\times$10\textsuperscript{-10}~mbar after each temperature step (see inset for the pressure profile). Stage \textbf{3}: holding period at 673~K for 5~h. The dotted line at $t$ = 0~h indicates the start of the measurement window.}
\label{fig:Temp_Profile}
\end{figure}
\subsubsection{Core level selection}\label{Decision}
The spectral cycle, which was run in an iterative loop during the experiment, included the following core level spectra: Cu~2\textit{p}\textsubscript{3/2}, Ti~2\textit{p} and W~4\textit{d} collected with SXPS, and Ti~1\textit{s} collected with HAXPES. The W~4\textit{d} core level was selected over the commonly measured W~4\textit{f} line as the former does not overlap with the core levels of Cu or Ti in this region, whereas the latter overlaps with the Ti~3\textit{p} core level. The Cu Fermi edge was also included in the spectral cycle and was collected with both SXPS and HAXPES throughout the measurement window to (a) provide an intrinsic method of calibrating the BE scale and (b) monitor any change to the total energy resolution as a consequence of raising the sample temperature. Based on 16/84\% fits of the collected Fermi edges across all measurements, the effect of thermal broadening is negligible under the experimental conditions used, and further information can be found in Supplementary Information V. All spectra were aligned to the intrinsic Cu Fermi energy (E\textsubscript{F}) and the spectral areas were obtained using the Thermo Avantage v5.9925 software package. The BE values quoted in this work are considered to have an estimated error of $\pm$0.1~eV.\par
The SXPS photon energy was set to 1.4~keV so that the kinetic energy (KE) of excited Ti~2\textit{p} electrons at this photon energy matches the KE of Ti~1\textit{s} electrons excited with the HAXPES photon energy (KE\textsubscript{Ti~1\textit{s}} $\approx$ KE\textsubscript{Ti~2\textit{p\textsubscript{3/2}}} $\approx$ 961~eV). Using the QUASES software package,~\cite{Shinotsuka_2015} the inelastic mean free path (IMFP) of Ti~2\textit{p} and Ti~1\textit{s} electrons in Cu metal at the SXPS and HAXPES photon energies were calculated. The IMFP for the Ti~1\textit{s} and Ti~2\textit{p}\textsubscript{3/2} is approximately 1.50~nm, and so the estimated probing depth (3$\lambda$) is 4.50~nm. Therefore, a direct comparison between the two Ti core levels will be possible as they originate from very similar probing depths.
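The kinetic-energy matching behind this choice is simple arithmetic, $\mathrm{KE} \approx h\nu - \mathrm{BE}$ (neglecting the analyser work function). A sketch using approximate metallic-Ti binding energies from this work (the exact BE values are illustrative):

```python
# Photon energies (eV) used for SXPS and HAXPES
hv_sxps, hv_haxpes = 1415.0, 5927.0

# Approximate binding energies (eV) of metallic Ti (illustrative values)
be_ti2p32, be_ti1s = 454.5, 4965.0

ke_ti2p = hv_sxps - be_ti2p32    # kinetic energy of Ti 2p3/2 electrons, ~961 eV
ke_ti1s = hv_haxpes - be_ti1s    # kinetic energy of Ti 1s electrons, ~961 eV

imfp = 1.50                      # nm, IMFP of ~961 eV electrons in Cu (QUASES)
probing_depth = 3 * imfp         # nm, 3-lambda convention -> 4.5 nm
```

Because both core lines leave the sample with nearly the same kinetic energy, their IMFPs in Cu, and hence their probing depths, are essentially identical, which is what makes the direct Ti~2\textit{p}/Ti~1\textit{s} comparison valid.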
\subsection{Laboratory-based SXPS}\label{SXPS_methods}
SXPS depth profile measurements were conducted on the samples that were annealed at I09 using a laboratory-based Thermo K-Alpha+ instrument (i.e. the in-situ annealed samples were removed and kept for a post-mortem analysis). The instrument operates with a monochromated Al~K$\alpha$ photon source ($h\nu$ = 1.4867~keV) and consists of a 180$\degree$ double-focusing hemispherical analyser, a two-dimensional detector that integrates intensity across the entire angular distribution range, and operates at a base pressure of 2$\times$10\textsuperscript{-9}~mbar. A 400~$\mu$m spot size was used for all measurements, achieved using an X-ray anode emission current of 6~mA and a cathode voltage of 12~kV. A flood gun with an emission current of 100~$\mu$A was used to achieve the desired level of charge compensation. The total energy resolution of the spectrometer was determined to be 400~meV. Survey and core level (W~4\textit{f}, Ti~2\textit{p}, O~1\textit{s} and Cu~2\textit{p}\textsubscript{3/2}) spectra were collected with pass energies of 200 and 20~eV, respectively. Depth profiles were conducted using a focused Ar\textsuperscript{+} ion source, operating at 500~eV energy and 10~mA current, rastering over a 2$\times$2~mm\textsuperscript{2} area with a 30$\degree$ sputtering angle. A total of 17 sputter or etch cycles, each lasting 180~s, was carried out with survey and core level spectra collected after each etch cycle. The data were analysed using the Thermo Avantage v5.9925 software package. The error associated with the quantification values is estimated to be $\pm$0.3~at.\% owing to the complexity of the W~4\textit{f} core level and the low quantities of Cu and Ti/W in the TiW and Cu layers, respectively. \par
\section{Results and Discussion}
Reference room temperature survey and core level spectra (Ti~1\textit{s}, Cu~2\textit{p}, Ti~2\textit{p} and W~4\textit{d}) were collected for the three samples after the in-situ sputter cleaning process, and prior to annealing, with the results displayed in Supplementary Information VI. From the survey spectra, the sample surfaces appear clean and are dominated by signals from Cu. Virtually no carbon is detected, and only a trace quantity of oxygen is present when measured with SXPS. The Cu~2\textit{p}\textsubscript{3/2} core level spectra are near identical for the three samples, and the position and line shape are commensurate with metallic copper.~\cite{SCHON197396, SCROCCO197952, Miller_1993} A low-intensity satellite is observed between 943-948~eV in the Cu~2\textit{p}\textsubscript{3/2} core level spectra, but comparing the spectra to reference measurements of a polycrystalline Cu foil and an anhydrous Cu\textsubscript{2}O powder, the satellite intensity is in agreement with the Cu foil. This confirms that the Cu surface of these samples can be considered metallic and the native oxide contribution is minimised after in-situ sputtering.\par
Importantly, no Ti or W is observed in these room temperature measurements. This confirms both that the Cu layer is sufficiently thick that even with SXPS the underlying TiW cannot be probed, and that the surfaces are consistent across all samples. The reference measurements show that the Cu~L\textsubscript{1}M\textsubscript{1}M\textsubscript{4,5} Auger line overlaps with the Ti~1\textit{s} core line, but its intensity is vanishingly small.~\cite{COGHLAN1973317, Liu_SpeedyAuger_2021} Nevertheless, care was taken to remove this contribution when quantifying the Ti~1\textit{s} region in order to accurately determine the relative change in Ti concentration at the surface.\par
The following sections present the Cu, Ti and W core level spectra and associated accumulation profiles as a function of annealing duration/temperature across the three samples, with a focus on the initial stages of annealing and the 673~K holding period.
\subsection{In-situ annealing profiles}
\subsubsection{Copper}\label{Cu}
Fig.~\ref{fig:CLs_673K_Cu2p} displays the Cu~2\textit{p}\textsubscript{3/2} core level spectra collected over the 5~h holding period at 673~K for all three samples, i.e. Stage \textbf{3} (with \textit{t}~=~0~h in Fig.~\ref{fig:CLs_673K_Cu2p} referring to the start of the 5~h holding period). The spectra across all samples confirm that Cu still remains in its metallic state during annealing, with a BE position of approximately 932.5~eV. Additionally, the narrow full width at half maximum (FWHM), found to be 0.8~eV, and the lack of significant satellite features in the 943-948~eV region give further confirmation of the metallic nature of the Cu surface.~\cite{SCHON197396, SCROCCO197952, Miller_1993} From Fig.~\ref{fig:CLs_673K_Cu2p} it can be observed that after annealing and within the 673~K holding period, sample 5Ti has the highest Cu~2\textit{p}\textsubscript{3/2} signal intensity (Fig.~\ref{fig:CLs_673K_Cu2p}(a)), followed by samples 10Ti (Fig.~\ref{fig:CLs_673K_Cu2p}(b)) and 15Ti (Fig.~\ref{fig:CLs_673K_Cu2p}(c)). Moreover, within the 5~h holding period, the signal intensity is continually decreasing with annealing duration and this effect is most notable in Fig.~\ref{fig:CLs_673K_Cu2p}(c) for the sample with the highest Ti concentration.\par
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth,keepaspectratio]{Figures/673K_Composite_Cu2p.png}
\caption{Cu~2\textit{p}\textsubscript{3/2} core level spectra collected during the 673~K holding period (Stage \textbf{3}) for sample (a) 5Ti, (b) 10Ti, and (c) 15Ti. Spectra for each sample are plotted over the same $y$-axis scale to show the differences in intensity across the three samples. The spectra have not been normalised but a constant linear background has been removed. To avoid congestion of this figure, spectra collected every other spectral cycle are presented (i.e. $\approx$30~minutes) rather than at every spectral cycle (i.e. $\approx$15~minutes). The legend displayed in (b) also applies to (a) and (c). Here, $t$~=~0~h refers to the start of the 5~h holding period.}
\label{fig:CLs_673K_Cu2p}
\end{figure*}
To determine the change in concentration of Cu at the sample surface across the measurement window, peak fit analysis of the Cu~2\textit{p}\textsubscript{3/2} core level was conducted to determine the change in area, with the resultant profile displayed in Fig.~\ref{fig:Composite_Quant}(a). In Fig.~\ref{fig:Composite_Quant}, time, $t$ = 0~h is redefined as the first measurement point of the measurement window (i.e. at the start of Stage \textbf{2} at a temperature of 523~K (250$\degree$C)). Note, $t$ = 0~h in the context of Fig.~\ref{fig:Composite_Quant} is not the same as $t$ = 0~h in Fig.~\ref{fig:CLs_673K_Cu2p}. The same is also true for Fig.~\ref{fig:CLs_673K_Ti1s} and Fig.~\ref{fig:CLs_673K_W4d}, which present the equivalent spectra to Fig.~\ref{fig:CLs_673K_Cu2p} for the Ti~1\textit{s} and W~4\textit{d} core levels, respectively.\par
The Cu~2\textit{p}\textsubscript{3/2} intensity profile in Fig.~\ref{fig:Composite_Quant}(a) reflects what is observed in the core level spectra collected across the 673~K holding period shown in Fig.~\ref{fig:CLs_673K_Cu2p}, in that the Cu~2\textit{p}\textsubscript{3/2} signal intensity decreases as a function of time and annealing temperature across both Stages \textbf{2} and \textbf{3} of the annealing process. The decrease in intensity of the Cu~2\textit{p}\textsubscript{3/2} signal with time is a consequence of the diffusion of Ti out of the TiW layer during annealing. The accumulation of Ti leads to a displacement of Cu atoms and the formation of a Ti-rich surface layer, consequently attenuating the Cu signal. Additionally, when the TiW is more Ti-rich, Fig.~\ref{fig:Composite_Quant}(a) shows that the Cu signal diminishes more extensively, suggesting a greater out-diffusion of Ti. As expected based on this interpretation, sample 15Ti shows the largest decay rate in the Cu~2\textit{p}\textsubscript{3/2} signal, followed by sample 10Ti and then 5Ti. At the end of the measurement window, the Cu~2\textit{p}\textsubscript{3/2} signal intensity has decreased by approximately 2.8, 8.8, and 32.3~\% for samples 5Ti, 10Ti, and 15Ti, respectively. \par
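Although the attenuations above are not converted to thicknesses in this work, a rough back-of-envelope estimate of the effective Ti overlayer thickness follows from the standard exponential attenuation law $I/I_0 = \exp(-d/(\lambda\cos\theta))$, assuming a uniform overlayer and normal emission. The IMFP value below is illustrative only, not taken from the QUASES calculation for this system:

```python
from math import log

def overlayer_thickness(i_ratio, imfp_nm, cos_theta=1.0):
    """Effective overlayer thickness d (nm) from substrate attenuation,
    I/I0 = exp(-d / (lambda * cos(theta)))."""
    return -imfp_nm * cos_theta * log(i_ratio)

# end-of-window Cu 2p3/2 attenuations for samples 5Ti, 10Ti, 15Ti
ratios = [1 - 0.028, 1 - 0.088, 1 - 0.323]
imfp = 1.0  # nm, assumed IMFP of Cu 2p electrons in the Ti overlayer (illustrative)

d_eff = [overlayer_thickness(r, imfp) for r in ratios]  # nm, increases with Ti content
```

Even with an approximate IMFP, the estimate shows the expected ordering: the 32.3\,\% attenuation for 15Ti corresponds to an effective overlayer several times thicker than that of 5Ti.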
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.95\linewidth,keepaspectratio]{Figures/Quant_Composite_DC_Correct_ADD.png}
\caption{Relative area intensities measured as a function of time, \textit{t} collected across the measurement window for all three samples, including (a) Cu, (b) Ti, and (c) W profiles, determined from peak fitting the Cu~2\textit{p}\textsubscript{3/2}, Ti~1\textit{s} and W~4\textit{d} core level spectra, respectively, at each spectral cycle. Here, $t$~=~0~h refers to the start of the measurement window. The yellow-filled marker for each dataset refers to the time when the 673~K holding period commences (i.e. data points before and after the marker refer to Stage~\textbf{2} and Stage~\textbf{3} of the annealing process). Vertical guidelines are also in place to mark this point for each sample. For Cu, the measured total Cu~2\textit{p}\textsubscript{3/2} areas are normalised relative to the initial raw area (I\textsubscript{0}) of their respective sample (i.e. I/I\textsubscript{0}). For Ti, the measured total raw Ti~1\textit{s} signal area for each sample is first normalised relative to the raw area of the Cu~2\textit{p}\textsubscript{3/2} core level measured during the same spectral cycle and then afterwards the resultant Ti~1\textit{s}/Cu~2\textit{p}\textsubscript{3/2} area is normalised relative to the final raw intensity of sample 15Ti (i.e.~I/I\textsubscript{F}). The W accumulation profile was determined by normalising the measured total raw W~4\textit{d} spectral areas following the method used for the Ti~1\textit{s} normalisation (i.e.~I/I\textsubscript{F}).}
\label{fig:Composite_Quant}
\end{figure*}
\subsubsection{Titanium} \label{sec:Ti}
The Ti~1\textit{s} core level spectra collected across the 5~h 673~K holding period (Stage \textbf{3}) are displayed in Fig.~\ref{fig:CLs_673K_Ti1s}, with the BE positions of the main signals annotated (see Supplementary Information VII and VIII for the equivalent Ti~2\textit{p} core level spectra and heat maps of the Ti~1\textit{s} spectra collected across the measurement window, respectively). \par
Fig.~\ref{fig:CLs_673K_Ti1s} shows that by the time the 673~K holding period starts, a Ti~1\textit{s} peak is observed across all three samples, and its intensity continually increases during the 5~h holding period. This confirms that the onset of diffusion occurs prior to Stage \textbf{3} of the annealing process, as assumed during the discussion of the Cu profile. Significant differences in the intensity of the Ti~1\textit{s} spectra as a function of Ti concentration are observed, with sample 15Ti showing a considerably more intense peak than samples 10Ti and 5Ti (note the $\times$30 magnification of the 5Ti spectra). Notably, the spectral line shape also differs across the samples, indicating a change in the chemical state of the accumulated Ti. \par
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth,keepaspectratio]{Figures/673K_Composite_Ti1s.png}
\caption{Ti~1\textit{s} core level spectra collected during the 673~K holding period (Stage \textbf{3}) for sample (a) 5Ti, (b) 10Ti, and (c) 15Ti. Spectra for each core level are plotted over the same $y$-axis scale to show the differences in intensity across the three samples. The spectra have not been normalised, but a constant linear background has been removed. Additionally, spectra recorded every other spectral cycle are displayed to aid with the interpretation of the data. For sample 5Ti, the spectra are also shown magnified by $\times$30 to aid with viewing. The legend displayed in (b) also applies to (a) and (c). Here, $t$~=~0~h refers to the start of the 5~h holding period.}
\label{fig:CLs_673K_Ti1s}
\end{figure*}
All spectra exhibit a lower BE feature between 4964.2 and 4965.1~eV, corresponding to metallic Ti in varying environments (labelled as Ti(0)). As the Ti~1\textit{s} core level is not as widely studied as the Ti~2\textit{p}, owing to the need for hard X-ray sources, only a handful of publications exist, with reported BEs varying considerably.~\cite{hagstrom1964extension, Nordberg1966, Diplas_2001, diplas2001electron, MOSLEMZADEH2006129, Woicik_2015, Renault_2018, RISTERUCCI201778, Regoutz_2018} The BE positions of the Ti(0)~1\textit{s} peak observed in the present work fall within the literature range for metallic Ti, and the asymmetric line shape of the peak, which can be clearly observed in Figs.~\ref{fig:CLs_673K_Ti1s}(b) and (c), is commensurate with this assignment. An asymmetric line shape is a hallmark of the core level spectra of many transition metals.~\cite{HUFNER1975417}
The 10Ti and 15Ti samples show a small BE difference of 0.2~eV, which could be attributed to differences in the Ti:Cu and/or Ti:O ratio at the evolving surface. In contrast, the BE position in the 5Ti spectra is considerably lower, with a -0.9~eV shift relative to the BE position observed in the spectra of sample 10Ti. This shift can be attributed to the distinctly different surface configuration of this sample due to the dominance of Ti-O environments and the co-diffusion of tungsten, both of which will be discussed later. Moreover, the quantity of Ti diffused to the surface is very small for sample 5Ti, and therefore the shift could also be due to strong surface effects, with far fewer nearest neighbours being Ti, leading to a negative shift in BE position.~\cite{CHOPRA1986L311, Kuzmin_2011} \par
During the 673~K holding period, the nature of the accumulated Ti for samples 10Ti and 15Ti is predominately metallic, given that a single asymmetric peak is visible (see Figs.~\ref{fig:CLs_673K_Ti1s}(b) and (c)). The accumulated Ti for sample 5Ti, shown in Fig.~\ref{fig:CLs_673K_Ti1s}(a) is strikingly different as the intensity of the lower BE metallic peak is overshadowed by a large, fairly symmetric peak at approximately +4.5~eV from the Ti(0)~1\textit{s} peak. This peak, labelled as Ti(IV)~1\textit{s}, is attributed to Ti-O environments in the Ti 4+ oxidation state (i.e. TiO\textsubscript{2} like). Renault~\textit{et al.} report the Ti~1\textit{s} BE position of the TiO\textsubscript{2} environment on a TiN film at 4968.8~eV,~\cite{Renault_2018} which agrees well with the value reported here. Therefore, unlike samples 10Ti and 15Ti, the Ti accumulated at the surface of sample 5Ti is not predominately metallic but oxidic. Additionally, there is a shoulder on the lower BE side of this Ti(IV)~1\textit{s} peak (marked with an asterisk, *), which is attributed to lower valence states of Ti (i.e. 2+, 3+) that may also form due to the limited quantity of oxygen expected to be present (see Supplementary Information IX for a peak fit analysis of the spectra highlighting the presence of such environments). This shoulder increases in intensity with increasing annealing duration, and at the end of the 5~h period, a distinct Ti(0)~1\textit{s} peak is difficult to observe.\par
To aid with the interpretation of the Ti~1\textit{s} spectra, as well as validate the chemical state assignments made so far, the Ti~2\textit{p} spectra are used in parallel (see Supplementary Information VII). The Ti~2\textit{p} spectra for samples 10Ti and 15Ti show a doublet peak with an asymmetric line shape at 454.5 and 460.6~eV (SOS = 6.1~eV), in agreement with metallic Ti.~\cite{TANAKA1990429, Kuznetsov_1992} For sample 5Ti, three peaks are identified at 453.8, 459.0, and 464.8~eV. The lowest BE peak corresponds to Ti~2\textit{p}\textsubscript{3/2} of Ti(0), whereas the other two correspond to the doublet of Ti oxide in the 4+ oxidation state (SOS = 5.8~eV), labelled as Ti(IV) (with the Ti(IV)~2\textit{p}\textsubscript{3/2} peak overlapping the Ti(0)~2\textit{p}\textsubscript{1/2} peak). These BE positions and the SOS of the Ti(IV) oxide doublet match well with literature values.~\cite{Diebold_1996, Regoutz_2016}\par
A shift of the lower BE Ti(0)~2\textit{p}\textsubscript{3/2} peak between the three samples is observed, with the peak positioned at 453.8, 454.7 and 454.4~eV for sample 5Ti, 10Ti and 15Ti, respectively. The relative shifts are similar to those observed in the Ti~1\textit{s} spectra. Moreover, the Ti~2\textit{p} spectra recorded for sample 5Ti also display a shoulder on the lower BE side of the main Ti(IV)~2\textit{p}\textsubscript{3/2}, again reflecting what has been observed in the Ti~1\textit{s} spectra, suggesting the presence of lower valence oxidation states that may form during the reaction between Ti and oxygen.~\cite{POUILLEAU1997235, MCCAFFERTY199992} Overall, this confirms the peak assignments made using the Ti~1\textit{s} core level are valid and shows the importance of using multiple core levels to have confidence in the assignment of chemical states.\par
The observation of almost completely oxidised Ti on the surface of sample 5Ti is of interest, given that these measurements were conducted under ultra-high vacuum (UHV) conditions and annealed in-situ. The level of observed oxidation cannot be explained by Ti gettering residual oxygen from the analysis chamber as the quantity present in the chamber is insufficient to promote oxidation of Ti to the extent observed. Furthermore, as the sample is heated during the measurement, the sticking coefficients for adsorbed gases are greatly reduced. An alternative source of oxygen is residual oxygen within the Cu film, whether that be intrinsic to the film (i.e. incorporated during deposition) or that the sputtering process prior to annealing did not fully remove the native oxide layer that formed during the exposure of the samples to the atmosphere. From the room temperature reference survey spectra found in Supplementary Information VI, a small intensity O~1\textit{s} signal is present. Laboratory-based SXPS depth profiling on the as-deposited samples was conducted to determine the oxygen level within the starting (i.e. pre-annealed) films and to validate this assumption further. Three sputter cycles (or etch steps) were completed before the underlying TiW signal became strong (see Supplementary Information X for the collected spectra). The profiles showed that within the Cu bulk, less than 2~rel. at.\% of O is present, i.e., $<$2~at.\% O, $>$98~at.\% Cu. Within the errors of the performed quantification, this amount would be enough to facilitate the observed Ti oxidation.\par
Overall, it is apparent that the oxidation of Ti is dependent on both the quantity and the rate of accumulation of Ti metal at the surface. Given the significant Ti oxidation observed for sample 5Ti, attributed to its low concentration of accumulated Ti, it would be expected that oxidation should also occur during the early stages of annealing of the higher concentration samples, when an equally low concentration of Ti is expected to accumulate. To confirm this and explore the oxidation of accumulated Ti further, Fig.~\ref{fig:01} displays the Ti~1\textit{s} core level spectra collected across the measurement window for sample 10Ti (equivalent figures for samples 5Ti and 15Ti can be viewed in Supplementary Information XI and XII, respectively). Fig.~\ref{fig:01}(a) shows that during the initial stages of annealing sample 10Ti ($\leq$603~K), the intensity first increases within the region of 4966-4970~eV. After 603~K the intensity increases below 4966~eV, where the metallic Ti(0)~1\textit{s} peak is located; this peak quickly becomes the dominant contribution to the total line shape and consequently masks the intensity of the environments seen on the higher BE side.\par
\begin{figure*}[ht]
\centering
\includegraphics[width=0.65\linewidth,keepaspectratio]{Figures/Stage_2_20Ti_arrow.png}
\caption{Initial stages of annealing (523-673~K) described by the Cu~2\textit{p}\textsubscript{3/2} and Ti~1\textit{s} core level spectra. (a) Raw Ti~1\textit{s} core level spectra collected (i.e. with no intensity normalisation) at each temperature increment, with +5~h referring to the data collected at the end of the 5~h 673~K holding period. (b) A magnified view of the raw Ti~1\textit{s} core level spectra collected between 523-623~K and a room temperature reference measurement on the same sample (i.e. before annealing) to highlight the Cu Auger contribution. (c) Normalised (0-1) Ti~1\textit{s} core level spectra to emphasise the change in line shape as a function of temperature. (d) Normalised (0-1) Cu~2\textit{p}\textsubscript{3/2} spectra taken at selected temperatures. (a) and (b), and (c) and (d) are plotted on the same $y$-axis scale, respectively (note the $\times$12.5 magnification of the $y$-axis scale of (b)).}
\label{fig:01}
\end{figure*}
From Fig.~\ref{fig:CLs_673K_Ti1s}(a), we know that the 4966-4970~eV region corresponds to Ti-O environments, namely the Ti(IV) oxidation environment, suggesting that even for sample 10Ti, during the initial stages of annealing when the accumulated Ti concentration is low, oxidation of Ti metal occurs. This region will be referred to as Ti-O environments in the following discussion. Fig.~\ref{fig:01}(b) further emphasises the development of Ti-O environments by focusing on the spectra collected between 523-623~K. From this, it is clear that Ti-O environments evolve first, and then after 603~K the Ti(0)~1\textit{s} peak appears due to the continuing diffusion of Ti metal from the TiW layer. It should be noted that the Cu~LMM Auger peak is also present in this region; however, given that the main Cu~2\textit{p}\textsubscript{3/2} core level peak decreases with annealing duration and temperature, the observed increase in spectral intensity in this region cannot be explained by any interference from the Auger peak.\par
The transition from predominantly Ti oxide to metal is evident in Fig.~\ref{fig:01}(c), showing the Ti~1\textit{s} spectra normalised to the maximum peak height. This figure shows that the main peak shifts towards lower BE across the temperature range of 623-673~K (highlighted with an arrow), accompanied by a decrease in the relative intensity of the Ti-O region. The observed shift is due to the emergence of the Ti(0)~1\textit{s} metal peak and the overall reduction of the Ti-O contribution to the total spectral line shape. Lastly, Fig.~\ref{fig:01}(d) displays the Cu~2\textit{p}\textsubscript{3/2} spectra recorded at different temperatures across the measurement window, and no discernible change is observed in the spectra. Additionally, Supplementary Information XIII shows that the same observation holds when comparing the Cu~2\textit{p}\textsubscript{3/2} line shape across all three samples. This indicates that only the Ti, not the Cu, is undergoing changes to its chemical state at the developing interface.\par
Therefore, oxidation of the surface accumulated Ti is also observed in sample 10Ti but is more evident during the initial stages of annealing where the rate of metal Ti diffusion and quantity of accumulated Ti is small. The same holds true for sample 15Ti as seen in Supplementary Information XII.
Beyond the qualitative analysis of the Ti~1\textit{s}/2\textit{p} spectra, an accumulation profile of Ti at the Cu surface across the measurement window can be obtained. The Ti accumulation profiles for the samples were extracted from the Ti~1\textit{s} core level spectral areas and are displayed in Fig.~\ref{fig:Composite_Quant}(b) (the equivalent Ti~2\textit{p} profile can be found in Supplementary Information XIV). Before discussing these profiles, it is important to reiterate that they represent changes in the quantity of surface-accumulated Ti with respect to time rather than temperature, although with increasing time the temperature also rises.\par
The temperature at which Ti is first observed at the Cu surface (i.e.~the onset) is difficult to identify with full confidence, as the signal is very small, especially for samples 5Ti and 10Ti. For these two samples, a Ti signal first becomes clearly detectable in the temperature range 553-563~K (i.e. within the first two hours of Stage \textbf{2}). The detection of these small Ti signals was only possible through analysing the Ti~1\textit{s} core level, as it was significantly more intense and sharper than the Ti~2\textit{p} (Supplementary Information XV provides a comparison of the Ti~2\textit{p} and Ti~1\textit{s} measured at the same point to highlight this issue). In contrast, for sample 15Ti, it is clear from Fig.~S15(b) in Supplementary Information XII that Ti is observed from the start of the measurement window (i.e.~523~K) and may have even begun to accumulate during Stage \textbf{1} of the annealing process. \par
The Ti profile displayed in Fig.~\ref{fig:Composite_Quant}(b) shows that with increasing Ti concentration within the TiW film, greater out-diffusion of Ti is observed and thus, greater accumulation of Ti on the Cu surface occurs. From the profile, it is apparent that the rate of diffusion and the quantity of accumulated Ti differ significantly across the three samples. Focusing on the last data point in the Ti profile at the end of the 673~K holding period, the Ti~1\textit{s}/Cu~2\textit{p}\textsubscript{3/2} area ratios of samples 5Ti and 10Ti are 3.7$\pm$0.5~\% and 18.2$\pm$0.5~\%, respectively, of that of sample 15Ti. This indicates that a linear relationship between the Ti concentration in the film and the quantity of accumulated Ti on the Cu surface does not exist (i.e. they do not scale proportionally).\par
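The non-proportionality can be made explicit with a short arithmetic check. The sketch below (Python) uses the end-of-hold area ratios quoted above and the measured film compositions (5.4, 11.5 and 14.8~at.\% Ti, from the as-deposited quantification); the "linear expectation" column is our own construction for comparison, not a quantity from the analysis:

```python
# Compare measured surface Ti accumulation (relative to sample 15Ti) with
# what a strictly proportional scaling from the film composition would give.
film_conc = {"5Ti": 5.4, "10Ti": 11.5, "15Ti": 14.8}   # at.% Ti in the TiW film
accum_rel = {"5Ti": 3.7, "10Ti": 18.2, "15Ti": 100.0}  # % of the 15Ti Ti 1s/Cu 2p3/2 ratio

for s in film_conc:
    expected = 100.0 * film_conc[s] / film_conc["15Ti"]  # strictly linear scaling
    print(f"{s}: measured {accum_rel[s]:.1f}% vs linear expectation {expected:.1f}%")
```

For samples 5Ti and 10Ti the measured values fall far below the linear expectation, consistent with the threshold behaviour discussed below.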
Sinojiya~\textit{et al.} studied similar Ti$_x$W$_{1-x}$ films across a composition range and observed that above a certain Ti concentration threshold, segregation of Ti toward the grain boundaries was favoured, and this enrichment increased with increasing Ti concentration.~\cite{Sinojiya_2022} Additionally, they observed that the change in Ti concentration not only enhances the segregation of Ti but is also accompanied by a change in stress, microstructure, and grain boundary density within the TiW films. A columnar grain boundary structure with a relatively higher grain boundary density was also observed at higher concentrations. Therefore, in our case, it is possible that for sample 15Ti a greater quantity of Ti was already segregated from the TiW grains within the as-deposited film, or that annealing promoted greater segregation compared to samples 5Ti and 10Ti, which consequently led to the differences observed in the Ti accumulation profiles between the three samples. Furthermore, based on the work of Sinojiya~\textit{et al.}, the expected differences in microstructure across samples 5Ti, 10Ti and 15Ti will also contribute to the changes observed in the Ti diffusion profile, as properties such as grain boundary density will affect the rate of diffusion. \par
The Ti accumulation profiles displayed in Fig.~\ref{fig:Composite_Quant}(b), collected across the measurement window for all three samples, exhibit two different diffusion regimes. The first regime occurs before the 673~K target is reached (i.e. during Stage \textbf{2}), wherein a rapid, exponential-like increase in intensity occurs as the temperature is ramped. Once the 673~K target is reached (i.e. during Stage \textbf{3}), the second regime occurs, wherein the diffusion rate decelerates and the signal starts to plateau. A plateau is observed for sample 5Ti, and signs of a plateau are present for sample 10Ti by the end of the measurement window. In contrast, the profile for sample 15Ti does not show signs of plateauing, indicating that Ti continues to accumulate at the Cu surface under the temperature and measurement window tested in this experiment. By fitting the linear portions of the Ti~1\textit{s} profile collected during Stages \textbf{2} and \textbf{3} of annealing, the rate of increase in the Ti~1\textit{s} signal intensity can be determined for each sample. The results of the linear fits of Stage \textbf{2} for samples 5Ti, 10Ti and 15Ti were found to be 0.7, 4.9 and 16.5, respectively, and for Stage \textbf{3} were found to be 0.2, 1.4 and 9.7, respectively (error estimated to be $\pm$20\%). These values highlight the dramatic decrease in the Ti accumulation rate during Stage \textbf{3} of annealing. Multiple processes could be responsible for these changes in the accumulation rate. For instance, only a finite quantity of Ti may be available to segregate from the TiW grains; therefore, after annealing for several hours, a plateau is reached as no more Ti is available to diffuse.~\cite{Kalha_TiW_Cu_2022} Additionally, the accumulation appears to decelerate after the 673~K mark is reached. 
This deceleration may imply that, when the sample is held at a constant temperature rather than subjected to a temperature ramp, the rate of diffusion levels off as the system approaches a steady state under the constant thermal input. \par
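The two-regime rate extraction described above can be sketched as follows. The time and intensity values are synthetic placeholders rather than the measured profile, and the split between ramp and hold at a nominal hold time is an assumption for illustration only:

```python
import numpy as np

# Fit the linear portions of a Ti 1s accumulation profile during the
# temperature ramp (Stage 2) and the 673 K hold (Stage 3).
# All time/intensity values below are synthetic placeholders.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])  # time (h)
I = np.array([0.0, 0.5, 1.5, 3.5, 7.0, 8.0, 8.8, 9.4, 9.8])  # Ti 1s area (arb.)
t_hold = 4.0  # assumed time at which the 673 K target is reached

ramp = t <= t_hold
hold = t >= t_hold
rate_ramp, _ = np.polyfit(t[ramp], I[ramp], 1)  # Stage 2 slope (arb./h)
rate_hold, _ = np.polyfit(t[hold], I[hold], 1)  # Stage 3 slope (arb./h)
print(f"Stage 2 rate: {rate_ramp:.2f} arb./h, Stage 3 rate: {rate_hold:.2f} arb./h")
```

With these placeholder data the Stage 3 slope comes out well below the Stage 2 slope, mirroring the deceleration reported in the text.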
\subsubsection{Tungsten}
Fig.~\ref{fig:CLs_673K_W4d} displays the W~4\textit{d} core level spectra collected for all samples during the 5~h 673~K holding period (Stage \textbf{3}). W is not observed within this period for samples 10Ti and 15Ti; however, it is detected for sample 5Ti, whose TiW film contains the lowest Ti concentration. This confirms that W co-diffuses to the surface only for sample 5Ti, and given that it is already detected at $t$~=~0~h of the holding period, the diffusion likely occurred prior to Stage \textbf{3}. The BE position of the W~4\textit{d}\textsubscript{5/2} peak is 243.2~eV, in good agreement with metallic W.~\cite{Kalha_W_2022} Within the 5~h period, the surface-accumulated W signal does not increase with annealing duration, suggesting that the accumulation has plateaued and the diffusion has subsided. The presence of W at the Cu surface may also influence the oxidation behaviour of the accumulated Ti, as observed in the previous section.\par
\begin{figure}
\centering
\includegraphics[width=0.33\linewidth,keepaspectratio]{Figures/W4d_673K_15Ti_slim.png}
\caption{W~4\textit{d} core level spectra collected during the 673~K holding period (Stage \textbf{3}) for samples (a) 5Ti, (b) 10Ti, and (c) 15Ti. Spectra for each core level are plotted over the same $y$-axis scale to show the differences in intensity across the three samples. The spectra have not been normalised, but a constant linear background has been removed. Additionally, spectra recorded every other spectral cycle are displayed to aid with the interpretation of the data. For sample 5Ti (a), the inset shows a $\times$10 magnification of the spectra to aid with viewing. The legend is the same as that used in Fig.~\ref{fig:CLs_673K_Cu2p}(b) and Fig.~\ref{fig:CLs_673K_Ti1s}(b). Here, $t$~=~0~h refers to the start of the 5~h holding period.}
\label{fig:CLs_673K_W4d}
\end{figure}
Fig.~\ref{fig:Composite_Quant}(c) displays the relative accumulation profile of W at the Cu surface across the measurement window for all three samples. Due to the poor signal-to-noise ratio (SNR) of the W~4\textit{d} spectra, it is difficult to determine with confidence the exact temperature at which W is first observed for sample 5Ti. However, the signal becomes apparent at 553-563~K, similar to when Ti was first observed at the surface of the same sample. The poor SNR is also responsible for the large scatter in the accumulation profile, leading to an apparent area change greater than 100~rel.\%. Fitting the data points with an asymptotic curve shows that a plateau is reached when crossing from Stage \textbf{2} to Stage \textbf{3} of the annealing process, with the 673~K holding period profile flattening, similar to what was observed for the Ti profile. The observed plateau indicates that only a finite quantity of W is able to migrate from the barrier and that a steady state is reached within the measurement window explored. \par
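A minimal sketch of the asymptotic fit mentioned above, assuming a saturating-exponential model $I(t) = A\,(1 - e^{-t/\tau})$. The noisy data are synthetic stand-ins for the measured W~4\textit{d} profile, and a dependency-free grid search stands in for whatever fitting routine was actually used:

```python
import numpy as np

# Saturating-exponential ("asymptotic") model for surface accumulation
# that plateaus: I(t) = A * (1 - exp(-t/tau)).
def model(t, A, tau):
    return A * (1.0 - np.exp(-t / tau))

# Synthetic noisy profile (placeholder for the W 4d data).
t = np.linspace(0.0, 10.0, 30)
rng = np.random.default_rng(0)
I = model(t, 1.0, 2.0) + rng.normal(0.0, 0.05, t.size)

# Coarse least-squares grid search over (A, tau), in place of a full
# nonlinear fitter, to keep the sketch dependency-free.
A_grid = np.linspace(0.5, 1.5, 101)
tau_grid = np.linspace(0.5, 4.0, 141)
best = min(
    (np.sum((I - model(t, A, tau)) ** 2), A, tau)
    for A in A_grid for tau in tau_grid
)
_, A_fit, tau_fit = best
print(f"plateau A = {A_fit:.2f}, time constant tau = {tau_fit:.2f} h")
```

The fitted plateau value $A$ corresponds to the finite quantity of mobile W inferred in the text, and $\tau$ sets how quickly the steady state is approached.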
The diffusion of W is surprising as the vast majority of studies on TiW only report the out-diffusion of Ti. For example, even studies on pure W diffusion barriers,~\cite{Shen_1986, mercier1997, GUPTA1975362, wang_1994} or on a TiW barrier with a relatively low Ti concentration (4.9~at.\%)~\cite{Evans_1994} do not report any mobility of W. However, some studies observe W diffusion from a W or TiW barrier within thin film stacks at temperatures below 600$\degree$C, although no details are given on a possible reason as to why this occurs.~\cite{Christou_1975, Palmstrom_1985, ASHKENAZI1993746} \par
Based on the present results, it is hypothesised that the Ti concentration of the TiW film dictates the overall stability of the diffusion barrier. If it is too low (i.e. in the 5Ti sample), a small amount of W becomes mobile and is free to migrate through the Cu overlayer alongside Ti and accumulate at the surface. This suggests that Ti plays an active role in stabilising the barrier and achieving the desired microstructure necessary for good barrier performance. Therefore, tuning the Ti concentration to an optimum value can significantly improve the barrier performance.
\subsection{Elemental distribution across the in-situ annealed TiW/Cu bilayer} \label{DP}
From the in-situ annealing results, it is clear that under the conditions tested, the out-diffusion of Ti from TiW and through the Cu metallisation is observed for the two samples with the higher Ti concentrations (10Ti and 15Ti), whereas for the lowest Ti concentration sample (5Ti), both Ti and W diffuse through the copper metallisation. To quantify the elemental ratio of Cu, Ti, and W across the metallisation, depth profiling using laboratory-based SXPS was conducted on the in-situ annealed samples (i.e. post-mortem). Survey spectra collected at each etch cycle for all three samples can be found in Supplementary Information XVI, showing the change in composition and the transition between the Cu overlayer and the TiW sublayer.
\begin{figure*}[ht]
\centering
\includegraphics[keepaspectratio, width=0.95\linewidth]{Figures/Depth_Profiles.png}
\caption{Post-mortem laboratory-based SXPS sputter depth profiles collected across samples (a) 5Ti, (b) 10Ti and (c) 15Ti after in-situ annealing at beamline I09. Etch Cycle 0 refers to the spectra collected on the as-received sample (i.e. before any sputtering). Horizontal guidelines are added to show the final Ti~at.\% for each sample, with the dotted, dashed and solid orange lines referring to samples 5, 10 and 15Ti, respectively.}
\label{fig:DP}
\end{figure*}
The depth profiles for the three samples displayed in Fig.~\ref{fig:DP} highlight the distribution of Ti across the Cu layer and confirm what was observed in the in-situ measurements, in that the quantity of Ti accumulated at the Cu surface increases as the Ti concentration of the film increases. The profiles further confirm that Ti is found throughout the Cu film after annealing. However, its distribution is not uniform, with more Ti observed at the Cu/air and TiW/Cu interfaces. Despite the strong out-diffusion, distinct Cu and TiW zones are still observable in the depth profiles, showing that the TiW/Cu bilayer has not failed when stressed under these conditions.\par
Several studies on Cu/Ti bilayer films have identified that a reaction between the two films can occur as low as 325$\degree$C, leading to the formation of intermetallic CuTi and Cu\textsubscript{3}Ti compounds at the interface.~\cite{Liotard_1985, Li_1992, Apblett_1992} As shown in Fig.~\ref{fig:01}(c), the shifts observed for the Ti~1\textit{s} core line are representative of a changing oxide-to-metal ratio rather than the formation of an intermetallic compound, whereas the Cu~2\textit{p}\textsubscript{3/2} spectra displayed in Fig.~\ref{fig:01}(d) show no change in the line shape. If an intermetallic compound were to form, one would expect some systematic change to the spectra with increasing annealing duration and temperature, or for samples with a higher Ti concentration in the TiW film, as these cause the greatest surface enrichment of Ti on the Cu. The possibility of such a reaction is difficult to assess from the core level spectra alone; the depth profiles can aid with this discussion. At etch cycle 0 (i.e. the as-received surface), the Ti:Cu ratio for sample 15Ti is 7.5:92.5. This value may be slightly skewed, as the surface is oxidised, so there may be additional diffusion of Ti across the metal/oxide interface, and a carbon surface layer is present, which will affect the quantification. Nevertheless, this ratio is insufficient to form the stoichiometric CuTi or Cu\textsubscript{3}Ti intermetallic phases reported in previous studies on the Ti/Cu interface.~\cite{Liotard_1985} Therefore, based on this literature, the presented spectra and the quantified Ti:Cu ratio, a reaction between Cu and Ti at the developing Cu/Ti interface does not occur, owing to the relatively small amount of diffused Ti; this may explain why no systematic shifts in the core level spectra commensurate with a Cu-Ti reaction were observed. 
However, it should be noted that it may not be possible to observe intermetallic compounds as (a) the quantity of diffused Ti is very small, and (b) the Cu~2\textit{p}\textsubscript{3/2} core line is known to have small chemical shifts.~\cite{Chawla1992DiagnosticSF}\par
In terms of W, the depth profiles shown in Fig.~\ref{fig:DP} confirm that W is only observed at the Cu surface for sample 5Ti and is not present at the surface or within the Cu bulk for samples 10Ti and 15Ti. Fig.~\ref{fig:DP}(a) shows that for sample 5Ti, the W profile is fairly constant across etch cycles 0-3, suggesting that W is homogeneously distributed throughout the Cu metallisation and is not accumulated at the Cu/air interface like Ti. Quantification of the Cu, Ti and W signals reveals that at the surface of sample 5Ti (etch cycle 0), the composition is 97.9 (Cu), 0.9 (Ti), and 1.2 (W)~rel. at.\%, showing that significant W diffusion has occurred. \par
Fig.~\ref{fig:DP} shows that the Cu signal tends towards 0~rel. at.\% for all samples when the interface is reached. However, Cu is still detected at the deepest point of the depth profile, with the composition at etch cycle 17 calculated to be 0.1 (Cu)/99.9 (Ti+W), 0.7 (Cu)/99.3 (Ti+W), and 1.4 (Cu)/98.6 (Ti+W)~rel. at.\% for samples 5Ti, 10Ti and 15Ti, respectively. Moreover, with increasing Ti concentration, the element profiles broaden and their gradients toward the zone labelled ``interface'' reduce. This provides evidence that there is a degree of intermixing at the TiW/Cu interface, and for films with higher Ti concentrations a greater intermixing is observed, due to the larger atomic flux of Ti across the interface during annealing. Therefore, the out-diffusion of Ti from the TiW also promotes the down-diffusion of Cu into the TiW layer, and consequently the TiW and Cu layers bleed into each other. \par
To summarise, the depth profiles show that clear TiW and Cu zones remain across all samples despite the diffusion and intermixing that occurs during annealing. Although the concentration of Cu observed at the deepest point of the depth profiles increases when the concentration of Ti in the TiW increases, it is difficult to determine how deep the Cu diffuses, as the measurement point of the last depth profile etch cycle is still very much at the surface of the 300~nm thick TiW film. However, given the low concentration of Cu detected at this point ($\leq$1.4~at.\%), and the fact that distinct Cu and TiW zones still remain, one can be confident that under the conditions tested, the TiW barrier has not failed, and the majority of Cu is held above the barrier.\par
\section{Conclusion}
The thermal stability of the TiW barrier in conjunction with a Cu metallisation overlayer was evaluated in real-time using a combination of SXPS and HAXPES, annealing the sample in-situ to a target temperature of 673~K. The primary mode of degradation was the segregation of Ti from the TiW barrier and its diffusion to the copper surface to form a surface overlayer. The concentration of Ti in TiW was shown to have a significant influence on the thermal stability of the TiW barrier. Two thresholds are observed when moving across the TiW composition window tested here: (I) below a certain concentration of Ti, W gains mobility, suggesting that the incorporation of Ti stabilises W, and (II) above a certain concentration of Ti, the diffusion drastically increases, suggesting that at higher concentrations grain boundary segregation of Ti from the TiW grains is favoured, resulting in significantly more out-diffusion of Ti. The post-mortem depth profiles validate the effectiveness of TiW diffusion barriers: despite the degradation observed during annealing, the Ti depletion is not significant enough to lead to failure of the barrier, and distinct Cu and TiW zones are still present. Overall, it is clear that the composition heavily dictates the stability of TiW, but under the conditions tested, all three barrier compositions remain effective at suppressing the permeation of copper. Based on this, the TiW alloy can cement itself as an excellent diffusion barrier to incorporate into future device technologies.
\medskip
\textbf{Supporting Information} \par
The Supplementary Information includes room temperature reference spectra, heat maps of the Ti~1\textit{s} spectra collected across the measurement window, and the Ti~2\textit{p} spectra collected for all samples during the 673~K holding period. Additionally, core level spectra collected for samples 5Ti and 15Ti during the 523-673~K annealing period, survey spectra from the laboratory-based SXPS depth profile, information on the residual level of oxygen within the Cu films from laboratory-based SXPS, and a comparison of the Ti~2\textit{p} and Ti~1\textit{s} core levels can be found in the Supplementary Information. Information on the peak fitting procedures used, and the method to determine and monitor the thermal broadening is also available in the Supplementary Information.
\medskip
\textbf{Acknowledgements} \par
C.K. acknowledges the support from the Department of Chemistry, UCL. A.R. acknowledges the support from the Analytical Chemistry Trust Fund for her CAMS-UK Fellowship. This work was carried out with the support of Diamond Light Source, instrument I09 (proposal NT29451-1 and NT29451-2). The authors would like to thank Dave McCue, I09 beamline technician, for his support of the experiments.
\medskip
\bibliographystyle{MSP}
\section{Peak fit analysis of as-deposited TiW spectra}
To determine the Ti:W ratio of the as-deposited samples, the Ti~2\textit{p} and W~4\textit{f} core level spectra were collected with laboratory-based SXPS, after both ex-situ and in-situ preparation of the Si/SiO\textsubscript{2}/TiW/Cu samples. Samples were first cleaved into 5$\times$5~mm\textsuperscript{2} pieces using a diamond-tipped pen, after which they were submerged in a dilute solution of HNO\textsubscript{3} (5:1 65~\% conc. HNO\textsubscript{3}: Milli-Q water) for 10~min. This was carried out to selectively remove the copper metallisation layer without affecting the TiW layer. The samples were then sputter cleaned in-situ to remove contamination introduced during the ex-situ preparation stages and any oxide formation. The survey spectra collected after the in-situ preparation are displayed in Fig.~\ref{fig:Sputter_Surv}.\par
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.8\linewidth]{Figures_SI/TIW_Quant_Surveys_SI.png}
\caption{SXP survey spectra collected before and after in-situ preparation of samples, including (a) survey spectra collected for sample 10Ti after each etch step, and (b) survey spectra collected for all three samples at the end of the in-situ preparation method. Spectra are normalised (0-1) to the height of the most intense peak and are vertically offset. VB = valence band.}
\label{fig:Sputter_Surv}
\end{figure}
Once the sputter cleaning was performed, a depth profile using a focused Ar\textsuperscript{+} source was conducted for each sample to determine the Ti:W concentration profile across the film. The depth profile consisted of six etching cycles, each lasting 30~min, with the Ar\textsuperscript{+} ion gun operated at a 500~eV accelerating voltage and 10~mA emission current. After six etch steps, the SiO\textsubscript{2} layer was detectable. The Ti~2\textit{p} and W~4\textit{f} core level spectra were collected at each etch step. Representative Ti~2\textit{p} and W~4\textit{f} spectra, along with peak fits, are displayed in Fig.~\ref{fig:W4f_Ti2p}. Spectra were aligned to the intrinsic Fermi energy ($E_F$) of the respective sample. A systematic shift toward higher binding energy (BE) is observed in the W~4\textit{f} spectra with decreasing Ti, a trend that is also observed in the Ti~2\textit{p} spectra. \par
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.6\linewidth]{Figures_SI/01_03_05_Quant.png}
\caption{SXP core level spectra collected for all samples after the in-situ removal of the copper capping layer and oxide layer, and representative peak fits of the (a) W~4\textit{f} and (b) Ti~2\textit{p} core level spectra. Spectra are normalised to the W~4\textit{f}\textsubscript{7/2} peak height of the respective sample. Peak fits of the W~4\textit{f} and Ti~2\textit{p} core levels for spectra collected on the 10Ti sample are displayed in (c) and (d), respectively.}
\label{fig:W4f_Ti2p}
\end{figure}
To determine the Ti:W ratio, the Ti~2\textit{p} core level spectra collected across the entire depth profile were first fitted with the Smart-type background implemented in the Avantage software package, which is a Shirley-type background with the additional constraint that the background must not exceed the data points. The Smart background was chosen because at lower Ti concentrations the background on the lower BE side of the Ti~2\textit{p} begins to rise, due to the increase in intensity of the neighbouring W~4\textit{p}\textsubscript{3/2} plasmon, and this hampers the effective use of the Shirley-type background, as it would cut the data points. Due to the complexity of the Ti~2\textit{p} core level, the total area was fitted rather than isolating the contributions from the two spin states. The average Ti~2\textit{p} relative atomic sensitivity factor (RASF) was applied to the resultant fitted area to quantify the region. For W~4\textit{f}, a Shirley-type background was implemented and three peaks were added for the W~4\textit{f}\textsubscript{7/2}, W~4\textit{f}\textsubscript{5/2} and W~5\textit{p}\textsubscript{3/2} core lines. It is assumed that after sputtering only the metallic tungsten environment is present. The W~4\textit{f} peaks were given asymmetry to account for core-hole coupling with conduction band states and were constrained to have the same full width at half maximum (FWHM) and line shape as each other.~\cite{HUFNER1975417} The Avantage software package uses a least-squares fitting procedure to determine a suitable Lorentzian/Gaussian (L/G) mix, tail mix, FWHM, and tail exponent of the peaks. Additionally, the area ratio of the 4\textit{f} doublet peaks was set so that the lower spin state peak had an area 0.75 times that of the higher spin state peak (i.e. a 3:4 area ratio). 
The same line shape (FWHM, L/G mix, tail mix, tail exponent and area ratio) was applied to all W~4\textit{f} spectra across the depth profile. Additionally, the W~5\textit{p}\textsubscript{3/2} peak was fitted with a pseudo-Voigt profile with a fixed L/G mix of 30\% Lorentzian and a variable FWHM constraint. The BE range of the backgrounds, the line shapes, and the FWHM constraints of the peaks were then applied to all spectra to be consistent across the sample set and the depth profiles. However, if the line shape was not constrained, the same value within error ($\pm$0.3~at.\%) was achieved. To determine the relative Ti:W ratio in at.\%, the RASF-corrected Ti~2\textit{p} spectral area was compared to the RASF-corrected W~4\textit{f}\textsubscript{7/2} spectral area. Fig.~\ref{fig:DP_Quant} displays the quantification results from the depth profiles along with the standard deviation across the film thickness. The three samples have an average Ti concentration relative to W of 5.4$\pm$0.3 (5Ti), 11.5$\pm$0.3 (10Ti) and 14.8$\pm$0.6~at.\% (15Ti). Furthermore, Fig.~\ref{fig:01_DPS} displays the spectra collected across the depth profile of sample 10Ti; the W~4\textit{f} line shape remains fairly constant across the first five etch steps, and only subtle changes are observed in the W~4\textit{f}/Ti~2\textit{p} area ratio, reflecting the values obtained from the quantification. The survey spectra displayed in Fig.~\ref{fig:01_DPS}(a) also clearly show how the depth profile penetrates through the TiW and into the substrate, as in the last three etches Si-O peaks first emerge, followed by Si peaks.
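The RASF-based quantification described above reduces to a simple area-weighting step: divide each fitted area by its sensitivity factor and normalise. The sketch below uses illustrative placeholder areas and sensitivity factors, not the values from the Avantage analysis:

```python
# RASF-corrected relative quantification of the Ti:W ratio.
# Areas and RASF values below are illustrative placeholders only.
areas = {"Ti2p": 1200.0, "W4f7/2": 5400.0}  # fitted peak areas (arb. units)
rasf = {"Ti2p": 1.8, "W4f7/2": 3.5}         # relative atomic sensitivity factors

corrected = {k: areas[k] / rasf[k] for k in areas}   # area / RASF
total = sum(corrected.values())
at_pct = {k: 100.0 * v / total for k, v in corrected.items()}
print(at_pct)  # relative Ti:W composition in at.%
```

The same normalisation extends directly to three elements (Cu, Ti, W) for the depth-profile compositions quoted in the main text.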
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=\linewidth]{Figures_SI/AD_Quant_DPs.png}
\caption{Ti:W relative quantification as a function of etching time (also referred to as sputter duration) across the three TiW films. The depth profiles of samples 5Ti, 10Ti and 15Ti are displayed in (a), (b), and (c), respectively. 0~min etch time refers to the first measured point in the depth profile. This was collected after the sample surface was sputter cleaned in-situ to remove the remnants of the ex-situ cleaning process, but before the first etching cycle of the depth profile. This measurement point is referred to as Etch 0.}
\label{fig:DP_Quant}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.66\linewidth]{Figures_SI/AD_DPs_spectra_01.png}
\caption{Spectra collected during the first five etch steps of the depth profile for sample 10Ti, including (a) survey, (b) W~4\textit{f}, and (c) Ti~2\textit{p} spectra. The survey spectra are normalised to the height of the maximum intensity peak, whereas the W~4\textit{f} and Ti~2\textit{p} spectra are normalised to the sum of the total W~4\textit{f}/5\textit{p}\textsubscript{3/2} and Ti~2\textit{p} areas. The dotted grey line in the survey spectra refers to the Etch 0 spectrum, and the survey spectra have been offset vertically. Etch 0 refers to the first measurement at sputtering time 0~min (i.e. after the in-situ cleaning but before the first depth profile etching cycle). As no Fermi edge or C~1\textit{s} was measured during the depth profiles, the BE scale is not calibrated and is plotted as recorded.}
\label{fig:01_DPS}
\end{figure}
\cleardoublepage
\section{Room temperature energy resolution}
The room temperature total energy resolution of the SXPS and HAXPES experiments at the synchrotron was determined by extracting the 16/84\% width of the Fermi edge of a polycrystalline gold foil. Fig.~\ref{fig:Au_Ef} displays the Fermi edges of the foil measured with SXPS and HAXPES at room temperature, fitted with a Boltzmann curve.
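The 16/84\% width extraction can be illustrated on an ideal Boltzmann (logistic) edge. This is a sketch of the width definition only, not the fitting code used for the measurements; the slope parameter below is arbitrary.

```python
import math

def width_16_84(w):
    """Analytic 16/84% width of an ideal Boltzmann (logistic) edge
    I(E) = A / (1 + exp((E - E_F)/w)): the 84% and 16% crossings lie
    at E_F -/+ w*ln(0.84/0.16), giving a width of 2*w*ln(5.25)."""
    return 2.0 * w * math.log(0.84 / 0.16)

def width_16_84_numeric(w, e_f=0.0):
    """Grid-search cross-check of the same width definition."""
    def edge(e):
        return 1.0 / (1.0 + math.exp((e - e_f) / w))
    energies = [e_f + (i - 50000) * 1e-4 for i in range(100001)]
    e_84 = max(e for e in energies if edge(e) >= 0.84)  # 84% crossing
    e_16 = min(e for e in energies if edge(e) <= 0.16)  # 16% crossing
    return e_16 - e_84
```

The 16\% and 84\% levels correspond to one standard deviation on either side of the Fermi energy for a Gaussian-broadened step, which is why this interval is used as the resolution measure.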
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures_SI/Au_Width.png}
\caption{Fermi edge (E\textsubscript{F}) spectra collected with (a) SXPS and (b) HAXPES on a polycrystalline gold foil at room temperature. The energy resolution is determined by extracting the 16/84\% width (i.e.\ one standard deviation on either side of the Fermi energy).}
\label{fig:Au_Ef}
\end{figure}
\cleardoublepage
\section{Sample Plate Holder}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures_SI/Sample_Holder.png}
\caption{Annotated image of the sample plate holder used for the in-situ annealing experiment at beamline I09.}
\label{fig:Sample_Plate}
\end{figure}
\cleardoublepage
\section{Temperature Profiles}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.42\linewidth]{Figures_SI/Combined_Profile.png}
\caption{Temperature profiles for all three samples. The start of the measurement window is indicated by the vertically dotted grey line, whereas the red dotted and dashed lines indicate the end of the measurement cycle for samples 5Ti/15Ti and 10Ti, respectively. The temperature profiles for samples 5Ti and 10Ti are near-identical and so overlap.}
\label{fig:T_profile}
\end{figure}
\cleardoublepage
\section{Energy resolution as a function of temperature}
In order to assess the effect of temperature on the thermal broadening of the collected spectra, the intrinsic Fermi edge of the sample (i.e. copper) was captured with SXPS and HAXPES at each spectral cycle. By extracting the 16/84\% width of the Fermi edge (as shown in Fig.~\ref{fig:Au_Ef}), the change in total energy resolution could be monitored with respect to temperature. According to M\"{a}hl~\textit{et al.}, the thermal broadening ($\gamma_f$) of a Fermi edge at temperature $T$ measured with XPS can be described by:
\begin{equation}
\gamma_f = 4\ln\left(\sqrt{2}+1\right)k_b T \;\approx\; \tfrac{7}{2}\,k_b T \,,
\end{equation}
where $k_b$ is the Boltzmann constant; approximating $k_b T$ as $\frac{T}{11600}$~eV (with $T$ in K) gives values of 90~meV and 200~meV for the thermal broadening at 300~K and 673~K, respectively.~\cite{MAHL1997197} Therefore, a change of 110~meV in the total energy resolution of this experiment is expected. Fig.~\ref{fig:Reso_T}(a) displays the change in Fermi edge width with respect to annealing temperature and duration during preliminary test measurements.\par
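The quoted broadenings follow directly from the expression above; the short check below is illustrative arithmetic only, using the CODATA value of $k_b$ in eV/K.

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K (approx. 1/11600)

def thermal_broadening_ev(temperature_k):
    """Fermi-edge thermal broadening after Maehl et al.:
    gamma_f = 4*ln(sqrt(2)+1)*k_b*T, approximately (7/2)*k_b*T."""
    return 4.0 * math.log(math.sqrt(2.0) + 1.0) * K_B_EV * temperature_k
```

This reproduces the $\approx$90~meV at 300~K and $\approx$200~meV at 673~K quoted above, i.e. an expected change of $\approx$110~meV.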
It can be seen in Fig.~\ref{fig:Reso_T}(a) that across the measured temperature range, on average the change in 16/84\% Fermi edge width is less than 60~meV. Considering everything remains constant during the measurement (i.e. pass energy, dwell time, analyser, geometry, sample) except for temperature, this change is representative of the thermal broadening. This value is slightly lower than the theoretical value, but this can be attributed to the assumptions made in the theoretical model and the error associated with the 16/84\% method. Additionally, Fig.~\ref{fig:Reso_T}(c) and (d) display the Fermi edge spectra at key temperatures measured in this experiment for sample 15Ti. The changes observed are minimal, with the hard X-ray-collected Fermi edges appearing more sensitive to temperature than the soft X-ray-collected edges.\par
Overall, the change in resolution is insignificant for the core level spectra as it falls below the energy resolution of the spectrometer. Therefore, when analysing the changes to the core level spectra for all samples, thermal broadening effects are negligible. Moreover, Fig.~\ref{fig:Reso_T}(b) displays the Cu~2\textit{p}\textsubscript{3/2} core level spectrum collected at selected temperatures. The room temperature spectrum is slightly broader than the higher temperature spectra, but the FWHM of the high-temperature spectra remains reasonably constant, in line with the changes observed when tracking the Fermi edge width. The broader room temperature spectrum and the slight asymmetry on the lower binding energy side can be attributed to surface contamination (i.e. remnant oxide contributions); when heated, the surface is cleaned, leading to a narrowing of the FWHM.
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=\linewidth]{Figures_SI/Resolution_Composite.png}
\caption{Energy resolution measurements as a function of annealing temperature and duration, including (a) the Fermi edge width collected with both soft (SX) and hard (HX) X-rays for sample 10Ti as a function of temperature during preliminary measurements, and (b) selected Cu~2\textit{p}\textsubscript{3/2} core level spectra collected with SXPS on sample 15Ti as a function of annealing temperature during this experiment, plotted on a relative BE scale and normalised to the maximum intensity to emphasise the change in peak FWHM. (c) and (d) display selected Fermi edge spectra collected as a function of annealing temperature with soft and hard X-rays, respectively. (c) and (d) are normalised to the maximum height (accounting for noise) of the Fermi edge and plotted on the same \textit{y}-axis scale. RT Ref. refers to the room temperature reference spectrum.}
\label{fig:Reso_T}
\end{figure}
\cleardoublepage
\section{Room temperature reference spectra}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.7\linewidth]{Figures_SI/Room_Temperature_Refs.png}
\caption{SXPS and HAXPES room-temperature reference spectra collected for as-deposited samples 5Ti, 10Ti and 15Ti after the surface was in-situ cleaned via argon sputtering, including (a) survey, (b) Cu~2\textit{p}\textsubscript{3/2}, (c) W~4\textit{d}, (d) Ti~2\textit{p} and (e) Ti~1\textit{s}, with the Ti~1\textit{s} collected with HAXPES and the others with SXPS. Spectra are normalised to the maximum height of the Cu~2\textit{p}\textsubscript{3/2} signal. Spectra collected on reference copper compounds (Cu, Cu\textsubscript{2}O) are also included, which were measured using the laboratory-based SXPS instrument.}
\label{fig:Refs_RoomT}
\end{figure}
To have confidence in the interpretation of the Cu~2\textit{p}\textsubscript{3/2} spectra, reference measurements were conducted using a laboratory-based SXPS instrument ($h\nu$ = 1.4867~keV) on a polycrystalline Cu foil (Alfa Aesar, 99.9985\% metals basis, 0.25~mm thick) and an anhydrous Cu\textsubscript{2}O powder (Cu\textsubscript{2}O, Sigma Aldrich, $\geq$99.99\% metals basis). The foil reference was sputter cleaned in-situ with a focused argon ion beam for 10~min, with the ion gun operating at 2~keV. The Cu\textsubscript{2}O powder was received in a sealed ampule under an argon atmosphere, and to minimise further oxidation (i.e. the formation of CuO) the sample was prepared in a glovebag under argon. The recorded Cu~2\textit{p}\textsubscript{3/2} spectra of these reference materials are overlaid on the room temperature reference spectra of samples 5Ti, 10Ti and 15Ti in Fig.~\ref{fig:Refs_RoomT}(b). The binding energy scale was calibrated to the intrinsic Fermi energy for the TiW/Cu samples and the Cu foil reference, whereas for Cu\textsubscript{2}O the scale was calibrated to adventitious carbon (284.8~eV).\par
Good agreement is observed between the Cu foil reference and the spectra recorded for the TiW/Cu samples. A very weak satellite is observed between 942 and 948~eV for the TiW/Cu samples; however, this is also present in the Cu foil reference, indicating that the native oxide contribution has been minimised as far as possible. The slight differences in Cu~2\textit{p}\textsubscript{3/2} FWHM between the foil reference and the TiW/Cu samples can be explained by the differences in total energy resolution between the synchrotron ($h\nu$ = 1.4~keV) and laboratory-based measurements, which were determined to be 330~meV and 600~meV, respectively. The laboratory-based SXPS instrument used for the collection of reference spectra was not the same as that used for the depth profiles described in the manuscript, hence the different energy resolution. \par
Cu Auger peaks are found to overlap with the measured Ti~2\textit{p} and Ti~1\textit{s} core levels when measured with $h\nu$ = 1.4 and 5.9~keV, respectively. The Auger peak appears at a BE position of $\approx$4967.0~eV in the Ti~1\textit{s} region and $\approx$457.0~eV in the Ti~2\textit{p} region, equating to a kinetic energy (KE) of $\approx$959.0~eV for both Auger peaks. Both appear at the same kinetic energy because of the strategic decision to tune the photon energies so that the Ti~1\textit{s} and Ti~2\textit{p} probing depths match. Possible Auger transition energies have been calculated and tabulated by Coghlan~\textit{et al.},~\cite{COGHLAN1973317} and the position of the Auger peak in the Ti~1\textit{s} spectra correlates with the Cu~L\textsubscript{1}M\textsubscript{1}M\textsubscript{4,5} transition calculated at 962~eV (KE). It is clear that these peaks are not due to titanium, as they possess neither the attributes of a core level peak nor the expected BE position of titanium metal/oxide in either the 2\textit{p} or 1\textit{s} spectrum. Aside from the Cu Auger peaks, the Ar~2\textit{p} core level peak is visible in the W~4\textit{d} region at approximately 241.0~eV, corresponding to implanted argon from the sputtering process. However, this peak is very small and does not affect the analysis of any changes to the W~4\textit{d} spectrum that may develop during annealing.
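The Auger assignment can be cross-checked with the photoemission relation $\mathrm{KE} = h\nu - \mathrm{BE}$. The sketch below is illustrative arithmetic only, using the approximate BE/KE values quoted above.

```python
def implied_photon_energy_ev(be_ev, ke_ev):
    """For a photoemission feature, KE = h*nu - BE, so h*nu = BE + KE.

    A core level sits at fixed BE, while an Auger peak sits at fixed
    KE, so the Auger's apparent BE shifts with photon energy.
    """
    return be_ev + ke_ev

# The same Auger KE (~959 eV) observed at BE ~4967 eV and ~457 eV
# implies photon energies of ~5.9 keV and ~1.4 keV, respectively.
```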
\cleardoublepage
\section{In-situ annealing Ti~2\textit{p}~core level spectra}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=\linewidth]{Figures_SI/673K_Composite_Ti2p.png}
\caption{Ti~2\textit{p} core level spectra collected during the 673~K holding period (Stage \textbf{3}) for sample (a) 5Ti, (b) 10Ti, and (c) 15Ti. Spectra for each core level are plotted over the same $y$-axis scale to show the differences in intensity across the three samples. The spectra have not been normalised but a constant linear background has been removed. Additionally, spectra recorded every other spectral cycle are displayed to aid with the interpretation of the data. The 5Ti spectra have been magnified by $\times$15 to aid with viewing. The legend displayed in (b) also applies to (a) and (c). Ti(0) and Ti(IV) refer to metallic Ti and titanium oxide in the 4+ oxidation state, respectively.}
\label{fig:Ti2p_core_levels}
\end{figure}
\cleardoublepage
\section{Heat map of Ti~1\textit{s} spectra collected over the measurement window}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=\linewidth]{Figures_SI/Ti1_Colour_Map.png}
\caption{HAXPES maps of the Ti~1\textit{s} core level collected across the entire measurement window, for sample (a) 5Ti, (b) 10Ti and (c) 15Ti. The spectra are aligned to the intrinsic Fermi energy of the respective sample, and their intensity is not normalised but plotted as-collected (after the subtraction of a constant linear background). The top panel displays the median spectrum collected across the measurement window and the right panel displays the point-by-point temperature profile as a function of time. Due to the large variation in spectral intensity between samples 5Ti and 15Ti, the spectra displayed here are on independent intensity scales and so the intensities should not be directly compared. Ti(0) and Ti(IV) refer to metallic Ti and titanium oxide in the 4+ oxidation state, respectively.}
\label{fig:Ti1s_heat}
\end{figure}
\cleardoublepage
\section{5Ti Ti~1\textit{s} peak fit analysis}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures_SI/v2_peak_fit_3ox.png}
\caption{Peak fit analysis of the Ti~1\textit{s} core level for sample 5Ti. The oxide peaks are constrained to have the same FWHM (2.2~eV) and Lorentzian/Gaussian mix (50\% Lorentzian), whereas the metal peak line shape was derived from peak fitting the 673~K spectra of sample 15Ti with one asymmetric line shape. A Shirley-type background was used, and the Cu~L\textsubscript{1}M\textsubscript{1}M\textsubscript{4,5} contribution was not removed.}
\label{fig:Ti1s_pf_5Ti}
\end{figure}
\cleardoublepage
\section{Residual oxygen within the as-deposited Cu film}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=\linewidth]{Figures_SI/O_traces.png}
\caption{Depth profile results across the three as-deposited TiW/Cu samples to determine the level of O within the bulk Cu film. Samples were sputtered using a focused Ar\textsuperscript{+} ion beam at 500~eV gun energy for 6~min, rastering over a 2$\times$2~mm\textsuperscript{2} area and measuring at the centre of the sputter crater. Three cycles of sputtering were conducted, equating to 18~min total sputtering time. (a) and (b) show the Cu~2\textit{p}\textsubscript{3/2} and O~1\textit{s} spectra, respectively, collected after the first, second and third etch steps for sample 5Ti only. Etch 0 refers to the as-received measurement (i.e. before any sputtering) and is not included here as the samples were stored and handled in air, so a thin native oxide and adventitious carbon layer were present. The quantification results of the O/(Cu+O) ratio at each of the three etch steps for all three samples are shown in (c). The spectra are aligned to the ISO standard BE value of metallic Cu~2\textit{p}\textsubscript{3/2} (932.62~eV)~\cite{Cu_ISO} and normalised to the total Cu~2\textit{p}\textsubscript{3/2} spectral area. After Etch 3, the TiW layer is reached and the Ti and W signals become dominant.}
\label{fig:O_traces}
\end{figure}
\cleardoublepage
\section{Early Stages of Annealing for Sample 5Ti}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.67\linewidth]{Figures_SI/Stage_2_15Ti.png}
\caption{Initial stages of annealing (523-673~K) described by the Cu~2\textit{p}\textsubscript{3/2} and Ti~1\textit{s} core level spectra for sample 5Ti. (a) Ti~1\textit{s} core level spectra collected (with no intensity normalisation) at each temperature increment, with +5~h referring to the data collected at the end of the 5~h 673~K holding period. (b) A magnified view of the Ti~1\textit{s} core level spectra collected between 523-623~K as well as a room temperature reference measurement on the same sample (prior to annealing) to highlight the Cu Auger contribution. (c) Normalised (0-1) Ti~1\textit{s} core level spectra to emphasise the change in line shape. (d) Normalised (0-1) Cu~2\textit{p}\textsubscript{3/2} spectra taken at selected temperatures. All data have been aligned to the intrinsic Fermi energy. (a) and (b), and (c) and (d) are plotted on the same $y$-axis scale.}
\label{fig:5Ti_early}
\end{figure}
\cleardoublepage
\section{Early Stages of Annealing for Sample 15Ti}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.67\linewidth]{Figures_SI/Stage_2_30Ti.png}
\caption{Initial stages of annealing (523-673~K) described by the Cu~2\textit{p}\textsubscript{3/2} and Ti~1\textit{s} core level spectra for sample 15Ti. (a) Ti~1\textit{s} core level spectra collected (with no intensity normalisation) at each temperature increment, with +5~h referring to the data collected at the end of the 5~h 673~K holding period. (b) A magnified view of the Ti~1\textit{s} core level spectra collected between 523-623~K as well as a room temperature reference measurement on the same sample (prior to annealing) to highlight the Cu Auger contribution. (c) Normalised (0-1) Ti~1\textit{s} core level spectra to emphasise the change in line shape. (d) Normalised (0-1) Cu~2\textit{p}\textsubscript{3/2} spectra taken at selected temperatures. All data have been aligned to the intrinsic Fermi energy. (a) and (b), and (c) and (d) are plotted on the same $y$-axis scale.}
\label{fig:15Ti_early}
\end{figure}
\cleardoublepage
\section{Cu~2\textit{p}\textsubscript{3/2} line shape changes}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures_SI/Comparison_01_03_05_Cu2p_rel.png}
\caption{Comparison of the Cu~2\textit{p}\textsubscript{3/2} spectral line shape of the three samples. The spectra presented were captured at the end of the 673~K holding period (i.e. 673~K + 5~h). The spectra are normalised (0-1) and aligned to the main peak maximum to make changes in the line shape easier to observe.}
\label{fig:Cu2p}
\end{figure}
\cleardoublepage
\section{In-situ annealing Ti~2\textit{p} concentration profile}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.4\linewidth]{Figures_SI/Ti2p_Quant.png}
\caption{Relative Ti concentration profile as a function of time, \textit{t}, collected across the measurement window for all three samples, determined from peak fitting the Ti~2\textit{p} core level spectra. The yellow-filled marker for each dataset refers to the time when the 673~K holding period commences; vertical guidelines also mark this point for each sample. The measured Ti~2\textit{p} signal intensity for each sample is first normalised relative to the area of the Cu~2\textit{p}\textsubscript{3/2} core level measured during the same spectral cycle, and the resultant Ti~2\textit{p}/Cu~2\textit{p}\textsubscript{3/2} area is then normalised relative to the final intensity of sample 15Ti (I\textsubscript{F}).}
\label{fig:Ti2p_Quant}
\end{figure}
\cleardoublepage
\section{Ti~2\textit{p}/1\textit{s} comparison}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.8\linewidth]{Figures_SI/Ti1s_Ti2p_early.png}
\caption{A comparison of the (a) Ti~1\textit{s}, and (b) Ti~2\textit{p} core level spectra recorded at 573~K (\textit{t} = 2~h) for sample 10Ti. Spectra are normalised to the signal-to-noise ratio. Guidelines are marked for the positions of the expected peaks. It is clear that the Ti~1\textit{s} is more sensitive to smaller concentrations of titanium than the Ti~2\textit{p}. Additionally, the nature of the secondary background for the Ti~2\textit{p} region means that quantification of this area is incredibly difficult and cannot be done reliably, whereas a standard XPS background can easily be applied to the Ti~1\textit{s} region.}
\label{fig:Ti1s_2p}
\end{figure}
\cleardoublepage
\section{Depth Profile Survey Spectra}
\begin{figure}[ht!]
\centering
\includegraphics[keepaspectratio, width=0.6\linewidth]{Figures_SI/Depth_Profiles_Surveys.png}
\caption{Survey spectra collected after each etch cycle during the post-mortem depth profile measurements for (a) 5Ti, (b) 10Ti, and (c) 15Ti samples. The top spectrum displayed in each sub-figure is taken on the as-received sample (i.e. no etch) and then the spectra collected after each cycle are stacked vertically below (going from blue to grey to black). Spectra coloured in blue are Cu-rich, black are W-rich and red is termed the ``interface'' as it marks the point where the Cu and W signals cross over in the depth profiles.}
\label{fig:DP_surveys}
\end{figure}
\cleardoublepage
\section{References}
\def\newsection#1{\section{#1}}
\else
\newtheorem{theorem}{Theorem}
\def\newsection#1{\section{#1}}
\fi
\def\Newsection#1#2{\setcounter{section}{#1} \addtocounter{section}{-1}
\newsection{#2} }
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{question}[theorem]{Question}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{observation}[theorem]{Observation}
\newtheorem{example}[theorem]{Example}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{thmdef}[theorem]{Theorem and Definition}
\newtheorem{deflem}[theorem]{Definition and Lemma}
\def\par\medskip\noindent {\sc Proof. }{\par\medskip\noindent {\sc Proof. }}
\def\proofof #1 {\par\medskip\noindent {\sc Proof of #1. }}
\def\par\medskip\noindent{\sc Sketch of proof. }{\par\medskip\noindent{\sc Sketch of proof. }}
\def\sketchof #1 {\par\medskip\noindent {\sc Sketch of proof of #1. }}
\def\framebox[10pt]{\rule{0pt}{3pt}}{\framebox[10pt]{\rule{0pt}{3pt}}}
\def\rule{0pt}{2pt}{\rule{0pt}{2pt}}
\def\hfill $\Box$ \par\medskip{\rule{0pt}{0pt}\hfill $\Box$\par\medskip\noindent}
\def\rule{0pt}{0pt}\hfill $\Box${\rule{0pt}{2pt}\nolinebreak\hfill\hfill\nolinebreak$\framebox[10pt]{\rule{0pt}{3pt}}$}
\def\par\medskip \noindent {\sc Remark. }{\par\medskip \noindent {\sc Remark. }}
\def\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent
{\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent}
\def\MSC#1{
\def\arabic{footnote}{\fnsymbol{footnote}}
\footnotetext[0]{\hskip -4ex{\bf MSC 2000 Classification:} #1}
\def\arabic{footnote}{\arabic{footnote}}
}
\def\keywords#1{
\def\arabic{footnote}{\fnsymbol{footnote}}
\footnotetext[0]{\hskip -4ex{\bf Keywords:} #1}
\def\arabic{footnote}{\arabic{footnote}}
}
\def\address#1{
\def\arabic{footnote}{\fnsymbol{footnote}}
\footnotetext[0]{\hskip -4ex{\bf Address:} #1\\}
\def\arabic{footnote}{\arabic{footnote}}
}
\def\heading#1{\par\bigskip\noindent{$\bullet$ \sc #1}\par\medskip
\addtocontents{toc}{
\protect\contentsline{subsection} {\protect\numberline{}#1}{\thepage}}
}
\def\headingX#1{\par\bigskip\noindent{$\bullet$ \sc #1}\par\medskip
}
\newlength\captionwidth
\setlength{\captionwidth}{\textwidth}
\addtolength{\captionwidth}{-15mm}
\def\LabelCaption#1#2{
\centerline{\parbox{\captionwidth}{
\caption{\sl #2} \label{#1} } }
}
\hyphenation{pa-ra-bo-lic}
\hyphenation{Mi-siu-re-wicz}
\hyphenation{Schlei-cher}
\hyphenation{Thur-ston}
\def\reminder #1 {{\sf #1}}
\def\hide #1 {}
\long\def\longhide #1 {}
\newenvironment{Longhide}{\longhide}{}
\section{Introduction}
\hide{
This article is a contribution to the study of dynamical systems generated by the
iteration of entire functions. Already Euler had studied the question for which $a$
the limit
\begin{equation}
a^{a^{a^{a^{a^{a^{\dots}}}}}}
\label{EqEulerIteration}
\end{equation}
exists. Writing $\lambda=\ln a$, this question amounts to asking for which
$\lambda$ the iteration $a_0:=0$, $a_{n+1}:=e^{\lambda a_n}$ has a limit.
This question has a rich structure. If the map $E(z)=e^{\lambda z}$ has an
attracting or parabolic fixed point, then the limit will certainly exist in ${\mbox{\bbf C}}$.
It is readily verified that this is the case whenever $\lambda=\mu e^{-\mu}$, where
$|\mu|<1$, or $\mu$ is a root of unity.
Furthermore, there are countably many $\lambda$ for which the orbit of $0$ under
iteration of $E$ lands exactly on a fixed point, which is necessarily repelling.
This leads to the classification of postsingularly preperiodic exponential
functions, which is carried out in \cite{ExpoSpiders}.
This exhausts the possibilities where the limit exists in ${\mbox{\bbf C}}$; but for which
parameters is the limit $\infty$? This is certainly the case for all real
$\lambda>1/e$, but the complete answer is much richer: {\em the set of parameters
$\lambda\in{\mbox{\bbf C}}$ for which the limit of (\ref{EqEulerIteration}) is $\infty$ consists
of uncountably many disjoint curves in ${\mbox{\bbf C}}$, each of which is homeomorphic to
$(0,\infty)$ or $[0,\infty]$}.
The complete answer will be given in \cite{frs} based upon our results. In the
present paper, we construct the parts homeomorphic to $(0,\infty)$ of all the
curves and classify them as {\em parameter rays in the space of exponential maps}.
In \cite{frs}, we attach {\em endpoints} to certain of these rays and show that
this yields a complete classification of all parameters $\lambda$ for which the
limit is $\infty$.
A more systematic motivation for our study is the program to extend the successful
theory of iterated polynomials to transcendental entire functions.
}
This article is a contribution to the program of extending the successful theory of iterated
polynomials to transcendental entire functions. Douady and
Hubbard~\cite{Orsay}, and many others since then, have shown that the dynamical
planes of polynomials can be studied in terms of {\em dynamic rays} and their
landing points. Similarly, the space of quadratic polynomials can be understood in
terms of the structure of the Mandelbrot set, which itself is studied in terms of
{\em parameter rays}.
In recent years, it has become clear that dynamical rays also make sense for
transcendental entire functions. Specifically for exponential functions, they were
introduced in \cite{dk,dgh1}, and in \cite{sz} it was shown that dynamic rays
classify all {\em escaping points}: those points which converge to $\infty$ under
iteration of the map. These results are extended to larger classes of entire
functions in \cite{dt,ro,GuenterThesis}.
The simplest parameter space of transcendental functions is probably the space of
exponential functions; it has been studied in \cite{dgh2,el,br,rs1} and elsewhere
(see also \cite{fagella} and \cite{kk} for studies of other transcendental parameter
spaces). Similarly as for the Mandelbrot set, a systematic study of exponential
parameter space uses parameter rays and how parameter space is partitioned by
parameter rays landing at common points \cite{hab,rs1,rs2}; compare
Figure~\ref{FigParaRays}.
\begin{figure}
\includegraphics[width=\textwidth]{ParaRays.eps}%
\caption{The parameter space of complex exponential maps $z\mapsto
e^z+\kappa$, with hyperbolic components in white, the bifurcation locus in
grey, and several parameter rays in black. (Picture courtesy of Lasse Rempe.) }
\label{FigParaRays}
\end{figure}
Parameter rays in the space of complex exponential maps are curves of parameters
for which the singular value ``escapes'', i.e.\ converges to $\infty$; such
parameters will be called ``escaping parameters''. Certain
parameter rays were constructed in \cite{dgh2} and more in \cite{hab}. In the
present paper, we construct and classify all parameter rays
(Theorem~\ref{fulllength}). In a sequel \cite{frs}, it will be shown how this helps
to classify all escaping parameters: every such parameter is either on a unique
parameter ray, or it is the landing point of a unique parameter ray with precisely
described combinatorics.
It turns out that the set of escaping parameters yields a nice dimension paradox:
the union of all parameter rays has Hausdorff dimension 1, while the set of only
those endpoints which are escaping parameters has dimension 2 \cite{bfs,frs}. This
is the parameter space analog to a well-known situation in the dynamical planes of
exponential maps \cite{k,sz}.
\begin{center}
{\bf Acknowledgements}
\end{center}
We would like to thank Lasse Rempe for many helpful discussions
and comments, as well as the dynamics workgroup at
International University Bremen, especially Alex Kaffl, G\"unter
Rottenfu{\ss}er and Johannes R\"uckert. Furthermore, we would
like to thank the audiences in Paris, Warwick, and Oberwolfach for
interesting discussions, and the International University Bremen
for all the support.
Special thanks go to Niklas Beisert, who has contributed key ideas
to Section~\ref{sec_winnr}.
\section{Dynamic Rays}
\label{sec_dynrays}
\subsection{Notation and Definitions}
We investigate the family
\[
\{E_{\kappa}:{\mbox{\bbf C}}\to{\mbox{\bbf C}}\;,\ \;z\mapsto\exp(z+\kappa)\;\:|\
\kappa\in{\mbox{\bbf C}}\}\;.
\]
Translating $\kappa$ by an integer multiple of $2\pi i$ yields the
same mapping, but slightly changes combinatorics. Therefore we
consider the complex plane as parameter space rather than the
cylinder ${\mbox{\bbf C}}/2\pi i{\mbox{\bbf Z}}$. The asymptotic value $0$ is the only
singular value of $E_\kappa$ (i.e. there are no other asymptotic
or critical values), and we call $\left(E_{\kappa}^{\circ
n}(0)\right)_{n\in{\mbox{\bbfsm N}}}=(0,e^\kappa,\exp(e^\kappa+\kappa),\dots)$
the \emph{singular orbit}.
As usual, the iterates of a function $f$ are denoted by
$f^{\circ(n+1)}(z):=f\circ f^{\circ n}(z)=f\circ\cdots\circ f(z)$,
with $f^{\circ 0}:=\mbox{\rm id}$. If $f$ is bijective, we define also
$f^{\circ (-n)}:=(f^{-1})^{\circ n}$. Let ${\mbox{\bbf N}}:=\{0,1,2,3,\dots\}$,
${\mbox{\bbf N}}^*:={\mbox{\bbf N}}\setminus \{0\}$, ${\mbox{\bbf C}}^*:={\mbox{\bbf C}}\setminus\{0\}$, and
${\mbox{\bbf C}}':={\mbox{\bbf C}}\setminus {\mbox{\bbf R}}_0^{-}$. We shall write $B_r(z_0):=\{z\in{\mbox{\bbf C}}:\
|z-z_0|<r\}$.
Let $\sigma:\mathcal S\to\mathcal S$ denote the shift map on the space
$\mathcal S:={\mbox{\bbf Z}}^{{\mbox{\bbfsm N}}^*}$ of sequences over the integers. We are going to
use
\[
F:{\mbox{\bbf R}}^+_0\to {\mbox{\bbf R}}^+_0\ ,\quad F(t):=e^t-1
\]
as a bijective model function for exponential growth.
The following discussion will take place in the dynamical plane of
a fixed map $E_\kappa$.
We define
\hide{Recall the definitions}
\begin{eqnarray*}
I(E_{\kappa})&:=&\{z\in{\mbox{\bbf C}}:\ |(E_{\kappa})^{\circ n}(z)|\to\infty
\mbox{ as }n\to\infty\}\;;\\ I&:=&\{\kappa\in{\mbox{\bbf C}}:\ 0\in
I(E_\kappa)\}\;.
\end{eqnarray*}
\begin{lemma}[Characterization of Escaping Points]\label{esc_orb}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For all $\kappa\in{\mbox{\bbf C}}$,
\[
z\in I(E_\kappa)\quad\Longleftrightarrow\quad
\mbox{\rm Re}\left(E_{\kappa}^{\circ n}(z)\right) \to +\infty\quad \mbox{as
}n\to\infty\;.
\]
\end{lemma}
\begin{proof}
This follows from $|E_\kappa^{\circ(n+1)}(z)|=\exp(\mbox{\rm Re}
(E_{\kappa}^{\circ n}(z)+\kappa))$. \hfill $\Box$ \par\medskip
\end{proof}
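The identity used in the proof can be checked numerically; the following is an illustrative sketch, not part of the formal argument.

```python
import cmath
import math

def exp_map(z, kappa):
    """One iteration of the exponential map E_kappa(z) = exp(z + kappa)."""
    return cmath.exp(z + kappa)

def next_modulus(z, kappa):
    """|E_kappa(z)| = exp(Re(z + kappa)): the modulus of the next
    iterate depends only on the real part of the current point, which
    is why escape is governed by Re(E_kappa^n(z)) -> +infinity."""
    return math.exp((z + kappa).real)
```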
To start with, we would like to endow the plane with dynamical
structure so as to obtain symbolic dynamics. On the slit plane
${\mbox{\bbf C}}'$, there is a biholomorphic branch $\mbox{Log}:{\mbox{\bbf C}}'\to \{z\in{\mbox{\bbf C}}:|\mbox{\rm Im}
(z)|<\pi\}$ of the logarithm, which we will refer to as the
principal branch. Thus the branches of $E_\kappa^{-1}$ on ${\mbox{\bbf C}}'$
are given by
\[
L_{\kappa,j}:{\mbox{\bbf C}}' \to {\mbox{\bbf C}} ;\quad L_{\kappa,j}(z)=\mbox{Log} z
-\kappa +2\pi ij\quad (j\in{\mbox{\bbf Z}})\;.
\]
Define the partition
\[
R_j := \{z\in {\mbox{\bbf C}}: -\mbox{\rm Im}\kappa-\pi+2\pi j < \mbox{\rm Im} z <
-\mbox{\rm Im}\kappa+\pi+2\pi j\}\quad (j\in{\mbox{\bbf Z}})
\]
(see Figure \ref{partit}): these are the components of ${\mbox{\bbf C}}\setminus
E_{\kappa}^{-1}({\mbox{\bbf R}}^-)$. Clearly, every $L_{\kappa,j}$ maps ${\mbox{\bbf C}}'$
biholomorphically onto $R_j$. The following definition gives rise
to symbolic dynamics and is the key idea for understanding the set
$I(E_\kappa)$.
\begin{figure}
\begin{picture}(0,0)%
\includegraphics{partition2.pstex}%
\end{picture}%
\setlength{\unitlength}{2901sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(8842,3869)(1531,-4358)
\put(2311,-1606){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$0$}%
}}}
\put(7951,-1606){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$0$}%
}}}
\put(8761,-1066){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}${\mbox{\bbf C}}'={\mbox{\bbf C}}\setminus{\mbox{\bbf R}}^-_0$}%
}}}
\put(3601,-1636){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$R_3$}%
}}}
\put(3601,-3436){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$R_0$}%
}}}
\put(3601,-2221){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$R_2$}%
}}}
\put(3601,-2941){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$R_1$}%
}}}
\put(3231,-3841){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$-\kappa$}%
}}}
\put(1531,-3661){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2\pi$}%
}}}
\put(5941,-2131){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$L_{\kappa,2}$}%
}}}
\put(5941,-3121){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$E_{\kappa}$}%
}}}
\end{picture}
\caption{The (static) partition and $L_{\kappa,j}$.}
\label{partit}
\end{figure}
\begin{definition}[External Addresses]\label{defextadr}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent Let $z=z_1\in {\mbox{\bbf C}}$ be such that
$z_{n+1}:=E_\kappa^{\circ n}(z) \not \in {\mbox{\bbf R}}^-$ for all $n\in{\mbox{\bbf N}}$.
Then the \emph{external address} ${\underline s}(z)=(s_1,s_2,\dots)\in\mathcal S$ of
$z$ is defined to be the sequence of labels such that $z_n\in
R_{s_n}$ for all $n \ge 1$.
\end{definition}
\par\medskip \noindent {\sc Remark. } The difference between a parameter $\kappa\in{\mbox{\bbf C}}$ and its
translate $\kappa'=\kappa+2\pi i k$ (with $k\in{\mbox{\bbf Z}}$) is a different
labeling: a point $z$ has external address ${\underline s}$ for the map
$E_{\kappa'}$ if and only if it has external address
$(s_1-k,s_2-k,s_3-k,\dots)$ for the map $E_{\kappa}$.
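Definition \ref{defextadr} can be tried out numerically. The sketch below assumes $E_\kappa(z)=\exp(z+\kappa)$, which is the map inverted by the branches $L_{\kappa,j}$ above, and labels each orbit point by the strip $R_j$ containing it. Since the orbit grows like an iterated exponential, floating-point arithmetic only resolves the first few entries reliably.

```python
import cmath, math

def external_address(z, kappa, n_terms=3):
    """First entries of the external address of z, assuming E_kappa(z) = exp(z + kappa)
    (the map inverted by the branches L_{kappa,j} above)."""
    entries = []
    for _ in range(n_terms):
        # z lies in R_j  iff  -Im(kappa) - pi + 2*pi*j < Im(z) < -Im(kappa) + pi + 2*pi*j
        entries.append(round((z.imag + kappa.imag) / (2 * math.pi)))
        try:
            z = cmath.exp(z + kappa)             # apply E_kappa
        except OverflowError:                    # the orbit left floating-point range;
            break                                # only the first entries are reliable anyway
    return entries
```

For example, $z=1+6\pi i$ with $\kappa=\tfrac12$ starts in $R_3$ and its computed forward orbit then stays in $R_0$; passing from $\kappa$ to $\kappa+2\pi i$ shifts every computed label by one, which is the relabeling phenomenon of the remark above.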
\begin{definition}[Exponential Boundedness]
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent A sequence ${\underline s} \in\mathcal S$ is said to be \emph{exponentially
bounded} if there is an $x>0$, called \emph{growth parameter},
such that $|s_{k+1}|\le F^{\circ k}(x)$ for all $k\in{\mbox{\bbf N}}$. The set
of exponentially bounded sequences is denoted by $\mathcal S_0$.
\end{definition}
The following lemma is taken from \cite{sz}, Lemma 2.4.
\begin{lemma}[External Addresses are Exponentially
Bounded]\label{extad_expbd} \rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For every $\kappa$ and
every $z\in{\mbox{\bbf C}}$ there is an $x>0$ such that for all $n\ge 0$ we
have $|E_\kappa^{\circ n}(z)|\le F^{\circ n}(x)$. Thus every
sequence which is realized as an external address is exponentially
bounded. \hfill $\Box$ \par\medskip
\end{lemma}
\subsection{Dynamic Rays}\label{secdynrays}
This section summarizes necessary results from \cite{sz} on
dynamic rays.
\begin{definitionlemma}[Minimal Potential]\label{minpot}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For every ${\underline s}=(s_1,s_2,\dots) \in \mathcal S$ define the
\emph{minimal potential} of ${\underline s}$ by
\[
t_{{\underline s}}:=\inf\left\{ x>0:\; \limsup_{n \to \infty} \frac
{|s_n|}{F^{\circ (n-1)}(x)} = 0 \right\}\;,
\]
and furthermore define
\[
t_{{\underline s}}^*:=\sup_{n\ge 1} F^{\circ (-n+1)}(|s_n|)\;.
\]
These definitions lead to the following properties:
\begin{itemize}
\item $t_{{\underline s}}=\limsup_{n\ge 1} F^{\circ (-n+1)}(|s_n|)\le
t_{{\underline s}}^*$; \item If ${\underline s}\in\mathcal S_0$, then $t_{{\underline s}}\le t_{{\underline s}}^*<
\infty$; otherwise $t_{{\underline s}}=t_{{\underline s}}^*=\infty$; \item $|s_{n+1}|\le
F^{\circ n}(t_{{\underline s}}^*)$ for all $n\in{\mbox{\bbf N}}$; \item $\forall t>t_{{\underline s}}\
\exists N_0\in{\mbox{\bbf N}}\ \forall N\ge N_0:\quad F^{\circ N}(t)>
t_{\sigma^N{\underline s}}^*\;.$ In particular, for every $t>t_{{\underline s}}$ there
exists $N_0\in{\mbox{\bbf N}}$ such that $\forall N\ge N_0:\ |s_{N+1}|\le
F^{\circ N}(t)$.
\end{itemize}
\end{definitionlemma}
\begin{proof}
Let $t_{{\underline s}}':=\limsup_{n\ge 1} F^{\circ (-n+1)}(|s_n|)$ and
$L(x):=\limsup_{n\ge 1}\frac{|s_n|}{F^{\circ (n-1)}(x)}$. Every
$x<t_{{\underline s}}$ satisfies $L(x)=\infty$, but $L(t_{{\underline s}}')= 1$. That
shows $t_{{\underline s}}'\ge t_{{\underline s}}$. Similarly, every $x$ such that $L(x)=0$
satisfies $x\ge t_{{\underline s}}'$, and thus $t_{{\underline s}}'\le t_{{\underline s}}$.
The second and third items follow directly from the definitions.
For the last item observe that $t_{\sigma^N{\underline s}}^*=F^{\circ
N}\left(\sup_{n\ge N+1} F^{\circ (-n+1)}(|s_{n}|)\right)$. Since
$t>\limsup_{n\ge 1} F^{\circ (-n+1)}(|s_n|)$, there is an $N_0$
such that for all $N\ge N_0$ we have $t>\sup_{n\ge N+1}F^{\circ
(-n+1)}(|s_n|)$. \hfill $\Box$ \par\medskip
\end{proof}
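To make Definition and Lemma \ref{minpot} concrete, the following sketch evaluates $t_{{\underline s}}^*$ for a hypothetical address with only finitely many large entries. It assumes the model growth function $F(t)=e^t-1$ from \cite{sz} (not restated in this section) and checks the third bullet numerically.

```python
import math

def F(t):
    # model growth function, assumed to be F(t) = e^t - 1 as in [sz];
    # capped so that iteration does not overflow double precision
    return math.exp(t) - 1.0 if t < 700 else math.inf

def F_inv(t):
    return math.log(1.0 + t)       # inverse of F

def F_iter(t, n):
    for _ in range(n):
        t = F(t)
    return t

def F_inv_iter(t, n):
    for _ in range(n):
        t = F_inv(t)
    return t

def t_star(s):
    """t*_s = sup_{n>=1} F^{o(-n+1)}(|s_n|) for a finite list of entries;
    entries beyond the list are assumed bounded, and F^{o(-n+1)} of a
    bounded entry tends to 0, so the tail does not affect the sup."""
    return max(F_inv_iter(abs(entry), n) for n, entry in enumerate(s))

s = [1, 10, 100, 5, 2]             # a hypothetical exponentially bounded address
ts = t_star(s)                     # here the sup is attained at s_2 = 10
# third bullet of the lemma: |s_{n+1}| <= F^{on}(t*_s) for all n >= 0
bound_holds = all(abs(s[n]) <= F_iter(ts, n) + 1e-9 for n in range(len(s)))
```

With these values $t_{{\underline s}}^*=F^{-1}(10)=\log 11\approx 2.398$, and the bound $|s_{n+1}|\le F^{\circ n}(t_{{\underline s}}^*)$ holds with equality at $n=1$.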
According to Definition 6.7 in \cite{sz}, we divide $\mathcal S_0$ into
so-called \emph{slow} and \emph{fast} sequences: a sequence
${\underline s}\in\mathcal S_0$ is called slow if it has a growth parameter $x$ which
works for infinitely many shifts $\sigma^n{\underline s}$ of ${\underline s}$ as well;
otherwise ${\underline s}$ is called fast. Consider the set $X\subset
\mathcal S_0\times{\mbox{\bbf R}}_0^+$ defined by
\[
X:=\{({\underline s},t)\in\mathcal S_0\times{\mbox{\bbf R}}:\ t>t_{{\underline s}}\}\cup
\bigcup_{{\underline s}\in\mathcal S_0\;\mbox{\scriptsize fast}} \{({\underline s},t_{{\underline s}})\}\;.
\]
Let us endow $X$ with the product topology induced by the discrete
topology on $\mathcal S_0$ and the standard topology on ${\mbox{\bbf R}}$. (See
\cite{r2} for a deeper discussion of the topology of the sets
$I(E_{\kappa})$.)
\begin{theorem}[Dynamic Rays]\label{dynrays}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent \begin{enumerate} \item For every $\kappa\not\in I$
there is a continuous bijection
\[
g^{\kappa}:\;X\to I(E_{\kappa})\;.
\]
If $\kappa\in I$ then we have to restrict the map $g^{\kappa}$:
there is a preferred pair $({\underline s}^{\kappa},t^{\kappa})$ and a set
\[
X^{\kappa}:=X\setminus\{({\underline s},t):\;\exists n\ge 1:
\sigma^n{\underline s}={\underline s}^{\kappa}\mbox{ and }F^{\circ n}(t)\le t^{\kappa}\}
\]
with a continuous injection
$g^{\kappa}:X^{\kappa}\to I(E_{\kappa})$ with
$g^{\kappa}({\underline s}^{\kappa},t^{\kappa})=0$. For every $z\in
I(E_{\kappa})\setminus g^{\kappa}(X^{\kappa})$ there is a unique
$n\ge 1$ and a $t<t^{\kappa}$ such that $E_{\kappa}^{\circ
n}(z)=g^{\kappa}({\underline s}^{\kappa},t)$.
\item For every ${\underline s}\in\mathcal S_0$ and $\kappa\in{\mbox{\bbf C}}$, we define the curve
\[g_{{\underline s}}^{\kappa}(t):=g^{\kappa}({\underline s},t)\qquad (t>t_{{\underline s}})\]
wherever this is defined. For fixed $t>t_{{\underline s}}$,
$g_{{\underline s}}^{\kappa}(t)$ depends analytically on $\kappa$ (on
$\kappa$-disks where $g_{{\underline s}}^{\kappa}(t)$ is defined). For fixed
$\kappa$ the curves have the following properties.
\begin{enumerate}
\item For all $t,{\underline s}$ where $g_{{\underline s}}^{\kappa}(t)$ is defined we have
\[
E_\kappa \circ g_{{\underline s}}^{\kappa}(t) = g_{\sigma{\underline s}}^{\kappa}\circ
F(t)\;.
\]
\item \label{defined}Suppose $K\ge |\kappa|$. Then
$g_{{\underline s}}^{\kappa}(t)$ is defined for all $t\ge
t_{{\underline s}}^K:=t_{{\underline s}}^*+2\log(K+3)$, and
\begin{eqnarray}
g_{{\underline s}}^{\kappa}(t) &=& t-\kappa+2\pi is_1 +r_{\kappa,{\underline s}}(t) \nonumber\\
\mbox{with}\quad |r_{\kappa,{\underline s}}(t)| &\le&
2e^{-t}(K+2\pi|s_2|+12)<5\;.\label{rK_est}
\end{eqnarray}
\item The orbit $z_n:=E_{\kappa}^{\circ (n-1)}(z_1)$ of any
$z_1=g_{{\underline s}}^{\kappa}(t)$ satisfies
\begin{equation}\label{lowpotasymp}
z_n=F^{\circ (n-1)}(t)-\kappa+2\pi is_{n} +O\left(F^{\circ
(-n)}(t)\right)\quad \mbox{as } n\to\infty\;.
\end{equation}
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof}
Everything can be found in \cite{sz} (Prop.~3.2, Theorem 4.2, and
Prop.~4.4), except for the second inequality in (\ref{rK_est}): we
get for all $t\ge t_{{\underline s}}^K$ (using $x=t_{{\underline s}}^*$, $A=1$, $C<1.5$
for the variables appearing in \cite{sz})
\begin{eqnarray*}
|r_{\kappa,{\underline s}}(t)|\le \frac{2K+4+4\pi
F(t_{{\underline s}}^*)+6\pi}{e^{t_{{\underline s}}^*}(K+3)^2}\le\frac{2K+4\pi+24}{6K+9}<5\;.
\end{eqnarray*}
\vspace{-1cm} \hfill $\Box$ \par\medskip
\end{proof}
\par\medskip \noindent {\sc Remark. } We call the curves $g_{{\underline s}}^{\kappa}(t)$ ($t>t_{{\underline s}}$)
\emph{dynamic rays} at external address ${\underline s}$. Viana \cite{v}
showed that they are $C^{\infty}$-smooth. (For $C^2$, see Section
\ref{sec_bound}.) For a fast sequence ${\underline s}$, the point
$g^{\kappa}({\underline s},t_{{\underline s}})$ (if defined) is called the \emph{endpoint}
of the dynamic ray $g_{{\underline s}}^{\kappa}$. See \cite{r2} for a
discussion of smoothness in the endpoints. The variable $t$ is
referred to as the \emph{potential} (in analogy to the terminology
for polynomial external rays).
\par\medskip \noindent {\sc Remark. } By Definition and Lemma \ref{minpot}, for every
$z=g_{{\underline s}}^{\kappa}(t)$ there is an $N$ such that $F^{\circ
(N-1)}(t)\ge t_{\sigma^N{\underline s}}^{|\kappa|}$, i.e.~for all $n\ge N$,
$E_{\kappa}^{\circ n}(z)=g_{\sigma^n{\underline s}}^{\kappa}(F^{\circ
(n-1)}(t))$ satisfies the condition required for (\ref{rK_est}).
This is crucial for many of the arguments in this paper
(see Definition~\ref{raytails} and Lemma~\ref{Lem:FromTailsToRays}).
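The construction in \cite{sz} obtains $g_{{\underline s}}^{\kappa}(t)$ as a limit of pullbacks, and this can be imitated numerically to illustrate the estimate (\ref{rK_est}) and the functional equation in part 2(a) of the theorem. The sketch assumes $E_\kappa(z)=\exp(z+\kappa)$, the map inverted by the branches $L_{\kappa,j}$ above, and the model function $F(t)=e^t-1$; the pullback depth must stay small because $F^{\circ n}(t)$ quickly leaves the floating-point range.

```python
import cmath, math

def F(t):
    return math.exp(t) - 1.0                     # assumed model function F(t) = e^t - 1

def L(w, kappa, entry):
    # inverse branch L_{kappa,entry}(w) = Log w - kappa + 2*pi*i*entry (principal Log)
    return cmath.log(w) - kappa + 2 * math.pi * 1j * entry

def ray_point(s, kappa, t, depth):
    """Approximate g_s^kappa(t): pull the real seed F^{o depth}(t) back through
    L_{kappa,s_depth}, ..., L_{kappa,s_1}.  Every pullback contracts, so a
    small depth already gives a good approximation."""
    w = t
    for _ in range(depth):
        w = F(w)                                 # keep depth small to avoid overflow
    z = complex(w, 0.0)
    for entry in reversed(s[:depth]):
        z = L(z, kappa, entry)
    return z

kappa, s, t = 0.2 + 0.3j, [1, 0, 0], 2.0
g = ray_point(s, kappa, t, depth=2)
# leading-order behavior: g = t - kappa + 2*pi*i*s_1 + r with a small remainder r
r = g - (t - kappa + 2 * math.pi * 1j * s[0])
```

Increasing the depth changes the value only slightly, and $\exp(g+\kappa)$ reproduces the pullback for $\sigma{\underline s}$ at potential $F(t)$, in line with the functional equation $E_\kappa\circ g_{{\underline s}}^{\kappa}(t)=g_{\sigma{\underline s}}^{\kappa}(F(t))$.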
\section{Construction of Parameter Rays}
Now we want to turn our attention to \emph{parameter space}, the
set of parameters $\kappa$. We are interested in the set $I$ of
\emph{escaping parameters}, which are those parameters for which
the singular value $0$ is an escaping point. Again, this
investigation will lead to curves, called \emph{parameter rays},
which parameterize the escaping parameters by the external address
${\underline s}$ and the potential $t$ (i.e. speed of escape) of the singular
orbit under $E_\kappa$.
Based on dynamic rays, we start the construction of the parameter
rays at large potentials, where it is comparably easy to find an
escaping parameter $\kappa$ with given combinatorial data
$({\underline s},t)$. Then we will extend these \emph{parameter ray tails}
onto the full domain $(t_{{\underline s}},\infty)$ of potentials.
\begin{definition}[Pointwise Definition of Parameter Rays]\label{defpara}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For an external address ${\underline s}\in\mathcal S_0$ and a potential
$t>t_{{\underline s}}$, let
\[
\mathcal D_{{\underline s}}(t):={\mbox{\bbf C}}\setminus \{ \kappa\in I:\; \exists n\ge 1,\;
t^{\kappa}\ge F^{\circ n}(t):\
g_{\sigma^n{\underline s}}^\kappa(t^{\kappa})=0\}
\]
be the set of parameters for which $g_{{\underline s}}^{\kappa}(t)$ is defined
in the sense of Theorem \ref{dynrays}, and
\begin{equation}\label{eqpara}
\mathcal G_{{\underline s}}(t):=\{\kappa\in\mathcal D_{{\underline s}}(t):\; g_{{\underline s}}^{\kappa}(t)=0\}\ .
\end{equation}
\end{definition}
\par\medskip \noindent {\sc Remark. } As our main result (Theorem \ref{fulllength}), we will
show that for every exponentially bounded external address
${\underline s}\in\mathcal S_0$ and for every potential $t>t_{{\underline s}}$ we have
$|\mathcal G_{{\underline s}}(t)|=1$, and that the unique map
$G_{{\underline s}}:(t_{{\underline s}},\infty)\to I$, $t\mapsto \kappa\in \mathcal G_{{\underline s}}(t)$ is
a curve. This curve will be called the \emph{parameter ray} at
external address ${\underline s}$.
\subsection{Parameter Ray Tails}
\begin {proposition}[Existence of Parameter Ray Tails]\label{parrayend}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For every ${\underline s} \in \mathcal S_0$ there is a constant
$T_{{\underline s}}>t_{{\underline s}}$ and a unique map $G_{{\underline s}}:[T_{{\underline s}},\infty) \to {\mbox{\bbf C}}$,
called \emph{parameter ray tail}, such that for all $t\ge T_{{\underline s}}$
\[G_{{\underline s}}(t)\in \mathcal G_{{\underline s}}(t)\quad\mbox{and}\quad |G_{{\underline s}}(t)|<2\pi t\;.
\]
The parameter $G_{{\underline s}}(t)$ is a simple root of the map
$\kappa\mapsto g_{{\underline s}}^{\kappa}(t)$, and the parameter ray tails
carry the asymptotics
\[
G_{{\underline s}}(t)=t+ 2\pi i s_1 +R_{{\underline s}}(t)
\]
with $|R_{{\underline s}}(t)| < 2e^{-t}(2\pi t+2\pi |s_2|+12)<5$.
\end{proposition}
\begin{proof}
Define $T_{{\underline s}}:=20+2t_{{\underline s}}^*$. Consider an arbitrary fixed
potential $t\ge T_{{\underline s}}$ and define $K:=2\pi t$. We will show that
the map $h:B_K(0)\to {\mbox{\bbf C}}$, $\kappa\mapsto g_{{\underline s}}^{\kappa}(t)$, is
well-defined on the open disk $B_K(0)$ and has a unique root
$\kappa_0$ in $B_K(0)$.
\begin{figure}
\begin{picture}(0,0)%
\includegraphics{parrayend2.pstex}%
\end{picture}%
\setlength{\unitlength}{2901sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(4363,4363)(146,-4119)
\put(1801,-1861){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z_0$}%
}}}
\put(2836,-286){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$B_K(0)$}%
}}}
\put(2341,-2716){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$B_{5}(z_0)$}%
}}}
\put(2071,-1006){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\kappa_0$}%
}}}
\put(2251,-1546){\makebox(0,0)[lb]{\smash{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$|R_{{\underline s}}(t)|$}%
}}}
\end{picture}
\caption{The setting in the proof of Proposition \ref{parrayend}.}
\label{Gproof}
\end{figure}
Indeed, by Theorem \ref{dynrays} (\ref{defined}), this map is
well-defined, because $t\ge T_{{\underline s}}\ge 20$ implies $t/2>2\log(2\pi
t+3)$ and thus
\begin{eqnarray*}
t &=& t/2+t/2\;>\;2\log(2\pi t+3)+t_{{\underline s}}^*= t_{{\underline s}}^K\;.
\end{eqnarray*}
In fact, this implies by (\ref{rK_est}) that for all $\kappa\in
B_K(0)$
\begin{equation}\label{gskappa_est}
g_{{\underline s}}^{\kappa}(t)=t-\kappa+2\pi is_1 + r_{\kappa,{\underline s}}(t)
\quad\mbox{with } |r_{\kappa,{\underline s}}(t)|<5\;.
\end{equation}
Now define $z_0:=t+ 2\pi i s_1$, so that
$g_{{\underline s}}^{\kappa}(t)=z_0-\kappa+r_{\kappa,{\underline s}}(t)$. Since
$|r_{\kappa,{\underline s}}(t)|<5$, we have $g_{{\underline s}}^{\kappa}(t) \neq 0$ for
$|z_0-\kappa|\geq 5$. Within the disk $B_K(0)$, the only
parameters $\kappa$ with $g_{{\underline s}}^{\kappa}(t)=0$ are thus contained
in the disk $B_{5}(z_0)$.
Note that $B_6(z_0)\subset B_K(0)$, because every $\kappa\in
B_6(z_0)$ satisfies $$|\kappa|\leq t+2\pi|s_1|+6 \le t+2\pi
t_{{\underline s}}^* + 6 \le t+\pi T_{{\underline s}}+6<2\pi t=K\;.$$ By
(\ref{gskappa_est}), $h(\partial B_{5.5}(z_0))$ winds exactly once
around $0$. Analyticity of $h$ and Rouch\'{e}'s theorem imply
therefore that there is exactly one $\kappa_0=:G_{{\underline s}}(t)$
(counting multiplicities) with $|\kappa_0|<K$ for which
$h(\kappa_0)=0$. This is a simple root of $\kappa\mapsto
g_{{\underline s}}^{\kappa}(t)$.
Since $g_{{\underline s}}^{\kappa_0}(t)=t-\kappa_0+2\pi
is_1+r_{\kappa_0,{\underline s}}(t)=0$, Theorem \ref{dynrays} (\ref{defined})
with $K=2\pi t$ yields $|G_{{\underline s}}(t)-t-2\pi i
s_1|=|r_{\kappa_0,{\underline s}}(t)|\le 2e^{-t}(2\pi t+2\pi|s_2|+12)<5$. \hfill $\Box$ \par\medskip
\end{proof}
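The proof can be imitated numerically: with the pullback approximation of $g_{{\underline s}}^{\kappa}(t)$ (assuming $E_\kappa(z)=\exp(z+\kappa)$ and $F(t)=e^t-1$, neither restated in this section), a root of $\kappa\mapsto g_{{\underline s}}^{\kappa}(t)$ can be located by a secant iteration. The potential $t=6$ below is smaller than the $T_{{\underline s}}$ of the proposition, so this is only a toy illustration, but the root found still satisfies the stated asymptotics.

```python
import cmath, math

def F(t):
    return math.exp(t) - 1.0                     # assumed model function F(t) = e^t - 1

def L(w, kappa, entry):
    # inverse branch L_{kappa,entry}(w) = Log w - kappa + 2*pi*i*entry
    return cmath.log(w) - kappa + 2 * math.pi * 1j * entry

def ray_point(s, kappa, t, depth):
    # pullback approximation of the dynamic ray point g_s^kappa(t)
    w = t
    for _ in range(depth):
        w = F(w)
    z = complex(w, 0.0)
    for entry in reversed(s[:depth]):
        z = L(z, kappa, entry)
    return z

def secant_root(h, k0, k1, steps=60):
    """Secant iteration for a root of the (holomorphic) map h."""
    f0, f1 = h(k0), h(k1)
    for _ in range(steps):
        if f1 == f0 or abs(f1) < 1e-13:
            break
        k0, k1, f0 = k1, k1 - f1 * (k1 - k0) / (f1 - f0), f1
        f1 = h(k1)
    return k1

s, t = [0, 0], 6.0              # toy data: t is below the T_s of the proposition

def h(kappa):                   # h(kappa) = g_s^kappa(t), approximated at depth 2
    return ray_point(s, kappa, t, depth=2)

kappa0 = secant_root(h, 5.0 + 0j, 7.0 + 0j)
R = kappa0 - (t + 2 * math.pi * 1j * s[0])   # G_s(t) = t + 2*pi*i*s_1 + R_s(t)
```

Here the computed root lies close to $t+2\pi i s_1$, with a remainder well inside the bound $2e^{-t}(2\pi t+2\pi|s_2|+12)$ of the proposition.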
Notice that we have not yet shown that $G_{{\underline s}}$ is a curve. The
following proposition implies uniqueness of the parameter ray
tails (without the restriction on $|\kappa|$) and is the main
argument for extending these ray tails onto the full domain of
definition $(t_{{\underline s}},\infty)$. We defer the proof, which is the
technical heart of this paper, to Section \ref{sec_bound}.
\begin{proposition}[A Bound on the Growth of Parameter Rays]\label{parabound}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For every ${\underline s}\in\mathcal S_0$ there is a continuous function
$\xi_{{\underline s}}:(t_{{\underline s}},\infty)\to{\mbox{\bbf R}}$ such that for every $t>t_{{\underline s}}$
\[
\mathcal G_{{\underline s}}(t)\subset B_{\xi_{{\underline s}}(t)}(0)\ .
\]
Moreover, for sufficiently large $t$ we can choose
$\xi_{{\underline s}}(t)=2t$.
\end{proposition}
\begin{corollary}[Parameter Ray Tails Are Unique]\label{parendunique}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For every ${\underline s} \in\mathcal S_0$ and every sufficiently large
$t$, $\mathcal G_{{\underline s}}(t)$ contains exactly one element $\kappa_0$. The
multiplicity of $\kappa_0$ as a root of $\kappa\mapsto
g_{{\underline s}}^{\kappa}(t)$ is $1$.
\end{corollary}
\begin{proof}
By Proposition \ref{parabound} we have $\mathcal G_{{\underline s}}(t)\subset
B_{2t}(0)\subset B_{2\pi t}(0)$, and the claim follows from
Proposition \ref{parrayend}. \hfill $\Box$ \par\medskip
\end{proof}
\subsection{Parameter Rays at Their Full Length}
\begin{lemma}[The Domain of Definition of $\kappa\mapsto
g_{{\underline s}}^{\kappa}(t)$]\label{defdomain} \rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent Let ${\underline s}\in\mathcal S_0$
be an external address.
\begin{enumerate}
\item \label{existsN} For every bounded set $\Lambda$ of
parameters and every compact interval $J\subset (t_{{\underline s}},\infty)$
of potentials there is an $N\in{\mbox{\bbf N}}$ such that
$$ \forall n\ge N,\;\forall\kappa\in \Lambda,\;\forall t\in J:\quad
\kappa\in \mathcal D_{\sigma^n{\underline s}}(F^{\circ n}(t))\;.$$
\item Let $t_0>t_{{\underline s}}$. For every $\kappa_0\in \mathcal D_{{\underline s}}(t_0)$ there
are neighborhoods $J\subset{\mbox{\bbf R}}$ and $\Lambda\subset{\mbox{\bbf C}}$ of $t_0$ and
$\kappa_0$ respectively such that
$$ \forall t\in J,\;\forall \kappa\in\Lambda:\quad \kappa\in\mathcal D_{{\underline s}}(t)\;.$$
In particular, $\mathcal D_{{\underline s}}(t)$ is open for every $t>t_{{\underline s}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Recall from Theorem \ref{dynrays} that if $t>t_{{\underline s}}$, the only
possible reason for $g_{{\underline s}}^{\kappa}(t)$ not to be defined is the
existence of an $n\ge 1$ such that
$g_{\sigma^n{\underline s}}^{\kappa}(t_0)=0$ with $t_0\ge F^{\circ n}(t)$.
For the first claim, let $K:=\sup _{\kappa\in \Lambda}|\kappa|$.
Take $N$ big enough such that $F^{\circ
N}(t)>\max\{t_{\sigma^N{\underline s}}^K,K+5\}$ for all $t\in J$. By Theorem
\ref{dynrays}, this implies for all $\kappa\in \Lambda$ and all
$t\in J$ that the dynamic ray $g_{\sigma^N{\underline s}}^{\kappa}$ is defined
at the potential $F^{\circ N}(t)$ with
\begin{eqnarray}
\mbox{\rm Re}\left(g_{\sigma^N{\underline s}}^{\kappa}(F^{\circ N}(t))\right)&\ge&
F^{\circ N}(t)-K-\left|r_{\kappa,\sigma^N{\underline s}}(F^{\circ N}(t))\right|>\nonumber\\
&>&F^{\circ N}(t)-K-5 > 0\;.\label{notzero}
\end{eqnarray}
This shows the first statement.
For the second claim, let $(t_0,\kappa_0)$ be a pair of a
potential $t_0>t_{{\underline s}}$ and a parameter $\kappa_0\in{\mbox{\bbf C}}$ such that
$\kappa_0\in \mathcal D_{{\underline s}}(t_0)$. Suppose by way of contradiction that
there are sequences $(t_n)_{n\ge 1}$ and $(\kappa_n)_{n\ge 1}$
with $t_n\to t_0$ and $\kappa_n \to\kappa_0$, such that
\[
\forall n\ge 1\ \ \exists N_n\ge 1\;,\ \exists t_n'\ge F^{\circ
N_n}(t_n):\quad g_{\sigma^{N_n}{\underline s}}^{\kappa_n}(t_n')=0\;.
\]
By the first part above, we may pass to a subsequence so that all
the $N_n$ are equal to some $N$. Furthermore, the sequence
$(t_n')_{n\ge 1}$ is contained in a compact interval $[F^{\circ
N}(t_*),t^*]$, where $t_*:=\inf_n t_n$ and $t^*$ is a potential
above which the estimate (\ref{notzero}) shows that the rays
$g_{\sigma^N{\underline s}}^{\kappa_n}$ have no zeros. So by passing
to a subsequence once more we may assume that $(t_n')_n$ converges
to some $t_0'\ge F^{\circ N}(t_0)$. Note that
$\kappa_0\in\mathcal D_{{\underline s}}(t_0)$ implies $\kappa_0\in
\mathcal D_{\sigma^N{\underline s}}(t_0')$. So since the map $(t,\kappa)\mapsto
g_{\sigma^{N}{\underline s}}^{\kappa}(t)$ is continuous wherever it is
defined, it follows from $g_{\sigma^{N}{\underline s}}^{\kappa_n}(t_n')=0$ for
all $n\ge 1$ that $\lim_{n\to\infty}
g_{\sigma^{N}{\underline s}}^{\kappa_n}(t_n')= 0$, and therefore
$g_{\sigma^{N}{\underline s}}^{\kappa_0}(t_0')= 0$. This contradicts the
assumption $\kappa_0\in \mathcal D_{{\underline s}}(t_0)$. \hfill $\Box$ \par\medskip
\end{proof}
\begin{proposition}[Discreteness and Local Cont.~Extension
of $G_{{\underline s}}$]\label{discreteness}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent Consider a
sequence ${\underline s}\in\mathcal S_0$ and a potential $t>t_{{\underline s}}$.
\begin{enumerate}
\item If $t_n\to t$ and $\kappa_n\to\kappa$ are sequences such
that $\kappa_n\in\mathcal G_{{\underline s}}(t_n)$ for all $n\ge 1$, then
$\kappa\in\mathcal G_{{\underline s}}(t)$. In particular, $\mathcal G_{{\underline s}}(t)$ is closed.
\item The set $\mathcal G_{{\underline s}}(t)$ is discrete in ${\mbox{\bbf C}}$.
\item For every $\kappa_0\in \mathcal G_{{\underline s}}(t)$ there are neighborhoods
$\Lambda\subset{\mbox{\bbf C}}$ and $J \subset{\mbox{\bbf R}}$ containing $\kappa_0$ and $t$
respectively, such that for every $t'\in J$, the number of
elements of $\mathcal G_{{\underline s}}(t')\cap \Lambda$ (counting multiplicities)
equals the finite multiplicity of $\kappa_0$ as a root of the map
$\kappa\mapsto g_{{\underline s}}^{\kappa}(t)$.
More precisely, for every sequence $t_n\to t$ there is an $N\in{\mbox{\bbf N}}$
and a sequence $(\kappa_n)_{n\ge N}\to\kappa_0$ such that
$\kappa_n\in\mathcal G_{{\underline s}}(t_{n})$ for all $n\ge N$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $t_n\to t$ and $\kappa_n\to\kappa$ be sequences such that
$\kappa_n\in\mathcal G_{{\underline s}}(t_n)$ for all $n\ge 1$. We have to show that
$\kappa\in\mathcal G_{{\underline s}}(t)$. By Lemma \ref{defdomain} (\ref{existsN}) we
may assume without loss of generality that there is an $N\in{\mbox{\bbf N}}$
such that for all $m\ge N$ we have
$\{\kappa,\kappa_1,\kappa_2,\ldots\}\subset
\mathcal D_{\sigma^m{\underline s}}(F^{\circ m}(t'))$ for every
$t'\in\{t,t_1,t_2,\dots\}$. For every $n$ we have
$g_{\sigma^N{\underline s}}^{\kappa_n}(F^{\circ N}(t_n))=E_{\kappa_n}^{\circ
N}(0)$ and thus by continuity $g_{\sigma^N{\underline s}}^{\kappa}(F^{\circ
N}(t))=E_{\kappa}^{\circ N}(0)$. If $\kappa\not\in \mathcal D_{{\underline s}}(t)$,
then there is an $m\ge 1$ and a $t_0\ge F^{\circ m}(t)$ such that
$g_{\sigma^m{\underline s}}^{\kappa}(t_0)=0$ and thus
\[
E_{\kappa}^{\circ N}(0)=g_{\sigma^{m+N}{\underline s}}^{\kappa}(F^{\circ
N}(t_0))\;.
\]
We get two different potentials for $E_{\kappa}^{\circ N}(0)$,
which contradicts injectivity of $g^{\kappa}$ in Theorem
\ref{dynrays}. Therefore $\kappa\in\mathcal D_{{\underline s}}(t)$, and by
continuity $\kappa\in\mathcal G_{{\underline s}}(t)$.
For discreteness, consider a parameter $\kappa_0\in\mathcal G_{{\underline s}}(t)$ and
suppose that $\kappa_0$ is not isolated in $\mathcal G_{{\underline s}}(t)$. Let $U$
be the connected component of $\mathcal D_{{\underline s}}(t)$ which contains
$\kappa_0$. Since $\mathcal D_{{\underline s}}(t)$ is open, $U$ is open in ${\mbox{\bbf C}}$.
Analyticity of $\kappa\mapsto g_{{\underline s}}^{\kappa}(t)$ and the identity
principle imply $U\subset \mathcal G_{{\underline s}}(t)\subset \mathcal D_{{\underline s}}(t)$. Therefore
$U$ is also the connected component of the closed set $\mathcal G_{{\underline s}}(t)$
containing $\kappa_0$ and therefore closed in ${\mbox{\bbf C}}$. We conclude
$U={\mbox{\bbf C}}$, which is a contradiction.
For the third claim, consider neighborhoods $J_0,\Lambda_0$ of
$t,\kappa_0$ respectively as provided by Lemma \ref{defdomain}(2).
Since $\mathcal G_{{\underline s}}(t)$ is discrete, there is an $\varepsilon>0$ such that
$\kappa_0$ is the only root of the map $\kappa\mapsto
g_{{\underline s}}^{\kappa}(t)$ within $\Lambda:=D_{\varepsilon}(\kappa_0)$ and such
that $\overline\Lambda\subset \Lambda_0$. Let $\gamma(s):=\kappa_0+\varepsilon
e^{is}$. By the argument principle, the multiplicity of $\kappa_0$ as
a zero equals the winding number $\eta(g_{{\underline s}}^{\gamma}(t),0)$ of
$g_{{\underline s}}^{\gamma}(t)$ around $0$. The holomorphic family
$\{f_{t'}(\kappa):=g_{{\underline s}}^{\kappa}(t'): \kappa\in \Lambda, t'\in
J_0\}$ is bounded and thus normal by Montel's Theorem. Hence if
$t_n\to t$ then $f_{t_n}$ converges uniformly to $f_t$ on
$\overline\Lambda$. In particular we have
$\eta(g_{{\underline s}}^{\gamma}(t'),0)=\eta(g_{{\underline s}}^{\gamma}(t),0)$ for
potentials $t'$ sufficiently close to $t$.
By uniform convergence we can shrink $\gamma$ as $t_n$ gets closer
to $t$, and we find such parameters $\kappa_n$ which converge to
$\kappa_0$ as claimed in the additional statement. \hfill $\Box$ \par\medskip
\end{proof}
We are now ready to state and to prove the main result.
\begin{theorem}[Parameter Rays at Their Full Length]\label{fulllength}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For every external address ${\underline s}\in\mathcal S_0$ there is a
unique curve $G_{{\underline s}}:(t_{{\underline s}},\infty)\to I\subset{\mbox{\bbf C}}$, called
\emph{parameter ray}, such that for all $t>t_{{\underline s}}$,
$\kappa_0=G_{{\underline s}}(t)$ satisfies $g_{{\underline s}}^{\kappa_0}(t)=0$ and is a
simple root of the map $\kappa\mapsto g_{{\underline s}}^{\kappa}(t)$.
Conversely, if $g_{{\underline s}}^{\kappa}(t)=0$ with $t>t_{{\underline s}}$, then
$\kappa=G_{{\underline s}}(t)$. The parameter rays are injective and pairwise
disjoint, and they carry the asymptotics
\[
G_{{\underline s}}(t)=t+ 2\pi i s_1 +R_{{\underline s}}(t) \quad \mbox{with} \quad
|R_{{\underline s}}(t)| =O(te^{-t}) \mbox{ as }t\to\infty\ .
\]
\end{theorem}
\begin{proof}
By Propositions \ref{parabound} and \ref{discreteness}(2),
$\mathcal G_{{\underline s}}(t)$ is bounded and discrete, and thus finite. Set
$n(t):=|\mathcal G_{{\underline s}}(t)|$ (counting multiplicities) and let
\[
J_{{\underline s}}:=\{T>t_{{\underline s}}:\ n(t)\ge 1 \ \forall t\ge T\}\;.
\]
By Corollary \ref{parendunique} we have $n(t)=1$ for every
sufficiently large $t$. The set $J_{{\underline s}}$ is thus non-empty, and it
follows from Proposition \ref{discreteness}(3) that $J_{{\underline s}}$ is
open. We will now show that $J_{{\underline s}}$ is also closed in
$(t_{{\underline s}},\infty)$. Let $t_*:=\inf J_{{\underline s}}$ and suppose
$t_*>t_{{\underline s}}$. A function $t\mapsto G_{{\underline s}}(t)\in\mathcal G_{{\underline s}}(t)$ can be
defined on $J_{{\underline s}}$, possibly involving a choice. By Proposition
\ref{parabound}, the set $\{G_{{\underline s}}(t):\ t\in(t_*,t_*+1)\}$ is
contained in a compact set. Thus the set $L$ of all limits
$\lim_{t\searrow t_*} G_{{\underline s}}(t)$ is a nonempty compact subset of
${\mbox{\bbf C}}$. By Proposition \ref{discreteness}(1), it follows that
$L\subset \mathcal G_{{\underline s}}(t_*)$. Hence $J_{{\underline s}}$ is closed in
$(t_{{\underline s}},\infty)$ and $J_{{\underline s}}=(t_{{\underline s}},\infty)$.
Similarly we show that the set $J_{{\underline s}}':=\{t>t_{{\underline s}}:\ n(t)\ge 2\}$
is empty: using Proposition \ref{discreteness} as above,
this set is open and closed relative to $(t_{{\underline s}},\infty)$. However,
the complement is non-empty, because it contains an interval of
the form $(T,\infty)$ (on which Corollary \ref{parendunique}
holds). Therefore $J_{{\underline s}}'=\emptyset$, and for every $t>t_{{\underline s}}$ we
have $n(t)=1$. This shows that the choice for
$G_{{\underline s}}:(t_{{\underline s}},\infty)\to I$ above was unique.
By Proposition \ref{discreteness}(3), it follows that $G_{{\underline s}}$ is
continuous. Injectivity of $G_{{\underline s}}$ and disjointness of
the parameter rays follow from the injectivity of $g^{\kappa}$ in
Theorem \ref{dynrays}, since every parameter $\kappa$ has at most
one external address and one potential. The asymptotic behavior
follows from Proposition \ref{parrayend}. \hfill $\Box$ \par\medskip
\end{proof}
\par\medskip \noindent {\sc Remark. } Note that unlike dynamic rays, the parameter rays are
always defined on the entire interval $(t_{{\underline s}},\infty)$.
In \cite{frs}, the above result will be extended to endpoints for
fast sequences ${\underline s}$. This will yield a complete classification of
escaping parameters: there is a continuous bijection $G:X\to I$,
where $X$ is defined as in Section \ref{sec_dynrays}, and the
path-connected components of $I$ are exactly the parameter rays,
including the endpoints at fast addresses. Moreover, one can
easily show \cite{f} that the parameter rays are $C^1$-curves, and
it seems that with some more work one can also show $C^{\infty}$-smoothness.
\subsection{Vertical Order of Parameter Rays}
We show that parameter rays have a natural vertical order which
coincides with the lexicographic order of their external addresses.
\begin{definition}[Vertical Order]
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent Let $\Gamma$ be a family of injective rays
$\gamma:{\mbox{\bbf R}}^+\to{\mbox{\bbf C}}$ with $\mbox{\rm Re}\gamma(t) \to \infty$ as $t\to\infty$,
such that any two curves $\gamma_1\neq\gamma_2\in\Gamma$ have a
bounded set of intersection. For $\gamma_1,\gamma_2\in\Gamma$ let
${\mbox{\bbf H}}_R$ be a right half-plane in which $\gamma_1$ and $\gamma_2$
are disjoint, and denote ${\mbox{\bbf H}}_R^+(\gamma_2)$ and ${\mbox{\bbf H}}_R^-(\gamma_2)$
the upper respectively lower component of ${\mbox{\bbf H}}_R\setminus
\gamma_2$. Then
\[
\gamma_1\succ\gamma_2:\quad\Longleftrightarrow\quad
\gamma_1\cap{\mbox{\bbf H}}_R\subset {\mbox{\bbf H}}_R^+(\gamma_2)
\]
defines a linear order on $\Gamma$. We say that $\gamma_1$ is
\emph{above} $\gamma_2$. \hfill $\Box$ \par\medskip
\end{definition}
Equip $\mathcal S={\mbox{\bbf Z}}^{{\mbox{\bbf N}}^*}$ with the lexicographic order `$>$'.
\begin{lemma}[Vertical Order of Dynamic
Rays]\label{vert_order_dyn}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For all $\kappa\in{\mbox{\bbf C}}$ and all exponentially bounded
addresses ${\underline s},\tilde{\underline s}\in\mathcal S_0$,
\[
g_{{\underline s}}^{\kappa}\succ g_{\tilde {\underline s}}^{\kappa}
\quad\Longleftrightarrow \quad {\underline s} > \tilde{\underline s}\;.
\]
\end{lemma}
\begin{proof}
Without loss of generality, say ${\underline s}>\tilde {\underline s}$. If $s_1>\tilde
s_1$, then $g_{{\underline s}}^{\kappa}\succ g_{\tilde{\underline s}}^{\kappa}$ follows
directly from the asymptotic estimate (\ref{rK_est}) in Theorem
\ref{dynrays}. Otherwise let $k> 1$ be the first entry in which
${\underline s}$ and $\tilde {\underline s}$ differ. Then by the same argument,
$g_{\sigma^{k-1}{\underline s}}^{\kappa}\succ
g_{\sigma^{k-1}\tilde{\underline s}}^{\kappa}$. Since
$$g_{{\underline s}}^{\kappa}(t)=L_{\kappa,s_1}\circ \ldots \circ
L_{\kappa,s_{k-1}}\circ
g_{\sigma^{k-1}{\underline s}}^{\kappa}(F^{\circ (k-1)}(t))\;,$$ where
the translated logarithms $L_{\kappa,s}=\mbox{Log}-\kappa +2\pi i s$
preserve the vertical order, the claim follows. \hfill $\Box$ \par\medskip
\end{proof}
\begin{proposition}[Vertical Order of Parameter Rays]
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For all exponentially bounded addresses
${\underline s},\tilde{\underline s}\in\mathcal S_0$,
\[
G_{{\underline s}}\succ G_{\tilde {\underline s}} \quad\Longleftrightarrow \quad {\underline s} >
\tilde{\underline s}\;.
\]
\end{proposition}
\begin{proof}
The claim follows from the asymptotic estimate in Theorem~\ref{fulllength} if the
first entries in ${\underline s}$ and $\tilde {\underline s}$ are different, so we assume they are equal
and ${\underline s}>\tilde {\underline s}$.
Consider the external address ${\underline s}=s_1s_2s_3\dots$ and set
\[
S:=\{z\in{\mbox{\bbf C}}\colon \mbox{\rm Re}(z)\ge \xi, 2\pi (s_1-2)\le \mbox{\rm Im}(z)\le 2\pi (s_1+2)\}
\]
depending on $\xi\in{\mbox{\bbf R}}$. By Theorem~\ref{dynrays}, part 2~(b), we can fix $\xi$ so
that for all $\kappa\in S$, there is a potential $\tau_\kappa>t_{\underline s}$ such that
$g_{{\underline s}}^{\kappa}(t)$ is defined for $t\geq \tau_\kappa$ and
$\mbox{\rm Re}(g_{{\underline s}}^{\kappa}(\tau_\kappa))<-11$.
In the dynamical plane for $\kappa\in S$, consider the right half plane
\[
H_\kappa:=\{z\in{\mbox{\bbf C}}\colon \mbox{\rm Re}(z)>\mbox{\rm Re}(g_{{\underline s}}^{\kappa}(\tau_\kappa))\}
\,\,.
\]
The ray tail $g_{{\underline s}}^{\kappa}((\tau_\kappa,\infty))$ cuts $H_\kappa$ into two
unbounded components $H^+_\kappa$ and $H^-_\kappa$ (plus possibly some bounded
components) so that in $H^+_\kappa$, imaginary parts are unbounded above, while in
$H^-_\kappa$, they are unbounded below. The asymptotics in (\ref{rK_est}) from
Theorem~\ref{dynrays} together with the condition
$\mbox{\rm Re}(g_{{\underline s}}^{\kappa}(\tau_\kappa))<-11$
implies that for every $\kappa\in S$, either $0\in H^+_\kappa\cup H^-_\kappa$ or
$0=g_{{\underline s}}^{\kappa}(t)$ for some $t\geq\tau_\kappa$.
Define $S^+_0:=\{\kappa\in S\colon 0\in H^+_\kappa\}$ and analogously $S^-_0$. By
construction, $S\subset S^+_0\dot\cup S^-_0\dot\cup G_{\underline s}((t_{\underline s},\infty))$.
The asymptotics of the parameter ray $G_{\underline s}$ from Theorem~\ref{fulllength} imply
that $S\setminus G_{\underline s}((t_{\underline s},\infty))$ contains two unbounded components (plus possibly
some bounded ones); let $S^+$ and $S^-$ be the unbounded components above resp.\
below $G_{\underline s}((t_{\underline s},\infty))$ in the obvious sense.
We claim that $S^+\subset S^+_0$ and $S^-\subset S^-_0$. Indeed, the set $S^+$
contains a tail of the parameter ray $G_{{\underline s}'}$ with ${\underline s}'=(s_1+1)s_2s_3s_4\dots$.
For parameters $\kappa$ on this tail, the vertical order of dynamic rays implies
$0\in H^+_\kappa$, hence $\kappa\in S^+_0$. Since $S^+$ is connected, it follows
that $S^+\subset S^+_0$, and analogously $S^-\subset S^-_0$.
The parameter ray $G_{\tilde {\underline s}}$ also has a tail in $S$, and for parameters
$\kappa$ on this tail, the vertical order of dynamic rays implies
$\kappa\in S^-_0$. If $\mbox{\rm Re}(\kappa)$ is sufficiently large, then $\kappa\in S^+\cup
S^-$. Finally, $\kappa\in S^+$ would imply $\kappa\in S^+_0$, a contradiction, so
we conclude $\kappa\in S^-$.
\hfill $\Box$ \par\medskip
\end{proof}
\section{The Proof of the Bound on Parameter
Rays}\label{sec_bound}
\subsection{First Derivative of Dynamic Rays}\label{secderiv}
In order to prove Proposition \ref{parabound}, we will need
estimates on the derivative of dynamic rays, which lead to
estimates on the variation numbers of dynamic rays. These in turn
will help us control the rays at small potentials in order to
obtain a bound on the absolute value of all $\kappa\in \mathcal G_{{\underline s}}(t)$
with prescribed combinatorics $({\underline s},t)$.
We will often be concerned with obtaining estimates on some ``tail
pieces'' of dynamic rays. In order to simplify the discussion
without having to keep track of exact constants, we make the
following definition.
\begin{definition}[Properties on Ray Tails]
\label{raytails} \rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent
We say that a property $P(\kappa,{\underline s},t)$ holds \emph{on ray tails} if there are
$A,B,C\ge 0$ such that for all ${\underline s}\in\mathcal S_0$ and all $K\ge 1$, the property
$P(\kappa,{\underline s},t)$ holds whenever $|\kappa| \le K$ and
\[
t\ge At^*_{\underline s}+B\log K+C
\;,
\]
where $t^*_{\underline s}$ is the constant from Definition and Lemma
\ref{minpot}.
\end{definition}
\par\medskip \noindent {\sc Remark. }
The problem is that it is much easier to control tails of dynamic
rays than to control entire rays. This control is non-uniform in $\kappa$: if
$|\kappa|$ is large, then we have good control only for large potentials $t$. The
following result often allows us to transfer results from ray tails to entire rays.
\begin{lemma}[From Ray Tails to Entire Rays]
\label{Lem:FromTailsToRays} \rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent
Suppose a property $P(\kappa,{\underline s},t)$ holds on ray tails and is backward invariant,
i.e.\ it holds for $g_{\kappa,{\underline s}}(t)$ whenever it holds for
$g_{\kappa,\sigma({\underline s})}(F(t))$. Then it holds on all dynamic rays.
\end{lemma}
\par\medskip\noindent {\sc Proof. }
The property holds for $g_{\kappa,{\underline s}}(t)$ as soon as there is an $n\in{\mbox{\bbf N}}$
such that it holds for $g_{\kappa,\sigma^n({\underline s})}(F^{\circ n}(t))$. Such an $n$
exists: by the last claim in Definition and Lemma \ref{minpot}, there is an $N$
with $F^{\circ N}(t)\ge At^*_{\sigma^N({\underline s})}+B\log K+C$.
\hfill $\Box$ \par\medskip
The quantifier ``on ray tails'' commutes with finite,
but not infinite conjunctions: ``$\forall n:\, P_n(\kappa,{\underline s},t)$
on ray tails'' is weaker than ``on ray tails, $\forall n:\,
P_n(\kappa,{\underline s},t)$'': in the first case, the constants $A$, $B$, $C$ may depend on
$n$.
Using this notation, we can now say that the asymptotic bound
(\ref{rK_est}) of Theorem \ref{dynrays} holds on ray tails: in this case, for all
$t\ge t_{\underline s}^*+2\log K+2\log 4 \ge t_{\underline s}^*+2\log(K+3)=t_{\underline s}^K$.
This gives us very good control on the orbit
of points on dynamic rays, except for at most finitely many steps.
The following lemma helps to estimate after how many iteration steps good control
takes over.
\begin{lemma}[Bound on Initial Iteration Steps]
\label{Lem:BoundInitialSteps} \rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent
Fix ${\underline s}\in\mathcal S_0$, $t>t_{{\underline s}}$, $A\geq 1$ and $B,C\geq 0$. Then for every $K\ge 1$,
there is an $N\in{\mbox{\bbf N}}$ such that for all $n\geq N$, we have
\begin{equation}
F^{\circ n}(t)\ge At_{\sigma^n({\underline s})}^*+B\log K+C \;.
\label{Eq:Bound_on_N}
\end{equation}
The value of $N$ has the following properties for fixed ${\underline s}$:
\begin{enumerate}
\item
for fixed $K$, it is (weakly) monotonically decreasing in $t$;
\item
if $t\ge At_{{\underline s}}^*+B\log K+C$, then $N=0$;
\item
if for fixed $t$, $N_0$ is such that
\[
F^{\circ N_0}(t)\ge At_{\sigma^{N_0}({\underline s})}^*+C+1 \;,
\]
then (\ref{Eq:Bound_on_N}) holds for $N=N_0+N_1$ as soon as $F^{\circ N_1}(1)\ge
B\log K$.
\end{enumerate}
\end{lemma}
\par\medskip\noindent {\sc Proof. }
Note first that by convexity, $F(x+y)\ge F(x)+F(y)$ for all $x,y\ge 0$. Similarly,
$F(Ax)\ge AF(x)$ and in particular $F(x)\ge x$. Moreover, by definition,
$F(t^*_{{\underline s}})\ge t^*_{\sigma({\underline s})}$. This implies that if (\ref{Eq:Bound_on_N}) holds
for $n$, then it also holds for $n+1$. The second claim follows, and the first is
trivial.
The third claim is verified as follows:
\begin{eqnarray*}
F^{\circ N}(t)=F^{\circ(N_0+N_1)}(t) &\ge&
F^{\circ N_1}(At_{\sigma^{N_0}({\underline s})}^*+C+1)
\\
&\ge&
F^{\circ N_1}(At^*_{\sigma^{N_0}({\underline s})})+ F^{\circ N_1}(C)+ F^{\circ N_1}(1)
\\
&\ge&
At^*_{\sigma^N({\underline s})} + C + B\log K \;.
\end{eqnarray*}
\hfill $\Box$ \par\medskip
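\par\medskip \noindent {\sc Remark. }
With $F(t)=e^t-1$, the superadditivity used in the proof can also be checked
directly: for $x,y\ge 0$,
\[
F(x+y)-F(x)-F(y)=e^{x+y}-e^x-e^y+1=(e^x-1)(e^y-1)\ge 0\;.
\]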
Using the terminology of ``properties on ray tails'', the following statements
follow easily from \cite{sz}, Lemma~3.3 and Proposition~3.4.
\begin{lemma}[Further Properties of Dynamic Rays]\label{propdynrays}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent Let ${\underline s}\in\mathcal S_0$, $K>0$ and $\kappa$ be a parameter with
$|\kappa|\le K$. On the interval $(t_{{\underline s}}^K,\infty)$, the curve
$g_{{\underline s}}^{\kappa}$ is the uniform limit of the functions
$g_{\kappa,{\underline s}}^n$ defined by $g_{\kappa,{\underline s}}^0:=\mathrm{id}$ and
$g_{\kappa,{\underline s}}^{n+1}(t):=L_{\kappa,s_1}\circ
g_{\kappa,\sigma{\underline s}}^n(F(t))$, i.e.
\begin{equation}\label{gn}
g_{\kappa,{\underline s}}^n(t):=L_{\kappa,s_1}\circ\cdots\circ
L_{\kappa,s_n}(F^{\circ n}(t))\;.
\end{equation}
On ray tails, they satisfy for all $n,k\ge 1$
\begin{eqnarray}
|g_{\kappa,{\underline s}}^n(F^{\circ k}(t))|&\ge& 2^k\quad\mbox{and}\\
|g_{\kappa,{\underline s}}^k(t)-g_{{\underline s}}^{\kappa}(t)|&\le & 2^{-k}\;.
\end{eqnarray}
\hfill $\Box$ \par\medskip
\end{lemma}
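\par\medskip \noindent {\sc Example. }
Unfolding (\ref{gn}) for $n=2$, with the translated logarithms
$L_{\kappa,s}=\mbox{Log}-\kappa+2\pi is$ as above, gives
\[
g_{\kappa,{\underline s}}^2(t)=\mbox{Log}\Bigl(\mbox{Log}\bigl(F^{\circ 2}(t)\bigr)
-\kappa+2\pi is_2\Bigr)-\kappa+2\pi is_1\;;
\]
the entries of ${\underline s}$ select the branches of the iterated logarithms.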
We omit the straightforward but technical proof of the following
lemma.
\begin{lemma}[Some Properties of $F$]\label{PropF}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent If $x\ge 0$ and $t\ge 5$ are real numbers such that
$t\ge 2x+5$ then
\[
\sum_{k=1}^{\infty}\frac{1}{F^{\circ
k}(t)+1}<3e^{-t}\quad\mbox{and}\quad
\sum_{k=1}^{\infty}\frac{F^{\circ k}(x)}{F^{\circ k}(t)+1} <
(e^x+1)e^{-t}\;.
\]
Moreover,
\begin{eqnarray}
\frac{d}{dt}F^{\circ n}(t)&=& \prod_{k=1}^n (F^{\circ
k}(t)+1)\quad\mbox{and}\label{ddtFn}\\
\exists T>0:\ \forall t\ge T,n\ge 1:\quad && \frac{(F^{\circ
n})'(t)}{(F^{\circ n}(t)+1)^2}\le \frac1{F^{\circ
n}(t-1)+1}\;.\label{Fk'Fk2}
\end{eqnarray}
\hfill $\Box$ \par\medskip
\end{lemma}
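\par\medskip \noindent {\sc Remark. }
As a sample of the omitted computations, (\ref{ddtFn}) follows from the chain
rule together with $F'(t)=F(t)+1$ (which holds for $F(t)=e^t-1$):
\[
\frac{d}{dt}F^{\circ n}(t)=\prod_{k=1}^n F'\bigl(F^{\circ (k-1)}(t)\bigr)
=\prod_{k=1}^n\bigl(F^{\circ k}(t)+1\bigr)\;.
\]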
Differentiability of dynamic rays has already been proven in 1988
by M.~Viana da Silva \cite{v}. We will prove it again in order to
obtain explicit estimates on the first and second derivatives.
\begin{proposition}[The Derivative of Dynamic Rays]\label{ddtdynray}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For every sequence ${\underline s}\in\mathcal S_0$ and every parameter
$\kappa$, the dynamic ray $g_{{\underline s}}^{\kappa}(t)$ is continuously
differentiable with respect to the potential $t$ with derivative
\begin{equation}\label{ddteqn}
(g_{{\underline s}}^{\kappa})'(t)=\prod_{k=1}^{\infty} \frac {F^{\circ
k}(t)+1}{g_{\sigma^k{\underline s}}^{\kappa} (F^{\circ k}(t))} \,\neq 0\,\,\,.
\end{equation}
Moreover, on ray tails,
\begin{equation}\label{gprimeest}
|(g_{{\underline s}}^{\kappa})'(t)-1|< e^{-t/2} \,\,.
\end{equation}
\end{proposition}
\begin{proof}
It is sufficient to prove that $g_{{\underline s}}^{\kappa}$ is differentiable
and satisfies (\ref{ddteqn}) and (\ref{gprimeest}) on ray tails:
if (\ref{ddteqn}) is known for $g_{\sigma{\underline s}}^{\kappa}$ at $F(t)$,
then
\begin{eqnarray*}
(g_{{\underline s}}^{\kappa})'(t)&=&\frac{(E_{\kappa}\circ
g_{{\underline s}}^{\kappa})'(t)}{E_{\kappa}(g^{\kappa}_{{\underline s}}(t))}=\frac{(g_{\sigma{\underline s}}^{\kappa}\circ
F)'(t)}{g^{\kappa}_{\sigma{\underline s}}(F(t))}=\\
&=&\left(\prod_{k=1}^{\infty}\frac{F^{\circ
(k+1)}(t)+1}{g^{\kappa}_{\sigma^{k+1}{\underline s}}(F^{\circ
(k+1)}(t))}\right)\frac{F(t)+1}{g^{\kappa}_{\sigma{\underline s}}(F(t))}=\prod_{k=1}^{\infty}
\frac {F^{\circ k}(t)+1}{g_{\sigma^k{\underline s}}^{\kappa} (F^{\circ
k}(t))}\;.
\end{eqnarray*}
By Lemma~\ref{Lem:FromTailsToRays}, (\ref{ddteqn}) holds on all rays.
Recall the functions $g_{\kappa,{\underline s}}^n(t)$, defined in Lemma
\ref{propdynrays}, which converge uniformly to
$g_{{\underline s}}^{\kappa}(t)$. By the chain rule, for every $\kappa\in{\mbox{\bbf C}}$,
${\underline s}\in\mathcal S_0$, $n\ge 1$ and $t$ (where defined),
\begin{eqnarray*}
\frac{d}{dt}L_{\kappa,s_1}\circ\cdots\circ
L_{\kappa,s_n}(t)&=&\prod_{k=1}^{n}\left(g_{\kappa,\sigma^k{\underline s}}^{n-k}(F^{\circ
(k-n)}(t))\right)^{-1}\;,
\end{eqnarray*}
and thus together with (\ref{ddtFn}) in Lemma \ref{PropF}, by the
chain rule again,
\begin{eqnarray}
(g^n_{\kappa,{\underline s}})'(t)=\left(\prod_{k=1}^{n} \frac {F^{\circ
k}(t)+1} {g_{\sigma^k{\underline s}}^{\kappa}(F^{\circ k}(t))}\right) \cdot
\left(\prod_{k=1}^{n}
\underbrace{\frac{g_{\sigma^k{\underline s}}^{\kappa}(F^{\circ k}(t))}
{g^{n-k}_{\kappa,\sigma^k{\underline s}}(F^{\circ k}(t))}}_{=:P_k^n(t)}\right)
\,\,.\label{gnprime}
\end{eqnarray}
Let us first show that (on ray tails) $\prod_{k=1}^n P_k^n(t)$
converges uniformly to $1$ as $n\to\infty$. Indeed, on ray tails,
\[
|P_k^n(t)-1|=\left|\frac{g_{\sigma^k{\underline s}}^{\kappa}(F^{\circ
k}(t))-g^{n-k}_{\kappa,\sigma^k{\underline s}}(F^{\circ
k}(t))}{g^{n-k}_{\kappa,\sigma^k{\underline s}}(F^{\circ k}(t))} \right|\le
\frac 1{2^{n-k}}\cdot\frac1{2^k}=\frac1{2^n}
\]
by Lemma \ref{propdynrays}. Thus $|\prod_{k=1}^n
P_k^n(t)-1|\le (1+2^{-n})^n-1\to 0$, which means that $\prod_{k=1}^n
P_k^n(t)$ converges to $1$ uniformly in $t$.
By Weierstra{\ss}' Theorem, it only remains to show that the first
product of (\ref{gnprime}) converges uniformly on ray tails and
satisfies the uniform bound (\ref{gprimeest}) there. Note that
\[
\mbox{Log} \prod_{k=1}^{n} \frac{F^{\circ
k}(t)+1}{g_{\sigma^k{\underline s}}^{\kappa} (F^{\circ k}(t))}=-\sum_{k=1}^{n}
\mbox{Log}\left(1+\underbrace{\frac{g_{\sigma^k{\underline s}}^{\kappa} (F^{\circ
k}(t))-F^{\circ k}(t)-1}{F^{\circ k}(t)+1}}_{=:x_k(t)}\right)\;.
\]
Again by Lemma \ref{propdynrays}, on ray tails,
$$|x_k(t)|\le
\frac{|\kappa|+2\pi |s_{k+1}|+2}{F^{\circ k}(t)+1}\le
\frac{|\kappa|+2\pi F^{\circ k}(t_{{\underline s}}^*)+2}{F^{\circ k}(t)+1}\le
1/2\;.
$$
Thus on ray tails, by the first inequality from Lemma \ref{PropF},
\[
\sum_{k=1}^{\infty}|x_k(t)|<\left(3(|\kappa|+2)+2\pi(e^{t_{{\underline s}}^*}+1)
\right)e^{-t}\le e^{-t/2}/4\;.
\]
Since $|\mbox{Log}(1+x)|\le 2|x|$ for $|x|\le 1/2$, it follows on ray
tails:
\begin{eqnarray*}
\left|\mbox{Log} \prod_{k=1}^{\infty} \frac{F^{\circ
k}(t)+1}{g_{\sigma^k{\underline s}}^{\kappa} (F^{\circ k}(t))}\right|\le
\sum_{k=1}^{\infty}|\mbox{Log}(1+x_k(t))|
\le\sum_{k=1}^{\infty}2|x_k(t)|\le e^{-t/2}/2\le 1/2\;.
\end{eqnarray*}
Finally, (\ref{gprimeest}) follows on ray tails, since $|z-1|\le
2|\log z|$ for $|\log z|\le 1/2$.
\hfill $\Box$ \par\medskip
\end{proof}
\subsection{Second Derivative of Dynamic Rays}
\begin {proposition} [The Second Derivative of Dynamic Rays]
\label{d2dtdynray}\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent
Every dynamic ray $g^{\kappa}_{{\underline s}}:(t_{{\underline s}},\infty)\to
{\mbox{\bbf C}}$ is twice continuously differentiable. On ray tails,
\begin{equation}
\label{est_2nd_der} |(g_{{\underline s}}^{\kappa})''(t)| < e^{-t/2}
\quad\text{and}\quad \left |
\frac {(g^{\kappa}_{{\underline s}})''(t)}{(g^{\kappa}_{{\underline s}})'(t)}\right| < e^{-t/2}\ .
\end{equation}
\end {proposition}
\begin{proof}
Define $f_{{\underline s}}(t):=(t+1)/g_{{\underline s}}^{\kappa}(t)$ and for $k\ge 1$
\[
P_k(t):=\frac {F^{\circ k}(t)+1}{g_{\sigma^k{\underline s}}^{\kappa} (F^{\circ
k}(t))}=f_{\sigma^k{\underline s}}(F^{\circ k}(t))\;. \] The partial products
$h_N(t):=\prod_{k=1}^N P_k(t)$ in (\ref{ddteqn}) from Proposition
\ref{ddtdynray} converge uniformly to $(g_{{\underline s}}^{\kappa})'$ and we
thus need to show that the derivatives $h_N'$ converge uniformly
and that the limit satisfies (\ref{est_2nd_der}). By the product
rule,
\[
h_N'(t)=h_N(t)\cdot\sum_{k=1}^N\frac{P_k'(t)}{P_k(t)}\;.
\]
The factor $h_N(t)$ can be bounded by $2$. (This also shows that
the first estimate in (\ref{est_2nd_der}) implies the second one.)
Since $|s_{k+1}|\le F^{\circ k}(t_{{\underline s}}^*)$, on ray tails,
\[
|P_k(t)|\ge \frac{F^{\circ k}(t)+1}{F^{\circ k}(t)+|\kappa|+2\pi
|s_{k+1}|+1}\ge 1/2\quad \mbox{for every }k\ge 1\;.
\]
It remains to show that $\sum P_k'(t)$ converges on ray tails.
We estimate
\begin{eqnarray*}
|f_{\sigma^k{\underline s}}'(F^{\circ
k}(t))|&=&\left|\frac{g_{\sigma^k{\underline s}}^{\kappa}(F^{\circ
k}(t))-(F^{\circ k}(t)+1)
(g_{\sigma^k{\underline s}}^{\kappa})'(F^{\circ k}(t))}{(g_{\sigma^k{\underline s}}^{\kappa}(F^{\circ k}(t)))^2}\right|=\\
&=& \left|\frac{-\kappa+2\pi i s_{k+1}-1+O(F^{\circ
k}(t)e^{-2F^{\circ k}(t)/3})}{(F^{\circ k}(t)-\kappa+2\pi
is_{k+1}+O(e^{-F^{\circ k}(t)}))^2}\right|\le\\
&\le& \frac{2\pi F^{\circ k}(t_{{\underline s}}^*)+|\kappa|+2}{((F^{\circ
k}(t)+1)/2)^2}
\end{eqnarray*}
and thus by (\ref{Fk'Fk2}) in Lemma \ref{PropF}
\[
|P_k'(t)|=|f_{\sigma^k{\underline s}}'(F^{\circ k}(t))|\cdot (F^{\circ
k})'(t)\le 4\cdot \frac{2\pi F^{\circ
k}(t_{{\underline s}}^*)+|\kappa|+2}{F^{\circ k}(t-1)+1}\;.
\]
Using Lemma \ref{PropF} once more,
\begin{eqnarray*}
\sum_{k=1}^{\infty}|P_k'(t)|&\le& \left(3(4|\kappa|+8)+8\pi
(e^{t_{{\underline s}}^*}+1)\right)e^{-t+1}\le e^{-t/2}/4
\end{eqnarray*}
on ray tails. This shows that the $h_N'$ converge uniformly, and
\[
|(g_{{\underline s}}^{\kappa})''(t)|=\lim_{N\to\infty}|h_N'(t)|\le 2\sum_{k=1}^{\infty}
2|P_k'(t)|\le e^{-t/2}\;.
\]
\vspace{-1.2cm}
\hfill $\Box$ \par\medskip
\end{proof}
\subsection{Variation Numbers of Dynamic Rays}\label{sec_winnr}
Several key ideas in this section are due to Niklas Beisert.
For a closed $C^1$-curve $\gamma:[t_0,t_1]\to{\mbox{\bbf C}}$ and
$a\in{\mbox{\bbf C}}\setminus\gamma([t_0,t_1])$, \emph{the winding number of $\gamma$
around $a$} is defined by
\[
\eta(\gamma,a):=\frac{1}{2\pi}\int_{t_0}^{t_1}
d\arg(\gamma(t)-a)\quad \in{\mbox{\bbf Z}}\;.
\]
\begin{definition}[Variation
Number]\label{def_windingnr} \rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent Consider a $C^1$-curve
$\gamma:(t_0,\infty)\to{\mbox{\bbf C}}$ with $t_0\ge -\infty$ and $a\not\in
\gamma(t_0,\infty)$. Define the \emph{variation number of}
$\gamma$ \emph{around} $a$ by
\begin{eqnarray*}
\alpha(\gamma,a):=\frac{1}{2\pi}\int_{t_0}^\infty
\left|\mbox{\rm Im}\frac{\gamma'(t)}{\gamma(t)-a} \right|dt=
\frac{1}{2\pi}\int_{t_0}^\infty
\left|d\arg(\gamma(t)-a)\right|\quad\in [0,\infty]\;.
\end{eqnarray*}
\end{definition}
Unlike the winding number, the variation number also measures
local oscillations of the curve.
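\par\medskip \noindent {\sc Example. }
For instance, a curve that winds once around $a$ in positive direction and then
unwinds again contributes nothing to the signed winding integral, but twice to
the variation number:
\[
\frac{1}{2\pi}\int d\arg(\gamma(t)-a)=1-1=0\;,\qquad
\frac{1}{2\pi}\int \left|d\arg(\gamma(t)-a)\right|=1+1=2\;.
\]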
\begin{definition}[Admissible Curves]
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent An \emph{admissible curve} is an injective $C^2$-curve
$\gamma:(t_0,\infty)\to{\mbox{\bbf C}}$ with non-vanishing derivative $\gamma'$
such that for $t\to\infty$
\[
|\gamma(t)|=t+O(1)\;,\quad|\gamma'(t)|=1+O(1/t)\;,\quad\mbox{and}\quad
|\gamma''(t)|=O(1/t^2)\;.
\]
\end{definition}
\begin{lemma}[Admissible Curves Have Variation Numbers]
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent Let $\gamma:(t_0,\infty)\to{\mbox{\bbf C}}$ be an admissible curve.
\begin{enumerate}
\item If $a\in{\mbox{\bbf C}}\setminus\overline{\gamma(t_0,\infty)}$ and
$|\gamma'(t)|$ is bounded as $t\searrow t_0$, then
$\alpha(\gamma,a)$ is finite.
\item For every $t_1>t_0$,
$\alpha(\gamma|_{(t_1,\infty)},\gamma(t_1))$ is finite.
\item For every $t_1>t_0$, $\alpha(\gamma'|_{(t_1,\infty)},0)$ is
finite.
\end{enumerate}
\end{lemma}
\begin{proof}
In all statements the integrands are locally Riemann integrable.
Therefore we only have to show that the integrals
$\int_{t_0}^\infty |\mbox{\rm Im}\frac{\gamma'}{\gamma-a}|$ are finite near
the boundaries of integration.
Let us first discuss the lower boundary of integration. In Case 1,
we can bound $|\gamma(t)-a|$ below and $|\gamma'(t)|$ above. Fix
$\varepsilon>0$. For Case 3 the continuous function
$|\gamma''(t)/\gamma'(t)|$ is bounded on $[t_1,t_1+\varepsilon]$. In Case
2 however, the denominator tends to $0$. By the Taylor Theorem
applied to $\gamma\in C^2$, for every $t\in(t_1,t_1+\varepsilon)$ there
is a $\xi\in [t_1,t]$ such that
\begin{eqnarray*}
\left|\mbox{\rm Im} \frac{\gamma'(t)}{\gamma(t)-\gamma(t_1)}\right|&=&
\left|\mbox{\rm Im}
\frac{\gamma'(t)}{\gamma'(t)(t-t_1)+\gamma''(\xi)(t-t_1)^2/2}\right|=\\
&=&\left|\mbox{\rm Im}\frac
1{t-t_1+\frac{\gamma''(\xi)}{2\gamma'(t)}(t-t_1)^2}\right|\;.
\end{eqnarray*}
For $\xi,t\in [t_1 , t_1+\varepsilon]$,
$\frac{\gamma''(\xi)}{2\gamma'(t)}=:c(t)$ can be estimated
uniformly and is thus of class $O(1)$. Now for
$t-t_1=:\delta\searrow 0$ we observe
\[
\mbox{\rm Im}\frac 1{\delta+c\delta^2}=\frac{\delta^2 \mbox{\rm Im} \bar
c}{|\delta+c\delta^2|^2}=\frac{-\mbox{\rm Im} c}{|1+c\delta|^2}=O(1)\;.
\]
It remains to show that the limits $\lim_{x\to\infty}\int^x$ are
finite: for the first two cases we have (with $\gamma(t_1)=:a$)
\begin{eqnarray*}
\mbox{\rm Im} \frac{\gamma'(t)}{\gamma(t)-a}&=&
\mbox{\rm Im}\frac{1+O(t^{-1})}{t+O(1)}=\mbox{\rm Im}\frac{(1+O(t^{-1}))\overline{(t+O(1))}}{|t+O(1)|^2}=\\
&=&\frac{\mbox{\rm Im}(t+O(1))}{|t+O(1)|^2}=O(t^{-2})\;,
\end{eqnarray*}
and for the last case we estimate $|\mbox{\rm Im}
(\gamma''(t)/\gamma'(t))|\le |\gamma''(t)/\gamma'(t)|=O(1/t^2)$.
\hfill $\Box$ \par\medskip
\end{proof}
\begin{lemma}[The Variation Number of a Half Line]\label{halfline}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent For the curve $\ell:{\mbox{\bbf R}}^+\to{\mbox{\bbf C}}$, $\ell(s)=\lambda s$
($\lambda\in{\mbox{\bbf C}}^*$) we have for every $a\not\in \lambda{\mbox{\bbf R}}^+_0$
\[
\alpha(\ell,a)=\frac{|\arg(a/\lambda)-\pi|}{2\pi}\;,
\]
if the argument is chosen in the interval $[0,2\pi]$.
\hfill $\Box$ \par\medskip
\end{lemma}
The proof is left to the reader; the key observation is that as $s$ increases,
$\arg(\ell(s)-a)$ moves monotonically from $\arg(-a)$ to $\arg(\lambda)$. The
following lemma gives a very useful connection between the variation number of a
curve and that of its derivative.
\begin{lemma}[Variation Numbers of a Curve and its Derivative]\label{BoundVar}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent Let $\gamma_0:(t_0,\infty)\to{\mbox{\bbf C}}$ be an admissible
curve, and $\gamma:=\gamma_0|_{(t_1,\infty)}$ its restriction for
some $t_1>t_0$. Then for every $a\not\in\gamma_0([t_1,\infty))$
\[
\alpha(\gamma,a)\le \alpha(\gamma',0)+1/2\ .
\]
\end{lemma}
\begin{figure}
\begin{picture}(0,0)%
\includegraphics{sigma.pstex}%
\end{picture}%
\setlength{\unitlength}{2486sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(7447,4704)(1071,-5698)
\put(1351,-1186){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\sigma_t$}%
}}}
\put(3586,-1711){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\gamma(t)$}%
}}}
\put(2656,-4681){\makebox(0,0)[lb]{\smash{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\gamma$}%
}}}
\end{picture}
\caption{The linear continuation as defined in the proof of Lemma
\ref{BoundVar}.} \label{FigWinding}
\end{figure}
\begin{proof}
For every $t\ge t_1$ define the curve $\sigma_t:{\mbox{\bbf R}}\to{\mbox{\bbf C}}$ to be the
linear continuation of $\gamma$ at $t$, i.e.
$\sigma_t(s):=\gamma(s)$ if $s\ge t$ and
$\sigma_t(s):=\gamma(t)+(s-t)\gamma'(t)$ if $s\le t$, see Figure
\ref{FigWinding}. Consider the following two functions
$u,v:(t_1,\infty)\to{\mbox{\bbf R}}$:
\begin{eqnarray*}
u(t)&=&\frac1{2\pi}\int^\infty_t |d\arg\gamma'| \qquad \mbox{and} \\
v(t)&=& \frac1{2\pi}\int^\infty_{-\infty} |d\arg(\sigma_t-a)|\;.
\end{eqnarray*}
Note that the form $d\arg$ is defined everywhere, although $\arg$
may not be.
Lemma \ref{halfline} (generalized to the degenerate case, which we
define via $\alpha(\ell,\ell(s)):=1/2$) gives us
\begin{eqnarray*}
\frac d{d t} \int_{-\infty}^t |d\arg(\sigma_t(s)-a)| &=& 2\pi
\frac d{dt} \alpha\bigl(\sigma_t|_{(-\infty,t)},a\bigr)=\frac
d{dt}\left|\arg\left(\frac{a-\gamma(t)}{\gamma'(t)}\right)-\pi\right|=\\
&=&\rho(t)\frac d{d
t}\left(\rule{0pt}{13pt}\arg(a-\gamma(t))-\arg(\gamma'(t))\right)\;,
\end{eqnarray*}
with $\rho(t):=1$ (or $\rho(t):=-1$) if $a$ is on the left (or
right) side of the oriented line $\gamma(t)+\gamma'(t){\mbox{\bbf R}}$. We have
a change of sign whenever $a\in\sigma_t(-\infty,t)$: in that case
$a\in \gamma(t)+\gamma'(t){\mbox{\bbf R}}$, and the derivative vanishes.
Differentiating under the integral (which is allowed since the
integrands are differentiable and all the integrals exist) shows
if $a\not\in\gamma(t)+\gamma'(t){\mbox{\bbf R}}$
\[
\frac{d}{d t}\int^\infty_t |d\arg(\gamma(s)-a)|\stackrel{(*)}{=}
-\rho(t)\frac{d}{d t} \arg(\gamma(t)-a)= -\rho(t)\frac{d}{d t}
\arg(a-\gamma(t))\;.
\]
Step (*) can be seen like this: the derivative of the integral is
negative, so that the sign of the factor in front of the
parentheses is positive (negative) if $\arg(\gamma(t)-a)$ is
decreasing (increasing) in $t$, and this is exactly the case if
$a$ is on the right (left) side of $\gamma(t)+\gamma'(t){\mbox{\bbf R}}$. If
$a\in\gamma(t)+\gamma'(t){\mbox{\bbf R}}$, the value of $\rho$ is not important
for us, and we have $\arg(a-\gamma(t))-\arg(\gamma'(t))=0$.
Therefore
\begin{eqnarray*}
v'(t)&=&\frac{\rho(t)}{2\pi}\frac d{d
t}\left(\rule{0pt}{13pt}\arg(a-\gamma(t))-\arg(\gamma'(t))\right)
-\frac{\rho(t)}{2\pi}\left(\frac{d}{d t} \arg(a-\gamma(t))\right)=\\
&=&\frac{-\rho(t)}{2\pi} \left(\frac{d}{d
t}\arg(\gamma'(t))\right)
\qquad\mbox{and}\\
u'(t)&=&-\frac1{2\pi} \left|\frac{d}{d t}\arg(\gamma'(t))\right|
\;.
\end{eqnarray*}
Hence $u$ and $v$ are continuously differentiable with $v'(t)=\pm
u'(t)$ everywhere. Since we have $v(\infty)=1/2=u(\infty)+1/2$,
this yields $v(t_1)\le u(t_1)+1/2$. Together with
$\gamma([t_1,\infty))\subset\sigma_{t_1}({\mbox{\bbf R}})$, we therefore get
$\alpha(\gamma,a)\le v(t_1) \le u(t_1)+1/2 =
\alpha(\gamma',0)+1/2$. \hfill $\Box$ \par\medskip
\end{proof}
\begin{proposition}[Variation Numbers and Pullbacks]
\label{windnr_pullback} \rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent
Let $\gamma_0:(t_0,\infty)\to{\mbox{\bbf C}}$ be an admissible curve
such that $\alpha((\gamma_0)',0)\le 1/2$. For every $n\in{\mbox{\bbf N}}$
choose $a_n\in{\mbox{\bbf C}}\setminus \overline{\gamma_n(t_0,\infty)}$ and
$\gamma_{n+1}:(t_0,\infty)\to{\mbox{\bbf C}}$ such that
$\exp(\gamma_{n+1}(t))=\gamma_n(t)-a_n$. Then
\begin{eqnarray*}
&\forall n\in{\mbox{\bbf N}}\mbox{, }\forall t_1>t_0\mbox{, }\forall b\not\in
\overline{\gamma_n(t_1,\infty)}:\\
& \alpha(\gamma_n|_{(t_1,\infty)},b)\le 2^n\quad\mbox{and}\quad
\alpha((\gamma_n)'|_{(t_1,\infty)},0)\le 2^n-1/2\ .
\end{eqnarray*}
\end{proposition}
\begin{proof}
It follows from the calculations below that all the variation
numbers are indeed defined. The case $n=0$ follows from Lemma
\ref{BoundVar}. Let $a$, $\gamma$, $\tilde\gamma$ denote $a_n$,
$\gamma_n$, $\gamma_{n+1}=\log(\gamma_n-a)$ respectively. We
estimate
\begin{eqnarray*}
\alpha({\tilde
\gamma}',0)&=&\alpha\left(\frac{\gamma'}{\gamma-a},0\right)=
\frac1{2\pi}\int\left| d\left(\mbox{\rm Im}\log\left(\frac{\gamma'}{\gamma-a}\right)\right)\right|=\\
&=& \frac1{2\pi}\int\left|\rule{0pt}{13pt}d(\mbox{\rm Im}\log \gamma'-\mbox{\rm Im}\log(\gamma-a))\right|= \\
&=& \frac1{2\pi}\int\left|\mbox{\rm Im}\left(\frac{\gamma''}{\gamma'}
\right)-\mbox{\rm Im}\left(\frac{\gamma'}{\gamma-a}
\right)dt\right| \le \\
&\le& \frac1{2\pi}\int\left|\mbox{\rm Im}\left(\frac{\gamma''}{\gamma'}
\right)dt\right|
+\frac1{2\pi}\int\left|\mbox{\rm Im}\left(\frac{\gamma'}{\gamma-a} \right)dt\right|=\\
&=& \alpha(\gamma',0)+\alpha(\gamma,a)\stackrel{\mbox{\scriptsize
Lemma }\ref{BoundVar}}{\le}
2\alpha(\gamma',0)+1/2\le\\
&\le& 2(2^n-1/2)+1/2=2^{n+1}-1/2\ .
\end{eqnarray*}
Lemma \ref{BoundVar} thus gives us
$\alpha(\gamma_n,a)\le\alpha((\gamma_n)',0)+1/2\le 2^n$ for all
$n\in{\mbox{\bbf N}}$. \hfill $\Box$ \par\medskip
\end{proof}
\subsection{The Bound on Parameter Rays}
In this final subsection, we will complete the proof of
Proposition~\ref{parabound}. It may be helpful to outline the general line of
argument before going into details. We want to construct parameter rays
$G_{{\underline s}}\colon(t_{{\underline s}},\infty)\to{\mbox{\bbf C}}$; we know these exist as curves for sufficiently
large potentials (Proposition~\ref{parrayend}). The danger is that there is a
$\tilde t>t_{{\underline s}}$ such that as $t\searrow\tilde t$, $G_{{\underline s}}(t)\to\infty$.
We will of course use our estimates on dynamic rays (Theorem~\ref{dynrays}). The
problem is that these estimates depend on $\kappa$ and become worse as
$\kappa\to\infty$. Rescue comes from $\infty$ in a different way: for a given
parameter $\kappa=G_{{\underline s}}(t)$ with $t>t_{{\underline s}}$, one needs to iterate the dynamic ray
$g_{{\underline s}}^{\kappa}((t,\infty))$ only a finite number of times until the iterated image ray is
almost horizontal: if $N$ is this number of iterations, then the variation number of
$g_{{\underline s}}^{\kappa}((t,\infty))$ will be bounded by $2^N$ (Proposition~\ref{windnr_dynrays}).
This induces a partition of the dynamical plane with horizontal uncertainty of
approximately $2^N$ (Lemma~\ref{bound_critorb}). But if now $\mbox{\rm Re}(\kappa)$ is too
large, then the imaginary bounds imply that the singular orbit must escape very
fast (Proposition~\ref{behavsing}), which means it must have large potential $t$.
For bounded potential $t$, this yields an upper bound for
$\mbox{\rm Re}(\kappa)$.
The success of this argument depends on the fact that as $|\kappa|$ increases, the
number $N$ of necessary iterations grows extremely slowly in $|\kappa|$, much
slower than the errors arising from the growth of $|\kappa|$ itself.
\begin{proposition}[Variation Number of Dynamic Rays]
\label{windnr_dynrays} \rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent
Consider an arbitrary parameter $\kappa$ and an
external address ${\underline s}\in\mathcal S_0$. Let $z=g_{{\underline s}}^{\kappa}(t_0)$ be any
point on the dynamic ray of address ${\underline s}$ with potential
$t_0>t_{{\underline s}}$.
Suppose $N\in{\mbox{\bbf N}}$ is such that for all $t\ge F^{\circ N}(t_0)$,
\[
\left|\frac{g''_{\kappa,\sigma^{N}{\underline s}}(t)}
{g'_{\kappa,\sigma^{N}{\underline s}}(t)}\right|
<
e^{-t/2}
\;.
\]
Then
\[
\alpha\left(g_{{\underline s}}^{\kappa}|_{(t_0,\infty)},z\right)\le 2^N\ .
\]
\end{proposition}
\begin{proof}
On ray tails, we have bounds on the rays (Theorem \ref{dynrays} (\ref{defined})),
their first derivatives (Proposition~\ref{ddtdynray}) and their second derivatives
(Proposition~\ref{d2dtdynray}), so ray tails (and hence entire dynamic rays) are
admissible curves.
Specifically, the curve
$\gamma:=g_{\sigma^{N}{\underline s}}^{\kappa}:(F^{\circ N}(t_0),\infty)\to{\mbox{\bbf C}}$ satisfies
\begin{eqnarray*}
\alpha(\gamma',0)
&=&
\frac1{2\pi}\int_{F^{\circ N}(t_0)}^\infty
\left|\mbox{\rm Im}\left(\frac{\gamma''(t)}{\gamma'(t)}\right)\right|dt \le
\frac1{2\pi}\int_{F^{\circ N}(t_0)}^\infty
\left|\frac{g''_{\kappa,\sigma^{N}{\underline s}}(t)}
{g'_{\kappa,\sigma^{N}{\underline s}}(t)}\right|dt <
\\
&<&
\frac1{2\pi}\int_{F^{\circ N}(t_0)}^\infty e^{-t/2} dt\le
\pi^{-1} e^{-F^{\circ N}(t_0)/2} < 1/2 \,\,.
\end{eqnarray*}
If we define $\gamma_0:=\gamma$ and
$\gamma_{k+1}:=L_{\kappa,s_{N-k}}(\gamma_k)$, then
$$\gamma_N=g_{{\underline s}}^{\kappa}\left.\left(F^{\circ (-N)}(t)\right)\right|_{t>F^{\circ
N}(t_0)}\;,$$and applying Proposition \ref{windnr_pullback}
settles the claim. \hfill $\Box$ \par\medskip
\end{proof}
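The elementary estimate closing the display in this proof can be checked numerically. The following sketch (purely illustrative, not part of the proof) approximates $\frac1{2\pi}\int_T^\infty e^{-t/2}\,dt$ by a Riemann sum and confirms that it equals $e^{-T/2}/\pi$, which is smaller than $1/2$ for every $T\ge 0$:

```python
import math

def tail_integral(T, dt=1e-3, t_max=60.0):
    # Left Riemann-sum approximation of (1/(2*pi)) * integral_T^infinity e^(-t/2) dt;
    # the tail beyond t_max is negligible (of order e^(-t_max/2)).
    s, t = 0.0, T
    while t < t_max:
        s += math.exp(-t / 2.0) * dt
        t += dt
    return s / (2.0 * math.pi)

for T in (0.0, 1.0, 5.0):
    closed_form = math.exp(-T / 2.0) / math.pi   # the antiderivative evaluated at T
    assert abs(tail_integral(T) - closed_form) < 1e-3
    assert closed_form < 0.5                     # the bound used in the proof
```

Since $e^{-T/2}\le 1$ and $1/\pi<1/2$, the bound holds uniformly in the potential $T$, which is what makes the variation estimate independent of where the ray tail starts.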
\begin{lemma}[Bounding the Imaginary Parts]
\label{bound_critorb} \rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent
Suppose ${\underline s}\in\mathcal S_0$ and suppose $\kappa$ is a parameter
such that $g_{{\underline s}}^{\kappa}(t_0)=0$ for some potential
$t_0>t_{{\underline s}}$. Let $N$ be as in Proposition~\ref{windnr_dynrays}.
Then the singular orbit $(z_k)_{k\ge 1}=(E_{\kappa}^{\circ (k-1)}(0))_{k\ge 1}$
satisfies for all $k\ge 1$
\begin{equation}\label{Imzk_estim}
|\mbox{\rm Im}(z_k)|\le 2\pi(2^N+1+|s_k|)+|\mbox{\rm Im}\kappa|\;.
\end{equation}
Furthermore,
\[
|\mbox{\rm Im}\kappa|\le 2\pi(2^N+1+|s_1|)\;.
\]
\end{lemma}
\begin{proof}
The inverse images of the ray $g_{{\underline s}}^{\kappa}|_{(t_0,\infty)}$
provide a \emph{dynamic partition} of the plane, as opposed to the
\emph{static partition} introduced in Section \ref{sec_dynrays}.
Note that dynamic rays cannot intersect the boundary of the
dynamic partition, so that each ray $g_{{\underline s}'}^{\kappa}$ has to be
contained in one of the two components which are asymptotic to the line
$t-\kappa+2\pi i s_1'$ for large $t$. By Proposition
\ref{windnr_dynrays}, the vertical variation of any boundary component
of the dynamic partition is bounded by $2\pi\cdot 2^N$; since the
boundaries are a vertical distance $2\pi$ apart, (\ref{Imzk_estim})
follows.
The additional inequality follows similarly: the strip of the
dynamic partition containing $0$ is asymptotic to the line
$t-\kappa+2\pi is_1$, so that the bound on the vertical variation
within the strip yields $|-\mbox{\rm Im}\kappa+2\pi s_1|\le 2\pi (2^N+1)$,
and the triangle inequality gives the desired estimate. \hfill $\Box$ \par\medskip
\end{proof}
\hide{
The following lemma extends the estimates from the previous lemma. It says that
large winding numbers $n_{{\underline s}}(K,t)$ occur only when $K=|\kappa|$ is
exponentially large.
\begin{lemma}[Bounding the Imaginary Parts, II]
\label{bound_im} \rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent
For every ${\underline s}\in\mathcal S_0$, there is a monotonically decreasing function
$M_{{\underline s}}:(t_{{\underline s}},\infty)\to[6,\infty)$ such that for all $t>t_{{\underline s}}$ and $K\ge
M_{{\underline s}}(t)$, the number $n_{{\underline s}}(K,t)$ from Corollary~\ref{Cor:SecondDerivative}
satisfies
\begin{eqnarray}
2\pi( 2^{n_{{\underline s}}(K,t)}+1+|s_1|) &<& K/3<2K/3-2 \\
2\pi( 2^{n_{{\underline s}}(K,t)}+1+|s_k|) &<& F^{\circ (k-1)}(K/2-2)-2K
\hide{
2\pi( 2^{n_{{\underline s}}(K,t)}+1+|s_k|)<\left\{\begin{array}{ll}
K/3<2K/3-2 & \mbox{if }k=1 \\
F^{\circ (k-1)}(K/2-2)-2K & \mbox{if }k\ge 2 \\
\end{array}\right.\;.\nonumber
}
\end{eqnarray}
for $k\geq 2$.
\end{lemma}
\begin{proof}
Recall that for all $n\in {\mbox{\bbf N}}$, $|s_{n+1}|<F^{\circ n}(t_{{\underline s}}^*)$.
Let $T(K,{\underline s})=At_{{\underline s}}^*+B\log K+C$ be a lower bound of
potentials which satisfy the estimates in Proposition~\ref{d2dtdynray}.
Now there is an $M\geq 6$ so that all $K\geq M$ satisfy
\begin{eqnarray}
2\pi (B\log K+C+1+t_{{\underline s}}^*)&<&K/3\;; \label{K2}\\
t_{{\underline s}}^*&<&K/2-3-\log (2\pi)\;; \label{K3}\\
2\pi(B\log K+C+1)+F(K/2-3)&<&F(K/2-2)-2K\;. \label{K4}
\end{eqnarray}
So far, these estimates depend only on ${\underline s}$, but not on $t$.
There is an $N\geq n_{{\underline s}}(K,t)$ large enough so that for all $n\geq N$
\begin{equation}
2^n+At_{\sigma^{n-1}{\underline s}}^*\le F^{\circ (n-1)}(t)
\,\,.
\label{2^N_est}
\end{equation}
Set $M_{{\underline s}}(t):=\max\{M,F^{\circ (N-1)}(t)\}$.
Then if $K\ge M_{{\underline s}}(t)$, we have
\[
2^N \leq F^{\circ (N-1)}(t) - At_{\sigma^{N-1}{\underline s}}^*< B\log K+C
\]
We conclude, using (\ref{K2})
\[
2\pi (2^N+1+|s_1|)\le 2\pi (B\log K+C+1+t_{{\underline s}}^*)<K/3<2K/3-2\;.
\]
Furthermore, for $k\ge 2$,
\begin{eqnarray*}
2\pi(2^N+1+|s_k|)&\le& 2\pi (B\log K+C+1+F^{\circ
(k-1)}(t_{{\underline s}}^*))\le \\
&\stackrel{(\ref{K3})}{\le}&2\pi (B\log K+C+1)+F^{\circ
(k-1)}(K/2-3)<\\
&\stackrel{(\ref{K4})}{<}& F^{\circ (k-1)}(K/2-2)-2K\;.
\end{eqnarray*}
\vspace{-1cm}
\hfill $\Box$ \par\medskip
\end{proof}
}
\begin{proposition}[The Behavior of the Singular Orbit]\label{behavsing}
\rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent Let $\kappa\in I$ be a parameter such that
$\mbox{\rm Re}\kappa\ge 4$ and $|\mbox{\rm Im}\kappa|\le \mbox{\rm Re} \kappa-2$. Let
$K:=|\kappa|$ and let $(z_k)_{k\ge
1}:=(E_{\kappa}^{\circ(k-1)}(0))_{k\ge 1}$ be the singular orbit
and suppose that for every $k\ge 2$
\begin{eqnarray*}
|\mbox{\rm Im}(z_k)| &<& F^{\circ(k-1)}(\mbox{\rm Re}\kappa-2)-K\;. \\
\mbox{Then}\quad \forall k\ge 1 \quad |\mbox{\rm Re}(z_k)|&\ge& F^{\circ
(k-1)}(\mbox{\rm Re}\kappa-1)-K\;.
\end{eqnarray*}
\end{proposition}
\begin{proof}
Let $(B_k), (C_k)$ denote the statements
\begin{eqnarray*}
|\mbox{\rm Re}(z_k)|\ge F^{\circ (k-1)}(\mbox{\rm Re}\kappa-1)-K && (B_k)\;,\\
\mbox{\rm Re}(z_k)\ge -\mbox{\rm Re}\kappa && (C_k)\;.
\end{eqnarray*}
The induction seeds $(B_1):\ 0\ge \mbox{\rm Re}\kappa -1-K$ and $(C_1): 0\ge
-\mbox{\rm Re}\kappa$ are trivial. The induction steps follow immediately
from \cite{rs1}, Lemmas 4.2 and 4.3, which say, translated (and
weakened) from the $\exp(z)+\kappa$ into the $\exp(z+\kappa)$
parametrization: if $\kappa$ is a parameter as specified above,
and if there is a $k\ge 2$ such that $(C_{k'})$ holds for all $1\le
k'\le k-1$, then $(B_k)$ and $(C_k)$ hold (since $\kappa$ does not
admit an attracting orbit).\hfill $\Box$ \par\medskip
\end{proof}
\par\medskip \noindent {\sc Remark. }
While the complete proof of the preceding proposition needs the detailed arguments
from \cite{rs1}, the idea is simple: if the real part of $z_k$ is large and
positive, then $|z_{k+1}|=|E_\kappa(z_k)|$ is exponentially large. If the
imaginary parts of the orbit are bounded, then $|\mbox{\rm Re}(z_{k+1})|$ must be almost as
large as $|z_{k+1}|$. If the real part is negative, then $z_{k+2}$ is extremely
close to the origin, and there is an attracting orbit of period at most $k+2$.
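The mechanism described in this remark can be illustrated numerically. The sketch below iterates the singular orbit for the sample parameter $\kappa=5$ in the $\exp(z+\kappa)$ parametrization used above and checks the lower bound of the proposition for the first few orbit points; it assumes $F(t)=e^t-1$ for the growth function of potentials, and is an illustration only, not part of the argument:

```python
import cmath
import math

def E(z, kappa):
    # One step of the exponential map in the exp(z + kappa) parametrization.
    return cmath.exp(z + kappa)

def F(t):
    # Growth function of potentials; F(t) = e^t - 1 is assumed here.
    return math.exp(t) - 1.0

kappa = 5.0 + 0.0j   # satisfies Re(kappa) >= 4 and |Im(kappa)| <= Re(kappa) - 2
K = abs(kappa)

orbit = [0.0 + 0.0j]  # z_1 = 0 is the singular value
for _ in range(2):    # only a few steps: the real parts soon exceed float range
    orbit.append(E(orbit[-1], kappa))

# Lower bounds F^{o(k-1)}(Re kappa - 1) - K from the proposition, k = 1, 2, 3.
bounds = [kappa.real - 1.0]
for _ in range(len(orbit) - 1):
    bounds.append(F(bounds[-1]))

for z, b in zip(orbit, bounds):
    assert abs(z.real) >= b - K
```

Already at the third orbit point the real part is of order $e^{e^5}$, which shows concretely why a bounded potential $t$ forces an upper bound on $\mbox{\rm Re}(\kappa)$.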
\hide{
\begin{proposition}[Bound on Singular Value]
\label{Prop:BoundSingValue} \rule{0pt}{0pt}\nopagebreak\par\nopagebreak\noindent
Suppose that $\kappa\in \mathcal G_{{\underline s}}(t)$ with ${\underline s}\in \mathcal S_0$ and $t>t_{\underline s}$. Then
$|\kappa|<R$ for a constant $R$ depending only on ${\underline s}$ and $t$; for sufficiently
large $t$, we have $|\kappa|<2t$.
\end{proposition}
\par\medskip\noindent {\sc Proof. }
}
\proofof{Proposition \ref{parabound}}
It suffices to consider the case $|\kappa| > e$. Set $K:=|\kappa|$. By
Proposition~\ref{d2dtdynray}, there are universal constants $A,B,C\geq 1$ such
that
\[
\left|\frac{(g_{\sigma^n({\underline s})}^\kappa)''(t')}{(g_{\sigma^n({\underline s})}^\kappa)'(t')}\right|
<e^{-t'/2}<1
\]
for all $t'\geq t$ provided $t'\ge At_{\sigma^n({\underline s})}^*+B\log K+C$.
There is an $N_0\in{\mbox{\bbf N}}$ so that $F^{\circ N_0}(t)\ge At_{\sigma^{N_0}({\underline s})}^*+C+1$.
Let $N_1\in{\mbox{\bbf N}}$ be minimal with $F^{\circ N_1}(1)\ge B\log K>1$. Then $N_1\ge 1$
and $F^{\circ (N_1-1)}(1) < B\log K$.
By Lemma~\ref{Lem:BoundInitialSteps}, we have for all $n\ge N_0+N_1=: N$
\[
F^{\circ n}(t)\ge At_{\sigma^n({\underline s})}^*+B\log K+C \;.
\]
There is a constant $c_1>0$ such that all $n\in{\mbox{\bbf N}}$ satisfy $2^n\le c_1 F^{\circ
(n-1)}(1)$.
By Lemma~\ref{bound_critorb}, we have the estimates
\begin{eqnarray*}
|\mbox{\rm Im}\kappa|
&\le&
2\pi(2^N+1+|s_1|)
\le
2\pi(2^{N_0}c_1 F^{\circ(N_1-1)}(1)+1+|s_1|)
\\
&\le& 2\pi(2^{N_0}c_1B\log K+1+|s_1|)
= c_2+c_32^{N_0}\log K
\end{eqnarray*}
with constants $c_2,c_3>0$ depending only on $|s_1|$, and also
\begin{equation}
|\mbox{\rm Im} z_k|\le 2\pi(2^N+1+|s_k|)+|\mbox{\rm Im}\kappa| \;.
\label{Eq:ImagPartsOrbit_z_k}
\end{equation}
There is an $M>0$ so that if $K\ge M$, then
\begin{eqnarray}
K &\ge& 2c_2+2c_32^{N_0}\log K+2
\label{Eq:K-bound 1}
\\
K & \ge&
2\pi t_{{\underline s}}^*+(2\pi2^{N_0}c_1B+c_32^{N_0}+1)\log K+2\pi+4+c_2
\label{Eq:K-bound 2}
\;.
\end{eqnarray}
Now suppose $K\ge M$. This implies
\[
|\mbox{\rm Re}\kappa|\ge K - |\mbox{\rm Im}\kappa|
\ge K-c_2-c_32^{N_0}\log K
\ge c_2+c_32^{N_0}\log K+2
\ge |\mbox{\rm Im}\kappa|+2
\]
and
\begin{eqnarray*}
|\mbox{\rm Re}(\kappa)|-2
&\ge&
K - |\mbox{\rm Im}(\kappa)| -2
\\
&\ge&
2\pi t_{{\underline s}}^*+2\pi2^{N_0}c_1B\log K + \log K+2\pi+ 2
\\
&\ge&
2\pi t_{{\underline s}}^*+2\pi2^{N_0}c_1 F^{\circ(N_1-1)}(1) +2\pi+ F^{-1}(2K)
\\
&\ge&
2\pi t_{{\underline s}}^*+2\pi2^{N_0}2^{N_1} +2\pi+ F^{-1}(2K).
\end{eqnarray*}
Since parameters with $\mbox{\rm Re}(\kappa)<-1$ are known to be attracting, we conclude
that $\mbox{\rm Re}(\kappa)\ge 2\pi+2$. We obtain for $k\geq 2$ (using convexity of $F$)
\begin{eqnarray*}
F^{\circ(k-1)}(\mbox{\rm Re}\kappa-2)
&\ge&
F^{\circ(k-1)}(2\pi t_{\underline s}^*+2\pi 2^N+2\pi+F^{-1}(2K))
\\
& > &
2\pi F^{\circ(k-1)}(t_{\underline s}^*)+2\pi 2^N+2\pi+2K
\\
&\ge&
2\pi t_{\sigma^{k-1}({\underline s})}^*+2\pi 2^N+2\pi+2K
\\
&\ge&
2\pi(|s_k|+ 2^N+1)+2K
\ge
|\mbox{\rm Im} z_k|+K
\;.
\end{eqnarray*}
By Proposition~\ref{behavsing}, it follows for all $k\geq 2$
\[
F^{\circ(k-1)}(\mbox{\rm Re}\kappa-1)-K\le\mbox{\rm Re} z_k\stackrel{\mbox{\scriptsize
(\ref{lowpotasymp}) in Thm \ref{dynrays}}}{\le}
F^{\circ(k-1)}(t)-\mbox{\rm Re} \kappa+O(e^{-F^{\circ(k-1)}(t)})\;.
\]
Comparing the growth of the left and the right hand sides as
$k\to\infty$, we conclude $\mbox{\rm Re} \kappa\le t+1$. The triangle inequality thus yields
$K\le |\mbox{\rm Re}\kappa|+|\mbox{\rm Im}\kappa|< 2|\mbox{\rm Re}\kappa|-2\le 2t$.
Every fixed choice of $t>t_{\underline s}$ yields a fixed value of $N_0$ and thus a fixed
value of $M$; and clearly we can choose $M$ so that it depends continuously on
$t$. Then
\[
K\le\max\{2t,M\}
\hide{
K\le\max\left\{2t,2\pi t_{{\underline s}}^*+(2\pi2^{N_0}c_1B+c_32^{N_0}+1)\log K+2\pi+4+2c_2
\right\}
}
\;.
\]
Note that as $t$ increases, $N_0$ and hence $M$ do not increase, while
$c_1,c_2,c_3$ are independent of $t$. Therefore, for large $t$ we have the bound
$K\le 2t$.
\hfill $\Box$ \par\medskip
\hide{
\[
|\mbox{\rm Re}\kappa|> 2^N+t_{\underline s}^*+\dots
\;.
\]
Since $|s_k|\le t^*_{\sigma^k({\underline s})}\le F^{\circ(k-1)}(t^*_{\underline s})$, there is a $c_4$
such that if $\mbox{\rm Re}\kappa>2^N+t_{\underline s}^*+c_4$, then
\[
F^{\circ(k-1)}(\mbox{\rm Re}\kappa-2)-K\ge 2\pi(2^N+1+|s_k|)+|\mbox{\rm Im}\kappa|
\]
for all $k$.
But in this case, Proposition~\ref{} implies that $|\mbox{\rm Re} z_n|\ge
F^{\circ(n-1)}(\mbox{\rm Re}\kappa-1)-K$, and together with the asymptotics of rays this
implies that $t\ge\mbox{\rm Re}\kappa-1$, a contradiction.
}
\hide{
\proofof{Proposition \ref{parabound}} We will show that
\[
\xi_{{\underline s}}(t):=\max\{2t,\;3M_{{\underline s}}(t)/2\}
\]
is a valid choice. Consider a parameter $\kappa\in\mathcal G_{{\underline s}}(t)$ with
$|\kappa|>\frac 32M_{{\underline s}}(t)$, where $M_{{\underline s}}\ge 6$ is from
Lemma~\ref{bound_im}. We will show that $|\kappa|\leq 2t$.
Set $K:=|\kappa|$. By Lemma \ref{bound_critorb} and Lemma \ref{bound_im},
\[
|\mbox{\rm Im} \kappa|\le 2\pi\left(2^{n(K,t)}+1+|s_1|\right) <K/3< 2K/3-2
\;,
\]
so that $|\mbox{\rm Re}\kappa|>2K/3>K/3+2>|\mbox{\rm Im}\kappa|+2$. Since parameters $\kappa$ with
$\mbox{\rm Re}\kappa<-1$ are known to be attracting, it follows that
$\mbox{\rm Re}\kappa>|\mbox{\rm Im}\kappa|+2$ and $\mbox{\rm Re}\kappa>4$.
Our next claim is
\begin{eqnarray*}
|\mbox{\rm Im} z_k|\le 2\pi\left(2^{n(K,t)}+1+|s_k|\right)+|\mbox{\rm Im}\kappa| \le
F^{\circ(k-1)}(\mbox{\rm Re}\kappa-2)-K\;:
\end{eqnarray*}
indeed, the first inequality is Lemma~\ref{bound_critorb}, and the second is
Lemma~\ref{bound_im}, using $\mbox{\rm Re}\kappa>K/2$.
Now Proposition~\ref{behavsing} yields $\mbox{\rm Re} z_k \ge F^{\circ(k-1)}(\mbox{\rm Re}\kappa-1)-K$
for all $k\ge 2$. Hence
\[
F^{\circ(k-1)}(\mbox{\rm Re}\kappa-1)-K\le\mbox{\rm Re} z_k\stackrel{\mbox{\scriptsize
(\ref{lowpotasymp}) in Thm \ref{dynrays}}}{\le}
F^{\circ(k-1)}(t)-\mbox{\rm Re} \kappa+O(e^{-F^{\circ(k-1)}(t)})\;.
\]
By comparing the growth of the left and the right hand sides as
$k\to\infty$, we conclude $\mbox{\rm Re} \kappa\le t+1$. The triangle inequality thus yields
$K\le |\mbox{\rm Re}\kappa|+|\mbox{\rm Im}\kappa|< (t+1)+(t+1-2)=2t$.
The additional statement follows immediately, because $M_{{\underline s}}(t)$
is monotonically decreasing. \hfill $\Box$ \par\medskip
}
\section{Introduction}
\label{introduction}
Centuries of misuse of natural resources have stressed available freshwater supplies throughout the world.
With the rapid development of industry,
chemical waste has been discharged deliberately into water to the point of making it difficult to clean.
In particular,
the direct or indirect discharge of heavy metals into the environment has increased recently,
especially in developing countries~\cite{ko-jhm2017}.
Unlike organic contaminants,
heavy metals are not biodegradable and tend to accumulate in living organisms.
Many heavy metal ions are known also to be toxic or carcinogenic~\cite{gumpu-sab2015}.
Toxic heavy metals of particular concern in treatment of industrial waste-water
include zinc, copper, iron, mercury, cadmium, lead and chromium.
As a result,
filtration processes that can recover freshwater from contaminated water, brackish water or seawater
are an effective way to increase the potable water supply.
Modern desalination is mainly based on reverse osmosis (RO)
performed through membranes,
due to their low energy consumption and
easy operation.
Current RO plants already operate near the thermodynamic limit,
with the applied pressure being only 10 to 20\% higher than the osmotic pressure of the concentrate~\cite{li-des2017}.
Meanwhile,
advances in nanotechnology have inspired the design of novel membranes based on two-dimensional (2D) nanomaterials.
Nanopores with diameters ranging from a few Angstroms to several nanometers
can be drilled in membranes to fabricate molecular sieves~\cite{wang-nn2017}.
As the diameter of the pore approaches the size of the hydrated ions,
various types of ions can be rejected by nanoporous membranes leading to efficient water desalination.
Graphene,
a single-atom-thick carbon membrane was demonstrated to have several orders of magnitude higher flux rates when
compared with conventional zeolite membranes~\cite{celebi-science2014}.
In this way, graphene and graphene oxide are among the most
prominent materials for highly efficient membranes~\cite{Xu15, Kemp13, huang-jpcl2015}.
More recently,
other 2D materials have also been investigated for water filtration.
A nanoporous single-layer of molybdenum disulfide (MoS$_2$)
has shown great desalination capacity~\cite{kou-pccp2016,weifeng-acsnano2016,aluru-nc2015}.
The possibility to craft the pore edge with Mo, S or both provides
flexibility to design the nanopore with desired functionality.
In the same way, boron nitride nanosheets have also been investigated
for water purification from distinct pollutants~\cite{Lei13, Azamat15}.
Therefore, not only the nanopore size matters for water cleaning
purposes, but also the hydrophobicity and geometry of the pore.
For instance,
the performance of commercial RO membranes
is usually on the order of 0.1 L/cm$^{2}\cdot$day$\cdot$MPa
(1.18 g/m$^{2}\cdot$s$\cdot$atm)~\cite{pendergast-ees2011}.
With the aid of zeolite nanosheets,
permeability as high as 1.3 L/cm$^{2}\cdot$day$\cdot$MPa
can be obtained~\cite{jamali-jpcc2017}.
Recent studies have shown that MoS$_2$ nanopore filters
have potential to achieve a water permeability of roughly
100 g/m$^{2}\cdot$s$\cdot$atm~\cite{weifeng-acsnano2016} --
2 orders of magnitude higher than the commercial RO.
This is comparable with that measured experimentally for the graphene filter
($\sim$70 g/m$^{2}\cdot$s$\cdot$atm)
under similar conditions~\cite{surwade-nn2015}.
These results have shown that the
water permeability scales linearly with the pore density.
Therefore, the water filtering
performance of 2D nanopores can be even higher.
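The unit equivalence quoted above (0.1 L/cm$^{2}\cdot$day$\cdot$MPa $\approx$ 1.18 g/m$^{2}\cdot$s$\cdot$atm) can be verified with a short conversion. The sketch below is a back-of-the-envelope check, taking the density of water as 1 g/mL:

```python
def to_g_m2_s_atm(p_in_L_cm2_day_MPa):
    # Convert a permeability from L/(cm^2 day MPa) to g/(m^2 s atm),
    # assuming a water density of 1 g/mL (1 L = 1000 g).
    grams_per_L = 1000.0
    cm2_per_m2 = 1.0e4
    s_per_day = 86400.0
    MPa_per_atm = 0.101325
    return p_in_L_cm2_day_MPa * grams_per_L * cm2_per_m2 / s_per_day * MPa_per_atm

# 0.1 L/(cm^2 day MPa) -> about 1.17 g/(m^2 s atm), matching the quoted 1.18.
assert abs(to_g_m2_s_atm(0.1) - 1.18) < 0.02
```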
\hl{
Controlling the size and shape of the pores created in these membranes,
however, represents a huge experimental challenge.
Inspired by a number of molecular dynamics studies
predicting ultrahigh water permeability across graphene and others 2D nanoporous membranes}~\cite{tanugi-nl2012,aluru-nc2015},
\hl{technologies have been developed to create and control the nanopore size and distribution.
Methods including electron beam}~\cite{garaj-nature2010},
\hl{ion irradiation}~\cite{yoon-acsnano2016}
\hl{and chemical etching}~\cite{ohern-nl2015}
\hl{have been reported to introduce pores in graphene.
J. Feng et al.}~\cite{feng-nl2015}
\hl{have also developed a scalable method to controllably make nanopores
in single-layer MoS$_2$ with subnanometer precision using electrochemical reaction (ECR).
Recently,
K. Liu and colleagues}~\cite{liu-nl2017}
\hl{investigated the geometrical effect of the nanopore shape on ionic blockage induced by DNA translocation
through h-BN and MoS$_2$ nanopores.
They observed a geometry-dependent ion scattering effect,
and further proposed a modified ionic blockage model
which is highly related to the ionic profile caused by geometrical variations.
Additionally,
recent experimental efforts have been devoted
to amplify the filtering efficiency of the nanoporous membranes.
Z. Wang and colleagues}~\cite{wang-nl2017}
\hl{mechanistically related the
performance of MoS$_2$ membranes to the size of their
nanochannels in different hydration states.
They attributed the high water flux
(30-250 L/m$^{2}\cdot$h$\cdot$bar)
of MoS$_2$ membranes to the low hydraulic
resistance of the smooth, rigid MoS$_2$ nanochannels.
The membrane compaction
with high pressure have also been found to create a neatly stacked nanostructure
with minimum voids,
leading to stable water flux and enhanced separation performance.
By tuning the pore creation process,
D. Jang et al.}~\cite{jang-acsnano2017}
\hl{have demonstrated nanofiltration membranes that reject small molecules but offer high permeance to water or monovalent ions.
Also, studies have shown how defects, oxidation and functionalization can affect the ionic blockage}~\cite{Achtyl15, Levita16, Jijo17}.
\hl{All of these studies
point to a near future where 2D membranes will have a major impact on desalination processes.
}
In this work, we address the issue of the selectivity of
the pores. In order to do that,
we compare the water filtration capacity of MoS$_2$ and graphene through molecular dynamics simulations.
While graphene is a purely hydrophobic material, MoS$_2$ sheets have
both hydrophobic (S) and hydrophilic (Mo) sites. Recent studies have shown that
the water dynamics and structure inside hydrophobic or hydrophilic
pores can be quite distinct
regarding the pore size~\cite{Mosko14, kohler-pccp2017, bordin-PhysA17}
and even near hydrophobic or hydrophilic protein sites~\cite{mateus_protein}.
Three cations are considered:
the standard monovalent sodium (Na$^+$),
the divalent zinc (Zn$^{2+}$)
and trivalent iron (Fe$^{3+}$).
The study of sodium removal is relevant due to its applications
in water desalination~\cite{Corry08, Das14, Mah15}.
Zinc is a trace element that is essential for human health.
It is important for the physiological functions of living tissue
and regulates many biochemical processes.
However,
excess of zinc can cause eminent health problems~\cite{fu-jem2011}.
The cation Zn$^{2+}$ is ranked 75th in the
{\it Comprehensive Environmental Response,
Compensation and Liability Act} (CERCLA)
2017 priority list of hazardous substances.
In its trivalent form,
ferric chloride Fe$^{3+}$Cl$_3^-$ is a natural flocculant,
with high power of aggregation.
It is also on the CERCLA list with
recommended limit concentration of 0.3 mg/l.
In this way,
we explore the water permeation and cation rejection
of nanopores with distinct radii. Our results
show that the hydrophilic/hydrophobic MoS$_2$
nanopore has a higher salt rejection in all scenarios,
while the purely hydrophobic graphene has
a higher water permeation.
Notably, MoS$_2$ membranes show the impressive capacity
of blocking all the trivalent iron cations regardless of the
nanopore size.
Our paper is organized as follows. In Section~\ref{methods} we introduce our model and
the details of the simulation method. In Section~\ref{water-results} we show and discuss our
results for the water permeation through the distinct membranes, while
in Section~\ref{ion-results} we show the ion rejection properties for each case.
Finally, a summary of our results and the conclusions are presented in Section~\ref{conclusions}.
\section{Computational Details and Methods}
\label{methods}
Molecular dynamics (MD) simulations were performed using
the LAMMPS package~\cite{plimpton1995}.
A typical simulation box consists of a graphene sheet
acting as a rigid piston in order to apply an external force (pressure)
over the ionic solution.
The pressure gradient forces the solution against the 2D nanopore:
a single-layer of molybdenum disulfide or graphene.
Figure~\ref{fig1}
shows the schematic representation of the simulation framework.
\begin{figure}[t!]
\centering
\includegraphics[width=12.5cm]{fig1.eps}
\caption{(a) Schematic representation of the simulation framework.
The system is divided as follows:
On the left side we can see the piston (graphene) pressing the ionic
solution (in this case, water+NaCl) against the MoS$_2$ nanopore.
For the case of a graphene nanopore the depiction is the same,
but with a porous graphene sheet instead of the MoS$_2$ sheet.
On the right side we have bulk water.
(b) Definition of the pore diameter $d$.
}
\label{fig1}
\end{figure}
A nanopore was drilled in both MoS$_2$ and graphene sheets by removing the desired atoms,
as shown in Figure~\ref{fig1}.
The accessible pore diameters considered in this work range from
0.26 - 0.95 nm for the MoS$_2$
\hl{(which means a pore area ranging from 5.5 - 71 {\AA}$^2$)}
and 0.17 - 0.92 nm for the graphene
\hl{(with area ranging from 2.5 - 67 {\AA}$^2$)}.
\hl{M. Heiranian et al.}~\cite{aluru-nc2015}
\hl{have studied different MoS$_2$ nanopore's composition for water filtration:
with only Mo, only S and a mix of the two atoms at the pore's edge.
They found similar ion rejection rates for both cases.
Here, in order to account for circular nanopores,
mixed pore edges have been chosen.}
The system contains 22000 atoms distributed in a box with dimensions $5\times 5 \times 13$ nm in x, y and z, respectively.
Although the usual salinity of seawater is $\sim0.6$M,
we choose a molarity of $\sim1.0$M
for all the cations (Na$^{+}$, Zn$^{2+}$ and Fe$^{3+}$)
due to the computational cost associated with low-molarity solutions.
The TIP4P/2005~\cite{abascal-jcp2005} water model was used
and the SHAKE algorithm~\cite{ryckaert1977} was employed to maintain
the rigidity of the water molecules.
The non-bonded interactions are described by the Lennard-Jones (LJ) potential
with a cutoff distance of 0.1 nm and the parameters tabulated in Table 1.
The Lorentz-Berthelot mixing rule was used to obtain the LJ parameters for different atomic species.
The long-range electrostatic interactions were calculated by the {\it Particle Particle Particle Mesh} method~\cite{hockney1981}.
Periodic boundary conditions were applied in all three directions.
\begin{table}
\centering
\setlength{\arrayrulewidth}{0.3mm}
\renewcommand{\arraystretch}{1.3}
\caption{The Lennard-Jones parameters and charges of the simulated atoms.
The crossed parameters were obtained by Lorentz-Berthelot rule.
}
\vspace{0.1cm}
\label{t1}
\begin{tabular}{llll}
\hline
Interaction & $\sigma$ (nm) & $\varepsilon$ (kcal/mol) & Charge \\
\hline
C$-$C~\cite{farimani-jpcb2011} & 3.39 & 0.0692 & 0.00 \\
\hline
Mo$-$Mo~\cite{liang-prb2009} & 4.20 & 0.0135 & 0.60 \\
\hline
S$-$S~\cite{liang-prb2009} & 3.13 & 0.4612 & -0.30 \\
\hline
O$-$O~\cite{abascal-jcp2005} & 3.1589 & 0.1852 & -1.1128 \\
\hline
H$-$H & 0.00 & 0.00 & 0.5564 \\
\hline
Na$-$Na~\cite{raul-jpcb2016} & 2.52 & 0.0347 & 1.00 \\
\hline
Cl$-$Cl~\cite{raul-jpcb2016} & 3.85 & 0.3824 & -1.00 \\
\hline
Zn$-$Zn~\cite{hinkle-jced2016} & 0.0125 & 1.960 & 2.00 \\
\hline
Fe$-$Fe~\cite{hinkle-jced2016} & 0.18 & 0.745 & 3.00 \\
\hline
\end{tabular}
\end{table}
For each simulation,
the system was first equilibrated in the constant number of
particles, pressure and temperature (NPT) ensemble
for 1 ns at P = 1 atm and T = 300 K.
Graphene and MoS$_2$ atoms were held fixed in space during
equilibration, and the NPT simulations allow the water
to reach its equilibrium density (1 g/cm$^3$).
After the pressure equilibration, a 5 ns simulation in the constant number
of particles, volume and temperature
(NVT) ensemble was performed to further equilibrate the system at the same T = 300 K.
Finally, a 10 ns production run was carried out, also in the NVT ensemble.
The Nos\'e-Hoover thermostat~\cite{nose1984,hoover1985} was used with a
damping time of 0.1 ps in
both NPT and NVT simulations, and the Nos\'e-Hoover barostat was used
to keep the pressure constant
in the NPT simulations.
Different external pressures were applied on the rigid piston
to characterize the water filtration through the 2D (graphene and MoS$_2$)
nanopores.
For simplicity, the pores were held fixed in space to study solely the
water transport
and ion rejection properties of these materials.
The external pressures range from 10 to 100 MPa.
These are higher than the osmotic pressures used in experiments.
The reason for applying such high pressures in MD simulations with
running times on the nanosecond scale is that lower pressures would yield
a very low water flux that would not rise above the statistical error.
We carried out three independent simulations for each system,
collecting the trajectories of the atoms every picosecond.
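From such trajectories, the water flux can be extracted by counting net permeation events through the membrane plane. The following is a minimal sketch of such a counter (the frame layout and the membrane plane at z = 0 are illustrative assumptions, not the actual analysis script of this work):

```python
def net_crossings(z_frames, z_membrane=0.0):
    # z_frames: list of frames; each frame is a list with the z coordinate of
    # every water molecule. Returns the net number of feed-to-permeate
    # crossings of the membrane plane at z = z_membrane.
    net = 0
    for prev, curr in zip(z_frames, z_frames[1:]):
        for z0, z1 in zip(prev, curr):
            if z0 < z_membrane <= z1:
                net += 1        # forward permeation event
            elif z1 < z_membrane <= z0:
                net -= 1        # back-flow event
    return net

# Two molecules over four frames: the first permeates, the second stays.
frames = [[-1.0, 2.0], [-0.5, 1.5], [0.5, 1.0], [1.0, 0.5]]
assert net_crossings(frames) == 1
```

Dividing the net count by the sampled time interval gives the flux in molecules per nanosecond, the quantity reported in the next section.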
\section{Water flux}
\label{water-results}
\begin{figure}[t!]
\centering
\includegraphics[width=15.5cm]{fig2.eps}
\caption{Water flux as a function of the applied pressure for MoS$_2$ and graphene nanopores with similar pore areas.
(a) monovalent Na$^+$,
(b) divalent Zn$^{2+}$
and (c) trivalent Fe$^{3+}$ cations are considered
for the ionic solution at the reservoir.
(d) Water permeability through the pores as function of the pore
diameter for the
case of $\Delta$P = 50 MPa.
The dotted lines are a guide to the eye.}
\label{fig2}
\end{figure}
First, let us compare the flux performance of the graphene
and the MoS$_2$ membranes.
In Figure~\ref{fig2}, we show the water flux through the 2D nanopores
(MoS$_2$ and graphene), in number of molecules per nanosecond,
as a function of the applied pressure gradient
for different pore diameters.
The water is filtered from a reservoir containing an ionic solution of either
monovalent sodium (Na$^+$),
divalent zinc (Zn$^{2+}$) or trivalent iron cations (Fe$^{3+}$).
In all cases, chlorine (Cl$^-$) was used as the standard anion.
Four pore sizes for each material were investigated.
Our results indicate that for the smallest pore diameter,
the black points in Figure~\ref{fig2}, both materials have the
same water permeation. However, for the other values of
pore diameter the graphene membrane shows a higher
water flux for all applied pressure gradients.
While the flux through the purely hydrophobic graphene pore
at a fixed pressure increases monotonically with
the pore diameter, this is not the case for the
MoS$_2$ pore, for which the flux shows a minimum around a pore
diameter of $0.37$ nm, probably due to the non-uniform distribution
of the hydrophobic and hydrophilic sites of the pore.
Figures~\ref{fig2}(a), (b) and (c) show that this
behavior of the water flux
is not affected by the cation valence,
only by the applied pressure, by geometric effects and by the pore
composition.
For instance,
the 0.46 nm graphene pore shows enhanced water flux compatible with
the 0.6 nm MoS$_2$ pore for all cations.
Therefore, it is clear that pore composition affects the
water permeation properties more than the water-ion interaction.
This result agrees with the findings of Aluru and his
group~\cite{aluru-nc2015},
where they showed that even a small change in pore composition
can lead to enhanced water flux through a MoS$_2$ nanocavity.
This is also consistent with our recent findings that the dynamics of water
inside nanopores with diameter $\approx$ 1.0 nm is strongly affected
by the presence of hydrophilic or hydrophobic sites~\cite{kohler-pccp2017}.
This investigation, over distinct cation valences and
membranes, highlights the importance of the nanopore physical-chemistry properties
for water filtration processes.
To quantify the water permeability through the pores,
we compute the permeability coefficient,
$p$,
across the pore.
For dilute solutions
\begin{eqnarray}
\label{eq1}
p=\frac{j_{\mathrm{w}}}{ -V_{\mathrm{w}}\Delta C_{\mathrm{s}} +\frac{V_{\mathrm{W}}}{N_{A}k_{\mathrm{B}}T}\hspace{0.1cm} \Delta P }
\end{eqnarray}
where $j_{\mathrm{w}}$ is the flux of water (H$_2$O/ns),
$V_{\mathrm{w}}$ is the molar volume of water (19 ml/mol),
$\Delta C_{\mathrm{s}}$ is the concentration gradient of the solute (1.0 M),
$N_{A}$ is the Avogadro number,
$k_{\mathrm{B}}$ is the Boltzmann constant,
$T$ is the temperature (300 K)
and $\Delta$P is the applied hydrodynamic pressure (MPa).
The case of $\Delta$P = 50 MPa is shown in Figure~\ref{fig2}(d).
The permeability coefficient of the MoS$_2$ pore
ranges from approximately 33 to 55 H$_2$O/ns for the 0.26 and 0.95 nm
diameters, respectively.
The graphene nanopore presents a permeability coefficient
of $\sim$ 34 - 63 H$_2$O/ns as the pore diameter is varied from 0.17 to
0.92 nm,
respectively.
For the smaller pores the difference
between MoS$_2$ and graphene is within the error bars,
whereas for the larger pores
both materials exhibit high permeability rates,
with a slight advantage in the case of graphene.
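As an illustration of Eq.~(\ref{eq1}), the sketch below evaluates $p$ for $\Delta$P = 50 MPa with the quantities listed above; the flux value of 15 H$_2$O/ns is a placeholder chosen for illustration, not a simulation result:

```python
N_A = 6.02214076e23   # Avogadro number, 1/mol
k_B = 1.380649e-23    # Boltzmann constant, J/K

def permeability(j_w, dP, dC=1000.0, V_w=19e-6, T=300.0):
    # Eq. (1): p = j_w / (-V_w*dC + V_w*dP / (N_A*k_B*T)).
    # j_w in H2O/ns, dP in Pa, dC in mol/m^3, V_w in m^3/mol, T in K.
    return j_w / (-V_w * dC + V_w * dP / (N_A * k_B * T))

# Placeholder flux of 15 H2O/ns at dP = 50 MPa and dC = 1.0 M:
p = permeability(15.0, 50e6)   # roughly 41 H2O/ns
```

With these numbers the osmotic term $V_{\mathrm{w}}\Delta C_{\mathrm{s}}$ is only about 5\% of the pressure term, so $p$ is dominated by the applied pressure.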
\begin{figure}[t!]
\centering
\includegraphics[width=14cm]{fig3.eps}
\caption{Averaged axial distribution of water molecules inside the (a)
graphene (Gra) and (b) MoS$_2$ nanopores with distinct diameters.
Here, z = 0 is at the center of the pore, the external
pressure is $\Delta$P = 10 MPa and the cation is the Na$^+$.}
\label{fig3}
\end{figure}
The water structure and dynamics inside nanopores are
strongly related~\cite{kohler-pccp2017, Bordin13a}.
Therefore,
distinct structural regimes can lead to different diffusive behaviors.
In Figure~\ref{fig3}
we present the distribution of water molecules in the z-direction
inside the MoS$_2$ (solid line) and graphene (dotted line) nanopores.
As for the water flux, the axial water distribution is not
affected by the cation
valence. Therefore, for simplicity, and since there are
more studies of monovalent salts,
we show only the Na$^+$ case.
The nanopore length in the z-direction,
considering the van der Waals diameter of each sheet,
is 0.63 (-0.315 to 0.315) nm for the MoS$_2$
and 0.34 (-0.17 to 0.17) nm for the graphene.
The structures inside the two pores are considerably different.
For the graphene nanopore, shown in Figure~\ref{fig3}(a),
there are no favorable positions for the water molecules to
remain throughout the simulation.
This can be related to the hydrophobic character
of the graphene sheet and the
high slippage observed for water inside carbon
nanopores~\cite{Falk10, Tocci14}.
Since the entire pore is hydrophobic, there is no
preferable position for the water molecules, and the permeability
is higher.
On the other hand,
along the MoS$_2$ cavity we observe strong
structuring into three sharp peaks,
as shown in Figure~\ref{fig3}(b).
This structuring comes from the existence of hydrophilic (Mo) and hydrophobic
(S) sites.
This layered organization within the MoS$_2$
nanopore can be linked to the reduced flux
compared with graphene,
since it implies an additional term in the energy
required for the water molecule to pass through the pore.
The higher water flux through graphene nanopores
compared with MoS$_2$ implies that, for a desired water flux,
a smaller applied pressure is needed for graphene.
Nevertheless,
it is important to note that both fluxes are high,
especially when compared with current desalination
technologies~\cite{aluru-nc2015,azamat-cms2017}.
Therefore,
both materials are capable of providing a high water permeability.
The question is whether these materials are also able to effectively clean the water by removing the ions.
\section{Ion rejection efficiency}
\label{ion-results}
The other important aspect for
the cleaning of water is the membrane's ability to separate
water and ions.
In this way, we investigate how the cation valence
and the pore size affect the percentage of rejected ions.
In Figure~\ref{fig4} we show the
percentage of total ions rejected by the 2D nanopores
as a function of the applied pressure for the three cations.
The pore diameters are the same as those discussed in the previous section.
The ion rejection by the smallest pores,
0.17 and 0.26 nm for graphene and MoS$_2$, respectively,
was 100\% for all applied pressures and cation solutions.
This is expected since the pore size is much smaller than the
hydration radii of the cations. Therefore, it is more energetically
favorable for the cation to remain in the bulk solution
instead of stripping off the water and entering the pore~\cite{Bordin12a}.
As the pore diameter increases this energetic penalty becomes smaller.
Likewise, the valence plays a crucial role here, with
the monovalent ions having a smaller
penalty than the divalent and trivalent cations.
In this way, for the nanopores with diameters 0.37 nm and 0.46 nm
for graphene and MoS$_2$, respectively,
Na$^+$ and Cl$^-$ ions flow through the pore, reducing the rejection efficiency of both materials,
as we can see in Figure~\ref{fig4}(a).
However, it is important to note that the ion rejection
performance of molybdenum disulfide membranes
is superior to that observed for graphene membranes
for all ranges of pressure, sizes and cation valences.
For instance, for the divalent case Zn$^{2+}$, shown in
Figure~\ref{fig4}(b), at the smallest $\Delta P$
the rejection is 100\% for all pore sizes in the MoS$_2$ membrane,
while for the graphene membrane we observe cation permeation
for the bigger pores.
\begin{figure}[t!]
\centering
\includegraphics[width=15.5cm]{fig4.eps}
\caption{Percentage of ion rejection by various pores as a function of the applied pressure.
Pores with different diameters are considered.
}
\label{fig4}
\end{figure}
The MoS$_2$ membrane shows a very good performance for the rejection
of the trivalent cation Fe$^{3+}$. As
Figure~\ref{fig4}(c) shows, for all nanopore sizes
and applied pressures the rejection is 100\%. Such efficiency was not observed
in the graphene membranes, where only the case with the smallest pore diameter has
100\% iron rejection.
Here, we should stress that not only the
hydration shell plays an important role
in cation rejection.
While sodium chloride is uniformly dispersed in water and
we do not observe clusters at the simulated concentration,
the iron cations tend to form large clusters of ferric chlorides in solution,
as shown in Figure~\ref{fig5}.
Moreover,
we observe these structures throughout the whole simulation,
and even in the high pressure regime the clusters remain too large to
pass through the pore.
In fact,
ferric chlorides are effective as primary coagulants
due to their associative character in solution.
At controlled concentrations, they are excellent for both drinking and wastewater treatment applications,
including phosphorus removal~\cite{kim-ee2015},
sludge conditioning and struvite control~\cite{amuda-jhm2007,sun-wr2015}.
They also prevent odor and corrosion by controlling hydrogen sulfide formation.
Additionally, our results indicate that the associative properties of
ferric chlorides
can be used to increase the efficiency of salt
rejection by both MoS$_2$ and graphene nanopores,
which may contribute to water cleaning devices.
\begin{figure}[t!]
\centering
\includegraphics[width=10cm,trim={0 0 0 11cm},clip]{fig5.eps}
\caption{Side and front view snapshots of (a) Fe$^{3+}$Cl$^-$ cluster formation preventing the ion passage through a 0.95 nm MoS$_2$ nanopore,
and (b) monovalent Na$^+$Cl$^-$ passing through the same nanopore without clusterization for an external applied pressure of 50 MPa.
}
\label{fig5}
\end{figure}
\section{Summary and conclusions}
\label{conclusions}
We have calculated water fluxes through various MoS$_2$ and graphene nanopores
and the respective percentage of total ions rejected by both materials
as a function of the applied pressure gradient.
Our results indicate that 2D nanoporous membranes
are promising for water purification and salt rejection.
The selectivity of the membranes was found to depend on factors such as
the pore diameter,
the cationic valence
and the applied pressure. Nevertheless, our results show that
the ion valence does not affect the water permeation -- this is
only affected by the pore size and chemical composition.
Particularly,
our findings indicate that graphene is a better water conductor than MoS$_2$,
with a higher permeability coefficient.
Both materials, however, presented high water fluxes.
On the other hand,
MoS$_2$ nanopores with water accessible pore diameters
ranging from 0.26 to 0.95 nm strongly reject ions
even at pressures as high as 100 MPa.
Additionally,
the rejection is shown to depend strongly on the ion valence.
It reaches 100\% for trivalent ferric chloride (Fe$^{3+}$Cl$_{3}^-$)
for all MoS$_2$ pore sizes and applied pressures.
This is a direct result
of the ability of heavy metals to form agglomerates,
eventually exhibiting long ionic chains.
At the same time, this did not affect the water
flux. Thus, the ferric chloride
properties can be used to improve the effectiveness
of 2D-material-based nanofilters. New studies are being performed
in this direction.
\begin{acknowledgements}
We thank the Brazilian agencies CNPq and INCT-FCx for the financial support,
CENAPAD/SP and CESUP/UFRGS for the computer time.
\end{acknowledgements}
\section{Supplementary Information}
\subsection{Relativistic Hamiltonian approximations}
Our work is based on the 4-component electronic Dirac-Coulomb Hamiltonian which in atomic units is given as
\begin{equation}
\hat{H} = \sum_{i=1}^N \left [c(\boldsymbol\alpha_i \cdot \mathbf{p}_i) + \beta^{\prime}_i m c^2 - \phi_{nuc} \right]
+ \sum_{i<j} \frac{1}{r_{ij}} +V_{NN}.
\end{equation}
\noindent
We work within the Born-Oppenheimer clamped-nuclei approximation, which allows us to factor out the time dependence of the one-electron
problem in the nuclear frame. The one-electron operator of the electronic Hamiltonian is accordingly given by the Dirac Hamiltonian
in the electrostatic potential $\phi_{nuc}$ of the clamped nuclei. The relativistic energy scale has been aligned with the non-relativistic one by subtraction of the electron rest mass.
The full Lorentz-invariant two-electron interaction cannot be written
in a simple closed form, so approximation, and thus loss of strict Lorentz invariance, is in practice unavoidable.
In Coulomb gauge the zeroth-order $\mathcal{O}(c^{0})$ operator is given by the Coulomb term employed here.
The resulting Hamiltonian covers the major part of the spin-orbit interaction, including the two-electron spin-same-orbit term, as well as scalar relativistic effects. Experience
shows that the Coulomb term is sufficient for most chemical purposes \cite{visser:hyd4}, but for highly accurate molecular spectra the Breit (Gaunt) term, carrying
the spin-other-orbit interaction, is recommended.
A fundamental conceptual problem is that the Dirac-Coulomb(-Breit) Hamiltonian has no bound solutions
due to the one-electron negative-energy continuum solutions generated by the Dirac Hamiltonian \cite{brown:ravenhall}. We adopt the no-pair approximation (NPA),
widely used in relativistic quantum chemistry \cite{dyall}, in which the N-particle basis of Slater determinants is constructed from positive-energy bispinors only. This procedure in fact neglects all QED effects, but it is justifiable at the energy scale relevant to chemistry. In particular, the Born-Oppenheimer approximation is expected to have larger impact than the neglect of QED effects.
We finally note that the Fock space approach to include positronic states within the Dirac-Coulomb(-Breit) Hamiltonian approximation \cite{saue:wilson,Kutzelnigg_CP2011} should be tractable on a quantum computer as well, since the direct mapping (including qubits for positrons) covers the whole Fock space generated by a finite basis set.
For further discussion of the Dirac-Coulomb approximation and how to possibly go beyond it the reader may consult Refs.\cite{saue:wilson,saue:hamprimer,Kutzelnigg_CP2011,liu:PCCP2012,derezinski2012}.
\subsection{Size of 4c relativistic FCI eigenvalue problem}
In this section, we compare dimensions of non-relativistic and 4c relativistic Hamiltonian matrices. In the NR case, the Hamiltonian matrix is block diagonal according to $M_{S}$. Thus for a closed shell system with $n$ electrons in $m$ orbitals, the number of determinants is
\begin{equation}
N_{\rm{NR}} = \left( \begin{array}{c} m \\ n/2 \end{array} \right)^2 .
\end{equation}
\noindent
The relativistic Hamiltonian mixes determinants with different $M_{K}$ values and therefore
\begin{equation}
N_{\rm{R}} = \left( \begin{array}{c} 2m \\ n \end{array} \right).
\end{equation}
\noindent
Using Stirling's approximation in the form
\begin{equation}
\mathrm{ln}~m! \approx \frac{1}{2} \mathrm{ln}~(2\pi m) + m\mathrm{ln}~m - m \qquad \mathrm{for}~m\rightarrow \infty,
\end{equation}
\noindent
and setting $m = k \cdot n$, the ratio between the relativistic and non-relativistic number of determinants is given by the expression
\begin{equation}
k_{\rm{R}/\rm{NR}} = \frac{N_{\rm{R}}}{N_{\rm{NR}}} = \Bigg( \frac{\sqrt{\pi (2k - 1)}}{2k}\Bigg) \cdot m^{1/2}.
\end{equation}
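A short numerical check (a Python sketch with illustrative values of $n$ and $m$) confirms that the exact ratio of determinant counts is well reproduced by this asymptotic estimate:

```python
from math import comb, pi, sqrt

def exact_ratio(n, m):
    # N_R / N_NR = C(2m, n) / C(m, n/2)^2
    return comb(2 * m, n) / comb(m, n // 2) ** 2

def stirling_ratio(n, m):
    # sqrt(pi*(2k - 1)) / (2k) * m^(1/2), with m = k*n
    k = m / n
    return sqrt(pi * (2 * k - 1)) / (2 * k) * sqrt(m)

n, m = 20, 60   # e.g. 20 electrons in 60 Kramers pairs (k = 3)
print(exact_ratio(n, m), stirling_ratio(n, m))
```

For these values the two numbers agree to within about one percent.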
\subsection{Controlled-U circuit design}
In this section, we construct a quantum circuit which corresponds to the controlled action of powers of $U=e^{i\tau\hat{H}}$ (see Figure 1 of the paper) for a CI space of dimension 3. For this case, we need two qubits to encode the quantum chemical wave function, and $U$ has a block diagonal structure with a $3 \times 3$ block containing the exponential of the Hamiltonian and unity on the diagonal to complete the vector space of two qubits.
We use the \textit{Quantum Shannon Decomposition} technique of Shende et al. \cite{shende_2006}. It turns out to be very useful to generalize the concept of controlled gates to quantum multiplexors. A quantum multiplexor is a quantum conditional which acts on the target qubit(s) in a different way, according to the state of the select qubit(s). If the select qubit is the most significant one, then it has the following matrix form
\hskip 1cm
\begin{minipage}{0.08\textwidth}
\begin{center}
\vskip 0.3cm
\mbox{
\xymatrix @*=<0em> @C=0.8em @R=0.8em {
& \ctrlm{1} & \qw \\
& \gate{U} & \qw
}
}
\end{center}
\end{minipage}
\hskip -1.2cm
\begin{minipage}{0.39\textwidth}
\begin{equation}
= \qquad \left( \begin{array}{cc} U_0 & ~ \\ ~ & U_1 \end{array} \right).
\end{equation}
\end{minipage}
\vskip 0.4cm
\noindent
It performs $U_0$ on the target qubit if the select qubit is $\ket{0}$ and $U_1$ if the select qubit is $\ket{1}$. A controlled gate is a special case where $U_0 = I$. More generally, if $U$ is a quantum multiplexor with $s$ select qubits and $t$ target qubits and the select qubits are most significant, the matrix of $U$ will be block diagonal, with $2^s$ blocks of size $2^{t} \times 2^{t}$.
A controlled 2-qubit $U$ (c-$U_{2q}$) is a special case of multiplexed $U$ and can be decomposed in the following way \cite{shende_2006}
\begin{minipage}{0.10\textwidth}
\begin{center}
\vskip 0.70cm
\mbox{
\xymatrix @*=<0em> @C=0.8em @R=0.8em {
& \ctrl{1} & \qw \\
& \multigate{1}{U} & \qw \\
& \ghost{U} & \qw
}
}
\end{center}
\end{minipage}
\begin{minipage}{0.08\textwidth}
\vskip 0.6cm
\begin{center} = \end{center}
\end{minipage}
\hskip -1.3cm
\begin{minipage}{0.35\textwidth}
\begin{equation}
\xymatrix @*=<0em> @C=0.8em @R=0.5em {
& \qw & \gate{R_z} & \qw & \qw \\
& \multigate{1}{W} & \ctrlm{-1} & \multigate{1}{V} & \qw \\
& \ghost{W} & \ctrlm{-1} & \ghost{V} & \qw
}
\label{cu}
\end{equation}
\end{minipage}
\vskip 0.4cm
\noindent
A multiplexed $z$-rotation in the middle of the circuit on the right-hand side (at this stage without angle specification) is in fact a diagonal matrix with the second half of the diagonal equal to the Hermitian conjugate of the first one.
The circuit equation (\ref{cu}) corresponds to the matrix equation
\begin{equation}
\left( \begin{array}{cc} I & ~ \\ ~ & U \end{array} \right) = \left( \begin{array}{cc} V & ~ \\ ~ & V \end{array} \right) \left( \begin{array}{cc} D & ~ \\ ~ & D^{\dagger} \end{array} \right) \left( \begin{array}{cc} W & ~ \\ ~ & W \end{array} \right).
\end{equation}
\noindent
Note that right in the equation means left in the circuit, as time in a circuit flows from left to right.
We then have
\begin{eqnarray}
\label{w}
I & = & V D W, \\
U & = & V D^{\dagger} W, \\
\label{u_diag}
U^{\dagger} & = & V D^{2} V^{\dagger}.
\end{eqnarray}
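Relations (\ref{w})--(\ref{u_diag}) are easily verified numerically. The NumPy sketch below uses a hypothetical $4\times4$ real symmetric Hamiltonian (a $3\times3$ CI block padded with a zero eigenvalue, mimicking the structure described above) and checks all three identities:

```python
import numpy as np

# hypothetical real symmetric Hamiltonian: 3x3 CI block padded with a zero eigenvalue
H = np.zeros((4, 4))
H[:3, :3] = [[0.20, 0.05, 0.00],
             [0.05, 0.90, 0.10],
             [0.00, 0.10, 1.50]]
tau = 1.0

w, V = np.linalg.eigh(H)                     # V is real orthogonal, i.e. V in O(4)
U = V @ np.diag(np.exp(1j * tau * w)) @ V.T  # U = e^{i tau H}
D = np.diag(np.exp(-1j * tau * w / 2))       # diag(e^{-i phi_k / 2})
W = D.conj().T @ V.T                         # W = D^dagger V^dagger

assert np.allclose(V @ D @ W, np.eye(4))         # I = V D W
assert np.allclose(V @ D.conj().T @ W, U)        # U = V D^dagger W
assert np.allclose(U.conj().T, V @ D @ D @ V.T)  # U^dagger = V D^2 V^dagger
```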
A single-multiplexed $R_z$ gate (with angle $\phi_0$ for $\ket{0}$ state of a select qubit and $\phi_1$ for $\ket{1}$) can be implemented with the following circuit
\vskip 0.3cm
\hskip -0.5cm
\begin{minipage}{0.08\textwidth}
\mbox{
\xymatrix @*=<0em> @C=0.8em @R=0.8em {
& \ctrlm{1} & \qw \\
& \gate{R_z} & \qw
}
}
\end{minipage}
\begin{minipage}{0.03\textwidth}
=
\end{minipage}
\hskip -0.5cm
\begin{minipage}{0.40\textwidth}
\vskip -0.5cm
\begin{equation}
\xymatrix @*=<0em> @C=0.8em @R=0.5em {
& \qw & \ctrl{1} & \qw & \ctrl{1} & \qw \\
& \gate{R_{z}(\frac{\phi_{0} + \phi_{1}}{2})} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \gate{R_{z}(\frac{\phi_{0} - \phi_{1}}{2})} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw
}
\hskip 0.2cm ,
\end{equation}
\end{minipage}
\vskip 0.4cm
\noindent
since $\sigma_x$ gates on both sides of $R_z$ reverse the direction of the $R_z$ rotation. If we use this approach for demultiplexing the $R_{z}$ gate in (\ref{cu}), we end up (after some simple circuit manipulations) with the following circuit for c-$U_{2q}$
\begin{equation}
\label{circuit}
\begin{small}
\xymatrix @*=<0em> @C=0.3em @R=0.4em {
& \gate{R_z(\varphi_1)} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \gate{R_z(\varphi_2)} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \gate{R_z(\varphi_3)} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \gate{R_z(\varphi_4)} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw \\
& \multigate{1}{W} & \ctrl{-1} \qw & \qw & \qw & \qw & \ctrl{-1} & \qw & \qw & \multigate{1}{V} & \qw \\
& \ghost{W} & \qw & \qw & \ctrl{-2} & \qw & \qw & \qw & \ctrl{-2} & \ghost{V} & \qw
}
\end{small}
\end{equation}
\noindent
where
\begin{eqnarray}
\label{phi}
\varphi_1 & = & \frac{1}{4}(\phi_{00} + \phi_{01} + \phi_{10} + \phi_{11}), \\
\varphi_2 & = & \frac{1}{4}(\phi_{00} + \phi_{01} - \phi_{10} - \phi_{11}), \nonumber \\
\varphi_3 & = & \frac{1}{4}(\phi_{00} - \phi_{01} - \phi_{10} + \phi_{11}), \nonumber \\
\varphi_4 & = & \frac{1}{4}(\phi_{00} - \phi_{01} + \phi_{10} - \phi_{11}). \nonumber
\end{eqnarray}
\noindent
Individual $\phi$'s in (\ref{phi}) can be extracted from the diagonal of $D$, which has the form: diag($e^{-i\phi_{00}}$,$e^{-i\phi_{01}}$,$e^{-i\phi_{10}}$,$e^{-i\phi_{11}}$).
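The demultiplexing above relies on $\sigma_x$ conjugation reversing the sense of an $R_z$ rotation; this identity is easy to verify for the single-select case with a few lines of NumPy (a sketch, assuming the convention $R_z(\phi)=\mathrm{diag}(e^{-i\phi/2},e^{i\phi/2})$):

```python
import numpy as np

def rz(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z2 = np.zeros((2, 2))
CNOT = np.block([[I2, Z2], [Z2, X]])   # control = (most significant) select qubit

phi0, phi1 = 0.7, -1.3                 # arbitrary test angles
s, d = (phi0 + phi1) / 2, (phi0 - phi1) / 2

# the circuit reads left to right, so the matrices multiply right to left
circuit = CNOT @ np.kron(I2, rz(d)) @ CNOT @ np.kron(I2, rz(s))
multiplexed = np.block([[rz(phi0), Z2], [Z2, rz(phi1)]])
assert np.allclose(circuit, multiplexed)
```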
We would like to emphasize that this is not intended to be a decomposition technique for general $U$'s, as it itself requires classical diagonalization [of $U^{\dagger}$, see (\ref{u_diag})]. A general \textit{efficient} decomposition of an exponential of a Hamiltonian to elementary gates is known only for the direct mapping \cite{lanyon_2010, whitfield_2010}. But this mapping is not suitable for small scale experiments due to the relatively high number of required qubits and operations thereon. Our aim was in fact to prepare the ground for a first \textit{non-trivial} (more than one qubit in the quantum chemical part of the register) experimental realization of (relativistic) quantum chemical computation on a quantum computer.
Because $V$ belongs to the group \textbf{O}(4) (matrix of eigenvectors of a symmetric matrix), it can be decomposed using only two CNOT gates \cite{vatan_2004}:
\begin{equation}
\label{v_circuit}
\xymatrix @*=<0em> @C=0.8em @R=0.4em {
& \gate{S} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *=<0em>{\times} \qw & \gate{A} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \gate{S^{\dagger}} & \qw \\
& \gate{S} & \gate{H} & \ctrl{-1} & *=<0em>{\times} \qw \qwx & \gate{B} & \ctrl{-1} & \gate{H} & \gate{S^{\dagger}} & \qw \gategroup{1}{5}{2}{5}{.7em}{--}
}
\end{equation}
\vskip 0.1cm
\noindent
$H$ and $S$ are standard Hadamard and phase gates and $A$, $B$ are generic single-qubit gates that can be further decomposed e.g. by $Z$-$Y$ decomposition \cite{nielsen_chuang}
\begin{equation}
\label{zydec}
A = e^{i\alpha} R_{z}(\beta) R_{y}(\gamma) R_{z}(\delta).
\end{equation}
\noindent
There is a highlighted swap gate in (\ref{v_circuit}) which should be applied only if the determinant of $V$ is equal to $-1$ \cite{vatan_2004}.
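The $Z$-$Y$ angles in (\ref{zydec}) can be extracted classically in a few lines; the NumPy sketch below (using the standard rotation conventions of \cite{nielsen_chuang}) round-trip tests the decomposition on a random single-qubit unitary:

```python
import numpy as np

def rz(a): return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])
def ry(a): return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                            [np.sin(a / 2),  np.cos(a / 2)]])

def zyz(U):
    """Angles (alpha, beta, gamma, delta) with U = e^{i alpha} Rz(beta) Ry(gamma) Rz(delta)."""
    alpha = 0.5 * np.angle(np.linalg.det(U))
    V = U * np.exp(-1j * alpha)                         # now det V = 1
    gamma = 2 * np.arctan2(abs(V[1, 0]), abs(V[0, 0]))
    beta  = np.angle(V[1, 1]) + np.angle(V[1, 0])
    delta = np.angle(V[1, 1]) - np.angle(V[1, 0])
    return alpha, beta, gamma, delta

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
a, b, g, d = zyz(Q)
assert np.allclose(Q, np.exp(1j * a) * rz(b) @ ry(g) @ rz(d))
```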
The matrix $W$, on the other hand, is not real as it is equal to $D^{\dagger}V^{\dagger}$ (\ref{w}) and can be implemented using three CNOT gates (see e.g. \cite{vatan_2004,shende_2004}). The total count is thus 9 CNOTs.
The disadvantage of the aforementioned scheme is that $W$ must be decomposed for each power of $U$ individually. If we separate $W$ into $V^{\dagger}$ and $D^{\dagger}$, $V^{\dagger}$ is the same for all powers of $U$ (the eigenvectors do not change) and $D^{\dagger}$ can be implemented, up to a non-measurable global phase, with the following circuit
\begin{equation}
\label{d_circuit}
\xymatrix @*=<0em> @C=0.8em @R=0.4em {
& \ctrl{1} & \qw & \ctrl{1} &\qw & \gate{R_{z}(\varphi_{6})} & \qw \\
& *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \gate{R_{z}(-\frac{\varphi_{5}}{2})} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \gate{R_{z}(\frac{\varphi_{5}}{2})} & \gate{R_{z}(\varphi_{7})} & \qw
}
\end{equation}
\noindent
where
\begin{eqnarray}
\label{phi2}
\varphi_{5} & = & \frac{1}{2}(\phi_{00} - \phi_{01} - \phi_{10} + \phi_{11}), \nonumber \\
\varphi_{6} & = & \frac{1}{4}(-\phi_{00} - \phi_{01} + \phi_{10} + \phi_{11}), \\
\varphi_{7} & = & \frac{1}{2}(-\phi_{00} + \phi_{01}). \nonumber
\end{eqnarray}
\noindent
The circuit for $V^{\dagger}$ is the same as for $V$ (\ref{v_circuit}), merely $A$ is replaced by $B^{\dagger}$ and $B$ by $A^{\dagger}$.
The presented 10-CNOT circuit is universal for all powers of $U$. The only thing one has to do is to multiply the angles of the $R_{z}$ rotations in (\ref{circuit}) and (\ref{d_circuit}) according to the power of $U$, e.g. by 2 for the second power.
Table \ref{par} summarizes the circuit parameters for the ground as well as excited state calculations described in the preceding text. Notice that $\phi_{11}$ is zero in both cases by construction. To complete the vector space of two qubits, we in fact added one eigenvalue of the Hamiltonian equal to zero. Another simplification, which originates from the block diagonal structure of $U$, is that the $A$ and $B$ matrices in the decomposition of $V$ (\ref{v_circuit}) differ only by a global phase. Because the global phase is not measurable, we present just the angles of rotations. Also, only the parameters corresponding to $A$ and $B$ are shown. Going to their Hermitian conjugates means swapping $\beta$ and $\delta$ and changing the sign of all of them.
\begin{table}[t]
\begin{tabular}{c c c}
\hline
\hline
& Ground state ($0^{+}$) & Excited state (1) \\
\hline
$\phi_{00}$ & -1.01642278 & -1.00656763 \\
$\phi_{01}$ & -0.68574813 & -0.18597924 \\
$\phi_{10}$ & 0.69657237 & -0.39129153 \\
$\phi_{11}$ & 0 & 0 \\
\hline
$\beta$ & 0.73125768 & -0.00680941 \\
$\gamma$ & -0.10311594 & 2.21832498 \\
$\delta$ & -0.12107336 & -3.13494247 \\
\hline
$\Delta E_{\rm{shift}}$ & -6477.89247780 & -6477.89247780 \\
\hline
\hline
\end{tabular}
\caption{Circuit parameters: rotation angles $\phi_{ij}$, $i,j \in \{0,1\}$ (\ref{phi},\ref{phi2}), $Z$-$Y$ decomposition parameters of $A$, $B$ (\ref{v_circuit}) and energy shifts (core energy + nuclear repulsion) for CAS(4,3) calculations of $0^{+}$ and $1$ states. For the details see preceding text.}
\label{par}
\end{table}
For the excited state, the determinant of $V$ is equal to $-1$ and therefore the swap gate in (\ref{v_circuit}) should be applied. Because we took the Hamiltonian matrices from the DIRAC program \cite{dirac}, the parameters in Table \ref{par} refer to the difference between the total energy and the core energy + nuclear repulsion ($\Delta E_{\rm{shift}}$). The presented method with the parameters from Table \ref{par} implements the exponential $e^{i\tau\hat{H}}$, as was already mentioned. But in our version of the algorithm \cite{veis_2010}, we in fact need $e^{-i\tau\hat{H}}$. The obtained energy therefore corresponds to the negative of the actual energy. For this negative energy, the guesses $E_{\mathrm{max}} = 3.5$ and $E_{\mathrm{min}} = 2.0$, corresponding to the maximum and minimum expected energies, were used.
We do not give an explicit proof that the \textit{Quantum Shannon decomposition} is optimal in the number of CNOT gates for the specific case of a block diagonal c-$U_{2q}$. However, this conjecture is supported by the fact that we also implemented the Group Leaders Optimization Algorithm (GLOA) of Daskin and Kais \cite{daskin_2011} and unsuccessfully tried to find a better circuit (in terms of the number of controlled operations) with a fidelity error smaller than 0.01.
\bibliographystyle{h-physrev}
\section{Introduction}
In recent years, there have been many developments for improving the bandwidth and energy efficiency of a wireless communication network. Energy harvesting cognitive radio network (EH-CRN) is one such solution which improves the bandwidth efficiency of the network while ensuring perpetual operation of the devices at the same time \cite{EH_survey_1,EH_survey_2,EH_CRN_survey}. In a CRN, a set of unlicensed users share the spectrum allocated to licensed users in a way such that the licensed user can achieve an acceptable quality of service (QoS). The unlicensed and licensed users are also known as secondary user (SU) and primary user (PU) respectively. Depending on the way of sharing, the CRN can be classified into three categories: interweave, overlay and underlay.
In underlay EH-CRN, the SUs and PUs coexist in an interference limited scenario and may harvest energy from the environmental sources. The secondary transmitter (ST) transmits its data using the spectrum allocated to PU while keeping acceptable interference at the primary receiver (PR). EH-CRN operating in underlay mode has been studied in \cite{underlay_pappas,underlay_duan,underlay_xu_multihop,underlay_xu,underlay_kalamkar,underlay_myopic,my_spcom}. We briefly summarize the related literature and present our main contribution.
In \cite{underlay_pappas} and \cite{underlay_duan}, the authors considered an underlay EH-CRN with a multipacket reception model. The SU transmits not only when the PU is idle, it also transmits with a probability $p$ when the PU is occupying the channel. Both works studied the stable throughput region and obtained the optimal transmission probability $p$ maximizing the SU's throughput. However, \cite{underlay_pappas} considers an EH-PU whereas \cite{underlay_duan} considers two different scenarios: EH-SU and, EH-PU and EH-SU. In \cite{underlay_xu_multihop}, the authors considered an underlay EH-CRN with one PU and multiple EH-SUs. The SUs harvest RF energy from the primary's transmission and use multihop transmission along with TDMA to transmit their own data. The authors jointly optimize the time and power allocation maximizing the end-to-end throughput. In \cite{underlay_xu}, the authors considered a scenario where multiple EH-SUs communicate with an intended receiver using TDMA. The authors jointly optimize the power and time allocation which maximizes the sum rate of the SUs. In \cite{underlay_kalamkar}, a scenario is considered where a SU communicates with the receiver via multiple energy harvesting decode-and-forward relays while ensuring that the outage probability of the PU is below an acceptable threshold. The authors obtained the outage probability of the SU for the Nakagami-$m$ channel in closed form. In \cite{underlay_myopic}, the authors considered a single pair of PU and SU operating in underlay mode. The EH-SU operates in half-duplex fashion, harvesting energy from the PU's transmission for the first fraction of the slot and then transmitting its data in the remainder. The authors aim to obtain a myopic policy which optimizes the time sharing between the two phases and maximizes the sum rate of the SU under the PU's outage constraint.
In \cite{my_spcom}, the authors consider the same system model as in \cite{underlay_myopic} and maximize the sum throughput of the SU by jointly optimizing the time sharing and power allocation among the slots.
We consider an underlay EH-CRN where the PU has a reliable energy source and the SU is equipped with a rechargeable battery and harvests energy from environmental sources such as solar, vibration, RF etc. Similar to the system model presented in \cite{underlay_myopic} and \cite{my_spcom}, in our model, the SU operates in slotted half-duplex fashion, i.e., at any given time, the SU can either harvest energy from the environment or transmit its data. We consider both the energy and channel uncertainties in our model, which, to the best of our knowledge, has not been studied in the literature in the context of EH-CRNs. We model the uncertainty in the energy harvesting process as a first order stationary Markov process as in \cite{learning_theoretic}, and the estimated channel gains are assumed to have bounded uncertainty as in \cite{robust_xu,robust_zhang}. The main contributions of this paper are as follows:
\begin{itemize}
\item We propose a robust online time sharing policy taking energy arrival and channel uncertainties into consideration. We formulate the problem of maximizing secondary average sum throughput by a given time deadline (short-term throughput) subject to energy harvesting constraint of secondary transmitter (ST) and interference constraint of primary receiver (PR) as a finite horizon discrete-time Markov decision process (MDP) \cite{MDP_puterman}.
\item We solve the optimization problem using finite horizon stochastic dynamic programming (SDP) \cite{DP_bertsekas,MDP_puterman} and compared the performance of our proposed online policy with the myopic \cite{underlay_myopic} and offline policy \cite{my_spcom}.
\item In addition, we also investigate the effects of various system parameters such as different channel conditions, radius of the uncertainty region, battery capacity and interference threshold at PR on the proposed time sharing policy.
\end{itemize}
The organization of the paper is as follows. System model is presented in section II which includes the energy arrival model, battery dynamics and channel uncertainty model. Problem formulation is presented in section III. We discuss the results in section IV and finally, we conclude the paper in section V.
\textit{Notations:} A boldface symbol (e.g. $\mathbf{A}$) represents a matrix and a barred symbol (e.g. $\bar{\mathbf{a}}$) represents a vector. $\bar{\mathbf{a}}\succeq\bar{\mathbf{0}}$ means that every element $a_i$ of the vector $\bar{\mathbf{a}}$ is greater than or equal to 0.
\section{System Model}
This section presents our system model, which includes the description of underlay EH-CR system operating in slotted mode, energy arrival process, battery dynamics at the secondary transmitter (ST), and channel uncertainty model.
\vspace{-3mm}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\linewidth]{system_model}
\vspace{-1mm}
\caption{An underlay EH-CRN network}
\label{fig_system_model}
\vspace{-3mm}
\end{figure}
\subsection{Underlay EH-CRN Operating in Slotted Mode}
The underlay EH-CRN operating in slotted mode is shown in Fig.~\ref{fig_system_model}. In our model, the ST scavenges energy from the environment\footnote{As the power density of RF energy sources is too low \cite{RF_EH}, we do not consider RF energy harvesting in our work.} and stores it in a rechargeable battery of finite capacity. In Fig.~\ref{fig_system_model}, $g^i_{pp},g^i_{ps},g^i_{sp}$ and $g^i_{ss}$ represent the channel coefficients corresponding to the PT-PR, PT-SR, ST-PR and ST-SR links in the $i$th slot, respectively. Both PT and ST transmit simultaneously for $N$ slots, each of duration 1 second. In each slot, the PT uses a constant power $p_p$ for transmission. However, the secondary transmitter (ST) splits every $i$th slot into two phases: a \textit{harvesting phase} and a \textit{transmission phase} of duration $(1-\beta_i)$ and $\beta_i$ second respectively, where $0\leq\beta_i\leq1$. In the \textit{harvesting phase} of the $i$th slot, the ST harvests energy and stores it in a battery of maximum capacity $B_{max}$, and then in the \textit{transmission phase}, it transmits its data to the secondary receiver (SR) with power $p_s^i$ Watt. The ST chooses its transmission power such that it causes at most $P_{th}$ Watt of interference at the PR.
We assume all the channel coefficients $g^i_{pp},g^i_{ps},g^i_{sp}$ and $g^i_{ss}$ to be i.i.d. zero mean complex Gaussian with variances $\sigma_{pp}^2,\sigma_{ps}^2,\sigma_{sp}^2$ and $\sigma_{ss}^2$ respectively. In $i$th slot, the instantaneous achievable throughput of the ST (in bps/Hz) is given as $
R_{i}\left ( \beta _{i},p_s^{i} \right )=\beta _{i}\log_{2}\left ( 1+\frac{\left | g_{ss}^{i} \right |^{2}p_s^{i}}{\sigma_n^2+\left | g_{ps}^{i} \right |^{2}p_{p} } \right ),\, \forall i,$
where $\left | g_{ss}^{i} \right |^{2}$ and $\left | g_{ps}^{i} \right |^{2}$ are the channel power gains of the ST-SR and PT-SR links respectively, and $\sigma_n^2$ is the variance of the zero-mean additive white Gaussian noise (AWGN) at the SR.
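For concreteness, the throughput expression above can be evaluated numerically. The following sketch computes $R_i$ for given channel power gains; the parameter values are illustrative and not taken from the paper:

```python
import math

def throughput(beta_i, p_s, g_ss_sq, g_ps_sq, p_p, sigma_n_sq):
    """Instantaneous achievable throughput R_i (bps/Hz) of the ST in slot i:
    time fraction beta_i times the log of one plus the SINR at the SR."""
    sinr = (g_ss_sq * p_s) / (sigma_n_sq + g_ps_sq * p_p)
    return beta_i * math.log2(1.0 + sinr)

# Example: transmit for 40% of the slot at 1 W under primary interference.
r = throughput(beta_i=0.4, p_s=1.0, g_ss_sq=1.0, g_ps_sq=0.5, p_p=2.0,
               sigma_n_sq=0.1)
```

As expected, the rate vanishes when the ST does not transmit and grows monotonically with $p_s^i$.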
\subsection{Energy Uncertainty Model}
This subsection presents our model of energy uncertainty. We first describe the energy harvesting process and then study the battery dynamics governed by it.
\subsubsection{Energy Harvesting Process}
In our model, the ST has energy harvesting capability and harvests energy from environmental sources. We assume that the ST operates in half-duplex mode such that, in the beginning of each slot, it first harvests energy from the environment at a rate $E_h^i$ J/s for some fraction of time, and then transmits its data in the remaining duration of the slot.
In energy harvesting, the energy arrival times and amounts are not known in advance and are random in nature. In order to capture this randomness, we model the energy arrival process as a first-order stationary Markov process with $M_s$ states as in \cite{learning_theoretic}. The state transition probabilities are assumed to be known at the ST a priori. In practice, these transition probabilities can be estimated by observing the energy arrival pattern. At the beginning of the $i$th slot, the ST harvests energy from the environment at a harvesting rate $E_h^i$ which takes values from a finite set $\mathcal{E}=\{ e_1^h=0,e_2^h,\cdots,e_{M_s}^{h}\}$, where $e_1^h=0$ represents that no energy is harvested. In this paper, we consider $M_s=2$ without any loss of generality.
\begin{figure}[!h]
\centering
\includegraphics[width=0.65\linewidth]{markov_process}
\vspace{-1mm}
\caption{Two state Markov process}
\label{markov_process}
\vspace{-1mm}
\end{figure}
Fig. \ref{markov_process} shows a two state Markov process, where $P_{ij},\,i,j\in\{1,2\}$ are transition probabilities defined as
\begin{align*}
P_{ij}=\mathbb{P}(e_i^h\rightarrow e_j^h),\quad i,j\in\{1,2\}.
\end{align*}
We denote the state transition probability matrix by $\mathbf{T}$ such that $[\mathbf{T}]_{ij}=P_{ij}$, which is assumed to be known a priori. The state transition probability of the random variable $E_h^i$ is given as
\begin{align*}
\mathbb{P}\left (E_h^i\mid E_h^1,E_h^2,\cdots,E_h^{i-1} \right )=\mathbb{P}\left ( E_h^i\mid E_h^{i-1} \right ),\;\; i=2,\ldots,N+1
\end{align*}
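The two-state arrival process can be simulated directly from the transition matrix $\mathbf{T}$. The sketch below is ours and purely illustrative; in particular, starting the chain in the no-energy state $e_1^h$ is our assumption, not part of the model above:

```python
import random

def simulate_energy_arrivals(T, energy_levels, n_slots, rng):
    """Sample a harvesting-rate path E_h^1,...,E_h^N from a first-order
    Markov chain with row-stochastic transition matrix T."""
    state = 0  # assumption: the chain starts in the no-energy state e_1^h
    path = []
    for _ in range(n_slots):
        path.append(energy_levels[state])
        u, acc = rng.random(), 0.0
        for j, p_ij in enumerate(T[state]):  # inverse-CDF draw of next state
            acc += p_ij
            if u < acc:
                state = j
                break
    return path

rng = random.Random(0)
path = simulate_energy_arrivals([[0.5, 0.5], [0.5, 0.5]], [0.0, 0.5], 10, rng)
```

With an identity transition matrix the chain is absorbing and the sampled rate never changes, which is a convenient sanity check.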
\subsubsection{Battery Dynamics}
Since the energy harvesting process is a first-order Markov process, so are the battery dynamics. The energy available in the battery at the beginning of each slot depends upon the energy harvested and the energy consumed in the previous slot.
In the $i$th slot, the ST harvests energy for a $(1-\beta_i)$ fraction of the slot at a rate $E_h^i$ and then transmits its data for the remaining $\beta_i$ fraction with power $p_s^i$. If $B_i$ is the state of the battery at the beginning of the $i$th slot, we have
\begin{align}
0\leq \beta_ip_s^i\leq B_i,\quad\forall i. \label{eq:consumed_pwr_constr}
\end{align}
If $B_{max}$ denotes the capacity of the battery, the energy available in the battery at the beginning of $(i+1)$th slot can be expressed recursively as
\begin{align}
B_{i+1}=\textup{min}\left \{ B_i+(1-\beta _i)E_h^i-\beta _ip_s^{i}, B_{max} \right \}, \quad \forall i, \label{eq:battery_dynamics}
\end{align}
where $(1-\beta _i)E_h^i$ and $\beta _ip_s^{i}$ represent the harvested and consumed energies in the $i$th slot respectively. We assume $B_1=0$ without loss of generality.
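The battery recursion above translates into a one-line update. A minimal sketch (variable names are ours):

```python
def battery_update(B_i, beta_i, E_h, p_s, B_max):
    """One step of the battery recursion: harvest for (1 - beta_i) s at rate
    E_h, transmit for beta_i s at power p_s, and clip at the capacity B_max."""
    consumed = beta_i * p_s
    assert 0.0 <= consumed <= B_i, "consumed energy must not exceed B_i"
    return min(B_i + (1.0 - beta_i) * E_h - consumed, B_max)

# Half the slot spent harvesting at 0.5 J/s from an empty battery:
b_next = battery_update(B_i=0.0, beta_i=0.5, E_h=0.5, p_s=0.0, B_max=1.0)
```

Note how the `min` implements the finite capacity: any harvested energy beyond $B_{max}$ is lost.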
Both the harvested energy and battery state jointly determine the time sharing and transmit power in a slot. Therefore, we can form a new first order Markov process whose states are defined as the joint state of energy harvesting states and battery states. The $i$th state of this new Markov process, $Q_i$ can be defined as
\begin{align}
Q_i\triangleq \left\{
\begin{array}{ll}
B_1, & \text{for } i=1\\
(E_h^{i-1},B_i), & \text{for }i=2,\ldots,N\\
B_{N+1}, & \text{for }i=N+1
\end{array}
\right.
\label{eq:new_markov_state}
\end{align}
The state transition probability of this new Markov process is given as
\begin{align*}
\mathbb{P}\left (Q_j\mid Q_1,\ldots,Q_{j-1} \right )=\mathbb{P}\left ( Q_j\mid Q_{j-1} \right ),\;\; j = 2,\ldots,N+1
\end{align*}
\subsection{Channel Uncertainty Model}
We assume the coherence time of the fading channel to be equal to the slot length, i.e., the channel coefficients remain constant within each time slot but may vary from one slot to another. The ST can estimate the channel coefficients between itself and the PR using channel reciprocity \cite{estimation_using_feedback}. However, due to practical constraints such as feedback delay or estimation errors, the estimated channel coefficients may be erroneous. Therefore, we assume the CSI of the PT-SR and ST-PR links to be imperfect with bounded uncertainty \cite{robust_xu,robust_zhang}.
Under bounded uncertainty, the actual channel coefficients of PT-SR and ST-PR links can be written as
\begin{align*}
g_{ps}&=\hat{g}_{ps}+\Delta g_{ps}\\
g_{sp}&=\hat{g}_{sp}+\Delta g_{sp}
\end{align*}
where $\hat{g}$ and $\Delta g$ are the estimated channel coefficient and the estimation error respectively. Without assuming any statistical knowledge about the error, we bound the estimation error as $|\Delta g|\leq \varepsilon$, where $\varepsilon\geq0$ is the radius of the uncertainty region. We assume the estimated channel coefficients $\hat{g}_{ps}$ and $\hat{g}_{sp}$ to be zero mean complex Gaussian with variances $\hat{\sigma}^2_{ps}$ and $\hat{\sigma}^2_{sp}$ respectively.
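A useful consequence of the bounded-error model is that the worst-case channel power gain over the uncertainty disc is $(|\hat{g}|+\varepsilon)^2=|\hat{g}|^2+2\varepsilon|\hat{g}|+\varepsilon^2$, attained when the error aligns with the phase of the estimate; this is the quantity that appears in the robust constraints later on. A quick numerical sanity check (illustrative values):

```python
import cmath
import math

def worst_case_gain(g_hat, eps):
    """sup of |g_hat + dg|^2 over all errors with |dg| <= eps, which equals
    (|g_hat| + eps)^2: the worst error aligns with the estimate's phase."""
    return (abs(g_hat) + eps) ** 2

# Verify by searching the boundary of the uncertainty disc on a fine grid.
g_hat, eps = 0.8 - 0.3j, 0.05
grid_max = max(
    abs(g_hat + eps * cmath.exp(2j * math.pi * k / 720)) ** 2 for k in range(720)
)
```

The grid search stays just below the closed-form supremum, as it must.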
\section{Problem Formulation}
\subsection{Online Policy}
The transmit power of the ST, $p_s^i$, is controlled by the state of the new Markov process, $Q_i$, as well as by the interference threshold at the PR, $P_{th}$. Our aim is to obtain the optimal $\bar{\pmb{\beta}}$ and $\bar{\mathbf{p}}_s$ which maximize the worst-case short-term average throughput of the ST under the energy harvesting constraints of the ST, the interference threshold at the PR, and imperfect CSI. The optimization problem is given as
\begingroup
\allowdisplaybreaks
\begin{subequations}
\begin{align}
\max_{\bar{\mathbf{p}}_s,\bar{\pmb{\beta}}}\min_{
\begin{array}{l}
|\Delta g_{ps}^i|\leq \varepsilon\\
|\Delta g_{sp}^i|\leq \varepsilon
\end{array}
}\quad & \mathbb{E}_{Q_2^N}\left\{\left [\sum_{i=1}^{N} R(\beta_i, p_s^i) \right]| \mathbf{T} \right\} \label{eq:obj_orig}\\
\text{s.t.}\quad 0\leq \sum_{j=1}^{i}&\left( 1-\beta _{j} \right)E_{h}^{j}-\sum_{j=1}^{i}\beta_jp_s^{j}\leq B_{max},\;\; \forall i \nonumber\\
&\hspace{-3mm} (\text{Energy causality constraint of ST})\label{eq:c1_orig}\\
& 0\leq \beta_ip_s^{i}\leq B_i,\;\; \forall i \label{eq:c2_orig} \\
& \quad (\text{Consumed energy constraint of ST})\nonumber\\
& |\hat{g}^i_{sp}+\Delta g^i_{sp}|^2p_s^i\leq P_{th}, \quad \forall i \label{eq:c3_orig}\\
& \quad(\text{Interference constraint of PR})\nonumber\\
& \bar{\mathbf{0}}\preceq \bar{\pmb{\beta}}\preceq \bar{\mathbf{1}},\qquad \bar{\mathbf{p}}_s\succeq\vec{0} \label{eq:c4_orig}\\
& \quad(\text{Non-negativity constraint})\nonumber
\end{align}
\end{subequations}
\endgroup
where constraint (\ref{eq:c1_orig}) is the energy causality constraint. It states that, in any slot, the ST can consume at most the energy it has harvested up to that slot. The optimization problem in (\ref{eq:obj_orig})-(\ref{eq:c4_orig}) is a stochastic optimization problem where the conditional expectation $\mathbb{E}_{Q_2^N}[\cdot|\mathbf{T}]$ is taken with respect to all possible values of the states $Q_i,\,i=2,\ldots,N$ for a given state transition matrix $\mathbf{T}$. Aiming for a robust online policy, this optimization problem can be rewritten as (see Appendix)
\begingroup
\allowdisplaybreaks
\begin{subequations}
\begin{align}
\max_{\bar{\mathbf{p}}_s,\bar{\pmb{\beta}}} \quad & \mathbb{E}_{Q_2^N}\left\{\left [\sum_{i=1}^{N} R_1(\beta_i, p_s^i) \right]| \mathbf{T} \right\} \label{eq:obj_new}\\
\text{s.t.} \quad & \eqref{eq:c1_orig}, \, \eqref{eq:c2_orig},\, \eqref{eq:c4_orig} \label{eq:c1_new}\\
& (| \hat{g}_{sp}^{i}|^2 +2\varepsilon |\hat{g}_{sp}^{i}|+\varepsilon ^2)p_s^{i}\leq P_{th},\;\forall i \label{eq:c2_new}
\end{align}
\end{subequations}
\endgroup
where $R_1(\beta_i, p_s^i)=\beta _{i}\log_{2}\left ( 1+\frac{\left|g_{ss}^{i} \right|^{2}p_s^{i}}{\sigma_n^2+(|\hat{g}_{ps}^{i}|^{2}+2\varepsilon |\hat{g}^i_{ps}|+\varepsilon^2)p_{p} } \right )$ is the worst-case instantaneous achievable throughput of the ST in the $i$th slot.
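The worst-case rate $R_1$ can be checked numerically: it must lower-bound the true rate for every admissible estimation error. A sketch with illustrative real-valued channel values (all numbers are ours):

```python
import math
import random

def robust_rate(beta, p_s, g_ss_sq, g_ps_hat, eps, p_p, sigma_n_sq):
    """Worst-case rate R_1: the PT-SR interference power gain is inflated
    to |g_ps_hat|^2 + 2*eps*|g_ps_hat| + eps^2 = (|g_ps_hat| + eps)^2."""
    g_ps_worst = (abs(g_ps_hat) + eps) ** 2
    return beta * math.log2(1 + g_ss_sq * p_s / (sigma_n_sq + g_ps_worst * p_p))

rng = random.Random(1)
g_hat, eps = 0.7, 0.1
r1 = robust_rate(0.5, 1.0, 1.0, g_hat, eps, 2.0, 0.1)

# Rates under randomly sampled admissible errors |dg| <= eps:
rates = []
for _ in range(200):
    g_sq = (g_hat + eps * (2 * rng.random() - 1)) ** 2
    rates.append(0.5 * math.log2(1 + 1.0 / (0.1 + g_sq * 2.0)))
```

Every sampled realization achieves at least $R_1$, confirming the worst-case character of the bound.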
The optimization problem (\ref{eq:obj_new})-(\ref{eq:c2_new}) cannot be solved for each slot independently due to the time-coupled constraint (\ref{eq:c1_orig}). Therefore, we first rewrite the optimization problem in (\ref{eq:obj_orig})-(\ref{eq:c4_orig}) as a classical finite horizon MDP by reformulating the constraint (\ref{eq:c1_orig}) as (\ref{eq:battery_dynamics}) and combining the constraints (\ref{eq:c2_orig}) and (\ref{eq:c3_orig}). The optimization problem can now be rewritten as
\begingroup
\allowdisplaybreaks
\begin{subequations}
\begin{align}
\max_{\bar{\mathbf{p}}_s,\bar{\pmb{\beta}}}\quad & \mathbb{E}_{Q_2^N}\left\{\left [\sum_{i=1}^{N} R_1(\beta_i, p_s^i) \right]| \mathbf{T} \right\} \label{eq:obj}\\
\text{s.t.}\quad & B_{i+1}=\textup{min}\left \{ B_i+(1-\beta _i)E_h^i-\beta _ip_s^{i}, B_{max} \right \}, \;\; \forall i \label{eq:c1}\\
& 0\leq p_s^i\leq \text{min}\left \{ \frac{B_i}{\beta_i},\frac{P_{th}}{|\hat{g}_{sp}^{i}|^{2}+2\varepsilon |\hat{g}^i_{sp}|+\varepsilon^2} \right \},\;\;\forall i \label{eq:c2}\\
& \bar{\mathbf{0}}\preceq \bar{\pmb{\beta}}\preceq \bar{\mathbf{1}},\qquad \bar{\mathbf{p}}_s\succeq\vec{0} \label{eq:c3}
\end{align}
\end{subequations}
\endgroup
The resulting optimization problem (\ref{eq:obj})-(\ref{eq:c3}) can now be solved optimally using finite horizon SDP \cite{MDP_puterman},\cite{DP_bertsekas}. The optimal values of the optimization variables $\bar{\mathbf{p}}_s$ and $\bar{\pmb{\beta}}$ are obtained using the backward induction method \cite{MDP_puterman} and are calculated in time-reversed order. The SDP algorithm is given in Algorithm \ref{algo:SDP}.
\begin{proposition*}
The optimal last state of the newly formed Markov process is $Q^*_{N+1}=B_{N+1}=0$.
\end{proposition*}
The proposition states that, by the end of the transmission, all the harvested energy has been consumed, i.e., at the end of the last time slot the energy causality constraint is satisfied with equality:
\begin{align*}
\sum_{j=1}^{N}(1-\beta_j)E_h^j=\sum_{j=1}^{N}\beta_jp_s^j.
\end{align*}
It follows from the fact that it is always suboptimal to leave energy in the battery at the end of the transmission.
\begin{algorithm}
\caption{SDP algorithm}
\label{algo:SDP}
\begin{algorithmic}
\State \textbf{Initialization:} Initialize $\mathbf{T}$, $Q_{N+1}=B_{N+1}=0$.
\State Set $n\leftarrow N$
\State \textbf{Look up:}
\While{$n\neq1$}
\State Calculate $\mathbb{E}_{Q_n}\left\{\left [ R_1(\beta_n, p_s^n) \right]| \mathbf{T} \right\}$ for all possible values of $Q_n,\, n=2,\ldots,N$.
\State $n\leftarrow n-1$
\EndWhile
\State \textbf{Optimal $\bar{\mathbf{p}}_s$ and $\bar{\pmb{\beta}}$ using backward induction:}
\State Set $n\leftarrow1$
\While{$n\neq N$}
\State Given $Q_n=\{E_h^{n-1},B_n\}$, obtain
\State $[p_s^n,\beta_n]=\underset{{p_s^n,\beta_n}}{\arg\max}\,\mathbb{E}_{Q_n}\left\{\left [R_1(\beta_n, p_s^n) \right]| \mathbf{T} \right\}$ from Look up.
\State $n\leftarrow n+1$
\EndWhile\\
\Return $\bar{\mathbf{p}}_s$ and $\bar{\pmb{\beta}}$
\end{algorithmic}
\end{algorithm}
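Algorithm \ref{algo:SDP} can be prototyped with a plain backward induction over a discretized joint state. The sketch below is ours and heavily simplified: fixed worst-case channel gains, a single robust power cap, and battery values rounded to a grid are all assumptions, so it illustrates the structure of the recursion rather than the exact algorithm:

```python
import itertools
import math

def sdp_policy(T, E_levels, N, B_max, P_cap, p_p, sigma_n_sq,
               g_ss_sq=1.0, g_ps_worst_sq=1.0, step=0.2, b_step=0.05):
    """Finite-horizon backward induction over the joint (E_h, battery) state.
    Assumptions: constant channel gains, a fixed robust power cap P_cap,
    and battery values snapped to a grid of spacing b_step."""
    grid = [i * step for i in range(int(round(1 / step)) + 1)]
    b_grid = [i * b_step for i in range(int(round(B_max / b_step)) + 1)]

    def snap(b):  # round a battery value to the nearest grid point
        return min(b_grid, key=lambda g: abs(g - b))

    def rate(beta, p):
        if beta == 0.0:
            return 0.0
        return beta * math.log2(1 + g_ss_sq * p / (sigma_n_sq + g_ps_worst_sq * p_p))

    # V[e][b]: expected sum throughput from the current slot onward.
    V = {e: {b: 0.0 for b in b_grid} for e in range(len(E_levels))}
    policy = []
    for _ in range(N):
        V_new = {e: {} for e in range(len(E_levels))}
        pol = {}
        for e, b in itertools.product(range(len(E_levels)), b_grid):
            best, arg = 0.0, (0.0, 0.0)
            for beta, p in itertools.product(grid, grid):
                if beta * p > b + 1e-9 or p > P_cap:  # energy / interference caps
                    continue
                b_next = snap(min(b + (1 - beta) * E_levels[e] - beta * p, B_max))
                cont = sum(T[e][e2] * V[e2][b_next] for e2 in range(len(E_levels)))
                if rate(beta, p) + cont > best:
                    best, arg = rate(beta, p) + cont, (beta, p)
            V_new[e][b] = best
            pol[(e, b)] = arg
        V = V_new
        policy.append(pol)
    policy.reverse()  # policy[i] maps the slot-(i+1) state to (beta, p_s)
    return V, policy

V, policy = sdp_policy(T=[[0.5, 0.5], [0.5, 0.5]], E_levels=[0.0, 0.5],
                       N=4, B_max=1.0, P_cap=1.0, p_p=2.0, sigma_n_sq=0.1)
```

A basic consistency property of the returned value function is monotonicity in the battery level: more stored energy can never reduce the achievable expected throughput.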
\vspace{-3mm}
\subsection{Myopic Policy}
In the myopic policy, the ST aims to maximize its immediate throughput in each slot and therefore consumes all the harvested energy for transmission in the same slot. In this case, the throughput in each slot can be maximized by optimizing the time sharing parameter $\vec{\pmb{\beta}}$ only. Under the myopic policy, the transmit power of the ST in the $i$th slot is given by $p_s^i=\frac{(1-\beta_i)}{\beta_i}E_h^i$. The optimization problem for the robust myopic policy is given as \cite{underlay_myopic}:
\begingroup
\begin{subequations}
\begin{align}
\max_{\vec{\pmb{\beta}}}\! & \sum_{i=1}^N\!\beta_i\log_2\!\left(\!1+\frac{(1-\beta_i)|g^i_{ss}|^2E_h^i}{\beta_i(\sigma_n^2+(|\hat{g}_{ps}^i|^2+2\varepsilon|\hat{g}_{ps}^i|+\varepsilon^2)p_p)}\right),\label{eq:myopic_obj}\\
\text{s.t.}\;\; & (1-\beta_i)(|\hat{g}_{sp}^i|^2+2\varepsilon|\hat{g}_{sp}^i|+\varepsilon^2)E_h^i\leq \beta_iP_{th},\;\forall i, \label{eq:myopic_c1}\\
& \vec{0}\preceq \vec{\pmb{\beta}}\preceq\vec{1},\label{eq:myopic_c2}
\end{align}
\end{subequations}
\endgroup
which is a convex optimization problem and can be solved using any standard convex optimization solver such as CVX \cite{cvx}.
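Since the myopic problem decouples across slots, each $\beta_i$ can also be found by a simple one-dimensional search instead of a general-purpose solver. An illustrative sketch (parameter values and names are ours):

```python
import math

def myopic_beta(E_h, g_ss_sq, g_ps_hat, g_sp_hat, eps, p_p, p_th, sigma_n_sq,
                grid=2000):
    """Per-slot myopic split: all energy harvested in the slot, (1-beta)E_h,
    is spent in the same slot, so p_s = (1-beta)E_h/beta; beta is found by
    grid search (the problem is convex, so CVX would do equally well)."""
    g_ps_w = (abs(g_ps_hat) + eps) ** 2   # worst-case PT-SR gain
    g_sp_w = (abs(g_sp_hat) + eps) ** 2   # worst-case ST-PR gain
    best_beta, best_rate = 1.0, 0.0
    for i in range(1, grid + 1):
        beta = i / grid
        p_s = (1 - beta) * E_h / beta
        if g_sp_w * p_s > p_th:           # robust PR interference constraint
            continue
        r = beta * math.log2(1 + g_ss_sq * p_s / (sigma_n_sq + g_ps_w * p_p))
        if r > best_rate:
            best_rate, best_beta = r, beta
    return best_beta, best_rate

beta_star, r_star = myopic_beta(E_h=0.5, g_ss_sq=1.0, g_ps_hat=0.5,
                                g_sp_hat=0.5, eps=0.05, p_p=2.0,
                                p_th=1.0, sigma_n_sq=0.1)
```

The returned split always satisfies the robust interference constraint at the PR by construction.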
\subsection{Offline Policy}
In the offline policy, all the channel coefficients and energy arrivals are assumed to be known a priori. The offline policy outperforms the myopic and online policies in terms of sum throughput and acts as a benchmark for them. The optimization problem for the robust offline policy is given as \cite{my_spcom}:
\begingroup
\allowdisplaybreaks
\begin{subequations}
\begin{align}
\max_{\vec{p}_s,\pmb{\vec{\beta}}}\quad & \sum_{i=1}^NR_1(\beta_i,p_s^i), \label{eq:offline_obj}\\
\text{s.t.} \quad & \eqref{eq:c1_orig}, \, \eqref{eq:c2_orig},\, \eqref{eq:c4_orig},\, \eqref{eq:c2_new}, \label{eq:offline_c1}
\end{align}
\end{subequations}
\endgroup
which is a convex optimization problem and can be solved using any standard convex optimization solver such as CVX \cite{cvx}.
\section{Results and Discussions}
For simulation, it is assumed that the PT uses a constant power of $p_p=2$ Watt for transmission in all the slots, and $\sigma_n^2=0.1$. The number of states of the energy harvesting process is assumed to be $M_s=2$, such that $E_h$ takes values from the discrete set $\mathcal{E}=\{e_1^h=0,e_2^h=0.5\}$ depending upon the transition probability matrix
\begin{align}
\mathbf{T}=\frac{1}{2}\begin{bmatrix}
1 &1 \\
1& 1
\end{bmatrix}
\label{eq:transition_prob_matrix}
\end{align}
Since we consider a discrete-time SDP, the optimization variables $\beta_i$ and $p_s^i$ are discretized with a step size of 0.2.
\vspace{-1mm}
\subsection{Effect of uncertainty region radius $\varepsilon$ on secondary throughput}
\label{subsection:diff_epsilon}
Fig. \ref{fig:throughput_diff_channel_uncertainty_online} shows the ST's average sum throughput ($R^{avg}_{sum}$), averaged over different channel realizations, for different values of the uncertainty region radius $\varepsilon$ under the optimal online time sharing policy. The variances of all the channel links are assumed to be unity, i.e., $\sigma_{pp}^2=\sigma_{ps}^2=\sigma_{sp}^2=\sigma_{ss}^2=1$, the interference threshold is $P_{th}=1$ Watt, and $B_{max}=1$ Joule. The effect of the radius of the uncertainty region on the worst-case average throughput is clearly visible from the figure. As $\varepsilon$ increases, the average throughput decreases for two reasons. First, increasing $\varepsilon$ reduces the instantaneous throughput; second, by constraint (\ref{eq:c2_new}), increasing $\varepsilon$ imposes a more stringent constraint on the transmit power $\bar{\mathbf{p}}_s$, which in turn reduces the average throughput.
\begin{figure*}[!t]
\centering
\begin{minipage}{0.3\linewidth}
\centering
\includegraphics[width=2.3in]{throughput_diff_channel_uncertainty_online}
\caption{Average sum throughput of ST ($R_{sum}^{avg}$) versus number of slots ($N$) for different radius of uncertainty region ($\varepsilon$) under optimal online policy.}
\label{fig:throughput_diff_channel_uncertainty_online}
\end{minipage}
\hspace{3mm}
\begin{minipage}{0.3\linewidth}
\centering
\includegraphics[width=2.3in]{throughput_diff_channel_online}
\caption{Average sum throughput of ST $(R_{sum}^{avg})$ versus number of slots $(N)$ for different channel conditions under optimal online policy.}
\label{fig:throughput_diff_channel_online}
\end{minipage}
\hspace{3mm}
\begin{minipage}{0.3\linewidth}
\centering
\centering
\includegraphics[width=2.3in]{beta_vs_interf_online_new}
\caption{Average harvesting and transmission time versus interference threshold $(P_{th})$ under optimal online policy.}
\label{fig:beta_vs_interf_online}
\end{minipage}
\vspace{-0.5cm}
\end{figure*}
\subsection{Effect of different channel conditions on throughput}
Fig. \ref{fig:throughput_diff_channel_online} shows the ST's average sum throughput for various channel conditions under the optimal online time sharing policy. For the simulations, we assume that weak and strong channel links have variances 0.1 and 1 respectively; e.g., in the case of a weak PT-SR link, we assume $\sigma_{ps}^2=0.1$ and $\sigma_{pp}^2=\sigma_{sp}^2=\sigma_{ss}^2=1$, with $P_{th}=1$ Watt and $\varepsilon=0.05$. In the case of all channel links being equally strong, we assume $\sigma_{pp}^2=\sigma_{ps}^2=\sigma_{sp}^2=\sigma_{ss}^2=1$. From the figure, it is noticed that the secondary throughput increases as the ST-PR link becomes weaker: a weak ST-PR link relaxes the interference constraint at the PR, allowing the ST to transmit with higher power, which in turn results in higher throughput. A weak PT-SR link causes low interference at the SR, and therefore the throughput also increases. A weak ST-SR link degrades the secondary performance because of poor channel gains. When all links are equally strong, the performance lies in between that of the weak ST-PR link and that of the weak ST-SR link: the ST cannot exploit a relaxed interference constraint as in the weak ST-PR case, while the stronger primary interference offsets the benefit of a strong ST-SR link, resulting in no net throughput gain.
\subsection{Effect of interference threshold $P_{th}$ on average
harvesting/transmission time}
Fig. \ref{fig:beta_vs_interf_online} shows the variation of the average harvesting/transmission time with the interference threshold at the PR, $P_{th}$. The plot is obtained for a fixed number of secondary slots $N=8$ and uncertainty region radius $\varepsilon=0.05$; the variances of the channel coefficients are the same as in section \ref{subsection:diff_epsilon}. As the value of $P_{th}$ increases, the ST can transmit with more power, which is obtained by consuming more energy in a shorter amount of time, since $p_s^i=E_s^i/\beta_i$, where $E_s^i$ is the energy consumed by the ST in the $i$th slot. Therefore, the harvesting time increases and the transmission time decreases, so that the ST can accumulate more energy and transmit with higher power.
\subsection{Effect of battery capacity $B_{max}$}
\vspace{-3mm}
\begin{figure}[!h]
\centering
\includegraphics[width=0.75\linewidth]{throughput_vs_slot_diff_B_max}
\caption{Average sum throughput of ST $(R_{sum}^{avg})$ versus number of slots $(N)$ for different values of battery capacity $B_{max}$}
\label{fig:throughput_vs_slot_diff_B_max}
\vspace{-3mm}
\end{figure}
Fig. \ref{fig:throughput_vs_slot_diff_B_max} shows the effect of the battery capacity on the average achievable throughput of the ST. The radius of the uncertainty region is assumed to be $\varepsilon=0.05$ and all other parameters are the same as in section \ref{subsection:diff_epsilon}. As we decrease the battery capacity, the secondary throughput reduces. This effect can be observed from constraint (\ref{eq:c1}): when the battery capacity is reduced, the $B_{max}$ term dominates in (\ref{eq:c1}) and the next battery state is capped at $B_{max}$, i.e., the constraint does not allow the ST to store the energy it needs, which reduces the throughput. As the battery capacity is increased, the ST can accommodate more energy and can therefore transmit with higher power whenever the channel conditions allow. Fig. \ref{fig:throughput_vs_slot_diff_B_max} also shows that, beyond a certain capacity, a further increase has no impact on the throughput, since in this case the first term in constraint (\ref{eq:c1}) becomes dominant and $B_{max}$ has no effect on the next battery state.
\subsection{Nature of harvested and consumed energies}
\vspace{-3mm}
\begin{figure}[!h]
\centering
\includegraphics[width=0.75\linewidth]{consumed_and_harvested_energy_online}
\caption{Energy versus number of slots $(N=7)$ under optimal online policy}
\label{fig:consumed_and_harvested_energy_online}
\vspace{-3mm}
\end{figure}
Fig. \ref{fig:consumed_and_harvested_energy_online} shows the behavior of the harvested and consumed energies over the secondary slots ($N=7$). All the simulation parameters are kept the same as in section \ref{subsection:diff_epsilon} and $\varepsilon=0.05$. From the figure, it is clear that, in order to satisfy the energy causality constraint in (\ref{eq:c1_orig}), the consumed energy always remains less than or equal to the harvested energy, and the residual energy stays below the maximum storage capacity $B_{max}=1$ Joule. Since time and energy are optimized jointly, the ST does not harvest energy that it cannot use. Therefore, at the end of the transmission, all the harvested energy has been consumed under the optimal online policy.
\subsection{Performance comparison between the optimal online, offline and myopic time sharing policies}
\begin{figure}[!h]
\centering
\includegraphics[width=0.75\linewidth]{online_vs_offline_and_myopic_full_CSI}
\caption{Average sum throughput of ST $(R_{sum}^{avg})$ versus number of slots $(N)$ for optimal online and offline policies}
\label{fig:online_vs_offline_and_myopic_full_CSI}
\vspace{-5mm}
\end{figure}
Fig. \ref{fig:online_vs_offline_and_myopic_full_CSI} compares the online, offline and myopic policies in terms of the average sum throughput of the ST. All simulation parameters are the same as in section \ref{subsection:diff_epsilon}. The offline and myopic policies considered for comparison are adopted from \cite{underlay_myopic} and \cite{my_spcom}, and modified slightly in accordance with our system model. The figure shows that the average sum throughput of the ST under the online policy lies in between those of the offline and myopic policies. Since in the offline policy all the channel gains are assumed to be known a priori, the ST obtains the optimal policy before the transmission starts and therefore achieves a much higher throughput; hence, it acts as a benchmark for the online transmission policy. On the other hand, the myopic policy tries to maximize the immediate throughput and consumes all the harvested energy in the same slot, and therefore performs worse than the online policy.
\section{Conclusions}
We proposed a robust online time sharing policy that maximizes the throughput of the SU under energy arrival and channel uncertainties. The proposed policy jointly optimizes the time sharing between the \textit{harvesting phase} and the \textit{transmission phase}, and the transmit power of the ST. The results show that our proposed policy outperforms the myopic policy and, unlike the offline policy, does not require any prior information about energy arrivals and channel gains. However, the computational complexity of our policy is higher than that of the myopic and offline policies, as the SDP suffers from the \textit{curse of dimensionality}.
\section{Introduction}
Constructing valid confidence regions is essential to assess the uncertainty associated with point estimates. Even for a single parameter there exist multiple valid confidence intervals. As a result, a large body of literature has been developed to construct confidence intervals with desirable properties. In general, confidence intervals are constructed to minimize the corresponding volume (see for example \citet{efron2006minimum} and \citet{jeyaratnam1985minimum}).
In recent advances, \citet{chernozhukov2017central} developed a central limit theorem for a high-dimensional vector of random variables, allowing for confidence regions for high-dimensional parameter vectors. Nevertheless, these results only hold for specific sets $\mathcal{A}$ which are not too complex. A common application (see e.g. \citet{belloni2018uniformly}) of their work is to construct confidence regions in shape of a hypercube.
We consider a more general setting of $s$-sparsely convex confidence regions which are then shown to have exponentially smaller volume than the corresponding cube.
\section{Main Results}
At first, we recap the high-dimensional limit theorem from \citet{chernozhukov2017central}. Let $Y_1,\dots,Y_n$ be independent random vectors in $\mathbb{R}^d$, where each component is centered, $\mathbb{E}[Y_{i,j}]=0$, with $\mathbb{E}[Y^2_{i,j}]<\infty$. Additionally, let $X_1,\dots,X_n$ be independent random vectors, where
\begin{align*}
X_i=(X_{i,1},\dots,X_{i,d})^T\sim \mathcal{N}(0,\Sigma)
\end{align*}
with $\Sigma:=\mathbb{E}[Y_i Y_i^T]$. Assume
\begin{align*}
0<c\le \lambda_{\min}\le \lambda_{\max}\le C<\infty,
\end{align*}
where $\lambda_{\min}$ and $\lambda_{\max}$ denote the minimal and maximal eigenvalues of $\Sigma$, respectively. It is crucial that both constants $c$ and $C$ do not depend on the dimension $d$.
Further, define the normalized sums
\begin{align*}
S_n^Y:=\frac{1}{\sqrt{n}}\sum\limits_{i=1}^n Y_i \text{ and } S_n^X:=\frac{1}{\sqrt{n}}\sum\limits_{i=1}^n X_i.
\end{align*}
Next, we specify the class of $s$-sparsely convex sets as defined in \citet{chernozhukov2017central}.
\begin{definition}[$s$-sparsely convex sets]
A set $A\subset \mathbb{R}^d$ is called $s$-sparsely convex if there exist an integer $Q>0$ and convex sets $A_q\subset \mathbb{R}^d$, $q=1,\dots,Q$, such that $A=\cap^Q_{q=1}A_q$. Additionally, the indicator function of each $A_q$, $\omega\mapsto I(\omega\in A_q)$, depends only on $s$ elements of its argument $\omega = (\omega_1,\dots,\omega_d)$.
\end{definition}
\noindent
\citet{chernozhukov2017central} were able to prove that, under some regularity conditions, for the class of $s$-sparsely convex sets $\mathcal{A}^{sp}(s)$ it holds
\begin{align*}
\sup_{A\in\mathcal{A}^{sp}(s)}\Big|P(S_n^Y\in A)-P(S_n^X\in A)\Big|\xrightarrow{n \to \infty} 0
\end{align*}
even if $d$ is larger than $n$. They additionally provide a result for bootstrapping in high-dimensions, enabling the construction of high-dimensional confidence regions. The standard confidence region is based on hyperrectangles.
Define
\begin{align*}
A_\infty:=\left\{x\in \mathbb{R}^d\big|\|x\|_{\infty}\le c_\alpha^{(\infty)} \right\}\in \mathcal{A}^{sp}(1)
\end{align*}
as a $d$-dimensional cube with edge length of $2c_\alpha^{(\infty)}$, where
\begin{align*}
c_\alpha^{(\infty)}:= q_\alpha\left(\|X\|_{\infty}\right).
\end{align*}
Relying on the bootstrap to approximate the covariance structure $\Sigma$ enables the approximation of $q_\alpha\left(\|Y\|_{\infty}\right)$ by $q_\alpha\left(\|X\|_{\infty}\right)$. From now on we will only focus on the volume of specific $s$-sparsely convex sets and omit the approximation from $Y$ to $X$. The volume $V(A_\infty)$ of $A_\infty$ is given by
\begin{align*}
V(A_\infty)=\left(2c_\alpha^{(\infty)}\right)^d.
\end{align*}
Motivated by the well known property that the volume of the $d$-ball with fixed radius approaches zero, we will use different $s$-sparsely convex sets and analyze their behavior in large dimensions.
Let $s\in\mathbb{N}$, which is fixed and does not depend on $d$. Additionally, for simplicity assume that $\frac{d}{s}=l_s\in\mathbb{N}$ and define the corresponding index sets
\begin{align*}
J_k:=\{(k-1)\cdot s + 1,\dots, k\cdot s\},\quad k=1,\dots,l_s.
\end{align*}
Next, define, for any $p\ge 1$,
\begin{align*}
A_p:=\left\{x\in \mathbb{R}^d\big|\max_{1\le k\le l_s}\|x_{J_k}\|_p \le c_\alpha^{(p)} \right\}\in \mathcal{A}^{sp}(s),
\end{align*}
which is the intersection of $l_s$ orthogonal $d$-dimensional cylinders with radius $c_\alpha^{(p)}$, where each set depends only on $s$ components (and is therefore an $s$-sparsely convex set). It can be interpreted as a crude approximation of the $d$-ball. Here
\begin{align*}
c_\alpha^{(p)}:= q_\alpha\left(\max_{1\le k\le l_s}\|X_{J_k}\|_p\right).
\end{align*}
Since the sets $J_k$ are disjoint, it immediately follows
\begin{align*}
V(A_p)&=\left(\frac{\left(2\Gamma\left(\frac{1}{p}+1\right)c_\alpha^{(p)}\right)^s}{\Gamma\left(\frac{s}{p}+1\right)}\right)^{l_s}=\left(\frac{2\Gamma\left(\frac{1}{p}+1\right)c_\alpha^{(p)}}{\Gamma\left(\frac{s}{p}+1\right)^{\frac{1}{s}}}\right)^d,
\end{align*}
which is the volume of the $s$-dimensional ball (with respect to the $\ell_p$-norm) with radius $c_\alpha^{(p)}$, raised to the power $l_s$.
To compare the volumes for a growing number of dimensions $d$, we have to take the different sizes of the quantiles into account, since they depend on $d$. The following theorem states the main result of this note.
\begin{theorem}\label{HDCR_th1}
For all $p\ge 2$ and $s$ large enough (the specific value only depends on the bounds of the eigenvalues), it holds
\begin{align*}
\lim_{d\to\infty}\frac{V(A_p)}{V(A_\infty)}=0.
\end{align*}
Especially, the ratio is decaying exponentially in $d$.
\end{theorem}
\noindent
Therefore, the volume of the confidence set based on $A_p$ is asymptotically negligible compared to the volume of $A_\infty$.\newpage
\noindent
\begin{proof}
At first, observe that, due to Theorem 3.4 from \citet{hartigan2014bounding}, we obtain for every fixed $\alpha$ and $d$ large enough
\begin{align*}
c\sqrt{\log(d)}\le c_\alpha^{(\infty)},
\end{align*}
since
\begin{align*}
P\left( \max_{j=1,\dots,d} |X_j|\le x\right)&\le P\left(\max_{j=1,\dots,d} X_j\le x\right),
\end{align*}
so that the quantile of $\max_{j} |X_j|$ is bounded from below by the quantile of $\max_{j} X_j$.
Here, the constant $c$ depends on the eigenvalues of $\Sigma$ and $\alpha$. Next, remark that
\begin{align*}
E\left[\|X_{J_k}\|_p\right]&\le E\left[\|\Sigma_k^{\frac{1}{2}}\|_p\|\Sigma_k^{-\frac{1}{2}}X_{J_k}\|_p\right]\\
&\le s^{\frac{1}{p}-\frac{1}{2}} \|\Sigma_k^{\frac{1}{2}}\|_pE\left[ \|\Sigma_k^{-\frac{1}{2}}X_{J_k}\|_2\right]\\
&\le s^{\frac{1}{p}} \sqrt{\lambda_{\max}}
\end{align*} for any $p\ge 2$. In the last step we used
\begin{align*}
\|\Sigma_k^{\frac{1}{2}}\|_p\le \|\Sigma_k^{\frac{1}{2}}\|_2\le \sqrt{\lambda_{\max}},
\end{align*}
see e.g. \citet{goldberg1987equivalence}. We can rely on basic Gaussian concentration inequalities as in Example 5.7 from \citet{boucheron2013concentration}. It holds for all $t>0$
\begin{align*}
P\left(\|X_{J_k}\|_p-\mathbb{E}\left[\|X_{J_k}\|_p\right]\ge t\right)\le \exp\left(-\frac{t^2}{2\|\Sigma_k^{\frac{1}{2}}\|^2_{2,p}}\right)
\end{align*}
with
\begin{align*}
\max_{1\le k\le l_s} \|\Sigma_k^{\frac{1}{2}}\|_{2,p}&=\max_{1\le k\le l_s}\sup_{y\in\mathbb{R}^s:\|y\|_2=1}\|\Sigma_k^{\frac{1}{2}} y\|_p\\
&\le s^{\frac{1}{p}-\frac{1}{2}}\max_{1\le k\le l_s}\sup_{y\in\mathbb{R}^s:\|y\|_2=1}\|\Sigma_k^{\frac{1}{2}} y\|_2\\
&\le s^{\frac{1}{p}-\frac{1}{2}}\sqrt{\lambda_{\max}}.
\end{align*}
Therefore, for
$$\bar{x}_p:=s^{\frac{1}{p}-\frac{1}{2}}\sqrt{2 \lambda_{\max}\log\left(\frac{d}{\alpha s}\right)}+ s^{\frac{1}{p}} \sqrt{\lambda_{\max}}$$
we obtain
\begin{align*}
P\left(\|X_{J_k}\|_p\ge \bar{x}_p\right)&= P\left(\|X_{J_k}\|_p -E\left[\|X_{J_k}\|_p\right]\ge\bar{x}_p -E\left[\|X_{J_k}\|_p\right]\right)\\
&\le P\left(\|X_{J_k}\|_p -E\left[\|X_{J_k}\|_p\right]\ge s^{\frac{1}{p}-\frac{1}{2}}\sqrt{2\lambda_{\max}\log\left(\frac{d}{\alpha s}\right)}\right)\\
&\le \exp\left(\log\left(\frac{\alpha s}{d}\right)\left(\frac{s^{\frac{1}{p}-\frac{1}{2}} \sqrt{\lambda_{\max}}}{\|\Sigma_k^{\frac{1}{2}}\|_{2,p}}\right)^2\right)\\
&\le \frac{\alpha s}{d}.
\end{align*}
It follows
\begin{align*}
1 - P\left(\max_{1\le k\le l_s}\|X_{J_k}\|_p\le \bar{x}_p\right)&\le \sum_{k=1}^{l_s}\left(1-P\left(\|X_{J_k}\|_p\le \bar{x}_p\right)\right)\le \alpha.
\end{align*}
Therefore, it holds that
\begin{align*}
c_\alpha^{(p)}&\le s^{\frac{1}{p}-\frac{1}{2}}\sqrt{2 \lambda_{\max}\log\left(\frac{d}{\alpha s}\right)}+ s^{\frac{1}{p}} \sqrt{\lambda_{\max}}.
\end{align*}
As a result, we directly obtain for every fixed $\alpha$, $p\ge 2$ and $s$
\begin{align*}
c_\alpha^{(p)}&\le C s^{\frac{1}{p}-\frac{1}{2}} \sqrt{\log(d)},
\end{align*}
where the constant $C$ does not depend on $s$ as long as $d$ is large enough ($\log(d)> s$).
Comparing the volumes of $A_p$ and $A_\infty$, we obtain
\begin{align*}
\left(\frac{V(A_p)}{V(A_\infty)}\right)^{\frac{1}{d}}&=\frac{\Gamma\left(\frac{1}{p}+1\right)c_\alpha^{(p)}}{\Gamma\left(\frac{s}{p}+1\right)^{\frac{1}{s}}c_\alpha^{(\infty)}}\\
&\le \frac{\Gamma\left(\frac{1}{p}+1\right)Cs^{\frac{1}{p}-\frac{1}{2}}}{\Gamma\left(\frac{s}{p}+1\right)^{\frac{1}{s}}c}\\
&<1
\end{align*}
for $d$ large enough as long as
\begin{align*}
\Gamma\left(\frac{s}{p}+1\right)^{\frac{1}{s}}\ge s^{\frac{1}{p}-\frac{1}{2}}\Gamma\left(\frac{1}{p}+1\right)\frac{C}{c},
\end{align*}
which is satisfied for $s$ large enough due to the faster-than-exponential growth of the gamma function.
\end{proof}
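The closing growth argument is easy to probe numerically. The sketch below (standard library only) locates the first $s$ at which $\Gamma(s/p+1)^{1/s}$ overtakes $s^{\frac{1}{p}-\frac{1}{2}}\,\Gamma(\frac{1}{p}+1)\,\frac{C}{c}$; the values of $p$ and of the ratio $C/c$ are arbitrary illustrative assumptions, not quantities taken from the proof.

```python
import math

def lhs(s, p):
    # Gamma(s/p + 1)^(1/s), evaluated in log-space for numerical stability
    return math.exp(math.lgamma(s / p + 1.0) / s)

def rhs(s, p, ratio):
    # s^(1/p - 1/2) * Gamma(1/p + 1) * (C/c), for an assumed constant ratio C/c
    return s ** (1.0 / p - 0.5) * math.gamma(1.0 / p + 1.0) * ratio

p, ratio = 2.0, 10.0  # illustrative choices
crossing = next(s for s in range(1, 100_000) if lhs(s, p) >= rhs(s, p, ratio))
print(crossing)
```

For $p=2$ the right-hand side is constant in $s$, which makes the eventual dominance of the gamma term particularly transparent: the left-hand side grows like $\sqrt{s/(2e)}$ by Stirling's formula.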
\section{Notation}
Throughout the paper, we consider a random element $X$ from some common probability space $(\Omega,\mathcal{A},P)$.
We denote by $P\in \mathcal{P}_n$ a probability measure out of a large class of probability measures, which may vary with the sample size (since the model is allowed to change with $n$) and by $\mathbb{P}_n$ the empirical probability measure. Additionally, let $\mathbb{E}$, respectively $\mathbb{E}_n$, be the expectation with respect to $P$, respectively $\mathbb{P}_n$.
For an $\alpha\in (0;\frac{1}{2})$ and a real valued random variable $Z$, define
\begin{align*}
q_\alpha(Z):= (1-\alpha)\text{-quantile of } Z.
\end{align*}
For a given set $A\subseteq\mathbb{R}^d$, define
\begin{align*}
V(A):=\int\dots\int 1_{A} dx_1\dots dx_d.
\end{align*}
Further, for a vector $v\in \mathbb{R}^d$ and $p\ge 1$ denote the $\ell_p$ norm
\begin{align*}
\|v\|_p:=\left(\sum_{l=1}^d |v_l|^p\right)^{1/p},
\end{align*}
$\|v\|_0$ equals the number of non-zero components and $\|v\|_\infty=\sup_{l=1,\dots,d}|v_l|$ denotes the $\sup$-norm. For any subset $J=\{J_1,\dots,J_k\}\subseteq \{1,\dots,d\}$, we define
$$v_{J}:=(v_{J_1},\dots,v_{J_{k}})^T\in\mathbb{R}^{k}$$ as the corresponding subvector of $v$.\\
Let $A$ be an $m\times d$ matrix. Denote by $\|A\|_{q,p} := \sup_{v\in \mathbb{R}^d:\|v\|_q= 1} \|Av\|_p$ the operator norms on $\mathbb{R}^{m\times d}$ induced by the $\ell_q$ and $\ell_p$ vector norms, and write $\|A\|_{p} := \|A\|_{p,p}$.\\
Let $c$ and $C$ denote positive constants independent of $n$ with values that may change at each appearance. The notation $a_n\lesssim b_n$ means $a_n\le Cb_n$ for all $n$ and some $C$. Furthermore, $a_n=o(1)$ denotes that there exists a sequence $(b_n)_{n\ge 1}$ of positive numbers such that $|a_n|\le b_n$ for all $n$ where $b_n$ is independent of $P\in\mathcal{P}_n$ for all $n$ and $b_n$ converges to zero. Finally, $a_n=O_P(b_n)$ means that for any $\epsilon>0$, there exists a $C$ such that $P(a_n>Cb_n)\le\epsilon$ for all $n$.
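For concreteness, the vector norms and subvector notation above can be transcribed directly; this is a plain illustration of the definitions, not code used anywhere in the paper.

```python
def lp_norm(v, p):
    # ell_p norm; p = 0 counts non-zero components, p = float("inf") is the sup-norm
    if p == 0:
        return sum(1 for x in v if x != 0)
    if p == float("inf"):
        return max(abs(x) for x in v)
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def subvector(v, J):
    # v_J for an index set J = {J_1, ..., J_k} (1-based, as in the text)
    return [v[j - 1] for j in J]

v = [3.0, 0.0, -4.0]
print(lp_norm(v, 2), lp_norm(v, 0), lp_norm(v, float("inf")))  # 5.0 2 4.0
print(subvector(v, [1, 3]))                                    # [3.0, -4.0]
```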
\section{Simulation}
This section provides a simulation study to underline our theoretical findings. Let
$$X=(X_1,\dots,X_d)\sim\mathcal{N}(0,\Sigma).$$
We consider three different correlation structures
$$\Sigma_l=\left(c_l^{|i-j|}\right)_{i,j\in \{1,\dots,d\}},\quad l=1,2,3$$
with $c_1=0$, $c_2=0.5$ and $c_3=0.9$. Observe that the corresponding eigenvalues are
bounded from above by
\begin{align*}
\|\Sigma_l\|_2\le \sqrt{\|\Sigma_l\|_1\|\Sigma_l\|_\infty}=\|\Sigma_l\|_1\le \frac{1+c_l}{1-c_l}.
\end{align*}
Following the argument from \citet{rosenblum1997hardy} (p. 62), the bound is sharp in the sense that
\begin{align*}
\lambda_{\max}(\Sigma_l) \xrightarrow{d\to\infty} \frac{1+c_l}{1-c_l}.
\end{align*}
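The eigenvalue bound is easy to verify numerically. The sketch below builds $\Sigma_l$ for a moderate dimension and estimates $\lambda_{\max}$ by plain power iteration (standard library only); the dimension and iteration count are arbitrary choices, and the estimate $\|\Sigma v\|_2$ for a unit vector $v$ can never exceed $\lambda_{\max}$, so the check is conservative.

```python
import math

def ar1_cov(d, c):
    # Sigma_l with entries c^{|i-j|}
    return [[c ** abs(i - j) for j in range(d)] for i in range(d)]

def lambda_max(M, iters=200):
    # power iteration; the estimate ||M v|| for unit v never exceeds lambda_max
    d = len(M)
    v = [1.0 / math.sqrt(d)] * d
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(d)) for i in range(d)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

results = {}
for c in (0.0, 0.5, 0.9):
    lam = lambda_max(ar1_cov(80, c))
    results[c] = (lam, (1 + c) / (1 - c))
print(results)
```

For every $c$ the estimate stays strictly below $(1+c_l)/(1-c_l)$ while approaching it as the dimension grows, consistent with the sharpness statement above.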
Since the theoretical guarantees only hold for $s$ large enough (depending on the eigenvalues), we would expect to need a larger sparsity index $s$ for a larger $c_l$.
We generate $n=10^5$ independent samples of $X$ to estimate the quantiles
$c_\alpha^{(p)}$. The number is chosen large to obtain precise quantile estimates. Afterwards, we calculate the corresponding volume of each region and plot the ratio. Since the ratio of volumes decays exponentially with the dimension $d$, we plot the logarithm of the ratio. The linear behavior in all simulations supports our theoretical results.
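A stripped-down version of this experiment can be written with the standard library alone: since $\Sigma_l$ is the covariance of a stationary AR(1) process, exact samples are available without any matrix factorization. The block structure of the sets $J_k$ (consecutive blocks of size $s$), the sample count and $\alpha$ below are illustrative choices; the per-dimension volume formula is the one appearing in the proof.

```python
import math, random

def sample_ar1(d, c, rng):
    # exact draw from N(0, Sigma) with Sigma_ij = c^{|i-j|} (stationary AR(1))
    x = [rng.gauss(0.0, 1.0)]
    scale = math.sqrt(1.0 - c * c)
    for _ in range(d - 1):
        x.append(c * x[-1] + scale * rng.gauss(0.0, 1.0))
    return x

def quantile(data, q):
    data = sorted(data)
    return data[min(len(data) - 1, int(q * len(data)))]

def log_volume_ratio(d=120, s=4, p=2.0, c=0.0, alpha=0.05, n=2000, seed=1):
    # assumes s divides d, so that the J_k partition {1, ..., d} into d/s blocks
    rng = random.Random(seed)
    stats_p, stats_inf = [], []
    for _ in range(n):
        x = sample_ar1(d, c, rng)
        blocks = [x[k:k + s] for k in range(0, d, s)]
        stats_p.append(max(sum(abs(v) ** p for v in b) ** (1 / p) for b in blocks))
        stats_inf.append(max(abs(v) for v in x))
    cp = quantile(stats_p, 1 - alpha)
    cinf = quantile(stats_inf, 1 - alpha)
    # log of (V(A_p)/V(A_inf))^(1/d), cf. the volume formula in the proof
    per_dim = (math.lgamma(1 / p + 1) + math.log(cp)
               - math.lgamma(s / p + 1) / s - math.log(cinf))
    return d * per_dim

print(log_volume_ratio())
```

With $c=0$ the returned log-ratio is clearly negative, i.e. the $\ell_p$ region has exponentially smaller volume, while pushing $c$ towards $0.9$ at fixed $s$ degrades the comparison, in line with the simulations reported below.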
\begin{figure}[!ht]
\center
\includegraphics[height=0.55\textheight,width=\textwidth,keepaspectratio]{HDCR/plots/c0_2.png}
\caption{\textmd{Simulation results for $c=0.0$.}}
\label{hdcr_c0}
\includegraphics[height=0.55\textheight,width=\textwidth,keepaspectratio]{HDCR/plots/c5_2.png}
\caption{\textmd{Simulation results for $c=0.5$.}}
\label{hdcr_c5}
\end{figure}
\
\newpage
\begin{figure}[!ht]
\center
\includegraphics[height=0.45\textheight,width=\textwidth,keepaspectratio]{HDCR/plots/c9_2.png}
\caption{\textmd{Simulation results for $c=0.9$.}}
\label{hdcr_c9}
\includegraphics[height=0.5\textheight,width=\textwidth,keepaspectratio]{HDCR/plots/c9_high_1.png}
\caption{\textmd{Additional simulation results for $c=0.9$ and larger sparsity index $s$.}}
\label{hdcr_c9_high}
\end{figure}
\noindent
In the highly correlated setting, the volume ratio appears to only increase. We therefore add a plot with a higher sparsity index (as Theorem \ref{HDCR_th1} suggests).
\noindent
The covariance structure has a strong effect on the corresponding quantiles (strongly positively correlated variables do not concentrate as fast as variables with weaker correlation). A simple way to mitigate this is to permute the components of $X$ randomly (corresponding to a randomly chosen structure of the sets $J_k$).
\begin{figure}[!htp]
\center
\includegraphics[height=0.4\textheight,width=\textwidth,keepaspectratio]{HDCR/plots/c0_comb_2.png}
\caption{\textmd{Simulation results for $c=0.0$ and random permutations of columns.}}
\label{hdcr_c0_comb}
\includegraphics[height=0.4\textheight,width=\textwidth,keepaspectratio]{HDCR/plots/c5_comb_2.png}
\caption{\textmd{Simulation results for $c=0.5$ and random permutations of columns.}}
\label{hdcr_c5_comb}
\end{figure}
\begin{figure}[!htp]
\center
\includegraphics[height=0.4\textheight,width=\textwidth,keepaspectratio]{HDCR/plots/c9_comb_2.png}
\caption{\textmd{Simulation results for $c=0.9$ and random permutations of columns.}}
\label{hdcr_c9_comb}
\includegraphics[height=0.4\textheight,width=\textwidth,keepaspectratio]{HDCR/plots/c9_high_comb_2.png}
\caption{\textmd{Additional simulation results for $c=0.9$, random permutations of columns and larger sparsity index $s$.}}
\label{hdcr_c9_high_comb}
\end{figure}
\newpage
\section{Conclusion}
In this note, we compared specific $s$-sparsely convex high-dimensional confidence regions with the corresponding hypercube with respect to their volume. Relying on Gaussian concentration inequalities, we derived theoretical results demonstrating the exponentially decaying volume ratio. Our simulation study validates these results, as the exponential decay is clearly observable.
1,116,691,500,192 | arxiv | \section{Introduction}
\label{sec:intro}
Within the isoplanatic field of an imaging instrument, in the absence of saturation, an in-focus image $I$ can formally be described as the result of a convolution product
\begin{equation}
I = O \star \mathrm{PSF}
\label{eq:convol}
\end{equation}
\noindent
between the spatially incoherent brightness distribution of an object $O$ and the instrumental point spread function (PSF). The careful optical design of telescopes and instruments assisted by adaptive optics (AO) attempts to bring the PSF as close as possible to the theoretical diffraction limit. Yet even for high quality AO correction, subtle temporal instabilities in the PSF make it difficult to solve important problems such as: the identification of faint sources or structures in the direct neighborhood of a bright object (the high-contrast imaging scenario) or the discrimination of sources close enough to one another to be called non-resolved (the super-resolution scenario). Weak signals of astrophysical interest compete with time-varying residual diffraction features that render the deconvolution difficult.
The overall purpose of interferometric processing of diffraction-dominated images is to provide an alternative to the otherwise ill-posed image deconvolution problem. The technique takes advantage of the properties of the Fourier transform, which turns the convolution into a multiplication. One must however abandon the language describing images and instead manipulate the modulus, also referred to as the visibility, and the phase of their Fourier transform counterpart. This Fourier transform can be sampled over a finite area of the Fourier plane traditionally described using the $(u,v)$ coordinates, whose extent depends on the geometry of the instrument pupil.
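The convolution-to-multiplication property invoked here can be demonstrated on a toy one-dimensional example using a direct DFT (standard library only); the object and PSF values below are arbitrary.

```python
import cmath

def dft(x):
    # direct discrete Fourier transform of a real or complex sequence
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def circular_convolve(o, psf):
    # discrete counterpart of I = O * PSF (periodic boundaries)
    n = len(o)
    return [sum(o[k] * psf[(j - k) % n] for k in range(n)) for j in range(n)]

obj = [0.0] * 8
obj[1], obj[5] = 1.0, 0.3                        # two point sources
psf = [0.5, 0.3, 0.1, 0.0, 0.0, 0.0, 0.0, 0.1]   # asymmetric toy PSF

img = dft(circular_convolve(obj, psf))
prod = [a * b for a, b in zip(dft(obj), dft(psf))]
err = max(abs(a - b) for a, b in zip(img, prod))
print(err)
```

The two sides agree to machine precision: the Fourier transform of the image is, frequency by frequency, the product of the object and PSF transforms.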
Non-redundant masking (NRM) interferometry uses a custom aperture mask featuring a finite number of holes that considerably simplifies the interpretation of images. Accurate knowledge of the mask's sub-aperture locations unambiguously associates particular complex visibility measurements in the image's Fourier transform to specific pairs of sub-apertures forming a baseline. The Fourier phase $\phi$ at the coordinate $(u,v)$ is the argument of a single phasor:
\begin{eqnarray}
\phi(u,v) &=& \mathrm{Arg}\bigl(v_0(u,v) e^{i(\phi_0(u,v) + \Delta\varphi(u,v))}\bigr)\\
&=& \phi_0(u,v) + \Delta\varphi(u,v),
\end{eqnarray}
\noindent
where $v_0(u,v)$ and $\phi_0(u,v)$ respectively represent the intrinsic target visibility modulus and phase for this baseline, and $\Delta\varphi(u,v)$ the instrumental phase difference (aka the piston) experienced by the baseline at the time of acquisition. The same geometrical knowledge also makes it possible to combine complex visibility measurements by baselines forming closing triangles, which leads to the formation of closure-phases: observable quantities engineered to be insensitive to differential piston errors affecting the different baselines.
Closure-phase was first introduced in the context of radio interferometry by \citet{1958MNRAS.118..276J} and eventually exploited in the optical starting with \citet{1986Natur.320..595B}. This useful observable enables NRM interferometry to detect companions at smaller angular separations than coronagraphs can probe.
Kernel-phase analysis attempts to take advantage of the same property without requiring the introduction of a mask. The description of the full aperture requires a more sophisticated model that will reflect the intrinsically redundant nature of the aperture. Any continuous aperture can be modeled as a periodic grid of elementary sub-apertures resulting in a virtual interferometric array where every possible pair of sub-apertures forms a baseline. Whereas the NRM ensures that each baseline is only sampled once, the regular grid results in a highly redundant scenario. For a baseline of coordinate $(u,v)$ and redundancy $R$, the Fourier-phase will be that of the sum of $R$ phasors all measuring the same $\phi_0(u,v)$ but experiencing different realisations of instrumental phase $(\Delta\varphi_k)_{k=1}^R$:
\begin{equation}
\phi(u,v) = \mathrm{Arg}\bigl(\sum_{k=1}^R v_0(u,v) e^{i(\phi_0(u,v) + \Delta\varphi_k)}\bigr).
\label{eq:phasor}
\end{equation}
In the low-aberration regime provided by modern AO systems, the impact that residual pupil aberration $\varphi$ has on the Fourier phase can be linearized and Eq. \ref{eq:phasor} rewritten as:
\begin{equation}
\phi(u,v) = \phi_0(u,v) + \frac{1}{R} \sum_{k=1}^R \Delta\varphi_k.
\end{equation}
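The quality of this linearization is easy to probe numerically. The sketch below compares the exact argument of the phasor sum of Eq. \ref{eq:phasor} to the linearized expression for small, randomly drawn pistons; the redundancy and aberration level are arbitrary illustrative values.

```python
import cmath, random

def redundant_phase(phi0, pistons):
    # exact Fourier phase: Arg of the sum of R phasors sharing phi0
    return cmath.phase(sum(cmath.exp(1j * (phi0 + dp)) for dp in pistons))

rng = random.Random(0)
phi0, R = 0.2, 50
pistons = [rng.gauss(0.0, 0.05) for _ in range(R)]  # low-aberration regime, radians

exact = redundant_phase(phi0, pistons)
linear = phi0 + sum(pistons) / R                    # linearized expression
print(abs(exact - linear))
```

Scaling the piston rms up towards $\sim$1 rad quickly breaks the agreement, which is why the linear phase transfer model is tied to the AO-corrected regime.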
The list of what pairs of sub-apertures contribute to the complex visibility of a redundant baseline is kept in the {\it baseline mapping matrix} $\mathbf{A}$. It contains as many columns as there are sub-apertures ($n_A$) and as many rows as there are distinct baselines ($n_B$). Elements in a row of $\mathbf{A}$ are either $0$, $1$, or $-1$ (see Fig. 1 of \citet{2010ApJ...724..464M}). The phase sampled at all relevant coordinates of the Fourier plane, gathered into a vector $\Phi$, can thus be written compactly as:
\begin{equation}
\Phi = \Phi_0 + \mathbf{R}^{-1} \cdot \mathbf{A} \cdot \varphi,
\end{equation}
\noindent
where $\mathbf{R}$ is the diagonal (redundancy) matrix that retains the tally of how many sub-aperture pairs contribute to the Fourier phase for that baseline. $\Phi_0$ is the Fourier phase associated with the object being observed (it is related to the object function $O$ of Eq. \ref{eq:convol} by the Van-Cittert Zernike theorem), and $\varphi$ is the aberration experienced by the aperture.
The redundancy $\mathbf{R}$ is expected to be directly proportional to the modulation transfer function (MTF) of the instrument. The product $\mathbf{R}^{-1} \cdot \mathbf{A}$, referred to as the {\it phase transfer matrix}, describes the way pupil phase aberrations propagate into the Fourier plane. The baseline mapping and the phase transfer matrices are rectangular and feature $n_B$ rows (the number of baselines) for $n_A$ columns (the number of sub-apertures in the pupil), with $n_B > n_A$.
As shown in \citet{2010ApJ...724..464M}, selected linear combinations of the rows of the phase transfer matrix will cancel the effect of the pupil phase $\varphi$. These linear combinations, gathered into an operator called $\mathbf{K}$ (the left-hand null space or kernel of the phase transfer matrix), project the Fourier-phase onto a sub-space that is theoretically untouched by residual aberrations. The resulting observables, called kernel-phases, are a generalisation of the concept of closure-phase that can be found for an arbitrary pupil, regardless of how redundant.
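The simplest kernel is the familiar closure-phase: for three sub-apertures, summing the baseline phases around the closing triangle cancels the pistons exactly. A minimal numerical illustration (the piston and object-phase values below are arbitrary):

```python
import random

rng = random.Random(2)
piston = {k: rng.uniform(-1.0, 1.0) for k in "abc"}  # unknown pupil pistons

# intrinsic object phases on the three baselines (arbitrary illustrative values)
phi0 = {"ab": 0.30, "bc": -0.10, "ca": 0.25}

# measured baseline phases in the linear regime: object phase + piston difference
meas = {
    "ab": phi0["ab"] + piston["a"] - piston["b"],
    "bc": phi0["bc"] + piston["b"] - piston["c"],
    "ca": phi0["ca"] + piston["c"] - piston["a"],
}

closure = meas["ab"] + meas["bc"] + meas["ca"]  # row (1, 1, 1) of a kernel operator
target = phi0["ab"] + phi0["bc"] + phi0["ca"]
print(closure, target)
```

Whatever the pistons, the closure equals the intrinsic sum of object phases: this is exactly the cancellation the operator $\mathbf{K}$ generalizes to redundant apertures.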
Practice suggests that kernel- and closure-phase do not perfectly self-calibrate. Recently published studies using kernel-phase to interpret high-angular resolution diffraction dominated observations indeed lead to contrast detection limits mostly constrained by systematic errors \citep{2019MNRAS.486..639K, 2019A&A...623A.164L, 2019JATIS...5a8001S} instead of statistical errors \citep{2019A&A...630A.120C}. The goal of this paper is to provide insights into the limitations of Fourier-phase methods in general and to introduce the means to improve on these limitations.
This difficulty affects the kernel-phase interpretation of images as well as that of NRM interferograms. Despite the generalized assistance of adaptive optics during NRM observations \citep{2006SPIE.6272E.103T}, the need for long integration times and the use of sub-apertures that are not infinitely small mean that interferograms are not affected by a simple, stable piston but by a time-varying, higher-order amount of aberration \citep{2013MNRAS.433.1718I}. Closure-phases thus acquired on a point source therefore rarely reach zero and are biased.
Nevertheless, even when evolving over time, if the spatial and temporal properties of the perturbations experienced by the system remain stable across the observation of multiple objects, the overall resulting bias is also expected to remain stable. It is therefore possible to calibrate the closure-phases acquired on a target of interest with those acquired on a point-source. Thus calibrated closure-phases have been used as the prime observable for the detection of low to moderate contrast companions \citep{2008ApJ...679..762K}. There is however a limit to the stability of the observing conditions when hopping from target to target: changes in elevation, apparent magnitude for the adaptive optics, and telescope flexures will result in an evolution of the closure-phase bias. Observations are therefore in practice never perfectly calibrated and the evolution of the calibration bias results in what is generally referred to as {\it systematic error}.
For low-to-moderate contrast detections up to a few tens, systematic errors are often a minor contribution that do not significantly affect the interpretation of the data. However as observing programs become more ambitious, attempting the direct detection of higher contrast companions \citep{2012ApJ...745....5K} in a part of the parameter space that cannot yet be probed by coronagraphic techniques, systematic errors become more important than statistical errors and one must resort to more advanced calibration strategies using multiple calibrators \citep{2013MNRAS.433.1718I, 2019MNRAS.486..639K}, to better estimate the calibration bias that will minimize the amount of systematic error.
\section{Fourier-phase calibration errors}
\label{sec:calib}
\begin{figure}
\includegraphics[width=\columnwidth]{SCExAO_model_bina_42cm.png}
\caption{Binary discrete representation of the SCExAO instrument pupil for kernel-phase analysis. Left panel: the discretized instrument pupil built from a square grid of pitch $s$ = 42 cm. Right panel: the Fourier coverage associated with this discretization. The color code used to draw the Fourier coverage reflects the redundancy associated with the virtual interferometric baselines.}
\label{f:subaru_42cm_bina_model}
\end{figure}
Kernel-phase analysis requires approximating the near-continuous aperture of the telescope by a discrete representation: a virtual array of sub-apertures, laid on a regular grid of predefined pitch $s$, paves the surface covered by the original aperture. Computation of all the possible pairs of virtual sub-apertures in the array leads to the creation of a second grid of virtual baselines, the majority of which are highly redundant. An example is shown in Fig. \ref{f:subaru_42cm_bina_model} for the aperture of an 8-meter telescope, discretized with a grid of pitch $s = 42$ cm. Keeping track of what pairs of sub-apertures contribute to all the baselines leads to the computation of the baseline mapping matrix $\mathbf{A}$ and the redundancy matrix $\mathbf{R}$. The two are used to eventually determine the kernel operator $\mathbf{K}$ that generalizes the notion of closure-phase, as introduced by \citet{2010ApJ...724..464M}.
\begin{figure}
\includegraphics[width=\columnwidth]{Fourier_binary_and_aberration}
\caption{Left: Simulated monochromatic ($\lambda$ = 1.6 $\mu$m) SCExAO image of a 10:1 binary in the presence of 100 nm of coma along the axis of the binary. Right: the associated Fourier-phase ranging from $\pm$1.5 radian (see also Fig. \ref{f:effect}).}
\label{f:ex_img}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Kernel-effect}
\caption{Demonstration of the impact of the kernel processing. The top panel shows that the experimental Fourier phase extracted from the single aberrated image shown in Fig. \ref{f:ex_img} (the blue curve) bears little resemblance with the theoretical expected binary signal (in orange). Contrasting with the raw Fourier-phase, the bottom panel shows how the projection onto the kernel subspace efficiently erases the impact of the aberration and brings the experimental kernel-phase curve ($\mathbf{K} \cdot \Phi$), also plotted using a solid blue line, much closer to its theoretical counterpart ($\mathbf{K} \cdot \Phi_0$), now plotted using a dashed orange line so as to better distinguish them.}
\label{f:effect}
\end{figure}
The following simulation illustrates the interest of this line of reasoning. Consider a single, simulated, monochromatic ($\lambda = 1.6\,\mu$m) and noise-free image of a 10:1 contrast binary object (located two resolution elements to the left of the primary) affected by a fairly large (100 nm rms) amount of coma, shown in the left panel of Fig. \ref{f:ex_img}. The Fourier-phase $\Phi$ extracted from this image (shown in the right panel of Fig. \ref{f:ex_img}) appears to be completely dominated by the aberration. The plot of the same raw Fourier-phase (the blue curve in the top panel of Fig. \ref{f:effect}) compared to the predicted Fourier signature of the sole binary $\Phi_0$ confirms this observation. Using the kernel operator $\mathbf{K}$ computed according to the properties of the discrete model\footnote{The model is computed using a python package called XARA, developed in the context of the KERNEL project, available for download \url{http://github.com/fmartinache/xara/}} represented in Fig. \ref{f:subaru_42cm_bina_model}, it is possible to project the 546-component noisy Fourier phase vector $\Phi$ onto a sub-space that results in the formation of a 414-component kernel. The bottom panel shows how, despite the drastic difference between the raw and theoretical Fourier-phase, the two resulting kernels match one another: the kernel operator effectively erases the great majority of the aberrations affecting the phase present in the Fourier space while leaving enough information to describe the target in a meaningful manner, such that:
\begin{equation}
\mathbf{K} \cdot \Phi = \mathbf{K} \cdot \Phi_0.
\end{equation}
Although quite satisfactory in its apparent ability to reduce the impact of the aberration, the match of the kernel curves is not perfect. The small difference between the two example curves is what is generally referred to as the calibration error. This error can be independently measured using a second image, this time of a single point source (a calibrator), assuming that the system suffers the same aberration. In this noise-free scenario, subtraction of the kernel-phase extracted from one such calibration image would result in a perfect match. An instrumental drift between the two exposures would result in a new bias. If the magnitude of this new bias becomes comparable to or larger than the statistical uncertainties, the interpretation of kernel- and closure-phase typically requires invoking a tunable amount of systematic error added in quadrature to the uncertainty.
\section{Kernel-phase discretization prescriptions}
\label{sec:strat}
Given that no noise was included in this ideal scenario, the fact that some aberration leaks into the kernel and results in the need for a calibration suggests that the discrete model used to describe how pupil phase propagates into the Fourier plane is not sufficiently accurate, and we will look into ways to improve it.
\subsection{Building a discrete representation}
\label{sec:build}
\begin{figure}
\includegraphics[width=0.7\columnwidth]{SCExAO_grid_bina.png}
\caption{Example of a discretized version of SCExAO's pupil using a square grid, aligned with the center of the aperture. Only the virtual sub-apertures for which the transmission, determined as the normalized intersection between the virtual sub-aperture and the underlying pupil, is greater than or equal to 50\,\% are kept as valid sub-apertures contributing to the model. The $s = 0.42$ m pitch value is chosen so as to fit an integer number (here 19) of sub-apertures across the pupil diameter.}
\label{f:grid_match}
\end{figure}
The discretization process is as follows: a high-resolution 2D image of the aperture is generated from the details of the pupil specifications (outer and inner diameter, spider thickness, angle and offset). A square grid of sub-apertures of pitch $s$ is laid atop the pupil image and points falling within the open parts of the aperture are kept in the model. To be counted amongst the virtual sub-apertures, the area of the transmissive part of the original aperture overlapping with the region covered by the square virtual sub-aperture has to be greater than 50 \%.
When building the model, it is critical to ensure that no virtual baseline is greater than the actual telescope diameter: this would indeed result in attempting to extract information outside the Fourier domain the true aperture gives access to. To mitigate this risk, one will first ensure that an integer number of sub-apertures fits within one aperture diameter, and then eliminate all the computed baselines greater than the aperture diameter.
A regular grid is required so that the discrete representation is as homothetic as possible to the original aperture: this translates into a model redundancy $\mathbf{R}$ that better matches the true modulation transfer function (MTF) of the instrument. It is also important to align the grid with the aperture model so that the symmetry properties of the aperture are reflected in its discrete representation: one either uses a grid that is centered on the aperture and that features an odd integer number of sub-apertures (the option retained to build the discretized aperture shown in this paper) or an offset grid with an even integer number of sub-apertures.
Fig. \ref{f:grid_match} introduces the example that will serve as the reference to compare the relative merits of different discrete models. It uses the Subaru Telescope pupil mask of the SCExAO instrument \citep{2015PASP..127..890J}, characterized by its large (2.3 m diameter) central obstruction and non-intersecting thick spider vanes at the non-trivial angle of 51.75 degrees \citep{2009PASP..121.1232L}. This aperture geometry makes it a rich test case. Using the aforementioned recommendations, a centered grid with a $s = 0.42$ m pitch fits almost exactly 19 virtual sub-apertures across the aperture nominal diameter of 7.92 meters.
The $n_A = 272$ virtual sub-apertures of this array form $n_B = 562$ distinct baselines. As discussed in \citet{2013PASP..125..422M}, for an aperture with rotational symmetry of order 2\footnote{The aperture is identical to itself rotated by 180$^\circ$ relative to its center}, the Fourier-phase and its kernels are insensitive to even order aberrations. This property reflects in the properties of the linear phase transfer model: the number of non-singular values of the baseline mapping matrix $\mathbf{A}$ should be equal to $n_E = n_A / 2$, therefore leaving $n_K = n_B - n_A / 2$ kernel-phases. For the kernel analysis to lead to optimal results, it is important to ensure that these properties are verified. An aperture that does not respect this symmetry condition will in contrast result in fewer ($n_K = n_B - n_A + 1$) kernels.
\subsection{Grid pitch and image size}
\label{sec:pitch}
The 42 cm pitch of the grid illustrated in Fig. \ref{f:grid_match} does not offer enough resolution to reflect the presence of the 25 cm thick spider vanes of the real aperture. This contrasts with models that have generally been used since the inception of kernel-phase (see for instance Fig. 2 of \citet{2010ApJ...724..464M}), which naively overemphasized the impact of the spiders and, as will be made clear below, thereby amplified the calibration bias.
The pitch $s$ of the grid is of course a free parameter that can be adjusted: the finer the grid, the more representative of the details of the pupil it is expected to become and the more capable of capturing higher spatial frequency components of the images. A discrete model with a finer pitch however implies the description of a wider effective field of view (of radius $0.5 \lambda/s$) over which the kernel-analysis can lead to meaningful results. The size of the image will therefore impose a limit to how fine the pitch can get.
For the wavelength ($\lambda = 1.6\,\mu$m) of the simulations used in this Section, the plate-scale (16.7 mas per pixel) and size ($128 \times 128$) of the images suggest that the pitch cannot be finer than $s > 206.265 \times 1.6 / (128 \times 16.7) \approx 0.15$ m. Beyond this simulation scenario, image noise induced by dark current, readout and photon statistics, and a preference for computationally manageable problems will guide the user toward coarser models in practice.
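The pitch bound can be wrapped in a two-line helper; units follow the text (wavelength in $\mu$m, plate scale in mas per pixel), with the factor 206.265 absorbing the $\mu$m-to-m and rad-to-mas conversions.

```python
def min_pitch(wavelength_um, npix, platescale_mas):
    # the model field of view (radius 0.5 * lambda / s) must fit in the image,
    # i.e. s > lambda / (npix * platescale);
    # 206.265 = 2.06265e8 mas/rad * 1e-6 m/um
    return 206.265 * wavelength_um / (npix * platescale_mas)

s_min = min_pitch(1.6, 128, 16.7)
print(round(s_min, 3))  # approximately 0.154 m for the images of this Section
```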
\begin{figure*}
\includegraphics[width=\textwidth]{perf_sparse_binary}
\includegraphics[width=\textwidth]{perf_dense_binary}
\includegraphics[width=\textwidth]{perf_sparse_grey}
\caption{Comparison of the self-calibrating performance of the kernel-phase analysis of a single image for three discrete models of the same aperture. Each of the three panels features, side by side, a 2D representation of the discrete aperture model used and a plot of the kernels extracted from the image of a point source (the calibration error), in the presence of either coma (the orange curve) or a 3-cycle sinusoidal aberration (the red curve) and how they compare to the signal of a 100:1 contrast binary (the blue curve). The top panel presents the reference binary model of the SCExAO pupil, with a 42 cm pitch; the middle panel, a denser model with a 21 cm pitch that more accurately matches the fine structures of the telescope; the bottom panel, a model using the original 42 cm pitch grid but including the transmission function.}
\label{f:comp_models}
\end{figure*}
\subsection{Comparing models}
\label{sec:comp}
To assess the relative merits of multiple models, we look at the impact the discretization strategy has on the magnitude of the calibration bias. We have seen that the pitch of the model impacts the overall dimension of the problem. It also impacts the associated redundancy $\mathbf{R}$ and therefore the overall magnitude of the kernel-phases extracted from a given image. To enable a meaningful comparison of multiple models, we compare the rms of the calibration bias to the standard deviation of the theoretical signature (see for instance Eq. 27 of \citet{2019A&A...630A.120C}) induced by a 100:1 contrast companion located two resolution elements to the left of the primary along the horizontal axis.
The simulations systematically include a 20 nm rms static (odd) aberration: either a 3-cycle horizontal sinusoid or coma along the same direction. These two examples were selected because they are both perfectly odd, and therefore have full impact on the analysis, and because they feature different structures: the impact of the sinusoidal modulation is more uniformly distributed across the aperture, whereas the impact of the coma (like that of most higher order Zernike modes) is stronger toward the edges.
The same two images ($128 \times 128$ pixels, one featuring coma and one featuring the sinusoidal aberration) are processed using the kernel-phase pipeline, using three discrete models. The results of these three analyses are summarized in Fig. \ref{f:comp_models}, featuring, side by side, a rendering of the discrete aperture model and the plot of the thus-biased kernel-phase vector extracted from either image, and in Table \ref{tbl:perf}, summarizing the dimensions of the models and their intrinsic sensitivity to calibration error.
\begin{table}
\begin{tabular}{l | c | c | c }
& sparse binary & dense binary & sparse grey \\
\hline
pitch & 0.42 & 0.21 & 0.42 \\
$n_A$ & 272 & 956 & 300 \\
$n_B$ & 562 & 2132 & 562 \\
$n_K$ & 426 & 1654 & 412 \\
\hline
ref. signal & 0.435 & 1.583 & 0.402 \\
coma bias & 0.180 (41 \%) & 0.317 (20 \%) & 0.036 (9 \%) \\
sine bias & 0.286 (66 \%) & 0.263 (17 \%) & 0.026 (6 \%) \\
\hline
\end{tabular}
\caption{Summary of the model properties and their performance. $n_A$, $n_B$ and $n_K$ respectively represent the number of sub-apertures, the number of baselines and the number of kernels of each model. The coma and sine bias rows respectively show the magnitude of the bias induced by 20 nm rms of coma and by the sinusoidal aberration, in radians.}
\label{tbl:perf}
\end{table}
The first model, presented in the top panel of Fig. \ref{f:comp_models}, is the reference using the $s=0.42$ m pitch grid introduced earlier. Using this model, the magnitude of the calibration bias extracted from the images affected by either type of aberration represents a significant fraction (of the order of 50 \%) of the signature of the 100:1 binary companion. Under such circumstances, the contrast detection limits associated with these uncalibrated kernel-phases are likely to be rapidly dominated by this systematic error. The middle panel of Fig. \ref{f:comp_models} illustrates the impact of a finer $s=0.21$ m grid pitch: the model better reflects the presence of the spiders and the overall shape of the pupil. A larger number of kernels is extracted from the same image (almost four times as many) but, more importantly for this discussion, the relative magnitude of the calibration bias is reduced by a factor $\approx 2-3$: a kernel-phase analysis based on a finer and more accurate description of the original aperture features reduced model-induced calibration errors and is therefore less susceptible to calibration errors in general.
Increasing the resolution of the grid is not the only option available. One can indeed also refine its description by allowing for a variable sub-aperture transmission. In addition to deciding whether to keep or discard one virtual sub-aperture as part of the model, the information on the clear fraction of the sub-aperture, translated into a local transmission value can be appended and taken into account when creating the phase transfer model. Such a ``grey aperture'' model makes it possible to more accurately describe the edges and high-spatial frequency features of the aperture without necessarily increasing the pitch of the model. One example using such a continuous transmission model is illustrated in the bottom panel of Fig. \ref{f:comp_models}: despite using a grid pitch identical to the reference model, the discrete representation of the aperture clearly better outlines the finer features of the aperture as the trace of the spider vanes becomes visible. For this example, the transmission cut-off value was set to 10$^{-3}$: the grey model includes a slightly higher number of virtual sub-apertures than its binary counterpart for which the cut-off was set to 0.5. In the end, one forms a number of kernels (see the Appendix for a general discussion regarding the number of kernels) similar to the binary case. The improvement brought by the inclusion of this transmission model is substantial: the magnitude of the bias is brought well below 10 \% that of the signature of the 100:1 binary companion.
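The construction of such a grey model can be sketched as follows. This is a minimal illustration, not the actual pipeline implementation: the function name \texttt{grey\_aperture\_model}, the toy circular pupil and the pixel-based pitch are assumptions made for the example. Each virtual sub-aperture keeps the clear fraction of its grid cell as a local transmission value, and cells below the cut-off are discarded.

```python
import numpy as np

def grey_aperture_model(pupil, grid_pitch_px, cutoff=1e-3):
    """Build a grey discrete aperture model from a finely sampled pupil mask.

    pupil: 2D array of transmission values (0 to 1), finely sampled.
    grid_pitch_px: size (in pixels) of one virtual sub-aperture cell.
    cutoff: discard sub-apertures whose clear fraction falls below this.
    Returns (coords, transmissions) for the retained sub-apertures.
    """
    ny, nx = pupil.shape
    coords, trans = [], []
    for j in range(0, ny - grid_pitch_px + 1, grid_pitch_px):
        for i in range(0, nx - grid_pitch_px + 1, grid_pitch_px):
            cell = pupil[j:j + grid_pitch_px, i:i + grid_pitch_px]
            t = cell.mean()  # clear fraction of this grid cell
            if t > cutoff:
                coords.append((i + grid_pitch_px / 2, j + grid_pitch_px / 2))
                trans.append(t)
    return np.array(coords), np.array(trans)

# toy circular pupil, finely sampled
n = 64
yy, xx = np.mgrid[:n, :n] - n / 2
pupil = (np.hypot(xx, yy) < n / 2 - 1).astype(float)
coords, trans = grey_aperture_model(pupil, 8, cutoff=1e-3)
```

Interior cells end up with unit transmission, while cells straddling the aperture edge keep a fractional value, which is what allows the model to outline fine features without a finer pitch.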
The two effects of a finer resolution and a transmission model can be compounded to lead to even better performance. Generally, whether one uses a binary or a grey model, doubling the resolution of the grid leads to an improvement by a factor $\approx$2. The performance of kernel-phase reaches a point where the details of the implementation of the upstream simulation become critical.
Overall, there seems to be no significant difference between the two types of aberrations introduced. The sinusoidal modulation seems to be better handled in general, likely because of the sharper edge structure of the coma, which systematically requires more resolution. The impact of aberrations of higher spatial frequency, beyond what the chosen model can effectively describe, is filtered out either by adequate image cropping (following the recommendations given in Sec. \ref{sec:pitch}) or by the application of an image mask. We can conclude that including the aperture transmission model is a major improvement that renders the kernel-phase analysis less susceptible to systematic errors.
\section{Kernel-phase analyses revisited}
\label{sec:applications}
In this section, we use the recommendations outlined in the previous section and apply them to a series of datasets whose kernel-phase analysis has already been published. We will feature two applications: the analysis of a ground-based dataset published by \citet{2016MNRAS.455.1647P} and an extended version of the dataset used for the original kernel-phase publication by \citet{2010ApJ...724..464M}. The review of these two applications will further illustrate the importance of better aperture modeling practices for kernel-phase analysis.
\subsection{Palomar/PHARO}
\label{sec:p3k}
The data consists of two data-cubes of 100 images of the binary system $\alpha$-Ophiuchi \citep{2011ApJ...726..104H} and of the single star $\epsilon$-Herculis that were acquired with the PHARO instrument \citep{2001PASP..113..105H} at the focus of the Palomar Hale Telescope, after AO correction provided by the P3K AO system \citep{2013ApJ...776..130D}.
The data-cubes were recovered from an archive linked in the original publication. The preprocessed large $512 \times 512$ pixels original frames were first cropped down to a more manageable $64 \times 64$ pixel size. With a plate scale of 25 mas per pixel, the field of view is still $\pm$800 mas. To reach sufficient resolution in the Fourier space, a fast Fourier transform (FFT) based extraction algorithm such as the one used in the original study requires an adapted amount of zero-padding. The now standard complex visibility extraction method of XARA instead explicitly computes the discrete Fourier transform for the spatial frequencies of the discrete model, as suggested by \citet{2007OExpr..1515935S}, and filters out sub-pixel centering errors, as done by \citet{2019MNRAS.486..639K}: the cropping of the image not only filters out the higher level of noise brought by weakly illuminated pixels, it is also more computationally efficient as it requires the computation of smaller Fourier transform matrices.
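The explicit computation of the Fourier transform at the model spatial frequencies, as opposed to a zero-padded FFT, can be sketched as below. This is a hedged illustration rather than the actual XARA code: the function name and argument conventions are assumptions made for the example.

```python
import numpy as np

def dft_matrix(uv, wavelength, pscale_mas, isz):
    """Explicit DFT matrix sampling the (u, v) baselines of a discrete model.

    uv: (n_B, 2) array of baseline coordinates in meters.
    wavelength: in meters. pscale_mas: plate scale in mas/pixel.
    isz: image size in pixels (square image assumed).
    Returns an (n_B, isz*isz) complex matrix F such that F @ img.ravel()
    gives the complex visibilities at the model baselines.
    """
    mas2rad = np.pi / 180 / 3600 / 1000
    pos = (np.arange(isz) - isz // 2) * pscale_mas * mas2rad  # sky angle (rad)
    xx, yy = np.meshgrid(pos, pos)
    phase = -2j * np.pi / wavelength * (
        np.outer(uv[:, 0], xx.ravel()) + np.outer(uv[:, 1], yy.ravel()))
    return np.exp(phase)
```

Because the matrix only samples the spatial frequencies the model actually uses, a small cropped image is sufficient, with no zero-padding required.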
\begin{figure}
\includegraphics[width=\columnwidth]{alphaoph.png}
\caption{Example of frame acquired on $\alpha$-Ophiuchi (non-linear scale: power 0.25). Notable features of this image include the thick and slightly tilted diffraction spikes induced by the medium-cross pupil mask used at the time of acquisition, and the ghost induced by the neutral density filter in the bottom left quadrant of the frame. The companion later recovered by the kernel-phase data reduction is buried underneath the first diffraction ring, to the left of the primary.}
\label{f:p3k_frame}
\end{figure}
Images were acquired using the K$_S$ filter (central wavelength: 2.14 $\mu$m) and the medium cross pupil mask inside the PHARO camera was used to limit the risk of saturation in the image for the otherwise bright target of interest. An example image is shown in Fig. \ref{f:p3k_frame}. The image presents a few noteworthy features: the apparent companion clearly visible in the bottom left quadrant is a ghost induced by the 0.1 \% neutral density filter used at the time of the acquisition. This ghost is present in all the frames, including those acquired on the calibrator. Also visible are strong diffraction spikes induced by the very thick spider vanes of the medium-cross mask, whose orientation does not quite match the grid of pixels (the upper vertical spike leans slightly to the left).
We built a new discrete grey model of the medium-cross mask based on the specifications published by \citet{2001PASP..113..105H}, which were confirmed by an image of the pupil enabled by one of the modes of the camera. In the image provided by the pupil imaging mode of the PHARO camera, the medium-cross mask appears to be rotated counterclockwise by two degrees. We used the recipe outlined in Sec. \ref{sec:strat} to produce a grey discrete representation of the true aperture using a square grid with a pitch $s = 0.16$ meters that was then rotated to match the grid to the true aperture. To eliminate possible mistakes, we used a simulation reproducing the properties of the PHARO K$_S$-band images that included a fixed amount of aberration and rotated our mask until we found the orientation that minimizes the amount of calibration error. The optimum model thus identified is shown in Fig. \ref{f:pharo_model}.
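The grid-rotation step is a plain 2-D rotation of the sub-aperture coordinates, which can be sketched as follows (the function name \texttt{rotate\_model} is ours, introduced for illustration):

```python
import numpy as np

def rotate_model(coords, angle_deg):
    """Rotate the (x, y) sub-aperture coordinates of a discrete model so that
    the square grid matches the orientation of the true pupil mask."""
    th = np.radians(angle_deg)
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])
    return coords @ rot.T
```

In practice, one would scan the rotation angle and keep the orientation minimizing the calibration error measured on simulated images, as described above.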
\begin{figure}
\includegraphics[width=\columnwidth]{PHARO_model_grey}
\caption{Representation of the discrete model (left: aperture, right: Fourier coverage) of the PHARO med-cross aperture, using the transmission model of the true aperture. To further reduce the amount of systematic error, the model was built using a square grid that was rotated to match the orientation of the original pupil mask. The impact of the presence of the spiders in the model is revealed in the Fourier plane as four small depressions appear in the intermediate spatial frequency range.}
\label{f:pharo_model}
\end{figure}
The presence of the ghost in all the images will contribute to the calibration bias of the data. \citet{2016MNRAS.455.1647P} chose to further window the data so as to mask the ghost out before attempting to extract the kernels: this however leaves too few useful pixels to lead to the formation of the $n_K = 1048$ kernels of the model ($n_A = 528$, $n_B = 1312$). In this analysis, we keep all the information available in the image, under the assumption that the contribution of the ghost will be erased when subtracting the kernel-phases of the calibrator.
In the high-contrast regime (which in practice applies when the contrast is greater than 10:1), the amplitude of the kernel-phase signature of a binary is expected to be directly proportional to the contrast (secondary/primary). This makes it convenient to compute, for a grid of positions around the primary, the normalized dot product between the calibrated signal and the theoretical signal of a high-contrast binary computed for each grid position. The use of such colinearity maps was introduced by \citet{2019A&A...623A.164L}: the presence of a clear maximum in this map shows where the input signal best matches the theoretical signature of a binary.
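A minimal sketch of one such colinearity map is given below, under simplifying assumptions: the kernel operator \texttt{K} is left abstract (an identity stand-in is used in the check), the contrast convention (primary/secondary) matches the analytic binary signal used later in the paper, and the function names are ours.

```python
import numpy as np

MAS2RAD = np.pi / 180 / 3600 / 1000

def binary_kphase(K, uv, wavelength, dra, ddec, contrast):
    """Theoretical high-contrast kernel-phase signature of a companion located
    at (dra, ddec) in mas, with contrast expressed as primary / secondary."""
    arg = 2 * np.pi / wavelength * MAS2RAD * (dra * uv[:, 0] + ddec * uv[:, 1])
    return K @ (np.sin(arg) / contrast)

def colinearity_map(kp_data, K, uv, wavelength, grid_mas):
    """Normalized dot product between the data and the theoretical binary
    signature, for every position of a square grid of offsets (in mas)."""
    cmap = np.zeros((grid_mas.size, grid_mas.size))
    for j, ddec in enumerate(grid_mas):
        for i, dra in enumerate(grid_mas):
            model = binary_kphase(K, uv, wavelength, dra, ddec, 100.0)
            nrm = np.linalg.norm(model)
            if nrm > 0:
                cmap[j, i] = kp_data @ model / nrm
    return cmap
```

By the Cauchy-Schwarz inequality, the map peaks where the theoretical signature is proportional to the data, i.e. at the true companion position.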
\begin{figure*}
\includegraphics[width=0.33\textwidth]{KP_colin_map_grey_model_tgt_alone}
\includegraphics[width=0.33\textwidth]{KP_colin_map_grey_model_cal_alone}
\includegraphics[width=0.33\textwidth]{KP_colin_map_grey_model}
\caption{Colinearity maps for the raw and calibrated kernel-phases extracted from the data, over a $\pm$500 mas field of view. The left panel shows the map built from the uncalibrated kernel-phases of $\alpha$-Oph. The middle panel shows the same map built from the uncalibrated kernel-phases of $\epsilon$-Her. The right panel shows the colinearity map of the calibrated signal, that is the difference between the kernel-phases of $\alpha$-Oph and $\epsilon$-Her. The two uncalibrated maps prominently feature the signature of the ghost present in all images in the bottom left quadrant as well as other structures at closer separations. The map of the calibrated signal on the other hand shows that most of these features are gone and reveals a positive bump on the left hand side of the central star (for a separation of 100 mas and a position angle of 84 degrees), taken as an indication of the presence of a companion.}
\label{f:P3K_colin_maps}
\end{figure*}
Fig. \ref{f:P3K_colin_maps} shows the result of this computation for the raw signal of both target and calibrator as well as for the calibrated signal of $\alpha$-Ophiuchi over a $\pm$500 mas field of view. Kernel-phase is, like the closure-phase, a measure of the asymmetry of a target, so the colinearity map is always antisymmetric. The two uncalibrated maps prominently feature the signature of the ghost present in all images in the bottom left quadrant as well as other structures at closer separations (up to $\sim$200 mas). Whereas the signature of the ghost is expected, these intermediate separation features (particularly on the map of the calibrator) suggest that the kernel-phases are still affected by a calibration bias. Our efforts have ensured that the modeling-induced errors are minimal; however, given that individual images were integrated over 1.4 s, that is many times the coherence time, the kernel-phases are still affected by an additional bias induced by the temporal variance described by \citet{2013MNRAS.433.1718I}. We can observe that the subtraction of the kernel-phases of the calibrator from those obtained on $\alpha$-Ophiuchi effectively erases these features along with that of the ghost. The bright bump (and its antisymmetric dark counterpart) clearly visible to the left (and the right) of the central star in the right panel of Fig. \ref{f:P3K_colin_maps} is indicative of the quality of the detection of the companion around $\alpha$-Ophiuchi.
We use the location of the maximum of colinearity as the starting point for a traditional $\chi^2$ minimization algorithm using the Levenberg-Marquardt method that is shipped as part of the python package scipy. The uncertainties associated with the calibrated kernel-phases are simply computed as the quadratic sum $\sigma_e$ of the uncertainties deduced from the frame-to-frame variance for $\alpha$-Ophiuchi and $\epsilon$-Herculis. The result of this optimization is represented in the correlation plot of Fig. \ref{f:P3K_fit}: the model fit looks very convincing and locates the companion in the area hinted at in the calibrated colinearity maps of Fig. \ref{f:P3K_colin_maps}.
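Such a fit can be sketched with \texttt{scipy.optimize.least\_squares}, whose \texttt{method="lm"} option implements the Levenberg-Marquardt algorithm. The parametrization below (separation, position angle, contrast) and the function names are our assumptions for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

MAS2RAD = np.pi / 180 / 3600 / 1000

def binary_model(params, K, uv, wavelength):
    """Kernel-phase model of a binary: separation (mas), position angle
    (degrees, East of North), contrast (primary / secondary)."""
    rho, theta, con = params
    dra = rho * np.sin(np.radians(theta))
    ddec = rho * np.cos(np.radians(theta))
    arg = 2 * np.pi / wavelength * MAS2RAD * (dra * uv[:, 0] + ddec * uv[:, 1])
    return K @ (np.sin(arg) / con)

def fit_binary(kp_data, kp_err, K, uv, wavelength, p0):
    """Levenberg-Marquardt fit of the binary parameters, starting from the
    location of the colinearity-map maximum p0 = (rho, theta, contrast)."""
    resid = lambda p: (binary_model(p, K, uv, wavelength) - kp_data) / kp_err
    return least_squares(resid, p0, method="lm")
```

Starting the optimization from the colinearity-map maximum reduces the risk of converging to a secondary minimum of the $\chi^2$ surface.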
The careful modeling of the aperture unfortunately does not suffice to eliminate the need for ad hoc systematic errors at the time of the optimization: a relatively large amount of systematic error ($\sigma_s = 1.2$ rad) still needs to be added quadratically to the experimental dispersion ($\sigma_E = 0.1$ rad) in order to reach a reduced $\chi^2 = 1$, for the following parameters: separation $\rho = 123.5 \pm 2.9$ mas, position angle $\theta = 86.5 \pm 0.2$ degrees and contrast $c = 25.1 \pm 1.1$. It should be pointed out that these don't quite match the values reported (see Table 1 of \citet{2016MNRAS.455.1647P}) for the NRM observations that usually set the standard. It should however also be pointed out that the new contrast estimate is in good agreement with the measurements reported in Table 3 of \citet{2011ApJ...726..104H}, taken when the companion was at a larger angular separation.
\begin{figure}
\includegraphics[width=\columnwidth]{KP_correl_grey_model}
\caption{Correlation plot of the calibrated signal of $\alpha$-Ophiuchi with the best binary fit solution: following the hint provided by the colinearity map of the calibrated data, the signal present in the data is fairly well reproduced by a binary companion located at angular separation $\rho = 123.4 \pm 1.7$ mas, position angle $\theta = 86.5 \pm 0.5$ degrees and contrast $c = 25.1 \pm 0.1$. Residual structure in the data is accounted for by the introduction of an ad hoc amount of systematic error so that the reduced $\chi^2 = 1$.}
\label{f:P3K_fit}
\end{figure}
While the signature of the companion is more clearly visible in this analysis than in the results reported by \citet{2016MNRAS.455.1647P}, the situation is still not fully satisfactory, as our improved model of the aperture did not lead to a detection with uncertainties on the binary parameters driven solely by statistical errors. Several explanations were invoked in the original publication to justify the sub-par performance: they still apply here. The sub-standard seeing conditions that induce variability in the AO correction on targets of distinct magnitudes and the fact that both sources were acquired in very different areas of the detector explain in great part why the statistical variance experienced during the observation cannot on its own be representative of all errors affecting the kernel-phase. This new analysis, because it uses a model that is adapted to the information available in the data-cubes, nevertheless draws a more favorable picture for kernel-phase, with a much more convincing result.
\subsection{HST/NICMOS}
\label{sec:hst}
As pointed out in Sec. \ref{sec:pitch}, the seminal kernel-phase publication used a rather crude discrete representation of the aperture of the Hubble Space Telescope and was nevertheless able to report the detection of a companion to the M-dwarf GJ 164 \citep{2009ApJ...695.1183M}. In attempting to accurately model the effective aperture of the NICMOS1 instrument used to acquire the data, we referred to the work of \citet{1997hstc.work..192K}, which suggests the presence of an important ($\sim$10 \%) misalignment of the instrument cold mask relative to the original optical telescope assembly (OTA) that was completely overlooked by \citet{2010ApJ...724..464M}.
Multiple datasets recovered from the HST archive were acquired on GJ 164 on epoch 2004-02-14 UT (proposal ID \#9749) in several narrow band filters: F108N, F164N and F190N \citep{2009nici.book.....V}. Our updated analysis also includes images of the calibration star SAO 179809 observed at a single epoch (1998-05-01, proposal ID \#7232) acquired in the F190N filter. The original $256 \times 256$ pixel images were bad-pixel corrected, recentered and cropped down to a $84 \times 84$ pixel size, over which the F190N filter seems to feature good SNR, before being gathered into data-cubes. With a plate scale of 43 mas per pixel, the effective field of view is thus $\pm$1.8 arc seconds. According to the image sampling constraints recalled in Sec. \ref{sec:pitch}, the size of the field of view and the wavelength of acquisition set a limit to how fine the pitch can get, which for the F190N filter translates into $s = 0.109$ m. Although the model pitch could be updated for the shorter wavelengths (which for a fixed image size give access to an increasing number of spatial frequencies), we build a single discrete model (including transmission) for a homogeneous analysis across the entire data-set that will enable comparisons across spectral bandpasses.
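The pitch limit quoted above follows from a one-line computation: the cropped image cannot sample spatial frequencies finer than the wavelength divided by the field of view. The sketch below assumes a central wavelength of 1.90 $\mu$m for the F190N filter and the full 3.6 arcsec ($\pm$1.8 arcsec) field of view:

```python
import numpy as np

ARCSEC2RAD = np.pi / 180 / 3600

def min_grid_pitch(wavelength, fov_arcsec):
    """Finest usable model pitch: an image with field of view fov (arcsec)
    cannot sample spatial frequencies finer than wavelength / FOV."""
    return wavelength / (fov_arcsec * ARCSEC2RAD)

# F190N images cropped to a +/- 1.8 arcsec field of view
pitch = min_grid_pitch(1.90e-6, 3.6)
```

The result is consistent with the $s = 0.109$ m value quoted above.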
\begin{figure}
\includegraphics[width=\columnwidth]{OTF_wrong_model.png}
\includegraphics[width=\columnwidth]{OTF_right_model.png}
\caption{Comparison of the predicted redundancy with the experimental OTF (estimated from the modulus of the Fourier transform of F190N images of calibration star SAO 179809) for two models: the first (top panel) assumes that the aperture is that of the HST Optical Telescope Assembly (OTA) and the second (bottom panel) takes into account the misaligned cold mask of the NICMOS camera. Whereas the redundancy of the first model fails to reproduce the modulus of the Fourier transform effectively measured from the image, the second model convincingly matches the fine features of the instrument OTF.}
\label{f:OTF}
\end{figure}
We however first need to demonstrate that the discrete model indeed benefits from the updated aperture description recommended by \citet{1997hstc.work..192K}. For this we use the images of the calibration star SAO 179809. Sec. \ref{sec:build} introduced the idea that an accurate discrete model should translate into a predicted redundancy diagonal matrix $\mathbf{R}$ that matches the true instrument OTF, which for a calibration star should correspond to the modulus of the complex visibility extracted for the different baselines of the model. Fig. \ref{f:OTF} illustrates this property and compares the redundancy associated with models characterized by the same $s = 0.109$ m pitch for two apertures: one that includes the outline of the OTA only (top panel) and one that includes the misaligned NICMOS cold mask (bottom panel). Whereas the OTA model should already be an improvement over the one originally used, we can observe that the associated redundancy fails to reproduce the modulus of the Fourier transform computed for the corresponding spatial frequencies. The gap is particularly visible for intermediate spatial frequencies, which feature less power than what is predicted by the model. The more accurate model including the misaligned cold mask is a major improvement, as the predicted redundancy $\mathbf{R}$ almost perfectly matches the fine features (in particular the dropped lobes visible for baseline indices ranging from 400 to 800) of the experimental OTF.
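The predicted redundancy used in this comparison can be computed directly from the discrete model. The sketch below is our own minimal implementation, not the pipeline code: the redundancy of a baseline is taken as the sum of the transmission products of every ordered sub-aperture pair sharing that baseline, which for a calibration star sets the expected modulus of the visibility at that baseline.

```python
import numpy as np

def model_redundancy(coords, trans, precision=3):
    """Predicted redundancy of each baseline of a discrete model: the sum of
    the transmission products of every ordered pair of sub-apertures sharing
    that baseline vector (rounded to the given decimal precision)."""
    bl = {}
    for i in range(len(coords)):
        for j in range(len(coords)):
            if i == j:
                continue
            b = tuple(np.round(coords[j] - coords[i], precision))
            bl[b] = bl.get(b, 0.0) + trans[i] * trans[j]
    uv = np.array(list(bl.keys()))
    return uv, np.array(list(bl.values()))
```

For a grey model, partially transmissive edge sub-apertures contribute less to the redundancy, which is what allows $\mathbf{R}$ to track the fine structure of the experimental OTF.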
\begin{figure}
\includegraphics[width=\columnwidth]{SAO179809_KP_histogram}
\caption{Kernel-phase histograms computed from a set of images of calibration source SAO 179809 using two aperture models characterized by the same pitch $s = 0.109$ m. The first model assumes that the aperture is that of the entire optical telescope assembly (OTA) and results in the blue histogram. The second model takes into account the misaligned cold mask inside the NICMOS camera and results in the red histogram.}
\label{f:KP_histo}
\end{figure}
Unlike any of the previously considered scenarios, the pupil is here clearly not rotationally symmetric, so we don't expect to form the optimal number of kernels (see the end of Sec. \ref{sec:build}). The more accurate model is nevertheless expected to translate into more accurate kernel-phases. SAO 179809 being a calibration source, we can verify that the magnitude of the calibration biases decreases by comparing (see Fig. \ref{f:KP_histo}) the histograms of the kernel-phases computed using the two aforementioned models. The improvement is significant, with a reduction of the standard deviation by a factor $\sim$10, despite a larger number of kernels in the better model (375 vs 320), demonstrating one more time that a more accurate model reduces the impact of calibration errors. With the accurate model, the magnitude of the calibration bias ($\sigma_S = 0.222$ radians) is now comparable to what a 100:1 contrast ratio companion located two resolution elements away from the primary would give for this discrete model (rms = 0.215 radians).
The images of SAO 179809 were acquired more than 5 years before those of GJ 164: although we keep using the same aperture model, we can't expect the kernel-phases of SAO 179809 to reliably calibrate the kernel-phase signal of GJ 164. The magnitude of the calibration error estimated from the observation of SAO 179809 can however provide an order of magnitude for the expected fitting error for a binary such as GJ 164.
\begin{figure}
\includegraphics[width=\columnwidth]{GJ164_windowed_images.png}
\caption{Snapshots of GJ 164 AB for the 2004-12-23 epoch in the three NICMOS narrow band filters: F108N, F164N and F190N. The non-linear scaling of the image makes the window applied to the data more apparent. Given that a single discrete model with a fixed pitch is used to process this data-set, the window, used to cover a finite number of resolution elements, must be scaled linearly with the wavelength. Here, the effect of the window is mostly to reduce the contribution of poorly illuminated pixels to the overall noise of the kernel-phase.}
\label{f:gj164_imgs}
\end{figure}
One interesting feature of the GJ 164 dataset is the availability of images acquired in multiple filters: 80 in the F190N filter, 40 in the F164N filter and 10 in the F108N filter for this 2004-12-23 epoch. Fig. \ref{f:gj164_imgs} shows a snapshot of GJ 164 for these three filters. In addition to the expected linear scaling of the diffraction with the wavelength, one will observe the linear scaling of the size of the circular window matching the effective field of view induced by the choice of a unique model with a fixed pitch. The phase transfer model at the core of the kernel-phase analysis is achromatic: pupil coordinate points and baselines are indeed all expressed in meters and not in radians, as is customary in long baseline interferometry. At the time of data extraction however, the wavelength needs to be taken into account in the computation of a discrete Fourier transform matrix to match the sampling of the data.
The published analysis of the F190N images has revealed that a companion is present at an angular separation $\sim$90 mas, which is of the order of $0.5 \lambda/D$. In the high-contrast regime, the kernel-phase signature of a binary companion of contrast $c$ (primary / secondary) at wavelength $\lambda$ has a simple analytic expression:
\begin{equation}
\mathbf{K} \cdot \Phi_0(u,v) = \mathbf{K} \cdot \frac{1}{c} \times \sin{\frac{2\pi}{\lambda} (\alpha u + \beta v)},
\end{equation}
\noindent
where $\alpha$ and $\beta$ are the angular Cartesian coordinates of the companion (in radians), and $u$ and $v$ are vectors collecting the coordinates of the baselines (in meters). Assuming that the contrast of the companion is constant across the different filters, we can write the derivative of the binary kernel-phase signal with respect to the wavelength explicitly:
\begin{equation}
\frac{\partial}{\partial\lambda}\mathbf{K} \cdot \Phi_0(u,v) = \mathbf{K} \cdot \frac{1}{c} \times \cos{\frac{2\pi}{\lambda} (\alpha u + \beta v)} \times \frac{-2\pi (\alpha u + \beta v)}{\lambda^2}.
\label{eq:scaling}
\end{equation}
\begin{figure}
\includegraphics[width=\columnwidth]{GJ164_rescaled_kernel_phase}
\caption{Representation of the median kernel-phase vector extracted for the three filters data-sets (F108N, F164N and F190N), rescaled by the wavelength of the filter (taken in microns) squared. Thus rescaled, the three signals are very consistent with one another, confirming the presence of a near constant contrast structure partly resolved from the central star.}
\label{f:gj164_scaled_kernels}
\end{figure}
If the companion is unresolved, the cosine term varies slowly and the dominant wavelength-dependent effect is the overall $1/\lambda^2$ scaling factor of Eq. \ref{eq:scaling}. Thus, by multiplying the kernel-phases extracted in the different filters by the associated wavelength (expressed in microns) squared, we expect to see signals of comparable structure and magnitude. Fig. \ref{f:gj164_scaled_kernels} shows the result of one such comparison for the median signal extracted from the three sets of images. The stability of the signature of the companion across the three bands (covering almost a decade) is striking, suggests that the contrast is indeed stable across the different filters, and attests to the consistency of the kernel-phase data analysis.
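The structural consistency of the per-filter signals can be reproduced on synthetic data. The sketch below is illustrative only: it uses randomly drawn baselines, an assumed unresolved companion position and contrast, and omits the kernel projection $\mathbf{K}$; the rescaling by the squared wavelength (in microns) follows the recipe described above.

```python
import numpy as np

MAS2RAD = np.pi / 180 / 3600 / 1000

def binary_signal(uv, wavelength, dra, ddec, contrast):
    """High-contrast binary phase signal (before kernel projection)."""
    arg = 2 * np.pi / wavelength * MAS2RAD * (dra * uv[:, 0] + ddec * uv[:, 1])
    return np.sin(arg) / contrast

rng = np.random.default_rng(2)
uv = rng.uniform(-1.2, 1.2, (100, 2))  # HST-like baselines, in meters
waves = [1.08e-6, 1.64e-6, 1.90e-6]    # F108N, F164N, F190N
# unresolved companion: the same structure is expected in every filter
signals = [binary_signal(uv, w, 20.0, 5.0, 100.0) for w in waves]
rescaled = [s * (w * 1e6) ** 2 for s, w in zip(signals, waves)]
```

For an unresolved companion the three rescaled vectors are almost perfectly correlated, which is the behavior observed in Fig. \ref{f:gj164_scaled_kernels}.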
\begin{figure*}
\includegraphics[width=\textwidth]{GJ164_combined_correlation_plots}
\caption{Correlation plots for the combined 5-parameter model-fit of the multi-filter GJ 164 AB image data-set, split back between the different filters (from left to right: F108N, F164N and F190N). The detection of the companion by the kernel-phase analysis is unequivocal.}
\label{f:gj164_correl_plots}
\end{figure*}
Going from 1.9 to 1.08 $\mu$m however almost doubles the resolving power: the signature of the companion, expected to be degenerate in the F190N filter, for which one observes a strong correlation between contrast and angular separation, will be better constrained by the F108N observation. The three data-sets are combined to feed a five-parameter model-fit optimization algorithm: two astrometric parameters and three contrasts, for a total of 1120 degrees of freedom. The result of this global optimization is represented in the correlation plots of Fig. \ref{f:gj164_correl_plots}, split by filter. The best solution places the companion 89.3 $\pm$ 0.4 mas away from the primary at the 100.4 $\pm$ 0.1 degree position angle, and leads to the following three contrasts: 6.23 $\pm$ 0.1 in the F108N filter, 6.19 $\pm$ 0.1 in the F164N filter and 6.36 $\pm$ 0.1 in the F190N filter. Fig. \ref{f:gj164_correl_plots} also shows that the $1/\lambda^2$ scaling factor of the binary signal (see Eq. \ref{eq:scaling}) leads to an intrinsically higher signal-to-noise ratio for the observation at the shorter wavelength.
Although the astrometric solution for the combined fit is generally consistent with the result published by \citet{2010ApJ...724..464M}, the contrast in the F190N filter is revised and drops from $9.1 \pm 1.2$ to $6.36 \pm 0.1$. The origin of the initial overestimation of the contrast, not captured by the uncertainty, is not clear and can likely be attributed to a combination of causes: the use of an inappropriate aperture model in the first place, the strong contrast-separation degeneracy of the F190N observation and an overall more mature data analysis process today. One can incidentally note that the revised F190N and F164N contrasts are in better agreement with the majority of the contrast measurements reported by \citet{2009ApJ...695.1183M} with NRM observations using broad band H and K filters.
In the absence of a calibrator, the computation of parameter uncertainties required the introduction of a controlled amount of systematic error (added in quadrature to the measured statistical uncertainties) so that the reduced $\chi^2$ is unity. In this case, $\sigma_S = 0.15$ rad of systematic error is required, which is comparable to the magnitude of the calibration bias that was estimated ($\approx$0.22 rad) from the analysis of the SAO 179809 data-set. Unlike what was the case with the PHARO dataset (see Sec. \ref{sec:p3k}), it seems our modeling of the aperture and the interpretation of the resulting data meet our expectations.
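Solving for the amount of systematic error that brings the reduced $\chi^2$ to unity amounts to a one-dimensional root find, sketched below (the function name is ours; the root is bracketed between zero and a generous upper bound):

```python
import numpy as np
from scipy.optimize import brentq

def systematic_error(resid, sigma_e, ndof):
    """Systematic error added in quadrature to the statistical uncertainties
    sigma_e so that the reduced chi^2 of the fit residuals equals one."""
    chi2r = lambda s: np.sum(resid ** 2 / (sigma_e ** 2 + s ** 2)) / ndof - 1.0
    if chi2r(0.0) <= 0.0:  # fit already consistent with statistical errors
        return 0.0
    return brentq(chi2r, 0.0, 100.0 * np.abs(resid).max())
```

Since the reduced $\chi^2$ decreases monotonically with $s$, the bracketed root is unique.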
\section{Discussion}
\label{sec:disc}
While we were able to show that the modeling prescriptions outlined in this paper do bring closure- and kernel-phase closer to true self-calibration, it seems that in order to reach the highest contrast detection limits one will always resort to calibration observations, which typically require telescope repointing and are therefore a time-consuming option. If a target were to exhibit different properties at two nearby wavelengths, such as a strong absorption or emission spectral line caught by one filter and not the other, a powerful and more efficient calibration scheme would be to subtract the kernel-phases acquired in the two filters from one another. One would then have to fit the thus calibrated data to a spectral differential kernel-phase model.
\begin{figure} \includegraphics[width=\columnwidth]{correlation_of_residuals.png}
\caption{Correlation plot of the kernel-phase residuals after subtraction from the best-fitting binary model. The relatively good match between the two residuals (correlation coefficient $\approx$ 0.87) suggests that the use of spectral differential kernel-phase would be a valid way to solve the calibration problem.}
\label{f:residuals}
\end{figure}
We have seen that with its stable contrast, GJ 164 AB does not really feature any noteworthy spectral behavior and that the filters are reasonably far apart from one another, so this dataset is not ideal to try this idea out. Nevertheless, because of the relative proximity of the F164N and F190N filters, we can still assess the potential of one such observing mode by looking for similarities in the structure of the kernel-phase residuals after subtraction of the best fit model. Fig. \ref{f:residuals} features a correlation plot of these residuals for the F164N and F190N filters that includes experimental uncertainties. The apparent good correlation between the two residuals suggests that a spectral-differential calibration scheme has some merit: the magnitude of the differential kernel-phase residual is $\sim$0.11 rad, which is getting close to the associated experimental dispersion ($\sigma_E = 0.08$ rad). This approach should be further tested on images acquired in two filters closer to one another, or in the analysis of data-cubes produced by an AO-fed integral field spectrograph, which will be the object of future work.
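The assessment described above reduces to a correlation coefficient between the per-filter residuals and the rms of their difference, compared to the experimental dispersion. The sketch below runs on synthetic residuals sharing a common systematic component; all numbers are illustrative, not the measured values.

```python
import numpy as np

def differential_check(res_a, res_b, sigma_e):
    """Correlation between the per-filter kernel-phase residuals, rms of
    their difference, and the ratio of that rms to the experimental
    dispersion sigma_e (a ratio near one means the differential scheme
    removes most of the shared systematic)."""
    r = np.corrcoef(res_a, res_b)[0, 1]
    rms_diff = np.std(res_a - res_b)
    return r, rms_diff, rms_diff / sigma_e

rng = np.random.default_rng(3)
common = rng.normal(0.0, 0.15, 300)          # shared (calibratable) systematic
res_a = common + rng.normal(0.0, 0.04, 300)  # filter A residuals
res_b = common + rng.normal(0.0, 0.04, 300)  # filter B residuals
r, rms_diff, ratio = differential_check(res_a, res_b, 0.08)
```

When the two sets of residuals share a dominant common component, their correlation is high and the differential residual shrinks toward the statistical floor.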
\section{Conclusion}
\label{sec:conclu}
Kernel-phase is a versatile adaptation of the idea of closure-phase that can be used in a wide variety of configurations, assuming that images are reasonably well corrected. With versatility however comes the need for care. The description of the aperture upon which the analysis is based must be thought through: it requires good knowledge of the original pupil and must be matched to the constraints brought by the images, in particular the number of useful pixels, as well as to the scientific ambition.
We have seen that the inclusion of a transmission model for the description of the aperture required to build the pupil-Fourier phase transfer model brings a major improvement in fidelity. Several examples using ideal numerical simulations and actual data-sets from ground-based observations as well as from space have demonstrated that this overall higher fidelity reduces the impact of systematic errors and leads to better results. One should also note that the introduction of grey transmission model now makes it possible to take advantage of pupil apodization masks used to reduce the contribution of photon noise over a finite area of the image, which, assuming good self-calibration, will result in improved contrast detection limits.
Closure- and kernel-phase based observing programs are becoming more and more ambitious with instruments that make it theoretically possible in some cases to probe for planetary mass companions \citep{2019JATIS...5a8001S, 2019A&A...630A.120C} down to the diffraction limit without a coronagraph. The proper handling of systematic errors in both scenarios is becoming paramount. While efficient calibration procedures offer a way to recover from problematic solutions, the work described here is an attempt to avoid them in the first place.
\begin{acknowledgements}
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement CoG \# 683029). It is based in part on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555.
\end{acknowledgements}
\section{An image of scanning electron microscopy, and temperature and magnetic dependences of linear resistivity for the present MnP sample.}
\begin{figure}[ht]
\includegraphics[width=12cm]{memoryS1}
\caption{(a) Scanning electron microscopy image of the micro-fabricated MnP sample. (b) Temperature dependence of the resistivity for the present MnP sample fabricated by using a focused ion beam. The resistivity of our previous sample is reproduced for comparison\cite{jiang}. The inset shows the history dependence of resistivity around the helical--ferromagnetic phase boundary.}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=10cm]{memoryS4}
\caption{Magnetic field dependence of the linear electrical resistivity at various temperatures. The hystereses were measured between 44 K and 60 K. Before the measurement in this temperature region, the temperature was increased to 80 K and then decreased to the measured temperature in the absence of a magnetic field. The red curves show the resistivity measured in the field-increasing process just after zero-field cooling. The blue curves show the data for the field-decreasing process. Symbols indicate the metamagnetic transition fields shown in Fig. 1 (b) in the main text. Triangles, hexagons, pentagons, rhombuses, and stars represent FAN-to-CON, FM1-to-CON, FM2-to-FAN, FAN-to-FM1, and FM2-to-FM1 phase transitions, respectively. All the data are quite similar to those shown in the supplemental material of ref. 1.}
\end{figure}
\clearpage
\newpage
\section{$\rho^{\rm 2f}_{\rm asym}$ signal after various $H$-poling procedures.}
\begin{figure}[ht]
\includegraphics[width=11cm]{memoryS2}
\caption{(a)-(d) Magnetic field dependence of $\rho^{\rm 2f}_{\rm asym}$ after the $H$-poling procedure at 52 K with positive and negative magnetic fields $H_{\rm p}$ and dc electric currents $j_{\rm p}$. The magnitudes of $j_{\rm p}$ and of the ac electric current used for the measurement, $j_{\rm ac}$, were $8.5 \times 10^{8}$~$\rm A\,m^{-2}$ and $5.9 \times 10^{8}$~$\rm A\,m^{-2}$, respectively. The sign of $\rho^{\rm 2f}_{\rm asym}$ depended on whether $H_{\rm p}$ and $j_{\rm p}$ were parallel or antiparallel, which is consistent with our previous paper.
While only two of the conditions are shown in the main text, all four are shown here.}
\end{figure}
\newpage
\section{Lack of reproducibility for the $\rho^{\rm 2f}_{\rm asym}$ signal just after the $T$-poling.}
We found that the $\rho^{\rm 2f}_{\rm asym}$ signal after the $T$-poling was not reproducible. In Fig. 4, we show three $\rho^{\rm 2f}_{\rm asym}$ signals measured under the same conditions; they were observed just after the $T$-poling procedure at $\mu_{0}H_{\rm p}$ = 1 T for $j_{\rm p} \parallel -H_{\rm p}$. While the first and second data sets are similar to each other, the third is quite different from the other two.
\begin{figure}[ht]
\includegraphics[width=7cm]{memoryS3}
\caption{Three $\rho^{\rm 2f}_{\rm asym}$ data sets measured under the same conditions. They were observed just after the $T$-poling procedure with $j_{\rm p} \parallel -H_{\rm p}$. The magnitudes of $\mu_{0}H_{\rm p}$ and $j_{\rm ac}$ were 1 T and $5.9 \times 10^{8}$~$\rm A\,m^{-2}$, respectively.}
\end{figure}
\section{Introduction}
Multi-component nonlinear Schr{\"{o}}dinger (NLS) systems emerge in a
variety of contexts of optical~\cite{kivshar} and atomic physics~\cite%
{book2a,book2}. In the former setting, they model, in particular, the
interaction of waves with different carrier wavelengths~\cite%
{manakov,intman1}, while in atomic Bose-Einstein condensates (BECs) they
apply [in the form of coupled Gross-Pitaevskii (GP) equations] to spinor (or
pseudo-spinor) systems, which
represent mixed condensates formed by different hyperfine states of the same
atomic species \cite{kawueda,stampueda,siambook}, as well as to
heteronuclear mixtures composed by different species \cite{hetero}.
In such multi-component settings, when the nonlinearity is self-defocusing
(self-repulsive), a prototypical example of a one-dimensional self-trapped
structure is given by dark-bright (DB) solitons. These are ubiquitous in
two-component systems with the self- and cross-repulsion (alias self- and
cross-phase-modulation, SPM and XPM, respectively) represented by cubic
terms. Since long ago, the DB solitons have drawn much interest in nonlinear
optics~\cite{christo,vdbysk1,vddyuri,ralak,dbysk2,shepkiv,parkshin},
including their realization in pioneering experiments reported in Refs.~\cite%
{seg1,seg2}. More recently, the remarkable control available in pristine
experimental settings of atomic BECs in ultracold gases, such as $^{87}$Rb
and $^{23}$Na, with a multitude of available co-trapped hyperfine states, as
well as in heteronuclear mixtures, such as $^{87}$Rb-$^{41}$K \cite{hetero},
has opened a new gateway to the realization of DB solitons.
Indeed, these structures were created in a multitude of state-of-the-art
experiments either controllably, or spontaneously, and their pairwise
interactions, as well as interactions with external potentials, were studied~%
\cite{hamburg,pe1,pe2,pe3,azu}. Related $SO(2)$ rotated DB soliton states,
in the form of dark-dark solitons, were also experimentally produced~\cite%
{pe4,pe5}.
The formation of the DB solitons is based on the fact that dark solitons are
fundamental modes in single-component one-dimensional (1D) self-defocusing
media. These modes, when created in one component of a two-component system,
induce an effective potential well in the other component. This potential
well gives rise to its fundamental bound state, i.e., the ground state (GS),
which represents the bright component of the DB soliton complex. The
knowledge of the explicit form of the dark soliton enables one to explore
the induced potential well, which is generically of the P{\"{o}}schl-Teller
type \cite{LDL}. It is possible to demonstrate, as done recently \cite%
{dbs_coupled_Manakov}, that, if the dispersion coefficient in the second
component is different from its counterpart in the first component, not only
the GS, but also excited states can be trapped by the potential well in the
second component. When the DB soliton states emerge at their bifurcation
point, they have an infinitesimal amplitude of the bright component in the
effective potential induced by the action of the SPM term. However, they can
be readily continued numerically to finite values of the amplitude.
Heteronuclear BEC mixtures with different atomic masses of the components
provide a straightforward realization of the coupled GP equations with
different dispersion coefficients (inverse atomic masses). In addition,
spin-orbit coupled BECs \cite{SOCBEC} offer the same possibility, for states
that coexist in the upper- and lower-energy bands of the linear spectrum
\cite{vaszb}. In terms of optics, a similar realization is provided by the
copropagation of two beams carried by widely different wavelengths in a
self-defocusing medium. Another recently developed ramification of the topic
of the DB solitons in systems with unequal effective dispersion coefficients
is the consideration of systems of equations with quintic SPM and cubic XPM
repulsive interactions. They model the immiscibility regimes in
heteronuclear binary Tonks-Girardeau (TG) gases \cite{Mario}, as well as
BEC-TG mixtures \cite{Giofil}.
It is natural and quite interesting to extend the concept of DB soliton
states to higher dimensions. In particular, the fact that the component
carrying patterns supported by nonzero background induces an effective
trapping potential in the other component, remains valid in this case. In
the two-dimensional (2D) setting, such patterns are well-known stable
vortices \cite{vortex} (vortices were studied in multi-component systems too
\cite{Japan}). A vortex in one component generates an effective axisymmetric
potential well in the
other, which may trap a bright 2D solitary wave, producing a complex that
was given different names -- in particular, a vortex-bright (VB) soliton~%
\cite{kodyprl}, a half-quantum vortex~\cite{tsubota},
a filled-core~vortex \cite{anderson}, as well as a baby Skyrmion~\cite%
{cooper}. Similar stable two-component modes are \textquotedblleft
semi-vortices\textquotedblright\ in the free 2D \cite{Fukuoka} and 3D \cite%
{Han Pu} space with the attractive SPM and XPM terms, which are made stable
by the spin-orbit coupling; they are composed of a {bright vortex soliton}
in one component, and a bright fundamental one in the other.
It is important to note that the
VB soliton complexes in the
self-repulsive setting of a mixture of internal states of $^{87}$Rb atoms
were created experimentally in the early work of~\cite{anderson}.
Subsequently, their stability~\cite{skryabin,kodyprl} and dynamics~\cite%
{kodyprl,tsubota} have been examined theoretically. It was shown that these
states feature intriguing interactions that decay with the distance $r$
between them as $1/r^{3}$~\cite{tsubota}. Pairs of VB soliton complexes can
form bound states in atomic BECs, as shown in detail in Ref.~\cite{pola}.
Our objective in the present work is to consider VB soliton complexes in the
system featuring repulsive SPM and XPM interactions, and different
dispersion coefficients of the two components (i.e., different atomic masses
in the respective coupled GP equations, or different propagation constants
in the coupled NLS equations for optical beams). We aim to generate a broad
set of novel families of excited complexes, with the vortex in the first
component potentially trapping not only the fundamental bright solitons, but
also excited radial states in the second component, represented by confined
multi-ring shaped waveforms. We demonstrate that such complexes are possible
in the two-component NLS/GP system. The fundamental state among them, the VB
soliton complex, is generically found to be stable. On the other hand, the
complexes whose bright component is represented by the excited ring-shaped
modes are found to be unstable. However, varying the dispersion coefficient
of the second component, we can identify scenarios where it is possible to
render the corresponding instability very weak, and the associated
structures very long-lived. Furthermore, if an additional
harmonic-oscillator trapping potential is added to the system, which is, as
a matter of fact, a mandatory ingredient of the experimental realization of
the setting in BEC, we show that it is possible to render such structures
completely stable in suitable parametric intervals. Lastly, we showcase
basic scenarios of the instability development, inferring that the unstable
(in the free space) multi-ring states are typically transformed into the
stable VB fundamental ones.
The presentation in the paper is structured as follows. The model is
introduced in Section II.
In Sec.~III, we discuss the computational analysis of the model, presenting
both the numerical methods and results. Finally, in Sec.~IV we summarize
our findings and mention possible directions for future studies.
\section{The model and analytical considerations}
Motivated by the above-mentioned realizations in BECs and nonlinear optics,
we consider the coupled defocusing GP/NLS system in $(2+1)$ dimensions (two
spatial and one temporal). In the scaled form, the system is
\begin{eqnarray}
&&i\partial _{t}{\Phi _{-}}=-\frac{D_{-}}{2}\nabla ^{2}\Phi _{-}+\gamma
\left( g_{1}|\Phi _{-}|^{2}+\sigma _{12}|\Phi _{+}|^{2}\right) \Phi
_{-}+V(x,y)\Phi _{-}, \label{start_gps_2Da} \\
&&i\partial _{t}{\Phi _{+}}=-\frac{D_{+}}{2}\nabla ^{2}\Phi _{+}+\gamma
\left( \sigma _{12}|\Phi _{-}|^{2}+g_{2}|\Phi _{+}|^{2}\right) \Phi
_{+}+V(x,y)\Phi _{+}, \label{start_gps_2Db}
\end{eqnarray}%
where $\nabla ^{2}=\partial _{x}^{2}+\partial _{y}^{2}$ is the Laplacian in
2D, $D_{\pm }$ are the dispersion coefficients, $\gamma $ is the overall
nonlinearity strength, with relative SPM and XPM interaction coefficients $%
g_{j}$ ($j=1,2$) and $\sigma _{12}$, respectively. Equations~(\ref%
{start_gps_2Da}) and (\ref{start_gps_2Db}) include the usual parabolic trapping
potential,
\begin{equation}
V(x,y)=\frac{1}{2}\Omega ^{2}(x^{2}+y^{2}),
\end{equation}
with normalized trap strength $\Omega $.
Fields $\Phi _{-}$ and $\Phi _{+}$ carry the vortex and bright-soliton
components, respectively. From now on, we focus on the basic case of equal
interaction coefficients,
\begin{equation}
g_{1,2}=\sigma _{12}=1, \label{1}
\end{equation}
and use rescaling to fix $D_{-}=\gamma =1$, while $D_{+}\equiv D\,\geq 0$ is
the relative dispersion coefficient in the second component.
In the case of the binary heteronuclear BECs, coefficient $D$ is determined
by the two atomic masses, $D=m_{-}/m_{+}$, while in the case of the
spin-orbit coupled BECs, is given by the ratio of the group-velocity
dispersion coefficients, as found by the corresponding dispersion relation
of the two-component branches \cite{vaszb}. On the other hand, in the optics
model, time $t$ is replaced by the propagation coordinate, $z$, in the
corresponding bulk waveguide \cite{kivshar}, and $D$ is determined by the
carrier wavelengths of the two beams, $D=\Lambda _{+}/\Lambda _{-}$. In
particular, referring to $^{87}$Rb-$^{7}$Li BEC mixtures, which are
available to current experiments (see Ref.~\cite{RuLi} and references
therein), the relative dispersion coefficient may reach values as large as $%
\simeq 12$, and as small as $\simeq 0.08$. In optics, the use of materials
with broadband transparency may give rise to a roughly similar range of $D$.
However, in the case of very large or very small $D$,
Eq.~(\ref{1}) is not relevant, and the analysis will need to be
adjusted to other values of the SPM and XPM coefficients.
Stationary solutions to Eqs.~(\ref{start_gps_2Da})-(\ref{start_gps_2Db})
with chemical potentials $\mu _{\pm }$ (or propagation constants $-\mu _{\pm
}$, in terms of the optical beams) are looked for as $\Phi _{\pm
}(x,y,t)=\phi _{\pm }(x,y)\exp (-i\mu _{\pm }t)$, reducing Eqs.~(\ref%
{start_gps_2Da})-(\ref{start_gps_2Db}) to the coupled system of stationary
equations:
\begin{eqnarray}
\mu _{-}{\phi _{-}} &=&-\frac{1}{2}\nabla ^{2}\phi _{-}+\left( |\phi
_{-}|^{2}+|\phi _{+}|^{2}\right) \phi _{-}+V(x,y)\phi _{-},
\label{stat_gps_2Da} \\
\mu _{+}{\phi _{+}} &=&-\frac{D}{2}\nabla ^{2}\phi _{+}+\left( |\phi
_{-}|^{2}+|\phi _{+}|^{2}\right) \phi _{+}+V(x,y)\phi _{+}.
\label{stat_gps_2Db}
\end{eqnarray}%
Further, we introduce the following \textit{Ans\"{a}tze} for the stationary
fields, with real radial functions $f_{\pm }(r)$:
\begin{eqnarray}
&&\phi _{-}(r,\theta )=f_{-}(r)e^{iS\theta }, \label{-} \\
&&\phi _{+}(r,\theta )=f_{+}(r)e^{in\theta }, \label{+}
\end{eqnarray}%
where $r=\sqrt{x^{2}+y^{2}}$ is the radial distance, $\theta =\tan ^{-1}(y/x)
$ is the polar angle, and the integer topological charges of the vortex and
bright solitons are $S$ and $n$, respectively. Thus, Eqs.~%
\eqref{stat_gps_2Da}-\eqref{stat_gps_2Db} reduce to the radial equations:
\begin{gather}
\frac{d^{2}f_{-}}{dr^{2}}+\frac{1}{r}\frac{df_{-}}{dr}-\frac{S^{2}f_{-}}{%
r^{2}}-2\left( f_{-}^{~2}+f_{+}^{~2}-\mu _{-}+V(r)\right) f_{-}=0,
\label{stat_gps_1Da} \\
D\left( \frac{d^{2}f_{+}}{dr^{2}}+\frac{1}{r}\frac{df_{+}}{dr}-\frac{%
n^{2}f_{+}}{r^{2}}\right) -2\left( f_{-}^{~2}+f_{+}^{~2}-\mu
_{+}+V(r)\right) f_{+}=0. \label{stat_gps_1Db}
\end{gather}%
It should be noted in passing that in the majority of
cases studied below we consider Eqs.~(\ref{start_gps_2Da})-(\ref%
{start_gps_2Db}), (\ref{stat_gps_2Da})-(\ref{stat_gps_2Db}), as well as
Eqs.~(\ref{stat_gps_1Da})-(\ref{stat_gps_1Db}) in the absence of the
trapping potential. Therefore, we set $V(r)=0$, unless it is said otherwise.
As indicated above, our fundamental premise, similar to that adopted in the
study of the 1D setting in Ref. \cite{dbs_coupled_Manakov}, is that the dark
mode (dark soliton in 1D, and vortex in this present case) of the defocusing
NLS equation induces an effective potential (via the XPM interaction) in the
other component, which in turn gives rise to trapping of the bright-soliton
state in it. Thus, Eq.~(\ref{stat_gps_2Da}) and, in particular, its radial
version simplifies to the single-component equation,
\begin{equation}
\nabla _{r}^{2}f_{-}-\frac{S^{2}f_{-}}{r^{2}}-2\left( f_{-}^{~2}-\mu
_{-}\right) f_{-}=0, \label{bvp_vortex}
\end{equation}%
in the absence of the bright component, i.e., for $f_{+}=0$;
here, $\nabla _{r}^{2}=d^{2}/dr^{2}+r^{-1}d/dr$ is the radial part of the
Laplace operator. Equation~(\ref{bvp_vortex}) was solved numerically via fixed-point
(Newton-Raphson) iterations (see Sec.~\ref{Compu_Anaysis} below for details on the
computational methods employed in this work). Suitable approximate solutions
for the vortical waveform are known too (see, e.g. Ref. \cite{berloff}), and
they may be useful as initial guesses for the iterative process described
numerically below. From here on, we assume that this iterative process
converges to a radial solution for the vortex. This is different from the 1D
case, where the dark soliton is available in the commonly known analytical
form, and the P{\"{o}}schl-Teller potential~\cite{LDL} that it induces in
the other component is analytically tractable~\cite{dbs_coupled_Manakov}. In
the 2D system presented in this work, the analysis has to be completed
numerically.
Thus, the resulting vortex profile $f_{-}$ (or $\phi _{-}$, for given $S$)
of Eq.~(\ref{bvp_vortex}) plays the role of the background for the weak
component $f_{+}$ (or $\phi _{+}$, for given $n$). As follows from Eq.~(\ref%
{bvp_vortex}), the amplitude of the background for the vortex is%
\begin{equation}
f_{-}(r\rightarrow\infty )=\sqrt{\mu _{-}}, \label{asympt}
\end{equation}
which (upon rescaling) is set to be $\mu _{-}=1$, in our numerical
computations below. Then, when the solution for the component $f_{+}$
bifurcates from its linear limit
corresponding to $f_{+}\rightarrow 0$, the linearized form of Eq.~(\ref%
{stat_gps_1Db}) amounts to an eigenvalue problem
\begin{equation}
\mathcal{L}f_{+}=\mu _{+}f_{+}, \label{bright_linear_polar_eig_prob}
\end{equation}%
for known $f_{-}$, where $\mathcal{L}=-\left( D/2\right) \,\left( \nabla
_{r}^{2}-n^{2}/r^{2}\right) +f_{-}^{~2}$ is a linear operator and $(\mu
_{+},f_{+})$ is the eigenvalue-eigenvector pair. Armed with the set of
profiles for $f_{\pm }$ obtained from Eqs.~(\ref{bvp_vortex}) and (\ref%
{bright_linear_polar_eig_prob}) as initial guesses, we utilize an iterative
scheme towards the solution of the full nonlinear system of Eqs.~(\ref%
{stat_gps_1Da})-(\ref{stat_gps_1Db}) (see Sec.~\ref{Compu_Anaysis} below for
details).
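For concreteness, the eigenvalue problem~(\ref{bright_linear_polar_eig_prob}) for $n=0$ can be sketched numerically as follows. This is an illustrative sketch (not the authors' code): $\tanh^2 r$ stands in for the computed vortex density $f_-^{~2}$, the values of $N$, $\delta r$, and $D$ are arbitrary, and a simple second-order finite-difference discretization of the radial Laplacian (with the boundary conditions described in Sec.~III) is used.

```python
# Illustrative sketch of L f_+ = mu_+ f_+ for n = 0; tanh(r)^2 is a
# stand-in for the numerically obtained vortex background f_-^2, and
# the grid parameters and D are arbitrary choices.
import numpy as np

N, dr, D = 400, 0.05, 0.6
r = dr * np.arange(1, N + 1)

# radial Laplacian: d^2/dr^2 + (1/r) d/dr, with f'(0) = 0 (Neumann,
# ghost point f_0 = f_1) and f(r_max) = 0 (Dirichlet) at the outer edge
lap = np.zeros((N, N))
for j in range(N):
    lap[j, j] = -2.0 / dr**2
    if j > 0:
        lap[j, j - 1] = 1.0 / dr**2 - 1.0 / (2.0 * r[j] * dr)
    if j < N - 1:
        lap[j, j + 1] = 1.0 / dr**2 + 1.0 / (2.0 * r[j] * dr)
lap[0, 0] += 1.0 / dr**2 - 1.0 / (2.0 * r[0] * dr)  # fold in f_0 = f_1

Lop = -0.5 * D * lap + np.diag(np.tanh(r) ** 2)
mu_plus = np.sort(np.linalg.eigvals(Lop).real)
print(mu_plus[0])  # ground-state chemical potential of the bright mode
```

The lowest eigenvalue lies below the asymptotic background value of the effective potential (equal to $1$ here), signaling a trapped bright mode.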
It is natural to expect that nonlinear solutions to Eqs.~(\ref{stat_gps_1Da}%
)-(\ref{stat_gps_1Db}) and~(\ref{stat_gps_2Da})-(\ref{stat_gps_2Db}),
corresponding to the ground and excited states in the linear limit for
component $f_{+}$, emerge (bifurcate) at some critical values of $D$ with
the corresponding eigenvalues $\mu _{+}$ of the linear problem based on Eq.~(%
\ref{bright_linear_polar_eig_prob}). These values are found below by
performing numerical continuations over the aforementioned parameters.
\section{Numerical Analysis}
\label{Compu_Anaysis}
\subsection{Computational methods}
In this section, numerical results are presented for the coupled GP/NLS
system (\ref{start_gps_2Da})-(\ref{start_gps_2Db}). Our analysis addresses
the \textit{existence}, \textit{stability}, and \textit{dynamical evolution}
of the nonlinear modes under consideration. As concerns the existence
and stability, a parametric continuation is performed in chemical potential $%
\mu _{+}$ of the bright component for given values of relative dispersion
coefficient $D$. The corresponding states are thus identified along with
their stability spectra. When the solutions are predicted to be stable, this
is verified by direct simulations. For unstable states, the simulations aim
to reveal the eventual states into which they are spontaneously transformed.
In our numerical computations, a 1D uniform spatial grid is employed along
the radial direction, consisting of $N$ points $r_{j}=j\delta {r}$, with $%
j=1,\dots ,N$ and lattice spacing $\delta {r}=0.05$. The origin is located
at $j=0$, whereas the domain cut-off ($r_{\text{max}}$) is set at $j=N+1$
(from now on, we fix $r_{\text{max}}=50$). In this way, both fields $f_{\pm
}(r)$ are replaced by their discrete counterparts on the spatial grid, $%
f_{j,\pm }=f_{\pm }(r_{j})$. Then, the radial Laplacian $\nabla _{r}^{2}$ in
Eqs.~(\ref{stat_gps_1Da})-(\ref{stat_gps_1Db}), (\ref{bvp_vortex}) and (\ref%
{bright_linear_polar_eig_prob})
is replaced by second-order central-finite-difference formulas for the first
and second derivatives. To secure a well-posed problem, we employ the
boundary conditions (BCs) $f_{-}(r=0)=0$ and $df_{-}/dr(r \rightarrow \infty )=0$ for
the vortex soliton component, and $df_{+}/dr(r=0)=0$ and $f_{+}({r \rightarrow \infty }%
)=0$ for the bright-soliton one. In particular, the zero-derivative
(Neumann) BCs are incorporated into the internal discretization scheme using
the first-order backward and forward difference formulas, respectively.
Essentially, the zero-derivative (no-flux) BCs are enforced by requiring $%
f_{N+1,-}=f_{N,-}$ and $f_{0,+}=f_{1,+}$, whereas $f_{0,-}=f_{N+1,+}=0$, as
per the corresponding homogeneous Dirichlet
BCs.
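The discretization just described can be illustrated by the following minimal sketch (not the authors' code; the grid parameters and the helper name \texttt{radial\_laplacian} are ours). The boundary conditions correspond to the vortex component: homogeneous Dirichlet at the origin and zero derivative at the outer edge.

```python
# Sketch of the second-order finite-difference radial Laplacian on the
# grid r_j = j*dr, j = 1..N, with the ghost points f_0 = 0 (Dirichlet at
# the origin) and f_{N+1} = f_N (Neumann at the outer edge).
import numpy as np

def radial_laplacian(N, dr):
    r = dr * np.arange(1, N + 1)
    L = np.zeros((N, N))
    for j in range(N):
        # d^2/dr^2: (f_{j-1} - 2 f_j + f_{j+1}) / dr^2
        # (1/r) d/dr: (f_{j+1} - f_{j-1}) / (2 r_j dr)
        L[j, j] = -2.0 / dr**2
        if j > 0:
            L[j, j - 1] = 1.0 / dr**2 - 1.0 / (2.0 * r[j] * dr)
        if j < N - 1:
            L[j, j + 1] = 1.0 / dr**2 + 1.0 / (2.0 * r[j] * dr)
    # Neumann fold at the outer edge: f_{N+1} = f_N goes on the diagonal
    L[N - 1, N - 1] += 1.0 / dr**2 + 1.0 / (2.0 * r[N - 1] * dr)
    return r, L
```

A quick consistency check: applied to $f(r)=r^2$, for which $\nabla_r^2 r^2 = 4$ exactly, the stencil reproduces the value $4$ at every node except the last one, where the Neumann fold is not exact for a quadratic.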
The starting point is Eq.~(\ref{bvp_vortex}) for radial profile $f_{-}$ of
the vortex component. From now on, we fix the vorticities of the vortex- and
bright-soliton components to be $S=1$ and $n=0$, respectively, given that
our emphasis is on VB soliton complexes. We solve Eq.~(\ref{bvp_vortex}) by
means of the standard Newton-Raphson method, which converges to a vortex
profile as long as a sufficiently good initial guess is used. An example of
an input, which ensures both the convergence and compliance with
error-tolerance criteria, is $f_{-}(r)=\tanh r$. The resulting converged
waveform for different vorticities $S\geq 1$ features the correct asymptotic
form at $r\rightarrow 0$, $f_{-}(r\rightarrow 0)\sim r^{S}$, and its density
asymptotes to $\mu _{-}$ for large $r$.
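A self-contained sketch of this Newton-Raphson solve (not the authors' code; the grid parameters $N=200$, $\delta r = 0.1$, $r_{\max}=20$ are illustrative) is as follows, with the $\tanh r$ initial guess mentioned above and the analytic Jacobian of Eq.~(\ref{bvp_vortex}).

```python
# Sketch of the Newton-Raphson solve of the S = 1 vortex profile with
# mu_- = 1; grid parameters are illustrative, not those of the paper.
import numpy as np

N, dr, S, mu = 200, 0.1, 1, 1.0
r = dr * np.arange(1, N + 1)

# radial Laplacian with f(0) = 0 (Dirichlet) and f'(r_max) = 0 (Neumann)
L = np.zeros((N, N))
for j in range(N):
    L[j, j] = -2.0 / dr**2
    if j > 0:
        L[j, j - 1] = 1.0 / dr**2 - 1.0 / (2.0 * r[j] * dr)
    if j < N - 1:
        L[j, j + 1] = 1.0 / dr**2 + 1.0 / (2.0 * r[j] * dr)
L[N - 1, N - 1] += 1.0 / dr**2 + 1.0 / (2.0 * r[N - 1] * dr)

f = np.tanh(r)  # initial guess from the text
for _ in range(50):
    # residual of Eq. (bvp_vortex) and its Jacobian
    F = L @ f - S**2 * f / r**2 - 2.0 * (f**2 - mu) * f
    if np.linalg.norm(F) < 1e-10:
        break
    J = L - np.diag(S**2 / r**2) - np.diag(2.0 * (3.0 * f**2 - mu))
    f += np.linalg.solve(J, -F)
```

The converged profile vanishes linearly at the origin (for $S=1$) and approaches the background amplitude $\sqrt{\mu_-}=1$ at large $r$, as stated in the text.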
Subsequently, with background field $f_{-}$ at hand,
we solve the eigenvalue problem~(\ref{bright_linear_polar_eig_prob})
numerically, to obtain the corresponding bright component, $f_{+}(r)$, with
an infinitesimal amplitude, along with the associated chemical potential, $%
\mu _{+}$. Our study is organized according to the order of the bound states
(the ground state, first excited state, and so on), and the value of $D$.
Specifically, we determine chemical potentials $\mu_{+}$ corresponding to
one of the lower eigenstates (the ground state corresponds to lowest $\mu
_{+}$, the first excited state pertains to the second lowest eigenvalue, and
so on) and the corresponding bright eigenfunction $f_{+}$ is obtained
afterwards. This way, the fully nonlinear self-trapped states of
system (\ref{stat_gps_1Da})-(\ref{stat_gps_1Db}) can be obtained with the
help of the Newton-Raphson scheme, the seed for the respective iterations
consisting of the vortex radial profile $f_{-}(r)$ together with the
eigenvalue-eigenvector pair $(\mu_{+},f_{+})$. Essentially, the seed fed
to our nonlinear solver originates from the underlying linear limit
discussed in the previous Section. In addition, we trace the stationary
solutions, for a given value of dispersion coefficient $D$, by performing a
numerical continuation with respect to chemical potential $\mu _{+}$, by
dint of the \textit{sequential continuation} method, i.e., using the
solution for given $\mu _{+}$, found by the solver, as the seed for the next
continuation step. We are thus able to numerically determine not only the
range of dispersion coefficient $D$, but also the range of chemical
potential $\mu _{+}$ for each case of interest. The validity of the
stationary solutions produced by the Newton-Raphson code has been
corroborated upon using a collocation method \cite{COLSYS} for solving
boundary value problems.
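The sequential-continuation logic can be illustrated schematically. In the toy sketch below (our choice, not from the paper), a scalar equation replaces the full radial system, but the seeding structure is identical: the converged solution at one parameter value initializes the Newton iteration at the next.

```python
# Schematic sequential continuation on the scalar toy problem
# f(u; mu) = u^3 - mu*u - 0.1 = 0, standing in for the radial system.
def newton(u, mu, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        F = u**3 - mu * u - 0.1
        if abs(F) < tol:
            return u
        u -= F / (3 * u**2 - mu)  # Newton update with the exact Jacobian
    raise RuntimeError("Newton iteration did not converge")

branch = []
u = 1.2  # seed on the upper branch at mu = 1
for mu in [1.0 + 0.05 * k for k in range(21)]:
    u = newton(u, mu)  # previous solution seeds the next step
    branch.append((mu, u))
```

Each point on the branch satisfies the stationary equation to the Newton tolerance, and the small parameter step keeps the seed inside the basin of attraction of the neighboring solution.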
Having identified the stationary states, we turn to the study of their
stability. Motivated by the decomposition described in Ref. \cite%
{Kollar_Pego}, we start with the generalized perturbation ansatz around
stationary solutions, writing in the polar coordinates:
\begin{eqnarray}
\widetilde{\Phi }_{-}(r,\theta ,t) &=&e^{-i\mu _{-}t}e^{iS\theta }%
\Big\lbrace f_{-}+\varepsilon \sum_{|m|=0}^{\infty }\left[
a_{m}(r)e^{\lambda t}e^{im\theta }+b_{m}^{\ast }(r)e^{\lambda ^{\ast
}t}e^{-im\theta }\right] \Big\rbrace, \label{lin_ansatz_polara} \\
\widetilde{\Phi }_{+}(r,\theta ,t) &=&e^{-i\mu _{+}t}e^{in\theta }%
\Big\lbrace f_{+}+\varepsilon \sum_{|m|=0}^{\infty }\left[
c_{m}(r)e^{\lambda t}e^{im\theta }+d_{m}^{\ast }(r)e^{\lambda ^{\ast
}t}e^{-im\theta }\right] \Big\rbrace, \label{lin_ansatz_polarb}
\end{eqnarray}%
where $\lambda $ is a (complex) eigenvalue, $\varepsilon $ is an
infinitesimal amplitude of the perturbation, and the asterisk stands for
complex conjugate. We insert Eqs.~(\ref{lin_ansatz_polara})-(\ref%
{lin_ansatz_polarb}) into the radial version of Eqs.~(\ref{start_gps_2Da})-(%
\ref{start_gps_2Db}) and thus obtain, at order $\varepsilon $, an eigenvalue
problem in the following matrix form:
\begin{equation}
\tilde{\lambda}%
\begin{pmatrix}
a_{m} \\
b_{m} \\
c_{m} \\
d_{m}%
\end{pmatrix}%
=%
\begin{pmatrix}
A_{11} & A_{12} & A_{13} & A_{14} \\
-A_{12}^{\ast } & A_{22} & -A_{14}^{\ast } & -A_{13}^{\ast } \\
A_{13}^{\ast } & A_{14} & A_{33} & A_{34} \\
-A_{14}^{\ast } & -A_{13} & -A_{34}^{\ast } & A_{44}%
\end{pmatrix}%
\begin{pmatrix}
a_{m} \\
b_{m} \\
c_{m} \\
d_{m}%
\end{pmatrix}%
, \label{eig_prob_polar}
\end{equation}%
with eigenvalues
$\tilde{\lambda}=i\lambda $, eigenvectors $\mathcal{V}%
=(a_{m},b_{m},c_{m},d_{m})^{T}$, and matrix elements
\begin{eqnarray}
A_{11} &=&-\frac{D_{-}}{2}\left[ \nabla _{r}^{2}-\frac{\left( S+m\right) ^{2}%
}{r^{2}}\right] +\gamma \left[ 2g_{1}|f_{-}|^{2}+\sigma _{12}|f_{+}|^{2}%
\right] +V-\mu _{-}, \label{A11_polar} \\
A_{12} &=&\gamma \,g_{1}\,\left( f_{-}\right) ^{2}, \\
A_{13} &=&\gamma \,\sigma _{12}\,f_{-}\left( f_{+}\right) ^{\ast }, \\
A_{14} &=&\gamma \,\sigma _{12}\,f_{-}\,f_{+}, \\
A_{22} &=&\frac{D_{-}}{2}\left[ \nabla _{r}^{2}-\frac{\left( S-m\right) ^{2}%
}{r^{2}}\right] -\gamma \left[ 2g_{1}|f_{-}|^{2}+\sigma _{12}|f_{+}|^{2}%
\right] -\left( V-\mu _{-}\right) , \label{A22_polar} \\
A_{33} &=&-\frac{D_{+}}{2}\left[ \nabla _{r}^{2}-\frac{\left( n+m\right) ^{2}%
}{r^{2}}\right] +\gamma \left[ \sigma _{12}|f_{-}|^{2}+2g_{2}|f_{+}|^{2}%
\right] +V-\mu _{+}, \label{A33_polar} \\
A_{34} &=&\gamma \,g_{2}\,\left( f_{+}\right) ^{2}, \\
A_{44} &=&\frac{D_{+}}{2}\left[ \nabla _{r}^{2}-\frac{\left( n-m\right) ^{2}%
}{r^{2}}\right] -\gamma \left[ \sigma _{12}|f_{-}|^{2}+2g_{2}|f_{+}|^{2}%
\right] -\left( V-\mu _{+}\right) . \label{A44_polar}
\end{eqnarray}%
As a result, the full spectrum of nonlinear solutions $f_{\pm }$ is obtained
by putting together the spectra for different integer values of $m$. This
was done for $m\geq 0$ only ($m=0,1,2,3,4$ and $5$ in this work), since the
sets of the eigenvalues for $\pm m$ (with $m>0$) are complex conjugates. The
corresponding steady state is classified as a stable one if none of the
eigenvalues $\lambda =\lambda _{r}+i\,\lambda _{i}$ has $\lambda _{r}\neq 0$%
, given the Hamiltonian nature of our system. Note that two types of
instabilities can thus be identified: (i) \textit{exponential} instabilities, characterized by a pair of real eigenvalues with $\lambda _{i}=0$, and (ii) \textit{oscillatory} instabilities, characterized by complex eigenvalue quartets.
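This classification amounts to a simple test on the computed spectrum. A minimal sketch (the function name and tolerance are our choices, the latter absorbing numerical round-off):

```python
# Classify a stability spectrum: stable if no eigenvalue has a nonzero
# real part; otherwise exponential (real pairs) or oscillatory (quartets).
import numpy as np

def classify(eigs, tol=1e-8):
    unstable = eigs[np.abs(eigs.real) > tol]
    if unstable.size == 0:
        return "stable"
    if np.all(np.abs(unstable.imag) < tol):
        return "exponential"
    return "oscillatory"
```

For a Hamiltonian system the eigenvalues come in symmetric pairs or quartets, so a single eigenvalue with positive real part suffices to flag the instability.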
Finally, the results for the spectral stability, obtained from the solution
of the eigenvalue problem [see, Eq.~(\ref{eig_prob_polar})] were
corroborated by means of direct simulations of the coupled GP/NLS system~(%
\ref{start_gps_2Da})-(\ref{start_gps_2Db}). To do so, a parallel version
(using OpenMP) of the standard fourth-order Runge-Kutta method (RK4), with a
fixed time-step of $\delta {t}=10^{-4}$, was employed. The simulations were
initialized at $t=0$ using the available stationary solutions. To obtain the
latter, we employed the Newton-Raphson method in a two-dimensional
rectangular domain for system~(\ref{stat_gps_2Da})-(\ref{stat_gps_2Db}),
using the NITSOL package \cite{NITSOL}. The 2D uniform spatial grid was
built of $N_{x}\times N_{y}$ grid points with $N_{x}\equiv N_{y}=251$ and
resolution $\delta {x}\equiv \delta {y}=0.08$. Both fields $\phi _{\pm
}(x,y) $ were replaced by their discrete counterparts on the 2D spatial
grid, i.e., $\phi _{i,j,\pm }=\phi _{\pm }(x_{i},y_{j})$ with $i=1,\dots
,N_{x}$ and $j=1,\dots ,N_{y}$. Then, the Laplacian on the rectangular grid
is replaced by the second-order central-finite-difference formula. As
mentioned above, the Neumann BCs for both fields at edges of the grid were
replaced by the first-order forward and backward difference formulas.
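A single fixed-step RK4 update of the kind used for these simulations can be sketched as follows (a generic sketch, not the authors' parallel OpenMP code); here \texttt{rhs} stands for the discretized right-hand side $-i\,H(\Phi)$ of Eqs.~(\ref{start_gps_2Da})-(\ref{start_gps_2Db}).

```python
# Generic fourth-order Runge-Kutta step for dy/dt = rhs(y); in the
# simulations, rhs would be the discretized -i H(Phi) of the GP system.
import numpy as np

def rk4_step(rhs, y, dt):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

As a sanity check, integrating the single-mode equation $i\,\partial_t\phi=\mu\phi$ with this step reproduces the exact phase rotation $e^{-i\mu t}$ to fourth-order accuracy.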
Furthermore, the steady states on the 2D Cartesian grid are obtained from
the radial ones by utilizing the following standard numerical procedure.
Having the 2D grid and (discrete) radial nonlinear states $f_{j,\pm }$ at
hand, we build interpolants $f_{\pm }(r)$ using a cubic spline
interpolation. Then, fields $\phi _{\pm }(r,\theta )$ given by Eqs.~(\ref{-}%
)-(\ref{+}) are constructed (for given vorticities $S$ and $n$) and
transformed from polar to Cartesian coordinates, $(r,\theta )\rightarrow
(x,y)$, by utilizing relations $r=\sqrt{x^{2}+y^{2}}$ and $\theta =\tan
^{-1}\left( y/x\right) $ once again. The resulting approximate solutions are
fed as initial guesses to our 2D Newton-Raphson method on the Cartesian grid
$\phi _{\pm }(x,y)$, and the resulting iteration process converges within a
few steps.
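The interpolation-and-rotation step can be sketched as follows (illustrative, not the authors' code): here $\tanh r$ stands in for a computed radial profile, and SciPy's \texttt{CubicSpline} plays the role of the cubic-spline interpolant evaluated at $r=\sqrt{x^2+y^2}$, multiplied by the vortex phase $e^{iS\theta}$.

```python
# Transfer a radial profile f(r) onto a 2D Cartesian grid via a cubic
# spline; tanh(r) is a stand-in for a numerically computed solution.
import numpy as np
from scipy.interpolate import CubicSpline

S = 1
r_grid = np.linspace(0.0, 10.0, 401)
spline = CubicSpline(r_grid, np.tanh(r_grid))

x = np.linspace(-5.0, 5.0, 101)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)
Theta = np.arctan2(Y, X)
phi = spline(R) * np.exp(1j * S * Theta)  # phi_-(x, y) on the 2D grid
```

The resulting field vanishes at the core and carries a single $2\pi$ phase winding, which makes it a suitable initial guess for the 2D Newton-Raphson solver.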
Two possible initializations of the direct simulations can be distinguished.
On the one hand, we initialized the dynamics in the presence of small
(uniformly distributed) random perturbations with amplitude $\varepsilon
=10^{-3}$, added to presumably stable stationary states. An alternative
approach was to initialize the evolution using the linearization ansatz~(\ref%
{lin_ansatz_polara})-(\ref{lin_ansatz_polarb}) for unstable solutions, with $%
\varepsilon =10^{-3}$, eigenvector $\mathcal{V}$, and a given $m$ corresponding to the (complex) eigenvalue responsible for the instability. The latter approach helps to stimulate the onset of the expected instability and to observe the ensuing dynamics. The underlying eigenvector $\mathcal{V}$ is transformed
from polar to Cartesian coordinates using the same interpolation technique
mentioned previously.
\subsection{Numerical results}
\label{numer_res}
We start by considering the most fundamental state, namely the VB
soliton, shown in Fig.~\ref{fig2}. In this case, as well as in all the
other cases considered herein, we find the VB structure
(shown, e.g., in the left panel of Fig.~\ref{fig2}) to be stable, as
evidenced by the absence of eigenvalues with nonzero real part in the
right panel of Fig.~\ref{fig2}.
\begin{figure}[tp]
\begin{center}
\vspace{-0.1cm}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig2a.eps}
\label{fig2a}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig2b.eps}
\label{fig2b}
}
}
\end{center}
\par
\vspace{-0.55cm}
\caption{
(Color online) Steady-state profiles (a) of the vortex and bright soliton
components (black and blue lines, respectively), and the eigenvalue spectrum
(b) corresponding to a ground state with $D=0.6$ and $\protect\mu _{+}=0.95$.
}
\label{fig2}
\end{figure}
The first excited state is shown in Fig.~\ref{fig3}. The profile displayed
in the top left panel demonstrates that the vortex retains its general
structure, while featuring some changes due to the presence of a more
complex spatial pattern in the bright component, with two maxima of the
local density ($|f_{+}|^{2}$), one at the center and another one at the
periphery, separated by a notch in the form of a dark ring. The typical
linearization spectrum shown in the top right panel illustrates the presence
of complex instabilities. It is relevant to stress that, both in this case
and in those considered below, the instabilities are associated with
quartets of complex eigenvalues (even when their imaginary parts are so
small that the eigenvalues may appear as real ones). The detailed spectra
shown in the middle and bottom panels of the figure (cf. Figs.~\ref{fig3c}-%
\ref{fig3g}) make it clear that the lowest perturbation eigenmodes are
always prone to instability, especially the ones with $m=0$ and $m=1$. For
smaller values of $D$, higher eigenmodes may become unstable too, and the
respective instabilities may eventually (i.e., at large $\mu _{+}$) even
dominate the respective growth rate. The enhanced instability at smaller $D$
is a natural feature to expect: indeed, as $D$ decreases, the notch shrinks, turning
into a circular quasi-1D dark soliton, whose snaking instability in two-dimensional
settings is well known \cite{siambook}. It is also relevant to stress that
the oscillatory pattern, featured, especially, by the $m=0$ mode is associated with the
presence of gaps in the spectrum (for our finite-domain computation), which
allow the eigenmode to periodically restabilize, before it collides with
another one and destabilizes anew. Similar features for other
\textquotedblleft dark\textquotedblright\ patterns have long been known
(see, in particular, Ref.~\cite{prl8285}), and are absent in the
infinite-domain limit, where the relevant eigenvalue follows the envelope of
the respective \textquotedblleft trajectory\textquotedblright\ in the
spectral plane.
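The diagnostic tracked in the middle and bottom panels of the figures below, the largest real part of the linearization eigenvalues, can be illustrated with a minimal sketch; the $2\times 2$ matrix here is a toy stand-in, not the actual Bogoliubov-de Gennes operator.

```python
import numpy as np

def max_growth_rate(M):
    # Largest real part of the linearization eigenvalues:
    # a positive value signals an instability with that growth rate.
    return float(np.max(np.linalg.eigvals(M).real))

# Toy matrix whose eigenvalues form the complex pair 0.1 +/- 1i,
# mimicking one member of an unstable eigenvalue quartet.
M = np.array([[0.1, 1.0],
              [-1.0, 0.1]])
print(round(max_growth_rate(M), 6))  # → 0.1
```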
\begin{figure}[tbp]
\begin{center}
\vspace{-0.1cm}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig3a.eps}
\label{fig3a}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig3b.eps}
\label{fig3b}
}
}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig3c.eps}
\label{fig3c}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig3d.eps}
\label{fig3d}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig3e.eps}
\label{fig3e}
}
}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig3f.eps}
\label{fig3f}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig3g.eps}
\label{fig3g}
}
}
\end{center}
\par
\vspace{-0.55cm}
\caption{(Color online) Bound states and continuation results corresponding
to the first excited state in the bright component. \textit{Top row}: (a)
Stationary profiles of the vortex and bright components (the black and blue
lines, respectively). (b) The corresponding eigenvalue spectrum for $D=0.1$
and $\protect\mu _{+}=0.97$. \textit{Middle and bottom rows}: The largest
real part of the eigenvalues as a function of $\protect\mu _{+}$ at various
fixed values of $D$: (c) $D=0.5$, (d) $D=0.4$, (e) $D=0.3$, (f) $D=0.2$, and
(g) $D=0.1$.}
\label{fig3}
\end{figure}
\begin{figure}[tp]
\begin{center}
\includegraphics[height=.16\textheight, angle =0]{summary.eps}
\par
\vspace{-0.1cm}
\end{center}
\caption{(Color online) Same as Fig.~\protect\ref{fig3} but in the presence
of the harmonic-oscillator trap. The largest real part of the eigenvalues as
a function of $\protect\mu _{+}$ at $D=0.5$ is depicted in the left and
right panels for values of the trap's strength of $\Omega =0.2$ and $\Omega =0.3
$, respectively. }
\label{fig3_comp_trap}
\end{figure}
\begin{figure}[tp]
\begin{center}
\includegraphics[height=.28\textheight, angle =0]{summary_over_s12.eps}
\par
\vspace{-0.1cm}
\end{center}
\caption{(Color online) Same as Fig.~\protect\ref{fig3} but for $\protect\mu %
_{+}=0.86$ and $D=0.1$. \textit{Top row}: Stationary profiles of the vortex
and bright components for interaction coefficients $\protect\sigma _{12}=0.9$
(left panel) and $\protect\sigma _{12}=1.1$ (right panel). \textit{Bottom row%
}: The largest real part of the eigenvalues as a function of interaction
coefficient $\protect\sigma _{12}$. }
\label{fig3_comp_s12}
\end{figure}
Although our main focus in this study has been on the free space case, we
now briefly touch upon the setting involving the presence of the trapping
harmonic-oscillator potential. In particular, Fig.~\ref{fig3_comp_trap}
displays the linearization spectra of the trapped first excited state in the
bright component at $D=0.5$ for two different values of the trap's strength,
$\Omega =0.2$ and $\Omega =0.3$ in the left and right panels of the figure,
respectively. It is evident from both panels that the trap contributes to
the stability of the solution, if compared with the same branch in the free
space for the same $D$ depicted in panel (c) of Fig.~\ref{fig3} (see the
range of the values of $\mu _{+}$ as well). In addition, our findings
suggest that the stability interval (i.e., the width of the interval of $\mu
_{+}$ in which the branch is stable), is progressively wider as $\Omega $
increases (see the right panel of Fig.~\ref{fig3_comp_trap}). Thus,
gradually increasing the trap's strength wipes out the
unstable modes of the spectrum. This may be expected, as the parabolic trap
makes the linearization spectrum of the system discrete (while it is
continuous in the uniform space), gradually imposing a larger distance
between the relevant eigenvalues, thus suppressing resonant interactions
between modes that cause instabilities for such excited states~\cite%
{siambook}.
Up to now, we considered the system with all the interaction (or
nonlinearity) coefficients equal. It is also relevant to briefly touch upon
the variation of these coefficients as in realistic atomic systems they are
not precisely equal to unity; furthermore, these are parameters that can be
tuned by means of the Feshbach resonances controlling inter-atomic
scattering. In that vein, in Fig.~\ref{fig3_comp_s12} we consider the state
with the first excited state in the bright component at $\mu _{+}=0.86$ and $%
D=0.1$. In particular, the top left and right panels correspond to the
profiles for $\sigma _{12}=0.9$ and $\sigma _{12}=1.1$, respectively,
highlighting the transition from miscibility ($\sigma _{12}^{2}<g_{1}g_{2}$)
to immiscibility (for $\sigma _{12}^{2}>g_{1}g_{2}$). The bottom panel in
the figure shows the linearization spectrum in this case as a function of $%
\sigma _{12}$. In the 1D case studied in Ref. \cite{phyreva91}, it has been
observed that the stability of the individual dark-bright solitons is not
dramatically affected by the variation of $\sigma _{12}$. Similar findings
are observed in the present 2D case, although the instability growth rates
start decreasing, resulting in a small stability region close to the upper
bound of the examined window of $\sigma_{12}$ (see the bottom panel of Fig.%
~\ref{fig3_comp_s12}). It should be stressed at this point that the lower
bound of $\sigma _{12}$ is determined by the fact that, as $\sigma _{12}\,(<1)$
decreases, the expanding bright structure eventually reaches the domain size.
On the contrary, when $\sigma _{12}\,(>1)$ increases, the bright component
becomes narrower and more focused within the potential well induced by the
dark structure. Furthermore, its amplitude decreases and eventually vanishes,
thus determining the upper bound of the considered interval of $\sigma_{12}$.
The case of the second and third excited states in the bright component is
considered in Figs.~\ref{fig4} and \ref{fig5}, respectively. The former
state features a triple local density maximum in the bright component, with
these maxima separated by two dark rings; the latter state has four local
maxima, separated by three dark rings, as shown in the respective top left
panels of the figures. One can also observe in the corresponding top right
panels, which showcase typical examples of the spectral plane, $(\lambda
_{r},\lambda _{i})$, of eigenvalues $\lambda =\lambda _{r}+i\lambda _{i}$,
that the number of unstable modes grows progressively with the
order of the state. Some additional relevant observations
regarding these figures are as follows. In Fig.~\ref{fig4}, we observe that,
for sufficiently small dispersion coefficient $D$, eigenvalues of
higher-order perturbation modes, including ones for $m=3$ (and even $m=4$
and $5$) grow fast with $\mu _{+}$, so that they play a dominant role in the
resulting dynamics, making it different from that in more typical cases of $%
m=0$ and $m=1$. As for the waveform in Fig.~\ref{fig5}, on the other hand,
it is relevant to point out that the internal structure of the ring state
has a conspicuous feedback effect on the spatial profile of the vortex. In
this case, the vortex features a nearly non-monotonic profile. Here, too,
higher perturbation eigenmodes, including most notably $m=3$, but also, in
some cases, $m=2$, $m=4$, etc., may result in the largest growth rate of the
instability.
\begin{figure}[tp]
\begin{center}
\vspace{-0.1cm}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig4a.eps}
\label{fig4a}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig4b.eps}
\label{fig4b}
}
} \vspace{-0.5cm}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig4c.eps}
\label{fig4c}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig4d.eps}
\label{fig4d}
}
}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig4e.eps}
\label{fig4f}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig4f.eps}
\label{fig4g}
}
}
\end{center}
\par
\vspace{-0.55cm}
\caption{(Color online) Same as Fig.~\protect\ref{fig3} but for the second
excited states in the bright component. \textit{Top row}: (a) Steady-state
profiles of the vortex and bright components (the black and blue lines,
respectively). (b) The corresponding eigenvalue spectrum for $D=0.05$ and $%
\protect\mu _{+}=0.95$. \textit{Middle and bottom rows}: The largest real
part of eigenvalues as a function of $\protect\mu _{+}$ at fixed values of $%
D $: (c) $D=0.2$, (d) $D=0.15$, (e) $D=0.1$, and (f) $D=0.05$.}
\label{fig4}
\end{figure}
\begin{figure}[tph]
\begin{center}
\vspace{-0.1cm}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig5a.eps}
\label{fig5a}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig5b.eps}
\label{fig5b}
}
}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig5c.eps}
\label{fig5c}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig5d.eps}
\label{fig5d}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig5e.eps}
\label{fig5e}
}
}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig5f.eps}
\label{fig5f}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig5g.eps}
\label{fig5g}
}
}
\end{center}
\par
\vspace{-0.55cm}
\caption{(Color online) Same as Fig.~\protect\ref{fig3}, but for the third
excited states in the bright component. \textit{Top row}: (a) Steady-state
profiles of the vortex and bright components (black and blue lines,
respectively). (b) The corresponding eigenvalue spectrum for $D=0.04$ and $%
\protect\mu _{+}=0.9825$. \textit{Middle and bottom rows}: The largest real
part of the eigenvalues as a function of $\protect\mu _{+}$ at fixed values of $D$: (c) $%
D=0.12$, (d) $D=0.1$, (e) $D=0.08$, (f) $D=0.06$, and (g) $D=0.04$.}
\label{fig5}
\end{figure}
Having examined the spectral stability of the different states, we now turn
to direct simulations to study the evolution of these states. First, in Fig.~%
\ref{fig6}, we confirm that the evolution of the fundamental VB soliton
branch (where the bright component is the GS of the vortex-induced
potential) does not exhibit any instability in long simulations (up to $%
t=2000$), even though the solution is initially perturbed.
\begin{figure}[tbp]
\begin{center}
\vspace{-0.1cm}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig6a.eps}
\label{fig6a}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig6b.eps}
\label{fig6b}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig6c.eps}
\label{fig6c}
}
}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig6d.eps}
\label{fig6d}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig6e.eps}
\label{fig6e}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig6f.eps}
\label{fig6f}
}
}
\end{center}
\par
\vspace{-0.55cm}
\caption{(Color online) The evolution of densities $|\Phi _{-}(x,t)|^{2}$
and $|\Phi _{+}(x,t)|^{2}$ (the top and bottom rows), displayed at different
instants of time: $t=0$ (left panels), $t=1000$ (middle panels), and $t=2000$
(right panels), for perturbed complexes with the bright component in the
form of the ground state, at $D=0.6$ and $\protect\mu _{+}=0.95$.}
\label{fig6}
\end{figure}
The situation is different for the excited states. This is observed, in
particular, in the evolution of the structure with the bright component
represented by the first excited state displayed in Fig.~\ref{fig7}; for the
second and third excited states in the bright component the same is shown in
Figs.~\ref{fig8} and \ref{fig9}, respectively. In the case of the first
excited state, we see in Fig.~\ref{fig7} that the shape becomes elongated,
resulting in the breakup of the dark density ring embedded into the bright
component. As a result, the bright component gradually transforms into the
GS (see, e.g., the right panel in the figure).
\begin{figure}[tbp]
\begin{center}
\vspace{-0.1cm}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig7a.eps}
\label{fig7a}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig7b.eps}
\label{fig7b}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig7c.eps}
\label{fig7c}
}
}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig7d.eps}
\label{fig7d}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig7e.eps}
\label{fig7e}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig7f.eps}
\label{fig7f}
}
}
\end{center}
\par
\vspace{-0.55cm}
\caption{(Color online) The same as Fig.~\protect\ref{fig6}, but for the
complexes with the bright component in the form of the first excited state.
Top and bottom rows display densities $|\Phi _{-}(x,t)|^{2}$ and $|\Phi
_{+}(x,t)|^{2}$, respectively, at $t=0$ (left panels), $t=80$ (middle
panels), and $t=160$ (right panels) for $D=0.1$, $\protect\mu _{+}=0.97$ and
$m=2$.}
\label{fig7}
\end{figure}
In the case of the second excited state shown in Fig.~\ref{fig8}, the
instability breaks the two dark rings embedded into the bright component. As
a result, more norm (or power, in terms of the optical model) migrates from
the outside rings towards the mode's core, pulled into the potential well
induced by the vortex in the mate component. In this case, the vortex
structure is only weakly affected by the instability of the bright
component. Eventually (see the panel on the right side of the figure), the
bright waveform builds a conspicuous maximum at the center, having shed off
considerable amount of radiation. Thus, this solution approaches the GS in
the bright component too, as a result of the instability development.
\begin{figure}[tbp]
\begin{center}
\vspace{-0.1cm}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig8a.eps}
\label{fig8a}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig8b.eps}
\label{fig8b}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig8c.eps}
\label{fig8c}
}
}
\mbox{\hspace{-0.1cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig8d.eps}
\label{fig8d}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig8e.eps}
\label{fig8e}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.16\textheight, angle =0]{fig8f.eps}
\label{fig8f}
}
}
\end{center}
\par
\vspace{-0.55cm}
\caption{(Color online) Same as Fig.~\protect\ref{fig6} but for the
complexes with the bright component in the form of the second excited state.
Top and bottom rows display densities $|\Phi _{-}(x,t)|^{2}$ and $|\Phi
_{+}(x,t)|^{2}$, respectively, at $t=0$ (left panels), $t=210$ (middle
panels), and $t=430$ (right panels), for $D=0.05$, $\protect\mu _{+}=0.95$
and $m=2$.}
\label{fig8}
\end{figure}
Finally, the waveform featuring the third excited state in the bright
component, characterized by a triple dark ring, exhibits complex evolution,
as seen in Fig.~\ref{fig9}. The rings get distorted, as is shown in the
second column of the figure --the outer nodal line is no longer a ring,
while the middle one has already been broken up. In the third column, the
outer and middle dark-ring patterns are severely distorted, resulting,
eventually (in the right column), in the transfer of the norm of the bright
component towards the center, although surrounded by a complex pattern
involving multiple nodal structures. Thus, one again sees a trend for the
aggregation of the norm of the bright component at the center, implying
spontaneous rearrangement of the mode into the GS. Here (as well as in the
case of the bright component shaped as the first excited state), the vortex
component suffers a more significant feedback from the instability
development in the bright one, resulting in complex patterns observed in the
dark component too. Nevertheless, the central core of the vortex remains
intact, thus maintaining the effective potential trapping the bright
waveform.
\begin{figure}[tbp]
\begin{center}
\vspace{-0.1cm}
\mbox{\hspace{-0.3cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.142\textheight, angle =0]{fig9a.eps}
\label{fig9a}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.142\textheight, angle =0]{fig9b.eps}
\label{fig9b}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.142\textheight, angle =0]{fig9c.eps}
\label{fig9c}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.142\textheight, angle =0]{fig9d.eps}
\label{fig9d}
}
}
\mbox{\hspace{-0.3cm}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.142\textheight, angle =0]{fig9e.eps}
\label{fig9e}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.142\textheight, angle =0]{fig9f.eps}
\label{fig9f}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.142\textheight, angle =0]{fig9g.eps}
\label{fig9g}
}
\subfigure[][]{\hspace{-0.3cm}
\includegraphics[height=.142\textheight, angle =0]{fig9h.eps}
\label{fig9h}
}
}
\end{center}
\par
\vspace{-0.55cm}
\caption{(Color online) Same as Fig.~\protect\ref{fig6}, but for the
complexes with the bright component represented by the third excited state.
Top and bottom rows display densities $|\Phi _{-}(x,t)|^{2}$ and $|\Phi
_{+}(x,t)|^{2}$, respectively, at $t=0$ (panels (a) and (e)), $t=130$
(panels (b) and (f)), $t=180$ (panels (c) and (g)), and $t=230$ (panels (d)
and (h)), for $D=0.04$, $\protect\mu _{+}=0.9825$ and $m=4$.}
\label{fig9}
\end{figure}
\section{Conclusion}
We have considered the two-component GP/NLS 2D system, chiefly with equal
strengths of the self- and cross-defocusing cubic nonlinearities, in which a
vortex in one component induces an effective trapping potential for the
other (bright) component. The system models heteronuclear BEC mixtures and
the copropagation of optical beams carried by different wavelengths.
Depending on the relative dispersion parameter of the second component, the
effective potential can trap not only the GS (ground state) in the bright
component, but also the first, second and even third excited radial states.
This results in complexes with multi-ring solitons in the bright component,
which produce a feedback on the vortex in the first component. Among these
complexes, the VB (vortex-bright) one, with the GS in the bright component,
has been identified as a spectrally stable state, which is, accordingly,
robust in the direct evolution.
On the other
hand, the complexes with the bright component represented by the excited
states are unstable, although the instability growth rate is broadly
tunable, and may be made very small, by means of the variation of the
relative dispersion parameter of the bright component. If the
harmonic-oscillator trapping potential, which is experimentally relevant in
atomic BECs, is included, the complete stabilization of the structures with
the excited state of the bright component can be achieved in suitable
parametric regions. The stabilization may also be provided by unequal
strengths of self- and cross-repulsive interactions, as shown in Fig. \ref%
{fig3_comp_s12}. The spectral stability and instability, predicted by the
analysis of small perturbations, were corroborated by direct simulations. In
particular, the unstable complexes with the excited bright component have
been shown to spontaneously rearrange into VB modes with the GS in the
bright constituent.
This work paves the way for exploration of related systems. First, we
actually considered only the bright component with zero vorticity, $n=0$ in
Eq.~(\ref{+}) (i.e., without the angular momentum). The existence and
stability of complexes with a vortical bright component, $n\neq 0$, is a
very relevant generalization. In particular, the stability may be quite
different for the same shape of the vortical bright mode with opposite signs
of the vorticity, $n=\pm 1$, while $S=+1$ is fixed in the first component,
cf. the stability of two-component trapped modes with the \textit{hidden
vorticity} studied in Ref. \cite{Nal}. Further, in this work we restrict the
considerations to 2D settings, while recent work \cite{adhi} has shown~that
3D vortical structures are capable of trapping bright states. Another
possibility, that we only briefly broached here, is to systematically
consider the states formed in the presence of the harmonic-oscillator
trapping potential, which is necessarily present in experiments with atomic
BECs. Finally, the present analysis is restricted to axially symmetric
states trapped by the vortex-induced effective potential. It is also
interesting to check if azimuthally modulated states (\textit{azimuthons}~%
\cite{desya}) may be produced in the present setup. Some of these extensions
are presently under consideration, and will be reported elsewhere.
\begin{acknowledgments}
P.G.K. and D.J.F. gratefully acknowledge the support of NPRP8-764-1-160. E.G.C. is
indebted to the Department of Physical Electronics, School of Electrical Engineering
at the Tel Aviv University for hospitality. This author thanks Hans Johnston (UMass)
for providing help in connection with the parallel computing performed in this work.
P.G.K. acknowledges support from the National Science Foundation under Grant
DMS-1312856 and from FP7-People under Grant No. IRSES-605096. The work of
D.J.F. was partially supported by the Special Account for Research Grants of
the University of Athens. The work of P.G.K., E.G.C., and B.A.M. was
supported in part by the U.S.-Israel Binational Science Foundation through
Grant No. 2010239.
\end{acknowledgments}
\section{Introduction}
Today our sensitive data is being collected and analyzed by our service providers at a
scale that we are not, and cannot become, aware of. Though we often nominally own our data,
granting others access to it is no simple task. Solutions like OAuth \citep{OAuth} exist to manage data sharing but limit the permission scopes that can be granted to those chosen and implemented by the service provider. Additionally these solutions do not give us control over what happens to our data after permission is granted. For example, Facebook users agreed to share data with app developers but were surprised when in 2018 it came to light that that data had been subsequently shared with political consulting firm Cambridge Analytica \citep{10.1109/MC.2018.3191268}.
In an effort to combat this, governments have introduced regulations to limit the scope of
unauthorized data sharing. These data privacy regulations include the European Union's \emph{General Data Protection Regulation (GDPR)} \citep{EUdataregulations2018}, California's \emph{California Consumer Privacy Act (CCPA)} \citep{CCPA} and the United States' \emph{Health Insurance Portability and Accountability Act (HIPAA)} \citep{HIPAA}. Unfortunately, today's data processing systems make compliance with these regulations very difficult. This is due in large part to the fact that many existing data processing systems were designed and deployed prior to the conception of these regulations.
These two problems: the difficulty involved with complying with data regulation and the inability to control our own sensitive data, are deeply interrelated. The root cause is that today's approaches for collecting and processing sensitive personal data require data owners to \emph{give up control} over their data. Once data has left the owner's device, the data owner must \emph{trust} that the organization collecting the data will act in their interest and the organization must go through extreme lengths to ensure their compliance. As can be seen by the numerous recent cases of data misuse \citep{10.1109/MC.2018.3191268}\citep{gressin2017equifax}\citep{affleak}, many organizations have demonstrated that they are not deserving of this trust.
If organizations could ensure verifiable and automatic compliance with privacy policies, it would be safe to trust them with our data. Additionally the ability to specify these policies themselves would extend data owners far greater control and ownership over their data.
\paragraph{{\sc PrivFramework}\xspace: returning control to data owners.}
This paper presents {\sc PrivFramework}\xspace, an end-to-end framework for building and deploying scalable systems that collect and process sensitive data while enforcing security and privacy policies specified by data owners. {\sc PrivFramework}\xspace improves on previous approaches by allowing data owners to specify both \emph{who} may process their data and \emph{how} it may be used.
In {\sc PrivFramework}\xspace, data owners submit encrypted data to data collectors, and the data can be decrypted and processed only within \emph{Trusted Execution Environments (TEEs)} that keep the data confidential and ensure the integrity of the results. Data owners also submit policies governing their data, and {\sc PrivFramework}\xspace ensures via static analysis that processing tasks performed by the data collector do not violate these policies. These two properties together ensure that data is used in ways that are consistent with its owners' wishes, without requiring trust in the organizations which collect the data.
\paragraph{Related Work}
With the increasing visibility of data privacy issues, there has been increased attention paid to advancing technology in the data privacy space. Our work is most closely related to three directions of privacy research: (1) data access control \& de-identification, (2) pure differential privacy tools, and (3) policy enforcement systems. In data access control there have been advances such as Google DLP \citep{DLP} and Amazon Macie \citep{Macie}. These systems exist to reduce custodial risk but do not allow end users to specify their privacy preferences. Pure differential privacy tools exist in both centralized and decentralized variants and function by adding noise to data. By reducing the signal-to-noise ratio in the data, these solutions often render the data too noisy to provide accurate results. Other privacy policy enforcement systems exist, such as those presented by Chowdhury et al. \citep{chowdhury2013privacy} and Sen et al. \citep{sen2014bootstrapping}, but these systems do not both offer the flexibility of {\sc PrivFramework}\xspace and stop violations \emph{before they occur}.
\section{The {\sc PrivFramework}\xspace System}
\label{sec:overview}
{\sc PrivFramework}\xspace provides an end-to-end framework for building configurable privacy-sensitive systems. The framework is comprised of a \emph{client-side API} for building applications that collect data and submit it for analysis, a \emph{collection \& analysis platform} for receiving submitted data, storing it, and executing analyses, and an \emph{analysis API} for writing statistical analyses and machine learning pipelines against the collected data.
Consider the motivating example of a consumer financial application used to track personal budgeting. Currently this requires the use of an interface (e.g. Plaid \citep{plaid} or Yodlee \citep{yodlee}) that requests end users' banking username and passwords, two pieces of extremely sensitive information. Requesting these credentials just to obtain read-only access to transaction history is an overreach that creates a severe and unnecessary privacy hazard. In addition there is no way to limit the scopes of the services' access to data; they have the ability to read and extract any and all information in the account in perpetuity. {\sc PrivFramework}\xspace solves these problems by allowing individuals to control the amount of information shared with service providers while simultaneously enabling service providers to provide strong privacy guarantees to reduce custodial risk and build customer trust.
To accomplish this we implement the following workflow:
\begin{enumerate}
\item The Data Owner uses a client application to upload their sensitive data, along with a formal policy specifying how the data may be processed, to a Data Manager. We use the Oasis Parcel Platform \citep{parcel} to store this data, encrypted with the Owner's key.
\item The Data Owner uses the Oasis Steward application to authorize Analysts to submit analysis programs to the Data Manager that will be run over the data.
\item An authorized Analyst submits an analysis program to the Data Manager. This program is statically analyzed and constrained according to the specified policy by the Data Manager.
\item The Data Manager validates the analysis program and returns to the Analyst either the analysis results or a residual policy specifying unsatisfied constraints. At this point the Data Owner has not seen the details of the analysis program, and the analyst has not had direct access to the data.
\item If applicable, the Analyst composes a summary or result for the Data Owner.
\end{enumerate}
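The workflow above can be sketched end to end in a few lines of Python. All class and method names here are hypothetical illustrations; the actual client and analysis APIs are described in the following sections.

```python
class DataManager:
    """Stands in for the TEE-backed collection & analysis platform."""
    def __init__(self):
        self.capsules = []      # (data, policy) pairs, step 1
        self.authorized = set() # analysts approved by the owner, step 2

    def upload(self, data, policy):
        self.capsules.append((data, policy))

    def authorize(self, analyst):
        self.authorized.add(analyst)

    def submit(self, analyst, program):
        # Steps 3-4: validate the analyst and the program against each policy.
        if analyst not in self.authorized:
            raise PermissionError("analyst not authorized by data owner")
        results = []
        for data, policy in self.capsules:
            if policy(program):  # stand-in for the static compliance check
                results.append(program(data))
            else:
                results.append("residual policy: unsatisfied constraints")
        return results

# Toy run: the owner's policy only admits aggregate analyses.
dm = DataManager()
dm.upload([3, 1, 4, 1, 5],
          policy=lambda prog: getattr(prog, "aggregate", False))

def mean(data):
    return sum(data) / len(data)
mean.aggregate = True  # marker the toy policy checks for

dm.authorize("analyst")
print(dm.submit("analyst", mean))  # → [2.8]
```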
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{images/PrivFramework_Figure.png}
\caption{Diagram of interactions in the {\sc PrivFramework}\xspace system. The circled numbers refer to the above workflow. The Data Manager represents the TEE-enabled system where decryption, compliance checks and analysis occur.}
\end{figure}
Enabling a secure and private workflow like the one described is challenging in several domains. The tool must adhere to strict privacy guarantees, have ergonomic APIs that developers can leverage easily, and provide accessible means for end users to specify their privacy policies. To address these challenges we present {\sc PrivFramework}\xspace. In the following sections we present our approach to specifying and enforcing policies, available APIs, the system architecture, and detail a demonstration of the previously described budgeting application designed using {\sc PrivFramework}\xspace.
\subsection{Specifying and Enforcing Privacy Policies}
A major challenge in designing privacy sensitive applications is accurately foreseeing the system's eventual use cases and breadth of end user preferences. To address this issue {\sc PrivFramework}\xspace is designed to dynamically enforce user specified privacy policies. We have developed a static analysis tool and a Domain Specific Language (DSL) to allow users to configure their own privacy policies, avoiding the need to specify parameters and options in advance. The {\sc PrivFramework}\xspace system relies on {\sc PrivGuard}\xspace, our static analysis framework, and {\sc PrivPolicy}\xspace, our DSL, to specify and enforce policies.
\paragraph{Data Capsules.}
Data submitted to {\sc PrivFramework}\xspace is stored in \emph{data capsules} \citep{10.1007/978-3-030-33752-0_1}, which pair submitted data with a formal policy governing its use. {\sc PrivFramework}\xspace provides analysts with a standard Python environment for processing data capsules, and uses static analysis to ensure compliance with the relevant policies. Our policy analysis supports libraries including Pandas \citep{reback2020pandas}, NumPy \citep{2020NumPy-Array}, TensorFlow \citep{tensorflow}, and PyTorch \citep{pytorch} out of the box. Organizations
can extend the static analyzer by implementing tracing stubs for their library of choice.
The data capsule system is \emph{compositional}: processing tasks can be split into multiple steps that form a processing pipeline. Each program in the pipeline outputs a new data capsule with a new, residual policy. As programs in the pipeline satisfy policy requirements, those changes are reflected in the policy attached to the output data capsule. The contents of a data capsule cannot be viewed by the analyst until all of the requirements of its policy have been satisfied.
\paragraph{Policies.} When submitting data to {\sc PrivFramework}\xspace, data owners attach policies that govern how the data may be processed. Policies in {\sc PrivFramework}\xspace are encoded using the {\sc PrivPolicy}\xspace language \citep{10.1007/978-3-030-33752-0_1}, which allows policies to encode both security and access control requirements (e.g. ``only my doctor may view my medical records'') and privacy requirements (e.g. ``my data must be aggregated with data from 100 others before it is processed'' or ``differential privacy must be used when processing my data'').
\paragraph{Static Analysis.} We use {\sc PrivGuard}\xspace to statically enforce the privacy policies over the Data Owner's data. Only after verifying that the analysis program will not violate any policies is the analysis actually run with access to the data. In the case of noncompliance we also generate a residual policy. This residual policy is packaged with an intermediate analysis state into a new data capsule. If the analysis is compliant then the program's outputs are returned. Otherwise the analyst is provided a reference to the new intermediate capsule. This analysis is done in a TEE to prevent the Data Manager from accessing the data at any time.
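The capsule-and-residual-policy mechanism can be sketched in a few lines of Python. This is an illustrative toy, not the actual {\sc PrivGuard}\xspace API: the capsule class, the clause strings, and the `satisfies` check are all hypothetical stand-ins for the real static analysis.

```python
from dataclasses import dataclass

@dataclass
class DataCapsule:
    """Pairs data with the policy clauses still governing its release."""
    data: object
    policy: frozenset  # outstanding requirements, e.g. {"AGGREGATE", "DP"}

def run_analysis(capsule, program, satisfies):
    """Release results only if every policy clause is satisfied; otherwise
    return a new capsule holding intermediate state and the residual policy."""
    residual = frozenset(c for c in capsule.policy if not satisfies(program, c))
    if residual:
        # Non-compliant: the Analyst gets a capsule reference, never raw data.
        return DataCapsule(data=program(capsule.data), policy=residual)
    return program(capsule.data)  # compliant: outputs are returned

# Toy compliance check: each program declares the clauses it satisfies.
def satisfies(program, clause):
    return clause in getattr(program, "satisfied_clauses", frozenset())

def aggregate(rows):
    return sum(rows) / len(rows)
aggregate.satisfied_clauses = frozenset({"AGGREGATE"})

capsule = DataCapsule(data=list(range(200)), policy=frozenset({"AGGREGATE", "DP"}))
step1 = run_analysis(capsule, aggregate, satisfies)
# "DP" remains unsatisfied, so step1 is a new capsule with residual policy {"DP"}.
```

A second, differentially private program in the pipeline would then consume `step1` and, having discharged the remaining clause, receive the final output.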
\subsection{Developing Applications with {\sc PrivFramework}\xspace}
As a tool meant for developers, all of our components are designed for scalability and interoperability with existing systems. The client-side API integrates easily with Android applications and can leverage data stored on the device or drawn from third-party sources. The analysis API is implemented as a library for Python programs, so analysts can reuse existing code and do not need to learn a new programming language. The collection \& analysis platform is built on the Oasis Parcel platform and runs within the AMD SEV \citep{amdsev} Trusted Execution Environment; it is responsible for collecting and processing data while enforcing data owners' policies.
\paragraph{Client-side Application Development.} {\sc PrivFramework}\xspace's client-side API provides application developers with tools for the rapid development of {\sc PrivFramework}\xspace
applications. Existing applications can be modified to submit data to {\sc PrivFramework}\xspace
in just a few lines of code. The API also provides tools to help data owners
select a policy for submitted data. Applications can leverage the API to allow
data owners to select from a list of possible policies, specify values for
parameterized policies, or even design their own policies from scratch.
\section{Demonstration}
\begin{wrapfigure}{R}{0.4\textwidth}
\vspace{-10pt}
\caption{Choosing a policy in the client application is simple for the user and can be customized by the developer.}
\centering
\includegraphics[width=0.4\textwidth]{images/privguard_prompt.png}
\end{wrapfigure}
We have implemented a prototype of {\sc PrivFramework}\xspace that is capable of demonstrating all of its main features. In this section we will describe the budgeting example discussed in \hyperref[sec:overview]{Section 2}, one of several such applications we have implemented.
We present an application where a Data Owner downloads their own financial data using the APIs that are generally leveraged by service providers. This data is then packaged with a specified policy, uploaded to a Data Manager running a {\sc PrivFramework}\xspace agent, and encrypted at rest with their own key. Data Owners then authorize analysts to run analyses over their data. Analysts run these analyses and use the results to generate personal dashboards for the users.
In this process we demonstrate the key features of {\sc PrivFramework}\xspace:
\begin{enumerate}
\item The Data Owner privately keeps possession of all secret tokens, passwords, and unencrypted data.
\item Data Owners may easily specify policies and authorize analysts.
\item Analysts submit existing Python programs with minimal adjustments to run analyses.
\item Data Owners do not have access to analysis program details nor do Analysts have direct access to the target data.
\end{enumerate}
\section{Conclusion \& Future Work}
In this paper we have described {\sc PrivFramework}\xspace, a new system for user-configurable and automated privacy regulation compliance. {\sc PrivFramework}\xspace has the potential to give data owners vastly increased control over their data while also easing organizations' compliance with local data privacy regulations. We implement {\sc PrivFramework}\xspace as well as a number of proof-of-concept applications on top of it, including a personal budgeting system.
In the future we hope to work on further use cases leveraging {\sc PrivFramework}\xspace and to extend the list of supported back-end analysis libraries. Future work may also introduce additional advanced privacy features to the policy language such as specialized differential privacy implementations.
\section{Introduction}
Accurate abundance measurements in the atmospheres of massive, upper
main sequence stars represent an important test of current models
of stellar evolution \citep{MM12,Pal13}. Accurate CNO abundances,
and nitrogen abundances in particular, are of special significance
due to their potential role as diagnostics of rotational mixing
(e.g. \citet{heg00b}, \citet{heg00a}, \citet{mey00}, \citet{brott11},
\citet{Eks12}, \citet{GH14}, \citet{Mea14}). Stellar rotation
velocities reach their peak on the main sequence among the early-B and
late-O-type stars \citep{Fuk82}, and rapid rotation is predicted to mix
CNO-processed material into the stellar atmospheres, with the resulting
nitrogen enhancement being the easiest to detect \citep{tal97, mey00,
heg00a,brott11}. Searches for enhanced nitrogen abundances among main
sequence B stars have produced mixed results \citep{mae09,hun09,NP14},
and surprisingly, no nitrogen enhancements have been found among the
Be stars, which are the most rapidly-rotating population on the main
sequence \citep{len05,dun11}.
Accurate abundance analysis for B stars is complicated by several factors:
departures from local thermodynamic equilibrium (LTE), rapid rotation
(which introduces several issues, discussed below), and, in cases such
as the Be stars, potential emission from circumstellar material.
Departure from LTE is a well known problem in early-type and evolved
stars \citep{mih78,kur79}. In the non-LTE case, the calculation must
account for the non-local radiation field in the photosphere due to
photons coming from hotter, deeper layers and from photon loss through
the outer boundary. As a result, the level populations will differ from
the Saha-Boltzmann predictions at the local electron temperature and
density. To obtain the line source functions and optical depth scales
required for the line transfer problem, the coupled equations of radiative
transfer and statistical equilibrium must be solved in a self-consistent
manner \citep{mih78,can85}.
The large broadening of spectral lines in early-type stars due to rapid
rotation results in shallow and wide lines with low continuum contrast
and, potentially, strong line blending. In such cases, only a few, strong,
spectral lines of each element can be reliably measured. The use of
only the strongest lines of a given element can compromise the accuracy
of the abundance analysis due to (typically) stronger non-LTE effects,
the confounding influence of microturbulence, and the dependence of
the equivalent widths on uncertain damping parameters. In addition,
the traditional method of estimating uncertainties from the dispersion
of the measured elemental abundance from many measured weak and strong
lines cannot be applied. In the case where only a few strong lines of
a given element are available, a detailed theoretical error analysis is
required to give the measured abundances meaning. This can be handled by
Monte Carlo simulation of the errors introduced by uncertain atomic data
(including damping widths), uncertain stellar parameters ($T_{\rm eff}$,
$\log\,g$, and the microturbulence), and the uncertainty in the measured
equivalent width \citep{sig96}.
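As an illustration only, this Monte Carlo scheme can be sketched as follows. The `equivalent_width` function below is a toy stand-in for the full non-LTE calculation (in reality each trial requires a call to the line-formation code), and the quoted uncertainties are hypothetical.

```python
import math
import random

def equivalent_width(log_gf, teff, xi_t):
    """Toy curve-of-growth stand-in: in a real analysis this would be a full
    non-LTE line-formation calculation, not a closed-form expression."""
    strength = 10.0**log_gf * math.exp(-3.0e4 / teff)
    return 100.0 * math.sqrt(strength + 0.01 * xi_t)

def monte_carlo_ew(n_trials=10000, seed=1):
    """Draw each uncertain input from a Gaussian and collect the resulting
    distribution of equivalent widths; return the median and 68% bounds."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_trials):
        log_gf = rng.gauss(-0.30, 0.04)       # oscillator strength uncertainty
        teff = rng.gauss(24000.0, 1000.0)     # effective temperature (K)
        xi_t = max(0.0, rng.gauss(5.0, 1.0))  # microturbulence (km/s)
        samples.append(equivalent_width(log_gf, teff, xi_t))
    samples.sort()
    return (samples[n_trials // 2],
            samples[int(0.16 * n_trials)],
            samples[int(0.84 * n_trials)])

median, low, high = monte_carlo_ew()
```

The 16th-to-84th percentile spread of the sampled distribution then plays the role of a one-sigma error bar on the predicted equivalent width.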
Complications due to rapid rotation include gravitational
darkening, where the stellar surface distorts and the temperature and
gravity become dependent on latitude \citep{von24}. As a consequence, the
strengths of spectral lines will be dependent on the stellar inclination
\citep{sto87}. Finally, in the case of the Be stars, there is the
possibility of contamination by circumstellar material \citep{por03}.
In this paper, the focus is on the limiting accuracy of predicted N\,{\sc
ii} equivalent widths due to uncertainties in the basic atomic data used
in the non-LTE calculation. The methodology follows that of \citet{sig96},
and the structure of the paper is as follows: In Section~(\ref{sec:previous}),
we give a brief summary of previous non-LTE calculations for N\,{\sc
ii}. In Section~(\ref{N-atom}), we discuss the atomic data used to
construct our nitrogen atom. In Section (\ref{result}), the results of
our non-LTE nitrogen calculations are given, and the error bounds on the
predicted equivalent widths due to random errors and systematic errors
are discussed. Section (\ref{disc}) gives conclusions.
\section{Previous Work}
\label{sec:previous}
\citet{duf79} investigated the atmospheric nitrogen abundance for a
number of main-sequence B-type stars using the complete linearization
method \citep{auer73} and a 13 level N\,{\sc ii} atom, which included
only singlet states. \citet{duf81} extended this work by including the
additional 14 lowest energy triplet levels. The calculation included 44
allowed radiative transitions, with the rates for 9 fully linearized.
Non-LTE and LTE equivalent widths for three singlet lines ($\lambda\,3995$,
$\lambda\,4447$, and $\lambda\,4228$)\footnote{All wavelengths in this paper
are given in Angstroms, unless otherwise noted.} and three triplet lines (
$\lambda\,4631$, $\lambda\,5045$ and $\lambda\,5680$)
were calculated for stellar $T_{\rm eff}$ between 20,000 and 32,500 K.
In general, the predicted non-LTE equivalent widths were significantly
stronger than the corresponding LTE values and the
difference increased with $T_{\rm eff}$.
An extensive nitrogen atom was introduced by \citet{bb88}
and \citet{bb89}. They constructed non-LTE and LTE equivalent width
grids of 35 N\,{\sc ii} radiative transitions in the wavelength region
between 4000\,\AA\;and 5000\,\AA, calculated over stellar $T_{\rm eff}$
between 24,000 and 33,000 K. In this work, the non-LTE populations of
energy levels with principal quantum number up to 4 were included for
N\,{\sc i} and N\,{\sc ii}, and the lowest five levels of N\,{\sc iii}
and the ground level of N\,{\sc iv} were included in the linearization
method. Results showed that there was strong non-LTE strengthening
of the predicted equivalent widths for some lines that could lower the
estimated nitrogen abundance by up to 0.25 dex.
\citet{kor99} investigated the nitrogen abundance of $\gamma\,$Peg
(B2{\sc V}) in order to test nitrogen enrichment due to rotational
mixing. For this purpose, a nitrogen atom was constructed which
consisted of 109 levels: 3 ground levels of N\,{\sc i}, the 93 lowest
energy levels of N\,{\sc ii}, the 12 lowest levels of N\,{\sc iii},
and the ground state of N\,{\sc iv}. This calculation included all
allowed radiative transitions with wavelengths less than 10 $\rm \mu m$,
with 92 transitions computed in detail; the rates for the rest were
kept fixed. \citet{kor99} also provided non-LTE and LTE equivalent grids
for 23 N\,{\sc ii} transitions at stellar $T_{\rm eff}$ between 16,000
and 32,000\,K. In general, there was good agreement with \citet{bb88},
but with some differences: firstly, the differences between LTE and
non-LTE equivalent widths were larger than those of \citet{bb88},
and the maximum differences occurred at lower $T_{\rm eff}$; secondly,
the maximum calculated equivalent widths occurred at lower $T_{\rm eff}$,
and this was attributed to the different (fixed, LTE) model atmospheres
used in the works.
Finally, \citet{prz01} performed non-LTE line formation for N\,{\sc
i}/N\,{\sc ii} in order to determine the nitrogen abundance of a number of
A and B type stars: Vega (A0V), and four late-A and early-B supergiants. In
this work, an extensive nitrogen atom was used with recent and accurate
atomic data. This work was mainly focused on studying objects with low
temperatures, $T_{\rm eff}\,\leq\,12,000\;$K, where N\,{\sc ii} is not
the dominant ionization stage. They found weak non-LTE effects on
the N\,{\sc ii} lines and suggested further investigation at higher
effective temperatures.
In conclusion, many previous studies have investigated the non-LTE problem
of N\,{\sc ii}, aiming to get accurate equivalent widths using improved
techniques and more accurate atomic data. However, none of these works
present a detailed analysis of the uncertainties of the estimated
equivalent widths, which is the main objective of the current work.
\section{The Nitrogen atom}
\label{N-atom}
\subsection{N\,{\sc ii} Atomic data}
\begin{table}
\centering{
\caption{Energy level data for the lowest 16 LS states of N\,{\sc ii} and
the ground states of N\,{\sc iii} and N\,{\sc iv}. \label{NII.Elevels}}}
\begin{center}
{\small
\begin{tabular}{crrrl}
\hline\hline
n & \multicolumn{1}{c}{Energy $(\rm cm^{-1})$} & g & $\lambda_{\rm thres}$(\AA) & Configuration\\
\hline
1 & $0.000$ & $9.0$ & \multicolumn{1}{l}{\;\;418.8} & \multicolumn{1}{l}{$\;2p^2\;\;^3P\;\; $} N\,{\sc ii} \\
2 & $15316.200$ & $5.0$ & \multicolumn{1}{l}{\;\;447.6} & \multicolumn{1}{l}{$\;2p^2\;\;^1D\;\;\;\;$}\\
3 & $32688.801$ & $1.0$ & \multicolumn{1}{l}{\;\;485.3} & \multicolumn{1}{l}{$\;2p^2\;\;^1S\;\;\;\;$}\\
4 & $ 46784.602$ & $ 5.0$ & \multicolumn{1}{l}{\;\;520.9} & \multicolumn{1}{l}{$\;2p^3\;\;^5S^o\;\;$} \\
5 & $ 92244.484$ & $ 15.0$ & \multicolumn{1}{l}{\;\;682.6} & \multicolumn{1}{l}{$\;2p^3\;\;^3D^o\;\;$} \\
6 & $109217.922$ & $ 9.0$ & \multicolumn{1}{l}{\;\;772.0} & \multicolumn{1}{l}{$\;2p^3\;\;^3P^o\;\;$}\\
7 & $144187.938$ & $5.0$ & \multicolumn{1}{l}{1057.5} & \multicolumn{1}{l}{$\;2p^3\;\;^1D^o\;\;\;$}\\
8 & $149012.406$ & $9.0$ & \multicolumn{1}{l}{1114.4} & \multicolumn{1}{l}{$\;3s\;\;\;^3P^o\;\;$} \\
9 & $149187.797$ & $3.0$ & \multicolumn{1}{l}{1116.5} & \multicolumn{1}{l}{$\;3s\;\;^1P^o\;\;$} \\
10 & $155126.734$ & $3.0$ & \multicolumn{1}{l}{1195.8} & \multicolumn{1}{l}{$\;2p^3\;\;^3S^o\;\;$}\\
11 & $164610.766$ & $3.0$ & \multicolumn{1}{l}{1348.8} & \multicolumn{1}{l}{$\;3p\;\;\;^1P\;\;\;$} \\
12 & $166615.188$ & $15.0$ & \multicolumn{1}{l}{1386.3} & \multicolumn{1}{l}{$\;3p\;\;^3D\;\;$} \\
13 & $166765.656$ & $3.0$ & \multicolumn{1}{l}{1389.2} & \multicolumn{1}{l}{$\;2p^3\;\;^1P^o\;\;$}\\
14 & $168892.203$ & $3.0$ & \multicolumn{1}{l}{1431.5} & \multicolumn{1}{l}{$\;3p\;\;\;^3S\;\;$} \\
15 & $170636.375$ & $9.0$ & \multicolumn{1}{l}{1468.1} & \multicolumn{1}{l}{$\;3p\;\;\;^3P\;\;$} \\
16 & $174212.031$ & $5.0$ & \multicolumn{1}{l}{1549.5} & \multicolumn{1}{l}{$\;3p\;\;^1D\;\;\;$} \\
\ldots & & & & \\
94 & $238750.300$ & $15.0$ & \multicolumn{1}{l}{\;\;261.3} & \multicolumn{1}{l}{$\;2p\;\;\;^2P^o\;\;$} N\,{\sc iii} \\
\ldots & & & & \\
106 & $621454.625$ & $1.0$ & \multicolumn{1}{l}{\;\;160.0} & \multicolumn{1}{l}{$\;2s^2\;\;^1S\;\;$} N\,{\sc iv} \\
\hline
\end{tabular}}
\end{center}
\vspace{0.1in}
\end{table}
Table~\ref{NII.Elevels} lists the experimental values for the first 16
N\,{\sc ii} energy levels, taken from \citet{moo93} and available through
the NIST database.\footnote{http://www.nist.gov/pml/data/asd.cfm} The N\,{\sc ii} atom itself includes 93 energy levels,
complete through $n=6$. The oscillator strengths and photoionization
cross-sections were taken from the Opacity Project \citep{luo89},
through the TOPBASE database \citep{cun93}. In total, 580 radiative
transitions were included in the calculation, representing all
transitions between the included energy levels with $f\,\geq\,10^{-3}$.
Table~\ref{rbb-trans-data} lists the atomic data for a number of
bound-bound radiative transitions commonly used in N\,{\sc ii} abundance
determinations.
Note that the non-LTE calculation computes populations
for the total LS energy levels; populations for fine structure levels are
obtained by assuming these levels are populated relative to their
statistical weights. This is a very good approximation in a stellar
atmosphere due to the small energy spacing of the fine structure levels
and the large rates of collisional transitions between these levels.
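This weighting is simple to make concrete; a minimal sketch (the population value is arbitrary):

```python
def fine_structure_populations(n_ls, g_values):
    """Split an LS-state population over its fine-structure levels in
    proportion to their statistical weights g_J = 2J + 1."""
    g_total = sum(g_values)
    return [n_ls * g / g_total for g in g_values]

# The 3s ^3P^o state (level 8 of Table 1, g = 9) has J = 0, 1, 2 sublevels
# with g_J = 1, 3, 5, so it divides as 1/9 : 3/9 : 5/9 of the LS population:
pops = fine_structure_populations(9.0e10, [1, 3, 5])
```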
\begin{table}
\centering{
\caption{Atomic data for several fine-structure N\;{\sc ii} transitions of interest.
\label{rbb-trans-data}}}
\begin{center}
{\small
\begin{tabular}{rrlcr}
\hline\hline
$\lambda\,$(\AA) & $A_{ul} (s^{-1})$ & Transition & $(l-u)$ & $\gamma_s$ (\AA) \\
\hline
3955.0 & $1.203^{+07}$& $3s \,^3P^o\,(1)\rightarrow\,3p \,^1D\,(2)$& 8 - 16 &$3.9^{-02}$\\
3995.0 & $1.386^{+08}$ & $3s \,^1P^o\,(1)\rightarrow\,3p \,^1D\,(2)$& 9 - 16 &$3.9^{-02}$\\
4601.5 & $2.325^{+07}$ & $3s \,^3P^o\,(1)\rightarrow\,3p \,^3P\,(2)$& 8 - 15 &$4.5^{-02}$\\
4607.2 & $3.310^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(0)\rightarrow\hspace{0.9cm}(1)$} & &\\
4613.9 & $2.227^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(1)\rightarrow\hspace{0.9cm}(1)$} & &\\
4621.4 & $9.474^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(1)\rightarrow\hspace{0.9cm}(0)$} & &\\
4630.5 & $7.878^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(2)\rightarrow\hspace{0.9cm}(2)$} & &\\
4643.1 & $4.611^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(2)\rightarrow\hspace{0.9cm}(1)$} & &\\
4447.0 & $1.174^{+08}$ &$3p \,^1P\;\;(1)\rightarrow\,3d \,^1D^o(2)$& 11 - 19 &$9.0^{-02}$\\
4987.4 & $7.474^{+07}$ &$3p \,^3S\;\;(1)\rightarrow\,3d \,^3P^o\,(0)$& 14 - 21 &$7.0^{-02}$\\
4994.4 & $7.583^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(1)\,\rightarrow\hspace{0.9cm}(1)$} & &\\
5007.3 & $7.956^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(1)\,\rightarrow\hspace{0.9cm}(2)$} & &\\
5001.1 & $9.719^{+07}$ &$3p \,^3D\;\;(1)\rightarrow\,3d \,^3F^o\,(2)$& 12 - 18 &$6.7^{-02}$\\
5001.5 & $1.046^{+08}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(2)\,\rightarrow\hspace{0.9cm}(3)$} & &\\
5005.1 & $1.155^{+08}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(3)\,\rightarrow\hspace{0.9cm}(4)$} & &\\
5016.4 & $1.581^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(2)\,\rightarrow\hspace{0.9cm}(2)$} & &\\
5025.7 & $1.055^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(3)\,\rightarrow\hspace{0.9cm}(3)$} & &\\
5040.7 & $4.722^{+05}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(3)\,\rightarrow\hspace{0.9cm}(2)$} & &\\
5002.7 & $8.661^{+06}$ &$3s \,^3P^o\,(0)\rightarrow\,3p \,^3S\,\;(1)$& 8 - 14 &$5.0^{-02}$\\
5010.6 & $2.165^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(1)\,\rightarrow\hspace{0.9cm}(1)$} & &\\
5045.1 & $3.481^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(2)\,\rightarrow\hspace{0.9cm}(1)$} & &\\
5666.6 & $3.608^{+07}$ &$3s \,^3P^o\,(1)\rightarrow\,3p \,^3D\,\;(2)$& 8 - 12 &$6.1^{-02}$\\
5676.0 & $2.916^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(0)\,\rightarrow\hspace{0.9cm}(1)$} & &\\
5679.6 & $5.194^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(2)\,\rightarrow\hspace{0.9cm}(3)$} & &\\
5686.2 & $1.875^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(1)\,\rightarrow\hspace{0.9cm}(1)$} & &\\
5710.8 & $1.229^{+07}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(2)\,\rightarrow\hspace{0.9cm}(2)$} & &\\
5730.7 & $1.221^{+06}$ &\multicolumn{1}{l}{$\hspace{0.9cm}(2)\,\rightarrow\hspace{0.9cm}(1)$} & &\\
6482.1 & $2.913^{+07}$ &$3s \,^1P^o\,(1)\rightarrow\,3p \,^1P\;\;(1)$& 9 - 11 &$1.3^{-01}$\\
\hline
\end{tabular}}
\end{center}
\vspace{0.1in}
\flushleft
\noindent{Note: Stark widths, $\gamma_s$, were calculated assuming an electron number density of $10^{+16}\;\rm cm^{-3}$ and a temperature of 30,000~K. The J values of the fine structure levels of each LS state are shown in brackets.}
\end{table}
Thermally-averaged collision strengths for bound-bound collisional
transitions between the lowest 23 LS states of N\,{\sc ii} were
taken from \citet{hud04,hud05a}. These were calculated in a 23 state,
close-coupling calculation using the ${\cal R}$-Matrix method. The impact
parameter approximation of \citet{sea62} was used for the remaining
bound-bound collision strengths for allowed transitions. The collision
strengths for forbidden transitions were assumed to be 0.1. The rates
of collisional ionization of N\,{\sc ii} energy levels to the N\,{\sc
iii} ground state were estimated using the procedure of \citet{sea62},
where the rate is proportional to the photoionization cross section at
threshold, as given in \citet{Jef68}.
The line profiles for all radiative transitions included natural
broadening, thermal broadening, and pressure broadening due to collisions
with charged and neutral particles. Quadratic Stark broadening,
due to quasi-static collisions with electrons, represents the most
important collisional contributor to the line width in the atmospheres
of hot stars. As the N\,{\sc ii} transitions are often strong, it
is important to have accurate Stark widths. For this reason, we have
calculated the Stark widths using the method developed for the Opacity
Project \citep{sea88}. We compare these Stark widths to the experimental
values of \cite{kon02} and to calculations using the semi-classical
approximation of \citet{SBS71} and the commonly-used formula of
\citet{kur79}, which represents a fit to the widths of \citet{SBS71},
in Figure~\ref{gamma_ratio_comp}. The comparison assumes $T_{\rm e}$=
28,000 K and the figure shows Stark width versus the effective principal
quantum number ($n^{\rm eff}$) of the upper level, defined for the
$i^{th}$ energy level as
\begin{equation*}
(n^{\rm eff}_i)^2=\frac{Z^2\,{\cal R}}{I-E_i}\,,
\end{equation*}
where $\cal R$ is the Rydberg constant, $I$ and $E_i$ are the ionization
energy of the atom and the energy of the $i^{th}$ state, and $Z$ is the
core charge.
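For a hydrogenic ion of core charge $Z$ this is $n^{\rm eff}_i = Z\sqrt{{\cal R}/(I-E_i)}$, and the 3s levels of N\,{\sc ii} in Table~\ref{NII.Elevels} should come out near $n^{\rm eff}\approx 2.2$. A quick numerical check (the Rydberg value is approximate):

```python
import math

RYDBERG = 109733.0  # cm^-1, approximate Rydberg constant for nitrogen

def n_eff(energy, ionization_limit, z=2):
    """Effective principal quantum number, n_eff = Z * sqrt(R / (I - E)),
    with level energies in cm^-1 and Z = 2 for singly ionized nitrogen."""
    return z * math.sqrt(RYDBERG / (ionization_limit - energy))

# Level 8 of Table 1 (3s ^3P^o at 149012.406 cm^-1); the N II ionization
# limit corresponds to the N III ground state at 238750.3 cm^-1:
n_3s = n_eff(149012.406, 238750.3)
```

This gives $n^{\rm eff}\approx 2.2$, i.e.\ a quantum defect of about 0.8 for the 3s electron, as expected for a penetrating s orbital.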
Figure~\ref{gamma_ratio_comp} shows that the Stark widths
calculated following \citet{sea88} agree best with the experimental
values of \citet{kon02} at $T_{\rm e}=$ 28,000 K. At lower temperatures,
however, the \citet{sea88} and \citet{SBS71} methods are of comparable
accuracy, with the \citet{sea88} widths tending to lie above experiment
and the \citet{SBS71} widths below it. We note that the formula adopted by
\citet{kur79} as an overall fit to the \citet{SBS71} widths gives
Stark widths that are systematically too small.
Finally, the thermal widths of the lines included the contribution
of microturbulence, $\xi_t$. Microturbulence is an important physical
process in stellar atmospheres that can affect the strength of stronger
spectral lines \citep{gra92}. Microturbulence represents the dispersion
of a non-thermal velocity field on a scale smaller than unit optical
depth which acts to broaden the atomic absorption. Microturbulence
is a confounding factor in abundance determinations. As weak lines
on the linear portion of their curves of growth are insensitive to
microturbulence, forcing the abundances from strong and weak lines to
agree can fix its value. However, if only strong lines are available
for the abundance analysis, microturbulence may represent a serious
limitation to the achievable accuracy. As to its physical origin,
\citet{cant09} show that the iron-peak in stellar opacities can lead to
the formation of convective cells close to the surface of hot stars which
could be the origin of such turbulence, through energy dissipated
from gravitational and pressure waves propagating outwardly from these
convective cells.
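The effect on the line width is direct: the microturbulent velocity adds in quadrature to the thermal speed in the Doppler width, $\Delta\lambda_D = (\lambda/c)\sqrt{2kT/m + \xi_t^2}$. A sketch for an N\,{\sc ii} line (the numerical values are illustrative):

```python
import math

K_B = 1.380649e-16    # Boltzmann constant (erg/K)
AMU = 1.6605390e-24   # atomic mass unit (g)
C_KMS = 2.99792458e5  # speed of light (km/s)

def doppler_width(wavelength, temp, mass_amu, xi_t):
    """Doppler width (same units as wavelength) including microturbulence:
    dlambda = (lambda / c) * sqrt(2kT/m + xi_t^2)."""
    v_thermal_sq = 2.0 * K_B * temp / (mass_amu * AMU) * 1.0e-10  # (km/s)^2
    return wavelength * math.sqrt(v_thermal_sq + xi_t**2) / C_KMS

# lambda 3995 of N II (mass 14) at T = 24,000 K, with and without xi_t:
dl_turb = doppler_width(3995.0, 24000.0, 14.0, 5.0)     # ~0.097 Angstrom
dl_thermal = doppler_width(3995.0, 24000.0, 14.0, 0.0)  # ~0.071 Angstrom
```

For nitrogen at B-star temperatures the thermal speed is only $\approx 5\;\rm km\,s^{-1}$, so a microturbulence of the same order broadens the Doppler core appreciably.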
\begin{figure}
\includegraphics[scale= 0.45]{fig1.eps}
\caption{Ratio of theoretical-to-experimental quadratic Stark width versus
the effective quantum number of the upper level at $T_{\rm e}=28\,000\;$K.
The theoretical
widths were calculated using three different approximations with the
sources noted in the legend. The dotted lines indicate a factor of two uncertainty.
\label{gamma_ratio_comp}}
\end{figure}
\begin{table}
\centering{
\caption{Average ratios of calculated to experimental Stark widths.
S88 refers to \citet{sea88}, K79 refers to
\citet{kur79}, and SB71 refers to \citet{SBS71}.
\label{gamma_ratios}}}
\begin{center}
{\small
\begin{tabular}{lrrr}
\hline\hline
Temperature & \multicolumn{3}{c}{Average $\left(\gamma_{\rm calc}/\gamma_{\rm exp}\right)$} \\
(K) & S88 & K79 & SB71\\
\hline
8000 & 2.12 & 0.23 & 0.71\\
15000 & 1.92 & 0.22 & 0.61\\
28000 & 1.55 & 0.21 & 0.40\\
\hline
\end{tabular}}
\end{center}
\vspace{0.1in}
\end{table}
\subsection{N\,{\sc iii} \& N\,{\sc iv} Atomic Data}
N\,{\sc iii} and N\,{\sc iv} energy levels were taken from
\citet{moo93}. In total, the twelve lowest LS levels of N\,{\sc iii}
and the ground state of N\,{\sc iv} were included in the calculation.
The oscillator strengths of radiative bound-bound transitions of N\,{\sc
iii} were obtained from \citet{bell95} and \citet{fer99}, available from
the NIST database. The photoionization cross-sections were also taken
from \citet{fer99}. Similarly, the
collision strengths of excitation and ionization were calculated using
the impact parameter approximation of \citet{sea62}.
\section{Calculations}\label{result}
The N\,{\sc ii} non-LTE line formation calculation was carried out for
all combinations of nine $T_{\rm eff}$, between 15,000 and 31,000 K in
steps of 2000~K, three surface gravities, $\log g$ (3.5, 4 and 4.5), four
microturbulent velocities, $\xi_t$ (0, 2, 5 and $10\;\rm km\,s^{-1}$),
and seven nitrogen abundances, between 6.83 and 8.13~dex. The MULTI code,
v2 \citep{car92}, was used. MULTI solves the statistical equilibrium
and radiative transfer equations simultaneously in an iterative method
using the approximate lambda-operator technique of \cite{sch81}. The
option of using a local approximate lambda-operator was used in the
current calculations. A solar nitrogen abundance $\epsilon_{N}$= 7.83
was adopted from \cite{gre10}.
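For orientation, the full grid comprises $9\times3\times4\times7 = 756$ model combinations; a sketch of its enumeration (uniform abundance spacing is an assumption, since the text gives only the endpoints):

```python
from itertools import product

teff_grid = [15000 + 2000 * i for i in range(9)]  # 15,000 ... 31,000 K
logg_grid = [3.5, 4.0, 4.5]                       # surface gravities
xi_grid = [0.0, 2.0, 5.0, 10.0]                   # microturbulence (km/s)
# Seven abundances between 6.83 and 8.13 dex; uniform spacing is assumed.
abund_grid = [round(6.83 + i * (8.13 - 6.83) / 6, 3) for i in range(7)]

# Every (Teff, log g, xi_t, abundance) combination is a separate model.
models = list(product(teff_grid, logg_grid, xi_grid, abund_grid))
```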
The fixed, background model photospheres providing $T(\tau)$ and $P(\tau)$
for the non-LTE calculation were taken from the LTE, line blanketed,
atmospheres of ATLAS9 \citep{kur93}. ATLAS9 was also used to provide the
mean intensity within the stellar atmosphere as a function of optical
depth, $J_{\nu}(\tau)$, used to compute all of the photoionization and
recombination rates which were kept fixed during the calculations.
Using LTE model atmospheres can be a source of error, particularly for
hotter stars; however, previous studies have shown that a comprehensive
inclusion of line-blanketing is more important than non-LTE effects up
to stellar effective temperatures of $\approx\,$30,000 K \citep{prz01}.
As MULTI was originally developed for the atmospheres of cool stars,
modifications to the background opacities are required to make it
suitable for early-type stars; its default opacity package was
replaced with the extensive package that is available through ATLAS9
\citep{sig96b}.
\subsection{Ionization Balance and Departure Coefficients}
\begin{figure}
\centering
\subfloat[]{
\includegraphics[scale= 0.45]{fig2a.eps}
\label{N2_frac}
}
\vspace{0.15cm}
\subfloat[]{
\includegraphics[scale= 0.45]{fig2b.eps}
\label{N3_frac}
}
\caption{Panel~(a) shows the fraction of N\,{\sc ii}
as a function of $\log\tau_{5000}$ for several stellar $T_{\rm eff}$
in atmospheres with $\log g$= 4.0, $\xi_t= 5\;\rm km\,s^{-1}$, and
$\epsilon_N$= 7.83. The circles represent the non-LTE fraction and
solid lines, the LTE fractions. Panel~(b) is the same,
but for N\,{\sc iii}. In both panels, the line colour indicates $T_{\rm eff}$
as 15,000\,K (black), 23,000\,K (blue), and 31,000\,K (purple).\label{ion_frac}}
\end{figure}
Figure~\ref{ion_frac} shows the predicted LTE and non-LTE ionization
fractions of N\,{\sc ii} (top panel) and N\,{\sc iii} (bottom panel) as a
function of the continuum optical depth at $5000\,$\AA, $\log\tau_{5000}$,
for the range of the $T_{\rm eff}$ considered. These illustrated models
assume $\log\,g=4.0$, $\xi_t= 5\;\rm km\,s^{-1}$, and the solar nitrogen
abundance.
At $T_{\rm eff}$ less than $\sim 21,000\,$K, N\,{\sc ii} is the dominant
ionization stage throughout the formation region of the optical N\,{\sc
ii} lines, $-2\le\,\log \tau_{5000} \le 0$, and there is little deviation
from the LTE ionization fraction. Table~\ref{NII.Elevels} shows that the
six lowest energy levels of N\,{\sc ii} have photoionization thresholds
shortward of the Lyman limit at 912\,\AA, which remains optically
thick throughout most of the atmosphere. This leads to a strongly local
photoionizing radiation field, i.e.\ $J_{\nu}\,\approx\,B_{\nu}(T_e)$.
It is only by level 7, $2p^3\;^1D^o$, that the photoionization thresholds
begin to lie in the short-wavelength region of the Balmer continuum where
the photoionizing radiation field can be both hot and non-local. This high
radiation temperature in the Balmer continuum can drive overionization;
however, even at the highest $T_{\rm eff}$ considered here, the predicted
non-LTE over-ionization of N\,{\sc ii} is quite small in the line
formation region (Figure~\ref{ion_frac}).
\begin{figure*}
\centering
\subfloat[]{
\includegraphics[scale= 0.4]{fig3a.eps}
\label{Departure_Coeff_T15000}
}
\hspace{0.1cm}
\vspace{0.005cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig3b.eps}
\label{Departure_Coeff_T21000}
}
\vspace{0.005cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig3c.eps}
\label{Departure_Coeff_T25000}
}
\hspace{0.1cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig3d.eps}
\label{Departure_Coeff_T29000}
}
\caption{Non-LTE departure coefficients, $\log\beta_i$, for the
lowest 16 energy levels of N\,{\sc ii} of Table~\ref{NII.Elevels} (blue
lines identified by level number) and the ground level of N\,{\sc iii}
(green line) as a function of $\tau_{5000}$. The stellar $T_{\rm eff}$
shown are 15,000\,K (panel~a), 21,000\,K (panel~b), 25,000\,K (panel~c),
and 29,000\,K (panel~d). The surface gravity and microturbulent velocity
were fixed at $\log\,g=4.0$ and $\xi_t=5\;\rm km\,s^{-1}$, and the solar
nitrogen abundance was assumed. \label{Departure_Coeffs}}
\end{figure*}
The predicted departure coefficients for the 16 lowest LS states of N\,{\sc
ii} (Table~\ref{NII.Elevels}) and the ground state of N\,{\sc iii},
for four values of $T_{\rm eff}$
in models with $\log g=$ 4.0, $\xi_t= 5.0\;\rm km\,s^{-1}$, and the
solar nitrogen abundance, are shown in Figure \ref{Departure_Coeffs}. The
departure coefficient of the $i^{th}$ energy level, $\beta_i$, is defined
as the ratio of the non-LTE number density to the corresponding
LTE value computed by the Saha/Boltzmann equations for the local values
of $T_e$ and $N_e$,
\begin{equation}
\beta_i\equiv\frac{n_i}{n_i^*(T_e,N_e)}\;,
\end{equation}
where $n_i^*$ and $n_i$ are the predicted LTE and non-LTE number densities
of the $i^{th}$ level, respectively. Among the levels of particular
interest are 9 and 16, the lower and upper levels of $\lambda\,3995$,
and 9 and 11, the lower and upper levels of $\lambda\,6482$
(see Table~\ref{rbb-trans-data}). These transitions will be explicitly
discussed in the remainder of the text.
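As an illustration, the departure coefficients can be evaluated from model populations with a few lines of Python. This is only a sketch: the function names are ours, and the Saha ionization factor is assumed to be folded into the total N\,{\sc ii} density rather than computed explicitly.

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def lte_populations(energies_ev, g, T_e, n_total):
    """Boltzmann (LTE) populations of the bound levels at electron
    temperature T_e; the Saha ionization balance is assumed already
    folded into n_total, the total N II number density (a sketch)."""
    boltz = np.asarray(g) * np.exp(-np.asarray(energies_ev) / (K_B_EV * T_e))
    return n_total * boltz / boltz.sum()

def departure_coefficients(n_nlte, n_lte):
    """beta_i = n_i / n_i*(T_e, N_e), the ratio of the non-LTE to the
    LTE population of level i."""
    return np.asarray(n_nlte) / np.asarray(n_lte)
```

By construction, if the non-LTE solution coincides with the LTE one, every $\beta_i$ is unity.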
At $T_{\rm eff}=15,000\,$K, Figure~\ref{Departure_Coeff_T15000},
all of the low-lying levels ($\le 10$) are in LTE throughout the
atmosphere. As $T_{\rm eff}$ increases, there is a systematic trend
for overionization of all of the low-lying levels to set in. This is clear
by $T_{\rm eff}=25,000\,$K, Figure~\ref{Departure_Coeff_T25000}, where
near $\log\tau_{5000}\approx -0.5$, all of the low-lying levels share the
same departure coefficient, $\beta$, due to strong collisional coupling,
with $\beta<1$ due to photoionization in the short wavelength portion
of the Balmer continuum. By $T_{\rm eff}=25,000\,$K, the upper levels
with thresholds in this region have sufficient population and N\,{\sc ii}
is no longer the dominant ionization stage, allowing the overionization
to occur. This trend continues for $T_{\rm eff}=29,000\,$K. For $T_{\rm
eff}<21,000\,$K, the overpopulation of the N\,{\sc iii} ground state leads
to a general overpopulation of the higher N\,{\sc ii} excited states,
although this effect diminishes by $T_{\rm eff}=25,000\,$K.
\begin{figure*}
\centering
\subfloat[]{
\includegraphics[scale= 0.4]{fig4a.eps}
\label{Equiv_Width_3995}
}
\hspace{0.3cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig4c.eps}
\label{Equiv_Width_6482}
}
\vspace{0.2cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig4b.eps}
\label{Equiv_Width_ratio_3995}
}
\hspace{0.3cm}
\vspace{0.2cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig4d.eps}
\label{Equiv_Width_ratio_6482}
}
\caption{The LTE and non-LTE equivalent widths of N\,{\sc ii}
$\lambda\,3995$ (panel~a) and $\lambda\,6482$ (panel~b), respectively,
as a function of $T_{\rm eff}$ for $\log g=$ 4.0, $\xi_t= 5\;\rm km\,s^{-1}$ and $
\epsilon_N=$ 7.95. The blue symbols are the current results and the red
symbols, \citet{kor99}. The two bottom panels, (c) and (d), give the ratio of
the non-LTE and LTE equivalent widths. \label{Equiv_Width}}
\end{figure*}
\subsection{N\,{\sc ii} Equivalent Widths}
Figure~\ref{Equiv_Width} shows the predicted LTE and non-LTE equivalent
widths (in milli-\AA) for N\,{\sc ii} $\lambda\,3995$ and $\lambda\,6482$
representing transitions from levels 9 to 16 $(3s\,^1P^o\rightarrow
3p\,^1D)$ and levels 9 to 11 ($ 3s\,^1P^o \rightarrow 3p\,^1P$),
respectively. For comparison, the predictions of \citet{kor99}
are also shown. In general, there is reasonable agreement between
the two calculations. For $\lambda\,3995$, we find
smaller deviations from LTE near the line's maximum strength at
$T_{\rm eff}\approx 25,000\,$K (Figure~\ref{Equiv_Width_ratio_3995}).
For $\lambda\,6482$, agreement is good for $T_{\rm eff} < 24,000\,$K;
however, for hotter $T_{\rm eff}$, we again predict smaller departures
from the LTE equivalent width than \citet{kor99} (Figure~\ref{Equiv_Width_ratio_6482}).
\begin{figure*}
\centering
\subfloat[]{
\includegraphics[scale= 0.4]{fig5a.eps}
\label{Snu_3995_T21000}
}
\vspace{0.08cm}
\hspace{0.3cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig5b.eps}
\label{Snu_3995_T25000}
}
\vspace{0.01cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig5c.eps}
\label{Snu_6482_T21000}
}
\hspace{0.3cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig5d.eps}
\label{Snu_6482_T25000}
}
\caption{The line source functions as a function of $\log\tau_{5000}$
for N\,{\sc ii} $\lambda\,3995$
and $\lambda\,6482$ at two $T_{\rm eff}$, 21,000 and 25,000\,K. Both
models assumed $\log g=4.0$, $\xi_t= 5\;\rm km\,s^{-1}$ and the solar nitrogen
abundance. The departure coefficients of the upper and lower energy
levels are shown in the panel inserts. The arrows mark the depth
of formation of the line centre flux as defined by the contribution
function of \citet{ajn91}.}
\label{Source_fun}
\end{figure*}
In general, the non-LTE equivalent widths are predicted to be larger than
the corresponding LTE values. These differences result from deviations of
the line source function from the local Planck function and the non-LTE correction
to the line optical depth scale. In terms of the departure coefficients
of the upper and lower levels, $\beta_u$ and $\beta_l$, the line source function is
\begin{equation}
S_l =\frac{2h\nu^3}{c^2} \left(\frac{\beta_l}{\beta_u}e^{\frac{h\nu}{kT}}-1\right)^{-1}\,,
\end{equation}
and the line optical depth scale is,
\begin{equation}
d\tau^l_{\nu}=-\frac{h\nu}{4\pi}\left(\beta_l\,n^*_l - \beta_u\,n^*_u\right) \, \phi_{\nu}\,dz \,.
\end{equation}
Here $\phi_{\nu}$ is the line absorption profile, $n^*$ are the
LTE level populations, and $dz$ is the physical step (in cm) along
the ray. Complete redistribution, the equality of the line emission
and absorption profiles, has been assumed \citep{mih78}. Note that if
$h\nu\gg kT$ (i.e.\ the photon energy greatly exceeds the local thermal energy),
we have the scalings $S_l \propto \beta_u/\beta_l$ and $d\tau^l_{\nu}
\propto \beta_l$. As the Eddington-Barbier relation states that the
emergent intensity is characteristic of the source function at an optical
depth of $\approx 2/3$, we see that $\beta_l$ affects how deeply we see
into the atmosphere, whereas $\beta_u/\beta_l$ controls the value of the
source function at this point. This emphasizes that even in cases where
$S_l=B_{\nu}(T_{\rm e})$, i.e.\ $\beta_u/\beta_l=1$, there can still be
large non-LTE effects, for example, if $\beta_l=\beta_u \ll 1$.
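These scalings are easy to verify numerically. The short Python sketch below evaluates $S_l/B_\nu$ directly from the two departure coefficients, using the line source function above; the constant and function name are ours, not from any particular package.

```python
import numpy as np

H_OVER_K = 4.799243073e-11  # h/k_B in K*s, so h*nu/(k*T) = H_OVER_K*nu/T

def source_to_planck_ratio(beta_l, beta_u, nu_hz, T_kelvin):
    """S_l / B_nu(T_e) for a line, from the source function above:
    both S_l and B_nu share the 2 h nu^3 / c^2 prefactor, so the ratio
    depends only on beta_l/beta_u and h nu / k T."""
    x = np.exp(H_OVER_K * nu_hz / T_kelvin)
    return (x - 1.0) / ((beta_l / beta_u) * x - 1.0)

# beta_l = beta_u gives S_l = B_nu even if both are far from unity;
# beta_u < beta_l lowers the source function, strengthening the line.
```

For $\lambda\,3995$ ($\nu\approx 7.5\times10^{14}\,$Hz) at $T\approx 21,000\,$K, an under-populated upper level ($\beta_u<\beta_l$) drives the ratio below one, consistent with the non-LTE strengthening discussed below.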
\begin{figure*}
\centering
\subfloat[]{
\includegraphics[scale= 0.40]{fig6a.eps}
\label{Growth_Curve__3995A_T21000}
}
\vspace{0.005cm}
\hspace{0.1cm}
\subfloat[]{
\includegraphics[scale= 0.40]{fig6b.eps}
\label{Growth_Curve__6482A_T21000}
}
\vspace{0.005cm}
\subfloat[]{
\includegraphics[scale= 0.40]{fig6c.eps}
\label{Growth_Curve__3995A_T25000}
}
\hspace{0.1cm}
\subfloat[]{
\includegraphics[scale= 0.40]{fig6d.eps}
\label{Growth_Curve__6482A_T25000}
}
\caption{LTE and non-LTE curves of growth for N\,{\sc ii}
$\lambda\,6482$ and $\lambda\,3995$ as a function of nitrogen abundance,
$\rm \epsilon_N$, at the indicated $T_{\rm eff}$ with
$\log g$= 4.0, and $\xi_t= 5\;\rm km\,s^{-1}$. The blue, solid lines
represent the predicted non-LTE equivalent widths and the red, dashed
lines represent the corresponding LTE values.}
\label{Growth_Curve}
\end{figure*}
Panels~\subref{Snu_3995_T21000} and \subref{Snu_3995_T25000} of
Figure~\ref{Source_fun} show the ratio of the line source function
to Planck function for N\;{\sc ii} $\lambda\,3995$ as a function of
$\tau_{5000}$ at two $T_{\rm eff}$, 21,000 and 25,000 K. The departure
coefficients of the upper and lower levels are shown as inserts in
each figure.
Consider first $\lambda\,3995$, transition $9\rightarrow 16$. The
depth of formation of the line centre flux is marked with an arrow,
following the flux contribution function proposed by \cite{ajn91}.
At $T_{\rm eff}$ of 21,000\;K there is a small overpopulation of the
ninth energy level and a small under-population of the sixteenth level,
while at $T_{\rm eff}$ of 25,000\;K, there is under-population of both
levels. In both cases, the main effect is the larger under-population of
the upper energy level which acts to reduce the line source function in
the line-forming region leading to non-LTE strengthening of the line.
The behaviour of $\lambda\,6482$, transition $9\rightarrow 11$, is
qualitatively similar. However, for $T_{\rm eff}$ greater than 28,000\;K,
there is a non-LTE weakening of the line driven by the overionization
of N\,{\sc ii} in such models.
For both $\lambda\,3995$ and $\lambda\,6482$, non-LTE strengthening
of the lines reaches a maximum near $T_{\rm eff}\approx 25,000\;$K
where the lines are about 20\% and 30\% stronger than the LTE
predictions, respectively (Figures~\ref{Equiv_Width_3995} and
\ref{Equiv_Width_6482}). This overall behaviour is in agreement
with the calculation of \citet{kor99}, although our non-LTE
strengthening of $\lambda\,3995$ is smaller and our non-LTE weakening of
$\lambda\,6482$ at high $T_{\rm eff}$ is also smaller. The stronger
non-LTE effects found by \citet{kor99} may reflect the smaller number of
bound-bound radiative transitions they included, 266 transitions in total,
with 92 included in the linearization procedure and the rest kept at
fixed rates. In the current work, a total of 580 radiative transitions
were included, none with fixed rates, representing all LS transitions
with oscillator strengths greater than or equal to $10^{-3}$.
We confirm that the larger number of radiative transitions included
in the present work explains most of the differences with \citet{kor99}
(hereafter K99) by constructing a 43~LS-level N\,{\sc ii} atom that
includes the same set of allowed rbb transitions as K99. Using this
atom, their non-LTE equivalent widths were reproduced in good agreement
as a function of $T_{\rm eff}$ for the model with $\log g$= 4.0,
$\xi_t= 5\,\rm km\,s^{-1}$ and $\epsilon_N=7.95$ (Table~5 of K99).
\begin{figure*}
\centering
\subfloat[]{
\includegraphics[scale= 0.4]{fig7a.eps}
\label{Snu_3995A_T21000_diff_abund}
}
\hspace{0.3cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig7b.eps}
\label{Snu_6482A_T21000_diff_abund}
}
\vspace{0.1cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig7c.eps}
\label{Snu_3995A_T25000_diff_abund}
}
\vspace{0.1cm}
\hspace{0.3cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig7d.eps}
\label{Snu_6482A_T25000_diff_abund}
}
\caption{The line source functions of N\;{\sc ii} $\lambda\,3995$ and $\lambda\,6482$ for two different
nitrogen abundances, $\epsilon_N=6.83$ (solid lines) and $8.53$ (dashed lines). The $T_{\rm eff}$
and $\log\,g$ of the models are as indicated. The microturbulent velocity was
$\xi_t=5\;\rm km\,s^{-1}$.}
\label{Snu_diff_abund}
\end{figure*}
There is also a dependence of the size of the predicted non-LTE effects on
the nitrogen abundance. This is illustrated in Figure~\ref{Growth_Curve}
which shows curves of growth, $\log (W_\lambda/\lambda)$ versus
$\epsilon_N$, for both $\lambda\,3995$ and $\lambda\,6482$ at two
$T_{\rm eff}$, 19,000\;K and 27,000\;K. A gravity of $\log\,g=4.0$ and
a microturbulent velocity of $\xi_t=5\;\rm km\,s^{-1}$ were assumed for
all examples. For $\lambda\,3995$, there is non-LTE strengthening at
all nitrogen abundances, with the largest effect at the highest abundance
considered. For $\lambda\,6482$, the trend reverses: non-LTE
weakening is predicted at small abundances and non-LTE strengthening
at larger abundances. The crossover occurs at abundances somewhat less
than the solar value.
This abundance effect is further explored in Figure~\ref{Snu_diff_abund}
which shows the line source functions and departure coefficients
of both $\lambda\,3995$ and $\lambda\,6482$ at the two extreme
values of the nitrogen abundance considered, $\epsilon_N=6.83$ ($-1.0\;$dex
relative to solar) and $\epsilon_N=8.53$ ($+0.7\;$dex relative to solar).
The figure shows that increasing the nitrogen abundance reduces the line source function and shifts the line formation region to smaller optical depths, $\log \tau_{5000}$.
Grids of non-LTE equivalent widths for $\lambda\,3995$
and $\lambda\,6482$ over all $T_{\rm eff}$, $\log\,g$ and
$\epsilon_N$ considered are given in Tables~\ref{multi_wv3995_vturb5_2}
and \ref{multi_wv6482_zeta5}. The microturbulent velocity was set to
$5\;\rm km\,s^{-1}$. Full grids of equivalent widths for all transitions
of Table~\ref{rbb-trans-data} over all models and microturbulences
considered are available on-line.
Finally we note that the equivalent width of $\lambda\,6482$ at
the highest temperature considered in Table~\ref{multi_wv6482_zeta5},
31,000\;K, becomes weakly negative, indicating line emission. This is a
well-known non-LTE effect that can occur when $h\nu/kT\ll 1$: the source
function becomes very sensitive to small departures from LTE and can
rise in the outer layers, even though the photospheric temperature falls
with height. This effect is extensively discussed in \cite{car92_2}
and \cite{sig96b}.
\subsection{A Multi-MULTI Analysis}
\label{MM-analysis}
In order to investigate which of the radiative and collisional
transitions included in the atom have the most significant effects on
the predicted equivalent widths of the N\,{\sc ii} lines of interest,
a series of multi-MULTI analyses was carried out \citep{car92_2}. In
a multi-MULTI analysis, a single radiative or collisional transition
is perturbed by doubling its rate, and a new converged solution is
obtained for the perturbed atom. The predicted equivalent widths from
this new converged solution are compared with the reference solution of
the unperturbed atom. Table~\ref{multi-multi_23kK} shows the top ten
radiative/collisional transitions affecting the predicted equivalent
widths of $\lambda\,6482$ and $\lambda\,3995$ for $\rm
T_{eff}\,$= 23,000 K, $\log g$= 4.0, $\xi_t\,=$5 $\rm km\,s^{-1}$, and a solar
nitrogen abundance. Corresponding to Table~\ref{multi-multi_23kK},
Figure~\ref{Snu_3995_pret1} shows the effect on the line source function
and upper and lower level departure coefficients of the top four rates
for $\lambda\,3995$.
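The procedure itself fits in a few lines of Python. The sketch below uses a hypothetical flat `atom` mapping from transition names to rates and a hypothetical `solve_eqw` solver interface as stand-ins for the MULTI machinery; neither is part of the actual code.

```python
def multi_multi_scan(atom, solve_eqw, line):
    """Sketch of a multi-MULTI sensitivity scan: double one radiative or
    collisional rate at a time, reconverge, and record the percentage
    change in the equivalent width of `line` (interfaces hypothetical)."""
    w_ref = solve_eqw(atom, line)            # unperturbed reference solution
    changes = {}
    for name, rate in atom.items():
        perturbed = dict(atom)
        perturbed[name] = 2.0 * rate         # perturb a single transition
        w_new = solve_eqw(perturbed, line)   # new converged solution
        changes[name] = 100.0 * (w_new - w_ref) / w_ref
    # rank transitions by the size of their effect on the line
    return sorted(changes.items(), key=lambda kv: -abs(kv[1]))
```

The ranked output corresponds to the percentage columns of the tables in this subsection.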
\begin{table}
\caption{multi-MULTI Analysis at $\rm T_{eff}$= 23000 K, $\log g$=4.0, and $\xi_t= 5.0\;\rm km\,s^{-1}$
\label{multi-multi_23kK}}
\begin{center}
{\small
\begin{tabular*}{0.49\textwidth}{@{}l @{}l @{}l @{}l @{}l @{}l @{}l l@{}}
\hline\hline
Transition & \multicolumn{5}{c}{Perturbed transition} & Type & $\%$\\
\hline
3995.0\,\AA & 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{rbb} &35.22\\
& 16& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p \;3p \;^1D$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & 1.67\\
& 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{cbb} & -1.43\\
& 16& $\rightarrow $& 87& \multicolumn{1}{l}{($ 2p \;3p \;^1D$} & \multicolumn{1}{l}{-$\rm high\;l,n=6 $)} & \multicolumn{1}{l}{rbb} & -1.29\\
& 16& $\rightarrow $& 22& \multicolumn{1}{l}{($ 2p \;3p \;^1D$} & \multicolumn{1}{l}{-$ 2p \;3d \;^1F^o$)} & \multicolumn{1}{l}{rbb} & -1.21\\
& 22& $\rightarrow $& 87& \multicolumn{1}{l}{($ 2p \;3d \;^1F^o$} & \multicolumn{1}{l}{-$\rm high\;l,n=6 $)} & \multicolumn{1}{l}{rbb} & -0.94\\
& 8& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p \;3s \;^3P^o$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & -0.79\\
& 16& $\rightarrow $& 25& \multicolumn{1}{l}{($ 2p \;3p \;^1D$} & \multicolumn{1}{l}{-$ 2p \;4s \;^1P^o$)} & \multicolumn{1}{l}{rbb} & -0.75\\
& 1& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p^2\;^3P$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & 0.71\\
& 12& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3p \;^3D$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{cbb} & -0.68\\ \cline{2-8}
6482.0\,\AA & 9& $\rightarrow $& 11& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{rbb} & 71.67\\
& 11& $\rightarrow $& 19& \multicolumn{1}{l}{($ 2p \;3p \;^1P$} & \multicolumn{1}{l}{-$ 2p \;3d \;^1D^o$)} & \multicolumn{1}{l}{rbb} & -6.45\\
& 11& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p \;3p \;^1P$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & 3.95\\
& 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{rbb} & 3.41\\
& 9& $\rightarrow $& 11& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{cbb} & -3.12\\
& 11& $\rightarrow $& 23& \multicolumn{1}{l}{($ 2p \;3p \;^1P$} & \multicolumn{1}{l}{-$ 2p \;3d \;^1P^o$)} & \multicolumn{1}{l}{rbb} & -2.95\\
& 19& $\rightarrow $& 87& \multicolumn{1}{l}{($ 2p \;3d \;^1D^o$} & \multicolumn{1}{l}{-$\rm high\;l,n=6 $)} & \multicolumn{1}{l}{rbb} & -2.91\\
& 7& $\rightarrow $& 11& \multicolumn{1}{l}{($2p^3 \;^1D^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{rbb} & 2.92\\
& 1& $\rightarrow $& 6& \multicolumn{1}{l}{($ 2p^2 \;^3P$} & \multicolumn{1}{l}{-$2p^3 \;^3P^o$)} & \multicolumn{1}{l}{rbb} & -2.03\\
& 1& $\rightarrow $& 6& \multicolumn{1}{l}{($ 2p^2 \;^3P$} & \multicolumn{1}{l}{-$2p^3 \;^3P^o$)} & \multicolumn{1}{l}{cbb} & 2.00\\
\hline
\end{tabular*}}
\end{center}
\noindent{ {\small Note: rbb and rbf refer to bound-bound and bound-free radiative transitions, respectively, and cbb and cbf refer to bound-bound and bound-free collisional transitions, respectively.}}
\end{table}
For $\lambda\,3995$, the equivalent width is most sensitive to its
own radiative transition rate, controlled by the oscillator strength;
doubling the oscillator strength of the $\lambda\,3995$ line
leads to an $\approx 35$\% increase in the predicted equivalent width
(Table~\ref{multi-multi_23kK}). The
increased oscillator strength shifts the depth of formation of the
line to smaller optical depths ($\log\tau_{5000}$) where the line
source function is lower, leading to an increase in the line strength.
Next in importance was the photoionization rate from the upper level
of $\lambda\,3995$ (level 16). An increase in this rate by a factor of
two leads to an increase in the predicted non-LTE equivalent width of
$\approx\,2$\%. Increased photoionization from the upper level again
acts to reduce the line source function in the line-forming regions.
The collisional bound-bound transition between the lower and upper
energy levels of $\lambda\,3995$ (levels 9 and 16) is next in
importance. Doubling the strength of this collisional transition
increases the collisional coupling between the two levels, and increased
collisional coupling tends to force LTE, i.e.\ $S_\nu$ approaches
$B_\nu$ in Figure~\ref{Snu_3995_pret3}; this raises the source function
in the line-forming region, and the line is therefore weaker.
\begin{table}
\begin{center}
\centering{
\caption{multi-MULTI Analysis at $\rm T_{eff}$= 15000 K, $ \log g$=4.0, and $\xi_t= 5.0\;\rm km\,s^{-1}$
\label{multi-multi_15kK}}
{\small
\begin{tabular*}{0.45\textwidth}{@{}l @{}l @{}l @{}l @{}l @{}l @{}l l@{}}
\hline\hline
Transition & \multicolumn{5}{c}{Perturbed transition} & Type & $\%$\\
\hline
3995.0\,\AA & 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{rbb} & 59.47\\
& 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{cbb} & -1.35\\
& 12& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3p \;^3D$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{cbb} & -0.75\\
& 8& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^3P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{rbb} & 0.52\\
& 11& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3p \;^1P$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{cbb} & -0.50\\
& 8& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^3P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{cbb} & -0.43\\
& 16& $\rightarrow $& 22& \multicolumn{1}{l}{($ 2p \;3p \;^1D$} & \multicolumn{1}{l}{-$ 2p \;3d \;^1F^o$)} & \multicolumn{1}{l}{rbb} & -0.35\\
& 16& $\rightarrow $& 25& \multicolumn{1}{l}{($ 2p \;3p \;^1D$} & \multicolumn{1}{l}{-$ 2p \;4s \;^1P^o$)} & \multicolumn{1}{l}{rbb} & -0.33\\
& 16& $\rightarrow $& 22& \multicolumn{1}{l}{($ 2p \;3p \;^1D$} & \multicolumn{1}{l}{-$ 2p \;3d \;^1F^o$)} & \multicolumn{1}{l}{cbb} & -0.33\\
& 15& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3p \;^3P$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{cbb} & -0.25\\ \cline{2-8}
6482.0\,\AA & 9& $\rightarrow $& 11& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{rbb} & 105.76\\
& 11& $\rightarrow $& 14& \multicolumn{1}{l}{($ 2p \;3p \;^1P$} & \multicolumn{1}{l}{-$ 2p \;3p \;^3S$)} & \multicolumn{1}{l}{cbb} & -4.30\\
& 11& $\rightarrow $& 19& \multicolumn{1}{l}{($ 2p \;3p \;^1P$} & \multicolumn{1}{l}{-$ 2p \;3d \;^1D^o$)} & \multicolumn{1}{l}{rbb} & -3.87\\
& 8& $\rightarrow $& 14& \multicolumn{1}{l}{($ 2p \;3s \;^3P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^3S$)} & \multicolumn{1}{l}{rbb} & 3.61\\
& 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{rbb} & 3.27\\
& 1& $\rightarrow $& 6& \multicolumn{1}{l}{($ 2p^2 \;^3P$} & \multicolumn{1}{l}{-$2p^3 \;^3P^o$)} & \multicolumn{1}{l}{rbb} & -2.15\\
& 1& $\rightarrow $& 6& \multicolumn{1}{l}{($ 2p^2 \;^3P$} & \multicolumn{1}{l}{-$2p^3 \;^3P^o$)} & \multicolumn{1}{l}{cbb} & 2.15\\
& 11& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3p \;^1P$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{cbb} & 2.06\\
& 8& $\rightarrow $& 12& \multicolumn{1}{l}{($ 2p \;3s \;^3P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^3D$)} & \multicolumn{1}{l}{rbb} & 1.89\\
& 7& $\rightarrow $& 11& \multicolumn{1}{l}{($2p^3 \;^1D^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{rbb} & 1.72\\
\hline
\end{tabular*}}}
\end{center}
\end{table}
Similar results were obtained for the multi-MULTI analysis of
$\lambda\,6482$. Doubling the oscillator strength of the transition
itself increases the equivalent width by $\approx$ 70\%, while doubling
the collision strength of this transition reduces the predicted non-LTE
line strengthening and the non-LTE equivalent width by $\approx$ 3\%.
Tables~\ref{multi-multi_15kK} and \ref{multi-multi_29kK} show the results
of a multi-MULTI analysis for the same two N\,{\sc ii} transitions but at
$T_{\rm eff}= 15,000$ and $29,000\;$K. At $\rm T_{eff}= 15,000\;$K, the
radiative bound-free transitions play little role as N\,{\sc ii}
is the dominant ionization stage in the line-forming region, and changes
in the radiative bound-free (rbf) rates have only a minor
effect on the N\,{\sc ii} populations. At $\rm T_{eff}= 29,000\;$K,
the rbf transitions play a much more important role as N\,{\sc iii}
is the dominant ionization stage of the nitrogen atom.
\begin{table}
\begin{center}
\centering{
\caption{multi-MULTI Analysis at $\rm T_{eff}$= 29000 K, $ \log g$=4.0, and $\xi_t= 5.0\;\rm km\,s^{-1}$
\label{multi-multi_29kK}}
{\small
\begin{tabular*}{0.5\textwidth}{@{}l @{}l @{}l @{}l @{}l @{}l @{}l l@{}}
\hline\hline
Transition & \multicolumn{5}{c}{Perturbed transition} & Type & $\%$\\
\hline
3995.0\,\AA & 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{rbb} & 57.66\\
& 16& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p \;3p \;^1D$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & 7.95\\
& 1& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p^2 \;^3P$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & 6.83\\
 & 2& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p^2\;^1D$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & 5.62\\
& 9& $\rightarrow $& 11& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{rbb} & 5.34\\
& 9& $\rightarrow $& 12& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^3D$)} & \multicolumn{1}{l}{rbb} & 4.97\\
& 11& $\rightarrow $& 19& \multicolumn{1}{l}{($ 2p \;3p \;^1P$} & \multicolumn{1}{l}{-$ 2p \;3d \;^1D^o$)} & \multicolumn{1}{l}{rbb} & 4.97\\
& 4& $\rightarrow $& 94& \multicolumn{1}{l}{($2p^3 \;^5S^o$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & 4.96\\
& 8& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^3P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{rbb} & 4.96\\
& 9& $\rightarrow $& 15& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^3P$)} & \multicolumn{1}{l}{rbb} & 4.80\\ \cline{2-8}
6482.0\,\AA & 9& $\rightarrow $& 11& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{rbb} & 125.08\\
& 11& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p \;3p \;^1P$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & 24.15\\
& 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{rbb} & 18.65\\
& 1& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p^2 \;^3P$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & 16.16\\
& 19& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p \;3d \;^1D^o$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & 14.79\\
& 7& $\rightarrow $& 11& \multicolumn{1}{l}{($2p^3\;^1D^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{rbb} & 14.41\\
& 2& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p^2\;^1D$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{rbf} & 12.94\\
& 9& $\rightarrow $& 12& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^3D$)} & \multicolumn{1}{l}{rbb} & 12.15\\
& 8& $\rightarrow $& 11& \multicolumn{1}{l}{($ 2p \;3s \;^3P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{rbb} & 11.44\\
& 9& $\rightarrow $& 15& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^3P$)} & \multicolumn{1}{l}{rbb} & 11.37\\
\hline
\end{tabular*}}}
\end{center}
\end{table}
\begin{figure*}
\centering
\subfloat[]{
\includegraphics[scale= 0.4]{fig8a.eps}
\label{Snu_3995_pret1}
}
\vspace{0.005cm}
\hspace{0.3cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig8b.eps}
\label{Snu_3995_pret2}
}
\vspace{0.005cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig8c.eps}
\label{Snu_3995_pret3}
}
\hspace{0.3cm}
\subfloat[]{
\includegraphics[scale= 0.4]{fig8d.eps}
\label{Snu_3995_pret4}
}
\caption{The line source function of the N\;{\sc ii} $\lambda\,3995$ line for the perturbed nitrogen atom and the departure coefficients of the upper and lower energy levels of this transition (dashed lines) for $\rm T_{eff}=$ 23,000 K, $\log g=$ 4.0, $\xi_t= 5\;\rm km\,s^{-1}$, and $\rm \epsilon_{N, solar}$= 7.83. For comparison, the results for the unperturbed atom are also shown (solid lines); the perturbed transition is indicated in the lower right corner of each panel.}
\label{Source_func_preturbed_atom}
\end{figure*}
\subsection{A Monte-MULTI Analysis}
\label{random-errors}
The accuracy of a non-LTE line formation calculation depends on many
factors, and an important one is the accuracy and completeness of the
basic atomic data used. The inclusion of atomic data from many different
sources, all with different accuracies, represents a source of random
errors in the estimated equivalent widths. In order to quantify these
errors, a series of Monte Carlo simulations were carried out following
the procedure developed by \citet{sig96}. Two hundred random realizations
of the nitrogen atom were generated with the atomic data varied within
the bounds given in Table~\ref{atomic_data_variation}. The choices for
the adopted uncertainties in the atomic data are justified as follows:
\begin{itemize}
\item The oscillator strengths ($\it f$-values) of the bound-bound
radiative transitions were varied within $\pm$10 \% for ${\it f}\geq 0.1$
and $\pm$50 \% for weaker transitions. These values were chosen because the
Opacity Project length and velocity $\it f$-values differ by approximately
these ranges. The difference between these two equivalent formalisms,
length and velocity, is a measure of the accuracy of the calculation
\citep{fro00}.
\item Stark damping widths were allowed to vary within $\pm$40\,\%, which
is on the order of the difference between our calculated Stark widths
using the OP formalism of \citet{sea88} and the available experimental
values-- see Table~\ref{gamma_ratios}.
\item The photoionization cross sections were allowed to vary within
$\pm$20$\%$. OP photoionization cross sections have uncertainties of
$\approx\,$10\,\% \citep{YS87} but this range was doubled to account
for possible errors in the photoionizing radiation field
predicted by the LTE, line-blanketed model atmospheres and used in the
calculation of the (fixed) photoionization rates.
\item Thermally-averaged collision strengths were allowed to vary over a
range determined by their source. The ${\cal R}$-matrix method represents
an accurate way to compute thermally-averaged collision strengths
at low temperatures for low-lying levels in an ion. \citet{hud04}
show that their results agree with the results of previous
${\cal R}$-matrix calculations to within $\approx\,10\,\%$, which we adopt
as the uncertainty in such collision strengths. For the majority of
radiatively-allowed transitions without ${\cal R}$-matrix collision
strengths, the impact parameter method \citep{sea62} was used, and a
factor of two uncertainty was assigned. Note that \citet{sig96b} compared
${\cal R}$-matrix collision strengths to impact parameter predictions in
the case of Mg\,{\sc ii} and found agreement to within about a factor of 2.
\item Collisional ionization rates are highly uncertain and the very crude
approximation of \citet{sea62} was used for all rates. An uncertainty of
a factor of 5 was assigned to all such values.
\end{itemize}
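A single random realization of the atom can be sketched as follows. The flat-array layout of the atomic data is our assumption (the actual atom file is structured differently); the symmetric percentage ranges are drawn uniformly, and the "factor of $N$" ranges uniformly in the log.

```python
import numpy as np

def perturb_atom(atom, rng):
    """One random realization of the model atom, following the bounds
    of the atomic-data table above (data layout is hypothetical)."""
    new = {}
    # f-values: +/-10% if f >= 0.1, +/-50% otherwise
    f = atom['f']
    scale = np.where(f >= 0.1, rng.uniform(0.9, 1.1, f.size),
                               rng.uniform(0.5, 1.5, f.size))
    new['f'] = f * scale
    # Stark widths +/-40%, photoionization cross sections +/-20%
    new['stark'] = atom['stark'] * rng.uniform(0.6, 1.4, atom['stark'].size)
    new['sigma_pi'] = atom['sigma_pi'] * rng.uniform(0.8, 1.2, atom['sigma_pi'].size)
    # R-matrix collision strengths: +/-10%
    new['omega_rm'] = atom['omega_rm'] * rng.uniform(0.9, 1.1, atom['omega_rm'].size)
    # impact-parameter collision strengths: factor of 2, log-uniform
    new['omega_ip'] = atom['omega_ip'] * 2.0 ** rng.uniform(-1.0, 1.0, atom['omega_ip'].size)
    # collisional ionization: factor of 5, log-uniform
    new['cbf'] = atom['cbf'] * 5.0 ** rng.uniform(-1.0, 1.0, atom['cbf'].size)
    return new
```

Each of the 200 realizations is then passed through the full non-LTE solution, as described next.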
\begin{table}
\centering{
\caption{Adopted uncertainties of the atomic data
\label{atomic_data_variation}}}
\begin{center}
{\small
\begin{tabular*}{0.43\textwidth}{@{}l @{}l }
\hline\hline
\multicolumn{1}{l}{Atomic parameter} & \multicolumn{1}{l}{Uncertainty}\\
\hline
\multicolumn{1}{l}{$\it f$-value}& \multicolumn{1}{l}{$\pm$ 10\% for $\it f$-value $\ge$ 0.1}\\
 & \multicolumn{1}{l}{$\pm$ 50\% for $\it f$-value $<$ 0.1}\\
\multicolumn{1}{l}{Stark widths} & \multicolumn{1}{l}{$\pm$ 40\%}\\
\multicolumn{1}{l}{Photoionization cross section} & \multicolumn{1}{l}{$\pm$ 20\%}\\
\multicolumn{1}{l}{Collision strength (Excitation)} & \\
\multicolumn{1}{l}{\hspace{0.5cm}${\cal R}$-matrix} & \multicolumn{1}{l}{$\pm$ 10\%}\\
\multicolumn{1}{l}{\hspace{0.5cm}Impact parameter} & \multicolumn{1}{l}{Factor of 2}\\
\multicolumn{1}{l}{Collision strength (Ionization)}& \multicolumn{1}{l}{Factor of 5}\\
\hline
\end{tabular*}}
\end{center}
\vspace{0.1in}
\end{table}
A converged non-LTE solution was found for each of the 200
randomly-realized atoms, and the distribution of the predicted equivalent
widths was taken to estimate the uncertainty. Example distributions for
$\lambda\,3995$ and $\lambda\,6482$ are shown in Figure~\ref{MC-hist}
for the models with $T_{\rm eff}=19,000$ and 23,000\;K, with $\log g=
4.0$, $\xi_t= 5\;\rm km\,s^{-1}$ and the solar nitrogen abundance.
A Gaussian fit to each distribution gives the standard deviation
and hence the associated uncertainty due to inaccuracies in the
atomic data (taken to be $2\,\sigma$). Tabular results are given
in Table~\ref{mc_eqw_res} for both transitions with $T_{\rm eff}$'s
between 15,000 and 31,000\;K, and nitrogen abundances between 6.83 and 8.13.
Tables \ref{mc_eqw_res_g3.5_xi_t5} and \ref{mc_eqw_res_g4.5_xi_t5}
in appendix \ref{MCS_app} show the results of Monte Carlo simulations
for $\log g=\,$3.5 and 4.5, and $\xi_t=\,5.0\;\rm km\,s^{-1}$, over the
range of $T_{\rm eff}$ considered; given in each of these tables is the
average equivalent width of each transition and its $2\sigma$ variation.
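As a rough illustration of the sampling scheme, the sketch below draws one multiplicative scaling per realization for a few rate categories, with uncertainties following Table~\ref{atomic_data_variation}, and propagates them through a toy equivalent-width function. The toy function and its sensitivities are illustrative assumptions standing in for the full MULTI non-LTE solve:

```python
import numpy as np

rng = np.random.default_rng(0)
N_REAL = 200  # number of randomly realized model atoms

def scale_percent(frac, n=N_REAL):
    # symmetric fractional uncertainty, e.g. +/-10% on a strong f-value
    return rng.uniform(1.0 - frac, 1.0 + frac, n)

def scale_factor(k, n=N_REAL):
    # "uncertain to a factor of k": log-uniform between 1/k and k
    return np.exp(rng.uniform(-np.log(k), np.log(k), n))

# one scaling per realization for each (toy) rate category
scales = {
    "f_value":  scale_percent(0.10),  # oscillator strength, +/-10%
    "stark":    scale_percent(0.40),  # Stark width, +/-40%
    "coll_ion": scale_factor(5.0),    # collisional ionization, factor of 5
}

def toy_equivalent_width(f, cion):
    # stand-in for the non-LTE solve: roughly linear in the f-value,
    # weakly sensitive to the collisional ionization rate
    return 100.0 * f * (1.0 + 0.05 * np.log(cion))

ew = toy_equivalent_width(scales["f_value"], scales["coll_ion"])
print(f"<W> = {ew.mean():.1f} mA, 2-sigma = {2 * ew.std(ddof=1):.1f} mA")
```

A Gaussian fit to the resulting histogram (here simply the sample mean and standard deviation) then yields the $2\sigma$ uncertainty quoted above.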
\begin{figure}
\includegraphics[scale= 0.50]{fig9.eps}
\caption{The equivalent width distributions of the 200 N\,{\sc ii} model
atoms for $\lambda\,6482$ and $\lambda\,3995$ at $T_{\rm eff}=19,000$
and 23,000\;K; $\log g=4.0$, $\xi_t= 5\;\rm km\,s^{-1}$, and the solar
nitrogen abundance were assumed.}
\label{MC-hist}
\end{figure}
In addition to the uncertainty itself, these random realizations allow
one to determine which individual rates most affect the uncertainty
in each transition's equivalent width. This is different from the
previous multi-MULTI calculation as a realistic uncertainty for each
rate is used, as opposed to an arbitrary doubling. However, as the
individual rates are not varied one at a time, it is necessary to look
at the correlations between the equivalent width of the transition of
interest and the 200 scalings of each rate. The largest correlation
coefficients of the N\;{\sc ii} $\lambda\,3995$ and $\lambda\,6482$
equivalent widths at $T_{\rm eff}$ of 19,000, 23,000 and 27,000 K,
$\log\,g=4.0$, $\xi_t=5\;\rm km\,s^{-1}$, and the solar nitrogen abundance
are shown in Table~\ref{mc_corr}. For 200 random realizations,
a correlation coefficient of 0.18 is statistically significant at the 1\%
level \citep{bev69}. The variation in the predicted equivalent widths
of both transitions is most strongly correlated with the variations in
that transition's oscillator strength, as expected. The remainder of
the strongest correlations, at the level of $|r|\sim 0.22$, are with
collisional bound-bound rates between higher excitation levels. This
reflects the large uncertainties assigned to these rates as compared
to the oscillator strengths and $\cal R$-matrix collision strengths
(see Table~\ref{atomic_data_variation}).
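The quoted significance threshold can be checked directly: for $n$ paired samples the Pearson coefficient maps onto a Student-$t$ statistic with $n-2$ degrees of freedom, which for $n=200$ is essentially Gaussian. A minimal check, assuming the two-tailed 1\% normal quantile $z=2.5758$:

```python
import math

def r_critical(n, z=2.5758):
    """Smallest |r| significant at the two-tailed 1% level for n samples,
    inverting t = r*sqrt(n-2)/sqrt(1-r**2) at the critical value z."""
    return z / math.sqrt(z**2 + n - 2)

print(round(r_critical(200), 2))  # -> 0.18
```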
\subsection{Limiting Accuracy of Nitrogen Abundances}
Finally, in order to quantify the ultimate accuracy of determined nitrogen
abundances due to uncertainties in the atomic data, the equivalent
widths predicted by the unperturbed atom for three singlet lines,
$\lambda\,3995$, $\lambda\,4447$ and $\lambda\,6482$, were used
as reference ``observed'' equivalent widths for each $T_{\rm eff}$ assuming
$\log\,g=4.0$ and $\xi_t=5\;\rm km\,s^{-1}$. Then the curves of growth for
each of the 200 randomly-realized atoms were used to derive a nitrogen abundance
based on exactly the same stellar parameters and microturbulent value,
i.e.\ the ideal case. The dispersion in the abundances obtained from
the 200 curves-of-growth can then be taken as the limiting uncertainty
in the derived nitrogen abundance due to atomic data limitations. This process was then repeated for
all nitrogen abundances in the range considered, $\epsilon_{\rm N}=6.58$ to 8.53.
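The growth of the abundance error with abundance follows from line saturation: on the flat part of the curve of growth a given model perturbation maps into a larger abundance shift. This can be illustrated with a toy inversion (the square-root curve of growth and the $\pm5\%$ model scatter are purely illustrative assumptions, not the actual N\,{\sc ii} curves of growth):

```python
import numpy as np

rng = np.random.default_rng(1)
scales = rng.uniform(0.95, 1.05, 200)   # 200 perturbed model atoms

def cog(eps, s=1.0):
    # toy saturated curve of growth (mA): W grows only slowly with
    # abundance, so a fixed model error maps to a growing abundance error
    return s * 50.0 * np.sqrt(eps - 6.0)

eps_grid = np.linspace(6.58, 8.53, 400)
for eps_true in (6.83, 7.83, 8.13):
    w_obs = cog(eps_true)               # unperturbed atom as "observation"
    derived = np.array([np.interp(w_obs, cog(eps_grid, s), eps_grid)
                        for s in scales])
    print(f"eps = {eps_true}: 2-sigma spread = {2 * derived.std():.3f} dex")
```

Even in this crude sketch the dispersion of the derived abundances increases with the true abundance, in qualitative agreement with the behavior described below.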
The results for four $T_{\rm eff}$ are shown in Figure~\ref{abund-hist}.
The figure shows that abundances obtained using the results of the
Monte Carlo simulations match the original abundances, and that the
errors in the estimated nitrogen abundances due to uncertain atomic
data increase with nitrogen abundance. For example, at $\rm T_{eff}=$
23,000 K, the uncertainty is $\rm \delta \epsilon=\pm 0.02\,$dex for $\rm
\epsilon_N=6.83\,$dex which rises to $\rm \delta \epsilon=\pm 0.11\,$dex
for $\rm \epsilon_N=8.13$\,dex.
Figure~\ref{abund-hist} also shows that the estimated errors
vary with $T_{\rm eff}$. At the same nitrogen abundance, such as
$\epsilon_N=7.83\,$dex, the abundance uncertainty is $\rm \delta
\epsilon=\pm 0.05\,$dex for $\rm T_{eff}= 19,000\;$K, $\rm \delta
\epsilon=\pm 0.07\,$dex for $\rm T_{eff}= 23,000\;$K and $\rm \delta
\epsilon=\pm 0.05\,$dex for $\rm T_{eff}= 29,000\;$K. The highest error
occurs for $\rm T_{eff}$ between 23,000~K and 25,000~K. In addition,
an uncertainty in $\rm T_{eff}$ of $\pm 1000$~K causes
additional uncertainty in the estimated abundance of up to
$\approx\pm0.1\,$dex at the lowest temperatures, while an uncertainty
in $\log g$ of $\pm0.25\,$dex adds up to
$\pm0.05\,$dex to the estimated abundance uncertainty.
\begin{figure}
\includegraphics[scale= 0.48]{fig10.eps}
\caption{Nitrogen abundances determined from the curves-of-growth
constructed from the 200 randomly realized atoms using the equivalent
widths of $\lambda\,3995$, $\lambda\,4447$ and $\lambda\,6482$ as the
input observed values. Four $T_{\rm eff}$ are shown
and results were determined for a range of nitrogen abundances. The
error bars represent the uncertainties of the estimated abundances due
to inaccuracies in the atomic data at $2\,\sigma$. The blue dashed line
represents a difference of zero.}
\label{abund-hist}
\end{figure}
\begin{table}
\caption{ Results of Monte Carlo Simulations for N\,{\sc ii} at $ \log g$=4.0, $\xi_t= 5.0\;\rm km\,s^{-1}$, and solar nitrogen abundance ($\rm \epsilon_{N,solar}= 7.83$): Correlation Coefficients
\label{mc_corr}}
\begin{center}
{\scriptsize
\begin{tabular*}{0.51\textwidth}{l@{\hskip 0.09in}c@{\hskip 0.09in}c@{\hskip -0.01in}c@{\hskip -0.01in}c@{\hskip -0.05in}c@{\hskip -0.5in}c@{\hskip -0.2in}c@{\hskip 0.2in}c@{\hskip 0.2in}c}
\hline\hline
\multicolumn{1}{c}{$\lambda$} & $\rm T_{eff}$ & \multicolumn{5}{c}{Perturbed Transition} & \multicolumn{1}{l}{Type} & \multicolumn{1}{l}{r} \\
\multicolumn{1}{c}{(\AA)} & (K) & &\\
\hline
3995&19000& 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{rbb}& 0.988\\
& & 37& $\rightarrow $& 55& \multicolumn{1}{l}{($ 2p \;4f \;^1F$} & \multicolumn{1}{l}{-$ 2p \;5d \;^1D^o$)} & \multicolumn{1}{l}{cbb}& -0.271\\
& & 28& $\rightarrow $& 35& \multicolumn{1}{l}{($ 2p \;4p \;^3P$} & \multicolumn{1}{l}{-$ 2p \;4d \;^3D^o$)} & \multicolumn{1}{l}{cbb}& 0.235\\
& & 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{stk}& 0.222\\
 & & 94& $\rightarrow $&104& \multicolumn{1}{l}{($ 2s^2\;2p\;^2P^o$ N\,{\sc iii}} & \multicolumn{1}{l}{-$ 2s^2\;3d\;^2D$ N\,{\sc iii})} & \multicolumn{1}{l}{cbb}& 0.221\\\cline{3-8}
&23000& 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{rbb}& 0.983\\
& & 37& $\rightarrow $& 55& \multicolumn{1}{l}{($ 2p \;4f \;^1F$} & \multicolumn{1}{l}{-$ 2p \;5d \;^1D^o$)} & \multicolumn{1}{l}{cbb}& -0.270\\
& & 28& $\rightarrow $& 35& \multicolumn{1}{l}{($ 2p \;4p \;^3P$} & \multicolumn{1}{l}{-$ 2p \;4d \;^3D^o$)} & \multicolumn{1}{l}{cbb}& 0.234\\
& & 94& $\rightarrow $&104& \multicolumn{1}{l}{($ 2s^2\;2p\;^2P^o$ N\,{\sc iii}} & \multicolumn{1}{l}{-$ 2s^2\;3d\;^2D$ N\,{\sc iii})} & \multicolumn{1}{l}{cbb}& 0.228\\
& & 8& $\rightarrow $& 49& \multicolumn{1}{l}{($ 2p \;3s \;^3P^o$} & \multicolumn{1}{l}{-$ 2p \;5p \;^3D$)} & \multicolumn{1}{l}{cbb}& -0.223\\\cline{3-8}
&27000& 9& $\rightarrow $& 16& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1D$)} & \multicolumn{1}{l}{rbb}& 0.967\\
& & 37& $\rightarrow $& 55& \multicolumn{1}{l}{($ 2p \;4f \;^1F$} & \multicolumn{1}{l}{-$ 2p \;5d \;^1D^o$)} & \multicolumn{1}{l}{cbb}& -0.265\\
& & 94& $\rightarrow $&104& \multicolumn{1}{l}{($ 2s^2\;2p\;^2P^o$ N\,{\sc iii}} & \multicolumn{1}{l}{-$ 2s^2\;3d\;^2D$ N\,{\sc iii})} & \multicolumn{1}{l}{cbb}& 0.234\\
& & 8& $\rightarrow $& 49& \multicolumn{1}{l}{($ 2p \;3s \;^3P^o$} & \multicolumn{1}{l}{-$ 2p \;5p \;^3D$)} & \multicolumn{1}{l}{cbb}& -0.231\\
& & 28& $\rightarrow $& 35& \multicolumn{1}{l}{($ 2p \;4p \;^3P$} & \multicolumn{1}{l}{-$ 2p \;4d \;^3D^o$)} & \multicolumn{1}{l}{cbb}& 0.225\\\cline{3-8}
6482&19000& 9& $\rightarrow $& 11& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{rbb}& 0.988\\
& & 14& $\rightarrow $& 73& \multicolumn{1}{l}{($ 2p \;3p \;^3S$} & \multicolumn{1}{l}{-$ 2p \;6s \;^3P^o$)} & \multicolumn{1}{l}{cbb}& -0.244\\
& & 55& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p \;5d \;^1D^o$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{cbf}& 0.227\\
& & 47& $\rightarrow $& 77& \multicolumn{1}{l}{($ 2p \;5s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;6p \;^1P$)} & \multicolumn{1}{l}{rbb}& -0.219\\
 & & 56& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p \;5d \;^3D^o$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{cbf}& -0.218\\\cline{3-8}
&23000& 9& $\rightarrow $& 11& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{rbb}& 0.969\\
& & 14& $\rightarrow $& 73& \multicolumn{1}{l}{($ 2p \;3p \;^3S$} & \multicolumn{1}{l}{-$ 2p \;6s \;^3P^o$)} & \multicolumn{1}{l}{cbb}& -0.251\\
& & 55& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p \;5d \;^1D^o$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{cbf}& 0.231\\
& & 47& $\rightarrow $& 77& \multicolumn{1}{l}{($ 2p \;5s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;6p \;^1P$)} & \multicolumn{1}{l}{rbb}& -0.210\\
& & 56& $\rightarrow $& 80& \multicolumn{1}{l}{($ 2p \;5d \;^3D^o$} & \multicolumn{1}{l}{-$ 2p \;6p \;^3P$)} & \multicolumn{1}{l}{stk}& -0.210\\\cline{3-8}
&27000& 9& $\rightarrow $& 11& \multicolumn{1}{l}{($ 2p \;3s \;^1P^o$} & \multicolumn{1}{l}{-$ 2p \;3p \;^1P$)} & \multicolumn{1}{l}{rbb}& 0.910\\
& & 14& $\rightarrow $& 73& \multicolumn{1}{l}{($ 2p \;3p \;^3S$} & \multicolumn{1}{l}{-$ 2p \;6s \;^3P^o$)} & \multicolumn{1}{l}{cbb}& -0.252\\
& & 55& $\rightarrow $& 94& \multicolumn{1}{l}{($ 2p \;5d \;^1D^o$} & \multicolumn{1}{l}{-$ 2s^2\;2p\;^2P^o$ N\,{\sc iii})} & \multicolumn{1}{l}{cbf}& 0.225\\
& & 20& $\rightarrow $& 43& \multicolumn{1}{l}{($ 2p \;3d \;^3D^o$} & \multicolumn{1}{l}{-$ 2p \;4f \;^3D$)} & \multicolumn{1}{l}{rbb}& -0.216\\
& & 56& $\rightarrow $& 80& \multicolumn{1}{l}{($ 2p \;5d \;^3D^o$} & \multicolumn{1}{l}{-$ 2p \;6p \;^3P$)} & \multicolumn{1}{l}{stk}& -0.213\\
\hline
\hline
\end{tabular*}}
\end{center}
\noindent{Notes: All correlation coefficients are statistically significant at less than the 1\% level; ``stk'' refers to variation of the Stark width of the radiative transition.}
\end{table}
\subsection{Possible Systematic Errors}
Systematic errors are more difficult to quantify than the random errors
in the atomic data. Possible systematic errors can originate from many
sources, such as the use of LTE stellar atmospheres with fixed elemental
abundances, fixed photoionization rates, and a truncated atomic model
in terms of included energy levels.
Widely available non-LTE stellar atmosphere models do not readily provide
grids of $J_\nu$ as a function of optical depth needed for calculating
the photoionization rates. On the other hand, the opacity distribution
function treatment of \citet{kur79} provides $J_\nu$ over manageable
grids; this issue is discussed by \citet{prz01}.
Another source of systematic error is the completeness of the nitrogen
atom. Using atomic models with only a few energy levels can result
in large non-LTE effects, particularly for trace ions with low-lying
levels with photoionization thresholds in the short-wavelength region
of the Balmer continuum. Such calculations can predict too much
overionization in small atomic models because collisional coupling to
the dominant continuum is artificially suppressed \citep{sig96}. On the
other hand, there is a physical limit to how many energy levels can be
straightforwardly added to a non-LTE calculation; highly excited levels
are only weakly coupled to their parent ion and can be disrupted by the
surrounding plasma. This is most naturally described by introducing an
occupation probability, between 0 and 1, for each bound level to exist
\citep{hum88} and reformulating the statistical equilibrium equations
to include the occupation probabilities \citep{hub94}.
In order to test our N\,{\sc ii} atom for completeness, we have
systematically increased the number of included N\,{\sc ii} energy
levels from 93 to 150, and for each increase, recomputed the non-LTE
solution. Figure~\ref{eqw_nlevels} shows the effect on the equivalent
widths of $\lambda\,3995$ and $\lambda\,6482$. The effect is quite small,
a few percent at most, and hence we do not consider the size of the
N\,{\sc ii} atom used in the present work to be a significant source of
uncertainty for the transitions of focus. This lack of strong dependence
on the number of non-LTE levels is consistent with the nature of the
non-LTE effects in N\,{\sc ii}. Because the photoionization thresholds
of low-lying N\,{\sc ii} levels are in the Lyman continuum, there is
little predicted non-LTE overionization in the line forming region
and hence collisional coupling to the parent N\,{\sc iii} continuum is
less important.
\begin{figure}
\centering
\includegraphics[scale= 0.4]{fig11.eps}
\caption{The change in the predicted equivalent widths of $\lambda\,3995$
and $\lambda\,6482$ with an increasing number of levels
in the nitrogen atom at $\rm T_{eff}=$ 23,000 K, $\log g=$ 4.0, $\xi_t=
5\;\rm km\,s^{-1}$, and the solar nitrogen abundance.}
\label{eqw_nlevels}
\end{figure}
\section{Discussion and Conclusion}
\label{disc}
We have presented new, non-LTE line calculations for N\;{\sc ii}, using
the MULTI code of \citet{car92}, over the range of stellar $T_{\rm eff}$,
$\log\,g$, and microturbulent velocities appropriate to the main sequence
B~stars. Grids of equivalent widths for commonly-used, strong N\;{\sc
ii} lines in elemental abundance analysis have been provided. A detailed
error analysis was performed, and the error bounds on the tabulated
equivalent widths due to atomic data uncertainties provided.
We find reasonable agreement with the most recently tabulated non-LTE
equivalent widths, those of \citet{kor99}, but, in general, we find weaker
non-LTE effects. We attribute this to the much larger number of radiative
bound-bound transitions explicitly included in the non-LTE calculation, as
opposed to treating many weaker transitions with fixed rates. In addition,
our careful treatment of quadratic Stark broadening makes our N\,{\sc ii}
equivalent widths more reliable at their maximum strengths.
We systematically investigated the completeness of our atomic model in terms
of included energy levels. In addition, an extensive Monte-MULTI calculation
quantified the effect of atomic data limitations on the N\,{\sc ii}
equivalent width predictions. In general, near their peak strengths, we find
that the limiting accuracy of using non-LTE equivalent widths computed with
existing atomic data is $\approx\pm0.1\;$dex, although this range is somewhat
larger for the highest $T_{\rm eff}$ considered.
To be most applicable to main sequence B stars, the effect of
gravitational darkening, particularly the latitude-dependent $T_{\rm
eff}$, on the optical N\,{\sc ii} spectrum should be investigated, and for
the Be stars, the potential of disk emission should be quantified. These
effects will be investigated in a subsequent work.
\vspace{0.3in}
This work is supported by the Canadian Natural Sciences and Engineering
Research Council (NSERC) through a Discovery Grant to TAAS.
\section{Introduction}
In a previous communication \citep{lownu} we presented the application of the `low-$\nu$' method
to the extraction of the neutrino ($\nu_\mu$) flux for energies ($E_\nu$) as low as 0.7 GeV. In this paper we extend the technique
to $E_\nu$ as low as 0.4 GeV and extract the relative energy dependence of the $\nu_\mu$ flux for the MiniBooNE experiment as an
example.
The charged current $\nu_\mu$ ($\nubar$) differential cross section can be written in terms of the square of the four momentum transfer ($Q^2$) and the energy transfer ($\nu$) to the target nucleus.
At low-$\nu$, if we integrate the
cross section from $\nu_{min}\approx 0$ up to $\nu$= $\nu_{cut}$ (where $\nu_{cut}$ is small), we can write the `low-$\nu$' cross section\citep{lownu} in terms of an energy independent term which is proportional to the structure function ${\cal W}_2$, and small energy dependent corrections which are proportional to $\nu/E$, or $m_\mu^2/E^2$, where $m_\mu$ is the mass of the muon.
$$ \sigma_{\nu cut}(E_\nu) = \displaystyle\int_{\nu_{min (E_\nu)}}^{\nu_{cut}} \displaystyle\frac{d^2\sigma}{dQ^2 d\nu}dQ^2 d\nu
= \sigma_{{W}_2} + \sigma_{2}+\sigma_{1}\pm\sigma_{3}+\sigma_{4} + \sigma_{5}$$
$$\sigma_{{W}_2} = C \displaystyle\int_{\nu_{min (E_\nu)}}^{\nu_{cut}} {\cal W}_2 ~d\nu;~~~~~~~~~~
~~~~~~~~\sigma_{2} =C \displaystyle\int_{\nu_{min (E_\nu)}}^{\nu_{cut}}
\left[ -\frac{\nu}{E_\nu} -\frac{Q^2+m_\mu^2}{4E_\nu^2} \right]{\cal W}_2 ~d\nu$$
$$ \sigma_{1}=C \displaystyle\int_{\nu_{min (E_\nu)}}^{\nu_{cut}}
\frac{(Q^2+m_\mu^2)}{2E_\nu^2} {\cal W}_1 ~d\nu;~~~
\sigma_{3}=C \displaystyle\int_{\nu_{min (E_\nu)}}^{\nu_{cut}}
\left[ \frac{Q^2}{2ME_\nu}-\frac{\nu}{4E_\nu} \frac { Q^2+m_\mu^2}{ME_\nu} \right] {\cal W}_3 ~d\nu,$$
where $\sigma_{4}$ and $\sigma_{5}$ are negligible\citep{lownu}, and $M$ is the nucleon mass.
The uncertainties in the modeling of the small energy dependent correction terms are small, thus we
can extract the relative $\nu_\mu$ flux from the number
of low-$\nu$ events at each $E_\nu$ bin.
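Operationally, the relative flux in each energy bin is just the number of low-$\nu$ events divided by the (nearly energy independent) low-$\nu$ cross section, normalized at a reference energy. A schematic of this step, with invented bin contents and cross section values:

```python
import numpy as np

# toy inputs: observed low-nu event counts per E_nu bin, and the
# model low-nu cross section (nearly energy independent) relative
# to its value at E_nu = 1.1 GeV -- all numbers here are invented
e_bins      = np.array([0.5, 0.7, 0.9, 1.1, 1.3])            # GeV
n_lownu     = np.array([1200., 2500., 3100., 2800., 1900.])  # events
sigma_lownu = np.array([0.97, 0.99, 1.00, 1.00, 1.00])       # relative

def relative_flux(counts, sigma, e, e_ref=1.1):
    # flux in each bin is proportional to N / sigma_lownu;
    # normalize to the bin containing the reference energy
    phi = counts / sigma
    return phi / phi[np.argmin(np.abs(e - e_ref))]

phi = relative_flux(n_lownu, sigma_lownu, e_bins)
print(phi)  # phi at 1.1 GeV is 1 by construction
```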
\begin{figure}[h]
\includegraphics[height=0.20\textheight,width=0.33\textheight]{QEcarbonfcbarnumax10.pdf}
\includegraphics[height=0.20\textheight,width=0.33\textheight]{QEcarbonfcbarnumax20.pdf}
\caption{ The ratio of the neutrino `low-$\nu$' QE cross section (as a function of $E_\nu$) to the `low-$\nu$' QE cross section
at $E_\nu=1.1$ GeV for $\nu<0.1$ GeV (left) and $\nu<0.2$ GeV (right). }
\label{fig-nu}
\end{figure}
\begin{figure}[h]
\includegraphics[height=0.20\textheight,width=0.33\textheight]{qenumax10fcbarerror.pdf}
\includegraphics[height=0.20\textheight,width=0.33\textheight]{qenumax20fcbarerror.pdf}
\caption{ The model uncertainty in the ratio of the neutrino `low-$\nu$' QE cross section (as a function of $E_\nu$) to the `low-$\nu$' QE cross section
at $E_\nu=1.1$ GeV for $\nu<0.1$ GeV (left) and $\nu<0.2$ GeV (right).
}
\label{fig-nuerr}
\end{figure}
If we use the MINOS criterion that the number
of events in the `low-$\nu$' flux sample be less than 60\% of the total number
of events in each $E_{\nu,\nubar}$ bin, we find that for neutrinos we can use events with $\nu<0.1$ GeV
to extract the relative flux for $E_\nu>0.4$ GeV, and events with
$\nu<0.2$ GeV for $E_\nu>0.7$ GeV. For these $\nu$ cuts, the cross section is dominated
by quasielastic (QE) scattering.
The flux extracted with the `low-$\nu$' method is only a relative flux as a function of energy.
It must be normalized at some energy. In this paper, we present the flux relative to the flux
at $E_\nu$= 1.1 GeV. In our calculation of QE cross sections we use
BBBA2007 electromagnetic form factors\citep{BBB}.
Figures \ref{fig-nu} ($\nu_\mu$) and \ref{fig-nubar} ($\nubar$) show the ratio of the `low-$\nu$' QE cross section (as a function of $E_{\nu,\nubar}$) to the `low-$\nu$' cross section
at $E_{\nu,\nubar}=1.1$ GeV for $\nu<0.1$ GeV (left) and $\nu<0.2$ GeV (right) for various models.
The data points are from the GENIE MC generator for a carbon target assuming a Fermi gas model and a dipole form for the axial form factor with $M_A$ = 0.99 GeV\citep{Kuzmin}. The ratio is independent
of the value of $M_A$, as illustrated by the fact that the
predictions of this ratio for a dipole axial vector mass $M_A$ = 1.014 GeV (solid red line) and $M_A$ = 1.3 GeV (solid blue line) are the same.
Also shown are the changes in the prediction when we include nuclear enhancement in the transverse vector form factors\citep{TE} (TE) (shown as the solid black line). For $E_{\nu}>0.4$ GeV and $\nu<0.1$ GeV, and for
$E_{\nu}>0.7$ GeV and $\nu<0.2$ GeV, the ratio is approximately constant. The uncertainties in the modeling of the energy dependence of
this ratio are small as shown in figures \ref{fig-nuerr} and \ref{fig-nubarerr}.
\begin{figure}[h]
\includegraphics[height=0.20\textheight,width=0.35\textheight]{AntiQEcarbonfcbarnumax10.pdf}
\includegraphics[height=0.20\textheight,width=0.35\textheight]{AntiQEcarbonfcbarnumax20.pdf}
\caption{ The ratio of the $\nubar$ `low-$\nu$' QE cross section (as a function of $E_{\nubar}$) to the `low-$\nu$' QE cross section at $E_{\nubar}=1.1$ GeV for $\nu<0.1$ GeV (left) and $\nu<0.2$ GeV (right). }
\label{fig-nubar}
\end{figure}
\begin{figure}[h]
\includegraphics[height=0.20\textheight,width=0.35\textheight]{antiqenumax10fcbarerror.pdf}
\includegraphics[height=0.20\textheight,width=0.35\textheight]{antiqenumax20fcbarerror.pdf}
\caption{ The model uncertainty in the ratio of the $\nubar$ `low-$\nu$' QE cross section (as a function of $E_{\nubar}$) to the `low-$\nu$' QE cross section
at $E_{\nubar}=1.1$ GeV for $\nu<0.1$ GeV (left) and $\nu<0.2$ GeV (right).
}
\label{fig-nubarerr}
\end{figure}
As an example, we extract the relative energy dependence of the flux of the Fermilab booster neutrino beam from MiniBooNE data.
The MiniBooNE experiment published\citep{MiniBooNE} flux weighted double differential cross sections for QE neutrino scattering in bins of final state muon kinetic energy $T_\mu$ ($E_\mu=T_\mu +m_\mu$) and muon angle ($\cos\theta_{\mu}$).
We extract the central value of
$\nu^{QE}=E_\nu^{QE}-E_\mu$ for each ($T_\mu$, $\cos\theta_{\mu}$) bin using
\vspace{-0.1in}
\begin{eqnarray}
E_\nu^{QE}
&=&
\frac{2(M_n^{\prime})E_\mu-((M_n^{\prime})^2+m_\mu^2-M_p^2)}
{2\cdot[(M_n^{\prime})-E_\mu+\sqrt{E_\mu^2-m_\mu^2}\cos\theta_\mu]},
\end{eqnarray}
where $M_n$ and $M_p$
are the neutron and proton mass, and $M_n^{\prime}=M_n-E_B$ ($E_B= 34$~MeV).
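The reconstruction above is straightforward to evaluate; the example muon kinematics below are arbitrary choices for illustration:

```python
import math

M_N, M_P, M_MU = 0.93957, 0.93827, 0.10566  # nucleon and muon masses, GeV
E_B = 0.034                                 # binding energy, GeV
M_EFF = M_N - E_B                           # M_n' in the text

def e_nu_qe(e_mu, cos_theta):
    """Reconstructed QE neutrino energy (GeV) from the muon kinematics."""
    p_mu = math.sqrt(e_mu**2 - M_MU**2)
    num = 2.0 * M_EFF * e_mu - (M_EFF**2 + M_MU**2 - M_P**2)
    den = 2.0 * (M_EFF - e_mu + p_mu * cos_theta)
    return num / den

e_mu = 0.300 + M_MU                 # e.g. the T_mu = 300 MeV bin
e_nu = e_nu_qe(e_mu, 0.9)
print(f"E_nu^QE = {e_nu:.3f} GeV, nu^QE = {e_nu - e_mu:.3f} GeV")
```

For a muon of $T_\mu=300$ MeV at $\cos\theta_\mu=0.9$ this gives $E_\nu^{QE}\approx 0.46$ GeV and $\nu^{QE}\approx 0.054$ GeV, i.e.\ a `low-$\nu$' event in both samples.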
The left side of Fig.~\ref{fig-flux} shows the MiniBooNE bins of 0.1
GeV in $T_\mu$ and 0.1
in $\cos\theta_\mu$. The solid lines are
lines of constant $E_\nu^{QE}$. The blue and red dotted lines are
$\nu^{QE}<0.2$ GeV, and $\nu^{QE}<0.1$ GeV, respectively.
\begin{figure} [ht]
\includegraphics[height=0.25\textheight,width=0.37\textheight]{miniboone-2d.pdf}
\includegraphics[height=0.24\textheight,width=0.35\textheight]{flux-figure-cropped.pdf}
\caption{Left: The MiniBooNE QE cross section bins of 0.1
in $\cos\theta_\mu$, and 0.1 GeV in $T_\mu$. The solid lines are
lines of constant $E_\nu^{QE}$. The blue and red dotted lines are
$\nu^{QE}<0.2$ GeV, and $\nu^{QE}<0.1$ GeV, respectively. Right: The relative neutrino flux extracted
from $\nu^{QE}<0.2$ GeV cross sections (blue squares) and $\nu^{QE}<0.1$ GeV (red squares) shown with statistical errors only. The black (flux A) and purple (flux B) lines are possible deviations (which are consistent with the `low-$\nu$' flux) from the central values of the published flux. The green line
is the quoted systematic uncertainty in the nominal MiniBooNE flux.
}
\label{fig-flux}
\end{figure}
\begin{figure} [ht]
\includegraphics[height=0.25\textheight,width=0.70\textheight]{fa_allasia_new.pdf}
\caption{ $F_A(Q^2)$ measurements on free nucleons: (a) $F_A(Q^2)$ re-extracted from neutrino-deuterium
data divided by $G_D^{A}(Q^2)$ (with $M_A=1.015$ GeV). (b)
$F_A(Q^2)$ from pion electroproduction
divided by $G_D^{A}(Q^2)$, corrected
for hadronic effects\citep{pion}.
Solid black line - duality based modified dipole fit with $M_A=1.015$ GeV\citep{BBB}.
Short-dashed line - $F_A(Q^2)_{A2=V2}$.
Dashed-dot line - constituent quark model\citep{quark}. Solid red line - duality based modified dipole with $M_A=1.10$ GeV, which is our best fit to the MiniBooNE data on Carbon (accounting for Transverse Enhancement).
}
\label{fig-fa}
\end{figure}
We extract the `low-$\nu$' flux from the MiniBooNE data as follows. Using the published MiniBooNE flux,
we first fit the flux-weighted doubly differential cross section to three models. The parameters
which are allowed to float within the models are the overall normalization and the axial vector mass
$M_A$. The first model is a Fermi gas model with BBBA2007 electromagnetic
form factors and a dipole form for the axial form factor. The second model includes Transverse
Enhancement for the vector form factors\citep{TE} (BBC-TE) and a dipole form for the axial form factor. The third model includes TE
for the vector form factors\citep{TE} (BBC-TE) and assumes a modified dipole form for the axial form factor
as given in ref.~\citep{BBB}. The modification to the dipole form factor are from a
fit\citep{BBB} to all neutrino scattering data and pion electroproduction on free (H and D targets) nucleons
as shown in fig. \ref{fig-fa}. The fit has the duality constraint that the vector and axial
parts of structure function $W_{2}$ for quasielastic scattering are equal
at large $Q^2$.
%
%
%
The ratio of the flux-weighted MiniBooNE measured cross sections at low-$\nu$ to the calculated
(with the nominal published MiniBooNE flux) flux-weighted
cross sections for any of the three models is
proportional to the ratio of the `low-$\nu$' flux to the nominal MiniBooNE flux. As expected, the relative
flux extracted as a function of neutrino energy is insensitive to the choice of model.
The right side of Fig.~\ref{fig-flux} shows the ratio of the flux extracted from $\nu<0.1$ GeV events (red circles) and $\nu<0.2$ GeV events (blue squares) to the nominal flux. Only statistical errors are shown. The green line is the systematic error in the nominal flux (as published by MiniBooNE). The extracted `low-$\nu$' flux is consistent with the nominal flux within the quoted systematic errors. The black curve (flux A) and purple curve (flux B) are possible deviations (which are consistent with the `low-$\nu$' flux) from the nominal flux.
Next we fit for the best value of $M_A$ for each of the three models. We find that if
we let the overall normalization float within the systematic error the extracted values of $M_A$
using the nominal flux, flux A, and flux B
are within $0.015$ GeV of each other
as shown in Table~\ref{A:fits}. We find that with Transverse Enhancement, and a modified dipole form factor, the fit to the $Q^2$ dependence of the MiniBooNE $d\sigma/dQ^2$ on carbon favors an axial mass $M_A=1.10\pm0.02$ GeV. The ratio of this modified dipole fit with $M_A=1.10$ GeV to the simple dipole parametrization with $M_A=1.015$ GeV is shown as the solid red line in Fig.~\ref{fig-fa}. The fit to the MiniBooNE data is more consistent with the values of $F_A(Q^2)$ extracted from pion electroproduction on free nucleons (shown in Fig.~\ref{fig-fa}(b)) than with the values of $F_A(Q^2)$ extracted from neutrino data on deuterium (Fig.~\ref{fig-fa}(a)).
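The effect of the larger axial mass on the $Q^2$ shape can be illustrated with pure dipoles; the duality-based modification of ref.~\citep{BBB} is not reproduced here, so this sketch only shows how a larger $M_A$ hardens the form factor:

```python
import numpy as np

G_A = -1.267  # F_A(0), the nucleon axial coupling

def fa_dipole(q2, m_a):
    # simple dipole parametrization of the axial form factor
    return G_A / (1.0 + q2 / m_a**2) ** 2

q2 = np.linspace(0.0, 2.0, 9)  # GeV^2
ratio = fa_dipole(q2, 1.10) / fa_dipole(q2, 1.015)
for q, r in zip(q2, ratio):
    print(f"Q2 = {q:.2f} GeV^2: F_A(1.10)/F_A(1.015) = {r:.3f}")
```

The ratio is unity at $Q^2=0$ (both dipoles are normalized to $g_A$) and grows monotonically with $Q^2$, i.e.\ the larger axial mass enhances the high-$Q^2$ tail of $d\sigma/dQ^2$.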
\begin{table}[ht]
\caption{\label{A:fits} Fits to MiniBooNE neutrino quasielastic scattering data on carbon}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Form Factors &data~set & $M_A$ &$N$ & $\chi^2/NDF$ &flux \\
vector/axial & (2D)/(1D) & (GeV) & normalization & & model\\ \hline
BBBA07 & double diff (2D) & $1.35\pm 0.02$ & $0.99 \pm 0.01$ & $39.8/ 135=0.30$ &nominal \\
FA=Dipole & $d\sigma/dQ^2$(1D) & $1.41 \pm 0.04$ & $0.99 \pm 0.02$ & $11.7/15=0.78$ &nominal \\ \hline
BBC(TE) & double diff (2D) &$1.22 \pm 0.02$ & $1.01\pm 0.01$ & $43.6/135=0.32$ &nominal \\
FA=dipole & $d\sigma/dQ^2$(1D) & $1.17 \pm 0.03$ & $1.05 \pm 0.02$ & $19.7/15=1.31$ &nominal \\ \hline \hline
BBC(TE) & double diff (2D) & $1.17 \pm 0.02$ & $1.01 \pm 0.01$ & $35.2/135=0.28$ &nominal \\ FA=mod. dipole & $d\sigma/dQ^2$(1D) & $1.11 \pm 0.03$ & $1.04 \pm 0.02$ & $19.0/15=1.27$ &nominal \\
\hline
BBC(TE) & double diff (2D) & $1.17 \pm 0.02$ & $1.01 \pm 0.01$ & $35.4/135=0.26$ &Flux A \\ FA=mod. dipole & $d\sigma/dQ^2$(1D) & $1.10 \pm 0.03$ & $1.04 \pm 0.02$ & $17.8/15=1.18$ &Flux A \\ \hline
BBC(TE) & double diff (2D) & $1.17 \pm 0.02$ & $1.01 \pm 0.01$ & $38.3/135=0.28$ &Flux B \\ FA=mod. dipole & $d\sigma/dQ^2$(1D) & $1.09 \pm 0.03$ & $1.04 \pm 0.02$ & $17.7/15=1.18$ &Flux B \\
\hline
\end{tabular}
\end{table}
\section{Introduction}
How could an artificial intelligence (AI) help a pretend paladin hunt an orc through a forest? The research field of AI in Games, which should technically be able to answer this question, separates into two sub-fields: one deals with how to make AIs that can play games to win, while the other asks how AI can enhance the game experience. For an AI to play a live action role-playing (LARP) game well seems to be a somewhat far-fetched endeavor. It is not unimaginable that some day we will have a fully embodied AI that convincingly plays make-believe in a shared imagined world, navigating both the physical and the interpersonal challenges, while figuring out how to actually ``win'' in an open-ended, game-like interaction. For now, we are more interested in AI applications to enhance the game experience in LARP. Naively, AI might seem like a poor fit for a game genre that is often associated with a deliberate lack of modern technology. But there are already early attempts to integrate modern technology into LARP \cite{segura2017design,Segura2018, Dagan2019}.
Moreover, AI in Games research has in the past focused on game design, believable characters, world building, storytelling, automatic game balancing, and player modeling \cite{YannakakisT15}, all of which sound relevant for LARP. In this paper, we argue that there is a role for AI in LARP -- especially when focusing on those AI technologies that have already been successfully applied to other game genres.
First, we will give a short overview of what LARP is, and relate it to similar game forms, such as tabletop role-play (TRP). We discuss specific properties that make LARP both a suitable application for AI research and a source of unique new challenges. We specifically talk about the decomposition of the different functions that are usually all performed by a single game master in TRP. We then take a brief look at existing applications, both in LARP-like domains and those that may easily be adapted. Finally, we put forward suggestions on how AI can both address existing challenges and further enhance game-play beyond what is currently possible. This part contains both generic suggestions and concrete illustrative examples. The overall goal of this paper is to point out possible avenues of AI in Games research in the underutilized domain of LARP.
\section{What is Live Action Role Play}
There is a range of LARP definitions which vary depending on what tradition they are from. Salen et~al.~\cite{salen2004rules} see LARP as a descendant of TRP that takes place in a real physical space, where people act out their characters and their actions. Particularly in the US and the UK, many early LARPs~\cite{harviainen2018live} were embodied games situated in fictional worlds based on either ``Dungeons and Dragons''~\cite{gygax1974dungeons,peterson2012playing} or ``Vampire the Masquerade''~\cite{rein1992vampire}. LARP also has similarities with Reenactment, but differs in that the outcome of events, such as famous battles, is less predetermined and more dependent on players' actions.
One might also explain LARP as an improvisational theater play in which one is playing a character, knows about the other characters and the world, but does not have a script to follow \cite{stark2012leaving}. This definition resonates with another LARP tradition, referred to as theater LARP, which generally focuses on character interaction and relationships, and is more lightweight on the rules. There are similarities to experimental theater, but players are active participants, play characters, and rely on a conflict resolution mechanic to determine what happens if two participants have conflicting ideas on how to progress the story. A classic example here is the murder mystery party, particularly a party in the later style where participants are assigned characters and become supporting players in each other's experience.
There are many other traditions and corresponding descriptions of what LARP is, see~\cite{harviainen2018live}. Central are usually a physical embodiment of at least some players as their character. Beyond that, there is a lot of diversity in existing LARP events nowadays. In the following section we discuss some properties of LARP, with a particular focus on those issues we think are relevant for AI facilitation in these games.
\subsection{Decomposition}
A relevant distinction between table-top and live role-play is the decomposition and realization of the different functions of the game master. In TRP a single game master acts as a storyteller, quest provider, information provider, arbiter of rules, world simulator, and often also host. Most of those functions operate on the fictional world that resides in the GM's mind. Players interface with this world verbally by stating their actions, and the results are relayed to them via speech. In LARP, particularly in larger ones, these different functions are performed by different crew members, other players, and the actual physical world. It is typical that the different members of a LARP organization team have different roles and responsibilities. While some crew members might be in charge of the game's plot, others might just be there to ``monster'' \cite{mitchell2019volunteers}, i.e. play opponents and non-player characters, without having an understanding of the overall game or plot. This decomposition introduces challenges, but is also an opportunity to solve different aspects of the AI game master problem separately. Fig.~\ref{fig:decompose} provides a diagram of the functions we now describe in detail. We should note that this is largely oriented towards big, entertainment-focused, UK-style LARPs, such as Empire. Other LARP traditions, such as Nordic LARP \cite{stenros2010nordic}, do have similar roles, but their functions or limitations might be slightly different.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{LARPchart.png}
\caption{Decomposition of different functions\,/\,roles for LARP organizers. Arrows denote flow of information. In tabletop role play all functions in the grey box are usually performed by the game master, a single person, and all interactions are mediated by a single, narrative interface. The interface in LARP is between each function and the players, and can take different forms. }
\label{fig:decompose}
\end{figure}
\subsubsection{World State} LARPs operate in a game world that is at least partly fictitious. Facts like who is dead, or who rules a given fictional nation, only exist in this fictional game state. Players can and do create their own collaborative fiction, which, unlike in TRP, is hard for the organizers to monitor, but is still part of the fictional world state.
\subsubsection{World Simulation} Roleplay can be described as a conversation between players and referees, where one side asks ``what do you do?'' and the other ``what happens?''. When players act in the world, the effect of their actions has to be determined. Unlike in TRP, these simulations are a collaborative effort between the players and the organizers of the LARP. Updating the world state is time-consuming and requires significant coordination and skill.
\subsubsection{Event Generation / Storytelling}
While there are LARP events that are purely driven by player actions, many feature a plot presented to the players in the form of events. In a TRP this might be realized by the game master prompting groups of players towards certain outcomes. In a LARP the process cannot be trivially individualized, which limits the ability of the GMs to cater to different player perspectives.
\subsubsection{Quest Generation} Another way to get players to act is to give them quests. While this is common in TRP, and is often woven into the play around the table, it is less common in LARP. By their very nature, LARPs are much more freeform, with players expected to act intrinsically. Story paths devised by players can, however, pose a challenge to the coherence of the game, as they might not be aligned with the overall scope.
\subsubsection{Information Provider} Another common role for non-player characters is to provide information about the world to the players. Since some of the game happens in a fictitious world that cannot be directly observed by the players, it is important to inform them about events or relevant facts. This in turn requires constant player briefing, which again limits the organizers' ability to help players experience the story effectively.
\subsubsection{Rules Arbitration} Most LARP events rely on an honor system in which players self-police their compliance with the game rules. For example, it is common that players have life points, count themselves how many they lose in any given fight, and determine themselves when they die. Nevertheless, this requires players to constantly monitor and report their own state and the state of other players. As we will see in the related work, there are also technical solutions to track such things.
\subsubsection{Hosting} In addition to managing the game's fiction, a LARP organizer is usually also responsible for hosting the event. This might include ensuring that there are adequate sleeping, food and hygiene arrangements, and that everyone is feeling safe and comfortable. This creates obligations to ensure a space that is free of dangers, harassment, etc. As with other games, it should be possible to withdraw from the game at any point without any negative real world consequences.
\subsection{Coherence}
\label{sec:coherence}
The size of LARP events usually forces organizers to split the previously mentioned roles across different people, who are often physically separated. So, while in an ideal world every meaningful player action would update the world state, and this update would be communicated to all relevant parties, that level of responsiveness is often not feasible. Additionally, non-player characters might lack certain pieces of information, and information might be misremembered by players. Because there is no objective external reality against which this information can be verified, wrong information propagates more within the LARP than it would in real life. For example, a player who hears a rumour that the sky is red cannot simply glance up and disprove it. Taking into account that a LARP has locality, i.e. the acting persons are usually not in the same location, this decoherence is to a certain extent also necessary in order to properly represent the game world. If an important piece of new information is created in one location of the LARP game world, it may travel by means of gossip to more distant participants, but it would be unrealistic to distribute it instantly to all players, even if that were technically possible.
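This delayed, gossip-based spread of information can be modeled explicitly. Below is a minimal sketch, with the camp layout and location names invented for illustration, in which a rumor spreads one hop per time step along a graph of adjacent locations:

```python
# Minimal gossip-propagation model: a rumor spreads one hop per tick
# along the location graph, so distant players hear it later.
# The camp layout and names are illustrative, not from any real LARP.

def propagate_rumor(adjacency, origin, ticks):
    """Return the set of locations that have heard the rumor after
    `ticks` steps, spreading one hop per step from `origin`."""
    heard = {origin}
    for _ in range(ticks):
        newly = {nbr for loc in heard for nbr in adjacency.get(loc, ())}
        heard |= newly
    return heard

# A toy camp layout: tavern - market - docks - ship
camp = {
    "tavern": ["market"],
    "market": ["tavern", "docks"],
    "docks": ["market", "ship"],
    "ship": ["docks"],
}

print(propagate_rumor(camp, "tavern", 1))  # rumor has reached the market
print(propagate_rumor(camp, "tavern", 3))  # rumor has reached the ship
```

Deliberately delaying distribution in this way would let organizers simulate realistic information travel rather than broadcasting every update instantly.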
Communication technology can address many of these issues, and some LARPs do rely on radios, wifi and databases to combat these problems. AI can come into play here and help monitor certain aspects of the players' LARP ``life'', so as to deliver a coherent picture to every single player.
\subsection{Physicality} One defining characteristic of LARP, in contrast to TRP, is that at least some of its play is carried out in the physical world. This makes certain things easier: actions are sometimes easier to perform than to describe, and certain elements of the world are more easily ``simulated'' in the real world than in someone's mind. For example, a physical sword fight in a LARP is usually quicker than a similar, simulated fight in Dungeons and Dragons \cite{gygax1974dungeons}. The challenge for AIs here is to operate on potentially faulty, incoherent and incomplete information, and then affect the physical world. Several challenges from the domain of human-robot and human-computer interaction also arise naturally --- such as ad-hoc negotiation of collaboration or cooperation in a physical space.
\subsection{Scalability}
A lot of bottlenecks concern the question of how the different parts of the organization exchange information and keep the game coherent. Scaling up a LARP often means adding more people for those roles, or splitting the roles among more people, and hence increases the challenge of keeping everyone in sync. In addition, scalability is an issue for generating quests or meaningful game content for more and more players. LARPs often feature a similar amount of plot regardless of size, resulting in a sparsity of things to do the more players there are around. This is often related to the problem of content creation. Things like background stories and world information usually have to be written, or at least reviewed, by the organizers, and care has to be taken that all those stories are consistent with each other.
\subsection{Immersion} Another characteristic of LARP is immersion, i.e. creating the feeling of genuinely being there or experiencing what you are playing. A popular goal for LARP is the so-called 360 immersion \cite{koljonen2007eye}, the idea that everything around you conforms to the scenario, so that what you see and feel would be similar to what your character would see and feel. In a fantasy LARP this could mean banning drinking cans, making sure costumes look, or even are, authentic, and that there are no visible modern buildings in sight. AI devices can help by making this experience feel more authentic (e.g. spirits bound in trees can be virtual agents --- more on this later).
\subsection{Robustness}
In a lot of LARPs, particularly those with less commercial roots \cite{harviainen2018live}, there is an understanding among players that they are not just consumers of an experience, but are actively helping to create the same experience for others. Players are usually willing to overlook minor problems and help to make LARPs work. This comes both in the form of being willing to adapt and interpret inconsistent clues, and in a willingness to improvise to fill the gaps. This gives LARPs a certain degree of robustness, in contrast to e.g. computer RPGs, where an AI player or AI story must work, otherwise the game will crash. In LARP, players might just fix minor problems.
\section{Related Work}
\label{sec:related}
We will now look at a selection of existing LARP and LARP-like experiences. The point we try to make with these examples is as follows: There are already examples of technology facilitation in LARP. Technology can make LARP easier to organize and run, and enable interaction not possible without said technology. This provides an interface, and even existing world representations, that common AI in Games approaches could build upon.
\subsection{Existing LARPs and related projects}
\label{sec:existing}
\subsubsection{Wing and a Prayer}
is a LARP based on the experience of the Women's Auxiliary Air Force (or WAAF) during World War 2, supporting the Royal Air Force \cite{ianthomasWaPStructure}. Female players interact with a realistic radar set-up and data, and the outcome of simulated battles depends on the quality of the instructions they give to pilots, played by male players. In this LARP, a computer simulation provides a more mimetic experience of wartime conditions than would otherwise have been possible. It is an existing example of how the complexity of world simulation can be offloaded onto a computer, and how a system can be designed that allows players to interact with such a digital representation without losing their immersion. This use of technology to mimetically portray information-giving devices mirrors one found in many escape rooms \cite{nicholson2015peeking}. Escape rooms are experiences designed for a small number of players (typically 4-8 at a time) over a shorter period of time (a 60 minute time limit is common) and with a heavy reliance on props to enforce all the rules of the game. In that context, it is common to find encryption devices, hackable computers, and similar real or simulated technology.
\subsubsection{Bad News} is a theatrical game where a single player interacts with an actor, who portrays different characters from a procedurally generated American town, based on instructions from a computer \cite{Samuel2016}. While the physical interaction is limited to embodied conversation, this project illustrates how procedural generation can be used to generate a world state, and how an interface between a digital representation and an actor can be used to produce a range of characters that are consistent with the world model.
\subsubsection{Empire} is a UK-based fantasy LARP with hundreds of players\footnote{\url{https://www.profounddecisions.co.uk/empire-wiki/Main_Page}}. Interesting to us is the fact that Empire maintains a database that stores a range of information for every player character. Between events, players can use an online interface to decide how they want to use their assets. They can send their warrior bands to support certain military conflicts, or send their ships to trade with specific foreign nations. Before each actual physical event, those inputs are processed, and as a result players are given information and resources for the actual event. Players might also perform actions during the event that affect this database, such as performing a ritual enhancing the yield of their farm, or using a cleric to inscribe a word on their soul\footnote{\url{https://www.profounddecisions.co.uk/empire-wiki/Testimony}}. Empire is a typical example of how a data back-end can be used to enhance gameplay, and how it can be interfaced with the players, both during play and in between sessions.
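We do not know how Empire's back-end is actually implemented; the following is a purely hypothetical sketch of the kind of between-events processing such a character database could perform, with all asset types, orders and yields invented for illustration:

```python
# Hypothetical between-events resolution for a character database:
# each queued downtime order is turned into resources handed to the
# player at the next event. Assets, orders and yields are invented.

CHARACTERS = {
    "aldric": {"assets": {"farm": 2, "warband": 1}, "orders": []},
}

YIELDS = {"farm": ("grain", 10), "warband": ("glory", 1)}

def submit_order(name, asset, action):
    """Record a downtime order made through the online interface."""
    CHARACTERS[name]["orders"].append((asset, action))

def resolve_downtime(name):
    """Process queued orders and return the resource pack handed to
    the player at the start of the next event."""
    char = CHARACTERS[name]
    pack = {}
    for asset, action in char["orders"]:
        if action == "work" and asset in char["assets"]:
            resource, amount = YIELDS[asset]
            pack[resource] = pack.get(resource, 0) + amount * char["assets"][asset]
    char["orders"].clear()
    return pack

submit_order("aldric", "farm", "work")
print(resolve_downtime("aldric"))  # {'grain': 20}
```

In-game actions, such as the farm-enhancing ritual mentioned above, would correspond to further updates of the same character record during the event.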
\subsection{Technology}
Segura et al.~\cite{segura2017design} provide an overview of existing technology use in LARP, and propose a preliminary taxonomy. Existing technology has been used to simulate aspects of the world, track players, enable communication, etc. They also report a strong focus on the aesthetic experience. There are also several wearables \cite{Dagan2019,Segura2018,vanhee2018firefly} that were specifically designed for LARPs. They are all designed to be worn during play, and provide a range of game-play affordances. They allow for a display of character information, such as health or character affiliation, allowing the player to ``see'' a virtual fact that would be visible in the fiction, such as a character being hurt. These wearables also allow for a range of social affordances, such as healing a character from a ``power surge'' by touching their wearable. There is also a ``Technomancer Hoodie'' equipped with motion sensors that allows the wearer to ``cast'' a series of spells by making appropriate movements, which are then simulated with sounds and colored lights from the hoodie. There are also examples of apps \cite{segura2017design}, usually run on smart phones or similar devices, that are used to support LARP experiences.
In general, there is now quite a range of technologies that support and enhance the LARP experience, and several of them offer the opportunity to interface the player with an AI system. In this paper we particularly focus on the use of artificial intelligence to further enhance these approaches.
\section{Possible AI applications}
In this section we present some sketched-out examples of how existing, or conceivable, AI approaches, particularly those from the AI in Games domain \cite{YannakakisT15,yannakakis2018artificial}, could be applied to LARP. The goal is both to make organizing LARPs easier, by overcoming the previously outlined challenges, and to enhance gameplay in a way not possible without AI. We will make some assumptions about existing tech to facilitate the deployment of AI, and will try to relate this back to existing technology or prototypes. We will also try to provide examples first in a generic form, talking about a range of possibilities, and then specify them with more concrete illustrative examples.
\subsection{Conversational Agents}
There is currently tremendous interest in the AI community in building conversational agents, commonly known as chatbots. The technology is not yet mature to the point where general conversations can be had. There is an ongoing debate in the AI community as to whether more data or better algorithms are needed, but within a closed domain it seems reasonable results are achievable~\cite{gao2019neural}. But how could these chatbots be used in LARP?
One common interaction between players and referees is asking for rules clarification, such as ``Can I dodge epic damage?'' This is a role that could be filled by a straightforward question-answering system (e.g. see \cite{lan2019albert} for a modern take). As this is an ``out of game'' interaction, such a system could be deployed on a smart phone, or similar device, and there would be no need to simulate a character for that AI. According to our domain experts, this alone would free up a large chunk of the time spent by the organization team.
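To illustrate how little machinery a closed-domain rules clarifier needs, here is a sketch using simple word overlap against an invented FAQ; a deployed system would use a trained question-answering model such as the one cited above:

```python
# Minimal retrieval-style rules clarifier: return the FAQ answer whose
# stored question shares the most words with the player's query.
# The rules and answers below are invented for illustration.
import string

FAQ = {
    "can i dodge epic damage": "No, epic damage cannot be dodged.",
    "how many hits does armour give": "Light armour gives one extra hit.",
    "can a wizard wear armour": "Yes, but casting costs one extra mana.",
}

def answer(query):
    """Pick the FAQ entry with the largest word overlap with the query."""
    cleaned = query.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    best = max(FAQ, key=lambda q: len(words & set(q.split())))
    return FAQ[best]

print(answer("Can I dodge epic damage?"))
```

Even this toy retriever demonstrates the interaction pattern: the player asks in natural language, and the system answers without a referee being involved.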
Taking this a step further would be a conversational system that could provide ``in game'' information, such as ``Who is the current ruler of this place?'' In the most simple case, this could also just be a digital device with a question-answering system trained on a fixed text corpus detailing the background of the world. But there are opportunities for improvement here. First, if a digital representation of a world model exists, similar to the data back-end for the game Empire, the system could provide access to real-time information as it changes over the course of the game. This could then also be subject to restrictions, so that certain information is not accessible to the players. This has several advantages. While human NPCs can and often do perform the role of lore-giver, that task can become repetitive, and there is the risk of human error, with the result that different groups of players get different information from the lore-provider. There is also less danger of the AI improvising new world facts.
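A sketch of how such a restricted, live world model could sit behind the question-answering front end (the facts and clearance levels are invented for illustration):

```python
# Sketch of a queryable world state with per-fact visibility: the
# in-game oracle only reveals facts the asking player is cleared for,
# and answers change as organizers update the state. Facts invented.

WORLD = {}  # fact -> (value, visibility: "public" or "secret")

def set_fact(fact, value, visibility="public"):
    WORLD[fact] = (value, visibility)

def ask(fact, player_clearance="public"):
    if fact not in WORLD:
        return "The spirits are silent on this matter."
    value, visibility = WORLD[fact]
    if visibility == "secret" and player_clearance != "secret":
        return "That knowledge is hidden from you."
    return value

set_fact("ruler of Highmarch", "Duchess Mira")
set_fact("true heir", "the stable boy", visibility="secret")

print(ask("ruler of Highmarch"))             # Duchess Mira
print(ask("true heir"))                      # hidden from the asker
set_fact("ruler of Highmarch", "Lord Kest")  # live mid-game update
print(ask("ruler of Highmarch"))             # Lord Kest
```

Because every oracle answers from the same store, all groups of players receive consistent lore, addressing the human-error problem mentioned above.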
Secondly, one could take care to embed the conversational agent as an actual part of the game world. For example, the digital device could be hidden in a prop that makes it look like a talking magical book, to increase the immersion. To further the immersion, the conversational AI could be imbued with character traits that manifest in the way it speaks. Taking this even further would be to employ techniques that give an appearance of agency. Interactions with the AI could lead to changes in its mood, and the forms of interaction available might depend on past interactions. For example, the magical book might also tell certain secrets to people that were nice to it. Having the AI character adapt to past interactions could also help with a differentiation between several similar AI objects, which could have different relationships, or even different amounts of knowledge. Not all AIs might know the same world information, and they could even learn new information over time.
Chatbot-like digital games such as Event[0] (Ocelot Society), Don't Make Love (Maggese), and projects by LabLabLab\footnote{\url{https://www.lablablab.net/}} demonstrate some of the possibilities in this space, and some VR games using Alexa have pioneered the combination of chatbot effects with speech-to-text, as for instance in Starship Commander, where the player directs the starship using voice instructions \cite{KuelzStarship}. A similar command system could be deployed in the context of LARP, with the AI receiving voice commands and narrating information about what is happening next in some part of the game world. There is also increasing interest in the concept of ``virtual humans'', persistent non-player characters who might interact with a player via a combination of games, VR experiences, and social media. The technology developed to support such characters for marketing, theme park, and entertainment applications might also be suitable for deployment in LARPs.
Providing real influence to conversational AIs might also be an interesting and surprising way to increase the players' influence on the world. One of our interview practitioners suggested including AI gods, in the form of totems. Gods in LARP systems are often associated with specific ideals and rules, and as such could have distinct and clearly defined character traits and opinions. They could be embodied in a holy object, which would allow the players to talk to their respective deity. An additional boon here would be the fact that AIs could store their past interactions with a specific player, and reference back to those, even with months of time passing in between.
While unlimited, open-ended understanding of input is beyond the reach of current systems (though strides are currently being made in this direction~\cite{adiwardana2020towards}), the fictional context of prayer or divinatory question-asking might allow the game master to teach the players a reliable set of conventions for interacting with the conversational agent. Ritualised language use is already deployed in some table-top RPGs, such as Ben Lehman's Polaris: Chivalric Tragedy at Utmost North.
Initially, an AI conversationalist might just be seen as a way to provide information to the players, or to allow for some fun role-play opportunities, but it could later be revealed that these interactions and conversations have actual consequences. A god that learned bad information about one player from another in conversation might decide to punish them, triggering game mechanical consequences. Or certain deities might even change their nature based on player interaction with them. In either case, the fact that these conversational AIs could either communicate directly with the stored world state, or at least accurately store all their conversations for review between two events, would allow organizers to overcome some of the bottleneck issues with integrating player information back into the game.
\subsection{Embodied AI agents}
There are also opportunities to have the previously mentioned AI agents physically embodied in the world. Let us revisit the earlier example of a smart phone hidden in a talking magical book. It could use its GPS sensor to determine its current location, and then trigger certain interactions when it is carried into a certain area. It might, for example, say that a certain area has a lot of magical energy floating around, or that a lot of people died in a certain space. Precedent for such a system exists, for instance, in Nico Czaja's work \cite{nicoczaja} with xm:lab, creating phone-based narrators who tell an interactive story while the participant wanders through a space of real-world historical significance.
Similar interactions could be possible with the previously described wearables, which could take different roles in the fiction. There are already examples of tech that mimics the Pip-Boy from the Fallout game series, a fictional wearable that informs the player about their stats and warns them when they are entering a radioactive area. Adding conversational AI to this might turn it into a game companion that rides around on your shoulder. The locative aspect could then also be used to trigger salient character-based interaction, similar to the approach used in the computer game ``Heaven's Vault''\footnote{\url{https://www.gamasutra.com/blogs/JonIngold/20180822/325018/Ideas_and_dynamic_conversation_in_Heavens_Vault.php}}. There the fictional companion robot selects dialogue from a range of sentences, based on previous interactions, locations, and elements that are currently present in the scene.
\subsection{Drama Management: AI director}
How does one identify the time and circumstance for a pre-defined story beat? Narrative designers of conventional video games often use a system of storylets or quality-based narrative, in which story events are triggered whenever some pre-condition state is reached in the game world. They write individual moments that they want the player to experience at some point, and then allow the system to select when those moments are best presented during a particular player's playthrough. An academic survey of the uses and applications may be found in \cite{kreminski2018storylet}, and an overview intended for users in the video game industry can be found at \cite{horneman2017}.
LARP creators have written about building LARPs with similar gameplay beats in mind. Ian Thomas has written about starting with ``moments'' in his design for both ``All for One'', a Musketeer-themed LARP \cite{ianthomasAllforOne}, and ``God Rest Ye Merry'', a Christmas ghost story set in the 1950s \cite{ianthomasStuntToStory}. In the case of existing LARPs, it typically falls to human GMs to determine when the moment has arrived to deliver a story beat, and there is little room for last-minute customization. An AI system able to track key elements of the world state, however, would be able to select when to activate particular storylets, and potentially use grounding techniques similar to those used in video games to fill in elements of the delivery, customizing the story moment to the exact parameters that allowed it to be fired off.
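The selection logic at the heart of such a storylet system is compact. A minimal sketch, with the storylets and world-state keys invented for illustration, expressing preconditions as predicates over a world-state dictionary:

```python
# Minimal storylet / quality-based narrative selector: each storylet
# carries a precondition over the world state and fires at most once.
# The storylets and state keys are invented for illustration.

class Storylet:
    def __init__(self, name, precondition):
        self.name, self.precondition, self.fired = name, precondition, False

    def ready(self, state):
        return not self.fired and self.precondition(state)

def select_storylets(storylets, state):
    """Return (and mark as fired) all storylets whose preconditions hold."""
    ready = [s for s in storylets if s.ready(state)]
    for s in ready:
        s.fired = True
    return [s.name for s in ready]

beats = [
    Storylet("ghost_appears", lambda s: s["bell_rung"] and s["night"]),
    Storylet("duel_challenge", lambda s: s["insults"] >= 3),
]

state = {"bell_rung": True, "night": False, "insults": 3}
print(select_storylets(beats, state))   # ['duel_challenge']
state["night"] = True
print(select_storylets(beats, state))   # ['ghost_appears']
```

In a LARP deployment, the state dictionary would be fed by the tracked world model, and a fired storylet would be dispatched to crew members for physical delivery.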
During play, the AI might also detect players who appear to be inactive or who have not recently made any game play discoveries. It might then trigger events to re-engage those players, send information or NPCs to their locations, or move combat in their direction.
An AI drama manager that also had profile data about player preferences, whether calculated or provided to the system by players themselves, could determine what types of re-engagement should be employed for specific players, and what type of pacing they might prefer. An adjustable system with an awareness of player profiles might help in making LARP more accessible to players with a wide range of ability, interest, and experience levels.
Work towards modeling players and their preferred storytelling experience has been done by \cite{ThueBulitko}, and towards the problem of storytelling for specific types of player by \cite{GervasPeinado}, but primarily in the context of video games or tabletop roleplaying rather than LARP.
\subsection{AI Content Generation}
Before a LARP is first run, depending on the setting, there is usually a phase where the organizers create a world and setting. This can involve writing fictional history, developing a fictional cosmology and theology, and designing existing characters and their relationships. There usually is no great need for AI support here, but systems like ``Bad News'' \cite{Samuel2016} or the Legends Mode of Dwarf Fortress \cite{DFwiki} demonstrate that it is possible to generate a setting and web of relationships based on certain historical settings. Other techniques for AI story generation are surveyed in \cite{Gervas09,kybartas2016survey}, and a more recent breakdown of the key challenges to be solved in this space can be found in \cite{Gervas19}.
There are two ways in which procedural content generation could help human designers. In a mixed-initiative co-creative approach, an AI system could produce fictional history or relationships, and a human designer could then select and refine. The system could also be used as a source of inspiration and ideas.
On a more practical level, an AI system could also provide more complexity after the rough brush strokes have been filled in by a human designer. This might be useful to engage players who prefer a more scholarly play style. For example, imagine the human designers have created a rough world design, with major historical events, places and characters defined. An AI system could then fill in the gaps to create smaller places and additional characters. This result could then, for instance, be piped into a narrative generator that creates travel diaries of a minor historical character visiting these places \cite{ShortParrigues}. A human designer might then hide a few connected bits of information that could be combined to gain some important insight into something related to the event's overarching plot. The end result could be a full book containing a range of stories about the game world, which could also afford ``academic research'' game-play, where players would study the book in detail to hunt for those bits of highly relevant information. Story generation techniques with an awareness of level-of-detail might prove valuable in this context~\cite{Flores2017LevelOD}.
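One practical requirement for such gap filling is that regenerated content must always match what was already printed in player-facing books. A sketch (place-name fragments invented for illustration) that achieves this by seeding the generator on the route between two hand-authored places:

```python
# Sketch of level-of-detail gap filling: given hand-authored major
# places, deterministically generate minor settlements between them,
# so every rerun of the generator agrees with previously printed
# material. The name fragments below are invented for illustration.
import random

def minor_places_between(a, b, count=3):
    rng = random.Random(f"{a}->{b}")  # seed on the route: stable output
    prefixes = ["Oak", "Ash", "Stone", "Mill", "Fen"]
    suffixes = ["ford", "stead", "bury", "wick"]
    return [rng.choice(prefixes) + rng.choice(suffixes) for _ in range(count)]

route = minor_places_between("Highmarch", "Port Vell")
print(route)
```

The same seeding trick extends to generated characters and diary entries, keeping a whole library of derived content internally consistent.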
Once a system like this was set up, it would be easily scalable; in theory one could provide a whole library's worth of research that is both related to the world and allows for relevant research to be played out. A human designer usually cannot provide this amount of content due to time and labor constraints. In general, this could help to alleviate the problem of how to provide adaptive resolution in a physical setting. In a TRP a player might walk into a library, grab a random book, open it and start reading. The game master can then, on the fly, come up with additional content for that book. But building a library for players to explore in the physical world is much harder, as it requires producing that kind of complexity beforehand.
A similar issue is puzzle design. The procedural design of puzzles for point-and-click adventures is explored in \cite{FernandezVara}. A more physically-grounded variation on this idea might also be possible to deploy in a LARP environment.
\subsection{AI Story Hooks}
Another opportunity for AI content generation is to produce story hooks or quests for players, providing for more micro-questing in larger LARPs. As previously discussed, there is not necessarily a reason to evaluate the success or failure of a quest; already providing a goal could lead to the desired outcome of more interaction and role play.
There are already existing approaches to automatic quest generation, usually looking at existing NPCs, their role in the world ontology, and their desires \cite{Kybartas2014,lee2012dynamic,ashmore2007quest}. These tools could easily be modified to provide a range of quests to players, if a database representation of the world and the players exists. These quests could then be offered to players in between events, as part of their event briefing package, or even prior to the event in digital form. This would allow the players to accept and reject certain types of quests, which could enable player modeling and more personalized quest generation. A player might, for example, not be interested in performing any illegal activity, as it clashes with their character concept.
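As a minimal illustration of the preference filter, a sketch (quest templates and tags invented) that withholds quests whose tags clash with a player's stated exclusions:

```python
# Sketch of template-based quest offering filtered by player
# preferences: quests whose tags clash with a player's stated
# exclusions are never offered. Templates and tags are invented.

QUEST_TEMPLATES = [
    {"text": "Steal the ledger from the moneylender", "tags": {"crime", "stealth"}},
    {"text": "Escort the pilgrim to the shrine", "tags": {"travel", "protection"}},
    {"text": "Broker peace between two feuding houses", "tags": {"diplomacy"}},
]

def offer_quests(player_exclusions):
    """Return quest texts whose tags avoid the player's exclusions."""
    return [q["text"] for q in QUEST_TEMPLATES
            if not (q["tags"] & player_exclusions)]

# A player whose character concept rules out illegal activity:
print(offer_quests({"crime"}))
```

Accept/reject decisions recorded between events could be folded back into the exclusion sets, giving the player model described above.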
Another way to approach this problem would be the generation of story hooks rather than quests. One way to provide narratives for computer games is to create a range of characters and associated conditions and then run a simulation to see what happens \cite{meehan1977tale,kybartas2016survey}. This approach has been quite popular for creating murder mystery plots \cite{stockdale2016cluegen,jaschek2019mysterious,mohr2018eliminating}, where agents are simulated until a murder happens, which can then be reconstructed by the player. We might do something similar in LARP, to create investigative mysteries, possibly with game-relevant information, such as involving characters the players previously encountered. But there is also another application. We could model the player characters as virtual agents, assign them goals and resources, and simulate what would happen. After running multiple simulations we could then select a set of starting conditions that looked like it created an interesting story. Here we would not explicitly create a narrative, but rather the conditions that could potentially lead to one. For example, one character might be given the goal of finding a suitable partner for one of his sibling's sons, while another player might be given the corresponding quest of finding a suitable match for a daughter. These story hooks could be customized in several ways to fit the given character, adapting to existing family situations, preferences for certain kinds of roleplay, character traits, etc. Simulating what could happen could also help to design story hooks with a certain degree of redundancy and robustness. Overall, this could provide additional hooks for players to inspire their roleplay, and could fill in some of the sparsity of plot in larger LARP events.
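The select-by-simulation loop described above could be sketched as follows; the toy simulation, goals and interest measure are all invented, and a real system would model far richer agent behavior:

```python
# Sketch of simulation-selected story hooks: run many seeded agent
# simulations of goal conflicts, score each outcome for "interest",
# and keep the starting conditions of the best run. All invented.
import random

def simulate(seed, goals):
    """Toy simulation: each tick, sample two characters; if their
    goals clash, record a conflict. Return the conflict list."""
    rng = random.Random(seed)
    conflicts = []
    names = list(goals)
    for _ in range(5):
        a, b = rng.sample(names, 2)
        if goals[a] != goals[b]:
            conflicts.append((a, b))
    return conflicts

def best_seed(goals, tries=20):
    """Pick the seed whose run produced the most conflicts, a crude
    proxy for an interesting story."""
    return max(range(tries), key=lambda s: len(simulate(s, goals)))

goals = {"Ada": "marry_off_nephew", "Bren": "find_match_for_daughter",
         "Cole": "marry_off_nephew"}
seed = best_seed(goals)
print(seed, simulate(seed, goals))
```

Only the winning starting conditions would be handed to players as hooks; how the resulting story actually unfolds stays entirely in their hands.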
\subsection{AI Aided World Simulation}
A big element and defining characteristic of LARP is how the player interacts with the real and the game world. Interaction with the fictional world usually requires a referee - and as such provides a bottleneck that designers try to minimize. AI could alleviate this by providing part of this complex, fictitious world simulation. One example here is a spaceship LARP, where players' interactions with various ship systems, such as engineering, navigation, piloting, and resource control, could be tracked, integrated, and fed back to them in real time. AI simulators are able to integrate large amounts of data and their effects, and even provide this preprocessed information to the organizers to reflect the collective player agency.
Another example is the use of AIs to evaluate player rituals. In UK LARP, ritual magic is often performance-like and open-ended, requiring a referee to determine how the intent of the players is translated into a game world effect. This evaluation is time consuming - to illustrate, a system like ``Guild of Darkness'' has one crew member whose full-time responsibility is to evaluate rituals\footnote{\url{https://sites.google.com/site/guildofdarknesswiki/magic}}. Evaluation is based on performance and intent, and is in part dependent on some (hidden) world rules - which the referee has to apply consistently, so players can learn more about the world.
Part of this process could be automated by a neural network looking at the movements performed in the ritual. A system like Wekinator \cite{fiebrink2010wekinator} can be trained to take a sensory input, such as an image, and assign a sound or output value. This could be used to design a system where certain moods or elements need to be triggered at given stages (based on the desired effect), and the quality of the ritual would depend on how well these cues were hit. This would foreground the AI system more, and require some game design around it, but has some advantages. Performing magic in a ritual is usually an attempt to understand and please an arcane and unknown system. Having this simulated by an AI would give the system a consistency that a human cannot provide, and would allow for a nearly arbitrary scaling up of the underlying complexity. So there could be a process of players getting successively better at understanding and triggering this system - allowing for role-play of actual skill development. It might even allow for differentiation, where different players are better at different types of rituals.
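A toy stand-in for such a trained mapping is a nearest-neighbour regressor from hand-picked movement features to a ritual quality score; the features, labels, and training data below are invented, and Wekinator itself wraps more capable models:

```python
import math

# Minimal stand-in for a Wekinator-style mapping: training examples pair a
# movement-feature vector (e.g. pose or speed summaries from a sensor) with
# a target ritual "quality". New performances are scored by their nearest
# training example (1-NN regression). All numbers here are made up.

TRAINING = [
    ((0.9, 0.1, 0.8), 1.0),   # slow, sweeping, symmetric -> strong ritual
    ((0.2, 0.9, 0.1), 0.2),   # fast, jerky               -> weak ritual
    ((0.5, 0.5, 0.5), 0.6),
]

def ritual_quality(features):
    """Score a performance by its nearest labelled example."""
    _, label = min(TRAINING, key=lambda pair: math.dist(pair[0], features))
    return label
```

The hidden consistency discussed above comes for free: the same performance always maps to the same score, and the underlying rule set can be made arbitrarily intricate by adding training examples.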
This idea could even be taken further - by designing a magic system directly around it. Imagine a system with a specific magic site where players can perform movements in front of a camera, or other sensors, camouflaged as a mythical artifact. Specific inputs could be assigned to words, and as players perform certain movements they could discover how to trigger these words. Part of the game-play could then be the discovery and refinement of the inputs that create these words. There should be some feedback, such as the mythical artifact uttering the triggered words, or even some indication once players get closer. Ultimately, the idea would be that those words have to be formed into sentences, which would then allow the players to talk to some entity. This could then be the organizers, or even one of the previously discussed god AI NPCs.
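A sketch of the artifact's feedback loop, with invented gesture codes and words (a deployed system would score raw sensor data rather than symbolic gestures):

```python
# Hypothetical sketch of the "mythical artifact" interface above: each
# secret word is triggered by a short gesture code, and the artifact gives
# graded feedback as players get closer. Codes and words are invented.

SECRET_CODES = {
    ("circle", "raise", "bow"): "open",
    ("raise", "raise", "clap"): "speak",
}

def closeness(attempt, code):
    """Fraction of positions where the attempted gesture matches the code."""
    hits = sum(a == c for a, c in zip(attempt, code))
    return hits / max(len(code), len(attempt))

def artifact_response(attempt):
    best_word, best = None, 0.0
    for code, word in SECRET_CODES.items():
        score = closeness(tuple(attempt), code)
        if score > best:
            best_word, best = word, score
    if best == 1.0:
        return best_word               # the artifact utters the word
    if best >= 0.5:
        return "...a faint hum..."     # indication players are getting closer
    return None
```

The graded "faint hum" response is what makes discovery play possible: players can refine a near-miss instead of searching blindly.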
\subsection{Super LARP and Information Spread}
\label{sec:superlarp}
Once AI support and AI characters allow for better information transfer, information integration, and game cohesion, more ambitious projects become feasible, such as games coordinated over different locations or times. Next to the scalability issue, the most important problem for such an endeavor seems to be the organization of the information flow, which ties in with the consistency issue. Even for a large LARP as run nowadays, there is already a physical distribution of actors and information, and this could be accounted for by applying epidemic protocols for gradual information flow in distributed databases, as suggested in \cite{Demers1987}. This type of information distribution is also called gossip communication~\cite{Jelasity2011} and models very well how news spreads in a distributed human society. Once such protocols were established, different locations could play out organizations or events that mostly interact through information sharing, negotiation, or how they affect a joint world.
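A toy push-gossip simulation conveys the flavour of such protocols (this is not the anti-entropy machinery of \cite{Demers1987}, just the simplest rumour-spreading variant): each round, every informed site forwards the news to one random peer.

```python
import random

# Toy push-gossip simulation: site 0 originates a piece of news, and in
# each round every informed site pushes it to one uniformly random peer.
# Returns the number of rounds until all sites are informed.

def gossip_rounds(n_sites, seed=0):
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n_sites:
        for site in list(informed):
            informed.add(rng.randrange(n_sites))
        rounds += 1
    return rounds

print(gossip_rounds(64))
```

The number of rounds grows roughly logarithmically in the number of sites, which is what makes gossip attractive for loosely coupled LARP locations.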
\section{Conclusion}
In summary, we believe that LARP is a domain well suited for the application of AI and AI and Games techniques. The existing approaches listed above demonstrate this, and the speculative examples show a range of relatively straight-forward extensions of existing AI and Games techniques that would be suitable for LARP. Doing so could overcome several existing challenges for LARP organizers, such as scalability and content generation issues. It could also provide for new forms of play that would not be possible without AI. LARP also provides an interesting test-bed for AI applications, particularly those that want to explore the interface between humans and AI, or how AI can interact with the physical world. Here the robustness of LARP, caused by the willingness of the participants to correct errors on the fly, could prove valuable for researchers. In general, AI in LARP research offers several unexplored opportunities, both to enhance the experience of players, and to explore the limitations and challenges of AI.
\section{Acknowledgments}
This paper is in large part based on a discussion group at the Dagstuhl Seminar 19511, ``Artificial and Computational Intelligence in Games: Revolutions in Computational Game AI''. We are also grateful to Jonathan Henderson and Ben Crossley, both of them UK based LARP organizers, whom we interviewed to sanity check our assertions in this paper.
\bibliographystyle{IEEEtran}
\section{Introduction}
Let $\Omega\subseteq \mathbb C^n$ be a bounded domain. Let $dV$ denote the Lebesgue measure on $\mathbb C^n$. The Bergman projection $P$ is the orthogonal projection from $L^2(\Omega)$ onto the Bergman space $A^2(\Omega)$, the space of all square-integrable holomorphic functions.
Associated with $P$, there is a unique function $K_\Omega$ on $\Omega\times\Omega$ such that for any $f\in L^2(\Omega)$:
\begin{equation}
P(f)(z)=\int_{\Omega}K_\Omega(z;\bar w)f(w)dV(w).
\end{equation}
Let $P^+$ denote the positive Bergman operator defined by:
\begin{equation}
P^+(f)(z):=\int_{\Omega}|K_\Omega(z;\bar w)|f(w)dV(w).
\end{equation}
A question of importance in analytic function theory and harmonic analysis is to understand the boundedness of $P$ or $P^+$ on the space $L^p(\Omega, \sigma dV)$, where $\sigma$ is some non-negative locally integrable function on $\Omega$.
In \cite{BB78,Bekolle}, Bekoll\'e and Bonami established the following for $P$ and $P^+$ on the unit ball $\mathbb B_n\subseteq \mathbb C^n$:
\begin{thm}[Bekoll\'e-Bonami]
Let $T_z$ denote the Carleson tent over $z$ in $\mathbb B_n\subseteq \mathbb C^n$ defined as below:
\begin{itemize}
\item $T_z:=\left\{w\in \mathbb B_n:\left|1-\bar w\frac{z}{|z|}\right|<1-|z|\right\}$ for $z\neq 0$, and
\item $T_z:= \mathbb B_n$ for $z=0$.
\end{itemize} Let the weight $\sigma$ be a positive, locally integrable function on $\mathbb B_n$. Let $1<p<\infty$. Then the following conditions are equivalent:
\begin{enumerate}[label=\textnormal{(\arabic*)}]
\item $P:L^p(\mathbb B_n,\sigma)\mapsto L^p(\mathbb B_n,\sigma)$ is bounded;
\item $P^+:L^p(\mathbb B _n,\sigma)\mapsto L^p(\mathbb B_n,\sigma)$ is bounded;
\item The Bekoll\'e-Bonami constant $\mathcal B_p(\sigma)$ is finite where: $$\mathcal B_p(\sigma):=\sup_{z\in \mathbb B_n}\frac{\int_{T_z}\sigma(w) dV(w)}{\int_{T_z}dV(w)}\left(\frac{\int_{T_z}\sigma^{-\frac{1}{p-1}} (w)dV(w)}{\int_{T_z}dV(w)}\right)^{p-1}.$$
\end{enumerate}
\end{thm}
In \cite{HWW}, we generalized Bekoll\'e and Bonami's result to a wide class of pseudoconvex domains of finite type. To do so, we combined the methods of Bekoll\'e \cite{Bekolle} with those of McNeal \cite{McNeal2003}. This method of proof is qualitative, showing that the Bekoll\'e-Bonami class is sufficient for the weighted inequality of the projection to hold on those domains, and also necessary if the domain is strictly pseudoconvex. However, the method of good-lambda inequalities in \cite{Bekolle} seems unlikely to give optimal estimates for the norm of the Bergman projection. In this paper, we address the quantitative side of this question using sparse domination techniques.
Motivated by recent developments on the $A_2$-Conjecture by Hyt\"onen \cite{Hytonen} for singular integrals in the setting of Muckenhoupt weighted $L^p$ spaces, people have made progress on the dependence of the operator norm $\|P\|_{L^p(\mathbb B_n,\sigma)}$ on $\mathcal B_{p}(\sigma)$. In \cite{Pott}, Pott and Reguera gave a weighted $L^p$ estimate for the Bergman projection on the upper half plane. Their estimates are in terms of the Bekoll\'e-Bonami constant and the upper bound is sharp. Later, Rahm, Tchoundja, and Wick \cite{Rahm} generalized the results of Pott and Reguera to the unit ball case, and also obtained estimates for the Berezin transform. Weighted norm estimates of the Bergman projection have also been obtained \cite{ZhenghuiWick2} on the Hartogs triangle.
We use the known estimates of the Bergman kernel in \cite{Fefferman,Monvel,NRSW3,McNeal1,McNeal2, McNeal91} to establish the Bekoll\'e-Bonami type estimates for the Bergman projection on some classes of finite type domains. By finite type we mean that the D'Angelo 1-type \cite{D'Angelo82} is finite. The domains of finite type we focus on are:
\begin{enumerate}
\item domains of finite type in $\mathbb C^2$;
\item strictly pseudoconvex domains with smooth boundary in $\mathbb C^n$;
\item convex domains of finite type in $\mathbb C^n$;
\item decoupled domain of finite type in $\mathbb C^n$.
\end{enumerate}
Given functions of several variables $f$ and $g$, we use $f\lesssim g$ to denote that $f\leq Cg$ for a constant $C$. If $f\lesssim g$ and $g\lesssim f$, then we say $f$ is comparable to $g$ and write $f\approx g$.
The main result obtained in this paper is:
\begin{thm}
\label{t:main}
Let $1<p<\infty$, and $p'$ denote the H\"older conjugate to $p$.
Let $\sigma(z)$ be a positive, locally integrable function on $\Omega$. Set $\nu=\sigma^{-p^\prime/p}(z)$.
Then the Bergman projection $P$ satisfies the following norm estimate on the weighted space $L^p(\Omega,\sigma)$:
\begin{equation}\label{1.1}
\|P\|_{L^p(\Omega,\sigma)}\leq \|P^+\|_{L^p(\Omega,\sigma )}\lesssim [\sigma]_p,
\end{equation}
where $[\sigma]_p=\left(\langle\sigma \rangle^{dV}_{\Omega}\left(\langle \nu\rangle^{dV}_{\Omega}\right)^{p-1}\right)^{1/p}+pp^\prime\left(\sup_{\epsilon_0>\delta>0, z\in \mathbf b\Omega}\langle\sigma \rangle^{dV}_{B^\#(z,\delta)}\left(\langle \nu\rangle^{dV}_{B^\#(z,\delta)}\right)^{p-1}\right)^{\max \{1,\frac{1}{p-1}\}}.$
\end{thm}
The tent $B^\#(z,\delta)$ above is slightly different from the tent we use in \cite{HWW} in order to fit in the machinery of dyadic harmonic analysis. These two tents are essentially equivalent. The construction of $B^\#(z,\delta)$ uses the existence of the projection map onto $\mathbf b\Omega$ which is defined in a small tubular neighborhood of $\mathbf b\Omega$. Hence the restriction $\delta<\epsilon_0$ is needed to make sure that $B^\#(z,\delta)$ is inside the tubular neighborhood. See Lemma \ref{lem3.2} and Definition \ref{de3.3} in Section 3. For the detailed definition of the constant $[\sigma]_p$ and its connection with the Bekoll\'e-Bonami constant $\mathcal B_p(\sigma)$, see Definition \ref{de3.4} and Remark \ref{Re3.5}. We provide a sharp example for the upper bound above. See Section 6.
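As a sanity check on the constant $[\sigma]_p$ (not needed elsewhere in the paper), note that for the trivial weight $\sigma\equiv 1$ one has $\nu\equiv 1$, every average $\langle\sigma\rangle^{dV}$ and $\langle\nu\rangle^{dV}$ equals $1$, and the constant collapses to
\[
[1]_p=\left(1\cdot 1^{p-1}\right)^{1/p}+pp^\prime\left(1\cdot 1^{p-1}\right)^{\max\{1,\frac{1}{p-1}\}}=1+pp^\prime,
\]
which is in line with the growth of the unweighted $L^p$ norm of the Bergman projection as $p\to 1$ or $p\to\infty$.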
Using the asymptotic expansion of the Bergman kernel on a strictly pseudoconvex domain \cite{Fefferman, Monvel}, we showed in \cite{HWW} that when $\Omega$ is smooth, bounded, and strictly pseudoconvex, the boundedness of the Bergman projection $P$ on the weighted space $L^p(\Omega,\sigma)$ implies that the weight $\sigma$ is in the $\mathcal B_p$ class. Here we also provide the corresponding quantitative result, giving a lower bound of the weighted norm of $P$:
\begin{thm} \label{t:main1}
Let $\Omega$ be a smooth, bounded, strictly pseudoconvex domain. Let $1<p<\infty$, and $p'$ denote the H\"older conjugate to $p$.
Let $\sigma(z)$ be a positive, locally integrable function on $\Omega$. Set $\nu=\sigma^{-p^\prime/p}(z)$. Suppose the projection $P$ is bounded on $L^p(\Omega,\sigma)$. Then we have
\begin{equation}\label{1.2}
\left(\sup_{\epsilon_0>\delta>0, z\in \mathbf b\Omega}\langle\sigma \rangle^{dV}_{B^\#(z,\delta)}\left(\langle \nu\rangle^{dV}_{B^\#(z,\delta)}\right)^{p-1}\right)^{\frac{1}{2p}}\lesssim\|P\|_{L^p(\Omega,\sigma)}.
\end{equation}
If $\Omega$ is also Reinhardt, then
\begin{equation}\label{1.3}
\left(\mathcal B_p(\sigma)\right)^{\frac{1}{2p}}\lesssim\|P\|_{L^p(\Omega,\sigma)},
\end{equation}
where $\mathcal B_p(\sigma)=\max{\left\{\langle\sigma \rangle^{dV}_{\Omega}\left(\langle \nu\rangle^{dV}_{\Omega}\right)^{p-1}, \;\;\sup_{\epsilon_0>\delta>0, z\in \mathbf b\Omega}\langle\sigma \rangle^{dV}_{B^\#(z,\delta)}\left(\langle \nu\rangle^{dV}_{B^\#(z,\delta)}\right)^{p-1}\right\}}.$
\end{thm}
When $\Omega$ is the unit ball in $\mathbb C^n$, the estimate (\ref{1.3}) was obtained in \cite{Rahm}. When $\Omega$ is smooth, bounded, and strictly pseudoconvex, it was proven in \cite{HWW} that if $P$ is bounded on $L^p(\Omega,\sigma)$, then the constant $\mathcal B_p(\sigma)$ is finite. It remains unclear to us how, for a general strictly pseudoconvex domain $\Omega$, the norm $\|P\|_{L^p(\Omega,\sigma)}$ dominates the constant $\langle\sigma \rangle^{dV}_{\Omega}\left(\langle \nu\rangle^{dV}_{\Omega}\right)^{p-1}$.
The approach we employ in this paper is similar to the ones in \cite{Pott} and \cite{Rahm}. To prove (\ref{1.1}), we show that $P$ and $P^+$ are controlled by a positive dyadic operator. Then an analysis of the weighted $L^p$ norm of the dyadic operator yields the desired estimate. The construction of the dyadic operator uses a doubling quasi-metric on the boundary $\mathbf b\Omega$ of the domain $\Omega$ and a result of Hyt\"onen and Kairema \cite{HK}. For the domains we consider, estimates of the Bergman kernel function in terms of those quasi-metrics are known so that a domination of the Bergman projection by the dyadic operator is possible. There are other domains where estimates for the Bergman kernel function are known. We just focus on the above four cases and do not attempt to obtain the most general result.
The paper is organized as follows: In Section 2, we recall the definitions and known results concerning the non-isotropic metrics and balls on the boundary of the domain. In Section 3, we give the definition of tents and the dyadic tent structure based on the non-isotropic balls in Section 2. In Section 4, we recall known estimates for the Bergman kernel function, and prove a pointwise domination of the (positive) Bergman kernel function by a positive dyadic kernel. In Section 5, we prove Theorem \ref{t:main}. In Section 6, we provide a sharp example for the upper bound in Theorem \ref{t:main}. In Section 7, we prove Theorem \ref{t:main1}. In Section 8, we provide an additional (unweighted) application of the pointwise dyadic domination to show the Bergman projection is weak-type $(1,1)$. We point out some directions for generalization in Section 9.
\section*{Acknowledgment}
B. D. Wick's research is partially supported by National Science Foundation grants DMS \# 1560955 and DMS \# 1800057. N. A. Wagner's research is partially supported by National Science Foundation grant DGE \#1745038.
We would like to thank John D'Angelo, Siqi Fu, Ming Xiao, and Yuan Yuan for their suggestions and comments. We would also like to acknowledge the work of Chun Gan, Bingyang Hu, and Ilyas Khan, who independently obtained a result similar to the main one in this paper at around the same time (see \cite{Hu}).
\section{Non-isotropic balls on the boundary}
In this section, we recall various definitions of quasi-metrics and their associated balls on the boundary of $\Omega$. When $\Omega$ is of finite type in $\mathbb C^2$ or strictly pseudoconvex in $\mathbb C^n$, distances on the boundary can be well described using sub-Riemannian geometry. Properties and equivalence of these distances can be found in \cite{NSW1,NSW,Nagel,BaloghBonk}. For discussions about the sub-Riemannian geometry, see for example \cite{Bellaiche,Gromov1,Montgomery}.
For convex or decoupled domains of finite type in $\mathbb C^n$, the boundary geometry could be more complicated. We use quasi-metrics in \cite{McNeal2,McNeal3,McNeal91}. In fact, all four classes of domains we consider in this paper can be referred to as the so-called ``simple domains'' in \cite{McNeal2003}. There it has been shown that estimates for the kernel function on these domains fall into a unified framework. When $\Omega$ is of finite type in $\mathbb C^2$ or strictly pseudoconvex in $\mathbb C^n$, the boundary geometry of $\Omega$ is relatively straightforward. The quasi-metric induced by special coordinates systems in \cite{McNeal1} and \cite{McNeal2003} is essentially the same as the sub-Riemannian metric.
It is worth mentioning that estimates expressed using some quasi-metrics for the Bergman kernel function are known in other settings. See for example \cite{CD,Koenig,Raich}.
\subsection{Balls on the boundaries of domains of finite type in $\mathbb C^2$ or strictly pseudoconvex domains in $\mathbb C^n$} Let $\Omega$ be a bounded domain in $\mathbb C^n$ with $C^\infty$-smooth boundary. A defining function $\rho$ of $\Omega$ is a real-valued $C^\infty$ function on $\mathbb C^n$ with the following properties:
\begin{enumerate}
\item $\rho(z)<0$ for all $z\in \Omega$ and $\rho(z)>0$ for all $z\notin \Omega$.
\item $\partial \rho(z)\neq 0$ when $\rho(z)=0$.
\end{enumerate}
Such a $\rho$ can be constructed, for instance, using the Euclidean distance between the point $z$ and $\mathbf b\Omega$, the boundary of $\Omega$. One can also normalize $\rho$ so that $|\partial \rho|=1$. Let $T(\mathbf b\Omega)$ denote the tangent bundle of $\mathbf b\Omega$ and $\mathbb CT(\mathbf b\Omega)=T(\mathbf b\Omega)\otimes\mathbb C$ its complexification. Let $T^{1,0}(\mathbf b\Omega)$ denote the subbundle of $\mathbb CT(\mathbf b\Omega)$ whose sections are linear combinations of ${\partial}/{\partial z_j}$, and $T^{0,1}(\mathbf b\Omega)$ be its complex conjugate bundle. Their sum $H(\mathbf b\Omega):=T^{1,0}(\mathbf b\Omega)+T^{0,1}(\mathbf b\Omega)$ is a bundle of real codimension one in the complex tangent bundle $\mathbb CT(\mathbf b\Omega)$. Let $\langle\cdot,\cdot\rangle$ denote the contraction of the one forms and vector fields, and let $[\cdot,\cdot]$ denote the Lie bracket of two vector fields.
Let $\lambda$ denote the Levi form, the Hermitian form on $T^{1,0}(\mathbf b\Omega)$ defined by
$$\lambda (L,\bar K)=\langle \frac{1}{2}(\partial-\bar \partial)\rho,[L,\bar K]\rangle \;\;\;\text{ for }\;\;L,K\in T^{1,0}(\mathbf b\Omega).$$ By the Cartan formula for the exterior derivative of a one form, one obtains
\begin{align*}
\lambda(L,\bar K)=\left\langle -d \left(\frac{1}{2}(\partial-\bar \partial)\rho\right),L\wedge\bar K\right\rangle=\langle\partial\bar \partial\rho, L\wedge\bar K\rangle.
\end{align*}
Hence, the Levi form is the complex Hessian $(\rho_{i\bar j})$ of $\rho$, restricted to $T^{1,0}(\mathbf b\Omega)$.
The domain $\Omega$ is called pseudoconvex (resp. strictly pseudoconvex) if $\lambda$ is positive semidefinite (resp. definite), i.e., the complex Hessian $(\rho_{i\bar j})$ is positive semidefinite (resp. definite) when restricted to $T^{1,0}(\mathbf b\Omega)$.
Given $L\in T^{1,0}(\mathbf b\Omega)$, we say the type of $L$ at a point $p\in \mathbf b\Omega$ is $k$ and write $\text{Type}_pL=k$ if
$k$ is the smallest integer such that there is an iterated commutator
$$[\dots[[L_1,L_2],L_3],\dots,L_k]=\Psi_k,$$
where each $L_j$ is either $L$ or $\bar L$ such that $\langle \Psi_k,(\partial-\bar \partial)\rho\rangle\neq 0$.
When $\Omega\subseteq \mathbb C^2$, the subbundle $T^{1,0}(\mathbf b\Omega)$ has dimension one at each boundary point $p$ and $\text{Type}_pL$ defines the type of the point $p$: A point $q\in \mathbf b\Omega$ is of finite type $m$ in the sense of Kohn \cite{Kohn} if
$\text{Type}_qL=m$. A domain is of Kohn finite type $m$ if every point $q\in \mathbf b\Omega$ is of Kohn finite type at most $m$. In the $\mathbb C^2$ case, Kohn's type and D'Angelo's 1-type are equivalent. See \cite{D'Angelo} for the proof.
When $\Omega$ is strictly pseudoconvex, the Levi form $\lambda$ is positive definite. Thus for every $L\in T^{1,0}(\mathbf b\Omega)$ and $p \in \mathbf b\Omega$, one has that $\text{Type}_pL=2$.
Using the defining function $\rho$, a local basis of $H (\mathbf b\Omega)$ can be chosen as follows. Let $p\in \mathbf b\Omega$ be a boundary point. We may assume that, after a unitary rotation, ${\partial}\rho(p)= dz_n$. Then there is a neighborhood $U$ of $p$ such that $\frac{\partial\rho}{\partial z_n}\neq0$ on $U$. We define $n-1$ local tangent vector fields on $\mathbf b\Omega\cap U$:
\[L_j=\rho_{z_n}\frac{\partial}{\partial z_j}-{\rho_{z_j}}{}\frac{\partial}{\partial z_n}\;\;\;\;\; \;\;\;\;\; j=1,2,3\dots, n-1;\]
and their conjugates:
\[\bar L_j=\rho_{\bar z_n}\frac{\partial}{\partial \bar z_j}-{\rho_{\bar z_j}}\frac{\partial}{\partial \bar z_n}\;\; \;\;\;\;\;\;\;\;j=1,2,3\dots, n-1.\]
Then the $L_j$'s span $T^{1,0}(\mathbf b\Omega)$ and the $\bar L_j$'s span $T^{0,1}(\mathbf b\Omega)$.
We set \[S=\sum_{j=1}^{n}{\rho_{\bar z_j}}\frac{\partial }{\partial z_j}\;\;\; \text{ and } \;\; \;T=S-\bar S.\]
Then the $L_j$'s, $\bar L_j$'s and $T$ span $\mathbb CT(\mathbf b\Omega)$.
Let $X_j$, $X_{n-1+j}$ be real vector fields such that \[L_j=X_j-iX_{n-1+j}\] for $j=1,\dots, n-1$. Then the $X_j$'s and $T$ span the real tangent space of $\mathbf b\Omega$ near the point $p$.
For every $k$-tuple of integers $l^{(k)}=(l_1,\dots,l_k)$ with $k\geq 2$ and $l_j\in\{1,\dots, 2n-2\}$, we define $\lambda_{l^{(k)}}$ to be the smooth function such that
\[[X_{l_k},[X_{l_{k-1}},[\dots[X_{l_2},X_{l_1}]\dots]]]=\lambda_{l^{(k)}}T\;\;\text{ mod }\;X_1,\dots, X_{2n-2},\]
and define $\Lambda_k$ to be the smooth function
\[\Lambda_{k}(q)=\left(\sum_{\text{all }l^{(k)}}|\lambda_{l^{(k)}}(q)|^2\right)^{{1}/{2}}.\]
For $q\in U$ and $\delta>0$, we set \begin{align}\label{2.2}\Lambda(q,\delta)=\sum_{j=2}^{m}\Lambda_j(q)\delta^j.\end{align}
In the $\mathbb C^2$ case, a point $q\in \mathbf b\Omega$ is of finite type $m$ if and only if $\Lambda_2(q)=\cdots=\Lambda_{m-1}(q)=0$ but $\Lambda_m(q)\neq 0$. When $\Omega$ is strictly pseudoconvex in $\mathbb C^n$, the function $\Lambda_2\neq 0$ on $\mathbf b\Omega$.
Though the function $\Lambda$ is locally defined, one can construct a global $\Lambda$ that is defined on $\mathbf b{\Omega}$ and is comparable to its every local piece. In the finite type in $\mathbb C^2$ case and the strictly pseudoconvex case, a global construction can be realized without using partitions of unity. We explain this now. When $\Omega$ is strictly pseudoconvex, $\Lambda_2$ does not vanish on the boundary of $\Omega$, therefore we can simply set $\Lambda(q,\delta)=\delta^2$. When $\Omega$ is of finite type in $\mathbb C^2$, global tangent vector fields $L_1$ and $S$ can be chosen on $\mathbf b\Omega$:
\begin{align*}
&L_1={\rho_{z_2}}\frac{\partial}{\partial z_1}-{\rho_{z_1}}\frac{\partial}{\partial z_2},\\
&S={\rho_{\bar z_1}}\frac{\partial}{\partial z_1}+{\rho_{\bar z_2}}\frac{\partial}{\partial z_2}.
\end{align*}
Then the function $\Lambda$ induced by the above $L_1$ and $S$ is a smooth function defined on $\mathbf b\Omega$. From now on, we choose $\Lambda$ to be the smooth function induced by $L_1$ and $S$ on $\mathbf b\Omega$ when $\Omega$ is of finite type in $\mathbb C^2$, and choose $\Lambda(q,\delta)=\delta^2$ when $\Omega$ is strictly pseudoconvex in $\mathbb C^n$.
We recall several non-isotropic metrics on $\mathbf b\Omega$ that are locally equivalent:
\begin{de}
For $p,q\in \mathbf b\Omega$, the metric $d_{1}(\cdot,\cdot)$ is defined by:
\begin{align}
d_{1}(p,q)=\inf\Big\{& \int_{0}^{1}|\alpha^\prime(t)|dt: \alpha \text{ is any piecewise smooth map from }[0,1] \text{ to } \mathbf b \Omega\nonumber\\&\text{ with } \alpha(0)=p, \alpha(1)=q, \text{ and } \alpha^\prime(t)\in H_{\alpha(t)}(\mathbf b\Omega)\Big\}.
\end{align}
Equipped with the metric $d_1$, we define the ball $B_1$ centered at $p\in \mathbf b\Omega$ of radius $r$ to be \begin{align}B_{1}(p,r)=\{q\in \mathbf b\Omega:d_{1}(p,q)<r\}.\end{align}
\end{de}
\begin{de}
For $p,q\in \mathbf b\Omega$, the metric $d_{2}(\cdot,\cdot)$ is defined by:
\begin{align}
d_{2}(p,q)=\inf\Big\{& \delta: \text{ There is a piecewise smooth map } \alpha \text{ from } [0,1] \text{ to } \mathbf b \Omega\nonumber\\&\text{ with } \alpha(0)=p, \alpha(1)=q, \alpha^\prime(t)=\sum_{j=1}^{2n-2}a_j(t)X_j(\alpha(t)),\text{ and } |a_j(t)|<\delta\Big\}.
\end{align}
Equipped with the metric $d_2$, we define the ball $B_2$ centered at $p\in \mathbf b\Omega$ of radius $r$ to be \begin{align}B_{2}(p,r)=\{q\in \mathbf b\Omega:d_{2}(p,q)<r\}.\end{align}
\end{de}
\begin{de}
For $p,q\in \mathbf b\Omega$, the metric $d_{3}(\cdot,\cdot)$ is defined by:
\begin{align}
d_{3}(p,q)=\inf\Big\{& \delta: \text{ There is a piecewise smooth map } \alpha \text{ from } [0,1] \text{ to } \mathbf b \Omega \text{ with }\nonumber\\& \alpha(0)=p, \alpha(1)=q, \text{ and } \alpha^\prime(t)=\sum_{j=1}^{2n-2}a_j(t)X_j(\alpha(t))+b(t)T(\alpha(t)),\nonumber\\&\text{where } |a_j(t)|<\delta, |b(t)|<\Lambda(p,\delta)\Big\}.
\end{align}
Equipped with the metric $d_3$, we define the ball $B_3$ centered at $p\in \mathbf b\Omega$ of radius $r$ to be \begin{align}B_{3}(p,r)=\{q\in \mathbf b\Omega:d_{3}(p,q)<r\}.\end{align}
\end{de}
It is known that when the domain is strictly pseudoconvex in $\mathbb C^n$, or of finite type in $\mathbb C^2$, the quasi-metrics $d_1$, $d_2$, and $d_3$ are locally equivalent (cf. \cite{NSW1,NSW,Nagel}), i.e. there are positive constants $C_1$, $C_2$ and $\delta$ so that when $d_i(p,q)<\delta$ with $i\in\{1,2,3\}$, \[C_1d_j(p,q)<d_i(p,q)<C_2d_j(p,q) \;\;\;\text{ for } j\in\{1,2,3\}.\]
As a consequence, the balls $B_j$ are also equivalent in the sense that for small $\delta$, there are positive constants $C_1$, $C_2$ such that \[B_i(p,C_1\delta)\subseteq B_j(p,\delta)\subseteq B_i(p,C_2\delta) \;\;\;\text{ for } i,j\in\{1,2,3\}.\]
It is worth noting that the definition of $d_1(\cdot,\cdot)$ does not depend on how we choose the local vector fields. Moreover, if $d_1(p,q)<\delta$, then for some positive constants $C_1,C_2$, \begin{align}\label{2.81}C_1\Lambda(p,\delta)\leq\Lambda(q,\delta)\leq C_2\Lambda(p,\delta).\end{align}
To introduce the Ball-Box Theorem below, we also need to define balls using the exponential map.
\begin{de}
For $q\in \mathbf b\Omega$ and $\delta>0$, set
\[B_4(q,\delta)=\left\{p\in \mathbf b\Omega:p=\exp\left(\sum_{j=1}^{2n-2}a_jX_j(q)+bT(q)\right), \text{ where }|a_j|<\delta, \text{ and }|b|<\Lambda(q,\delta)\right\}.\]
\end{de}
\begin{thm}[Ball-Box Theorem] \label{thm 2.5}
There exist positive constants $C_1, C_2$ such that for any $q\in\mathbf b\Omega$ and any sufficiently small $\delta>0$, \[B_j(q,C_1\delta)\subseteq B_4(q,\delta)\subseteq B_j(q,C_2\delta) \;\;\text{ for } j\in\{1,2,3\} .\]
\end{thm}
The proof of this theorem can be found in for example \cite{NSW,Bellaiche,Gromov1,Montgomery}. Variants of the Ball-Box Theorem also exist in the literature. The following version of the Ball-Box Theorem is a consequence of Theorem \ref{thm 2.5} and can be found in \cite{BaloghBonk}.
\begin{co}[Ball-Box Theorem] \label{Cor2.7} Let $\Omega$ be a smooth, bounded, strictly pseudoconvex domain.
There exist positive constants $C_1$, $C_2$ such that for any $q\in\mathbf b\Omega$ and any sufficiently small $\delta>0$, \[\text{Box}(q,C_1\delta)\subseteq B_j(q,\delta)\subseteq \text{Box}(q,C_2\delta) \;\;\text{ for } j\in\{1,2,3,4\}.\]
Here $\text{Box}(q,\delta)=\{q+Z_H+Z_N\in \mathbf b\Omega: |Z_H|<\delta, |Z_N|<\delta^2\}$ where $Z_{H}\in H_q(\mathbf b\Omega)$ and $Z_{N}$ is orthogonal to $H_q(\mathbf b\Omega)$.
\end{co}
We will only use this corollary for the strictly pseudoconvex case. See for example \cite{BaloghBonk}.
The next lemma provides estimates for the surface volume of $B_4$, and hence also for $B_j$ with $j\in\{1,2,3\}$. See for example \cite{NSW}.
\begin{lem}\label{lem 2.7}
Let $\mu$ denote the Lebesgue surface measure on $\mathbf b\Omega$. Then there exist constants $C_1,C_2>0$ such that
\begin{align}
C_1 \delta^{2n-2}\Lambda(p,\delta)\leq\mu (B_{4}(p,\delta))\leq C_2 \delta^{2n-2}\Lambda(p,\delta).
\end{align}
\end{lem}
As a consequence of the definitions of $d_1$ and $\Lambda$ and Lemma \ref{lem 2.7}, we have the ``doubling measure property'' for the non-isotropic ball: There exists a positive constant $C$ such that for each $p\in \mathbf b\Omega$ and $\delta>0$,
\begin{align}\label{2.8}
\mu(B_{j}(p,\delta))\leq C\mu(B_{j}(p,\delta/2))\;\; \text{ for } j\in\{1,2,3,4\}.
\end{align}
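For orientation, in the strictly pseudoconvex case one takes $\Lambda(p,\delta)=\delta^2$, so Lemma \ref{lem 2.7} specializes to
\[
\mu(B_{4}(p,\delta))\approx \delta^{2n-2}\cdot\delta^{2}=\delta^{2n},
\]
matching the description of $\text{Box}(q,\delta)$ in Corollary \ref{Cor2.7}: $2n-2$ complex-tangential directions of size $\delta$ and one remaining tangential direction of size $\delta^{2}$.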
\subsection{Balls on the boundary of a convex/decoupled domain of finite type}
When $\Omega$ is a convex/decoupled domain of finite type in $\mathbb C^n$, non-isotropic sets can be constructed using a special coordinate system of McNeal \cite{McNeal2,McNeal91,McNeal2003} near the boundary of $\Omega$. Let $p\in \mathbf b\Omega$ be a point of finite type $m$. For a small neighborhood $U$ of the point $p$, there exists a holomorphic coordinate system $z=(z_1,\dots,z_n)$ centered at a point $q\in U$ and defined on $U$ and quantities $\tau_1(q,\delta), \tau_2(q,\delta),\dots, \tau_n(q,\delta)$ such that
\begin{align}\label{2.10}\;\;\;\;\;\;\tau_1(q,\delta)=\delta\;\;\; \text{ and }\;\;\;\delta^{1/2}\lesssim\tau_j(q,\delta)\lesssim\delta^{1/m}\;\;\text{ for }\;\; j=2,3,\dots,n.\end{align}
Moreover, the polydisc $D(q,\delta)$ defined by:
\begin{align}
D(q,\delta)=\{z\in\mathbb C^n:|z_j|<\tau_j(q,\delta),j=1,\dots,n\}
\end{align}
is the largest one centered at $q$ on which the defining function $\rho$ changes by no more than $\delta$ from its value at $q$, i.e. if $z\in D(q,\delta)$, then $|\rho(z)-\rho(q)|\lesssim \delta$.
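A standard model example, stated only for orientation: for the convex domain $\Omega=\{z\in\mathbb C^2:\operatorname{Re}z_1+|z_2|^{2k}<0\}$, whose boundary point $0$ is of type $m=2k$, one may take
\[
\tau_1(0,\delta)=\delta\;\;\; \text{ and }\;\;\;\tau_2(0,\delta)\approx\delta^{1/2k},
\]
since $|z_2|<\delta^{1/2k}$ describes the largest disc on which the term $|z_2|^{2k}$ changes by no more than $\delta$; this saturates the upper bound $\tau_j\lesssim\delta^{1/m}$ in (\ref{2.10}).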
The polydisc $D(q,\delta)$ is known to satisfy several ``covering properties'' \cite{McNeal3}: \begin{enumerate}\item There exists a constant $C>0$, such that for points $q_1,q_2\in U\cap \Omega$ with $D(q_1,\delta)\cap D(q_2,\delta)\neq \emptyset$, we have
\begin{align}\label{2.11}
D(q_2,\delta)\subseteq CD(q_1,\delta) \text{ and } D(q_1,\delta)\subseteq CD(q_2,\delta).
\end{align}
\item There exists a constant $c>0$ such that for $q\in U\cap \Omega$ and $\delta>0$, we have \begin{align}\label{2.12}D(q,2\delta)\subseteq cD(q,\delta).\end{align}\end{enumerate}
It was also shown in \cite{McNeal3} that $D(p,\delta)$ induces a global quasi-metric on $\Omega$. Here we will use it to define a quasi-metric on $\mathbf b\Omega$.
For $q\in \mathbf b\Omega$ and $\delta>0$, we define the non-isotropic ball of radius $\delta$ to be the set
\[ B_5(q,\delta)={D(q,\delta)}\cap\mathbf b\Omega.\]
Set containments (\ref{2.11}), (\ref{2.12}), and the compactness and smoothness of $\mathbf b\Omega$ imply the following properties for $B_5$:
\begin{enumerate}\item There exists a constant $C$ such that for $q_1,q_2\in U\cap \mathbf b\Omega$ with $B_5(q_1,\delta)\cap B_5(q_2,\delta)\neq \emptyset$, \begin{align}\label{2.14}
B_5(q_2,\delta)\subseteq CB_5(q_1,\delta) \text{ and } B_5(q_1,\delta)\subseteq CB_5(q_2,\delta).
\end{align} \item There exists a constant $c>0$ such that for $q\in U\cap \Omega$ and $\delta>0$, we have \begin{align}\label{2.15}B_5(q,\delta)\subseteq cB_5(q,\delta/2)\;\;\;\;\;\text{ and }\;\;\;\;\;\mu(B_5(q,\delta))\approx \prod_{j=2}^{n}\tau_j^2(q,\delta).\end{align} \end{enumerate}
The balls $B_5$ induce a quasi-metric on $\mathbf b\Omega\cap U$. For $q,p\in \mathbf b \Omega\cap U$, we set
$\tilde d_5(q,p)=\inf\{\delta>0:p\in B_5(q,\delta)\}.$ To extend this quasi-metric $\tilde d_5(\cdot,\cdot)$ to a global quasi-metric $d_5(\cdot,\cdot)$ on $\mathbf b\Omega\times\mathbf b\Omega$, one patches the local quasi-metrics together in an appropriate way. The resulting quasi-metric is not continuous, but it satisfies all the properties we need; we refer the reader to \cite{McNeal3} for the details. Since $d_5(\cdot,\cdot)$ and $\tilde d_5(\cdot,\cdot)$ are equivalent, we abuse notation and also write $B_5$ for the ball on the boundary induced by $d_5$. Then (\ref{2.14}) and (\ref{2.15}) still hold true for $B_5$.
\section{Tents and dyadic structures on $\Omega$}
From now on, the domain $\Omega$ will belong to one of the following cases:
\begin{itemize}
\item a bounded, smooth, pseudoconvex domain of finite type in $\mathbb C^2$, \item a bounded, smooth, strictly pseudoconvex domain in $\mathbb C^n$,
\item a bounded, smooth, convex domain of finite type in $\mathbb C^n$, or
\item a bounded, smooth, decoupled domain of finite type in $\mathbb C^n$.
\end{itemize}
Notations $d(\cdot,\cdot)$ and $B(p,\delta)$ will be used for \begin{itemize}
\item the metric $d_1(\cdot,\cdot)$ and the ball $B_1(p,\delta)$ if $\Omega$ is pseudoconvex of finite type in $\mathbb C^2$ or strictly pseudoconvex in $\mathbb C^n$;
\item the metric $d_5(\cdot,\cdot)$ and the ball $B_5(p,\delta)$ if $\Omega$ is a convex/decoupled domain of finite type.
\end{itemize}
\begin{rmk}It is worth noting that even though we use the same notation $B(p,\delta)$ for balls on the boundary of $\Omega$, the constant $\delta$ has different geometric meanings in the different settings. When $\Omega$ is a bounded, smooth, pseudoconvex domain of finite type in $\mathbb C^2$, or a bounded, smooth, strictly pseudoconvex domain in $\mathbb C^n$, $\delta$ represents the radius of the sub-Riemannian ball $B_1(p,\delta)$. When $\Omega$ is a bounded, smooth, convex (or decoupled) domain of finite type in $\mathbb C^n$, $2\delta$ is the height in the $z_1$ coordinate of the polydisc $D(q,\delta)$ that defines $B_5(q,\delta)$. If $\Omega$ is the unit ball $\mathbb B_n$, which is strictly pseudoconvex, convex, and decoupled, the ball $B_1(q,\sqrt{\delta})$ is of comparable size to the ball $B_5(q,\delta)$.\end{rmk}
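For orientation, the relevant quantities for the unit ball can be computed explicitly (a standard computation, not used later). For $\Omega=\mathbb B_n$ with $\rho(z)=|z|^2-1$ and $q\in\mathbf b\mathbb B_n$, nondegeneracy of the Levi form gives
\[
\Lambda(q,\epsilon)\approx\epsilon^2,\qquad\tau_1(q,\delta)=\delta,\qquad \tau_j(q,\delta)\approx\delta^{1/2}\quad(2\leq j\leq n).
\]
Thus $B_1(q,\sqrt{\delta})$ and $B_5(q,\delta)$ both have extent $\approx\sqrt{\delta}$ in the complex-tangential directions and extent $\approx\delta$ in the remaining real-tangential direction.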
\subsection{Dyadic tents on $\Omega$ and the $\mathcal B_p(\sigma)$ constant} The non-isotropic ball $B(p,\delta)$ on the boundary $\mathbf b\Omega$ induces ``tents'' in the domain $\Omega$.
To define what ``tents'' are we need the orthogonal projection map near the boundary. Let $\operatorname{dist}(\cdot,\cdot)$ denote the Euclidean distance in $\mathbb C^n$. For small $\epsilon>0$, set \begin{align*}&N_{\epsilon}(\mathbf b\Omega)=\{w\in \mathbb C^n: \operatorname{dist}(w,\mathbf b\Omega)<\epsilon\}.\end{align*}
\begin{lem}\label{lem3.2}
For sufficiently small $\epsilon_0>0$, there exists a map $\pi:N_{\epsilon_0}(\mathbf b\Omega)\to \mathbf b\Omega$ such that
\begin{enumerate}[label=\textnormal{(\arabic*)}]
\item For each point $z\in N_{\epsilon_0}(\mathbf b\Omega)$ there exists a unique point $\pi(z)\in \mathbf b\Omega$ such that \[|z-\pi(z)|=\operatorname{dist}(z,\mathbf b\Omega).\]
\item For $p\in \mathbf b\Omega$, the fiber $\pi^{-1}(p)=\{p-\epsilon n(p): -\epsilon_0\leq \epsilon<\epsilon_0\}$ where $n(p)$ is the outer unit normal vector of $\mathbf b\Omega$ at point $p$.
\item The map $\pi$ is smooth on $N_{\epsilon_0}(\mathbf b\Omega)$.
\item If the defining function $\rho$ is the signed distance function to the boundary, then the gradient $\nabla\rho$ satisfies \[\nabla\rho(z)=n(\pi(z)) \;\text{ for } \;z\in N_{\epsilon_0}(\mathbf b\Omega).\]
\end{enumerate}
\end{lem}
A proof of Lemma \ref{lem3.2} can be found in \cite{BaloghBonk}.
\begin{de}\label{de3.3}
Let $\epsilon_0$ and $\pi$ be as in Lemma \ref{lem3.2}. For $z\in \mathbf b\Omega$ and sufficiently small $\delta>0$, the ``tent'' $B^\#(z,\delta)$ over the ball $B(z,\delta)$ is defined to be the following subset of $N_{\epsilon_0}(\mathbf b\Omega)$:
When $\Omega$ is a pseudoconvex domain of finite type in $\mathbb C^2$ or a strictly pseudoconvex domain,
\[B^\#(z,\delta)=B_1^\#(z,\delta)=\{w\in \Omega: \pi(w)\in B_1(z,\delta), |\pi(w)-w|\leq \Lambda (\pi(w),\delta)\}.\]
When $\Omega$ is a convex (or decoupled) domain of finite type in $\mathbb C^n$,
\[B^\#(z,\delta)=B_5^\#(z,\delta)=\{w\in \Omega: \pi(w)\in B_5(z,\delta), |\pi(w)-w|\leq \delta\}.\]
For $\delta\gtrsim 1$ and any $z\in \mathbf b\Omega$, we set $B^\#(z,\delta)=\Omega$.
\end{de}
For the ``tent'' $B^\#(z,\delta)$ to be within $N_{\epsilon_0}(\mathbf b\Omega)$, the constant $\delta$ in Definition \ref{de3.3} needs to satisfy $\Lambda(z^\prime,\delta)<\epsilon_0$ for $z^\prime\in B_1(z,\delta)$ when $\Omega$ is of finite type in $\mathbb C^2$ or strictly pseudoconvex; and satisfy $\delta<\epsilon_0$ when $\Omega$ is a convex (or decoupled) domain in $\mathbb C^n$.
Given a subset $U\subseteq \mathbb C^n$, let $V(U)$ denote the Lebesgue measure of $U$. By (\ref{2.81}) and the definitions of the tents $B^\#_1(z,\delta)$ and $B^\#_5(z,\delta)$, we have:
\begin{align}\label{3.1}
&V(B_1^\#(z,\delta))\approx \delta^{2n-2}\Lambda^2(z,\delta),\\\label{3.2}&V(B_5^\#(z,\delta))\approx \delta^{2}\prod_{j=2}^{n}\tau^2_j(z,\delta),
\end{align}
and hence also the ``doubling property'':
\begin{align}\label{3.3}
V(B^\#(z,\delta))\approx V(B^\#(z,\delta/2)).
\end{align} We give the definition of the Bekoll\'e-Bonami constant on $\Omega$.
For a weight $\sigma$ and a subset $U\subseteq \Omega$, we set $\sigma(U):=\int_U\sigma dV$ and let $\langle f\rangle^{\sigma dV}_U$ denote the average of the function $|f|$ with respect to the measure $\sigma dV$ on the set $U$:
\begin{equation*}
\langle f\rangle^{\sigma dV}_U=\frac{\int_{U}|f(w)|\sigma dV}{\sigma(U)}.
\end{equation*}
\begin{de}\label{de3.4}
Let $\sigma$ be a weight on $\Omega$ and set $\nu:=\sigma^{-p^\prime/p}$. The characteristic $ [\sigma]_p$ of the weight $\sigma$ is defined by
\begin{equation}\label{3.40}
[\sigma]_p:=\left(\langle\sigma \rangle^{dV}_{\Omega}\left(\langle \nu\rangle^{dV}_{\Omega}\right)^{p-1}\right)^{1/p}+pp^\prime\left(\sup_{\epsilon_0>\delta>0, z\in \mathbf b\Omega}\langle\sigma \rangle^{dV}_{B^\#(z,\delta)}\left(\langle \nu\rangle^{dV}_{B^\#(z,\delta)}\right)^{p-1}\right)^{\max \{1,\frac{1}{p-1}\}}.
\end{equation}
\end{de}
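The following classical example indicates what Definition \ref{de3.4} measures; it is stated for the unit disc, which is formally outside the settings above but is the model case, and it is not used later. For $\sigma_t(z)=(1-|z|^2)^t$ on $\mathbb D$, a tent $B^\#(z,\delta)$ has height $h\approx\Lambda(z,\delta)\approx\delta^2$, and on it the quantity $1-|w|^2$ is comparable to the distance to the boundary, which ranges up to $\approx h$. For $t>-1$ and $tp^\prime/p<1$ one computes
\begin{align*}
\langle\sigma_t\rangle^{dV}_{B^\#(z,\delta)}\approx h^{t},\qquad
\langle\sigma_t^{-p^\prime/p}\rangle^{dV}_{B^\#(z,\delta)}\approx h^{-tp^\prime/p},
\end{align*}
so that $\langle\sigma_t\rangle^{dV}_{B^\#(z,\delta)}\big(\langle\nu\rangle^{dV}_{B^\#(z,\delta)}\big)^{p-1}\approx h^{t}\cdot h^{-t}=1$ uniformly in $z$ and $\delta$. Hence $[\sigma_t]_p<\infty$ precisely for the classical Bekoll\'e--Bonami range $-1<t<p-1$.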
\begin{rmk}\label{Re3.5}
A natural generalization of the $\mathcal B_p$ constant in the above setting will be \[\mathcal B_p(\sigma)=\max\left\{\langle\sigma \rangle^{dV}_{\Omega}\left(\langle \nu\rangle^{dV}_{\Omega}\right)^{p-1},\sup_{\epsilon_0>\delta>0, z\in \mathbf b\Omega}\langle\sigma \rangle^{dV}_{B^\#(z,\delta)}\left(\langle \nu\rangle^{dV}_{B^\#(z,\delta)}\right)^{p-1}\right\}.\]
It is not hard to see that $\mathcal B_p(\sigma)$ and $[\sigma]_p$ are qualitatively equivalent, i.e., $\mathcal B_p(\sigma)$ is finite if and only if $[\sigma]_p$ is finite. But they are not quantitatively equivalent. As one will see in the proof of Theorem \ref{t:main}, the products of averages of $\sigma$ and $\sigma^{1/(1-p)}$ over the whole domain and over the small tents will have different impacts on the estimate for the weighted norm of the projection $P$. The $\mathcal B_p(\sigma)$ above fails to reflect such a difference, and hence is unable to give the sharp upper bound. For the same reason, the claimed sharpness of the Bekoll\'e-Bonami bound in \cite{Rahm} is not quite correct. See Remark 6.1. This issue did not occur in the upper half plane case \cite{Pott} since the average over the whole upper half plane is not included in the $\mathcal B_p$ constant there.
\end{rmk}
We are now in a position to construct dyadic systems on $\mathbf b\Omega$ and $\Omega$.
Note that the ball $B(\cdot,\delta)$ on $\mathbf b\Omega$ satisfies the ``doubling property'' as in (\ref{2.8}) and (\ref{2.15}). By (\ref{2.81}) and (\ref{2.11}), the surface area $\mu(B(q_1,\delta))\approx \mu(B(q_2,\delta))$ for any $q_1,q_2\in \mathbf b\Omega$ satisfying $d(q_1,q_2)\leq \delta$. Combining these facts yields that the metric $d(\cdot,\cdot)$ is a doubling metric, i.e. for every $q\in \mathbf b\Omega$ and $\delta>0$, the ball $B(q,\delta)$ can be covered by at most $M$ balls $B(x_i,\delta/2)$. Results of Hyt\"onen and Kairema in \cite{HK} then give the following lemmas:
\begin{lem}\label{lem3.5}Let $\delta$ be a positive constant that is sufficiently small and let $s>1$ be a parameter.
There exist reference points $\{p_j^{(k)}\}$ on the boundary $\mathbf b\Omega$ and an associated collection of subsets $\mathcal Q=\{Q_j^{k}\}$ of $\mathbf b\Omega$ with $p_j^{(k)}\in Q_j^{k}$ such that the following properties hold:
\begin{enumerate}[label=\textnormal{(\arabic*)}]
\item For each fixed $k$, $\{p_j^{(k)}\}$ is a maximal set of points on $\mathbf b\Omega$ satisfying $d(p_j^{(k)},p_i^{(k)})> s^{-k}\delta$ for all $i\neq j$. In other words, if $p\in \mathbf b\Omega$ is a point that is not in $\{p_j^{(k)}\}$, then there exists an index $j_o$ such that $d(p,p_{j_o}^{(k)})\leq s^{-k}\delta$.
\item For each fixed $k$, $\bigcup_j Q^k_j=\mathbf b\Omega$ and $Q^k_j\bigcap Q^k_i=\emptyset$ when $i\neq j$.
\item For $k< l$ and any $i,j$, either $Q^k_j\supseteq Q^l_i$ or $Q^k_j\bigcap Q^l_i=\emptyset$.
\item There exist positive constants $c$ and $C$ such that for all $j$ and $k$, \[B(p_j^{(k)},cs^{-k}\delta)\subseteq Q^k_j\subseteq B(p_j^{(k)},Cs^{-k}\delta).\]
\item Each $Q_j^k$ contains at most $N$ of the sets $Q^{k+1}_i$, where $N$ does not depend on $k$ or $j$.
\end{enumerate}
\end{lem}
\begin{lem}\label{lem3.6}
Let $\delta$ and $\{p^{(k)}_j\}$ be as in Lemma \ref{lem3.5}. There are finitely many collections $\{\mathcal Q_l\}_{l=1}^{N}$ such that the following hold:\begin{enumerate}[label=\textnormal{(\arabic*)}]\item Each collection $\mathcal Q_l$ is associated to some dyadic points $\{z^{(k)}_j\}$ and they satisfy all the properties in Lemma \ref{lem3.5}.\item For any $z\in \mathbf b\Omega$ and small $r>0$, there exist $Q_{j_1}^{k_1}\in \mathcal Q_{l_1}$ and $Q_{j_2}^{k_2}\in \mathcal Q_{l_2}$ such that
\[Q_{j_1}^{k_1}\subseteq B(z,r)\subseteq Q_{j_2}^{k_2}\;\;\;\text{ and }\;\;\;\mu(B(z,r))\approx\mu(Q_{j_1}^{k_1})\approx\mu(Q_{j_2}^{k_2}).\]\end{enumerate}
\end{lem}
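The need for finitely many adjacent systems in Lemma \ref{lem3.6} is already visible in the classical one-dimensional situation (recalled only for orientation). Every interval of the standard dyadic grid
\[
\mathcal D=\big\{[2^{-k}j,2^{-k}(j+1)):j,k\in\mathbb Z\big\}
\]
lies entirely in $[0,\infty)$ or entirely in $(-\infty,0)$, so no ball $(-r,r)$ centered at the origin is contained in a dyadic interval of comparable length, at any scale $r$. A single fixed translate of the grid (the so-called one-third trick) repairs this, and in general finitely many shifted grids suffice; property (2) in Lemma \ref{lem3.6} is the analogue of this fact on $\mathbf b\Omega$.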
Setting the sets $Q_j^k$ in Lemma \ref{lem3.5} as the bases, we construct dyadic tents in $\Omega$ as follows:
\begin{de}\label{de3.7}
Let $\delta$, $\{p^{(k)}_j\}$ and $\mathcal Q=\{Q_j^{k}\}$ be as in Lemma \ref{lem3.5}. We define the collection $ {\mathcal T}=\{\hat {K}_j^{k}\}$ of dyadic tents in the domain $\Omega$ as follows:
\begin{itemize}
\item When $\Omega$ is pseudoconvex of finite type in $\mathbb C^2$, or strictly pseudoconvex in $\mathbb C^n$, we define
\[\hat {K}_j^{k}:=\{z\in\Omega: \pi(z)\in Q_j^k \text{ and }|\pi(z)-z|<\Lambda(\pi(z),s^{-k}\delta)\}.\]
\item When $\Omega$ is a convex or decoupled domain of finite type in $\mathbb C^n$, we define
\[\hat {K}_j^{k}:=\{z\in\Omega: \pi(z)\in Q_j^k \text{ and }|\pi(z)-z|<s^{-k}\delta\}.\]
\end{itemize}
\end{de}
\begin{lem}\label{lem3.8}Let $\mathcal T=\{\hat K^k_j\}$ be a collection of dyadic tents as in Definition \ref{de3.7} and let $\{\mathcal Q_l\}_{l=1}^{N}$ be the collections of Lemma \ref{lem3.6}, with induced tent collections $\{\mathcal T_l\}_{l=1}^{N}$. The following statements hold true:\begin{enumerate}[label=\textnormal{(\arabic*)}]\item For any $\hat {K}_j^{k}$, $\hat {K}_i^{k+1}$ in $\mathcal T$, either $\hat {K}_j^{k}\supseteq\hat {K}_i^{k+1}$ or $\hat {K}_j^{k}\bigcap\hat {K}_i^{k+1}=\emptyset$.\item For any $z\in \mathbf b\Omega$ and small $r>0$, there exist tents $\hat K_{j_1}^{k_1}\in \mathcal T_{l_1}$ and $\hat K_{j_2}^{k_2}\in \mathcal T_{l_2}$ such that
\[\hat K_{j_1}^{k_1}\subseteq B^\#(z,r)\subseteq \hat K_{j_2}^{k_2}\;\;\;\text{ and }\;\;\;V(B^\#(z,r))\approx V(\hat K_{j_1}^{k_1})\approx V(\hat K_{j_2}^{k_2}).\]\end{enumerate}\end{lem}
\begin{proof}
Statement (1) is a consequence of the definition of $\hat K^k_j$ and Lemma \ref{lem3.5}(3). Statement (2) is a consequence of the definitions of $B^\#(z,r)$, $\hat K^k_j$, and Lemma \ref{lem3.6}(2).
\end{proof}
By Lemma \ref{lem3.8}(2), we can replace $B^\#(z,\delta)$ by $\hat K^k_{j}$ in the definition of $[\sigma]_p$ to obtain a quantity of comparable size:
\begin{equation}
[\sigma]_p\approx\left(\langle\sigma \rangle^{dV}_{\Omega}\left(\langle \nu\rangle^{dV}_{\Omega}\right)^{p-1}\right)^{1/p}+pp'\left(\sup_{1\leq l\leq N}\sup_{\hat K^k_j\in \mathcal T_{l}}\langle\sigma \rangle^{dV}_{\hat K^k_j}\left(\langle \nu\rangle^{dV}_{\hat K^k_j}\right)^{p-1}\right)^{\max \{1,1/(p-1)\}}.
\end{equation}
From now on, we will abuse the notation $[\sigma]_p$ to denote both the supremum over the tents $B^\#(z,\delta)$ and the supremum over the dyadic tents $\hat K^k_j$.
\subsection{Dyadic kubes on $\Omega$} By choosing the parameter $s$ in Lemmas \ref{lem3.5} and \ref{lem3.6} to be sufficiently large, we can also assume that for any $p\in Q_i^{k+1}\subset Q_j^{k}$, one has \begin{align}
\label{3.4}
\Lambda(p,s^{-k-1}\delta)<\frac{1}{4}\Lambda(p,s^{-k}\delta).\end{align}\begin{de}\label{de3.9}For a collection $\mathcal T$ of dyadic tents, we define the center $\alpha_j^{(k)}$ of each tent $\hat {K}_j^{k}$ to be the point satisfying
\begin{itemize}
\item $\pi(\alpha_j^{(k)})=p^{(k)}_j$; and
\item $|p^{(k)}_j-\alpha_j^{(k)}|=\frac{1}{2}\sup_{\pi(p)=p^{(k)}_j}\operatorname{dist}(p,\mathbf b\Omega)$.
\end{itemize}
We set $K_{-1}:=\Omega\backslash\left(\bigcup_{j}\hat {K}_j^{0}\right)$, and for each point $\alpha_j^{(k)}$ or its corresponding tent $\hat K^k_j$, we define the dyadic ``kube''
${K}_j^{k}:=\hat {K}_j^{k}\backslash\left(\bigcup_{l}\hat {K}_l^{k+1}\right),$
where $l$ is any index with $p^{(k+1)}_l\in \hat{K}^{k}_j$. \end{de}
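The definition of the kubes is modeled on the following classical picture in the upper half-plane (an analogy only; the half-plane is not one of the domains above). For $s=2$, the tent over a dyadic interval $I\subseteq\mathbb R$ is the Carleson box and the associated kube is its top half:
\[
\hat K_I=I\times(0,|I|),\qquad K_I=\hat K_I\setminus\bigcup_{I^\prime\text{ dyadic child of }I}\hat K_{I^\prime}=I\times[|I|/2,|I|).
\]
The kubes $K_I$ are pairwise disjoint, tile the upper half-plane in a Whitney fashion, and satisfy $|K_I|=\frac{1}{2}|\hat K_I|$.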
The following lemma for dyadic kubes holds true:\begin{lem}\label{3.10} Let $\mathcal T=\{\hat K^k_j\}$ be the system of tents induced by $\mathcal Q$ in Definition \ref{de3.7}. Let $K^k_j$ be the kubes of $\hat K^k_j$. Then \begin{enumerate}[label=\textnormal{(\arabic*)}]\item The kubes $K^k_j$ are pairwise disjoint and\; $\bigcup_{j,k}K^k_j=\Omega$. \item When $\Omega$ is a finite type domain in $\mathbb C^2$ or a strictly pseudoconvex domain in $\mathbb C^n$, \begin{align}\label{3.5}V(K^k_j)\approx V(\hat K^k_j)\approx s^{-k(2n-2)}\delta^{2n-2}\Lambda^2(p^{(k)}_j,s^{-k}\delta).\end{align} When $\Omega$ is a convex or decoupled domain of finite type in $\mathbb C^n$, \begin{align}\label{3.6}V(K^k_j)\approx V(\hat K^k_j)\approx s^{-2k}\delta^{2}\prod_{i=2}^{n}\tau_i^2(p^{(k)}_j,s^{-k}\delta).\end{align}\end{enumerate}\end{lem}
\begin{proof}
Statement (1) holds true by the definition of $K^k_j$. The estimates for $V(\hat K^k_j)$ in (\ref{3.5}) and (\ref{3.6}) follow from (\ref{3.1}), (\ref{3.2}) and Lemma \ref{lem3.8}(2). When the domain is convex or decoupled of finite type in $\mathbb C^n$, the height of $\hat K_j^k$ is $s$ times the height of the set $\hat K_j^k\backslash K_j^k$. Thus $V(\hat K_j^k)\approx V(\hat K_j^k\backslash K_j^k)$, which also implies $V(\hat K_j^k)\approx V(K_j^k)$. In the finite type in $\mathbb C^2$ case and the strictly pseudoconvex case, it follows from (\ref{3.4}) that $\Lambda(p,s^{-k-1}\delta)<\frac{1}{4}\Lambda(p,s^{-k}\delta)$ for any $p\in Q_i^{k+1}\subset Q_j^{k}$. Hence the height of $\hat K^k_j$ is at least $4$ times the height of $\hat K^k_j\backslash K^k_j$. Thus $V(\hat K_j^k)\approx V(K_j^k)$.
\end{proof}
\subsection{Weighted maximal operator based on dyadic tents} \begin{de}\label{de3.12}Let $\sigma$ be a positive integrable function on $\Omega$. Let $\mathcal T_l$ be a collection of dyadic tents as in Definition \ref{de3.7}. The weighted maximal operator $\mathcal M_{\mathcal T_l,\sigma}$ is defined by
\begin{equation}
\mathcal M_{\mathcal T_l,\sigma}f(w):=\sup_{\hat K^k_j\in\mathcal T_l}\frac{1_{\hat K^k_{j}}(w)}{\sigma(\hat K^k_{j})}\int_{\hat K^k_{j}}|f(z)|\sigma(z)dV(z).
\end{equation}
\end{de}
\begin{lem}\label{lem3.12}
$\mathcal M_{\mathcal T_l,\sigma}$ is bounded on $L^p(\Omega,\sigma)$ for $1<p\leq \infty$. Moreover, \begin{align}\label{3.8}\|\mathcal M_{\mathcal T_l,\sigma}\|_{L^p(\Omega,\sigma)}\lesssim p/(p-1).\end{align}
\end{lem}
\begin{proof}
It is clear that $\mathcal M_{\mathcal T_l,\sigma}$ is bounded on $L^\infty(\Omega,\sigma)$. We claim that $\mathcal M_{\mathcal T_l,\sigma}$ is of weak-type $(1,1)$, i.e. for $f\in L^1(\Omega,\sigma)$, the following inequality holds for all $\lambda>0$:
\begin{align}\label{3.9}
\sigma(\{z\in\Omega:\mathcal M_{\mathcal T_l,\sigma}(f)(z)>\lambda\})\lesssim \frac{\|f\|_{L^1(\Omega,\sigma)}}{\lambda}.
\end{align}
Then the Marcinkiewicz Interpolation Theorem implies the boundedness of $\mathcal M_{\mathcal T_l,\sigma}$ on $L^p(\Omega,\sigma)$ for $1<p\leq \infty$, and inequality (\ref{3.8}) follows from a standard argument for the Hardy-Littlewood maximal operator.
For a point $w\in \left\{z\in\Omega:\mathcal M_{\mathcal T_l,\sigma}f(z)>\lambda\right\}$, there exists a unique maximal tent $\hat K^k_j\in \mathcal T_l$ that contains $w$ and satisfies:
\begin{equation}
\frac{1_{\hat K^k_j}(w)}{\sigma(\hat K^k_{j})}\int_{\hat K^k_{j}}|f(z)|\sigma(z)dV(z)>\frac{\lambda}{2}.
\end{equation}
Let $\mathcal I_\lambda$ be the set of all such maximal tents $\hat K^k_j$. The union of these maximal tents covers the set $\left\{z\in \Omega:\mathcal M_{\mathcal T_l,\sigma}f(z)>\lambda\right\}$. Since the tents $\hat K^k_j$ are maximal, they are also pairwise disjoint. Hence
\begin{equation*}
\sigma(\left\{z\in \Omega:\mathcal M_{\mathcal T_l,\sigma}f(z)>\lambda\right\})\leq \sum_{\hat K^k_j \in \mathcal I_\lambda}\sigma(\hat K^k_j)\leq \sum_{\hat K^k_j\in \mathcal I_\lambda}\frac{2}{\lambda}\int_{ \hat{K}^k_j}|f(z)|\sigma(z)dV(z)\leq\frac{2\|f\|_{L^1(\Omega,\sigma)}}{\lambda}.
\end{equation*}
Thus inequality (\ref{3.9}) holds and $\mathcal M_{\mathcal T_l,\sigma}$ is weak-type (1,1).
\end{proof}
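For completeness, we record the standard computation behind the bound (\ref{3.8}) (a sketch, with constants not optimized). Since $\mathcal M_{\mathcal T_l,\sigma}\big(f1_{\{|f|\leq\lambda/2\}}\big)\leq\lambda/2$ pointwise, applying (\ref{3.9}) to $f1_{\{|f|>\lambda/2\}}$ gives $\sigma(\{\mathcal M_{\mathcal T_l,\sigma}f>\lambda\})\lesssim\lambda^{-1}\int_{\{|f|>\lambda/2\}}|f|\sigma \,dV$. Hence, by the layer-cake formula and Fubini's theorem,
\begin{align*}
\|\mathcal M_{\mathcal T_l,\sigma}f\|^p_{L^p(\Omega,\sigma)}
&=p\int_0^\infty\lambda^{p-1}\,\sigma\big(\{\mathcal M_{\mathcal T_l,\sigma}f>\lambda\}\big)\,d\lambda
\lesssim p\int_0^\infty\lambda^{p-2}\int_{\{|f|>\lambda/2\}}|f|\sigma \,dV\,d\lambda\\
&=\frac{2^{p-1}p}{p-1}\,\|f\|^p_{L^p(\Omega,\sigma)},
\end{align*}
which gives $\|\mathcal M_{\mathcal T_l,\sigma}\|_{L^p(\Omega,\sigma)}\lesssim \big(p/(p-1)\big)^{1/p}\lesssim p/(p-1)$.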
\section{Estimates for the Bergman kernel function}
We recall known estimates for the Bergman kernel function, and their relation with the volume of the tents in the previous section.
\subsection{Finite Type in $\mathbb C^2$ Case} In \cite{NRSW3}, estimates for the Bergman kernel were expressed in terms of $d_1$ and $\Lambda(p,\delta)$. Similar results were also obtained in \cite{McNeal1}.
\begin{thm}[{\hspace{1sp}\cite{NRSW3,McNeal1}}] \label{thm4.1} Let $\epsilon_0$ be the same as in Lemma \ref{lem3.2}. Then for points $p,q\in N_{\epsilon_0}(\mathbf b\Omega)$, one has
\begin{align}
|K_{\Omega}(p;\bar q)|\lesssim d_1(p,q)^{-2}\Lambda(\pi(p),d_1(p,q))^{-2}.
\end{align}
As a consequence, there is a constant $c$ such that $p,q\in B^\#_1(\pi(p),cd_1(p,q))$ and
\begin{align}
|K_{\Omega}(p;\bar q)|\lesssim (V( B^\#_1(\pi(p),cd_1(p,q))))^{-1}.
\end{align}
\end{thm}
\subsection{Strictly Pseudoconvex Case} When $\Omega$ is bounded, strictly pseudoconvex with smooth boundary, the behavior of the Bergman kernel function is well understood. In \cite{Fefferman,Monvel}, asymptotic expansions of the kernel function were obtained on and off the diagonal. To obtain the $L^p$ mapping property of the Bergman projection, a weaker estimate as in \cite{CuckovicMcNeal} would suffice. The proof of the following theorem can be found in \cite{McNeal2003}.
\begin{thm}[{\hspace{1sp}\cite{McNeal2003}}]\label{thm4.2}
Let $\Omega$ be a smooth, bounded, strictly pseudoconvex domain in $\mathbb C^n$ with a defining function $\rho$. For each $p\in \mathbf b\Omega$, there exists a neighborhood $U$ of $p$, holomorphic coordinates $(\zeta_1,\dots,\zeta_n)$ and a constant $C>0$, such that for $p,q\in U\bigcap \Omega$,
\begin{align}
|K_{\Omega}(p;\bar q)|\leq C\left(|\rho(p)|+|\rho(q)|+|p_n-q_n|+\sum_{j=1}^{n-1}|p_j-q_j|^2\right)^{-n-1}.
\end{align}
Here $p=(p_1,\dots,p_n)$ is in $\zeta$-coordinates.
\end{thm}
Up to a unitary rotation and a translation, we may assume that, under the original $z$-coordinates, $\partial \rho(p)=dz_n$ and $p=0$. Then the holomorphic coordinates $(\zeta_1,\dots,\zeta_n)$ in Theorem \ref{thm4.2} can be expressed via the biholomorphic mapping $\Phi(z)=\zeta$ with
\begin{align*}
&\zeta_1=z_1\\
&\vdots\\
&\zeta_{n-1}=z_{n-1}\\
&\zeta_n=z_n+\frac{1}{2}\sum_{k,l=1}^{n}\frac{\partial^2\rho}{\partial z_l\partial z_k}(p)z_kz_l.
\end{align*}
The next theorem relates the estimate in Theorem \ref{thm4.2} to the measure of the tents:
\begin{thm} \label{thm4.3}Let $p$, $q$, and $(p_1,\dots,p_n)$ be the same as in Theorem \ref{thm4.2}.
There exists a constant $r>0$ such that the tent $B_1^\#(\pi(p),r)$ contains the points $p$ and $q$, and \begin{align}\label{4.2}r^2\approx\max\left\{|\rho(p)|+|\rho(q)|,|p_n-q_n|+\sum_{j=1}^{n-1}|p_j-q_j|^2\right\}.\end{align}Moreover, $|K_{\Omega}(p;\bar q)|\lesssim \left(V(B_1^\#(\pi(p),r))\right)^{-1}$.
\end{thm}
\begin{proof}Note that $\Phi$ is biholomorphic and is a small perturbation of the identity map near $p$. For any points $w,\eta$ in the neighborhood $U$ of $p$ from Theorem \ref{thm4.2}, the distance $d_1(w,\eta)$ is therefore comparable whether computed in the $z$-coordinates or in the $\zeta$-coordinates. Moreover, $\Phi$ is measure preserving since its complex Jacobian determinant satisfies $J_{\mathbb C}\Phi=1$. Therefore we may assume that the results about $d_1$ and the volumes of the tents in Sections 2 and 3 hold true in the $\zeta$-coordinates.
Then by (\ref{3.1}) and strict pseudoconvexity ($\Lambda(\pi(p),\epsilon)=\epsilon^2$), the estimate $$|K_{\Omega}(p;\bar q)|\lesssim (V(B_1^\#(\pi(p),r)))^{-1}$$ holds true for any $r>0$ that satisfies (\ref{4.2}). Therefore it is enough to show the existence of such a constant $r$. Set $r_1=\sqrt{|p_n-q_n|+\sum_{j=1}^{n-1}|p_j-q_j|^2}$ and $r_2=\sqrt{|\rho(p)|+|\rho(q)|}$. Note that $\partial/\partial\zeta_1,\dots,\partial/\partial\zeta_{n-1}$ are in $H_p(\mathbf b\Omega)$ and $\partial/\partial\zeta_n$ is orthogonal to $H_p(\mathbf b\Omega)$. It follows from the fact $\Lambda(\pi(p),\epsilon)=\epsilon^2$
and Corollary \ref{Cor2.7} that there exists a constant $C_1$ such that the boundary point $\pi(q)\in \text{Box}(\pi(p),C_1r_1)$. On the other hand,
$
|\rho(p)|+|\rho(q)|\approx \operatorname{dist}(p,\mathbf b\Omega)+\operatorname{dist}(q,\mathbf b\Omega).
$
Therefore there exists a constant $C_2$ such that $\Lambda(\pi(p),C_2r_2)>|\rho(p)|+|\rho(q)|$. Set $r=\max\{C_1r_1,C_2r_2\}$. Then $B_1^\#(\pi(p),r)$ contains both points $p,q$ and inequality (\ref{4.2}) holds.
\end{proof}
\subsection{Convex/Decoupled Finite Type Case} When $\Omega$ is a smooth, bounded, convex (or decoupled) domain of finite type in $\mathbb C^n$, estimates of the Bergman kernel function on $\Omega$ were obtained in \cite{McNeal2,McNeal91,McNeal2003}. See also \cite{NPT} for a correction of a minor issue in \cite{McNeal2}.
\begin{thm}\label{thm4.4}
Let $\Omega$ be a smooth, bounded, convex (or decoupled) domain of finite type in $\mathbb C^n$. Let $p$ be a boundary point of $\Omega$. There exists a neighborhood $U$ of $p$ so that for all $q_1,q_2\in U\cap \Omega$,
\begin{align}
|K_\Omega(q_1;\bar q_2)|\lesssim \delta^{-2}\prod_{j=2}^{n}\tau_j(q_1,\delta)^{-2},
\end{align}
where $\delta=|\rho(q_1)|+|\rho(q_2)|+\inf\{\epsilon>0:q_2\in D(q_1,\epsilon)\}$.
\end{thm}
We can reformulate Theorem \ref{thm4.4} as below.
\begin{thm}\label{thm4.5}
Let $\Omega$ be a smooth, bounded, convex (or decoupled) domain of finite type in $\mathbb C^n$. Let $p$ be a boundary point of $\Omega$. There exists a neighborhood $U$ of $p$ so that for all $q_1,q_2\in U\cap \Omega$,
\begin{align}\label{4.6}
|K_\Omega(q_1;\bar q_2)|\lesssim \left(V(B_5^\#(\pi(q_1),\delta))\right)^{-1},
\end{align}
where $\delta=|\rho(q_1)|+|\rho(q_2)|+\inf\{\epsilon>0:q_2\in D(q_1,\epsilon)\}$. Moreover, there exists a constant $c$ such that $q_1, q_2\in B_5^\#(\pi(q_1),c\delta)$.
\end{thm}
Here the estimate (\ref{4.6}) follows from (\ref{3.2}). Recall that the polydisc $D(q,\delta)$ induces a global quasi-metric \cite{McNeal3} on $\Omega$. A triangle inequality argument using this quasi-metric then yields the containment $q_2\in B_5^\#(\pi(q_1),c\delta)$.
\subsection{Dyadic Operator Domination}
\begin{thm} \label{thm4.6} Let $\hat K_j^k$, $K_j^k$ be the tents and kubes with respect to $d$ and $B^\#$. Let $\{\mathcal T_l\}_{l=1}^N$ be the finite collections of tents induced by $\{\mathcal Q_l\}_{l=1}^N$ in Lemma \ref{lem3.6}. Then for $p,q\in \Omega$,
\begin{align}\label{4.7}
|K_{\Omega}(p;\bar q)|\lesssim (V(\Omega))^{-1}1_{\Omega\times{\Omega}}(p,q)+ \sum_{l=1}^{N}\sum_{\hat K_j^k\in \mathcal T_l}(V(\hat K_j^k))^{-1}1_{\hat K_j^k\times{\hat K_j^k}}(p,q).
\end{align}
\end{thm}
\begin{proof}It suffices to show that for every $p,q$, there exists a $\hat K^k_j\in \mathcal T_l$ for some $l$ such that $$|K_{\Omega}(p;\bar q)|\lesssim (V(\hat K_j^k))^{-1}1_{\hat K_j^k\times{\hat K_j^k}}(p,q).$$
When $\operatorname{dist}(p,q)\approx1$ or $\operatorname{dist}(p,\mathbf b\Omega)+\operatorname{dist}(q,\mathbf b\Omega)\approx 1$, the pair $(p,q)$ is away from the boundary diagonal of $\Omega\times \Omega$. By Kerzman's Theorem \cite{Kerzman,Boas}, we have $$|K_{\Omega}(p;\bar q)|\lesssim 1\approx (V(\Omega))^{-1}\approx (V(\Omega))^{-1}1_{\Omega\times{\Omega}}(p,q).$$
We turn to the case when $\operatorname{dist}(p,q)$ and $\operatorname{dist}(p,\mathbf b\Omega)+\operatorname{dist}(q,\mathbf b\Omega)$ are both small and we may assume that both $p,q\in \Omega\cap N_{\epsilon_0}(\mathbf b\Omega)$. By Theorems \ref{thm4.1}, \ref{thm4.3}, and \ref{thm4.5}, there exists a small constant $r>0$ such that $p,q\in B^\#(\pi(p),r)$ and $$|K_{\Omega}(p;\bar q)|\lesssim (V(B^\#(\pi(p),r)))^{-1}.$$ By Lemma \ref{lem3.8}, there exists a tent $\hat K^k_j\in \mathcal T_l$ for some $l$ such that $B^\#(\pi(p),r)\subseteq \hat K^k_j$ and $$V(\hat K^k_j)\approx V(B^\#(\pi(p),r)).$$ Thus $p,q\in \hat K^k_j$ and $|K_{\Omega}(p;\bar q)|\lesssim (V(\hat K^k_j))^{-1}1_{\hat K_j^k\times{\hat K_j^k}}(p,q).$
\end{proof}
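The content of Theorem \ref{thm4.6} can be checked by hand in the one-dimensional model of the unit disc (outside the settings above, but instructive; not used later). The Bergman kernel of $\mathbb D$ is $K_{\mathbb D}(z;\bar w)=\pi^{-1}(1-z\bar w)^{-2}$. For $z,w\in\mathbb D$ close to the boundary, setting $p:=\pi(z)$ one can find $r$ with $r^2\approx|1-z\bar w|$ such that $z,w\in B_1^\#(p,r)$, and
\[
|K_{\mathbb D}(z;\bar w)|\approx|1-z\bar w|^{-2}\approx r^{-4}\approx\big(V(B_1^\#(p,r))\big)^{-1},
\]
since $\Lambda(p,\epsilon)\approx\epsilon^2$ and the tent has dimensions $\approx r\times r^2$. Selecting a dyadic tent of comparable volume containing $B_1^\#(p,r)$, as in Lemma \ref{lem3.8}(2), produces exactly one term of the sum on the right-hand side of (\ref{4.7}).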
\section{Proof of Theorem \ref{t:main}}
Given a function $h$ on $\Omega$, we set $M_h$ to be the multiplication operator by $h$:
\[M_h(f)(z):=h(z)f(z).\]
Let $\sigma$ be a weight on $\Omega$. Set $\nu(z):=\sigma^{{-p^\prime}/{p}}(z)$ where $p^\prime$ is the H\"older conjugate index of $p$. Then it follows that the operator norms of $P$ and $P^+$ on the weighted space $L^p(\Omega,\sigma)$ satisfy:
\begin{align}\label{5.1}
&\|P:L^p(\Omega, \sigma)\to L^p(\Omega, \sigma) \|=\|PM_\nu:L^p(\Omega,\nu)\to L^p(\Omega,\sigma)\|;\\&
\label{2.191}
\|P^+:L^p(\Omega, \sigma)\to L^p(\Omega, \sigma) \|=\|P^+M_\nu:L^p(\Omega,\nu)\to L^p(\Omega,\sigma)\|.
\end{align}
Since $|PM_\nu f|\leq P^+M_\nu |f|$ pointwise, it suffices to estimate $\|P^+M_\nu:L^p(\Omega,\nu)\to L^p(\Omega,\sigma)\|$.
Let $\{\mathcal T_l\}_{l=1}^N$ be the finite collections of tents in Theorem \ref{thm4.6}. Then inequality (\ref{4.7}) holds: for $p,q\in \Omega$,
\begin{align}
|K_{\Omega}(p;\bar q)|\lesssim (V(\Omega))^{-1}1_{\Omega\times{\Omega}}(p,q)+\sum_{l=1}^{N}\sum_{\hat K_j^k\in \mathcal T_l}(V(\hat K_j^k))^{-1}1_{\hat K_j^k\times{\hat K_j^k}}(p,q).
\end{align}
Applying this inequality to the operator $P^+M_\nu$ yields
\begin{align}
\left|P^+M_\nu f(z)\right|\leq&\int_{\Omega}|K_{\Omega}(z;\bar w)\nu(w)f(w)|dV(w)
\nonumber\\\lesssim&\langle f\nu\rangle^{dV}_{\Omega}+\int_{\Omega}\sum_{l=1}^{N}\sum_{\hat K^k_j\in \mathcal T_l}\frac{1_{\hat K^k_{j}}(z)1_{\hat K^k_{j}}(w)\left|\nu(w) f(w)\right|}{V(\hat K^k_{j})}dV(w)\nonumber\\=&\langle f\nu\rangle^{dV}_{\Omega}+\sum_{l=1}^{N}\sum_{\hat K^k_j\in \mathcal T_l}{1_{\hat K^k_{j}}(z)}\langle f\nu\rangle^{dV}_{\hat K^k_{j}}.
\end{align}
Set $Q^+_{0,\nu}(f)(z):=\langle f\nu\rangle^{dV}_{\Omega}$ and $Q^+_{l,\nu}f(z):=\sum_{\hat K^k_j\in \mathcal T_l}{1_{\hat K^k_{j}}(z)}\langle f\nu\rangle^{dV}_{\hat K^k_{j}}$. Then it suffices to estimate the norms of the operators $Q^+_{l,\nu}$ for $l=0,1,\dots, N$. The proof given below uses the argument for the upper bound of sparse operators in the weighted theory of harmonic analysis; see for example \cite{Moen2012} and \cite{Lacey2017}. An estimate for the norm of $Q^+_{0,\nu}$ follows easily from H\"older's inequality:
\begin{align}\label{5.50}
\frac{\|Q^+_{0,\nu}(f)\|^p_{L^p(\Omega,\sigma)}}{\|f\|^p_{L^p(\Omega,\nu)}}\lesssim \frac{(\langle f\nu\rangle^{dV}_{\Omega})^p\langle\sigma\rangle^{dV}_{\Omega}}{\int_\Omega|f|^p\nu dV}\lesssim \langle\sigma\rangle^{dV}_{\Omega}(\langle\nu\rangle^{dV}_{\Omega})^{p-1}.
\end{align}
Now we turn to $Q^+_{l,\nu}$ for $l\neq0$.
Assume $p\geq 2$. For any $g\in L^{p^\prime}(\Omega,\sigma)$,
\begin{align}\label{5.5}
\left|\left\langle Q^+_{l,\nu} f(z), g(z)\sigma(z)\right\rangle\right|=&\left|\int_{\Omega} Q^+_{l,\nu} f(z)g(z)\sigma(z)dV(z)\right|\nonumber\\=&\left|\int_{\Omega}\sum_{\hat K^k_j\in \mathcal T_l}{1_{\hat K^k_{j}}(z)}\langle f\nu\rangle^{dV}_{\hat K^k_{j}}g(z)\sigma(z) dV(z)\right|\nonumber\\\leq &\sum_{\hat K^k_j\in \mathcal T_l}\langle f\nu\rangle^{dV}_{\hat K^k_{j}}\int_{\hat K^k_{j}}|g(z)|\sigma (z) dV(z)\nonumber\\=&\sum_{\hat K^k_j\in \mathcal T_l}\langle f\nu\rangle^{dV}_{\hat K^k_{j}}\langle g\sigma\rangle^{dV}_{\hat K^k_{j}}V(\hat K^k_{j}).
\end{align}
Since $\langle f\nu\rangle^{dV}_{\hat K^k_{j}}\langle g\sigma\rangle^{dV}_{\hat K^k_{j}}=\langle f\rangle^{\nu dV}_{\hat K^k_{j}}\langle\nu\rangle^{dV}_{\hat K^k_{j}}\langle g\rangle^{\sigma dV}_{\hat K^k_{j}} \langle \sigma\rangle^{dV}_{\hat K^k_{j}}$, it follows that
\begin{align}\label{5.51}
\sum_{\hat K^k_j\in \mathcal T_l}\langle f\nu\rangle^{dV}_{\hat K^k_{j}}\langle g\sigma\rangle^{dV}_{\hat K^k_{j}}V(\hat K^k_{j})=&\sum_{\hat K^k_j\in \mathcal T_{l}}\langle f\rangle^{\nu dV}_{\hat K^k_{j}}\langle\nu\rangle^{dV}_{\hat K^k_{j}}\langle g\rangle^{\sigma dV}_{\hat K^k_{j}} \langle \sigma\rangle^{dV}_{\hat K^k_{j}}V(\hat K^k_{j})\nonumber\\=&\sum_{\hat K^k_j\in \mathcal T_{l}}\left(\langle\nu \rangle^{dV}_{\hat K^k_{j}}\right)^{p-1} \langle \sigma\rangle^{dV}_{\hat K^k_{j}}\langle f\rangle^{\nu dV}_{\hat K^k_{j}}\langle g\rangle^{\sigma dV}_{\hat K^k_{j}}V(\hat K^k_{j})\left(\langle\nu\rangle^{dV}_{\hat K^k_{j}}\right)^{2-p}.
\end{align}
Then \begin{align}\label{5.6}
&\sum_{\hat K^k_j\in \mathcal T_{l}}\left(\langle\nu \rangle^{dV}_{\hat K^k_{j}}\right)^{p-1} \langle \sigma\rangle^{dV}_{\hat K^k_{j}}\langle f\rangle^{\nu dV}_{\hat K^k_{j}}\langle g\rangle^{\sigma dV}_{\hat K^k_{j}}V(\hat K^k_{j})\left(\langle\nu\rangle^{dV}_{\hat K^k_{j}}\right)^{2-p}\nonumber\\\leq&\sup_{1\leq l\leq N}\sup_{\hat K^k_j\in \mathcal T_{l}}\langle\sigma \rangle^{dV}_{\hat K^k_j}\left(\langle \nu\rangle^{dV}_{\hat K^k_j}\right)^{p-1}\sum_{\hat K^k_j\in \mathcal T_{l}}\langle f\rangle^{\nu dV}_{\hat K^k_{j}}\langle g\rangle^{\sigma dV}_{\hat K^k_{j}}\left(V(\hat K^k_{j})\right)^{p-1}\left(\nu(\hat K^k_{j})\right)^{2-p}.
\end{align}
By Lemma \ref{3.10}, one has $V(\hat K^k_j)\approx V(K^k_j)$. The fact that $p\geq 2$ and the containment $K^k_{j}\subseteq \hat K^k_{j}$ give the inequality
$\left(\nu(\hat K^k_{j})\right)^{2-p}\leq\left(\nu( K^k_{j})\right)^{2-p}$. These two facts yield:
\begin{align}
\left(V(\hat K^k_{j})\right)^{p-1}\left(\nu(\hat K^k_{j})\right)^{2-p}\lesssim \left(V( K^k_{j})\right)^{p-1}\left(\nu( K^k_{j})\right)^{2-p}.
\end{align}
By H\"older's inequality, $$V(K^k_{j})\leq \left(\nu(K^k_{j})\right)^{\frac{1}{p^\prime}}\left(\sigma({K^k_{j}})\right)^{\frac{1}{p}}.$$
Therefore,
\begin{equation}
\left(V( K^k_{j})\right)^{p-1}\left(\nu( K^k_{j})\right)^{2-p}\leq \left(\nu( K^k_{j})\right)^{\frac{1}{p}}\left(\sigma({K^k_{j}})\right)^{\frac{1}{p^\prime}}.
\end{equation}
Substituting these inequalities into the last line of (\ref{5.6}), we obtain
\begin{align*}
&\sum_{\hat K^k_j\in \mathcal T_{l}}\langle f\rangle^{\nu dV}_{\hat K^k_{j}}\langle g\rangle^{\sigma dV}_{\hat K^k_{j}}\left(V(\hat K^k_{j})\right)^{p-1}\left(\nu(\hat K^k_{j})\right)^{2-p}\lesssim\mathcal \sum_{\hat K^k_j\in \mathcal T_{l}}\langle f\rangle^{\nu dV}_{\hat K^k_{j}}\langle g\rangle^{\sigma dV}_{\hat K^k_{j}}\left(\nu( K^k_{j})\right)^{\frac{1}{p}}\left(\sigma({K^k_{j}})\right)^{\frac{1}{p^\prime}}.
\end{align*}
Applying H\"older's inequality again the sum above gives
\begin{align}\label{5.80}
&\sum_{\hat K^k_j\in \mathcal T_{l}}\langle f\rangle^{\nu dV}_{\hat K^k_{j}}\langle g\rangle^{\sigma dV}_{\hat K^k_{j}}\left(\nu( K^k_{j})\right)^{\frac{1}{p}}\left(\sigma({K^k_{j}})\right)^{\frac{1}{p^\prime}}\nonumber
\\\leq&\left(\sum_{\hat K^k_j\in \mathcal T_{l}}\left(\langle f\rangle^{\nu dV}_{\hat K^k_{j}}\right)^{p}\nu(K^k_{j})\right)^{\frac{1}{p}}\left(\sum_{\hat K^k_j\in \mathcal T_{l}}\left(\langle g\rangle^{\sigma dV}_{\hat K^k_{j}}\right)^{p^\prime}\sigma({K^k_{j}})\right)^{\frac{1}{p^\prime}}.
\end{align}
By the disjointness of $K^k_{j}$ and Lemma \ref{lem3.12}, we have
\begin{equation}\label{5.8}
\sum_{\hat K^k_j\in \mathcal T_{l}}\left(\langle f\rangle^{\nu dV}_{\hat K^k_{j}}\right)^{p}\nu( K^k_{j})\leq \int_{\Omega} (\mathcal M_{\mathcal T_{l},\nu}f)^p\nu dV\leq (p^\prime)^{p}\|f\|^{p}_{L^p(\Omega,\nu)}.
\end{equation}
Similarly, we also have
\begin{equation}\label{5.9}
\sum_{\hat K^k_j\in \mathcal T_{l}}\left(\langle g\rangle^{\sigma dV}_{\hat K^k_{j}}\right)^{p^\prime}\sigma( K^k_{j})\leq \int_{\Omega} (\mathcal M_{\mathcal T_{l},\sigma}g)^{p^\prime}\sigma dV\leq (p)^{p^\prime}\|g\|^{p^\prime}_{L^{p^\prime}(\Omega,\sigma)}.
\end{equation}
Substituting (\ref{5.8}) and (\ref{5.9}) back into (\ref{5.80}) and (\ref{5.5}), we finally obtain
\begin{equation*}
\left\langle Q^+_{l,\nu} f, g\sigma\right\rangle\lesssim pp^\prime \sup_{1\leq l\leq N}\sup_{\hat K^k_j\in \mathcal T_{l}}\langle\sigma \rangle^{dV}_{\hat K^k_j}\left(\langle \nu\rangle^{dV}_{\hat K^k_j}\right)^{p-1}\|f\|_{L^p(\Omega,\nu )} \|g\|_{L^{p^\prime}(\Omega,\sigma)}.
\end{equation*}
Therefore $\sum_{l=1}^N\|Q^+_{l,\nu}\|_{L^p(\Omega, \sigma)}\lesssim pp^\prime \sup_{1\leq l\leq N}\sup_{\hat K^k_j\in \mathcal T_{l}}\langle\sigma \rangle^{dV}_{\hat K^k_j}\left(\langle \nu\rangle^{dV}_{\hat K^k_j}\right)^{p-1}$.
Now we turn to the case $1<p<2$ and show that for all $f\in L^p(\Omega,\nu)$ and $g\in L^{p^\prime}(\Omega,\sigma)$
\begin{equation*}
\left \langle Q^+_{l,\nu} f, g\sigma\right\rangle\lesssim\left(\sup_{1\leq l\leq N}\sup_{\hat K^k_j\in \mathcal T_{l}}\langle\sigma \rangle^{dV}_{\hat K^k_j}\left(\langle \nu\rangle^{dV}_{\hat K^k_j}\right)^{p-1}\right)^{\frac{1}{p-1}}\|f\|_{L^p(\Omega,\nu)} \|g\|_{L^{p^\prime}(\Omega,\sigma)}.
\end{equation*}
By the definition of $Q^+_{l,\nu}$,
\begin{align}\label{5.11}
\left \langle Q^+_{l,\nu} f, g\sigma\right\rangle&=\left\langle\sum_{\hat K^k_j\in \mathcal T_{l}}1_{\hat K^k_{j}}(w)\langle f\nu\rangle^{dV}_{\hat K^k_{j}},g\sigma\right\rangle\nonumber\\&=\sum_{\hat K^k_j\in \mathcal T_{l}}\langle f\nu\rangle^{dV}_{\hat K^k_{j}}\langle g\sigma\rangle^{dV}_{\hat K^k_{j}}V(\hat K^k_{j})\nonumber
\\&=\sum_{\hat K^k_j\in \mathcal T_{l}}\left\langle 1_{\hat K^k_{j}}(w)\langle g\sigma\rangle^{dV}_{\hat K^k_{j}},f\nu\right\rangle
\nonumber\\&=\left\langle Q^+_{l,\sigma}(g),f\nu\right\rangle.
\end{align}Since $1<p<2$, $p^\prime>2$. Then, replacing $p$ by $p^\prime$, interchanging $\sigma$ and $\nu$, and adopting the same argument for the case $p\geq2$ yields that
\begin{align*}
\|Q^+_{l,\sigma} \|_{L^{p^\prime}(\Omega,\nu )}&\lesssim p^\prime p\sup_{1\leq l\leq N}\sup_{\hat K^k_j\in \mathcal T_{l}}\langle\nu \rangle^{dV}_{\hat K^k_j}\left(\langle \sigma\rangle^{dV}_{\hat K^k_j}\right)^{p^\prime-1}
\nonumber\\&=pp^\prime\left(\sup_{1\leq l\leq N}\sup_{\hat K^k_j\in \mathcal T_{l}}\langle\sigma \rangle^{dV}_{\hat K^k_j}\left(\langle \nu\rangle^{dV}_{\hat K^k_j}\right)^{p-1}\right)^{\frac{1}{p-1}}.
\end{align*}
Thus we have $$\left \langle Q^+_{l,\nu} f, g\sigma\right\rangle\lesssim pp^\prime\left(\sup_{1\leq l\leq N}\sup_{\hat K^k_j\in \mathcal T_{l}}\langle\sigma \rangle^{dV}_{\hat K^k_j}\left(\langle \nu\rangle^{dV}_{\hat K^k_j}\right)^{p-1}\right)^\frac{1}{p-1}\|g\|_{L^{p^\prime}(\Omega,\sigma )}\|f\|_{L^p(\Omega,\nu)},$$ and $$\|Q^+_{l,\nu}:L^p(\Omega,\nu)\to L^p(\Omega,\sigma)\|\lesssim pp^\prime\left(\sup_{1\leq l\leq N}\sup_{\hat K^k_j\in \mathcal T_{l}}\langle\sigma \rangle^{dV}_{\hat K^k_j}\left(\langle \nu\rangle^{dV}_{\hat K^k_j}\right)^{p-1}\right)^{\frac{1}{p-1}}.$$
Combining the results for $Q^+_{l,\nu}$ with $l\neq 0$ for $1<p< 2$ and $p\geq2$ and inequality (\ref{5.50}) for $Q^+_{0,\nu}$ gives the estimate in Theorem \ref{t:main}: $$\|P^+\|_{L^p(\Omega,\sigma)}\lesssim [\sigma]_p.$$
\section{A sharp example}
In this section, we provide an example to show that the estimate in Theorem \ref{t:main} is sharp. Our example is for the case $1<p\leq2$. The case $p>2$ follows by a duality argument.
The idea is similar to the ones in \cite{Pott,Rahm}. Since $\Omega$ is a pseudoconvex domain of finite type, Kerzman's Theorem \cite{Kerzman,Boas} implies that the kernel function $K_\Omega$ extends to a $C^\infty$ function away from a neighborhood of the boundary diagonal of $\Omega\times\Omega$. Let $w_\circ\in \Omega$ be a point that is away from the set $N_{\epsilon_0}(\mathbf b\Omega)$ and satisfies $K_\Omega(w_\circ;\bar w_\circ)=2C_1>0$ for some constant $C_1$. Then the maximum principle implies that $\{z\in\mathbf b\Omega:|K_\Omega(z;\bar w_\circ)|>C_1\}$ is a non-empty open subset of $\mathbf b\Omega$. We claim that this set contains a strictly pseudoconvex point $z_\circ$. Let $q$ be a point in $\{z\in\mathbf b\Omega:|K_\Omega(z;\bar w_\circ)|>C_1\}$. Since $q$ is a point of finite 1-type in the sense of D'Angelo, the determinant of the Levi form does not vanish identically near $q$. Thus the determinant of the Levi form is strictly positive at some point in every neighborhood of $q$, i.e., there is a sequence of strictly pseudoconvex points converging to $q$. Since $\{z\in\mathbf b\Omega:|K_\Omega(z;\bar w_\circ)|>C_1\}$ is a neighborhood of $q$,
there exists a strictly pseudoconvex point $z_\circ$ such that \begin{equation}\label{7.10}|K_{\Omega}(z_\circ;\bar w_\circ)|> C_1.\end{equation} Several proofs of the existence of a point near a point of finite type where the determinant of the Levi form does not vanish are available in the literature; see for example \cite{Nicoara} or the forthcoming thesis of Fassina \cite{Fassina}. Nevertheless, we choose the strictly pseudoconvex point $z_\circ$ above only to simplify the construction of our example; it is not required. See Remark \ref{re 6.1} below. By Kerzman's Theorem again, there exists a small constant $\delta_0$ such that for any pair of points $(z,w)\in B^\#(z_\circ,\delta_0)\times \{w\in\Omega:\operatorname{dist}(w,w_\circ)<\delta_0\}$,
\[|K_\Omega(z, \bar w)-K_{\Omega}(z_\circ;\bar w_\circ)|\leq C_1/10.\]
Thus for $(z,w)\in B^\#(z_\circ,\delta_0)\times \{w\in\Omega:\operatorname{dist}(w,w_\circ)<\delta_0\}$, one has
\[|K_\Omega(z, \bar w)|\approx C_1.\]
Moreover, elementary geometric reasoning yields that \begin{align}\label{7.11}\arg\{K_{\Omega}(z;\bar w),K_{\Omega}(z_\circ;\bar w_\circ)\}\in [-\sin^{-1}(1/10),\sin^{-1}(1/10)].\end{align} From now on, we let $\delta_0$ be a fixed constant.
For $z\in\Omega$, we set
\begin{align}\label{6.3}
h(z)=\inf\{\delta>0:z\in B^\#(z_\circ,\delta)\}, \text{ and }\;\;l(z)=\operatorname{dist}(z,w_\circ).
\end{align}
Let $1<p\leq 2$. Let $s$ be a positive constant that is sufficiently close to $0$. We define the weight function $\sigma$ on $\Omega$ to be
\begin{align}
\sigma(w)=\frac{(h(w))^{(p-1)(2+2n-2s)}}{(l(w))^{2n-2s}}.
\end{align}
We claim that the constant $ [\sigma]_p\approx s^{-1}$.
First, we consider the average of $\sigma$ and $\sigma^{\frac{1}{1-p}}$ over the tent $B^\#(z,\delta)$. Note that \[\sigma^{\frac{1}{1-p}}(w)=\frac{(h(w))^{(2s-2n-2)}}{(l(w))^{(2s-2n)/(p-1)}}.\] If the tent $B^\#(z,\delta)$ does not intersect $B^\#(z_\circ,\delta)$, then for any $w\in B^\#(z,\delta)$ we have $h(w)\approx x+\delta$ and $l(w)\approx 1$, where $x$ denotes the distance from $B^\#(z,\delta)$ to $z_\circ$; since the two tents are disjoint, $x\gtrsim \delta$. Thus we have
\begin{equation}\label{7.4}\langle \sigma\rangle^{dV}_{B^\#(z,\delta)}\left(\langle \sigma^{\frac{1}{1-p}}\rangle^{dV}_{B^\#(z,\delta)}\right)^{p-1}\approx {(x+\delta)^{(p-1)(2+2n-2s)}}\left({(x+\delta)^{(2s-2n-2)}}\right)^{p-1}= 1.\end{equation} If $B^\#(z,\delta)$ intersects $B^\#(z_\circ,\delta)$, then there exists a constant $C$ so that $B^\#(z_\circ,C\delta)$ contains $B^\#(z,\delta)$ with $|B^\#(z_\circ,C\delta)|\approx |B^\#(z,\delta)|$ by the doubling property of the ball $B^\#$.
Hence
\[\langle \sigma\rangle^{dV}_{B^\#(z,\delta)}\left(\langle \sigma^{\frac{1}{1-p}}\rangle^{dV}_{B^\#(z,\delta)}\right)^{p-1}\lesssim \langle \sigma\rangle^{dV}_{B^\#(z_\circ,C\delta)}\left(\langle \sigma^{\frac{1}{1-p}}\rangle^{dV}_{B^\#(z_\circ,C\delta)}\right)^{p-1}.\]
Since $w_\circ$ is away from the set $N_{\epsilon_0}(\mathbf b\Omega)$ and hence is away from any tents, $l(w)\approx 1$ for any $w\in B^\#(z_\circ,C\delta)$. It follows that
\[\langle \sigma\rangle^{dV}_{B^\#(z_\circ,C\delta)}\approx\int_{B^\#(z_\circ,C\delta)}h(w)^{(p-1)(2n+2-2s)}dV(w)(V(B^\#(z,\delta)))^{-1}.\]
Recall that $z_\circ$ is a strictly pseudoconvex point. There exist special holomorphic coordinates $(\zeta_1,\dots,\zeta_n)$ in a neighborhood of $z_\circ$ as in Theorem \ref{thm4.2} so that $z_\circ=(0,\dots, 0)$ and the tent
\[D(z_\circ,\delta):=\{w=(\zeta_1,\dots,\zeta_n)\in \Omega:|\zeta_n|<\delta^2, |\zeta_j|<\delta, j=1,\dots, n-1\},\]
is equivalent to $B^\#(z_\circ,C\delta)$ in the sense that there exist constants $c_1$ and $c_2$ so that \[D(z_\circ,c_1\delta)\subseteq B^\#(z_\circ,C\delta)\subseteq D(z_\circ,c_2\delta).\]
Moreover, $h(w)\approx (|\zeta_1|^2+\cdots+|\zeta_{n-1}|^2+|\zeta_n|)^{\frac{1}{2}}$. Therefore
\begin{equation}
\label{6.5}
\langle \sigma\rangle^{dV}_{B^\#(z_\circ,C\delta)}\approx\int_{B^\#(z_\circ,C\delta)}h(w)^{(p-1)(2n+2-2s)}dV(w)(V(B^\#(z,\delta)))^{-1}\approx\delta^{(p-1)(2n+2-2s)}.\end{equation}
Similarly,
\begin{equation}\label{6.6}\langle \sigma^{\frac{1}{1-p}}\rangle^{dV}_{B^\#(z_\circ,C\delta)}\approx\int_{B^\#(z_\circ,C\delta)}h(w)^{2s-2n-2}dV(w)(V(B^\#(z,\delta)))^{-1}\approx s^{-1}\delta^{(2s-2n-2)},\end{equation}
where $s^{-1}$ comes from the power rule $\int_0^a t^{s-1}dt={a^s}/{s}$.
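More precisely, since in the coordinates above $V\left(\{w: h(w)\leq r\}\right)\approx r^{2n+2}$ for $0<r\lesssim\delta$, integrating against the distribution function of $h$ gives
\begin{align*}
\int_{B^\#(z_\circ,C\delta)}h(w)^{2s-2n-2}dV(w)\approx\int_0^{\delta} r^{2s-2n-2}\,r^{2n+1}\,dr=\int_0^{\delta} r^{2s-1}\,dr=\frac{\delta^{2s}}{2s},
\end{align*}
and dividing by $V(B^\#(z,\delta))\approx \delta^{2n+2}$ produces the factor $s^{-1}\delta^{2s-2n-2}$ in (\ref{6.6}).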
Combining these inequalities yields
\begin{equation}\label{7.5}\langle \sigma\rangle^{dV}_{B^\#(z,\delta)}\left(\langle \sigma^{\frac{1}{1-p}}\rangle^{dV}_{B^\#(z,\delta)}\right)^{p-1}\lesssim\langle \sigma\rangle^{dV}_{B^\#(z_\circ,C\delta)}(\langle \sigma^{\frac{1}{1-p}}\rangle^{dV}_{B^\#(z_\circ,C\delta)})^{p-1}\approx s^{-(p-1)}.\end{equation}
Now we turn to the average of $\sigma$ and $\sigma^{\frac{1}{1-p}}$ over the entire domain $\Omega$. Note that $$\langle \sigma\rangle^{dV}_{\Omega}\approx\int_\Omega l(w)^{2s-2n}d V(w).$$ A computation using polar coordinates yields that
$\langle \sigma\rangle^{dV}_{\Omega} \approx s^{-1}$. Also,
\begin{align*}\langle \sigma^{\frac{1}{1-p}}\rangle^{dV}_{\Omega}&\approx\int_\Omega h(w)^{2s-2n-2}d V(w)\\&=\int_{B^\#(z_\circ,\delta_0)} h(w)^{2s-2n-2}d V(w)+\int_{\Omega\backslash B^\#(z_\circ,\delta_0)} h(w)^{2s-2n-2}d V(w)\\&\approx \delta_0^{2s-2n-2}s^{-1}+ \delta_0^{2s-2n-2}\approx s^{-1},\end{align*}
where the third approximation sign follows by $s$ being sufficiently small and $\delta_0$ being a fixed constant. Thus
\begin{equation}\label{7.6}\langle \sigma\rangle^{dV}_{\Omega}\left(\langle \sigma^{\frac{1}{1-p}}\rangle^{dV}_{\Omega}\right)^{p-1}\approx s^{-1}(s^{-1})^{p-1}\approx s^{-p}.\end{equation}
This estimate together with inequalities (\ref{7.4}) and (\ref{7.5}) yields that $[\sigma]_p\approx s^{-1}$.
Now we consider the function
\[f(w)=\sigma^{\frac{1}{1-p}}(w)1_{B^\#(z_\circ,\delta_0)}(w),\]
where $\delta_0$ is the same fixed constant so that (\ref{7.11}) holds.
Since $z_\circ$ is a point away from $w_\circ$, $l(w)\approx 1$ for $w\in B^\#(z_\circ,\delta_0)$. Thus
\[\|f\|^p_{L^p(\Omega,\sigma)}=\langle \sigma^{\frac{1}{1-p}}\rangle^{dV}_{B^\#(z_\circ,\delta_0)}V(B^\#(z_\circ,\delta_0))\approx s^{-1}.\]
When $z\in \{w\in\Omega:\operatorname{dist}(w,w_\circ)<\delta_0\}$, (\ref{7.10}) and (\ref{7.11}) imply that
\begin{align}\label{6.9}
|P(f)(z)|&=\left|\int_{B^\#(z_\circ,\delta_0)}K_\Omega(z;\bar w)f(w)dV(w)\right|\nonumber\\&\approx \int_{B^\#(z_\circ,\delta_0)}|K_\Omega(z_\circ;\bar w_\circ)|f(w)dV(w)\nonumber\\&\approx \int_{B^\#(z_\circ,\delta_0)}f(w)dV(w)=\langle\sigma^{\frac{1}{1-p}}\rangle^{dV}_{B^\#(z_\circ,\delta_0)}V(B^\#(z_\circ,\delta_0))\approx s^{-1}.
\end{align}
By (\ref{6.9}) and the fact that $\delta_0\approx 1$, we obtain the desired estimate:
\begin{align}
\|P(f)\|^p_{L^p(\Omega,\sigma)}&=\int_\Omega|P(f)(z)|^p\sigma(z)dV(z)\nonumber\\&\geq\int_{\{z\in\Omega:\operatorname{dist}(z,w_\circ)<\delta_0\}}|P(f)(z)|^p\sigma(z)dV(z)\nonumber\\
&\gtrsim s^{-p}\int_{\{z\in\Omega:\operatorname{dist}(z,w_\circ)<\delta_0\}} (\operatorname{dist}(z,w_\circ))^{2s-2n}dV(z)\nonumber\\&\approx s^{-p}s^{-1}\approx ([\sigma]_p)^p\|f\|^p_{L^p(\Omega,\sigma)}.
\end{align}
\begin{rmk}In the particular case of the unit ball $\mathbb{B}_n$ we can make our example more explicit: take the weight $\sigma(w)=|w-z_\circ|^{(p-1)(2+2n-2s)}/|w|^{2n-2s}$ with $z_\circ=(1,0,\dots,0)$ and the test function $f(w)=\sigma^{\frac{1}{1-p}}(w)1_{B^\#(z_\circ,1/2)}(w)$. One can compute explicitly in this case that $\sigma$ is in the $\mathcal B_p$ class.
We further remark that in \cite{Rahm}, the authors produce an upper and a lower bound in terms of a Bekoll\'{e}-Bonami condition that does not utilize information about the large tents. The upper bound they produce is correct; however, the claimed sharpness of the Bekoll\'{e}-Bonami condition without testing the large tents is not quite correct. The example they construct does appropriately capture the behavior of small tents, but fails to do so in the case of large tents, and therefore fails to capture the sharpness. It is for this reason that we have had to modify the definition of the Bekoll\'{e}-Bonami characteristic in Definition \ref{de3.4} to reflect the behavior of both large and small tents.\end{rmk}
\begin{rmk}\label{re 6.1}In this example, we require $z_\circ$ to be a strictly pseudoconvex point only for the simplicity of the construction of the weight $\sigma$ and the test function $f$, and the computation. For every $z\in\mathbf b\Omega$, the geometry of the tent $B^\#(z,\delta)$ is well understood. Thus $\sigma$ and $f$ can be modified accordingly so that the estimate (\ref{7.5}) still holds true.
\end{rmk}
\begin{rmk}
For a different example, we can also choose $z_\circ$ to be a point in $\Omega$ that is away from both $N_{\epsilon_0}(\mathbf b\Omega)$ and $w_\circ$, and change $h(w)$ in (\ref{6.3}) to be $(\operatorname{dist}(z_\circ, w))^{(p-1)(2n-2s)}$. The average of $\sigma$ and $\sigma^{\frac{1}{1-p}}$ over tents is controlled by a constant since all tents are away from the points $z_\circ$ and $w_\circ$. Moreover, $\langle\sigma\rangle^{dV}_\Omega(\langle\sigma^{\frac{1}{1-p}}\rangle^{dV}_\Omega)^{p-1}\approx s^{-p}$ by a computation using polar coordinates.
Thus $[\sigma]_p\approx s^{-1}$ and a similar argument yields the sharpness of the bound in Theorem \ref{t:main}. We did not adopt this example since it does not reflect the connection between the weighted norm of the projection and the average of $\sigma$ and $\sigma^{\frac{1}{1-p}}$ over small tents.
\end{rmk}
\begin{rmk}When the weight $\sigma\equiv1$, the constant $[\sigma]_p\approx pp^\prime$. Theorem \ref{t:main} then gives an estimate for the $L^p$ norm of the Bergman projection: \[\|P\|_{L^p(\Omega)}\lesssim pp^\prime.\] For the strictly pseudoconvex case, such an estimate was obtained and proven to be sharp by \v{C}u\v{c}kovi\'{c} \cite{Cuckovic17}. Therefore, the constant $pp^\prime$ in $[\sigma]_p$ is necessary.\end{rmk}
\section{Proof of Theorem \ref{t:main1}}
We first prove the lower bound (\ref{1.2}) in Theorem \ref{t:main1} under the assumption that $\Omega$ is bounded, smooth, and strictly pseudoconvex.
We begin by recalling the following two lemmas from \cite{HWW}.
\begin{lem}\label{Lem7.0}
Let $\Omega$ be a smooth, bounded, strictly pseudoconvex domain. If the Bergman projection $P$ is bounded on the weighted space $L^p(\Omega,\sigma)$, then the weight $\sigma$ and its dual weight $\nu=\sigma^{\frac{1}{1-p}}$ are integrable on $\Omega$.
\end{lem}
\begin{lem}\label{Lem7.1}
Let $\Omega$ be a smooth, bounded, strictly pseudoconvex domain. Let $\delta$ be a small constant. For a boundary point $z_1$, let $B^\#(z_1,\delta)$ be a tent defined as in Definition \ref{de3.3}. Then there exists a tent $B^\#(z_2,\delta)$ with $d(B(z_1,\delta),B(z_2,\delta))\approx \delta$ so that if $f\geq 0$ is a function supported in $B^\#(z_i,\delta)$ and $z\in B^\#(z_j,\delta)$ with $i\neq j$ and $i,j\in \{1,2\}$, then we have
$$|P(f)(z)|\gtrsim \langle f\rangle^{dV}_{B^\#(z_i,\delta)}.$$
\end{lem}
Recall that $\nu=\sigma^{1/(1-p)}$. By (\ref{5.1}),
\begin{equation*}
\|P\|_{L^p(\Omega,\sigma dV)}= \|PM_\nu:L^p(\Omega,\nu dV)\to L^p(\Omega,\sigma dV)\|.
\end{equation*}
It suffices to show that $$\sup_{\epsilon_0>\delta>0, z\in \mathbf b\Omega}\langle\sigma \rangle^{dV}_{B^\#(z,\delta)}\left(\langle \nu\rangle^{dV}_{B^\#(z,\delta)}\right)^{p-1}\lesssim \|PM_\nu:L^p(\Omega,\nu dV)\to L^p(\Omega,\sigma dV)\|^{2p}.$$ For simplicity, we set $\mathcal A:=\|PM_\nu:L^p(\Omega,\nu dV)\to L^p(\Omega,\sigma dV)\|$. If $\mathcal A<\infty$, then we have a weak-type $(p,p)$ estimate:
\begin{equation}\label{3.19}
\sigma\{w\in\Omega:|PM_\nu f(w)|>\lambda\}\lesssim\frac{\mathcal A^{p}}{\lambda^p}\|f\|^p_{L^p(\Omega,\nu dV)}.
\end{equation}
Let $\delta_0$ be a fixed constant so that Lemma \ref{Lem7.1} is true for all $\delta<\delta_0$. Set $f(w)=1_{B^\#(z_1,\delta)}(w)$. Lemma \ref{Lem7.1} implies that for any $z\in B^\#(z_2,\delta)$,
\begin{align}
|PM_\nu1_{B^\#(z_1,\delta)}(z)|=&\left|\int_{ B^\#(z_1,\delta)}K_\Omega(z;\bar w)\nu(w) dV(w)\right|> \langle \nu\rangle^{dV}_{B^\#(z_1,\delta)}.
\end{align}
It follows that
\begin{equation}
B^\#(z_2,\delta)\subseteq \{w\in\Omega:|PM_\nu f(w)|>\langle \nu\rangle^{dV}_{B^\#(z_1,\delta)}\}.
\end{equation}
By Lemma \ref{Lem7.0}, $\langle \nu\rangle^{dV}_{B^\#(z_1,\delta)}$ is finite. Then inequality (\ref{3.19}) implies
\begin{equation}
\sigma(B^\#(z_2,\delta))\leq\mathcal A^p\left(\langle\nu\rangle_{B^\#(z_1,\delta)}^{dV}\right)^{-p}\nu(B^\#(z_1,\delta)),
\end{equation}
which is equivalent to $\langle\sigma\rangle_{B^\#(z_2,\delta)}^{dV}\left(\langle\nu\rangle_{B^\#(z_1,\delta)}^{dV}\right)^{p-1}\lesssim \mathcal A^p$. Since one can interchange the roles of $z_1$ and $z_2$ in Lemma \ref{Lem7.1}, it follows that $$\langle\sigma\rangle_{B^\#(z_1,\delta)}^{dV}\left(\langle\nu\rangle_{B^\#(z_2,\delta)}^{dV}\right)^{p-1}\lesssim \mathcal A^p.$$ Combining these two inequalities, we have
\begin{equation}\label{3.25}
\langle\sigma\rangle_{B^\#(z_1,\delta)}^{dV}\left(\langle\nu\rangle_{B^\#(z_2,\delta)}^{dV}\right)^{p-1}\langle\sigma\rangle_{B^\#(z_2,\delta)}^{dV}\left(\langle\nu\rangle_{B^\#(z_1,\delta)}^{dV}\right)^{p-1}\lesssim \mathcal A^{2p}.
\end{equation}
By H\"older's inequality,
\begin{equation}
V(B^\#(z_2,\delta))^{p}\leq \int_{B^\#(z_2,\delta)}\sigma dV\left(\int_{ B^\#(z_2,\delta)}\nu dV\right)^{p-1}.
\end{equation}
Therefore $\langle\sigma\rangle_{B^\#(z_2,\delta)}^{dV}\left(\langle\nu\rangle_{B^\#(z_2,\delta)}^{dV}\right)^{p-1}\gtrsim 1$. Applying this to (\ref{3.25}) and taking the supremum of the left side of (\ref{3.25}) for all tents $B^\#(z_1,\delta)$ where $\delta<\delta_0$ yields
\begin{equation}\label{3.27}
\sup_{\substack{\delta<\delta_0,\\z_1\in \mathbf b\Omega}}\langle \sigma\rangle_{B^\#(z_1,\delta)}^{dV}\left(\langle\nu\rangle_{B^\#(z_1,\delta)}^{dV}\right)^{p-1}\lesssim \mathcal A^{2p}.
\end{equation}
Since the constant $\epsilon_0$ in Lemma \ref{lem3.2} can be chosen to be $\delta_0$, inequality (\ref{1.2}) is proved.
Now we turn to prove (\ref{1.3}) and assume in addition that $\Omega$ is Reinhardt. Since inequality (\ref{3.27}) still holds true, it suffices to show
\begin{equation}\label{7.8}
\langle \sigma\rangle_{\Omega}^{dV}\left(\langle\nu\rangle_{\Omega}^{dV}\right)^{p-1}\lesssim \mathcal A^{2p}.
\end{equation}
Because $\Omega$ is Reinhardt, the monomials form a complete orthogonal system for the Bergman space $A^2(\Omega)$. Thus the kernel function $K_\Omega$ has the following series expression:
\begin{align}
K_\Omega(z;\bar w)=\sum_{\alpha\in \mathbb N^n} \frac{z^\alpha \bar w^\alpha}{\|z^\alpha\|^2_{L^2(\Omega)}}.
\end{align}
This implies that $K_\Omega(z;0)=\|1\|^{-2}_{L^2(\Omega)}$ for any $z\in\Omega$. By either the asymptotic expansion of $K_\Omega$ \cite{Fefferman, Monvel} or Kerzman's Theorem \cite{Kerzman}, we can find a precompact neighborhood $U$ of the origin such that for any $z\in \Omega$ and $w\in U$,
\begin{align}
|K_\Omega(z;\bar w)|\approx 1 \;\;\;\;\text{ and }\;\;\;\;\arg\{K_\Omega(z;\bar w), K_\Omega(z;0)\}\in [-1/4,1/4].
\end{align}
Let $f(w)=1_U\left(w\right)$. Then for any $z\in \Omega$,
\begin{align*}&\left|PM_\nu(f)\left(z\right)\right|=\left|\int_{U}K_\Omega(z;\bar w)\nu(w) dV(w)\right|> c\|f\|_{L^1(\Omega,\nu dV)},\end{align*} for some constant $c$.
Therefore,
\begin{equation*}
\Omega\subseteq\left\{z\in \Omega:|PM_\nu(f)(z)|>c\|f\|_{L^1(\Omega,\nu dV)}\right\}.
\end{equation*}
Applying this containment and the fact that $\|f\|_{L^1(\Omega,\nu dV)}=\|f\|^p_{L^p(\Omega,\nu dV)}$ to (\ref{3.19}) yields
\begin{equation}\label{7.111}
\sigma(\Omega)\leq\frac{\mathcal A^p}{c^p\|f\|^p_{L^1(\Omega,\nu dV)}}\|f\|^p_{L^p(\Omega,\nu dV)}\leq \frac{\mathcal A^p}{c^p\|f\|^{p-1}_{L^1(\Omega,\nu dV)}}<\infty.
\end{equation}
Thus \begin{align}\label{7.12}\langle\sigma\rangle^{dV}_\Omega\left(\langle\nu\rangle^{dV}_U\right)^{p-1}\lesssim \mathcal A^p.\end{align} Interchanging the role of $z$ and $w$ in the argument above, we also have
\begin{equation*}
U\subseteq\left\{w\in \Omega:|PM_\nu(1)(w)|>c\|1\|_{L^1(\Omega,\nu dV)}\right\},
\end{equation*}
and
\begin{equation}\label{7.112}
\sigma(U)\leq\frac{\mathcal A^p}{c^p\|1\|^p_{L^1(\Omega,\nu dV)}}\|1\|^p_{L^p(\Omega,\nu dV)}\leq \frac{\mathcal A^p}{c^p\|1\|^{p-1}_{L^1(\Omega,\nu dV)}}<\infty.
\end{equation}
Thus
\begin{align}\label{7.13}\langle\sigma\rangle^{dV}_U\left(\langle\nu\rangle^{dV}_\Omega\right)^{p-1}\lesssim \mathcal A^p.\end{align}
Combining (\ref{7.12}), (\ref{7.13}) and using the fact that $$\langle\sigma\rangle^{dV}_U\left(\langle\nu\rangle^{dV}_U\right)^{p-1}\geq 1,$$ we obtain the desired estimate: \begin{align}\label{7.15}\langle\sigma\rangle^{dV}_\Omega\left(\langle\nu\rangle^{dV}_\Omega\right)^{p-1}\lesssim\langle\sigma\rangle^{dV}_\Omega\left(\langle\nu\rangle^{dV}_\Omega\right)^{p-1}\langle\sigma\rangle^{dV}_U\left(\langle\nu\rangle^{dV}_U\right)^{p-1}\lesssim \mathcal A^{2p}.\end{align}
Estimates (\ref{7.15}) and (\ref{3.27}) then give (\ref{1.3}). The proof is complete.
\section{An application to the weak $L^1$ estimate}
In \cite{McNeal3}, the weak-type $(1,1)$ boundedness of the Bergman projection on simple domains was obtained using a Calder\'on-Zygmund type decomposition. In this section, we use Theorem \ref{4.6} to provide an alternative approach to establishing the weak-type bound for the Bergman projection. We follow the argument in \cite{CACPO17}, since we have a ``sparse domination'' for the Bergman projection.
\begin{thm}
There exists a constant $C>0$ so that for all $f \in L^1(\Omega),$
$$\sup_{\lambda>0} \lambda V(\{z: |Pf(z)|>\lambda\})< C \|f\|_{L^{1}(\Omega)}.$$
\begin{proof}
By a well-known equivalence of weak-type norms (see for example \cite{Grafakos}), it suffices to show
\begin{equation}\label{8.1} \sup_{\substack {f_1 \\ \|f_1\|_{L^1(\Omega)}=1}} \sup_{G \subset \Omega} \inf_{\substack{G' \subset G \\ V(G)< 2 V(G')}} \sup_{\substack{f_2 \\ |f_2| \leq 1_{G'}}} |\langle P f_1,f_2 \rangle| < \infty. \end{equation}
In light of Theorem \ref{4.6}, we may replace $P$ by $Q^+_{\ell_0,1}$ (using our previous notation) for some fixed $\ell_0$ with $1\leq \ell_0 \leq N$. As in Definition \ref{de3.12}, we consider the (now unweighted) dyadic maximal function $\mathcal{M}_{\mathcal T_{\ell_0},1}$. For convenience in what follows, we will simply write $\mathcal M_{\mathcal T_{\ell_0}}$. By Lemma \ref{lem3.12}, we know this operator is of weak-type $(1,1)$. Fix $f_1$ with norm $1$, $G \subset \Omega$, and constants $C_1$, $C_2$ to be chosen later. Define sets
$$H=\{z \in \Omega: \mathcal M_{\mathcal T_{\ell_0}}f_1(z)>C_1 V(G)^{-1}\}$$
and
$$\tilde{H}= \bigcup_{\hat{K}_j^k \in \mathcal{K}} \hat{K}_j^k$$
where
$$\mathcal{K}=\left\{\text{maximal tents $\hat{K}_j^k$ in $\mathcal{T}_{\ell_0}$ so $V(\hat{K}_j^k \cap H)>C_2 V(\hat{K}_j^k)$}\right\}.$$
Note if $C_1$ is chosen sufficiently large relative to $C_2^{-1}$, the weak-type estimate of $\mathcal M_{\mathcal T_{\ell_0}}$ implies
\begin{align*}
V(\tilde{H})& =V\left( \bigcup_{\hat{K}_j^k \in \mathcal{K}} \hat{K}_j^k \right)\\
& \leq \sum_{\hat{K}_j^k \in \mathcal{K}} C_2^{-1}V(\hat{K}_j^k \cap H)\\
& \leq C_2^{-1} V(H)\\
& \leq C_2^{-1} C_1^{-1} V(G) \|f_1\|_{L^1(\Omega)}\\
& \leq \frac{1}{2} V(G).
\end{align*}
It is then clear that if we let $G'=G \setminus \tilde{H}$, then $V(G)<2 V(G')$, so $G'$ is a candidate set in the infimum in \eqref{8.1}.
If $z \in H^c$, then, by definition,
\begin{equation}\label{8.2}\mathcal M_{\mathcal T_{\ell_0}} f_1(z) \leq C_1 V(G)^{-1}. \end{equation}
Using the distribution function,
\begin{align}
\|\mathcal {M}_{\mathcal T_{\ell_0}} f_1\|^{2}_{L^{2}(H^c)}&=2\int_{0}^{ C_1 V(G)^{-1}} tV(\{z\in H^c:\mathcal {M}_{\mathcal T_{\ell_0}} f_1(z)>t\}) dt\nonumber\\&\leq 2\int_{0}^{ C_1 V(G)^{-1}}dt\|\mathcal {M}_{\mathcal T_{\ell_0}} \|_{L^{1,\infty}(H^c)}\|f_1\|_{L^1(\Omega)}\nonumber\\
& \lesssim C_1V(G)^{-1}\label{8.4}.
\end{align}
Now let $|f_2 |\leq 1_{G'}$ be fixed. We have
\begin{equation} |\langle Q_{\ell_0,1}^{+} f_1, f_2 \rangle|= \sum_{\hat{K}_j^k \in \mathcal T_{\ell_0}} V(\hat{K}_j^k) \langle f_1 \rangle_{\hat{K}_j^k} \langle f_2 \rangle_{\hat{K}_j^k}. \label{8.5} \end{equation}
Note that for $\hat{K}_j^k \in \mathcal T_{\ell_0}$, if $V(\hat{K}_j^k \cap H)> C_2 V(\hat{K}_j^k)$ then $\hat{K}_j^k \subset \tilde{H}$. But $f_2$ is supported on $G' \subset \tilde{H}^c$, so for such a tent $\langle f_2 \rangle_{\hat{K}_j^k}=0$. Thus, examining \eqref{8.5}, we may assume without loss of generality that if $\hat{K}_j^k \in \mathcal T_{\ell_0}$ then
\begin{equation} V(\hat{K}_j^k \cap H)\leq C_2 V(\hat{K}_j^k). \label{8.6}\end{equation}
Then note that \eqref{8.6} implies the following holds true for the kubes $K_j^k$, provided $C_2$ is chosen sufficiently small:
\begin{align*}
V(K_j^k \cap H^c) & = V(K_j^k)- V(K_j^k \cap H)\\
& \geq CV(\hat{K}_j^k)-V(\hat{K}_j^k \cap H)\\
& \gtrsim V(\hat{K}_j^k)\\
& \geq V(K_j^k)
\end{align*}
where we let $C$ be the implicit constant in Lemma \ref{3.10}. Thus we have
\begin{equation} V(K_j^k) \lesssim V(K_j^k \cap H^c). \label{8.7} \end{equation}
Therefore, continuing from \eqref{8.5} and using \eqref{8.4} and \eqref{8.7}, we obtain
\begin{align*}
|\langle Q_{\ell_0,1}^+f_1,f_2 \rangle| & \lesssim
\sum_{\hat{K}_j^k \in \mathcal T_{\ell_0}} V(K_j^k) \langle f_1 \rangle_{\hat{K}_j^k} \langle f_2 \rangle_{\hat{K}_j^k} \\
& \lesssim \sum_{\hat{K}_j^k \in \mathcal T_{\ell_0}} V(K_j^k \cap H^c) \langle f_1 \rangle_{\hat{K}_j^k} \langle f_2 \rangle_{\hat{K}_j^k}\\
& \leq \int_{H^c}(\mathcal M_{\mathcal T_{\ell_0}}f_1)(\mathcal M_{\mathcal T_{\ell_0}}f_2)\, dV\\
& \leq \|\mathcal M_{\mathcal T_{\ell_0}}f_1\|_{L^{2}(H^c)} \|\mathcal M_{\mathcal T_{\ell_0}}f_2\|_{L^2(\Omega)}\\
& \lesssim V(G)^{-\frac{1}{2}} \|f_2\|_{L^2(\Omega)}\\
& \leq V(G)^{-\frac{1}{2}} V(G)^{\frac{1}{2}}\\
& = 1,
\end{align*}
which establishes the result.
\end{proof}
\end{thm}
\section{Directions for generalization}
\paragraph{1}
The example in Section 6.1 showed that the upper bound estimate in Theorem \ref{t:main} is sharp. It is not clear whether the lower bound estimates given in Theorem \ref{t:main}, or in \cite{Pott} and \cite{Rahm}, are sharp. It would be interesting to determine a sharp lower bound in terms of the Bekoll\'e-Bonami type constant.
\vskip 5pt
\paragraph{2} Our lower bound estimate in Theorem \ref{t:main1} uses the asymptotic expansion of the Bergman kernel function and hence only works for bounded, smooth, strictly pseudoconvex domains. An interesting question would be whether similar lower bound estimates hold true for the Bergman projection when the domain is of finite type in $\mathbb C^2$, convex and of finite type in $\mathbb C^n$, or decoupled and of finite type in $\mathbb C^n$.
\vskip 5pt
\paragraph{3} We focus on the weighted estimates for the Bergman projection for the simplicity of the computation. In \cite{Rahm}, Rahm, Tchoundja, and Wick obtained the weighted estimates for operators $S_{a,b}$ and $S^+_{a,b}$ defined by
\begin{align*}
S_{a,b}f(z)&:=(1-|z|^2)^a\int_{ \mathbb B_n }\frac{f(w)(1-|w|^2)^b}{(1-z\bar w)^{n+1+a+b}}dV(w);
\\S^+_{a,b}f(z)&:=(1-|z|^2)^a\int_{ \mathbb B_n }\frac{f(w)(1-|w|^2)^b}{|1-z\bar w|^{n+1+a+b}}dV(w),
\end{align*}
on the weighted space $L^p(\mathbb B_n,(1-|w|^2)^b\mu dV)$. Using the methods in this paper, it is possible to obtain weighted estimates for analogues of $S_{a,b}$ and $S^+_{a,b}$ in the settings considered here.
\bibliographystyle{alpha}
Consider a slotted time system with slots $t\in\{1,2,3,\ldots\}$ with an independent and identically distributed (i.i.d.) process $\{w[t]\}_{t=1}^{\infty}$, which takes values in an arbitrary set $W$ with a probability distribution unknown to the controller.
At each time slot, the controller observes the realization $w[t]$ and picks a decision vector $\mathbf{z}[t]\triangleq(z_0[t],~z_1[t],\ldots,~z_L[t])\in\mathcal{A}(w[t])$, where $\mathcal{A}(w[t])\subseteq\mathbb{R}^{L+1}$ is an option set which possibly depends on $w[t]$.
The goal is to minimize the time average of the objective $z_0[t]$ subject to $L$ time average constraints on the processes $z_{l}[t]$, $l=1,2,\ldots,L$. Let
\begin{align*}
\overline{z}_l=&\limsup_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^Tz_l[t],~l\in\{0,1,2,\ldots,L\}.
\end{align*}
Then we write the stochastic optimization problem as
\begin{align}
\min~~&\overline{z}_0\label{obj-problem1}\\
\textrm{s.t.}~~&\overline{z}_l\leq0,~~\forall l\in\{1,2,\ldots,L\},\label{constraint-1-problem1}\\
&\mathbf{z}[t]\in\mathcal{A}(w[t]),~\forall t\in\{1,2,\ldots\},\label{constraint-2-problem1}
\end{align}
where both the minimum and constraints are taken in an almost sure (probability 1) sense. We assume the problem is feasible and the minimum does exist.
\subsection{Related problems and applications}
Problems with the above formulation are common in wireless communications and networking. For example, $w[t]$ can represent a vector of the time varying channel conditions, and $\{z_0[t],\ldots,~z_L[t]\}$ can include instantaneous communication rates, power allocations and other metrics for different devices in the network. Specific examples involving this formulation include beamforming (\cite{SDL06}, \cite{NST09}), cognitive radio networks (\cite{QCS08}), energy-aware task scheduling (\cite{Neely2010}, \cite{scheduling-shroff}, \cite{stochastic-scheduling-shroff}) and stock market trading (\cite{Ne10}). In Section \ref{section:simulation}, we present a concrete example of formulation \eqref{obj-problem1}-\eqref{constraint-2-problem1} related to dynamic server scheduling.
Furthermore, it is shown in Chapter 5 of \cite{Neely2010} that the following more general stochastic convex optimization problem can be mapped back to a problem with structure \eqref{obj-problem1}-\eqref{constraint-2-problem1} via a vector of auxiliary variables:
\begin{align*}
\min~~&f(\overline{\mathbf{z}})\\
\textrm{s.t.}~~&g_k(\overline{\mathbf{z}})\leq0,~~\forall k\in\{1,2,\ldots,K\},\\
&\mathbf{z}[t]\in\mathcal{A}(w[t]),~\forall t\in\{1,2,\ldots\},
\end{align*}
with $f(\cdot)$, $g_k(\cdot)$ continuous and convex and $\mathcal{A}(w[t])$ a subset of $\mathbb{R}^M$. The above formulation arises, for example, in network throughput-utility optimization, where $\overline{\mathbf{z}}$ represents a vector of achieved throughputs, $\mathcal{A}(w[t])$ is a proper subset of $\mathbb{R}^M$ and $f(\cdot)$ is a convex function measuring the network fairness. A typical example of such a function when $\mathbf{z}[t]$ is nonnegative is
\[f(\mathbf{z})=-\sum_{m=1}^M\log(1+v_mz_m),\]
where $\{v_m\}_{m=1}^M$ are nonnegative weights.
For more details on network utility optimization, see \cite{network-utility}, \cite{Neely2010}, \cite{atilla-primal-dual-jsac}-\cite{stolyar-greedy}.
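As a quick numerical illustration of this fairness objective, the sketch below evaluates $f$ on a hypothetical 3-user network (all weights and throughput values are invented for illustration); minimizing $f$ favors balanced allocations:

```python
import math

def fairness(z, v):
    """Proportional-fairness-style objective f(z) = -sum_m log(1 + v_m * z_m)."""
    return -sum(math.log(1.0 + vm * zm) for vm, zm in zip(v, z))

# Hypothetical weights for a 3-user network.
v = [1.0, 1.0, 1.0]

# Minimizing f favors balanced throughput: an even allocation with the same
# total achieves a smaller (more negative) objective than a skewed one.
uneven = fairness([0.0, 0.0, 7.0], v)
even = fairness([7 / 3, 7 / 3, 7 / 3], v)
assert even < uneven
```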
\textbf{
Finally, we make several remarks on the i.i.d. assumption on the random events $w[t]$. The i.i.d. assumption is posed here mainly for ease of analysis, although it appears fairly often in engineering models. For example, in wireless communication scenarios, the i.i.d. Rayleigh distribution is often used to model the receiver channel condition; for a more detailed treatment of i.i.d. Rayleigh fading channels, see Chapter 7.3.8 of \cite{TV05}.
The i.i.d. assumption is also adopted in a number of other works such as \cite{energy-aware}, \cite{ergodic-optimization} and \cite{stochastic-scheduling-shroff}.
}
\subsection{The drift-plus-penalty algorithm}\label{introduction-algorithm}
\textbf{
This subsection briefly introduces the drift-plus-penalty algorithm. The algorithm was previously introduced in \cite{network-utility}, \cite{Neely2010} and \cite{JAM2012}, where it was shown to yield a provable $\mathcal{O}(\varepsilon)$-approximate solution to \eqref{obj-problem1}-\eqref{constraint-2-problem1} as $T$ goes to infinity. It has been applied to various problems in wireless communications and networking (\cite{Neely2010}, \cite{GNT06}). For more recent applications in encounter-based networks and mobile edge networks, see \cite{AWKN16} and \cite{GIT16}.
}
Define $L$ virtual queues $Q_l[t],~l\in\{1,2,\ldots,L\}$ which are 0 at $t=1$ and updated as follows:
\begin{equation}\label{queue_update}
Q_l[t+1]=\max\left\{Q_l[t]+z_l[t],0\right\}.
\end{equation}
Meanwhile, denote $\mathbf{Q}[t]=(Q_1[t],~Q_2[t],~\ldots~,~Q_L[t])$. The basic intuition behind the virtual queue idea is that if an algorithm can stabilize $Q_l[t],~\forall l\in\{1,2,\ldots,L\}$, then the average ``rate'' $\overline{z}_l$ is at most 0 and the constraints are satisfied.
Then, the algorithm proceeds as follows with a fixed trade-off parameter $V>0$:
\begin{itemize}
\item At the beginning of each time slot $t$, observe $w[t]$, $Q_l[t]$ and take $\mathbf{z}[t]\in\mathcal{A}(w[t])$ so as to solve the following unconstrained optimization problem:
\begin{equation}\label{dpp_minimization}
\min_{\mathbf{z}[t]\in\mathcal{A}(w[t])}~~Vz_0[t]+\sum_{l=1}^LQ_l[t]z_l[t].
\end{equation}
In other words, the solution treats $Q_l[t]$ and $w[t]$ as given constants and chooses $\mathbf{z}[t]$ in $\mathcal{A}(w[t])$ to minimize the above expression.
\item Update the virtual queues
\[Q_l[t+1]=\max\{Q_l[t]+z_l[t],0\}~~\forall l\in\{1,2,\ldots,L\}.\]
\end{itemize}
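The per-slot minimization \eqref{dpp_minimization} is particularly simple when each $\mathcal{A}(w[t])$ is a finite set: it reduces to scanning the available actions. The minimal sketch below simulates the loop above on a toy problem with $L=1$; the random events, action sets, and parameter values are invented for illustration and are not part of the paper's model:

```python
import random

random.seed(0)

V = 50.0      # trade-off parameter
Q = [0.0]     # one virtual queue (L = 1)
T = 20000

# Hypothetical finite action sets: for each random event w, a list of
# (z0, z1) pairs, where z0 is the penalty and z1 drives the constraint.
def action_set(w):
    if w == 0:
        return [(0.0, 1.0), (1.0, -1.0)]
    return [(0.0, 2.0), (2.0, -2.0)]

sum_z1 = 0.0
for t in range(T):
    w = random.randint(0, 1)
    # Greedily minimize V*z0 + Q1*z1 over the finite action set.
    z0, z1 = min(action_set(w), key=lambda z: V * z[0] + Q[0] * z[1])
    sum_z1 += z1
    Q[0] = max(Q[0] + z1, 0.0)  # virtual queue update

avg_z1 = sum_z1 / T
# The queue stays bounded (around V/2 here), so the time-averaged
# constraint variable ends up close to feasible.
assert avg_z1 <= 0.05
```

In this toy instance the queue oscillates around a finite threshold, so the time average of $z_1[t]$ is driven toward the feasible region, mirroring the intuition stated after \eqref{queue_update}.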
In the current work, we focus on its
sample-path analysis in finite time: we compute the convergence time required to achieve an $\mathcal{O}(\varepsilon)$ approximation with high probability, as discussed in the next subsections.
\subsection{Related algorithms and convergence time}
The algorithm introduced in the last section is closely related to the idea of opportunistic scheduling, which was pioneered by Tassiulas and Ephremides in \cite{queue-stable-tassiulas-1} and \cite{queue-stable-tassiulas-2}. A max-weight algorithm was first introduced in their works to stabilize multiple parallel queues in data networks. The drift-plus-penalty algorithm builds upon the max-weight algorithm to further maximize network utility or minimize energy consumption while simultaneously stabilizing the queues in the network (\cite{network-utility} and \cite{Neely2010}). A sample-path asymptotic analysis is presented in \cite{JAM2012} using the strong law of large numbers for supermartingale difference sequences. Under mild assumptions on $\mathbf{z}[t]$, it shows that the drift-plus-penalty algorithm satisfies constraints \eqref{constraint-1-problem1}-\eqref{constraint-2-problem1} and achieves the near optimality
\[\overline{z}_0\leq z^{opt}+\mathcal{O}(\varepsilon),\]
with probability 1, where $z^{opt}$ is the minimum achieved by the optimization problem \eqref{obj-problem1}-\eqref{constraint-2-problem1}. \textbf{Throughout the paper, we use the notation $\mathcal{O}(\varepsilon)$ to hide an absolute constant: there exists a constant $M>0$ such that, for all sufficiently small $\varepsilon$, $\overline{z}_0\leq z^{opt}+M\varepsilon$.}
Next, consider the problem of convergence time analysis, i.e., the number of slots needed for the desired near optimality to take effect. Most previous works (such as \cite{correlated-scheduling}, \cite{sucha-convergence} and \cite{energy-aware}) focus on
the expected time-average performance and only require the constraints to hold in an expected sense. The work in \cite{correlated-scheduling} proves that the same drift-plus-penalty algorithm described in Section \ref{introduction-algorithm} gives an $\mathcal{O}(\varepsilon)$ approximation defined in the following manner:
\begin{align}
\frac{1}{T}\sum_{t=1}^T\expect{z_0[t]}&\leq z^{opt}+\mathcal{O}(\varepsilon),\label{expected-near-optimality}\\
\frac{1}{T}\sum_{t=1}^T\expect{z_l[t]}&\leq \mathcal{O}(\varepsilon),~\forall l\in\{1,2,\ldots,L\}, \label{expect-constraint-violation}
\end{align}
with the convergence time $T=\mathcal{O}(1/\varepsilon^2)$. The work in \cite{energy-aware} demonstrates a near-optimal $\mathcal{O}(\log(1/\varepsilon)/\varepsilon)$ convergence time for the case of one constraint. An improved convergence time of $\mathcal{O}(1/\varepsilon)$ is shown in \cite{sucha-convergence} for deterministic problems.
The drift-plus-penalty algorithm can also be viewed as a dual algorithm with averaged primals. A similar stochastic dual algorithm for constrained stochastic optimization in \cite{ergodic-optimization} is shown to satisfy the constraints and achieve the near optimality asymptotically with probability 1 as well. A similar $\mathcal{O}(1/\varepsilon^2)$ convergence time result for a dual subgradient method is shown in \cite{ozdaglar-dual-subgradient} in the case of deterministic convex optimization. Related work in \cite{scheduling-shroff} applies a dual subgradient method for non-stochastic optimization in a network scheduling problem. The work in \cite{stochastic-scheduling-shroff} further considers the dual subgradient method with stochastic approximations in network scheduling. Other related optimization methods for
queueing networks are also treated
via fluid limits for Markov chains in \cite{atilla-primal-dual-jsac}-\cite{stolyar-gpd-gen}.
\subsection{Contributions and roadmap to proof}
This paper considers the drift-plus-penalty algorithm for stochastic optimization problem \eqref{obj-problem1}-\eqref{constraint-2-problem1} and, for the first time, gives a sample path convergence time result. Specifically, for a general stochastic optimization of the form \eqref{obj-problem1}-\eqref{constraint-2-problem1},
we show that for any $\delta>0$ and $\varepsilon>0$, with probability at least $1-2\delta$, the drift-plus-penalty algorithm gives an $\mathcal{O}(\varepsilon)$ approximation as follows:
\begin{align*}
\frac{1}{T}\sum_{t=1}^Tz_0[t]&\leq z^{opt}+\mathcal{O}(\varepsilon),\\
\frac{1}{T}\sum_{t=1}^Tz_l[t]&\leq \mathcal{O}(\varepsilon),~\forall l\in\{1,2,\ldots,L\},
\end{align*}
with convergence time $T=\frac{1}{\varepsilon^2}\max\left\{\log^2\frac1\varepsilon\log\frac2\delta,~\log^3\frac2\delta\right\}$. Compared to \eqref{expected-near-optimality}-\eqref{expect-constraint-violation}, we remove the expectations at the cost of extra logarithmic factors on the convergence time. Furthermore, when there is only one time average constraint in \eqref{obj-problem1}-\eqref{constraint-2-problem1} (i.e. $L=1$), we show that the convergence time can be improved to $\frac{1}{\varepsilon^2}\log^2\frac1\delta$.
The proof starts by showing that the summed drift-plus-penalty expression is a supermartingale.
The main difficulty is that the difference sequence of this supermartingale is potentially unbounded, which prevents us from applying established concentration inequalities. We overcome this difficulty by truncation. Specifically, in the general case where there are multiple constraints, we
proceed with the following three steps:
\begin{enumerate}
\item Truncate the original supermartingale using a stopping time, which gives us another supermartingale with bounded difference.
\item Show that the tail probability of the original supermartingale is upper bounded by the tail probability of the truncated one plus the probability of occurrence of the stopping time.
\item Bound the tail probability of the truncated supermartingale by a concentration result and bound the probability of stopping time occurrence by an exponential tail bound of the virtual queue processes.
\end{enumerate}
In the special case where there is only one constraint, we show that performing a truncation using a deterministic constant instead of a stopping time is enough to construct a supermartingale with bounded difference, thereby giving a better convergence time result.
\section{Assumptions and Preliminaries}
\subsection{Basic assumptions}\label{section-assumption}
\begin{assumption}\label{assumption-1}
For any $t\in\{1,2,\ldots\}$, the vector $\mathbf{z}[t]\in\mathcal{A}(w[t])$ satisfies
\begin{align*}
&|z_0[t]|\leq z_{\max},\\
&\sqrt{\sum_{l=1}^Lz_l[t]^2}\leq B,
\end{align*}
where $z_{\max}$ and $B$ are positive constants.
\end{assumption}
In addition, we also need the following compactness assumption.
\begin{assumption}
For any $w\in W$, the set $\mathcal{A}(w)$ is a compact subset of $\mathbb{R}^{L+1}$.
\end{assumption}
This assumption is not crucial in our analysis. However, it guarantees that there is always an optimal solution to \eqref{dpp_minimization} within the drift-plus-penalty algorithm, and thereby relieves us from unnecessary complexities in the convergence time analysis.
\begin{assumption}\label{assumption-3}
($\xi$-slackness)
There exists a \textit{randomized stationary policy} $\mathbf{z}^{(\xi)}[t]$ such that all constraints are satisfied with $\xi>0$ slackness, i.e.
\[\expect{z_l^{(\xi)}[t]}\leq-\xi,~~\forall l\in\{1,2,\ldots,L\}.\]
\end{assumption}
The sets $\mathcal{A}(w[t])$ are not required to have any additional structure beyond these assumptions. In particular, the sets $\mathcal{A}(w[t])$ might be finite, infinite, convex, or nonconvex.
\subsection{Interpretation of drift-plus-penalty}\label{interpretation}
We define the squared norm of the virtual queue vector as
\[\|\mathbf{Q}[t]\|^2=\sum_{l=1}^LQ_l[t]^2.\]
Define the drift of the virtual queue vector as follows:
\[\Delta[t]=\frac12\left(\|\mathbf{Q}[t+1]\|^2-\|\mathbf{Q}[t]\|^2\right).\]
The drift-plus-penalty algorithm observes the vector $\mathbf{Q}[t]$ and random event $w[t]$ at every slot $t$, and then makes a decision $\mathbf{z}[t]\in\mathcal{A}(w[t])$ to greedily minimize an upper bound on the \emph{drift-plus-penalty} expression
\[\expect{\Delta[t]+Vz_0[t]~|~\mathbf{Q}[t],w[t]}.\]
To bound $\Delta[t]$, for any $l\in\{1,2,\ldots,L\}$, we square both sides of \eqref{queue_update} and use the fact that $\max\{z,0\}^2\leq z^2$ to obtain:
\[Q_l[t+1]^2\leq Q_l[t]^2+z_l[t]^2+2Q_l[t]z_l[t].\]
According to Assumption \ref{assumption-1},
\begin{equation}\label{pre_dpp_upper_bound}
\Delta[t]
\leq \frac {B^2}{2}+\sum_{l=1}^LQ_l[t]z_l[t].
\end{equation}
Thus, adding $Vz_0[t]$ to both sides and taking the conditional expectation gives,
\begin{align}\label{dpp_upper_bound}
&\expect{\Delta[t]+Vz_0[t]~|~\mathbf{Q}[t],w[t]}\nonumber\\
\leq& \frac {B^2}{2}+\expect{\left.\sum_{l=1}^LQ_l[t]z_l[t]+Vz_0[t]~\right|~\mathbf{Q}[t],w[t]}\nonumber\\
=& \frac {B^2}{2}+\sum_{l=1}^LQ_l[t]z_l[t]+Vz_0[t],
\end{align}
where the equality comes from the fact that given $\mathbf{Q}[t]$ and $w[t]$, the term $\sum_{l=1}^LQ_l[t]z_l[t]+Vz_0[t]$ is a constant. Thus, as already described in Section \ref{introduction-algorithm}, the drift-plus-penalty algorithm observes the vector $\mathbf{Q}[t]$ and random event $w[t]$ at slot $t$, and minimizes the right hand side of \eqref{dpp_upper_bound}.
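The bound \eqref{pre_dpp_upper_bound} is a deterministic inequality in $\mathbf{Q}[t]$ and $\mathbf{z}[t]$, so it can be spot-checked numerically. The sketch below does this on random instances; it is a sanity check only, not part of the analysis:

```python
import math
import random

random.seed(1)

def drift_and_bound(Q, z):
    """Return (Delta, bound), where Delta = (||Q'||^2 - ||Q||^2)/2 with
    Q' = max(Q + z, 0) componentwise, and bound = B^2/2 + sum_l Q_l z_l
    with the tightest admissible B = sqrt(sum_l z_l^2)."""
    Qn = [max(q + x, 0.0) for q, x in zip(Q, z)]
    delta = 0.5 * (sum(q * q for q in Qn) - sum(q * q for q in Q))
    B2 = sum(x * x for x in z)
    return delta, B2 / 2 + sum(q * x for q, x in zip(Q, z))

# Check Delta <= B^2/2 + sum_l Q_l z_l on random queue states and decisions.
for _ in range(1000):
    Q = [random.uniform(0, 10) for _ in range(4)]
    z = [random.uniform(-1, 1) for _ in range(4)]
    delta, bound = drift_and_bound(Q, z)
    assert delta <= bound + 1e-9
```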
\subsection{Optimization over randomized stationary algorithms}
A key feature of the drift-plus-penalty algorithm is that it is performed without knowing the probability distribution of the random events. Suppose for the moment that the controller does know the probability distribution of the random events. Then, consider the following class of \emph{randomized stationary algorithms}:
At the beginning of each time slot $t$, after observing the random event $w[t]$, the controller selects a decision vector $\mathbf{z}^*[t]\in\mathcal{A}(w[t])$ according to some probability distribution which depends only on $w[t]$.
\textbf{The following theorem shows that the optimal solution to \eqref{obj-problem1}-\eqref{constraint-2-problem1} is achievable over the closure of all one-shot expectations of $\mathbf{z}^*[t]$:
\begin{theorem}[Lemma 4.6 of \cite{Neely2010}]\label{theorem-stat-opt}
Let $z^{opt}$ be the minimum achieved by \eqref{obj-problem1}-\eqref{constraint-2-problem1}.
Let $\mathcal{P}$ be the subset of $\mathbb{R}^{L+1}$ consisting of one-shot expectations $\expect{\mathbf{z}^*[t]}$ achieved by all randomized stationary algorithms. Then, there exists a vector $\mathbf{z}^*\in\overline{\mathcal{P}}$, the closure of $\mathcal{P}$, such that
\begin{align}
z_0^*&=z^{opt}\label{iid-obj}\\
z_l^*&\leq0,~\forall l\in\{1,2,\ldots,L\}, \label{iid-constraint}
\end{align}
i.e., the optimality is achievable within $\overline{\mathcal{P}}$.
\end{theorem}
Note that $\overline{\mathcal{P}}$ cannot be explicitly constructed if the controller does not know the probability distribution of $w[t]$. Thus, Theorem \ref{theorem-stat-opt} cannot be used to compute the optimal solution to \eqref{obj-problem1}-\eqref{constraint-2-problem1}. However, we can use it to prove important results as is shown in the following section.
}
\section{Convergence Time Analysis}
For the rest of the paper, we implicitly assume that all lemmas and theorems are built on Assumptions \ref{assumption-1} to \ref{assumption-3} and on the feasibility of \eqref{obj-problem1}-\eqref{constraint-2-problem1}.
\subsection{Construction of a supermartingale}
Define $\mathcal{F}_t$ as the system history up to slot $t$.\footnote{Formally, $\mathcal{F}_t$ is the sigma algebra generated by all random variables from slot 1 to slot $t$, which include $\{w[s]\}_{s=1}^t$, $\{\mathbf{z}[s]\}_{s=1}^t$ and $\{\mathbf{Q}[s]\}_{s=1}^t$.}
The following lemma illustrates the key feature of the drift-plus-penalty algorithm.
\begin{lemma}
The following inequality holds regarding the drift-plus-penalty algorithm for any $t\in\{1,2,3,\ldots\}$:
\begin{equation}\label{key-feature}
\expect{\left.V(z_0[t]-z^{opt})+\sum_{l=1}^LQ_l[t]z_l[t]~\right|~\mathcal{F}_{t-1}}\leq0.
\end{equation}
\end{lemma}
\begin{proof}
Since the drift-plus-penalty algorithm minimizes the term on the right hand side of \eqref{dpp_upper_bound} over all possible decisions at time $t$, it must achieve a smaller value on that term compared to that of any randomized stationary algorithm $\mathbf{z}^*[t]$. Formally,
\[\sum_{l=1}^LQ_l[t]z_l[t]+Vz_0[t]
\leq\sum_{l=1}^LQ_l[t]z_l^*[t]+Vz_0^*[t].\]
Since for any $l\in\{1,2,\ldots,L\}$,
$$Q_l[t]=\max\{Q_l[t-1]+z_l[t-1],0\}\in\mathcal{F}_{t-1},$$
taking conditional expectations of both sides with respect to $w[t]$ gives
\textbf{
\begin{align*}
&\expect{\left.\sum_{l=1}^LQ_l[t]z_l[t]+Vz_0[t]~\right|~\mathcal{F}_{t-1}}\\
\leq&\expect{\left.\sum_{l=1}^LQ_l[t]z_l^*[t]+Vz_0^*[t]~\right|~\mathcal{F}_{t-1}}\\
=&\sum_{l=1}^LQ_l[t]\expect{z_l^*[t]}+V\expect{z_0^*[t]},
\end{align*}
where the last equality follows from the fact that the randomized stationary algorithm chooses $\mathbf{z}^*[t]$ based only on $w[t]$, and is thus independent of the virtual queues at time $t$. Since $\expect{\mathbf{z}^*[t]}\in\overline{\mathcal{P}}$ and the above inequality holds for any randomized stationary algorithm, it follows that
\begin{align*}
\expect{\left.\sum_{l=1}^LQ_l[t]z_l[t]+Vz_0[t]~\right|~\mathcal{F}_{t-1}}
\leq Vz_0^*+\sum_{l=1}^LQ_l[t]z^*_l,
\end{align*}
which, combined with Theorem \ref{theorem-stat-opt}, implies the claim.
}
\end{proof}
With the help of the above lemma, we construct a supermartingale as follows,
\begin{lemma}\label{supMG}
Define a process $\{X[t]\}_{t=0}^\infty$ such that $X[0]=0$ and
\[X[t]=\sum_{i=1}^t\left(V(z_0[i]-z^{opt})+\sum_{l=1}^LQ_l[i]z_l[i]\right).\]
Then, $\{X[t]\}_{t=0}^\infty$ is a supermartingale.
\end{lemma}
\begin{proof}
First, it is obvious that $|X[t]|<\infty$ and $X[t]\in\mathcal{F}_t$ for any $t\geq0$.
Then, by \eqref{key-feature}, the following holds for any $t\geq1$:
\begin{align*}
&\expect{X[t]|\mathcal{F}_{t-1}}\\
=&\expect{\left.V(z_0[t]-z^{opt})+\sum_{l=1}^LQ_l[t]z_l[t]\right|\mathcal{F}_{t-1}}+X[t-1]\\
\leq& X[t-1].
\end{align*}
Thus, $\{X[t]\}_{t=0}^\infty$ is a supermartingale.
\end{proof}
\subsection{Truncation by a stopping time}
Although the process $\{X[t]\}_{t=0}^\infty$ is a supermartingale, its difference sequence $V(z_0[t]-z^{opt})+\sum_{l=1}^LQ_l[t]z_l[t]$ is potentially unbounded because the virtual queue process $\{Q_l[t]\}_{t=1}^\infty$ is not bounded. On the other hand, the bounded difference property is crucial for any well-established concentration result to work (see \cite{old&new} for details). The way to circumvent this problem is to use truncation. Intuitively, we can ``stop'' the process whenever the difference gets too large, and this stopped process then satisfies the bounded difference property.
The idea of truncation has been used in different scenarios (see \cite{Durrett} for sequential analysis and \cite{Hajek} for queue stability analysis). For basic definitions and lemmas related to stopping times and supermartingales, see Appendix \ref{app:basics}.
For the rest of the paper, define $a\wedge b\triangleq\min\{a,b\}$.
The following lemma introduces a truncated process $\{Y[t]\}_{t=0}^{\infty}$ which has some desired properties.
\begin{lemma}\label{truncated_supMG}
For any $c_1>0$, define
\[\tau\triangleq\inf\{t>0:\|\mathbf{Q}[t]\|>c_1\}.\]
Meanwhile, for any $t\geq0$, define
$$Y[t]=X[t\wedge(\tau-1)].$$
Then, $Y[t]$ has the following two properties:\\
1. The process $\{Y[t]\}_{t=0}^\infty$ is a supermartingale.\\
2. The process $\{Y[t]\}_{t=0}^\infty$ has bounded one-step differences,
\[\left|Y[t+1]-Y[t]\right|\leq c_2,~\forall t\geq0,\]
where $c_2=2Vz_{\max}+Bc_1$.
\end{lemma}
\begin{proof}
\emph{Proof of Property 1:} In order to apply Lemma \ref{stopping_time} it is enough to show that $\tau-1$ is a valid stopping time, i.e. $\{\tau-1=t\}\in\mathcal{F}_{t},~\forall t\geq0$. Indeed, since $Q_l[t+1]=\max\{Q_l[t]+z_l[t],0\}\in\mathcal{F}_{t},~\forall t\geq0$ and $\forall l\in\{1,2,\ldots,L\}$, it follows that
\begin{align*}
&\{\tau-1=t\}=\{\tau=t+1\}\\
=&\{\|\mathbf{Q}[t+1]\|>c_1\}\cap\{\|\mathbf{Q}[i]\|\leq c_1,~\forall i\leq t\}\in\mathcal{F}_{t}.
\end{align*}
Since $\{Y[t]\}_{t=0}^\infty$ is a supermartingale truncated by a stopping time, we apply Lemma \ref{stopping_time} in Appendix \ref{app:basics} to conclude that it is also a supermartingale.
\emph{Proof of Property 2:} The proof is provided in Appendix \ref{app_property_2}.
\end{proof}
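The truncation can be illustrated numerically. The sketch below simulates a hypothetical process with the same structure as $X[t]$ (the dynamics, constants, and $z^{opt}$ value are invented for illustration), forms $Y[t]=X[t\wedge(\tau-1)]$, and checks the bounded-difference property of Lemma \ref{truncated_supMG}:

```python
import random

random.seed(2)

V, z_max, B, c1 = 5.0, 1.0, 1.0, 8.0
z_opt = 0.0                   # hypothetical optimum, within [-z_max, z_max]
c2 = 2 * V * z_max + B * c1   # bounded-difference constant from Lemma 3

T = 500
Q, X = 0.0, [0.0]
tau = None
for t in range(1, T + 1):
    z0 = random.uniform(-z_max, z_max)
    z1 = random.uniform(-B, B)
    X.append(X[-1] + V * (z0 - z_opt) + Q * z1)   # increment uses Q[t]
    Q = max(Q + z1, 0.0)                          # update to Q[t+1]
    if tau is None and Q > c1:
        tau = t + 1          # first slot with ||Q[t]|| > c1

# Truncated process Y[t] = X[min(t, tau - 1)].
stop = (tau - 1) if tau is not None else T
Y = [X[min(t, stop)] for t in range(T + 1)]

# Every retained increment uses a queue value at most c1, so
# |Y[t+1] - Y[t]| <= 2*V*z_max + B*c1 = c2.
assert all(abs(Y[t + 1] - Y[t]) <= c2 + 1e-9 for t in range(T))
```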
The following lemma gives a concentration result for $\{X[t]\}_{t=0}^\infty$ which is a supermartingale with possibly unbounded differences. The result is proved by a union bound argument.
\begin{lemma}\label{supMG_ineq}
For the same sequence $\{X[t]\}_{t=0}^\infty$ defined in Lemma \ref{supMG}, fix a time period $T$ and define the ``bad event'' for each $t\in\{1,2,\ldots,T\}$ as follows:
\[\mathcal{B}_t\triangleq\{\|\mathbf{Q}[t]\|>c_1\}\]
for some $c_1>0$. Then, we have for any $\lambda>0$,
\[Pr\left(X[T]\geq\lambda\right)\leq\exp\left(-\frac{\lambda^2}{2Tc_2^2}\right)
+\sum_{t=1}^TPr\left(\mathcal{B}_t\right).\]
\end{lemma}
\begin{proof}
We first show that for any $t\geq1$, $\left\{X[t]\neq Y[t]\right\}\subseteq \cup_{i=1}^t\mathcal{B}_i$. Indeed, the event $\{X[t]\neq Y[t]\}$ is equivalent to $\{X[t\wedge(\tau-1)]\neq X[t]\}$, which implies
$t\wedge(\tau-1)\neq t$. This implies $\tau-1<t$, so $\|\mathbf{Q}[i]\|$ must exceed $c_1$ for some $i\leq t$.
Thus, the event $\left\{X[t]\neq Y[t]\right\}$ must belong to $\cup_{i=1}^t\mathcal{B}_i$.
Now, we can bound the probability that $X[T]$ ever gets large using a union bound, i.e.
\begin{align*}
Pr\left(X[T]\geq\lambda\right)
=&Pr\left(X[T]=Y[T],~Y[T]\geq\lambda\right)\\
&+Pr\left(X[T]\neq Y[T],~X[T]\geq\lambda\right)\\
\leq& Pr\left(Y[T]\geq \lambda\right)+Pr(X[T]\neq Y[T])\\
\leq&\exp\left(-\frac{\lambda^2}{2Tc_2^2}\right)+Pr\left(\cup_{t=1}^T\mathcal{B}_t\right)\\
\leq&\exp\left(-\frac{\lambda^2}{2Tc_2^2}\right)+\sum_{t=1}^TPr(\mathcal{B}_t),
\end{align*}
where the second-to-last inequality uses $\left\{X[t]\neq Y[t]\right\}\subseteq\cup_{i=1}^t\mathcal{B}_i$ and Lemma \ref{azuma-inequality} in Appendix \ref{app:basics}.
\end{proof}
\subsection{Exponential tail bound of the virtual queue}
According to Lemma \ref{supMG_ineq}, it remains to show that the probability that a bad event occurs (i.e. $Pr(\mathcal{B}_t)$) is small.
\textbf{The following preliminary lemma, which follows from the $\xi$-slackness assumption, shows that $\|\mathbf{Q}[t]\|$ is not expected to be very large; this leads to the bound for $Pr(\mathcal{B}_t)$ given in Corollary \ref{geo_queue_bound}.}
\textbf{
Note that the key intuition here is that the queue length has an exponential tail bound under the well-known max-weight algorithm (see also Lemma 3 in \cite{ES12}). Our drift-plus-penalty algorithm builds upon the max-weight algorithm, and thus its queues also admit an exponential tail bound (Lemma \ref{geometric_bound} and Corollary \ref{geo_queue_bound} below). Thus, intuitively, we are not ``losing too much" if the queue is truncated at an appropriate level. This motivates our truncation of the constructed supermartingale.
}
\begin{lemma}\label{geometric_bound}
The following holds for any $t\in\mathbb{N}^+$ under the drift-plus-penalty algorithm,
\begin{align*}
\left|\|\mathbf{Q}[t+1]\|-\|\mathbf{Q}[t]\|\right|&\leq B,\\
\expect{\|\mathbf{Q}[t+1]\|-\|\mathbf{Q}[t]\||\mathbf{Q}[t]}&\leq\left\{
\begin{array}{ll}
B, & \hbox{if $\|\mathbf{Q}[t]\|\leq C_0V$;} \\
-\xi/2, & \hbox{if $\|\mathbf{Q}[t]\|> C_0V$.}
\end{array}
\right.
\end{align*}
where $C_0\triangleq (4z_{\max}+\frac{B^2}{V}-\frac{\xi^2}{4V})/\xi$, $B$, $z_{\max}$ are defined in Assumption \ref{assumption-1} and $\xi$ is defined in Assumption \ref{assumption-3}.
\end{lemma}
\begin{proof}
First of all, by the definitions of $B$ and $\xi$, we have $B\geq\xi$, and $C_0$ is always positive.
According to Assumption \ref{assumption-1}, the increase (or decrease) of each queue is bounded during each slot; it follows that for any $t$,
\begin{align*}
&\left|\|\mathbf{Q}[t+1]\|-\|\mathbf{Q}[t]\|\right|\\
&\leq\|\mathbf{Q}[t+1]-\mathbf{Q}[t]\|\\
&=\sqrt{\sum_{l=1}^L(\max\{Q_l[t]+z_l[t],0\}-Q_l[t])^2}\\
&\leq\sqrt{\sum_{l=1}^Lz_l[t]^2}\leq B,
\end{align*}
where the first inequality follows from the triangle inequality, the second follows from $|\max\{a+b,0\}-a|\leq |b|,~\forall a,b\in\mathbb{R}$, and the last follows from Assumption \ref{assumption-1}.
Next, suppose $\|\mathbf{Q}[t]\|> C_0V$. Then, since the drift-plus-penalty algorithm minimizes the term on the right hand side of \eqref{dpp_upper_bound} over all possible decisions at time $t$, it must achieve a smaller value on that term compared to that of the $\xi$-slackness policy $\mathbf{z}^{(\xi)}[t]$. Formally,
\begin{align*}
&\expect{\left.\sum_{l=1}^LQ_l[t]z_l[t]+Vz_0[t]~\right|~\mathbf{Q}[t],w[t]}\\
\leq&\expect{\left.\sum_{l=1}^LQ[t]z_l^{(\xi)}[t]+Vz_0^{(\xi)}[t]~\right|~\mathbf{Q}[t],w[t]}.
\end{align*}
Substituting this bound into the right hand side of \eqref{dpp_upper_bound} and taking expectations of both sides gives
\begin{align*}
&\expect{\Delta[t]+Vz_0[t]~|~\mathbf{Q}[t]}\\
\leq&\frac{B^2}{2}+\expect{\left.\sum_{l=1}^LQ_l[t]z_l^{(\xi)}[t]+Vz_0^{(\xi)}[t]~\right|~\mathbf{Q}[t]}.
\end{align*}
This implies
\begin{align*}
&\expect{\|\mathbf{Q}[t+1]\|^2-\|\mathbf{Q}[t]\|^2~|~\mathbf{Q}[t]}\\
\leq&B^2+2\sum_{l=1}^LQ_l[t]\expect{\left.z_l^{(\xi)}[t]\right|\mathbf{Q}[t]}+4Vz_{\max}\\
\leq&B^2-2\xi\sum_{l=1}^LQ_l[t]+4Vz_{\max}\\
\leq&B^2-2\xi\|\mathbf{Q}[t]\|+4Vz_{\max},
\end{align*}
where the second inequality follows from the $\xi$-slackness property and the assumption that $z_l^{(\xi)}[t]$ is i.i.d. over slots and hence independent of $Q_l[t]$. This further implies
\begin{align*}
&\expect{\|\mathbf{Q}[t+1]\|^2~|~\mathbf{Q}[t]}\\
\leq&\|\mathbf{Q}[t]\|^2-2\xi\|\mathbf{Q}[t]\|+B^2+4Vz_{\max}\\
=&\|\mathbf{Q}[t]\|^2-2\xi\|\mathbf{Q}[t]\|+B^2+4Vz_{\max}-\frac{\xi^2}{4}+\frac{\xi^2}{4}\\
=&\|\mathbf{Q}[t]\|^2-2\xi\|\mathbf{Q}[t]\|+\frac{B^2+4Vz_{\max}-\frac{\xi^2}{4}}{\xi}\cdot\xi+\frac{\xi^2}{4}\\
=&\|\mathbf{Q}[t]\|^2-2\xi\|\mathbf{Q}[t]\|+C_0V\cdot\xi+\frac{\xi^2}{4}\\
\leq&\|\mathbf{Q}[t]\|^2-\xi\|\mathbf{Q}[t]\|+\frac{\xi^2}{4}=\left(\|\mathbf{Q}[t]\|-\frac\xi2\right)^2,
\end{align*}
where the third equality follows from the definition of $C_0$ and the last inequality follows from the assumption $\|\mathbf{Q}[t]\|> C_0V$. Now take the square root of both sides to obtain
\[\sqrt{\expect{\|\mathbf{Q}[t+1]\|^2~|~\mathbf{Q}[t]}}\leq\|\mathbf{Q}[t]\|-\frac\xi2.\]
By concavity of the $\sqrt{x}$ function, we have $\expect{\left.\|\mathbf{Q}[t+1]\|~\right|~\mathbf{Q}[t]}\leq\sqrt{\expect{\|\mathbf{Q}[t+1]\|^2~|~\mathbf{Q}[t]}}$, thus,
\[\expect{\left.\|\mathbf{Q}[t+1]\|~\right|~\mathbf{Q}[t]}\leq\|\mathbf{Q}[t]\|-\frac\xi2,\]
finishing the proof.
\end{proof}
The following lemma gives us a bound on the moments whenever a random process satisfies the drift condition in Lemma \ref{geometric_bound}. Its proof is given in \cite{energy-aware}.
\begin{lemma}\label{drift_lemma}
Let $K[n]$ be a real random process over $n\in \{1,2,\ldots\}$ satisfying
\begin{align*}
|K[n+1]-K[n]|&\leq\gamma\\
\expect{K[n+1]-K[n]~|~K[n]}&\leq\left\{
\begin{array}{ll}
\gamma, & \hbox{$K[n]<\sigma$;} \\
-\beta, & \hbox{$K[n]\geq \sigma$.}
\end{array}
\right.
\end{align*}
for some positive real-valued $\sigma$, and $0<\beta\leq \gamma$. Suppose $K[0]\in\mathbb{R}$ is finite. Then, at every $n\in\{1,2,\ldots\}$, the following holds:
\[\expect{e^{r K[n]}}\leq D+(e^{r K[1]}-D)\rho^n,\]
where $0<\rho<1$ and
\begin{align*}
r=\frac{\beta}{\gamma^2+\gamma\beta/3},~~
\rho=1-\frac{r\beta}{2},~~
D=\frac{(e^{r\gamma}-\rho)e^{r\sigma}}{1-\rho}.
\end{align*}
\end{lemma}
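For concreteness, the constants in Lemma \ref{drift_lemma} can be evaluated directly. The sketch below computes $r$, $\rho$, $D$ for hypothetical parameter values (mirroring the later substitution $\gamma=B$, $\beta=\xi/2$, $\sigma=C_0V$) and checks that $0<\rho<1$ and that the bound decays geometrically toward $D$:

```python
import math

def drift_constants(gamma, beta, sigma):
    """Constants from the drift lemma, assuming 0 < beta <= gamma."""
    r = beta / (gamma ** 2 + gamma * beta / 3.0)
    rho = 1.0 - r * beta / 2.0
    D = (math.exp(r * gamma) - rho) * math.exp(r * sigma) / (1.0 - rho)
    return r, rho, D

# Hypothetical parameters: gamma = B = 2, beta = xi/2 = 0.5, sigma = C0*V = 10.
r, rho, D = drift_constants(gamma=2.0, beta=0.5, sigma=10.0)
assert 0 < rho < 1
assert D > 0

# The bound D + (e^{r*K[1]} - D) * rho^n approaches D geometrically:
# starting from K[1] = 0 (so e^{r*K[1]} = 1 < D), it increases toward D.
K1 = 0.0
bounds = [D + (math.exp(r * K1) - D) * rho ** n for n in (1, 10, 100)]
assert bounds[0] <= bounds[1] <= bounds[2] <= D
```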
Using the above two lemmas, we have the following important corollary regarding the virtual queue vector.
\begin{corollary}\label{geo_queue_bound}
The following hold for any $t\in\{1,2,\ldots\}$ under the drift-plus-penalty algorithm,\\
1. Bounded moments of virtual queues:
\begin{equation}
\expect{e^{r\|\mathbf{Q}[t]\|}}\leq D,
\end{equation}
where
$$r=\frac{3\xi}{6B^2+B\xi},~D=\frac{(4e^{rB}+r\xi-4)e^{rC_0V}}{r\xi},$$
$B$ is defined in Assumption \ref{assumption-1}, $\xi$ is defined in Assumption \ref{assumption-3} and $C_0$ is defined in Lemma \ref{geometric_bound}. \\
2. Exponential tail bound: For any $c_1>0$,
\begin{equation}\label{high_prob_bound_queue}
Pr\left(\|\mathbf{Q}[t]\|>c_1\right)\leq De^{-rc_1}.
\end{equation}
\textbf{
3. Asymptotic feasibility: For any $l\in\{1,2,\cdots,L\}$
\begin{equation}
\limsup_{T\rightarrow\infty}\frac1T\sum_{t=1}^{T-1}z_l[t]\leq 0,~~w.p.1.
\end{equation}}
\end{corollary}
\begin{proof}
The first part follows directly from Lemma \ref{geometric_bound} by plugging $\gamma=B$, $\beta=\xi/2$ and $\sigma=C_0V$ into Lemma \ref{drift_lemma}. The second part follows from
\begin{align*}
Pr\left(\|\mathbf{Q}[t]\|>c_1\right)
&=Pr\left(e^{r\|\mathbf{Q}[t]\|}>e^{rc_1}\right)\\
&\leq \frac{\expect{e^{r\|\mathbf{Q}[t]\|}}}{e^{rc_1}}\leq De^{-rc_1},
\end{align*}
which is a direct application of Markov's inequality. \textbf{For the third part of the claim, take $c_1=\varepsilon T$ in \eqref{high_prob_bound_queue} and note that $Q_l[T]\leq\|\mathbf{Q}[T]\|$ to obtain
\begin{align*}
Pr(Q_l[T]>\varepsilon T)\leq De^{-r\varepsilon T}.
\end{align*}
Thus, we have
\begin{align*}
\sum_{T=1}^{\infty}Pr(Q_l[T]>\varepsilon T)\leq D\sum_{T=1}^{\infty}e^{-r\varepsilon T}<+\infty.
\end{align*}
Thus, by the Borel-Cantelli lemma,
\[Pr\left(Q_l[T]>\varepsilon T~\textrm{for infinitely many}~T\right)=0.\]
Since $\varepsilon>0$ is arbitrary, letting $\varepsilon\rightarrow0$ gives
\[Pr\left(\lim_{T\rightarrow\infty}\frac{Q_l[T]}{T}=0\right)=1.\]
On the other hand, by the queue updating rule, $Q_l[T]\geq Q_l[1] +\sum_{t=1}^{T-1}z_l[t]=\sum_{t=1}^{T-1}z_l[t]$, and thus the claim follows.}
\end{proof}
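The tail bound \eqref{high_prob_bound_queue} is Markov's inequality applied to $e^{r\|\mathbf{Q}[t]\|}$, and this step holds deterministically for any empirical distribution: the fraction of samples above $c_1$ never exceeds the empirical exponential moment times $e^{-rc_1}$. The sketch below illustrates this with hypothetical queue-length samples:

```python
import math
import random

random.seed(3)

r, c1 = 0.2, 15.0
# Hypothetical queue-length samples (exponential, so E[e^{rQ}] is finite).
samples = [random.expovariate(0.3) for _ in range(5000)]

# Empirical exponential moment and empirical tail probability.
moment = sum(math.exp(r * q) for q in samples) / len(samples)
tail = sum(1 for q in samples if q > c1) / len(samples)

# Markov step: 1{q > c1} <= e^{r(q - c1)} termwise, so the bound is exact
# for the empirical distribution, not just in expectation.
assert tail <= moment * math.exp(-r * c1)
```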
\subsection{Convergence time bound}
The following is our main lemma on the performance of the drift-plus-penalty algorithm.
\begin{lemma}\label{main_lemma}
Under the proposed drift-plus-penalty algorithm, for any $\delta\in(0,1)$, and any $T\in\mathbb{N}$,
\begin{align*}
&Pr\left(\frac{1}{T}\sum_{t=1}^T\left(V(z_0[t]-z^{opt})+\sum_{l=1}^LQ_l[t]z_l[t]\right)\right.\\
&\left.\leq CV\frac{\max\left\{\log T\log^{1/2}\frac2\delta,\log^{3/2}\frac2\delta\right\}}{\sqrt{T}}\right)\geq 1-\delta,
\end{align*}
where
$C=2\sqrt{2}(2z_{\max}+\frac{B}{rV}+\frac{B}{rV}\log\left(\frac{8e^{rB}+2r\xi-8}{r\xi}\right)+BC_0)$, $C_0$, $r$ are defined in Lemma \ref{geometric_bound} and Corollary \ref{geo_queue_bound}, respectively, $B$, $z_{\max}$ are defined in Assumption \ref{assumption-1}, and $\xi$ is defined in Assumption \ref{assumption-3}.
\end{lemma}
\begin{proof}
First of all, according to Corollary \ref{geo_queue_bound} and the definition of $\mathcal{B}_t$,
\begin{align*}
Pr(\mathcal{B}_t)\leq De^{-rc_1}.
\end{align*}
Thus,
\[\sum_{t=1}^TPr(\mathcal{B}_t)\leq DTe^{-rc_1}.\]
Then by Lemma \ref{supMG_ineq}, we have
\begin{equation}\label{interim}
Pr(X[T]\geq\lambda)\leq\exp\left(-\frac{\lambda^2}{2Tc_2^2}\right)+DTe^{-rc_1}.
\end{equation}
For any $\delta\in(0,1)$, set $DTe^{-rc_1}=\delta/2$, so that $c_1=\frac1r\log\frac{2DT}{\delta}$. Then, set
\[\exp\left(-\frac{\lambda^2}{2Tc_2^2}\right)=\delta/2,\]
which, by substituting the definition that $c_2=2Vz_{\max}+Bc_1$, implies
\begin{align*}
\lambda&=\sqrt{2T\log\frac2\delta}\left(2Vz_{\max}+\frac{B}{r}\log\frac{2DT}{\delta}\right)\\
=& \frac{\sqrt{2}B}{r}\sqrt{T}\left(\log T\log^{1/2}\frac2\delta
+\log^{3/2}\frac2\delta\right)\\
&+\sqrt{2}\left(2Vz_{\max}+\frac{B}{r}\log(2D)\right)\sqrt{T}\log^{1/2}\frac 2\delta\\
\leq& \frac{CV}{2}\sqrt{T}\left(\log T\log^{1/2}\frac2\delta
+\log^{3/2}\frac2\delta\right)\\
\leq&CV\sqrt{T}\max\left\{\log T\log^{1/2}\frac2\delta,\log^{3/2}\frac2\delta\right\},
\end{align*}
where the second equality follows from simple algebra, the first inequality follows by substituting the definition of $D$ from Corollary \ref{geo_queue_bound} and simplifying, and the final inequality follows from the fact that $a+b\leq2\max\{a,b\}$. Substituting this choice of $\lambda$ and the definition of $X[T]$ in Lemma \ref{supMG} into \eqref{interim} gives
\begin{align*}
&Pr\left(\frac{1}{T}\sum_{t=1}^T\left(V(z_0[t]-z^{opt})+\sum_{l=1}^LQ_l[t]z_l[t]\right)\right.\\
&\left.\geq\frac{CV\max\left\{\log T\log^{1/2}\frac2\delta,\log^{3/2}\frac2\delta\right\}}{\sqrt{T}}\right)\leq\delta,
\end{align*}
which implies the claim.
\end{proof}
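The parameter choices in the proof can be verified numerically: with $c_1=\frac1r\log\frac{2DT}{\delta}$ and $\lambda=\sqrt{2T\log\frac2\delta}\,c_2$ (where $c_2=2Vz_{\max}+Bc_1$), both failure terms in \eqref{interim} equal $\delta/2$. The sketch below checks this for hypothetical constants:

```python
import math

# Hypothetical constants; only the structural relations matter here.
delta, T = 0.05, 10 ** 4
r, D, V, z_max, B = 0.1, 30.0, 100.0, 1.0, 1.0

c1 = math.log(2 * D * T / delta) / r     # makes D*T*e^{-r*c1} = delta/2
c2 = 2 * V * z_max + B * c1
lam = math.sqrt(2 * T * math.log(2 / delta)) * c2

term1 = math.exp(-lam ** 2 / (2 * T * c2 ** 2))   # Azuma term
term2 = D * T * math.exp(-r * c1)                  # bad-event term

assert abs(term1 - delta / 2) < 1e-9
assert abs(term2 - delta / 2) < 1e-9
```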
With the help of Lemma \ref{main_lemma}, we have the following theorem,
\begin{theorem}\label{main-theorem}
Fix $\varepsilon>0$ and $\delta\in(0,1)$ and define $V=1/\varepsilon$. Then, for any $T\geq\frac{1}{\varepsilon^2}\max\left\{\log^2\frac1\varepsilon\log\frac2\delta,~\log^3\frac2\delta\right\}$, with probability at least $1-2\delta$, one has:
\begin{align}
\frac{1}{T}\sum_{t=1}^Tz_0[t]&\leq z^{opt}+\mathcal{O}(\varepsilon),\label{near-optimality}\\
\frac{1}{T}\sum_{t=1}^Tz_l[t]&\leq \mathcal{O}(\varepsilon),~\forall l\in\{1,2,\ldots,L\}, \label{constraint-violation}
\end{align}
and thus the drift-plus-penalty algorithm with parameter $V=1/\varepsilon$ provides an $\mathcal{O}(\varepsilon)$ approximation with a convergence time $\frac{1}{\varepsilon^2}\max\left\{\log^2\frac1\varepsilon\log\frac2\delta,~\log^3\frac2\delta\right\}$.
\end{theorem}
The proof is given in Appendix \ref{proof-theorem-2} by applying Lemma \ref{main_lemma} together with Corollary \ref{geo_queue_bound}. First, using the relation between $\Delta[t]$ and $\sum_{l=1}^LQ_l[t]z_l[t]$ in \eqref{pre_dpp_upper_bound} and the fact that $\sum_{t=1}^T\Delta[t]$ is a telescoping sum equal to $\frac12\|\mathbf{Q}[T+1]\|^2$, we eliminate the term $\sum_{l=1}^LQ_l[t]z_l[t]$ in Lemma \ref{main_lemma} and prove the $\varepsilon$-suboptimality of the objective. Then, we use the exponential decay of the virtual queue vector in Corollary \ref{geo_queue_bound} to bound the constraint violation.
\section{Improved Convergence Time under One Constraint}
In this section, we show that when the problem \eqref{obj-problem1}-\eqref{constraint-2-problem1} has only one constraint (i.e. $L=1$), we can achieve the same $\varepsilon$ approximation as \eqref{near-optimality} and \eqref{constraint-violation} with probability at least $1-2\delta$ with a convergence time
$\frac{1}{\varepsilon^2}\log^2\frac2\delta$. Compared to the previous result, this is an improvement by a logarithmic factor.
\subsection{A deterministic truncation}
In the previous section, we truncated the original supermartingale $\{X[t]\}_{t=1}^\infty$ using a stopping time. In this section, we show that if there is only one constraint, then a deterministic truncation is enough to construct a new supermartingale with a bounded difference.
Again assume the $\xi$-slackness assumption holds. The translation of Lemma \ref{geometric_bound} to the case of one constraint implies that:
\begin{align}
\left|Q_1[t+1]-Q_1[t]\right|&\leq B, \label{one-constraint-1}\\
\expect{Q_1[t+1]-Q_1[t]|Q_1[t]}&\leq\left\{
\begin{array}{ll}
B, & \hbox{if $Q_1[t]\leq C_0V$;} \\
-\xi/2, & \hbox{if $Q_1[t]> C_0V$,}
\end{array}
\right. \label{one-constraint-2}
\end{align}
\begin{lemma}\label{deter-truncation}
If $L=1$ in problem \eqref{obj-problem1}-\eqref{constraint-2-problem1}, then the following inequality holds for the drift-plus-penalty algorithm, for any $t\in\{1,2,3,\ldots\}$ and $V\geq B/C_0$:
\begin{equation}
\expect{\left.V(z_0[t]-z^{opt})+(Q_1[t]\wedge C_0V)z_1[t]~\right|~\mathcal{F}_{t-1}}\leq0.\nonumber
\end{equation}
\end{lemma}
\begin{proof}
Since $Q_1[t]\in\mathcal{F}_{t-1}$, we analyze the conditional expectation for the following two cases:\\
1. If $Q_1[t]\leq C_0V$, then, the key feature inequality \eqref{key-feature} implies that
\begin{align*}
&\expect{\left.V(z_0[t]-z^{opt})+(Q_1[t]\wedge C_0V)z_1[t]~\right|~\mathcal{F}_{t-1}}\\
=&\expect{\left.V(z_0[t]-z^{opt})+Q_1[t]z_1[t]~\right|~\mathcal{F}_{t-1}}\leq0.
\end{align*}
2. If $Q_1[t]> C_0V$, then,
\begin{align*}
-\frac\xi2\geq&\expect{Q_1[t+1]-Q_1[t]|\mathcal{F}_{t-1}}\\
=&\expect{\max\{Q_1[t]+z_1[t],0\}-Q_1[t]|\mathcal{F}_{t-1}}\\
=&\expect{z_1[t]|\mathcal{F}_{t-1}},
\end{align*}
where the inequality follows from \eqref{one-constraint-2},
the first equality follows from the queue updating rule \eqref{queue_update}, and the second equality follows from the fact that, when $Q_1[t]>C_0V$ and $V\geq B/C_0$, the $\max\{\cdot\}$ can be removed.
Thus, the following chain of inequalities holds
\begin{align*}
&\expect{\left.V(z_0[t]-z^{opt})+(Q_1[t]\wedge C_0V)z_1[t]~\right|~\mathcal{F}_{t-1}}\\
=&\expect{\left.V(z_0[t]-z^{opt})+C_0Vz_1[t]~\right|~\mathcal{F}_{t-1}}\\
\leq&\expect{\left.2Vz_{\max}+C_0Vz_1[t]~\right|~\mathcal{F}_{t-1}}\\
=&2Vz_{\max}+C_0V\expect{z_1[t]|\mathcal{F}_{t-1}}\\
\leq&2Vz_{\max}-C_0V\xi/2.
\end{align*}
Substituting the definition $C_0= (4z_{\max}+\frac{B^2}{V}-\frac{\xi^2}{4V})/\xi$ gives
\begin{align*}
&\expect{\left.V(z_0[t]-z^{opt})+(Q_1[t]\wedge C_0V)z_1[t]~\right|~\mathcal{F}_{t-1}}\\
\leq&2Vz_{\max}-\frac{V\xi}{2}\cdot\frac{4z_{\max}+\frac{B^2}{V}-\frac{\xi^2}{4V}}{\xi}\\
=&-(B^2-\xi^2/4)/2<0,
\end{align*}
since $B$ defined in Assumption \ref{assumption-1} satisfies $|z_1[t]|\leq B,~\forall t$.
\end{proof}
The following corollary follows directly from the above lemma. Its proof is similar to that of Lemma \ref{supMG}.
\begin{corollary}\label{supMG-2}
Define a process $\{G[t]\}_{t=0}^\infty$ such that $G[0]=0$ and
\[G[t]\triangleq\sum_{i=1}^t\left(V(z_0[i]-z^{opt})+(Q_1[i]\wedge C_0V)z_1[i]\right).\]
Then, $\{G[t]\}_{t=0}^\infty$ is a supermartingale.
\end{corollary}
\subsection{A detailed analysis of the truncated queue process}
In Section \ref{interpretation}, we illustrated the relation between $\Delta[t]$ and $\sum_{l=1}^LQ_l[t]z_l[t]$ in \eqref{pre_dpp_upper_bound}, and then used the fact that $\sum_{t=1}^T\Delta[t]$ is a telescoping sum equal to $\frac12\|\mathbf{Q}[T+1]\|^2$ to prove Theorem \ref{main-theorem}. However, since we have truncated the queue in the process $G[t]$, the telescoping relation no longer holds for $\sum_{t=1}^T(Q_1[t]\wedge C_0V)z_1[t]$. The following argument shows that $\sum_{t=1}^T(Q_1[t]\wedge C_0V)z_1[t]$ is actually not far from a telescoping sum.
Let $n_j$ be the $j$-th time the queue process $Q_1[t]$ visits $[0,C_0V]$ and $n_0=1$.\footnote{If $Q_1[t]$ stays within $[0,C_0V]$ for two consecutive time slots $t$ and $t+1$, then this is counted as two visits.} Define $\tau_j\triangleq n_{j+1}-n_j$ as the inter-visit time period. Let $n_J$ be the last time slot in $\{1,2,\ldots,T+1\}$ at which $Q_1[t]$ visits $[0,C_0V]$. The following lemma analyzes the partial sum of $(Q_1[t]\wedge C_0V)z_1[t]$ from $1$ to $n_J-1$. Its proof involves a series of algebraic manipulations and is postponed to Appendix \ref{proof-of-telescoping}.
\begin{lemma}
Suppose $n_J>1$, then, the following holds for $V\geq B/C_0$,
\[\left|\sum_{t=1}^{n_J-1}(Q_1[t]\wedge C_0V)z_1[t]-\frac12Q_1[n_J]^2\right|\leq\frac52B^2(n_J-1),\]
where $B$ is defined in Assumption \ref{assumption-1} and $C_0$ is defined in Lemma \ref{geometric_bound}.
\end{lemma}
The following lemma bounds the time average of the truncated ``drift'' term at any time $T$ (i.e., $\frac1T\sum_{t=1}^T(Q_1[t]\wedge C_0V)z_1[t]$) from below by a constant using the previous lemma,
thereby demonstrating that the ``drift'' term cannot be too negative.
\begin{lemma}\label{appr-telescoping-sum}
For any $T\in\mathbb{N}$ and $V\geq B/C_0$, we have
\[\frac1T\sum_{t=1}^T(Q_1[t]\wedge C_0V)z_1[t]\geq-\frac52B^2.\]
\end{lemma}
\begin{proof}
We analyze the following two cases:\\
1. If $n_J=T+1$, then, by the above lemma,
\begin{align*}
\sum_{t=1}^T(Q_1[t]\wedge C_0V)z_1[t]=&\sum_{t=1}^{n_J-1}(Q_1[t]\wedge C_0V)z_1[t]\\
\geq&\frac12Q_1[T+1]^2-\frac52B^2T\\
\geq&-\frac52B^2T.
\end{align*}
2. If $n_J<T+1$, then, this implies that $Q_1[t]>C_0V$, for all $t\in\{n_J+1,\ldots,T+1\}$. Thus,
\begin{align*}
&\sum_{t=1}^T(Q_1[t]\wedge C_0V)z_1[t]\\
=&\sum_{t=1}^{n_J-1}(Q_1[t]\wedge C_0V)z_1[t]+\sum_{t=n_J}^{T}(Q_1[t]\wedge C_0V)z_1[t]\\
\geq&-\frac52B^2(n_{J}-1)+\frac{1}{2}Q_1[n_{J}]^2+\sum_{t=n_J}^{T}(Q_1[t]\wedge C_0V)z_1[t]\\
=&-\frac52B^2(n_{J}-1)+\frac{1}{2}Q_1[n_{J}]^2+\sum_{t=n_J+1}^{T}C_0Vz_1[t]\\
&+Q_1[n_J]z_1[n_J]\\
\geq&-\frac52B^2(n_{J}-1)+\frac{1}{2}Q_1[n_{J}]^2+C_0V\sum_{t=n_J}^{T}z_1[t]-B^2.
\end{align*}
where the second-to-last step follows from the fact that $Q_1[t]>C_0V$ for $t\in\{n_J+1,\ldots,T\}$, and
the last step follows from $C_0V\geq Q_1[n_J]>C_0V-B$.
Since $V\geq B/C_0$, it follows that $Q_1[t+1]=Q_1[t]+z_1[t],~\forall t\in\{n_J,\ldots,T\}$.
Since $Q_1[T+1]>C_0V$ and $Q_1[n_J]\leq C_0V$, we have
\[\sum_{t=n_J}^{T}z_1[t]=Q_1[T+1]-Q_1[n_J]>0.\]
Thus,
\[\sum_{t=1}^T(Q_1[t]\wedge C_0V)z_1[t] \geq-\frac52B^2n_{J}\geq-\frac52B^2T,\]
finishing the proof.
\end{proof}
\subsection{Convergence time analysis}
The following lemma is a direct application of Lemma \ref{azuma-inequality} and Corollary \ref{supMG-2}.
\begin{lemma}\label{main_lemma-2}
Under the proposed drift-plus-penalty algorithm, for any $\delta\in(0,1)$, any $T\in\mathbb{N}$ and any $V\geq B/C_0$,
\begin{align*}
&Pr\left(\frac{1}{T}\sum_{t=1}^T\left(V(z_0[t]-z^{opt})+(Q_1[t]\wedge C_0V) z_1[t]\right)\right.\\
&\left.\leq 2C_2V\frac{\log(1/\delta)}{\sqrt{T}}\right)\geq 1-\delta,
\end{align*}
where
$C_2=2z_{\max}+C_0B$, $C_0$ is defined in Lemma \ref{geometric_bound}, and $B$, $z_{\max}$ are defined in Assumption \ref{assumption-1}.
\end{lemma}
\begin{proof}
First of all,
we have
\begin{align*}
|G[t]-G[t-1]|=&|V(z_0[t]-z^{opt})+(Q_1[t]\wedge C_0V) z_1[t]|\\
\leq& 2Vz_{\max}+C_0VB.
\end{align*}
Thus, by Lemma \ref{azuma-inequality} and Corollary \ref{supMG-2},
\[Pr(G[T]\geq\lambda)\leq\exp\left(-\frac{\lambda^2}{2TC_2^2V^2}\right).\]
Let $\delta=\exp\left(-\frac{\lambda^2}{2TC_2^2V^2}\right)$; solving gives $\lambda=C_2V\sqrt{2T\log\frac1\delta}\leq2C_2V\sqrt{T}\log\frac1\delta$. Thus,
\[Pr\left(G[T]\geq2C_2V\sqrt{T}\log\frac1\delta\right)\leq\delta,\]
which implies the claim.
\end{proof}
The following is our main theorem regarding the convergence time for this $L=1$ case.
\begin{theorem}\label{main-theorem-2}
Fix $\varepsilon\in(0,C_0/B]$, $\delta\in(0,1)$ and define $V=1/\varepsilon$. Then, for any $T\geq\frac{1}{\varepsilon^2}\log^2\frac1\delta$, with probability at least $1-2\delta$, one has:
\begin{align}
\frac{1}{T}\sum_{t=1}^Tz_0[t]&\leq z^{opt}+\mathcal{O}(\varepsilon),\label{near-optimality-2}\\
\frac{1}{T}\sum_{t=1}^Tz_1[t]&\leq \mathcal{O}(\varepsilon), \label{constraint-violation-2}
\end{align}
and so the drift-plus-penalty algorithm with parameter $V=1/\varepsilon$ provides an $\mathcal{O}(\varepsilon)$ approximation with a convergence time $\frac{1}{\varepsilon^2}\log^2\frac1\delta$.
\end{theorem}
The proof given in Appendix \ref{proof-theorem-3} is similar to the proof of Theorem \ref{main-theorem}. First, we use Lemma \ref{appr-telescoping-sum} to get rid of the term $(Q_1[t]\wedge C_0V) z_1[t]$ in Lemma \ref{main_lemma-2}, thereby proving the $\varepsilon$-suboptimality of the objective. Then, we pass the exponential decay of the virtual queue in Corollary \ref{geo_queue_bound} to the one-constraint case and bound the constraint violation.
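To give a concrete sense of the gap between the two guarantees, the following short computation (a numerical illustration of ours; the constants hidden by the $\mathcal{O}(\cdot)$ notation are ignored) evaluates both convergence-time expressions for a sample accuracy and confidence level:

```python
import math

def T_general(eps, delta):
    # Theorem 2 bound: (1/eps^2) * max{ log^2(1/eps) * log(2/delta), log^3(2/delta) }
    return (1 / eps ** 2) * max(math.log(1 / eps) ** 2 * math.log(2 / delta),
                                math.log(2 / delta) ** 3)

def T_one_constraint(eps, delta):
    # Theorem 3 bound (L = 1): (1/eps^2) * log^2(1/delta)
    return (1 / eps ** 2) * math.log(1 / delta) ** 2

eps, delta = 0.01, 0.05
print(T_general(eps, delta), T_one_constraint(eps, delta))
```

For $\varepsilon=0.01$ and $\delta=0.05$, the general bound is roughly an order of magnitude larger than the one-constraint bound, reflecting the extra $\log^2\frac1\varepsilon$ factor.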
\section{Application on Dynamic Server Scheduling}\label{section:simulation}
\begin{figure}[htbp]
\centering
\includegraphics[height=1.5in]{Server-schedule}
\caption{A 3-queue 2-server system.}
\label{fig:Stupendous1}
\end{figure}
In this section, we demonstrate the performance of the drift-plus-penalty algorithm via a dynamic server scheduling example.
Consider a 3-queue 2-server system with an i.i.d. packet arrival process $\{\mathbf{a}[t]\}_{t=1}^\infty$ shown in Fig. \ref{fig:Stupendous1}. All packets have fixed length and each entry of $\mathbf{a}[t]$ is Bernoulli with mean
\[\expect{\mathbf{a}[t]}=(0.5~0.7~0.4).\]
During each time slot, the controller chooses which two queues are served by the servers. A queue that is allocated a server on a given slot can serve exactly one packet on that slot. A single queue cannot receive two servers during the same slot. Thus, the service is given by
\[
b_i[t]=
\begin{cases}
1,~\textrm{if queue $i$ gets served at time $t$,}\\
0,~\textrm{otherwise,}
\end{cases}
\]
and the service vector $\mathbf{b}[t]\in\{(1,1,0),~(1,0,1),~(0,1,1)\}$. Suppose further that choosing $(1,1,0)$ and $(1,0,1)$ consumes 1 unit of energy whereas choosing $(0,1,1)$ consumes 2 units of energy. Let $p[t]$ be the energy consumption at time slot $t$.
The goal is to minimize the time average energy consumption while stabilizing all the queues.
In view of the formulation \eqref{obj-problem1}-\eqref{constraint-2-problem1}, $w[t]=\mathbf{a}[t]$, $z_0[t]=p[t]$,
$(z_1[t]~z_2[t]~z_3[t])=\mathbf{a}[t]-\mathbf{b}[t]$ and
$$\mathcal{A}(w[t])=\{\mathbf{a}[t]-(1,1,0),~\mathbf{a}[t]-(1,0,1),~\mathbf{a}[t]-(0,1,1)\}.$$
Thus, we can write \eqref{obj-problem1}-\eqref{constraint-2-problem1} as
\begin{align*}
\min~&\overline{p}\\
s.t.~&\overline{a_i-b_i}\leq0,~i\in\{1,2,3\},
\end{align*}
where
$$\overline{p}=\limsup_{T\rightarrow\infty}\frac1T\sum_{t=1}^Tp[t]$$
and
$$\overline{a_i-b_i}=\limsup_{T\rightarrow\infty}\frac1T\sum_{t=1}^T(a_i[t]-b_i[t]).$$
Using the drift-plus-penalty algorithm, we can solve the problem via the following per-slot minimization:
\begin{align*}
\min~~&Vp[t]-\sum_{i=1}^3Q_i[t]b_i[t]\\
s.t.~~&\mathbf{b}[t]\in\{(1,1,0),~(1,0,1),~(0,1,1)\}.
\end{align*}
This can be easily solved by comparing the following values:
\begin{itemize}
\item Option $(1,1,0)$: $V-Q_1[t]-Q_2[t]$.
\item Option $(1,0,1)$: $V-Q_1[t]-Q_3[t]$.
\item Option $(0,1,1)$: $2V-Q_2[t]-Q_3[t]$.
\end{itemize}
Thus, during each time slot, the controller picks the option with the smallest of the above three values, breaking ties arbitrarily. This is a simple dynamic scheduling algorithm which does not need the statistics of the arrivals.
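This per-slot rule, combined with the virtual-queue update $Q_i[t+1]=\max\{Q_i[t]+a_i[t]-b_i[t],0\}$ from \eqref{queue_update}, can be simulated in a few lines. The sketch below is our own illustration (the variable names and the choice $V=100$ are ours, not from the text):

```python
import random

ARRIVAL_RATES = (0.5, 0.7, 0.4)
# Service options (which two queues receive a server) and their energy costs.
OPTIONS = [((1, 1, 0), 1), ((1, 0, 1), 1), ((0, 1, 1), 2)]

def simulate(V, T, seed=0):
    rng = random.Random(seed)
    Q = [0.0, 0.0, 0.0]                 # virtual queues Q_i[t]
    total_energy = 0.0
    total_violation = [0.0, 0.0, 0.0]   # running sums of z_i[t] = a_i[t] - b_i[t]
    for _ in range(T):
        a = [1.0 if rng.random() < p else 0.0 for p in ARRIVAL_RATES]
        # Pick the option minimizing V*p - sum_i Q_i*b_i (ties broken by order).
        b, p = min(OPTIONS,
                   key=lambda op: V * op[1] - sum(q * s for q, s in zip(Q, op[0])))
        total_energy += p
        for i in range(3):
            z = a[i] - b[i]
            total_violation[i] += z
            Q[i] = max(Q[i] + z, 0.0)   # update Q_i[t+1] = max{Q_i[t] + z_i[t], 0}
    return total_energy / T, [v / T for v in total_violation]

avg_energy, avg_violation = simulate(V=100, T=50000)
print(avg_energy, avg_violation)
```

With a moderately large $V$, the empirical average energy settles near the optimal value $1.1$, while the time-average constraint violation $\frac1T\sum_t(a_i[t]-b_i[t])$ shrinks toward zero, consistent with the $\mathcal{O}(\varepsilon)$ guarantees.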
The benchmark we compare to is the optimal stationary algorithm which is i.i.d. over slots. It can be computed offline with the knowledge of the means of arrivals via the following linear program.
\begin{align*}
\min~~&q_1+q_2+2q_3\\
s.t.~~&q_1\geq0,~q_2\geq0,~q_3\geq0,\\
&q_1+q_2+q_3=1,\\
&
\left(
\begin{array}{ccc}
1 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & 1
\end{array}
\right)
\left(
\begin{array}{c}
q_1 \\
q_2 \\
q_3
\end{array}
\right)
\geq
\left(
\begin{array}{c}
0.5 \\
0.7 \\
0.4
\end{array}
\right),
\end{align*}
where $q_1,~q_2,~q_3$ stand for the probabilities of choosing the corresponding options. Simple computation gives the solution $q_1=0.6,~q_2=0.3,~q_3=0.1$, and the average energy consumption is $1.1$ units.
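Since the benchmark LP is tiny, the claimed optimum can be double-checked by brute force over the probability simplex in steps of $0.01$ (a verification sketch of ours, working in integer hundredths so that all comparisons are exact):

```python
# Verify the claimed optimum of the benchmark LP:
#   min  q1 + q2 + 2*q3
#   s.t. q1+q2 >= 0.5,  q1+q3 >= 0.7,  q2+q3 >= 0.4,  q1+q2+q3 = 1,  q >= 0.
def feasible(i, j, k):  # q1 = i/100, q2 = j/100, q3 = k/100
    return i + j >= 50 and i + k >= 70 and j + k >= 40

assert feasible(60, 30, 10)              # the stated solution is feasible
assert (60 + 30 + 2 * 10) / 100 == 1.1   # and its energy cost is 1.1

best_cost, best_q = None, None
for i in range(101):
    for j in range(101 - i):
        k = 100 - i - j
        if feasible(i, j, k):
            cost = i + j + 2 * k       # cost in hundredths
            if best_cost is None or cost < best_cost:
                best_cost, best_q = cost, (i / 100, j / 100, k / 100)

print(best_cost / 100, best_q)  # 1.1 (0.6, 0.3, 0.1)
```

The search confirms that no cheaper feasible mix exists on the grid, in agreement with the stated solution.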
\begin{figure}[htbp]
\centering
\includegraphics[height=2.5in]{V-E}
\caption{Time average energy consumption with different V values.}
\label{fig:Stupendous2}
\end{figure}
Fig. \ref{fig:Stupendous2} plots the time average energy consumption up to time $T$ (i.e., $\frac1T\sum_{t=1}^Tp[t]$) versus $T$. It can be seen that as $V$ gets larger, the time average approaches the optimal average energy consumption, but it takes longer to get close to the optimal. Similarly, Fig. \ref{fig:Stupendous3} plots the time average sum-up queue size up to time $T$ (i.e., $\frac1T\sum_{t=1}^T\sum_{i=1}^3Q_i[t]$) versus $T$. As $V$ gets larger, the time average queue size gets larger and it takes longer to stabilize the queues.
\begin{figure}[htbp]
\centering
\includegraphics[height=2.5in]{V-Q}
\caption{Time average sum-up queue size with different V values.}
\label{fig:Stupendous3}
\end{figure}
\section{Conclusions}
This paper analyzes the non-asymptotic performance of the drift-plus-penalty algorithm for stochastic constrained optimization via a truncation technique. With a proper truncation level, we show that the drift-plus-penalty algorithm gives an $\mathcal{O}(\varepsilon)$ approximation with a provably bounded convergence time. Furthermore, if there is only one constraint, the convergence time analysis can be improved significantly.
\appendices
\section{Basic definitions and lemmas}\label{app:basics}
The following definition of stopping time can be found in Chapter 4 of \cite{Durrett}.
\begin{definition}
Given a probability space $(\Omega, \mathcal{F}, P)$ and a filtration
$\{\varnothing, \Omega\}=\mathcal{F}_0\subseteq\mathcal{F}_1\subseteq\mathcal{F}_2\ldots$
in $\mathcal{F}$, a stopping time $N$ is a random variable such that for any $n<\infty$,
\[\{N=n\}\in\mathcal{F}_n,\]
i.e. the event that the stopping time occurs at time $n$ is contained in the information up to time $n$.
\end{definition}
The following lemma formalizes the idea of truncation.
\begin{lemma}\label{stopping_time}
(\textit{Theorem 5.2.6 in \cite{Durrett}}) If $N$ is a stopping time and $X[n]$ is a supermartingale, then $X[n\wedge N]$ is also a supermartingale, where $a\wedge b\triangleq\min\{a,b\}$.
\end{lemma}
\begin{lemma}\label{azuma-inequality}
Consider a supermartingale $\{Y[t]\}_{t=0}^\infty$ with bounded difference
\[|Y[t]-Y[t-1]|\leq c_2,~\forall t,\]
and $Y[0]=0$.
For any fixed time $T$ and any $\lambda>0$, the following concentration inequality holds:
\[Pr(Y[T]\geq\lambda)\leq\exp\left(-\frac{\lambda^2}{2Tc_2^2}\right).\]
\end{lemma}
This is one generalized version of Azuma's inequality to supermartingales. Similar types of supermartingale concentration inequalities and proofs can also be found in Chapter 2 of \cite{old&new}.
\begin{proof}[proof of Lemma \ref{azuma-inequality}]
We first establish a generalized Hoeffding's lemma as follows: For any $t\in\mathbb{N}$ and any $s>0$,
\begin{equation}\label{hoeffding_lemma}
\expect{\left.e^{s(Y[t]-Y[t-1])}\right|\mathcal{F}_{t-1}}\leq e^{s^2c_2^2/2}.
\end{equation}
For simplicity of notation, let $X\triangleq Y[t]-Y[t-1]$; then we have for any $t\in\mathbb{N}$ and any $s>0$,
\begin{align*}
e^{sX}=&e^{sc_2\cdot\frac {X}{c_2}}\leq\frac{1-\frac {X}{c_2}}{2}e^{-sc_2}+\frac{1+\frac {X}{c_2}}{2}e^{sc_2}\\
\leq&\frac12\left(e^{sc_2}+e^{-sc_2}\right)+\frac{X}{2c_2}\left(e^{sc_2}-e^{-sc_2}\right)\\
\leq&e^{s^2c_2^2/2}+\frac{X}{2c_2}\left(e^{sc_2}-e^{-sc_2}\right),
\end{align*}
where the first inequality follows from convexity of the function $x\mapsto e^{sc_2x}$ and the fact that $\frac {X}{c_2}\in[-1,1]$, and the third inequality follows from the Taylor expansion of the term $e^{sc_2}+e^{-sc_2}$. Taking conditional expectations on both sides gives
\begin{align*}
\expect{e^{sX}|\mathcal{F}_{t-1}}&\leq e^{s^2c_2^2/2}+\frac{\expect{X|\mathcal{F}_{t-1}}}{2c_2}\left(e^{sc_2}-e^{-sc_2}\right)\\
&\leq e^{s^2c_2^2/2},
\end{align*}
where the second inequality follows from the fact that $Y[t]$ is a supermartingale, and thus $\expect{X|\mathcal{F}_{t-1}}\leq0$. This finishes the proof of \eqref{hoeffding_lemma}. Now, consider
\begin{align*}
\expect{e^{s Y[T]}}=&\expect{e^{s(Y[T]-Y[T-1]+Y[T-1])}}\\
=&\expect{\expect{\left.e^{s(Y[T]-Y[T-1]+Y[T-1])}\right|\mathcal{F}_{T-1}}}\\
=&\expect{e^{s Y[T-1]}\expect{\left.e^{s(Y[T]-Y[T-1])}\right|\mathcal{F}_{T-1}}}\\
\leq&\expect{e^{s Y[T-1]}}e^{s^2c_2^2/2}.
\end{align*}
By recursively applying the above technique $T-1$ times with the fact that $Y[0]=0$, we get
$$\expect{e^{s Y[T]}}\leq e^{\frac{s^2Tc_2^2}{2}}.$$
Finally, by applying the above inequality together with Markov's inequality,
\begin{align*}
Pr\left(Y[T]\geq\lambda\right)=&Pr\left(e^{sY[T]}\geq e^{s\lambda}\right)
\leq\frac{\expect{e^{sY[T]}}}{e^{s\lambda}}\\
\leq& e^{-s\lambda+\frac{s^2Tc_2^2}{2}}.
\end{align*}
Optimizing the right-hand side over $s$ gives the optimal $s^*=\frac{\lambda}{Tc_2^2}$, and the bound follows.
\end{proof}
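The two elementary steps used in this proof, convexity of $x\mapsto e^{sx}$ on $[-c_2,c_2]$ and $\cosh(sc_2)\leq e^{s^2c_2^2/2}$, can be sanity-checked numerically. The following sketch (our own check, not part of the proof) verifies the combined bound $e^{sx}\leq e^{s^2c_2^2/2}+\frac{x}{2c_2}(e^{sc_2}-e^{-sc_2})$ on a grid of points:

```python
import math

def hoeffding_step_holds(s, c, n_points=201, tol=1e-12):
    """Check e^{s x} <= e^{s^2 c^2 / 2} + (x / (2c)) * (e^{s c} - e^{-s c}) on [-c, c]."""
    rhs_const = math.exp(s * s * c * c / 2)
    slope = (math.exp(s * c) - math.exp(-s * c)) / (2 * c)
    for m in range(n_points):
        x = -c + 2 * c * m / (n_points - 1)
        if math.exp(s * x) > rhs_const + slope * x + tol:
            return False
    return True

assert all(hoeffding_step_holds(s, c)
           for s in (0.1, 0.5, 1.0, 2.0)
           for c in (0.25, 1.0, 3.0))
print("generalized Hoeffding step verified on the grid")
```

Of course, a grid check is only a sanity test; the inequality itself holds for all $x\in[-c_2,c_2]$ by the argument in the proof.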
\section{Proof of Property 2 in Lemma 3}\label{app_property_2}
\begin{proof}
This appendix proves that the truncated process $\{Y[t]\}_{t=0}^\infty$ has bounded difference. We have for any $t\geq0$,
\begin{align*}
&|Y[t]-Y[t-1]|\\
=&\left|X[t\wedge(\tau-1)]-X[(t-1)\wedge(\tau-1)]\right|\\
=&\left|\left(X[t\wedge(\tau-1)]-X[(t-1)\wedge(\tau-1)]\right)1_{\{\tau-1\geq t \}}\right.\\
&\left.+\left(X[t\wedge(\tau-1)]-X[(t-1)\wedge(\tau-1)]\right)1_{\{\tau-1< t \}}\right|\\
=&\left|X[t]-X[t-1]\right|1_{\{\tau-1\geq t \}}.
\end{align*}
On the other hand, according to the definition of $X[t]$ in Lemma \ref{supMG}, we have
\begin{align*}
\left|X[t]-X[t-1]\right|&=\left|V(z_0[t]-z^{opt})+\sum_{l=1}^LQ_l[t]z_l[t]\right|\\
&\leq 2Vz_{\max}+\left|\sum_{l=1}^LQ_l[t]z_l[t]\right|\\
&\leq 2Vz_{\max}+B\|\mathbf{Q}[t]\|,
\end{align*}
where the second inequality follows from the Cauchy-Schwarz inequality and Assumption \ref{assumption-1}.
On the set $\{\tau-1\geq t \}$, $\|\mathbf{Q}[t]\|\leq c_1$, thus, we get,
\begin{align*}
\left|X[t]-X[t-1]\right|1_{\{\tau-1\geq t \}}&\leq\left(2Vz_{\max}+Bc_1\right)1_{\{\tau-1\geq t \}}\\
&\leq 2Vz_{\max}+Bc_1,
\end{align*}
finishing the proof.
\end{proof}
\section{Proof of Theorem 2}\label{proof-theorem-2}
\begin{enumerate}
\item We first show the $\varepsilon$-near-optimality. According to \eqref{pre_dpp_upper_bound}, it follows that
\begin{align*}
&\frac{1}{T}\sum_{t=1}^T\left(V(z_0[t]-z^{opt})+\sum_{l=1}^LQ_l[t]z_l[t]\right)\\
\geq&\frac{1}{T}\sum_{t=1}^T\left(\Delta[t]-\frac{B^2}{2}+V(z_0[t]-z^{opt})\right)\\
=&\frac{1}{2T}\|\mathbf{Q}[T+1]\|^2-\frac{B^2}{2}+\frac{1}{T}\sum_{t=1}^TV(z_0[t]-z^{opt}).
\end{align*}
Thus, by Lemma \ref{main_lemma}, with probability $1-\delta$,
\begin{align*}
&\frac{1}{2T}\|\mathbf{Q}[T+1]\|^2-\frac{B^2}{2}+\frac{1}{T}\sum_{t=1}^TV(z_0[t]-z^{opt})\\
\leq& CV\frac{\max\left\{\log T\log^{1/2}\frac2\delta,\log^{3/2}\frac2\delta\right\}}{\sqrt{T}},
\end{align*}
which implies
\begin{align*}
\frac{1}{T}\sum_{t=1}^Tz_0[t]\leq& z^{opt}+\frac{C\max\left\{\log T\log^{1/2}\frac2\delta,\log^{3/2}\frac2\delta\right\}}{\sqrt{T}}\\
&+\frac{B^2}{2V}.
\end{align*}
Now, substituting $V=1/\varepsilon$ gives
\begin{align*}
\frac{1}{T}\sum_{t=1}^Tz_0[t]\leq& z^{opt}+\frac{C\max\left\{\log T\log^{1/2}\frac2\delta,\log^{3/2}\frac2\delta\right\}}{\sqrt{T}}\\
&+\frac{B^2\varepsilon}{2},
\end{align*}
where $C$ has the order $\mathcal{O}(1)$ when $\varepsilon$ is small.
Then, substituting $T\geq\frac{1}{\varepsilon^2}\max\left\{\log^2\frac1\varepsilon\log\frac2\delta,~\log^3\frac2\delta\right\}$ on the right-hand side gives \eqref{single-column}.
\begin{figure*}
\normalsize
\begin{align}
\frac{C\max\left\{\log T\log^{1/2}\frac2\delta,\log^{3/2}\frac2\delta\right\}}{\sqrt{T}}
\leq&\frac{C\max\left\{\left(2\log\frac1\varepsilon
+\log\left(\max\left\{\log\frac1\varepsilon\log^{\frac12}\frac2\delta,~\log^{\frac32}\frac2\delta\right\}\right)\right)\log^{\frac12}\frac2\delta
,~\log^{\frac32}\frac2\delta\right\}}
{\frac1\varepsilon\max\left\{\log\frac1\varepsilon\log^{\frac12}\frac2\delta,~\log^{\frac32}\frac2\delta\right\}}\nonumber\\
\leq&\frac{C\left(3\log\frac1\varepsilon\log^{\frac12}\frac2\delta+3\log^{\frac32}\frac2\delta\right)}
{\frac1\varepsilon\max\left\{\log\frac1\varepsilon\log^{\frac12}\frac2\delta,~\log^{\frac32}\frac2\delta\right\}}\label{single-column}
\leq 6C\varepsilon.
\end{align}
\end{figure*}
Thus, with probability at least $1-\delta$,
\[\frac{1}{T}\sum_{t=1}^Tz_0[t]\leq z^{opt}+\mathcal{O}(\varepsilon).\]
\item We then show the constraint violation. By the queue updating rule \eqref{queue_update}, for any $t$ and any $l\in\{1,2,\ldots,L\}$, one has
\[Q_l[t+1]\geq Q_l[t]+z_l[t].\]
Thus, summing over $t=1,2,\ldots,T$,
\[Q_l[T+1]\geq Q_l[1]+\sum_{t=1}^Tz_l[t].\]
Since $Q_l[1]=0$,
\begin{equation}\label{inter_constraint_violation}
\frac1T\sum_{t=1}^Tz_l[t]\leq\frac{Q_l[T+1]}{T}.
\end{equation}
Recall from Corollary \ref{geo_queue_bound} that we have
\[Pr\left(\|\mathbf{Q}[T+1]\|>c_1\right)\leq De^{-rc_1}.\]
Setting $De^{-rc_1}=\delta$ and substituting the definition of $D$ in Corollary \ref{geo_queue_bound} gives
$$c_1=\frac1r\log\frac D\delta=\frac1r\log\frac1\delta+C_0V.$$
Thus,
\[Pr\left(\|\mathbf{Q}[T+1]\|>\frac1r\log\frac1\delta+C_0V\right)<\delta.\]
This implies with probability at least $1-\delta$, for any $l\in\{1,2,\ldots,L\}$,
\[\frac1TQ_l[T+1]\leq\frac{1}{rT}\log\frac1\delta+\frac{C_0V}{T}.\]
Substituting $T\geq\frac{1}{\varepsilon^2}\max\left\{\log^2\frac1\varepsilon\log\frac2\delta,~\log^3\frac2\delta\right\}$ and $V=1/\varepsilon$ on the right-hand side gives
\[\frac1TQ_l[T+1]\leq\mathcal{O}(\varepsilon).\]
By \eqref{inter_constraint_violation}, we have with probability at least $1-\delta$,
\[\frac1T\sum_{t=1}^Tz_l[t]\leq\mathcal{O}(\varepsilon).\]
\end{enumerate}
Since both \eqref{near-optimality} and \eqref{constraint-violation} hold individually with probability at least $1-\delta$, by union bound, with probability at least $1-2\delta$, we have the two conditions hold simultaneously.
\section{Proof of Lemma 10}\label{proof-of-telescoping}
Consider any inter-visit period from $n_j$ to $n_{j+1}$.\\
1. If $\tau_j=1$, then,
\begin{align*}
&\left|\frac{1}{2}\left(Q_1[n_{j+1}]^2-Q_1[n_j]^2\right)-(Q_1[n_j]\wedge C_0V)z_1[n_j]\right|\\
=&\left|\frac{1}{2}\left(Q_1[n_j+1]^2-Q_1[n_j]^2\right)-Q_1[n_j]z_1[n_j]\right|\\
\leq& \frac{1}{2}z_1[n_j]^2\leq \frac12B^2,
\end{align*}
where the equality follows from the fact that $Q_1[n_j]\in[0,C_0V]$, the first inequality follows by expanding the $Q_1[n_j+1]^2$ term, and the second inequality follows from $|z_1[t]|\leq B$.
2. If $\tau_j>1$, then,
\begin{align*}
&\left|\frac{1}{2}\left(Q_1[n_{j}+\tau_j]^2-Q_1[n_j]^2\right)-\sum_{t=n_j}^{n_j+\tau_j-1}(Q_1[t]\wedge C_0V)z_1[t]\right|\\
\leq&\left|\frac{1}{2}\left(Q_1[n_{j}+\tau_j]^2-Q_1[n_j+1]^2\right)-C_0V\sum_{t=n_j+1}^{n_j+\tau_j-1}z_1[t]\right|\\
&+\left|\frac{1}{2}\left(Q_1[n_j+1]^2-Q_1[n_j]^2\right)-Q_1[n_j]z_1[n_j]\right|\\
\leq&\left|\frac{1}{2}\left(Q_1[n_{j}+\tau_j]^2-Q_1[n_j+1]^2\right)-C_0V\sum_{t=n_j+1}^{n_j+\tau_j-1}z_1[t]\right|\\
&+B^2/2\\
\leq&\left|\frac{1}{2}\left(Q_1[n_{j}+\tau_j]^2-Q_1[n_j+1]^2\right)-Q_1[n_j+1]\right.\\
&\left.\cdot\sum_{t=n_j+1}^{n_j+\tau_j-1}z_1[t]\right|+\left|(Q_1[n_j+1]-C_0V)\sum_{t=n_j+1}^{n_j+\tau_j-1}z_1[t]\right|\\
&+B^2/2
\end{align*}
where the first inequality follows from the fact that $Q_1[t]\wedge C_0V=C_0V$ for $t$ from $n_j+1$ to $n_j+\tau_j-1$ and the triangle inequality, the second inequality follows from the $\tau_j=1$ case, and the last inequality follows from the triangle inequality again. We now bound the two terms in the last inequality separately.
\begin{itemize}
\item $\left|\frac{1}{2}\left(Q_1[n_{j}+\tau_j]^2-Q_1[n_j+1]^2\right)-Q_1[n_j+1]\sum_{t=n_j+1}^{n_j+\tau_j-1}\right.$ $\left.z_1[t]\right|\leq 2B^2$:
Since $V\geq B/C_0$ and $Q_1[t]> C_0V$ for any $t\in\{n_j+1, \ldots,n_j+\tau_j-1\}$, according to the queue updating rule, $Q_1[t+1]=Q_1[t]+z_1[t],~\forall t\in\{n_j, \ldots,n_j+\tau_j-1\}$. This gives
\[Q_1[n_j+\tau_j]=Q_1[n_j+1]+\sum_{t=n_j+1}^{n_j+\tau_j-1}z_1[t].\]
Thus,
\begin{align*}
&\left|\frac{1}{2}(Q_1[n_{j}+\tau_j]^2-Q_1[n_j+1]^2)- Q_1[n_j+1]\right.\\
&\left.\cdot\sum_{t=n_j+1}^{n_j+\tau_j-1}z_1[t]\right|=\frac{1}{2}\left|\sum_{t=n_j+1}^{n_j+\tau_j-1}z_1[t]\right|^2.
\end{align*}
Notice that
\begin{align*}
&C_0V\geq Q_1[n_{j}+\tau_j]\geq Q_1[n_{j}+\tau_j-1]-B\geq C_0V-B,\\
&C_0V\leq Q_1[n_{j}+1]\leq Q_1[n_{j}]+B\leq C_0V+B,
\end{align*}
thus,
$\left|\sum_{t=n_j+1}^{n_j+\tau_j-1}z_1[t]\right|^2=\left|Q_1[n_{j}+\tau_j]-Q_1[n_{j}+1]\right|^2\\
\leq (2B)^2=4B^2$,
and this gives the desired bound.
\item $\left|(Q_1[n_j+1]-C_0V)\sum_{t=n_j+1}^{n_j+\tau_j-1}z_1[t]\right|\leq (\tau_j-1)B^2$:
Since $C_0V\leq Q_1[n_j+1]\leq C_0V+B$, it follows that
\[|Q_1[n_j+1]-C_0V|\leq B.\]
Thus, the desired bound follows from $|z_1[t]|\leq B$.
\end{itemize}
Combining the above bounds, we get for $\tau_j>1$,
\begin{align*}
&\left|\frac{1}{2}\left(Q_1[n_{j}+\tau_j]^2-Q_1[n_j]^2\right)-\sum_{t=n_j}^{n_j+\tau_j-1}(Q_1[t]\wedge C_0V)z_1[t]\right|\\
&\leq2B^2+(\tau_j-1)B^2+B^2/2=\left(\tau_j+\frac32\right)B^2.
\end{align*}
Thus, for any $\tau_j$,
\begin{align*}
&\left|\frac{1}{2}\left(Q_1[n_{j}+\tau_j]^2-Q_1[n_j]^2\right)-\sum_{t=n_j}^{n_j+\tau_j-1}(Q_1[t]\wedge C_0V)z_1[t]\right|\\
&\leq\left(\tau_j+\frac32\right)B^2.
\end{align*}
Taking sums over $j$ from $0$ to $J-1$ on both sides gives
\begin{align*}
&\sum_{j=0}^{J-1}\left|\sum_{t=n_j}^{n_j+\tau_j-1}(Q_1[t]\wedge C_0V)z_1[t]
-\frac{1}{2}\left(Q_1[n_{j}+\tau_j]^2\right.\right.\\
&\left.\left.-Q_1[n_j]^2\right)\right|\leq \sum_{j=0}^{J-1}\left(\tau_j+\frac32\right)B^2.
\end{align*}
By triangle inequality,
\begin{align*}
&\left|\sum_{t=1}^{n_J-1}(Q_1[t]\wedge C_0V)z_1[t]-\frac{1}{2}Q_1[n_{J}]^2\right|\\
&\leq\sum_{j=0}^{J-1}\left(\tau_j+\frac32\right)B^2\leq \frac52B^2(n_{J}-1),
\end{align*}
where the last inequality follows from the fact that $\sum_{j=0}^{J-1}\tau_j=n_J-1$ and $J\leq n_J-1$.
\section{Proof of Theorem 3}\label{proof-theorem-3}
First, substituting the bound on $\sum_{t=1}^T(Q_1[t]\wedge C_0V) z_1[t]$ derived in Lemma \ref{appr-telescoping-sum} into Lemma \ref{main_lemma-2} gives,
with probability at least $1-\delta$,
\[\frac1T\sum_{t=1}^TV(z_0[t]-z^{opt})\leq2C_2V\frac{\log(1/\delta)}{\sqrt{T}}+\frac52B^2.\]
Dividing both sides by $V$ gives
\[\frac1T\sum_{t=1}^T(z_0[t]-z^{opt})\leq2C_2\frac{\log(1/\delta)}{\sqrt{T}}+\frac52\frac{B^2}{V}.\]
Substituting $V=1/\varepsilon$ and $T=\frac{1}{\varepsilon^2}\log^2\frac1\delta$ into the right-hand side of the above inequality gives
\begin{equation}\label{inter-near-optimality-2}
\frac1T\sum_{t=1}^T(z_0[t]-z^{opt})\leq2C_2\varepsilon+\frac52B^2\varepsilon=\mathcal{O}(\varepsilon).
\end{equation}
Next, we bound the constraint violation.
By the queue updating rule \eqref{queue_update}, for any $t$, one has
\[Q_1[t+1]\geq Q_1[t]+z_1[t].\]
Thus, summing over $t=1,2,\ldots,T$,
\[Q_1[T+1]\geq Q_1[1]+\sum_{t=1}^Tz_1[t].\]
Since $Q_1[1]=0$,
\begin{equation}\label{inter_constraint_violation-2}
\frac1T\sum_{t=1}^Tz_1[t]\leq\frac{Q_1[T+1]}{T}.
\end{equation}
Recall from Corollary \ref{geo_queue_bound} that we have
\[Pr\left(Q_1[T+1]>c_1\right)\leq De^{-rc_1}.\]
Setting $De^{-rc_1}=\delta$ and substituting the definition of $D$ in Corollary \ref{geo_queue_bound} gives
$$c_1=\frac1r\log\frac D\delta=\frac1r\log\frac1\delta+C_0V.$$
Thus,
\[Pr\left(Q_1[T+1]>\frac1r\log\frac1\delta+C_0V\right)<\delta.\]
This implies with probability at least $1-\delta$,
\[\frac1TQ_1[T+1]\leq\frac{1}{rT}\log\frac1\delta+\frac{C_0V}{T}.\]
Substituting $V=1/\varepsilon$ and $T=\frac{1}{\varepsilon^2}\log^2\frac1\delta$ into the right-hand side of the above inequality gives
\[\frac1TQ_1[T+1]\leq\mathcal{O}(\varepsilon).\]
By \eqref{inter_constraint_violation-2}, we have with probability at least $1-\delta$,
\[\frac1T\sum_{t=1}^Tz_1[t]\leq\mathcal{O}(\varepsilon).\]
Since both \eqref{near-optimality-2} and \eqref{constraint-violation-2} hold individually with probability at least $1-\delta$, by union bound, with probability at least $1-2\delta$, we have the two conditions hold simultaneously.
\bibliographystyle{unsrt}
\section{Introduction and Preliminary concepts}
\indent
Lie polynomials appeared at the end of the $19$th century and the beginning of the $20$th century in the work of Campbell, Baker, and Hausdorff on the exponential mapping in a Lie group, which led to the so-called Campbell--Baker--Hausdorff formula. Around $1930$, Witt introduced the Lie algebra of Lie polynomials and showed that the Lie algebra of Lie polynomials is actually a free Lie algebra and that its enveloping algebra is the associative algebra of noncommutative polynomials. He proved what is now called the Poincaré--Birkhoff--Witt theorem and showed how a free Lie algebra is related to the lower central series of the free group. At about the same time, Hall \cite{M. Hall,Hall} and Magnus \cite{Magnus}, with their commutator calculus, opened the way to bases of the free Lie algebra. For more details about a historical review of free Lie algebras, we refer the reader to the reference \cite{[21]} and the references therein.
The concept of basic commutators is defined in groups and Lie algebras, and there is also a way to construct and identify them. Moreover, a formula for calculating their number is obtained.
In 1962, Shirshov \cite{Shirshov} gave a method that generalizes Hall’s method \cite{M. Hall} for choosing a basis in a free Lie algebra.
Basic commutators are of particular importance in calculating the dimensions of different spaces and are therefore highly regarded. Niroomand and Parvizi \cite{Niroomand-Parvizi-M^2(L)} investigated some more results about $2$-nilpotent multiplier $\mathcal{M}^{(2)}(L)$ of a finite-dimensional nilpotent Lie algebra $L$ and by using the Witt formula, calculated its dimension. Moreover, Salemkar, Edalatzadeh, and Araskhan \cite{Salemkar-Edalatzadeh-Araskhan} introduced the concept of $c$-nilpotent multiplier $\mathcal{M}^{(c)}(L)$ of a finite-dimensional Lie algebra $L$ and obtained some bounds for $\mathcal{M}^{(c)}(L)$ by using the Witt formula and basic commutators.
In 1985, Filippov \cite{vtf} introduced the concept of \textit{$n$-Lie algebras}, as an $n$-ary multilinear and skew-symmetric operation $[x_1,\ldots,x_n]$ that satisfies the following generalized Jacobi identity:
\[[[x_1,\ldots,x_n],y_2,\ldots,y_n]=\sum_{i=1}^n[x_1,\ldots,[x_i,y_2,\ldots,y_n],\ldots,x_n].\]
Clearly, such an algebra becomes an ordinary Lie algebra when $n=2$.
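A standard example of an $n$-Lie algebra (going back to Filippov) is $\mathbb{R}^{n+1}$ with the bracket defined componentwise by $(n+1)\times(n+1)$ determinants. The sketch below numerically checks the generalized Jacobi identity for $n=3$ on $\mathbb{R}^4$; this is our own illustration, and the sign convention for the determinant bracket is ours:

```python
import numpy as np

def bracket(x1, x2, x3):
    """Ternary bracket on R^4: [x1, x2, x3]_i = det(rows x1, x2, x3, e_i)."""
    out = np.empty(4)
    for i in range(4):
        e = np.zeros(4)
        e[i] = 1.0
        out[i] = np.linalg.det(np.vstack([x1, x2, x3, e]))
    return out

rng = np.random.default_rng(0)
x1, x2, x3, y2, y3 = rng.standard_normal((5, 4))

# Generalized Jacobi (Filippov) identity for n = 3:
lhs = bracket(bracket(x1, x2, x3), y2, y3)
rhs = (bracket(bracket(x1, y2, y3), x2, x3)
       + bracket(x1, bracket(x2, y2, y3), x3)
       + bracket(x1, x2, bracket(x3, y2, y3)))
print(np.max(np.abs(lhs - rhs)))  # tiny (numerical round-off)
```

Skew-symmetry of the bracket is immediate from the determinant (swapping two rows flips the sign), and the identity holds up to floating-point round-off for random inputs.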
Let $L_1,L_2,\ldots,L_n$ be subalgebras of an $n$-Lie algebra $L$. Denote by $[L_1, L_2, \ldots, L_n]$ the subalgebra of $L$ generated by all vectors $[x_1, x_2, \ldots, x_n]$, where $x_i\in L_i$, $i=1, 2, \ldots, n.$ The subalgebra $[L,L, \ldots, L]$ is called the \textit{derived algebra} of $L$, and it is denoted by $L^2$. If $L^2=0$, then $L$ is called an abelian algebra. An \textit{ideal} $I$ of an $n$-Lie algebra $L$ is a subspace of $L$ such that $[I,L, \ldots,L] \subseteq I.$ If $[I, I,L, \ldots ,L]=0$, then $I$ is called an \textit{abelian ideal}.
Let $L$ be an $n$-Lie algebra and let $z\in L$ such that $[z,L,\ldots,L] = 0$. We call the collection of all such elements in $L$ the center of $L$ and denote it by $Z(L)$. One can check that $Z(L)$ is an ideal of $L$. Put $Z_0(L)=Z(L)$, and define $Z(L/Z_{i-1}(L))=Z_i(L)/Z_{i-1}(L)$. Hence we have $Z_{i-1}(L)\unlhd Z_i(L)$, for all $i\in\Bbb N$. So we can make the following chain:
\[0\unlhd Z_0(L)\unlhd Z_1(L)\unlhd Z_2(L)\unlhd\cdots\unlhd Z_i(L)\unlhd\cdots.\]
The above series is known as the upper central series.
Another well-known chain is the lower central series. Let $L^1=L$, let $L^2=[L,L,\ldots,L,L]$, and let $L^{i+1}=[L^{i},L,\ldots,L,L]$. Then we have $L^{i+1}\unlhd L^i$, for $i\geq1$. Thus the following series, known as the lower central series, can be formed as
\[\cdots\subseteq L^i\subseteq L^{i-1}\subseteq \cdots\subseteq L^3\subseteq L^2\subseteq L^1=L.\]
An $n$-Lie algebra $L$ is called nilpotent of class $c$ (for some positive integer $c$) if $L^{c+1}=0$ and $L^c\neq0$; equivalently, $Z_{c-1}(L)\subsetneq Z_c(L)=L$. In this case $c$ is called the nilpotency class of $L$, and we write $cl(L)=c$. Note that nilpotency of $n$-Lie algebras is inherited by subalgebras, ideals, and homomorphic images, but it is not closed under extensions.
For more information about nilpotency and solvability, we refer the interested reader to \cite{Eghdami-Gholami}.
Let $L$ be an $n$-Lie algebra over a field $\mathbb{F}$ with a free presentation
\[0\longrightarrow R\longrightarrow F\longrightarrow L\lra0,\]
where $F$ is a free $n$-Lie algebra. Then the $c$-nilpotent multiplier $\mathcal{M}^{(c)}(L)$ of $L$ is defined as
\[\mathcal{M}^{(c)}(L):=\frac{R\cap F^{c+1}}{\gamma_{c+1}(R,F,\ldots,F)}.\]
So far, several studies have been done in the case $n=2$, that is, for Lie algebras. For more information, we refer to \cite{Bosko, Eshrati-Saeedi-Darabi,Niroomand-Parvizi-M^2(L), Niroomand-Russo, Salemkaretal,Salemkar-Edalatzadeh-Araskhan, c-multiplier}.
The author and Saeedi \cite{Akbarossadat-Saeedi-3} introduced the wedge (exterior) product of two $n$-Lie algebras. They also \cite{Akbarossadat-Saeedi1} defined the non-abelian tensor product of two $n$-Lie algebras and the modular $n$-tensor product as follows.
\begin{definition}[Modular $n$-tensor product/$n$-tensor spaces]
Let $n\geq1$ be a natural number. Let $V_1$ and $V_2$ be two vector spaces over a field $\mathbb{F}$ of finite dimensions $d_1$ and $d_2$, respectively. Also, let $V_1^{\times i}\times V_2^{\times (n-i)}$ denote the Cartesian product
\[\underbrace{V_1\times\cdots\times V_1}_{i\ \text{times}}\times\underbrace{V_2\times\cdots\times V_2}_{n-i\ \text{times}}\]
for all $1\leq i\leq n-1$.
A function $f$ from $V_1^{\times i}\times V_2^{\times (n-i)}$ to a vector space $W$ is multilinear (or $n$-linear) if the restriction of $f$ on every component of $V_1^{\times i}\times V_2^{\times (n-i)}$ is linear. \\
Let $\{e_{ij}:1\leq j\leq d_i\}$ be a basis of $V_i$ for $i=1,2$. Then there exists a unique multilinear function $f:V_1^{\times i}\times V_2^{\times (n-i)}\longrightarrow W$ taking prescribed values on the elements of
\begin{multline}\label{multilinear}
\{(e_{1 j_1},\ldots,e_{1 j_i},e_{2 j_{i+1}},\ldots,e_{2 j_n}):\\
1\leq j_k\leq d_1,\ 1\leq k\leq i,\ 1\leq j_s\leq d_2,\ i+1\leq s\leq n\}.
\end{multline}
Note that the above set contains $d_1^i\, d_2^{n-i}$ elements, which in general exceeds the dimension of $V_1^{\times i}\times V_2^{\times (n-i)}$.
A pair $(\mathbb{T}_i,\Phi_i)$, where $\mathbb{T}_i$ is a vector space and $\Phi_i$ is a multilinear function from $V_1^{\times i}\times V_2^{\times (n-i)}$ to $\mathbb{T}_i$, satisfies the universal factorization property if for each vector space $W$ and each $n$-linear function $f:V_1^{\times i}\times V_2^{\times (n-i)}\longrightarrow W$, there is a linear function $h_i:\mathbb{T}_i\longrightarrow W$ such that $f=h_i\Phi_i$. The existence of such a universal pair is easily proved, and up to isomorphism the pair $(\mathbb{T}_i,\Phi_i)$ satisfying the universal factorization property is unique. We call $\mathbb{T}_i$ a modular $i$-tensor product and denote it by
\[\underbrace{V_1\otimes\cdots\otimes V_1}_{i\ \text{times}}\otimes\underbrace{V_2\otimes\cdots\otimes V_2}_{n-i\ \text{times}},\]
or simply, by $V_1^{\otimes i}\otimes V_2^{\otimes (n-i)}$ for all $1\leq i\leq n-1$. Now, let
\[V\otimes_{\mathrm{mod}}^n W=\bigoplus_{i=1}^{n-1}\mathbb{T}_i.\]
The vector space $V\otimes_{\mathrm{mod}}^n W$ is called the modular tensor product (or the abelian tensor product) of $V$ and $W$.
It is evident that $\mathbb{T}_i=\mathbb{T}_j=V\otimes_{\mathrm{mod}}^n V$ for all $1\leq i,j\leq n-1$ whenever $V=W$, and that $V\otimes_{\mathrm{mod}}^n W$ coincides with the ordinary tensor product of two vector spaces when $n=2$.
\end{definition}
\begin{remark}\label{dim-modular-tensor}
For every finite-dimensional vector spaces $V$ and $W$ with $\dim V=d_V$ and $\dim W=d_W$, we have
\[\dim(V\otimes_{\mathrm{mod}}^n W)=\sum_{i=1}^{n-1}d_V^id_W^{n-i},\]
and also,
\[\dim((V\otimes_{\mathrm{mod}}^n V)\otimes_{\mathrm{mod}}^n W)=\sum_{i=1}^{n-1}d_V^{ni}d_W^{n-i}.\]
\end{remark}
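As a quick numerical illustration of Remark \ref{dim-modular-tensor} (the dimensions here are chosen only for the computation), take $n=3$, $\dim V=2$, and $\dim W=3$. Then
\[\dim(V\otimes_{\mathrm{mod}}^3 W)=\sum_{i=1}^{2}2^i\,3^{3-i}=2\cdot 9+4\cdot 3=30,\]
and
\[\dim((V\otimes_{\mathrm{mod}}^3 V)\otimes_{\mathrm{mod}}^3 W)=\sum_{i=1}^{2}2^{3i}\,3^{3-i}=8\cdot 9+64\cdot 3=264.\]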
\begin{definition}[$c$-Capable $n$-Lie algebras]
An $n$-Lie algebra $L$ is called $c$-capable, if there exists an $n$-Lie algebra $H$ such that $L\cong H/Z_c(H)$.
\end{definition}
\begin{definition}
Let $L$ be an $n$-Lie algebra. We define $Z_c^*(L)$ to be the smallest ideal $M$ of $L$ such that the $n$-Lie algebra $L/M$ is $c$-capable.
\end{definition}
It is easy to check that $Z_c^*(L)$ is a characteristic ideal of $L$ and also, that $Z_c^*(L/Z_c^*(L))=0$.
The notation $Z^*(L)$ was defined for Lie algebras in \cite{Salemkaretal}, and it was shown that a Lie algebra $L$ is capable if and only if $Z^*(L)=0$. Similarly, it can be proved for $n$-Lie algebras.
\begin{lemma} \label{Z^*=0}
Let $L$ be an $n$-Lie algebra. Then $Z^*(L)=0$ if and only if $L$ is capable.
\end{lemma}
\begin{proof}
Let $Z^*(L)=0$. By the definition of $Z^*(L)$, we know that $L/Z^*(L)$ is capable, that is, there is an $n$-Lie algebra $M$ such that $L/Z^*(L)\cong M/Z(M)$. Since $Z^*(L)=0$, we have $L/Z^*(L)\cong L$, and hence $L$ is capable.
On the other hand, if $L$ is capable, then there exists an $n$-Lie algebra $M$ such that $L\cong M/Z(M)$. Since $Z^*(L)$ is the smallest ideal of $L$ such that $L/Z^*(L)$ is capable, we obtain that $Z^*(L)=0$.
\end{proof}
The following theorem was proved for Lie algebras in \cite{c-multiplier}.
\begin{theorem} \label{natural epimorphism}
Let $L$ be an $n$-Lie algebra with free presentation $F/R$ and let $\pi:F/\gamma_{c+1}(R,F,\dots,F)\longrightarrow F/R$ be the natural epimorphism. Then $Z_c^*(L)=\pi\left(Z_c(F/\gamma_{c+1}(R,F,\dots,F))\right)$.
\end{theorem}
\begin{proof}
We have to prove that $L/\pi\left(Z_c(F/\gamma_{c+1}(R,F,\dots,F))\right)$ is $c$-capable, and that $\pi\left(Z_c(F/\gamma_{c+1}(R,F,\dots,F))\right)$ is the smallest ideal of $L$ with this property.
We know that $Z_c(F/\gamma_{c+1}(R,F,\dots,F))=\dfrac{Z_c(F)+\gamma_{c+1}(R,F,\dots,F)}{\gamma_{c+1}(R,F,\dots,F)}$, and hence $\pi(Z_c(F/\gamma_{c+1}(R,F,\dots,F)))=Z_c(F)/R$. Therefore, by choosing $H=F$, we have
\[\dfrac{H}{Z_c(H)}=\dfrac{F}{Z_c(F)}\cong \dfrac{F/R}{Z_c(F)/R}=\dfrac{L}{\pi\left(Z_c(F/\gamma_{c+1}(R,F,\dots,F))\right)}.\]
Let $M$ be an ideal of $L$ such that $L/M$ is $c$-capable and $M\subseteq \pi\left(Z_c(F/\gamma_{c+1}(R,F,\dots,F))\right)$. Since $L/M$ is $c$-capable, there is an $n$-Lie algebra $N$ such that $L/M\cong N/Z_c(N)$. We have $\pi\left(Z_c(F/\gamma_{c+1}(R,F,\dots,F))\right)/M\unlhd L/M$, since $M\subseteq \pi\left(Z_c(F/\gamma_{c+1}(R,F,\dots,F))\right)$. Therefore,
\[\dim\dfrac{L}{\pi\left(Z_c(F/\gamma_{c+1}(R,F,\dots,F))\right)}=\dim\dfrac{L/M}{\pi\left(Z_c(F/\gamma_{c+1}(R,F,\dots,F))\right)/M}\leq \dim N/Z_c(N),\]
and hence $M=\pi\left(Z_c(F/\gamma_{c+1}(R,F,\dots,F))\right)$. Thus $Z_c^*(L)=\pi\left(Z_c(F/\gamma_{c+1}(R,F,\dots,F))\right)$.
\end{proof}
\begin{lemma} \label{lem2.1}
Let $L$ be an $n$-Lie algebra and let $N$ be an ideal of $L$. Then
$N\subseteq Z^*(L)$ if and only if the natural map $\mathcal{M}(L)\longrightarrow\mathcal{M}(L/N)$
is a monomorphism.
\end{lemma}
\begin{proof}
According to the definition of multiplier of $n$-Lie algebras, we know that
\[\mathcal{M}(L)\cong\dfrac{R\cap F^2}{\gamma_2(R,F,\dots,F)},\qquad \mathcal{M}(L/N)\cong\dfrac{S\cap F^2}{\gamma_2(S,F,\dots,F)},\]
where $F/R$ is a free presentation of $L$ and $N=S/R$. Thus $R\subseteq S\unlhd F$, and hence $R\cap F^2\subseteq S\cap F^2$. If $N\subseteq Z^*(L)$, then by Theorem \ref{natural epimorphism},
\[S/R\subseteq \pi(Z(F/\gamma_2(R,F,\dots,F)))=\pi((Z(F)+\gamma_2(R,F,\dots,F))/\gamma_2(R,F,\dots,F)).\]
Therefore, the natural map $\alpha:\mathcal{M}(L)\longrightarrow\mathcal{M}(L/N)$
is a monomorphism.
Conversely, let $\alpha$ be a monomorphism and suppose that $N\not\subseteq Z^*(L)$. Then there exists $n_0\in N$ such that $n_0\notin Z^*(L)=\pi(Z(F/\gamma_2(R,F,\dots,F)))$. Assume that $n_0=s_0+R\in S/R=N$, for some $s_0\in S$. Then $0\neq \pi^{-1}(s_0+R)=s_0+\gamma_2(R,F,\dots,F)\not\in Z(F/\gamma_2(R,F,\dots,F))$. Therefore, there are $f_i+\gamma_2(R,F,\dots,F)\in F/\gamma_2(R,F,\dots,F)$, for $i=2,\ldots,n$, such that $\bar{x}=[s_0,f_2,\dots,f_n]+\gamma_2(R,F,\dots,F)\neq 0$. Now, there are two cases. First, if $[s_0,f_2,\dots,f_n]\not\in R$, then $\alpha(\bar{x})$ is not defined, and hence the statement holds. Second, if $[s_0,f_2,\dots,f_n]\in R$, then
\[\alpha(\bar{x})=\alpha([s_0,f_2,\dots,f_n]+\gamma_2(R,F,\dots,F))=[s_0,f_2,\dots,f_n]+\gamma_2(S,F,\dots,F)=0,\]
which contradicts $\ker\alpha=0$.
\end{proof}
\begin{theorem}[\cite{capability-darabi}] \label{abelian-capable}
The $d$-dimensional abelian $n$-Lie algebra $F(d)$ is capable if and only if $d\geq n$.
\end{theorem}
The following proposition is analogous to the Lie algebra case.
\begin{proposition} \label{Z* subset I}
Let $L$ be an $n$-Lie algebra and let $M$ be an ideal of it, such that $L/M$ is capable. Then
$Z^*(L)\subseteq M$.
\end{proposition}
\begin{proof}
To prove this, it is enough to show that $Z^*(L)=\bigcap\limits_{i=1}^r M_i$, where the $M_i$'s are the ideals of $L$ such that $L/M_i$ is capable, for $i=1,2,\dots,r$. Since $Z^*(L)$ is the smallest ideal of $L$ such that $L/Z^*(L)$ is capable, we have $Z^*(L)\subseteq M_i$ for all $i$, and hence $Z^*(L)\subseteq\bigcap\limits_{i=1}^r M_i$.
On the other hand, since $L/Z^*(L)$ is capable, $Z^*(L)=M_j$, for some $1\leq j\leq r$. Thus $\bigcap\limits_{i=1}^r M_i\subseteq M_j=Z^*(L)$. Therefore, $Z^*(L)=\bigcap\limits_{i=1}^r M_i$.
\end{proof}
\begin{proposition}\label{Z* subset L^2}
Let $L$ be a nonabelian nilpotent $n$-Lie algebra of finite dimension. Then
$Z^*(L)\subseteq L^2$.
\end{proposition}
\begin{proof}
We know that $L/L^2$ is abelian with $\dim L/L^2\geq n$. Hence by Theorem \ref{abelian-capable}, $L/L^2$ is capable and so by Proposition \ref{Z* subset I}, we have $Z^*(L)\subseteq L^2$.
\end{proof}
The proofs of the following lemmas are omitted because of their similarity with the corresponding Lie algebra statements.
The exterior center of an $n$-Lie algebra $L$, denoted by $Z^\wedge(L)$, is defined by
\[Z^\wedge(L)=\{ l\in L;~l\wedge l_2\wedge\dots\wedge l_n=0,~\text{ for all } l_2,\dots,l_n\in L\}.\]
\begin{lemma} \label{lem2.2}
Let $L$ be an $n$-Lie algebra and let $N$ be a central ideal of $L$. Then
\begin{equation*}
\mathcal{M}(L)\stackrel{\alpha}{\longrightarrow} \mathcal{M}(L/N)\stackrel{\beta}{\longrightarrow}N\cap L^2\longrightarrow 0.
\end{equation*}
\end{lemma}
\begin{proof}
The homomorphism $\alpha$ is defined in Lemma \ref{lem2.1}. Assume that $F/R$ is a free presentation of $L$ and $N=S/R$, where $R\subseteq S\unlhd F$. Then, according to the definition of the multiplier, we have $\mathcal{M}(L/N)=(S\cap F^2)/\gamma_2(S,F,\dots,F)$ and $N\cap L^2=S/R\cap (F^2+R)/R$. Consider
\[\beta:\mathcal{M}(L/N)=(S\cap F^2)/\gamma_2(S,F,\dots,F)\longrightarrow N\cap L^2=S/R\cap (F^2+R)/R\]
by $s+\gamma_2(S,F,\dots,F)\longmapsto s+R$. It is easy to check that $\beta$ is an epimorphism and also that $\ker \beta=\mathrm{Im}\alpha$.
\end{proof}
\begin{corollary}
Let $L$ be an $n$-Lie algebra. Then
\begin{equation*}
\mathcal{M}(L)\longrightarrow \mathcal{M}(L^{ab}){\longrightarrow} L^2\longrightarrow 0.
\end{equation*}
\end{corollary}
\begin{proof}
It is enough to put $N=L^2$ in Lemma \ref{lem2.2}.
\end{proof}
\begin{proposition}
Let $L$ be a finite-dimensional $n$-Lie algebra and let $N$ be a central ideal of $L$. Then
\begin{enumerate}
\item
$\dim(\mathcal{M}(L/N))\leq \dim(\mathcal{M}(L))+\dim(L^2\cap N)$ and
\item
$\dim(\mathcal{M}(L/N)) = \dim(\mathcal{M}(L))+\dim(L^2\cap N)$
if and only if
$N\subseteq Z^*(L)$.
\end{enumerate}
\end{proposition}
\begin{proof}
According to Lemma \ref{lem2.2}, we have
\begin{equation*}
\mathcal{M}(L)\stackrel{\alpha}{\longrightarrow} \mathcal{M}(L/N){\longrightarrow}N\cap L^2\longrightarrow 0.
\end{equation*}
Since the sequence is exact, $\mathcal{M}(L/N)/\mathrm{Im}\,\alpha\cong N\cap L^2$, and hence
\[\dim(\mathcal{M}(L/N))\leq \dim(\mathcal{M}(L))+\dim(L^2\cap N).\]
If $N\subseteq Z^*(L)$, then by Lemma \ref{lem2.1}, $\alpha$ is a monomorphism. Hence
\begin{equation*}
0\longrightarrow \mathcal{M}(L)\stackrel{\alpha}{\longrightarrow} \mathcal{M}(L/N){\longrightarrow}N\cap L^2\longrightarrow 0
\end{equation*}
is exact. Since all terms are vector spaces, this sequence splits, so $\mathcal{M}(L/N)\cong\mathcal{M}(L)\oplus (N\cap L^2)$, and therefore
\[\dim(\mathcal{M}(L/N)) = \dim(\mathcal{M}(L))+\dim(L^2\cap N).\]
Conversely, if this equality holds, then $\alpha$ is forced to be a monomorphism, and Lemma \ref{lem2.1} gives $N\subseteq Z^*(L)$.
\end{proof}
\begin{corollary} \label{cor2.3}
Let $L$ be an $n$-Lie algebra and let $N$ be a central ideal of $L$. Then $N \subseteq Z^\wedge(L)$ if and only if the natural map
$L\wedge L\longrightarrow L/N \wedge L/N$
is a monomorphism.
\end{corollary}
\begin{proof}
Employing the isomorphism
$L\wedge L\cong \dfrac{L\otimes L}{L\square L}$
together with \cite[Corollary 3.2]{Akbarossadat-Saeedi2}, which yields an exact sequence $N\otimes L\longrightarrow L\otimes L\longrightarrow L/N \otimes L/N\longrightarrow 0$ whenever $N\subseteq L^2$, the definition of the nonabelian exterior product completes the proof.
\end{proof}
\begin{lemma} \label{Z^=Z*}
Let $L$ be a finite-dimensional $n$-Lie algebra. Then
$Z^\wedge(L)=Z^*(L)$.
\end{lemma}
\begin{proof}
The proof is similar to that of the corresponding result for Lie algebras in \cite{Niroomand-parvizi-russo}.
\end{proof}
\begin{corollary}
Let $L$ be an $n$-Lie algebra and let $N$ be a one-dimensional central ideal of $L$. Then $L$ is capable if and only if
$\mathcal{M}(L)\longrightarrow \mathcal{M}(L/N)$ has a nontrivial kernel.
\end{corollary}
\begin{proof}
It is easy to prove from Lemmas \ref{Z^*=0}, \ref{lem2.1}, and \ref{Z^=Z*}.
\end{proof}
Heisenberg Lie algebras are ubiquitous in the theory of Lie algebras, in particular in the classification of nilpotent Lie algebras. An $n$-Lie algebra $L$ is called Heisenberg if $L^2=Z(L)$ and $\dim(L^2)=1$.
\begin{theorem}[\cite{Eshrati-Saeedi-Darabi}]\
\begin{itemize}
\item[(a).]Every Heisenberg Lie algebra is isomorphic to the following Lie algebra of odd dimension:
\[H(2,m)=\gen{x,x_1,\ldots,x_{2m}:[x_{2i-1},x_{2i}]=x,i=1,\ldots,m}.\]
\item[(b).]Let $H(n,m)$ be a Heisenberg $n$-Lie algebra of dimension $mn+1$. Then
\[\dim(\mathcal{M}(H(n,m)))=\begin{cases}n,&m=1,\\\\\binom{mn}{n}-1,&m>1.\end{cases}\]
\end{itemize}
\end{theorem}
The following lemma establishes the structure of every finite-dimensional nilpotent $n$-Lie algebra $L$ satisfying $\dim(L^2)=1$.
\begin{lemma}[\cite{Eshrati-Saeedi-Darabi}]\label{L=H+A}
Let $L$ be a nilpotent $n$-Lie algebra of dimension $d$ with $\dim(L^2)=1$. Then there exists $m\geq1$ such that
\[L\cong H(n,m)\oplus F(d-mn-1),\]
where $F(d-mn-1)$ is an abelian $n$-Lie algebra of dimension $d-mn-1$.
\end{lemma}
\begin{theorem}[\cite{capability-darabi}] \label{Heisenberg-capable}
The $n$-Lie algebra $H(n,m)\oplus F(k)$ with dimension $d=mn+k+1$ is capable if and only if $m=1$.
\end{theorem}
\begin{lemma} \label{Z^(L)=L^2=Z(L)}
Let $H(n,m)$ be a Heisenberg $n$-Lie algebra with $m>1$. Then
\[Z^\wedge(H(n,m))=H^2(n,m)=Z(H(n,m)).\]
\end{lemma}
\begin{proof}
Since $m>1$, by Theorem \ref{Heisenberg-capable}, $H(n,m)$ is not capable. Hence by Lemma \ref{Z^*=0}, $Z^*(H(n,m))\neq0$. On the other hand, since $H(n,m)$ is nilpotent of finite dimension $mn+1$, Lemmas \ref{Z* subset L^2} and \ref{Z^=Z*} give $Z^\wedge(H(n,m))=Z^*(H(n,m))\subseteq H^2(n,m)=Z(H(n,m))$. Also, by \cite[Proposition 4.1]{Akbarossadat-Saeedi2}, we know that
\[H(n,m)\otimes H(n,m)\cong \dfrac{H(n,m)}{H^2(n,m)}\otimes \dfrac{H(n,m)}{H^2(n,m)},\]
and so $H(n,m)\wedge H(n,m)\cong \dfrac{H(n,m)}{H^2(n,m)}\wedge \dfrac{H(n,m)}{H^2(n,m)}$. Thus by Corollary \ref{cor2.3}, we obtain
\[Z^\wedge(H(n,m))=Z^*(H(n,m))=H^2(n,m)=Z(H(n,m)).\]
\end{proof}
The author and Saeedi \cite{Akbarossadat-Saeedi-3} first introduced the concept of free $n$-Lie algebras. The author \cite{Akbarossadat-Basic} then defined the concept of basic commutators of weight $w$ on $d$ generators in $n$-Lie algebras, proved some of their properties, and gave a formula for their number. We review it in what follows.
\begin{theorem}[\cite{Akbarossadat-Basic}]\label{main-formula-weightw}
Let $X=\{x_i\mid x_{i+1}>x_i;~i=1,2,\dots,d\}$ be an ordered set that is a basis for the free $n$-Lie algebra $F$, and let $w$ be a positive integer. Then the number of basic commutators of weight $w$ is
\begin{equation}\label{basicweightw}
l_d^n(w)=\sum_{j=1}^{\alpha_0}\beta_{j^*}\left(\sum_{i=2}^{w-1}\alpha_i{{{{d}\choose{n-1}}}\choose{w-i}}\right),
\end{equation}
where $\alpha_0={{d-1}\choose{n-1}}$; $\alpha_i$ (for $2\leq i\leq w-1$) is the $(i-2)$th coefficient in the binomial expansion of $(a+b)^{w-3}$, that is, $\alpha_i={{w-3}\choose{i-2}}$; and if ${{k-1}\choose{n-1}}+1\leq j\leq {{k}\choose{n-1}}$ (for $k=n-1,n,n+1,n+2,\dots,d-1$), then $j^*={{k-1}\choose{n-1}}+1$ and $\beta_{j^*}=d-n-j^*+2$.
\end{theorem}
\begin{corollary}[\cite{Akbarossadat-Basic}] \label{Cor-l_n^n(w)}
If $n=d$, then
\begin{enumerate}
\item
$l_n^n(1)=n$.
\item
$l_n^n(2)={{n}\choose{n}}=1$.
\item
$l_n^n(3)={{n}\choose{n-1}}=n$.
\item
$l_n^n(4)={{n}\choose{2}}+n$.
\end{enumerate}
\end{corollary}
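For example, for $n=d=3$, Corollary \ref{Cor-l_n^n(w)} gives
\[l_3^3(1)=3,\qquad l_3^3(2)=1,\qquad l_3^3(3)=3,\qquad l_3^3(4)={{3}\choose{2}}+3=6.\]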
\begin{theorem}[\cite{Akbarossadat-Basic}] \label{F^i/F^j}
Let $F$ be a free $n$-Lie algebra and let $F^i$ be the $i$th term of the lower central series of $F$, for each $i\in\Bbb N$. Then $\dfrac{F^i}{F^{i+c}}$ is abelian of dimension $\sum\limits_{j=0}^{c-1}l_d^n(i+j)$, for $c=1,2,\ldots$.
\end{theorem}
\section{$2$-multipliers of $n$-Lie algebras}
\indent
Salemkar and Aslizadeh \cite{nilpotent-multiplier of direct sum-Salemkar-Aslizadeh} presented an explicit formula for the $c$-nilpotent multipliers of the direct sum of Lie algebras whose abelianisations are of finite dimension, and under some conditions, extended it for arbitrary Lie algebras.
They considered two Lie algebras $A$ and $B$ with dimension $d$ and $d'$, respectively. Then they made the abelian Lie algebra $\Gamma_{c+1}(A,B)$ with dimension $l_{d+d'}(c +1)-l_d(c +1)-l_{d'}(c +1)$ (the symbol $\Gamma$ does not indicate the Whitehead quadratic functor or any generalization of it). So they identified the structure of $\Gamma_{c+1}(A,B)$ and determined its dimension. Thus they were able to calculate the $c$-nilpotent multiplier of direct sum of two Lie algebras.
Niroomand and Parvizi \cite{2-nilpotent-Niroomand-Parvizi} chose another method for identifying the $2$-nilpotent multiplier of a direct sum of Lie algebras. First, they considered free presentations $0\longrightarrow R_1\longrightarrow F_1\longrightarrow L_1\longrightarrow 0$ and $0\longrightarrow R_2\longrightarrow F_2\longrightarrow L_2\longrightarrow 0$ of the Lie algebras $L_1$ and $L_2$, respectively. Then, with the help of the free product of $F_1$ and $F_2$, they formed a free presentation $F=F_1*F_2$ of $L_1\oplus L_2$, with $R=R_1+R_2+[F_2,F_1]$. Then they computed the $2$-nilpotent multiplier of $L_1\oplus L_2$ in terms of the $F_i$'s and $R_i$'s as follows:
\[\mathcal{M}^{(2)}(L_1\oplus L_2)=\dfrac{R\cap F^3}{[[R,F], F]}=\dfrac{(R_1+R_2+[F_2,F_1])\cap (F_1* F_2)^3}{[R_1+R_2+[F_2,F_1],F_1*F_2,F_1*F_2]}.\]
They showed that $\mathcal{M}^{(2)}(L_1)\oplus\mathcal{M}^{(2)}(L_2)$ is a direct summand of $\mathcal{M}^{(2)}(L_1\oplus L_2)$, so that $\mathcal{M}^{(2)}(L_1\oplus L_2)\cong\mathcal{M}^{(2)}(L_1)\oplus\mathcal{M}^{(2)}(L_2)\oplus{\bf K}$, for some subalgebra ${\bf K}$ of $\mathcal{M}^{(2)}(L_1\oplus L_2)$. Furthermore, they proved that $F^3=[F_2, F_1, F_1]+[F_2,F_1,F_2]+F_1^3+F_2^3$ and $\ker\alpha \equiv [F_2, F_1,F_1]+[F_2,F_1,F_2] \ (\mathrm{mod}\ [R,F,F])$, where $\alpha:\mathcal{M}^{(2)}(L_1\oplus L_2)\longrightarrow\mathcal{M}^{(2)}(L_1)\oplus\mathcal{M}^{(2)}(L_2)$ is an epimorphism. They also proved that $[F_2, F_1,F_1]+[F_2,F_1,F_2]$ $(\mathrm{mod}\ F^4)$ is an abelian Lie algebra generated by all basic commutators of the form $[y_i,x_j,x_k]$ and $[y_r,x_s,y_t]$, where $y_i,y_r,y_t$ and $x_j,x_k,x_s$ are taken from basic sets $Y$ and $X$ of $F_2$ and $F_1$, respectively. Finally, they showed that the following equation holds:
\[{\bf K}=[F_2, F_1,F_1]+[F_2,F_1,F_2]\equiv (L_2^{ab}\otimes L_1^{ab}\otimes L_1^{ab})\oplus
(L_2^{ab}\otimes L_1^{ab}\otimes L_2^{ab})\quad({\mathrm{mod}} [R, F, F]).\]
Indeed, here we use a new method to prove this decomposition for $n$-Lie algebras. First, we have to prove the following lemma.
\begin{lemma} \label{2-multiplier-exact sequence}
Let $L$ be an $n$-Lie algebra. Then there exists the following exact sequence:
\[0\longrightarrow \mathcal{M}^{(2)}(L)\longrightarrow(L\wedge L)\wedge L\longrightarrow L^3\longrightarrow 0.\]
\end{lemma}
\begin{proof}
Let $0\longrightarrow R\longrightarrow F\longrightarrow L\longrightarrow 0$ be a free presentation of $L$, so that $L\cong F/R$. We know that $(L\wedge L)\wedge L\cong\left(\dfrac{F}{R}\wedge\dfrac{F}{R}\right)\wedge\dfrac{F}{R}$ and that $ \mathcal{M}^{(2)}(L)=\dfrac{R\cap F^3}{\gamma_3(R,F,\dots,F)}$. Consider the following maps:
\begin{align*}
\begin{array}{rcl}
\Psi:\mathcal{M}^{(2)}(L)&\longrightarrow & \left(\dfrac{F}{R}\wedge\dfrac{F}{R}\right)\wedge\dfrac{F}{R}\\ \\
\left[\left[f_1,\dots,f_n\right],f'_2,\dots,f'_n\right]+\gamma_3\left(R,F,\dots,F\right)&\longmapsto & (\bar{f_1}\wedge\dots\wedge \bar{f_n})\wedge \bar{f'_2}\wedge\dots\wedge \bar{f'_n}
\end{array}
\end{align*}
\begin{align*}
\begin{array}{rcl}
\Phi: \left(\dfrac{F}{R}\wedge\dfrac{F}{R}\right)\wedge\dfrac{F}{R}&\longrightarrow & \mathcal{M}^{(2)}(L)\\ \\
(\bar{f_1}\wedge\dots\wedge \bar{f_n})\wedge \bar{f'_2}\wedge\dots\wedge \bar{f'_n}&\longmapsto &[[f_1,\dots,f_n],f'_2,\dots,f'_n]+\gamma_3[R,F,\dots,F] .
\end{array}
\end{align*}
It is easy to check that $\Psi$ is a monomorphism and $\Phi$ is an epimorphism. Also, $\Phi\Psi=0$ and $\ker\Phi\subseteq \mathrm{Im}\Psi$, and hence $\ker\Phi=\mathrm{Im}\Psi$.
\end{proof}
The next theorem plays an essential role in proving the main results of this paper. In this theorem, we get the $2$-nilpotent multiplier of the direct sum of two $n$-Lie algebras.
\begin{theorem} \label{2-multiplier of direct sum}
Let $L$ and $M$ be two finite-dimensional $n$-Lie algebras. Then
\begin{equation*}
\mathcal{M}^{(2)}(L\oplus M)\cong \mathcal{M}^{(2)}(L)\oplus \mathcal{M}^{(2)}(M)\oplus ((L^{ab}\otimes_{\mathrm{mod}}^n L^{ab}) \otimes_{\mathrm{mod}}^n M^{ab})\oplus ((M^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\otimes_{\mathrm{mod}}^n L^{ab}).
\end{equation*}
\end{theorem}
\begin{proof}
By Lemma \ref{2-multiplier-exact sequence}, we have
\begin{align}
&0\longrightarrow \mathcal{M}^{(2)}(L\oplus M)\longrightarrow((L\oplus M)\wedge (L\oplus M))\wedge (L\oplus M)\longrightarrow (L\oplus M)^3\longrightarrow 0,\label{exact-sequ-1}\\
&0\longrightarrow \mathcal{M}^{(2)}(L)\longrightarrow(L\wedge L)\wedge L\longrightarrow L^3\longrightarrow 0,\label{exact-sequ-2}\\
&0\longrightarrow \mathcal{M}^{(2)}(M)\longrightarrow(M\wedge M)\wedge M\longrightarrow M^3\longrightarrow 0.\label{exact-sequ-3}
\end{align}
On the other hand,
\begin{align}
&((L\oplus M)\wedge (L\oplus M))\wedge (L\oplus M)\nonumber\\
&\cong \left((L\wedge L)\oplus (L^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\oplus (M\wedge M)\right)\wedge(L\oplus M)\nonumber\\
&\cong ((L\wedge L)\wedge (L\oplus M))\oplus \left(\left(L^{ab}\otimes_{\mathrm{mod}}^n M^{ab}\right)\wedge (L\oplus M)\right)\oplus \left((M\wedge M)\wedge (L\oplus M)\right)\nonumber\\
&\cong (((L\wedge L)\wedge L)\oplus ((L\wedge L)\wedge M)\oplus \left((( L^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\wedge L)\oplus ((L^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\wedge M)\right)\nonumber\\
&\quad\oplus(((M\wedge M)\wedge L)\oplus ((M\wedge M)\wedge M))={\mathbf{K}}.\label{K}
\end{align}
Since $L$ and $M$ act trivially on $L^{ab}$ and $M^{ab}$, respectively (and conversely), we have the following isomorphisms:
\begin{align}
&(L^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\wedge L^{ab}\cong (L^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\otimes_{\mathrm{mod}}^n L^{ab} \cong (L^{ab}\otimes_{\mathrm{mod}}^n L^{ab})\otimes_{\mathrm{mod}}^n M^{ab},\nonumber\\
&(L^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\wedge M^{ab}\cong (L^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\otimes_{\mathrm{mod}}^n M^{ab} \cong (M^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\otimes_{\mathrm{mod}}^n L^{ab}.\label{isomorohisms}
\end{align}
We know that
\begin{equation}\label{(L+M)^3}
(L\oplus M)^3\cong L^3\oplus M^3.
\end{equation}
Moreover, we have the following exact sequences:
\begin{align}
&0\rightarrow (L^{ab}\otimes_{\mathrm{mod}}^n L^{ab})\otimes_{\mathrm{mod}}^n M^{ab}\rightarrow ((L\wedge L)\wedge M)\oplus (L^{ab}\otimes_{\mathrm{mod}}^n L^{ab})\otimes_{\mathrm{mod}}^n M^{ab}\rightarrow 0\rightarrow 0, \label{exact-sequ-6}\\
&0\rightarrow (M^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\otimes_{\mathrm{mod}}^n L^{ab}\rightarrow ((M\wedge M)\wedge L)\oplus (M^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\otimes_{\mathrm{mod}}^n L^{ab}\rightarrow 0\rightarrow 0. \label{exact-sequ-7}
\end{align}
Now, by taking the direct sum of the corresponding terms in sequences \eqref{exact-sequ-2}, \eqref{exact-sequ-3}, \eqref{exact-sequ-6}, and \eqref{exact-sequ-7} and applying the isomorphisms \eqref{isomorohisms} and equation \eqref{(L+M)^3}, we obtain the following exact sequence:
\begin{equation}\label{exact-sequ-8}
0\rightarrow {\mathbf{P}}\rightarrow\mathbf{K}\rightarrow L^3\oplus M^3\rightarrow 0,
\end{equation}
where ${\mathbf{P}}=\mathcal{M}^{(2)}(L)\oplus \mathcal{M}^{(2)}(M)\oplus ( (L^{ab}\otimes_{\mathrm{mod}}^n L^{ab})\otimes_{\mathrm{mod}}^n M^{ab})\oplus ((M^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\otimes_{\mathrm{mod}}^n L^{ab})$. \\
Since the sequences \eqref{exact-sequ-1} and \eqref{exact-sequ-8} are exact, by comparing them with each other, we conclude that
\begin{equation*}
\mathcal{M}^{(2)}(L\oplus M)\cong \mathcal{M}^{(2)}(L)\oplus \mathcal{M}^{(2)}(M)\oplus ((L^{ab}\otimes_{\mathrm{mod}}^n L^{ab}) \otimes_{\mathrm{mod}}^n M^{ab})\oplus ((M^{ab}\otimes_{\mathrm{mod}}^n M^{ab})\otimes_{\mathrm{mod}}^n L^{ab}).
\end{equation*}
\end{proof}
Note that since every one-dimensional $n$-Lie algebra $L$ is abelian and hence isomorphic to $A(1)$, the definition of the $c$-nilpotent multiplier of $n$-Lie algebras gives $\mathcal{M}^{(2)}(L)=0$.
In what follows, we state a result that is also known for Lie algebras.
\begin{theorem} \label{2-multiplier-abelain}
Let $L$ be an abelian $n$-Lie algebra with finite dimension $d$. Then
$\dim\mathcal{M}^{(c)}(L) = l^n_d(c + 1)$.
In particular,
$\dim\mathcal{M}(L) =l_d^n(2)={{d}\choose{n}}$, which for $n=2$ equals $\dfrac{1}{2}d(d-1)$.
\end{theorem}
\begin{proof}
Let $F$ be a free $n$-Lie algebra on $d$ elements. By Theorem \ref{F^i/F^j}, $F/F^2$ is an
abelian $n$-Lie algebra of dimension $d$, and so it is isomorphic to $L$ and hence $R=F^2$. Thus
\[\mathcal{M}^{(c)}(L)=\dfrac{R\cap \gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\dots,F)}=\dfrac{F^2\cap \gamma_{c+1}(F)}{\gamma_{c+1}(F^2,F,\dots,F)}=\dfrac{\gamma_{c+1}(F)}{\gamma_{c+2}(F)}.\]
Hence $\dim\mathcal{M}^{(c)}(L)=\dim \gamma_{c+1}(F)/\gamma_{c+2}(F)=l_d^n(c+1)$,
which gives the result.
\end{proof}
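For instance, for the abelian $3$-Lie algebra $L$ of dimension $d=4$, the basic commutators of weight $2$ are exactly the brackets $[x_{i_1},x_{i_2},x_{i_3}]$ with $i_1<i_2<i_3$, and hence
\[\dim\mathcal{M}(L)=l_4^3(2)={{4}\choose{3}}=4.\]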
Eshrati, Saeedi, and Darabi \cite{Eshrati-Saeedi-Darabi} proved that every finite-dimensional nilpotent $n$-Lie algebra with one-dimensional derived subalgebra decomposes into the direct sum of a Heisenberg $n$-Lie algebra and an abelian $n$-Lie algebra.
So, in order to compute the $2$-nilpotent multiplier of each such $n$-Lie algebra, we must first calculate the $2$-nilpotent multiplier of each Heisenberg $n$-Lie algebra. In the following two theorems, we identify it.
\begin{theorem} \label{2-multiplier-H(n,1)}
Let $H(n,1)$ be a Heisenberg $n$-Lie algebra of dimension $n+1$. Then
\[\mathcal{M}^{(2)}(H(n,1))\cong A\left(\dfrac{n^2+3n}{2}\right).\]
\end{theorem}
\begin{proof}
Since $H(n,1)$ is nilpotent of class $2$, it has the free presentation $F/F^3$, so that $R=F^3$, where $F$ is a free $n$-Lie algebra on $n$ letters. By the definition of the $2$-nilpotent multiplier of $n$-Lie algebras, we have
\begin{align*}
\mathcal{M}^{(2)}(H(n,1))=\dfrac{R\cap F^3}{\gamma_3(R,F,\dots,F)}=\dfrac{F^3}{[[F^3,F,\dots,F],F,\dots,F]}=\dfrac{F^3}{F^5}.
\end{align*}
On the other hand, by Theorem \ref{F^i/F^j}, we know that
$\dim(F^3/F^5)=l_n^n(3)+l_n^n(4)$,
where $l_n^n(i)$ ($i=3,4$) is the number of basic commutators of weight $i$ on $n$ letters in free $n$-Lie algebras. Since $\mathcal{M}^{(2)}(H(n,1))$ is abelian,
\begin{align*}
\mathcal{M}^{(2)}(H(n,1))\cong A(l_n^n(3)+l_n^n(4))&=A\left(n+{{n}\choose{2}}+n\right)\\
&=A\left(2n+\dfrac{n(n-1)}{2}\right)\\
&=A\left(\dfrac{n^2+3n}{2}\right).
\end{align*}
\end{proof}
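For example, for $n=2$, the algebra $H(2,1)$ is the classical three-dimensional Heisenberg Lie algebra, and Theorem \ref{2-multiplier-H(n,1)} gives
\[\mathcal{M}^{(2)}(H(2,1))\cong A\left(\dfrac{4+6}{2}\right)=A(5),\]
corresponding to the $l_2^2(3)+l_2^2(4)=2+3=5$ basic commutators of weights $3$ and $4$ on two letters.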
We know that a Heisenberg $n$-Lie algebra $H(n,m)$ is capable if and only if $m=1$; in particular, if $m\geq2$, then $H(n,m)$ is not capable. Thus the following theorem can be proved for noncapable Heisenberg $n$-Lie algebras.
\begin{theorem} \label{2-multiplier-H(n,m)}
Let $H(n,m)$ be a Heisenberg $n$-Lie algebra of dimension $mn+1$ with $m\geq2$. Then
\[\mathcal{M}^{(2)}(H(n,m))\cong A(l_{mn}^n(3)).\]
\end{theorem}
\begin{proof}
Since $m\geq2$, $H(n,m)$ is not capable, and hence by Lemma \ref{Z^(L)=L^2=Z(L)}, we have $Z^\wedge(H(n,m))=H^2(n,m)=Z(H(n,m))$.
On the other hand, if we put $I=Z^\wedge(H(n,m))$, then by Proposition \ref{lem2.3}, we have
$\mathcal{M}^{(2)}(H(n,m))\cong \mathcal{M}^{(2)}(H(n,m)/H^2(n,m))$. Since $H(n,m)/H^2(n,m)$ is abelian of dimension $mn$, by Theorem \ref{2-multiplier-abelain} for $c=2$, we have
\[\mathcal{M}^{(2)}(H(n,m)/H^2(n,m))\cong A(l_{mn}^n(3)).\]
Therefore,
\[\mathcal{M}^{(2)}(H(n,m))\cong A(l_{mn}^n(3)).\]
\end{proof}
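For instance, for $n=2$ and $m=2$, the algebra $H(2,2)$ is the five-dimensional Heisenberg Lie algebra, and Theorem \ref{2-multiplier-H(n,m)} gives $\mathcal{M}^{(2)}(H(2,2))\cong A(l_4^2(3))$; by the Witt formula, the number of basic commutators of weight $3$ on $4$ letters in a free Lie algebra is $l_4^2(3)=\frac{1}{3}(4^3-4)=20$, and hence $\mathcal{M}^{(2)}(H(2,2))\cong A(20)$.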
In the following theorem, we determine the $2$-nilpotent multiplier of $n$-Lie algebras with one-dimensional derived subalgebra.
\begin{theorem} \label{2-multiplier-n-Lie algebras}
Let $L$ be a nilpotent $n$-Lie algebra of dimension $d$ with $\dim L^2=1$, so that $L\cong H(n,m)\oplus F(d-mn-1)$ for some $m\geq1$ by Lemma \ref{L=H+A}. Then
\[\mathcal{M}^{(2)}(L)=\begin{cases}
{{A\left(\dfrac{n^2+3n}{2}+l_{d-n-1}^n(3)+\displaystyle\sum_{i=1}^{n-1}n^{ni}(d-n-1)^{n-i}+(d-n-1)^{ni}n^{n-i}\right)}}, \\
\qquad\hspace{11.5cm} m=1, \\
{{A\left(l_{mn}^n(3)+l_{d-mn-1}^n(3)+\displaystyle\sum_{i=1}^{n-1}(mn)^{ni}(d-mn-1)^{n-i}+(d-mn-1)^{ni}(mn)^{n-i}\right)}} ,\\
\qquad\hspace{11.5cm} m\geq2.
\end{cases}\]
\end{theorem}
\begin{proof}
Since $\dim L^2=1$, by Lemma \ref{L=H+A}, $L\cong H(n,m)\oplus A(d-mn-1)$. Thus by Theorem \ref{2-multiplier of direct sum}, we have
\begin{align}
\mathcal{M}^{(2)}(L)&\cong \mathcal{M}^{(2)}(H(n,m)\oplus A(d-mn-1))\nonumber\\
&\cong \mathcal{M}^{(2)}(H(n,m))\oplus \mathcal{M}^{(2)}(A(d-mn-1))\nonumber\\
&\quad\oplus \left((H^{ab}(n,m)\otimes_{\mathrm{mod}}^n A(d-mn-1))\otimes_{\mathrm{mod}}^n H^{ab}(n,m) \right)\nonumber\\
&\quad\oplus \left((H^{ab}(n,m)\otimes_{\mathrm{mod}}^n A(d-mn-1))\otimes_{\mathrm{mod}}^n A(d-mn-1)\right).\label{equ1}
\end{align}
Now, we consider the following two cases:
\begin{enumerate}
\item[(a).]
Assume that $m=1$. By Theorems \ref{2-multiplier-abelain} and \ref{2-multiplier-H(n,1)}, we have
\begin{align*}
\mathcal{M}^{(2)}(L)&\cong A(l_n^n(3)+l_n^n(4)) \oplus A(l_{d-n-1}^n(3))\\
&\quad\oplus A\left(\sum_{i=1}^{n-1}n^{ni}(d-n-1)^{n-i}\right)\\
&\quad\oplus A\left(\sum_{i=1}^{n-1}(d-n-1)^{ni}n^{n-i}\right) \\
&\cong A\left(l_n^n(3)+l_n^n(4)+l_{d-n-1}^n(3)+\sum_{i=1}^{n-1}n^{ni}(d-n-1)^{n-i}+(d-n-1)^{ni}n^{n-i}\right)\\
& \cong A\left(\dfrac{n^2+3n}{2}+l_{d-n-1}^n(3)+\sum_{i=1}^{n-1}n^{ni}(d-n-1)^{n-i}+(d-n-1)^{ni}n^{n-i}\right).
\end{align*}
\item[(b).]
Now, suppose that $m\geq2$. By Theorems \ref{2-multiplier-abelain} and \ref{2-multiplier-H(n,m)}, we have
\begin{align*}
\mathcal{M}^{(2)}(L)&\cong A\left(l_{mn}^n(3)
\right) \oplus A\left(l_{d-mn-1}^n(3)\right)\\
&\quad\oplus A\left(\sum_{i=1}^{n-1}(mn)^{ni}(d-mn-1)^{n-i}\right)\\
&\quad\oplus A\left(\sum_{i=1}^{n-1}(d-mn-1)^{ni}(mn)^{n-i}\right) \\
&\cong A\left(l_{mn}^n(3)+l_{d-mn-1}^n(3)+\sum_{i=1}^{n-1}(mn)^{ni}(d-mn-1)^{n-i}+(d-mn-1)^{ni}(mn)^{n-i}\right).
\end{align*}
\end{enumerate}
\end{proof}
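For instance, for ordinary Lie algebras ($n=2$) with $m=1$, a direct substitution into the first case of Theorem \ref{2-multiplier-n-Lie algebras} (using $\frac{n^2+3n}{2}=5$ and keeping the single summand $i=1$) gives
\[\mathcal{M}^{(2)}(L)\cong A\left(5+l_{d-3}^2(3)+4(d-3)+2(d-3)^2\right).\]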
\begin{proposition} \label{Q_3}
Let $L$ be an $n$-Lie algebra with free representation $F/R$ and let $M$ be an ideal of $L$, such that $M\cong S/R$, for some ideal $S$ of $F$. Then $Q_3=\dfrac{\gamma_3(S,F,\dots,F)}{\gamma_3(R,F,\dots,F)}$ is an isomorphic image of $(M\wedge L)\wedge L$. Moreover, if $M$ is an $r$-central ideal, that is, $M\subseteq Z_r(L)$, then $Q_3$ is an isomorphic image of $(M\wedge K)\wedge K$, where $K=L/\gamma_{r+1}(L)$.
\end{proposition}
\begin{proof}
We know that
\[Q_3=\dfrac{\gamma_3(S,F,\dots,F)}{\gamma_3(R,F,\dots,F)}=\dfrac{[[
S,F,\dots,F],F,\dots,F]}{[[R,F,\dots,F],F,\dots,F]}.\]
Since $L\cong F/R$, there exists an epimorphism $g:F\longrightarrow L$ with $\ker g=R$, and hence $\bar{g}:F/R\longrightarrow L$ is an isomorphism. Consider the map $\alpha:(M\wedge L)\wedge L\rightarrow Q_3$ that sends each element $\left(\sum_{j=1}^{n-1}(m_1\wedge \dots\wedge m_j\wedge l_{j+1}\wedge \dots\wedge l_n)\wedge l_{2}'\wedge \dots\wedge l_n'\right)$ of $(M\wedge L)\wedge L$ to $\sum_{j=1}^{n-1}[[s_1,\dots,s_j,f_{j+1},\dots,f_n],f_{2}',\dots,f_n']+\gamma_3(R,F,\dots,F)$,
where $\bar{g}^{-1}(m_i)=s_i+R$, $\bar{g}^{-1}(l_{k})=f_k+R$, and $\bar{g}^{-1}(l_{k}')=f_k'+R$, for $1\leq i\leq j$, $j+1\leq k\leq n$ (and $2\leq k\leq n$ for the primed elements), $1\leq j\leq n-1$. Since $\bar{g}$ is an isomorphism and the bracket of an $n$-Lie algebra is $n$-linear, $\alpha$ is a well-defined homomorphism of $n$-Lie algebras. Also, it is easy to check that $\alpha$ is onto. \\
The second part can be proved with a similar argument.
\end{proof}
The following proposition is an important tool to prove the main theorem of this section.
\begin{proposition} \label{lem2.3}
Let $L$ be an $n$-Lie algebra and let $M$ be an ideal of it. Then
\begin{equation}\label{inequality-(1)}
\dim\mathcal{M}^{(2)}(L/M)\leq \dim\mathcal{M}^{(2)}(L)+\dim\dfrac{M\cap L^2}{\gamma_3(M,L,\dots, L)}.
\end{equation}
Moreover, if $M$ is a $2$-central subalgebra (i.e., $M\subseteq Z_2(L)$), then
\begin{align}
&(M\wedge L)\wedge L\longrightarrow\mathcal{M}^{(2)}(L)\longrightarrow \mathcal{M}^{(2)}(L/M)\longrightarrow M\cap L^3\longrightarrow 0,\label{inequality-(2)}\\
&\dim\mathcal{M}^{(2)}(L)+\dim M\cap L^3\leq \dim\mathcal{M}^{(2)}(L/M)+\dim (M\otimes L/L^3)\otimes L/L^3.\label{inequality-(3)}
\end{align}
\end{proposition}
\begin{proof}
With the notation of Proposition \ref{Q_3}, and since $Q_3\subseteq \mathcal{M}^{(2)}(L)$, we have the following exact sequence:
\begin{equation*}
(M\wedge L)\wedge L\stackrel{\alpha}{\longrightarrow} \mathcal{M}^{(2)}(L)\stackrel{\beta}{\longrightarrow} \mathcal{M}^{(2)}(L/M)\stackrel{\gamma}{\longrightarrow} \dfrac{M\cap L^3}{\gamma_3(M,L,\dots,L)}\longrightarrow 0,
\end{equation*}
where $\alpha$ is defined in Proposition \ref{Q_3}. Since $R\subseteq S$, we have
\[R\cap F^3\subseteq S\cap F^3,\qquad \gamma_3(R,F,\dots,F)\subseteq \gamma_3(S,F,\dots,F).\]
Hence $\beta:\mathcal{M}^{(2)}(L)=\dfrac{R\cap F^3}{\gamma_3(R,F,\dots,F)}\longrightarrow \mathcal{M}^{(2)}(L/M)=\dfrac{S\cap F^3}{\gamma_3(S,F,\dots,F)}$ is well defined, and it is easy to check that $\beta$ is a homomorphism with $\ker\beta=Q_3$. Also, since $\mathcal{M}^{(2)}(L/M)=S\cap F^3/\gamma_3(S,F,\ldots,F)$, we may define $\gamma:\mathcal{M}^{(2)}(L/M)\longrightarrow M\cap L^3/\gamma_3(M,L,\ldots,L)$ by $s+\gamma_3(S,F,\dots,F)\longmapsto g(s)+\gamma_3(M,L,\dots,L)$. Since $g$ is an epimorphism, so is $\gamma$. Therefore,
the inequality \eqref{inequality-(1)} is obtained. Furthermore, if $M$ is $2$-central, then $\gamma_3(M,L,\dots,L)=0$ and $(M\wedge L)\wedge L\cong (M\otimes L/L^3)\otimes L/L^3$. Hence we have the exact sequence \eqref{inequality-(2)} and the inequality \eqref{inequality-(3)}.
\end{proof}
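Explicitly, exactness of the above sequence gives
\[\dim\mathcal{M}^{(2)}(L/M)=\dim\mathrm{im}\,\beta+\dim\dfrac{M\cap L^3}{\gamma_3(M,L,\dots,L)}\leq\dim\mathcal{M}^{(2)}(L)+\dim\dfrac{M\cap L^2}{\gamma_3(M,L,\dots,L)},\]
since $\mathrm{im}\,\beta\cong\mathcal{M}^{(2)}(L)/Q_3$ and $L^3\subseteq L^2$, which is exactly the inequality \eqref{inequality-(1)}.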
The next theorem is the main result of this section.
\begin{theorem} \label{2-multiplier-dimL^2=k>1}
Let $L$ be a nilpotent $n$-Lie algebra of dimension $d$, with $\dim L^2=k\geq 1$. Then
\[\dim\mathcal{M}^{(2)}(L)\leq \dfrac{n^2+3n}{2}+l_{d-n-k}^n(3)+\displaystyle\sum_{i=1}^{n-1}n^{ni}(d-n-k)^{n-i}+(d-n-k)^{ni}n^{n-i}+(d-k)^{2n-2}-k+1.\]
\end{theorem}
\begin{proof}
In Theorem \ref{2-multiplier-n-Lie algebras}, it can be seen that in the first case, that is, $m=1$, the dimension of $\mathcal{M}^{(2)}(L)$ is larger than in the second case, that is, $m>1$. Also, in the case $m>1$, the dimension of $\mathcal{M}^{(2)}(L)$ is decreasing with respect to $m$. Now, we proceed by induction on $k$. For $k=1$, the result follows from Theorem \ref{2-multiplier-n-Lie algebras}. Now, let $k=2$ and let $M$ be a one-dimensional central ideal of $L$. Then $\dim L/M=d-1$ and $\dim (L/M)^2=\dim L^2/M=1$. Also, since $M\subseteq Z(L)\subseteq Z_2(L)$, $M$ and $L/L^3$ act on each other trivially. Hence $(M\otimes L/L^3)\otimes L/L^3\cong (M\otimes_{\mathrm{mod}}^n (L/L^3)^{ab})\otimes_{\mathrm{mod}}^n (L/L^3)^{ab}$. So by inequality \eqref{inequality-(3)} of Proposition \ref{lem2.3} and the first step of the induction, we have
\begin{align*}
\dim\mathcal{M}^{(2)}(L)&\leq \dim\mathcal{M}^{(2)}(L/M)+\dim (M\otimes L/L^3)\otimes L/L^3-\dim M\cap L^3\\
&\leq \dim\mathcal{M}^{(2)}(L/M)+\dim (M\otimes_{\mathrm{mod}}^n (L/L^3)^{ab})\otimes_{\mathrm{mod}}^n (L/L^3)^{ab}-\dim M\cap L^3\\
&\leq \dim\mathcal{M}^{(2)}(L/M)+\dim (M\otimes_{\mathrm{mod}}^n \dfrac{L/L^3}{(L/L^3)^2})\otimes_{\mathrm{mod}}^n \dfrac{L/L^3}{(L/L^3)^2}-\dim M\cap L^3\\
&\leq \dim\mathcal{M}^{(2)}(L/M)+\dim (M\otimes_{\mathrm{mod}}^n \dfrac{L/L^3}{L^2/L^3})\otimes_{\mathrm{mod}}^n \dfrac{L/L^3}{L^2/L^3}-\dim M\cap L^3\\
&\leq \dim\mathcal{M}^{(2)}(L/M)+\dim (M\otimes_{\mathrm{mod}}^n L/L^2)\otimes_{\mathrm{mod}}^n L/L^2-\dim M\cap L^3\\
&\leq \dfrac{n^2+3n}{2}+l_{(d-1)-n-1}^n(3)\\
&\quad+\displaystyle\sum_{i=1}^{n-1}n^{ni}((d-1)-n-1)^{n-i}+((d-1)-n-1)^{ni}n^{n-i}
+(d-2)^{2n-2}-1.
\end{align*}
Now suppose that the inequality holds for $k-1$, and let $M$ be a one-dimensional central ideal of $L$, with $\dim L/M=d-1$ and $\dim L^2=k$. Then
\begin{align*}
\dim\mathcal{M}^{(2)}(L)&\leq \dim\mathcal{M}^{(2)}(L/M)-\dim(M\cap L^3)+\dim (M\otimes_{\mathrm{mod}}^n L/L^2)\otimes_{\mathrm{mod}}^n L/L^2\\
&\leq \dfrac{n^2+3n}{2}+l_{(d-1)-n-(k-1)}^n(3)\\
&\quad+\displaystyle\sum_{i=1}^{n-1}n^{ni}((d-1)-n-(k-1))^{n-i}+((d-1)-n-(k-1))^{ni}n^{n-i}\\
&\quad-\dim(M\cap L^3)+\dim (M\otimes_{\mathrm{mod}}^n L/L^2)\otimes_{\mathrm{mod}}^n L/L^2
\end{align*}
\begin{align*}
\qquad\qquad\qquad &\leq \dfrac{n^2+3n}{2}+l_{d-n-k}^n(3)\\
&\quad+\displaystyle\sum_{i=1}^{n-1}n^{ni}(d-n-k)^{n-i}+(d-n-k)^{ni}n^{n-i}-k+1+(d-k)^{2n-2}.
\end{align*}
\end{proof}
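For ordinary Lie algebras ($n=2$), the bound of Theorem \ref{2-multiplier-dimL^2=k>1} specializes, again by direct substitution ($\frac{n^2+3n}{2}=5$, $2n-2=2$, and the sum reducing to its $i=1$ term), to
\[\dim\mathcal{M}^{(2)}(L)\leq 5+l_{d-2-k}^2(3)+4(d-2-k)+2(d-2-k)^2+(d-k)^{2}-k+1.\]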
\section{$c$-Capability of $n$-Lie algebras}
\indent
Detection of $2$-capable $n$-Lie algebras is one of the applications of $2$-nilpotent multipliers. Hence in this section, we are going to state the conditions for detecting $2$-capable $n$-Lie algebras.
\begin{proposition} \label{Pro3.1}
Let $L$ be an $n$-Lie algebra. Then $L$ is $2$-capable if and only if $Z_2^*(L)=0$.
\end{proposition}
\begin{proof}
If $L$ is $2$-capable, then there exists an $n$-Lie algebra $H$ such that $L\cong H/Z_2(H)$. Assume that $F'/R'$ is a free representation of $H$. Then $Z_2(H)\cong S'/R'$, where $R'\subseteq S'\unlhd F'$. Thus
\[L\cong \dfrac{H}{Z_2(H)}\cong \dfrac{F'/R'}{S'/R'}\cong \dfrac{F'}{S'}.\]
On the other hand, since $S'\unlhd F'$, we have $\gamma_3(S',F',\dots,F')\subseteq S'$ and $\dfrac{F'/\gamma_3(S',F',\dots,F')}{S'/\gamma_3(S',F',\dots,F')}\cong F'/S'$, and the map $\pi:F'/\gamma_3(S',F',\dots,F')\rightarrow F'/S'$ is the natural epimorphism. Hence by Theorem \ref{natural epimorphism}, $Z_2^*(L)=\pi\left(Z_2(F'/\gamma_3(S',F',\dots,F'))\right)$. Therefore, it is enough to show that $Z_2(F'/\gamma_3(S',F',\dots,F'))\subseteq \ker\pi$. Since $S'/\gamma_3(S',F',\dots,F')=Z_2(F'/\gamma_3(S',F',\dots,F'))$ and $\ker\pi=S'/\gamma_3(S',F',\dots,F')$, we get $Z_2(F'/\gamma_3(S',F',\dots,F'))\subseteq \ker\pi$, and hence
$\pi\left(Z_2(F'/\gamma_3(S',F',\dots,F'))\right)=0$. Thus $Z_2^*(L)=0$.
Now, let $Z_2^*(L)=0$, and suppose that $F/R$ is a free representation of $L$. We have $\gamma_3(R,F,\dots,F)\subseteq R\unlhd F$, and hence $\pi:F/\gamma_3(R,F,\dots,F)\rightarrow F/R$ is an epimorphism with $\ker\pi=R/\gamma_3(R,F,\dots,F)$. Also,
\begin{equation}\label{equ3}
\dfrac{F/\gamma_3(R,F,\dots,F)}{\ker \pi}\cong\dfrac{F/\gamma_3(R,F,\dots,F)}{R/\gamma_3(R,F,\dots,F)}\cong \dfrac{F}{R}=\mathrm{Im} \pi=L.
\end{equation}
Put $H=F/\gamma_3(R,F,\dots,F)$. Using $Z_2^*(L)=0$ and the definition of $Z_2(H)$, it is easy to check that
\begin{equation}\label{equ4}
Z_2(H)=Z_2(F/\gamma_3(R,F,\dots,F))=\dfrac{R}{\gamma_3(R,F,\dots,F)}.
\end{equation}
Therefore, $L\cong H/Z_2(H)$ and hence is $2$-capable.
\end{proof}
\begin{theorem} \label{Theo3.2}
Let $L$ be an $n$-Lie algebra and let $M$ be its ideal such that $M\subseteq Z_2^*(L)$. Then the natural homomorphism $\psi:\mathcal{M}^{(2)}(L)\longrightarrow \mathcal{M}^{(2)}(L/M)$ is one-to-one.
\end{theorem}
\begin{proof}
Suppose that $F/R$ and $S/R$ are free representations of $L$ and $M$, respectively, for some $S\unlhd F$ with $R\subseteq S$. Then $L/M\cong F/S$, and we have
\[\mathcal{M}^{(2)}(L)=\dfrac{R\cap F^3}{\gamma_3(R,F,\dots,F)},\qquad \mathcal{M}^{(2)}(L/M)=\dfrac{S\cap F^3}{\gamma_3(S,F,\dots,F)}.\]
Since $R\subseteq S$, we have $R\cap F^3\subseteq S\cap F^3$ and
\begin{equation}\label{equ-1}
\gamma_3(R,F,\dots,F)\subseteq \gamma_3(S,F,\dots,F)\subseteq S.
\end{equation}
Hence we define the homomorphism $\psi:(R\cap F^3)/\gamma_3(R,F,\dots,F)\longrightarrow (S\cap F^3)/\gamma_3(S,F,\dots,F)$ by $\psi(x+\gamma_3(R,F,\dots,F))=x+\gamma_3(S,F,\dots,F)$.
On the other hand, since $M\subseteq Z_2^*(L)$, we have $S/R\subseteq Z_2(F/R)$, or, equivalently,
\begin{equation}\label{equ-3}
[[S,F,\dots,F],F,\dots,F]=\gamma_3(S,F,\dots,F)\subseteq R.
\end{equation}
By equations \eqref{equ-1} and \eqref{equ-3}, we obtain
\[\gamma_3(R,F,\dots,F)\subseteq \gamma_3(S,F,\dots,F)\subseteq R\cap S=R.\]
Therefore one of the two situations $\gamma_3(S,F,\dots,F)=R$ or $\gamma_3(R,F,\dots,F)=\gamma_3(S,F,\dots,F)$ can occur. If $\gamma_3(S,F,\dots,F)=R$, then we must have $S=R$, which means $M=S/R=0$, a contradiction. Thus $\gamma_3(R,F,\dots,F)=\gamma_3(S,F,\dots,F)$. Hence for an arbitrary $x+\gamma_3(R,F,\dots,F)\in\ker\psi$, we have
\[\psi(x+\gamma_3(R,F,\dots,F))=x+\gamma_3(S,F,\dots,F)=\gamma_3(S,F,\dots,F).\]
So $x\in\gamma_3(S,F,\dots,F)=\gamma_3(R,F,\dots,F)$. Thus $\ker\psi=0$.
\end{proof}
In the following theorem, we prove that the necessary condition for $H(n,m)$ to be a $2$-capable $n$-Lie algebra is that $m=1$.
\begin{theorem} \label{necessary condition for capabilitty}
If the Heisenberg $n$-Lie algebra $H(n,m)$ is $2$-capable, then $m=1$.
\end{theorem}
\begin{proof}
Suppose that $H(n,m)$ is a $2$-capable Heisenberg $n$-Lie algebra. Then there is an $n$-Lie algebra $K$ such that $H(n,m)\cong K/Z_2(K)$, and we have
\[H(n,m)\cong K/Z_2(K)\cong \dfrac{K/Z(K)}{Z_2(K)/Z(K)}=\dfrac{K/Z(K)}{Z(K/Z(K))}.\]
Thus putting $P=K/Z(K)$, we conclude that $H(n,m)=P/Z(P)$. Hence $H(n,m)$ is capable, and so by Theorem \ref{Heisenberg-capable}, $m=1$.
\end{proof}
In the following result, we show that there are no $2$-capable Heisenberg $n$-Lie algebras for $n>2$.
\begin{theorem} \label{non-capability of H(n,1)}
The Heisenberg $n$-Lie algebra $H(n,1)$ is not $2$-capable, for $n\geq3$.
\end{theorem}
\begin{proof}
Assume that the Heisenberg $n$-Lie algebra $H(n,1)$ is $2$-capable for some $n\geq3$. So there is an $n$-Lie algebra $H$ such that $H(n,1)\cong H/Z_2(H)$, and by Proposition \ref{Pro3.1}, $Z_2^*(H(n,1))=0$. By Theorem \ref{natural epimorphism}, $\pi\left(Z_2(F/\gamma_3(R,F,\dots,F))\right)=0$. That is,
\[Z_2(F/\gamma_3(R,F,\dots,F))\subseteq \ker\pi=K/\gamma_3(R,F,\dots,F),\]
for some ideal $K$ of $F$. Hence $(Z_2(F)+\gamma_3(R,F,\dots,F))/\gamma_3(R,F,\dots,F)\subseteq K/\gamma_3(R,F,\dots,F)$, and so $Z_2(F)\subseteq K$.
On the other hand, $Z_2^*(H(n,1))=\pi\left(Z_2(F/\gamma_3(R,F,\dots,F))\right)=0$ implies that
\[Z_2(F)+\gamma_3(R,F,\dots,F)\subseteq R.\]
Since $c$-central ideals are characteristic, it follows that $Z_2(F)\subseteq R$.
Therefore, $Z_2(F)\subseteq K\cap R$, and so $Z_2(F/R)=0$, which is a contradiction.
\end{proof}
According to the above discussion, we have the following corollary.
\begin{corollary}
The Heisenberg $n$-Lie algebra $H(1)$ is the only $2$-capable Heisenberg $n$-Lie algebra.
\end{corollary}
\begin{proof}
It is proved that $H(2,1)=H(1)$ is $2$-capable; see \cite[Theorem 3.3]{Niroomand-Parvizi-M^2(L)}. In view of Theorems \ref{non-capability of H(n,1)} and \ref{necessary condition for capabilitty}, the proof is completed.
\end{proof}
The following theorem is similar to \cite[Lemma 4.2]{Moneyhun}. Moneyhun proved that if $\dim L/Z(L)=d$, then $\dim L^2=\dim\gamma_2(L)\leq\dfrac{1}{2}d(d-1)=l_d(2)$.
\begin{theorem}
Let $L$ be an $n$-Lie algebra such that $\dim L/Z_2(L)=d$. Then the dimension of $\gamma_{3}(L)$ is at most $l_d^n(3)$.
\end{theorem}
\begin{proof}
Suppose that $F/R$ is a free representation of $L$ and that $\{x_1,x_2,\dots,x_d\}$ is a basis for $L/Z_2(L)$. Thus every element of $L$ can be written as $z+\sum_{i=1}^dc_ix_i$, where $z\in Z_2(L)$ and the $c_i$ are scalars. Hence
\begin{align*}
\dim\gamma_3(L)=\dim\gamma_3(F/R)&=\dim \dfrac{F^3+R}{R}
=\dim \dfrac{F^3}{F^3\cap R}
=\dim\dfrac{F^3/F^4}{(F^3\cap R)/F^4}
\leq \dim\dfrac{F^3}{F^4}.
\end{align*}
By Theorem \ref{F^i/F^j}, we know that a basis of $F^3/F^4$ is the set of all basic commutators of weight $3$, and so $\dim\gamma_3(L)\leq l_d^n(3)$.
\end{proof}
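For example, for ordinary Lie algebras ($n=2$) the number of basic commutators of weight $w$ on $d$ generators is given by the Witt formula $l_d(w)=\frac{1}{w}\sum_{e\mid w}\mu(e)\,d^{w/e}$, so that
\[l_d^2(3)=\frac{d^3-d}{3}.\]
In particular, if $\dim L/Z_2(L)=2$, then $\dim\gamma_3(L)\leq 2$, the two basic commutators of weight $3$ being $[[x_1,x_2],x_1]$ and $[[x_1,x_2],x_2]$.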
\section{#1} \setcounter{equation}{0}}
\makeatletter
\@addtoreset{equation}{section}
\makeatother
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\begin{center}
{\hfill SLAC--PUB--11511\\ \hfill SU--ITP--05/29}
{\Large \bf{Complex/Symplectic Mirrors}}
\bigskip
Wu--yen Chuang$^{1,2}$, Shamit Kachru$^{1,2}$ and Alessandro Tomasiello$^1$
\medskip
$^1$ {\it ITP, Stanford University, Stanford, CA 94305, USA}
$^2$ {\it SLAC, Stanford University, Menlo Park, CA 94309, USA }
\bigskip
{\bf Abstract}
\end{center}
We construct a class of symplectic non--K\"ahler and complex non--K\"ahler
string theory vacua, extending and providing
evidence for an earlier suggestion by Polchinski and Strominger.
The class admits a mirror pairing by construction. Comparing hints from
a variety of sources, including ten--dimensional supergravity and KK reduction
on SU(3)--structure manifolds, suggests a picture in which
string theory extends Reid's fantasy to connect classes of both complex
non-K\"ahler and symplectic
non-K\"ahler manifolds.
\section{Complex and symplectic vacua}
The study of string theory on
Calabi--Yau\ manifolds has provided both the most popular
vacua of the theory, and some of the best tests
of theoretical ideas about its
dynamics.
Most manifolds, of course, are not Calabi--Yau. What
is the next simplest class for theorists to explore?
The answer, obviously, depends on what the definition of ``simplest'' is.
However, many leads seem to point to the same suspects. First
of all, it was suggested long ago \cite{ps}
that type II vacua exist, preserving ${\cal N}=2$
supersymmetry ({\it the same} as for Calabi--Yau's), on manifolds which are complex
and non--K\"ahler\ (and enjoy vanishing $c_1$). Calabi--Yau\ manifolds are simultaneously complex and symplectic,
and mirror symmetry can be viewed as an exchange of these two
properties \cite{kontsevich}.
The same logic seems to suggest that the proposal of \cite{ps} should
also include symplectic non--K\"ahler\ manifolds
as mirrors of the complex non--K\"ahler\ ones. Attempts
at providing mirrors of this type (without using a physical interpretation)
have indeed already been made in \cite{sty,st}.
In a different direction, complex non--K\"ahler\ manifolds have also featured in
supersym\-metry--preserving vacua of supergravity, already in
\cite{andy}. More recently the general conditions
for preserving ${\cal N}=1$ supersymmetry in supergravity have been reduced to
geometrical conditions
\cite{gmpt2}; in particular, the manifold has to be generalized complex
\cite{hitchin}. The most prominent examples of generalized complex manifolds
are complex and symplectic manifolds, neither necessarily K\"ahler. It should
also be noted that complex and symplectic manifolds seem to be natural
in topological strings.
In this paper we tie these ideas together. We find that vacua of the type
described in \cite{ps} can be found for a large class of complex non--K\"ahler\
manifolds in type IIB and symplectic non--K\"ahler\ manifolds in type
IIA, and observe
that these vacua come in mirror pairs. Although these vacua are not
fully amenable to ten--dimensional supergravity analysis for reasons
that we will explain
(this
despite the fact that they preserve ${\cal N}=2$ rather than ${\cal N}=1$ supersymmetry), this is in agreement with
the supergravity picture that all (RR) SU(3)--structure IIA vacua are
symplectic \cite{gmpt}, and all IIB vacua are complex \cite{dall,frey,gmpt},
possibly suggesting a deeper structure.
In section \ref{sec:vacua}, in an analysis formally identical to \cite{ps},
we argue for the existence of the new vacua.
In section \ref{sec:geom} we show that the corresponding internal manifolds are not Calabi--Yau\ but rather complex or symplectic. More specifically, in both
theories, they are obtained from a transition that does not preserve
the Calabi--Yau\ property. As evidence for this, we show that
the expected physical spectrum agrees with the one obtained on
the proposed manifolds. The part of this check that concerns
the massless spectrum is straightforward;
we can extend it to low-lying massive fields by combining results from
geometry \cite{dish} and KK reduction on manifolds of SU(3) structure.
We actually try in section \ref{sec:spec}
to infer from our class of examples a few properties which
should give more control over this kind of KK reduction. Specifically, we
suggest that the lightest massive fields should
be in correspondence with pseudo--holomorphic
curves or pseudo--Special--Lagrangian three--cycles (a notion we will
define at the appropriate juncture).
Among the motivations for this paper were also a number of more grandiose
questions about the effective potential of string theory. One
of the motivations
for mathematicians to study the generalized type of transition we consider
in this paper is the hope that many moduli spaces actually happen
to be submanifolds of a bigger moduli space, not unlike \cite{reid}
the realization
of the various 19--dimensional
moduli spaces of algebraic K3's as submanifolds
of the 20--dimensional moduli space of abstract K3's. It might be that
string theory provides a natural candidate for such a space, at least for the
${\cal N}=2$ theories,
whose points would be all
SU(3)--structure manifolds (not necessarily complex or symplectic),
very possibly augmented by non--geometrical points \cite{Wati}. We would
not call it a moduli space, but rather a configuration space:
on it, a potential would be defined, whose zero
locus would then be the moduli space of ${\cal N}=2$ supersymmetric
string theory vacua,
including in particular the complex and symplectic
vacua described here. In this context, what this paper is studying is a
small neighborhood where the moduli space of ${\cal N}=2$
non-K\"ahler compactifications
meets up with the moduli space of
Calabi-Yau compactifications
with RR flux,
inside this bigger configuration space of manifolds.
\section{Four--dimensional description of the vacua}
\label{sec:vacua}
We will now adapt the ideas from \cite{ps} to our needs.
The strategy is as follows. We begin by compactifying the IIB and IIA
strings on Calabi-Yau threefolds,
and we switch on internal RR fluxes, $F_3$ in IIB and
$F_4$ in IIA (our eventual interest will be the case where the theories
are compactified on mirror manifolds ${\cal M}$ and ${\cal W}$,
and the fluxes are mirror
to one another). As also first noted in \cite{ps}, this will make the
four--dimensional ${\cal N}=2$ supergravity gauged; in particular, it will
create a potential on the moduli space. This potential has
supersymmetric vacua only at points where the Calabi--Yau\ is singular. However, on
those loci of the moduli space new massless brane hypermultiplets have to
be taken into account, which will then produce the new vacua.
\subsection{The singularities we consider}
\label{sec:sing}
Let us first be more precise about the types of singularities we will consider.
In IIB, as we will review shortly, if we switch on
$F_3$ with a non--zero integral along a cycle $B_3$ of a Calabi-Yau ${\cal M}$,
a supersymmetric vacuum will exist at a point in moduli space
at which only the cycle $A_3$ conjugate to $B_3$ under intersection pairing
shrinks. It is often the case that several cycles shrink
simultaneously, with effects that we will review in the next section,
but there are definitely examples in which a single $B$ cycle shrinks.
These are the cases we will be interested in. (We will briefly
explain in section \ref{sec:ncytrans} how this condition could be
relaxed.)
In IIA, switching on $F_4$ with a non--zero integral on a four--cycle
$\tilde A_4$ of ${\cal W}$ will
generate a potential which vanishes only at points at which
the quantum--corrected volume of the conjugate two--cycle $\tilde B_2$
(the Poincar\'e dual to $F_4$) vanishes. This will happen on a wall
between two birationally equivalent Calabi--Yau's, connected by a flop of
$\tilde B_2$.
These points will be mirror to the ones we described above for IIB.
The converse is not always true: there can be shrinking three--cycles which are
mirror to points in the IIA moduli space in which the quantum volume
of the whole Calabi--Yau\ goes to zero. These walls separate geometrical and
Landau--Ginzburg, or, hybrid, phases. One would obtain a vacuum at
such a point by switching on
$F_0$ instead of $F_4$, for instance. The example
discussed in \cite{ps} (the quintic) is precisely such a case.
Since in the end we want to give
geometrical interpretations to the vacua we will obtain, we will restrict
our attention only to cases in which a curve shrinks in ${\cal W}$ --
that is, when a flop happens. Although this is not
strictly necessary for IIB, keeping mirror symmetry in mind we will
restrict our attention to cases in which the stricter IIA condition is valid,
not only the IIB one: in the mirror pairs of interest to us,
the conifold singularity in
${\cal M}$ is mirror to a flop in ${\cal W}$. It would be interesting, of
course, to find the IIA mirrors to all the other complex non--K\"ahler\ manifolds
in IIB.
Looking for flops is not too difficult, as there is
a general strategy. If the Calabi--Yau\ ${\cal W}$ is realized as hypersurface in
a toric manifold $V$,
the ``enlarged K\"ahler\ moduli space'' \cite{greene,agm}
(or at least, the part
of it which comes from pull--back of moduli of $V$) is a toric
manifold $W_K$ itself. The cones of the fan of $W_K$
are described by different triangulations of the cone over the
toric polyhedron of $V$. Each of these cones will be a phase
\cite{Witten}; there will be
many non--geometrical phases (Landau--Ginzburg or hybrid). Fortunately,
the geometrical ones are characterized as the triangulations of the toric
polyhedron of $V$ itself (as opposed to triangulations of the cone over it).
This subset of cones gives an open set in $W_K$ which is called the
``partially enlarged'' K\"ahler\ moduli space.
This is not the end of the story, however. In many examples, it will happen
that a flop between two geometrical phases will involve more than one curve
at a time, an effect due to restriction from $V$ to ${\cal W}$.
Worse still, these curves might have relations, and sometimes there is
no quick way to determine this.
Even so, we expect that there should be many cases in which a single
curve shrinks (or many, but without relations).
Such an example is readily found in the literature \cite{mv,lsty}:
taking ${\cal W}$ to be an elliptic fibration
over ${\Bbb F}_1$ (a Calabi--Yau\ whose Hodge numbers are $h^{1,1}=3$ and
$h^{2,1}=243$), there is a point in moduli space in which a single curve
shrinks (see Appendix \ref{app:toric} for more details). By counting of
multiplets and mirror symmetry, on the mirror
${\cal M}$ there will be a single three--cycle which will shrink. This implies
that the mirror singularity will be a conifold singularity. Indeed,
it is a hypersurface singularity, and as such the shrinking cycle
is classified by the so--called Milnor number. This
has to be one if there is a single shrinking cycle, and the only
hypersurface singularity with Milnor number one is the conifold.
\subsection{Gauged supergravity analysis}
\label{sec:gauged}
After these generalities, we will now show how turning on fluxes drives
the theory to a conifold point in the moduli space; more importantly,
we will then show how including the new massless hypermultiplets generates
new vacua. We will do this in detail in the IIB theory on ${\cal M}$,
as its IIA counterpart
is then straightforward. The analysis is formally identical to the one in
\cite{ps} (see also \cite{michelson,Ferrara}); the
differences have been explained in the previous subsection.
As usual, define the symplectic basis of three--cycles $A^I$, $B_J$ and
their Poincar\'e duals $\alpha_I$, $\beta^I$ such that
\begin{equation}
A^I \cdot B_J = \delta^I{}_J\ , \qquad
\int_{A^J} \alpha_I = \int_{B_I} \beta^{J} = \delta_I{}^J
\end{equation}
along with the periods $X^I = \int_{A^I} \Omega$ and $F_I = \int_{B_I}\Omega$.
Additionally, the basis is taken so that the cycle of interest described
in subsection \ref{sec:sing} is $A=A^1$.
When $X^1=0$, the cycle $A^1$ shrinks to zero
size and ${\cal M}$ develops a conifold singularity.
By the monodromy argument,
the periods $(X^1, F_1)$ will transform as follows when
we circle the discriminant locus in the complex structure moduli space
defined by $X^1=0$:
\begin{equation}
X^1 \to X^1 \ \ \ \ F_1 \to F_1 + X^1\ .
\end{equation}
From this we know $F_1$ near the singularity:
\begin{equation}
F_1 = {\rm constant} + \frac{1}{2 \pi i} X^1 {\rm ln} X^1 + \dots
\end{equation}
The metric on the moduli space can be calculated from the formulae
\begin{equation}
\mathcal{G}_{I \bar{J}} = \partial_I \partial_{\bar{J}} K_V\ , \qquad
K_V= - \ln i ( \bar{X}^I F_I - X^I \bar{F}_I)\ .
\end{equation}
Therefore we obtain
\begin{equation}
\mathcal{G}_{1 \bar{1}} \sim \ln( X^1 \bar{X}^1)\ .
\end{equation}
Now, the internal flux we want to switch on is $F_3=n_1 \beta^1$. The
vectors come from
\begin{equation}
F_5=F_2^I\wedge \alpha_I - G_{2,I} \wedge \beta^I\ ,
\end{equation}
where $F_2^{I}$ ($G_{2,I}$) are the electric (magnetic) field
strengths. The Chern-Simons coupling in the IIB supergravity action is then
\begin{equation}
\epsilon^{ij} \int_{M_4\times CY} \tilde{F}_5 \wedge H_3^{i} \wedge B_2^{j}=
n_1\int_{M_4} F_2^1 \wedge B_2
\end{equation}
where $M_4$ is the spacetime.
By integration by parts, and since $B_2$ dualizes to one of
the (pseudo)scalars in the universal hypermultiplets, we see that the
latter is gauged under the field $A^1$ whose field strength is $dA^1=F_2^1$.
The potential is now given by the ``electric'' formula
\begin{equation}
\label{eq:elec}
V= h_{uv} k_I^{u} k_J^{v} \bar{X}^{I} X^{J} e^{K_V} + (U^{IJ} - 3
\bar{X}^{I} X^{J} e^{K_V} ) \mathcal{P}_I^{\alpha} \mathcal{P}_J^{\alpha}\
\end{equation}
where
\begin{equation}
U^{IJ}= D_a X^I g^{a\bar b} D_{\bar b} X^J \
\end{equation}
and the ${\cal P}^\alpha$ together form the so--called Killing prepotential,
or hypermomentum map.
In our situation only the flux over $B_1$ is turned on, and the Killing
prepotential is given by
\begin{equation}
\mathcal{P}_1^{1}=\mathcal{P}_1^{2}=0\ ; \quad
\mathcal{P}_1^{3}= -e^{\tilde{K}_H} n_{1} = - e^{2\phi} n_{1}
\end{equation}
where $\phi$ is the dilaton.
The potential will
then depend only
on the period $X^1$ of the shrinking cycle $A^1$:
\begin{equation}
V\sim \frac{(n_1)^2}{\ln X^1 \bar X^1}\ .
\end{equation}
The theory will thus be driven to the conifold point where $X^1=0$.
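Schematically (up to signs and numerical constants), this behavior follows from (\ref{eq:elec}): with only $\mathcal{P}_1^3$ non--vanishing, the dominant term near the singularity is
\[V\ \supset\ U^{11}\left(\mathcal{P}_1^3\right)^2\ \sim\ \mathcal{G}^{1\bar 1}\, e^{4\phi}\, n_1^2\ \sim\ \frac{n_1^2}{\ln X^1\bar{X}^1}\ ,
\]
since the inverse metric $\mathcal{G}^{1\bar 1}$ degenerates logarithmically at the conifold point.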
This is not the end of the story: at the singular point, one has a new massless
hypermultiplet $B$ coming from a D3-brane wrapping the shrinking cycle $A^1$.
The world--volume coupling between the
D3-brane and $F_5$ then gives $\int_{{\Bbb R}\times A^1} A_4 = \int_{{\Bbb R}} A^1$,
where ${\Bbb R}$ is the worldline of the resulting light particle in $M_4$. (The
coincidence between the notation for the cycle $A^1$ and the corresponding
vector potential $A^1$ is rather unfortunate, if standard.)
This means that both the universal and the brane hypermultiplet are
charged under the same vector; we can then say that they are all
electrically charged and still use the electric formula for the potential
(\ref{eq:elec}), with the only change being that the Killing prepotential is
modified to be
\begin{equation}
\label{eq:kprep}
\mathcal{P}_1^\alpha
= \mathcal{P}_1^\alpha |_{B=0} + B^{+} \sigma^\alpha B\ ;
\end{equation}
the brane (black hole) hypermultiplet is an $SU(2)$ doublet with components
$(B_1, B_2)$.
Loci on which the ${\cal P}^\alpha$'s are
zero are new vacua: it is easy to see that they are given by
\begin{equation}
\label{eq:vacua}
B=( (e^{\tilde{K}_H} n_1)^{1/2}, 0) =(e^{\phi} n_1^{1/2} ,0)
\ .
\end{equation}
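As a check, write $B=(B_1,B_2)$ and use the standard Pauli matrices in (\ref{eq:kprep}), with the flux quantum denoted $n_1$ as in (\ref{eq:vacua}): then
\[
\mathcal{P}_1^1=2\,{\rm Re}(\bar{B}_1B_2)\ ,\qquad \mathcal{P}_1^2=2\,{\rm Im}(\bar{B}_1B_2)\ ,\qquad \mathcal{P}_1^3=-e^{2\phi}n_1+|B_1|^2-|B_2|^2\ ,
\]
and the three conditions $\mathcal{P}_1^\alpha=0$ force $B_2=0$ and $|B_1|^2=e^{2\phi}n_1$, reproducing (\ref{eq:vacua}) up to an $SU(2)$ rotation.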
The situation here is similar to \cite{ps}: the expectation value
of the new brane hypermultiplet is of order $g_s= e^{\phi}$.
So, as in that paper, the two requirements that $g_s$ be small and
that $B$ be small (the expression for the ${\cal P}^\alpha$ is a
Taylor expansion and will be modified for large $B$) coincide, and
with these choices we can trust these vacua. After the Higgsing,
the flat direction of the potential, namely the massless
hypermultiplet $\tilde{B}_0$, is a linear combination of the
brane hypermultiplet and the universal hypermultiplet, while the
orthogonal combination $\tilde{B}_m$ becomes massive.
\subsection{The field theory capturing the transition}
\label{sec:hyper}
It is useful to understand the physics of the transition from a 4d
field theory perspective, in a region very close to the transition point
on moduli space.
While this analysis is in principle a simple limit of the gauged supergravity
in the previous subsection, going through it will
both provide more intuition and also allow us to infer some additional
lessons.
In fact, in the IIB theory with $n_1$ units of RR
flux, the theory close to the transition point (focusing on the
relevant degrees of freedom) is simply a U(1) gauge
theory with two charged hypers, of charges $1$ and $n_1$.
Let us focus on the case $n_1=1$ for concreteness.
Let us call the ${\cal N}=1$ chiral multiplets in the two hypers
$B, \tilde B$ and $C, \tilde C$.
In ${\cal N}=1$ language, this theory has a superpotential
\begin{equation}
W \sim \tilde B \varphi B + \tilde C \varphi C
\end{equation}
where $\varphi$ is the neutral chiral multiplet in the ${\cal N}=2$
U(1) vector multiplet.
It also has a D-term potential
\begin{equation}
\vert D \vert^2 \sim (|\tilde B|^2 - |B|^2 + |\tilde C|^2 - |C|^2)^2~.
\end{equation}
There are two branches of the moduli space of vacua: a Coulomb branch
where $\langle \varphi \rangle \neq 0$ and the charged matter fields
vanish, and a Higgs branch where
$\langle \varphi \rangle = 0$ and the hypers have non-vanishing vevs
(consistent with $F$ and $D$ flatness).
The first branch has complex dimension one, the second has quaternionic
dimension one. These branches meet at the point where all fields
have vanishing expectation value.
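The dimensions quoted above follow from the standard counting: on the Higgs branch the two hypermultiplets contribute $8$ real scalars, subject to $3$ real D-- and F--flatness conditions and $1$ gauge identification,
\[
8-3-1=4\ \ {\rm real\ dimensions}\ ,\qquad {\rm i.e.}\quad \dim_{\Bbb H}=n_H-n_V=2-1=1\ ,
\]
while on the Coulomb branch only the complex scalar $\varphi$ survives, giving complex dimension one.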
At this point, the theory has an SU(2) global flavor symmetry. This
implies that locally, the hypermultiplet moduli space
will take the form ${\Bbb C}^2/{\Bbb Z}_2$ \cite{Seiberg}. In fact, the precise
geometry of the hypermultiplet
moduli space, including quantum corrections,
can then be determined by a variety of arguments
\cite{Seiberg,SS} (another type of argument \cite{Vafa} implies the
same result for the case where the hypermultiplets come from
shrinking three--cycles in IIB).
The result is the following. Locally, the quaternionic space reduces
to a hyperK\"ahler manifold which is an elliptic fibration,
with fiber coordinates $t,x$ and a (complex) base coordinate $z$. Let
us denote the K\"ahler class of the elliptic fiber by $\lambda^2$.
Then, the metric takes the form
\begin{equation}
\label{metis}
ds^2 = \lambda^2 \left( V^{-1} (dt - {\bf A} \cdot {\bf dy})^2 + V ({\bf dy})^2
\right)
\end{equation}
where ${\bf y}$ is the three-vector with components $(x,{z \over \lambda},
{{\bar z}\over \lambda})$.
Here, the function $V$ and the vector of functions ${\bf A}$ are given by
\begin{equation}
\label{vis}
V = {1\over 2\pi} \sum_{n=-\infty}^{\infty}\left( {1\over {\sqrt{(x-n)^2
+ {|z|^2 \over \lambda^2}}}} - {1\over |n|} \right) ~+~{\rm constant}
\end{equation}
and
\begin{equation}
\label{ais}
{\bf \nabla} \times {\bf A} ~=~{\bf \nabla} V~.
\end{equation}
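As a consistency remark, taking the divergence of (\ref{ais}) shows that $V$ must be harmonic away from its sources,
\begin{equation}
{\bf \nabla}^2 V = {\bf \nabla}\cdot\left({\bf \nabla}\times {\bf A}\right) = 0\ ,
\end{equation}
which indeed holds for (\ref{vis}) away from the points ${\bf y}=(n,0,0)$; the metric (\ref{metis}) is thus of the standard Gibbons--Hawking form.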
This provides us with detailed knowledge of the metric on the
hypermultiplet moduli space emanating from the singularity, though
it is hard to explicitly map the flat direction to a combination
of the universal hypermultiplet and the geometrical parameters of
${\cal M}'$ or ${\cal W}'$. We shall discuss some qualitative aspects
of this map in \S3.3.
For the reader who is confused by the existence of a Coulomb branch at all,
given that e.g. in the IIB picture $F_3 \neq 0$, we note that the
Coulomb branch will clearly exist on a locus where $g_s \to 0$ (since
the hypermultiplet vevs must vanish). This is consistent with
supergravity intuition, since in the 4d Einstein frame, the energetic
cost of the RR fluxes vanishes as $g_s \to 0$.
\section{Geometry of the vacua}
\label{sec:geom}
We will first of all show that the vacua obtained in the previous section
cannot come from a transition to another Calabi--Yau. To this aim, in the next
subsection we will review Calabi--Yau\ extremal transitions. We will then proceed
in subsection \ref{sec:ncytrans}
to review the less well--known {\it non}--Calabi--Yau\ extremal transitions, and
then compare them to the vacua we previously found
in subsection \ref{sec:compare}.
\subsection{Calabi--Yau\ extremal transitions}
\label{sec:cytrans}
Calabi--Yau\ extremal transitions sew together moduli spaces for Calabi--Yaus
whose Hodge numbers differ; let us quickly review how. For more details
on this physically well--studied case, the reader might want to consult
\cite{cgh,gms,cggk,greene}.
Consider IIB theory on a Calabi--Yau\ ${\cal M}$. (Some explanations in this
paper are given only in the IIB case, whenever the IIA case is an
obvious enough modification.)
Suppose that at
a particular point in moduli space, ${\cal M}$ develops $N$ nodes
(conifold points)
by shrinking as many three--cycles $A_a$, $a=1,\ldots,N$, and that these
three--cycles satisfy $R$ relations
\begin{equation}
\sum_{a=1}^N r_i^a A_a ~=~0,~~i=1,\cdots,R
\end{equation}
in $H_3$.
We are {\it not} using the same notation for the index on the cycles
as in section \ref{sec:vacua},
as these $A_a$ are not all elements of a
basis (as they are linearly dependent).
Notice that this case is precisely the one
we excluded with the specifications in section \ref{sec:sing}.
To give a classic example \cite{cgh}, there is a known
transition where ${\cal M}$ is the quintic,
$N=16$ and $R=1$.
Physically, there will be $N$ brane hypermultiplets $B_a$
becoming massless
at this point in moduli space. Vectors come from $h^{2,1}$; since the
$B_a$ only span $N - R$ directions in $H^3$, they will be charged under
$N-R$ vectors $X^A$ only, $A=1,\ldots, N-R$. Call the matrix of
charges $Q_A^a$, $A=1,\ldots,N-R$, $a=1,\ldots, N$.
In this case, when looking for vacua, we will still be
setting the Killing prepotential ${\cal P}^\alpha_A$ (which is a simple
extension of the one in (\ref{eq:kprep})) to zero:
the flux is now absent, and the
$B^2$ term now reads
\begin{equation}
{\cal P}^\alpha_A=\sum_a Q^a_A B_a^+ \sigma^\alpha B_a\ .
\end{equation}
Notice that no flux has been switched on in this case; crucially, ${\cal P}=0$
now has an $R$--dimensional space of solutions, due to the relations.
Let us suppose this new branch is actually the moduli space for a new Calabi--Yau.
This new manifold would have
$h^{2,1}-(N-R)$ vectors, because all the $X^A$ have been Higgsed; and
$h^{1,1}+R$ hypers, because, of the $N$ $B_a$, only $R$ flat directions
survive.
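For the classic quintic example quoted earlier, this counting can be checked against known Hodge numbers: with $(h^{1,1},h^{2,1})=(1,101)$ for the quintic, $N=16$ and $R=1$,
\begin{equation}
h^{2,1}-(N-R) = 101-15 = 86\ ,\qquad h^{1,1}+R = 1+1 = 2\ ,
\end{equation}
which are indeed the Hodge numbers of the Calabi--Yau\ on the other side of that transition \cite{cgh}.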
This is exactly the same result one would get from a small
resolution of all the $N$ nodes. Indeed, let us call the Calabi--Yau\ resulting
from such a procedure ${\cal M}'$, and let us compute its Betti numbers.
It is actually simpler to first consider a case in which
a single three--cycle undergoes surgery\footnote{This is a purely topological
computation; in a topological context,
an extremal transition is called a surgery, and we will use this term
when we want to emphasize we are considering purely the topology of the
manifolds involved.}, which is the case without relations specified in
section \ref{sec:sing}; we will go back to the Calabi--Yau\ case, in which relations
are necessary, momentarily.
The result of this single surgery along a three--cycle is that
$b_3\to b_3-2$ while $b_2$ is unchanged. This might be a bit surprising: one is used
to thinking that an extremal transition replaces
a three--cycle by a two--cycle. But this intuition comes from the noncompact
case, in which indeed it holds. In the compact case, when we perform a
surgery along a three--cycle, we really are also losing its conjugate
under Poincar\'e pairing; and we gain no two--cycle.
The difference is
illustrated in a low--dimensional analogue in figure
\ref{fig:trans}, in which $H^2$ and $H^3$
are replaced by $H^0$ and $H^1$.
\begin{figure}[ht]
\centering
\begin{picture}(200,200)
\put(208,85){\small $C$}\put(-80,0){\epsfxsize=5in\epsfbox{trans.eps}}
\put(200,10){\small $B$}\put(180,70){\small $A$}
\put(205,40){\small $D$}
\end{picture}
\caption{Difference between compact and non--compact surgery: in the
noncompact case (top), one loses an element in $H^1$
and one gains an element in $H^0$ (a connected component).
In the compact case (bottom), one loses an element in $H^1$ again, but the
would--be new element in $H^0$ is actually trivial, so $H^0$ remains the
same. This figure is meant to help intuition about the conifold
transition in dimension 6, where
$H^0$ and $H^1$ are replaced by $H^2$ and $H^3$. We have also depicted
various chains on the result of the compact transition, for later use.}
\label{fig:trans}
\end{figure}
Coming back to the Calabi--Yau\ case of interest in this subsection, let us now
consider $N$ shrinking three--cycles with $R$ relations.
First of all, $b_3$ changes only by $2(N-R)$, because this is the number
of independent cycles we are losing. But this is not the only effect on the
homology. A relation can be viewed as
a four--chain $F$ whose boundary is $\sum A_a$. After surgery, the boundary
of $F$ by definition shrinks to points; hence $F$ becomes a four--cycle
in its own right. This gives $R$ new elements in $H_4$ (or equivalently,
in $H^2$). The change
in homology is summarized in Table \ref{table}, along with the IIA case and,
more importantly, in a more general context that we will explain. By comparing
with the physical counting above, we find evidence that the new branches
of the moduli space correspond to new Calabi--Yau\ manifolds obtained by extremal
transitions.
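A quick consistency check on this counting is provided by the Euler characteristic. For a simply connected six--manifold, Poincar\'e duality gives $\chi = 2 + 2b_2 - b_3$; in the $3\to2$ case, $b_2$ grows by $R$ while $b_3$ drops by $2(N-R)$, so
\begin{equation}
\Delta\chi = 2R + 2(N-R) = 2N\ ,
\end{equation}
i.e.\ each small resolution raises $\chi$ by two, independently of the number of relations. (For the quintic example, $\chi=-200+2\cdot 16=-168$, as appropriate for the resolved side.)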
To summarize, Calabi--Yau\ extremal transitions are possible without fluxes, but
they require relations among the shrinking cycles. This is to be
contrasted with the vacua in the previous section, where there are no
relations among the shrinking cycles to provide flat directions. Instead,
the
flux (and resulting gauging) lifts the old Calabi-Yau moduli space
(as long as $g_s \neq 0$),
but makes up for this by producing a new branch
of moduli space (emanating from the conifold point or its mirror).
\subsection{Non--Calabi--Yau\ extremal transitions}
\label{sec:ncytrans}
In this section we will relax the Calabi--Yau\ condition to reproduce the vacua
of the previous section. This is, remember, a case in which cycles shrink
without relations. However, we will start with a review of results in the
more general case, to put in perspective both the case we will eventually
consider and the usual Calabi--Yau\ case.
We will consider both usual conifold transitions, in which three--cycles
are shrunk and replaced by curves, and so--called reverse conifold transitions, in which the converse happens.\footnote{Implicit in the use of the word
``conifold'' is the assumption that several
cycles do not collapse together in a single point of the manifold ${\cal M}$.
More general cases are also interesting to consider, see for example
\cite{cggk}
for the complex case and \cite{st} for the symplectic case.}
As a hopefully useful shorthand, we will call
the first type a $3\to2$ transition and the second a $2\to3$.
Though the manifolds will no longer be (necessarily) Calabi--Yau, we will still
call the initial and final manifold ${\cal M}$ and ${\cal M}'$ in the
$3\to2$ case (which is relevant for our IIB picture), and ${\cal W}$
and ${\cal W}'$ in the $2\to3$ case (which is relevant for our IIA picture).
We will first ask whether a $3\to2$ transition takes a complex, or symplectic,
${\cal M}$ into a complex, or symplectic, ${\cal M}'$, and then turn to
the same questions about ${\cal W}, ~{\cal W}'$
for $2\to 3$ transitions. These questions have to be phrased a bit more
precisely, and we will do so case by case.
It is also useful to recall at this point the definitions of symplectic
and complex manifolds, which we will do by embedding them in a bigger
framework.
In both cases, we can start with a weaker concept
called {\it $G$--structure}. By
this we mean the possibility of taking the transition functions on the
tangent bundle of ${\cal M}$ to be in a group $G$. This is typically
accomplished
by finding a geometrical object (a tensor, or a spinor) whose stabilizer
is precisely $G$. If we find a two--form $J$ such that $J\wedge J \wedge J$
is nowhere zero,
it gives an Sp$(6,{\Bbb R})$ structure. In the presence of a $(1,1)$ tensor (one
index up and one down) $j$ such that $j^2=-1$ (an almost complex
structure), we speak of a Gl$(3,{\Bbb C})$
structure. For us the presence of both will be
important; but we also impose a compatibility condition, which says that the
tensor $j_m{}^p J_{pn}$ is symmetric and of positive signature. This tensor
is then nothing but a Riemannian metric. The triple then defines
an almost hermitian structure: this
gives a structure Sp$(6,{\Bbb R})\cap$ Gl$(3,{\Bbb C})=$U(3).
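As a small check of the compatibility condition, define $g_{mn}\equiv j_m{}^p J_{pn}$. Using only $j^2=-1$, the antisymmetry of $J$ and the assumed symmetry of $g$, one finds that $g$ is hermitian with respect to $j$:
\begin{equation}
j_m{}^p\, j_n{}^q\, g_{pq} = j_m{}^p\, j_n{}^q\, j_p{}^r J_{rq} = -\,j_n{}^q J_{mq} = j_n{}^q J_{qm} = g_{nm} = g_{mn}\ ,
\end{equation}
as expected for an almost hermitian metric.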
By themselves, these reductions of structure do not give much of a restriction
on the manifold.
But in all these cases we can now consider an appropriate integrability
condition, a differential equation which makes the manifold with the
given structure more rigid.
In the case of $J$, we can impose that $dJ=0$. In this case we say that the
manifold is symplectic. For $j$, a more complicated condition (that we will
detail later, when considering SU(3) structures) leads to complex manifolds.
Let us now consider a complex manifold ${\cal M}$ (which we will also take
to have trivial canonical class $K=0$). First order complex deformations are
parameterized by
$H^1({\cal M},T)=H^{2,1}$. Suppose that for some value
of the complex moduli $N$ three--cycles shrink. Replace now these
$N$ nodes by small resolutions. The definition of small resolution, just like
the one of blowup, can be given locally around the node and then patched
without any problem with the rest of the manifold. So the new manifold
${\cal M}'$ is still complex.
Also, the canonical class $K$ is not modified by the transition because
a small resolution does not
create a new divisor, only a new curve.\footnote{
The conditions for ${\cal N}=1$--preserving vacua in ten--dimensional type II
supergravity actually only require $c_1=0$. The role of this condition is
less clear for example in the topological
string: for the A model it would seem to be unnecessary, as there is no anomaly
to cancel; for the B model, it would look like the stronger condition
$K=0$ is required, which means that the canonical bundle should be trivial
even holomorphically.}
Actually, the conjecture that all Calabi--Yau\ are connected
was initially formulated by Reid \cite{reid} for all {\it complex} manifolds
(and not only Calabi--Yaus) with $K=0$, extending ideas by
Hirzebruch \cite{hirzebruch}.
If now we consider a symplectic ${\cal M}$, the story is different. For one
thing, now symplectic moduli are given by $H^2({\cal M}, \Bbb R)$
\cite{moser}, so it does
not seem promising to look for a point in moduli space where three--cycles
shrink. But 2.1 in \cite{sty} shows that we can nevertheless shrink
a three--cycle symplectically, and replace it by a two--cycle. Whether
the resulting ${\cal M}'$ will also be symplectic is
not automatic, however.
This can be decided using Theorem 2.9 in \cite{sty}: the answer is yes
precisely when there is at least one relation in homology among
the three--cycles.\footnote{We should add that the relations must involve
all the three--cycles. If there is one three--cycle $A$ which is not involved
in any relation, it is possible to resolve symplectically all the other
cycles but not $A$. Examples of this type are found in \cite{vgw,werner}
when ${\cal M}$ is K\"ahler, which is the case of interest to us and to which
we will turn shortly.
These examples would play in our favor, allowing us to find even more
examples of non--K\"ahler\ ${\cal M}'$, but
for simplicity of exposition we will mostly ignore them in the following.}
The case of interest in this paper is actually a blending
of the two questions considered so far, whether complex or symplectic
properties are preserved. In IIB, we will take a Calabi--Yau\ ${\cal M}$
(which has both properties) and
follow it in moduli space to a point at which it develops a conifold
singularity.
Now we perform a small resolution to obtain a manifold ${\cal M}'$ and ask
whether this new manifold is still K\"ahler; this question has been
considered also by \cite{werner}. As we have seen, the complex property
is kept, and the symplectic property is not (though the question in \cite{sty}
concerns symplectic manifolds more generally, disregarding the complex
structure, and is in particular more interesting without such a path in complex
structure moduli space).
Let us see why ${\cal M}'$ cannot be K\"ahler\ in our case. A first argument
is not too different from an argument given
after figure \ref{fig:trans} to count four--cycles.
If the manifold ${\cal M}'$ after the transition
is K\"ahler, there will be an element
$\omega
\in H_4$ dual to the K\"ahler form. This will have non--zero intersection
$\omega\cdot C_a= \mathrm{vol}(C_a)$ with all the curves $C_a$ produced by the small
resolutions. Before the transition, then, in ${\cal M}$, $\omega$ will develop
a boundary,
since the $C_a$ are replaced by three--cycles $A_a$; more precisely,
$\partial \omega =\sum r^a A_a$ for some coefficients
$r^a$. This proves that there must be at least one relation among
the collapsing three--cycles.
We can rephrase this in yet another way.
Let us consider the case in which only
one nontrivial three--cycle $A$ is shrinking.
Since, as remarked earlier (see figure \ref{fig:trans}),
in the compact case the curve $C$
created by the transition is trivial in homology, there exists
a three--chain $B$ such that $C=\partial B$; then we have, if $J$
is the two--form of the SU(3) structure,
\begin{equation}
0\neq \int_C J = \int_B dJ\ .
\end{equation}
Hence $dJ\neq 0$: the manifold cannot be symplectic.\footnote{
In the mirror picture, a similar argument shows immediately that
$d\Omega \neq 0$ on ${\cal W}'$, and hence the manifold cannot be complex.}
Even if a symplectic $J$ fails to exist, there is
actually a non--degenerate $J$ compatible with $j$ (since
the inclusion U(3) $\subset$ Sp$(6,{\Bbb R})$ is a homotopy equivalence, not
unlike the way the homotopy equivalence
O$(n)$ $\subset$ Gl$(n)$ allows one
to find a Riemannian metric on any manifold).
In other words, the integrable complex
structure $j$ can be completed to a U(3) structure (and then
to an SU(3) structure, as we will see), though not to a K\"ahler\ one.
This is also a good point to make some remarks about the nature of the curve
$C$ that we will need later on. The concept of holomorphic curve
makes sense even without an integrable
complex structure; the definition is still
that $(\delta +i j)^m{}_n
\partial X^n=0$, where $X$ is the embedding of $C$ in ${\cal M}$. For $j$ integrable
this is the usual condition that the curve be holomorphic. But this condition
makes sense even for an almost complex structure, a fact which is expressed
by calling the curve {\it pseudo}--holomorphic \cite{gromov}. We will often
drop this
prefix in the following. In many of the usual manipulations involving calibrated
cycles, one never uses integrability properties for the almost complex or
symplectic structures on ${\cal M}$. For example, it is still true that
the restriction of $J$ to $C$ is its volume form $\mathrm{vol}_C$.
Exactly in the same way, one can speak of Special Lagrangian submanifolds
even without integrability (after having defined an SU(3) structure, which
we will in the next section), and sometimes we will qualify them as ``pseudo"
to signify this.
Let us now consider $2\to3$ transitions. It will turn out that the results
are just mirror of the ones we gave for $3\to2$, but in this case it is
probably helpful to review them separately. After all, mirror symmetry for
complex--symplectic pairs is not as well established as for Calabi--Yaus, which
is one of the motivations of the present work. (Evidence so far includes
mathematical insight \cite{kontsevich}, and, in the slightly more general
context of SU(3) structure manifolds, comparisons of four--dimensional
theories \cite{glmw,glw} and direct SYZ computation \cite{fmt}.)
Suppose now
we start (in the IIA theory) with a
symplectic manifold ${\cal W}$ (whose moduli space
is, as we said, modeled on $H^2({\cal W}, \Bbb R)$), and that
for some value of the symplectic moduli some curves shrink. Then, it turns out
that one can always replace the resulting singularities by some
three--cycles, and still get a symplectic manifold (Theorem 2.7, \cite{sty}).
The trick is that $T^* S^3$, the deformed conifold, is
naturally symplectic, since it is a cotangent bundle.
Then \cite{sty} proves
that this holds even globally: there is no problem in
patching together the modifications around each conifold point. One
should compare this with the construction used by Hirzebruch and Reid
cited above.
It is not automatic that
the resulting manifold ${\cal W}'$ is
complex, even if ${\cal W}$ is complex itself. The criterion is that
there should be at least one relation in homology between
the collapsing curves $C_a$ \cite{Fr,Ti} (see also \cite{lt} for
an interesting application).\footnote{Actually, the criterion also
assumes ${\cal W}$ to satisfy the $\partial \bar\partial$--lemma,
to ensure that $H^{2,1}\subset H^3$, which is not always true on
complex non--K\"ahler\ manifolds; this assumption is trivially valid in the
cases we consider, where ${\cal W}$ is a Calabi--Yau.}
Let us collect the transitions considered so far in a table; we also indicate
in which string theory each transition will be relevant for us. The symmetry
among these results is clear; we will not need all of them, though.
\begin{table}[hbt]
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|c|c|c|c|}
\cline{2-6}
&transition & keeps symplectic & keeps complex & $\Delta b_2$ &$\Delta b_3$\\\hline\hline
\multicolumn{1}{|c|}{IIA} & $2\to3$ & yes (\cite{sty}, 2.7) &
if $\sum r^a_i C_a=0$ (\cite{Fr,Ti}) & $N-R$ & $2R$ \\\hline
\multicolumn{1}{|c|}{IIB} & $3\to 2$ & if $\sum r^a_i B_a =0$
(\cite{sty}, 2.9) & yes
& $R$ & $2(N-R)$\\
\hline
\end{tabular}
\caption{The conditions for a transition to send a
complex or symplectic conifold to a complex or symplectic
manifold.}\label{table}
\end{table}
\subsection{Vacua versus geometry}
\label{sec:compare}
We can now apply the results reviewed in the previous subsection to our vacua.
Remember that in IIB we have chosen a point in moduli space in which a single
three--cycle shrinks, and in IIA one in which a single curve shrinks.
From our assumptions, the singularities affect the manifold only locally
(as opposed for example to the IIA case of \cite{ps}, in which the quantum
volume of the whole manifold is shrinking); it is hence natural to assume that
the vacua of section \ref{sec:vacua} are still geometrical.
Given the experience with the Calabi--Yau\ case, it is also natural that the brane
hypermultiplet $B$ describes a surgery. But then we can use the results of
the previous subsection.
In IIB, where we have shrunk a
three--cycle, we now know that the manifold obtained by replacing the node
with a curve will be naturally complex, but will not be symplectic, since
by assumption we do not have any relations.
As we have explained, the
reason for this is that on the manifold ${\cal M}'$ after the transition,
there will be a holomorphic curve $C$ which is homologically trivial; and
by Stokes, we conclude that the manifold cannot be symplectic.
Summing up, we are proposing that in IIB the vacua we are finding are given
by a complex non--symplectic (and hence non--K\"ahler\footnote{There might
actually be, theoretically speaking, a K\"ahler\ structure on the manifold which
has nothing to do with the surgery. This question is natural
mathematically \cite{sty}, but irrelevant physically: such a K\"ahler\ structure
would be in some other branch of moduli space, far from the one we are
considering, which is connected and close to the original Calabi--Yau\ by
construction.}) manifold. This manifold ${\cal M}'$ is defined by a small
resolution on the singular point of ${\cal M}$, and it has (see table
\ref{table}) Betti numbers
\begin{equation}
b_2({\cal M}')=b_2({\cal M}), ~~b_3({\cal M}')=
b_3({\cal M})-2~.
\end{equation}
In the example described in section \ref{sec:sing},
when ${\cal M}$ is the mirror of an elliptic fibration over $\Bbb F_1$,
${\cal M}'$ has $b_2=243$, $b_3/2=3$.
In IIA, a similar reasoning lets us conjecture that the new vacua correspond
to having a symplectic non--complex (and hence non--K\"ahler)
manifold ${\cal W}'$,
obtained from the original Calabi--Yau\ ${\cal W}$
by replacing the node with a three--cycle.
This manifold ${\cal W}'$ has
\begin{equation}
b_2({\cal W}')= b_2({\cal W}) - 1,~~
b_3({\cal W}')=b_3({\cal W})~.
\end{equation}
In the example from section \ref{sec:sing},
when ${\cal W}$ is an elliptic fibration over $\Bbb F_1$, ${\cal W}'$ has
$b_2=2$, $b_3/2=244$.
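Both sets of Betti numbers follow from the Hodge numbers of the elliptic fibration over $\Bbb F_1$, $(h^{1,1},h^{2,1})=(3,243)$, and hence $(243,3)$ for its mirror:
\begin{equation}
b_2({\cal W}')=3-1=2\ ,\quad b_3({\cal W}')/2=243+1=244\ ;\qquad
b_2({\cal M}')=243\ ,\quad b_3({\cal M}')/2=(3+1)-1=3\ .
\end{equation}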
Notice that these two sets of vacua are mirror by construction: we localize
in IIA and in IIB to points which are mirror to each other, and in both cases
we add the appropriate brane hypermultiplets to reveal new lines of vacua.
What is conjectural is simply the interpretation of the vacua. We now
proceed to give evidence for that conjecture.
In the IIB case, the spectrum before the transition is clearly given by
$b_3({\cal M})/2 - 1$ vector multiplets and $b_2({\cal M}) + 1$
hypermultiplets (the ``$+1$'' is the universal hypermultiplet).
We have seen that the potential generated by $F_3$ gives mass to one of the
vector multiplets, fixing it at a certain point in the complex moduli space.
On the other side, the number of massless hypermultiplets remains the same.
Indeed, we have added a brane hypermultiplet $B$; but this combines with the
universal hypermultiplet to give only one massless direction, the one given in
(\ref{eq:vacua}).
This is to be compared with the Betti numbers of the proposed
${\cal M}'$ from table \ref{table}: indeed, $b_2$ remains the same and $b_3$
changes by 2. Since the manifold is now non--K\"ahler, we have to be careful in
drawing conclusions: ``K\"ahler\ moduli'' a priori do not make sense any more, and
though complex moduli are still given by $H^{2,1}$ (by Kodaira--Spencer and
$K=0$), a priori this number is $\neq b_3/2 - 1$, since
the manifold is non--K\"ahler.
However, two circumstances help us. The first is that, by
construction, the moduli of the manifolds we have constructed are identified
with the moduli of the singular Calabi--Yau\ on which the small resolution is
performed. Then, indeed we can say that there should be
$b_3({\cal M}')/2 - 1 + b_2({\cal M}')$ complex geometrical moduli in total
(after
complexifying the
moduli from $b_2$ with periods of the anti-symmetric tensor field
appropriately, and neglecting the scalars arising from periods of RR gauge
fields).
A more insightful approach exists, and will also allow us to compare low--lying
massive states. Reduction on a general manifold of SU(3) structure (along
with a more general class which will not concern us here) has been
performed recently in \cite{glw}. (Manifolds with $SU(3)$ structures
and various differential conditions
were also considered from the perspective of supergravity vacua, starting
with \cite{cs,gmpw}).
We have introduced a U(3) structure in the previous section as
the presence on the manifold of both a complex and a symplectic structure
with a compatibility condition. The almost complex structure $j$ allows
us to define the bundle of $(3,0)$ forms, which is called the canonical
bundle as in the integrable case. If this bundle is
topologically trivial the structure reduces further to SU(3).
The global section $\Omega$ of the canonical bundle can actually be
used to define the almost complex structure by
\begin{equation}
T^*_{\mathrm {hol}}= \{ v_1 \in T^* | v_1 \wedge \Omega=0 \}\ .
\end{equation}
The integrability of the almost complex structure is then defined by
$(d\Omega)_{2,2}=0$, something we will not always require.
Let us now review the construction in \cite{glw} from our perspective.
In general the results of \cite{glw} require one to know the spectrum of
the Laplacian on the manifold, which is not always at hand; but in our
case we have hints for the spectrum, as we will see shortly.
We have seen that a U(3) structure, and hence also an SU(3) structure, defines
a metric. Let us see it again:
since $J\wedge \Omega=0$, $J$ is of type $(1,1)$, and then a metric can be
defined as usual: $g_{i\bar j}= -i J_{i\bar j}$.
We can now consider the Laplacian associated to this metric.
The suggestion in \cite{glw, glmw}
is to add some low--lying massive eigen--forms
to the cohomology. Since $[\Delta, d]=0$ and
$[\Delta, *]=0$, at a given mass level there will be eigen--forms of different
degrees. Suppose for example $\Delta \omega_2=m^2 \omega_2$ for a certain
$m$. Then
\begin{equation}
d \omega_2 \equiv m \beta_3
\end{equation}
will also satisfy $\Delta \beta_3=m^2 \beta_3$, and similarly
for $\alpha_3\equiv *\beta_3$ and $\omega_4 \equiv *\omega_2$. (The indices
denote the degrees of the forms.) We can repeat
this trick with several mass levels, even if coincident.
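The statements above follow in one line: since $[\Delta,d]=0$,
\begin{equation}
\Delta\beta_3 = \tfrac{1}{m}\,\Delta\,d\,\omega_2 = \tfrac{1}{m}\,d\,\Delta\,\omega_2 = m\,d\,\omega_2 = m^2\beta_3\ ,
\end{equation}
and $[\Delta,*]=0$ then places $\alpha_3=*\beta_3$ and $\omega_4=*\omega_2$ at the same mass level.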
After having added these massive forms to the cohomology, we can
use the resulting combined basis to expand
$\Omega= X^I \alpha_I + \beta^I F_I$ and $J= t_i \omega_i$, formally as usual
but with some of the $\alpha$'s, $\beta$'s and $\omega$'s now being massive.
Finally, these expansions for $\Omega$ and $J$ can be plugged into certain
``universal'' expressions for the Killing prepotentials ${\cal P}^\alpha$.
Without fluxes (we will return on this point later) and with some dilaton
factor suppressed, this looks like \cite{glw}
\begin{equation}
\label{eq:kprep'}
{\cal P}^1 + i {\cal P}^2 = \int d(B+iJ) \wedge \Omega,
\qquad {\cal P}^3 = \int (dC_2 - C_0 dB) \wedge \Omega.
\end{equation}
Since the reader may be confused about the interpretation of the expressions
$\int d(B+iJ) \wedge \Omega$ and $\int (dC_2 - C_0 dB) \wedge \Omega$
which appear above (given the ability
to integrate by parts), let us pause to give
some explanation. Our IIB solutions indeed correspond to complex manifolds,
equipped with a preferred closed 3-form which has $d\Omega = 0$.
However, the 4d fields which are given a {\it mass} by the gauging
actually include deformations of the geometry which yield $d\Omega \neq 0$,
as we discussed above. Therefore, the potential which follows from
(\ref{eq:kprep'}) is a nontrivial function on our field space.
Let us try to apply the KK construction just reviewed to the manifold
${\cal M}'$. First of all we need some information about its spectrum.
We are arguing that ${\cal M}'$ is obtained from surgery. In \cite{dish},
it is found that the spectrum of the Dirac operator changes little, in an
appropriate sense, under surgery.
If we {\it assume} that this result goes through after twisting the Dirac
operator, we can in particular consider the Dirac operator on bispinors, also
known as the signature operator, which has the same spectrum as the Laplacian.
All this suggests that for very small $B$ and $g_s$ the spectrum on ${\cal M}'$
will be very close to the one on ${\cal M}$. Hence there will be an
eigenform of the Laplacian $\omega$ with a relatively small eigenvalue $m$
(and its partners discussed above),
corresponding to the extra harmonic forms generating $H^3$ before the
surgery.
By the reasoning above, this will also give eigenforms $\alpha$, $\beta$
and $\tilde\omega$.
Expanding now $\Omega= X^1 \alpha + \Omega_0$,
$J= t^1 \omega + J_0$, $B = b^1 \omega + B_0$ and $C_2= c^1 \omega + C_{20}$
(where $\Omega_0$, $J_0$, $B_0$ and $C_{20}$ represent the part of
the expansion in cohomology) and using the relation $\int_{{\cal M}'} \beta_3 \wedge
\alpha_3 = 1$, we get from (\ref{eq:kprep'}):
\begin{equation}
\label{eq:kp}
{\cal P}^1 + i {\cal P}^2 \sim m (b^1 + i t^1) X^1 , \qquad {\cal P}^3
\sim m (c^1-C_0 b^1) X^1 \ .
\end{equation}
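The first expression follows directly: $B_0$ and $J_0$ are closed, $d\omega=m\beta$, $\beta$ is exact (so that $\int\beta\wedge\Omega_0=0$ by Stokes) and $\int\beta\wedge\alpha=1$, whence
\begin{equation}
\int d(B+iJ)\wedge\Omega = m\,(b^1+i\,t^1)\int\beta\wedge\left(X^1\alpha+\Omega_0\right) = m\,(b^1+i\,t^1)\,X^1\ ,
\end{equation}
with ${\cal P}^3$ working analogously.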
The parameter $m$ measures the non-K\"ahlerness away from the
Calabi-Yau manifold $\cal{M}$, and should be proportional to the vev of
the brane hypermultiplet $\tilde{B}_0$ of \S2.2.
Clearly the formula is reminiscent of
the quadratic dependence on the $B$ hypermultiplet in (\ref{eq:kprep}). The
size of the curve $C$ is measured by $t^1$. Of course
$\tilde{B}_0$ is really a function of the $t^1$ and universal hypermultiplets.
Presumably, it and
the massive hyper $\tilde{B}_m$ in section \ref{sec:gauged} are different
linear combinations of the curve volume and $g_s$.
It is even tempting to map the ${\cal M}$ and ${\cal M}'$ variables
by mapping $B$ directly to $\int_C J = t^1$, and (very reasonably) mapping
the dilaton hypermultiplet on ${\cal M}$ directly into the one for
${\cal M}'$. Indeed, the size of $C$ would then be proportional to
$g_s$ (at least when both are small), which is consistent
with both being zero at the transition point.
Confirming this identification would
require more detailed knowledge of the map between variables.
However, since the formula for the Killing prepotentials has the
universal hypermultiplet in it (which can be seen from
(\ref{eq:kp}), where $C_0$ is mixed with other hypers and some
dilaton factor is omitted in the front), it could have $\alpha'$
corrections. Moreover, (\ref{eq:kprep'}) is only valid in the
supergravity regime where all the cycles are large compared with
the string length. Hence an exact matching between the Killing
prepotentials is lacking.
We can now attempt the following comparison between the spectrum of the vacua
and the KK spectrum on the conjectural ${\cal M}'$:
\begin{itemize}
\item On ${\cal M}$, one of the vectors, $X^1$, is given a mass by the
gauging $\int F_3 \wedge \Omega$.
On ${\cal M}'$, this vector becomes a deformation of
$\Omega$ which makes it not closed, $\Omega\to \Omega+ \alpha$,
$\Delta \alpha= m^2 \alpha$. In both pictures, the vacuum is at the point
$X^1=0$. On ${\cal M}$, this is because we have fixed the complex modulus
at the point in which $A^1$ shrinks. On ${\cal M}'$, the manifold which is
natural to propose from table \ref{table} is complex, and hence $d\Omega=0$.
\item The remaining vectors are untouched by either gauging and remain
massless.
\item Both for ${\cal M}$ and for ${\cal M}'$, there are $b_2 + 1$ massless
hypermultiplets.
\item From the perspective of the gauged supergravity analysis on
${\cal M}$ there is a massive hypermultiplet too: $B$ and the
universal hypermultiplet have mixed to give a massless direction, but
another combination will be massive. On ${\cal M}'$, there is also a massive
hypermultiplet: it is some combination of $g_s$ and $t^1$,
which multiplies the massive form $\omega$ (with
$\Delta \omega= m^2 \omega$) in the expansion of $J$.
To determine the precise combination one
needs better knowledge of $m(t^1,g_s)$ in (\ref{eq:kp}).
\end{itemize}
Again, this comparison uses the fact that there is a positive eigenvalue
of the Laplacian which is much smaller than the rest of the KK tower, and this
fact is inspired by the work in \cite{dish}.
This comparison cannot be made too precise for a number of reasons. One is,
as we have already noticed, that it is hard to control the spectrum, and we
had to draw inspiration from work which seemed relevant.
Another is that the KK reduction of ten-dimensional
supergravity on the manifold ${\cal M}'$
will not capture the full effective field
theory
precisely, as we are close (at small $B$ vevs) to a
point where a geometric transition has
occurred. Hence, curvatures are large in localized parts of ${\cal M}'$,
though the bulk of the space can be large and weakly curved.
And indeed, we know that
ten--dimensional type II supergravities do not allow
${\cal N}=2$ Minkowski vacua from non-K\"ahler
compactification manifolds in a regime where
all cycles are large enough
to trust supergravity
(though inclusion of further
ingredients like orientifolds, which are present in string theory,
can yield large radius ${\cal N}=2$ Minkowski vacua in this context
\cite{Orient}).
The vacua of \cite{ps}, and our own models, presumably evade this no-go
theorem via stringy corrections arising in the region localized around
the small resolution. Some of these corrections are captured
by the local field theory analysis reviewed in \S2.3, which gives
us a reasonable knowledge of the hyper moduli space close to the
singularity.
It should be noted that the family of vacua we have
found cannot simply disappear as one increases the expectation values of
the $B$ fields and $e^{\phi}$: the moduli space of ${\cal N}=2$ vacua is expected
to be analytic even for the fully--fledged string theory. However, new
terms in the expansion of the ${\cal P}^\alpha$'s in terms of the $B$
hypermultiplet will deform the line; and large $g_s$
will make the perturbative
type II description unreliable.
An issue that deserves separate treatment is the following. Why have we
assumed $F_3=0$ in (\ref{eq:kprep'})? It would seem that the integral
$\int_B F_3$ cannot simply go away. Usually, in conifold transitions
(especially noncompact ones) a flux becomes a brane, as the cycle becomes
contractible and surrounds a locus on which, by Gauss' law, there must be
a brane. This would be the case if, in figure \ref{fig:trans}, the
flux were on $A$: this would really mean a brane on $C$. In our case,
the flux is on $B$, on a chain which surrounds nothing. Without sources,
and without being non--trivial in cohomology, $F_3$ has no choice but to
disappear on ${\cal M}'$.
\bigskip
To summarize this section, we have conjectured to which manifolds the vacua
found in section \ref{sec:vacua} correspond. In this way, we have also
provided explicit symplectic--complex non--K\"ahler\ mirror pairs.
\section{The big picture: a space of geometries}
\label{sec:spec}
There are a few remarks that can be made about the type of complex
and symplectic manifolds that we have just analyzed, and that suggest
a more general picture. This is a speculative section, and it should be
taken as such.
One of the questions which motivated us is the following. The KK reduction
in \cite{glw} says that $\int dJ \wedge \Omega$ encodes the gauging of
the four--dimensional effective supergravity on ${\cal M}'$. Hence in some
appropriate sense (to be discussed below), $dJ$ must be integral
-- one would like
$\int dJ \wedge \Omega$ to be expressed in terms of integral combinations
of periods of $\Omega$. This is just because the allowed gauge charges
in the full string theory form an integral lattice.
But from existing discussions, the integral nature of $dJ$ is far
from evident.
Though one can normalize the massive forms
appropriately in such a way that the expression does give an integer, this
does not distinguish between several possible values for the gauging:
it is just a renormalization, not a quantization.
Without really answering this question, we want to suggest that there must be
a natural modification of cohomology that somehow encodes some of the
massive eigenvalues of the Laplacian, and that has integrality
built in. It will be helpful to refer again to figure \ref{fig:trans}: on
${\cal M}'$ (the manifold on the right in the lower line of figure 1),
we have depicted a few relevant
chains, obviously in a low--dimensional analogy. What used to be called
the $A$ cycle is now still a cycle, but trivial in homology, as it is bounded
by a four--cycle $D$. The dual $B$ cycle, on the other hand, is no longer
a cycle at all, but merely a chain, its boundary being the curve $C$.
This curve has already played a crucial role in showing that ${\cal M}'$
cannot be symplectic.
We want to suggest that a special role is played by the relative homology
groups $H_3({\cal M}',C)$ and $H_4({\cal M}',A)$.
Remember that relative homology is
the hypercohomology of $C_\bullet(C)
\buildrel {\iota_C}\over\longrightarrow C_\bullet({\cal M}')$,
with $C_k$ being chains and the map $\iota_C$ being the inclusion. In
plain English, chains in $C_k({\cal M}',C)$ are pairs of chains
$(c_k,\tilde c_{k-1})\in C_k({\cal M}') \times C_{k-1}(C)$,
and homology is given by considering the differential
\begin{equation}
\partial(c_k, \tilde c_{k-1}) = (\partial c_k + \iota_C(\tilde c_{k-1}),
- \partial \tilde c_{k-1})\ .
\end{equation}
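As a quick consistency check (added here, not part of the original argument), one can verify that this differential squares to zero; the only non-trivial cancellation uses the fact that the inclusion $\iota_C$ is a chain map, $\partial\,\iota_C=\iota_C\,\partial$:

```latex
\partial^2(c_k, \tilde c_{k-1})
= \partial\big(\partial c_k + \iota_C(\tilde c_{k-1}),\, -\partial \tilde c_{k-1}\big)
= \big(\partial\,\iota_C(\tilde c_{k-1}) - \iota_C(\partial \tilde c_{k-1}),\, \partial^2 \tilde c_{k-1}\big)
= (0,0)\ .
```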
So cycles in $H_k({\cal M}',C)$, for example,
are ordinary chains which have boundary on $C$. $B$ is precisely
such a chain. A long exact sequence can be used to show that, when $C$ is
a curve trivial in $H_2({\cal M}')$ as is our case,
$\mathrm{dim}(H_3({\cal M}',C))= \mathrm{dim}(H_3({\cal M}'))+ 1$.
So $(B,C)$ and the usual cycles generate $H_3({\cal M}',C)$. Similarly,
$\mathrm{dim}(H_4({\cal M}',A))= \mathrm{dim}(H_4({\cal M}'))+ 1$, and
the new generator is $(D,A)$.
Similar and dual statements are valid in cohomology. This is defined similarly
as for homology: pairs
$(\omega_k,\tilde\omega_{k-1}) \in \Omega^k({\cal M}')\times
\Omega^{k-1}(C)$, with a differential
\begin{equation}
d(\omega_k, \tilde\omega_{k-1})=(d\omega_k, \iota_C^*(\omega_k) -
d \tilde\omega_{k-1})\ .
\end{equation}
A non--trivial element of $H^3({\cal M}',C)$ is $(0,\mathrm{vol}_C)$.
Since $C$ is a holomorphic curve, $\mathrm{vol}_C=J_{|C}\equiv
\iota_C^* J$ and hence this
representative is also equivalent to $(dJ,0)$, using the differential
above.
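To spell out this equivalence (a short computation added for clarity; the overall sign depends on orientation conventions), apply the differential above to the pair $(J,0)$, with $J$ a two-form:

```latex
d(J,0) = \big(dJ,\ \iota_C^*\, J\big) = (dJ,\ \mathrm{vol}_C)\ ,
```

so $(dJ,0)$ and $(0,\mathrm{vol}_C)$ differ, up to a sign, by the exact element $d(J,0)$ and thus define the same class in $H^3({\cal M}',C)$.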
When we deform ${\cal M}'$ with the scalar in the massive vector multiplet
$X^1$, the manifold becomes non--complex, as we have shown in the previous
section; but
one does not require the almost complex structure to be
integrable to define an
appropriate notion of holomorphic curve.
In fact, one might expect then that, when
$d\Omega\neq0$, which corresponds to ${\cal M}'$ being non--complex,
one can also choose $A$ to be SLag (as we remarked earlier, the definition will not
really require that the almost symplectic structure be closed).\footnote{
The reader should not confuse this potential SLag, which may
exist off-shell in the IIB theory, with the pseudo-SLag
manifold that exists on ${\cal W}'$ where $d\Omega \neq 0$ even on the
${\cal N}=2$ supersymmetric solutions.}
Definitely, the logic would hold the other way around -- if such a SLag $A$ can
be found, $\int_A \Omega\neq 0$ and then, again by integration by parts,
it follows that $d\Omega\neq 0$.
In our example, we expect the number of units $n_1$ of $F_3$ flux present
before the transition in the IIB picture, to map to
``$n_1$ units of $dJ$'' on ${\cal M}'$.
The phrase in quotes has not been precisely defined, but
it is reasonable to think that it is defined by some kind of intersection
theory in relative homology. We will now try to make this more precise.
As we have seen, the dimension of
the relative $H_3$ can be
odd (and it is in our case), so we should not expect a pairing between
$A$ and $B$ cycles within the same group. One might try nevertheless to
define a pairing between chains in $H_3({\cal M}', C)$ and
$H_4({\cal M}', A)$; it would be defined by
\begin{equation}
(B,C)\cdot(D,A) \equiv \#(B\cap A)=\#(C\cap D)\ .
\end{equation}
In fact, if we think of another lower--dimensional analogy, in which both
$A$ and $C$ are {\it one}--dimensional in a three--dimensional manifold, it
is easy to see that what we
have just defined is a linking number between $C$ and $A$. Indeed,
$\mathrm{dim}(C)+\mathrm{dim}(A)= \mathrm{dim}({\cal M}')-1$.
This can also be rephrased in relative cohomology. Consider a bump--form
$\delta_A$ which is concentrated around $A$ and has only components transverse
to it, and similarly for $C$. These can be defined more precisely using
tubular neighborhoods and the Thom isomorphism \cite{Bott}.
Since $A$ and $C$ are trivial
in homology, we cannot quite say that these bump forms
are the Poincar\'e duals of $A$ and $C$.
But we can say that $(\delta_A,0) \in H^3({\cal M}',C)$ is the Poincar\'e dual to the
cycle $(D,A) \in H_4({\cal M}',A)$, with natural definitions for the pairing between
homology and cohomology. $\delta_A$ is non--trivial in relative cohomology but
trivial in the ordinary cohomology $H^3({\cal M}')$, and hence there exists an
$F_A$ such that $d F_A = \delta_A$. Then we have
\begin{equation}
\int_{{\cal M}'} F_A \wedge \delta_C = \int_C F_A = \int_B dF_A = \#(C\cap D)
\equiv L(A,C)
\ .
\end{equation}
In other words, in cohomology we have $L(A,C)= \int d^{-1}(\delta_A)
\wedge \delta_C$.
Suppose we have now another form $\tilde \delta_A$
which can represent the Poincar\'e dual (in relative cohomology)
to $(D,A)$. Then we can use this other form as well to compute the linking, with
identical result. This is because $(\delta_A,0)\sim (\tilde\delta_A,0)$ in
$H^3({\cal M}',C)$ means that, by the definition of the differential above,
$\delta_A - \tilde\delta_A= d \omega_2$ with $\omega_2$ satisfying
$\iota_C^*\omega_2= d\tilde\omega_1$ for some form $\tilde\omega_1$ on $C$.
Then
\begin{equation}
\int_{{\cal M}'} d^{-1}(\delta_A - \tilde\delta_A) \wedge \delta_C=
\int_{{\cal M}'} \omega_2 \wedge\delta_C = \int_C \omega_2 =
\int_C d\tilde\omega_1 =0
\end{equation}
so $L(A,C)$ does not depend on the choice of the Poincar\'e dual. But now,
remember that $(dJ,0)$ is also a non--trivial element of $H^3({\cal M}', C)$;
if we normalize the volume of $C$ to 1, it then has an equally valid claim
to be called a Poincar\'e dual to $(D,A)$. Indeed, $\int_{(B,C)}(dJ,0)
\equiv \int_B dJ= \int_C J= 1=
(D,A)\cdot(B,C)$, and for all other cycles the result is
zero. Similar reasonings apply to $d\Omega$. Then we can apply the steps above
and conclude that
\begin{equation}
L(A,C)= \int_{{\cal M}'} dJ\wedge\Omega\ .
\end{equation}
In doing this we have normalized the volumes of $C$ and $A$ to one; if
we reinstate those volumes, we get precisely that $\int dJ \wedge \Omega$ is
a linear function of the vectors and hypers with an integral slope.
Another point which seems to be suggesting itself is the relation between
homologically trivial
Special Lagrangians and holomorphic curves on one side, and massive terms
in the expansion of $\Omega$ and $J$ on the other.
The presence of
a holomorphic but trivial curve, as we have already recalled, implies
that $dJ\neq0$: in the previous section we have seen that one actually
expects that such curves are in one--to--one correspondence with massive
eigenforms of the Laplacian present in the expansion of
$J$ (whose coefficients represent massive fields, which vanish in
vacuum). We have argued for this
relation close
to the transition point, and for the ${\cal M}'$ that we have constructed,
but it might be that this link persists in general.
This would mean that inside an arbitrary SU(3) structure manifold,
one would have massive fields which are naturally singled out, associated to
homologically trivial holomorphic curves.
Similarly, in the IIA on ${\cal W}'$, there is a 3-cycle which is (pseudo)
Special Lagrangian but homologically trivial.
Its presence implies that $d\Omega \neq 0$, in keeping with the fact
that the IIA vacua are non-complex.
Reid's fantasy \cite{reid}\ involved the conjecture that by shrinking
$(-1)$--curves, and then deforming, one may find a connected configuration
space of complex threefolds with $K=0$.
Here, we see that it is natural to extend this fantasy to include
a mirror conjecture: that the space of symplectic non-complex manifolds
with SU(3) structure is similarly connected, perhaps via transitions
involving the contraction of (pseudo) Special Lagrangian cycles, followed
by small resolutions. The specialization to $(-1)$--curves in \cite{reid}\
is probably mirror to the requirement that the SLags be rigid, in
the sense that $b^1=0$.
In either IIB or IIA, we have seen that (at least close to the
transition) there is a natural
set of massive fields to include in the low-energy theory, associated
with the classes of cycles described above.
Allowing these fields to take on expectation values may allow one to
move off-shell, filling out a
finite--dimensional (but large) configuration space, inside
which complex and symplectic
manifolds would be zeros of a stringy effective potential.
While finding such an ${\cal N}=2$ configuration space together with
an appropriate
potential to reveal all ${\cal N}=2$ vacua is clearly an ambitious goal,
it may also provide a fruitful warm-up problem for
the more general question of
characterizing the string theory ``landscape'' of ${\cal N} \leq 1$ vacua
\cite{landscape}.
In this bigger picture, this paper is a Taylor expansion of the
master potential around a corner in which the moduli space of ${\cal M}'$
meets the moduli space of compactifications on
${\cal M}$ with RR flux.
{\bf Acknowledgments:} We would like to thank P.~Aspinwall, B.~Florea
and A.~Kashani-Poor for useful discussions, and I.~Smith and R.~Thomas for
some patient explanations of their work.
The authors received support from the DOE under contract
DE-AC03-76SF00515 and from the National Science Foundation under grant
0244728.
SK was also supported by a David and Lucile Packard Foundation Fellowship
for Science and Engineering.
Open quantum system techniques are vital for many studies in quantum mechanics \cite{gardiner_00,breuer_02,rivas_12}. This happens because closed quantum systems are just an idealisation of real systems\footnote{The same happens with closed classical systems.}, as in Nature nothing can be isolated. In practical problems, the interaction of the system of interest with the environment cannot be avoided, and we require an approach in which the environment can be effectively removed from the equations of motion.
The general problem addressed by Open Quantum Theory is sketched in Figure \ref{fig:fig0}. In the most general picture, we have a total system that constitutes a closed quantum system by itself. We are mostly interested in a subsystem of the total one (we call it just ``system'' instead of ``total system''). Therefore, the whole system is divided into our system of interest and an environment. The goal of Open Quantum Theory is to infer the equations of motion of the reduced system from the equations of motion of the total system. For practical purposes, the reduced equations of motion should be easier to solve than the full dynamics of the system. Because of this requirement, several approximations are usually made in the derivation of the reduced dynamics.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.2]{figure0}
\end{center}
\caption{A total system divided into the system of interest, ``System'', and the environment. }
\label{fig:fig0}
\end{figure}
One particularly interesting case of study is the dynamics of a system connected to several baths, modelled by a Markovian interaction. In this case the most general quantum dynamics is generated by the Lindblad equation (also called the Gorini-Kossakowski-Sudarshan-Lindblad equation) \cite{lindblad:cmp76,gorini:jmp76}. It is difficult to overemphasize the importance of this Master Equation. It plays an important role in fields such as quantum optics \cite{gardiner_00,manzano:sr16}, condensed matter \cite{prosen:prl11,manzano:pre12,manzano:njp16,olmos:prl12}, atomic physics \cite{metz:prl06,jones:pra18}, quantum information \cite{lidar:prl98,kraus:08}, decoherence \cite{brun:pra00,schlosshauer_07}, and quantum biology \cite{plenio:njp08,mohseni:jcp08, manzano:po13}.
The purpose of this paper is to provide basic knowledge about the Lindblad Master Equation. In Section \ref{sec:math}, the mathematical requirements are introduced, while in Section \ref{sec:qm} there is a brief review of the quantum mechanical concepts that are required to understand the paper. Section \ref{sec:fl} includes a description of a mathematical framework, the Fock-Liouville space, that is especially useful for working on this problem. In Section \ref{cpt}, we define the concept of CPT-maps, derive the Lindblad Master Equation from two different approaches, and discuss several properties of the equation. Finally, Section \ref{sec:resolution} is devoted to the resolution of the master equation using different methods. To illustrate the techniques for solving the Lindblad equation, an example consisting of a two-level system with decay is analysed, illustrating the content of every section. The problems proposed are solved by the use of Mathematica notebooks that can be found at \cite{notebook}.
\section{Mathematical basis}
\label{sec:math}
The primary mathematical tool in quantum mechanics is the theory of Hilbert spaces. This mathematical framework allows extending many results from finite linear vector spaces to infinite ones. In any case, this tutorial deals only with finite systems and, therefore, the expressions `Hilbert space' and `linear space' are equivalent. We assume that the reader is skilled in operating in Hilbert spaces. To go deeper into the theory of Hilbert spaces we recommend the book by Debnath and Mikusi\'nski \cite{debnath_05}. If the reader needs a brief review of the main concepts required for understanding this paper, we recommend Nielsen and Chuang's Quantum Computing book \cite{nielsen_00}. Some basic knowledge of infinitesimal calculus is also required, such as integration, differentiation, and the resolution of simple differential equations. To help the reader, we have made a glossary of the most used mathematical terms. It can also be used as a checklist of terms the reader should be familiar with.
\vspace{0.25cm}
\noindent
{\bf Glossary:}
\begin{itemize}
\item ${\cal H}$ represents a Hilbert space, usually the space of pure states of a system.
\item $\ket{\psi}\in {\cal H}$ represents a vector of the Hilbert space ${\cal H}$ (a column vector).
\item $\bra{\psi}\in {\cal H}$ represents a vector of the dual Hilbert space of ${\cal H}$ (a row vector).
\item $\bracket{\psi}{\phi}\in \mathbb{C}$ is the scalar product of vectors $\ket{\psi}$ and $\ket{\phi}$.
\item $\norm{\ket{\psi}}$ is the norm of vector $\ket{\psi}$. $\norm{\ket{\psi}}\equiv\sqrt{\bracket{\psi}{\psi}}$.
\item $B({\cal H})$ represents the space of bounded operators acting on the Hilbert space $B:{\cal H} \to {\cal H}$.
\item $\mathbb 1_{{\cal H}}\in B({\cal H})$ is the Identity Operator of the Hilbert space ${\cal H}$ s.t. $\mathbb 1_{{\cal H}}\ket{\psi}=\ket{\psi},\; \; \forall \ket{\psi}\in {\cal H} $.
\item $\op{\psi}{\phi}\in B({\cal H})$ is the operator such that $\pare{\op{\psi}{\phi}} \ket{\varphi}=\bracket{\phi}{\varphi} \ket{\psi},\; \; \forall \ket{\varphi} \in {\cal H}$.
\item $O^\dagger\in B({\cal H})$ is the Hermitian conjugate of the operator $O\in B({\cal H})$.
\item $U\in B({\cal H})$ is a unitary operator iff $U U^{\dagger}=U^{\dagger}U=\mathbb 1$.
\item $H\in B({\cal H})$ is a Hermitian operator iff $H=H^{\dagger}$.
\item $A\in B({\cal H})$ is a positive operator $\pare{A> 0}$ iff $\bra{\phi} A \ket{\phi}\ge0,\;\; \forall \ket{\phi}\in {\cal H}$
\item $P\in B({\cal H})$ is a projector iff $P P=P$.
\item $\textrm{Tr}\cor{B}$ represents the trace of operator $B$.
\item $\rho\pare{{\cal H}}$ represents the space of density matrices, meaning the space of bounded operators acting on ${\cal H}$ that are positive and have trace $1$.
\item $\kket{\rho}$ is a vector in the Fock-Liouville space.
\item $\bbracket{A}{B}=\textrm{Tr}\cor{A^\dagger B}$ is the scalar product of operators $A,B\in B({\cal H})$ in the Fock-Liouville space.
\item $\tilde{{\cal L}}$ is the matrix representation of a superoperator in the Fock-Liouville space.
\end{itemize}
\section{(Very short) Introduction to quantum mechanics}
\label{sec:qm}
The purpose of this chapter is to refresh the main concepts of quantum mechanics necessary to understand the Lindblad Master Equation. Of course, this is NOT a full quantum mechanics course. If a reader has no background in this field, just reading this chapter would be insufficient to understand the remainder of this tutorial. Therefore, if the reader is unsure of his/her capacities, we recommend going first through a quantum mechanics course or reading an introductory book carefully. There are many great quantum mechanics books on the market. For beginners, we recommend Sakurai's book \cite{sakurai_94} or Nielsen and Chuang's Quantum Computing book \cite{nielsen_00}. For more advanced students, looking for a solid mathematical description of quantum mechanics methods, we recommend Galindo and Pascual \cite{galindo_pascual_90}. Finally, for a more philosophical discussion, you should go to Peres' book \cite{peres_95}.
We start stating the quantum mechanics postulates that we need to understand the derivation and application of the Lindblad Master Equation. The first postulate is related to the concept of a quantum state.
\vspace{0.5cm}
\begin{postulate}
Associated to any isolated physical system, there is a complex Hilbert space ${\cal H}$, known as the {\bf state space} of the system. The state of the system is entirely described by a {\it state vector}, which is a unit vector of the Hilbert space $(\ket{\psi}\in {\cal H})$.
\end{postulate}
\vspace{0.5cm}
\noindent
As quantum mechanics is a general theory (or a set of theories), it does not tell us which is the proper Hilbert space for each system. This is usually determined system by system. A natural question to ask is if there is a one-to-one correspondence between unit vectors and physical states, meaning whether every unit vector corresponds to a physical state. This is resolved by the following corollary, which is a primary ingredient for quantum computation theory (see Ref. \cite{nielsen_00} Chapter 7).
\vspace{0.5cm}
\begin{corollary}
All unit vectors of a finite Hilbert space correspond to possible physical states of a system.
\end{corollary}
\vspace{0.5cm}
\noindent
Unit vectors are also called {\it pure states}. If we know the pure state of a system, we have all physical information about it, and we can calculate the probabilistic outcomes of any potential measurement (see the next postulate). This is a very improbable situation, as experimental settings are not perfect and, in most cases, we have only imperfect information about the state. More generally, we may know that a quantum system is in one state of a set $\key{\ket{\psi_i}}$ with probabilities $p_i$. Therefore, our knowledge of the system is given by an {\it ensemble of pure states} described by the set $\key{\ket{\psi_i}, \; p_i}$. If more than one $p_i$ is different from zero, the state is not pure anymore, and it is called a {\it mixed state}. The mathematical tool that describes our knowledge of the system in this case is the {\it density operator} (or {\it density matrix}).
\begin{equation}
\rho \equiv \sum_i p_i \op{\psi_i}{\psi_i}.
\label{eq:dm}
\end{equation}
Density matrices are bounded operators that fulfil two mathematical conditions
\begin{enumerate}
\item A density matrix $\rho$ has unit trace $\pare{\textrm{Tr}[\rho]=1 }$.
\item A density matrix is a positive matrix $\rho>0$.
\end{enumerate}
Any operator fulfilling these two properties is considered a density operator. It can be proved trivially that density matrices are also Hermitian.
If we are given a density matrix, it is easy to verify whether it corresponds to a pure or a mixed state. For pure states, and only for them, $\textrm{Tr}[\rho^2]=\textrm{Tr}[\rho]=1$. Therefore, if $\textrm{Tr}[\rho^2]<1$ the state is mixed. The quantity $\textrm{Tr}[\rho^2]$ is called the purity of the state, and it fulfils the bounds $\frac{1}{d} \le \textrm{Tr}[\rho^2] \le 1$, being $d$ the dimension of the Hilbert space.
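As a hypothetical numerical illustration (not part of the original text), the purity criterion can be checked for a qubit with numpy; the two states below are arbitrary examples:

```python
import numpy as np

# Example pure state |+> = (|0> + |1>)/sqrt(2); rho = |+><+| is a projector.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())

# Maximally mixed qubit state: rho = Identity / 2.
rho_mixed = np.eye(2) / 2

purity_pure = np.trace(rho_pure @ rho_pure).real     # Tr[rho^2] = 1 for pure states
purity_mixed = np.trace(rho_mixed @ rho_mixed).real  # Tr[rho^2] = 1/d = 0.5 here

print(purity_pure, purity_mixed)
```

Both states have unit trace, but only the pure one saturates the upper purity bound; the maximally mixed state sits at the lower bound $1/d$.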
If we fix an arbitrary basis $\key{\ket{i}}_{i=0}^N$ of the Hilbert space, the density matrix in this basis is written as $\rho=\sum_{i,j=0}^N \rho_{i,j} \op{i}{j}$, or
\begin{equation}
\rho=
\begin{pmatrix}
\rho_{00} & \rho_{01} & \cdots & \rho_{0N} \\
\rho_{10} & \rho_{11} & \cdots & \rho_{1N} \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{N0} & \rho_{N1} & \cdots & \rho_{NN}
\end{pmatrix},
\end{equation}
where the diagonal elements are called {\it populations} $\pare{\rho_{ii}\in\mathbb{R}_0^+\text{ and } \sum_{i} \rho_{i,i}=1}$, while the off-diagonal elements are called {\it coherences} $\pare{ \rho^{\phantom{*}}_{i,j} \in \mathbb{C} \text{ and } \rho^{\phantom{*}}_{i,j}=\rho_{j,i}^*}$. Note that this notation is basis-dependent.
\begin{center}
\vspace{0.5cm}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 1. State of a two-level system (qubit)}
\vspace{0.25cm}
The Hilbert space of a two-level system is just the two-dimensional linear space ${\cal H}_2$. Examples of this kind of system are spin-$\frac{1}{2}$ particles and two-level atoms. We can define a basis of it by the orthonormal vectors: $\key{ \ket{0},\;\ket{1}}$. A pure state of the system would be any unit vector of ${\cal H}_2$. It can always be expressed as $\ket{\psi}=a\ket{0} + b\ket {1}$ with $a,b \in \mathbb{C}$ s. t. $\abs{a}^2 + \abs{b}^2=1$.
\vspace{0.25cm}
A mixed state is therefore represented by a positive, unit-trace operator $\rho\in B({\cal H}_2)$.
\begin{equation}
\rho =
\begin{pmatrix}
\rho_{00} & \rho_{01} \\
\rho_{10} & \rho_{11}
\end{pmatrix}
= \rho_{00} \proj{0} + \rho_{01} \op{0}{1} + \rho_{10} \op{1}{0} + \rho_{11} \proj{1},
\label{eq:denmat}
\end{equation}
and it should fulfil $\rho_{00}+\rho_{11}=1$ and $\rho_{01}^{\phantom{*}}=\rho_{10}^*$.
\vspace{0.25cm}
\end{minipage}
\label{minipage1}
}
\end{center}
\vspace{0.5cm}
\noindent
Once we know the state of a system, it is natural to ask about the possible outcomes of experiments (see Ref. \cite{sakurai_94}, Section 1.4).
\vspace{0.5cm}
\begin{postulate}
All possible measurements in a quantum system are described by a Hermitian operator or {\bf observable}. Due to the Spectral Theorem we know that any observable $O$ has a spectral decomposition in the form\footnote{For simplicity, we assume a non-degenerated spectrum.}
\begin{equation}
O=\sum_i a_i \op{a_i}{a_i},
\end{equation}
being $a_i\in\mathbb{R}$ the eigenvalues of the observable and $\ket{a_i}$ their corresponding eigenvectors. The probability of obtaining the result $a_i$ when measuring the property described by observable $O$ in a state $\ket{\psi}$ is given by
\begin{equation}
P(a_i)= \left| \bracket{\psi}{a_i} \right|^2.
\end{equation}
After the measurement we obtain the state $\ket{a_i}$ if the outcome $a_i$ was measured. This is called the {\it post-measurement state}.
\label{post4}
\end{postulate}
\vspace{0.5cm}
This postulate allows us to calculate the possible outcomes of a measurement, their probabilities, as well as the post-measurement state. A measurement usually changes the state, which can remain unchanged only if it was already in an eigenstate of the observable.
It is possible to calculate the expectation value of the outcome of a measurement defined by operator $O$ in a state $\ket{\psi}$ by just applying the simple formula
\begin{equation}
\mean{O}= \bra{\psi} O \ket{\psi}.
\end{equation}
With a little algebra we can translate this postulate to mixed states. In this case, the probability of obtaining an output $a_i$ that corresponds to an eigenvector $\ket{a_i}$ is
\begin{equation}
P(a_i)=\textrm{Tr}\cor{\proj{a_i}\rho},
\end{equation}
and the expectation value of operator $O$ is
\begin{equation}
\mean{O}=\textrm{Tr} \cor{O\rho}.
\end{equation}
\begin{center}
\vspace{0.5cm}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 2. Measurement in a two-level system.}
\vspace{0.25cm}
A possible test to perform in our minimal model is to measure the energetic state of a system, assuming that both states have a different energy. The observable corresponding to this measurement would be
\begin{equation}
H=E_0 \proj{0} + E_1 \proj{1}.
\end{equation}
This operator has two eigenvalues $\key{E_0,\;E_1}$ with two corresponding eigenvectors $\key{\ket{0},\; \ket{1}}$.
\vspace{0.25cm}
If we have a pure state $\ket{\psi}=a\ket{0} + b \ket{1}$, the probability of measuring the energy $E_0$ would be $P(E_0)=\abs{\bracket{0}{\psi}}^2=\abs{a}^2$. The probability of finding $E_1$ would be $P(E_1)=\abs{\bracket{1}{\psi}}^2=\abs{b}^2$. The expected value of the measurement is $\mean{H}= E_0\abs{a}^2+ E_1\abs{b}^2$.
\vspace{0.25cm}
In the more general case of having a mixed state $\rho=\rho_{00} \proj{0} + \rho_{01} \op{ 0}{1} + \rho_{10} \op{1}{0} + \rho_{11} \proj{1}$, the probability of finding the ground-state energy is $P(E_0)=\textrm{Tr} \cor{ \proj{0} \rho }= \rho_{00}$, and the expected value of the energy would be $\mean{H}=\textrm{Tr} \cor{H\rho}= E_0 \rho_{00} + E_1 \rho_{11}$.
\vspace{0.25cm}
\end{minipage}
\label{minipage2}
}
\end{center}
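The computations of Box 2 can be reproduced numerically. The sketch below uses numpy, with assumed example values for $E_0$, $E_1$, $a$ and $b$ (any values satisfying the normalisation condition would do):

```python
import numpy as np

E0, E1 = 0.0, 1.0                     # example energies (assumed values)
H = np.diag([E0, E1])                 # H = E0 |0><0| + E1 |1><1|

a, b = np.sqrt(0.25), np.sqrt(0.75)   # |a|^2 + |b|^2 = 1
psi = np.array([a, b])                # |psi> = a|0> + b|1>
rho = np.outer(psi, psi.conj())       # density matrix of the pure state

P_E0 = rho[0, 0].real                 # P(E0) = Tr[|0><0| rho] = rho_00
mean_H = np.trace(H @ rho).real       # <H> = Tr[H rho] = E0 |a|^2 + E1 |b|^2

print(P_E0, mean_H)                   # 0.25 and 0.75
```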
\vspace{0.5cm}
\noindent
Another natural question to ask is how quantum systems evolve. The time-evolution of a pure state of a closed quantum system is given by the Schr\"odinger equation (see \cite{galindo_pascual_90}, Section 2.9).
\vspace{0.5cm}
\begin{postulate}
Time evolution of a pure state of a closed quantum system is given by the Schr\"odinger equation
\begin{equation}
\frac{d}{dt} \ket{\psi(t)} = -\frac{i}{\hbar} H\ket{\psi(t)},
\label{eq:sch}
\end{equation}
where $H$ is the {\it Hamiltonian} of the system, a Hermitian operator acting on the Hilbert space of the system (from now on we avoid including Planck's constant by selecting units such that $\hbar=1$).
\label{post3}
\end{postulate}
\vspace{0.5cm}
\noindent
The Hamiltonian of a system is the operator corresponding to its energy, and obtaining its explicit form for a given system can be non-trivial.
Schr\"odinger equation can be formally solved in the following way. If at $t=0$ the state of a system is given by $\ket{\psi(0)}$ at time $t$ it will be
\begin{equation}
\ket{\psi(t)}=e^{-i Ht } \ket{\psi(0)}.
\end{equation}
As $H$ is a Hermitian operator, the operator $U=e^{-i Ht }$ is unitary. This gives us another way of phrasing Postulate \ref{post3}.
\vspace{0.5cm}
{\bf Postulate 3'}
{\it The evolution of a closed system is given by a unitary operator of the Hilbert space of the system }
\begin{equation}
\ket{\psi(t)}=U \ket{\psi(0)},
\label{eq:evol}
\end{equation}
{\it with} $U\in {\cal B}\pare{{\cal H}}$ {\it s.t.} $U U^{\dagger}=U^{\dagger}U=\mathbb 1$.
\vspace{0.5cm}
\noindent
It is easy to prove that unitary operators preserve the norm of vectors and, therefore, transform pure states into pure states. As we did with the state of a system, it is reasonable to wonder if any unitary operator corresponds to the evolution of a real physical system. The answer is yes.
\vspace{0.5cm}
\begin{lemma}
Any unitary evolution of a state belonging to a finite-dimensional Hilbert space can be constructed in physical realisations such as photons and cold atoms.
\end{lemma}
\noindent
The proof of this lemma can be found in Ref. \cite{nielsen_00}.
The time evolution of a mixed state can be calculated by combining Eqs. (\ref{eq:sch}) and (\ref{eq:dm}), giving the von Neumann equation
\begin{equation}
\dot{\rho} = - i \cor{H,\rho}\equiv {\cal L} \rho,
\label{eq:vne}
\end{equation}
where we have used the commutator $\cor{A,B}=AB-BA$, and ${\cal L}$ is the so-called Liouvillian superoperator.
It is easy to prove that the Hamiltonian dynamics does not change the purity of a system
\begin{equation}
\frac{d}{dt} \textrm{Tr}\cor{\rho^2} = \textrm{Tr}\cor{ \frac{d \rho^2}{dt} } = \textrm{Tr}\cor{ 2\rho \dot{\rho} } = -2 i \textrm{Tr}\cor{ \rho\pare{ H\rho -\rho H } }=0,
\end{equation}
where we have used the cyclic property of the trace. This result illustrates that the degree of mixedness of a state does not change under Hamiltonian evolution.
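This invariance is easy to check numerically. The following is a minimal illustrative sketch (an aside, not part of the original discussion) in Python/NumPy: a mixed qubit state is evolved with $U=e^{-iHt}$, built from the eigendecomposition of an arbitrary Hermitian $H$, and the purity $\textrm{Tr}\cor{\rho^2}$ is compared before and after.

```python
import numpy as np

# An arbitrary 2x2 Hermitian Hamiltonian (it generates a unitary evolution).
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2

# A mixed single-qubit state (trace 1, purity < 1).
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)

# U = exp(-i H t) via the eigendecomposition of H.
t = 1.3
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

rho_t = U @ rho @ U.conj().T          # rho(t) = U rho U^dagger

purity_0 = np.trace(rho @ rho).real   # Tr[rho^2] before evolution
purity_t = np.trace(rho_t @ rho_t).real
```

The two purities agree to machine precision, in line with the calculation above.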
\begin{center}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 3. Time evolution of a two-level system.}
\vspace{0.25cm}
The evolution of our isolated two-level system is described by its Hamiltonian
\begin{equation}
H_{\text{free}}=E_0 \proj{0} + E_1 \proj{1}.
\label{eq:atomham}
\end{equation}
As the states $\ket{0}$ and $\ket{1}$ are Hamiltonian eigenstates, if at $t=0$ the atom is in the excited state $\ket{\psi(0)}=\ket{1}$, after a time $t$ the state will be $\ket{\psi(t)}=e^{-iHt} \ket{1}=e^{-i E_1 t} \ket{1}$.
\vspace{0.1cm}
As the system was already in an eigenvector of the Hamiltonian, its time evolution consists only of adding a global phase to the state, without changing its physical properties. (If an excited state does not change, why do atoms decay?) Without loss of generality we can fix the energy of the ground state to zero, obtaining
\begin{equation}
H_{\text{free}}= E \proj{1},
\label{eq:atomham2}
\end{equation}
with $E\equiv E_1$. To make the model more interesting, we can include a driving that coherently switches between both states. The total Hamiltonian is then
\begin{equation}
H=E \proj{1} + \Omega \pare{\op{0}{1} +\op{1}{0}},
\end{equation}
where $\Omega$ is the driving frequency. By using the von Neumann equation (\ref{eq:vne}) we can calculate the populations $\pare{\rho_{00},\rho_{11}}$ as a function of time. The system is then driven between the states, and the populations present Rabi oscillations, as shown in Fig. \ref{figure1}.
\begin{center}
\includegraphics[scale=.8]{figure1}
\captionof{figure}{Population dynamics of the driven two-level system under unitary evolution (parameters: $\Omega=1,\; E=1$). The blue line represents $\rho_{11}$ and the orange one $\rho_{00}$.}
\label{figure1}
\end{center}
\end{minipage}
}\end{center}
\vspace{0.5cm}
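The Rabi oscillations of Box 3 can be reproduced numerically. The sketch below (an illustrative aside, not taken from the text) diagonalises the driven Hamiltonian in the $\{\ket{0},\ket{1}\}$ basis and evolves the initial state $\ket{1}$; for this two-level problem the ground-state population follows the standard Rabi formula $\rho_{00}(t)=\frac{\Omega^2}{\lambda^2}\sin^2(\lambda t)$, with $\lambda=\sqrt{\Omega^2+E^2/4}$.

```python
import numpy as np

E, Omega = 1.0, 1.0
H = np.array([[0.0, Omega],
              [Omega, E]], dtype=complex)     # basis ordering (|0>, |1>)

rho0 = np.array([[0.0, 0.0],
                 [0.0, 1.0]], dtype=complex)  # atom starts in |1>

w, V = np.linalg.eigh(H)                      # exact diagonalisation of H

def rho_t(t):
    """rho(t) = U rho0 U^dagger with U = exp(-i H t)."""
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return U @ rho0 @ U.conj().T

# Populations rho_11(t) over a time grid: these are the Rabi oscillations.
ts = np.linspace(0.0, 10.0, 200)
pop1 = np.array([rho_t(t)[1, 1].real for t in ts])
```

Plotting `pop1` against `ts` reproduces the qualitative behaviour of Fig. \ref{figure1}.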
\noindent
Finally, as we are interested in composite quantum systems, we need to postulate how to work with them.
\vspace{0.5cm}
\begin{postulate}
The state-space of a composite physical system, composed by $N$ subsystems, is the tensor product of the state space of each component
${\cal H}={\cal H}_1 \otimes {\cal H}_2 \otimes \cdots \otimes {\cal H}_N$. The state of the composite physical system is given by a unit vector of ${\cal H}$. Moreover, if each subsystem belonging to ${\cal H}_i$ is prepared in the state $\ket{\psi_i}$ the total state is given by $\ket{\psi}=\ket{\psi_1} \otimes \ket{\psi_2} \otimes \cdots \otimes\ket{\psi_N}$.
\end{postulate}
\vspace{0.5cm}
\noindent
The symbol $\otimes$ represents the tensor product of Hilbert spaces, vectors, and operators. If we have a composite mixed state where each component is prepared in the state $\rho_i$, the total state is given by $\rho=\rho_1 \otimes \rho_2 \otimes \cdots \otimes\rho_N$.
States that can be expressed in the simple product form $\ket{\psi}=\ket{\psi_1} \otimes \ket{\psi_2}$ are very particular, and they are called {\it separable states} (for this discussion we use a bipartite system as an example; the extension to a general multipartite system is straightforward). In general, an arbitrary state has to be described as $\ket{\psi}=\sum_{i,j} c_{ij}\, \ket{\psi_i} \otimes \ket{\psi_j}$ (or $\rho=\sum_{i,j} p_{ij}\, \rho_i \otimes \rho_j$ for mixed states). Non-separable states are called {\it entangled states}.
Now that we know how to compose systems, we may be interested in going the other way around. If we have a system belonging to a bipartite Hilbert space of the form ${\cal H}={\cal H}_a \otimes {\cal H}_b$, we may want to study some properties of the subsystem corresponding to one of the subspaces. To do so, we define the {\it reduced density matrix}. If the state of our system is described by a density matrix $\rho$, the reduced density operator of subsystem $a$ is defined by the operator
\begin{equation}
\rho_{a} \equiv \textrm{Tr}_{b} \cor{\rho},
\end{equation}
where $\textrm{Tr}_b$ is the partial trace over subspace $b$, defined as \cite{nielsen_00}
\begin{equation}
\textrm{Tr}_b \cor{ \sum_{i,j,k,l} \op{a_i}{a_j} \otimes \op{b_k}{b_l} } \equiv\sum_{i,j} \op{a_i}{a_j} \textrm{Tr} \cor{ \sum_{k,l} \op{b_k}{b_l}}.
\end{equation}
The concepts of reduced density matrix and partial trace are essential in the study of open quantum systems. If we want to calculate the equation of motions of a system affected by an environment, we should trace out this environment and deal only with the reduced density matrix of the system. This is the main idea of the theory of open quantum systems.
\newpage
\begin{center}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 4. Two two-level atoms}
\vspace{0.25cm}
If we have two two-level systems, the total Hilbert space is given by ${\cal H}={\cal H}_2\otimes {\cal H}_2$. A basis of this Hilbert space would be given by the set $\left\{ \ket{00}\equiv \ket{0}_1 \otimes\ket{0}_2,\; \ket{01}\equiv \ket{0}_1 \otimes\ket{1}_2,\; \ket{10}\equiv \ket{1}_1 \otimes\ket{0}_2,\;\ket{11}\equiv \ket{1}_1 \otimes\ket{1}_2 \right\}$. If both systems are in their ground state, we can describe the total state by the separable vector
\begin{equation}
\ket{\psi}_G=\ket{00}.
\end{equation}
A more complex, but still separable, state can be formed if both systems are in superposition.
\begin{eqnarray}
\ket{\psi}_S&=&\frac{1}{\sqrt{2}} \left( \ket{0}_1 +\ket{1}_1 \right) \otimes \frac{1}{ \sqrt{2}} \left( \ket{0}_2 +\ket{1}_2 \right) \nonumber\\
&=& \frac{1}{2} \left( \ket{00} + \ket{10} + \ket{01} + \ket{11} \right)
\end{eqnarray}
An entangled state would be
\begin{equation}
\ket{\psi}_E=\frac{1}{ \sqrt{2}} \left( \ket{00} +\ket{11} \right).
\end{equation}
This state cannot be separated into a direct product of each subsystem. If we want to obtain a reduced description of subsystem $1$ (or $2$) we have to use the partial trace. To do so, we need first to calculate the density matrix corresponding to the pure state $\ket{\psi}_E$.
\begin{equation}
\rho_E=\ket{\psi}\bra{\psi}_E = \frac{1}{2} \left( \proj{00} + \op{00}{11} + \op{11}{00} + \proj{11} \right).
\end{equation}
We can now calculate the reduced density matrix of the subsystem $1$ by using the partial trace.
\begin{equation}
\rho_E^{(1)}= \bra{0}_2 \rho_E \ket{0}_2 + \bra{1}_2 \rho_E \ket{1}_2 = \frac{1}{2} \left( \proj{0}_1 + \proj{1}_1 \right).
\end{equation}
From this reduced density matrix, we can calculate all the measurement statistics of subsystem $1$.
\vspace{0.25cm}
\end{minipage}
}
\end{center}
\vspace{0.5cm}
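Box 4's partial-trace computation is easy to reproduce numerically. In the sketch below (an illustrative aside; `ptrace_2` is a helper defined here, not a library routine), the $4\times4$ density matrix is reshaped into a rank-4 tensor and the indices of subsystem $2$ are summed over; the reduced state of subsystem $1$ of the entangled state comes out maximally mixed.

```python
import numpy as np

# Bell state |psi>_E = (|00> + |11>)/sqrt(2), basis ordering (|00>,|01>,|10>,|11>).
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def ptrace_2(rho, d1=2, d2=2):
    """Partial trace over subsystem 2: rho_(ik),(jl) -> sum_k rho_(ik),(jk)."""
    r = rho.reshape(d1, d2, d1, d2)
    return np.einsum('ikjk->ij', r)

rho_1 = ptrace_2(rho)   # reduced state of subsystem 1: the maximally mixed I/2
```

The reduced state $\rho_E^{(1)}=\mathbb 1/2$ has purity $1/2$, even though the total state is pure, which is a hallmark of entanglement.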
\newpage
\section{The Fock-Liouville Hilbert space. The Liouville superoperator}
\label{sec:fl}
In this section, we revise a useful framework for both analytical and numerical calculations. Note that certain linear combinations of density matrices (those preserving positivity and unit trace) are themselves valid density matrices. Because of that, we can create a Hilbert space of density matrices just by defining a scalar product. This is immediate for finite-dimensional systems, where any linear space with a scalar product is a Hilbert space, and it also holds for infinite-dimensional spaces. This allows us to define a linear space of matrices, converting the matrices effectively into vectors ($\rho\to\kket{\rho}$). This is called the Fock-Liouville space (FLS). The scalar product of two matrices $\phi$ and $\rho$ is usually defined as $\bbracket{\phi}{\rho}\equiv\textrm{Tr}\cor{\phi^\dagger\rho}$. The Liouville superoperator from Eq. (\ref{eq:vne}) is now an operator acting on this Hilbert space of density matrices. The main utility of the FLS is that it allows a matrix representation of the evolution operator.
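As a concrete illustration (an aside; `to_fls` and `fls_dot` are helper names introduced here), flattening a matrix row-major reproduces the ordering $\pare{\rho_{00},\rho_{01},\rho_{10},\rho_{11}}$ used in Box 5, and the FLS scalar product $\bbracket{\phi}{\rho}=\textrm{Tr}\cor{\phi^\dagger\rho}$ reduces to the ordinary complex inner product of the flattened vectors.

```python
import numpy as np

def to_fls(m):
    # Row-major flattening: (rho_00, rho_01, rho_10, rho_11) for a qubit.
    return m.reshape(-1)

def fls_dot(phi, rho):
    # <<phi|rho>> = Tr[phi^dagger rho] = sum_ij conj(phi_ij) rho_ij,
    # i.e. the standard inner product of the flattened matrices.
    return np.vdot(to_fls(phi), to_fls(rho))

phi = np.array([[1.0, 2j], [0.0, 1.0]], dtype=complex)
rho = np.array([[0.5, 0.1], [0.1, 0.5]], dtype=complex)
```

The identity $\sum_{ij}\phi_{ij}^*\rho_{ij}=\textrm{Tr}\cor{\phi^\dagger\rho}$ is what makes this flattening an isometry between the two pictures.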
\vspace{0.5cm}
\begin{center}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 5. Time evolution of a two-level system in the Fock-Liouville space.}
\vspace{0.25cm}
The density matrix of our system (\ref{eq:denmat}) can be expressed in the FLS as
\begin{equation}
\kket{\rho}=
\begin{pmatrix}
\rho_{00} \\
\rho_{01} \\
\rho_{10} \\
\rho_{11}
\end{pmatrix}.
\end{equation}
\vspace{0.25cm}
The time evolution of a mixed state is given by the von Neumann equation (\ref{eq:vne}). The Liouvillian superoperator can now be expressed as a matrix
\begin{equation}
\tilde{{\cal L}}=
\left(
\begin{array}{cccc}
0 & i \Omega & -i\Omega & 0 \\
i\Omega & i E &0 & -i\Omega \\
-i\Omega & 0 & -iE & i\Omega \\
0 & -i\Omega & i\Omega & 0
\end{array}
\right),
\end{equation}
where each row is calculated just by collecting the output of the operation $-i \cor{H,\rho}$ in the computational basis of the density-matrix space. The time evolution of the system now corresponds to the matrix equation $\frac{d \kket{\rho}}{dt}=\tilde{{\cal L}} \kket{\rho}$, which in matrix notation reads
\begin{equation}
\begin{pmatrix}
\dot{\rho}_{00} \\
\dot{\rho}_{01} \\
\dot{\rho}_{10} \\
\dot{\rho}_{11}
\end{pmatrix}
=
\left(
\begin{array}{cccc}
0 & i \Omega & -i\Omega & 0 \\
i\Omega & i E &0 & -i\Omega \\
-i\Omega & 0 & -i E & i\Omega \\
0 & -i\Omega & i\Omega & 0
\end{array}
\right)
\begin{pmatrix}
\rho_{00} \\
\rho_{01} \\
\rho_{10} \\
\rho_{11}
\end{pmatrix}
\end{equation}
\vspace{0.25cm}
\label{minipage4}
\end{minipage}
}
\end{center}
\vspace{0.5cm}
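The matrix $\tilde{{\cal L}}$ of Box 5 can also be generated automatically rather than entry by entry. A minimal sketch (illustrative, not from the text): with row-major vectorisation, $\mathrm{vec}(A\rho B)=(A\otimes B^{T})\,\mathrm{vec}(\rho)$, so the von Neumann Liouvillian is $-i\pare{H\otimes\mathbb 1 - \mathbb 1\otimes H^{T}}$, which reproduces the $4\times4$ matrix above.

```python
import numpy as np

E, Omega = 1.0, 1.0
H = np.array([[0.0, Omega],
              [Omega, E]], dtype=complex)
I2 = np.eye(2)

# Row-major vectorisation rho -> (rho_00, rho_01, rho_10, rho_11):
# vec(A rho B) = (A kron B^T) vec(rho), hence for -i[H, rho]:
L = -1j * (np.kron(H, I2) - np.kron(I2, H.T))

# The matrix written out in Box 5, for comparison.
L_box = np.array([[0,           1j * Omega, -1j * Omega,  0],
                  [1j * Omega,  1j * E,      0,          -1j * Omega],
                  [-1j * Omega, 0,          -1j * E,      1j * Omega],
                  [0,          -1j * Omega,  1j * Omega,  0]])
```

The same Kronecker-product construction works for any dimension, which is why it is the standard way of building Liouvillians numerically.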
\newpage
\section{CPT-maps and the Lindblad Master Equation.}
\label{cpt}
\subsection{Completely positive maps}
The problem we want to study is to find the most general set of Markovian transformations between density matrices. Until now, we have seen that quantum systems can evolve in two ways: by a coherent evolution given by the Schr\"odinger equation (Postulate \ref{post3}), and by collapsing after a measurement (Postulate \ref{post4}). Many efforts have been made to unify these two ways of evolving \cite{schlosshauer_07}, without a definite answer so far. It is reasonable to ask what is the most general transformation that can be performed on a quantum system, and what is the dynamical equation that describes it.
We are looking for maps that transform density matrices into density matrices. We define $\rho({\cal H})$ as the space of all density matrices in the Hilbert space ${\cal H}$. Therefore, we are looking for a map of this space onto itself, ${\cal V}:\rho({\cal H})\to\rho({\cal H})$. To ensure that the output of the map is a density matrix, it should fulfil the following properties
\begin{itemize}
\item Trace preserving: $\textrm{Tr}\cor{{\cal V} A}=\textrm{Tr}\cor{A},$ $\forall A\in B({\cal H})$.
\item Completely positive (see below).
\end{itemize}
Any map that fulfils these two properties is called a {\it completely positive and trace-preserving map (CPT-map)}. The first property is quite apparent, and it does not require more thinking. The second one is a little more complicated, and it requires an intermediate definition.
\vspace{0.5cm}
\begin{definition}
A map ${\cal V}$ is positive iff $\forall A\in B({\cal H})$ s.t. $A \ge 0 \Rightarrow {\cal V} A \ge 0$.
\end{definition}
\vspace{0.5cm}
\noindent
This definition is based on the idea that, as density matrices are positive, any physical map should transform positive matrices into positive matrices. One could naively think that this condition is sufficient to guarantee the physical validity of a map. It is not. As we know, there exist composite systems, and our density matrix could be the partial trace of a more complicated state. Because of that, we need to impose a more general condition.
\vspace{0.5cm}
\begin{definition}
A map ${\cal V}$ is completely positive iff $\forall n\in \mathbb{N}$, ${\cal V}\otimes \mathbb 1_n$ is positive.
\end{definition}
\vspace{0.5cm}
\noindent
To prove that not all positive maps are completely positive, we need a counterexample. A canonical example of an operation that is positive but fails to be completely positive is the matrix transposition. If we have a Bell state in the form $\ket{\psi_B}=\frac{1}{\sqrt{2}} \pare{\ket{01}+\ket{10}}$ its density matrix can be expressed as
\begin{equation}
\rho_B=\frac{1}{2} \pare{\op{0}{0} \otimes \op{1}{1} + \op{1}{1} \otimes \op{0}{0} + \op{0}{1} \otimes\op{1}{0} + \op{1}{0} \otimes\op{0}{1}},
\end{equation}
with a matrix representation
\begin{eqnarray}
\rho_B= \frac{1}{2} \left\{
\left(
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 0 \\
0 & 1 \\
\end{array}
\right)
+
\left(
\begin{array}{cc}
0 & 0 \\
0 & 1 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right)
\right.
\nonumber\\
\left.
+
\left(
\begin{array}{cc}
0 & 0 \\
1 & 0 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}
\right)
+
\left(
\begin{array}{cc}
0 & 1 \\
0 & 0\\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 0 \\
1 & 0 \\
\end{array}
\right)
\right\}.
\end{eqnarray}
A little algebra shows that the full form of this matrix is
\begin{equation}
\rho_B=\frac{1}{2}\left(
\begin{array}{cccc}
0 & 0 & 0 & 0\\
0 & 1 & 1 & 0\\
0 & 1 & 1 & 0\\
0 & 0 & 0 & 0
\end{array}
\right),
\end{equation}
and it is positive.
It is easy to check that the transformation $\mathbb 1\otimes T_2 $, meaning that we transpose the matrix of the second subsystem, leads to a non-positive matrix
\begin{eqnarray}
\pare{ \mathbb 1\otimes T_2 } \rho_B= \frac{1}{2} \left\{
\left(
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 0 \\
0 & 1 \\
\end{array}
\right)
+
\left(
\begin{array}{cc}
0 & 0 \\
0 & 1 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right)
\right.
\nonumber\\
\left.
+
\left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}
\right)
+
\left(
\begin{array}{cc}
0 & 0 \\
1 & 0 \\
\end{array}
\right)
\otimes
\left(
\begin{array}{cc}
0 & 0 \\
1 & 0 \\
\end{array}
\right)
\right\}.
\end{eqnarray}
The total matrix is
\begin{equation}
\pare{ \mathbb 1\otimes T_2 } \rho_B=
\frac{1}{2}\left(
\begin{array}{cccc}
0 & 0 & 0 & 1\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
1 & 0 & 0 & 0
\end{array}
\right),
\end{equation}
with $-1/2$ as an eigenvalue. This example illustrates how the non-separability of quantum mechanics restricts the operations we can perform on a subsystem. By imposing these two conditions, we can derive a unique master equation as the generator of any possible Markovian CPT-map.
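This counterexample is easy to verify numerically. In the sketch below (an illustrative aside; `pt_2` is a helper defined here), the partial transpose of $\rho_B$ is built by swapping the indices of the second subsystem, and its spectrum indeed contains a negative eigenvalue, so $\mathbb 1\otimes T_2$ is not completely positive.

```python
import numpy as np

# Density matrix of the Bell state |psi_B> = (|01> + |10>)/sqrt(2).
psi = np.zeros(4, dtype=complex)
psi[1] = psi[2] = 1 / np.sqrt(2)
rho_B = np.outer(psi, psi.conj())

def pt_2(rho, d1=2, d2=2):
    """Partial transpose on subsystem 2: rho_(ik),(jl) -> rho_(il),(jk)."""
    r = rho.reshape(d1, d2, d1, d2)
    return r.transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)

# rho_B itself is positive, but its partial transpose is not.
eigs = np.linalg.eigvalsh(pt_2(rho_B))
```

This negativity under partial transposition is also the basis of the Peres-Horodecki entanglement criterion.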
\subsection{Derivation of the Lindblad Equation from microscopic dynamics}
The most common derivation of the Lindblad master equation is based on Open Quantum Theory. The Lindblad equation is then an effective motion equation for a subsystem that belongs to a more complicated system. This derivation can be found in several textbooks like Breuer and Petruccione's \cite{breuer_02} as well as Gardiner and Zoller's \cite{gardiner_00}. Here, we follow the derivation presented in Ref. \cite{manzano:av18}. Our initial point is displayed in Figure \ref{fig:fig01}. A total system belonging to a Hilbert space ${\cal H}_T$ is divided into our system of interest, belonging to a Hilbert space ${\cal H}$, and the environment living in ${\cal H}_E$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.2]{figure01}
\end{center}
\caption{A total system (belonging to a Hilbert space ${\cal H}_T$, with states described by density matrices $\rho_T$, and with dynamics determined by a Hamiltonian $H_T$) divided into the system of interest, `System', and the environment. }
\label{fig:fig01}
\end{figure}
The evolution of the total system is given by the von Neumann equation (\ref{eq:vne}).
\begin{equation}
\dot{\rho}_T(t)=-i \left[ H_T,\rho_T(t) \right].
\end{equation}
As we are interested in the dynamics of the system without the environment, we trace over the environment degrees of freedom to obtain the reduced density matrix of the system, $\rho(t)=\textrm{Tr}_E[\rho_T]$. To separate the effect of the total Hamiltonian on the system and the environment, we divide it in the form $H_T=H \otimes \mathbb 1_E + \mathbb 1 \otimes H_E + \alpha H_I $, with $H\in{\cal B}({\cal H})$, $H_E\in {\cal B}({\cal H}_E)$, and $H_I\in {\cal B}({\cal H}_T)$, where $\alpha$ is a measure of the strength of the system-environment interaction. Therefore, we have a part acting on the system, a part acting on the environment, and the interaction term. Without losing any generality, the interaction term can be decomposed in the following way
\begin{equation}
H_I = \sum_i S_i \otimes E_i,
\label{eq:hint}
\end{equation}
with $S_i\in B({\cal H})$ and $E_i\in B({\cal H}_E)$\footnote{From now on we will not write the identity operators of the Hamiltonian parts explicitly when they can be inferred from the context.}.
To better describe the dynamics of the system, it is useful to work in the interaction picture (see Ref. \cite{galindo_pascual_90} for a detailed explanation about Schr\"odinger, Heisenberg, and interaction pictures). In the interaction picture, density matrices evolve with time due to the interaction Hamiltonian, while operators evolve with the system and environment Hamiltonian. An arbitrary operator $O\in {\cal B}({\cal H}_T)$ is represented in this picture by the time-dependent operator $\hat{O}(t)$, and its time evolution is
\begin{equation}
\hat{O}(t) = e^{i(H+H_E)t} \,O\, e^{-i(H+H_E)t}.
\end{equation}
The time evolution of the total density matrix is given in this picture by
\begin{equation}
\frac{d \hat{\rho}_T(t)}{dt} = -i \alpha \cor{\hat{H}_I(t),\hat{\rho}_T(t)}.
\label{eq:vnint}
\end{equation}
This equation can be easily integrated to give
\begin{equation}
\hat{\rho}_T(t) = \hat{\rho}_T(0) -i \alpha \int_0^t ds \cor{\hat{H}_I(s),\hat{\rho}_T(s)}.
\label{eq:integint}
\end{equation}
This formula gives the exact solution, but it still involves calculating an integral in the total Hilbert space. It is also troublesome that the state $\hat{\rho}_T(t)$ depends on the integration of the density matrix over all previous times. To avoid that, we can insert Eq. (\ref{eq:integint}) into Eq. (\ref{eq:vnint}), giving
\begin{equation}
\frac{d \hat{\rho}_T(t)}{dt} = -i \alpha \cor{\hat{H}_I(t),\hat{\rho}_T(0)} -\alpha^2 \int_0^t ds \cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}_T(s)} }.
\end{equation}
By applying this method one more time we obtain
\begin{equation}
\frac{d \hat{\rho}_T(t)}{dt} = -i \alpha \cor{\hat{H}_I(t),\hat{\rho}_T(0)} -\alpha^2 \int_0^t ds \cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}_T(t)} } + O(\alpha^3).
\label{eq:orderthree}
\end{equation}
After this substitution, the integration of the previous states of the system is included only in the terms that are $O(\alpha^3)$ or higher. At this point, we perform our first approximation: we consider that the strength of the interaction between the system and the environment is small, so we can neglect the high-order terms in Eq. (\ref{eq:orderthree}). Under this approximation we have
\begin{equation}
\frac{d \hat{\rho}_T(t)}{dt} = -i \alpha \cor{\hat{H}_I(t),\hat{\rho}_T(0)} -\alpha^2 \int_0^t ds \cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}_T(t)} }.
\label{eq:exp}
\end{equation}
We are interested in finding an equation of motion for $\rho$, so we trace over the environment degrees of freedom
\begin{equation}
\frac{d \hat{\rho}(t)}{dt}= \textrm{Tr}_E \left[\frac{d \hat{\rho}_T(t)}{dt} \right] = -i \alpha \textrm{Tr}_E\cor{\hat{H}_I(t),\hat{\rho}_T(0)} -\alpha^2 \int_0^t ds \textrm{Tr}_E\cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}_T(t)} }.
\label{eq:exp2}
\end{equation}
This is not a closed time-evolution equation for $\hat{\rho}(t)$, because the time derivative still depends on the full density matrix $\hat{\rho}_T(t)$. To proceed, we need to make two more assumptions. First, we assume that at $t=0$ the system and the environment are in a separable state of the form $\rho_T(0)=\rho(0) \otimes \rho_E(0)$. This means that there are no correlations between the system and the environment. This may be the case if the system and the environment have not interacted at previous times or if the correlations between them are short-lived. Second, we assume that the initial state of the environment is thermal, meaning that it is described by a density matrix of the form $\rho_E(0)=\exp\left( -H_E/T \right)/\textrm{Tr}[\exp\left( -H_E/T \right)]$, where $T$ is the temperature and the Boltzmann constant is set to $k_B=1$. By using these assumptions, and the expansion of $H_I$ (\ref{eq:hint}), we can calculate an expression for the first term on the r.h.s.\ of Eq. (\ref{eq:exp2}).
\begin{equation}
\textrm{Tr}_E\cor{\hat{H}_I(t),\hat{\rho}_T(0)} = \sum_i \left( \hat{S}_i(t) \hat{\rho}(0) \textrm{Tr}_E \left[ \hat{E}_i(t) \hat{\rho}_E(0) \right] -
\hat{\rho}(0) \hat{S}_i(t) \textrm{Tr}_E \left[ \hat{\rho}_E(0) \hat{E}_i(t) \right] \right).
\label{eq:zero}
\end{equation}
To calculate the explicit value of this term, we may use that $\left< E_i\right>=\textrm{Tr}[E_i \rho_E(0)]=0$ for all values of $i$. This looks like a strong assumption, but it is not. If our total Hamiltonian does not fulfil it, we can always rewrite it as
$H_T=\left( H+ \alpha \sum_i \left< E_i\right> S_i\right) + H_E + \alpha H_I'$, with $H_I'= \sum_i S_i \otimes (E_i- \left< E_i\right>)$. It is clear that now $\left< E'_i \right>=0$, with $E'_i=E_i- \left< E_i\right>$, and the system Hamiltonian is changed just by the addition of an energy shift that does not affect the system dynamics. Because of that, we can assume $\left< E_i\right>=0$ for all $i$. Using the cyclic property of the trace, it is easy to prove that the term of Eq. (\ref{eq:zero}) is equal to zero, and the equation of motion (\ref{eq:exp2}) reduces to
\begin{equation}
\dot{\hat{\rho}}(t)= -\alpha^2 \int_0^t ds \textrm{Tr}_E\cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}_T(t)} }.
\label{eq:exp3}
\end{equation}
This equation still includes the entire state of the system and environment. To unravel the system from the environment, we have to make a more restrictive assumption. As we are working in the weak-coupling regime, we may suppose that the system and the environment are uncorrelated during the whole time evolution. Of course, this is only an approximation: due to the interaction Hamiltonian, some correlations between system and environment are expected to appear. On the other hand, since the coupling strength is very small ($\alpha\ll 1$), we may assume that the timescales of correlation ($\tau_\text{corr}$) and relaxation of the environment ($\tau_\text{rel}$) are much smaller than the typical system timescale ($\tau_\text{sys}$). Therefore, under this strong assumption, the environment state is always thermal and decoupled from the system state, $\hat{\rho}_T(t)=\hat{\rho}(t) \otimes \hat{\rho}_E(0)$. Eq. (\ref{eq:exp3}) then transforms into
\begin{equation}
\dot{\hat{\rho}}(t)= -\alpha^2 \int_0^t ds \textrm{Tr}_E\cor{ \hat{H}_I(t),\cor{\hat{H}_I(s),\hat{\rho}(t) \otimes \hat{\rho}_E(0)} }.
\end{equation}
The equation of motion is now independent for the system and local in time. It is still non-Markovian, as it depends on the initial state preparation of the system. We can obtain a Markovian equation by realising that the kernel of the integral decays quickly for $s\gg\tau_\text{corr}$, so we can extend the upper limit of the integration to infinity with no real change in the outcome. By doing so, and by changing the integration variable to $s\rightarrow t-s$, we obtain the famous {\it Redfield equation} \cite{redfield:IBM57}.
\begin{equation}
\dot{\hat{\rho}}(t)= -\alpha^2 \int_0^{\infty} ds \textrm{Tr}_E\cor{ \hat{H}_I(t),\cor{\hat{H}_I(t-s),\hat{\rho}(t) \otimes \hat{\rho}_E(0)} }.
\label{eq:redfield}
\end{equation}
It is known that this equation does not guarantee the positivity of the map, and it sometimes gives rise to density matrices that are non-positive. To ensure complete positivity, we need to perform one further approximation, the {\it rotating wave} approximation. To do so, we use the spectrum of the superoperator $\tilde{H}A\equiv \cor{H,A}$, $\forall A\in {\cal B}({\cal H})$. The eigenvectors of this superoperator form a complete basis of the space ${\cal B}({\cal H})$ and, therefore, we can expand the system operators from Eq. (\ref{eq:hint}) in this basis
\begin{equation}
S_i = \sum_{\omega} S_i(\omega),
\label{eq:expeigen}
\end{equation}
where the operators $S_i(\omega)$ fulfil
\begin{equation}
\cor{H,S_i(\omega)}= -\omega S_i(\omega),
\end{equation}
with $\omega$ the eigenvalues of $\tilde{H}$. Taking the Hermitian conjugate, we also have
\begin{equation}
\cor{H,S_i^{\dagger}(\omega)}= \omega S_i^{\dagger}(\omega).
\end{equation}
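For a concrete instance of this eigenoperator decomposition (an illustrative aside using the two-level system of the earlier boxes, not an example from the text): with $H=E\proj{1}$, a coupling operator $\sigma_x=\op{0}{1}+\op{1}{0}$ splits into the eigenoperators $S(\omega{=}E)=\sigma^-$ and $S^\dagger(\omega{=}E)=\sigma^+$, satisfying $\cor{H,\sigma^\mp}=\mp E\,\sigma^\mp$.

```python
import numpy as np

E = 1.0
H = np.array([[0.0, 0.0],
              [0.0, E]], dtype=complex)     # H = E |1><1|

sm = np.array([[0.0, 1.0],
               [0.0, 0.0]], dtype=complex)  # sigma^- = |0><1|
sp = sm.conj().T                            # sigma^+ = |1><0|

def comm(A, B):
    return A @ B - B @ A

# sigma^- is an eigenoperator with frequency omega = E:  [H, S(omega)] = -omega S(omega);
# sigma^+ is its Hermitian conjugate:                    [H, S^dag(omega)] = omega S^dag(omega).
S = sm + sp   # coupling operator sigma_x, decomposed as S = S(E) + S(E)^dagger
```

The eigenoperators of a multi-level Hamiltonian can be extracted in the same way, one per Bohr frequency $\omega$ of the spectrum.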
To apply this decomposition, we relate the interaction-picture system operators to the Schr\"odinger-picture ones via $\hat{S}_k(t)= e^{it H} S_k e^{-it H}$. By using the eigen-expansion (\ref{eq:expeigen}) we arrive at
\begin{equation}
\hat{H}_I(t) = \sum_{k,\omega} e^{-i\omega t} S_k(\omega) \otimes \hat{E}_k (t) = \sum_{k,\omega} e^{i\omega t} S_k^{\dagger}(\omega) \otimes \hat{E}_k^{\dagger} (t).
\end{equation}
To combine this decomposition with the Redfield equation (\ref{eq:redfield}), we may first expand the commutators.
\begin{eqnarray}
\hspace{-2cm}
\dot{\hat{\rho}}(t)= -\alpha^2 \textrm{Tr}\left[ \int_0^\infty ds\, \hat{H}_I (t) \hat{H}_I (t-s) \hat{\rho} (t) \otimes \hat{\rho}_E(0)
- \int_0^\infty ds\, \hat{H}_I (t) \hat{\rho} (t) \otimes \hat{\rho}_E(0) \hat{H}_I (t-s) \right. \nonumber \\
\hspace{-2cm}
\left. - \int_0^\infty ds\, \hat{H}_I (t-s) \hat{\rho} (t) \otimes \hat{\rho}_E(0) \hat{H}_I (t)
+ \int_0^\infty ds\, \hat{\rho} (t) \otimes \hat{\rho}_E(0) \hat{H}_I (t-s) \hat{H}_I (t)
\right].
\end{eqnarray}
We now apply the eigenvalue decomposition in terms of $S_k(\omega)$ for $\hat{H}_I(t-s)$ and in terms of $S_k^{\dagger}(\omega')$ for $\hat{H}_I(t)$. By using the cyclic property of the trace and the fact that $\cor{H_E,\rho_E(0)}=0$, and after some non-trivial algebra, we obtain
\begin{equation}
\dot{\hat{\rho}}(t) =\sum_{\substack{\omega,\omega'\\ k,l }} \left( e^{i (\omega'-\omega)t }\, \Gamma_{kl} (\omega) \cor{S_l(\omega)\hat{\rho} (t), S_k^\dagger(\omega') } +
e^{i (\omega-\omega')t } \, \Gamma_{lk}^* (\omega') \cor{S_l(\omega), \hat{\rho} (t) S_k^\dagger(\omega') } \right),
\label{eq:rwae}
\end{equation}
where the effect of the environment has been absorbed into the factors
\begin{equation}
\Gamma_{kl} (\omega) \equiv \int_0^{\infty} ds\, e^{i\omega s} \textrm{Tr}_E \left[ \hat{E}_k^\dagger (t) \hat{E}_l (t-s) \rho_E(0) \right],
\end{equation}
where we are writing the environment operators of the interaction Hamiltonian in the interaction picture ($\hat{E}_l(t)=e^{iH_Et} E_l e^{-iH_Et} $). At this point, we can already perform the rotating wave approximation. By considering the time dependency in Eq. (\ref{eq:rwae}), we conclude that the terms with $\left| \omega-\omega' \right|\gg\alpha^2$ oscillate much faster than the typical timescale of the system evolution, so they do not contribute to its dynamics. In the low-coupling regime $(\alpha\rightarrow 0)$ we can consider that only the resonant terms, $\omega=\omega'$, contribute, and remove all the others. Under this approximation, Eq. (\ref{eq:rwae}) reduces to
\begin{equation}
\dot{\hat{\rho}}(t) =\sum_{\substack{\omega\\ k,l }} \left( \Gamma_{kl} (\omega) \cor{S_l(\omega)\hat{\rho} (t), S_k^\dagger(\omega) } +
\Gamma_{lk}^* (\omega) \cor{S_l(\omega), \hat{\rho} (t) S_k^\dagger(\omega) } \right).
\end{equation}
To divide the dynamics into Hamiltonian and non-Hamiltonian parts, we now decompose the coefficients $\Gamma_{kl}$ into Hermitian and anti-Hermitian parts, $\Gamma_{kl}(\omega) = \frac{1}{2} \gamma_{kl}(\omega) + i\pi_{kl}(\omega) $, with
\begin{eqnarray}
\pi_{kl}(\omega) \equiv \frac{-i}{2} \left(\Gamma_{kl} (\omega) - \Gamma_{lk}^*(\omega) \right) \nonumber\\
\gamma_{kl}(\omega) \equiv \Gamma_{kl} (\omega) + \Gamma_{lk}^*(\omega) = \int_{-\infty}^{\infty} ds e^{i\omega s}
\textrm{Tr}\left[ \hat{E}_k^\dagger (s) E_l \hat{\rho}_E(0)\right].
\end{eqnarray}
With these definitions we can separate the Hermitian and anti-Hermitian parts of the dynamics, and we can transform back to the Schr\"odinger picture
\begin{equation}
\dot{\rho}(t) = -i \left[ H+H_{Ls} ,\rho(t) \right] +
\sum_{\substack{\omega\\ k,l }} \gamma_{kl} (\omega) \left( S_l (\omega) \rho(t) S_k^\dagger (\omega) -
\frac{1}{2} \left\{ S_k^\dagger (\omega) S_l (\omega) , \rho(t) \right\} \right).
\label{eq:me1}
\end{equation}
The Hamiltonian dynamics now is influenced by a term $H_{Ls} = \sum_{\omega,k,l} \pi_{kl} (\omega) S_k^\dagger (\omega)S_l (\omega)$. This is usually called a {\it Lamb shift} Hamiltonian and its role is to renormalize the system energy levels due to the interaction with the environment. Eq. (\ref{eq:me1}) is the first version of the Markovian Master Equation, but it is not in the Lindblad form yet.
It can be easily proved that the matrix formed by the coefficients $\gamma_{kl}(\omega)$ is positive, as they are the Fourier transform of a positive function $\left(\textrm{Tr}\left[ \hat{E}_k^\dagger (s) E_l \hat{\rho}_E(0)\right]\right)$. Therefore, this matrix can be diagonalised. This means that we can find a unitary operator, $O$, s.t.
\begin{equation}
O\gamma(\omega) O^\dagger=
\left(\begin{array}{cccc}
d_1(\omega) & 0 & \cdots & 0\\
0 & d_2(\omega) & \cdots & 0 \\
\vdots & \vdots & \ddots & 0\\
0 & 0 & \cdots & d_N(\omega)
\end{array}\right).
\end{equation}
We can now write the master equation in a diagonal form
\begin{equation}
\dot{\rho}(t) = -i \left[ H+H_{Ls} ,\rho(t) \right] +
\sum_{i,\omega} \left( L_i (\omega) \rho(t) L_i^\dagger (\omega) -
\frac{1}{2} \left\{ L_i^\dagger (\omega) L_i (\omega) , \rho(t) \right\} \right)\equiv {\cal L} \rho(t).
\label{eq:me2}
\end{equation}
This is the celebrated Lindblad (or Lindblad-Gorini-Kossakowski-Sudarshan) Master Equation. In the simplest case, there will be only one relevant frequency $\omega$, and the equation can be further simplified to
\begin{equation}
\dot{\rho}(t) = -i \left[ H+H_{Ls} ,\rho(t) \right] +
\sum_{i} \left( L_i \rho(t) L_i^\dagger -
\frac{1}{2} \left\{ L_i^\dagger L_i, \rho(t) \right\} \right)\equiv {\cal L} \rho(t).
\label{eq:me3}
\end{equation}
The operators $L_i$ are usually referred to as {\it jump operators}.
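A minimal numerical sketch of Eq. (\ref{eq:me3}) can be given for the standard textbook choice of a decaying two-level atom with a single jump operator $\sigma^-$ at rate $\gamma$ (an illustrative aside, not an example taken from the text): vectorising the Lindbladian as in Section \ref{sec:fl} and exponentiating it reproduces the expected exponential decay of the excited population, $\rho_{11}(t)=e^{-\gamma t}$.

```python
import numpy as np

gamma, E = 0.5, 1.0
H  = np.array([[0.0, 0.0], [0.0, E]], dtype=complex)     # H = E |1><1|
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # jump operator sigma^-
I2 = np.eye(2)

# Vectorised Lindbladian (row-major vec, vec(A rho B) = (A kron B^T) vec(rho)):
# -i[H, rho] + gamma (sm rho sm^dag - 1/2 {sm^dag sm, rho})
n = sm.conj().T @ sm
Lv = (-1j * (np.kron(H, I2) - np.kron(I2, H.T))
      + gamma * (np.kron(sm, sm.conj())
                 - 0.5 * (np.kron(n, I2) + np.kron(I2, n.T))))

rho0 = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # start in |1>

def evolve(t):
    """rho(t) from exp(Lv t), via eigendecomposition (Lv is not Hermitian)."""
    w, V = np.linalg.eig(Lv)
    prop = V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)
    return (prop @ rho0.reshape(-1)).reshape(2, 2)

rho_t = evolve(2.0)   # excited population should be exp(-gamma * t)
```

Unlike the purely Hamiltonian case of Box 3, the trace is still conserved but the purity is not: the dissipator drives the pure state $\ket{1}$ towards the ground state through mixed states.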
\subsection{Derivation of the Lindblad Equation as a CPT generator}
The second way of deriving the Lindblad equation comes from the following question: what is the most general (Markovian) way of mapping density matrices onto density matrices? This is usually the approach of quantum information researchers who look for general transformations of quantum systems. We analyse this problem following mainly Ref. \cite{wilde_17}.
To start, we need to know what is the form of a general CPT-map.
\vspace{0.5cm}
\begin{lemma}
Any map ${\cal V}:B\pare{{\cal H}}\to B\pare{{\cal H}}$ that can be written in the form ${\cal V}\rho=V^\dagger \rho V^{\phantom{\dagger}}$ with $V\in B\pare{{\cal H}}$ is positive.
\end{lemma}
\vspace{0.5cm}
\noindent
The proof of the lemma requires a little algebra and a known property of positive matrices.
\vspace{0.5cm}
\noindent
\textbf{Proof.}
\noindent
If $\rho\ge0$, then $\rho=A^\dagger A^{\phantom{\dagger}}$ for some $A\in B({\cal H})$. Therefore, $\bra{\psi} V^\dagger\rho V\ket{\psi} = \bra{\psi} V^\dagger A^{\dagger}A V\ket{\psi}=\norm{ AV\ket{\psi}}^2\ge 0$ for any $\ket{\psi}$. Therefore, if $\rho$ is positive, the output of the map is also positive.
\vspace{0.25 cm}
\noindent
\textbf{End of the proof.}
\vspace{0.5cm}
\noindent
This is a sufficient condition for the positivity of a map, but it is not necessary. It could happen that there are maps that cannot be written in this form, but they are still positive. To go further, we need a more general condition, and this comes in the form of the next theorem.
\vspace{0.5cm}
\begin{theorem}
Choi's Theorem.
\noindent
A linear map ${\cal V}:B({\cal H})\to B({\cal H})$ is completely positive iff it can be expressed as
\begin{equation}
{\cal V}\rho=\sum_i V_i^{\phantom{\dagger}} \rho V^{\dagger}_i
\label{eq:choimap}
\end{equation}
with $V_i\in B({\cal H})$.
\end{theorem}
\vspace{0.5cm}
\noindent
The proof of this theorem requires some algebra.
\vspace{0.5cm}
\noindent
\textbf{Proof}
\noindent
The `if' implication is a trivial consequence of the previous lemma. To prove the converse, we need to extend the dimension of our system by the use of an auxiliary system. If $d$ is the dimension of the Hilbert space of pure states, ${\cal H}$, we define a new Hilbert space of the same dimension ${\cal H}_A$.
We define a maximally entangled pure state in the bipartition ${\cal H}_A \otimes {\cal H}$ in the way
\begin{equation}
\ket{\Gamma}\equiv \sum_{i=0}^{d-1} \ket{i}_A \otimes \ket{i},
\label{eq:gamma}
\end{equation}
where $\key{\ket{i}}$ and $\key{\ket{i}_A}$ are arbitrary orthonormal bases for ${\cal H}$ and ${\cal H}_A$, respectively.
We can extend the action of our original map ${\cal V}$, that acts on ${\cal B}({\cal H})$ to our extended Hilbert space by defining the map ${\cal V}_2:{\cal B}( {\cal H}_A) \otimes {\cal B}({\cal H}) \to {\cal B}( {\cal H}_A) \otimes {\cal B}({\cal H})$ as
\begin{equation}
{\cal V}_2\equiv \mathbb 1_{{\cal B}( {\cal H}_A)} \otimes {\cal V}.
\end{equation}
Note that the idea behind this map is to leave the auxiliary subsystem invariant while applying the original map to the original system. This map is positive because ${\cal V}$ is completely positive. This may appear trivial but, as explained before, complete positivity is a more restrictive property than positivity, and we are looking for a condition that ensures complete positivity.
We can now apply the extended map to the density matrix corresponding to the maximally entangled state (\ref{eq:gamma}), obtaining
\begin{equation}
{\cal V}_2 \proj{\Gamma} = \sum_{i,j=0}^{d-1} \op{i}{j} \otimes {\cal V} \op{i}{j}.
\label{eq:map2}
\end{equation}
Now we can use the maximal entanglement of the state $\ket{\Gamma}$ to relate the original map ${\cal V}$ and the action ${\cal V}_2 \proj{\Gamma}$ by taking the matrix elements with respect to ${\cal H}_A$.
\begin{equation}
{\cal V}\op{i}{j} = \bra{i}_A \pare{ {\cal V}_2\op{\Gamma}{\Gamma} }\ket{j}_A.
\label{eq:elementsv}
\end{equation}
To relate this operation to the action of the map to an arbitrary vector $\ket{\psi}\in {\cal H}_A \otimes {\cal H}$, we can expand it in this basis as
\begin{equation}
\ket{\psi} = \sum_{i=0}^{d-1} \sum_{j=0}^{d-1} \alpha_{ij} \ket{i}_A \otimes \ket{j}.
\end{equation}
We can also define an operator $V_{\ket{\psi}} \in {\cal B} \pare{{\cal H}}$ s.t. it transforms $\ket{\Gamma}$ into $\ket{\psi}$. Its explicit action would be written as
\begin{eqnarray}
\hspace{-2cm}
\pare{\mathbb 1_A \otimes V_{\ket{\psi}}} \ket{\Gamma}&=&\sum_{i,j=0}^{d-1} \alpha_{ij} \pare{\mathbb 1_A \otimes \op{j}{i} } \pare{ \sum_{k=0}^{d-1} \ket{k}_A \otimes \ket{k} }
= \sum_{i,j,k=0}^{d-1} \alpha_{ij} \pare{\ket{k}_A \otimes \ket{j}} \bracket{i}{k} \nonumber\\
&=& \sum_{i,j,k=0}^{d-1} \alpha_{ij} \pare{\ket{k}_A \otimes \ket{j}} \delta_{i,k}
= \sum_{i,j=0}^{d-1} \alpha_{ij} \ket{i}_A\otimes \ket{j} = \ket{\psi}.
\label{eq:opsi}
\end{eqnarray}
At this point, we have related the vectors in the extended space ${\cal H}_A \otimes {\cal H}$ to operators acting on ${\cal H}$. This can only be done because the vector $\ket{\Gamma}$ is maximally entangled. We now go back to our extended map ${\cal V}_2$. Its action on $\proj{\Gamma}$ is given by Eq. (\ref{eq:map2}), and as this is a positive operator it can be expanded as
\begin{equation}
{\cal V}_2\pare{\proj{\Gamma}} =\sum_{l=0}^{d^2-1}\proj{v_l},
\label{eq:exp}
\end{equation}
with $\ket{v_l}\in{\cal H}_A\otimes {\cal H}$. The vectors $\ket{v_l}$ can be related to operators in ${\cal H}$ as in Eq. (\ref{eq:opsi}).
\begin{equation}
\ket{v_l}=\pare{\mathbb 1_A\otimes\ V_l}\ket{\Gamma}.
\end{equation}
Based on this result we can calculate the product of an arbitrary vector $\ket{i}_A\in{\cal H}_A$ with $\ket{v_l}$.
\begin{equation}
\bra{i}_A \ket{v_l}=\bra{i}_A \pare{\mathbb 1_A \otimes V_l} \ket{\Gamma}= \sum_{k=0}^{d-1} \bracket{i}{k}_A\, V_l \ket{k} = V_l \ket{i}.
\end{equation}
This is the last ingredient we need for the proof.
We come back to the original question: we want to characterise the map ${\cal V}$. We do so by applying it to an arbitrary basis element $\op{i}{j}$ of ${\cal B}\pare{{\cal H}}$.
\begin{eqnarray}
\hspace{-2.cm}
{\cal V}\pare{\op{i}{j}} = \pare{ \bra{i}_A \otimes \mathbb 1} {\cal V}_2\pare{\proj{\Gamma}} \pare{\ket{j}_A\otimes\mathbb 1}
= \pare{\bra{i}_A \otimes \mathbb 1} \cor{ \sum_{l=0}^{d^2-1} \proj{v_l} } \pare{ \ket{j}_A\otimes\mathbb 1 } \nonumber\\
= \sum_{l=0}^{d^2-1} \cor{\pare{\bra{i}_A \otimes \mathbb 1}\ket{v_l}} \cor{\bra{v_l}\pare{\ket{j}_A \otimes \mathbb 1}}
= \sum_{l=0}^{d^2-1} V_l^{\phantom{\dagger}}\op{i}{j} V_l^{\dagger}.
\end{eqnarray}
As $\op{i}{j}$ is an arbitrary element of a basis, any operator can be expanded in this basis. Therefore, it is straightforward to prove that
\begin{equation}
{\cal V} \rho=\sum_{l=0}^{d^2-1} V^{\phantom{\dagger}}_l\rho V_l^{\dagger}.
\nonumber
\end{equation}
\vspace{0.25 cm}
\noindent
\textbf{End of the proof.}
\vspace{0.5cm}
\noindent
Thanks to Choi's Theorem, we know the general form of CP-maps, but there is still an issue to address. As density matrices should have trace one, we need to require any physical map to also be trace-preserving. This requirement gives us a new constraint that completely defines all CPT-maps. It comes from the following theorem.
\vspace{0.5cm}
\begin{theorem}
Choi-Kraus' Theorem.
\noindent
A linear map ${\cal V}:B({\cal H})\to B({\cal H})$ is completely positive and trace-preserving iff it can be expressed as
\begin{equation}
{\cal V}\rho=\sum_l V_l^{\phantom{\dagger}} \rho V^{\dagger}_l
\label{eq:choi2}
\end{equation}
with $V_l\in B({\cal H})$ fulfilling
\begin{equation}
\sum_l V_l^{\dagger} V^{\phantom{\dagger}}_l=\mathbb 1_{{\cal H}}.
\label{eq:krauss}
\end{equation}
\end{theorem}
\vspace{0.5cm}
\noindent
\textbf{Proof.}
\noindent
We have already proved that such a map is completely positive; we only need to prove that it is also trace-preserving and that all trace-preserving maps of this form fulfil Eq. (\ref{eq:krauss}). The `if' proof is straightforward, using the cyclic permutation and linearity properties of the trace.
\begin{equation}
\textrm{Tr}\cor{{\cal V}\rho}=\textrm{Tr} \cor{ \sum_{l=1}^{d^2-1} V^{\phantom{\dagger}}_l\rho V_l^\dagger } = \textrm{Tr} \cor{ \pare{\sum_{l=1}^{d^2-1} V_l^\dagger V^{\phantom{\dagger}}_l }\rho } =\textrm{Tr}\cor{\rho}.
\end{equation}
We also have to prove that a map of the form (\ref{eq:choi2}) is trace-preserving only if the operators $V_l$ fulfil (\ref{eq:krauss}). We start by noting that if the map is trace-preserving, applying it to an arbitrary element of a basis of ${\cal B}\pare{{\cal H}}$ should give
\begin{equation}
\textrm{Tr}\cor{{\cal V} \pare{\op{i}{j}}}=\textrm{Tr} \cor{\op{i}{j}}=\delta_{i,j}.
\end{equation}
As the map has a form given by (\ref{eq:choi2}) we can calculate this same trace in an alternative way.
\begin{eqnarray}
\textrm{Tr}\cor{{\cal V} \pare{\op{i}{j}}} &=& \textrm{Tr} \cor{ \sum_{l=1}^{d^2-1} V^{\phantom{\dagger}}_l \op{i}{j} V_l^\dagger }
= \textrm{Tr} \cor{ \sum_{l=1}^{d^2-1} V_l^\dagger V^{\phantom{\dagger}}_l \op{i}{j} } \nonumber \\
&= &\sum_{k} \bra{k} \pare{ \sum_{l=1}^{d^2-1} V_l^\dagger V^{\phantom{\dagger}}_l \op{i}{j} } \ket{k}
= \bra{j} \pare{\sum_{l=1}^{d^2-1} V_l^\dagger V^{\phantom{\dagger}}_l }\ket{i},
\end{eqnarray}
where $\left\{ \ket{k} \right\}$ is an arbitrary basis of ${\cal H}$.
As both expressions must agree, we obtain
\begin{equation}
\bra{j} \pare{\sum_{l=1}^{d^2-1} V^{\dagger}_l V^{\phantom{\dagger}}_l }\ket{i} = \delta_{i,j},
\end{equation}
and therefore, the condition (\ref{eq:krauss}) should be fulfilled.
\vspace{0.25 cm}
\noindent
\textbf{End of the proof.}
\vspace{0.5cm}
\noindent
Operators $V_l$ of a map fulfilling condition (\ref{eq:krauss}) are called {\it Kraus operators}. Because of that, CPT-maps are sometimes also called {\it Kraus maps}, especially when they are presented as a collection of Kraus operators. Both concepts are ubiquitous in quantum information science. Kraus operators can also be time-dependent, as long as they fulfil relation (\ref{eq:krauss}) at all times.
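As an illustration (our own, not from the text above), the amplitude-damping channel is a standard two-level example of a Kraus map. The sketch below checks the completeness relation and applies the channel to the excited state; it assumes the convention ${\cal V}\rho=\sum_l V_l^{\phantom{\dagger}}\rho V_l^\dagger$ with $\sum_l V_l^\dagger V_l^{\phantom{\dagger}}=\mathbb 1$:

```python
import numpy as np

# Kraus operators of the amplitude-damping channel (decay probability p):
# V0 leaves the state (damping the excited amplitude), V1 performs the jump.
p = 0.3
V0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
V1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
kraus = [V0, V1]

# Trace preservation: sum_l V_l^dagger V_l = identity
completeness = sum(V.conj().T @ V for V in kraus)
print(np.allclose(completeness, np.eye(2)))  # True

# Apply the channel to the excited state |1><1|
rho = np.array([[0, 0], [0, 1]], dtype=complex)
out = sum(V @ rho @ V.conj().T for V in kraus)
print(out.real)  # a fraction p of the population has moved to the ground state
```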
At this point, we know the form of CPT-maps, but we do not yet have a master equation, that is, a set of differential equations for the continuous time evolution. This means that we know how to perform an arbitrary operation on a system, but we do not have an equation describing its time evolution. To obtain one, we need to find a time-independent generator ${\cal L}$ such that
\begin{equation}
\frac{d}{dt} \rho\pare{t}= {\cal L}\rho(t),
\label{eq:difeq}
\end{equation}
and therefore our CPT-map can be expressed as ${\cal V}(t)=e^{{\cal L} t}$. The following calculation finds the explicit expression of ${\cal L}$. We start by choosing an orthonormal basis of the bounded space of operators ${\cal B}({\cal H})$, $\key{F_i}_{i=1}^{d^2}$. To be orthonormal, it should satisfy the condition
\begin{equation}
\bbracket{F_i}{F_j}\equiv \textrm{Tr}\cor{F_i^\dagger F_j}=\delta_{i,j}.
\label{eq:orthonormal}
\end{equation}
Without any loss of generality, we select one of the elements of the basis to be proportional to the identity, $F_{d^2}=\frac{1}{\sqrt{d}} \mathbb 1_{{\cal H}}$. It is trivial to prove that the norm of this element is one, and it is easy to see from Eq. (\ref{eq:orthonormal}) that all the other elements of the basis should have trace zero.
\begin{equation}
\textrm{Tr}\cor{F_i}=0 \qquad \forall i=1,\dots,d^2-1.
\end{equation}
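For $d=2$ an explicit basis of this kind is given by the normalised Pauli matrices together with $F_{d^2}=\mathbb 1/\sqrt{2}$. The short check below (our own illustration) verifies Eq. (\ref{eq:orthonormal}) and the tracelessness of the non-identity elements:

```python
import numpy as np

# Orthonormal operator basis for d = 2 under <A|B> = Tr[A^dagger B]:
# normalised Pauli matrices plus F_4 = identity / sqrt(d).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [s / np.sqrt(2) for s in (sx, sy, sz)] + [np.eye(2) / np.sqrt(2)]

# Gram matrix of the basis: should be the 4x4 identity
gram = np.array([[np.trace(Fi.conj().T @ Fj) for Fj in basis] for Fi in basis])
print(np.allclose(gram, np.eye(4)))        # True: orthonormality
print([np.trace(F) for F in basis[:-1]])   # non-identity elements are traceless
```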
The closure relation of this basis is $\mathbb 1_{{\cal B}({\cal H})}=\sum_i \bproj{F_i}{F_i}$. Therefore, the Kraus operators can be expanded in this basis by using the Fock-Liouville notation
\begin{equation}
V_l(t)= \sum_{i=1}^{d^2} \bbracket{F_i}{V_l(t)} \kket{F_i}.
\label{eq:kraussexpansion}
\end{equation}
As the map ${\cal V}(t)$ is of the form (\ref{eq:choimap}), we can apply (\ref{eq:kraussexpansion}) to obtain\footnote{For simplicity, in this discussion we omit the explicit time-dependence of the density matrix.}
\begin{equation}
\hspace{-2cm}
{\cal V}(t) \rho = \sum_l\cor{\sum_{i=1}^{d^2} \bbracket{F_i}{V_l(t)} F_i \;\rho \sum_{j=1}^{d^2} F_j^\dagger \bbracket{V_l(t)}{F_j}}
= \sum_{i,j=1}^{d^2} c_{i,j}(t) F_i^{\phantom{\dagger}}\rho F_j^{\dagger},
\end{equation}
where we have absorbed the summation over the Kraus operators into the coefficients $c_{i,j}(t)= \sum_l \bbracket{F_i}{V_l} \bbracket{V_l}{F_j}$. We now go back to the original problem by applying this expansion to the time derivative in Eq. (\ref{eq:difeq})
\begin{eqnarray}
\frac{d \rho }{dt} &=&\lim_{\Delta t\to0} \frac{1}{\Delta t} \pare{{\cal V}(\Delta t)\rho-\rho}
= \lim_{\Delta t\to 0} \frac{1}{\Delta t} \pare{\sum_{i,j=1}^{d^2} c_{i,j}(\Delta t) F_i^{\phantom{\dagger}} \rho F_j^\dagger - \rho} \nonumber\\
&=& \lim_{\Delta t\to 0} \frac{1}{\Delta t} \left( \sum_{i,j=1}^{d^2-1} c_{i,j}(\Delta t) F_i^{\phantom{\dagger}}\rho F_j^\dagger + \sum_{i=1}^{d^2-1} c_{i,d^2}(\Delta t) F_i^{\phantom{\dagger}}\rho F_{d^2}^\dagger \right. \nonumber\\
&&\left. + \sum_{j=1}^{d^2-1} c_{d^2,j} (\Delta t) F_{d^2}^{\phantom{\dagger}} \rho F_j^\dagger + c_{d^2,d^2}(\Delta t) F_{d^2}^{\phantom{\dagger}} \rho F_{d^2}^\dagger - \rho \right),
\end{eqnarray}
where we have separated the summations to take into account that $F_{d^2}=\frac{1}{\sqrt{d}}\mathbb 1_{{\cal H}}$. Using this property, the equation simplifies to
\begin{eqnarray}
\frac{d \rho}{dt} &=&\lim_{\Delta t\to0} \frac{1}{\Delta t} \left( \sum_{i,j=1}^{d^2-1} c_{i,j}(\Delta t) F_i^{\phantom{\dagger}} \rho F_j^\dagger + \frac{1}{\sqrt{d}} \sum_{i=1}^{d^2-1}
c_{i,d^2}(\Delta t) F_i^{\phantom{\dagger}}\rho \right. \nonumber\\
&+&\left. \frac{1}{\sqrt{d}} \sum_{j=1}^{d^2-1} c_{d^2,j} (\Delta t) \rho F_j^\dagger + \frac{1}{d} c_{d^2,d^2}(\Delta t) \rho-\rho \right).
\label{eq:derivative2}
\end{eqnarray}
The next step is to eliminate the explicit time dependence. To do so, we define new constants that absorb the time intervals.
\begin{eqnarray}
g_{i,j}&\equiv& \lim_{\Delta t\to 0} \frac{c_{i,j} (\Delta t) }{\Delta t} \qquad (i,j<d^2), \nonumber\\
g_{i,d^2}&\equiv& \lim_{\Delta t \to 0} \frac{c_{i,d^2} (\Delta t) }{\Delta t} \qquad (i<d^2), \nonumber\\
g_{d^2,j}&\equiv& \lim_{\Delta t \to 0} \frac{c_{d^2,j} (\Delta t) }{\Delta t} \qquad (j<d^2), \\
g_{d^2,d^2}&\equiv& \lim_{\Delta t\to 0} \frac{c_{d^2,d^2}(\Delta t)-d}{\Delta t}. \nonumber
\end{eqnarray}
Introducing these coefficients into Eq.~(\ref{eq:derivative2}), we obtain an equation with no explicit time dependence.
\begin{eqnarray}
\frac{d \rho}{dt} = \sum_{i,j=1}^{d^2-1} g_{i,j} F_i \rho F_j^\dagger + \frac{1}{\sqrt{d}} \sum_{i=1}^{d^2-1} g_{i,d^2} F_i \rho + \frac{1}{\sqrt{d}} \sum_{j=1}^{d^2-1} g_{d^2,j} \rho F_j^\dagger + \frac{g_{d^2,d^2}}{d} \rho.\nonumber \\
\label{eq:derivative3}
\end{eqnarray}
It is convenient to absorb one of the remaining sums into a new operator
\begin{equation}
F\equiv \frac{1}{\sqrt{d}} \sum_{i=1}^{d^2-1} g_{i,d^2} F_i.
\end{equation}
Applying it to Eq. (\ref{eq:derivative3}), we obtain
\begin{equation}
\frac{d \rho}{dt} = \sum_{i,j=1}^{d^2-1} g_{i,j} F_i \rho F_j^\dagger + F \rho + \rho F^\dagger + \frac{g_{d^2,d^2}}{d} \rho.
\label{eq:derivative4}
\end{equation}
At this point, we want to separate the dynamics of the density matrix into a Hermitian part (equivalent to the von Neumann equation) and an incoherent part. We split the operator $F$ into a Hermitian and an anti-Hermitian part.
\begin{equation}
F=\frac{F+F^\dagger}{2} + i\frac{F-F^\dagger}{2i} \equiv G-iH,
\end{equation}
where we have used the notation $H$ for the Hermitian part for obvious reasons. If we take this definition to Eq. (\ref{eq:derivative4}) we obtain
\begin{equation}
\frac{d \rho}{dt} = \sum_{i,j=1}^{d^2-1} g_{i,j} F_i^{\phantom{\dagger}} \rho F_j^\dagger + \key{G,\rho} - i \cor{H,\rho} + \frac{g_{d^2,d^2}}{d} \rho.
\end{equation}
We now define the last operator for this proof, $G_2\equiv G+\frac{g_{d^2,d^2}}{2d}\mathbb 1$, and the expression of the time derivative becomes
\begin{equation}
\frac{d \rho}{dt} = \sum_{i,j=1}^{d^2-1} g_{i,j} F_i \rho F_j^\dagger + \key{G_2,\rho} -i\cor{H,\rho}.
\end{equation}
Until now, we have imposed the complete positivity of the map, as we have required it to be written in terms of Kraus operators, but we have not used the trace-preserving property. We now impose this property and, by using the cyclic property of the trace, obtain a new condition
\begin{equation}
\textrm{Tr}\cor{\frac{d \rho}{dt}}=\textrm{Tr}\cor{ \sum_{i,j=1}^{d^2-1} g_{i,j} F_j^\dagger F_i^{\phantom{\dagger}} \rho + 2 G_2 \rho }=0.
\end{equation}
Therefore, $G_2$ should fulfil
\begin{equation}
G_2=-\frac{1}{2} \sum_{i,j=1}^{d^2-1} g_{i,j} F_j^\dagger F_i^{\phantom{\dagger}}.
\end{equation}
By applying this condition, we arrive at the Lindblad master equation
\begin{equation}
\frac{d \rho}{dt}= -i\cor{H,\rho} + \sum_{i,j=1}^{d^2-1} g_{i,j} \pare{ F_i^{\phantom{\dagger}}\rho F_j^\dagger - \frac{1}{2} \key{F_j^\dagger F_i^{\phantom{\dagger}},\rho }}.
\end{equation}
Finally, by definition the coefficients $g_{i,j}$ can be arranged to form a Hermitian, and therefore diagonalisable, matrix. By diagonalising it, we obtain the diagonal form of the Lindblad master equation.
\begin{equation}
\frac{d}{dt} \rho= -i \cor{H,\rho} + \sum_{k} \Gamma_k \pare{ L_k^{\phantom{\dagger}} \rho L_k^\dagger - \frac{1}{2} \key{L_k^\dagger L_k^{\phantom{\dagger}},\rho}} \equiv {\cal L} \rho.
\label{eq:lindblad}
\end{equation}
\subsection{Properties of the Lindblad Master Equation}
Some interesting properties of the Lindblad equation are:
\begin{itemize}
\item Under a Lindblad dynamics, if all the jump operators are Hermitian, the purity of a system fulfils $\frac{d}{dt}\pare{ \textrm{Tr} \cor{\rho^2 }} \le 0$. The proof is given in \ref{sec:purity}.
\item The Lindblad Master Equation is invariant under unitary transformations of the jump operators
\begin{equation}
\sqrt{\Gamma_i} L_i\to \sqrt{\Gamma'_i} L_i'= \sum_j v_{ij} \sqrt{\Gamma_j} L_j,
\end{equation}
with $v$ representing a unitary matrix. It is also invariant under inhomogeneous transformations in the form
\begin{eqnarray}
L_i &\to& L'_i= L_i + a_i \nonumber\\
H&\to& H'=H+\frac{1}{2i} \sum_j \Gamma_j \pare{a_j^* L_j - a_j L_j^\dagger }+ b,
\end{eqnarray}
where $a_i \in \mathbb{C}$ and $b \in \mathbb{R}$. The proof of this can be found in Ref. \cite{breuer_02} (Section 3).
\item Thanks to the previous properties it is possible to find traceless jump operators without loss of generality.
\end{itemize}
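The first property can be checked numerically. The sketch below (our own illustration) uses pure dephasing, with the Hermitian jump operator $L=\sigma_z$ and the coherent part switched off so that a plain Euler integrator does not introduce spurious effects; the purity is monitored at every step:

```python
import numpy as np

# Purity check for a Hermitian jump operator (pure dephasing, L = sigma_z).
# sigma_z is Hermitian and squares to the identity, so {L^dagger L, rho}/2 = rho.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Gamma = 0.5

def rhs(rho):
    return Gamma * (sz @ rho @ sz - rho)

rho = np.array([[0.7, 0.3 - 0.1j], [0.3 + 0.1j, 0.3]])  # a mixed initial state
dt, purities = 0.01, []
for _ in range(500):                  # simple Euler integration (illustrative only)
    purities.append(np.trace(rho @ rho).real)
    rho = rho + dt * rhs(rho)

# The purity Tr[rho^2] never increases along the trajectory
print(all(p2 <= p1 + 1e-12 for p1, p2 in zip(purities, purities[1:])))  # True
```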
\newpage
\vspace{0.5cm}
\begin{center}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 6. A master equation for a two-level system with decay.}
\vspace{0.25cm}
Continuing our example of a two-level atom, we can make it more realistic by including the possibility of atom decay by the emission of a photon. This emission happens due to the interaction of the atom with the surrounding vacuum state\footnote{This is why atoms decay.}. The complete quantum system would in this case be the `atom+vacuum' system, and its time evolution should be given by the von Neumann equation (\ref{eq:vne}), where $H$ represents the total `atom+vacuum' Hamiltonian. This system belongs to an infinite-dimensional Hilbert space, as the radiation field has infinitely many modes. If we are interested only in the time-dependent state of the atom, we can derive a Markovian master equation for the reduced density matrix of the atom (see for instance Refs. \cite{breuer_02,gardiner_00}). The master equation we will study is
\begin{eqnarray}
\frac{d}{dt}\rho(t) = -i\cor{H,\rho} + \Gamma \left( \sigma^- \rho \sigma^+ -\frac{1}{2} \left\{\sigma^+ \sigma^-,\rho \right\} \right),
\label{eq:lindtotal}
\end{eqnarray}
where $\Gamma$ is the coupling between the atom and the vacuum.
In the Fock-Liouville space (following the same ordering as in Eq. (\ref{eq:denmat})), the Liouvillian corresponding to the evolution (\ref{eq:lindtotal}) is
\begin{equation}
{\cal L}=
\left(
\begin{array}{cccc}
0 & i \Omega & -i\Omega & \Gamma \\
i\Omega & -i E - \frac{\Gamma}{2} & 0 & -i\Omega\\
-i\Omega & 0 & i E -\frac{\Gamma}{2} & i\Omega \\
0 & -i\Omega & i\Omega & -\Gamma \\
\end{array}
\right).
\label{eq:liou2}
\end{equation}
Writing out the set of differential equations explicitly, we obtain
\begin{eqnarray}
\dot{\rho}_{00} &= & i \Omega \rho_{01} -i\Omega \rho_{10} + \Gamma \rho_{11} \nonumber \\
\dot{\rho}_{01} &=& i\Omega \rho_{00} - \pare{ iE + \frac{\Gamma}{2} } \rho_{01} -i\Omega \rho_{11} \nonumber\\
\dot{\rho}_{10} &=& -i\Omega \rho_{00} + \pare{ i E - \frac{\Gamma}{2}} \rho_{10} + i\Omega \rho_{11} \\
\dot{\rho}_{11} &=& -i\Omega \rho_{01} + i\Omega \rho_{10} -\Gamma \rho_{11} \nonumber
\end{eqnarray}
\vspace{0.25cm}
\label{minipage5}
\end{minipage}
}
\end{center}
\vspace{0.5cm}
\newpage
\section{Resolution of the Lindblad Master Equation}
\label{sec:resolution}
\subsection{Integration}
To calculate the time evolution of a system governed by a master equation of the form (\ref{eq:lindtotal}), we need to solve a set of coupled equations with as many variables as elements of the density matrix. In our example this means solving a set of four equations, but the dimension of the problem increases exponentially with the system size. Because of this, dimension-reduction techniques are required for larger systems.
To solve systems of ordinary differential equations there are several canonical algorithms. This can be done analytically only for a few simple systems, using sophisticated techniques such as damping bases \cite{briegel:pra93}. In most cases, we have to rely on approximate numerical methods. One of the most popular approaches is the $4^{th}$-order Runge-Kutta algorithm (see, for instance, \cite{numericalrecipes} for an explanation of the algorithm). By integrating the equations of motion, we can calculate the density matrix at any time $t$.
The steady state of a system can be obtained by evolving it for a long time $\pare{t \rightarrow \infty}$. Unfortunately, this method presents two difficulties. First, if the dimension of the system is large, the computing time becomes huge; for systems beyond a few qubits, it takes too long to reach the steady state. Even worse is the problem of stability of the algorithms for integrating differential equations. Due to small errors in the calculation of derivatives by finite differences, the trace of the density matrix may not remain equal to one. This error accumulates during the propagation of the state, giving unphysical results after a finite time. One solution to this problem is the use of algorithms specifically designed to preserve the trace, such as the Crank-Nicolson algorithm \cite{goldberg:ajp67}. The problem with this kind of algorithm is that it consumes more computational power than Runge-Kutta, and it is therefore not practical for calculating the long-time behaviour of large systems. An analysis of different methods and their advantages and disadvantages can be found in Ref. \cite{riesch:jcp19}.
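As an alternative to the Mathematica notebooks used in the boxes, the integration strategy can be sketched in NumPy (our own illustration, with arbitrary parameters): the density matrix is flattened to a vector in the Fock-Liouville space, the Liouvillian is built from Kronecker products using the column-stacking convention $\mathrm{vec}(AXB)=(B^T\otimes A)\,\mathrm{vec}(X)$, and a hand-written $4^{th}$-order Runge-Kutta step propagates it:

```python
import numpy as np

# Two-level system with decay (Box 6): H = E|1><1| + Omega(sigma^+ + sigma^-)
E, Omega, Gamma = 1.0, 1.0, 0.1
H = np.array([[0, Omega], [Omega, E]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^- = |0><1|
I = np.eye(2)

def liouvillian(H, L, gamma):
    """Matrix acting on vec(rho), with vec() stacking columns (order='F')."""
    n = L.conj().T @ L
    return (-1j * (np.kron(I, H) - np.kron(H.T, I))
            + gamma * (np.kron(L.conj(), L)
                       - 0.5 * (np.kron(I, n) + np.kron(n.T, I))))

Lmat = liouvillian(H, sm, Gamma)

def rk4_step(v, dt):
    k1 = Lmat @ v
    k2 = Lmat @ (v + 0.5 * dt * k1)
    k3 = Lmat @ (v + 0.5 * dt * k2)
    k4 = Lmat @ (v + dt * k3)
    return v + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

v = np.array([[0, 0], [0, 1]], dtype=complex).flatten(order='F')  # excited state
for _ in range(2000):
    v = rk4_step(v, 0.01)
rho = v.reshape(2, 2, order='F')
print(abs(np.trace(rho)))  # the Lindblad structure keeps the trace at 1
```

Because the trace functional is an exact left null vector of the Liouvillian matrix, this linear propagation preserves the trace to machine precision, in contrast with generic finite-difference schemes applied to nonlinear problems.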
\vspace{.35cm}
\begin{center}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 7. Time dependency of the two-level system with decay.}
\vspace{0.25cm}
In this box we show some results of solving Eq.~(\ref{eq:lindtotal}) and calculating the density matrix as a function of time. A Mathematica notebook solving this problem can be found at \cite{notebook}. To illustrate the time behaviour of this system, we calculate the evolution for different parameters. In all cases, we start with the excited initial state, $\rho_{11}=1$, with no coherence between states, meaning $\rho_{01}=\rho_{10}=0$. If the decay parameter $\Gamma$ is equal to zero, the problem reduces to solving the von Neumann equation, and the result is displayed in Figure \ref{figure1}. The other extreme case is a system with no coherent dynamics ($\Omega=0$) but with decay. In this case, we observe an exponential decay of the population of the excited state. Finally, we can calculate the dynamics of a system with both coherent driving and decay. In this case, both behaviours coexist, and there are oscillations as well as decay.
\begin{center}
\includegraphics[scale=.5]{figure2}
\includegraphics[scale=.5]{figure3}
\captionof{figure}{Left: Population dynamics under purely incoherent dynamics ($\Gamma=0.1,\; n=1,\; \Omega=0,\; E=1$). Right: Population dynamics under both coherent and incoherent dynamics ($\Gamma=0.1,\; n=1,\; \Omega=1,\; E=1$). In both plots, the blue line represents $\rho_{11}$ and the orange one $\rho_{00}$.}
\end{center}
\vspace{0.1cm}
\end{minipage}
\label{minipage6}
}
\end{center}
\newpage
\subsection{Diagonalisation}
As we have discussed before, in the Fock-Liouville space the Liouvillian corresponds to a matrix (in general complex, non-Hermitian, and non-symmetric). By diagonalising it we can calculate both the time evolution and the steady state of the density matrix. For most purposes, in the short-time regime integrating the differential equations may be more efficient than diagonalising. This is due to the high dimensionality of the Liouvillian, which makes the diagonalisation process very costly in computing power. On the other hand, to calculate the steady state, diagonalisation is the most used method, due to the problems with integrating the equations of motion discussed in the previous section.
Let us first see how diagonalisation is used to calculate the time evolution of a system. As the Liouvillian matrix is non-Hermitian, we cannot apply the spectral theorem to it, and it may have different left and right eigenvectors. For a specific eigenvalue $\Lambda_i$ we can obtain the eigenvectors $\kket{\Lambda_i^R}$ and $\bbra{\Lambda_i^L}$ s.t.
\begin{eqnarray}
\hspace{2cm}
\tilde{{\cal L}} \; \kket{\Lambda_i^R} = \Lambda_i \kket{\Lambda_i^R} \nonumber\\
\hspace{2cm}
\bbra{\Lambda_i^L}\; \tilde{{\cal L}} = \Lambda_i \bbra{\Lambda_i^L}.
\end{eqnarray}
An arbitrary state can be expanded in the eigenbasis of $\tilde{{\cal L}}$ as \cite{thingna:sr16,gardiner_00}
\begin{equation}
\kket{\rho(0)}= \sum_i \kket{\Lambda_i^R} \bbracket{\Lambda_i^L}{\rho(0)}.
\end{equation}
Therefore, the state of the system at a time $t$ can be calculated in the form
\begin{equation}
\kket{\rho(t)}= \sum_i e^{\Lambda_i t} \kket{\Lambda_i^R} \bbracket{\Lambda_i^L}{\rho(0)}.
\end{equation}
Note that in this case, to calculate the state at a time $t$, we do not need to integrate over the interval $\cor{0,t}$, as we would if we solved the set of differential equations numerically. This is an advantage when we want to calculate the long-time behaviour. Furthermore, to calculate the steady state of a system, we can look at the eigenvector with zero eigenvalue, as this is the only one that survives when $t\to\infty$.
For any finite system, Evans' Theorem ensures the existence of at least one zero eigenvalue of the Liouvillian matrix \cite{evans:cmp77,evans:jfa79}. The eigenvector corresponding to this zero eigenvalue is the steady state of the system. In exceptional cases, a Liouvillian can present more than one zero eigenvalue due to the presence of symmetries in the system \cite{buca:njp12,manzano:prb14,manzano:av18}. This is a non-generic case, and for most purposes we can assume the existence of a unique fixed point of the dynamics. Therefore, diagonalisation can be used to calculate the steady state without computing the full evolution of the system. This can even be done analytically for small systems, and when numerical approaches are required this technique gives better precision than integrating the equations of motion. The spectrum of Liouvillian superoperators has been analysed in several recent papers \cite{albert:pra14,thingna:sr16}.
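The steady-state recipe of this section can be sketched numerically (our own NumPy illustration, with arbitrary parameters): build the Liouvillian of the two-level system with decay, pick the eigenvalue closest to zero, and normalise the corresponding right eigenvector to unit trace:

```python
import numpy as np

# Liouvillian of the driven two-level system with decay, in the
# column-stacking convention vec(AXB) = (B^T kron A) vec(X).
E, Omega, Gamma = 1.0, 1.0, 0.1
H = np.array([[0, Omega], [Omega, E]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
I = np.eye(2)
n = sm.conj().T @ sm
Lmat = (-1j * (np.kron(I, H) - np.kron(H.T, I))
        + Gamma * (np.kron(sm.conj(), sm) - 0.5 * (np.kron(I, n) + np.kron(n.T, I))))

vals, vecs = np.linalg.eig(Lmat)
k = np.argmin(np.abs(vals))                    # Evans' theorem: one eigenvalue is (numerically) zero
rho_ss = vecs[:, k].reshape(2, 2, order='F')
rho_ss = rho_ss / np.trace(rho_ss)             # normalise to Tr[rho] = 1

print(abs(vals[k]))                 # ~0
print(np.round(rho_ss.real, 4))     # steady-state density matrix
```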
\newpage
\vspace{0.5cm}
\framebox[15.5cm][l]{
\begin{minipage}[l]{15cm}
\vspace{0.25cm}
{\bf Box 8. Spectrum-analysis of the Liouvillian for the two-level system with decay.}
Here we diagonalise (\ref{eq:liou2}) and obtain its steady state. A Mathematica notebook solving this problem can be downloaded from \cite{notebook}. This specific case is straightforward to diagonalise, as the dimension of the system is very low. We obtain $4$ different eigenvalues: two of them are real, while the other two form a complex-conjugate pair. Figure \ref{fig:spectrum} displays the spectrum of the superoperator ${\cal L}$ given in (\ref{eq:liou2}).
\begin{center}
\includegraphics[scale=.9]{figure4}
\captionof{figure}{Spectrum of the Liouvillian matrix given by (\ref{eq:liou2}) for the general case of both coherent and incoherent dynamics ($\Gamma=0.2,\; n=1,\; \Omega=0,\; E=1$).}
\label{fig:spectrum}
\end{center}
As there is only one zero eigenvalue, we can conclude that there is only one steady state, and any initial density matrix will evolve to it after an infinite-time evolution. By selecting the right eigenvector corresponding to the zero eigenvalue and normalising it, we obtain the steady-state density matrix. This can even be done analytically. The result is the matrix:
\begin{equation}
\rho_{SS}=
\left(
\begin{array}{cc}
\frac{(1+n) \left(4\, E^2+(\Gamma +2 n\, \Gamma )^2\right)+4 (1+2 n) \Omega ^2}{(1+2 n) \left(4 \,E^2+(\Gamma +2 n \,\Gamma )^2+8 \Omega^2\right)} & \frac{2 (-2 \,E-i (\Gamma +2 n \Gamma )) \Omega }{(1+2 n) \left(4 \,E^2+(\Gamma +2 n \,\Gamma )^2+8 \,\Omega ^2\right)} \\
\frac{2 (-2 \,E+i (\Gamma +2 n\, \Gamma )) \Omega }{(1+2 n) \left(4\, E^2+(\Gamma +2 n\, \Gamma )^2+8 \Omega ^2\right)} & \frac{n \left(4E^2+(\Gamma +2 n \Gamma )^2\right)+4 (1+2 n) \Omega ^2}{(1+2 n) \left(4\, E^2+(\Gamma +2 n \Gamma )^2+8\, \Omega ^2\right)} \\
\end{array}
\right)
\end{equation}
\vspace{0.25cm}
\vspace{0.25cm}
\end{minipage}
}
\section{Acknowledgements}
The author wants to acknowledge the Spanish Ministry and the Agencia Espa{\~n}ola de Investigaci{\'o}n (AEI) for financial support under grant
FIS2017-84256-P (FEDER funds).
\section{Introduction}\label{sec:1}
Structural relaxation and plastic deformation in disordered materials, such as supercooled liquids, metallic glasses, and granular materials, occur via the correlated motion of atoms or constituent particles \cite{adam_1965,argon_1979,falk_1998,falk_2011}. Such collective motions characterize dynamical heterogeneities \cite{kob_1997, donati_1998, zhang_2005, widmer_2006, biroli_2008, fan_2014, qiao_2017, wang_2019}. To elucidate the correlation between particles, one may introduce a four-point function $\chi (r, t)$, which measures the spatial correlation between two different positions at two different times \cite{glotzer_2000}. Ref.~\cite{lacevic_2003} is among the most extensive studies of the four-point function. There, the authors posit that the four-point function describes dynamical heterogeneities and exhibits a correlation length that grows with decreasing temperature in a glass-forming liquid, indicative of the correlated motion of atoms whose cluster size increases with cooling.
From the four-point function $\chi (r, t)$, one may define a four-point susceptibility, or global four-point function $\chi_4 (t)$ from which one can compute the size of a typical dynamical heterogeneity, via a simple counting argument described in \cite{abate_2007}. Indeed, if one defines the time-dependent order parameter $Q(t)$ by
\begin{equation}
Q(t) = \dfrac{1}{N} \sum_{i = 1}^N w_i (t) ,
\end{equation}
where $w_i(t)$ is the threshold function
\begin{equation}\label{eq:w_i}
w_i (t) =
\begin{cases}
1 , \quad \text{if $| \mathbf{r}_i (t) - \mathbf{r}_i (0) | < d$} ; \\
0 , \quad \text{if $| \mathbf{r}_i (t) - \mathbf{r}_i (0) | \geq d$},
\end{cases}
\end{equation}
for each of the $N$ atoms $i$ in the system, i.e., $w_i(t)$ equals unity if the displacement of atom $i$ over time $t$ does not exceed some threshold $d$, and zero otherwise, then one can naturally define the global four-point function
\begin{equation}\label{eq:chi4_def}
\chi_4 (t) = N \left( \left\langle Q (t) ^2 \right \rangle - \left\langle Q(t) \right\rangle^2 \right) .
\end{equation}
It can be shown easily \cite{abate_2007} that the typical number of particles in a dynamical heterogeneity is given by
\begin{equation}
n = \dfrac{\chi_4 (t^*)}{1 - Q(t^*)},
\end{equation}
where $t^*$ is the value of $t$ at which $\chi_4 (t)$ attains its peak value. The reader may immediately appreciate that the classification of whether a particle belongs to a dynamical heterogeneity depends on the threshold displacement $d$.
However, the precise physical meaning of the four-point correlation function $\chi (r, t)$ itself remains elusive. It has recently been proposed \cite{ryu_2019,ryu_2021,egami_2021,lieou_2022a} that the occurrence of dynamical heterogeneities is associated with medium-range order (MRO) in glass-forming materials. The MRO in a glass-forming liquid is observed as oscillations beyond the nearest-neighbor peak in the Van Hove function, $G(r, t)$ \cite{vanhove_1954}, or the pair distribution function (PDF), $g(r)$ \cite{egelstaff_1991}. The oscillations decay with a characteristic coherence length, $\xi$, which depicts the range of cooperativity. Both the Van Hove function and the PDF are defined in a much more straightforward manner, with rather transparent physical meanings. They can be determined experimentally through x-ray or neutron scattering \cite{egami_2020b,warren_1969}, and $\xi$ is directly related to viscosity \cite{ryu_2019} and liquid fragility \cite{ryu_2020}. This motivates the present work, in which we investigate the relationship between $\chi (r, t)$ and the Van Hove function $G(r, t)$, and examine whether $\chi(r, t)$ carries information about MRO.
The rest of the paper is structured as follows. In Sec. \ref{sec:2} we define the four-point function. In contrast to the approach taken by Ref. \cite{lacevic_2003}, we use the \textit{global} four-point susceptibility in Eq.~\eqref{eq:chi4_def} to motivate the local, position-dependent four-point function. We compare the four-point function to $g(r)$ and $G(r, t)$ in Sec.~\ref{sec:3}, and posit that MRO is the origin of dynamical heterogeneities. We conclude the paper with some additional remarks and insights in Sec.~\ref{sec:4}.
\section{The four-point function}
\label{sec:2}
Motivated by the definition in Eq.~\eqref{eq:chi4_def}, we define the \textit{local}, position-dependent four-point function
\begin{equation}\label{eq:chi_rr}
\chi_4 (\mathbf{r}', \mathbf{r}'' , t) \equiv \left\langle Q ( \mathbf{r}', t ) Q (\mathbf{r}'', t ) \right\rangle - \left\langle Q (\mathbf{r}', t) \right\rangle \left\langle Q(\mathbf{r}'', t) \right\rangle, ~~~~~
\end{equation}
where
\begin{equation}
Q (\mathbf{r}', t) = \sum_{i=1}^N Q_i (t) \delta (\mathbf{r}' - \mathbf{r}_i(0)) ,
\end{equation}
and we define the quantity $Q_i$ for atom $i$ in one of the following ways:
\begin{eqnarray}
\label{eq:Q1} Q_i (t) &\equiv& Q_{N,i} (t) = \sum_j \dfrac{1}{z_i + 1} w_j (t) ; \quad \text{or} \\
\label{eq:Q2} Q_i (t) &\equiv& Q_{S,i} (t) = w_i (t) .
\end{eqnarray}
In Eq.~\eqref{eq:Q1} the sum is over all $z_i$ neighbors (within some cutoff distance) of each atom $i$ plus itself; $w_i (t)$ is the thresholding function defined above in Eq.~\eqref{eq:w_i}. To convert this into a radially symmetric function that depends only on the relative position $\mathbf{r} \equiv \mathbf{r}' - \mathbf{r} ''$, integrate Eq.~\eqref{eq:chi_rr} to get
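A minimal sketch of the two choices, using a brute-force $O(N^2)$ neighbor search without periodic images (both simplifications, as well as the toy configuration, are ours):

```python
import numpy as np

def Q_variants(pos0, w, r_cut):
    """The neighborhood-averaged Q_{N,i} and the single-atom Q_{S,i}.

    pos0  : (N, 3) initial positions r_i(0) (free boundaries, for brevity).
    w     : (N,) window values w_i(t).
    r_cut : cutoff distance defining the z_i neighbors of atom i.
    """
    dist = np.linalg.norm(pos0[:, None, :] - pos0[None, :, :], axis=-1)
    neigh = dist < r_cut                   # includes atom i itself (dist 0)
    Q_N = (neigh @ w) / neigh.sum(axis=1)  # average over z_i neighbors + itself
    Q_S = w.astype(float)                  # the bare window function
    return Q_N, Q_S

# Three atoms: the first two are mutual neighbors, the third is isolated.
pos0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
Q_N, Q_S = Q_variants(pos0, np.array([1.0, 0.0, 1.0]), r_cut=1.5)
```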
\begin{eqnarray}
\nonumber & & \int d \mathbf{r}' \, d \mathbf{r}'' \chi_4 (\mathbf{r}', \mathbf{r}'', t) \\ \nonumber &=& \sum_i \sum_j \int d \mathbf{r} \left\langle Q_i(t) Q_j(t) \delta (\mathbf{r} - \mathbf{r}_i + \mathbf{r}_j ) \right\rangle \\ & & - \sum_i \sum_j \left\langle Q_i(t) \right\rangle \left\langle Q_j(t) \right\rangle . ~~~~~
\end{eqnarray}
In the above and what follows, we use $\mathbf{r}_i$ as a shorthand for $\mathbf{r}_i (0)$, the position of particle $i$ at the initial time $t = 0$. To get a dimensionless, intensive quantity independent of system size, and anticipating the connection to the PDF $g(r) = g(\mathbf{r})$, multiply this through by $V / N^2$, where $V$ is the extensive volume of the system. Then we see immediately that the appropriate definition of the $\mathbf{r}$-dependent four-point function is
\begin{eqnarray}\label{eq:chi_r}
\nonumber \chi (\mathbf{r}, t) &=& \dfrac{V}{N^2} \sum_{i} \sum_{j} \left\langle Q_i (t) Q_j(t) \delta (\mathbf{r} - \mathbf{r}_i + \mathbf{r}_j) \right\rangle \\ & & - \sum_i \sum_j \left\langle \dfrac{Q_i(t)}{N} \right\rangle \left\langle \dfrac{Q_j(t)}{N} \right\rangle .
\end{eqnarray}
One sees that if we take $Q_i(t) = w_i(t)$ in Eq.~\eqref{eq:chi_r}, as defined in Eq.~\eqref{eq:Q2} above, we arrive at a function that is identical to what is called $g_4(r)$ in \cite{lacevic_2003}; we shall call this $\chi_S (\mathbf{r}, t)$. If we choose $Q_i(t)$ according to Eq.~\eqref{eq:Q1}, we call the resultant four-point function $\chi_N (\mathbf{r}, t)$. Also, we see that in the $t \rightarrow 0$ limit,
\begin{equation}\label{eq:chi_r_t0}
\chi(\mathbf{r}, 0) = \dfrac{V}{N^2} \sum_i \sum_j \left\langle \delta( \mathbf{r} - \mathbf{r}_i + \mathbf{r}_j ) \right\rangle - 1 = g(\mathbf{r}) - 1,
\end{equation}
which is indeed related to the PDF, because $Q_i (t \rightarrow 0) = 1$, as all atoms stay within their cages in this limit. This holds for both $\chi_S(\mathbf{r}, t)$ and $\chi_N (\mathbf{r}, t)$. It is also obvious that one can rewrite Eq.~\eqref{eq:chi_r} to reflect the isotropy:
\begin{eqnarray}
\nonumber \chi (r, t) &=& \dfrac{V}{N^2} \sum_i \sum_j \dfrac{\left\langle Q_i Q_j \delta(r - | \mathbf{r}_i - \mathbf{r}_j|)\right\rangle}{4 \pi r^2} \\ & & - \sum_i \sum_j \left\langle \dfrac{Q_i(t)}{N} \right\rangle \left\langle \dfrac{Q_j(t)}{N} \right\rangle .
\end{eqnarray}
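The shell-averaged function can be estimated from a single configuration by histogramming pair distances weighted by $Q_iQ_j$. The sketch below (a cubic periodic box, minimum-image distances, and a one-configuration estimate in place of the ensemble average are our simplifications) doubles as a consistency check of Eq.~\eqref{eq:chi_r_t0}: for uncorrelated positions with $Q_i=1$, $g(r)\approx1$ and so $\chi(r,0)\approx0$.

```python
import numpy as np

def chi_r(pos, Q, L, bins):
    """Shell-averaged chi(r, t) from one configuration (no ensemble average).

    pos  : (N, 3) initial positions in a cubic periodic box of side L.
    Q    : (N,) per-atom values Q_i(t), e.g. Q_S or Q_N.
    bins : radial bin edges; the delta function is binned into shells.
    """
    N = len(Q)
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= L * np.round(diff / L)                 # minimum-image convention
    dist = np.linalg.norm(diff, axis=-1)
    i, j = np.triu_indices(N, k=1)                 # distinct pairs i != j
    hist, edges = np.histogram(dist[i, j], bins=bins,
                               weights=2.0 * Q[i] * Q[j])  # ordered-pair count
    r = 0.5 * (edges[1:] + edges[:-1])
    shell = 4.0 * np.pi * r ** 2 * np.diff(edges)  # shell volume 4*pi*r^2*dr
    return r, (L ** 3 / N ** 2) * hist / shell - np.mean(Q) ** 2

# Ideal-gas (Poisson) configuration: g(r) = 1, so chi(r, 0) should vanish.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(1500, 3))
r, chi0 = chi_r(pos, np.ones(1500), L=10.0, bins=np.linspace(1.0, 2.5, 11))
```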
\section{Medium-range order: simulations and results}
\label{sec:3}
We performed molecular dynamics simulations using the Large Scale Massively Parallel Simulator (LAMMPS) \cite{plimpton_1995}. The supercooled Fe sample was prepared with a melt-quench method, starting with $N = 16000$ Fe atoms, interacting with the modified Johnson potential and in a bcc lattice with a cubic side length of 58.86 \r{A}, melted at 5000 K, with periodic boundary conditions. The sample was then supercooled at a rate of $10^3$ K/ps in an NVT ensemble and equilibrated over 1 $\mu$s, in intervals of 100 K, down to a temperature of 1500 K, which is well above the glass transition temperature ($\sim 950$ K). Then we hold the temperature fixed at 1500 K and compute the four-point functions $\chi_N (r, t)$ and $\chi_S (r, t)$, with thresholds (cf.~Eq.~\eqref{eq:w_i}) $d = a/2$, $a$, and $2 a$, where $a = 0.824$ \r{A} is the mean-square displacement over $\Delta t = 1$ ps. We also compute the Van Hove function
\begin{equation}
G(r, t) = \dfrac{V}{N^2} \sum_i \sum_j \dfrac{\left\langle \delta(r - |\mathbf{r}_i (t) - \mathbf{r}_j (0)|)\right\rangle}{4 \pi r^2} ,
\end{equation}
and compare its behavior to that of the four-point functions.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{fig_coherence27_1500_all.pdf}
\caption{\label{fig:coherence_all}The pair-distribution function (PDF) $g(r)$, the Van Hove correlation function $G(r, t)$, and the four-point functions $\chi_N (r, t)$ and $\chi_S (r, t)$ with thresholds $d = a/2$, $a$, and $2 a$, where $a$ is the mean-square displacement over 1 ps, for various times (a) $t = 1$ ps, (b) $t = 2$ ps, and (c) $t = 5$ ps.}
\end{center}
\end{figure}
Figure \ref{fig:coherence_all} is a direct comparison of the PDF $g(r)$, the Van Hove function $G(r, t)$, and the four-point functions $\chi_N(r, t)$ and $\chi_S (r, t)$, for various displacement thresholds $d$, at times $t = 1$, 2, and 5 ps. It becomes immediately apparent that the four-point functions decay from $\chi_{N,S} (r, t = 0) = g(r)$ and that their peak positions are unchanged with time. The decay is more pronounced with decreasing threshold distance $d$; this is easily understood, for a reduction of $d$ means that more atoms are regarded as having been displaced outside of their cage of size $d$ over the same amount of time, decreasing the correlation. The quantities $\chi_N (r, t)$ and $\chi_S (r, t)$ behave almost identically, at all times and for all threshold distances. In particular, the choice $d = a$ gives rise to a four-point function which roughly traces the behavior of the Van Hove function $G(r, t)$ at all times; because $a$ is the mean-square displacement over 1 ps, this suggests that the characteristic time scale of liquid Fe at 1500 K is indeed roughly equal to 1 ps. In any case, the observation that the behaviors of $\chi_N (r, t)$ and $\chi_S (r, t)$ mirror that of $G(r, t)$ suggests that these quantities carry the same information about medium-range order in the glass-forming liquid. As such, we identify MRO to be the origin of dynamical heterogeneities in glass-forming liquids.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.5]{fig_coherence27_1500_fp.pdf}
\caption{\label{fig:coherence_fp}Decay of the first peak of the Van Hove function $G(r, t)$ and the four-point functions $\chi_N(r, t)$ and $\chi_S(r, t)$ for thresholds $d = a/2$, $a$, and $2 a$, where $a$ is the mean-square displacement over 1 ps.}
\end{center}
\end{figure}
One can visualize the structural relaxation time by investigating the decay of the first peak of the quantities $G(r, t)$, $\chi_N (r, t)$ and $\chi_S (r, t)$; this is shown in Fig.~\ref{fig:coherence_fp}. The decay of $\chi_N (r, t)$ and $\chi_S (r, t)$ for $d = a$ closely follows that of the Van Hove function, as we have already remarked above in our discussion of Fig.~\ref{fig:coherence_all}. For $d = 2a$ the decay of the four-point functions is more sluggish; this is a manifestation of the mantra that large objects evolve more slowly. In contrast, for $d = a/2$ the relaxation time of the four-point functions is substantially shorter, as a displacement of $a/2$ is typical in the phonon regime, where motion is ballistic and not caged, and the relaxation time is short. At long times ($t \gtrsim 8$ ps) the decay time constant of the four-point functions equals $\tau = 2.77$, 3.62, and 7.88 ps, for $d = a/2$, $a$, and $2 a$.
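Time constants such as those quoted above can be extracted by a least-squares fit of the logarithm of the first-peak height against $t$ over the long-time window. The sketch below applies such a fit to synthetic data with $\tau = 3.62$ ps; the window $t \ge 8$ ps follows the text, everything else is illustrative.

```python
import numpy as np

def decay_time(t, peak, t_min=8.0):
    """Exponential time constant from the long-time tail of a peak height.

    Linear least-squares fit of log(peak) against t for t >= t_min; one way
    to extract the long-time constants quoted in the text.
    """
    mask = (t >= t_min) & (peak > 0.0)
    slope, _ = np.polyfit(t[mask], np.log(peak[mask]), 1)
    return -1.0 / slope

# Synthetic first-peak decay with tau = 3.62 ps (the d = a value in the text).
t = np.linspace(0.0, 20.0, 201)
peak = 0.8 * np.exp(-t / 3.62)
tau = decay_time(t, peak)
```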
\section{Discussions and Concluding Remarks}
\label{sec:4}
We have seen that the four-point functions $\chi_S (r, t)$ and $\chi_N (r, t)$ -- defined by thresholding the atomic displacements of each atom $i$ or all atoms within some distance from it -- behave fairly similarly. Importantly, while the relaxation time of these four-point functions increases with increasing threshold distance $d$, they carry the same information about medium-range order as the Van Hove function $G(r, t)$ and the pair-distribution function $g(r)$ do. In particular, if one chooses the threshold distance $d = a$, the mean-square displacement over 1 ps, then the four-point functions are almost equal to $G(r, t)$.
The four-point functions with a window function were introduced in order to exclude small local fluctuations in atomic position and focus on coarse-grained density fluctuations. However, the MRO already represents such coarse-grained density fluctuations \cite{egami_2020a}. The first peak in $g(r)$ or $G(r, t)$ depicts atomic positions of near-neighbor atoms. In contrast, peaks beyond the first peak cover many atomic distances, and depict coarse-grained density correlations. In other words, they describe the point-to-set correlations, not the point-to-point correlations \cite{berthier_2012}. For this reason, the MRO portions of the PDF and the Van Hove function are already similar to the four-point functions in practice. This observation prompts us to identify medium-range order as the origin of dynamical heterogeneities.
Because the Van Hove function is much more easily determined in experiments than the four-point functions, and is also easier to interpret physically, it appears that the former has significantly greater utility than the latter. Indeed, since $g(r)$ already contains information about medium-range order and cooperative motion \cite{ryu_2021,egami_2021,lieou_2022a}, we posit that the four-point functions may not be needed, after all, to describe dynamical heterogeneities, structural relaxation, and deformation in glass-forming liquids. However, the four-point functions remain an interesting pedagogical tool that enables us to interpret the Van Hove function.
In an earlier paper \cite{wu_2018} it was argued that the de Gennes narrowing phenomenon, or the characteristic slowing down of dynamics near the maximum scattering intensity, has a geometric origin and does not necessarily imply dynamic cooperativity. Even for high-temperature liquids in which dynamic correlations are negligible, the relaxation time of the Van Hove function increases linearly with distance, resulting in apparent de Gennes narrowing. Thus, part of our present observation that the relaxation time scale of the four-point function depends on the size of atomic displacements may simply be related to geometrical de Gennes narrowing. When a size-dependent relaxation time is observed for a key quantitative tool commonly used to describe dynamical heterogeneities, one should not attribute it to dynamical heterogeneities without careful examination of the size dependence.
\begin{acknowledgments}
This work was supported by the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division.
\end{acknowledgments}
\section{Introduction}\label{sec:intro}
Stokes in his classical treatise~\cite{Stokes1847} (see also \cite{Stokes1880}) made formal but far-reaching considerations about periodic waves at the surface of an incompressible inviscid fluid in two dimensions, under the influence of gravity, which travel a long distance at a practically constant velocity without change of form. For instance, he observed that the crests become sharper and the troughs flatter as the amplitude increases, and that the `wave of greatest height' exhibits a $120^\circ$ corner at the crest.
It would be impossible to give a complete account of Stokes waves here. We refer the interested reader to the excellent surveys \cite{Toland1996, BT2003, Strauss2010}.
We merely pause to remark that in an irrotational flow of infinite depth, notable recent advances were based on a formulation of the problem as a nonlinear pseudodifferential equation, involving the periodic Hilbert transform, originally due to Babenko~\cite{Babenko} (see also \cite{LH1978, Plotnikov1992, DKSZ1996}). For instance, \cite{BDT2002a, BDT2002b} (see also \cite{BT2003} and references therein) rigorously addressed the existence in-the-large, and \cite{DLK2016, Lushnikov2016, LDS2017} numerically approximated the wave of greatest height and revealed the structure of the complex singularities in great detail.
The irrotational flow assumption is well justified in some circumstances. But rotational effects are significant in many others, for instance, for wind driven waves, waves in a shear flow, or waves near a ship or pier.
Constant vorticity is of particular interest because it greatly simplifies the mathematics.
Moreover, for short waves, compared with the characteristic lengthscale of vorticity, the vorticity at the fluid surface would be dominant. For long waves, compared with the fluid depth, the mean vorticity would be dominant (see the discussion in \cite{PTdS1988}).
Simmen and Saffman~\cite{SS1985} and Teles da Silva and Peregrine~\cite{PTdS1988}, among others, employed a boundary integral method and numerically computed Stokes waves in a constant vorticity flow. Their results include overhanging profiles and interior stagnation points. To compare, a Stokes wave in an irrotational flow is necessarily the graph of a single valued function and each fluid particle must move at a velocity less than the wave speed.
Recently, Constantin, Strauss and Varvaruca~\cite{CSV2016} used a conformal mapping, modified the Babenko equation and supplemented it with a scalar constraint, to permit constant vorticity and finite depth, and they rigorously established a global bifurcation result. The authors~\cite{DH1} rediscovered the modified Babenko equation and the scalar constraint, and numerically solved them by means of the Newton-GMRES method (see also \cite{Choi2009, RMN2017}). More recently, the authors~\cite{DH2} eliminated the Bernoulli constant from the modified Babenko equation and, hence, the scalar constraint. The associated linearized operator is self-adjoint, whereby it is efficiently handled by means of the conjugate gradient method. Here we review the analytical formulation and numerical findings of \cite{DH1,DH2}.
For strong positive vorticity, the amplitude increases, decreases and increases during the continuation of the numerical solution. Namely, a {\em fold} develops in the wave speed versus amplitude plane, and it becomes larger as the vorticity strength increases. For nonpositive vorticity, on the other hand, the amplitude increases monotonically. For stronger positive vorticity, a {\em gap} develops in the wave speed versus amplitude plane, bounded by two {\em touching waves}, whose profile contacts with itself at the trough line, enclosing a bubble of air, and the gap becomes larger as the vorticity strength increases. By the way, the numerical method of \cite{SS1985, PTdS1988} and others diverges in a gap. More folds and gaps follow as the vorticity strength increases even further.
Moreover, touching waves at the beginnings of the lowest gaps tend to the {\em limiting Crapper wave} (see \cite{Crapper}) as the vorticity strength increases indefinitely --- a striking and surprising link between rotational and capillary effects --- while they tend to a {\em fluid disk in rigid body rotation} at the ends of the gaps. Touching waves at the beginnings of the second gaps tend to the circular vortex wave on top of the limiting Crapper wave in the infinite vorticity limit, and the circular vortex wave on top of itself at the ends of the gaps. Touching waves at the boundaries of higher gaps contain more circular vortices in like manner.
\section{Formulation}\label{sec:formulation}
The water wave problem, in the simplest form, concerns the wave motion at the surface of an incompressible inviscid fluid in two dimensions, under the influence of gravity. Although an incompressible fluid may have variable density, we assume for simplicity that the density~$=1$.
Suppose for definiteness that in Cartesian coordinates, the $x$ axis points in the direction of wave propagation and the $y$ axis vertically upward. Suppose that the fluid at time $t$ occupies a region in the $(x,y)$ plane, bounded above by a free surface $y=\eta(x,t)$ and below by the rigid bottom $y=-h$ for some constant~$h$, possibly infinite. Let
\[
\varOmega(t)=\{(x,y)\in\mathbb{R}^2:-h<y<\eta(x,t)\}\quad\text{and}\quad\varGamma(t)=\{(x,\eta(x,t)):x\in\mathbb{R}\}.
\]
Let $\boldsymbol{u}=\boldsymbol{u}(x,y,t)$ denote the velocity of the fluid at the point $(x,y)$ and time $t$, and $P=P(x,y,t)$ the pressure. They satisfy the Euler equations for an incompressible fluid:
\begin{subequations}\label{E:ww0}
\begin{equation}\label{E:Euler}
\boldsymbol{u}_t+(\boldsymbol{u}\cdot\nabla)\boldsymbol{u}=-\nabla P+(0,-g)\quad\text{and}\quad
\nabla\cdot\boldsymbol{u}=0\quad\text{in $\varOmega(t)$},
\end{equation}
where $g$ is the constant due to gravitational acceleration. Let
\[
\omega:=\nabla\times\boldsymbol{u}
\]
denote constant vorticity. By the way, if the vorticity is constant throughout the fluid at the initial time then Kelvin's circulation theorem implies that it remains so at later times. We assume that there is no motion in the air and we neglect the effects of surface tension. The kinematic and dynamic conditions:
\begin{equation}\label{E:surface}
\eta_t+\boldsymbol{u}\cdot\nabla (\eta-y)=0\quad\text{and}\quad P=P_{atm}\quad\text{at $\varGamma(t)$}
\end{equation}
express that each fluid particle at the surface remains so at all times, and that the pressure there equals the constant atmospheric pressure $=P_{atm}$. In the finite depth, $h<\infty$, the kinematic condition states
\begin{equation}\label{E:bottom}
\boldsymbol{u}\cdot(0,-1)=0\quad\text{at $y=-h$}.
\end{equation}
\end{subequations}
We assume without loss of generality that the solutions of \eqref{E:ww0} are $2\pi$ periodic in the $x$ variable.
For any $h\in(0,\infty)$, $\omega\in \mathbb{R}$ and $c\in\mathbb{R}$, clearly,
\begin{equation}\label{def:shear}
\eta(x,t)=0,\quad\boldsymbol{u}(x,y,t)=(-\omega y-c,0)\quad\text{and}\quad P(x,y,t)=P_{atm}-gy
\end{equation}
solve \eqref{E:ww0}. We assume that some external effects such as wind produce such a constant vorticity flow and restrict the attention to waves propagating in \eqref{def:shear}.
Let
\begin{equation}\label{def:Phi}
\boldsymbol{u}=(-\omega y-c,0)+\nabla\varPhi,
\end{equation}
whence $\Delta\varPhi=0$ in $\varOmega(t)$ by the latter equation of \eqref{E:Euler}. Namely, $\varPhi$ is a velocity potential for the irrotational perturbation from \eqref{def:shear}. For nonconstant vorticity, $\varPhi$ is no longer viable to use. Let $\varPsi$ be a harmonic conjugate of $\varPhi$. Substituting \eqref{def:Phi} into the former equation of \eqref{E:Euler}, we make an explicit calculation to arrive at
\begin{equation}\label{E:bernoulli}
\varPhi_t+\frac12(\varPhi_x^2+\varPhi_y^2)-(\omega y+c)\varPhi_x+\omega\varPsi+P-P_{atm}+gy=b(t)
\end{equation}
for some function $b(t)$. We substitute \eqref{def:Phi} into the other equations of \eqref{E:ww0}, likewise. The result becomes, by abuse of notation,
\begin{subequations}\label{E:ww}
\begin{align}
&\Delta\varPhi=0 \quad &&\text{in $\varOmega(t)$}\\
&\eta_t+(\varPhi_x-\omega\eta-c)\eta_x=\varPhi_y &&\text{at $\varGamma(t)$}, \label{E:ww;K}\\
&\varPhi_t+\frac12|\nabla\varPhi|^2-(\omega\eta+c)\varPhi_x+\omega\varPsi+g\eta=0 &&\text{at $\varGamma(t)$},\label{E:ww;B}\\
&\varPhi_y=0 &&\text{at $y=-h$.}\label{E:ww;bottom}
\end{align}
By the way, since $\varPhi$ and $\varPsi$ are determined up to arbitrary functions of $t$, we may take without loss of generality that $b(t)=0$ at all times.
In the infinite depth, $h=\infty$, we replace \eqref{E:ww;bottom} by
\begin{equation}\label{E:ww;infty}
\varPhi,\varPsi\to0\quad\text{as $y\to-\infty$}\quad\text{uniformly for $x\in\mathbb{R}$}.
\end{equation}
\end{subequations}
See \cite{DH1,DH2}, for instance, for details.
\subsection{Reformulations in conformal coordinates}\label{sec:reformulation}
To proceed, we reformulate \eqref{E:ww} in conformal coordinates. Details may be found in \cite{DH1, DH2}. In what follows, we identify $\mathbb{R}^2$ with $\mathbb{C}$ whenever it is convenient to do so.
Let
\begin{equation}\label{def:conformal}
z=z(w,t),\quad \text{where}\quad w=u+iv\quad\text{and}\quad z=x+iy,
\end{equation}
conformally map $\Sigma_{d}:=\{u+iv\in\mathbb{C}:-d<v<0\}$ of $2\pi$ period in the $u$ variable, to $\varOmega(t)$ of $2\pi$ period in the $x$ variable, for some $d$, possibly infinite. Let \eqref{def:conformal} extend to map $\{u+i0:u\in\mathbb{R}\}$ to $\varGamma(t)$, and $\{u-id:u\in\mathbb{R}\}$ to $\{x-ih:x\in\mathbb{R}\}$ if $d,h<\infty$, and $-i\infty$ to $-i\infty$ if $d,h=\infty$, where $d=\langle y\rangle+h$ (see \cite{DH1} for detail). Here and elsewhere,
\[
\langle f\rangle=\frac{1}{2\pi}\int_{-\pi}^{\pi} f(u)~du
\]
denotes the mean of a $2\pi$ periodic function $f$ over one period.
\subsubsection*{Periodic Hilbert transforms for a strip}
For $d$ in the range $(0,\infty)$, let
\begin{align}
\mathcal{H}_de^{iku}=&-i\tanh(kd)e^{iku}&&\hspace*{-50pt}\text{for $k\in\mathbb{Z}$}\notag
\intertext{and}
\mathcal{T}_de^{iku}=&-i\coth(kd)e^{iku}&&\hspace*{-50pt}\text{for $k\neq0,\in\mathbb{Z}$}.\label{def:T}
\intertext{Let}
\mathcal{H}e^{iku}=&-i\,\text{sgn}(k)e^{iku}&&\hspace*{-50pt}\text{for $k\in\mathbb{Z}$}.\notag
\end{align}
When $d<\infty$, if $F$ is holomorphic in $\Sigma_d$ and $2\pi$ periodic in the $u$ variable and if $\text{Re}\,F(\cdot+i0)=f$ and $(\text{Re}\,F)_v(\cdot-id)=0$ then
\begin{equation}\label{E:1-iH}
F(\cdot+i0)=(1-i\mathcal{H}_d)f
\end{equation}
up to the addition by a purely imaginary constant. Namely, $1-i\mathcal{H}_d$ is the surface value of a periodic holomorphic function in a strip, the normal derivative of whose real part vanishes at the bottom. If $\text{Im}\,F(\cdot+i0)=f$ and $\text{Im}\,F(\cdot-id)=0$, and if $\langle f\rangle=0$, instead, then
\begin{equation}\label{E:T+i}
F(\cdot+i0)=(\mathcal{T}_d+i)f
\end{equation}
up to the addition by a real constant. Namely, $\mathcal{T}_d+i$ is the surface value of a periodic holomorphic function in a strip, whose imaginary part is of mean zero at the surface and vanishes at the bottom. Moreover, when $d=\infty$, if $F$ is holomorphic in $\Sigma_\infty$ and $2\pi$ periodic in the $u$ variable and if $F$ vanishes sufficiently rapidly at $-i\infty$ then the real and imaginary parts of $F(\cdot+i0)$ are the periodic Hilbert transforms for each other (see \cite{Zygmund}, for instance).
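The multiplier definitions above translate directly into a spectral implementation on a uniform grid. The following sketch (FFT-based; the function names are ours) verifies the action on a single harmonic, $\mathcal{H}_d\cos ku=\tanh(kd)\sin ku$ and $\mathcal{T}_d\cos ku=\coth(kd)\sin ku$.

```python
import numpy as np

def apply_multiplier(f, symbol):
    """Apply a Fourier multiplier m(k) to a 2*pi-periodic grid function."""
    k = np.fft.fftfreq(len(f), d=1.0 / len(f))    # integer wavenumbers
    return np.real(np.fft.ifft(symbol(k) * np.fft.fft(f)))

def H_d(f, d):
    """H_d: e^{iku} -> -i tanh(kd) e^{iku}."""
    return apply_multiplier(f, lambda k: -1j * np.tanh(k * d))

def T_d(f, d):
    """T_d: e^{iku} -> -i coth(kd) e^{iku} (k != 0; the mean is annihilated)."""
    def symbol(k):
        out = np.zeros(len(k), dtype=complex)
        out[k != 0] = -1j / np.tanh(k[k != 0] * d)
        return out
    return apply_multiplier(f, symbol)

u = 2.0 * np.pi * np.arange(256) / 256
h = H_d(np.cos(3.0 * u), 1.0)
tt = T_d(np.cos(3.0 * u), 1.0)
```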
\subsubsection*{Implicit form}
Note that $(x+iy)(u,t)$, $u\in\mathbb{R}$, makes a conformal parametrization of the fluid surface. In the finite depth, $d,h<\infty$, it follows from the Cauchy-Riemann equations and \eqref{def:T} that
\begin{equation}\label{E:Ty}
(x+iy)(u,t)=u+(\mathcal{T}_d+i)y(u,t).
\end{equation}
In the infinite depth, $d,h=\infty$, $\mathcal{H}$ replaces $\mathcal{T}_d$ (see \cite{DKSZ1996, DH2}, for instance).
Moreover, let
\[
(\phi+i\psi)(w,t)=(\varPhi+i\varPsi)(z(w,t),t)\quad\text{for $w\in\Sigma_d$}.
\]
Namely, it is a conformal velocity potential for the irrotational perturbation from \eqref{def:shear}. In the finite or infinite depth, it follows from \eqref{E:1-iH} that
\begin{equation}\label{E:Hp}
(\phi+i\psi)(u,t)=(1-i\mathcal{H}_d)\phi(u,t)
\end{equation}
up to the addition by a purely imaginary constant.
In the finite depth, substituting \eqref{E:Ty} and \eqref{E:Hp} into \eqref{E:ww;K}-\eqref{E:ww;B}, we make an explicit calculation to arrive at
\begin{equation}\label{E:implicit}
\begin{aligned}
&(1+\mathcal{T}_dy_u)y_t-y_u\mathcal{T}_dy_t-\mathcal{H}_d\phi_u-(\omega y+c)y_u=0,\\
&((1+\mathcal{T}_dy_u)^2+y_u^2)(\phi_t+gy-\omega\mathcal{H}_d\phi)\\
&\quad-((1+\mathcal{T}_dy_u)\mathcal{T}_dy_t+y_uy_t)\phi_u
+(y_u\mathcal{T}_dy_t-(1+\mathcal{T}_dy_u)y_t)\mathcal{H}_d\phi_u \\
&\quad\quad+\frac12(\phi_u^2+(\mathcal{H}_d\phi_u)^2)
-(\omega y+c)((1+\mathcal{T}_dy_u)\phi_u-y_u\mathcal{H}_d\phi_u)=0.
\end{aligned}
\end{equation}
In the infinite depth, $\mathcal{H}$ replaces $\mathcal{H}_d$ and $\mathcal{T}_d$. See \cite{DH1}, for instance, for details.
\subsubsection*{Explicit form}
In the finite depth, note that $z_t/z_u$ is holomorphic in $\Sigma_d$,
\[
\text{Im}\frac{z_t}{z_u}=\frac{\mathcal{H}_d\phi_u+(\omega y+c)y_u}{|z_u|^2}\quad\text{at $v=0$}
\]
by the former equation of \eqref{E:implicit}, and $\text{Im}(z_t/z_u)=0$ at $v=-d$ by \eqref{E:ww;bottom}. Note that $\langle \text{Im}(z_t/z_u)\rangle=0$ for any $v\in[-d,0]$ by the Cauchy-Riemann equations and \eqref{E:ww;bottom}. It then follows from \eqref{E:T+i} that
\begin{equation}\label{E:zt/zu}
\frac{z_t}{z_u}=(\mathcal{T}_d+i)\Big(-\frac{(\mathcal{H}_d\phi+\frac12\omega y^2+cy)_u}{|z_u|^2}\Big)\quad\text{at $v=0$}.
\end{equation}
Moreover, note that $(\phi_u-i\mathcal{H}_d\phi_u)^2$ is the surface value of a holomorphic and $2\pi$ periodic function in $\Sigma_d$, the normal derivative of whose real part vanishes at the bottom. It then follows from \eqref{E:1-iH} and \eqref{def:T} that
\begin{equation}\label{E:phi2}
\phi_u^2-(\mathcal{H}_d\phi_u)^2=-2\mathcal{T}_d(\phi_u\mathcal{H}_d\phi_u).
\end{equation}
We use \eqref{E:zt/zu} and \eqref{E:phi2}, and make a lengthy but explicit calculation to solve \eqref{E:implicit} as
\begin{align}\label{E:explicit}
y_t=&(1+\mathcal{T}_dy_u+y_u\mathcal{T}_d)
\Big(\frac{\mathcal{H}_d\phi_u+(\omega y+c)y_u}{(1+\mathcal{T}_dy_u)^2+y_u^2}\Big), \notag \\
\phi_t=&-\phi_u\mathcal{T}_d
\Big(\frac{\mathcal{H}_d\phi_u+(\omega y+c)y_u}{(1+\mathcal{T}_dy_u)^2+y_u^2}\Big) \notag \\
&+\frac{1}{(1+\mathcal{T}_dy_u)^2+y_u^2}(\mathcal{T}_d(\phi_u\mathcal{H}_d\phi_u)
+(\omega y+c)(1+\mathcal{T}_dy_u)\phi_u)+\omega\mathcal{H}_d\phi-gy.
\end{align}
In the infinite depth, $\mathcal{H}$ replaces $\mathcal{H}_d$ and $\mathcal{T}_d$. See \cite{DH1}, for instance, for details.
\subsection{The Stokes wave problem in a constant vorticity flow}\label{sec:Stokes}
We turn our attention to the solutions of \eqref{E:explicit}, for which $y_t=\phi_t=0$.
In the finite depth, substituting $y_t=0$ into the former equation of \eqref{E:explicit}, we arrive at
\begin{equation}\label{E:phi'}
\phi'=\mathcal{T}_d(\omega yy'+cy')\quad\text{at $v=0$}.
\end{equation}
Here and elsewhere, the prime denotes ordinary differentiation.
Substituting $\phi_t=0$ into the latter equation of \eqref{E:explicit}, likewise, we use \eqref{E:phi'} and we make an explicit calculation to arrive at
\begin{equation}\label{E:Stokes}
(c+\omega y(1+\mathcal{T}_dy')-\omega\mathcal{T}_d(yy'))^2=(c^2-2gy)((1+\mathcal{T}_dy')^2+(y')^2).
\end{equation}
In the infinite depth, $\mathcal{H}$ replaces $\mathcal{T}_d$.
If we were to take \eqref{E:bernoulli}, rather than \eqref{E:ww;B}, where $b=0$, then the result would become
\begin{equation}\label{E:DH1b}
(c+\omega y(1+\mathcal{T}_dy')-\omega\mathcal{T}_d(yy'))^2=(c^2+2b-2gy)((1+\mathcal{T}_dy')^2+(y')^2),
\end{equation}
and one must determine $b$ as part of the solution. See \cite{DH1}, for instance, for details.
\subsubsection*{The modified Babenko equation}
Unfortunately, \eqref{E:Stokes} or \eqref{E:DH1b} is not suitable for numerical solution, because one would have to work with rational functions of $y$. We reformulate \eqref{E:Stokes} in a more convenient form. Details may be found in \cite{DH1, DH2}.
In the finite depth, we rearrange \eqref{E:Stokes} as
\begin{multline*}
(c-\omega\mathcal{T}_d(yy'))^2+2\omega y(c-\omega\mathcal{T}_d(yy'))(1+\mathcal{T}_dy')
-\omega^2y^2(y')^2 \\ =(c^2-2gy-\omega^2y^2)((1+\mathcal{T}_dy')^2+(y')^2).
\end{multline*}
Note that $(c-\omega(\mathcal{T}_d+i)(yy'))^2$ is the surface value of a holomorphic and $2\pi$ periodic function in $\Sigma_d$, whose imaginary part is of mean zero at the surface and vanishes at the bottom. Hence, so is
\begin{align*}
(&c^2-2gy-\omega^2y^2)((1+\mathcal{T}_dy')^2+(y')^2)
-2\omega y(c-\omega\mathcal{T}_d(yy'))(1+\mathcal{T}_dy'+iy') \\
&=((c^2-2gy-\omega^2y^2)(1+\mathcal{T}_dy'-iy')
-2\omega y(c-\omega\mathcal{T}_d(yy')))(1+\mathcal{T}_dy'+iy').
\end{align*}
Moreover, note that $1/(1+\mathcal{T}_dy'+iy')$ is the surface value of the holomorphic and $2\pi$ periodic function $=1/z_u$ in $\Sigma_d$, whose imaginary part is of mean zero at the surface and vanishes at the bottom.
Hence, so is
\[
(c^2-2gy-\omega^2y^2)(1+\mathcal{T}_dy'-iy')-2\omega y(c-\omega\mathcal{T}_d(yy')).
\]
Therefore, it follows from \eqref{E:Ty} that
\[
(c^2-2gy-\omega^2y^2)(1+\mathcal{T}_dy')-2\omega y(c-\omega\mathcal{T}_d(yy'))
=-\mathcal{T}_d((c^2-2gy-\omega^2y^2)y')
\]
up to the addition by a real constant. Or, equivalently,
\begin{multline}\label{E:babenko}
c^2\mathcal{T}_dy'-(g+c\omega)y-g(y\mathcal{T}_dy'+\mathcal{T}_d(yy'))\\
-\frac12\omega^2(y^2+\mathcal{T}_d(y^2y')+y^2\mathcal{T}_dy'-2y\mathcal{T}_d(yy'))=0
\end{multline}
and
\begin{equation}\label{E:mean}
g\langle y(1+\mathcal{T}_dy')\rangle+c\omega\langle y\rangle+\frac12\omega^2\langle y^2\rangle=0.
\end{equation}
Indeed, $\langle \mathcal{T}_df'\rangle=0$ for any function $f$ by \eqref{def:T} and
\[
\langle y^2\mathcal{T}_dy'\rangle =\frac{1}{2\pi}\int^{\pi}_{-\pi} y^2\mathcal{T}_dy'~du
=-\frac{1}{2\pi}\int^{\pi}_{-\pi} y\mathcal{T}_d(y^2)'~du=-\langle 2y\mathcal{T}_d(yy')\rangle.
\]
In the infinite depth, $\mathcal{H}$ replaces $\mathcal{T}_d$. Conversely, a solution of \eqref{E:babenko}-\eqref{E:mean} gives rise to a traveling wave of \eqref{E:ww} and, hence, \eqref{E:ww0}, provided that
\begin{subequations}\label{C:limiting}
\begin{gather}
u\mapsto (u+\mathcal{T}_dy(u), y(u)),\quad u\in\mathbb{R},\quad\text{is injective} \label{C:touching}
\intertext{and}
((1+\mathcal{T}_dy_u)^2+y_u^2)(u)\neq0\quad\text{for any $u\in\mathbb{R}$}. \label{C:extreme}
\end{gather}
\end{subequations}
See \cite{DH1,DH2}, for instance, for details.
The Stokes wave problem in a constant vorticity flow is to find $\omega\in\mathbb{R}$, $d\in(0,\infty]$, $c\in\mathbb{R}$ and a $2\pi$ periodic function $y$, satisfying \eqref{C:limiting}, which together solve \eqref{E:babenko}-\eqref{E:mean}.
In what follows, we assume that $y$ is even (see \cite{Hur2007}, for instance, for arbitrary vorticity).
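Linearizing \eqref{E:babenko} about $y=0$ (using $\mathcal{T}_d(\varepsilon\cos ku)'=\varepsilon k\coth(kd)\cos ku$) gives the dispersion relation $c^2k\coth(kd)=g+c\omega$. The sketch below evaluates the left-hand side of \eqref{E:babenko} spectrally and checks that a small-amplitude cosine, with $c$ taken from the dispersion relation, leaves a residual of quadratic order in the amplitude; the grid size and parameter values are arbitrary choices for the illustration.

```python
import numpy as np

def T_d(f, d):
    """T_d: e^{iku} -> -i coth(kd) e^{iku}; the mean mode is annihilated."""
    n = len(f)
    k = np.fft.fftfreq(n, d=1.0 / n)
    sym = np.zeros(n, dtype=complex)
    sym[k != 0] = -1j / np.tanh(k[k != 0] * d)
    return np.real(np.fft.ifft(sym * np.fft.fft(f)))

def d_du(f):
    """Spectral derivative d/du of a 2*pi-periodic grid function."""
    n = len(f)
    k = np.fft.fftfreq(n, d=1.0 / n)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

def babenko_residual(y, c, omega, d, g=9.81):
    """Left-hand side of the modified Babenko equation (finite depth)."""
    yp = d_du(y)
    Tyy = T_d(d_du(y * y) / 2.0, d)        # T_d(y y'), since (y^2)' = 2 y y'
    return (c ** 2 * T_d(yp, d) - (g + c * omega) * y
            - g * (y * T_d(yp, d) + Tyy)
            - 0.5 * omega ** 2 * (y ** 2 + T_d(d_du(y ** 3) / 3.0, d)
                                  + y ** 2 * T_d(yp, d) - 2.0 * y * Tyy))

# Small-amplitude wave: c solves c^2 k coth(kd) = g + c*omega for k = 1.
g, omega, d, k0, eps = 9.81, 1.0, 1.0, 1, 1.0e-6
a = k0 / np.tanh(k0 * d)                   # k coth(kd)
c = (omega + np.sqrt(omega ** 2 + 4.0 * a * g)) / (2.0 * a)
u = 2.0 * np.pi * np.arange(128) / 128
res = babenko_residual(eps * np.cos(k0 * u), c, omega, d, g)
```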
In an irrotational flow of infinite depth, $\omega=0$ and $d=\infty$, \eqref{E:babenko} and \eqref{E:mean} simplify to
\begin{equation}\label{E:babenko0}
c^2\mathcal{H}y'-gy-g(y\mathcal{H}y'+\mathcal{H}(yy'))=0
\end{equation}
and $\langle y(1+\mathcal{H}y')\rangle=0$. Longuet-Higgins~\cite{LH1978} discovered a set of identities among the Fourier coefficients of a Stokes wave, which Babenko~\cite{Babenko} rediscovered in the form of \eqref{E:babenko0} and, independently, \cite{Plotnikov1992, DKSZ1996} among others. One may regard \eqref{E:babenko}-\eqref{E:mean} as the modified Babenko equation, permitting constant vorticity and finite depth.
If we were to take \eqref{E:bernoulli}, rather than \eqref{E:ww;B}, where $b=0$, then \eqref{E:babenko} would become
\begin{equation}\label{E:DH1y}
\begin{aligned}
(c^2+2b)\mathcal{T}_dy'-&(g+c\omega)y-g(y\mathcal{T}_dy'+\mathcal{T}_d(yy'))\\
-&\frac12\omega^2(y^2+y^2\mathcal{T}_dy'+\mathcal{T}_d(y^2y')-2y\mathcal{T}_d(yy'))=0,
\end{aligned}
\end{equation}
which is supplemented with
\begin{equation}\label{E:DH1-mean}
\langle(c+\omega y(1+\mathcal{T}_dy')-\omega\mathcal{T}_d(yy'))^2\rangle
=\langle (c^2+2b-2gy)((1+\mathcal{T}_dy')^2+(y')^2)\rangle.
\end{equation}
This is what \cite{CSV2016, DH1} derived.
\section{Numerical method}\label{sec:numerical}
We write \eqref{E:babenko} in the operator form as $\mathcal{G}(y; c, \omega, d)=0$ and solve it iteratively using the Newton method. Let $y^{(n+1)}=y^{(n)}+\delta y^{(n)}$, $n=0,1,2,\dots$, where $y^{(0)}$ is an initial guess, to be supplied (see \cite{DH1, DH2}, for instance), and $\delta y^{(n)}$ solves
\begin{equation}\label{E:dF}
\delta \mathcal{G}(y^{(n)};c,\omega,d)\delta y^{(n)}=-\mathcal{G}(y^{(n)};c,\omega,d),
\end{equation}
where $\delta \mathcal{G}(y^{(n)};c,\omega,d)$ is the linearization of $\mathcal{G}(y;c,\omega,d)$ with respect to $y$, evaluated at $y=y^{(n)}$.
We exploit an auxiliary conformal mapping, involving Jacobi elliptic functions (see \cite{DH2} and references therein), and take efficient, albeit highly nonuniform, grid points in $u\in[-\pi,\pi]$. We approximate $y^{(n)}$ by a discrete Fourier transform and numerically evaluate $y^{(n)}$, $\mathcal{T}_dy^{(n)}$ and others using a fast Fourier transform. Since
\begin{align*}
\delta\mathcal{G}(y;c,\omega,d)\delta y=&c^2\mathcal{T}_d(\delta y)'-(g+c\omega)\delta y
-g(\delta y\mathcal{T}_dy'+y\mathcal{T}_d(\delta y)'+\mathcal{T}_d(y\delta y)') \\
&-\frac12\omega^2(2y\delta y+\mathcal{T}_d(y^2\delta y)'-[2y\delta y,y]+[y^2,\delta y]),
\end{align*}
where $[f_1,f_2]=f_1\mathcal{T}_df_2-f_2\mathcal{T}_df_1$, is self-adjoint, we solve \eqref{E:dF} using the conjugate gradient (CG) method.
We employ \eqref{E:mean} to determine the zeroth Fourier coefficient.
Once we arrive at a convergent solution, we continue it along in the parameters. See \cite{DH2}, for instance, for details.
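The self-adjointness that licenses the CG method can be checked numerically. The sketch below, in Python rather than the authors' production code, implements the linearized operator for the irrotational, infinite-depth case \eqref{E:babenko0} (with $g=1$, $\mathcal{H}\partial_u$ realized as the Fourier multiplier $|k|$, and a uniform grid instead of the nonuniform one described above) together with a generic matrix-free CG solver; the grid size and test functions are illustrative choices, not taken from the cited papers.

```python
import numpy as np

def hilbert_dy(f):
    # H d/du on a 2*pi-periodic sample: Fourier multiplier |k|
    n = f.size
    k = np.abs(np.fft.fftfreq(n, d=1.0 / n))
    return np.fft.ifft(k * np.fft.fft(f)).real

def linearized(y, dy, c, g=1.0):
    # linearization of G(y) = c^2 H y' - g y - g (y H y' + H(y y'))
    Ly = hilbert_dy(y)
    return (c * c * hilbert_dy(dy) - g * dy
            - g * (dy * Ly + y * hilbert_dy(dy) + hilbert_dy(y * dy)))

def cg(apply_A, b, tol=1e-10, maxit=1000):
    # matrix-free conjugate gradients for a self-adjoint system A x = b
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# self-adjointness check: <f, dG h> should equal <dG f, h>
n = 128
u = 2 * np.pi * np.arange(n) / n - np.pi
y = 0.1 * np.cos(u) + 0.03 * np.cos(2 * u)   # illustrative wave profile
rng = np.random.default_rng(0)
f, h = rng.normal(size=n), rng.normal(size=n)
lhs = f @ linearized(y, h, c=1.1)
rhs = h @ linearized(y, f, c=1.1)
print(abs(lhs - rhs))   # ~ machine precision
```

The same `cg` routine, applied to the discretized $\delta\mathcal{G}$, is the kind of matrix-free solve the text describes; no matrix is ever formed, only operator applications.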
If we were to take \eqref{E:DH1y}-\eqref{E:DH1-mean}, rather than \eqref{E:babenko}, where $b=0$, then the associated linearized operator includes
\begin{align*}
(\delta y,\delta b)\mapsto &(c^2+2b)\mathcal{T}_d(\delta y)'\hspace*{-2pt}+\hspace*{-2pt}2\delta b\mathcal{T}_dy'
\hspace*{-2pt}-\hspace*{-2pt}(g+c\omega)\delta y
\hspace*{-2pt}-\hspace*{-2pt}g(\delta y\mathcal{T}_dy'+y\mathcal{T}_d(\delta y)'+\mathcal{T}_d(y\delta y)')\\
&-\frac12\omega^2(2y\delta y+\mathcal{T}_d(y^2\delta y)'-[2y\delta y,y]+[y^2,\delta y]),
\end{align*}
which is {\em not} self-adjoint, whence the CG or conjugate residual method may not apply. The authors~\cite{DH1} used the generalized minimal residual (\mbox{GMRES}) method and achieved some success, but the computation becomes prohibitively slow when an accurate numerical solution requires excessively many grid points. The CG method is more powerful than GMRES for self-adjoint equations, and it leads to the new findings we discuss below.
\section{Results}\label{sec:results}
Summarized below are the key findings of \cite{DH1,DH2}.
We take, without loss of generality, $c$ positive, and allow $\omega$ to be positive or negative, representing waves propagating upstream or downstream, respectively (see the discussion in \cite{PTdS1988}).
We take for simplicity $g=1$ and $d=\infty$. We remark that the effects of finite depth change the amplitude of a Stokes wave, among others, but are otherwise insignificant (see \cite{DH1}, for instance).
In what follows, the steepness $s$ is the crest-to-trough wave height divided by the period $2\pi$.
\subsection{Folds and gaps}\label{sec:fold/gap}
\begin{figure}[h]
\includegraphics[scale=1.0]{Figure1a.pdf}
\includegraphics[scale=1.0]{Figure1b.pdf}
\caption{On the left, wave speed vs. steepness for $\omega=0$ and~$-1$. Insets are closeups near the endpoints of the continuation of the numerical solution. On the right, the profiles of almost extreme waves.}
\label{fig:w=0,-1}
\end{figure}
For zero and negative constant vorticity, for instance, for $\omega=0$ and $-1$, the left panel of Figure~\ref{fig:w=0,-1} collects the wave speed versus steepness from the continuation of the numerical solution. For $\omega=0$, Longuet-Higgins and Fox~\cite{LHFox1978}, among others, predicted that $c$ oscillates infinitely many times whereas $s$ increases monotonically toward the wave of greatest height or the extreme wave, whose profile exhibits a $120^\circ$ corner at the crest. Numerical computations (see \cite{DLK2016, LDS2017}, for instance, and references therein) bear it out. The insets reproduce the well-known result and suggest likewise when $\omega=-1$.
The right panel displays the profiles of almost extreme waves, in the $(x,y)$ plane in the range $x\in[-\pi,\pi]$. Troughs are at $y=0$. Note that the steepness when $\omega=-1$ is noticeably less than when $\omega=0$.
\begin{figure}[h]
\includegraphics[scale=1.0]{Figure2a.pdf}
\includegraphics[scale=1.0]{Figure2b.pdf}
\includegraphics[scale=1.0]{Figure2c.pdf}
\includegraphics[scale=1.0]{Figure2d.pdf}
\caption{For $\omega=2.5$. Clockwise from upper left: wave speed vs. steepness; the profiles of eight solutions, labelled by $A$ to $E$, and $G$ to $I$; the profile of an unphysical solution labelled by~$F$.}
\label{fig:w=2.5}
\end{figure}
For a large value of positive constant vorticity, for instance, for $\omega=2.5$, Figure~\ref{fig:w=2.5} includes the wave speed versus steepness and the profiles at the indicated points along the $c=c(s)$ curve, in the $(x,y)$ plane, where $x\in[-3\pi,3\pi]$. Troughs are at $y=0$. The upper left panel reveals that $s$ increases and then decreases from $s=0$ to wave $D$. Namely, a {\em fold} develops in the $c=c(s)$ curve. For $s$ small, for instance, for wave~$A$, the profile is single valued. But we observe that the profile becomes more rounded as $s$ increases along the fold, so that overhanging waves appear, whose profile is no longer single valued. Moreover, we arrive at a {\em touching wave}, whose profile becomes vertical and contacts with itself somewhere along the trough line, thereby enclosing a bubble of air. Wave $B$ is an almost touching wave.
Past the touching wave, a numerical solution is unphysical because \eqref{C:touching} no longer holds true (see \cite{DH1, DH2} for examples). Moreover, we observe that the profile becomes less rounded as $s$ decreases along the fold, so that we arrive at another touching wave; past the touching wave, a numerical solution is physical. Wave~$C$ is an almost touching wave and wave $D$ is physical. Together, a {\em gap} develops in the $c=c(s)$ curve, consisting of unphysical numerical solutions and bounded by two touching waves. We remark that wave $C$ encloses a larger bubble of air than wave $B$.
Past the end of the fold, interestingly, the upper left panel reveals another fold and another gap. The steepness increases from waves $D$ to $F$, and decreases from waves $F$ to $H$. Waves $E$ and $G$ are almost touching waves, and numerical solutions between them are unphysical. For instance, for wave $F$, the profile intersects itself and the fluid region overlaps itself.
Past the end of the second fold, we observe that $s$ increases monotonically, although $c$ oscillates (see \cite{DH2}, for instance, for details), like when $\omega=0$; moreover, overhanging profiles disappear as $s$ increases and the crests become sharper, like when $\omega=0$. Therefore, we may claim that an {\em extreme wave} ultimately appears, whose profile exhibits a sharp corner at the crest. Wave $I$ is an almost extreme wave. One may not continue the numerical solution past the extreme wave because \eqref{C:extreme} would no longer hold true.
\begin{figure}[h]
\includegraphics[scale=1.0]{Figure3.pdf}
\caption{Wave speed vs. steepness for five values of positive constant vorticity. Solid curves for physical solutions and dashed curves unphysical. The inset distinguishes the lowest fold and gap.}
\label{fig:c=c(s)}
\end{figure}
Figure~\ref{fig:c=c(s)} includes the wave speed versus steepness for several values of positive constant vorticity. For zero vorticity, one predicts that $c$ experiences infinitely many oscillations whereas $s$ increases monotonically (see \cite{LHFox1978}, for instance). For negative constant vorticity, numerical computations (see \cite{SS1985, PTdS1988, DH1}, among others) suggest that the crests become sharper and lower. Figure~\ref{fig:w=0,-1} bears it out.
For positive constant vorticity, for instance, for $\omega=1.7$, on the other hand, Figure~\ref{fig:c=c(s)} reveals that the lowest oscillation of $c$ deforms into a fold. Consequently, two or three solutions correspond to some values of $s$. Moreover, the extreme wave is seemingly not the wave of greatest height. We observe that the fold becomes larger in size as $\omega$ increases. For a larger value of the vorticity, for instance, for $\omega=1.74$, the figure reveals that part of the fold transforms into a gap.
We observe that the gap becomes larger in size as $\omega$ increases. See \cite{DH1}, for instance, for details.
Moreover, for $\omega=2.4$, Figure~\ref{fig:c=c(s)} reveals that the second oscillation of $c$ deforms into another fold, and we observe that the second fold becomes larger in size as $\omega$ increases. For $\omega=2.5$, part of the second fold transforms into another gap, and we observe that the second gap becomes larger in size as $\omega$ increases. We merely pause to remark that the numerical method of \cite{SS1985, PTdS1988} and others diverges in a gap and is incapable of locating a second gap. The numerical method of \cite{DH1} converges in a gap, but it would take too much time to accurately resolve a numerical solution along a second fold.
We take matters further and claim that higher folds and higher gaps develop in like manner as $\omega>0$ increases. For instance, for $\omega=4$, Figure~\ref{fig:c=c(s)} reveals five folds and five gaps! Moreover, we claim that past all the folds, the steepness increases monotonically toward an extreme wave. Numerical computations (see \cite{DH2}, for instance) suggest that the extreme profile is single valued and exhibits a $120^\circ$ corner at the crest, regardless of the value of the vorticity.
\subsection{Touching waves in the infinite vorticity limit}\label{sec:touching}
\begin{figure}[h]
\centerline{
\includegraphics[scale=1]{Figure4a.pdf}
\includegraphics[scale=1]{Figure4b.pdf}}
\caption{On the left, touching waves at the beginnings of the lowest gaps for four values of vorticity. The dashed curved line is the limiting Crapper wave. On the right, touching waves at the ends of the gaps. The dashed curved line is a circle.}
\label{fig:touching1}
\end{figure}
The left panel of Figure~\ref{fig:touching1} displays the profiles of almost touching waves near the beginnings of the lowest gaps, and the right panel near the ends of the gaps, for four values of positive constant vorticity, in the $(x,y)$ plane in the range $x\in[-2\pi,2\pi]$. Touching is at $y=0$. The profiles on the left resemble that in \cite[Figure~4(b)]{VB1996}.
At the beginnings of the gaps, we observe that $s$ decreases monotonically toward $\approx0.73$ as $\omega\to\infty$ (see \cite{DH1}, for instance). Crapper~\cite{Crapper} derived a remarkable formula of periodic capillary waves (in the absence of gravitational effects) in an irrotational flow of infinite depth, and calculated that $s\approx0.73$ for the wave of greatest height. Moreover, the left panel reveals that, for instance, for $\omega=14$, the profile of an almost touching wave is in excellent agreement with the limiting Crapper wave. Therefore, we may claim that touching waves at the beginnings of the lowest gaps tend to the {\em limiting Crapper wave} as the value of positive constant vorticity increases indefinitely. It reveals a striking and surprising link between positive constant vorticity and capillarity!
At the ends of the gaps, on the other hand, we observe that $s\to1$ as $\omega\to\infty$ (see \cite{DH1}, for instance). Teles da Silva and Peregrine~\cite{PTdS1988}, among others, numerically computed periodic waves in a constant vorticity flow in the absence of gravitational effects, and argued that a limiting wave has a circular shape made up of fluid in rigid body rotation (see also \cite{VB1996}). Moreover, the right panel reveals that, for instance, for $\omega=14$, the profile of an almost touching wave is nearly circular. Therefore, we may claim that touching waves at the ends of the lowest gaps tend to a {\em fluid disk in rigid body rotation} in the infinite vorticity limit.
It would be interesting to analytically explain the limiting Crapper wave and the circular vortex wave in the infinite vorticity limit.
\begin{figure}[h]
\centerline{
\includegraphics[scale=1]{Figure5a.pdf}
\includegraphics[scale=1]{Figure5b.pdf}}
\caption{On the left, touching waves at the beginnings of the second gaps for three values of vorticity (solid) and the circular vortex wave on top of the limiting Crapper wave (dashed). On the right, touching waves at the ends of the gaps (solid) and the circular vortex wave on top of itself (dashed).}
\label{fig:touching2}
\end{figure}
Displayed in the left panel of Figure~\ref{fig:touching2}, moreover, are the profiles of almost touching waves near the beginnings of the second gaps, and in the right panel near the ends of the gaps, for three values of positive constant vorticity, in the $(x,y)$ plane, where $x\in[-2\pi,2\pi]$. The profile on the left for $\omega=14$ resembles that in \cite[Figure~5(c)]{VB1996}, and the profile on the right resembles \cite[Figure~6]{VB1996}. We may claim that touching waves at the beginnings of the second gaps tend to the circular vortex on top of the limiting Crapper wave as the value of positive constant vorticity increases indefinitely, whereas touching waves at the ends of the gaps tend to the circular vortex wave on top of itself.
We take matters further and claim that touching waves at the boundaries of higher gaps accommodate more circular vortices in like manner. See \cite{DH2}, for instance, for a profile nearly enclosing five circular vortices!
\subsection*{Acknowledgment}
VMH is supported by the US National Science Foundation under the Faculty Early Career Development (CAREER) Award DMS-1352597, and SD is supported by the National Science Foundation under DMS-1716822. VMH is grateful to the Erwin Schr\"odinger International Institute for Mathematics and Physics for its hospitality during the workshop Nonlinear Water Waves.
\section{Introduction}
Recent advanced material technologies have made it possible to access
low-dimensional quantum systems. Furthermore, material synthesis
has offered a great opportunity to explore intriguing low-dimensional spin systems beyond well-understood conventional spin systems \cite{Verdaguer}.
In such low-dimensional systems, for instance, alternating bond interactions and/or less symmetric interactions in spin lattices can be realized by synthesizing two different magnetic atoms.
Of particular importance, therefore, is understanding quantum phase transitions in one-dimensional spin systems that are unlikely to be found naturally.
Normally, quantum fluctuations in a low-dimensional spin system
are stronger than those in higher-dimensional spin systems \cite{sachdev}.
Quantum phase transitions driven by stronger quantum
fluctuations then exhibit more interesting and novel quantum phenomena
in low-dimensional spin systems.
The effects of alternating bond
interactions, especially,
have been intensively studied theoretically
in spin systems such as
antiferromagnetic Heisenberg chains
\cite{Kolezhuk,Yamamoto97,Dukelsky,Aplesnin,Onishi,Narumi},
Heisenberg chains with next-nearest-neighbor bond alternations
\cite{Capriotti,Maeshima},
a tetrameric Heisenberg antiferromagnetic chain \cite{Gong},
and two-leg spin ladders \cite{Fukui,Almeida}.
A recent experiment has demonstrated a realization
of a bond-alternating chain by
applying magnetic fields in
a spin-1/2 chain antiferromagnet \cite{Canevet}.
In this study, we will consider one-dimensional Ising-type spin
chains with an alternating exchange coupling.
Actually, this bond alternation cannot destroy the
antiferromagnetic phase of the uniform bond case but just quantitatively changes
the ground state properties originating from a dimerization of the spin lattice.
Then, a less symmetric interaction can play a significant role in inducing a
quantum phase transition. To see a quantum phase transition,
we will employ a Dzyaloshinskii-Moriya (DM) interaction
\cite{Moriya} which results from the spin-orbit coupling.
Based on the ground state fidelity \cite{Zhou} in the infinite matrix product state (iMPS) representation \cite{iTEBD},
we discuss the quantum criticality in the system.
It is shown that a uniform DM interaction can destroy the antiferromagnetic
phase, which is a continuous quantum phase transition,
and its critical value is inversely proportional to
the alternating exchange coupling strength.
\section{Model and numerical method}
Let us start with a spin-1/2 Ising chain with antisymmetric
anisotropic and alternating bond interactions on the infinite-size lattice.
Our system can be described by the spin Hamiltonian
\begin{equation}
H= \sum_{i=-\infty}^\infty J_i S^{z}_{i}S^{z}_{i+1}
+ \vec{D}_{i}\cdot(\vec{S}_{i} \times \vec{S}_{i+1}), \label{Hamt}
\end{equation}
where
$\vec{S}_i=(S^x_i, S^y_i, S^z_i)$ are the spin operators acting
on the $i$-th site.
The exchange interaction is chosen as $J_i =1-(-1)^i r$,
and the alternating bond interaction
is characterized by the relative strength $r$ of the exchange coupling
on the even and odd lattice sites.
To describe an antisymmetric anisotropic exchange coupling between
the two spins on the lattice, we employ
a uniform DM interaction $\vec{D}_i=\vec{D}$,
that is characterized by
the DM vector $\vec{D}=(D_x,D_y,D_z)$.
For $r=0$ and $\vec{D}=0$, Eq. (\ref{Hamt}) is reduced to the
conventional Ising chain Hamiltonian.
If $r=0$ and $\vec{D} = D \hat{z}$,
Eq. (\ref{Hamt}) can be mapped onto the XXZ spin chain model
which has a quantum phase transition from the gapped N\'{e}el or antiferromagnetic (AFM) phase
to the gapless Luttinger Liquid (LL) phase at the critical point $D_c=1$ \cite{Soltani}.
This study will then be focused on the antiferromagnetic exchange
interaction $J_i \geq 0$, i.e., $0 \leqslant r \leqslant 1$, and
a DM vector along the $z$ direction, $\vec{D}=(0, 0, D)$.
The Hamiltonian in Eq. (\ref{Hamt}) is actually invariant
under the transformation $U = \prod U_{2i}\otimes U_{2i+1}$
with $U_{2i}=\sigma^x$ for $2i$-th site and
$U_{2i+1}=\sigma^y$ for ($2i+1$)-th site.
Our model Hamiltonian then possesses a $Z_2$ symmetry
generated by the transformation $U$.
The ground state of the system may undergo a spontaneous
$Z_2$ symmetry breaking which gives rise to a quantum
phase transition between an ordered phase and a
disordered phase.
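The invariance $UHU^{\dagger}=H$ can be verified directly on a small periodic chain. The sketch below builds the Hamiltonian of Eq. (\ref{Hamt}) for four sites with periodic boundary conditions and checks the $Z_2$ symmetry numerically; the site labeling of $J_i$ and the parameter values $r=0.5$, $D=0.3$ are illustrative choices, not taken from the paper.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def site_op(op, i, n):
    # embed a single-site operator at site i of an n-site chain
    out = np.eye(1)
    for j in range(n):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

def hamiltonian(n, r, D):
    # H = sum_i J_i Sz_i Sz_{i+1} + D (Sx_i Sy_{i+1} - Sy_i Sx_{i+1}), PBC
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        j = (i + 1) % n
        J = 1 - (-1)**i * r
        H += J * site_op(sz, i, n) @ site_op(sz, j, n)
        H += D * (site_op(sx, i, n) @ site_op(sy, j, n)
                  - site_op(sy, i, n) @ site_op(sx, j, n))
    return H

n = 4
H = hamiltonian(n, r=0.5, D=0.3)
# U = sigma^x on even sites, sigma^y on odd sites
U = np.eye(1)
for i in range(n):
    U = np.kron(U, 2 * sx if i % 2 == 0 else 2 * sy)
print(np.linalg.norm(U @ H @ U.conj().T - H))   # ~ 0 up to roundoff
```

Each bond carries one $\sigma^x$ and one $\sigma^y$ site, which is why every $S^zS^z$ and DM term is left invariant.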
For a quantum spin system on a finite lattice of $N$ sites,
its wave function with the periodic boundary condition can be
expressed in the matrix product state (MPS)
representation \cite{Verstrate} as
$|\Psi\rangle = \mathrm{Tr}\left[A^{[1]}A^{[2]} \cdots A^{[N]}\right]
\, |s^{[1]}s^{[2]} \cdots s^{[N]}\rangle$, where
$A^{[i]}$ is a site-dependent $\chi\times\chi$ matrix with the truncation
dimension $\chi$ of the local Hilbert space at the $i$-th site,
$|s^{[i]}\rangle$ is a basis of the local Hilbert space at the $i$-th
site, and the physical index $s$ takes value $1,\cdots,d$ with the
dimension $d$ of the local Hilbert space.
This MPS representation for a finite lattice system can be extended to describe an infinite
lattice system. To do this, for an infinite lattice,
one may replace
the matrix $A^{[i]}$ with $\Gamma^{[i]}\lambda^{[i]}$ \cite{iTEBD}, where
$\Gamma^{[i]}$ is a three-index tensor and $\lambda^{[i]}$ is a
diagonal matrix at the $i$-th site,
which is called the \textit{canonical infinite matrix product state} (iMPS) representation.
Suppose that the system Hamiltonian is translationally invariant on the infinite lattice; our Hamiltonian in Eq. (\ref{Hamt}), for instance, is invariant under two-site shifts.
The two-site translational invariance allows us to reexpress the Hamiltonian
as $H=\sum_i h^{[i,i+1]}$, where $h^{[i,i+1]}$ is
the nearest-neighbor two-body Hamiltonian density.
In such a case, one can introduce a two-site translational invariant iMPS representation,
i.e., for the even (odd) sites A (B),
only two three-index tensors $\Gamma_{A(B)}$
and two diagonal matrices $\lambda_{A(B)}$
can be applied in representing a system wave function:
\begin{equation}
|\Psi\rangle
=\sum_{\{s^{[i]}\}}
\cdots \Gamma_{A}\lambda_{A}\Gamma_{B}\lambda_{B}\Gamma_{A}
\lambda_{A}\Gamma_{B}\lambda_{B} \cdots
|\cdots s^{[i]}s^{[i+1]}s^{[i+2]}s^{[i+3]} \cdots \rangle. \label{wave}
\end{equation}
Note that, for the infinite lattice,
the diagonal elements of the matrix $\lambda_i$
are the normalized Schmidt decomposition coefficients of the
bipartition between the semi-infinite chains
$L(-\infty,...,i)$ and $R(i+1,...,\infty)$.
In order to find a ground state of our system in the iMPS
representation, the infinite time-evolving block decimation
(iTEBD) algorithm introduced by Vidal \cite{iTEBD} is employed.
For a given initial state $|\Psi(0)\rangle$ and the Hamiltonian $H$,
a ground-state wave function
can be yield by the imaginary time evolution
$|\Psi(\tau)\rangle = \exp\left[-H\tau\right]|\Psi(0)\rangle /|\exp(-H\tau)|\Psi(0)\rangle|$
for a large enough $\tau$.
To realize the imaginary time evolution operation numerically, the imaginary
time $\tau$ is divided into the time slices $\delta\tau= \tau/N$
and a sequence of the time slice evolution gates approximates
the continuous time evolution.
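On a small Hilbert space the projection property of imaginary time evolution can be seen directly, without any Trotter splitting. The sketch below uses a random Hermitian matrix as a stand-in Hamiltonian (the matrix, its size, and the value of $\tau$ are illustrative, not from the paper), evolves a random initial vector, and recovers the exact ground-state energy.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(16, 16))
H = (M + M.T) / 2                       # stand-in Hermitian "Hamiltonian"

def imag_time_evolve(H, psi0, tau):
    # exp(-H tau) |psi0> / norm, via the eigendecomposition of H
    w, V = np.linalg.eigh(H)
    coef = np.exp(-(w - w[0]) * tau) * (V.T @ psi0)   # shift avoids overflow
    psi = V @ coef
    return psi / np.linalg.norm(psi)

psi0 = rng.normal(size=16)
psi = imag_time_evolve(H, psi0, tau=500.0)
E = psi @ H @ psi                       # energy after the evolution
E0 = np.linalg.eigvalsh(H)[0]           # exact ground-state energy
print(E - E0)                           # ~ 0 for large enough tau
```

Excited-state admixtures decay like $e^{-(E_m-E_0)\tau}$, which is the mechanism the iTEBD algorithm exploits, one small time slice at a time.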
Meanwhile,
the time evolution operator $\exp\left[-h^{[i,i+1]} \delta\tau\right]$
for $\delta\tau \ll 1$ is expanded to a product of the evolution operators acting
on two successive $i$ and $i+1$ sites through the Suzuki-Trotter decomposition \cite{Suzuki}.
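The accuracy of the splitting can be checked on two small non-commuting Hermitian blocks: for the first-order Suzuki-Trotter decomposition, the error of a single slice scales as $\delta\tau^2$, so halving $\delta\tau$ should quarter it. The matrices below are random stand-ins, not the actual two-site gates of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); hA = (A + A.T) / 2
B = rng.normal(size=(4, 4)); hB = (B + B.T) / 2   # non-commuting pieces

def expmh(h, t):
    # exp(-h t) for a symmetric matrix h via its eigendecomposition
    w, V = np.linalg.eigh(h)
    return (V * np.exp(-w * t)) @ V.T

def trotter_error(dt):
    exact = expmh(hA + hB, dt)
    approx = expmh(hA, dt) @ expmh(hB, dt)   # first-order splitting
    return np.linalg.norm(exact - approx)

e1, e2 = trotter_error(0.02), trotter_error(0.01)
print(e1 / e2)   # ~ 4: halving the time slice quarters the error
```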
After absorbing a time-slice evolution gate,
in order to recover the iMPS representation,
a singular value decomposition is performed on the matrix contracted from
$\Gamma_A$, $\Gamma_B$, one $\lambda_A$, two $\lambda_B$,
and the evolution operator; only the $\chi$ largest singular values are retained.
This procedure yields the new tensors $\Gamma_A$, $\Gamma_B$, and $\lambda_A$
that are used to update the tensors for all the sites.
As a result, the translational invariance
under two-site shifts is recovered.
Repeating the above procedure until the ground-state energy
converges yields the ground-state wave function of the system in the iMPS representation.
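The truncation step at the heart of the update keeps only the $\chi$ largest singular values; by the Eckart-Young theorem, the discarded singular weight gives the exact truncation error in the Frobenius norm. A minimal sketch, with an arbitrary $8\times8$ two-site block and $\chi=4$ as illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = rng.normal(size=(8, 8))            # two-site block to be re-split
U, s, Vt = np.linalg.svd(theta, full_matrices=False)

chi = 4                                    # truncation dimension
approx = U[:, :chi] @ np.diag(s[:chi]) @ Vt[:chi, :]

err = np.linalg.norm(theta - approx)
print(err, np.sqrt(np.sum(s[chi:] ** 2)))  # the two numbers coincide
```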
\section{Fidelity per lattice site and bifurcations}
One can define a fidelity $F(D,D')= | \langle\Psi(D')|
\Psi(D)\rangle|$ from a ground state wavefunction $\Psi(D)$.
A fidelity per lattice site (FLS) $d$ \cite{Zhou} can be
defined as
\begin{equation}
\ln d(D,D') = \lim_{N \rightarrow \infty} \frac {\ln F(D,D')}{N},
\label{dinf}
\end{equation}
where $N$ is the system size. Remarkably, the FLS is well
defined and plays a role similar to an order parameter,
even though $F(D,D')$ itself trivially vanishes
in the thermodynamic limit $N \rightarrow \infty$.
The FLS satisfies the properties inherited from
fidelity $F(D,D')$:
(i) normalization $d(D,D)=1$, (ii) symmetry $d(D,D')=d(D',D)$,
and (iii) range $0 \le d(D,D')\le 1$.
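In the iMPS representation, $\ln d$ is obtained from the dominant eigenvalue of the mixed transfer matrix built from the tensors of the two states \cite{iPEPS-fidelity}. A minimal sketch with random (not canonical) tensors, which already reproduces properties (i) and (iii); the bond and physical dimensions are illustrative:

```python
import numpy as np

def transfer_eig(A, B):
    # dominant eigenvalue (in modulus) of E = sum_s A^s (x) conj(B^s)
    chi = A.shape[1]
    E = np.einsum('sij,skl->ikjl', A, B.conj()).reshape(chi * chi, chi * chi)
    return np.max(np.abs(np.linalg.eigvals(E)))

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3, 3))            # (physical, left, right)
A /= np.sqrt(transfer_eig(A, A))          # normalize per site
B = rng.normal(size=(2, 3, 3))
B /= np.sqrt(transfer_eig(B, B))

d_same = transfer_eig(A, A)               # FLS of a state with itself
d_diff = transfer_eig(A, B)               # FLS between two different states
print(d_same, d_diff)                     # 1.0 and a value <= 1
```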
\begin{figure}
\includegraphics [width=0.45\textwidth]{Fig1.eps}
\caption{ (Color online)
Fidelity per lattice site $d(D,D'=0.26)$ as a function of
$D$ for various truncation dimensions $\chi$ with $r=0.5$.
For higher truncation dimension, a bifurcation point
$D_B(\chi)$ moves to saturate to its critical value.
Inset: Extrapolation of bifurcation point $D_B(\chi)$.
From the numerical fitting function
$D_B(\chi)=D_B(\infty)+ b\chi^{-c}$ with $b=0.335$ and
$c=1.994$, the critical point is estimated as
$D_c=D_B(\infty)=0.264$.}
\label{FidelityBifur}
\end{figure}
Adapting the transfer matrix approach \cite{iPEPS-fidelity},
the iMPS representation of the ground-state wave-functions
allows us to calculate the fidelity per lattice site (FLS)
$d(D,D')$. Let us choose $|\Psi(D')\rangle$ as a reference
state for the FLS $d(D,D')$.
For $D'=0.26$, in Fig. \ref{FidelityBifur},
the FLS $d(D,D')$ is plotted as a function of the DM
interaction parameter $D$
for various values of the truncation
dimensions with a randomly chosen initial state
in the iMPS representation.
Figure \ref{FidelityBifur} shows a singular behavior of the FLS
$d(D,0.26)$, which indicates that there occurs a quantum phase
transition across the singular point.
A bifurcation of the FLS $d(D,0.26)$
also appears when the interaction parameter $D$ becomes smaller
than a characteristic singular
value, which we call the `bifurcation point' $D_B$.
The bifurcation points depend on
the truncation dimension $\chi$, i.e., $D_B=D_B(\chi)$.
As the truncation dimension $\chi$ increases,
the bifurcation occurs starting at the lower value of $D$.
For $\chi \rightarrow \infty$,
the bifurcation point $D_B(\infty)$ at which a bifurcation starts to occur
can be extrapolated.
In the inset of Fig. \ref{FidelityBifur},
we use an extrapolation function $D_B(\chi)=a\,
+\, b\, \chi^{-c}$, with coefficients $a$ and $b$
and a positive real exponent $c$,
which guarantees that
$D_B(\infty)=a$ is finite.
The numerical fitting gives $a = 0.264$, $b=0.335$, and $c=1.994$.
From the extrapolation,
the bifurcation point is shown to saturate to $D_B(\infty) \simeq a$ which
can be regarded as a critical point $D_c = D_B(\infty)$ \cite{Bifur,Dai}.
Actually, the critical point also corresponds
to the pinch point of the FLS in the thermodynamic limit,
i.e., $\chi \rightarrow \infty$.
Consequently, a FLS bifurcation
point $D_B(\chi)$ plays the role of a pseudo phase transition point for a given
finite truncation dimension $\chi$ in the MPS representation.
In addition, the continuous function behavior of the
FLS across the bifurcation point implies
that a continuous quantum phase transition occurs
at the critical point \cite{Zhou}.
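The extrapolation itself is an ordinary nonlinear fit. The sketch below regenerates synthetic bifurcation points from the quoted fit $D_B(\chi)=0.264+0.335\,\chi^{-1.994}$ (the list of $\chi$ values is an illustrative choice) and recovers $D_B(\infty)=a$ with a simple grid search over the exponent combined with linear least squares:

```python
import numpy as np

chi = np.array([4, 8, 12, 16, 24, 32], dtype=float)   # illustrative chis
DB = 0.264 + 0.335 * chi ** -1.994                    # synthetic data

def fit_power_law(chi, DB):
    # fit DB = a + b chi^{-c}: grid over c, linear least squares for a, b
    best = (np.inf, None)
    for c in np.arange(0.5, 4.0, 0.001):
        X = np.column_stack([np.ones_like(chi), chi ** -c])
        coef, *_ = np.linalg.lstsq(X, DB, rcond=None)
        res = np.sum((X @ coef - DB) ** 2)
        if res < best[0]:
            best = (res, (coef[0], coef[1], c))
    return best[1]

a, b, c = fit_power_law(chi, DB)
print(a)   # ~ 0.264, the estimate of the critical point D_c
```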
In Fig.~\ref{FidelityBifur}, the bifurcation occurs for $D <
D_B(\chi)$, which is captured in the iMPS representation with a
randomly chosen initial state.
In fact, the bifurcation behavior of the FLS means
that there are two possible ground
states for $D < D_B(\chi)$ while there is a single ground state for $D >
D_B(\chi)$.
Such a property of the ground states
can be understood by the $Z_2$ symmetry of the Hamiltonian
from the invariant transformation $U H U^\dagger=H$.
In the thermodynamic limit, there are two possible ground states,
$\Psi_g$ and $U\Psi_g$.
Once a spontaneous symmetry breaking happens,
the system chooses one of the two,
which indicates a broken-symmetry phase.
In the symmetric phase, the system has a single ground state,
a linear combination of the two,
which is invariant under the transformation.
In Fig.~\ref{FidelityBifur}, then, the FLS is plotted from
the two fidelities, i.e., $F= |\langle \Psi_g(0.26) |
\Psi_g (D)\rangle |$ (upper lines) and $F = |\langle \Psi_g(0.26) |
U|\Psi_g (D) \rangle |$ (lower lines).
Then, the bifurcation point is the transition point between
the symmetric phase and the broken-symmetry phase.
\section{Phase diagram}
From the iMPS with the numerical extrapolation of the bifurcation points,
in Fig.~\ref{PhaseDigram},
we draw the ground-state phase diagram in the interaction parameter
$(r,D)$ plane.
Below the phase boundary (red solid line) the system is in an antiferromagnetic
phase while above the boundary
the system is in a disordered phase.
The phase diagram shows
that $D_c$ is inversely related to $r_c$.
A best fitting function (dotted line) of the critical points $(r_c,D_c)$
is given by
$D_c \approx \frac{a(a+1)}{\sqrt{r_c}+a}-a$ with a single parameter $a=3.4$.
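As a quick consistency check of this fitting form (the one-line computation below is ours, not from the cited works): at $r_c=0$ it reduces to $D_c=a(a+1)/a-a=1$, reproducing the known critical point $D_c=1$ of the uniform chain quoted in Sec. 2.

```python
import numpy as np

a = 3.4
Dc = lambda r: a * (a + 1) / (np.sqrt(r) + a) - a   # fitted phase boundary
print(Dc(0.0))   # ~ 1: recovers the critical point of the uniform chain
```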
The characteristic phase boundary can be understood as follows:
If $r \neq 0$ and $D=0$,
the lattice sites are dimerised and
the Hamiltonian in Eq. (\ref{Hamt})
has a two-site translational
invariance due to the alternating bond coupling $r$.
As assumed, for the antiferromagnetic exchange interaction $J_i > 0$, i.e.,
for $0 \leq r \leq 1$, the Ising chain with the alternating
bond coupling is in an antiferromagnetic state even though
the lattice sites are strongly dimerised.
If $D \neq 0$,
the antisymmetric anisotropic DM interaction can destroy the
antiferromagnetic order originating from the antiferromagnetic
exchange interaction $J_i$.
The antiferromagnetic order
may be destroyed more easily by the uniform
antisymmetric anisotropic DM
interaction for the dimerised lattice sites than for the Ising chain without
the alternating bond coupling
because the antiferromagnetic correlation
between the sites becomes weaker due to the dimerised lattice sites.
Then, to destroy the antiferromagnetic order,
a stronger dimerisation of the lattice sites (bigger $r$)
requires a much weaker uniform
antisymmetric anisotropic interaction (much smaller $D$).
Consequently,
the phase boundary separating the antiferromagnetic phase
from the disordered phase exhibits an inverse relation
between $D_c$ and $r_c$.
\begin{figure}
\includegraphics[width=0.45\textwidth] {Fig2.eps}
\caption{(Color online)
Ground state phase diagram in the plane of the DM interaction and
the alternating bond strengths.
From the iMPS, the phase boundary shows that the $D_c$ is inversely
proportional to the $\sqrt{r_c}$.
A best fit is denoted by the dotted line. This is in sharp contrast with the results from the
real space renormalization group (RSRG) method~\cite{XiangHao2010}.
}
\label{PhaseDigram}
\end{figure}
Very recently, a real space renormalization
group (RSRG) approach \cite{XiangHao2010} has been applied in the
same model and has shown that a symmetric phase boundary is given
by $D_c\simeq \sqrt{1-r^2_c}$ (blue dashed line in Fig. \ref{PhaseDigram}).
Compared with the RSRG approach,
our results from the iMPS show a quite
significant discrepancy in the phase boundary line because,
in contrast to the iMPS,
the RSRG is an approximate method based on low-energy states
and thus does not properly capture contributions from relevant higher-energy states.
Then, the ground state phase diagram from our fidelity approach based on
the iMPS is more reliable and accurate.
\section{Quantum entanglement and phase transition}
In order to understand more clearly the quantum phase transition
in our system, let us consider quantum entanglement that
can also detect a quantum phase transition \cite{ent_review}.
To quantify the quantum entanglement,
we employ the von Neumann entropy, which is a good measure of
bipartite entanglement between two subsystems of a pure
state \cite{Bennett}, because our ground states are in a pure state.
Then, the spin chain can be partitioned into the two parts
denoted by the left semi-infinite chain $L$ and the right semi-infinite chain $R$.
The von Neumann entropy is defined as
$S=-\mathrm{Tr}\varrho_L\log_2\varrho_L=-\mathrm{Tr}\varrho_R\log_2\varrho_R$
in terms of the reduced density matrix $\varrho_L$ or $\varrho_R$ of the
subsystems $L$ and $R$.
In the iMPS representation, the von Neumann entropy
for the semi-infinite chains $L$ or $R$ becomes
\begin{equation}
S_i=-\sum_{\alpha=1}^\chi \lambda_{i,\alpha}^2 \log_2 \lambda_{i,\alpha}^2,
\label{entropy}
\end{equation}
where $\lambda_{i,\alpha}$'s are diagonal elements of the matrix
$\lambda$ that
could be directly obtained in the iMPS algorithm.
This is because, when one partitions the two semi-infinite
chains $L(-\infty,\cdots,i)$ and $R(i+1, \cdots,\infty)$, one gets
the Schmidt decomposition
$|\Psi\rangle=\sum_{\alpha=1}^{\chi}\lambda_{i,\alpha} |\phi_L\rangle|\phi_R\rangle$.
From the spectral decomposition, $\lambda_{i,\alpha}^2$ are actually
eigenvalues of the reduced density matrices for the two
semi-infinite chains $L$ and $R$. In our two-site translational
invariant iMPS representation, there are two Schmidt coefficient
matrices $\lambda_A$ and $\lambda_B$ that describe two possible ways
of the partitions, i.e., one is on the odd sites, the other is on
the even sites.
From $\lambda_A$ and $\lambda_B$,
one can obtain the two von Neumann entropies corresponding to the odd- and even-site partitions.
\begin{figure}
\includegraphics [width=0.45\textwidth]{Fig3.eps}
\caption{(Color online)
The von Neumann entropy $S$ between left and right halves of a chain as
a function of $D$ for $r=0.5$ and $\chi=32$.
Both the von Neumann entropies for odd and even bonds
show a singularity at $D_c = 0.264$.}
\label{Entanglement}
\end{figure}
In Fig.~\ref{Entanglement}, we plot
the von Neumann entropy $S$
as a function of the DM interaction strength $D$
for even ($2i$-site) and odd ($(2i+1)$-site) bonds with $r=0.5$.
The von Neumann entropy for the odd bonds
is always larger than that for the even bonds
because, by definition,
the odd-site exchange interaction $J_{2i}$
is stronger than the even-site exchange interaction $J_{2i+1}$.
Furthermore,
it is shown that both the entropies for the even and odd bonds
have a singularity at the same value of the DM interaction strength $D$.
Note that the singularities of the entropies occur at
the critical point $D_c=0.264$.
This result then shows clearly that
both the FLS $d$ and the von Neumann entropy $S$
give the same phase transition point.
As a consequence,
the von Neumann entropy $S$ gives the same phase diagram
as the FLS in Fig.~\ref{PhaseDigram}.
As discussed, for the antiferromagnetic state of our system,
there are two possible ground states
that are connected by the unitary transformation $U$.
From the bifurcation of FLS, then, one might expect
a bifurcation in the von Neumann entropy too.
However, contrary to the FLS $d$, in Fig.~\ref{Entanglement},
no bifurcation is seen in the von Neumann entropy $S$ for
the antiferromagnetic state even though the initial state
is randomly chosen in the iMPS representation.
The reason for the absence of a bifurcation in the von Neumann entropy
is that the singular values $\lambda_{i,\alpha}$ in Eq.~(\ref{entropy})
do not depend on the unitary transformation, because the
unitary transformation $U$ acts only on a single site of
the spin lattice in the iMPS representation.
\section{Central charge and universality class}
At a critical point, characteristic singular behaviors
of thermodynamic system properties depend only on a few features
such as dimensionality and symmetry, which can be classified by the
concept of universality classes.
In particular, the central charge can be used for the classification
of universality classes \cite{Cardy,Calabrese}.
By implementing the iMPS representation,
we can obtain a central charge $c$ and a so-called
finite-entanglement scaling exponent $\kappa$ numerically
via the scaling behaviors
of the correlation length $\xi$ and the von Neumann entropy $S$
with respect to the truncation dimension $\chi$ at a critical point
\cite{Corrlen,GVidal}, i.e.,
\begin{eqnarray}
\xi & = & a\chi^\kappa, \\
S & = & \frac{c\kappa}{6}\log_2{\chi}.
\end{eqnarray}
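Eliminating the truncation dimension $\chi$ from these two relations gives the familiar entropy--correlation-length scaling of conformal field theory,
\[
S = \frac{c}{6}\log_2 \frac{\xi}{a},
\]
which is why a finite-entanglement scaling analysis at the critical point suffices to estimate the central charge $c$.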
\begin{figure}
\includegraphics[width=0.45\textwidth]{Fig4.eps}
\caption{\label{Fig4}
(Color online) (a) Correlation length $\xi$ as a function of the
truncation dimension $\chi$ at the critical point. The power curve fitting
$\xi=a \chi^ \kappa$ yields $a=0.112$ and $\kappa=2.132$.
(b) Scaling of the von Neumann entropy $S$ with the truncation
dimension $\chi$ at the critical point.
For $\kappa=2.132$ from (a), the linear fitting
$S=(c\kappa/6)\log_2{\chi}+b$ yields the central charge
$c\approx 0.494$.
Here, the alternating bond strength is chosen as $r=0.5$.
}
\end{figure}
In Fig.~\ref{Fig4}, we plot the correlation length $\xi$
and the von Neumann entropy $S$ as functions of the truncation dimension
$\chi$ at the critical point $D_c$ for $r=0.5$.
Here, the truncation dimensions are taken as $\chi= 4, 6, 8, 12, 16, 24, 32$, and $48$.
It is shown that both the correlation length $\xi$
and the von Neumann entropy $S$ diverge as the truncation
dimension $\chi$ increases.
From a power-law fitting on the correlation length $\xi$,
we have $\kappa = 2.132$ and $a = 0.112$.
As shown in Fig. \ref{Fig4} (b),
our numerical result demonstrates a linear scaling behavior,
which gives a central charge $c \simeq 0.494$ with $\kappa = 2.132$.
Our central charge is close to
the exact value $c = 1/2$. Consequently, the quantum phase transition in
our system
is in the same universality class as the quantum
transverse field Ising model.
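The two-step fit described above (a power-law fit for $\xi(\chi)$, then a linear fit of $S$ against $\log_2\chi$ with $\kappa$ held fixed) is straightforward to reproduce numerically. The following Python sketch is our own illustration with synthetic data built from the quoted values $a=0.112$, $\kappa=2.132$, $c=1/2$; the function names are ours and not tied to any particular iMPS package:

```python
import numpy as np

def fit_kappa(chis, xis):
    # Power-law fit xi = a * chi^kappa, i.e. log(xi) = kappa*log(chi) + log(a)
    kappa, log_a = np.polyfit(np.log(chis), np.log(xis), 1)
    return kappa, np.exp(log_a)

def fit_central_charge(chis, entropies, kappa):
    # Linear fit S = (c*kappa/6)*log2(chi) + b, so c = 6*slope/kappa
    slope, b = np.polyfit(np.log2(chis), entropies, 1)
    return 6.0 * slope / kappa, b

# Synthetic critical-point data mimicking the scaling laws with
# a = 0.112, kappa = 2.132, c = 0.5 and a non-universal offset b = 0.3
chis = np.array([4, 6, 8, 12, 16, 24, 32, 48], dtype=float)
xis = 0.112 * chis ** 2.132
entropies = (0.5 * 2.132 / 6.0) * np.log2(chis) + 0.3

kappa, a = fit_kappa(chis, xis)
c, b = fit_central_charge(chis, entropies, kappa)
print(kappa, a, c)  # recovers approximately 2.132, 0.112, 0.5
```

On real iMPS data the recovered $c$ would deviate slightly from $1/2$, as in the text ($c\approx 0.494$), since $\xi$ and $S$ obey these scaling forms only asymptotically in $\chi$.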
\section{Summary}
Quantum phase transitions have been investigated in the Ising
chain with the Dzyaloshinskii-Moriya interaction as well as the
alternating bond-coupling.
The FLS and its bifurcation have clearly shown a characteristic singular point
as a signature of the quantum phase transition; the FLS
behaves as a continuous function, which indicates a continuous phase transition
occurring at the critical point.
The phase diagram was obtained from the FLS and the von Neumann
entropy.
With a finite-entanglement scaling of the von Neumann entropy
with respect to the truncation dimension in the iMPS representation,
a central charge was estimated to be $c \simeq 0.5$,
which shows that
the system is in the same universality class as the quantum transverse
field Ising model.
\begin{acknowledgments}
We thank Huan-Qiang Zhou for helpful discussions.
This work was supported by the Fundamental Research Funds for the
Central Universities (Project No. CDJZR10100027). SYC
acknowledges the support from the NSFC under Grant No.10874252.
\end{acknowledgments}
\section{Introduction}
Consider the real-valued periodic Korteweg-de Vries (KdV) equation
\begin{equation}\label{kdvt}
\left|\begin{array}{l} u_t + u_{xxx} = \partial_x ( u^2); \qquad (t,x)\in \mathbf{R}\times \mathbf{T}\\
u(0,x)= u_0 \in H^{-s} (\mathbf{T})\end{array}\right.
\end{equation}
where $\mathbf{T} = \mathbf{R} \mod [0,2\pi]$. This initial value problem has been extensively studied in the literature. In \cite{Bour}, Bourgain developed a weighted Sobolev space to prove the global well-posedness of \eqref{kdvt} on $L^2$. The key idea of the Bourgain space is to penalize space-time Fourier support away from that of the linear solution~$\{(\tau,\xi): \tau-\xi^3 =0\}$, under the assumption that the nonlinear solution is near the free solution~$e^{-t\partial_x^3} u_0$.
Kenig, Ponce, Vega \cite{KPV1} adapted this idea to prove the local well-posedness of \eqref{kdvt} in $H^s$ for $s>-1/2$; and Colliander, Keel, Staffilani, Takaoka and Tao \cite{I1} improved the local theory to $s\geq -1/2$ and also proved the global well-posedness of \eqref{kdvt} for $s\geq -1/2$. Here, the authors introduced \emph{I-method} for constructing almost-conserved quantities in order to iterate the local solutions to an arbitrary time interval~$[0,T]$.
In \cite{CCT}, Christ, Colliander and Tao proved that \eqref{kdvt} is ill-posed in $H^{s}$ for $s<-1/2$ in the sense that the solution map fails to be uniformly continuous. In particular, this implies that any attempt at a contraction argument around $e^{-t\partial_x^3}u_0$ will fail. However, Kappeler and Topalov \cite{Kap} showed that \eqref{kdvt} is globally well-posed in $H^{-1}$ via the inverse scattering method. This result implies that although the solution map of the periodic KdV is not smooth below $H^{-1/2}$, it is $C^0$ up to $H^{-1}$. Molinet \cite{mol1, mol2} showed that this result is sharp in the sense that the solution map of \eqref{kdvt} is discontinuous at every $C^{\infty}$~point in $H^{-1-}$.
For initial data in $L^2$, the nearly linear dynamics of the nonlinear evolution of the periodic KdV has been studied by Babin, Ilyin, Titi \cite{titi} and by Erdogan, Tzirakis, Zharnitsky \cite{niko2, niko3}. Furthermore, Erdogan, Tzirakis \cite{niko} proved that, given $u_0\in H^{s}$ for $s>-1/2$, the solution~$u(t)$ of \eqref{kdvt} satisfies $u(t) - e^{-t\partial_x^3} u_0 \in H^{s_1}$ where $s_1 <\min(3s+1, s+1)$.
The last result by Erdogan and Tzirakis is the standard notion of nonlinear smoothing, based on the idea that the nonlinear solution is a smooth perturbation of the linear one. Bourgain in \cite{bour3, bour4} noted that the nonlinear Duhamel term is in many cases smoother than the initial data. This proved to be a useful tool for analysing the growth bound of high Sobolev norms in dispersive PDEs. Colliander, Staffilani and Takaoka \cite{Col} also used this smoothing effect to show the global well-posedness of KdV on $\mathbf{R}$ below $L^2$. We remark that this was also the main heuristic behind the author's works with Stefanov \cite{OS2} and \cite{OS1} , where we proved the local well-posedness respectively of the periodic ``good'' Boussinesq equation in $H^{-3/8+}$ and of 1-D quadratic Schr\"odinger equation in $H^{-1+}$. The smoothing of such type was necessary in these settings because the standard bilinear Bourgain space estimates were shown to fail below $H^{-1/4}$ \cite{FS} and $H^{-3/4}$ \cite{KPV2} respectively.
Due to the non-uniform continuity statement of \cite{CCT}, it is clear that the linear solution no longer dominates the evolution when $s<-1/2$. As expected, the smoothing of such type given in \cite{niko2} vanishes as $s \searrow -1/2$. In this paper, we will show that when the nonlinear solution is considered to be a perturbation of the resonant evolution term~$R^*[u_0]$ described below, one still gains almost a full derivative in $H^{-\f{1}{2}+}$. That is, when $u_0 \in H^s$, $u(t) -R^*[u_0](t) \in H^{s+1-}$ for all $s>-1/2$. This can also be expressed as $[\mathcal{S}u](t) - e^{-t\partial_x^3} u_0 \in H^{s+1-}$, where $\mathcal{S}$ is a continuous phase-shift operator in $H^s$ described in \eqref{def:phase}. We will also show that when $u_0$ lives near $L^2$, the effect of the phase-shift is in fact smoothing, i.e. $[\mathcal{S}v](t) - v(t) \in H^{1+3s}$ for any $v(t) \in H^s$ for $s>-1/2$. Thus, the fully nonlinear smoothing effect is also recovered by triangular inequality.
This non-resonant smoothing effect can be regarded as an evidence that the solutions of \eqref{kdvt} tends more toward $R^*[u_0]$, which is essentially the linear solution with a \emph{resonant phase-shift} in low-regularity settings. This is more convincing when considering that $R^*[\cdot]$ is not uniformly continuous for $s<-1/2$ (see Remark~\ref{rem:nonuniform}). Thus in the $C^0$~evolution of KdV in $H^s$ for $-1\leq s\leq -1/2$, it appears that $R^*[u_0]$ should dominate the nonlinear evolution.
Further evidence of this phenomenon is found in the analysis of the periodic modified KdV equation, i.e. \eqref{kdvt} with $\partial_x(u^2)$ replaced with $\partial_x(u^3)$. The failure of uniform continuity is shown in \cite{CCT} in $H^s$ for $s<1/2$, although $C^0$~global well-posedness in $L^2$ is implied by that of the periodic KdV \cite{KT} via the bi-analyticity of the Miura map. Although harmonic analysis methods have not been able to reproduce the well-posedness at $L^2$, Takaoka, Tsutsumi \cite{TT} and Nakanishi, Takaoka, Tsutsumi \cite{NTT2} showed the local well-posedness of mKdV in $H^{3/8}$ and in $H^{1/3}$ respectively by considering the resonant phase-shift corresponding to the periodic mKdV. A new type of Bourgain space was developed here, which penalizes functions whose space-time Fourier support is far from that of the resonant solution. Considering the uniform discontinuity of the resonant solution below $H^{1/2}$ \cite[Exercise 4.21]{tao2} similar to Remark~\ref{rem:nonuniform}, their results seem to indicate that the resonant solution dominates the evolution of the periodic mKdV below $H^{1/2}$.
The main technique used in this paper is the \emph{normal form method}. This method was first introduced by Shatah \cite{shatah1} in the study of the Klein-Gordon equation with a quadratic non-linearity. It has been adapted for many different types of quasi-linear dispersive PDEs, for instance \cite{titi,NTT2,TT} etc. Recently this concept was reformulated by Germain, Masmoudi and Shatah \cite{GMS1,GMS2} as the \emph{space-time resonance method}, and the authors produced a number of new results in the literature using this method.
For the periodic KdV, Babin, Ilyin, Titi \cite{titi} applied the normal form transform (or \emph{differentiation by parts}), which smoothed the non-linearity as observed by Erdogan, Tzirakis in \cite{niko}. The authors noted that the transform results in a trilinear resonant term, which can be reduced to an \emph{almost-linear} term due to massive cancellations as in the periodic mKdV \cite{Bour, TT}. This is essentially why the resonant correction in this model is a simple phase-shift, rather than a \emph{genuinely non-linear} operation (i.e. there are no nonlinear interactions among different space-time Fourier modes). Instead of adapting the \emph{differentiation by parts} approach of \cite{titi}, we use a bilinear (and a trilinear) pseudo-differential operator to perform the normal form transform. These operators are expressed as $T(\cdot, \cdot)$ and $J(\cdot,\cdot,\cdot)$ in Section~\ref{sec:setup}.
The following is the main result of this work:
\begin{theorem}\label{thm2}
Let $0\leq s<1/2$, $0<10\delta <1-2s$, $0< \gamma \leq 1-10\delta$. For any real-valued $u_0 \in H^{-s}(\mathbf{T})$ with $\widehat{u_0}(0) =0$, the solution~$u(t)$ of \eqref{kdvt} satisfies $u(t) - R^*[u_0](t) \in H^{-s+\gamma}(\mathbf{T})$ for all $t\in\mathbf{R}$. More precisely, there exist constants $C= C(\delta, \|u_0\|_{H^{-s}})$ and $\alpha(\delta)=O(1/\delta)$ such that
\begin{equation}\label{eq:smoothing}
\n{\mathcal{S}[u](t) - e^{-t\partial_x^3} u_0}{H^{-s+\gamma}} \leq C \lan{t}^{\alpha(\delta)}
\end{equation}
where we define the \emph{resonant phase-shift operator}~$\mathcal{S}$ to be
\begin{equation}\label{def:phase}
\mathcal F_x [\mathcal{S}u](t,\xi) := \exp \left(-2i\f{|\wh{u_0}|^2(\xi)}{\xi}t\right) \wh{u}(t,\xi).
\end{equation}
Furthermore, we can write the Lipschitz property of the solution map in a smoother space:
\begin{equation}\label{eq:thmlip}
\|u(t)-v(t)\|_{H_x^{-s+\gamma}} \leq C_{N,\delta} \lan{t}^{\alpha(\delta)}\|u_0 - v_0\|_{H_x^{-s +\gamma} },
\end{equation}
where $\|u_0\|_{H^{-s}} + \|v_0\|_{H^{-s}} <N$.
\end{theorem}
It should be noted that although one can take any $\gamma<1$ above, the constant~$C$ and the power~$\alpha(\delta)$ blow up as $\delta \searrow 0$.
The effect of $\mathcal{S}$ is a shift of the space-time Fourier transform of $u$ by $2|\wh{u_0}(\xi)|^2/ \xi$. Note that when $u_0 \in H^{-s}$ for $s<1/2$ and $\wh{u_0}(0)= 0$, such a phase-shift is continuous because $|\wh{u_0}(\xi)|^2/ \xi$ is a bounded quantity. This is also shown in Corollary~\ref{trivial}.
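Indeed, the boundedness is immediate from the definition of the $H^{-s}$ norm: for $\xi\neq 0$ and $0\leq s<1/2$,
\[
\frac{|\wh{u_0}(\xi)|^2}{|\xi|} \leq \frac{\lan{\xi}^{2s}}{|\xi|}\,\lan{\xi}^{-2s}|\wh{u_0}(\xi)|^2 \lesssim |\xi|^{2s-1}\,\|u_0\|_{H^{-s}}^2 \leq \|u_0\|_{H^{-s}}^2,
\]
since $2s-1<0$ and $|\xi|\geq 1$.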
If we define $R^*[u_0]$ according to the definition given in \eqref{defr} and statements following, \eqref{eq:smoothing} is equivalent to
\begin{equation}\label{eq:smoothing2}
\n{u(t) - R^*[u_0](t)}{H^{-s+\gamma}} \leq C\lan{t}^{\alpha(\delta)}.
\end{equation}
Thus, we will prove \eqref{eq:smoothing2} in place of \eqref{eq:smoothing}.
We derive a fully nonlinear smoothing result without a phase-shift (cf. \cite{niko}).
\begin{corollary}\label{trivial}
Let $0\leq s < 1/2$. Then the solution $u$ of \eqref{kdvt} with the initial data $u_0\in H^{-s}$ satisfies the following non-linear smoothing property $u(t) - e^{-t\partial_x^3} u_0 \in H^{-s+\sigma}$ for $0\leq \sigma < 1-2s$. More precisely, the same $C$, $\alpha(\delta)$ in Theorem~\ref{thm2} satisfies
\[
\n{u(t) - e^{-t\partial_x^3} u_0 }{H^{-s+\sigma}} \leq C \lan{t}^{\alpha(\delta)}
\]
\end{corollary}
\begin{proof}
By the triangle inequality,
\[
\n{u(t) - e^{-t\partial_x^3} u_0}{H^{-s+\sigma}} \leq \n{[\mathcal{S}u](t) - e^{-t\partial_x^3} u_0}{H^{-s+\sigma}} + \n{[\mathcal{S}u](t)- u(t)}{H^{-s+\sigma}}
\]
where $\mathcal{S}$ is defined in \eqref{def:phase}. Note that the first term on RHS belongs in $C^0_t H_x^{-s+\sigma}$ for $\sigma < 1$ by \eqref{eq:smoothing}. The second term is estimated via the following claim.
\textbf{Claim:} Let $u_0 \in H^{-s}$ with mean zero. Then if $v(t)\in H^{\alpha}$ for any $t, \alpha\in {\mathbf R}$,
\[
\n{\mathcal{S}[v](t) - v(t)}{H_x^{\alpha + 1-2s}} \lesssim |t|\|u_0\|_{H^{-s}}^2 \|v(t)\|_{H^{\alpha}}.
\]
To prove the claim, we use mean-value theorem.
\begin{align*}
\n{\mathcal{S}[v](t) - v(t)}{H^{\alpha + 1-2s}_x} &= \n{\lan{\xi}^{\alpha + 1-2s}\wh{v}(t,\xi) \left(\exp \left(-2i\frac{|\widehat{u_0}(\xi)|^2}{\xi} t\right) -1 \right)}{l^2_{\xi}}\\
&\lesssim \n{\lan{\xi}^{\alpha+ 1-2s} |\wh{v}|(t,\xi) |t|\frac{|\widehat{u_0}(\xi)|^2}{\xi}}{l^2_{\xi}(\mathbb{Z}\setminus\{0\})}\\
&\lesssim |t|\|\lan{\xi}^{-2s}|\wh{u_0}|^2\|_{l^{\infty}_{\xi}} \|\lan{\xi}^{\alpha} |\wh{v}|(t,\xi)\|_{l^{2}_{\xi}} \leq |t|\|u_0\|^2_{H^{-s}} \|v(t)\|_{H^{\alpha}_x}.
\end{align*}
\end{proof}
The paper is organized as follows. In Section \ref{s2}, we introduce the $X^{s,b}$ spaces and discuss previously obtained results on \eqref{kdvt}. Section~\ref{s3} contains the proof of Theorem~\ref{thm2} in the following manner: In Section~\ref{sec:setup}, we construct the normal form and perform a few change of variables to reach the new formulation of the equation \eqref{kdvt} to optimize the smoothing effect. In \ref{sec:properties}, we derive mapping properties of the resonant solution operator~$R[\cdot]$ and the normal form operators~$T(\cdot, \cdot)$ and $J(\cdot, \cdot, \cdot)$. Section~\ref{sec:nonlinearity} contains the main multilinear estimates necessary for the contraction mapping, and Section~\ref{sec:local} contains the proof of the local statement for Theorem~\ref{thm2}. In Section \ref{global}, we conclude the proof of the theorem by iterating the local steps and using the global-in-time bound obtained in \cite{I1}.\\
{\bf Acknowledgement:} The author thanks Atanas Stefanov and Vladimir Georgiev for helpful suggestions, and Nikolaos Tzirakis for pointing out some key references.
\section{Notations and preliminaries}\label{s2}
\subsection{Notations}\label{s21}
We adopt the standard notations in approximate inequalities as follows:
By $A \lesssim B$, we mean that there exists an absolute constant $C>0$ with $A \leq CB$. $A \ll B$ means that the implicit constant is taken to be a \emph{sufficiently} large positive number. For any number of quantities $\alpha_1, \ldots, \alpha_k$, $A\lesssim_{\alpha_1, \ldots, \alpha_k} B$ means that the implicit constant depends only on $\alpha_1, \ldots, \alpha_k$.
Finally, by $A\sim B$, we mean $A\lesssim B$ and $B\lesssim A$.
We indicate by $\eta$ a smooth time cut-off function which is supported on $[-2,2]$ and equals $1$ on $[-1,1]$. Notations here will be relaxed, since the exact expression of $\eta$ will not influence the outcome. For any normed space~$\mathcal{Y}$, we denote the quantity $\|\cdot\|_{\mathcal{Y}_T}$ by the expression $\|u\|_{\mathcal{Y}_T} = \|\eta(t/T)u \|_{\mathcal{Y}}$.\\
The spatial, space-time Fourier transforms and spatial inverse Fourier transform are
\begin{align*}
\widehat{f}(\xi) &= \int_{\mathbf{T}} f(x) e^{-i x\xi}\, dx, \\
\widetilde{u}(\tau,\xi) &= \int_{ \mathbf{T}\times\mathbf{R}} u(t,x) e^{-i(x\xi+ t\tau)} \,dx\, dt\\
\mathcal{F}_{\xi}^{-1}[a_{\xi}](x) &= \frac{1}{2\pi} \sum_{\xi \in \mathbf{Z}} a_{\xi} e^{i\xi x}
\end{align*}
where $\xi \in \mathbf{Z}$.
If $u$ is real-valued, then by above definition $\widehat{u}(-\xi) = \overline{\widehat{u}} (\xi)$. This is in fact an essential ingredient in our proof, which makes the resonant term autonomous in time.
For a reasonable expression~$\sigma$, we denote differential operators with symbol $\sigma (\cdot)$ via $\sigma (\nabla) f = \sigma(\partial_x) f := \mathcal{F}_{\xi}^{-1} [ \sigma(i\xi) \widehat{f}(\xi)]$. Also, we define $\langle \xi \rangle := 1+|\xi|$.\\
\subsection{$X^{s,b}$ spaces and local well-posedness theory.}
\label{s22}
Bourgain spaces are constructed as the completion of \emph{smooth} functions with respect to the norm
$$
\|u\|_{X^{s,b}} := \left( \sum_{\xi\in \mathbf{Z}} \int_{\mathbf{R}} (1+ |\xi|)^{2s} (1+ |\tau-\xi^3|)^{2b} |\widetilde{u}(\tau,\xi)|^2 \, d\tau \right)^{\frac{1}{2}}.
$$
The expression above shows $X^{s,0} = L^2_t H^s_x$. The added weight~$\tau-\xi^3$ (called \emph{modulation frequency}) measures the distance between a space-time Fourier support of the function and the Fourier support of the linear solution~$e^{-t\partial_x^3}u_0$. For instance, the free Airy solution with $L^2$~initial data lies in this space, given an appropriate time cut-off~$\eta \in \mathcal{S}_t$.
\begin{equation}\label{airyfree}
\| \eta (t) e^{-t\partial_x^3}f\|_{X^{0,b}} = \|(1+| \tau-\xi^3|)^{b} \widehat{\eta}(\tau-\xi^3) \widehat{f}(\xi)\|_{L^2_{\tau} l^2_{\xi}} \lesssim_{\eta, b} \|f\|_{L^2_x}
\end{equation}
due to the fast decay of $\widehat{\eta} \in \mathcal{S}_t$. For $\varepsilon, \delta>0$, we have the embedding properties obtained in Bourgain \cite{Bour},
\begin{align}
\|u\|_{C^0_t H^s_x (\mathbf{R}\times \mathbf{T})} &\lesssim_{\delta} \|u\|_{X^{s,\frac{1}{2}+\delta}},\label{xsb1}\\
\|u\|_{L^4_{t,x}([0,1]\times \mathbf{T})} &\lesssim_{\varepsilon, \delta} \|u\|_{X^{\varepsilon,\frac{1}{3}}},\label{xsb3}\\
\|u\|_{L^6_{t,x}([0,1]\times \mathbf{T})} &\lesssim_{\varepsilon, \delta} \|u\|_{X^{\varepsilon,\frac{1}{2}+\delta}}.\label{xsb2}
\end{align}
The following two estimates provide a convenient framework for our proof. The proofs in \cite[Prop. 2.12, Lemma 2.11]{tao2}, which argue for $x\in \mathbf{R}^d$, are also valid in the periodic case.
\begin{proposition}\label{xsb}
For $\delta>0$ and $s\in {\mathbf R}$,
\[
\|\eta (t) \int_{0}^t e^{-(t-s)\partial_x^3} F(s) \, ds\|_{X^{s,\frac{1}{2}+\delta}} \lesssim_{\eta,\delta} \|F\|_{X^{s,-\frac{1}{2}+\delta}}.
\]
\end{proposition}
\begin{proposition} \label{timeloc}
Let $\eta \in \mathcal{S}_t (\mathbf{R})$ and $T\in (0,1)$. Then for $-\frac{1}{2}< b' \leq b <\frac{1}{2}$, $s\in \mathbf{R}$,
\[
\|\eta(t/T) u \|_{X^{s,b'}} \lesssim_{\eta, b,b'} T^{b-b'} \|u\|_{X^{s,b}}.
\]
\end{proposition}
We use Proposition~\ref{timeloc} to gain positive powers of $T$ by yielding a small portion of modulation weight~$\lan{\tau-\xi^3}$.
The mean-zero assumption on the initial data is standard throughout the literature for the periodic KdV equation. This is justified by the mean conservation property of \eqref{kdvt}. By a change of variable $v(t)= u(t)-\int_{{\mathbf T}}u_0$ as in \cite{Bour}, one can transfer the mean-zero condition to an order-1 perturbation, which can then be removed by the gauge transform $t':=t-cx$. Thus we assume that all initial data~$u_0$ satisfy the mean-zero condition.
We define a closed subspace $Y^{s,b}$ of $X^{s,b}$ (with the same norm) as the image of orthogonal projection $\mathbf{P}: X^{s,b} \to Y^{s,b}$ defined by $\displaystyle \mathbf{P} (u) (x) := u(x) - \int_{\mathbf{T}} u\, dx$. This will enforce the mean-zero condition on the contraction space.
The analysis of periodic KdV has developed around $X^{-s,\f{1}{2}}$ space, rather than $X^{s,\f{1}{2}+}$ which is properly contained in $C^0_t H^s_x$. This is mainly due to the failure of $X^{s,b}$ bilinear estimate for $b>1/2$, \cite{KPV1}. This was remedied in \cite{I1} by introducing $Y^{-s,\f{1}{2}}\cap \wt{l^2_{\xi}(\lan{\xi}^{-s}; L^1_{\tau})}$ where
\[
\|u\|_{\wt{l^2_{\xi}(\lan{\xi}^{-s}; L^1_{\tau})}} := \n{\lan{\xi}^{-s} \wt{u}(\tau,\xi)}{l^2_{\xi} L^1_{\tau}}.
\]
Clearly, $Y^{-s,\f{1}{2}+} \subset \wt{l^2_{\xi}(\lan{\xi}^{-s}; L^1_{\tau})} \subset C^0_t H^{-s}$. Use of the space $Y^{-s,\f{1}{2}}\cap \wt{l^2_{\xi}(\lan{\xi}^{-s}; L^1_{\tau})}$ increases the number of estimates required for contraction, thus one would prefer to show contraction in $Y^{-s,\f{1}{2}+}$. In this paper, we do not encounter the same problem described in \cite{KPV1}, since our estimates will be essentially trilinear. However, we will take advantage of the fact that the global-in-time solution lives in $Y^{-s,\f{1}{2}}\cap \wt{l^2_{\xi}(\lan{\xi}^{-s}; L^1_{\tau})}$. The following statement is obtained from \cite{I1}.
\begin{proposition}\label{pro:timebound}
The periodic initial value problem \eqref{kdvt} is globally well-posed in $H^{-s}$ when $s\in (0,1/2]$. Furthermore, the solution satisfies~$\eta(t/T)u \in Y^{-s,\f{1}{2}}\cap \wt{l^2_{\xi}(\lan{\xi}^{-s}; L^1_{\tau})}$ for any arbitrary time $T$, where $\eta$ is a smooth cut-off function, and for any $T>0$,
\[
\|u(T)\|_{H^{-s}} \lesssim
\|\eta(\cdot/T) u(\cdot)\|_{\wt{l^2_{\xi} (\lan{\xi}^{-s}; L^1_{\tau})}} \lesssim_{\eta} \lan{T}^s \|u_0\|_{H^{-s}}.
\]
\end{proposition}
\section{Proof of Theorem \ref{thm2}}
\label{s3}
\subsection{Setting of the problem}
\label{sec:setup}
We now turn our attention to \eqref{kdvt}. Let $u$ be the global-in-time solution of \eqref{kdvt} described in Proposition~\ref{pro:timebound}. Setting $v := \langle \nabla \rangle^{-s} u$, $v$ satisfies
\begin{equation}\label{periodic}
v_t + v_{xxx} = \mathcal{N} (v,v), \qquad v(0) = f := \lan{\nabla}^{-s} u_0 \in L^2(\mathbf{T})
\end{equation}
where $\mathcal{N} (u,v) := \partial_x \langle \nabla \rangle^{-s} [\langle \nabla \rangle^s u \langle \nabla \rangle^s v ]$. In particular, the bilinear operator~$\mathcal{N}$ contains a spatial derivative, thus $\mathcal{N} \equiv \mathbf{P} \circ \mathcal{N}$ itself has mean-zero.
We construct the bilinear pseudo-differential operator $T$ by the formula
$$
T (u,v) := -\f{1}{3}\sum_{\xi_1 \xi_2 (\xi_1+\xi_2) \neq 0} \frac{\langle\xi_1\rangle^s \langle\xi_2\rangle^s}{\langle\xi_1 + \xi_2\rangle^s} \frac{1}{\xi_1 \xi_2} \widehat{u}(\xi_1) \widehat{v} (\xi_2) e^{i(\xi_1 + \xi_2) x}.
$$
Considering the algebraic identity
\[
(\tau_1 +\tau_2) - (\xi_1 + \xi_2)^3 = (\tau_1 - \xi_1^3) + (\tau_2 - \xi_2^3) -3\xi_1 \xi_2(\xi_1 + \xi_2),
\]
the differential operator $\partial_t - \partial_{xxx}$ acts on $T$ in the following manner:
$$
(\partial_t + \partial_{xxx}) T (u,v) = T ((\partial_t + \partial_{xxx}) u,v) + T(u,(\partial_t+ \partial_{xxx}) v) + \mathcal{N} (\mathbf{P}u,\mathbf{P}v).
$$
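To see where the last term comes from, note that on the Fourier side the operator $\partial_t+\partial_{xxx}$ leaves behind, for each pair $(\xi_1,\xi_2)$, the factor $-i\big[(\xi_1+\xi_2)^3-\xi_1^3-\xi_2^3\big]=-3i\,\xi_1\xi_2(\xi_1+\xi_2)$, which cancels the denominator in the symbol of $T$:
\[
-\frac{1}{3}\,\frac{\langle\xi_1\rangle^s \langle\xi_2\rangle^s}{\langle\xi_1+\xi_2\rangle^s}\,\frac{1}{\xi_1\xi_2}\cdot\big(-3i\,\xi_1\xi_2(\xi_1+\xi_2)\big) = i(\xi_1+\xi_2)\,\frac{\langle\xi_1\rangle^s \langle\xi_2\rangle^s}{\langle\xi_1+\xi_2\rangle^s},
\]
which is exactly the symbol of $\mathcal{N}(\cdot,\cdot)$.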
If we write $h= T (v,v)$ where $v$ solves \eqref{periodic} (recall $v=\mathbf{P}v$) and change variable by $v= h+z$, then $z$ satisfies
\begin{equation}\label{zperiodic}
\left| \begin{array}{l} (\partial_t + \partial_{xxx}) z = -2 T (\mathcal{N}(v,v), v);\\
z(0) = f-T(f,f). \end{array}\right.
\end{equation}
For the right side of \eqref{zperiodic}, we note that
$$
T(\mathcal{N}(v,v),v) = \mathbf{P}\langle \nabla \rangle^{-s} (\mathbf{P}[\langle \nabla \rangle^s v \langle \nabla \rangle^s v] \frac{\langle \nabla \rangle^{s}}{\nabla} v).
$$
We adapt the computations in \cite{NTT2} to simplify Fourier coefficients of the above expression as follows. For $\xi\neq 0$ (recall $\widehat{v}(0)=0$ and $\widehat{v}(-\xi) = \overline{\widehat{v}}(\xi)$),
\begin{align*}
\mathcal{F}[\mathbf{P}\langle \nabla \rangle^{-s} (\mathbf{P}[\langle \nabla \rangle^s v \langle \nabla \rangle^s v] \frac{\langle \nabla \rangle^{s}}{\nabla} v)](\xi) &= \sum_{\tiny \begin{array}{c}\xi_1+\xi_2\neq 0, \quad\xi_j \neq 0\\ \xi_1+\xi_2+\xi_3 = \xi\end{array}} \frac{\langle \xi_1\rangle^s \langle\xi_2\rangle^s \langle \xi_3\rangle^{s}}{i\xi_3\langle \xi\rangle^s} \widehat{v}(\xi_1) \widehat{v}(\xi_2) \widehat{v}(\xi_3)\\
&\hspace{-150pt}= \sum_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi, \quad\xi_j \neq 0 \end{array} } \frac{\langle \xi_1\rangle^s \langle\xi_2\rangle^s \langle \xi_3\rangle^{s}}{i\xi_3\langle \xi\rangle^s} \widehat{v}(\xi_1) \widehat{v}(\xi_2) \widehat{v}(\xi_3) + \frac{\langle \xi\rangle^{2s}}{-i\xi} \widehat{v}(\xi) \widehat{v}(\xi) \widehat{v}(-\xi)\\
&\hspace{-105pt}+ \sum_{\xi_3\neq 0} \frac{\langle\xi\rangle^s \langle \xi_3\rangle^{2s}\langle \xi\rangle^s}{i\xi_3\langle \xi\rangle^s} \widehat{v}(-\xi_3) \widehat{v}(\xi) \widehat{v}(\xi_3)+ \sum_{\xi_3\neq 0} \frac{\langle \xi\rangle^s \langle\xi_3\rangle^{2s}}{i\xi_3\langle \xi\rangle^s} \widehat{v}(\xi) \widehat{v}(-\xi_3) \widehat{v}(\xi_3)\\
&\hspace{-150pt}= \sum_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi, \quad\xi_j \neq 0 \end{array} } \frac{\langle \xi_1\rangle^s \langle\xi_2\rangle^s \langle \xi_3\rangle^{s}}{i\xi_3\langle \xi\rangle^s} \widehat{v}(\xi_1) \widehat{v}(\xi_2) \widehat{v}(\xi_3) - \frac{\langle \xi\rangle^{2s}}{i\xi} |\widehat{v}|^2(\xi) \widehat{v}(\xi).
\end{align*}
The first term on the right side of above is the \emph{non-resonant} term, denoted $\mathcal{NR}(v,v,v)(\xi)$. The second one is \emph{resonant} and is denoted $\mathcal{R}(v,v,v)(\xi)$. Then we can rewrite \eqref{zperiodic} as
$$
(\partial_t + \partial_{xxx}) z = -2 \mathcal{F}_{\xi}^{-1} \left[\mathcal{NR}(v,v,v)\right] + 2\mathcal{F}_{\xi}^{-1} \left[\mathcal{R}(v,v,v)\right].
$$
When $s$ is near $1/2$, the resonant term does not gain any derivatives since the oscillatory gain $(\xi_1 + \xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)$ is not available. This is why the gain of derivatives vanishes for $u-e^{-t\partial_x^3}u_0$ in \cite{niko}. To overcome this problem, we want to efficiently \emph{filter out} the roughest term. To this end, we find an explicit solution for the IVP
$$
\left| \begin{array}{l}(\partial_t+\partial_{xxx})v_* = -2 \sum_{\xi \neq 0} \frac{\langle \xi \rangle^{2s}}{i\xi} |\widehat{v_*}(\xi)|^2 \widehat{v_*} (\xi) e^{i\xi x} \\
v_*(0) = f \in L^2(\mathbf{T}). \end{array} \right.
$$
The solution to the IVP can be written explicitly as
\begin{equation}\label{defr}
R [f] (t,x) := \sum_{\xi \neq 0} \widehat{f}(\xi) e^{2i\frac{\langle \xi \rangle^{2s}}{\xi} |\widehat{f}(\xi)|^2 t} e^{i(\xi x + \xi^3 t)}.
\end{equation}
$R^*[u_0]$ in the statement of Theorem~\ref{thm2} corresponds to $\langle \nabla \rangle^s R[\langle \nabla \rangle^{-s} u_0]$. For the method of such constructions, refer to \cite[Exercise 4.21]{tao2}. Clearly, $R[f]$ is a unitary (non-linear) operator on $H^s$ for any $s\in {\mathbf R}$ and maps $H^s_x \to C^{0}_t H^s_x$.
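That \eqref{defr} solves the preceding IVP can be verified directly on the Fourier side: since $|\widehat{R[f]}(t,\xi)| = |\widehat{f}(\xi)|$ for all $t$,
\[
\mathcal{F}\big[(\partial_t+\partial_{xxx})R[f]\big](t,\xi) = \Big(2i\frac{\langle \xi \rangle^{2s}}{\xi}|\widehat{f}(\xi)|^2 + i\xi^3 + (i\xi)^3\Big)\widehat{R[f]}(t,\xi) = -2\,\frac{\langle \xi \rangle^{2s}}{i\xi}\,|\widehat{R[f]}(t,\xi)|^2\,\widehat{R[f]}(t,\xi),
\]
using $(i\xi)^3=-i\xi^3$ and $2i=-2/i$.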
We perform another change of variable~$z = R[f] + y$ to filter out the roughest resonant term. Then $y$ satisfies
\begin{align}\label{eqyper}
(\partial_t + \partial_{xxx}) y &= -2\mathcal{F}_{\xi}^{-1} [\mathcal{NR}(R[f]+h+y,R[f]+h+y,R[f]+h+y)]\\
&+2 \sum_{\xi\neq 0} \frac{\langle \xi \rangle^{2s}}{i\xi} B(\widehat{R[f]}(\xi), \widehat{h}(\xi), \widehat{y}(\xi)) e^{i\xi x}\notag
\end{align}
with the initial condition~$y(0) = -T(f,f)$, where $B(\alpha,\beta,\gamma):= |\alpha+\beta+\gamma|^2(\beta+\gamma) + \alpha|\beta+\gamma|^2+ \alpha^2 (\overline{\beta+\gamma}) + |\alpha|^2 (\beta+\gamma)$ for $\alpha,\beta,\gamma \in \mathbf{C}$. The precise form of the polynomial~$B$ is not important. The main idea is that the Fourier coefficient~$B(\widehat{R[f]}(\xi), \widehat{h}(\xi), \widehat{y}(\xi))$ does not contain $|\wh{R[f]}(\xi)|^2 \wh{R[f]}(\xi)$. This is important heuristically since $R[f]$ is the least smooth term among the three. Since at least one smooth term is present at any cubic resonant term, the cubic resonant term now should be as smooth as $y$ and $h$.
Now consider the non-resonant component in \eqref{eqyper}. If $h$ and $y$ are smoother than $R[f]$, then the most problematic term should be $\mathcal{NR}(R[f],R[f],R[f])$. Indeed, this is the only non-linearity in \eqref{eqyper} which obstructs the full derivative gain. It can be shown that $\mathcal{NR}(R[f],R[f],R[f])$ only gains $1-s$~derivatives. Filtering out this term by means of normal form will be enough to improve this to a full derivative gain. We want to solve
\[
(\partial_t + \partial_{xxx}) y_* = 2\mathcal F^{-1}_{\xi}[\mathcal{NR}(R[f],R[f],R[f])].
\]
We define the trilinear pseudo-differential operator~$J$ by $J(u,v,w)(x) := $
\[
-\f{2}{3}\sum_{\tiny \begin{array}{c} (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1 + \xi_2 + \xi_3 = \xi, \qquad \xi_j \neq 0\end{array}} \f{\lan{\xi_1}^s \lan{\xi_2}^s \lan{\xi_3}^s}{i\xi_3 \lan{\xi}^s (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)} \wh{u}(\xi_1) \wh{v}(\xi_2) \wh{w}(\xi_3) e^{i\xi x}.
\]
Note that $J(\cdot,\cdot,\cdot)$ is symmetric in the first two variables. Also, considering the algebraic identity
\begin{equation}\label{eq:cubic}
(\tau_1 + \tau_2 +\tau_3) - (\xi_1+\xi_2+\xi_3)^3 = \sum_{j=1}^3 (\tau_j - \xi_j^3) -3(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1),
\end{equation}
we can see that the Airy operator acts on $J$ as follows.
\begin{align*}
(\partial_t + \partial_{x}^3)J(u,v,w) =\, &J((\partial_t + \partial_x^3)u,v,w) + J(u,(\partial_t + \partial_x^3)v,w)+J(u,v,(\partial_t + \partial_x^3)w)\\
&-2 \mathcal F^{-1}_{\xi}[\mathcal{NR}(u,v,w)].
\end{align*}
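For completeness, \eqref{eq:cubic} rests on the elementary factorization
\begin{align*}
(\xi_1+\xi_2+\xi_3)^3 - \xi_1^3 - \xi_2^3 - \xi_3^3 &= 3\big(\xi_1^2\xi_2 + \xi_1^2\xi_3 + \xi_2^2\xi_1 + \xi_2^2\xi_3 + \xi_3^2\xi_1 + \xi_3^2\xi_2\big) + 6\xi_1\xi_2\xi_3\\
&= 3(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1),
\end{align*}
so that subtracting $\sum_{j}(\tau_j - \xi_j^3)$ from $\tau - \xi^3$, with $\tau = \tau_1+\tau_2+\tau_3$ and $\xi = \xi_1+\xi_2+\xi_3$, yields \eqref{eq:cubic}.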
If we let $k := J(R[f], R[f], R[f])$, then by the above
\begin{align*}
(\partial_t + \partial_{x}^3)k &= 2\mathcal F^{-1}_{\xi}[\mathcal{NR}(R[f],R[f],R[f])]+ 2J(-2\mathcal F^{-1}_{\xi}[\mathcal{R}(R[f],R[f],R[f])],R[f],R[f])\\
&\quad + J(R[f],R[f], -2\mathcal F^{-1}_{\xi}[\mathcal{R}(R[f],R[f],R[f])])
\end{align*}
with initial data $k(0) = J(f,f,f)$. We write the Fourier coefficients of the quintilinear terms above. For $\xi\neq 0$,
$\mathcal F\left[J(\mathcal F^{-1}_{\xi}[\mathcal{R}(R[f],R[f],R[f])],R[f],R[f])\right] (\xi) = $
\[
\sum_{\tiny \begin{array}{c} (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1 + \xi_2 + \xi_3 = \xi, \qquad \xi_j \neq 0\end{array}} \f{\lan{\xi_1}^{3s} \lan{\xi_2}^s \lan{\xi_3}^s |\wh{f}(\xi_1)|^2}{-\xi_1 \xi_3 \lan{\xi}^s (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)} \wh{R[f]}(\xi_1) \wh{R[f]}(\xi_2) \wh{R[f]}(\xi_3).
\]
Similarly, $\mathcal F\left[J(R[f],R[f],\mathcal F^{-1}_{\xi}[\mathcal{R}(R[f],R[f],R[f])])\right](\xi)=$
\[
\sum_{\tiny \begin{array}{c} (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1 + \xi_2 + \xi_3 = \xi, \qquad \xi_j \neq 0\end{array}} \f{\lan{\xi_1}^{s} \lan{\xi_2}^s \lan{\xi_3}^{3s} |\wh{f}(\xi_3)|^2}{-\xi_3^2 \lan{\xi}^s (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)} \wh{R[f]}(\xi_1) \wh{R[f]}(\xi_2) \wh{R[f]}(\xi_3).
\]
Note that the above expressions are not genuinely quintilinear, since only three distinct space-time Fourier modes are present. In this sense, estimates for these terms are essentially trilinear.
Writing $y = k + w$, we obtain our final equation
\begin{equation}\label{eqwper}
\left| \begin{array}{l} w_t + w_{xxx} = \mathcal{NR} + \mathcal{R} + \mathcal{Q}\\
w(0) = -T(f,f) - J(f,f,f)\end{array}\right.
\end{equation}
where $\mathcal{NR}$, $\mathcal{R}$ and $\mathcal{Q}$ represent, respectively, the remaining trilinear non-resonant terms, the trilinear resonant terms and the quintilinear terms, as categorized below.
The non-resonant terms in $\mathcal{NR}$ are of the following type:
\begin{align}
&\mathcal{NR}(w, R[f]+h+k+w, R[f]+h+k+w); \label{eq:nr1}\\
&\mathcal{NR}(h+k, R[f]+h+k+w,R[f]+h+k+w);\label{eq:nr2}\\
&\mathcal{NR}(R[f]+h+k+w, R[f]+h+k+w, w);\label{eq:nr3}\\
&\mathcal{NR}(R[f]+h+k+w,R[f]+h+k+w, h+k).\label{eq:nr4}
\end{align}
The resonant term~$\mathcal{R}$ is the same as the last term in \eqref{eqyper}, with $y$ replaced by $k+w$.
The quintilinear terms in $\mathcal{Q}$ are of the following type:
\begin{align}
&J(\mathcal F^{-1}_{\xi}[\mathcal{R}(R[f],R[f],R[f])],R[f],R[f]);\label{eq:q1}\\
&J(R[f],R[f], \mathcal F^{-1}_{\xi}[\mathcal{R}(R[f],R[f],R[f])]).\label{eq:q2}
\end{align}
In the following section, we will determine which function spaces $R[f]$, $h$ and $k$ belong to, so that we can state the necessary estimates for $\mathcal{NR}$, $\mathcal{R}$ and $\mathcal{Q}$.
\subsection{Properties and regularity of $R[f]$, $h$ and $k$}\label{sec:properties}
We begin by examining an appropriate function space for $R[f]$. We have already noted that $R[f]\in C^0_t L^2_x$. The next lemma shows that $R[f] \in X_T^{0,b}$ for any $b\geq 0$, thus very close to the free solution~$e^{-t\partial_x^3}f$.
\begin{lemma}\label{rffree}
Given $f\in L^2$, $s\leq \f{1}{2}$, $b\geq 0$ and $\eta \in \mathcal S_t(\mathbf{R})$, we have
$$
\|\eta R[f]\|_{X^{0,b}} \lesssim \|\eta\|_{H^b} \max\left(\n{f}{L^2}, \n{f}{L^2}^{2b+1}\right).
$$
\end{lemma}
\begin{proof}
From \eqref{defr}, we have
$$
\wt{\eta \cdot R[f]}(\tau,\xi)= \wh{f}(\xi) \wh{\eta}\left(\tau - 2\f{\lan{\xi}^{2s}}{\xi} |\wh{f}(\xi)|^2 - \xi^3\right).
$$
Let $a_{\xi}:= 2\f{\lan{\xi}^{2s}}{\xi}|\wh{f}(\xi)|^2$. Then
\begin{align*}
\|\eta R[f]\|_{X^{0,b}} &= \n{\lan{\tau-\xi^3}^b \wh{f}(\xi) \wh{\eta}(\tau - a_{\xi} - \xi^3)}{L^2_{\tau}l^2_{\xi}}\\
&\lesssim \n{\lan{\tau-a_{\xi} -\xi^3}^b \lan{a_{\xi}}^b \wh{f}(\xi) \wh{\eta}(\tau - a_{\xi} - \xi^3)}{l^2_{\xi} L^2_{\tau}}\\
&\lesssim \n{\lan{\tau}^b \wh{\eta}(\tau)}{L^2_{\tau}} \n{\lan{a_{\xi}}^b \wh{f}(\xi)}{l^2_{\xi}}.
\end{align*}
Noting that $\lan{\xi}^{2s} \lesssim |\xi|$ for $\xi\in \mathbf{Z}\setminus\{0\}$ and $s\leq \f{1}{2}$, so that $\sup_{\xi} |a_{\xi}| \lesssim \|f\|_{L^2}^2$, we have the desired estimate.
\end{proof}
Next, we establish the Lipschitz continuity of the map $R[f]$ on $L^2(\mathbf{T})$.
\begin{lemma}\label{lipschitz}
Let $R$ be defined as in \eqref{defr} with $s<1/2$ and $\gamma\in \mathbf{R}$. Then for any $f, g\in L^2(\mathbf{T})$ with $f-g \in H^{\gamma}$,
$$
\n{R[f] - R[g]}{C^0_t H^{\gamma}_x([0,T]\times \mathbf{T})} \leq C_{N,T} \|f-g\|_{H^{\gamma}(\mathbf{T})}
$$
where $\|f\|_{L^2}+\|g\|_{L^2} < N$.
\end{lemma}
\begin{proof}
First we write $\widehat{f}(\xi) = |\widehat{f}(\xi)| e^{i \alpha_{\xi}}$ and $\widehat{g}(\xi) = |\widehat{g}(\xi)| e^{i \beta_{\xi}}$, and denote $\theta_{\xi} :=\alpha_{\xi} - \beta_{\xi}$. Then the law of cosines, the triangle inequality and H\"older's inequality give
\begin{align*}
\n{R[f] - R[g]}{C^0_t H^{\gamma}_x([0,T]\times \mathbf{T})} &= \sup_{t\in [0,T]} \n{\langle \xi\rangle^{\gamma} (|\widehat{f}| e^{2it\frac{\langle \xi\rangle^{2s}}{\xi}(|\widehat{f}|^2 - |\widehat{g}|^2 )+ i\theta_{\xi} } - |\widehat{g}| )}{l^2_{\xi} (\mathbf{Z}\setminus \{0\})}\\
&\hspace{-120pt}= \sup_{t\in [0,T]} \n{\langle \xi\rangle^{\gamma} \left( |\widehat{f}|^2 + |\widehat{g}|^2 - 2 |\widehat{f}| |\widehat{g}| \cos ( 2t\frac{\langle \xi\rangle^{2s}}{\xi}(|\widehat{f}|^2 - |\widehat{g}|^2 ) + \theta_{\xi} ) \right)^{\frac{1}{2}}}{l^2_{\xi} (\mathbf{Z}\setminus \{0\})}\\
&\hspace{-120pt}\lesssim \|\langle \xi\rangle^{\gamma} (|\widehat{f}|-|\widehat{g}|)\|_{l^2_{\xi}} + 2 \sup_{t\in [0,T]} \n{\langle \xi\rangle^{2\gamma} |\widehat{f}| |\widehat{g}|\left( 1 - \cos \left( 2t\frac{\langle \xi\rangle^{2s}}{\xi}(|\widehat{f}|^2 - |\widehat{g}|^2 )+ \theta_{\xi} \right)\right)}{l^1_{\xi} (\mathbf{Z}\setminus \{0\})}^{\frac{1}{2}}\\
&\hspace{-120pt}\lesssim \|f-g\|_{H^{\gamma}} + 4\sup_{t\in [0,T]} \n{\langle \xi\rangle^{2\gamma} |\widehat{f}| |\widehat{g}| \sin^2 \left(t\frac{\langle \xi\rangle^{2s}}{\xi}(|\widehat{f}|^2 - |\widehat{g}|^2 )+ \theta_{\xi} \right)}{l^{1}_{\xi} (\mathbf{Z}\setminus \{0\})}^{\frac{1}{2}}.
\end{align*}
Using $\sin^2 (A+B) \lesssim A^2 + \sin^2 B$ and the assumption $s<1/2$, we need to estimate
\begin{align}
& \n{\langle \xi\rangle^{2\gamma} |\widehat{f}| |\widehat{g}| (|\widehat{f}|^2 - |\widehat{g}|^2 )^2}{l^{1}_{\xi} (\mathbf{Z}\setminus \{0\})}^{\frac{1}{2}},\label{lip1}\\
&\n{\langle \xi\rangle^{2\gamma} |\widehat{f}| |\widehat{g}| \sin^2 \theta_{\xi}}{l^{1}_{\xi} (\mathbf{Z}\setminus \{0\})}^{\frac{1}{2}}.\label{lip2}
\end{align}
The bound for \eqref{lip1} is straightforward. By H\"older's and the triangle inequality,
$$
\eqref{lip1} \lesssim \|f\|_{L^2}^{\frac{1}{2}} \|g\|_{L^2}^{\frac{1}{2}} \n{\langle \xi \rangle^{\gamma} (\widehat{f} - \widehat{g}) (|\widehat{f}| + |\widehat{g}|)}{l^{\infty}_{\xi}} \lesssim \|f\|_{L^2}^{\frac{3}{2}} \|g\|_{L^2}^{\frac{3}{2}} \|f-g\|_{H^{\gamma}}.
$$
For \eqref{lip2}, we apply the Law of sines. Without loss of generality, we can assume $\theta_{\xi} \in (0,\pi)$. Noting that the triangle with side-lengths equal to $|\widehat{f}|, |\widehat{g}|, |\widehat{f} - \widehat{g}|$ has the angle~$\theta_{\xi}$ which is opposite to the side with length $|\widehat{f}-\widehat{g}|$, we can deduce that $|\widehat{f}|\sin \theta_{\xi}\leq |\widehat{f} - \widehat{g}|$ and likewise for $|\widehat{g}|$. Thus,
$$
\eqref{lip2} \leq \n{\langle \xi\rangle^{2\gamma} |\widehat{f} - \widehat{g}|^2 }{l^1_{\xi}}^{\frac{1}{2}} \sim \n{\langle \xi \rangle^{\gamma} |\widehat{f} - \widehat{g}|}{l^2_{\xi}} \sim \|f-g\|_{H^{\gamma}}.
$$
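The law-of-sines step above can be spelled out as follows: in the triangle with side lengths $|\widehat{f}|$, $|\widehat{g}|$ and $|\widehat{f}-\widehat{g}|$, let $\beta'_{\xi}$ denote the angle opposite the side of length $|\widehat{f}|$. Then
\[
\frac{|\widehat{f}|}{\sin \beta'_{\xi}} = \frac{|\widehat{f}-\widehat{g}|}{\sin \theta_{\xi}} \qquad\Longrightarrow\qquad |\widehat{f}| \sin \theta_{\xi} = |\widehat{f}-\widehat{g}| \sin \beta'_{\xi} \leq |\widehat{f}-\widehat{g}|,
\]
and the bound $|\widehat{g}|\sin\theta_{\xi} \leq |\widehat{f}-\widehat{g}|$ follows in the same way.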
\end{proof}
We take a detour to remark on the failure of uniform continuity of $R[\cdot]$ when $s>1/2$ (equivalently, the failure of uniform continuity of $R^*[\cdot]$ in $H^{-s}$). The following remark serves as heuristic evidence that $R^*[f]$ dominates the evolution of the periodic KdV below $H^{-1/2}$.
\begin{remark}\label{rem:nonuniform}
For any $\varepsilon>0$, $R[\cdot]: L^2 \to L^{\infty}_t([0,\varepsilon]; L^2_x)$ is not uniformly continuous on a bounded set of $L^2({\mathbf T})$.
\end{remark}
\begin{proof}
Given any $0<\delta, \varepsilon \ll 1$, we show that there exist $f,g \in L^2$ such that $\|f\|_{L^2}+\|g\|_{L^2}\leq 2$ and $\|f-g\|_{L^2} \leq \delta$, but $\sup_{t\in [0,\varepsilon]} \|R[f](t)-R[g](t)\|_{L^2_x} \geq 1$.
For some $\xi\in \mathbf{Z}\setminus\{0\}$ to be determined, define $f(x) := e^{ix\xi}$ and $g(x) := (1-\delta) e^{ix\xi}$. Then applying the law of cosines,
\begin{align*}
\n{R[f](t) - R[g](t)}{L^2}^2 &= \left| e^{i\left(2\f{\lan{\xi}^{2s}}{\xi} + \xi^3\right) t} - (1-\delta) e^{i\left(2\f{\lan{\xi}^{2s}}{\xi} (1-\delta)^2 + \xi^3\right) t} \right|^2\\
&= 1 + (1-\delta)^2 - 2(1-\delta) \cos\left(2\f{\lan{\xi}^{2s}}{\xi}(1-(1-\delta)^2)t\right)
\\
&= \delta^2 + 2(1-\delta)\left[ 1- \cos \left(2\f{\lan{\xi}^{2s}}{\xi}(2\delta -\delta^2)t\right) \right]\\
&= \delta^2 + 4(1-\delta)\sin^2\left(\f{\lan{\xi}^{2s}}{\xi}\delta(2-\delta)t\right) \geq 2\sin^2\left(\f{\lan{\xi}^{2s}}{\xi}\delta(2-\delta)t\right).
\end{align*}
If $s>1/2$, we may choose $\xi\in \mathbf{Z}$ so that $\f{\lan{\xi}^{2s}}{\xi}\delta(2-\delta)t = \f{\pi}{4}$ for some $t\in [0,\varepsilon]$, which makes the right-hand side above equal to $1$.
\end{proof}
Next, we consider mapping properties of the bilinear operator~$T$, which will give us the appropriate functional space for $h$ as well as the necessary regularity for the initial data $w(0)$.
The following lemma implies that $T(f,f) \in H^1(\mathbf{T})$ and also that $h\in C^0_t([0,T]; H^1_x) \subset X^{1,0}_T$ for any $T>0$.
\begin{lemma}\label{tpmap1}
$T: L^2(\mathbf{T}) \times L^2(\mathbf{T}) \to H^{1}(\mathbf{T})$ is a bounded bilinear operator.
\end{lemma}
\begin{proof}
Let $u,v\in C^{\infty}(\mathbf{T})$. Then
\begin{equation}\label{tpeq}
\| T(u,v)\|_{H^{1}} \sim \n{\sum_{\xi_1 (\xi-\xi_1)\neq 0} \frac{\langle\xi_1\rangle^s \langle\xi-\xi_1\rangle^s \langle\xi\rangle^{1-s} }{\xi_1 (\xi-\xi_1)} \widehat{u}(\xi_1) \widehat{v}(\xi-\xi_1)}{l^2_{\xi}(\mathbf{Z}\setminus \{0\}) }.
\end{equation}
By symmetry, we can assume $|\xi_1| \geq |\xi-\xi_1|$. Then by H\"older and Sobolev embedding,
\begin{align*}
\eqref{tpeq} &\lesssim M_{\varepsilon} \n{\sum_{\xi_1 \neq \xi} |\widehat{u}|(\xi_1) \frac{|\widehat{v}|(\xi-\xi_1)}{|\xi-\xi_1|^{\frac{1}{2}+\varepsilon}}}{l^2_{\xi}(\mathbf{Z}) }\\
&\lesssim M_{\varepsilon} \n{\mathcal{F}^{-1}[ |\widehat{u}|] |\partial_x |^{-\frac{1}{2}-\varepsilon} \mathcal{F}^{-1}[|\widehat{v}|]}{L^2_{x}(\mathbf{T})} \lesssim_{\varepsilon} M_{\varepsilon} \| u\|_{L^2(\mathbf{T})} \| v \|_{L^2_{x}(\mathbf{T})}
\end{align*}
where we take $\varepsilon>0$ small and
$$
M_{\varepsilon} := \sup_{\xi \xi_1 (\xi-\xi_1)\neq 0}\frac{\langle\xi_1\rangle^s \langle\xi-\xi_1\rangle^s \langle\xi\rangle^{1-s} }{|\xi_1| |\xi-\xi_1|^{\frac{1}{2}-\varepsilon}}.
$$
It is easy to see that $M_{\varepsilon}$ is a bounded quantity if $s<1/2$ and $\varepsilon$ is small, thus the claim follows.
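Indeed, the finiteness of $M_{\varepsilon}$ can be checked directly: on the region $|\xi_1| \geq |\xi-\xi_1|$ we have $\langle\xi\rangle^{1-s} \lesssim \langle\xi_1\rangle^{1-s}$, so that
\[
M_{\varepsilon} \lesssim \sup_{\xi_1(\xi-\xi_1)\neq 0} \frac{\langle\xi_1\rangle}{|\xi_1|} \cdot \frac{\langle\xi-\xi_1\rangle^{s}}{|\xi-\xi_1|^{\frac{1}{2}-\varepsilon}} \lesssim \sup_{\eta\in \mathbf{Z}\setminus\{0\}} \langle\eta\rangle^{s-\frac{1}{2}+\varepsilon} < \infty
\]
once $\varepsilon < \frac{1}{2}-s$.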
\end{proof}
The following lemma gives an estimate on the term~$h$.
\begin{lemma}\label{le:tspace}
Let $u,v \in X^{0,\f{1}{2}}\cap \wt{l^2_{\xi} L^1_{\tau}}$. Then
\[
\|T(u,v)\|_{X^{0, \f{1}{2}}} \lesssim \|u\|_{X^{0,\f{1}{2}}\cap \wt{l^2_{\xi} L^1_{\tau}}} \|v\|_{X^{0,\f{1}{2}}\cap \wt{l^2_{\xi} L^1_{\tau}}}.
\]
\end{lemma}
\textbf{Remark:} By interpolating this estimate with Lemma~\ref{tpmap1}, we also obtain for $b\in [0,1/2]$
\begin{equation}\label{eq:tspace}
\|T(v,v)\|_{X^{1-2b, b}_1} \lesssim \|v\|^2_{X^{0,\f{1}{2}}\cap \wt{l^2_{\xi} L^1_{\tau}}}.
\end{equation}
\begin{proof}
Recall that $\n{T(u,v)}{X^{0, \f{1}{2}}} =$
\begin{equation}\label{eq:tuv}
\n{\lan{\tau-\xi^3}^{\f{1}{2}} \int_{\mathbf{R}}\sum_{\xi_1(\xi-\xi_1) \neq 0} \frac{\langle\xi_1\rangle^s \langle\xi -\xi_1\rangle^s}{\xi_1 (\xi - \xi_1)\lan{\xi}^{s}} \wt{u}(\tau_1, \xi_1) \wt{v} (\tau - \tau_1,\xi-\xi_1) \, d\tau_1}{L^2_{\tau} l^2_{\xi}}
\end{equation}
Since $[\tau_1 - \xi_1^3] + [(\tau-\tau_1) - (\xi-\xi_1)^3] = [\tau - \xi^3] + 3\xi \xi_1 (\xi-\xi_1)$, we write
\begin{equation}\label{eq:tauxi}
\lan{\tau-\xi^3}^{\f{1}{2}} \lesssim \lan{\tau_1 - \xi_1^3}^{\f{1}{2}} + \lan{(\tau-\tau_1) -(\xi-\xi_1)^3}^{\f{1}{2}} + \lan{\xi}^{\f{1}{2}} \lan{\xi_1}^{\f{1}{2}} \lan{\xi-\xi_1}^{\f{1}{2}}.
\end{equation}
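Indeed, \eqref{eq:tauxi} follows from the resonance identity: since $\lan{a+b+c} \lesssim \lan{a} + \lan{b} + \lan{c}$ and $\sqrt{x+y+z} \leq \sqrt{x}+\sqrt{y}+\sqrt{z}$ for $x,y,z\geq 0$,
\begin{align*}
\lan{\tau-\xi^3}^{\f{1}{2}} &\lesssim \big(\lan{\tau_1 - \xi_1^3} + \lan{(\tau-\tau_1) -(\xi-\xi_1)^3} + 3|\xi||\xi_1||\xi-\xi_1|\big)^{\f{1}{2}}\\
&\lesssim \lan{\tau_1 - \xi_1^3}^{\f{1}{2}} + \lan{(\tau-\tau_1) -(\xi-\xi_1)^3}^{\f{1}{2}} + \lan{\xi}^{\f{1}{2}} \lan{\xi_1}^{\f{1}{2}} \lan{\xi-\xi_1}^{\f{1}{2}}.
\end{align*}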
Since \eqref{eq:tuv} is symmetric in the variables $\xi_1$ and $\xi-\xi_1$, we may ignore the middle term on the RHS of \eqref{eq:tauxi}.
To estimate the component~$I_1$ of \eqref{eq:tuv} containing $\lan{\tau_1 -\xi_1^3}^{\f{1}{2}}$, note
\[
\sup_{\xi \xi_1 (\xi-\xi_1)\neq 0} \f{\lan{\xi_1}^s \lan{\xi-\xi_1}^s }{|\xi_1| |\xi-\xi_1|^{s}\lan{\xi}^{s}} <\infty
\]
Thus we apply Plancherel, H\"older's and Sobolev embedding to obtain
\begin{align*}
I_1 &\lesssim \n{\int_{\mathbf{R}} \sum_{\xi-\xi_1\neq 0} \lan{\tau_1 -\xi_1^3}^{\f{1}{2}} |\wt{u}|(\tau_1, \xi_1) \f{1}{|\xi-\xi_1|^{1-s}}|\wt{v}|(\tau-\tau_1, \xi-\xi_1) \, d\tau_1}{L^2_{\tau} l^2_{\xi}}\\
&\lesssim \n{\lan{\tau -\xi^3}^{\f{1}{2}} |\wt{u}|}{L^2_{\tau}l^2_{\xi}} \n{\f{1}{|\nabla|^{1-s}} \mathcal F^{-1}_{\tau,\xi} [|\wt{v}|]}{L^{\infty}_{t} L^{\infty}_{x}}\\
&\lesssim_s \n{u}{X^{0,\f{1}{2}}} \n{\mathcal F^{-1}_{\tau,\xi}[|\wt{v}|]}{L^{\infty}_t L^2_x} \lesssim \n{u}{X^{0,\f{1}{2}}} \n{\wt{v}}{l^2_{\xi} L^1_{\tau}}.
\end{align*}
The other component~$I_2$, containing the last term of \eqref{eq:tauxi}, is more direct.
\begin{align*}
I_2 &\lesssim \n{\int_{\mathbf{R}}\sum_{\xi_1(\xi-\xi_1) \neq 0} \frac{\langle\xi_1\rangle^{s+\f{1}{2}} \langle\xi -\xi_1\rangle^{s+\f{1}{2}}\lan{\xi}^{\f{1}{2}-s}}{\xi_1 (\xi - \xi_1)} |\wt{u}|(\tau_1, \xi_1) |\wt{v}| (\tau - \tau_1,\xi-\xi_1) \, d\tau_1}{L^2_{\tau} l^2_{\xi}}\\
&\lesssim \sup_{\xi_1(\xi-\xi_1)\neq 0} \frac{\langle\xi_1\rangle^{s+\f{1}{2}} \langle\xi -\xi_1\rangle^{s+\f{1}{2}}\lan{\xi}^{\f{1}{2}-s}}{|\xi_1| |\xi - \xi_1|} \n{\int_{\mathbf{R}}\sum_{\xi_1(\xi-\xi_1) \neq 0} |\wt{u}|(\tau_1, \xi_1) |\wt{v}| (\tau - \tau_1,\xi-\xi_1) \, d\tau_1}{L^2_{\tau} l^2_{\xi}}\\
&\lesssim \| \mathcal F^{-1}_{\tau,\xi}[|\wt{u}|] \|_{L^4_{t,x}} \|\mathcal F^{-1}_{\tau,\xi}[|\wt{v}|]\|_{L^4_{t,x}} \lesssim \|u\|_{X^{0,\f{1}{3}}} \|v\|_{X^{0,\f{1}{3}}}
\end{align*}
where the last inequality follows from \eqref{xsb3}. Thus we have the claim.
\end{proof}
Finally, we consider mapping properties of the trilinear operator~$J$ and derive an appropriate function space for $k$. The following lemma implies that $J(f,f,f)\in H^1({\mathbf T})$ and also that $k \in C^0_t([0,T];H^1_x)\subset X^{1,0}_T$.
\begin{lemma}\label{le:jmap}
$J: L^2({\mathbf T}) \times L^2({\mathbf T})\times L^2({\mathbf T}) \to H^1({\mathbf T})$ is a bounded trilinear operator.
\end{lemma}
\begin{proof}
Let $u,v,w\in C^{\infty}({\mathbf T})$. Then $\|J(u,v,w)\|_{H^1}\sim $
\[
\n{\sum_{\tiny \begin{array}{c} (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1 + \xi_2 + \xi_3 = \xi, \qquad \xi_j \neq 0\end{array}} \f{\lan{\xi}^{1-s} \lan{\xi_1}^s \lan{\xi_2}^s \lan{\xi_3}^s}{i\xi_3 (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)} \wh{u}(\xi_1) \wh{v}(\xi_2) \wh{w}(\xi_3)}{l^2_{\xi}}.
\]
Note that the above expression is symmetric in $\xi_1$, $\xi_2$, and that the estimate is easier when $|\xi_3| \sim \max(|\xi|, |\xi_1|, |\xi_2|, |\xi_3|)$. Thus, we can assume without loss of generality that $|\xi|\sim |\xi_1| \gtrsim \max(|\xi_2|, |\xi_3|)$. Then, with the summation restricted to the set $\{(\xi_1, \xi_2,\xi_3)\in \mathbf{Z}^3: (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)\neq 0,\, \xi_1 + \xi_2 + \xi_3 = \xi,\, \xi_1\xi_2\xi_3 \neq 0\}$,
\[
\|J(u,v,w)\|_{H^1} \lesssim \n{\sum_{|\xi|\sim |\xi_1| \gtrsim \max(|\xi_3|, |\xi_2|)} \f{\lan{\xi}^{1} \lan{\xi_2}^s |\wh{u}|(\xi_1) |\wh{v}|(\xi_2) |\wh{w}|(\xi_3)}{\lan{\xi_3}^{1-s} |\xi_1 + \xi_2||\xi_2+ \xi_3||\xi_3+\xi_1|} }{l^2_{\xi}}.
\]
We split the sum into two pieces.
\begin{itemize}
\item[Case 1.] $\xi_1 \sim \xi_2 \sim \xi_3 \sim \xi$. Note that in this case
\begin{equation}\label{eq:xigain}
|\xi_1 + \xi_2||\xi_2+ \xi_3||\xi_3+\xi_1| \gtrsim |\xi| \min(|\xi_1 + \xi_2|, |\xi_2+ \xi_3|, |\xi_3+ \xi_1|)^2.
\end{equation}
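The bound \eqref{eq:xigain} can be seen by noting that the three factors sum to twice the output frequency,
\[
(\xi_1 + \xi_2) + (\xi_2+ \xi_3) + (\xi_3+\xi_1) = 2(\xi_1+\xi_2+\xi_3) = 2\xi,
\]
so the largest factor is at least $\f{2}{3}|\xi|$ in absolute value, while the other two factors are bounded below by the minimum.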
Splitting the sum again into three pieces, we apply Young's and H\"older's inequalities to obtain
\begin{align*}
&\n{\sum_{|\xi_1 + \xi_2| \leq |\xi_2+\xi_3|, |\xi_3+\xi_1|} \lan{\xi_1 + \xi_2}^{-2}|\wh{u}|(\xi_1) |\wh{v}|(\xi_2) |\wh{w}|(\xi_3) }{l^2_{\xi}}\\
&\hspace{30pt}\lesssim \n{ \sum_{\xi_3\in \mathbf{Z}} |\wh{w}|(\xi_3) \lan{\xi-\xi_3}^{-2} \left[ |\wh{u}|*_{\xi_1} |\wh{v}|\right](\xi-\xi_3)}{l^2_{\xi}}\\
&\hspace{30pt}\lesssim \n{ |\wh{w}|}{l^2_{\xi}} \n{ \lan{\xi}^{-2} \left[ |\wh{u}|*_{\xi_1} |\wh{v}|\right]}{l^1_{\xi}} \lesssim \n{\lan{\xi}^{-2}}{l^1_{\xi}} \n{\wh{w}}{l^2_{\xi}} \n{\wh{u}}{l^2_{\xi}} \n{\wh{v}}{l^{2}_{\xi}}.
\end{align*}
\item[Case 2.] If $|\xi|\sim |\xi_1| \gg |\xi_3|$, then
\begin{equation}\label{eq:xisquaregain}
|\xi_1 + \xi_2||\xi_2+ \xi_3||\xi_3+\xi_1| \gtrsim |\xi|^2 \min(|\xi_1 + \xi_2|,\, |\xi_2+ \xi_3|).
\end{equation}
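To see \eqref{eq:xisquaregain}, note that $|\xi_3| \ll |\xi_1| \sim |\xi|$ forces
\[
|\xi_3 + \xi_1| \sim |\xi_1| \sim |\xi| \qquad \text{and} \qquad |\xi_1 + \xi_2| = |\xi - \xi_3| \sim |\xi|,
\]
so at least two of the three factors are comparable to $|\xi|$, while the remaining factor dominates the minimum.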
By splitting the sum, we may assume $|\xi_1 + \xi_2|\geq |\xi_2+ \xi_3|$. We apply H\"older's and Young's inequalities to obtain
\begin{align*}
&\n{\lan{\xi}^{-1+s}\sum_{\xi_1\in \mathbf{Z}}|\wh{u}|(\xi_1)\lan{\xi-\xi_1}^{-1} \left[ |\wh{v}|*_{\xi_2} |\wh{w}|\right](\xi-\xi_1)\ }{l^2_{\xi}}\\
&\hspace{30pt}\lesssim \n{\xi^{-1+s}}{l^2_{\xi}} \n{ \wh{u}}{l^2_{\xi}} \n{\lan{\xi}^{-1} \left[ |\wh{v}|* |\wh{w}|\right]}{l^2_{\xi}}\\
&\hspace{30pt}\lesssim \n{\xi^{-1+s}}{l^2_{\xi}} \n{\lan{\xi}^{-1}}{l^2_{\xi}} \n{ \wh{u}}{l^2_{\xi}}\n{ |\wh{v}|* |\wh{w}|}{l^{\infty}_{\xi}} \lesssim_{s} \n{ \wh{u}}{l^2_{\xi}}\n{ \wh{v}}{l^2_{\xi}}\n{ \wh{w}}{l^2_{\xi}}.
\end{align*}
\end{itemize}
This concludes the proof.
\end{proof}
The following lemma gives the appropriate space for $k$.
\begin{lemma}\label{le:kspace}
Let $u,v,w \in X^{0,1+2\delta}$ for $0<10\delta<1-2s$. Then
\[
\|J(u,v,w)\|_{X^{0,\f{1+\delta}{2}}} \lesssim_{\delta} \|u\|_{X^{0,1+2\delta}} \|v\|_{X^{0,1+2\delta}}\|w\|_{X^{0,1+2\delta}}.
\]
\end{lemma}
\textbf{Remark:} It is likely that the mapping property $J: \left(X^{0,\f{1}{2}}\cap \wt{l^2_{\xi} L^1_{\tau}}\right)^3 \to X^{0,\f{1}{2}}$ also holds. But Lemma~\ref{le:kspace} will suffice for our purpose, since the only argument of $J(\cdot, \cdot,\cdot)$ is $R[f]$, which is smooth in the modulation frequency, as stated in Lemma~\ref{rffree}.
In fact, the proof will imply that
\begin{equation}\label{eq:jspace}
\|J(R[f],R[f],R[f])\|_{X_T^{0, \f{1+\delta}{2}}} \lesssim \|R[f]\|_{X_T^{0,1+2\delta}}\|R[f]\|^2_{X_T^{0,\f{1}{2}+\delta}}.
\end{equation}
Considering \eqref{xsb1} and Lemma~\ref{rffree}, we can interpolate above with Lemma~\ref{le:jmap} to obtain $k \in X_T^{1-\f{2b}{1+\delta},\, b}$ for all $b\in [0,\f{1+\delta}{2}]$ for a fixed $\delta>0$ small.
\begin{proof}
Recall that $\|J(u,v,w)\|_{X^{0,\f{1+\delta}{2}}} = $
\begin{equation*}
\n{\sum_{\tiny \begin{array}{c} (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1 + \xi_2 + \xi_3 = \xi, \qquad \xi_j \neq 0\end{array}} \f{\lan{\tau-\xi^3}^{\f{1+\delta}{2}} \lan{\xi_1}^s \lan{\xi_2}^s \lan{\xi_3}^s}{i\xi_3 \lan{\xi}^{s}(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)} [\wt{u}(\xi_1) *_{\tau} \wt{v}(\xi_2) *_{\tau} \wt{w}(\xi_3)](\tau)}{L^2_{\tau} l^2_{\xi}}.
\end{equation*}
For this trilinear estimate, we use the embedding \eqref{xsb2}.
First we localize each variable in terms of its modulation frequencies, i.e. $\langle \tau_j- \xi_j^3\rangle \sim L_j$ for $j=1,2,3$ and $\langle \tau-\xi^3\rangle \sim L$, where $L, L_j\gtrsim 1$ are dyadic indices. We only need to ensure that the final estimate includes a factor~$L_{\max}^{-\varepsilon}$ for some $\varepsilon>0$, so that we can sum in these indices and also gain a small positive power of $T$ if necessary.
Consider the algebraic identity~\eqref{eq:cubic}. Note that we must have either that $L \lesssim |\xi_1+ \xi_2||\xi_2+\xi_3||\xi_3+\xi_1|$ or $L \lesssim L_{j}$ for some $j=1,2,3$.
First, we consider when $L \lesssim |\xi_1+ \xi_2||\xi_2+\xi_3||\xi_3+\xi_1|$. Apply Plancherel and H\"older followed by \eqref{xsb2} to obtain
\begin{align*}
\|J(u,v,w)\|_{X^{0,\f{1+\delta}{2}}} &\lesssim M \| \widetilde{u_{-\delta}} * ( \widetilde{v_{-\delta}} * \widetilde{w_{-\delta}})\|_{L^2_{\tau}l^2_{\xi}} \sim M \| u_{-\delta}v_{-\delta}w_{-\delta}\|_{L^2_{t,x}}\\
&\lesssim M \| u_{-\delta}\|_{L^6_{t,x}} \| v_{-\delta}\|_{L^6_{t,x}} \| w_{-\delta}\|_{L^6_{t,x}} \lesssim_{\delta} M \|u\|_{X^{0,\frac{1}{2}+\delta}}\|v\|_{X^{0,\frac{1}{2}+\delta}}\|w\|_{X^{0,\frac{1}{2}+\delta}}
\end{align*}
where $\widetilde{v_{-\delta}} (\tau,\xi) := \langle \xi \rangle^{-\delta} \lan{\tau-\xi^3}^{\delta/2} |\widetilde{v}|(\tau,\xi)$ and
\begin{equation*}
M := \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi,\quad \xi_j \neq 0 \end{array}} \frac{\lan{\xi_1}^{s+\delta} \lan{\xi_2}^{s+\delta} }{L_{\max}^{\f{\delta}{2}} \lan{\xi_3}^{1-s-\delta}\lan{\xi}^{s} |(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)|^{\f{1-2\delta}{2}}}.
\end{equation*}
Thus, it suffices to show that there exists an absolute upper bound $C$ for
\begin{equation}\label{eq:mbound}
\frac{\lan{\xi_1}^{s+\delta} \lan{\xi_2}^{s+\delta} }{\lan{\xi_3}^{1-s-\delta}\lan{\xi}^{s} |(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)|^{\f{1-2\delta}{2}}}
\end{equation}
for all $\xi\in \mathbf{Z}\setminus \{0\}$ and $\xi_j$ for $j=1,2,3$ satisfying the same restrictions as before.
As observed in \eqref{eq:xigain} and \eqref{eq:xisquaregain}, the weight $(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)$ gains $\xi_{\max}^2$ most of the time, unless $\xi_1 \sim \xi_2 \sim \xi_3$ in which case it still gains $\xi_{\max}$.
If $\xi_1 \sim \xi_2 \sim \xi_3$, then \eqref{eq:mbound} is bounded by $C\xi_{\max}^{3s+4\delta -\f{3}{2}}$, so that $4\delta < \f{3}{2} - 3s$ (which holds since $10\delta<1-2s$) suffices to bound \eqref{eq:mbound}. Otherwise, \eqref{eq:mbound} is bounded by $C\xi_{\max}^{2s+3\delta - 1}$, which is bounded if $3\delta \leq 1-2s$.
Now, it remains to estimate the case $L \gg |\xi_1+\xi_2| |\xi_2 + \xi_3| |\xi_3+ \xi_1|$. As noted previously, this forces $L \lesssim L_j$ for some $j=1,2,3$ due to \eqref{eq:cubic}. Without loss of generality, assume $L \lesssim L_1=L_{\max}$.
\begin{align*}
\|J(u,v,w)\|_{X^{0,\f{1+\delta}{2}}} &\lesssim M' \| \lan{\xi}^{-\delta} L^{\f{1+\delta}{2}} \widetilde{u_{-\delta}} * ( \widetilde{v_{-\delta}} * \widetilde{w_{-\delta}})\|_{L^2_{\tau}l^2_{\xi}}\\
&\hspace{-50pt}\sim M' L_1^{1+\delta} L^{-\f{1+\delta}{2}} \sup_{\|z\|_{L^2_{\tau}l^2_{\xi}}=1} \left| \int_{\tau_1+\tau_2+\tau_3=\tau} \sum_{\xi_1+\xi_2+\xi_3=\xi} \widetilde{u_{-\delta}}\, \widetilde{v_{-\delta}}\, \widetilde{w_{-\delta}}\, \lan{\xi}^{-\delta} z(\tau,\xi) d\sigma\right| \\
&\hspace{-50pt} \lesssim M' L_1^{1+\delta}\|\wt{u}\|_{L^2_{\tau} l^2_{\xi}} \sup_{\|z\|_{L^2_{\tau}l^2_{\xi}}=1} \n{ (L^{-\f{1+\delta}{2}}\lan{\cdot}^{-\delta}z) * ( \widetilde{v_{-\delta}}*\widetilde{w_{-\delta}})}{L^2_{\tau} l^2_{\xi}}\\
&\hspace{-50pt}\lesssim M' L_{\max}^{-\delta} \|u\|_{X^{0,1+2\delta}} \sup_{\|z\|_{L^2_{\tau}l^2_{\xi}}=1} \n{\mathcal{F}_{\tau,\xi}^{-1} \left[\lan{\tau-\xi^3}^{-\frac{1+\delta}{2}}\langle \xi \rangle^{-\delta}z\right] v_{-\delta} w_{-\delta}}{L^2_{t,x}}\\
&\hspace{-50pt}\lesssim_{\delta} M' \|u\|_{X^{0,1+2\delta}}\|v\|_{X^{0,\frac{1}{2}+\delta}}\|w\|_{X^{0,\frac{1}{2}+\delta}}
\end{align*}
where we have used H\"older and \eqref{xsb2} for the last inequality, and
\[
M' := \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi,\quad \xi_j \neq 0 \end{array}} \frac{\lan{\xi_1}^{s+\delta} \lan{\xi_2}^{s+\delta} }{ \lan{\xi_3}^{1-s-\delta}\lan{\xi}^{s-\delta} |(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)|}.
\]
It is easy to see that $M' \lesssim \eqref{eq:mbound}$, so we obtain the desired estimate.
\end{proof}
Finally, based on Lemmas~\ref{rffree}-\ref{le:kspace}, we assign appropriate function spaces for $R[f]$, $h$, $k$ as follows:
\[
R[f] \in X^{0,1+2\delta}_T; \qquad h \in C^0_t H^1_x \cap \left(\cap_{b\in [0,\f{1}{2}]} X_T^{1-2b, b}\right); \qquad k \in \cap_{b\in [0,\f{1+\delta}{2}]} X_T^{1-\f{2b}{1+\delta}, b}
\]
where $0<\delta <(1-2s)/10$ is fixed. Note that the function space for $k$ continuously embeds in the space for $h$, i.e. $k$ is more regular than $h$. In our multilinear estimates, we will assume both $h,k \in X^{1, 0} \cap X^{0,\f{1}{2}}$ and $R[f]\in X^{0,\f{1}{2}}$ to reduce the number of necessary estimates.
\subsection{Estimates for the non-linearities.}\label{sec:nonlinearity}
Consider the IVP~\eqref{eqwper}. We want to give a contraction argument for $w$ in $Y^{\gamma, \f{1}{2}+\delta}$. First consider the nonlinearity~$\mathcal{NR}$. The estimates \eqref{eq:wff}-\eqref{eq:ffh} will be useful in managing the terms \eqref{eq:nr1}-\eqref{eq:nr4} respectively.
\begin{lemma}\label{nonres}
Let $u,v,w$ be in appropriate function spaces. Given any $0\leq s <1/2$ and $0<10\delta < 1-2s$, we have for all $0<\gamma <1-10\delta$
\begin{align}
\n{\mathcal{F}_{\xi}^{-1}[\mathcal{NR}(u,v,w)] }{X_T^{\gamma,-\frac{1}{2}+\delta}} &\lesssim_{\delta} T^{\delta} \|u\|_{X^{\gamma,\f{1}{2}+\delta}} \|v\|_{X^{0,\frac{1}{2}}} \|w\|_{X^{0,\f{1}{2}}};\label{eq:wff}\\
\n{\mathcal{F}_{\xi}^{-1}[\mathcal{NR}(u,v,w)] }{X_T^{\gamma,-\frac{1}{2}+\delta}} &\lesssim_{\delta} T^{\delta} \|u\|_{X^{1, 0} \cap X^{0,\f{1}{2}}} \|v\|_{X^{0,\frac{1}{2}}} \|w\|_{X^{0,\f{1}{2}}}; \label{eq:hff}\\
\n{\mathcal{F}_{\xi}^{-1}[\mathcal{NR}(u,v,w)] }{X_T^{\gamma,-\frac{1}{2}+\delta}} &\lesssim_{\delta} T^{\delta} \|u\|_{X^{0,\f{1}{2}}} \|v\|_{X^{0,\frac{1}{2}}} \|w\|_{X^{\gamma,\f{1}{2}+\delta}};\label{eq:ffw}\\
\n{\mathcal{F}_{\xi}^{-1}[\mathcal{NR}(u,v,w)] }{X_T^{\gamma,-\frac{1}{2}+\delta}} &\lesssim_{\delta} T^{\delta} \|u\|_{X^{0,\f{1}{2}}} \|v\|_{X^{0,\frac{1}{2}}} \|w\|_{X^{1, 0} \cap X^{0,\f{1}{2}}}.\label{eq:ffh}
\end{align}
\end{lemma}
\textbf{Remark:} Note that one may choose any $\gamma<1$ for the estimates above. However, reducing the size of $\delta$ is penalized by a shorter contraction time, which in turn leads to a larger growth bound. Moreover, the implicit constant coming from \eqref{xsb2} goes to infinity, as shown by Bourgain~\cite{Bour}. It appears that the growth bound explodes as $\gamma \nearrow 1$.
\begin{proof}
Since this is a trilinear estimate, we will need to use the $L^6$~embedding~\eqref{xsb2} in order to maximize our gain. However, this has the disadvantage when one wants to produce a large power of $T$. One can either sacrifice the derivative gain to obtain a better polynomial-in-time growth bound or sacrifice the better growth bound to obtain the maximal derivative gain. The goal of this paper better aligns with the latter path.
As in the proof of Lemma~\ref{le:kspace}, we localize each variable in terms of its modulation frequencies so that $\langle \tau_j- \xi_j^3\rangle \sim L_j$ for $j=1,2,3$ and $\langle \tau-\xi^3\rangle \sim L$, where $L, L_j\gtrsim 1$ are dyadic indices. We aim to estimate each localized component by $C L_{\max}^{-\delta}$ for a constant~$C=C(\delta)$. This will suffice for summing all the components as well as gaining the factor~$T^{\delta}$ (one can accomplish both by picking $\delta'$ with $0<10\delta <10\delta' <1-2s$). We remark that the constant $10$ in front of $\delta$ is somewhat arbitrary and is subject to improvement.
Recall $L_{\max} \gtrsim |(\xi_1+ \xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)|$ due to the algebraic identity~\eqref{eq:cubic}. We will use this fact throughout the proof.
\textbf{Proof of \eqref{eq:wff}:} If $L\sim L_{\max}$, we have $\n{\mathcal{F}_{\xi}^{-1}[\mathcal{NR}(u,v,w)] }{X_T^{\gamma,-\frac{1}{2}+\delta}}= $
\begin{align*}
& \n{ \sum_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi, \quad \xi_j \neq 0 \end{array} } \frac{\langle \xi_1\rangle^s \langle\xi_2\rangle^s \langle \xi_3\rangle^{s}\langle \xi\rangle^{\gamma-s}}{i\xi_3\langle\tau-\xi^3 \rangle^{\frac{1}{2}-\delta}} [\widetilde{u}(\xi_1)*_{\tau} \widetilde{v}(\xi_2)*_{\tau} \widetilde{w}(\xi_3)](\tau)}{L^2_{\tau} l^2_{\xi}(\mathbf{Z}\setminus \{0\})}\\
&\lesssim_{\delta} M_1 \n{ \left[\lan{\cdot}^{\gamma-\delta}|\wt{u}|\right] * ( \widetilde{v^{-\delta}} * \widetilde{w^{-\delta}})}{L^2_{\tau}l^2_{\xi}} \sim M_1 \n{ \left[\lan{\nabla}^{\gamma-\delta} \mathcal F^{-1}_{\tau,\xi}[|\wt{u}|]\right] \, v^{-\delta} \,w^{-\delta} }{L^2_{t,x}}\\
&\lesssim M_1 \n{ \left[\lan{\nabla}^{\gamma-\delta} \mathcal F^{-1}_{\tau,\xi}[|\wt{u}|]\right] }{L^6_{t,x}} \| v^{-\delta}\|_{L^6_{t,x}} \| w^{-\delta}\|_{L^6_{t,x}}\\ &\lesssim_{\delta} M_1\| u\|_{X^{\gamma,\f{1}{2}+\delta}} \| v\|_{X^{0,\f{1}{2}}} \| w\|_{X^{0,\f{1}{2}}}
\end{align*}
where $\widetilde{v^{-\delta}} (\tau,\xi) := \langle \xi \rangle^{-\delta} \lan{\tau-\xi^3}^{-\delta/2} |\widetilde{v}|(\tau,\xi)$ and
\begin{align*}
M_1 &:= \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi,\quad \xi_j \neq 0 \end{array}} \f{\langle\xi_2\rangle^{s+\delta} \langle \xi_3\rangle^{s+\delta}\langle \xi\rangle^{\gamma-s}}{|\xi_3|\lan{\xi_1}^{\gamma-s-\delta} L_{\max}^{\frac{1}{2}-2\delta}} \\
&\lesssim \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi \end{array}} \frac{L_{\max}^{-\delta} \langle\xi_2\rangle^{s+\delta} \langle \xi\rangle^{\gamma-s}}{\lan{\xi_1}^{\gamma-s-\delta}\lan{\xi_3}^{1-s-\delta} (|\xi_1+ \xi_2||\xi_2+\xi_3||\xi_3+\xi_1|)^{\frac{1}{2}-3\delta} }.
\end{align*}
It now suffices to show that there exists an absolute constant $C$ which bounds
\begin{equation}\label{mpdef}
\frac{\langle\xi_2\rangle^{s+\delta} \langle \xi\rangle^{\gamma-s}}{\lan{\xi_1}^{\gamma-s-\delta}\lan{\xi_3}^{1-s-\delta} (|\xi_1+ \xi_2||\xi_2+\xi_3||\xi_3+\xi_1|)^{\frac{1}{2}-3\delta} }
\end{equation}
for $\xi, \xi_1,\xi_2,\xi_3\in \mathbf{Z}\setminus\{0\}$ satisfying $(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0$ and $\xi_1+\xi_2+\xi_3=\xi$.
First, consider $|\xi|\sim |\xi_1|\sim |\xi_2| \sim |\xi_3|$, when $|\xi_1+ \xi_2||\xi_2+\xi_3||\xi_3+\xi_1| \gtrsim \xi_{\max}$. In this case, $\eqref{mpdef} \lesssim \xi_{\max}^{2s+6\delta -\f{3}{2}}$ is easily bounded. For the remaining cases, we can assume $|\xi_1+ \xi_2||\xi_2+\xi_3||\xi_3+\xi_1| \gtrsim \xi_{\max}^2$.
As expected, the most dangerous scenario is $|\xi|\sim |\xi_2| \gg \max(|\xi_1|,|\xi_3|)$. In this case, we have $\eqref{mpdef} \lesssim \xi_{\max}^{\gamma+7\delta-1}$. \emph{This is the reason for the condition~$\gamma<1-10\delta$}. Note that the other cases are strictly nicer and naturally follow from this case.
On the other hand, if $L\ll L_{\max}$, we can reduce to the first case via an argument similar to the one in the proof of Lemma~\ref{le:kspace}. For example, let $L_2=L_{\max}$. Using duality,
\begin{align*}
\n{\mathcal{F}_{\xi}^{-1}[\mathcal{NR}(u,v,w)] }{X^{\gamma,-\frac{1}{2}+\delta}} &\lesssim M_1^* L_{\max}^{\f{1}{2}-\delta} \n{\frac{\left[\lan{\cdot}^{\gamma-\delta}|\widetilde{u}|\right]*\left[L_2^{-\f{\delta}{2}} |\widetilde{v}|\right] * \widetilde{w^{\delta}}](\tau,\xi)}{\lan{\xi}^{\delta}\langle \tau - \xi^3\rangle^{\frac{1}{2}-\delta}}}{L^2_{\tau} l^2_{\xi}}\\
&\hspace{-120pt}\lesssim M_1^* L_{2}^{\f{1-3\delta}{2}} \sup_{\|\wt{z}\|_{L^2_{\tau}l^2_{\xi}}=1} \left| \int_{\tau_1+\tau_2+\tau_3=\tau} \sum_{\xi_1+\xi_2+\xi_3=\xi} \left[\lan{\xi_1}^{\gamma-\delta}|\widetilde{u}|\right] |\widetilde{v}|\,\widetilde{w^{\delta}}\left[ \f{\lan{\xi}^{-\delta}\wt{z}(\tau,\xi)}{\langle \tau-\xi^3 \rangle^{\frac{1}{2}-\delta}}\right] d\sigma\right| \\
&\hspace{-120pt}\lesssim M_1^* L_{2}^{\f{1}{2}}\sup_{\|z\|_{L^2_{\tau}l^2_{\xi}}=1} \n{\left(|\widetilde{v}|\right) \cdot\left(\left[\lan{\cdot}^{\gamma-\delta} |\widetilde{u}|\right]*\widetilde{w^{\delta}}*\left[L^{-\f{1}{2}}\wt{z^{\delta}}\right]\right)}{L^1_{\tau}l^1_{\xi}}\\
&\hspace{-120pt}\lesssim M_1^* \|v\|_{X^{0,\frac{1}{2}}} \sup_{\|z\|_{L^2_{\tau}l^2_{\xi}}=1} \n{\left[\lan{\nabla}^{\gamma-\delta} \mathcal F^{-1}_{\tau,\xi}[|\wt{u}|] \right]\,w^{\delta}\,\left[L^{-\f{1}{2}}z^{\delta}\right] }{L^2_{t,x}}\\
&\hspace{-120pt}\lesssim_{\delta} M_1^* \|u\|_{X^{0,\frac{1}{2}+\delta}}\|v\|_{X^{0,\frac{1}{2}}}\|w\|_{X^{0,\frac{1}{2}}}
\end{align*}
where we have used H\"older and \eqref{xsb2} for the last inequality, and $M_1^*$ is exactly $ \lan{\xi}^{\delta} \lan{\xi_2}^{-\delta} M_1$. Such a transfer only results in a harmless exchange of $\lan{\xi}^{\delta}$ for $\lan{\xi_2}^{\delta}$. This proves \eqref{eq:wff}.
We turn next to the proof of \eqref{eq:ffw}.
\textbf{Proof of \eqref{eq:ffw}:} The proof is essentially identical to that of \eqref{eq:wff}. In place of $M_1$, we have
\begin{align*}
M_2 &:= \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi,\quad \xi_j \neq 0 \end{array}} \f{\lan{\xi_1}^{s+\delta}\langle\xi_2\rangle^{s+\delta} \langle \xi_3\rangle^{s+\delta}\langle \xi_3\rangle^{\gamma-s}}{|\xi_3|\lan{\xi_3}^{\gamma-\delta} L_{\max}^{\frac{1}{2}-2\delta}} \\
&\lesssim \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi \end{array}} \frac{L_{\max}^{-\delta} \lan{\xi_1}^{s+\delta}\langle\xi_2\rangle^{s+\delta} \langle \xi\rangle^{\gamma-s}}{\lan{\xi_3}^{1+\gamma-s-2\delta} (|\xi_1+ \xi_2||\xi_2+\xi_3||\xi_3+\xi_1|)^{\frac{1}{2}-3\delta} }.
\end{align*}
We omit most cases since they follow from the same estimates as for $M_1$. The only case which differs from the estimate of $M_1$ is when $\xi_3 \ll \xi_{\max}$.
First, we consider $\xi_1\sim \xi_2\sim \xi \gg \xi_3$. Note that in this case,
\begin{equation}\label{eq:save}
|\xi_1 + \xi_2| |\xi_2 +\xi_3| |\xi_3+\xi_1| = |\xi-\xi_3| |\xi_2 +\xi_3| |\xi_3+\xi_1|
\end{equation}
which has size $\xi_{\max}^3$ under our assumptions. Thus, $M_2 \lesssim L_{\max}^{-\delta} \xi_{\max}^{\gamma + s-\f{3}{2} + 11\delta}$.
We should also consider $\xi_1 \sim \xi_2 \gg \max(|\xi|,|\xi_3|)$ and $\xi_1 \sim \xi \gg \max(|\xi_2|,|\xi_3|)$. One needs to be careful here, since we cannot assume that the small frequencies are of comparable size.
When $\xi_1 \sim \xi_2 \gg \max(|\xi|,|\xi_3|)$, we use \eqref{eq:save}. Noting that $\lan{\xi_3} |\xi-\xi_3|\gtrsim |\xi|$, we can write $\lan{\xi_3}|\xi-\xi_3| |\xi_2 +\xi_3| |\xi_3+\xi_1| \gtrsim \xi_{\max}^2 |\xi|$. Then,
\[
M_2 \lesssim L^{-\delta}_{\max} \xi_{\max}^{2s+8\delta -1} |\xi|^{\gamma-s +3\delta - \f{1}{2}} \leq L^{-\delta}_{\max}|\xi|^{s+\gamma+11\delta -\f{3}{2}}.
\]
Finally, when $\xi_1 \sim \xi \gg \max(|\xi_2|,|\xi_3|)$, a similar argument gives
\[
M_2 \lesssim L^{-\delta}_{\max} \xi_{\max}^{\gamma + 7\delta - 1} |\xi_2|^{s+4\delta -\f{1}{2}}.
\]
Thus, with $\gamma <1-10\delta$, the desired estimate holds. This exhausts the cases when $\xi_3 \ll \xi_{\max}$. For the other cases, refer to the proof of \eqref{eq:wff}.
\textbf{Proof of \eqref{eq:hff}:} The idea of this estimate is very similar to that of \eqref{eq:wff}. The new ingredient here is that in the high-frequency regime one can choose to trade the modulation frequency for a spatial derivative.
Begin by writing $\n{\mathcal{F}_{\xi}^{-1}[\mathcal{NR}(u,v,w)] }{X_T^{\gamma,-\frac{1}{2}+\delta}}= $
\[
\n{ \sum_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi, \quad \xi_j \neq 0 \end{array} } \frac{\langle \xi_1\rangle^s \langle\xi_2\rangle^s \langle \xi_3\rangle^{s}\langle \xi\rangle^{\gamma-s}}{i\xi_3\langle\tau-\xi^3 \rangle^{\frac{1}{2}-\delta}} [\widetilde{u}(\xi_1)*_{\tau} \widetilde{v}(\xi_2)*_{\tau} \widetilde{w}(\xi_3)](\tau)}{L^2_{\tau} l^2_{\xi}(\mathbf{Z}\setminus \{0\})}.
\]
We split the sum into two regions: 1) $L_{\max} \gtrsim \xi_{\max}^2 \min(|\xi_1 + \xi_2|, |\xi_2+\xi_3|,|\xi_3+\xi_1|)$; 2) otherwise. In the first case, the gain from the modulation frequency will be enough, so we will use the $X^{0,\f{1}{2}}$~norm. In the second case, note that $\xi\sim \xi_1 \sim \xi_2 \sim \xi_3$ must hold, since otherwise \eqref{eq:xisquaregain} forces the first condition to hold. Therefore, $\xi_1$ is a high frequency in this case, so we will use the $X^{1, 0}$~norm.
First, assume $L_{\max} \gtrsim \xi_{\max}^2 \min(|\xi_1 + \xi_2|, |\xi_2+\xi_3|,|\xi_3+\xi_1|)$. In this case, the computation is almost identical to the proof of \eqref{eq:wff} and it suffices to estimate
\begin{align*}
M_3 &:= \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi,\quad \xi_j \neq 0 \end{array}} \f{\lan{\xi_1}^{s+\delta} \langle\xi_2\rangle^{s+\delta} \langle \xi_3\rangle^{s+\delta}\langle \xi\rangle^{\gamma-s}}{|\xi_3| L_{\max}^{\frac{1}{2}-2\delta}} \\
&\lesssim \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi \end{array}} \frac{L_{\max}^{-\delta} \lan{\xi_1}^{s+\delta}\langle\xi_2\rangle^{s+\delta} \langle \xi\rangle^{\gamma-s}}{\lan{\xi_3}^{1-s-\delta} \xi_{\max}^{1-6\delta} \min(|\xi_1+\xi_2|,|\xi_2 + \xi_3|, |\xi_3 + \xi_1|)^{\f{1}{2}-3\delta} }.
\end{align*}
If $\xi_3 \sim \xi_{\max}$, then $M_3 \lesssim L_{\max}^{-\delta} \xi_{\max}^{\gamma+2s -2 +9\delta}$ is bounded. Otherwise $\xi_3 \ll \xi_{\max}$, and the desired estimate follows from the arguments in the proof of \eqref{eq:ffw}.
Next, we can assume $L_{\max} \ll \xi_{\max}^2 \min(|\xi_1 + \xi_2|, |\xi_2+\xi_3|,|\xi_3+\xi_1|)$. As stated above, this forces all frequencies to be at the same level. Following previous computations,
\begin{align*}
\n{\mathcal{F}_{\xi}^{-1}[\mathcal{NR}(u,v,w)] }{X^{\gamma,-\frac{1}{2}+\delta}} &\lesssim M_3^* \n{\frac{\left(\left[\lan{\cdot}|\widetilde{u}|\right]*\widetilde{v^{\delta}} * \widetilde{w^{\delta}}\right)(\tau,\xi)}{\lan{\xi}^{\delta} L^{\frac{1}{2}+\delta}}}{L^2_{\tau} l^2_{\xi}}\\
&\lesssim M_3^* \|u\|_{X^{1,0}} \sup_{\|z\|_{L^2_{\tau}l^2_{\xi}}=1} \n{v^{\delta}\,w^{\delta}\,\left[L^{-\f{1+\delta}{2}}z^{\delta}\right] }{L^2_{t,x}}\\
&\lesssim_{\delta} M_3^* \|u\|_{X^{1,0}}\|v\|_{X^{0,\frac{1}{2}}}\|w\|_{X^{0,\frac{1}{2}}}
\end{align*}
where
\begin{align*}
M_3^* &:= \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi,\quad \xi_j \neq 0 \end{array}} \f{L_{\max}^{3\delta} \langle\xi_2\rangle^{s+\delta} \langle \xi_3\rangle^{s+\delta}\langle \xi\rangle^{\gamma-s+\delta}}{\lan{\xi_1}^{1-s}|\xi_3| }\\
&\lesssim \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi,\quad \xi_j \neq 0 \end{array}} \f{L_{\max}^{-\delta}\xi_{\max}^{12\delta} \langle\xi_2\rangle^{s+\delta} \langle \xi\rangle^{\gamma-s+\delta}}{\lan{\xi_1}^{1-s}\lan{\xi_3}^{1-s-\delta} }.
\end{align*}
Since $\xi\sim \xi_1\sim \xi_2 \sim \xi_3$, we have $M_3^* \lesssim L_{\max}^{-\delta}\xi_{\max}^{\gamma +2s +15\delta -2}$. This proves \eqref{eq:hff}.
The estimate \eqref{eq:ffh} can be proved in exactly the same way, so we omit the proof.
\end{proof}
The next lemma deals with the \emph{resonant} term~$\mathcal{R}$ in \eqref{eqwper}. To reduce the number of cases, we ignore the complex conjugation. In particular, this means that $\mathcal{R}(\cdot, \cdot,\cdot)$ can be considered to be symmetric in all three variables. This does not cause any problem in the proof, since we do not take advantage of cancellations here.
\begin{lemma}\label{res}
Given $0\leq s<1/2$, $0<\gamma\leq 1$ and $0<\delta<1/2$, the following estimate holds for arbitrary functions $u,v\in L^{\infty}_t([0,T];L^2_x({\mathbf T}))$ and $w\in L^{\infty}_t([0,T];H^{\gamma}_x({\mathbf T}))$.
\[
\n{\mathcal F^{-1}\left[\mathcal{R}(u,v,w)\right]}{X_T^{\gamma, -\f{1}{2}+\delta}} \lesssim_{\delta} T^{\f{1}{2}-\delta}\|u\|_{L^{\infty}_t([0,T];L^2_x({\mathbf T}))} \|v\|_{L^{\infty}_t([0,T];L^2_x({\mathbf T}))} \|w\|_{L^{\infty}_t([0,T];H^{\gamma}_x({\mathbf T}))}.
\]
\end{lemma}
\textbf{Remark:} The estimate above suffices to control all the resonant terms because
\[
R[f]\in L^{\infty}_T L^2_x;\quad h,k \in L^{\infty}_T L^2_x \cap L^{2}_T H^{\gamma}_x; \quad w\in X_T^{\gamma, \f{1}{2}+\delta}\subset L^{2}_T H^{\gamma}_x \cap C^0_t H^{\gamma}_x.
\]
\begin{proof}
In the resonant part, we do not benefit from the modulation frequencies, but we can instead extract a large power of $T$. We have
\begin{align*}
\n{\mathcal F^{-1}\left[\mathcal{R}(u,v,w)\right]}{X_T^{\gamma, -\f{1}{2}+\delta}} &\lesssim_{\delta} T^{\f{1}{2}-\delta} \n{\frac{\langle \xi \rangle^{2s+\gamma}}{\xi} |\widehat{u}| |\wh{v}| |\wh{w}|}{L^2_T l^2_{\xi}}\\
&\lesssim T^{\f{1}{2}-\delta} \n{|\widehat{u}| |\wh{v}| \left[\langle \xi \rangle^{\gamma} |\wh{w}|\right]}{L^2_T l^2_{\xi}}\\
&\lesssim T^{\f{1}{2}-\delta}\n{\widehat{u}}{L^{\infty}_T l^{\infty}_{\xi}}\n{\widehat{v}}{L^{\infty}_T l^{\infty}_{\xi}} \n{\left[\langle \xi \rangle^{\gamma} |\wh{w}|\right]}{L^2_T l^2_{\xi}}\\
&\lesssim T^{\f{1}{2}-\delta}\n{u}{L^{\infty}_T L^2_{x}} \n{v}{L^{\infty}_T L^{2}_{x}} \n{w}{L^2_T H^{\gamma}_x}.
\end{align*}
\end{proof}
Finally, it remains to estimate the quintilinear term~$\mathcal{Q}$ in \eqref{eqwper}. As observed before, this will be essentially a trilinear estimate, so we will use the $L^6$~embedding.
\begin{lemma} \label{le:quintic}
Given $0\leq s <1/2$, $0<10\delta<1-2s$ and $0<\gamma\leq 1$, the following estimates hold for $R[f]$ and any $u\in X^{0,\f{1}{2}+\delta}$.
\begin{align}
\n{J(\mathcal F^{-1}_{\xi}[\mathcal{R}(R[f],R[f],R[f])],u,u)}{X_T^{\gamma, -\f{1}{2}+\delta}} &\lesssim_{\delta} T^{\delta} \|f\|_{L^2}^2 \|R[f]\|_{X^{0,\f{1}{2}+\delta}} \|u\|^2_{X^{0,\f{1}{2}+\delta}}\label{eq:j1}\\
\n{J(u,u,\mathcal F^{-1}_{\xi}[\mathcal{R}(R[f],R[f],R[f])])}{X_T^{\gamma, -\f{1}{2}+\delta}} &\lesssim_{\delta} T^{\delta} \|f\|_{L^2}^2\|R[f]\|_{X^{0,\f{1}{2}+\delta}} \|u\|^2_{X^{0,\f{1}{2}+\delta}}\label{eq:j2}
\end{align}
\end{lemma}
\begin{proof}
First, we consider \eqref{eq:j1}. As in Lemma~\ref{nonres}, we decompose the modulation frequency such that $\lan{\tau_j - \xi_j^3} \sim L_j$ and $\lan{\tau-\xi^3} \sim L$ for dyadic indices~$L, L_j \geq 1$. We will show that each modulational component is bounded by $C\|f\|_{L^2}^2 L_{\max}^{-\delta}$ for some constant $C = C(\delta)$.
First we consider the case when $L\sim L_{\max}$. Using Plancherel, H\"older and $L^6$ embedding~\eqref{xsb2}, we show $\n{J(\mathcal F^{-1}_{\xi}[\mathcal{R}(R[f],R[f],R[f])],u,u)}{X_T^{\gamma, -\f{1}{2}+\delta}} = $
\begin{align*}
&\n{\sum_{\tiny \begin{array}{c} (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1 + \xi_2 + \xi_3 = \xi, \qquad \xi_j \neq 0\end{array}} \f{\lan{\xi_1}^{3s} \lan{\xi_2}^s \lan{\xi_3}^s |\wh{f}(\xi_1)|^2\left[\wh{R[f]}(\xi_1) *_{\tau} \wh{u}(\xi_2) *_{\tau}\wh{u}(\xi_3)\right]}{-\lan{\tau- \xi^3}^{\f{1}{2}-\delta}\xi_1 \xi_3 \lan{\xi}^s (\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)} }{L^2_{\tau} l^2_{\xi}}\\
&\lesssim_{\delta} M \n{ \wt{R[f]^{-\delta}}* ( \widetilde{v^{-\delta}} * \widetilde{w^{-\delta}})}{L^2_{\tau}l^2_{\xi}} \lesssim_{\delta} M\| R[f]\|_{X^{0,\f{1}{2}+\delta}} \| v\|_{X^{0,\f{1}{2}+\delta}} \| w\|_{X^{0,\f{1}{2}+\delta}}
\end{align*}
where $v^{-\delta}$ is defined by $\wt{v^{-\delta}}(\tau,\xi) = \lan{\xi}^{-\delta}|\wt{v}(\tau,\xi)|$ and
\begin{align*}
M &:= \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi,\quad \xi_j \neq 0 \end{array}} \frac{\lan{\xi_1}^{3s+\delta} \lan{\xi_2}^{s+\delta}\lan{\xi_3}^{s+\delta}\lan{\xi}^{\gamma-s} \left|\wh{f}(\xi_1)\right|^2 }{L_{\max}^{\f{1}{2}-\delta} |\xi_1| |\xi_3| |(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)|}\\
&\lesssim \sup_{\tiny \begin{array}{c}(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0\\ \xi_1+\xi_2+\xi_3=\xi,\quad \xi_j \neq 0 \end{array}} \frac{L_{\max}^{-\delta}\|f\|_{L^2}^2\lan{\xi_1}^{3s-1+\delta} \lan{\xi_2}^{s+\delta}\lan{\xi}^{\gamma-s}}{\lan{\xi_3}^{1-s-\delta} |(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)|^{\f{3}{2}-2\delta}}.
\end{align*}
So it suffices to find an absolute constant $C$ to bound
\begin{equation}\label{eq:lastbound}
\frac{\lan{\xi_1}^{3s-1+\delta} \lan{\xi_2}^{s+\delta}\lan{\xi}^{\gamma-s}}{\lan{\xi_3}^{1-s-\delta} |(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)|^{\f{3}{2}-2\delta}}
\end{equation}
for $\xi, \xi_1, \xi_2,\xi_3\in {\mathbf Z}\setminus\{0\}$ satisfying $(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)\neq 0$ and $\xi_1+\xi_2+\xi_3=\xi$.
Recalling $|(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)|\gtrsim \xi_{\max}$, we have $\eqref{eq:lastbound} \lesssim \xi_{\max}^{\gamma +s +3\delta -3/2}$, which is bounded.
We remark that when $L\ll L_{\max}$, then we can transfer $L_{\max}$ from elsewhere as done in Lemma~\ref{nonres} possibly at the cost of $\xi_{\max}^{\delta}$.
To show \eqref{eq:j2}, we follow the same computations and reduce to finding the bound for
\begin{equation}\label{eq:lastbound2}
\frac{\lan{\xi_1}^{s+\delta} \lan{\xi_2}^{s+\delta}\lan{\xi}^{\gamma-s}}{\lan{\xi_3}^{2-3s-\delta} |(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)|^{\f{3}{2}-2\delta}}.
\end{equation}
Using $|(\xi_1 + \xi_2)(\xi_2+ \xi_3)(\xi_3+\xi_1)|\gtrsim \xi_{\max}$, we obtain $\eqref{eq:lastbound2} \lesssim \xi_{\max}^{\gamma+s+4\delta -\f{3}{2}}$. This proves \eqref{eq:j2}.
\end{proof}
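The elementary facts about the resonance function used repeatedly above, namely the identity $\xi^3-\xi_1^3-\xi_2^3-\xi_3^3=3(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)$ for $\xi=\xi_1+\xi_2+\xi_3$ and the lower bound $|(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_3+\xi_1)|\gtrsim \xi_{\max}$ off the resonant set, can also be confirmed by a brute-force search over integer frequencies. The following Python sketch (illustrative only, not part of the argument) checks both, with the explicit constant $2/3$ in the lower bound:

```python
# Brute-force check (illustrative only) of two elementary facts used above:
# for xi = xi1 + xi2 + xi3, with a = xi1+xi2, b = xi2+xi3, c = xi3+xi1,
#   (i)  xi^3 - xi1^3 - xi2^3 - xi3^3 = 3abc          (resonance identity)
#   (ii) abc != 0  implies  3|abc| >= 2 * xi_max.
from itertools import product

def check(N):
    rng = [k for k in range(-N, N + 1) if k != 0]
    for x1, x2, x3 in product(rng, repeat=3):
        xi = x1 + x2 + x3
        a, b, c = x1 + x2, x2 + x3, x3 + x1
        assert xi ** 3 - x1 ** 3 - x2 ** 3 - x3 ** 3 == 3 * a * b * c
        if a * b * c == 0:
            continue  # resonant configurations are excluded from NR
        xi_max = max(abs(x1), abs(x2), abs(x3), abs(xi))
        # 2*xi_j = +-a +-b +-c, so xi_max <= (3/2)max(|a|,|b|,|c|) <= (3/2)|abc|
        assert 3 * abs(a * b * c) >= 2 * xi_max
    return True
```

The constant $2/3$ comes from $2\xi_j=\pm a\pm b\pm c$: hence $\xi_{\max}\leq \tfrac{3}{2}\max(|a|,|b|,|c|)\leq \tfrac{3}{2}|abc|$ once $a$, $b$, $c$ are nonzero integers.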
\subsection{Local theory}
\label{sec:local}
We have now obtained all the necessary estimates to perform a contraction argument on $Y^{\gamma,\frac{1}{2}+\delta}$ for $0<10\delta<1-2s$ and $\gamma\leq 1-10\delta$. In this section, we assume $T>0$ is small.
\begin{proposition}\label{pro:local}
The PBIVP \eqref{eqwper} is locally well-posed in $H^{\gamma}$, where $\gamma$ satisfies $\gamma\leq 1-10\delta$ for some $\delta>0$ such that $0<10\delta<1-2s$. Furthermore, there exists $T = O \left( \|f\|_{L^2}^{-10/\delta}\right)$ such that the solution $w(t)$ for $t\in [0,T]$ solves \eqref{eqwper} and satisfies
\[
\|w\|_{X^{\gamma,\f{1}{2}+\delta}} \sim \|w(0)\|_{H^{\gamma}}.
\]
\end{proposition}
\begin{proof}
We write \eqref{eqwper} via Duhamel's formula,
\begin{equation}\label{duhamel}
w(t) = \eta(t) e^{-t\partial_x^3} w_0 + \eta(t/T)\int_0^t e^{-(t-s)\partial_x^3} \left[\mathcal{NR} + \mathcal{R} + \mathcal{Q}\right]\, ds
\end{equation}
where $w_0 = w(0)$ is given in \eqref{eqwper}. We define the map~$\Lambda_T: Y^{\gamma,\f{1}{2}+\delta} \to Y^{\gamma,\f{1}{2}+\delta}$ so that $\Lambda_T(w)$ is the RHS of \eqref{duhamel} for an arbitrary function $w\in Y^{\gamma,\f{1}{2}+\delta}$.
Let $\mathcal{B}$ be a ball in $Y^{\gamma,\frac{1}{2}+\delta}$ centered at $\eta(t) e^{-t\partial_x^3} w_0$ with small radius. We now show that for $T$ sufficiently small, $\Lambda_T$ is a contraction on $\mathcal{B}$. By Propositions~\ref{xsb} and \ref{timeloc}, we have
\[
\n{\Lambda_T w - e^{-t\partial_x^3} w_0}{Y^{\gamma,\frac{1}{2}+\delta}} \lesssim_{\eta} \|\mathcal{NR}\|_{Y_T^{\gamma,-\frac{1}{2}+\delta}}+ \|\mathcal{R}\|_{Y_T^{\gamma,-\frac{1}{2}+\delta}} + \|\mathcal{Q}\|_{Y_T^{\gamma,-\frac{1}{2}+\delta}}.
\]
Applying Lemmas~\ref{nonres}, \ref{res} and \ref{le:quintic}, and noting the nonlinearities \eqref{eq:nr1}-\eqref{eq:q2}, we can estimate the above by
\begin{align}
C_{\delta} T^{\delta}&\left[\left(\|w\|_{X^{\gamma,\f{1}{2}+\delta}} + \|h+k\|_{X_1^{1,0}\cap X_1^{0,\f{1}{2}}}\right)\n{R[f]+h+k+w}{X_1^{0,\f{1}{2}}}^2 \right.\notag\\
&\left.+\|w+h+k\|_{L_t^{\infty}([0,T]; H^{\gamma}_x)}\n{R[f]+h+k+w}{L^{\infty}_t([0,T];L^2_x)}+ \|f\|^2_{L^2} \n{R[f]}{X_1^{0,\f{1}{2}+\delta}}^3\right].\label{finalcont}
\end{align}
To simplify the estimates, assume $\|f\|_{L^2}\gg 1$ so that we may drop all lower powers. By Lemmas~\ref{tpmap1} and \ref{le:jmap}, and since $w\in \mathcal{B}$, we have
\[
\|w\|_{X^{\gamma,\f{1}{2}+\delta}} \sim \|w_0\|_{H^{\gamma}} \lesssim \|T(f,f)\|_{H^{\gamma}} + \|J(f,f,f)\|_{H^{\gamma}} \lesssim \|f\|_{L^2}^2 + \|f\|_{L^2}^3\lesssim \|f\|_{L^2}^3.
\]
We use Lemma~\ref{rffree} for $R[f]$ and estimates~\eqref{eq:tspace} and \eqref{eq:jspace} for $h$ and $k$ respectively to show
\begin{align*}
\eqref{finalcont} &\leq C_{\delta} T^{\delta} \left(\|v\|_{X^{0,\f{1}{2}}\cap \wt{l^2_{\xi} L^1_{\tau}}}^2 + \|f\|_{L^2}^5\right)\left(\|f\|_{L^2}^5 + \|v\|_{X^{0,\f{1}{2}} \cap \wt{l^2_{\xi} L^1_{\tau}}}^2\right)\leq C_{\delta} T^{\delta} \|f\|_{L^2}^{10}
\end{align*}
where we used Proposition~\ref{pro:timebound} to obtain $\|v\|_{X^{0,\f{1}{2}} \cap \wt{l^2_{\xi} L^1_{\tau}}} \lesssim \|f\|_{L^2}$ for short time~$T$. Thus we need $T^{\delta} \ll \|f\|_{L^2}^{-10}$.
It is easy to see from the above computations that $T\ll \|f\|_{L^2}^{-10/\delta}$ also suffices for the contraction. This proves the desired local well-posedness.
\end{proof}
The following is the main corollary of the local theory, which is iterated in the next section to extend the result globally in time.
\begin{corollary}\label{cor:localsmoothing}
Given the initial data~$f$ in \eqref{periodic} and $\delta,\gamma,T >0$ as in Proposition~\ref{pro:local}, the local-in-time solution satisfies
\[
\sup_{t\in [0,T]} \|v(t)-R[f](t)\|_{H^{\gamma}_x} \leq C_{\delta} (1+ \|f\|_{L^2})^5
\]
\end{corollary}
\begin{proof}
Since $v= R[f] + h + k + w$, it suffices to estimate
\[
\|h+k+w\|_{L^{\infty}_t([0,T]; H^{\gamma}_x)} \lesssim_{\delta} \|h\|_{L^{\infty}_t([0,T]; H^1_x)} + \|k\|_{X^{0,\f{1+\delta}{2}}_T} + \|w\|_{X^{\gamma, \f{1}{2}+\delta}}.
\]
The first two terms are estimated through Lemma~\ref{tpmap1} and \eqref{eq:jspace} respectively, and the last term is given in Proposition~\ref{pro:local}.
\end{proof}
\subsection{Conclusion of the proof of Theorem~\ref{thm2}}\label{global}
With the local theory from Section~\ref{sec:local}, we iterate to obtain the corresponding global theory. This is not entirely straightforward in this setting because $R[f]$ is not linear. However, we can overcome this difficulty and still iterate as desired. In this section, we assume $T>0$ is arbitrarily large.
Let $\gamma, \delta>0$ be as in Proposition~\ref{pro:local}. Given $T>0$ large, Proposition~\ref{pro:timebound} gives that $\sup_{t\in [0,T]} \|v(t)\|_{L^2} \lesssim \lan{T}^s \|f\|_{L^2}$. Then we select the time increment $\varepsilon = O_{\|f\|_{L^2}}(\lan{T}^{-10s/\delta})$ so that the local theory holds as stated in Section~\ref{sec:local}. The number of iterations~$M$ is of order $T/\varepsilon$, so we let $M = O_{\|f\|_{L^2}}(T^{1+10s/\delta})$.
Our iterating mechanism is as follows. Uniformly partition the time interval~$[0,T]$ into $M$ pieces and let $\varepsilon = T/M$. Denote $v_j := v(j\varepsilon)$. By the triangle inequality,
\begin{equation}\label{eq:it}
\|v(T) - R[f](T)\|_{H^{\gamma}} \leq \sum_{j=1}^{M} \n{R[v_j]((M-j)\varepsilon)- R[v_{j-1}]((M-j+1)\varepsilon)}{H^{\gamma}}.
\end{equation}
The summand on the RHS of \eqref{eq:it} can be expressed as
\begin{equation}\label{eq:itsum}
\n{\lan{\xi}^{\gamma}\left(\wh{v_j}e^{-2i\f{\lan{\xi}^{2s}}{\xi}|\wh{v_j}|^2 (M-j)\varepsilon} - \wh{v_{j-1}} e^{-2i\f{\lan{\xi}^{2s}}{\xi}|\wh{v_{j-1}}|^2(M-j+1)\varepsilon}\right)}{l^2_{\xi}}
\end{equation}
Adding and subtracting
\[
\lan{\xi}^{\gamma} \wh{v_j} \exp\left(-2i\f{\lan{\xi}^{2s}}{\xi} |\wh{v_{j-1}}(\xi)|^2 (M-j)\varepsilon\right),
\]
we note that \eqref{eq:itsum} splits into two pieces:
\begin{align}
&\n{\lan{\xi}^{\gamma} \wh{v_j} \left(\exp \left[-2i \f{\lan{\xi}^{2s}}{\xi}\left( |\wh{v_j}|^2 - |\wh{v_{j-1}}|^2\right)(M-j)\varepsilon\right] -1 \right)}{l^2_{\xi}}\label{eq:it1}\\
&+ \n{\lan{\xi}^{\gamma}\left(\wh{v_j} - \wh{v_{j-1}} e^{-2i \f{\lan{\xi}^{2s}}{\xi} |\wh{v_{j-1}}|^2\varepsilon}\right)}{l^2_{\xi}}.\label{eq:it2}
\end{align}
The second piece~\eqref{eq:it2} is equivalent to $\|v_{j} - R[v_{j-1}](\varepsilon)\|_{H^{\gamma}}$. Thus, using Corollary~\ref{cor:localsmoothing}, Proposition~\ref{pro:timebound} and assuming $T, \|f\|_{L^2}\gg 1$,
\[
\eqref{eq:it2} \leq C_{\delta} (1+\|v_{j-1}\|_{L^2})^5 \leq C_{\delta} T^{5s} \|f\|_{L^2}^5.
\]
To estimate \eqref{eq:it1}, we use the mean-value theorem,
\begin{align*}
\eqref{eq:it1} &\lesssim T \n{\lan{\xi}^{\gamma} \wh{v_j} \f{\lan{\xi}^{2s}}{\xi} \left(\left|\wh{v_j}(\xi)\right|^2 - \left|\wh{v_{j-1}}(\xi)\right|^2\right) }{l^2_{\xi}}\\
&\lesssim T \n{\lan{\xi}^{\gamma}\left(\left|\wh{v_j}\right| - \left|\wh{v_{j-1}}\right|\right)}{l^{2}_{\xi}} \left(\n{\wh{v_j}}{l^{\infty}_{\xi}} + \n{\wh{v_{j-1}}}{l^{\infty}_{\xi}} \right) \n{\wh{v_j}}{l^{\infty}_{\xi}}
\end{align*}
Apart from the first term, we can estimate $\n{\wh{v_j}}{l^{\infty}_{\xi}} \leq \n{v_j}{L^2} \lesssim T^{s}\|f\|_{L^2}$. To estimate the first term, note that we can replace
\[
\left|\wh{v_{j-1}}(\xi)\right| = \left| \wh{v_{j-1}}(\xi) \exp \left(-2i \f{\lan{\xi}^{2s}}{\xi} |\wh{v_{j-1}}(\xi)|^2 \varepsilon\right)\right|.
\]
Then, using the triangle inequality, this term is bounded by
\[
\n{\lan{\xi}^{\gamma}\left( \wh{v_j}(\xi) - \wh{v_{j-1}}(\xi) e^{-2i \f{\lan{\xi}^{2s}}{\xi} |\wh{v_{j-1}}(\xi)|^2 \varepsilon} \right)}{l^2}
\]
which is equivalent to $\n{v_j- R[v_{j-1}](\varepsilon)}{H^{\gamma}}$, and hence bounded by $C_{\delta}T^{5s} \|f\|^5_{L^2}$ by Corollary~\ref{cor:localsmoothing}. Thus
\[
\eqref{eq:it1} \leq C_{\delta} T^{1+7s} \|f\|_{L^2}^5.
\]
Summing over $j=1,\ldots, M$, we have
$\|v(T) - R[f](T)\|_{H^{\gamma}} \leq C_{\delta} M T^{1+7s} \|f\|_{L^2}^5$. Recalling $M = O_{\|f\|_{L^2}} (T^{1+10s/\delta})$,
\[
\|v(T) - R[f](T)\|_{H^{\gamma}} \leq C_{\delta, \|f\|_{L^2}} T^{2+7s+ 10s/\delta}.
\]
Thus we obtain Theorem~\ref{thm2} with $\alpha(\delta) = 2+7s + 10s/\delta$.
To prove the Lipschitz property~\eqref{eq:thmlip}, let $f^1, f^2 \in L^2$ with $\|f^1 - f^2\|_{H^{\gamma}}<\infty$ be initial data for \eqref{periodic}. Decompose the corresponding solutions as $v^j = R[f^j] + h^j + k^j + w^j$ for $j=1,2$. From Lemma~\ref{lipschitz},
$\|R[f^1]-R[f^2]\|_{C^0_t([0,T];H^{\gamma}_x)} \lesssim \|f^1 - f^2\|_{H^{\gamma}}$. For $h^j := T(v^j, v^j)$, we use Lemma~\ref{tpmap1}
\begin{align*}
\|h^1 - h^2\|_{L^{\infty}_t([0,T];H^{\gamma}_x)} &\lesssim \n{T(v^1+v^2, v^1-v^2)}{L^{\infty}_t([0,T];H^{\gamma}_x)}\\
&\lesssim \|v^1+v^2\|_{L^{\infty}_T L^2} \|v^1-v^2\|_{L^{\infty}_T L^2}\\
&\lesssim T^{2s} (\|f^1\|_{L^2}+\|f^2\|_{L^2})\|f^1 - f^2\|_{L^2}.
\end{align*}
where the last inequality follows from the local and global well-posedness theory of \cite{I1}. For $k^j$,
\begin{align*}
\|k^1 - k^2\|_{L^{\infty}_T H^{\gamma}} &\lesssim \n{J(R[f^1],R[f^1],R[f^1])- J(R[f^2],R[f^2],R[f^2])}{X^{\gamma,\f{1+\delta}{2}}_T}\\
&\hspace{-60pt}= \n{J(R[f^1] - R[f^2], R[f^1],R[f^1]) + J(R[f^2], R[f^1]+ R[f^2], R[f^1]- R[f^2])}{X^{\gamma,\f{1+\delta}{2}}_T}.
\end{align*}
Applying Lemmas~\ref{le:kspace}, \ref{rffree} and \ref{lipschitz}, we obtain the desired estimate.
The Lipschitz continuity of $w^j$ follows somewhat indirectly from the local theory in Proposition~\ref{pro:local}. Note that $w^j$ can be regarded as a perturbation of the free solution $-e^{-t\partial_x^3} w^j(0)$ where $w^j(0) = -T(f^j,f^j) - J(f^j,f^j,f^j)$ for short time. Since
\[
\|e^{-t\partial_x^3} w^1(0) - e^{-t\partial_x^3} w^2(0)\|_{H^{\gamma}} \lesssim_{\|f^1\|_{L^2}, \|f^2\|_{L^2}} \|f^1 - f^2\|_{L^2},
\]
by Lemmas~\ref{tpmap1} and \ref{le:jmap}, we obtain the local Lipschitz property from Proposition~\ref{pro:local}. Thus the Lipschitz property of $w^j$ follows from standard iteration. This proves Theorem~\ref{thm2}.
\bibliographystyle{plain}
\label{sec:introduction}
Let $u$ and $v$ be two words (not necessarily distinct) of length $n$,
over a finite alphabet $F$ of cardinality $q$. We say that $u$ and $v$
are \emph{overlapping} if a non-empty proper prefix of $u$ is equal to
a non-empty proper suffix of $v$, or if a non-empty proper prefix of
$v$ is equal to a non-empty proper suffix of $u$. So, for example, the
binary words $00000$ and $01111$ are overlapping; so are the words
$10001$ and $11110$. However, the words $11111$ and $01110$ are
non-overlapping.
We say that a code $C\subseteq F^n$ is \emph{non-overlapping} if for
all (not necessarily distinct) $u,v\in C$, the words $u$ and $v$ are
non-overlapping. The following is an example of a non-overlapping
binary code of length $6$ containing 3 codewords:
\[
C=\{001101,001011,001111\}.
\]
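The overlap conditions above are easy to check mechanically. The following Python sketch (illustrative only, not part of the paper) implements the definition verbatim and confirms the examples:

```python
# Illustrative implementation of the overlap definition above.
def overlapping(u, v):
    """True if a non-empty proper prefix of one word equals a
    non-empty proper suffix of the other (words of equal length)."""
    n = len(u)
    return any(u[:k] == v[-k:] or v[:k] == u[-k:] for k in range(1, n))

def is_non_overlapping_code(C):
    # note: pairs with u = v are included, so self-overlapping words are rejected
    return all(not overlapping(u, v) for u in C for v in C)
```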
We write $C(n,q)$ for the maximum number of codewords in a
$q$-ary non-overlapping code of length $n$. It is easy to see that
$C(1,q)=q$. From now on, to avoid trivialities, we always assume that
$n\geq 2$.
Inspired by the use of distributed sequences in frame synchronisation
applications by van Wijngaarden and Willink~\cite{WijngaardenWillink},
Baji\'c and Stojanovi\'c~\cite{BajicStojanovic} were the first to
study non-overlapping codes. (Baji\'c and Stojanovi\'c used the term
cross-bifix-free sets rather than non-overlapping codes.) See
also~\cite{Bajic,BajicStojanovicLindner,BilottaGrazzini,BilottaPergola,CheeKiah12,WijngaardenWillink}
for various aspects of non-overlapping (cross-bifix-free) codes and
their applications to synchronisation.
Chee, Kiah, Purkayastha and Wang~\cite{CheeKiah12} showed that
\begin{equation}
\label{eqn:upper}
C(n,q)\leq \frac{q^n}{2n-1},
\end{equation}
and provided a construction of a class of non-overlapping codes whose
cardinality is within a constant factor of the bound~\eqref{eqn:upper}
when the alphabet size $q$ is fixed and the length $n$ tends to infinity.
Chee \emph{et al.}\ established the bound~\eqref{eqn:upper} by appealing to the
application in synchronisation (deriving the bound from the fact that
a certain variance must be positive). In Section~\ref{sec:upper}
below, we provide a direct combinatorial proof of this bound. (Indeed,
the combinatorial derivation allows us to improve the bound slightly
to a strict inequality.)
The construction of Chee \emph{et al.}\ becomes poor for large alphabet sizes
(in the sense that the ratio between the number of codewords in the
construction and the upper bound tends to $0$ as $q$ increases). In
Section~\ref{sec:construction}, we provide a simple generalisation of
their construction which performs well when $q$ is large. Indeed, in
Section~\ref{sec:general_parameters} we show that this generalised
construction produces non-overlapping codes whose cardinality is
within a constant factor of the upper bound~\eqref{eqn:upper} even
when the alphabet size $q$ is allowed to grow.
In Section~\ref{sec:small_length}, we provide exact values
for $C(2,q)$ and $C(3,q)$; these values show that the
bound~\eqref{eqn:upper} is not always asymptotically tight. We also state a conjecture on the asymptotics of
$C(n,q)$ when $n$ is fixed and $q$ tends to infinity.
\section{An upper bound}
\label{sec:upper}
We provide a direct combinatorial proof of the following theorem, that
slightly strengthens the bound due to Chee \emph{et al.}~\cite{CheeKiah12}.
\begin{theorem}
\label{thm:upper}
Let $n$ and $q$ be integers with $n\geq 2$ and $q\geq 2$. Let
$C(n,q)$ be the number of codewords in the largest non-overlapping
$q$-ary code of length~$n$. Then
\[
C(n,q)<\frac{q^n}{2n-1}.
\]
\end{theorem}
\begin{proof}
Let $C$ be a non-overlapping code of length $n$ over an alphabet $F$
with $|F|=q$. Consider the set $X$ of pairs $(w,i)$ where $w\in
F^{2n-1}$, $i\in\{1,2,\ldots,2n-1\}$ and the (cyclic) subword of $w$
starting at position $i$ lies in $C$. So, for example, if $C$ is
the code in the introduction then $(01111110011,8)\in X$.
We see that $|X|=(2n-1)|C|q^{n-1}$, since there are $2n-1$ choices
for $i$, then $|C|$ choices for the codeword starting in the $i$th
position of $w$, then $q^{n-1}$ choices for the remaining positions
in $w$.
Since $C$ is non-overlapping, two codewords cannot appear as
distinct cyclic subwords of any word $w$ of length $2n-1$. Thus,
for any $w\in F^{2n-1}$ there is at most one choice for an integer $i$ such
that $(w,i)\in X$. Moreover, no subword of any of the $q$ constant
words $w$ of length $2n-1$ can appear as a codeword in a
non-overlapping code. So $|X|\leq q^{2n-1}-q<q^{2n-1}$.
The theorem now follows from the inequality
\[
(2n-1)|C|q^{n-1}\leq|X|< q^{2n-1}.\qedhere
\]
\end{proof}
\section{Constructions of non-overlapping codes}
\label{sec:construction}
Let $F=\{0,1,\ldots, q-1\}$. Chee \emph{et al.}\ provide the following
construction of a non-overlapping code of length $n$ over $F$.
\begin{construction}[Chee \emph{et al.}~\cite{CheeKiah12}]
\label{con:Chee}
Let $k$ be an integer such that $1\leq k\leq n-1$. Let $C$ be the set of all
words $c\in F^n$ such that:
\begin{itemize}
\item $c_i=0$ for $1\leq i\leq k$ (so all codewords start with $k$
zeroes);
\item $c_{k+1}\not=0$, and $c_n\not=0$;
\item the sequence $c_{k+2},c_{k+3},\ldots,c_{n-1}$ does not contain
$k$ consecutive zeroes.
\end{itemize}
Then $C$ is a non-overlapping code.
\end{construction}
It is not hard to see that the construction above is indeed a
non-overlapping code. Chee \emph{et al.}\ show that the construction is
already good for small parameters. Indeed, they show that for binary
codes, Construction~\ref{con:Chee} (with the best choice of $k$)
achieves the best possible code size whenever $n\leq 14$ and $n\neq 9$.
It is less clear how to choose $k$ in general so that $C$ is
as large as possible, and what the resulting asymptotic size of the
code is. Much of the paper of Chee \emph{et al.}\ sets out to answer these
questions. Indeed, the authors argue that when $q$ is fixed, and $k$
is chosen appropriately (as a function of $n$), we have that
\[
\liminf_{n\rightarrow\infty}|C|/(q^n/n)\geq \frac{q-1}{qe},
\]
where $e$ is the base of the natural logarithm. This shows that
Theorem~\ref{thm:upper} is tight to within a constant factor when $q$
is fixed. Their result relies on a delicate argument using techniques from
algebraic combinatorics. In fact, the following much simpler argument
gives a similar, though weaker, result.
\begin{lemma}
\label{lem:easy}
Let $q$ be a fixed integer, $q\geq 2$. Then the codes in
Construction~\ref{con:Chee} show that
\[
\liminf_{n\rightarrow\infty}C(n,q)/(q^n/n)\geq \frac{(q-1)^2(2q-1)}{4q^4}.
\]
\end{lemma}
\begin{proof}
We begin by claiming that when $2k\leq n-2$ the number of $q$-ary
sequences of length $n-k-2$ containing no $k$ consecutive zeros is at
least
\[
q^{n-k-2}-(n-2k-1)q^{n-2k-2}.
\]
To see this, note that any sequence that fails this condition must contain $k$
consecutive zeros starting at some position $i$, where $1\leq i\leq
n-k-2-(k-1)$. Since there are $n-2k-1$ possibilities for $i$, and
$q^{n-2k-2}$ sequences containing $k$ zeros starting at position $i$,
our claim follows. Thus, if $C$ is the non-overlapping code in
Construction~\ref{con:Chee},
\[
|C|\geq
(q-1)^2(q^{n-k-2}-nq^{n-2k-2})=\left(\frac{q-1}{q}\right)^2q^n(q^{-k}-nq^{-2k}).
\]
The function $q^{-k}-nq^{-2k}$ is maximised when
$k=\log_q(2n)+\delta$, where $\delta$ is chosen so that $|\delta|<1$
and $k$ is an integer. In this case, the value of $q^{-k}-nq^{-2k}$ is
bounded below by $(2q-1)/(4nq^2)$ (this can be shown by always taking
$\delta$ to be non-negative). Thus
\[
|C|\geq \left(\frac{(q-1)^2(2q-1)}{4nq^4}\right)q^n.\qedhere
\]
\end{proof}
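The calculus step in the proof of Lemma~\ref{lem:easy} can also be sanity-checked numerically. The sketch below (illustrative only) takes the convention that $\delta$ is non-negative, i.e.\ $k=\lceil\log_q(2n)\rceil$, and verifies the claimed lower bound $q^{-k}-nq^{-2k}\geq (2q-1)/(4nq^2)$ for a range of parameters; a small floating-point tolerance guards against rounding in the logarithm:

```python
# Numerical sanity check (illustrative only) of the bound
#   q^{-k} - n q^{-2k} >= (2q-1)/(4 n q^2)  with  k = ceil(log_q(2n)).
from math import ceil, log

def bound_holds(n, q):
    k = ceil(log(2 * n, q))      # the non-negative-delta choice of k
    t = q ** (-k)                # exact rational arithmetic from here on
    # tolerance guards against floating-point rounding inside log()
    return t - n * t * t >= (2 * q - 1) / (4 * n * q * q) - 1e-12
```

The check passes because $f(t)=t-nt^2$ is increasing on $(0,1/(2n)]$ and $q^{-k}$ lies in $(1/(2nq),\,1/(2n)]$ for this choice of $k$.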
When the alphabet size $q$ is much larger than the length $n$,
Construction~\ref{con:Chee} produces codes that are much smaller than
the upper bound in Theorem~\ref{thm:upper}. The following
generalisation of Construction~\ref{con:Chee} does not have this
drawback; we discuss this issue further in
Sections~\ref{sec:small_length} and~\ref{sec:general_parameters} below.
Let $S\subseteq F^k$. We say that a word $x_1x_2\cdots x_r\in F^r$
is \emph{$S$-free} if $r<k$, or if $r\geq k$ and $x_ix_{i+1}\cdots
x_{i+k-1}\not\in S$ for all $i\in\{1,2,\ldots,r-k+1\}$.
\begin{construction}
\label{con:new}
Let $k$ and $\ell$ be such that $1\leq k\leq n-1$ and $1\leq \ell\leq
q-1$. Let $F=I\cup J$ be a partition of the alphabet $F$ of cardinality $q$
into two parts $I$ and $J$ of cardinalities $\ell$ and $q-\ell$
respectively. Let $S\subseteq I^k\subseteq F^k$. Let $C$ be the set of all words $c\in F^n$ such that:
\begin{itemize}
\item $c_1c_2\cdots c_k\in S$;
\item $c_{k+1}\in J$, and $c_n\in J$;
\item the word $c_{k+2},c_{k+3},\ldots,c_{n-1}$ is $S$-free.
\end{itemize}
Then $C$ is a non-overlapping code.
\end{construction}
It is easy to see that Construction~\ref{con:Chee} is the special case
of Construction~\ref{con:new} with $\ell=1$, $I=\{0\}$ and
$S=\{0^k\}$.
\section{Non-overlapping codes of small length}
\label{sec:small_length}
This section considers non-overlapping codes of fixed length $n$, when the
alphabet size $q$ becomes large. In this situation,
Construction~\ref{con:Chee} produces codes that are much smaller than
the upper bound in Theorem~\ref{thm:upper}. To see this, note that
there are at most $q^{n-k}$ codewords in a code $C$ from
Construction~\ref{con:Chee}, since the first $k$ components of any
codeword are fixed. So, since $k$ is positive, $|C|\leq q^{n-1}$ and
therefore $|C|/(q^n/n)\leq n/q$.
The proof of the following theorem shows that the codes given by
Construction~\ref{con:new} are within a constant factor of the bound
in Theorem~\ref{thm:upper} whenever $n$ is fixed and
$q\rightarrow\infty$.
\begin{theorem}
\label{thm:n_fixed}
Let $n$ be a fixed positive integer, $n\geq 2$. Then
\[
\liminf_{q\rightarrow\infty} C(n,q)/(q^n/n)\geq
\left(\frac{n-1}{n}\right)^{n-1}.
\]
\end{theorem}
\begin{proof}
We use Construction~\ref{con:new} in the special case when
$k=n-1$ and $S=I^k$. In this case (in the notation of
Construction~\ref{con:new}) $C$ is the set of words whose first $n-1$
components lie in $I$, and whose final component lies in $J$. So here
$|C|=\ell^{n-1}(q-\ell)$.
Let $\ell=\lceil ((n-1)/n)q\rceil$. Since $q-\ell\geq (1/n)q-1$, we
find that
\[
|C|=\frac{1}{n}\left(\frac{n-1}{n}\right)^{n-1}q^n-O(q^{n-1}),
\]
and so the theorem follows.
\end{proof}
The following theorem shows (in particular) that the bound
($|C|<q^2/3$) of Theorem~\ref{thm:upper} is not asymptotically tight
when $n=2$ and $q\rightarrow\infty$.
\begin{theorem}
\label{thm:n_2}
A largest $q$-ary length $2$ non-overlapping code has $C(2,q)$
codewords, where $C(2,q)=\lfloor q/2\rfloor\, \lceil q/2\rceil$. In particular,
\[
\lim_{q\rightarrow\infty} C(2,q)/(q^2/2)=\frac{1}{2}.
\]
\end{theorem}
\begin{proof}
Construction~\ref{con:new} in the case $n=2$, $k=1$, $\ell=\lfloor
q/2\rfloor$ and $S=I^k$ provides the lower bound on $C(2,q)$ we require.
Let $C$ be a $q$-ary non-overlapping code of length $2$. Let $I$ be the set
of symbols which occur in the first position of a codeword in $C$, and
let $J$ be the set of symbols that occur in the final position of a
codeword in $C$. Since $C$ is non-overlapping, $I$ and $J$ are
disjoint. Thus
\[
|C|\leq |I||J|\leq |I|(q-|I|)\leq \lfloor q/2 \rfloor \lceil
q/2\rceil.
\]
\end{proof}
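The value $C(2,q)=\lfloor q/2\rfloor\lceil q/2\rceil$ can also be confirmed by exhaustive search for very small $q$, since there are only $2^{q^2}$ candidate codes. An illustrative brute-force check (our own helper names), searching code sizes in decreasing order:

```python
from itertools import combinations, product

def non_overlapping(code, n):
    # No proper prefix of a codeword equals a proper suffix of a
    # (possibly identical) codeword.
    return all(x[:r] != y[n - r:]
               for x in code for y in code for r in range(1, n))

def max_code_size(q, n=2):
    # Largest non-overlapping code of length n over a q-ary alphabet,
    # by exhaustive search (only sensible for tiny q and n).
    words = list(product(range(q), repeat=n))
    for size in range(len(words), 0, -1):
        for code in combinations(words, size):
            if non_overlapping(code, n):
                return size
    return 0
```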
In the following theorem, $[x]$ denotes the nearest integer to the
real number $x$.
\begin{theorem}
\label{thm:n_3}
A largest $q$-ary length $3$ non-overlapping code has $C(3,q)$
codewords, where $C(3,q)=[2q/3]^2(q-[2q/3])$. In particular,
\[
\lim_{q\rightarrow\infty} C(3,q)/(q^3/3)=\frac{4}{9}.
\]
\end{theorem}
\begin{proof}
Construction~\ref{con:new} in the case $n=3$, $k=2$, $\ell=[
2q/3]$ and $S=I^k$ provides the lower bound on $C(3,q)$ we require.
Let $C$ be a $q$-ary non-overlapping code of length $3$ of maximal
size. Let $F$ be the underlying alphabet of $C$, so $|F|=q$.
Let $I$ be the set of symbols which occur in the first position
of a codeword in $C$. Let $J$ be the complement of $I$ in
$F$, so $|J|=q-|I|$. Since $C$ is non-overlapping,
the symbols that occur in the final component of any codeword lie in
$J$. So we may write $C$ as a disjoint union $C=C_1\cup C_2$, where
$C_1\subseteq I\times I\times J$ and $C_2\subseteq I\times J\times J$.
Let $X$ be the set of all pairs $(b,c)\in I\times J$ such
that $abc\in C$ for some $a\in I$. Define
\begin{align*}
\overline{C_1}&=\{abc\mid a\in I\text{ and }(b,c)\in X\},\\
\overline{C_2}&=\{bcd\mid (b,c)\in (I\times J)\setminus X\text{ and } d\in J\}.
\end{align*}
Clearly $C_1\subseteq \overline{C_1}$. Moreover, $C_2\subseteq
\overline{C_2}$, since whenever $bcd\in C$ is a codeword, the fact
that $C$ is non-overlapping implies that
$(b,c)\not\in X$. But $\overline{C}=\overline{C_1}\cup\overline{C_2}$
is a non-overlapping code, and so $C=\overline{C}$ as $C$ is maximal.
We have
\[
|C|=|\overline{C}|=|X| |I|+(|I| |J| - |X|)|J|=|X|(|I|-|J|)+|I||J|^2.
\]
If $|I|\leq |J|$, then the maximum value of $|C|$ is achieved when
$|X|=0$, at $\max_{i\in\{1,2,\ldots,\lfloor q/2\rfloor\}}i(q-i)^2$. If
$|I|>|J|$, the maximum value of $|C|$ is achieved when $|X|=|I||J|$,
at $\max_{i\in\{\lfloor q/2\rfloor,\lfloor
q/2\rfloor+1,\ldots,q-1\}}i^2(q-i)$. Thus
\[
|C|\leq \max_{i\in\{1,2,\ldots,q-1\}}i^2(q-i)= [2q/3]^2(q-[2q/3]),
\]
and so the theorem follows.
\end{proof}
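The final maximisation step is easy to verify directly: for every $q$, the maximum of $i^2(q-i)$ over $i\in\{1,\ldots,q-1\}$ is attained at the nearest integer to $2q/3$, which equals $\lfloor(2q+1)/3\rfloor$ in exact arithmetic (the fractional part of $2q/3$ is never $1/2$, so no rounding ties occur). A quick illustrative check:

```python
def nearest_two_thirds(q):
    # Nearest integer to 2q/3; the fractional part is 0, 1/3 or 2/3,
    # so the nearest integer is always well defined.
    return (2 * q + 1) // 3

def max_cubic(q):
    # Brute-force maximum of i^2 (q - i) over i = 1, ..., q-1.
    return max(i * i * (q - i) for i in range(1, q))
```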
It would be interesting to determine the asymptotic behaviour of
$C(n,q)$ when $q\rightarrow\infty$ for a general fixed length $n$. I
believe the following two conjectures are true.
\begin{conjecture}
\label{conj_weak}
Let $n$ be an integer such that $n\geq 2$. Then
\[
\lim_{q\rightarrow\infty}C(n,q)/q^n=\frac{1}{n}\left(\frac{n-1}{n}\right)^{n-1}.
\]
\end{conjecture}
The following conjecture implies Conjecture~\ref{conj_weak}.
\begin{conjecture}
\label{conj_strong}
Let $n$ be an integer such that $n\geq 2$. For all sufficiently large
integers $q$, a largest $q$-ary non-overlapping code of length $n$ is
given by Construction~\ref{con:new} in the case $k=n-1$ (and some
value of $\ell$).
\end{conjecture}
\section{Good constructions for general parameters}
\label{sec:general_parameters}
This section shows that Construction~\ref{con:new} is always good, in
the sense that it produces non-overlapping codes of cardinality within a
constant factor of the upper bound given by Theorem~\ref{thm:upper}
for all parameters. This is implied by the proof of the following theorem.
\begin{theorem}
\label{thm:general}
There exist absolute constants $c_1$ and $c_2$ such that
\[
c_1(q^n/n)\leq C(n,q) \leq c_2(q^n/n)
\]
for all integers $n$ and $q$ with $n\geq 2$ and $q\geq 2$.
\end{theorem}
\begin{proof}
The existence of $c_2$ follows from the upper bound on $C(n,q)$ given
by Theorem~\ref{thm:upper}. We prove the lower bound by showing that
there exists a constant $c_1$ such that for all choices of $n$ and
$q$, one of the constructions given by Construction~\ref{con:new}
contains at least $c_1(q^n/n)$ codewords.
Let $(n_1,q_1),(n_2,q_2),\ldots$ be an infinite sequence of pairs of
integers where $n_i\geq 2$ and $q_i\geq 2$. It suffices to show that
$C(n_i,q_i)/(q_i^{n_i}/n_i)$ is always bounded below by some positive
constant as $i\rightarrow\infty$. Suppose, for a contradiction, that
this is not the case. By passing to a suitable subsequence if
necessary, we may assume that $C(n_i,q_i)/(q_i^{n_i}/n_i)\rightarrow 0$
as $i\rightarrow\infty$. If the integers $q_i$ are bounded, then
Lemma~\ref{lem:easy} gives a contradiction. If the integers~$n_i$ are
bounded, we again have a contradiction, by
Theorem~\ref{thm:n_fixed}. So we may assume, without loss of
generality, that the integer sequences $(n_i)$ and $(q_i)$ are
unbounded. By passing to a suitable subsequence if necessary, we may
therefore assume that $(n_i)$ and $(q_i) $ are strictly increasing
sequences (and that $n_i$ and $q_i$ are sufficiently large for our
purposes below). In particular, we may assume that
$n_i\rightarrow\infty$ and $q_i\rightarrow\infty$ as
$i\rightarrow\infty$.
Let $k_i=\lceil \log_2 2n_i\rceil$, and set $s_i=\lfloor
q_i^{k_i}/(2n_i)\rfloor$. Let $F_i$ be a set of size $q_i$. Let
$I_i\subseteq F_i$ have cardinality $\ell_i$, where $\ell_i=\lceil
s_i^{1/k_i}\rceil$. Let $J_i$ be the complement of $I_i$ in $F_i$. Let
$S_i$ be a subset of $I_i^{k_i}$ of cardinality $s_i$. Note that such
a set $S_i$ exists, by our choice of $\ell_i$.
Let $C_i$ be the $q_i$-ary non-overlapping code of length $n_i$ given
by Construction~\ref{con:new} in the case $k=k_i$, $\ell=\ell_i$,
$I=I_i$, $J=J_i$ and $S=S_i$. Then
\begin{equation}
\label{eqn:C}
|C_i|=|S_i|(q_i-\ell_i)^2f_i
\end{equation}
where $f_i$ is the number of $S_i$-free sequences of length
$n_i-k_i-2$. We now aim to find a lower bound on $|C_i|$.
Since $q_i\rightarrow\infty$ as $i\rightarrow\infty$, we
see that
\[
q_i^{k_i}/(2n_i)\geq
q_i^{\log_2(2n_i)}/(2n_i)=2^{(\log_2(q_i)-1)\log_2(2n_i)}\rightarrow\infty.
\]
Hence
\begin{equation}
\label{eqn:S}
|S|\sim q_i^{k_i}/(2n_i)
\end{equation}
as $i\rightarrow\infty$.
Note that $k_i\leq 2\log_2(2n_i)$, and so
\[
(2n_i)^{1/k_i}\geq 2^{\log_2(2n_i)/(2\log_2(2n_i))}=2^{1/2},
\]
and hence
\[
s_i^{1/k_i}\leq \left(\frac{q_i^{k_i}}{2n_i}\right)^{1/k_i}
\leq 2^{-1/2}q_i.
\]
Since $(1-2^{-1/2})^2>(1/12)$, we see that
\begin{equation}
\label{eqn:ell}
(q_i-\ell_i)^2>(1/12)q_i^2
\end{equation}
for all sufficiently large $i$.
The number of $S$-free $q$-ary sequences of length $r$ is at least
$q^r-(r-k+1)|S|q^{r-k}$, since every word that is not $S$-free
must contain an element of $S$ somewhere as a subword. So the
number of $S$-free $q$-ary sequences of length $r$ is at least
$q^r-r|S|q^{r-k}=q^{r}(1-r|S|q^{-k})$. Thus
\begin{equation}
\label{eqn:f}
\begin{split}
f_i&\geq q_i^{n_i-k_i-2}(1-(n_i-k_i-2)|S_i|q_i^{-k_i})\\
&\geq \frac{1}{2}q_i^{n_i-k_i-2}(2-2n_i|S_i|q_i^{-k_i})\\
&\sim \frac{1}{2}q_i^{n_i-k_i-2},
\end{split}
\end{equation}
the last step following from~\eqref{eqn:S}.
Now \eqref{eqn:S},~\eqref{eqn:ell} and~\eqref{eqn:f} combine with
\eqref{eqn:C} to show that $|C_i|> (1/50)(q_i^{n_i}/n_i)$ for all
sufficiently large $i$. This
contradiction completes the proof of the theorem.
\end{proof}
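The counting bound on $S$-free words used in the proof can be sanity-checked exhaustively for small parameters (an illustrative check; helper names are ours):

```python
from itertools import product

def is_s_free(word, S, k):
    # A word is S-free if it is shorter than k, or if no length-k
    # window of it lies in S.
    return len(word) < k or all(
        word[i:i + k] not in S for i in range(len(word) - k + 1))

def count_s_free(q, r, S, k):
    # Exhaustive count of S-free words of length r over {0,...,q-1};
    # the proof lower-bounds this by q^r - r |S| q^{r-k}.
    return sum(is_s_free(w, S, k) for w in product(range(q), repeat=r))
```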
\section{Introduction}
\label{sec:intro}
The task of generating textual descriptions of images tests a machine's ability to understand visual data and interpret it in natural language.
It is a fundamental research problem lying at the intersection of natural language processing, computer vision, and cognitive science.
For example, single-image captioning~\citep{farhadi2010every, kulkarni2013babytalk, vinyals2015show, xu2015show} has been extensively studied.
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\linewidth]{images/overview.pdf}
\caption{Overview of the visual comparison task and our motivation. The key is to understand both images and compare them. Explicit semantic structures can be compared between images and used to generate comparative descriptions aligned to the image saliency.}
\label{fig:task}
\end{figure}
Recently, a new intriguing task, visual comparison, along with several benchmarks ~\citep{jhamtani2018learning, tan2019expressing, park2019robust, forbes2019neural} has drawn increasing attention in the community.
To complete the task and generate comparative descriptions, a machine should understand the visual differences between a pair of images (see \cref{fig:task}).
Previous methods~\cite{jhamtani2018learning} often consider the pair of pre-trained visual features such as the ResNet features~\cite{he2016deep} as a whole, and build end-to-end neural networks to predict the description of visual comparison directly.
In contrast, humans can easily reason about the visual components of a single image and describe the visual differences between two images based on their semantic understanding of each one.
Humans do not need to look at thousands of image pairs to describe the difference of new image pairs, as they can leverage their understanding of single images for visual comparison.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/model.pdf}
\caption{Our \textsc{L2C} model. It consists of a segmentation encoder, a graph convolutional module, and an LSTM decoder with an auxiliary loss for single-image captioning. Details are in \cref{sec:method}.}
\label{fig:model}
\end{figure*}
Therefore, we believe that visual differences should be learned by understanding and comparing every single image's semantic representation.
A recent work~\cite{zhang2020diagnosing} conceptually supports this argument by showing that low-level ResNet visual features lead to poor generalization in vision-and-language navigation, whereas high-level semantic segmentation helps the agent generalize to unseen scenarios.
Motivated by humans, we propose a Learning-to-Compare (\textsc{L2C}) method that focuses on reasoning about the semantic structures of individual images and then compares the difference of the image pair.
Our contributions are three-fold:
\begin{itemize}
\setlength\itemsep{-0.2em}
\item We construct a structured image representation by leveraging image segmentation with a novel semantic pooling, and use graph convolutional networks to perform reasoning on these learned representations.
\item We utilize single-image captioning data to boost semantic understanding of each image with its language counterpart.
\item Our \textsc{L2C} model outperforms the baseline on both automatic evaluation and human evaluation, and generalizes better on the testing image pairs.
\end{itemize}
\section{\textsc{L2C} Model}
\label{sec:method}
We present a novel framework in \cref{fig:model}, which consists of three main components.
First, a \emph{segmentation encoder} is used to extract structured visual features with strong semantic priors.
Then, a \emph{graph convolutional module} performs reasoning on the learned semantic representations.
To enhance the understanding of each image, we introduce a \emph{single-image captioning auxiliary loss} to associate the single-image graph representation with the semantic meaning conveyed by its language counterpart.
Finally, a decoder generates the visual descriptions comparing two images based on differences in graph representations.
All parameters are shared for both images and both tasks.
\subsection{Semantic Representation Construction}
To extract semantic visual features, we utilize pre-trained fully convolutional networks (FCN)~\citep{long2015fully} with ResNet-101 as the backbone.
An image $\mathcal{I}$ is fed into the ResNet backbone to produce a feature map $\mathcal{F} \in \mathbb{R}^{D\times H\times W}$, which is then forwarded into an FCN head that generates a binary segmentation mask $\mathcal{B}$ for the bird class.
However, the shapes of these masks are variable for each image, and simple pooling methods such as average pooling and max pooling would lose some information of spatial relations within the mask.
To address this issue and enable efficient aggregation over the area of interest (the masked area), we add a module after the ResNet to cluster each pixel within the mask into $K$ classes. Feature map $\mathcal{F}$ is forwarded through this pooling module to obtain a confidence map $\mathcal{C}$ $\in \mathbb{R}^{K\times H\times W}$, whose entry at each pixel is a $K$-dimensional vector that represents the probability distribution of $K$ classes.
Then a set of nodes $V = \{v_1, ..., v_K\}, v_k \in \mathbb{R}^D$ is constructed as following:
\begin{equation}
v_k= \sum_{i, j} \mathcal{F} \odot \mathcal{B} \odot \mathcal{C}_k
\end{equation}
where $i=1,\ldots,H$, $j=1,\ldots,W$, $\mathcal{C}_k$ is the $k$-th probability map, and $\odot$ denotes element-wise multiplication.
To enforce local smoothness, i.e., pixels in a neighborhood are more likely belong to one class, we employ total variation norm as a regularization term:
\begin{equation}
\mathcal{L}_{TV} = \sum_{i,j}|C_{i+1,j}-C_{i,j}|+|C_{i,j+1}-C_{i,j}|
\end{equation}
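As a concrete illustration, the semantic pooling and the total-variation term can be sketched in NumPy as follows (shapes and helper names are ours; in the actual model these operations sit inside a learned network and the confidence map is produced by a trained module):

```python
import numpy as np

def semantic_pool(F, B, C):
    # F: (D, H, W) feature map; B: (H, W) binary bird mask;
    # C: (K, H, W) per-pixel class probabilities.
    # Node k is v_k = sum_{i,j} F * B * C_k (element-wise products).
    masked = F * B                          # mask broadcasts over D
    return np.stack([(masked * Ck).sum(axis=(1, 2)) for Ck in C])

def tv_loss(C):
    # Local-smoothness regulariser: absolute differences of the
    # confidence map between vertically / horizontally adjacent
    # pixels, summed over the K classes.
    return (np.abs(C[:, 1:, :] - C[:, :-1, :]).sum()
            + np.abs(C[:, :, 1:] - C[:, :, :-1]).sum())
```

Because the $K$ probability maps sum to one at each pixel, the $K$ pooled nodes partition the total masked feature mass.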
\subsection{Comparative Relational Reasoning}
Inspired by recent advances in visual reasoning and graph neural networks ~\citep{chen2018iterative, li2019visual}, we introduce a relational reasoning module to enhance the semantic representation of each image.
A fully-connected visual semantic graph $G = (V, E)$ is built, where $V$ is the set of nodes, each containing a regional feature, and $E$ is constructed by measuring the pairwise affinity between each two nodes $v_i, v_j$ in a latent space.
\begin{equation}
A(v_i, v_j) = (W_i v_i)^T (W_j v_j)
\end{equation}
where $W_i, W_j$ are learnable matrices, and $A$ is the constructed adjacency matrix.
We apply Graph Convolutional Networks (GCN) ~\citep{kipf2016semi} to perform reasoning on the graph.
After the GCN module, the output $V^o = \{v_1^o, ..., v_K^o\}, v_k^o \in \mathbb{R}^D$ will be a relationship enhanced representation of a bird.
For the visual comparison task, we compute the difference of each pair of corresponding visual nodes from the two sets, denoted as $V^o_{diff} = \{v_{diff,1}^o, ..., v_{diff,K}^o\}$, where $v_{diff,k}^o = v_{k,1}^o - v_{k, 2}^o \in \mathbb{R}^D$.
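A minimal NumPy sketch of the affinity construction and one propagation step (the row-softmax normalisation and the single-layer update are simplifying assumptions on our part; the model uses learned GCN layers):

```python
import numpy as np

def row_softmax(A):
    # Normalise each row of the affinity matrix to a distribution.
    e = np.exp(A - A.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def affinity(V, Wi, Wj):
    # A_ij = (Wi v_i)^T (Wj v_j) in a latent space, row-normalised.
    return row_softmax((V @ Wi.T) @ (V @ Wj.T).T)

def gcn_step(V, A, Wg):
    # One graph-convolution step: each node aggregates its
    # neighbours' features through the learned projection Wg.
    return A @ V @ Wg.T
```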
\subsection{Learning to Compare while Learning to Describe}
After obtaining relation-enhanced semantic features, we use a Long Short-Term Memory (LSTM) ~\citep{hochreiter1997long} to generate captions.
As discussed in \cref{sec:intro}, semantic understanding of each image is key to solving the task. However, no single dataset contains both visual-comparison and single-image annotations.
Hence, we leverage two datasets from similar domains to facilitate training. One is for visual comparison, and the other is for single-image captioning. Alternate training is utilized such that for each iteration, two mini-batches of images from both datasets are sampled independently and fed into the encoder to obtain visual representations $V^o$ (for single-image captioning) or $V^o_{diff}$ (for visual comparison).
The LSTM takes $V^o$ or $V^o_{diff}$ with previous output word embedding $y_{t-1}$ as input, updates the hidden state from $h_{t-1}$ to $h_t$, and predicts the word for the next time step.
The generation process of bi-image comparison is learned by maximizing the log-likelihood of the predicted output sentence. The loss function is defined as follows:
\begin{equation}
\mathcal{L}_{diff}=-\sum_t {\log P(y_{t}|y_{1:t-1}, V^o_{diff})}
\end{equation}
Similar loss is applied for learning single-image captioning:
\begin{equation}
\mathcal{L}_{single}=-\sum_t {\log P(y_{t}|y_{1:t-1}, V^o)}
\end{equation}
Overall, the model is optimized with a mixture of cross-entropy losses and total variation loss:
\begin{equation}
\begin{split}
\mathcal{L}_{loss} = \mathcal{L}_{diff} + \mathcal{L}_{single} + \lambda \mathcal{L}_{TV}
\end{split}
\end{equation}
where $\lambda$ is an adaptive factor that weighs the total variation loss.
\section{Experiments}
\subsection{Experimental Setup}
\paragraph{Datasets}
The Birds-to-Words dataset (B2W) has 3347 image pairs, and each pair has around 5 descriptions of the visual differences. This leads to 12890/1556/1604 captions for the train/val/test splits. Since B2W contains only visual comparisons, we use the CUB-200-2011 dataset (CUB)~\citep{wah2011caltech}, which consists of single-image captions, as an auxiliary dataset to facilitate the training of semantic understanding.
CUB has 8855/2933 images of birds for the train/val splits, and each image has 10 captions.
\paragraph{Evaluation Metrics}
Performance is first evaluated on three automatic metrics\footnote{\url{https://www.nltk.org}}: BLEU-4~\citep{papineni2002bleu}, ROUGE-L~\citep{lin-2004-rouge}, and CIDEr-D~\citep{vedantam2015cider}. Each generated description is compared to all five reference paragraphs. Note that for this particular task, researchers have observed that CIDEr-D is susceptible to common patterns in the data (see \cref{tab:main} for evidence), while ROUGE-L is anecdotally correlated with higher-quality descriptions (as noted in previous work~\citep{forbes2019neural}). Hence we consider ROUGE-L the primary metric for evaluating performance.
We then perform a human evaluation to further verify the performance.
\begin{table*}[t]
\small
\centering
\setlength{\tabcolsep}{8pt}
\begin{tabular}{l rrrrr rrrrr}
\toprule
& \multicolumn{3}{c}{\textbf{Validation}} & \multicolumn{3}{c}{\textbf{Test}}\\
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
Model & BLEU-4 $\uparrow$ & ROUGE-L $\uparrow$ & CIDEr-D $\uparrow$ & BLEU-4 $\uparrow$ & ROUGE-L $\uparrow$ & CIDEr-D $\uparrow$ \\
\toprule
Most Frequent & 20.0 & 31.0 & \textbf{42.0} & 20.0 & 30.0 & \textbf{43.0} \\
Text-Only & 14.0 & 36.0 & 5.0 & 14.0 & 36.0 & 7.0 \\
Neural Naturalist & 24.0 & 46.0 & 28.0 & 22.0 & 43.0 & 25.0 \\
CNN+LSTM & 25.1 & 43.4 & 10.2 & 24.9 & 43.2 & 9.9 \\
\midrule
\textsc{L2C} [B2W] & 31.9 & 45.7 & 15.2 & 31.3 & 45.3 & 15.1 \\
\textsc{L2C} [CUB+B2W] & \textbf{32.3} & \textbf{46.2} & 16.4 & \textbf{31.8} & \textbf{45.6} & 16.3 \\
\midrule
Human & 26.0 & 47.0 & 39.0 & 27.0 & 47.0 & 42.0 \\
\bottomrule
\end{tabular}
\caption{Results for visual comparison on the Birds-to-Words dataset~\citep{forbes2019neural}. \textit{Most Frequent} produces only the most observed description in the dataset: ``the two animals appear to be exactly the same". \textit{Text-Only} samples captions from the training data according to their empirical distribution. \textit{Neural Naturalist} is a transformer model in ~\citet{forbes2019neural}. \textit{CNN+LSTM} is a commonly-used CNN encoder and LSTM decoder model.
}
\label{tab:main}
\end{table*}
\paragraph{Implementation Details}
We use Adam as the optimizer with an initial learning rate set to 1e-4. The pooling module to generate $K$ classes is composed of two convolutional layers and batch normalization, with kernel sizes 3 and 1 respectively. We set $K$ to 9 and $\lambda$ to 1. The dimension of graph representations is 512. The hidden size of the decoder is also 512. The batch sizes of B2W and CUB are 16 and 128. Following the advice from ~\citep{forbes2019neural}, we report the results using models with the highest ROUGE-L on the validation set, since it could correlate better with high-quality outputs for this task.
\subsection{Automatic Evaluation}
As shown in \cref{tab:main}, first, L2C [B2W] (trained on the visual comparison task only) outperforms the baseline methods on BLEU-4 and ROUGE-L; previous approaches failed to achieve superior results by directly modeling the visual relationship on ResNet features.
Second, joint learning with single-image captioning (L2C [CUB+B2W]) improves the model's semantic understanding and thus its overall performance.
Finally, our method also has a smaller gap between the validation and test sets compared to \textit{Neural Naturalist}, indicating its potential to generalize to unseen samples.
\begin{table}
\small
\centering
\begin{tabular}{c c|c|c}
\toprule
Choice (\%) & L2C & CNN+LSTM & Tie \\
\midrule
Score & \textbf{50.8} & 39.4 & 9.8 \\
\bottomrule
\end{tabular}
\caption{Human evaluation results. We present workers with two generations by L2C and CNN+LSTM for each image pair and let them choose the better one.
}
\label{tab:human}
\end{table}
\subsection{Human Evaluation}
To fully evaluate our model, we conduct a pairwise human evaluation on Amazon Mechanical Turk with 100 image pairs randomly sampled from the test set; each sample is assigned to 5 workers to reduce human variance. Following~\citet{wang2018arel}, for each image pair, workers are presented with two paragraphs from different models and asked to choose the better one based on text quality\footnote{We instruct the annotators to consider two perspectives, relevance (the text describes the context of two images) and expressiveness (grammatically and semantically correct).}. As shown in \cref{tab:human}, \textsc{L2C} outperforms \textsc{CNN+LSTM}, which is consistent with the automatic metrics.
\subsection{Ablation Studies}
\paragraph{Effect of Individual Components}
We perform ablation studies to show the effectiveness of semantic pooling, total variance loss, and graph reasoning, as shown in \cref{tab:ablation}.
First, without semantic pooling, the model degrades to average pooling, and results show that semantic pooling can better preserve the spatial relations for the visual representations.
Moreover, the total variation loss can further boost the performance by injecting the prior local smoothness.
Finally, the results without GCN are lower than those of the full L2C model, indicating that graph convolutions can efficiently model relations among visual regions.
\begin{table}[t]
\small
\centering
\setlength{\tabcolsep}{2pt}
\begin{tabular}{l rrr}
\toprule
& \multicolumn{3}{c}{\textbf{Validation}}\\
\cmidrule(lr){2-4}
Model & BLEU-4 $\uparrow$ & ROUGE-L $\uparrow$ & CIDEr-D $\uparrow$ \\
\toprule
L2C & \textbf{31.9} & \textbf{45.7} & \textbf{15.2} \\
\midrule
$-$ Semantic Pooling & 24.5 & 43.2 & 7.2 \\
$-$ TV Loss & 29.3 & 44.8 & 13.6 \\
$-$ GCN & 30.2 & 43.5 & 10.7 \\
\bottomrule
\end{tabular}
\caption{Ablation study on the B2W dataset. We individually remove Semantic Pooling, total variation (TV) loss, and GCN to test their effects.
}
\label{tab:ablation}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=.8\linewidth]{images/robust.pdf}
\caption{Sensitivity test on the number of classes $K$.}
\label{fig:robust}
\end{figure}
\paragraph{Sensitivity Test}
We analyze model performance under a varying number of classes $K$ for the confidence map $\mathcal{C}$, as shown in \cref{fig:robust}. Empirically, we find that the results are comparable when $K$ is small.
\section{Conclusion}
In this paper, we present a learning-to-compare framework for generating visual comparisons.
Our segmentation encoder with semantic pooling and graph reasoning could construct structured image representations.
We also show that learning to describe visual differences benefits from understanding the semantics of each image.
\section*{Acknowledgments}
The research was partly sponsored by the U.S. Army Research Office and was accomplished under Contract Number W911NF19-D-0001 for the Institute for Collaborative Biotechnologies. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
\section{Introduction}
\label{sec:intro}
The Quantum Adiabatic Algorithm (QAA) ~\cite{farhi_long:01} is an algorithm for solving optimization problems using a quantum computer. The optimization problem to be solved is defined by
a cost function which acts on $N$ bit strings. The computational task is to
find the global minimum of the cost function.
To use the QAA, the cost function is first
encoded in a quantum Hamiltonian $H_P$ (called the `problem Hamiltonian') that
acts on the Hilbert space of $N$ spin $\frac{1}{2}$ particles. The problem
Hamiltonian is written as a function of $\sigma_z$ Pauli-matrices and is
therefore diagonal in the computational basis. The ground state of $H_P$
corresponds to the solution (i.e., lowest cost bit string) of the optimization
problem.
To find the ground state of the problem Hamiltonian, the system is first prepared in the ground state
of another Hamiltonian $H_B$, known as the beginning Hamiltonian. The
beginning Hamiltonian does not commute with the problem Hamiltonian and must
be chosen so that its ground state is easy to prepare. Here we use the
standard choice
\[
H_B=\sum_{i=1}^{N}\frac{\left(1-\sigma_x^{i}\right)}{2},
\]
which has a product state as its ground state.
The Hamiltonian of the system is slowly modified from $H_B$ to
$H_P$. Here we consider a linear interpolation between the two Hamiltonians
\begin{equation}
\hat{H}(s)= (1-s)H_B+s H_P \,,
\label{eq:interp}
\end{equation}
where $s(t)$ is a parameter varying smoothly with time,
from $s(0)=0$ to $s(\mathcal{T})=1$ at the end of the algorithm after a total evolution time $\mathcal{T}$.
If the parameter $s(t)$ is changed slowly enough, the adiabatic theorem of
Quantum Mechanics~\cite{kato:51,messiah:62,amin:09b,jansen:07}
ensures that the system will stay close to the ground state
of the instantaneous Hamiltonian throughout the evolution. After time
$\mathcal{T}$ the state obtained will be close to the ground state of $H_P$.
A final measurement of the state in the Pauli-$z$ basis then produces the
solution of the optimization problem.
The runtime $\mathcal{T}$ must be chosen to be large enough so that the adiabatic
approximation holds: this condition determines the
efficiency, or complexity, of the QAA. A condition on $\mathcal{T}$ can
be given in terms of the eigenstates $\{ | m
\rangle \}$ and eigenvalues $\{E_m \}$ of the Hamiltonian $H(s)$,
as~\cite{wannier:65,farhi:02}
\begin{equation}
\mathcal{T} \gg \hbar \, { \textrm{max}_{s} |V_{10}(s)| \over
(\Delta E_{\textrm{min}})^2} \,,
\end{equation}
where $\Delta E_{\textrm{min}}$ is the minimum of the first
excitation gap
$\Delta E_{\textrm{min}} = \textrm{min}_{s} \Delta E$
with $\Delta E = E_1-E_0$,
and $V_{m 0} = \langle 0 | {\textrm d} H / {\textrm d} s | m\rangle$.
Typically, matrix elements of $H(s)$ scale as a low polynomial of the
system size $N$, and the question of whether the runtime is
polynomial or exponential as a function of $N$
therefore depends on how the minimum gap $\Delta
E_{\textrm{min}} $ scales with $N$. If the gap becomes
exponentially small at any point in the evolution, then the computation
requires an exponential amount of time and the QAA is inefficient. The
dependence of the minimum gap on the system size for a given problem is
therefore a central issue in determining the complexity of the QAA.
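For very small systems the gap can be computed by exact diagonalisation. The sketch below (illustrative NumPy code; names, the grid resolution, and the two-qubit example are ours) builds $H_B$, forms $H(s)=(1-s)H_B+sH_P$ on a grid of $s$ values, and returns the minimum first excitation gap:

```python
import numpy as np
from functools import reduce

SX = np.array([[0., 1.], [1., 0.]])

def one_site(op, i, N):
    # Embed a single-qubit operator at site i of an N-qubit system.
    return reduce(np.kron, [op if j == i else np.eye(2)
                            for j in range(N)])

def beginning_hamiltonian(N):
    # H_B = sum_i (1 - sigma_x^i) / 2; its ground state is a
    # product state with all spins pointing along +x.
    return sum(0.5 * (np.eye(2**N) - one_site(SX, i, N))
               for i in range(N))

def min_gap(HB, HP, num_s=101):
    # Minimum of E_1(s) - E_0(s) along H(s) = (1-s) H_B + s H_P.
    gaps = []
    for s in np.linspace(0.0, 1.0, num_s):
        e = np.linalg.eigvalsh((1 - s) * HB + s * HP)
        gaps.append(e[1] - e[0])
    return min(gaps)
```

For a separable toy problem Hamiltonian that counts 1-bits, $H(s)$ factorises per qubit and the gap is $\sqrt{(1-s)^2+s^2}$, minimised at $s=1/2$, which provides a simple correctness check.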
A notable feature of the interpolating Hamiltonian \eqref{eq:interp} is that
it is real and all of its off diagonal matrix elements are non-positive.
Hamiltonians which have this property are called stoquastic \cite{bravyi:08}. There is complexity-theoretic evidence
that some computational problems regarding the ground states of stoquastic
Hamiltonians are easier than the corresponding problems for more general
Hamiltonians \cite{Bravyi:09}. It may be the case that quantum adiabatic
algorithms using stoquastic interpolating Hamiltonians (such as the ones we
consider here) are no more powerful than classical algorithms--this remains an
intriguing open question.
An interesting question about the QAA is how it performs on
``hard'' sets of problems -- those for which all known algorithms take an
exponential amount of time. While early studies of
the QAA done on small systems ($N \leq 24$)~\cite{farhi_long:01,hogg:03}
indicated that the time required to solve one such problem might scale
polynomially with $N$, several later studies using larger system sizes gave
evidence that this may not be the case.
References ~\cite{farhi:02,farhi:08} show that adiabatic algorithms will fail
if the initial Hamiltonian is chosen poorly. Recent work has elucidated a more
subtle way in which the adiabatic algorithm can
fail~\cite{altshuler:09b,amin:09,farhi-2009,altshuler:09,FSZ10}.
The idea of these works is that a very
small gap can appear in the spectrum of the interpolating Hamiltonian due to
an avoided crossing between the ground state and another level corresponding
to a local minimum of the optimization problem. The location of these avoided
crossings moves towards $s=1$ as the system size grows. They have been called
``perturbative crossings'' because it is possible to locate them using low-order
perturbation theory. Altshuler {\it et al.}~\cite{altshuler:09} have argued
that this failure mode dooms the QAA for random instances of NP-complete
problems. However, the arguments of Altshuler {\it et al.}~have been
criticized by Knysh and Smelyanskiy~\cite{knysh:11}. The application of the
QAA to hard optimization problems has been reviewed recently in
Ref.~\cite{bapst:12}.
Young {\it et al.}~\cite{young:08,young:10} recently examined the performance
of the QAA on random instances of the constraint satisfaction problem called 1-in-3 SAT (to be
described in the next section) and showed the presence of avoided crossings
associated with very small gaps. These `bottlenecks' appear in a larger and
larger fraction of the instances as the problem size $N$ increases, indicating
the existence of a first order quantum phase transition. This leads to an
exponentially small gap for a \textit{typical} instance, and therefore also to
the failure of adiabatic quantum optimization.
It is not yet clear to what extent the above behavior found for 1-in-3 SAT is
general and whether it is a feature inherent to the QAA that will plague most
if not all problems fed into the algorithm or something more benign than this.
Previous work~\cite{jorg:08,jorg:10,jorg:10b,hen:11} had argued that a first
order quantum phase transition occurs for a broad class of random optimization
models.
In this paper we contrast the performance of the quantum adiabatic algorithm on random instances of two
combinatorial optimization problems. The first problem we consider is 3-XORSAT
on a random 3-regular hypergraph, which was studied previously in
Ref.~\cite{jorg:10}. Interestingly, although this computational problem is
classically easy--an instance can always be solved in polynomial time on a
classical computer by using Gaussian elimination--it is known that classical
algorithms that do not use linear algebra are stymied by this
problem~\cite{franz:01,xor_diff,ricci-tersenghi:11,guidetti:11}.
In Ref.~\cite{jorg:10} it was shown that the
QAA fails to solve this problem in polynomial time. In this paper we provide
more numerical evidence for this. We also furnish a duality transformation
that helps to understand properties of this model.
The second computational problem we consider is Max-Cut on a 3-regular graph.
This problem is NP-hard. However, we consider random instances, for which the computational complexity is
less well understood.
A nice feature of these problems is that the regularity of the associated hypergraphs
constrains the two ensembles of random instances. Studying the performance of the QAA
for these problems, we therefore expect to see smaller instance-to-instance
differences than for the unconstrained ensembles of instances.
We use two different methods to study the performance of the QAA. The first
method is quantum Monte Carlo simulation. It is a numerical method that
is based on sampling paths from the Taylor expansion of the partition function of the system.
Using this method we can extract, for a given instance, the thermodynamic properties
(in particular the ground state energy) as well as
the eigenvalue gap for the interpolating Hamiltonian $H(s)$.
This allows us to
investigate the size dependence of the typical minimum gap of the problem from
which we can extrapolate the large-size scaling of the computation time
$\mathcal{T}$ of the QAA.
The second approach is a quantum cavity method. It is a semi-analytical method that allows us
to compute the thermodynamic properties averaged over
the ensemble of instances in the limit $N\rightarrow \infty$.
It leads to a set of self-consistent equations that can be solved analytically in some classical
examples~\cite{cavity,mezard:03b}. However in the quantum case the equations are more complicated and are solved numerically~\cite{laumann:08,krzakala:08}. The method is not exact on general graphs. For locally tree-like random graphs,
it provides the exact solution of the problem if some assumptions on the Gibbs
measure are satisfied~\cite{cavity,mezard:03b,MM09}.
As we will discuss below the cavity method we use in this paper
gives the exact result for
3-XORSAT, while it only gives an approximation for the Max-Cut problem.
Using these methods we conclude that the quantum adiabatic algorithm fails to
solve both problems efficiently, although in a qualitatively different way.
The plan of this paper is as follows. In Section~\ref{sec:models} we describe
the two computational problems that we investigate. In Sec.~\ref{sec:method}
we discuss the methods that we use to obtain our results. These
results are presented in Sec.~\ref{sec:results} and our conclusions are
summarized in Sec.~\ref{sec:conclusions}. Some parts of this paper have
previously appeared in the PhD thesis of one of the authors \cite{gosset11}.
\section{\label{sec:models}Models}
We now discuss in detail the two computational problems 3-regular 3-XORSAT and
3-regular Max-Cut.
When studying the efficiency of the QAA numerically~\cite{farhi_long:01,young:08,young:10}, it is convenient to
consider instances with a unique satisfying assignment (USA) for reasons that
will be explained in Sec.~\ref{sec:numres}. On the other hand, the quantum
cavity method is designed to study the ensemble of random instances with no
restrictions on the number of satisfying assignments. In this section we
specify the random ensembles of instances that we investigate in this paper.
\subsection{\label{sec:XORSAT}3-regular 3-XORSAT}
The 3-XORSAT problem is a clause based constraint satisfaction problem. An
instance of such a constraint satisfaction problem is specified as a list of $M$ logical conditions
(clauses) on a set of $N$ binary variables. The problem is to determine
whether there is an assignment to $N$ bits which satisfies all $M$ clauses.
In the 3-XORSAT problem each clause involves three bits. A given clause is
satisfied if the sum of the three bits (mod 2) is a specified value (either 0
or 1, depending on the clause). We consider the ``3-regular'' case where
every bit is in exactly three clauses which implies $M= N$. This model has
already been considered by J\"org {\it et al.}~\cite{jorg:10}. The factor
graph for an instance of 3-regular 3-XORSAT is sketched in
Fig.~\ref{fig:3xor2}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{factor_graph.eps}
\caption{
Factor graph of a small part of an instance of the 3-regular 3-XORSAT
problem. In the full factor graph, each clause ($\square$) is connected to
exactly three bits ($\bigcirc$) and each bit is connected to exactly three
clauses, so there are no leaves and the graph closes up on itself. }
\label{fig:3xor2}
\vspace{-0.7cm}
\end{center}
\end{figure}
Since this problem just involves linear constraints (mod 2), the
satisfiability problem can be solved in polynomial time using Gaussian
elimination. However, it is well known that this problem presents difficulties for solvers that do not use linear algebra (see, e.g.\
Refs.~\cite{franz:01,xor_diff,ricci-tersenghi:11,guidetti:11}).
We associate each instance of 3-regular 3-XORSAT with a problem Hamiltonian
$H_P$ that acts on $N$ spins. Each clause is mapped to an operator which acts
nontrivially on the spins involved in the clause. The operator for a given
clause has energy zero if the clause is satisfied and energy equal to $1$ if
it is not, so
\begin{equation}
H_{P}=\sum_{c=1}^{N}\left(\frac{1-J_{c}\sigma_{z}^{i_{1,c}}\sigma_{z}^{i_{2,c}}\sigma_{z}^{i_{3,c}}}{2}\right)\label{eq:3REG3XOR}.
\end{equation}
Here each clause $c\in\{1,...,N\}$ is associated with the 3 bits $i_{1,c},i_{2,c},i_{3,c}$
and a coupling $J_{c}\in\{\pm1\}$ which tells us if the sum of the bits mod 2 should be $0$ or $1$ when the clause is satisfied.
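The polynomial-time solvability of this problem by linear algebra can be made concrete with a short sketch (our own illustration, not part of the paper; the function name is ours), which solves $M\vec{x}=\vec{b}$ over GF(2) by Gaussian elimination, where row $c$ of $M$ marks the bits in clause $c$ and $b_c=0$ ($1$) corresponds to $J_c=+1$ ($-1$):

```python
import numpy as np

def solve_xorsat(M, b):
    # Solve M x = b over GF(2) by Gaussian elimination.
    # M: (m, n) 0/1 numpy array, row c marking the bits in clause c;
    # b: length-m 0/1 numpy vector of required parities.
    # Returns one satisfying assignment, or None if the instance is UNSAT.
    A = np.concatenate([M % 2, (b % 2).reshape(-1, 1)], axis=1).astype(int)
    m, n = M.shape
    pivots, row = [], 0
    for col in range(n):
        pr = next((r for r in range(row, m) if A[r, col]), None)
        if pr is None:
            continue
        A[[row, pr]] = A[[pr, row]]          # move pivot into place
        for r in range(m):                   # eliminate the column elsewhere
            if r != row and A[r, col]:
                A[r] ^= A[row]
        pivots.append(col)
        row += 1
    if any(A[r, n] and not A[r, :n].any() for r in range(m)):
        return None                          # row 0...0 | 1: contradiction
    x = np.zeros(n, dtype=int)
    for r, col in enumerate(pivots):
        x[col] = A[r, n]                     # free variables default to 0
    return x
```

The elimination runs in $O(MN^2)$ time, which is the sense in which 3-XORSAT is classically easy.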
\subsubsection{Random Instances of 3-regular 3-XORSAT}
As in Ref.~\cite{jorg:10}, we consider both the random ensemble of instances
of this problem and the random ensemble of instances which have a unique
satisfying assignment (USA). In the 3-XORSAT problem as $N\rightarrow\infty$, instances with a USA are
\textit{a nonzero fraction}, about 0.285~\cite{jorg:10}, of the set of all
instances, so the random ensemble of USA instances should be a good
representation of the fully random ensemble.
All satisfiable instances (and in particular instances with a USA)
have the property that the cost function Eq.~(\ref{eq:3REG3XOR})
can be mapped unitarily into the form
\begin{equation}\label{XOR_ferro}
H_{P}=\sum_{c}\left(\frac{1-\sigma_{z}^{i_{1,c}}\sigma_{z}^{i_{2,c}}\sigma_{z}^{i_{3,c}}}{2}\right)
\end{equation}
by a product of bit flip operators.
\subsubsection{Previous Work}
Reference \cite{jorg:10} studied the performance of the QAA on the
random ensemble of instances of 3-regular 3-XORSAT using quantum cavity method
and quantum Monte Carlo simulation. They also studied the ensemble of random
instances with a USA using exact numerical diagonalization. This work gave
evidence that there is a first order quantum phase transition which occurs at
$s_{c}\approx\frac{1}{2}$ in the ground state. Their results also demonstrate
that the minimum gap is exponentially small as a function of $N$ at the
transition point.
\subsubsection{Duality Transformation}
\label{duality}
In this section we demonstrate a duality mapping for the ensemble of random
instances of 3-regular 3-XORSAT with a unique satisfying assignment. This
duality mapping explains the critical value $s_{c}=\frac{1}{2}$ of the quantum
phase transition in this model~\cite{jorg:10}. Consider the Hamiltonian
\begin{equation}
H(s)=(1-s)\sum_{i=1}^{N}\bigg(\frac{1-\sigma_{x}^{i}}{2}\bigg)+
s\sum_{c=1}^{N}\bigg(\frac{1-\sigma_{z}^{i_{1,c}}\sigma_{z}^{i_{2,c}}\sigma_{z}^{i_{3,c}}}{2}\bigg)
\label{eq:h_lambda_M} \,.
\end{equation}
Here, the first term is the beginning Hamiltonian and the second term is the
problem Hamiltonian for an instance of 3-regular 3-XORSAT with a unique
satisfying assignment. The 3-regular hypergraph specifying the instance can be
represented by a matrix $M$ where
\[ M_{ij}=\begin{cases} 1 & \text{if bit $j$ is in clause $i$,}\\ 0 & \text{otherwise,}\end{cases}\]
and where $M$ has 3 ones in each row and 3 ones in each column. The fact that
there is a unique satisfying assignment $000...0$ is equivalent to the
statement that the matrix $M$ is invertible over $\mathbb{F}_{2}$. To see this, consider the
equation (with addition mod 2)
\[
M\vec{v}=\vec{0}.
\]
This equation has the unique solution $\vec{v}=\vec{0}$ if and only if there
is a unique satisfying assignment for the given instance. This is also the
criterion for the matrix $M$ to be invertible.
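This equivalence can be spot-checked by brute force for small matrices; the sketch below (our illustration; `gf2_kernel_size` is a name we introduce) counts solutions of $M\vec{v}=\vec{0}$ mod 2 and compares with invertibility over $\mathbb{F}_2$, tested here via the parity of the integer determinant:

```python
import itertools
import numpy as np

def gf2_kernel_size(M):
    # count solutions of M v = 0 (mod 2) by brute force
    n = M.shape[1]
    return sum(1 for v in itertools.product([0, 1], repeat=n)
               if not ((M @ np.array(v)) % 2).any())

# the kernel is trivial exactly when M is invertible over F_2; for small
# integer matrices this can be tested via the parity of det(M)
rng = np.random.default_rng(0)
for _ in range(50):
    M = rng.integers(0, 2, size=(4, 4))
    unique = (gf2_kernel_size(M) == 1)
    invertible = (int(round(np.linalg.det(M))) % 2 == 1)
    assert unique == invertible
```

(The determinant of an integer matrix reduced mod 2 equals its determinant over $\mathbb{F}_2$, since the determinant is a polynomial in the entries with integer coefficients.)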
The duality that we construct shows that the spectrum of $H(s)$ is the same as
the spectrum of $H_{\text{DUAL}}(1-s)$ where $H_{\text{DUAL}}$ is obtained by
replacing the problem Hamiltonian hypergraph by its dual--that is to say, the
instance corresponding to a matrix $M$ is mapped to the instance associated
with $M^T$. The ground state energy per spin (averaged over all 3-regular
instances with a unique satisfying assignment) is symmetric about
$s=\frac{1}{2}$ and the first order phase transition observed in
Ref.~\cite{jorg:10} occurs at $s=\frac{1}{2}$. For each $c=1,...,N$
define the operator
\begin{equation} X_{c}=\sigma_{z}^{i_{1,c}}\sigma_{z}^{i_{2,c}}\sigma_{z}^{i_{3,c}}.\label{eq:Xop}
\end{equation}
We also define, for each clause $c$, a bit string $\vec{y}^{c}$ \[
\vec{y}^{c}=M^{-1}\hat{e}_{c}.\] Here $\hat{e}_{c}$ is the unit vector with
components $(\hat{e}_{c})_i=\delta_{ic}$. Note that $\vec{y}^{c}$ is the
unique bit string which violates clause $c$ and satisfies all other clauses.
Such a bit string is guaranteed to exist since $M$ is invertible. Let
$y_{i}^{c}$ denote the $i$th bit of the string $\vec{y}^{c}.$ Define, for each
$c=1,...,N$,
\begin{equation}
Z_{c}=\prod_{i=1}^{N}\left[\sigma_{x}^{i}\right]^{y_{i}^{c}}.\label{eq:zed_c}
\end{equation}
Note that \[ \{Z_{c},X_{c}\}=0\] and \[ [Z_{c},X_{c^{\prime}}]=0\text{ for
}c\neq c^{\prime}.\] For each bit $i=1,...,N$ let $c_{1}(i),c_{2}(i),c_{3}(i)$
be the clauses which bit $i$ participates in. Then
\begin{equation}
\sigma_{x}^{i}=Z_{c_{1}(i)}Z_{c_{2}(i)}Z_{c_{3}(i)}.\label{eq:xop}
\end{equation}
This follows from the fact that \[
M\hat{e}_{i}=\hat{e}_{c_{1}(i)}+\hat{e}_{c_{2}(i)}+\hat{e}_{c_{3}(i)}\] and so
\begin{eqnarray*}
\hat{e}_{i} & = & M^{-1}\left(\hat{e}_{c_{1}(i)}+
\hat{e}_{c_{2}(i)}+\hat{e}_{c_{3}(i)}\right)\\
& = & \vec{y}^{c_{1}(i)}+\vec{y}^{c_{2}(i)}+\vec{y}^{c_{3}(i)}.
\end{eqnarray*}
The above equation and the definition Eq.~(\ref{eq:zed_c}) show
Eq.~(\ref{eq:xop}). Now using Eqs.~(\ref{eq:xop}) and~(\ref{eq:Xop}) write
\begin{eqnarray}
H(s)& = & (1-s)\sum_{i=1}^{N}\bigg(\frac{1-\sigma_{x}^{i}}{2}\bigg) \nonumber \\
&+ &s\sum_{c=1}^{N}\bigg(\frac{1-\sigma_{z}^{i_{1,c}}\sigma_{z}^{i_{2,c}}
\sigma_{z}^{i_{3,c}}}{2}\bigg) \nonumber \\
& = & (1-s)\sum_{i=1}^{N}\bigg(\frac{1-Z_{c_{1}(i)}Z_{c_{2}(i)}Z_{c_{3}(i)}}{2}\bigg) \nonumber \\
&+& s\sum_{c=1}^{N}\bigg(\frac{1-X_{c}}{2}\bigg) \label{final_eq} \,.
\end{eqnarray}
The $X$ and $Z$ operators satisfy the same commutation relations as the
operators $\sigma_{x}$ and $\sigma_{z}$. Comparing Eq.~(\ref{final_eq}) with
Eq.~(\ref{eq:h_lambda_M}) we conclude that the spectrum of $H(s)$ is the same
as the spectrum of $H_{\text{DUAL}}(1-s)$. This result can be thought of as an
extension of the duality of the one-dimensional random Ising model in a
transverse field, see e.g.\ Ref.~\cite{fisher:95}.
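The duality can be verified directly by exact diagonalization for small systems. The sketch below is our own illustration (the helpers `op` and `h_interp` are names we introduce); since the algebra above only uses the invertibility of $M$, we test with a small invertible 0/1 matrix rather than a genuinely 3-regular one:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def op(single, site, n):
    # embed a single-qubit operator at position `site` in an n-qubit register
    m = np.array([[1.]])
    for k in range(n):
        m = np.kron(m, single if k == site else I2)
    return m

def h_interp(M, s):
    # H(s) = (1-s) sum_i (1 - sigma_x^i)/2 + s sum_c (1 - prod_{j in c} sigma_z^j)/2
    n = M.shape[0]
    d = 2 ** n
    H = np.zeros((d, d))
    for i in range(n):
        H += (1 - s) * (np.eye(d) - op(X, i, n)) / 2
    for c in range(n):
        P = np.eye(d)
        for j in range(n):
            if M[c, j]:
                P = P @ op(Z, j, n)
        H += s * (np.eye(d) - P) / 2
    return H

# a small 0/1 matrix that is invertible over F_2 (unit upper triangular)
M = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])
s = 0.3
e1 = np.sort(np.linalg.eigvalsh(h_interp(M, s)))
e2 = np.sort(np.linalg.eigvalsh(h_interp(M.T, 1 - s)))
assert np.allclose(e1, e2)   # spec H(s) = spec H_DUAL(1-s)
```

The check confirms the spectra of $H(s)$ and of the dual Hamiltonian at $1-s$ agree to machine precision.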
In Fig.~\ref{fig:3xor1} we show the first four energy levels of the
interpolating Hamiltonian $H(s)=(1-s)H_B+sH_P$ as a function of $s$ for one
16-bit instance of 3-XORSAT. The duality transformation means that these energy
levels, with $s$ replaced by $1-s$, are those of the interpolating Hamiltonian
$H'(s)=(1-s)H_B+sH_{P,\text{DUAL}}$ which involves the dual instance. Evident from
the figure is the apparent symmetry of the energy levels around $s=1/2$. In
this case the instance and its dual are similar from the point of view of
the QAA.
The duality argument given here has implications for the phase transition which occurs in the ensemble of random instances of 3-regular 3-XORSAT as $N\rightarrow\infty$. Our numerics in Sec.~\ref{sec:results} show that for large $N$, the ground state energy per spin as a function of $s$ (averaged over the ensemble of instances) has a nonzero derivative as $s\rightarrow\frac{1}{2}$. The duality transformation implies that this curve is symmetric about $s=\frac{1}{2}$. So there is a discontinuity in the derivative of this curve at $s=\frac{1}{2}$, which is associated with a first order phase transition.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{N16_M16_1.eps}
\caption{(Color online)
First four energy levels of the interpolating Hamiltonian for a 16-bit
instance of the 3-regular 3-XORSAT problem. The energy curves for this
instance are close to being symmetric about $s =1/2$. Our duality
transformation means that sending $s\rightarrow(1-s)$ we obtain the spectrum
of the interpolating Hamiltonian for a different instance from the
same ensemble, obtained by
interchanging the clauses and bits.}
\label{fig:3xor1}
\vspace{-0.7cm}
\end{center}
\end{figure}
\subsection{3-regular Max-Cut}
The second model we discuss is also a clause based problem. The instances we
consider are not satisfiable and we are interested in finding the assignment
which gives the maximum number of satisfied clauses. We view this problem as
minimizing a cost function that computes the number of unsatisfied clauses.
The 3-regular Max-Cut problem is defined on $N$ bits, and each bit appears in
exactly three clauses. Each clause involves two bits and
is satisfied if and only if the sum of the two bits (modulo 2) is 1. The
number of clauses is therefore $M=3N/2$. The problem Hamiltonian is
\begin{equation}
H_{P}=\sum_{c}\left(\frac{1+\sigma_{z}^{i_1,c}\sigma_{z}^{i_2,c}}{2}\right) \,.
\label{eq:hamMaxCut}
\end{equation}
The ground state of this Hamiltonian encodes the solution to the Max-Cut
problem.
The model can also be viewed as an antiferromagnet on a 3-regular random
graph. Because the random graph in general has loops of
odd length, it is not possible to satisfy all of the clauses.
The Max-Cut problem is NP-hard and accordingly there is no known classical
polynomial time algorithm which computes the ground state energy of the
problem Hamiltonian \eqref{eq:hamMaxCut}. Indeed, even achieving a certain
approximation to the ground state energy is hard, which follows from the fact
that it is NP-hard to approximate the Max-Cut of $3$-regular graphs to within
a multiplicative factor $0.997$~\cite{berman:99}. Interestingly, however,
there is a classical polynomial time algorithm which achieves an approximation
ratio of at least $0.9326$~\cite{halperin:04}.
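For intuition about the cost function Eq.~(\ref{eq:hamMaxCut}), the smallest 3-regular graph, the complete graph $K_4$, can be minimized by brute force over all $2^N$ assignments (our illustration; note that $K_4$ is a fixed small graph, not a sample from the random ensemble studied below):

```python
import itertools

# K4, the complete graph on four vertices, is the smallest 3-regular graph
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def unsat(bits):
    # number of unsatisfied clauses = edges whose endpoints get equal bits
    return sum(1 for i, j in edges if bits[i] == bits[j])

energies = [unsat(b) for b in itertools.product([0, 1], repeat=4)]
ground = min(energies)
assert ground == 2   # the best (2-2) cut crosses 4 of the 6 edges
```

Because of the odd triangles in $K_4$, two clauses must remain unsatisfied in any assignment, illustrating why these instances are never fully satisfiable.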
\subsubsection{Random Instances of 3-regular Max-Cut}
Using the quantum cavity method we study the ensemble of random instances of
3-regular Max-Cut.
The random instances we studied using quantum Monte Carlo simulation were restricted
to those which have exactly 2 minimal energy states (note that this is the
smallest number possible since the problem is symmetric under flipping all the
spins) and for which the ground state energy of the problem Hamiltonian is equal to
$\frac{1}{8}N$. We choose to study instances with a unique ground state (up to the bit-flip symmetry of
this problem) because it is numerically more convenient for the extraction of the relevant gap (to
the first even state). For the range of sizes studied,
$\frac{1}{8}N$ was found
numerically to be the most probable value of the ground state energy. The restriction to instances with this fixed ground-state energy ($\frac{1}{8}N$) further reduces the
instance-to-instance fluctuations. However, this choice affects the ensemble averaged
value of thermodynamic observables (e.g. the average energy of fully random
instances is different from $N/8$), making it more difficult to compare the quantum Monte Carlo results
with our quantum-cavity results on the fully random ensemble. We expect (and find
numerically) that this set of instances makes up an exponentially small
fraction of the whole random ensemble for large $N$.
\subsubsection{Previous work}
Laumann et al.~\cite{laumann:08}
used the quantum cavity method to study the transverse field spin glass with
the problem Hamiltonian
\begin{equation}
H_{P}=\sum_{c}
\left(\frac{1+J_{c}\sigma_{z}^{i_1, c}\sigma_{z}^{i_2, c}}{2}\right)\label{eq:classical_spinglass}
\end{equation}
where each $J_{c}$ is chosen to be $+1$ or $-1$ with equal probability.
In general there is no {}``gauge transformation'' equivalence between this problem Hamiltonian and the
antiferromagnetic problem Hamiltonian
Eq.~(\ref{eq:hamMaxCut}). However we do expect these models to exhibit
similar properties since a random graph is locally tree-like, and on a tree
such a gauge transformation does exist, see Ref.~\cite{zdeborova:09} for a
discussion of this point in the case where there is no transverse field
present.
Laumann et al. found that this system exhibits a second order phase transition
as a function of the transverse field. Their method is similar to the
quantum cavity method that we use, although the numerics performed in
Ref.~\cite{laumann:08} have some systematic errors which our calculations
avoid. The method used in Ref.~\cite{laumann:08} is a discrete imaginary time
formulation of the quantum cavity method which has nonzero Trotter error,
whereas our calculation works in continuous imaginary time~\cite{krzakala:08}
where this source of error is absent. Our calculation also does not use the
approximation used in Ref.~\cite{laumann:08} where the ``effective action'' of
a path in imaginary time is truncated at second order in a cluster expansion.
\section{\label{sec:method}Method}
\subsection{\label{sec:numres}Quantum Monte Carlo}
The complexity of the QAA algorithm is determined by
the size dependence of the ``typical'' minimum gap of the problem. Following
Refs.~\cite{young:08, young:10, hen:11}, we analyze the size-dependence of
these gaps by
considering (typically) 50 instances for each size, and then extracting the
minimum gap for each of them. For each instance, we perform quantum Monte
Carlo simulations for a range of $s$ values and hunt for the minimum gap. We
then take the median value of the minimum gap among the different instances
for a given size to obtain the ``typical'' minimum gap. In situations where
the distribution of minimum gaps is very broad, the average can be dominated
by rare instances which have a much bigger gap than the typical one, and so
the typical value (characterized, for example, by the median) is a better
measure than the
average. This has been discussed in detail by Fisher~\cite{fisher:95} who
solved the random transverse field model in one dimension exactly, and found
that the average gap at the quantum critical point vanishes polynomially with
$N$ while the typical gap has stretched exponential behavior,
$\exp(- c N^{1/2})$.
Quantum Monte Carlo simulation works by sampling random variables from a probability distribution (over some configuration space) which contains information about the quantum system of interest. The probability distribution is sampled by Markov chain Monte Carlo, and properties of the quantum system to be studied are obtained as expectation values. Different quantum Monte Carlo methods are based on different ways of associating probability distributions to a quantum system.
In our simulations we use a quantum Monte Carlo technique known as the stochastic series expansion (SSE) algorithm~\cite{sandvik:99,sandvik:92}. In this method, the probability distribution associated with the quantum system is derived from the Taylor series expansion of the partition function ${\text Tr}[{\text e}^{-\beta \hat{H}}]$ at inverse temperature $\beta$. This is in contrast to other quantum Monte Carlo techniques which are based on the path integral expansion of the partition function. Whereas some of these techniques have systematic errors because the path integral expansions used are inexact, the SSE that we use has no such systematic error.
A second feature of the SSE method that we use is that the Markov chain used
to sample configurations allows global, as well as local, updates,
which leads to faster equilibration. We further speed up equilibration by
implementing ``parallel tempering''~\cite{hukushima:96}, where simulations for
different values of $s$ are run in parallel and spin configurations with
adjacent values of $s$ are swapped with a probability satisfying the detailed
balance condition. Traditionally, parallel tempering is performed for systems
at different temperatures, but here the parameter $s$ plays the role of
(inverse) temperature~\cite{sengupta:02}.
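A standard parallel-tempering swap move can be sketched as follows (our illustration; the function names are ours). We write it for the usual case of replicas at different inverse temperatures with Boltzmann weights; in the simulations described above the parameter $s$ plays this role, with the corresponding SSE weights replacing the Boltzmann factors:

```python
import math
import random

def swap_accept(beta_a, beta_b, E_a, E_b):
    # Metropolis acceptance probability, satisfying detailed balance, for
    # exchanging the configurations held at inverse temperatures beta_a, beta_b
    x = (beta_a - beta_b) * (E_a - E_b)
    return 1.0 if x >= 0 else math.exp(x)

def maybe_swap(replicas, k, rng=random):
    # replicas: list of (beta, energy, config); attempt a k <-> k+1 exchange
    (ba, Ea, ca), (bb, Eb, cb) = replicas[k], replicas[k + 1]
    if rng.random() < swap_accept(ba, bb, Ea, Eb):
        replicas[k], replicas[k + 1] = (ba, Eb, cb), (bb, Ea, ca)
```

Swaps are attempted only between adjacent replicas so that the acceptance rate stays reasonable.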
The details of our implementations of the SSE method for 3-XORSAT and Max-Cut are slightly different and are further discussed
in Appendix~\ref{app:details_QMC}.
Moreover, the Max-Cut problem cannot be simulated with the `traditional' SSE method because of its symmetry under flipping all of the spins:
for a given instance of Max-Cut, every eigenstate of the interpolating Hamiltonian $H(s)$ is either even or odd under this symmetry transformation. Since the ground state is even, here we are interested in the eigenvalue gap to the first even excited state. We therefore design our quantum Monte Carlo simulation so that it works in the subspace of even states.
The modified algorithm is detailed in Appendix~\ref{app:projSSE}.
In our simulations, the gap for a given instance and a given $s$ value is
extracted by analyzing measurements of imaginary-time-dependent correlation
functions of the type
\begin{equation}
C_A(\tau) = \langle \hat{A}(\tau) \hat{A}(0) \rangle -\langle \hat{A} \rangle^2 \,,
\end{equation}
where the operator $\hat{A}$ is some measurable physical quantity.
It is useful to optimize the choice of correlation functions such that
the contribution from the first excited state, $m=1$ in Eq.~\eqref{Ctau} below, is as
large as possible relative to the contributions from higher excited
states. One way of doing this, which was used in some of the runs,
is described in Ref.~\cite{hen:12}.
The evaluation of $\langle A \rangle^2$ in the above equation is computed from
the product $\langle A \rangle^{(1)} \langle A \rangle^{(2)}$ where the two
indices correspond to different independent simulations of the same system.
This eliminates the bias stemming from straightforward squaring of the
expectation value.
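Why squaring a single run's estimate is biased, while the product of two independent runs is not, can be seen exactly by enumeration in a toy example (ours, not from the paper): the sample mean of $n$ fair coin flips has $E[\bar{x}^2]=\mu^2+\mathrm{Var}/n$, whereas $E[\bar{x}_1\bar{x}_2]=\mu^2$ for independent runs:

```python
import itertools
from fractions import Fraction

# toy observable: the sample mean of n fair coin flips; true mean mu = 1/2
n = 3
outcomes = list(itertools.product([0, 1], repeat=n))
p = Fraction(1, 2 ** n)
mean = lambda xs: Fraction(sum(xs), len(xs))

# squaring a single run's estimate: biased upward by Var/n
e_sq = sum(p * mean(o) ** 2 for o in outcomes)
# product of the estimates from two independent runs: unbiased for mu^2
e_prod = sum(p * p * mean(o1) * mean(o2)
             for o1 in outcomes for o2 in outcomes)

assert e_prod == Fraction(1, 4)                      # = mu^2
assert e_sq == Fraction(1, 4) + Fraction(1, 4) / n   # = mu^2 + Var/n
```

Exact rational arithmetic makes the bias term $\mathrm{Var}/n$ visible without statistical noise.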
In the low temperature limit, $T \ll \Delta E_1$ where $\Delta E_1 = E_1 - E_0$,
the system is in its ground state so
the imaginary-time correlation function is given by
\begin{equation}
C_A(\tau) = \sum_{m\geq 1} |\langle 0 | \hat{A} | m \rangle|^2
\left( e^{-\Delta E_m \tau} + e^{-\Delta E_m (\beta - \tau)} \right) \,,
\label{Ctau}
\end{equation}
where $\Delta E_m = E_m - E_0$. At
long times, $\tau$, the correlation function is dominated by the
smallest gap $\Delta E_1$
(as long as the
matrix element $|\langle 0 | \hat{A} | 1 \rangle|^2$ is nonzero). On a log-linear plot,
$C_A(\tau)$ then has a region where it is a straight line whose slope
is the negative of the gap. The gap can therefore be extracted by
linear fitting.
A more detailed description of the method may be found in Ref.~\cite{hen:12}.
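The fitting step can be sketched on synthetic data of the form of Eq.~(\ref{Ctau}) (the gaps and matrix elements below are made-up numbers, and $\beta$ is taken large enough that the $e^{-\Delta E_m(\beta-\tau)}$ images are negligible):

```python
import numpy as np

# synthetic correlator with two excited states
gap, gap2 = 0.30, 1.10      # made-up gaps dE_1 < dE_2
w1, w2 = 0.8, 0.2           # made-up matrix elements |<0|A|m>|^2
tau = np.arange(0.0, 30.0, 0.5)
C = w1 * np.exp(-gap * tau) + w2 * np.exp(-gap2 * tau)

# on a log-linear plot the late-time region is a straight line of slope -dE_1
window = tau > 12
slope, _ = np.polyfit(tau[window], np.log(C[window]), 1)
assert abs(-slope - gap) < 1e-3
```

Choosing the fit window at late times suppresses the contamination from higher excited states, which decays as $e^{-(\Delta E_2-\Delta E_1)\tau}$.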
\subsection{The quantum cavity method}
The quantum cavity method~\cite{krzakala:08,laumann:08} is a technique that
is used to study thermodynamic properties of transverse field spin
Hamiltonians. In our implementation we use the continuous imaginary time method
from Ref.~\cite{krzakala:08}. Quantum cavity methods have now been used to study a number of problems including the
ferromagnet on the Bethe lattice in uniform~\cite{krzakala:08} and random~\cite{dimitrova:2011} transverse field,
the spin glass on
the Bethe lattice~\cite{laumann:08}, 3-regular 3-XORSAT~\cite{jorg:10}, and the quantum Biroli-M\'ezard model~\cite{2011PhRvB..83i4513F}.
If the Hamiltonian is a two-local transverse field Hamiltonian on a finite
number of spins and if the interaction graph consists of a tree (i.e.\ if there
are no loops) then the quantum cavity equations are exact. In this case the
quantum cavity equations are a closed set of equations that exactly
characterize the thermodynamic properties of the system at a fixed inverse
temperature $\beta$. If instead the interaction graph is a random regular
graph with a finite number of spins then it must have loops. As $N\rightarrow \infty$ we can think of it as an ``infinite tree'' since the typical
size of loops in such a graph diverges.
Quantum
cavity methods (such as the one we use) for problems defined on random
regular graphs make use of two properties of the system:
{\it (i)} the fact that a random
regular graph is locally tree-like;
{\it (ii)} the fact that spin-spin correlations decay quickly as a function of distance.
While the first property is true with probability 1 for random regular graphs when $N\to\infty$,
the second property is not always true and we now discuss it more carefully.
The simplest case is when the Gibbs measure is characterized by a single pure state that has
the clustering property, as in a paramagnetic phase. This happens at high enough temperature
or large enough transverse field. In this case, correlations decay exponentially and the simplest version
of the cavity method (the so-called ``replica symmetric (RS)'' cavity method) gives the exact result.
Upon lowering the temperature or the transverse field, a phase transition towards a more complicated
phase can be encountered. If this phase is a standard broken-symmetry phase (e.g. a ferromagnetic
phase), then correlation decay holds provided one adds an infinitesimal symmetry-breaking field,
and the RS cavity method still provides the exact result~\cite{krzakala:08}.
However, if the transition is to a spin glass phase, then the Gibbs measure is
split into a large number of states, and the decorrelation property that is
required by the cavity method only holds within each state. In this case,
there is no explicit symmetry breaking, therefore the states cannot be
selected by adding an infinitesimal external field. It turns out that refined
versions of the cavity method must be used, which are based on assumptions about
the structure of these states~\cite{MM09}. The simplest assumption is that
states are distributed in a uniform way in the phase space of the system; this
leads to the so-called ``1 step replica symmetry breaking (1RSB)''
approximation. In more complicated cases, states might be arranged in
``clusters'' leading to a hierarchical organization, and this requires further
steps of RSB~\cite{mezard:87}. A consistency check can be performed within the
method to determine whether a given RSB scheme gives the exact result or whether
further RSB steps are required.
For the XORSAT problem on random regular graphs, it can be rigorously shown
that the 1RSB scheme gives the exact result in the classical
case~\cite{xor_1,xor_2}, and it has been conjectured that the same is true for
the quantum problem in transverse field~\cite{jorg:10}. For the specific case
of 3-regular 3-XORSAT investigated here, a RS calculation is enough to get the
thermodynamic properties~\cite{jorg:10}, which is why we use the RS method
in our simulations of this model. We study this problem at a temperature low enough that no residual
temperature dependence of the energy is observed.
Furthermore, we set the parameters of our calculation to be more computationally demanding than those of Ref.~\cite{jorg:10}, which allows us to achieve better precision.
The study of 3-regular Max-Cut is more involved. To understand how well the cavity method works on this problem, we can look at results obtained for the classical 3-regular spin glass with Hamiltonian \eqref{eq:classical_spinglass}. We expect that these problems have very similar (possibly identical) thermodynamic properties~\cite{zdeborova:09}. For the classical 3-regular spin glass it can be shown that neither the RS nor the 1RSB cavity method gives exact results~\cite{cavity,mezard:03b,zdeborova:09}, and it is widely believed that
an infinite number of RSB steps is required (this has been shown rigorously
for the $z$-regular spin glass as $z\to\infty$, which corresponds to the
Sherrington-Kirkpatrick model~\cite{mezard:87}). However the 1RSB calculation
gives a good approximation to the classical ground state
energy~\cite{mezard:03b}.
To study 3-regular Max-Cut we used the 1RSB quantum cavity method with Parisi
parameter $m=0$. This level of approximation is more accurate than the replica symmetric approximation but less accurate than the full 1RSB calculation. With this method we studied the $N\rightarrow\infty$ limit of the random ensemble of 3-regular Max-Cut Hamiltonians
\[
H(\lambda)=\sum_{c}\sigma_z^{i_1,c}\sigma_z^{i_2,c}-\lambda\sum_{i=1}^{N}\sigma_x^{i}.
\]
We ran our calculations at finite inverse temperature
$\beta=4$ and various values of the transverse field $\lambda$. Thermodynamic
expectation values with respect to $H(\lambda)$ at inverse temperature $\beta$
can be related to thermodynamic expectation values with
respect to $H(s)=(1-s)H_B+sH_P$ at $s=\frac{1}{1+\lambda}$ and inverse temperature
$\beta'=\frac{2\beta}{s}$. Note that there is an $s$ dependence introduced in
the temperature when relating these two thermodynamic ensembles.
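The mapping between the two ensembles can be checked by exact diagonalization on a toy system (our illustration: a single clause on two spins; the helper `thermal` is a name we introduce). Up to additive constants, $\beta' H(s) = \beta H(\lambda)$ when $s=1/(1+\lambda)$ and $\beta'=2\beta/s$, so thermal expectation values agree:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.diag([1., -1.])
I = np.eye(2)

def thermal(H, beta, A):
    # <A> = Tr[A e^{-beta H}] / Tr[e^{-beta H}] by exact diagonalization
    w, v = np.linalg.eigh(H)
    rho = v @ np.diag(np.exp(-beta * (w - w.min()))) @ v.T
    return np.trace(A @ rho) / np.trace(rho)

# a single Max-Cut clause on two spins
ZZ = np.kron(sz, sz)
SX = np.kron(sx, I) + np.kron(I, sx)
A = np.kron(sx, I)

lam, beta = 0.5, 4.0
H_lam = ZZ - lam * SX                  # H(lambda) for one clause
s = 1.0 / (1.0 + lam)                  # s = 1/(1 + lambda)
beta_p = 2.0 * beta / s                # beta' = 2 beta / s
H_B = (2 * np.eye(4) - SX) / 2         # sum_i (1 - sigma_x^i)/2
H_P = (np.eye(4) + ZZ) / 2             # (1 + sigma_z sigma_z)/2
H_s = (1 - s) * H_B + s * H_P

assert np.isclose(thermal(H_lam, beta, A), thermal(H_s, beta_p, A))
```

The additive constants dropped in relating the two Hamiltonians shift the free energy but cancel in every expectation value, which is why the check succeeds to machine precision.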
\section{\label{sec:results}Results}
In what follows we present the results of the QMC simulations alongside those
of the quantum cavity method for the two problems we study here. We show that
the QAA fails with the choice of interpolating Hamiltonians discussed
previously; for both problems the running time appears to be exponentially long as a
function of the problem size. However, the reasons for this failure are
different for each of the models.
\subsection{Random 3-regular 3-XORSAT}
The 3-regular 3-XORSAT problem was studied by J\"org {\it et
al.}~\cite{jorg:10} who determined the minimum gap for sizes up to $N= 24$.
Here, we extend the range
of sizes up to $N= 40$ by quantum Monte Carlo simulations. The two sets of
results agree and provide compelling evidence for an exponential minimum gap.
The duality argument in Sec.~\ref{duality} shows that the quantum phase
transition occurs exactly at $s = s_c = 1/2$. Our numerics show that the phase transition is strongly first
order, in agreement with Ref.~\cite{jorg:10}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{3XOR_DE_min_median.eps}
\caption{(Color online)
Median minimum gap as a function of problem size of the 3-regular 3-XORSAT
problem on a log-linear scale. The straight-line fit is good, indicating
an exponential dependence which in turn leads to an exponential complexity of
the QAA for this problem. Triangles indicate exact-diagonalization results
while the circles are the results of QMC simulations.}
\label{fig:3xorDE}
\vspace{-0.7cm}
\end{center}
\end{figure}
We show results for the median minimum gap as a function of size for the
3-regular 3-XORSAT problem in Fig.~\ref{fig:3xorDE} on a log-linear scale. A
straight-line fit works well, which provides evidence that
the minimum gap is exponentially small in the system size. The results shown here generalize and
agree with those obtained by J\"org {\it et al.}~\cite{jorg:10}. While
Ref.~\cite{jorg:10} computed the average
minimum gap and we computed the median, the
difference here is very small because the distribution of minimum gaps is
narrow for this problem, see for example Fig.\ 21 of Ref.~\cite{bapst:12}.
We also computed some ground-state properties of the model: the energy
$\langle \hat{H} \rangle$, the magnetization along the $x$-axis $M_x=\frac1{N}
\sum_i \langle \sigma_i^x \rangle$, and the spin-glass order parameter defined
by:
\begin{equation}
\label{eq:q}
q= \frac1{N} \sum_i \langle \sigma_i^z \rangle^2 \,.
\end{equation}
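As an illustration (not part of the original analysis), the order parameter in Eq.~\eqref{eq:q} is a simple average over per-site expectation values; a minimal sketch, assuming the measured $\langle \sigma_i^z \rangle$ values are available as a list (the function name and input format are ours):

```python
def spin_glass_q(sz):
    """Spin-glass order parameter q = (1/N) sum_i <sigma_i^z>^2.

    `sz` is a list of per-site expectation values <sigma_i^z>,
    e.g. as measured in a QMC simulation (hypothetical input format).
    """
    return sum(m * m for m in sz) / len(sz)
```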
These quantities, averaged over 50 instances for each size,
are plotted in Figs.~\ref{fig:E03XOR},
\ref{fig:Sx3XOR}, and~\ref{fig:q3XOR}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{E0_combine3.eps}
\caption{(Color online)
Mean energy (averaged over 50 sample instances per size) of the 3-regular
3-XORSAT problem as a function of the adiabatic parameter $s$ for different
sizes (QMC results) compared with the RS quantum cavity
calculations. Because of the duality of the model, the true curve (averaged
over all instances at a given value of $N$) is symmetric about $s=1/2$. The
main panel shows a blowup near the symmetry point $s=1/2$. In the inset, the
entire range is shown.}
\label{fig:E03XOR}
\vspace{-0.7cm}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{Sx.eps}
\caption{(Color online) Magnetization along the $x$-axis,
$M_x = N^{-1} \sum_i \langle \sigma^x_i \rangle$, as a function of the
adiabatic parameter $s$ for the 3-regular 3-XORSAT problem. Results obtained both by QMC and the cavity method are shown. The latter
indicates a sharp discontinuity at $s = s_c = 1/2$.
The slope of the QMC results at $s=1/2$ increases with increasing $N$,
consistent with a discontinuity at $N = \infty$.}
\label{fig:Sx3XOR}
\vspace{-0.7cm}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{q.eps}
\caption{(Color online)
The spin-glass order parameter $q$ as defined in Eq.~\eqref{eq:q} as a function
of the adiabatic parameter $s$ for the 3-regular 3-XORSAT problem. The rapid
change for large sizes around $s=1/2$
indicates a first-order quantum phase transition at this value of $s$.}
\label{fig:q3XOR}
\vspace{-0.7cm}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{E0_extrap_s_050_fit_3points_b.eps}
\caption{(Color online)
Extrapolation of the energy values as given by the QMC method (solid line) for
different system sizes at $s=1/2$ as compared to the value given by the cavity
method (which is for $N = \infty$) for the 3-regular 3-XORSAT problem, assuming a $1/N$ dependence. Extrapolating the QMC results to $N = \infty$
seems to give a result consistent with the cavity value.
}
\label{fig:extrap_en3XOR}
\vspace{-0.7cm}
\end{center}
\end{figure}
Figure \ref{fig:E03XOR} shows that for large system sizes differences between the QMC results for the
ground state energy and the (replica symmetric) cavity results
are small. Reference \cite{jorg:10} has argued that the replica symmetric
(RS) cavity method is actually exact for the thermodynamic properties of the
3-regular 3-XORSAT problem. To check this, in Fig.~\ref{fig:extrap_en3XOR} we
have expanded the vertical scale and show an extrapolation of the QMC results
to $N=\infty$ at the critical value $s=s_c=1/2$ (where finite-size corrections
are largest). The extrapolated value appears to be consistent
with the cavity result.
The rapid variation of $M_x$ and $q$ shown in Figs.~\ref{fig:Sx3XOR}
and~\ref{fig:q3XOR} in the vicinity of $s_c = 1/2$ is evidence for a
first-order transition. Figure
\ref{fig:Sx3XOR} also shows a discontinuity in the $x$-axis
magnetization predicted by the cavity calculations at $s=\frac{1}{2}$. In the quantum Monte Carlo data we see that the slope of the
magnetization increases with $N$ and is therefore consistent with the cavity
prediction for $N\rightarrow \infty$.
\subsection{Random 3-regular Max-Cut}
In the Monte Carlo simulations of the Max-Cut problem we
restrict ourselves to instances for which the problem Hamiltonian has a ground
state degeneracy of two, and for which the ground state energy is $N/8$. For this
ensemble of instances, we measured the
energy, the $x$-magnetization and the spin-glass order parameter using quantum
Monte Carlo simulations. Because of the bit-flip symmetry of the
model, we use the following different definition of the spin glass order
parameter:
\begin{equation}
\label{eq:q2}
q'= \left({1\over N(N-1)}\, \sum_{i\ne j} \langle \sigma_i^z \sigma_j^z
\rangle^2 \right)^{1/2}\,.
\end{equation}
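Unlike Eq.~\eqref{eq:q}, this estimator is built from two-point functions and so does not vanish under the global bit-flip symmetry. A minimal sketch (our own code; the matrix input format is an assumption):

```python
def spin_glass_qprime(czz):
    """q' of Eq. (eq:q2): RMS of the off-diagonal correlators
    <sigma_i^z sigma_j^z>, insensitive to the global bit-flip
    symmetry of the Max-Cut Hamiltonian.

    `czz` is an N x N matrix (list of lists) of measured two-point
    functions (hypothetical input format).
    """
    n = len(czz)
    total = sum(czz[i][j] ** 2
                for i in range(n) for j in range(n) if i != j)
    return (total / (n * (n - 1))) ** 0.5
```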
\begin{figure}[H]
\begin{center}
\includegraphics[width=\columnwidth]{qMax_both.eps}
\caption{(Color online)
The spin-glass order parameter $q'$ obtained from Monte Carlo
simulations, obtained from Eq.~(\ref{eq:q2}), as
a function of the adiabatic parameter $s$ for the Max-Cut problem. Also shown
is the value of $\bar{q}$ from the cavity calculation, which is defined differently as discussed in the text. The inset shows a
global view over the whole range of $s$, indicating large differences between
the Monte Carlo and cavity calculations for large $s$. This may be due to the
different ensembles used in the two calculations, as discussed in the text.
}
\label{fig:qMax}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=\columnwidth]{SxMax.eps}
\caption{(Color online)
Magnetization along the $x$ direction, $M_x = N^{-1}
\sum_i \langle \sigma^x_i \rangle$, as a function of the adiabatic
parameter $s$ for the Max-Cut problem.}
\label{fig:SxMax}
\end{center}
\end{figure}
In Figs.~\ref{fig:qMax}, \ref{fig:SxMax} and \ref{fig:E0Max} we compare the QMC results with those of our quantum cavity method computation. Recall that the cavity method
results apply to the random ensemble of instances. Formally the value of the spin glass parameter $q$ from Eq.~\eqref{eq:q} is zero due to the bit flip symmetry of the Hamiltonian. However, the cavity method works in the thermodynamic limit in which this symmetry is spontaneously broken for $s$ greater than the critical value $s_c$. For the cavity calculations, we measure a spin glass order parameter $\bar{q}$ which is the
magnetization squared for each thermodynamic ``state'', averaged over the
states. This becomes non-zero for $s > s_c \simeq 0.36$ as shown in
Fig.~\ref{fig:qMax}. The Monte Carlo results are consistent with this value
of $s_c$. The inset of Fig.~\ref{fig:qMax} shows a global view of the two
spin glass order parameters that we measure, over the whole range of $s$.
There we see differences between the (different) order parameters measured
using Monte Carlo and cavity calculations for large $s$. Note that the Monte
Carlo simulations take only instances with a doubly degenerate ground state
for the problem Hamiltonian, $s=1$, so $q'=1$ in this limit, whereas the cavity
calculations are done for the random ensemble where the instances have much
larger degeneracy at $s=1$ and so $\bar{q} < 1$ in this limit.
Our numerical results for the $x$-component of the magnetization are shown in
Fig.~\ref{fig:SxMax}. We see no evidence of a discontinuity in this quantity at $s_c \simeq 0.36$ for large $N$. This is in contrast with the corresponding plot for 3-XORSAT in Fig.~\ref{fig:Sx3XOR}.
Results for the energy of the Max-Cut problem, obtained both from
Monte Carlo and the cavity approach are shown in Fig.~\ref{fig:E0Max}. The two
agree reasonably well but there are differences in the spin glass phase, $s >
s_c$, which may be due to the different ensembles used in the two
calculations. We also note that our cavity method
computation is performed at nonzero temperature.
Using quantum Monte Carlo simulations we have determined the energy gap as a function
of $s$ for $s$ in the range between $0.3$ (i.e.\ well below $s_c$) and $0.5$
(i.e.\ well above $s_c$) for sizes between $N = 16$ and $160$. For the smaller
sizes we find a single minimum in this range, which lies a little above $s_c$.
However, for larger sizes, we see a fraction of instances in which there is a
minimum close to $s_c$ and a second, deeper, minimum for $s > s_c$ well inside the spin-glass phase. A set
of data which shows two minima is presented in Fig.~\ref{fig:128gap}.
This interesting behavior of the minimum gaps suggests the following
interpretation: the minima found close to (just above) $s_c$ correspond to the
order-disorder quantum phase transition. Above $s_c$ the system is in the
spin-glass phase. The minima that are well within the spin-glass phase may
correspond to `accidental' or perturbative crossings in the spin glass.
Double-minima occurrences become more frequent as
the system size increases. While no double minima were found for
sizes $N=16, 24$ and $32$ (within the studied range of $s$ values), for sizes
$N=64, 128$ and $160$ the percentage of instances that exhibit such double
minima was found to be approximately $7\%$, $36\%$ and $40\%$, respectively
(obtained from $\sim 50$ instances for each size).
We have therefore performed two analyses on the data for the gap. In the first
analysis we determine the global minimum (for the range of $s$ studied) for
each instance and determine the median over the instances. There are about 50
instances for each size. This data is presented in Table~\ref{table:gap1}, and is plotted in Fig.~\ref{fig:maxcutDE}
both as log-lin (main figure) and log-log (inset). A straight line fit works
well for the log-lin plot (goodness of fit parameter $Q = 0.57$), provided
that we omit the two \textit{smallest}
sizes. The goodness of fit parameter $Q$ is the probability that, given the fit,
the data could have the observed value of $\chi^2$ or greater,
see Ref.~\cite{press:92}.
However, a straight-line fit
works much less well for the
log-log plot ($Q = 2.7 \times 10^{-3}$), again omitting the two smallest
sizes, because the data for the
\textit{largest} size lies below the extrapolation from smaller sizes.
If smaller points do not lie on the fit, it is possible that the fit is correct and the
deviations are due to corrections to scaling. However, if the largest size
shows a clear deviation then the fit cannot describe the asymptotic
large-$N$ behavior. From these fits we conclude that an exponentially decreasing gap is
preferred over a polynomial gap.
\begin{figure}[H]
\begin{center}
\includegraphics[width=\columnwidth]{E0_broadrange.eps}
\includegraphics[width=\columnwidth]{E0Max.eps}
\caption{(Color online) Energy as a function of the adiabatic parameter $s$
for the Max-Cut problem. The cavity results computed at inverse temperature $
\beta=\frac{8}{s}$ are depicted by the dashed line. The lower panel is a blow up of the
region around the maximum, which illustrates the difference between the Monte
Carlo and cavity results. Some of this
difference may be due
to the different ensembles used, as discussed in the text.
}
\label{fig:E0Max}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{128gap.eps}
\caption{(Color online) The gap to the first (even) excited state as a function of the adiabatic
parameter $s$ for one of the $N=128$ instances of the Max-Cut problem,
showing two distinct minima.
The first, higher, minimum is close to $s \approx 0.36$ (the location of the
order-disorder phase transition) while the other, lower minimum
(global in the range)
is well within the spin-glass phase.}
\label{fig:128gap}
\vspace{-0.7cm}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{DE_combined.eps}
\caption{(Color online)
Median minimum gap, on log-linear (main panel) and log-log (inset) scales, for
the 3-regular Max-Cut problem for $s$ in the range 0.3 to 0.5.
The straight-line fit on the log-linear scale (omitting the two smallest
sizes)
is a much better fit ($Q=0.57$) than that of the log-log
scale ($Q=2.7 \times 10^{-3}$), in which the two smallest sizes are also
omitted.}
\label{fig:maxcutDE}
\end{center}
\end{figure}
\begin{table}[H]
\caption{Median minimum gap for 3-regular Max-Cut (plotted in Fig.~\ref{fig:maxcutDE}).}
\centering
\begin{tabular}{c c c}
\hline\hline
$N$ & Median gap & Error \\ [0.5ex]
\hline
16 & 0.3203 & 0.0056 \\
24 & 0.2323 & 0.0057\\
32 & 0.1844 & 0.0057\\
64 & 0.1113 & 0.0058\\
128 & 0.0496 & 0.00473\\
160 & 0.0291 & 0.0049\\ [1ex]
\hline
\end{tabular}
\label{table:gap1}
\end{table}
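To illustrate the kind of straight-line fit described above (a sketch in our own code, not the analysis actually used), one can fit $\ln \Delta E$ versus $N$ to the tabulated medians by weighted least squares, omitting the two smallest sizes as in the text:

```python
import math

# (N, median gap, error) from the table above.
data = [(16, 0.3203, 0.0056), (24, 0.2323, 0.0057),
        (32, 0.1844, 0.0057), (64, 0.1113, 0.0058),
        (128, 0.0496, 0.00473), (160, 0.0291, 0.0049)]

# Fit ln(gap) = a + b*N, omitting the two smallest sizes;
# error propagation gives sigma_ln = sigma / gap.
pts = [(n, math.log(g), e / g) for n, g, e in data[2:]]

# Standard weighted least squares for a straight line y = a + b*x.
w = [1.0 / s ** 2 for _, _, s in pts]
S = sum(w)
Sx = sum(wi * n for wi, (n, _, _) in zip(w, pts))
Sy = sum(wi * y for wi, (_, y, _) in zip(w, pts))
Sxx = sum(wi * n * n for wi, (n, _, _) in zip(w, pts))
Sxy = sum(wi * n * y for wi, (n, y, _) in zip(w, pts))
delta = S * Sxx - Sx ** 2
b = (S * Sxy - Sx * Sy) / delta    # slope: decay rate of the gap
a = (Sxx * Sy - Sx * Sxy) / delta  # intercept: ln A

chi2 = sum(wi * (y - (a + b * n)) ** 2 for wi, (n, y, _) in zip(w, pts))
print(f"gap ~ {math.exp(a):.3f} * exp({b:.4f} * N), chi^2 = {chi2:.2f}")
```

For these numbers the fit gives a slope of about $-0.014$ and $\chi^2 \simeq 1.1$ for two degrees of freedom, corresponding to $Q = e^{-\chi^2/2} \simeq 0.57$, in line with the value quoted in the text for the log-linear fit.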
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{DE_str_exp_0_5.eps}
\caption{(Color online) A stretched exponential fit of the form
$A e^{-c N^{1/2}}$ to the median minimum gap data for the 3-regular Max-Cut
problem, omitting the two smallest sizes. The fit is satisfactory ($Q = 0.31$).
}
\label{fig:maxcutDE_str_exp}
\vspace{-0.7cm}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{DE_local_combined.eps}
\caption{(Color online) Median minimum gap, on log-linear (main panel) and
log-log (inset) scales, as a function of problem size for the 3-regular Max-Cut
problem. Here, the minimum gaps were taken from the vicinity of the quantum
phase transition at $s = s_c \simeq 0.36$ and are therefore not necessarily the
global minima. The two fits indicate that in this case the polynomial
dependence is more probable.}
\label{fig:maxcutLocalDE}
\vspace{-0.7cm}
\end{center}
\end{figure}
There are other possibilities for the scaling of the minimum
gap with size, in addition to polynomial or
exponential. For example, we considered a
``stretched exponential'' scaling of the form $A e^{-c N^{0.5}}$. Omitting the first two
points (as we did for the exponential fit) we find the fit is satisfactory, as shown in
Fig.~\ref{fig:maxcutDE_str_exp} ($Q = 0.31$). Hence it is possible that the minimum gap decreases as a stretched exponential.
However, if for the instances with more than one minimum, we just take the
minimum close to the critical value $s_c$, a different picture emerges, as shown in
Fig.~\ref{fig:maxcutLocalDE}. In this case, a
straight line fit works well for
the log-log plot ($Q=0.96$), but poorly for the log-lin plot
($Q = 0.016$). For consistency, we again omitted
the two smallest sizes for the log-lin plot. These
results indicate that the gap
only decreases polynomially with size near the quantum critical point.
So far we have plotted results for the \textit{median} minimum gap, which is
a measure of the \textit{typical} value. However, it is important to note
that there are large
fluctuations in the value of the minimum gap between instances. This is
illustrated in Fig.~\ref{fig:scatter} which presents the values of the minimum
gap for all 47 instances for $N = 160$. For the 19 instances with two minima
in the range of $s$ studied, the minimum at larger $s$ is lower than the one
at smaller $s$. For these instances, the figure shows both
the ``local'' (smaller-$s$), and the
``global'' (larger-$s$) minima.
From Fig.~\ref{fig:scatter} we note that a substantial fraction of instances for
N = 160 have a minimum gap which is
\textit{much} smaller than the median, $0.0291(49)$. Smaller sizes do not have
such a pronounced tail in the distribution for small gaps.
We should mention that the gap is not precisely determined if
it is extremely small because we require the condition $\beta \Delta E \gg 1$.
For $N=160$ we took $\beta = 2048$ so this condition is well satisfied for
gaps around the median. Hence we are confident that the median is
accurately determined. However, it is not well satisfied for the smallest gaps
in Fig.~\ref{fig:scatter}. Thus, while it is clear that a non-negligible fraction
of instances for $N=160$ do have a very small minimum gap, the precise value of the
very small gaps in Fig.~\ref{fig:scatter} is uncertain. We note that
if the fraction of instances with a very small minimum gap
continues to increase with $N$, then, asymptotically,
the median would decrease \textit{faster} than that
shown by the fit in the main part of Fig.~\ref{fig:maxcutDE}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{gaps_all.eps}
\caption{(Color online) A scatter plot of the minimum gap for all 47 instances
for size $N=160$ for the 3-regular Max-Cut
problem. For the 19 instances with two minima in the range of
$s$ studied, both are shown: the one closer to $s = 0.36$ is denoted
``local'' and the smaller, at larger $s$, ``global''. Note
the large scatter in the values of the minimum gap for different instances.}
\label{fig:scatter}
\vspace{-0.7cm}
\end{center}
\end{figure}
\section{\label{sec:conclusions}Summary and conclusions}
It was demonstrated in Ref.~\cite{jorg:10} that the quantum adiabatic
algorithm fails to solve random instances of 3-regular 3-XORSAT in polynomial
time, due to an exponentially small gap in the interpolating Hamiltonian which
occurs near $s = s_c = \frac{1}{2}$. This exponentially small gap is associated
with a first order quantum phase transition in the ground state. In this work
we have provided additional numerical evidence for this. We have also
demonstrated using a duality transformation that the critical value of the
parameter $s$ is in fact exactly $s_{c}=\frac{1}{2}$. We have also shown that the ground state energy
of the 3-regular 3-XORSAT model with a transverse field agrees very well
with a replica symmetric (RS) cavity calculation. This provides support for the
claim of Ref.~\cite{jorg:10} that the RS calculation is exact for the
thermodynamic properties of this model.
For the random ensemble of Max-Cut instances that we consider, we find that
the interpolating Hamiltonians exhibit a second order, continuous phase
transition at a critical value $s = s_c \simeq 0.36$. Near this critical value of $s$ we
find that the eigenvalue gaps decrease only polynomially with the number of
bits. However, we also observe very small gaps at values $s>s_{c}$,
i.e.\ in the spin glass phase. An analysis of the fits indicates that a
gap decreasing exponentially with size is preferred over a polynomially
varying gap,
though a stretched exponential fit is also satisfactory.
For both of the problems we studied, the adiabatic interpolating Hamiltonians
are stoquastic. This makes it possible for us to numerically investigate the
performance of the QAA using quantum Monte Carlo simulation and the quantum
cavity method. However it is possible that quantum adiabatic algorithms with
stoquastic interpolating Hamiltonians are strictly less powerful than more
general quantum adiabatic algorithms.
The QMC calculations consider
only instances in which the
problem Hamiltonian, $H_P$, has a doubly degenerate ground state
and a specified value of the ground state energy.
These instances are exponentially rare. By
contrast the cavity approach considers random instances. However, we do not think that these restrictions invalidate the conclusions on the minimum gap summarized in the previous paragraph.
Also, it should be noted that inside the spin-glass phase, QMC techniques
become less and less efficient as the adiabatic parameter $s$ approaches $1$,
i.e.\ when the Hamiltonian approaches the classical problem Hamiltonian. Hence
we have not been able to study $s$ values much larger than 0.5 for a broad range of
sizes. It is possible, indeed likely, that there are
other avoided crossings in this range which might
lead to even smaller minima than those found in the studied range,
$0.3 \leq s \leq 0.5$. These minima will however not alter the conclusion
that the overall scaling of the running time of the QAA -- when applied to the
Max-Cut problem -- appears to grow exponentially (or perhaps in a stretched
exponential manner) with problem size.
The first order phase transition in 3-regular 3-XORSAT prevents the quantum
adiabatic algorithm from successfully finding a satisfying assignment. In
contrast, the second order phase transition in 3-regular Max-Cut does not
determine the performance of the quantum adiabatic algorithm on this problem.
In this example the small gaps which occur beyond $s_{c}$ cause the quantum
adiabatic algorithm to fail. These small gaps may be associated with
{}``perturbative crossings'' as described in Refs.~\cite{altshuler:09b,amin:09,altshuler:09,farhi-2009,FSZ10}.
\begin{acknowledgments}
APY and IH acknowledge partial support by the National Security Agency (NSA)
under Army Research Office (ARO) contract number W911NF-09-1-0391, and in part
by the National Science Foundation under Grant No.~DMR-0906366. APY and IH
would also like to thank Hartmut Neven and Vasil Denchev at Google for
generous provision of computer support. They are also grateful for computer
support from the Hierarchical Systems Research Foundation. AS acknowledges support by the National Science Foundation under Grant No. DMR-1104708.
EF, DG and FZ acknowledge financial support from
MIT-France Seed Fund/MISTI Global Seed Fund grant: ``Numerical simulations and quantum algorithms''.
FZ would like to thank F.~Krzakala, G.~Semerjian and L.~Foini for useful
discussions. DG acknowledges partial support from NSERC.
\end{acknowledgments}
\section{Introduction}
Distance scale measurements of the expansion rate of the Universe or
Hubble's Constant, $H_{0}$, have improved significantly over recent
years. For example, estimates of $H_{0}$ calculated by \citet{riess2016}
find a best fit value of $H_{0} = 73.24 \pm
1.74$\,km\,s$^{-1}$\,Mpc$^{-1}$, a quoted accuracy of 2.4\%. However,
this result is in serious tension with $H_{0}$ predictions made through
$\Lambda$CDM model fits to the Planck CMB Power Spectrum. This `early
Universe' measurement yields a value of $H_{0} = 67.4 \pm
0.5$\,km\,s$^{-1}$\,Mpc$^{-1}$ \citep[][]{planck2018}, which presents a
tension at the $3-4 \sigma$ level with measurements made using the local
distance scale \citep[see also][]{riess18b}.
These authors recognise the possibility that a source of the $\sim 9\%$ discrepancy between the $H_{0}$ measurements is unaccounted-for systematic uncertainties in one, or both, of the distance scale and early-Universe approaches. However, an alternative proposal lies in studies of the galaxy distribution in the local Universe by \citet{shanks90}, \citet{Metcalfe91}, \citet{metcalfe01}, \citet{frith03} \& \citet{busswell04}, who find evidence for an under-density or `Local Hole' stretching to $150-200\,h^{-1}$\,Mpc in the local galaxy environment.
Notably, \citet{ws14} (hereafter \citetalias{ws14}) suggest that the tension in $H_{0}$ measurements may arise from the outflow effects of the Local Hole. They find a detected under-density of $\approx 15\pm3\%$ in number-magnitude counts $n(m)$ and redshift distributions $n(z)$, measured relative to a homogeneous model over a $\sim 9,000$ square degree area covering the NGC and SGC. This under-density is most prominent at $K<12.5$ and leads to an $\sim 2-3\%$ increase in $H_{0}$ which alleviates the tension to a $5\%$ level. Further, \citet{shanks19} suggested that Gaia DR2 parallaxes might not have finally confirmed the Galactic Cepheid distance scale as claimed by \cite{riess18b} and could at least superficially, help reduce the overall tension to $<1 \sigma$.
Moreover, the existence of the Local Hole has been detected in wider
cluster distributions, with \citet{bohringer15}, \citet{collins16} and \citet{bohringer20}
finding underdensities of $\sim 30\%$ in the X-Ray cluster redshift
distributions of the REFLEX II and CLASSIX surveys respectively. These
results are in strong agreement with the galaxy counts of
\citetalias{ws14}, and suggest that the observed $H_0$ within the
under-density would be inflated by $5.5^{+2.1}_{-2.8}\%$.
Contrastingly, \citet{riess18a} critique the assumption of isotropy and spherical symmetry assumed in the modelling of the Local Hole, highlighting that the \citetalias{ws14} dataset covers only $20\%$ of the sky, yet measurements drawn from this subset are projected globally to draw conclusions on the entire local environment. These authors further suggest that such an all-sky local under-density would then be incompatible with the expected cosmic variance of mass density fluctuations in the $\Lambda$CDM model at the $\approx6 \sigma$ level. In addition, \citet{kenworthy19} failed to find dynamical evidence in the form of infall velocities for the Local Hole in their Pantheon supernova catalogue.
Further, through analyses of the galaxy distribution in the 2M++
Catalogue, \citet{jasche19}, following \citet{lh11} (hereafter
\citetalias{lh11}) find that local structure can be accommodated within
a standard concordance model, with no support for an under-density on
the scale suggested by \citetalias{ws14}. However, \citet{shanks19b}
(see also \citealt{ws16}) question the choice of the Luminosity Function
(LF) parameters used by \citet{jasche19} and \citetalias{lh11}.
In this work we will examine two aspects of the above arguments against
the Local Hole. First, to address the premise that the conclusions of
\citetalias{ws14} cover too small a sky area to support a roughly
isotropic under-density around our position, we will extend the analysis
of \citetalias{ws14} and measure $K-$ band $n(m)$ and $n(z)$ galaxy
counts over $\approx90$\% of the sky to a limiting Galactic latitude
$|b|\ga 5^\circ$.
Second, we will compare the $n(z)$ and $n(K)$ model predictions of
\citetalias{ws14} with those of \citetalias{lh11}. These
predictions will be compared at both the bright 2MASS limit and at the
fainter $K-$band limit of the GAMA survey to try and understand the
reasons for the different conclusions of \citetalias{ws14} and \citetalias{lh11} on the existence of the `Local Hole'.
\section{Data}
\label{sec:data}
\subsection{Photometric Surveys}
\label{subsec:photometric_surveys}
We now detail properties of the photometric surveys used to provide
$n(m)$ counts, alongside calibration techniques and star-galaxy
separation methods that we apply to ensure consistency between the
photometric data and model fit. Following \citetalias{ws14}, we choose
to work in the Vega system throughout. Thus for the GAMA
survey, we apply a $K-$band conversion from the $AB$ system according to
the relation determined by \citet{Driver2016}:
\begin{equation}
K_s(Vega) = K_s(AB)-1.839
\end{equation}
\subsubsection{2MASS}
\label{sub_subsec:2mass}
The Two Micron All Sky Survey, 2MASS \citep{skrutskie06} is a
near-infrared photometric survey achieving a 99.998\% coverage of the
celestial sphere. In this work we will take $K$-band $n(m)$ counts from
the 2MASS Extended Source Catalogue (2MASS$\_$xsc), which is found to be
$\sim97.5\%$ complete \citep{mcintosh06}, with galaxies thought to
account for $\approx 97\%$ of sources.
For the galaxy $n(m)$ results, we choose to work in Galactic
coordinates, and present counts down to a limiting Galactic
latitude $|b|>5^{\circ}$ except for the Galactic longitude range,
$330<l<30^\circ$ where our limit will be $|b|>10^{\circ}$. This is the
same 37063 deg$^2$ area of sky used by \citetalias{lh11}. These cuts are
motivated by the increasing density of Galactic stars at lower latitudes
and close to the Galactic Centre.
Following \citetalias{ws14}, sources are first selected according to the
quality tags `\textit{cc\_flg}=0' or `\textit{cc\_flg}=Z'. We will work
with a corrected form of the 2MASS$\_$xsc extrapolated surface
brightness magnitude, `$K\_m\_ext$', quoted in the Vega system. The
conversion we use is detailed in \citetalias{ws14} Appendix A1, and
utilises the $K-$band photometry of \citet{loveday2000}. For sources in
the range $10<K<13.5$ we take a corrected form of the magnitude,
$K\_\text{Best}$, defined as:
\begin{equation}
K\_\text{Best} = 0.952 \times (K\_m\_ext + 0.5625)
\end{equation}
\noindent The effect of converting to the $K\_\text{Best}$ system is to
slightly steepen the observed counts at the fainter end. However, the
effect of the conversion is small and its inclusion does not alter the
conclusions we draw.
To remove stellar sources in 2MASS we exploit here the availability of
the Gaia EDR3 astrometric catalogue \citep{Gaia2016,Gaia2021} and simply
require that a source detected in Gaia EDR3 is not classed as pointlike
as defined by Eq.~\ref{eq:gaia} of Section
\ref{sub_subsec:star_separation}. Compared with the star-galaxy
separation technique used by \citetalias{ws14}, this makes little
difference to the galaxy $n(K)$ and $n(z)$ distributions.
Finally, 2MASS galaxy $K_s$ magnitudes are corrected throughout for
Galactic absorption using the $E(B-V)$ extinction values determined by
\citet{Schlafly2011} and $A_{K_s}=0.382E(B-V)$. The coefficient
here corresponds to the relation $A_V=3.1E(B-V)$ for the $V$-band.
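The two 2MASS magnitude corrections above (the $K\_\text{Best}$ recalibration and the Galactic extinction correction) compose straightforwardly; a minimal sketch, with function names of our own choosing:

```python
def k_best(k_m_ext):
    """Recalibration of the 2MASS extrapolated magnitude K_m_ext,
    applied in the range 10 < K < 13.5 (Appendix A1 of WS14)."""
    return 0.952 * (k_m_ext + 0.5625)

def k_extinction_corrected(k_m_ext, ebv):
    """Apply the K_Best recalibration, then the Galactic extinction
    correction A_Ks = 0.382 E(B-V) of Schlafly & Finkbeiner (2011)."""
    return k_best(k_m_ext) - 0.382 * ebv
```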
\subsubsection{GAMA}
\label{sub_subsec:gama}
The Galaxy And Mass Assembly, GAMA survey \citep{driver09} provides a
multi-wavelength catalogue covering the near- and mid-infrared,
comprising $\approx 300,000$ galaxies over an area of $\approx180$
deg$^2$. The survey offers deeper $K$ counts which are not accessible in
the 2MASS sample, so we will use GAMA to compare the ability of the
\citetalias{ws14}- and \citetalias{lh11}-normalised models to fit faint
$K-$band $n(m)$ counts. Measurements will be taken from the GAMA DR3
release \citep{baldry18} using the Kron magnitude
`\textit{MAG\_AUTO\_K}', initially given in the AB system. We will
target the combined count of the 3 equatorial regions G09, G12 and G15,
each covering 59.98 square degrees with an estimated galaxy completeness
of $\approx 98.5\%$ \citep{baldry10}. We shall take the GAMA sample to
be photometrically complete to $K<15.5$ but only complete to $K<15$ for
their redshift survey since a visual inspection of the $K$ counts of
galaxies with redshifts indicate that only the G09 and G12 redshift
surveys reach this limit. For star-galaxy separation we shall first use
the $g-i:J-K$ galaxy colour-based method recommended for GAMA by
\citet{baldry10} (see also \citealt{Jarvis13}) before applying the Gaia
criteria of Section \ref{sub_subsec:star_separation} to this subset to
reject any remaining stars.
\subsubsection{VICS82}
\label{sub_subsec:vics82}
VISTA-CFHT Stripe 82, VICS82 \citep{Geach2017}, is a survey in the
near-infrared over $J$ and $K_s$ bands, covering $\approx150$ deg$^2$ of
the SDSS Stripe82 equatorial field. The survey provides deep coverage to
$K<20$. Sources are detected and presented with a total magnitude,
`$MAG\_AUTO$', quoted in the AB system. The image extraction gives a
star-galaxy separation flag, `$Class~Star$', with extended and
point-like sources distributed at 0 and 1 respectively. Whereas
\citet{Geach2017} defined pointlike sources at $Class~Star> 0.95$, we
shall define extended objects using a more conservative cut at
$Class~Star<0.9$. We then use the Gaia method of Section
\ref{sub_subsec:star_separation} to remove any remaining pointlike
objects. In terms of $K$ magnitude calibration, we start from the same
VICS82 $K\_mag\_auto$ system as \citet{Geach2017} who note that there
is zero offset to 2MASS total $K\_20$ magnitudes (see their Fig. 4).
However, in Appendix \ref{appendix_vics82} we find that between $12.0<K\_m\_ext<13.5$
the offset $K\_m\_ext-K\_VICS82=0.04\pm0.004$ mag and this is the offset we use
for these VICS82 data in this work. As with GAMA, we then use the
deep $K-$ band counts of VICS82 to test how well the \citetalias{ws14}
model predicts faint galaxy counts beyond the 2MASS $K<13$ limit.
\subsubsection{Star-Galaxy Separation using Gaia}
\label{sub_subsec:star_separation}
The Gaia Survey \citep{gaia18} provides an all-sky photometry and
astrometry catalogue for over 1 billion sources in the $G-$band, and is
taken as essentially complete for stars between $G=12$ and $G=17$. The
filter used to determine pointlike objects makes use of the total flux
density `$G$' and astrometric noise parameter `$A$', which is a measure
of the extra noise per observation that can account for the scatter of
residuals \citep{lindegren18}. Explicitly, through the technique of
\citet{krolewski20}, pointlike sources are then classified as:
\begin{equation}
\label{eq:gaia}
\text{pointlike}(G,A)=\left\{
\begin{array}{ll}
\log_{10}A<0.5 & \mbox{if } G<19.25 \\
\log_{10}A<0.5 + \frac{5}{16}(G-19.25) & \mbox{otherwise}
\end{array}
\right.
\end{equation}
This separation technique is applied, sometimes in combination with other
techniques, to the raw photometric datasets taken from 2MASS, GAMA and
VICS82 used to analyse the wide-sky and faint-end $n(m)$ counts.
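For concreteness, the \citet{krolewski20} classification of Eq.~(\ref{eq:gaia}) can be coded directly; the function below is a minimal Python sketch (the vectorised handling and argument names are our own):

```python
import numpy as np

def is_pointlike(G, A):
    """Gaia star-galaxy separation of Eq. (gaia): a source is
    pointlike if log10(A) falls below a G-dependent threshold,
    where A is the astrometric excess noise and G the Gaia magnitude."""
    G = np.asarray(G, dtype=float)
    logA = np.log10(np.asarray(A, dtype=float))
    # Threshold is 0.5 for G < 19.25 and rises with slope 5/16 beyond it
    threshold = np.where(G < 19.25, 0.5, 0.5 + (5.0 / 16.0) * (G - 19.25))
    return logA < threshold
```

Extended (galaxy) candidates are then simply the complement, \verb|~is_pointlike(G, A)|.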
\subsection{Redshift Surveys}
\label{subsec:redshift_surveys}
We now present characteristics of the redshift surveys used to measure
the $n(z)$ galaxy distribution, and the techniques we apply to ensure
the data remain consistent with those of \citetalias{ws14}.
To achieve close to all-sky measurement, we similarly take the observed
$n(z)$ survey distribution to the same \citetalias{lh11} $(l,b)$ limits
discussed in Section \ref{sub_subsec:2mass}, and work with redshifts
reduced to the Local Group barycentre (see Eq. 10 of \citetalias{ws14}).
While \citetalias{ws14} use the SDSS and 6dFGRS surveys to measure
separate distributions in the northern- and southern-galactic
hemispheres respectively, we will access a larger sky area using the
wide-sky redshift surveys based on the photometric 2MASS catalogue.
\subsubsection{2MRS}
\label{sub_subsec:2MRS}
The 2MASS Redshift Survey, 2MRS \citep{huchra12}, is a spectroscopic
survey of $\sim45,000$ galaxies covering 91$\%$ of the sky, built from a
selected sample of the 2MASS photometric catalogue limited to $K<11.75$.
The 2MRS Survey is reported to be 97.6$\%$ complete excluding the
galactic region $|b|<5^{\circ}$, and provides a coverage to a depth
$z\sim0.08$.
To remain consistent with the $n(m)$ distributions, we work with a
$K-$band limited 2MRS sample, achieved by matching the 2MRS data with
the star-separated 2MASS Extended Source Catalogue. To minimise
completeness anomalies, we take a conservative cut at $K<11.5$ to
measure the $n(z)$ distribution. In Table~\ref{tab:numbers} we provide
summary statistics of the $n(z)$ dataset achieved by the matching
procedure, alongside the corresponding 2MASS $n(m)$ count.
\subsubsection{2M++}
\label{sub_subsec:2M++}
The 2M++ Catalogue \citep[][]{lh11} is a spectroscopic survey of
$\sim70,000$ galaxies comprised of redshift data from 2MRS, 6dFGRS and
SDSS. The 6dFGRS/SDSS and 2MRS data are given to $|b|>10^{\circ}$ and
$|b|>5^{\circ}$ respectively, except in the region
$-30^{\circ}<l<+30^{\circ}$ where 2MRS is limited to $|b|>10^{\circ}$.
The 2M++ Catalogue applies masks to this field to associate particular
regions to each survey, weighting by completeness and magnitude limits.
Overall, this creates a set of galaxies covering an all-sky area of
$37,080$ deg$^{2}$ which is thought to be $\sim 90\%$ complete to $K
\leq 12.5$. To compare with counts from 2MRS we will measure the
redshift distribution to a depth $K<11.5$, with the summary statistics
presented in Table~\ref{tab:numbers}.
\begin{table}
\centering
\caption{ Summary statistics of the $n(m)$ and $n(z)$ datasets we use for analysis of the Local Hole over the wide-sky area ($|b|>5^{\circ}$ except for $|b|>10^{\circ}$ at $330^\circ<l<30^\circ$).}
\label{tab:numbers}
\begin{tabular}{ccccc}
\hline
\hline
\multirow{2}{*}{Survey} & Wide-Sky Area & \multirow{2}{*}{Mag. Limit} & $n(m)$ & \multirow{2}{*}{$n(z)$} \\
& (sq. deg.) & & (2MASS) & \\
\hline
\hline
2MRS & \multirow{2}{*}{$37,063$} & \multirow{2}{*}{$K<11.5$} & \multirow{2}{*}{$41,771$} & $38,730$ \\
2M++ & & & & $34,310$ \\
\hline
2MRS & \multirow{2}{*}{$37,063$} & \multirow{2}{*}{$K<11.75$} & \multirow{2}{*}{$59,997$} & $43,295$ \\
2M++ & & & & $44,152$ \\
\hline
\hline
\end{tabular}
\end{table}
\subsubsection{Spectroscopic Incompleteness}
\label{sub_subsec:spectroscopic_incompleteness}
For a given $n(z)$ sample taken from 2MRS and 2M++, we correct the data
using an incompleteness factor. The observed $n(z)$ distribution from
the survey is multiplied by the ratio of the total number of photometric
to spectroscopic galaxies within the same target area and magnitude
limit. Here, the photometric count is taken from the 2MASS Extended
Source Catalogue and the correction ensures that the total number of
galaxies considered in the redshift distribution $n(z)$ is the same as
in the magnitude count $n(m)$. A breakdown of the completeness of each
survey as a function of magnitude is presented in Appendix \ref{appendix_spectra}.
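Schematically, this correction amounts to a single multiplicative rescaling of the binned redshift distribution (a sketch; variable names are illustrative):

```python
import numpy as np

def correct_incompleteness(nz_obs, n_phot, n_spec):
    """Rescale a binned n(z) by the incompleteness factor
    N_phot / N_spec, so that the corrected n(z) sums to the
    photometric n(m) count over the same area and magnitude limit."""
    return np.asarray(nz_obs, dtype=float) * (n_phot / n_spec)
```

For example, with 120 photometric but only 60 spectroscopic galaxies, every redshift bin is doubled.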
\subsection{Field-field errors}
\label{subsec:error_calculation}
The field-field error, $\sigma$, in the galaxy 2-D sky or 3-D volume
density in each photometric or spectroscopic bin is simply calculated by
sampling the galaxy densities in $n$ sub-fields within the
wide-sky area and calculating their standard error. For $n$
sub-fields each with galaxy density, $\rho_i$, the standard error
$\sigma$ on the mean galaxy density, $\bar{\rho}$, in each magnitude or
redshift bin is therefore,
\begin{equation}
\sigma^{2}=\frac{1}{n(n-1)}\sum_{i=1}^{n}\left(\rho_i-\bar{\rho}\right)^{2}
\label{eq:error}
\end{equation}
\noindent So, for the 2MASS wide-sky survey, we divide its area into 20
sub-fields, each covering 1570 deg$^2$, over the majority of the sky; in
the offset strip at $330^\circ<l<30^\circ$ there are 4 additional
sub-fields, each of area 1420 deg$^2$, with slightly different
boundaries. The $\approx10\%$ smaller area of these 4 out of 24
sub-fields is assumed to leave Eq.~\ref{eq:error} a good approximation
to the true field-field error. In Section~\ref{sec:all_sky_nm} we detail
the Galactic coordinate boundaries of each sub-field in a Mollweide
projection, and consider the individual galaxy densities in each of
these $n=24$ sub-fields to visualise the extent on the sky of the Local
Hole.
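Eq.~(\ref{eq:error}) is straightforward to evaluate; the following Python sketch returns the mean density and its field-field error from a list of sub-field densities:

```python
import numpy as np

def field_field_error(densities):
    """Mean galaxy density and its standard error over n sub-fields:
    sigma^2 = sum_i (rho_i - mean)^2 / (n (n - 1))."""
    rho = np.asarray(densities, dtype=float)
    n = rho.size
    mean = rho.mean()
    sigma = np.sqrt(np.sum((rho - mean) ** 2) / (n * (n - 1)))
    return mean, sigma
```

This is equivalent to the sample standard deviation (with $n-1$ degrees of freedom) divided by $\sqrt{n}$.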
\section{Modelling}
\label{subsec:modelling}
To examine the redshift and magnitude distribution of galaxies, we
measure their differential number counts per square degree on the sky as
a function of magnitude, $m$, and redshift, $z$, over a bin size
$\Delta m = 0.5$ and $\Delta z = 0.002$ respectively. The observed
counts are then compared to the \citetalias{ws14} theoretical
predictions that assumed a model based on the sum of contributions from
the type-dependent LFs of \citet{metcalfe01}. The LF parameters
$\phi^{*}, \alpha, M^{*}$, which represent the characteristic density,
slope and characteristic magnitude respectively, are presented for each
galaxy type in Table~\ref{tab:params}.
The apparent magnitude of galaxies is further dependent on their
spectral energy distribution and evolution, modelled through $k(z)$ and
$e(z)$ corrections respectively. Thus, we calculate the apparent
magnitude $m$ by including these in the distance modulus for the $[m,z]$
relation:
\begin{equation}
m = M + 5\log_{10}(D_L(z)) + 25 + k(z) + e(z)
\end{equation}
\noindent where $D_L(z)$ is the luminosity distance at redshift $z$. In
this work, the $k$ and $e$ corrections are taken from
\citetalias{ws14}, who adopt the \citet{bc03} stellar synthesis
models. We note that the $K$ band is less affected by $k$ and $e$
corrections than bluer bands because of the older stars that
dominate in the near-IR.
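The $[m,z]$ relation above can be evaluated numerically; the sketch below assumes a flat $\Lambda$CDM cosmology with illustrative parameters ($h=0.7$, $\Omega_m=0.3$, not specified at this point in the text) and user-supplied $k(z)$ and $e(z)$ values:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def lum_distance(z, h=0.7, omega_m=0.3, n=4096):
    """Luminosity distance (Mpc) in a flat LCDM cosmology, via
    trapezoidal integration of 1/E(z); parameters are illustrative."""
    zz = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(omega_m * (1.0 + zz) ** 3 + (1.0 - omega_m))
    # Comoving distance D_C = (c/H0) * integral of dz/E(z)
    d_c = (C_KM_S / (100.0 * h)) * np.sum(
        0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zz))
    return (1.0 + z) * d_c

def apparent_mag(M, z, k=0.0, e=0.0):
    """The [m, z] relation: m = M + 5 log10(D_L) + 25 + k(z) + e(z)."""
    return M + 5.0 * np.log10(lum_distance(z)) + 25.0 + k + e
```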
In addition to the basic homogeneous prediction, we consider the
\citetalias{ws14} inhomogeneous model in which the normalisation
$\phi^{*}$ is described as a function of redshift. We trace the radial
density profile shown in each redshift bin of the observed $n(z)$ count
(see Fig.~\ref{fig:nz} (b)), and apply this correction to the $n(m)$
model prediction according to
\begin{equation}
\phi^{*}(z)=
\begin{cases}
\frac{n(z)_{\text{obs}}}{n(z)_{\text{global}}}\phi^{*}_{\text{global}} & z \leq z _{\text{global}} \\
\phi^{*}_{\text{global}} & z > z_{\text{global}}
\end{cases}
\end{equation}
\noindent where the $n(z)_{\text{obs}}$ are the observed distributions
from our chosen redshift surveys, $\phi^{*}_{\text{global}}$ describes
the standard homogeneous normalisation as detailed in
Table~\ref{tab:params}, and $z_{\text{global}}$ is the scale at which
the inhomogeneous model transitions to the homogeneous galaxy density.
In such a way we can model the effect of large-scale structure in the
number-magnitude prediction, which we use as a check for consistency in
measurements of any under-density between the observed $n(m)$ and $n(z)$
counts. In this work we test the effect of two transition values
$z_{\text{global}}=0.06$ and $z_{\text{global}}=0.07$.
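The piecewise normalisation can be sketched as follows, where the observed-to-model ratio is interpolated from the binned $n(z)$ of Fig.~\ref{fig:nz}(b) (a minimal illustration; the linear interpolation scheme is our own choice):

```python
import numpy as np

def phi_star_z(z, z_bins, nz_ratio, phi_global, z_global=0.06):
    """Inhomogeneous LF normalisation: phi*(z) traces the observed
    n(z)/model ratio below z_global and reverts to the homogeneous
    phi* above it."""
    ratio = np.interp(z, z_bins, nz_ratio)
    return np.where(np.asarray(z) <= z_global, ratio * phi_global, phi_global)
```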
\begin{table}
\centering
\caption{The luminosity function parameters defined at zero redshift as
a function of galaxy-type, used as the homogeneous model by WS14 and
adopted in this work. The absolute magnitudes are `total' $K-$band
magnitudes, corresponding to our $K\_\text{Best}$ system. Here, the Hubble
parameter $H_0=100\,h\,$km\,s$^{-1}$\,Mpc$^{-1}$.}
\label{tab:params}
\begin{tabular}{cccc}
\hline
\hline
Type & $\phi^{*}(h^{3}$\,Mpc$^{-3})$ & $\alpha$ & $M^*_K+5\log_{10}(h)$ \\
\hline
\hline
E/S0 & $7.42\times10^{-3}$ & $-0.7$ & $-23.42$ \\
Sab & $3.70\times10^{-3}$ & $-0.7$ & $-23.28$ \\
Sbc & $4.96\times10^{-3}$ & $-1.1$ & $-23.33$ \\
Scd & $2.18\times10^{-3}$ & $-1.5$ & $-22.84$ \\
Sdm & $1.09\times10^{-3}$ & $-1.5$ & $-22.21$ \\
\hline
\hline
\end{tabular}
\end{table}
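For reference, the homogeneous model count is built from the type-dependent LFs of Table~\ref{tab:params}; assuming the standard Schechter form implied by the $(\phi^*,\alpha,M^*)$ parametrisation, the magnitude-space LF and the five-type sum can be sketched as (in $h=1$ units):

```python
import numpy as np

def schechter_mag(M, phi_star, alpha, M_star):
    """Schechter LF in magnitudes:
    phi(M) = 0.4 ln10 phi* x^(alpha+1) exp(-x), x = 10^{0.4 (M* - M)}."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# (phi*, alpha, M*_K + 5 log10 h) for E/S0, Sab, Sbc, Scd, Sdm
WS14_TYPES = [
    (7.42e-3, -0.7, -23.42),
    (3.70e-3, -0.7, -23.28),
    (4.96e-3, -1.1, -23.33),
    (2.18e-3, -1.5, -22.84),
    (1.09e-3, -1.5, -22.21),
]

def total_lf(M):
    """Type-summed K-band LF used as the homogeneous model."""
    return sum(schechter_mag(M, p, a, Ms) for p, a, Ms in WS14_TYPES)
```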
\begin{figure*}
\includegraphics[width=17cm]{nz_LH11_v8.png}
\caption{The observed $n(z)$ distributions of the 2MRS and 2M++
Catalogues measured to the wide-sky area $|b|\ga5^\circ$,
and consistently limited to $K<11.5$ where: \textbf{(a)} Counts are
fit alongside the WS14 homogeneous model and LH11-normalised model
over a bin size $\Delta z=0.002$. \textbf{(b)} The $n(z)$ counts are
normalised to the WS14 model to demonstrate observed under- and
overdensities across the distribution.}
\label{fig:nz}
\end{figure*}
\section{Galaxy redshift distribution}
\label{sec:all_sky_nz}
The observed $n(z)$ distribution measured in the 2MRS and 2M++
catalogues over the wide-sky area to $|b|\ga5^\circ$ is shown in
Fig.~\ref{fig:nz}(a). The data are limited to $K<11.5$ and compared to
the $n(z)$ predictions of the homogeneous \citetalias{ws14} LF
model,\footnote{We note that convolving the
\citetalias{ws14} model $n(z)$ with a Gaussian of width
$\sigma_z=0.001$ to represent the combined effect of redshift errors and
peculiar velocities of $\pm300$km s$^{-1}$ shows no discernible
difference.} with a corresponding plot of the observed $n(z)$ divided
by the model shown in Fig.~\ref{fig:nz} (b). Counts have been corrected
with the spectroscopic incompleteness factor described in
Section~\ref{sub_subsec:spectroscopic_incompleteness}, and a description
of the completeness of each sample as a function of magnitude is given
in Appendix \ref{appendix_spectra}. Errors have been calculated using
the field-field method incorporating the uncertainty in each observed
redshift bin combined with the uncertainty in the incompleteness.
Subject to the limiting magnitude $K<11.5$, each survey shows a
distribution where the majority of the observed $n(z)$ data fall below
the predicted count of the \citetalias{ws14} homogeneous model. The
observed distributions fail to converge to the model until $z>0.06$ and
below this range the data exhibit a characteristic under-density that is
consistent with $n(z)$ counts over the NGC and SGC presented in
\citetalias{ws14}.
To analyse the scale of under-density in our measurements, we consider
the `total' density contrast, calculated by evaluating the difference
between the sum of the observed count and predicted count, normalised to
the sum of the predicted count. Here we take the sum over $n(z)$ bins
from $z=0$ to the upper limits of $z=0.05$ and $z=0.075$. Calculations
of the density contrast in our wide-sky 2MRS and 2M++ distributions
within these bounds are presented in Table~\ref{tab:nz_all_sky}.
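Explicitly, for binned observed and predicted counts the tabulated quantity is:

```python
import numpy as np

def density_contrast(n_obs, n_model):
    """'Total' density contrast over the chosen bins:
    (sum n_obs - sum n_model) / sum n_model."""
    n_obs = np.asarray(n_obs, dtype=float)
    n_model = np.asarray(n_model, dtype=float)
    return (n_obs.sum() - n_model.sum()) / n_model.sum()
```

A negative value indicates an under-density relative to the model.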
\begin{table}
\begin{center}
\caption{The measured density contrasts between the WS14 LF
model and $n(z)$ counts of 2MRS and 2M++ over the $\sim 37,000$ sq. deg.
wide-sky area. The samples are taken to a limiting magnitude $K<11.5$
and detail the scale of under- and overdensities to the specified ranges
$z<0.05$ and $z<0.075$.}
\label{tab:nz_all_sky}
\begin{tabular}{ccc}
\hline
\hline
Sample Limit & Survey & Density Contrast $(\%)$ \\
\hline
\hline
\multirow{2}{*}{$z<0.05$} & 2MRS & $-23\pm2$ \\
& 2M++ & $-21\pm3$ \\
\hline
\multirow{2}{*}{$z<0.075$} & 2MRS & $-22\pm2$ \\
& 2M++ & $-21\pm2$ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
The measured density contrasts of the two surveys at $z<0.075$ are in
excellent agreement and indicate that the wide-sky $n(z)$ counts are
$\sim 21-23\%$ underdense relative to the model. At both limits the
2MRS dataset produces a marginally greater under-density than 2M++;
however, the two values remain consistent to within $1\sigma$ and
demonstrate a continuous under-density in the $n(z)$ distribution.
\begin{figure*}
\includegraphics[height=8.5cm]{nm_LH11_v8.png}
\caption{The observed $K-$band $n(m)$ counts of the 2MASS Extended
Source Catalogue taken over the wide-sky region to
$|b|\ga5^\circ$, where: \textbf{(a)} The observed counts
are compared to the WS14 and LH11 homogeneous models. \textbf{(b)}
The observed counts divided by the WS14 homogeneous model are
compared to the inhomogeneous, variable $\phi^{*}(z)$, versions of
the WS14 models based on the 2MRS and 2M++ $n(z)$'s, and similarly
divided by the homogeneous WS14 model. The transition to the
homogeneous case for both of these inhomogeneous LSS models is investigated
for both $z_{\text{global}}=0.06$ and $z_{\text{global}}=0.07$. }
\label{fig:nm}
\end{figure*}
We note that in our approach we have applied a single incompleteness
factor to correct each bin in the observed $n(z)$ distribution equally,
while a more detailed examination could incorporate a
magnitude-dependent factor. This technique was implemented in
\citetalias{ws14}, where the completeness factor was introduced into the
LF $n(z)$ model such that each bin conserved the galaxy number. However,
the change to the $n(z)$ sample as a result of this method was less than
$1\%$ and so we have not implemented this more detailed correction here.
We shall return in Section \ref{sec:lh11_2mass} to discuss
the reasons for the difference in the $n(z)$ model prediction of \citet{lh11},
also shown in Fig. \ref{fig:nz}(a).
\begin{figure*}
\includegraphics[width=17cm]{mollweide_v8.png}
\caption{A Mollweide contour plot detailing the galactic coordinate
positions of each sub-field we have used to calculate the
field-field errors in our wide-sky $n(m)$ and $n(z)$ distributions.
In each region we have evaluated the 2MASS $n(m)$ density contrast,
measured at $10<K<12.5$, and plotted local galaxy structures to
investigate the regional densities. The legend describes the key for
each galaxy structure, and their corresponding redshift is given in
brackets.}
\label{fig:mollweide}
\end{figure*}
\section{Galaxy number magnitude counts}
\label{sec:all_sky_nm}
\subsection{2MASS $n(m)$ counts}
\label{subsec:2MASS_nm}
We now consider the 2MASS number-magnitude counts, and examine the
extent the \citetalias{ws14} homogeneous model can self-consistently
replicate an $n(m)$ under-density that is of the same profile and at a
similar depth as that suggested by the galaxy redshift distributions of
2MRS and 2M++.
The observed $K-$band $n(m)$ count of the 2MASS Extended Source
Catalogue to the wide-sky limit of $|b|\ga5^\circ$, is presented in
Fig.~\ref{fig:nm}(a). Similar to the $n(z)$ comparison in Fig.
\ref{fig:nz}, these counts appear low compared to the homogeneous model
of \citetalias{ws14}, here at $K<12$.
To examine whether the $n(m)$ counts are consistent with the form of the
under-density shown in the $n(z)$ measurements, we also predict this
$n(m)$ based on the LSS-corrected $\phi^*(z)$ normalisation (see Section
\ref{subsec:modelling}). We first show the observed $n(m)$ count
divided by the homogeneous \citetalias{ws14} model in
Fig~\ref{fig:nm}(b). Then we use the $n(z)_{\text{obs}}$ derived from
each of the 2MRS and 2M++ $n(z)$ distributions in Fig.~\ref{fig:nz} (b),
both similarly divided by the \citetalias{ws14} homogeneous model. The
orange and green lines represent the 2MRS- and 2M++-corrected models
respectively.
At $K<12.5$, the wide-sky $n(m)$ distribution shows a significant
under-density relative to the homogeneous prediction, only reaching
consistency with the model at $K\approx13$. Moreover, we find that the
$\phi^{*}(z)$ models describing the observed $n(z)$ inhomogeneities in
each of 2MRS and 2M++ give a significantly more accurate fit to the
2MASS $n(m)$ count. This indicates that the profile of the under-density
in the galaxy redshift distributions, measured relative to the
\citetalias{ws14} homogeneous prediction, is consistent with the
observed $n(m)$ counts.
To explicitly evaluate the 2MASS $n(m)$ under-density, we give
calculations of the density contrast in Table~\ref{tab:nm_all_sky}. To
mitigate the uncertainty at the bright end and remain in line with
measurements given by \citetalias{ws14}, we take a fixed lower
bound, $K>10$, and vary the upper magnitude bound.
\begin{table}
\begin{center}
\caption{Measurements of the density contrast in the 2MASS wide-sky
$n(m)$ counts relative to the WS14 model, taken to various $K-$ limits
to examine the extent of underdensities in the distribution. Errors are field-field
based on 24 sub-fields.}
\label{tab:nm_all_sky}
\begin{tabular}{ccc}
\hline
\hline
Sky Region & Sample Limit & Density Contrast ($\%$) \\
\hline
\hline
$|b|\ga5^\circ$ & $10<K<11.5$ & $-20 \pm 2$ \\
$|b|\ga5^\circ$ & $10<K<12.5$ & $-13 \pm 1$ \\
$|b|\ga5^\circ$ & $10<K<13.5$ & $-3 \pm 1$ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
The measurements of the total density contrast in the wide-sky $n(m)$
count in Table \ref{tab:nm_all_sky} demonstrate a significant scale of
under-density at $10<K<11.5$ that becomes less pronounced approaching
$K\approx13.5$. Notably, at $K<11.5$ we measure an under-density of
$20\pm2\%$, which is consistent with the $\approx 21-22\%$ under-density
shown in the 2MRS and 2M++ $n(z)$ counts. Additionally, for $K<12.5$ we
find a wide-sky under-density of $13\pm1\%$, which is in good agreement
with the $15\pm3\%$ under-density calculated in the three
\citetalias{ws14} fields over the same magnitude range. The field-field
errors suggest strongly significant detections of a 13-21\% underdensity
over the wide-sky area. This is in agreement with \citetalias{ws14}, who
found an $\approx15$\% underdensity from their sample covering a
$\approx4\times$ smaller area over the NGC and SGC. In addition, we note
the effect of the magnitude calibration to the Loveday system. Excluding
the correction lowers the observed count at the faint end by
$\approx10\%$, confirming the conclusion of \citetalias{ws14} that an
under-density is seen independent of applying the Loveday magnitude
correction. Finally, we shall again return in Section
\ref{sec:lh11_2mass} to discuss why the $n(m)$ model prediction of
\citetalias{lh11}, also shown in Fig. \ref{fig:nm}(a), is so much lower
than that of \citetalias{ws14}.
\subsection{Sub-field $n(m)$ Density Contrast Measurements}
\label{subsec:Mollweide_Plot}
To further assess the sky extent of the Local Hole, we next consider the
properties of the wide-sky sub-fields from which we derive the
field-field errors and evaluate the 2MASS $n(m)$ density contrast in
each sub-field region.
Fig.~\ref{fig:mollweide} shows the density contrast between the 2MASS
$n(m)$ counts and the \citetalias{ws14} homogeneous model in each
sub-field area that is also used to evaluate the wide-sky $n(m)$ and $n(z)$
field-field errors. The average density contrast in each field is plotted
colour-coded on a Mollweide projection, which also details the geometric
boundaries of each region in Galactic coordinates.
To probe the under-density, we choose to take the sum over the range
$10<K<12.5$ to remain consistent with the limits considered in
\citetalias{ws14}. In addition, to examine the properties of individual
regions we plot the local galaxy clusters and superclusters highlighted
in \citetalias{lh11} using positional data from \citet{abell89},
\citet{einasto97} and \citet{ebeling98}, and provide their redshift as
quoted by \citet{huchra12} in the 2MRS Catalogue.
From the lack of yellow-red colours in Fig.~\ref{fig:mollweide} it is
clear that underdensities dominate the local Large Scale Structure
across the sky. There are, however, several fields whose $n(m)$ count
marginally exceeds the \citetalias{ws14} prediction, and we find that
such (light green) regions tend to host well-known local galaxy
clusters. The 4 out of 24 areas that show an over-density are those
containing clusters 2, 3, 4 (Corona Borealis, Bootes, Coma); 6, 7
(Shapley, Hydra-Centaurus); and 8 (Perseus-Pisces), using the numbering
system of Fig. \ref{fig:mollweide}. The influence of the structures in
these 4 areas is still not enough to overturn the overall $13\pm1$\%
Local Hole under-density measured over the wide-sky area in
Fig.~\ref{fig:mollweide}.
\begin{figure*}
\includegraphics[width=\columnwidth]{LF_WS14_LH11_logh.png}
\includegraphics[width=8.5cm]{WS14_LH11_kpe_z.png}
\caption{(a) The galaxy $K$ luminosity function of WS14 as used here
compared to that of LH11. (b) The $k$ and $k+e$ corrections of WS14
compared to those of LH11, for the $K$-band.}
\label{fig:LF_params}
\end{figure*}
We conclude that the observed $n(m)$ and $n(z)$ galaxy counts taken to
$|b|\ga5^\circ$ in 2MASS, 2MRS and 2M++, show a consistent overall
under-density measured relative to the \citetalias{ws14} model that
covers $\approx90$\% of the sky. At a limiting depth of $K=11.5$ the $n(m)$ counts
show an under-density of $20\pm2$\% and this scale is replicated in form
in the $K-$limited $n(z)$ distributions at $z<0.075$ which show an
under-density of $\sim21-22$\%.
\section{Comparison of LF and other model parameters}
\label{sec:normalisation_testing}
The above arguments for the Local Hole under-density depend on the
accuracy of our model LF and to a lesser extent our $k+e$ parameters
that are the basis of our $n(z)$ and $n(m)$ models. We note that
\citet{ws16} made several different estimates of the galaxy LF in the
$K$ band from the $K<12.5$ 6dF and SDSS redshift surveys including
parametric and non-parametric `cluster-free' estimators and found good
agreement with the form of the LF used by \citetalias{ws14} and in this
work. The `cluster-free' methods are required since they ensure that at
least the form of the LF is independent of the local large-scale
structure and mitigates the presence of voids as well as clusters. The
non-parametric estimators also allowed independent estimates of the
local galaxy density profiles to be made and showed that the results of
\citetalias{ws14} were robust in terms of the choice of LF model. The
\citetalias{ws14} LF normalisation was also tested using various methods
as described in Section 2.3.1 of \citet{ws16}.
We now turn to a comparison between the \citetalias{ws14} galaxy count
predictions with those made by \citet{lh11} who failed to find an
under-density in the 2M++ $n(z)$ data. To examine the counts produced by
their model we assume the LF parameters given in their Table 2, where
in the Local Group frame with $750<v<20000$ km s$^{-1}$, they find
$\alpha=-0.86$; $M^*=-23.24+5{\rm log}_{10}(h)$;
$\phi^*=1.13\times10^{-2} h^3 {\rm Mpc}^{-3}$, independent of galaxy
type. Note that we brighten the LH11 $M^*$ by 0.19 mag to
$M^*=-23.43+5{\rm log}_{10}(h)$ in our version of their model to account
for the 0.19 mag difference between $K\_m\_ext$ magnitudes used here and
the 2MASS $K_{Kron}(=K\_20)$ magnitudes used by \citetalias{lh11} (see
Appendix \ref{appendix_2mass}). In Fig. \ref{fig:LF_params} (a) we
compare their $z=0$ LF with our LF summed over our five galaxy types.
Importantly, \citetalias{lh11} note that their fitted LFs show a
distinctly flatter faint slope ($\alpha>-1$) than other low redshift LF
estimates (see their Fig 7a) that generally look more similar to the
steeper \citetalias{ws14} LF (see also \citealt{ws16}). However,
Fig.~\ref{fig:LF_params} (a) shows that the form of both LFs is similar
in the range around $M^*$ that dominates in magnitude limited galaxy
samples, apart from their normalisation, with the \citetalias{lh11} LF
appearing $\approx40$\% lower than that of \citetalias{ws14}. We shall argue
that this low normalisation is crucial in the failure of
\citetalias{lh11} to find the `Local Hole'.
Next, we compare the $k+e$ - redshift models of \citetalias{lh11} and
\citetalias{ws14} in Fig. \ref{fig:LF_params}b. Two $k+e(z)$ models are
shown for \citetalias{ws14} representing their early-type model applied
to E/S0/Sab and their late type model applied to Sbc/Scd/Sdm. These
models come from \cite{bc03} with parameters as described by
\cite{metcalfe06}. At $z=0.1$ these models give respectively
$\Delta_K=-0.28$ and $\Delta_K=-0.31$. We also show just the $k(z)$ for
early and late types in Fig.\ref{fig:LF_params}. At $z=0.1$ these
$k(z)$ models give respectively $\Delta_K=-0.26$ and $\Delta_K=-0.25$,
implying little evolution in the $e(z)$ model for the early types and
0.06 mag for the late types.
We note that \citetalias{lh11} apply their $k+e$ corrections to the data
whereas we apply them to the model. So reversing their sign on their
$k(z)$ and $e(z)$ terms, the correction we add to our $K$ magnitudes in
our count model is
\begin{equation}
\Delta_K(z)=k(z)-e(z).
\end{equation}
\citetalias{lh11} give $k(z)=-2.1z$ and $e(z)=0.8z$, so our additive correction is
\begin{equation}
\Delta_K(z)=k(z)-e(z)=-2.1z-0.8z=-2.9z
\end{equation}
\noindent representing the \citetalias{lh11} $k$- and evolutionary
corrections, giving $\Delta_K=-0.29$ mag at $z=0.1$. Their second model
includes an additional galaxy $(1+z)^4$ surface brightness dimming
correction, so in magnitudes is
\begin{equation}
\Delta_K(z)=0.16(10\log_{10}(1+z))+1.16(k(z)-e(z))
\end{equation}
\noindent i.e.
\begin{equation}
\Delta_K(z)=1.6\log_{10}(1+z)-3.4z
\end{equation}
\noindent and so $\Delta_K=-0.27$ mag at $z=0.1$.
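The two \citetalias{lh11} corrections are easily checked numerically (a sketch reproducing the quoted $z=0.1$ values):

```python
import numpy as np

def delta_k_lh11(z):
    """LH11 additive k+e correction to model magnitudes: -2.9 z."""
    return -2.9 * z

def delta_k_lh11_sb(z):
    """LH11 second model, including (1+z)^4 surface-brightness
    dimming: 1.6 log10(1+z) - 3.4 z."""
    return 1.6 * np.log10(1.0 + z) - 3.4 * z
```

At $z=0.1$ these give $-0.29$ and $-0.27$ mag respectively, as quoted above.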
Since we are using total $K$ magnitudes, the effect of cosmological
dimming of surface brightness is included in our measured magnitudes. So
in any comparison of the \citetalias{lh11} model with our $K$ band data,
only the $k+e$ terms are used in the model. So at $z=0.1$ our $k+e$
term is $\Delta_K\approx-0.29$ mag, the same as the $\Delta_K=-0.29$ mag
of \citetalias{lh11}. Similarly at $z=0.3$, which is effectively our
largest redshift of interest at $K<15.5$,
$\Delta_K\approx-0.60$ to $-0.69$ mag for the \citetalias{ws14} $k+e$
model compared to $\Delta_K=-0.87$ mag for \citetalias{lh11}.
\subsection{Lavaux \& Hudson $n(m)$ and $n(z)$ comparisons to $K=11.5$}
\label{sec:lh11_2mass}
In Figs.~\ref{fig:nz}(a) and ~\ref{fig:nm}(a) we now compare the
\citetalias{lh11} model predictions to those of \citetalias{ws14} for
the 2MRS and 2M++ $n(z)$ and 2MASS $n(K)$ distributions. Most notably,
we find that the \citetalias{lh11} model produces theoretical $n(K)$ and
$n(z)$ counts that are significantly lower than the \citetalias{ws14}
counterparts and, if anything, slightly {\it under}-predict the observed
wide-sky counts particularly near the peak of the $n(z)$ in
Fig.~\ref{fig:nz}(a). The \citetalias{lh11} $n(K)$ model is offset by
$\approx40$\% from the \citetalias{ws14} $n(K)$ prediction. We also note
that the $n(z)$ distribution predicted by \citetalias{lh11} when
compared to the 2M++ $n(z)$, limited at $K=11.5/12.5$ mag, shows
excellent agreement (see \citetalias{lh11} Fig. 5). However, in
Fig.~\ref{fig:nm}(a), beyond $K>12.5$, the \citetalias{lh11} $n(K)$ model
diverges away from the 2MASS data. In contrast, the \citetalias{ws14}
model was found to generate a consistency between the wide-sky $n(K)$
and $n(z)$ distributions and imply a similar under-density of
$\approx20$\% at $K<11.5$. Due to the consistency of the slope in each
model at both the bright and faint end of $n(K)$ counts, it is likely
that the difference between the \citetalias{lh11} and the
\citetalias{ws14} models is caused by the different effective
normalisation in $\phi^*$ seen around the break in the LF in
Fig.~\ref{fig:LF_params}.
We further note that when we try to reproduce Fig. 5 of
\citetalias{lh11}, by combining $n(z)$ model predictions using their LF
model parameters for their combined $K<11.5$ and $K<12.5$ 2M++ samples
covering 13069 and 24011 deg$^2$ respectively, we find that we
reasonably reproduce the form and normalisation of their predicted
$n(z)$ to a few percent accuracy. So why the fit of the
\citetalias{lh11} model is poorer than in our Fig. \ref{fig:nz} (a) than
in their Fig. 5 remains unknown. Nevertheless, we accept that their
model fits our Fig. \ref{fig:nz} (a) $n(z)$ better than the model of
\citetalias{ws14}.
\subsection{Lavaux \& Hudson $n(m)$ comparison at $K<16$}
To examine the ability of the \citetalias{lh11} model simultaneously to
predict the galaxy $n(K)$ at bright and faint magnitudes, we now compare
the \citetalias{lh11} and \citetalias{ws14} models to the fainter
$n(K)$ counts of the GAMA survey, shown in Fig.~\ref{fig:gama}. We
calculate errors using field-field errors as described in Section
\ref{subsec:error_calculation}.
To compare the count models, we again assume the \citetalias{lh11} LF
parameters from their Table 2, $\alpha=-0.86$; $M^*=-23.24+5{\rm
log}_{10}(h)$ (corrected brighter by 0.19 mag into our system);
$\phi^*=1.13\times10^{-2} h^3 {\rm Mpc}^{-3}$. We also assume the $k+e$ term
of $\Delta_K=-2.9z$ used by \citetalias{lh11}, with one version of the
model cut at $z<0.6$ and one at $z<1$, as shown by the dashed and dotted
lines in Fig. \ref{fig:gama}.
\begin{figure}
\includegraphics[width=\columnwidth]{nk_gama_final_v7.png}
\caption{The WS14 and LH11 count models compared to the GAMA
survey observed $n(K)$ counts averaged over 3 fields. Solid circles
are the GAMA counts with the Gaia star-galaxy separation and open
circles are with the Gaia separation applied after star-galaxy
separating by colour \protect\citep{baldry10}. Two versions of the
LH11 model are shown with redshift cuts at $z<0.6$ and $z<1.0$ to
prevent the model diverging due to an unphysical high redshift tail.
Field-field errors based on the 3 GAMA fields are shown.}
\label{fig:gama}
\end{figure}
The two \citetalias{lh11} predictions reasonably fit the bright data at
$K<11$ but lie below the observed GAMA data out to $K\approx15$, then
agree with these data at $K\approx15.5$. In the case of the version
cut at $z<1.0$, the model then rises above the GAMA counts. The model
cut at $z<0.6$ remains in better agreement with these data. But without
the redshift cuts we find that the $\Delta_K=-2.9z$ $k+e$ term used by
\citetalias{lh11} would vastly overpredict the observed galaxy count not
just at $K>15.5$ mag but at brighter magnitudes too. This is the usual
problem with an evolutionary explanation of the steep count slope at
$K<12$, in that models that fit that slope then invariably overpredict
the counts at fainter magnitudes.
slope at fainter magnitudes. For an evolutionary model to fit, a strong
evolution, either in galaxy density or luminosity (as in the
\citetalias{lh11} + \citetalias{ws14} models used here) is needed out to
$z<0.1$ and then something quite close to a no-evolution model is
required at $0.1<z<1$ in the $K$ band. This is similar to what was found
in the $b_J$-band where strong luminosity evolution is at least more
plausible. In $K$ the evolution is less affected by increasing numbers
of young blue stars with redshift and so the evolutionary explanation is
even less attractive.
The conclusion that the steep $K$ counts are caused by local large-scale
structure rather than evolution is strongly supported by the form of the
$n(z)$ seen in Fig. \ref{fig:nz} where the pattern of underdensities is
quite irregular as expected if dominated by galaxy clustering rather
than the smoothly increasing count with $z$ expected from evolution. We
have also shown that following the detailed changes in $n(z)$ with
redshift to model $\phi^*(z)$ gives a consistent fit to the steep
$n(m)$ distribution at $K<12$. We conclude that unless a galaxy
evolution model appears that has the quick cut-off at
$z\approx0.1$ required simultaneously in the $K$ and $b_J$ counts, then
the simplest explanation of the steep $n(K)$ slope at bright magnitudes
is the large-scale structure we have termed the `Local Hole'.
\begin{figure}
\includegraphics[width=\columnwidth]{nz_gama_k_10_15_v2.png}
\caption{Galaxy $n(z)$ for GAMA survey limited at $10<K<15$ and the
predictions of the WS14 and LH11 models. We chose the $K<15$ limit here
because this appears to be the effective limit for the $K$ band
spectroscopic survey in G09 and G12, although G15 may be complete to a
0.5mag fainter limit. We note that there is a `bump' in the GAMA $n(z)$ at $z\approx0.25$
that appears to have its origin mostly in the G09 and G15 fields with less contribution from
G12. G09 and G15 are the two most widely separated fields of the three,
suggesting that this feature is a statistical fluctuation, unless it is
caused by some redshift survey target-selection issue.
}
\label{fig:gama_nz}
\end{figure}
These conclusions are confirmed by the GAMA $n(z)$ in the range
$10<K<15$, averaged over the G09, G12 and G15 fields and compared to the
\citetalias{ws14} + \citetalias{lh11} models in Fig. \ref{fig:gama_nz}.
Similar results are seen to those for the GAMA $n(K)$ in Fig.
\ref{fig:gama} with the \citetalias{ws14} model better fitting these
data than the \citetalias{lh11} model that again significantly
underestimates the observed $n(z)$. Some hint of an under-density is
seen out to $z\approx0.12$ in the \citetalias{ws14} model
comparison with the observed data but the area covered is only $180$
deg$^2$ so the statistical errors are much larger than for the brighter
$K<11.5$ or $K<12.5$ `wide-sky' redshift survey
samples.\footnote{We note that at the suggestion of a
referee, we investigated the 2MASS Photometric Redshift Survey
(2MPZ, \citealt{Bilicki2014}) $n(z)$ over the wide sky area used in Fig.~\ref{fig:nz} to
$K<13.7$, finding evidence that this underdensity may extend to
$z\approx0.15$. But since this result could be affected by as yet unknown
systematics in the 2MPZ photometric redshifts, we have left this analysis for
future work.}
\begin{figure}
\includegraphics[width=\columnwidth]{nk_vics82_kp04_12_18_v2.png}
\caption{The WS14 and LH11 count models compared to the VICS82 survey
\protect\citep{Geach2017} observed $n(K)$ counts averaged over
$\approx150$ deg$^2$ to $K<18$. Results are based on star-galaxy
separation $Class\_Star<0.9$ with further removal of Gaia pointlike
objects as defined by eq \ref{eq:gaia}. Field-field errors based on 2
sub-fields of area 69 and 81 deg$^2$ are shown. The \citetalias{lh11}
models again have redshift cuts at $z<0.6$ and $z<1$ to prevent
divergence due to an unphysical high redshift tail.}
\label{fig:vics82}
\end{figure}
\subsection{VICS82 $K$ count model comparison to $K=18$}
To assess further the LF normalisation uncertainties, we present in
Fig. \ref{fig:vics82} the $n(K)$ galaxy counts in the range $12<K<18$
over the $\approx150$ deg$^2$ area of the VICS82 survey
\citep{Geach2017}. Here, the faint $K=18$ limit is 2 mag fainter than
the GAMA limit in Fig.~\ref{fig:gama}. Use of the fainter, $K>18$,
VICS82 data to test LF parameters would increasingly depend on the
evolutionary model assumed. The bright limit is chosen because the
$Class~Star$ parameter is only calculated by \citet{Geach2017} for
$K>12$ to avoid effects of saturation. The $K$ magnitudes are corrected
into the 2MASS $K\_m\_ext$ system (see Section \ref{sub_subsec:vics82}
and Appendix \ref{appendix_vics82}). As also described in Section
\ref{sub_subsec:vics82} we have assumed a conservative star-galaxy
separation using $Class~Star<0.9$ and then removing any remaining
pointlike objects using Gaia data and eq \ref{eq:gaia}. We note that
there is good agreement with the counts given by \citet{Geach2017} in
their Fig. 5, once our magnitude offsets are taken into account. In the
full range, $12<K<16$, we again see excellent agreement with the
\citetalias{ws14} model and again the \citetalias{lh11} model
significantly under-predicts the galaxy counts. We conclude that, like
the GAMA counts, the VICS82 $K$-band data also strongly support the
accuracy of the \citetalias{ws14} model and its LF parameters, from
counts based on a completely independent sky area.
\subsection{Discussion}
What we observe is that the brighter $K<11.5$ 2MRS $n(z)$ requires a
20\% lower $\phi^*$ than the $K<15$ GAMA $n(z)$. So good fits to both
$n(z)$'s can be obtained if the LF $\phi^*$ is left as a free parameter
(see also Fig. 7 of \citealt{Sedgwick21}). This means that the Local
Hole may have quite a sharp spatial edge at $z\approx0.08$ or
$r\approx240h^{-1}$ Mpc. Otherwise, in an evolutionary interpretation
this would look more like pure density evolution than luminosity
evolution. In the density evolution case it is true that it would be
nearly impossible to differentiate a physical under-density from a
smoothly increasing galaxy density with redshift due to evolution. But
the reasonable fit of homogeneous models in the $z<0.08$ range would again
imply that there was a sharp jump in the galaxy density above this
redshift. Again this increase in density cannot continue at $z>0.08$ for
the same reason as for pure luminosity evolution, since the counts at
higher redshift would quickly be over-predicted. We regard either of
these sharply changing evolutionary scenarios around $z\approx0.08$ as
much less likely than an under-density, as has been argued for some
years even on the basis of blue-band number counts \citep{shanks90,Metcalfe91}.
We highlight the relative normalisations of the \citetalias{ws14} and
\citetalias{lh11} LF models as the key outcome of our analysis. The
\citetalias{lh11} model fails to fit the faint $n(m)$ galaxy counts in
the GAMA survey. If their normalisation is correct and no local
under-density exists then it is implied that galaxies must evolve in a
way that their space density sharply increases at $z\ga0.08$ and $K>12$
and then returns to a non-evolving form out to $z\approx0.5$ and $K>20$.
This single spurt of evolution at $z\approx0.08$ has to be seen at
similar levels in the $b_J$, $r$ and $H$ bands as well as in the $K$
band. It was the unnaturalness of this evolutionary interpretation that
originally led e.g. \citet{shanks90} to normalise their LF estimates at
$b_J(\sim g)>17$ mag rather than at brighter magnitudes where the form
of the LF was estimated. Even authors who originally suggested such an
evolutionary explanation (e.g. \citealt{Maddox1990}) have more recently
suggested that a large scale structure explanation was more plausible
(e.g. \citealt{Norberg2002}). Moreover, \citetalias{ws14} have
presented dynamical evidence for a local outflow in their analysis of
the relation between $\bar{z}$ and $m$ and \citet{shanks19, shanks19b}
have shown that this outflow is consistent with the Local Hole
under-density proposed here. It will also be interesting to see whether
future all-sky SNIa supernova surveys confirm this $\bar{z}:m$ outflow
evidence, based as it is on the assumption that the $K-$band luminosity
function is a reasonable standard candle.
We suggest that the crucial issue for \citetalias{lh11} and
\citet{Sedgwick21} is that they have fitted their LF parameters and
particularly the LF normalisation in the volume dominated by the Local
Hole and thus calibrated out the under-density. Certainly their $n(K)$ and
$n(z)$ models clearly fail at magnitudes and redshifts just outside the
ranges where they have determined their LF parameters. These authors
would need to show powerful evidence for the $z<0.1$ evolution spurt in
the favoured $\Lambda$CDM model before their rejection of the Local Hole
hypothesis could be accepted. In the absence of such a model the balance
of evidence will clearly favour the Local Hole hypothesis.
\section{Conclusions}
In this work we have examined the local galaxy distribution and extended
the work of \citet{ws14} by measuring observed number-redshift $n(z)$
and number-magnitude $n(m)$ galaxy counts in the $K-$band across
$\approx90$\% of the sky down to a Galactic latitude
$|b|\ga5^\circ$.
The $n(z)$ distributions from the 2MRS and 2M++ surveys to $K<11.5$ were
compared to the homogeneous model of \citetalias{ws14} (see also
\citealt{metcalfe01,metcalfe06}). These wide-sky $n(z)$ distributions
showed excellent agreement with each other and implied an under-density of $22\pm2$\%
relative to the model at $z<0.075$. We also find that the 2MASS $K$ counts
show a similar under-density of $20\pm2\%$ at $K<11.5$ relative to the
same model, only converging to the predicted count at $K\approx13.5$. In
addition, an LSS-corrected $\phi^{*}(z)$ model based on the $n(z)$
distribution, when compared to the 2MASS $K$ counts, showed a much
improved fit, confirming the consistency of the 2MASS $n(m)$ and the
2MRS/2M++ $n(z)$ in detecting this under-density relative to the
\citetalias{ws14} model. We also found the under-density covered 20/24
or $\approx83$\% of the observable wide-sky with only areas containing
the Shapley and other super-clusters and rich clusters like Coma showing up as
over- rather than under-densities.
Combined, our $n(m)$ and $n(z)$ counts are in good agreement with the
work of \citetalias{ws14}, \citet{frith03}, \citet{busswell04} and
\citet{keenan13}, who find overall under-densities of order
$\approx15$--$25$\% using a similar galaxy-counts method. We also recall that in
the $\approx9000$deg$^2$ sky area analysed by \citetalias{ws14}, the
under-density patterns found in redshift were confirmed in detail by the
distribution traced by X-ray galaxy clusters in the same volume
\citep{bohringer20}.
To examine whether our measured under-density represents a physical
Local Hole in the galaxy environment around our observer location
requires a confirmation of the accuracy of the \citetalias{ws14} galaxy
count model. We have investigated this by comparing the model's
predictions for the fainter $K$ galaxy counts from the GAMA and VICS82
surveys. We have also compared these data with the model predictions of
\citetalias{lh11} who failed to find an under-density in the 2M++ survey.
The $n(m)$ and $n(z)$ counts predicted by the \citetalias{lh11} model
are lower by $\approx40$\% compared to the \citetalias{ws14} model; the
\citetalias{lh11} model thus initially appears to under-predict the
observed wide-sky $n(K)$ and $n(z)$ distributions from 2MASS, 2MRS and
2M++. Then, at $K>13.5$, beyond the 2MASS sample range, the
\citetalias{ws14} prediction fits very well the observed $n(K)$ and
$n(z)$ counts in the GAMA survey and the observed $n(K)$ in the VICS82
survey. However, the \citetalias{lh11} model shows a consistently poor
fit over both the full GAMA+VICS82 $n(K)$ and GAMA $n(z)$
distributions. Thus the GAMA + VICS82 results indicate that the
\citetalias{ws14} model can more accurately fit deep $K-$ counts than
the \citetalias{lh11} model, supporting its use in interpreting the
lower redshift, wide-sky surveys.
Consequently, our analyses here support the existence of the `Local
Hole' under-density over $\approx90$\% of the sky. At the limiting
magnitude $K<11.5$ the under-density of $20\pm2$\% in the $n(z)$ counts
corresponds to a depth of $\approx 100h^{-1}$ Mpc, while the $13\pm1$\%
under-density at $K<12.5$ in the 2MASS wide-sky $n(m)$ counts, that is
in good agreement with \citetalias{ws14}, would imply the under-density
extends further to a depth of $\approx 150h^{-1}$Mpc. We note that
the statistical error on our LF normalisation can be easily estimated
from the field-to-field errors in the $10<K<15$ galaxy counts between
the 3 GAMA fields (see Table \ref{tab:2MASS_GAMA}) and this gives an
error of $\pm3.4$\%. The error estimated from the two VICS82 sub-fields
would be similar at $\pm3.6$\% in the range $12<K<16$, decreasing to
$\pm1.1$\% in the range $12<K<18$. Combining the GAMA $\pm3.4$\% error
with the $\pm2$\% error on the -20\% under-density to $K<11.5$ mag gives
the full uncertainty on the Local Hole under-density out to 100h$^{-1}$
Mpc to be $-20\pm3.9$\% i.e. a $5.1\sigma$ detection. Similarly the
Local Hole $K<12.5$ under-density out to $\approx150$h$^{-1}$ Mpc is a
$-13\pm3.5$\% or a $3.7\sigma$ detection.
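The error combinations quoted above can be checked with a short Python sketch; the percentage values are those quoted in the text, and nothing else is assumed:

```python
import math

def combine_in_quadrature(*errors):
    """Total 1-sigma error (per cent) from independent error components."""
    return math.sqrt(sum(e * e for e in errors))

# K < 11.5: -20 +/- 2 per cent under-density, combined with the
# +/- 3.4 per cent GAMA field-to-field error on the LF normalisation
err_115 = combine_in_quadrature(2.0, 3.4)
print(f"K<11.5: -20 +/- {err_115:.1f}% -> {20.0 / err_115:.1f} sigma")

# K < 12.5: -13 +/- 1 per cent under-density, same normalisation error
err_125 = combine_in_quadrature(1.0, 3.4)
print(f"K<12.5: -13 +/- {err_125:.1f}% -> {13.0 / err_125:.1f} sigma")
```

This reproduces the $-20\pm3.9$\% ($5.1\sigma$) and $-13\pm3.5$\% ($3.7\sigma$) detections quoted above.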
Such a 13-20\% underdensity at $\approx$100-150 h$^{-1}$Mpc scales would
notably affect distance scale measurements of the expansion rate $H_0$.
We can calculate this by assuming the linear theory discussed in
\citetalias{ws14} and \citet{shanks19}, where $\delta H_{0}/H_{0} =
-\frac{1}{3} \, \delta\rho_{g}/\rho_{g} \times \Omega_{m}^{0.6}/b$. Here
we take the galaxy bias $b\approx1.2$ for $K-$selected 2MRS galaxies in
the standard model (see e.g. \citealt{Boruah2020}; also
\citealt{Maller2005,Frith2005}, although these latter $b$ values should
be treated as upper limits since they apply to $K<13.5$ and bias is
expected to rise with redshift). From our measured $n(m)$ and $n(z)$
underdensities this would produce a decrease in the local value of $H_0$
of $\approx2-3$\%.
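This linear-theory estimate can be evaluated directly. In the sketch below, $\Omega_m=0.3$ is an assumed fiducial value and $b=1.2$ is the bias adopted above; the two under-densities are those measured in this work:

```python
# delta_H0/H0 = -(1/3) * (delta_rho_g/rho_g) * Omega_m^0.6 / b, as in the text.
# Omega_m = 0.3 is an assumed fiducial value; b = 1.2 is the bias adopted above.
omega_m, bias = 0.3, 1.2

def delta_h0_frac(delta_rho_over_rho):
    return -(1.0 / 3.0) * delta_rho_over_rho * omega_m**0.6 / bias

for under_density in (-0.13, -0.20):
    shift = delta_h0_frac(under_density)
    print(f"delta_rho/rho = {under_density:+.2f}: delta_H0/H0 = {100 * shift:+.1f}%")
```

The positive sign means $H_0$ measured inside the under-density is biased high by $\approx2$--$3$\%; correcting for the Local Hole therefore lowers the local $H_0$ estimate by the same amount.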
We finally consider the significance of such a large scale
inhomogeneity within the standard cosmological model.
\citet{frith06} created mock 2MASS catalogues from the
Hubble Volume simulation to determine theoretically allowed
fluctuations and found that a $1\sigma$ fluctuation
to $H=13$ ($K\approx12.5$) over 65\% of the sky corresponded to
$\pm3.25$\%. Scaling this to the 90\% wide-sky coverage used here
implies $1\sigma=2.8$\%. Given our $13\pm3.5$\% under-density to
$K<12.5$, we can add in quadrature this $\pm2.8$\%
expected fluctuation from the $\Lambda$CDM model to obtain $13\pm4.5$\%,
with the error now including both our measurement error and the
count fluctuation expected out to $\approx150$h$^{-1}$ Mpc in $\Lambda$CDM.
The Local Hole with a 13\% under-density therefore here corresponds to
a $2.9\sigma$ deviation from what is expected in a $\Lambda$CDM cosmology.
If we scale this from $K<12.5$ mag to $K<11.5$ mag via a 3-D version of
Eq. 3 of \citet{frith05}, a $1\sigma$ fluctuation at $K<11.5$ corresponds
to $\pm5.6$\%. At $K<11.5$, the under-density is $-20\pm2$\% and folding in
the $\pm3.4$\% normalisation error gives $-20\pm3.9$\% or a $5.1\sigma$
detection of the Local Hole under-density. Then adding in the $\pm5.6$\%
expected fluctuation amplitude just calculated gives $-20\pm6.8$\%,
implying again a $2.9\sigma$ deviation in the $\Lambda$CDM cosmology,
similar to the $K<12.5$ case.
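The deviations from $\Lambda$CDM quoted in the last two paragraphs follow from the same quadrature rule; the short check below uses only the percentages given above:

```python
import math

# K < 12.5: measured -13 +/- 3.5%; expected LCDM fluctuation +/- 2.8%
total_125 = math.hypot(3.5, 2.8)
print(f"K<12.5: -13 +/- {total_125:.1f}% -> {13.0 / total_125:.1f} sigma from LCDM")

# K < 11.5: measured -20 +/- 3.9%; expected LCDM fluctuation +/- 5.6%
total_115 = math.hypot(3.9, 5.6)
print(f"K<11.5: -20 +/- {total_115:.1f}% -> {20.0 / total_115:.1f} sigma from LCDM")
```

Both magnitude limits give the same $2.9\sigma$ deviation from the expected $\Lambda$CDM count fluctuation.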
However, the deviation from $\Lambda$CDM is likely to be more
significant. For example, if we normalised our model via the VICS82
$n(K)$ counts in the $12<K<18$ range (see Fig. \ref{fig:vics82}) then
this would argue that our LF normalisation should be still higher and the
field-field error would also be lower at $\pm1.1$\%. Additionally,
taking into account the excellent fit of the \citetalias{ws14} model to the 2MASS
wide-sky data itself at $12.5<K<13.5$ (see Fig. \ref{fig:nm}) would also
further increase the significance of the deviation from $\Lambda$CDM.
Although the Hubble Volume mocks of \citet{frith06} have
tested our methodology in the context of an N-body simulation `snapshot'
with an appropriate galaxy clustering amplitude in volumes similar to
those sampled here, it would be useful to make further tests in a more
realistic simulation. For example, a full lightcone analysis could be
made, applying our selection cuts in a mock that includes a full
`semi-analytic' galaxy formation model (e.g. \citealt{Sawala2022}).
This would make a further direct test of our methodology while checking
if there is any evolutionary effect that provides the spurt of density
evolution at $z\approx0.08$ required to provide an alternative to our
large-scale clustering explanation of the Local Hole.
We therefore anticipate that further work to separate out the effects of
evolution and LSS on the luminosity function in each of the
\citetalias{ws14} and \citetalias{lh11} approaches will shed further
light on the presence and extent of the Local Hole. Similarly, further
work will be needed to resolve the discrepancy between the detection of
dynamical infall at the appropriate level implied from the Local Hole
under-density found by \citetalias{ws14}, \citet{shanks19} and
\citet{shanks19b} as compared to the lack of such infall found by
\citet{kenworthy19} and \citet{Sedgwick21}. But here we have confirmed
that the proposed Local Hole under-density extends to cover almost the
whole sky, and argued that previous failures to find the under-density
are generally due to homogeneous number count models that assume global
LF normalisations that are biased low by being determined within the
Local Hole region itself.
Finally, if the form of the galaxy $n(K)$ and $n(z)$ do
imply a `Local Hole' then how could it fit into the standard
$\Lambda$CDM cosmology? Other authors have suggested possibilities to
explain unexpectedly large scale inhomogeneities such as an anisotropic
Universe (e.g. \citealt{Secrest2021}). However, it is hard to see how such
suggestions retain the successes of the standard model in terms of the
CMB power spectrum etc. We note that other anomalies in the local galaxy
distribution exist e.g. \citet{Mackenzie2017} presented evidence for a
coherence in the galaxy redshift distribution across
$\approx600$h$^{-1}$ Mpc of the Southern sky out to $z\approx0.1$.
Prompted by this result and by the `Local Hole' result reported here,
Callow et al. (2021, in prep.) will discuss the possibilities that arise
if the topology of the Universe is not simply connected. We emphasise
that there is no proof of this idea; here we simply use it as an example
of a model that might retain the basic features of the standard model while
producing a larger than expected coherent local under- or over-density.
It will be interesting to look for other models that introduce such `new
physics' to explain the local large-scale structure while simultaneously
reducing the tension in Hubble's Constant.
\section*{Acknowledgements}
We first acknowledge the comments of an anonymous referee that have significantly
improved the quality of this paper. We further acknowledge STFC Consolidated Grant
ST/T000244/1 in supporting this research.
This publication makes use of data products from the Two Micron All Sky
Survey (2MASS), which is a joint project of the University of
Massachusetts and the Infrared Processing and Analysis Center/California
Institute of Technology, funded by the National Aeronautics and Space
Administration and the National Science Foundation.
It also makes use of the 2MASS Redshift survey catalogue as described by
\citet{huchra12}. The version used here is catalog version 2.4 from the
website http://tdc-www.harvard.edu/2mrs/ maintained by Lucas Macri.
Funding for SDSS-III has been provided by the Alfred P. Sloan
Foundation, the Participating Institutions, the National Science
Foundation and the US Department of Energy Office of Science. The
SDSS-III website is http://www.sdss3.org/.
The 6dF Galaxy Survey is supported by Australian Research Council
Discovery Projects Grant (DP-0208876). The 6dFGS web site is
http://www.aao.gov.au/local/www/6df/.
GAMA is a joint European–Australasian project based around a
spectroscopic campaign using the Anglo-Australian Telescope. The GAMA
input catalogue is based on data taken from the Sloan Digital Sky Survey
and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the
GAMA regions is being obtained by a number of independent survey
programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE,
Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is
funded by the STFC (UK), the ARC (Australia), the AAO and the
participating institutions. The GAMA website is
http://www.gama-survey.org/.
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC,
\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.
\section*{Data Availability}
The 2MASS, 2MRS, 6dF, SDSS, GAMA, VICS82 and Gaia data we have used are all publicly available.
All other data relevant to this publication will be supplied on request to the authors.
\bibliographystyle{mnras}
|
\section{Introduction}
\label{intro}
In complex thin-film transition-metal oxide heterostructures, interface phenomena can substantially affect the electronic properties and lead to new electronically ordered states not observed in the bulk constituents \cite{ohtomo1,pavlenko_kopp,pavlenko_kopp2,pavlenko_schwabl2,jackeli,pavlenko_sawatzky,pavlenko_orb}. Prominent examples are the titanate
interfaces in the heterostructures LaAlO$_3$/SrTiO$_3$, LaTiO$_3$/SrTiO$_3$ and similar compounds \cite{ohtomo,thiel}. These structures have recently
attracted much attention, focused on the discovery of a two-dimensional electron liquid state stabilized by the interface electronic reconstruction.
Magnetoresistance and torque magnetometry measurements \cite{brinkman,dikin,li} demonstrate hysteresis in the
field dependence of the magnetoresistance, together with superparamagnetic behavior, which suggests the existence of ferromagnetic puddles
in samples of LaAlO$_3$ (LAO) grown on SrTiO$_3$ (STO).
The observation of the magnetism at the LAO/STO interface triggered
an intense exploration of the role of impurities in the formation of the magnetically ordered state at these interfaces \cite{ohtomo,brinkman,kalabukhov,dikin,li,ariando,pavlenko,pavlenko2,pavlenko3}.
In particular, recent first-principles studies demonstrate a vacancy-related magnetic exchange splitting of $3d$ states of interface Ti atoms~\cite{pavlenko2,pavlenko3}. In this case, the conducting electrons, which emerge due to the charge compensation of the polar discontinuity, occupy $d_{xy}$ bands, whereas the vacancy-released electrons occupy $e_g$ states shifted below the Fermi level due to the interface orbital reconstruction~\cite{pavlenko2,pavlenko3}.
STM, cathode luminescence studies and conductivity measurements provide strong support for
the existence of the oxygen vacancies in STO layers of LAO/STO heterostructures \cite{muller,kalabukhov}.
As the DFT calculations are unable to access a realistic random low-concentration distribution of impurities,
we consider here an effective two-dimensional one-band Hubbard model on a square lattice with disordered random vacancies. In this model, each vacancy introduces an exchange splitting of the local $3d_{xy}$ state of neighbouring Ti atoms, in this way stabilizing a ferromagnetic order through the electronic transfer term.
\section{One-orbital model of random oxygen vacancies}
The two-dimensional electronic liquid at the titanate interfaces is described by an effective one-band Hubbard model on a lattice with $N$ sites which corresponds to the interface TiO$_2$ layer. Each site $i$ identifies a doubled $\sqrt{2}\times \sqrt{2}$ TiO$_2$ unit cell with two Ti atoms $j=1,2$ and four nearest neighbouring oxygen atoms $l=1,\ldots4$ (see Figure~\ref{fig1}), where the cell doubling is introduced for the studies of magnetically ordered states on two Ti sublattices. In this configuration, the oxygen vacancy corresponds to the elimination of one of the oxygens in the Ti$_2$O$_4$-plaquette.
The local disorder induced by an oxygen vacancy on the lattice site ($i,l$) is introduced through the local random fields $h_{i\sigma,l}=h_\sigma$, which shift the electronic $3d_{xy}$ states of the neighbouring Ti atoms:
\begin{eqnarray}
H=&&(\varepsilon_d-\mu)\sum_{i=1\atop \sigma; j=1,2}^N d_{i\sigma,j}^\dag d_{i\sigma,j}+U\sum_{i\atop j=1,2} n_{i\uparrow,j} n_{i\downarrow,j} +T_{dd} \nonumber\\
&& +\sum_{i\sigma\atop l=1,\ldots 4} h_\sigma(1-x_{il})d_{i\sigma,2}^\dag d_{i\sigma,2}\\
&& +\sum_{\langle ii'\rangle\sigma \atop l=1,\ldots 4} h_\sigma(1-x_{i'l})d_{i\sigma,1}^\dag d_{i\sigma,1}, \nonumber\\
&& T_{dd}=t \left (\sum_{i ;\sigma} d_{i\sigma,1}^\dag d_{i\sigma,2}+\sum_{\langle ii'\rangle ;\sigma} d_{i\sigma,1}^\dag d_{i'\sigma,2}+h.c. \right).
\end{eqnarray}
Here $d_{i\sigma,j}^{\dag}$ are electron creation operators and $n_{i\sigma,j}=d_{i\sigma,j}^\dag d_{i\sigma,j}$ are the occupation numbers for the self-doped $3d_{xy}$ electrons with the local energy $\varepsilon_d$ and chemical potential $\mu$; $U$ is the local Hubbard repulsion and $t$ is the effective indirect $d-d$ electronic transfer energy. The binary discrete random variable $x_{il}=\{0,1\}$ is zero if the oxygen atom is absent in the oxygen position ($i$,$l$) of the unit cell $i$. The two last terms in $H$ describe the magnetic splitting $h_\uparrow =-h_\downarrow=-h$ of the local electronic states of Ti due to an oxygen vacancy in each of four possible neighbouring ($i,l$) positions \cite{pavlenko2,pavlenko3}. For Ti$_2$, the nearest neighbouring oxygen atoms belong also to different unit cells with the coordinates $\vec{R}_{i'}=\{\vec{R}_{i},\vec{R}_{i}-\vec{a}_x,\vec{R}_{i}+\vec{a}_y,\vec{R}_{i}-\vec{a}_x+\vec{a_y}\}$.
\begin{figure}[tbp]
\epsfxsize=5.0cm {\epsfclipon\epsffile{fig1.eps}}
\caption{Scheme of a doubled unit cell on the MO$_2$-plane (M=Ti) of a SrTiO$_3$-layer
}
\label{fig1}
\end{figure}
In the ferromagnetic state, the local magnetic moment $m=m_j=\langle n_{i\uparrow,j}\rangle-\langle n_{i\downarrow,j} \rangle$ is defined through the average orbital occupancies of the majority versus the minority local spin states, which can be expressed via the thermodynamic averages of the corresponding double-time one-particle Green functions $G_{i\sigma; i'\sigma'}^{jj'}(t-t')=-i\Theta(t-t')\langle [d_{i\sigma,j}(t),d_{i'\sigma,j'}^{\dag}(t')]\rangle$ \cite{zubarev}. Here $d_{i\sigma,j}(t)$ and $d_{i'\sigma,j'}^\dag(t')$ are the Heisenberg representations of the operators $d_{i\sigma,j}$ and $d_{i'\sigma,j'}^\dag$. To calculate the average electronic orbital occupancies for the Hamiltonian with the random configurational variables $x_{il}$, we employ the coherent potential approximation \cite{elliott,esterling}, which allows us to express the configurationally averaged Green functions $\langle G_{i\sigma;i'\sigma'}^{jj'}(t-t') \rangle_c$ through the effective medium Green functions $R_{i\sigma; i'\sigma'}^{jj'}(t-t')$:
\begin{eqnarray}
G_{i\sigma; i'\sigma'}^{jj'}=R_{i\sigma; i'\sigma'}^{jj'}+\sum_{l=1\atop g=1,2}^N
R_{i\sigma; l\sigma'}^{jg} T_g^l R_{l\sigma; i'\sigma'}^{gj'},
\end{eqnarray}
where the $T$ matrix $\{ T_g^l\}$ describes the local scattering on the defect potential.
In the $(\vec{k},\omega)$-space, the effective medium sublattice Green functions have the following form:
\begin{eqnarray}\label{r}
R_{\sigma;\sigma}^{jj}(\vec{k},\omega)=R_{\sigma}^{jj}(\vec{k},\omega)=\frac{1}{2\pi D^\sigma}\left( \frac{x_1^\sigma-\varepsilon_{j}^\sigma}{\omega-x_1^\sigma}-\frac{x_2^\sigma-\varepsilon_{j}^\sigma}{\omega-x_2^\sigma} \right),
\end{eqnarray}
where $x_{1;2}^\sigma=(\varepsilon_1^\sigma+\varepsilon_2^\sigma)/{2}\pm {D^\sigma}/{2}$, $\varepsilon_{1;2}^\sigma=\varepsilon_d-\mu+U\langle n_{\bar{\sigma};1/2}\rangle+\Sigma_{1;2}^\sigma$, and $D^\sigma=\sqrt{(\varepsilon_1^\sigma-\varepsilon_2^\sigma)^2+4t^2|z_{\vec{k}}|^2}$ with $z_{\vec{k}}=1+\exp(ik_xa_x)+\exp(-ik_ya_y)+\exp(i\vec{k}(\vec{a}_x-\vec{a}_y))$ and $\langle n_{i\sigma;1/2}\rangle=\langle n_{\sigma;1/2}\rangle$. The effective self-energies $\Sigma_j$ should be determined from the equality of the effective-medium propagators and the corresponding configurationally averaged Green functions
\begin{eqnarray}
R_{i\sigma;i'\sigma'}^{jj'}(t-t')=\langle G_{i\sigma;i'\sigma'}^{jj'}(t-t') \rangle_c,
\end{eqnarray}
which is equivalent to the condition $\langle T_g^l\rangle_c=0$ in the single-site approximation, and in our case leads to the following equations for the determination of $\Sigma_j$:
\begin{eqnarray}\label{t}
\langle V_j^{i\sigma}(1-g_j^\sigma V_j^{i\sigma})^{-1}\rangle_c=0; \quad (j=1,2),
\end{eqnarray}
where $V_1^{i\sigma}=h_\sigma (4-x_{i1}-x_{i-a_x,2}-x_{i+a_y,4}-x_{i-a_x+a_y,3})-\Sigma_1^\sigma$
and $V_2^{i\sigma}=h_\sigma (4-\sum_l x_{il})-\Sigma_2^\sigma$ are the random deviations from the effective medium self energies due to the local disorder. The local Green functions
\begin{eqnarray}\label{g0}
(g_j^\sigma)^{-1}=(G_{0j}^\sigma)^{-1}-\Sigma_j^\sigma
\end{eqnarray}
are determined from the bare Green functions for the stoichiometric lattice:
\begin{eqnarray}\label{gg0}
(G_{0j}^\sigma)^{-1}=\omega-\varepsilon_d+\mu-U\langle n_{\bar{\sigma};j}\rangle
\end{eqnarray}
which are calculated after the mean-field decoupling of the local Hubbard term in $H$. We note that the mean-field approach allows one to capture the main features of the ordered states in correlated systems and is widely used to study orbital physics at correlated interfaces \cite{jackeli,hirsch}.
In the ferromagnetically ordered state, we have
$\Sigma_1^\sigma=\Sigma_2^\sigma$,
$G_{01}^\sigma=G_{02}^\sigma$, and the problem is reduced to the solution of the equations (\ref{t}) and the self-consistent equations for the magnetization and chemical potential
\begin{eqnarray}\label{mn}
m=\langle n_{\sigma,j}\rangle - \langle n_{\bar{\sigma},j}\rangle\nonumber\\
n=\langle n_{\sigma,j}\rangle + \langle n_{\bar{\sigma},j}\rangle,
\end{eqnarray}
which should be considered for a given electron concentration $n$. The thermodynamically averaged occupancies $\langle n_{\sigma,j}\rangle$ in this case are expressed through the effective-medium Green functions $R_{\sigma;\sigma}^{jj}(\vec{k},\omega)$ and correspond to the configurationally averaged electronic occupation numbers.
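To illustrate how the self-consistency loop for Eqs.~(\ref{mn}) closes in practice, the following Python sketch iterates the analogous mean-field equations in the clean limit $c=0$, where the CPA self-energies vanish and the doubled cell reduces to a simple square-lattice band. All numerical values ($U/t$, $n$, the temperature and the lattice size) are illustrative choices, not the parameters of Section 3:

```python
import numpy as np

# Mean-field self-consistency for the clean (c = 0) one-band Hubbard model on a
# square lattice: spin-sigma electrons move in the band eps_k shifted by
# U * <n_{-sigma}>; mu and the spin occupancies are iterated until
# n_up + n_dn = n at fixed filling.  t = 1 sets the energy units.
t, U, n_target, beta = 1.0, 6.0, 0.4, 100.0   # illustrative values
L = 64                                         # k-grid per direction
k = 2 * np.pi * np.arange(L) / L
kx, ky = np.meshgrid(k, k)
eps = -2 * t * (np.cos(kx) + np.cos(ky))       # bare square-lattice band

def fermi(e):
    # Fermi function 1/(1 + exp(beta*e)) written via tanh for stability
    return 0.5 * (1 - np.tanh(0.5 * beta * e))

n_up, n_dn = 0.25, 0.15                        # small initial spin splitting
for _ in range(500):
    # inner loop: bisect mu so that the total density equals n_target
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        dens = fermi(eps + U * n_dn - mu).mean() + fermi(eps + U * n_up - mu).mean()
        lo, hi = (mu, hi) if dens < n_target else (lo, mu)
    # outer loop: damped update of the spin-resolved occupancies
    new_up = fermi(eps + U * n_dn - mu).mean()
    new_dn = fermi(eps + U * n_up - mu).mean()
    n_up, n_dn = 0.5 * (n_up + new_up), 0.5 * (n_dn + new_dn)

m = n_up - n_dn
print(f"U/t = {U}, n = {n_target}: mean-field m = {m:.3f}")
```

The same structure, an inner chemical-potential search nested inside an outer mixing loop for $\langle n_{\sigma,j}\rangle$, carries over to the disordered case, with the Fermi-function averages replaced by spectral integrals of the effective-medium Green functions (\ref{r}).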
In the limit of small concentrations of the oxygen vacancies $c=\langle 1-x_{il}\rangle_c$, we can obtain the following expansion for $\Sigma_j^\sigma$
\begin{eqnarray}\label{sigma_c}
\Sigma_j^\sigma(\omega)=4h_\sigma \frac{c}{1-c}S_j(\omega)+O(c^2),
\end{eqnarray}
where $S_j(\omega)=1+{h_\sigma}/{(\omega-\varepsilon_d+\mu-U\langle n_{\bar{\sigma};j}\rangle-h_\sigma)}$. From (\ref{sigma_c}) we see that $\Sigma_j^\sigma \rightarrow 0$ for vanishing vacancy concentration $c \rightarrow 0$.
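The vanishing of the self-energy in the stoichiometric limit, and its linear scaling with $c$, can be verified numerically; in the sketch below all parameter values (in units of $t$) are illustrative:

```python
# Leading-order CPA self-energy from Eq. (sigma_c):
#   Sigma_j(w) = 4 h * c/(1-c) * S_j(w) + O(c^2),
#   S_j(w)     = 1 + h / (w - eps_d + mu - U*n_bar - h).
# h, eps_d, mu, U and n_bar below are illustrative values in units of t.
h, eps_d, mu, U, n_bar = 0.25, 0.0, 0.5, 4.0, 0.2

def sigma_leading(w, c):
    s = 1.0 + h / (w - eps_d + mu - U * n_bar - h)
    return 4.0 * h * c / (1.0 - c) * s

for c in (0.1, 0.01, 0.001):
    print(f"c = {c}: Sigma(w=1) ~ {sigma_leading(1.0, c):.5f}")
# The self-energy scales linearly with c and vanishes as c -> 0.
```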
\section{Results and discussion}
The numerical solutions of the equations (\ref{mn}) for different vacancy concentrations $c$ have been analyzed in the low-temperature range $0.0001<k_BT/t<0.01$ for the electronic concentration $n$ in the range between 0.1 and 0.4 which corresponds to the concentrations $6\cdot 10^{13}$~cm$^{-2}$--$2\cdot 10^{14}$~cm$^{-2}$ of the interface-doped polar charge in the TiO$_2$ layers measured in Hall effect experiments \cite{thiel,ohtomo} and estimated from ab-initio calculations \cite{pavlenko,pavlenko2,pavlenko3}.
\begin{figure}[tbp]
\epsfxsize=8.0cm {\epsfclipon\epsffile{fig2.eps}}
\caption{Local magnetic moment $m$ versus O-vacancy concentration $c$ for different $U$ and electron concentrations $n$. Here $h/t=0.25$, $k_BT/t=0.01$. In panel (b), the electron concentration is fixed to $n=0.4$.}
\label{fig2}
\end{figure}
In the regime of weak and intermediate Hubbard correlation energies $U/t \le 5$, we obtained a monotonic increase of the local magnetization for higher vacancy concentrations $c$, which can be observed in Fig.~\ref{fig2}(a). The value $U/t \approx 2-5$ corresponds to the correlation energies of Ti $3d$ electrons used in ab-initio DFT+U calculations in \cite{pavlenko,pavlenko2,pavlenko3}. Fig.~\ref{fig2}(a) shows that a robust magnetic state with magnetic moments larger than $0.05$~$\mu_B$ can be achieved only for high vacancy concentrations $c>0.1$, which correspond to vacancy densities above a critical value $6\cdot 10^{13}$~cm$^{-2}$. This range of vacancy densities was indeed analyzed in the ab-initio calculations, which explains the good agreement of the results obtained in \cite{pavlenko,pavlenko2,pavlenko3} with the experimental measurements.
Furthermore, the analysis of the magnetic moments calculated in the range of strong Hubbard correlations $U/t>6$ shows the existence of a transition from the weak-magnetization regime to the regime of strong magnetism, indicated in Fig.~\ref{fig2}(b). In the strong-correlation regime, large values of the local magnetic moments of the order $0.3$~$\mu_B$ are stabilized already for small $c$, which can be explained by the intrinsic property of the Hubbard model to stabilize the ferromagnetic state in the concentration range of approximately quarter-filling:
the transition from the paramagnetic to the ferromagnetic state upon an increase of $U$ can be identified from the magnetic phase diagram for the two-dimensional Hubbard model \cite{hirsch}, where the concentration range $0.2<c<0.5$ corresponds to the electronic doping levels considered in this work. Considering the large values of the magnetic moments, the situation at the cuprate interfaces corresponds rather to the strong-correlation regime. One can expect that the strong-correlation regime can be realized, for example, at oxygen-reduced cuprate interfaces, which would
provide a possibility for a strong enhancement of the ferromagnetic state by oxygen vacancies, a scenario which may lead to the formation of ferromagnetic regions coexisting with a superconducting background.
\section*{Summary}
We considered magnetic states of correlated oxide interfaces, where effective charge self-doping and magnetically ordered states emerge due to the electronic and ionic reconstructions. Employing the coherent potential approximation to the effective one-band Hubbard model with disorder, we analyzed the effect of random oxygen vacancies on the formation of two-dimensional magnetism. We find that the random vacancies enhance the ferromagnetically ordered state and stabilize a robust magnetization above a critical vacancy concentration of about $c=0.1$. In the strongly correlated regime, we also obtain a nonmonotonic increase of the magnetization upon an increase of vacancy concentration and observe a substantial increase of the magnetic moments. This enhancement appears due to
the intrinsic property of strong electron correlations to stabilize a ferromagnetic state at electron doping levels $0.3-0.5$ electrons per unit cell, typical for the polar-doped interfaces \cite{ohtomo,thiel}. Although a mean-field evaluation might well overestimate the tendency towards a ferromagnetic state, the inhomogeneous (disordered) states considered here rather support ferromagnetic correlations: disorder reduces the kinetic energy and the impurities, being effectively magnetic, present a seed for at least short-range magnetism.
\begin{acknowledgements}
This work was supported by the DFG (TRR~80), A.~von~Humboldt Foundation
and by the Ministry of Education and Science of Ukraine (Grant No.~0110U001091).
Grants of computer time from the UNL Holland Computing
Center and Leibniz-Rechenzentrum M\"unchen through the SuperMUC project pr58pi are gratefully acknowledged.
We wish to acknowledge very useful discussions with J.~Mannhart, G.~A.~Sawatzky, E.~Y.~Tsymbal, I.~Stasyuk and K.~Moler.
\end{acknowledgements}
\section{Microscopic Model}
In this section we present the
microscopic model from which we calculate the
ECE. The model was recently
proposed in Ref.~[\onlinecite{Guzman2013a}] for relaxor ferroelectrics.
We present it here for the sake of completeness.
We focus on the relevant transverse optic mode configuration coordinate $u_i$ of the ions in the unit cell $i$ along the polar axis (chosen to be the $z$-axis).
$u_i$ experiences a local random field $h_i$ with probability $P(h_1, h_2,...)$ due to the compositional disorder introduced by the different ionic radii and
different valencies of, say, Mg$^{2+}$, Nb$^{5+}$, and Ti$^{4+}$ in the typical relaxor (PMN)$_{1-x}$-(PT)$_{x}$. The model Hamiltonian is
\begin{equation}
\label{eq:Hamiltonian_RFE}
H=\sum_{i}\left[\frac{\Pi_i^2}{2M}+V(u_i)\right]-\frac{1}{2}\sum_{i,j}v_{ij}u_i u_j -\sum_i h_i u_{i} - E_0 \sum_i u_i
\end{equation}
where $\Pi_i$ is the momentum conjugate to $u_i$, $M$ is an effective mass, and $E_0$ is
a static applied electric field.
We assume the $h_i$'s are independent random variables with Gaussian probability distribution with zero mean and variance $\Delta^2$.
$V(u_i) =\frac{\kappa}{2} u_i^2 + \frac{\gamma}{4} u_i^4 $ is an anharmonic potential
with $\kappa,~\gamma$ positive constants.
$
v_{ij}/{e^*}^2=
\begin{cases}
3 \frac{(Z_{i}-Z_{j})^2}{|{\bm R}_i-{\bm R}_j|^5} -\frac{1}{|{\bm R}_i-{\bm R}_j|^3}, & {\bm R}_i \neq {\bm R}_j\\
0, & {\bm R}_i = {\bm R}_j,
\end{cases}
$
is the dipole interaction
where $e^*$ is the effective charge and $Z_i$ is the $z$-component of ${\bm R}_i$.
For future use, we denote
$v_{\bm q} = \sum_{i,j}v_{ij} e^{ i {\bm q} \cdot ( {\bm R}_i - {\bm R}_j ) }$
the Fourier transform of the dipole interaction,
$v_{0}~( = 4\pi n {e^*}^2 / 3 )$
the ${\bm q} \to 0 $ component of $v_{\bm q}$ in the direction transverse to the polar axis $\hat{ \bm z }$~($ v_{\bm q}$ is non-analytic for ${\bm q} \to 0 $),
$n$ the number of unit cells per unit volume, and $a$ the lattice constant.
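For illustration, the piecewise definition of $v_{ij}$ above can be tabulated directly on a small cubic lattice patch (in units of ${e^*}^2$ and with the lattice constant set to one); this is a sketch, not the evaluation used for the results.

```python
import itertools

def v_dipole(Ri, Rj):
    # v_ij / e*^2 = 3 (Z_i - Z_j)^2 / |R_i - R_j|^5 - 1 / |R_i - R_j|^3  (zero for Ri == Rj)
    if Ri == Rj:
        return 0.0
    dx, dy, dz = (a - b for a, b in zip(Ri, Rj))
    r2 = dx * dx + dy * dy + dz * dz
    return 3.0 * dz * dz / r2 ** 2.5 - 1.0 / r2 ** 1.5

# tabulate the couplings on a 3x3x3 lattice patch
sites = list(itertools.product(range(3), repeat=3))
V = {(i, j): v_dipole(si, sj)
     for i, si in enumerate(sites) for j, sj in enumerate(sites)}
```

The signs reproduce the familiar dipolar anisotropy: nearest neighbours head-to-tail along $\hat{\bm z}$ couple with $+2$, side-by-side pairs with $-1$.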
\section{Variational Solution and Entropy Function}
In the present section, we briefly describe the variational
solution of the problem posed by the Hamiltonian~(\ref{eq:Hamiltonian_RFE})
and obtain an expression for the entropy function.
We consider a trial probability distribution,
$\rho^{tr} = e^{-\beta H^{tr}} / Z^{tr} $
where $H^{tr} = \sum_i \frac{\Pi_i^2}{2M} + \frac{1}{2}\sum_{i,j} ( u_i - p ) G_{i-j} ( u_j - p ) - \sum_{i} h_i u_i,$ is
the Hamiltonian of coupled displaced harmonic oscillators in a quenched random field,
and $Z^{tr} = \mbox{Tr} e^{- \beta H^{tr} } $ its normalization.
$p$ is a uniform order parameter: it is the displacement coordinate averaged over thermal disorder first,
and then over compositional disorder,
$p = \int_{-\infty}^\infty dh_1 dh_2 \cdots P(h_1,h_2,...) \, \mbox{Tr} \, \rho^{tr} \, u_i$;
$\Omega_{\bm q}$ is the frequency of the transverse optic mode at wavevector ${\bm q}$ and it is given by the Fourier transform
of $G_{i-j}$, $ M \Omega_{\bm q}^2 = \sum_{i,j} G_{i-j} e^{ i {\bm q} \cdot ( {\bm R}_i - {\bm R}_j ) } $.
We define $ G_{i-j}^{-1} = \left( 1 / N \right) \sum_{\bm q} (M \Omega_{\bm q}^2 )^{-1} e^{ -i {\bm q} \cdot ( {\bm R}_i - {\bm R}_j ) } $.
$p$ and $\Omega_{\bm q} $ are variational parameters and are determined by minimization of the free energy
$ F = \int_{-\infty}^\infty dh_1 dh_2 \cdots P(h_1,h_2,...) \, \mbox{Tr} \, \rho^{tr} \left[ H + k_B T \ln \rho^{tr} \right] $.
The entropy $S$ is given as follows,
\begin{align}
\label{eq:entropy_1}
\frac{S}{N} &= \int_{-\infty}^\infty dh_1 dh_2 \cdots P(h_1,h_2,...) \, \mbox{Tr} \, \rho^{tr} \, \left( -k_B \ln \rho^{tr} \right)
\nonumber \\
&~~~~~~~~~~~~~~~~~~~~~~~~~
=
\frac{k_B}{N} \sum_{\bm q} \left\{ \frac{ \beta \hbar \Omega_{\bm q} }{ 2 } \coth \left( \frac{ \beta \hbar \Omega_{\bm q} }{ 2 } \right) - \ln\left[ 2\sinh \left( \frac{ \beta \hbar \Omega_{\bm q} }{ 2 } \right) \right] \right\}.
\end{align}
$S$ depends on the temperature $T = (k_B \beta)^{-1}$, the applied static field $E_0$, and the strength of compositional disorder $\Delta$.
The adiabatic changes in temperature $\Delta T$ and isothermal changes in entropy $\Delta S$ presented in the main text
are calculated from Eq.~(\ref{eq:entropy_1}). The correlation length of the fluctuations of polarization, $\xi$,
is given by the frequency of the transverse optic phonon at the zone center, $\xi/a = \sqrt{ 3 \zeta v_0 / (4 \pi M \Omega_0^2) }$.~\cite{SGuzman2013a}
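Equation~(\ref{eq:entropy_1}) is a sum of independent harmonic-oscillator entropies over the mode spectrum. A minimal numerical sketch (units $\hbar = k_B = 1$, arbitrary toy spectrum $\{\Omega_{\bm q}\}$) reads:

```python
import numpy as np

def entropy_per_cell(omegas, beta):
    # S/N = (kB/N) sum_q [ x_q coth(x_q) - ln(2 sinh x_q) ],  x_q = beta*hbar*Omega_q/2
    x = 0.5 * beta * np.asarray(omegas, dtype=float)
    return np.mean(x / np.tanh(x) - np.log(2.0 * np.sinh(x)))
```

The first check below is the third-law limit $S \to 0$ for $T \to 0$; the second reflects that softer modes carry more entropy at fixed temperature, which is the mechanism behind the ECE discussed here.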
\section{Broad Peak in the ECE of Conventional Ferroelectrics}
In this section, we show that Landau theory
predicts a broad peak in the ECE of conventional ferroelectrics.
We consider the simple Landau theory of Ref.~[\onlinecite{SNovak2013a}]
for the conventional ferroelectric BaTiO$_3$~(BTO) in an applied field $E_0$ along the $(001)$ axis.
Near the paraelectric-to-ferroelectric transition of BTO,
the Landau free energy is given as follows,~\cite{SNovak2013a}
\begin{align}
\label{eq:Free_BTO}
F = \frac{a(T-T_0)}{2}P^2 + \frac{b}{4} P^4 + \frac{c}{6} P^6 + \frac{d}{8}P^8 -P E,
\end{align}
where $P$ is the polarization and $T_0 \simeq 400\,$K is the supercooling temperature.
The coefficients $a = 1.696 \times 10^6\,$ Nm$^2$C$^{-2}$, $b=3.422\times 10^9\,$Nm$^6$C$^{-4}$,
$c=1.797\times 10^{11}\,$Nm$^{10}$C$^{-6}$, $d=3.214 \times 10^{12}\,$Nm$^{14}$C$^{-8}$ are
determined from the dielectric susceptibility and heat capacity experiments.~\cite{SNovak2013a}
The adiabatic changes in temperature, $\Delta T = T_2 - T_1$, are calculated self-consistently from
the relation $T_2 = T_1 \mbox{exp}\left[ (a / 2 C ) \left( P^2(E_2,T_2) - P^2(E_1,T_1) \right) \right]$,
where the temperature and electric field dependence of $P$ are determined by the standard minimization
procedure of the free energy~(\ref{eq:Free_BTO}). $ C $ is the contribution from the lattice to the
specific heat.
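The self-consistent evaluation can be sketched as follows: the equilibrium polarization is obtained by root-finding on $\partial F/\partial P = 0$, and the adiabatic temperature change by fixed-point iteration of the relation above. The lattice specific heat $C$ is not quoted above, so the value used below is an illustrative assumption, as is the field strength.

```python
import numpy as np

A, B, C6, D, T0 = 1.696e6, 3.422e9, 1.797e11, 3.214e12, 400.0  # Landau coefficients (SI)
C_LAT = 3.0e6   # assumed lattice specific heat, J/(m^3 K) -- illustrative value only

def P_eq(E, T):
    # real root of dF/dP = a(T-T0)P + bP^3 + cP^5 + dP^7 - E = 0 that minimizes F
    roots = np.roots([D, 0.0, C6, 0.0, B, 0.0, A * (T - T0), -E])
    reals = roots[np.abs(roots.imag) < 1e-6].real
    F = lambda P: (0.5 * A * (T - T0) * P**2 + 0.25 * B * P**4
                   + C6 * P**6 / 6.0 + 0.125 * D * P**8 - P * E)
    return min(reals, key=F)

def delta_T(T1, E1, E2, iters=50):
    # fixed point of T2 = T1 exp[(a/2C)(P^2(E2,T2) - P^2(E1,T1))]
    P1sq = P_eq(E1, T1)**2
    T2 = T1
    for _ in range(iters):
        T2 = T1 * np.exp(A / (2.0 * C_LAT) * (P_eq(E2, T2)**2 - P1sq))
    return T2 - T1
```

With these numbers, a $0 \to 10\,$kV/cm step just above the transition yields $\Delta T$ of a few tenths of a Kelvin; the precise value depends on the assumed $C$.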
In the electric field-temperature phase diagram of BTO
the spinodal line begins at about $(E_0, T_{c}^0) \simeq (0, 405\,\mbox{K})$
and ends at a critical point $(E_{cr}, T_{cr}) \simeq (10\,\mbox{kV/cm},415\,\mbox{K})$.~\cite{SNovak2013a}
The paraelectric-to-ferroelectric transition is discontinuous along the spinodal line and
it is continuous at $(E_{cr}, T_{cr})$. BTO is supercritical above the critical point and no transition occurs.
Figure~\ref{fig:dpBTO}~(a) shows the adiabatic changes
in temperature $\Delta T$ for BTO for several changes in the field strength, $\Delta E_0$.
$\Delta T$ exhibits a single peak at the transition point
for field changes from zero up to about $\Delta E_0 \lesssim 100\, E_{cr}$, as expected.
For larger $\Delta E_0$, an additional broad peak develops.
Clearly, this peak cannot be observed experimentally
as the required fields are well beyond BTO's breakdown electric field~($\simeq 14\,$kV/cm).~\cite{SLandolt-Bornstein}
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{Fig_S1.pdf}
\caption{Adiabatic changes in temperature, $\Delta T$
in the conventional ferroelectric BTO for several changes in the electric field
strength $\Delta E_0$ applied along the $(001)$ direction. A secondary, broad peak arises in $\Delta T$ in addition
to that at the transition temperature for very large $\Delta E_0$.}
\label{fig:dpBTO}
\end{figure}
\section{Figure of Merit}
The purpose of this section is to derive our figure of merit given in Eq. (\ref{eq:fom})
of our manuscript. To do so we first derive the relation between the
ECE and the correlation length of fluctuations of polarization (Eqs. (\ref{eq:DeltaS}) and (\ref{eq:DeltaT}) of the main text).
We consider the case of no compositional disorder where a mean field approximation
is appropriate, as discussed in the main text.
In the mean field approximation, the free energy per site $F/N$ of
the Hamiltonian~(\ref{eq:Hamiltonian_RFE}) is given as follows,~\cite{SBlinc1974a}
\begin{align}
\label{eq:free}
\frac{F}{N} = \frac{\kappa}{2}\left( p^2 + \sigma \right) + \frac{ \gamma }{4} \left( p^4 + 6p^2 \sigma + 3 \sigma^2 \right)
- \frac{1}{2}v_0 p^2 - E_0 p - \frac{ k_B T }{2} - k_B T \log \left[ 2\pi \sqrt{ M \sigma k_B T } \right].
\end{align}
Here, $p = \left< u_i \right>$ is a homogeneous order parameter and
$ \sigma = \left< ( u_i - \left< u_i \right> )^2 \right> $ are mean squared fluctuations.
Minimization of the free energy with respect to $p$ and $\sigma$ gives the following result,
\begin{align}
\label{eq:mft_eq_RFE_1}
E_0 &= \left( M \Omega_0^2 - 2 \gamma p^2 \right)p, \\
M \Omega_0^2 &= \kappa + 3 \gamma \left[ \sigma + p^2 \right] - v_0, \\
\label{eq:mft_eq_RFE_4}
\sigma &= \frac{ k_B T }{ M \Omega_0^2 + v_0 }.
\end{align}
Here, $ \Omega_0$ is the frequency of the zone-center soft mode.
Equations~(\ref{eq:mft_eq_RFE_1})-(\ref{eq:mft_eq_RFE_4})
are self-consistent equations that determine the temperature and electric field dependence of $\Omega_0$ and $p$.
Equations~(\ref{eq:mft_eq_RFE_1})-(\ref{eq:mft_eq_RFE_4})
give a second order phase transition at the critical temperature
$T_c^0 = \left( v_0 - \kappa \right) v_0^\perp / (3 \gamma k_B)$
with a soft-mode frequency that follows the Cochran law,
$ M \Omega_0^2 = 3v_0 (T-T_c^0)/C_{CW}$
above about $T_c^0$ in the absence of an applied field.
$C_{CW} = \left(2 v_0^\perp - \kappa \right)v_0^\perp/( 3 \gamma k_B ) $ is the Curie-Weiss constant.
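Equations~(\ref{eq:mft_eq_RFE_1})--(\ref{eq:mft_eq_RFE_4}) are readily solved by damped fixed-point iteration. The sketch below treats the paraelectric branch $p=0$ with dimensionless toy parameters ($k_B = M = 1$; not fitted to any material) and exhibits the softening of the zone-center mode on cooling:

```python
def soft_mode_sq(T, kappa=1.0, gamma=1.0, v0=2.0, p=0.0):
    # damped fixed-point iteration of
    #   M*Omega0^2 = kappa + 3*gamma*(sigma + p^2) - v0,  sigma = kB*T/(M*Omega0^2 + v0)
    y = 1.0
    for _ in range(200):
        sigma = T / (y + v0)
        y = 0.5 * y + 0.5 * (kappa + 3.0 * gamma * (sigma + p * p) - v0)
    return y

Tc0 = (2.0 - 1.0) * 2.0 / 3.0   # (v0 - kappa) v0 / (3 gamma kB) for the toy numbers
```

In this toy setting the $p=0$ branch gives $M\Omega_0^2 \to 0$ as $T \to T_c^0$ from above, i.e.\ the divergence of the correlation length $\xi$.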
The soft mode frequency determines the correlation length
of fluctuations of polarization, $\xi$,~\cite{SGuzman2013a}
\begin{align}
\label{eq:corrlength}
\frac{\xi}{a} = \sqrt{ \frac{3 \zeta v_0 / (4 \pi)}{M \Omega_0^2 }},
\end{align}
where $a$ is the lattice constant and $\zeta$ is a dimensionless constant.
The entropy is given as follows,
$ S(T, E_0) = - \left( \partial F / \partial T \right)_{p, \sigma, N} = N k_B \left( 1 + \ln \sqrt{ \sigma T } \right)+$ terms independent of $p,\sigma$ and $T$. Near the FE transition and for weak fields ($ M \Omega_0^2/v_0 \ll 1$),
the entropy is given as follows,
\begin{align}
\label{eq:Sex}
\frac{S(T, E_0)}{N k_B} = \ln T - \frac{1}{2} \frac{M \Omega_0^2(T,E_0)}{v_0},
\end{align}
where we have ignored terms independent of $T$.
By substituting Eq. (\ref{eq:corrlength}) into the above equation and considering changes
in the electric field $\Delta E = E_2 - E_1$, we obtain the following result for the magnitude of the isothermal changes in entropy,
\begin{align}
\label{Seq:DeltaS}
\frac{ \left| \Delta S ( T, E_2, E_1 ) \right| }{N k_B} = \frac{3 \zeta a^2 }{8 \pi} \left[ \xi^{-2}(T, E_2) - \xi^{-2}(T, E_1) \right].
\end{align}
This is our desired result and corresponds to Eq.~(\ref{eq:DeltaS}) of the main text.
We now calculate the adiabatic changes in temperature.
For adiabatic processes, Eq. (\ref{eq:Sex}) gives
\begin{align}
\label{Seq:DeltaT}
\frac{ \Delta T(T_1, E_2, E_1) }{T_1} = \frac{ 3 \zeta a^2 }{8 \pi } \left[ \xi^{-2}(T_2, E_2) - \xi^{-2}(T_1, E_1) \right],
\end{align}
where $\Delta T = T_2 - T_1$
and we have made
the assumption that the changes in temperature are small ($ \frac{\Delta T}{T_1} \ll 1 $).
This corresponds to Eq. (\ref{eq:DeltaT}) in the main paper.
To derive our figure of merit, we evaluate Eqs.~(\ref{Seq:DeltaS}) and~(\ref{Seq:DeltaT}) at the transition temperature for
$E_2=E_0>0$ and $E_1=0$.
From Eqs. ~(\ref{eq:mft_eq_RFE_1})-(\ref{eq:corrlength})
one can show that at $T_c^0$,
\begin{align}
\frac{\xi^{-2} (T_c^0, E_0 ) }{a^{-2}} &= \frac{4 \pi v_0}{\zeta k_B C_{CW} } p^2(T_c^0,E_0),
\end{align}
where $ p(T_c^0, E_0) = \left( k_B C_{CW} E_0 / v_0^2 \right)^{1/3} $. The saturated polarization $P_s = n e^* p(T=0,E_0=0) $
is also derived from Eqs.~(\ref{eq:mft_eq_RFE_1})-(\ref{eq:mft_eq_RFE_4}) and the result is $P_s = \sqrt{9 k_B n T_c^0 / (4 \pi) }$. By substituting all these results
into Eq.~(\ref{Seq:DeltaS}) we obtain,
\begin{align*}
\frac{ \left| \Delta S ( T_c^0, E_0 ) \right| }{N k_B}
= \frac{1}{2}\left( \frac{27}{4\pi} \right)^{2/3} \left( \frac{ T_c^0 }{ C_{CW} } \right)^{1/3} \left( \frac{E_0}{P_s^0} \right)^{2/3},
\end{align*}
where we have done the rescaling $ E_0 \to E_0 e^* $ so that $E_0$ in the above equation has units of electric field (see Hamiltonian). The same result is obtained for $ \Delta T ( T_c^0, E_0 )/T_c^0$. This is our figure of merit given in Eq. (\ref{eq:fom}) of the main text.
\section{Introduction}
\label{sc:Introduction}
Control of autonomous cars is a challenging task and has attracted considerable
attention in recent years~\cite{Buehler2009}. One particular case of autonomous
driving is autonomous racing, where the goal is to drive around a track as fast
as possible, potentially to race against competitors and to avoid
collisions~\cite{Kritayakirana2012}. In order to achieve high performance at
these extreme conditions, racing teams today spend a significant amount of time
and effort on modeling, which is challenging especially near the limits of tire
adhesion~\cite{Guiggiani2014}. Learning-based control methods have been proposed
to address this challenge and show great potential towards improving racing
performance~\cite{Kolter2010}. They do, however, often suffer from poor model
accuracy and performance during transient learning phases. This can lead to
violation of critical constraints~\cite{Akametalu2014} related to keeping the
car on track and avoiding collisions, compromising not only performance, but the
success of the entire race. In addition, iteratively learning the racing task on
a lap-by-lap basis, as considered e.g.\ in~\cite{Kapania2015}, suffers from poor
generalization and does typically not allow for maintaining high performance for
dynamic racing tasks, such as obstacle avoidance or overtaking. This paper
addresses these challenges by learning the dynamics model from data and
considering model uncertainty to ensure constraint satisfaction in a nonlinear
model predictive control (NMPC) approach, offering a flexible framework for
racing control.
\par
Recently, a number of autonomous racing control methods were presented that rely on NMPC formulations. An NMPC racing approach for miniature race cars was proposed in~\cite{Liniger2015}, which uses a contouring control formulation to maximize track progress over a finite horizon and enables obstacle avoidance. It was extended to a stochastic setting in order to take model uncertainty into account in~\cite{Carrau2016} and~\cite{Liniger2017a}. Using model learning in an MPC framework allows for generalizing from collected data and for improving performance in varying racing tasks. This was, for instance, demonstrated in~\cite{Niekerk2017} by using the mean estimate of a Gaussian Process (GP) as a dynamics model for an NMPC method based on~\cite{Liniger2015}. Furthermore, the MPC approach recently proposed in~\cite{Rosolia2017a} was applied to the problem of autonomous racing, where the model is improved with an iterative parameter estimation technique~\cite{Rosolia2017b}.
\par
The method presented in this paper makes use of GP regression to improve the dynamics model from measurement data, since GPs inherently provide a measure for residual model uncertainty, which is integrated in a cautious NMPC controller.
To this end we extend the approach presented in~\cite{Liniger2015} with a learning module and reformulate the controller in a stochastic setting.
A key element differentiating the approach from available results is the stochastic treatment of a GP model in an NMPC controller to improve both performance and constraint satisfaction properties.
We derive a tractable formulation of the problem that exploits both the improved dynamics model and the uncertainty and show how chance constraints on the states can be approximated in deterministic form.
The framework thereby allows for specifying a minimum probability of satisfying critical constraints, such as track boundaries, offering an intuitive and systematic way of defining a desired trade-off between aggressive driving and safety in terms of collision avoidance. \par
While the use of GPs in MPC offers many benefits, it poses computational challenges for use with fast sampled and larger scale systems, such as the race car problem, since the evaluation complexity of GPs is generally high and directly scales with the number of data points considered. Various approaches to address this limitation have been presented in the literature. One class of methods relies on an approximation by a finite number of basis functions, such as the sparse spectrum approximation~\cite{Lazaro-Gredilla2010}, which is also used in the GP-based NMPC in~\cite{Niekerk2017}. We present an approach for predictive control based on a sparse GP approximation using inducing inputs~\cite{Quinonero-candela2005}, which are selected according to an approximate trajectory in state-action space. This enables a high-fidelity local approximation currently relevant for control at a given measured state, and facilitates real-time implementability of the presented controller.
\par
We finally evaluate the proposed cautious NMPC controller in simulations of a race. The results demonstrate that it provides safe and high performance control at sampling times of $30 \text{ ms}$, which is computationally on par with NMPC schemes without model learning~\cite{Liniger2015}, while improving racing performance and constraint satisfaction. We furthermore demonstrate robustness towards process noise, indicating fitness for hardware implementation.
\section{Preliminaries}
\label{sc:SPGIntro}
In the following we specify the notation used in the paper and briefly introduce GP regression and sparse approximations based on inducing inputs as relevant to the presented control approach.
\subsection{Notation}
\label{ssc:Notation}
For two matrices or vectors we use $[A;B] := {[A^T \, B^T]}^T$ for vertical matrix/vector concatenation. We use ${[y]}_i$ to refer to the $i$-th element of the vector $y$, and similarly ${[A]}_{\cdot,i}$ for the $i$-th column of matrix $A$. A normal distribution with mean $\mu$ and variance $\Sigma$ is denoted $\mathcal{N}(\mu,\Sigma)$. We use $\Vert x \Vert$ for the 2-norm of vector $x$ and $\diag(x)$ to express a diagonal matrix with elements given by the vector $x$. The gradient of a vector-valued function $f: \mathbb{R}^{n_z} \rightarrow \mathbb{R}^{n_f}$ with respect to vector $x~\in~\mathbb{R}^{n_x}$ is denoted $\nabla_{\!x} f : \mathbb{R}^{n_z} \rightarrow \mathbb{R}^{n_f \times n_x}$.
\subsection{Gaussian Process Regression}
\label{ssc:GP}
Consider M input locations collected in the matrix $\mathbf{z} = \nobreak [z_1^T; \ldots ; z_\nt^T] \in \mathbb{R}^{\nt \times \nz}$ and corresponding measurements $\mathbf{y} = [y_1^T; \ldots ; y_\nt^T] \in \mathbb{R}^{\nt \times \nd}$ arising from an unknown function $g(z):\mathbb{R}^\nz \rightarrow \mathbb{R}^{\nd}$ under the following statistical model
\begin{equation}
y_j = g(z_j) + \omega_j \eqc \label{eq:likelyhood}
\end{equation}
where $\omega_j$ is i.i.d. Gaussian noise with zero mean and diagonal variance $\Sigma_w = \diag([\sigma^2_1; \ldots; \sigma^2_{\nd}])$. Assuming a GP prior on $g$ in each output dimension $a \in \{1, \ldots, \nd \}$, the measurement data is normally distributed with
\begin{equation*}
{[\mathbf{y}]}_{\cdot,a} \sim \mathcal{N}(0,K_{\mathbf{z}\mathbf{z}}^a + I\sigma_a^2) \eqc
\end{equation*}
where $K_{\mathbf{z}\mathbf{z}}^a$ is the Gram matrix of the data points using the kernel function $k^a(\cdot,\cdot)$ on the input locations $\mathbf{z}$, i.e. ${[K_{\mathbf{z}\mathbf{z}}^a]}_{ij} = k^a(z_i,z_j)$. The choice of kernel functions $k^a$ and their parameterization is the determining factor for the inferred distribution of $g$ and is typically specified using prior process knowledge and optimization based on observed data~\cite{Rasmussen2006}. Throughout this paper we consider the squared exponential kernel function
\begin{equation*}
k(z,\tilde{z}) = \sigma_f^2 \exp\left(-\frac{1}{2}{(z-\tilde{z})}^T L^{-1} (z-\tilde{z})\right) \eqc
\end{equation*}
in which $L \in \mathbb{R}^{\nz \times \nz}$ is a positive diagonal length scale matrix. It is, however, straightforward to use any other (differentiable) kernel function. \par
The joint distribution of the training data and an arbitrary test point $z$ in output dimension $a$ is given by
\begin{equation}
p({[y]}_a, {[\mathbf{y}]}_{\cdot,a}) \sim \mathcal{N}\left(0, \begin{bmatrix} K^a_{\mathbf{z}\mathbf{z}} & K^a_{\mathbf{z}z} \\ K^a_{z\mathbf{z}} & K^a_{zz} \end{bmatrix} \right) \label{eq:jointDist} \eqc
\end{equation}
where ${[K^a_{\mathbf{z}z}]}_j = k^a(z_j,z)$, $K^a_{z\mathbf{z}} = {(K^a_{\mathbf{z}z})}^T$ and similarly $K^a_{zz} = k^a(z,z)$. The resulting conditional distribution is Gaussian with $p({[y]}_a\,|\,{[\mathbf{y}]}_{\cdot,a}) \sim \mathcal{N}(\mu^d_a(z),\Sigma^d_a(z))$ and
\begin{subequations}\label{eq:GP_post}
\begin{align}
\mu_a^d(z) &= K_{z\mathbf{z}}^a{(K_{\mathbf{z}\mathbf{z}}^a + I \sigma^2_{a})}^{-1} {[\mathbf{y}]}_{\cdot,a} \eqc \\
\Sigma_a^d(z) &= K^a_{zz} - K_{z\mathbf{z}}^a{(K_{\mathbf{z}\mathbf{z}}^a + I \sigma^2_{a})}^{-1} K_{\mathbf{z}z}^a \eqd
\end{align}
\end{subequations}
We call the resulting GP approximation of the unknown function $g(z)$
\begin{equation}
d(z) \sim \mathcal{N}(\mu^d(z),\Sigma^d(z)) \label{eq:GP}
\end{equation}
with $\mu^d = [\mu^d_1;\ldots;\mu^d_\nd]$ and $\Sigma^d = \diag([\Sigma^d_1;\ldots ;\Sigma^d_\nd])$. \par
Evaluating~\eqref{eq:GP} has cost $\mathcal{O}(\nd \nz \nt)$ and $\mathcal{O}(\nd \nz \nt^2)$ for mean and variance, respectively, and thus scales with the number of data points. For many data points or fast real-time applications this limits the use of a GP model. To overcome these issues, various approximation techniques have been proposed, one class of which is sparse Gaussian processes using inducing inputs~\cite{Quinonero-Candela2007}, briefly outlined in the following.
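For one output dimension, the posterior~\eqref{eq:GP_post} with the squared-exponential kernel can be implemented in a few lines (toy one-dimensional data; hyperparameters arbitrary):

```python
import numpy as np

def k_se(A, B, sf2=1.0, ell=0.5):
    # squared-exponential kernel with scalar length scale
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior(Z, y, Zs, sn2=1e-4):
    # mu = K_sz (K_zz + I sn2)^-1 y ; var = diag(K_ss) - diag(K_sz (K_zz + I sn2)^-1 K_zs)
    K = k_se(Z, Z) + sn2 * np.eye(len(Z))
    Ks = k_se(Zs, Z)
    mu = Ks @ np.linalg.solve(K, y)
    var = np.diag(k_se(Zs, Zs)) - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, var

Z = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
y = np.sin(4.0 * Z[:, 0])
mu, var = gp_posterior(Z, y, np.array([[0.5], [3.0]]))
```

The posterior mean interpolates the data, while the variance collapses near the training inputs and reverts to the prior variance $\sigma_f^2$ far from them.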
\subsection{Sparse Gaussian Processes}
\label{ssc:sGP}
Most sparse GP approximations can be understood using the concept of inducing targets $\mathbf{y}_{ind}$ at inputs $\mathbf{z}_{ind}$ and an inducing conditional distribution $q$ to approximate the joint distribution~\eqref{eq:jointDist} by assuming that test points and training data are conditionally independent given $\mathbf{y}_{ind}$~\cite{Quinonero-candela2005}:
\begin{align*}
&p({[y]}_a,{[\mathbf{y}]}_{\cdot,a}) = \int \! p({[y]}_a,{[\mathbf{y}]}_{\cdot,a} \, | \, \mathbf{y}_{ind}) p(\mathbf{y}_{ind}) \, \dif \mathbf{y}_{ind} \\&\approx \int \! q({[y]}_a \, | \, \mathbf{y}_{ind}) q({[\mathbf{y}]}_{\cdot,a} \, | \, \mathbf{y}_{ind}) p(\mathbf{y}_{ind}) \dif \mathbf{y}_{ind} \, .
\end{align*}
There are numerous options for selecting the inducing inputs, e.g.\ heuristically as a subset of the original data points, by treating them as hyperparameters and optimizing their location~\cite{Snelson2006}, or letting them coincide with test points~\cite{Tresp2000}. \par
In this paper, we make use of the state-of-the-art Fully Independent Training Conditional (FITC) approximation to approximate the GP distribution and reduce computational complexity~\cite{Snelson2006}. Given a selection of inducing inputs $\mathbf{z}_{ind}$ and using the shorthand notation $Q^a_{\zeta\tilde{\zeta}} := \nobreak K^a_{\zeta\mathbf{z}_{ind}} {(K^a_{\mathbf{z}_{ind}\mathbf{z}_{ind}})}^{-1} K^a_{\mathbf{z}_{ind}\tilde{\zeta}}$ the approximate posterior distribution is given by
\begin{subequations}\label{eq:sGP}
\begin{align}
\tilde{\mu}^{d}_a(z) &= Q^a_{z\mathbf{z}}{(Q^a_{\mathbf{z}\mathbf{z}} + \Lambda)}^{-1}{[\mathbf{y}]}_{\cdot,a} \eqc \\
\tilde{\Sigma}^{d}_a(z) & = K^a_{zz} - Q^a_{z\mathbf{z}}{(Q^a_{\mathbf{z}\mathbf{z}} + \Lambda)}^{-1}Q^a_{\mathbf{z}z}
\end{align}
\end{subequations}
with $\Lambda = \diag(K^a_{\mathbf{z}\mathbf{z}} - Q^a_{\mathbf{z}\mathbf{z}} + I \sigma^2_{a})$. Concatenating the output dimensions similar to~\eqref{eq:GP} we arrive at the approximation
\begin{equation*}
\tilde{d}(z) \sim \mathcal{N}(\tilde{\mu}^d(z),\tilde{\Sigma}^d(z)) \eqd
\end{equation*}
Several of the matrices used in~\eqref{eq:sGP} can be precomputed such that the evaluation complexity becomes independent of the number of original data points. Using $\tilde{M}$ inducing points, the computational complexity for evaluating the sparse GP at a test point is reduced to $\mathcal{O}(\nd \nz \tilde{M})$ and $\mathcal{O}(\nd \nz \tilde{M}^2)$ for the predictive mean and variance, respectively.
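A direct (non-optimized) implementation of the FITC posterior~\eqref{eq:sGP} is sketched below. As a consistency check, choosing the inducing inputs equal to the full training set makes $Q = K$ and $\Lambda = I\sigma_a^2$, so the exact posterior~\eqref{eq:GP_post} is recovered. A real-time implementation would precompute the factorizations involving $K^a_{\mathbf{z}_{ind}\mathbf{z}_{ind}}$; everything is recomputed here for clarity.

```python
import numpy as np

def k_se(A, B, sf2=1.0, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell**2)

def fitc_posterior(Z, y, Zind, Zs, sn2=1e-2):
    # Q_ab = K_a,ind Kuu^-1 K_ind,b ;  Lambda = diag(K_zz - Q_zz + I sn2)
    Kuu = k_se(Zind, Zind) + 1e-10 * np.eye(len(Zind))   # jitter for stability
    Kzu, Ksu = k_se(Z, Zind), k_se(Zs, Zind)
    Qzz = Kzu @ np.linalg.solve(Kuu, Kzu.T)
    A = Qzz + np.diag(np.diag(k_se(Z, Z)) - np.diag(Qzz) + sn2)
    Qsz = Ksu @ np.linalg.solve(Kuu, Kzu.T)
    mu = Qsz @ np.linalg.solve(A, y)
    var = np.diag(k_se(Zs, Zs)) - np.einsum('ij,ji->i', Qsz, np.linalg.solve(A, Qsz.T))
    return mu, var
```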
\section{Race Car Modeling}
\label{sc:system}
\begin{figure}[h]
\centering
\def\svgwidth{5.5cm}
\input{figures/bycicle_tex.pdf_tex}
\caption{Schematic of the car model.}\label{fig:bicycle}
\end{figure}
This section presents the race car setup and nominal modeling of the car dynamics, which will serve as a base model for the learning-based control approach. This is largely based on material presented in~\cite{Liniger2015}, which provides a more detailed exposition.
\subsection{Car Dynamics}
\label{ssc:NominalDynamics}
We consider the following model structure to describe the dynamics of the miniature race cars
\begin{align} \label{eq:contSys}
\dot{x} =f_c(x,u) + B_d (g_c(x,u) + w) \eqc
\end{align}
where $f_c(x,u)$ are the nominal system dynamics of the car modeled from first principles, and $g_c(x,u)$ reflects unmodeled dynamics. The considered nominal dynamics are obtained from a bicycle model with nonlinear tire forces as shown in Figure~\ref{fig:bicycle}, resulting in
\begin{align}
f_c(x,u) & = \begin{bmatrix*}[l] v_x \cos(\Phi) - v_y \sin(\Phi)\\
v_x \sin(\Phi) + v_y \cos(\Phi)\\
\omega \\
\frac{1}{m}\Big(F_{r,x}(x,u) - F_{f,y}(x,u) \sin{\delta} + m v_y \omega \Big)\\
\frac{1}{m} \Big( F_{r,y}(x,u) + F_{f,y}(x,u) \cos{\delta} - m v_x \omega \Big) \\
\frac{1}{I_z}\Big(F_{f,y}(x,u) l_f \cos{\delta} - F_{r,y}(x,u) l_r \Big)\end{bmatrix*} \eqc \label{eq:nomDynamics_cont}
\end{align}
where $x = [X;Y;\Phi;v_x;v_y;\omega]$ is the state of the system, with position $(X,Y)$, orientation $\Phi$, longitudinal and lateral velocities $v_x$ and $v_y$, and yaw rate $\omega$. The inputs to the system are the motor duty cycle $p$ and the steering angle $\delta$, i.e., $u=[p; \delta]$. Furthermore, $m$ is the mass, $I_z$ the moment of inertia and $l_r$ and $l_f$ are the distance of the center of gravity from the rear and front tire, respectively. The most difficult components to model are the tire forces $F_{f,y}$ and $F_{r,y}$ and the drivetrain force $F_{r,x}$. The tires are modeled by a simplified Pacejka tire model~\cite{Pacejka1992} and the drivetrain using a DC motor model combined with a friction model. For the exact formulations of the forces, we refer to~\cite{Liniger2015}. \par
In order to account for model mismatch due to inaccurate parameter choices and limited fidelity of this simple model, we integrate $g_c(x,u)$ capturing unmodeled dynamics, as well as additive Gaussian white noise $w$.
Due to the structure of the nominal model, i.e.\ since the dynamics of the first three states are given purely by kinematic relationships, we assume that the model uncertainty, as well as the process noise $w$, only affect the velocity states $v_x$, $v_y$ and $\omega$ of the system, that is $B_d = [0;I_3]$. \par
For the use in a discrete-time MPC formulation, we finally discretize the system using the Euler forward scheme with a sampling time of $T_s$, resulting in the following description,
\begin{align}
x(k\!+\!1) = f(x(k),u(k)) \! + \! B_d (g(x(k),u(k)) \! + \! w(k)), \label{eq:disc_system}
\end{align}
where $w(k)$ is i.i.d.\ normally distributed process noise with $w(k) \sim \mathcal{N}(0,\Sigma^w)$ and $\Sigma^w = \text{diag}[\sigma^2_{v_x};\sigma^2_{v_y};\sigma^2_{\omega}]$, which, together with the uncertain dynamics function $g$, will be inferred from measurement data.
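An Euler-discretized simulation step of the nominal model~\eqref{eq:nomDynamics_cont} can be sketched as follows. The tire and drivetrain forces are crude linear placeholders here (the actual model uses the Pacejka and DC-motor formulations of~\cite{Liniger2015}), and all numerical parameters are illustrative:

```python
import math

def f_disc(x, u, Ts=0.03, m=0.041, Iz=27.8e-6, lf=0.029, lr=0.033,
           Cf=0.5, Cr=0.5, Cm=0.3):
    # x = [X, Y, Phi, vx, vy, w],  u = [p, delta]; one Euler step of length Ts
    X, Y, Phi, vx, vy, w = x
    p, delta = u
    vx_s = max(vx, 1e-3)                        # guard the slip-angle division
    af = delta - math.atan2(vy + lf * w, vx_s)  # front slip angle
    ar = -math.atan2(vy - lr * w, vx_s)         # rear slip angle
    Ffy, Fry, Frx = Cf * af, Cr * ar, Cm * p    # placeholder linear forces
    dx = [vx * math.cos(Phi) - vy * math.sin(Phi),
          vx * math.sin(Phi) + vy * math.cos(Phi),
          w,
          (Frx - Ffy * math.sin(delta) + m * vy * w) / m,
          (Fry + Ffy * math.cos(delta) - m * vx * w) / m,
          (Ffy * lf * math.cos(delta) - Fry * lr) / Iz]
    return [xi + Ts * di for xi, di in zip(x, dx)]
```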
\subsection{Race Track and Constraints}
\label{ssc:RaceTrack}
We consider a race track given by its centerline and a fixed track width. The centerline is described by a piecewise cubic spline polynomial, which is parametrized by the path length $\Theta$. Given a $\Theta$, we can evaluate the corresponding centerline position $(X_c(\Theta),Y_c(\Theta))$ and orientation $\Phi_c(\Theta)$. By letting $\tilde{\Theta}$ correspond to the projection of $(X,Y)$ on the centerline, the constraint for the car to stay within the track boundaries is expressed as
\begin{equation}
\mathcal{X}(\tilde{\Theta}) := \left\{x \, \middle| \, \left \Vert \begin{bmatrix} X \\ Y \end{bmatrix} - \begin{bmatrix} X_c(\tilde{\Theta}) \\ Y_c(\tilde{\Theta}) \end{bmatrix} \right\Vert \leq r \right\} \eqc \label{eq:trackCon}
\end{equation}
where $r$ is half the track width. \par
Additionally, the system is subject to input constraints,
\begin{align}
\mathcal{U} = \left\{u \, \middle| \, \begin{bmatrix} 0 \\ -\delta_{\max} \end{bmatrix} \leq \begin{bmatrix} p \\ \delta \end{bmatrix} \leq \begin{bmatrix} 1 \\ \delta_{\max} \end{bmatrix} \right\} \eqc \label{eq:uCon}
\end{align}
i.e.\ the steering angle is limited to a maximal angle $\delta_{\max}$ and the duty cycle has to lie between zero and one.
\section{Learning-based Controller Design}
\label{sc:ControllerDesign}
In the following, we first present the model learning module that is subsequently used in a cautious NMPC controller. We briefly state the contouring control formulation~\cite{Liniger2015}, serving as the basis for the controller and integrate the learning-based dynamics using a stochastic GP model. Afterwards, we introduce suitable approximations to reduce computational complexity and render the control approach real-time feasible.
\subsection{Model Learning}
\label{ssc:ModelLearning}
We apply Gaussian process regression~\cite{Rasmussen2006} to infer the vector-valued function $g$ of the discrete-time system dynamics~\eqref{eq:disc_system} from previously collected measurement data of states and inputs. Training data is generated as the deviation to the nominal system model, i.e.\ for a specific data point:
\begin{align*}
y_j &= g(x(j),u(j)) + w(j) = B_d^{\dagger}\left(x(j\!+\!1) - f(x(j),u(j))\right) \eqc \\
z_j &= [x(j) ; u(j)] \eqc
\end{align*}
where $^\dagger$ is the pseudoinverse. Note that this is in the form of~\eqref{eq:likelyhood} and we can directly apply~\eqref{eq:GP_post} to derive a GP model $d(x_i,u_i)$ from the data, resulting in the stochastic model
\begin{align}
x_{i+1} = f(x_{i},u_i) + B_d (d(x_i,u_i) + w_i) \eqd \label{eq:learned_system}
\end{align}
The state $x_{i}$ obtained from this model, which will be used in a predictive controller, is given in the form of a stochastic distribution.
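The construction of one training pair can be sketched as follows; the state ordering and $B_d = [0;I_3]$ follow the model above, and the nominal dynamics $f$ is again a placeholder function.

```python
import numpy as np

Bd = np.vstack([np.zeros((3, 3)), np.eye(3)])
Bd_pinv = np.linalg.pinv(Bd)  # B_d^dagger selects the velocity components

def training_pair(x, u, x_next, f):
    """One regression pair (z_j, y_j): the target y_j is the part of the
    one-step prediction error that lies in the range of B_d."""
    y = Bd_pinv @ (x_next - f(x, u))
    z = np.concatenate([x, u])
    return z, y
```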
\subsection{Contouring Control}
\label{ssc:ContouringControl}
The learning-based NMPC controller makes use of a contouring control formulation, which has been introduced in~\cite{Faulwasser2009, Lam2010} and was shown to provide good racing performance in~\cite{Liniger2015}.
The objective of the optimal contouring control formulation is to maximize progress along the race track. An approximation of the car position along the centerline is introduced as an optimization variable by including integrator dynamics $\Theta_{i+1} = \Theta_{i} + v_i$, where $\Theta_i$ is a position along the track at time step $i$ and $v_i$ is the incremental progress. The progress along the centerline over the horizon is then maximized by means of the overall incremental progress $\sum_{i = 0}^N v_i$. \par
In order to connect the progress variable to the race car's position, $\Theta_i$ is linked to the projection of the car on the centerline. This is achieved by minimizing the so-called lag error $\hat{e}^l$ and contouring error $\hat{e}^c$, defined as
\begin{align*}
\hat{e}^l(x_i,\Theta_i) = &-\cos(\Phi_c(\Theta_i))(X_i-X_c(\Theta_i)) \nonumber \\ &- \sin(\Phi_c(\Theta_i))(Y_i-Y_c(\Theta_i))\,,\\
\hat{e}^c(x_i,\Theta_i) = &\phantom{+}\sin(\Phi_c(\Theta_i))(X_i-X_c(\Theta_i)) \nonumber \\ &- \cos(\Phi_c(\Theta_i))(Y_i-Y_c(\Theta_i))\,.
\end{align*}
For small contouring error $\hat{e}^c$, the lag error $\hat{e}^l$ approximates the distance between the projection of the car's position and $(X_c(\Theta_i),Y_c(\Theta_i))$, such that a small lag error ensures a good approximate projection. The stage cost function is then formulated as
\begin{align}
l(x_i,u_i,\Theta_i,v_i) = &\Vert \hat{e}^c(x_i,\Theta_i) \Vert^2_{q_c} + \Vert \hat{e}^l(x_i,\Theta_i) \Vert^2_{q_l} \nonumber \\
&- \gamma v_i + l_{reg}(\Delta u_i, \Delta v_i) \eqd \label{eq:cost}
\end{align}
The term $-\gamma v_i$ encourages progress along the track, using the relative weighting parameter $\gamma$. The parameters $q_c$ and $q_l$ are weights on the contouring and lag error, respectively, and $l_{reg}(\Delta u_i,\Delta v_i) = \Vert u_i - u_{i-1} \Vert^2_{R_u} + \Vert v_i - v_{i-1} \Vert^2_{R_v}$ is a regularization term penalizing large changes in the control input and the incremental progress, with corresponding weights $R_u$ and $R_v$.
\par
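The error definitions and stage cost can be sketched as below; the weights are arbitrary example values, and the regularization term is omitted for brevity.

```python
import numpy as np

def contouring_errors(X, Y, Xc, Yc, Phi_c):
    """Lag and contouring error relative to the centerline point (Xc, Yc)
    with tangent orientation Phi_c."""
    dX, dY = X - Xc, Y - Yc
    e_lag = -np.cos(Phi_c) * dX - np.sin(Phi_c) * dY
    e_con = np.sin(Phi_c) * dX - np.cos(Phi_c) * dY
    return e_lag, e_con

def stage_cost(e_lag, e_con, v, q_l=1.0, q_c=1.0, gamma=0.1):
    """Stage cost (eq. cost) without the regularization term l_reg."""
    return q_c * e_con**2 + q_l * e_lag**2 - gamma * v
```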
Based on this contouring formulation, we define a stochastic MPC problem that integrates the learned GP-model~\eqref{eq:learned_system} and minimizes the expected value of the cost function~\eqref{eq:cost} over a finite horizon of length N\@:
\begin{mini!}
{U,V}{\E\left(\sum_{i=0}^{N-1} l(x_i,u_i,\Theta_i,v_i) \right)\label{eq:costExpValue}}
{\label{eq:opt_orig}}{}
\addConstraint{x_{i+1}}{= f(x_{i},u_{i}) + B_d (d(x_{i},u_{i}) + w_i)}
\addConstraint{\Theta_{i+1}}{ = \Theta_i + v_i}
\addConstraint{P(x_{i+1}}{ \in \mathcal{X}(\Theta_{i+1}))> 1- \epsilon\label{eq:ChanceState}}
\addConstraint{u_{i}}{\in \mathcal{U}}
\addConstraint{x_{0}}{= x(k) , \, \Theta_0 = \Theta(k) \eqc}
\end{mini!}
where $i=0,\ldots,N\!-\!1$ and $x(k)$ and $\Theta(k)$ are the current system state and the corresponding position on the centerline. The state constraints are formulated w.r.t.\ the centerline position at $\Theta_i$ as an approximation of the projection of the car position, and are in the form of chance constraints which guarantee that the track constraint~\eqref{eq:trackCon} is satisfied with a probability of at least $1-\epsilon$. \par
Solving problem~\eqref{eq:opt_orig} is computationally demanding, especially since the distribution of the state is generally not Gaussian after the first prediction time step. In addition, fast sampling times -- in the considered race car setting of about $30 \text{ ms}$ -- pose a significant challenge for real-time computation. In the following subsections, we present a sequence of approximations to reduce the computational complexity of the GP-based NMPC problem for autonomous racing in~\eqref{eq:opt_orig} and eventually provide a real-time feasible approximate controller that can still leverage the key benefits of learning.
\subsection{Approximate Uncertainty Propagation}
\label{ssc:UncertProp}
At each time step, the GP $d(x_i,u_i)$ evaluates to a stochastic distribution according to the residual model uncertainty, which is then propagated forward in time, rendering the state distributions non-Gaussian. In order to solve~\eqref{eq:opt_orig}, we therefore approximate the distribution of the state at each prediction step as a Gaussian, i.e.\ $x_i \sim \mathcal{N}(\mu^x_i, \Sigma^x_i)$~\cite{Candela2003,Deisenroth2010,Hewing2017}. The dynamics equations for the Gaussian distributions can be found e.g.\ through a sigma point transform~\cite{Ostafew2016} or a first order Taylor expansion, detailed in Appendix~\ref{app:EKF_GP}. We make use of the Taylor approximation, which offers a computationally cheap procedure of sufficient accuracy, resulting in the following dynamics for the mean and variance
\begin{subequations}
\begin{align}
\mu^x_{i+1} &= f(\mu^x_i,u_i) + B_d \mu^d(\mu^x_i,u_i) \eqc \\
\Sigma^x_{i+1} &= \tilde{A}_i \begin{bmatrix} \Sigma^x_i & \star \\ \nabla_{\!x} \mu^d(\mu^x_i,u_i) \Sigma^x_i & \Sigma^d(\mu^x_i,u_i) \end{bmatrix} \tilde{A}_i^T \label{eq:var_dyn} \eqc
\end{align}
\end{subequations}
where $\tilde{A}_i = \begin{bmatrix} \nabla_{\!x} f(\mu^x_i,u_i) & B_d \end{bmatrix}$ and the star denotes the corresponding element of the symmetric matrix. \par
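One propagation step can be sketched as follows, with the nominal dynamics value, the GP mean, and their Jacobians supplied as precomputed arguments (how they are obtained, analytically or by automatic differentiation, is left open here).

```python
import numpy as np

def propagate(f_val, A, d_val, G, Sx, Sd, Bd):
    """First-order (Taylor) propagation of mean and variance, eq. (var_dyn).
    f_val = f(mu, u), A = grad_x f;  d_val = mu^d(mu, u), G = grad_x mu^d;
    Sd = Sigma^d(mu, u); all evaluated at the current mean."""
    mu_next = f_val + Bd @ d_val
    M = np.block([[Sx, (G @ Sx).T],
                  [G @ Sx, Sd]])       # block matrix from eq. (var_dyn)
    Atil = np.hstack([A, Bd])          # A_tilde = [grad_x f, B_d]
    return mu_next, Atil @ M @ Atil.T
```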
\subsection{Simplified Chance Constraints}
\label{ssc:chance_constrains}
The Gaussian approximation of the state distribution allows for a simplified treatment of the chance constraints~\eqref{eq:ChanceState}. They can be approximated as deterministic constraints on mean and variance of the state using the following Lemma.
\begin{Lemma}\label{lm:chanceConstr}
Let $x \sim \mathcal{N}(\mu,\Sigma)$ be an $n$-dimensional random vector and let $\mathcal{B}^{x_c}(r) =\nobreak \left\{ x \, | \, \Vert x-x_c \Vert \leq r \right\}$. Then
\[\Vert \mu - x_c \Vert \leq r - \sqrt{\chi^2_n(p) \lambda_{max}(\Sigma)} \Rightarrow \Pr(x \in \mathcal{B}^{x_c}(r)) \geq p,\]
where $\chi^2_n(p)$ is the quantile function of the chi-squared distribution with $n$ degrees of freedom and $\lambda_{max}(\Sigma)$ the maximum eigenvalue of $\Sigma$.
\end{Lemma}
\begin{proof} Let $\mathcal{E}^x_p := \{ x \, | \, {(x-\mu)}^T \Sigma^{-1} (x-\mu) \leq \chi^2_n(p) \}$ be the confidence region of $x$ at level $p$, such that $\Pr(x \in \nobreak \mathcal{E}^x_p) \geq \nobreak p$.
We have $\mathcal{E}^x_p \subseteq \mathcal{E}^{\tilde{x}}_p$ with $\tilde{x} \sim \mathcal{N}(\mu,\lambda_{max}(\Sigma) \, I)$, i.e. $\mathcal{E}^{\tilde{x}}_p$ is an outer approximation of the confidence region using the direction of largest variance.
Now $\mu \in \mathcal{B}^{x_c}(r- \sqrt{\chi^2_n(p) \lambda_{max}(\Sigma)})$ implies $\mathcal{E}^{\tilde{x}}_p \subseteq \mathcal{B}^{x_c}(r)$, which means $\Pr(x \in \mathcal{B}^{x_c}(r)) \geq \Pr(x \in \mathcal{E}^{\tilde{x}}_p) \geq \Pr(x \in \mathcal{E}^{x}_p) = p$.
\end{proof}
Using Lemma~\ref{lm:chanceConstr}, we can formulate a bound on the probability of track constraint violation by enforcing
\begin{align} \label{eq:ChanceConstrMean}
\left\Vert \begin{bmatrix} \mu^X_i \\ \mu^Y_i \end{bmatrix} - \begin{bmatrix} X_c(\Theta_i) \\ Y_c(\Theta_i) \end{bmatrix} \right\Vert \leq r - \sqrt{\chi^2_2(p) \lambda_{max}(\Sigma_i^{XY})},
\end{align}
where $\Sigma_i^{XY} \in \mathbb{R}^{2 \times 2}$ is the marginal variance of the joint distribution of $X_i$ and $Y_i$.
This procedure is similar to constraint tightening in robust control. Here the amount of tightening is related to an approximate confidence region for the deviation from the mean system state.\par
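The resulting tightening can be sketched in a few lines; here the planar case $n=2$ of~\eqref{eq:ChanceConstrMean} is used, for which the chi-squared quantile has the closed form $\chi^2_2(p) = -2\ln(1-p)$.

```python
import numpy as np

def tightened_radius(r, Sigma_xy, p):
    """Deterministic surrogate of the chance constraint via Lemma 1:
    the mean position must stay within r - sqrt(chi2_2(p) * lambda_max)."""
    chi2_2 = -2.0 * np.log(1.0 - p)            # chi-squared quantile, 2 dof
    lam_max = np.linalg.eigvalsh(Sigma_xy)[-1] # largest eigenvalue of Sigma
    return r - np.sqrt(chi2_2 * lam_max)
```

A larger state covariance or a stricter probability level $p$ both shrink the admissible radius, exactly as in constraint tightening for robust control.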
Constraint $\eqref{eq:ChanceConstrMean}$ as well as the cost~\eqref{eq:cost} require the variance dynamics. The next section proposes a further simplification to reduce computational cost by considering an approximate evolution of the state variance.
\subsection{Time-Varying Approximation of Variance Dynamics}
\label{ssc:TVapprox}
The variance dynamics in~\eqref{eq:var_dyn} require $\frac{N}{2}(n^2 + n)$ additional variables in the optimization problem and can increase computation time drastically. We trade off accuracy in the system description with computational complexity by evaluating the system variance around an approximate evolution of the state and input. This state-action trajectory can typically be chosen as a reference to be tracked or by shifting a solution of the MPC optimization problem at an earlier time step. Denoting a point on the approximate state-action trajectory with $(\bar{\mu}^x_i, \bar{u}_i)$, the approximate variance dynamics are given by
\begin{align*}
\bar{\Sigma}^x_{i+1} &= \bar{A}_i \begin{bmatrix} \bar{\Sigma}^x_i & \star \\ \nabla_{\!x} \mu^d(\bar{\mu}^x_i,\bar{u}_i) \bar{\Sigma}^x_i & \Sigma^d(\bar{\mu}^x_i,\bar{u}_i) \end{bmatrix} \bar{A}_i^T
\end{align*}
with $\bar{A}_i = [\nabla_{\!x} f(\bar{\mu}^x_i, \bar{u}_i)\ B_d]$. The variance along the trajectory thus does not depend on any optimization variable and can be computed before the state measurement becomes available at each sampling time. The precomputed variance is then used to satisfy the chance constraints approximately, by replacing $\Sigma^{XY}$ with $\bar{\Sigma}^{XY}$ in~\eqref{eq:ChanceConstrMean}. The resulting set is denoted $\bar{\mathcal{X}}(\bar{\Sigma}^x_i, \Theta_i)$.
Figure~\ref{fg:uncert_ex} shows an example of a planned trajectory with active chance constraints according to this formulation with $\chi^2_2(p) = 1$. \par
\begin{figure}
\center
\includegraphics[width=0.54\linewidth]{ex}
\caption{Planned trajectory with active chance constraints. Shown is the mean trajectory of the car with 1-$\sigma$ confidence level perpendicular to the car's mean orientation.}\label{fg:uncert_ex}
\end{figure}
In the following, we use similar ideas to reduce the computational complexity of the required GP evaluations by dynamically choosing inducing inputs in a sparse GP approximation.
\subsection{Dynamic Sparse GP}
\label{ssc:DSGP}
Sparse approximations as outlined in Section~\ref{ssc:sGP} can considerably speed up evaluation of a GP, with little deterioration of prediction quality. For fast applications with high-dimensional state-input spaces, however, the computational burden can still be prohibitive.
\par
We therefore propose to select inducing inputs locally at each sampling time, which relies on the idea that in MPC the area of interest at each sampling time typically lies close to a known trajectory in the state-action space. Similar to the approximation presented in the previous subsection, inducing inputs can then be selected along the approximate trajectory, e.g.\ according to a solution computed at a previous time step.
\par
We illustrate the procedure using a two-dimensional example in Figure~\ref{fg:DSGP_toyEx} showing the dynamic approximation for a simple double integrator. Shown is the contour plot of the posterior variance of a GP with two input dimensions $x_1$ and $x_2$. Additionally, two trajectories generated from an MPC are shown. The solid red line corresponds to a current prediction trajectory, while the dashed line shows the previous prediction, which is used for local approximation of the GP\@. As the figure illustrates, full GP and sparse approximation are in close correspondence along the predicted trajectory of the system. \par
The dynamic selection of local inducing points in a receding horizon fashion allows for an additional speed-up by computing successive approximations, adding or removing single inducing points by means of rank-1 updates~\cite{Seeger2004}. These are applied to a reformulation of~\eqref{eq:sGP}, which offers better numerical properties~\cite{Quinonero-candela2005} and avoids inversion of the large matrix $Q^a_{\mathbf{z}\mathbf{z}} + \Lambda$,
\begin{subequations}
\begin{align*}
\tilde{\mu}^{a}_d(z) &= K^a_{z\mathbf{z}_{ind}} \Sigma K^a_{\mathbf{z}_{ind}\mathbf{z}} \Lambda^{-1}{[\mathbf{y}]}_{\cdot,a} \, ,\\
\tilde{\Sigma}^{a}_d(z) & = K^a_{zz} - Q^a_{zz} + K^a_{z\mathbf{z}_{ind}} \Sigma K^a_{\mathbf{z}_{ind} z} \eqc
\end{align*}
\end{subequations}
with $\Sigma = {\left(K^a_{\mathbf{z}_{ind}\mathbf{z}_{ind}} + K^a_{\mathbf{z}_{ind} \mathbf{z}} \Lambda^{-1} K^a_{\mathbf{z} \mathbf{z}_{ind}}\right)}^{-1}$. Substituting a single inducing point corresponds to a single row and column changing in $\Sigma^{-1}$. The corresponding Cholesky factorizations can thus be updated efficiently~\cite{Nguyen-Tuong2009}.
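To illustrate the structure of these expressions, a sketch of the sparse posterior for one output dimension is given below. The squared-exponential kernel and all hyperparameters are assumed for illustration; the rank-1 update machinery is omitted. When the inducing inputs coincide with the training inputs, the expressions reduce to the full GP posterior, which the sketch can be checked against.

```python
import numpy as np

def rbf(A, B, ell=1.0, sf2=1.0):
    """Squared-exponential kernel matrix (assumed hyperparameters)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

def fitc_posterior(zq, Z, X, y, sn2=0.1):
    """Sparse posterior mean/variance at query points zq with inducing
    inputs Z and training data (X, y), mirroring the expressions above."""
    jitter = 1e-9 * np.eye(len(Z))
    Kzz_inv = np.linalg.inv(rbf(Z, Z) + jitter)
    Kzx, Kqz = rbf(Z, X), rbf(zq, Z)
    # Lambda: diagonal residual variance of the training points + noise
    Q_diag = np.einsum('ij,jk,ik->i', Kzx.T, Kzz_inv, Kzx.T)
    Lam = np.diag(rbf(X, X)) - Q_diag + sn2
    Sig = np.linalg.inv(rbf(Z, Z) + jitter + (Kzx / Lam) @ Kzx.T)
    mu = Kqz @ Sig @ ((Kzx / Lam) @ y)
    var = (np.diag(rbf(zq, zq))
           - np.einsum('ij,jk,ik->i', Kqz, Kzz_inv, Kqz)
           + np.einsum('ij,jk,ik->i', Kqz, Sig, Kqz))
    return mu, var
```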
\begin{figure}
\center
\setlength\figureheight{6cm}
\setlength\figurewidth{7.5cm}
\input{figures/minExampleSparseApprox.tex}
\vspace{-0.6cm}
\caption{Contour plots of the posterior variance of a GP for the full GP (top left) and dynamic sparse approximation (top right). The solid red line is the trajectory planned by an MPC, the dashed red line the trajectory of the previous time step used for the approximation, with inducing points indicated by black circles. The bottom plot shows the respective variances along the planned trajectory.}\label{fg:DSGP_toyEx}
\end{figure}
\subsection{Resulting Control Formulation for Autonomous Racing}
\label{ssc:Resulting Formulation}
We integrate the approximations presented in the previous sections into the learning-based MPC problem~\eqref{eq:opt_orig}, resulting in the following approximate optimization problem
\begin{mini!}
{U,V}{ \E \left(\sum_{i=0}^{N-1} l(\mu^x_i,u_i,\Theta_i,v_i) \right)}
{\label{eq:final_formulation}}{}
\addConstraint{\mu^x_{i+1} }{= f(\mu^x_{i},u_{i}) + B_d \tilde{\mu}^d(\mu^x_{i},u_{i})}
\addConstraint{\Theta_{i+1}}{ = \Theta_i + v_i}
\addConstraint{\mu^x_{i+1}}{ \in \bar{\mathcal{X}}(\bar{\Sigma}^x_{i+1},\Theta_{i+1})\label{eq:chanceConstrFinal}}
\addConstraint{u_i }{\in \mathcal{U}}
\addConstraint{\mu^x_{0}}{ = x(k), \, \Theta_0 = \Theta(k) \, ,}
\end{mini!}
where $i = 0,\ldots,N\!-\!1$. By reducing the learned model to the mean GP dynamics and considering approximate variance dynamics and simplified chance constraints, the problem is reduced to a deterministic nonlinear program of moderate dimension. \par
In the presented form, the approximate optimization problem~\eqref{eq:final_formulation} still requires an optimization over a large spline polynomial corresponding to the entire track.
Since evaluation of this polynomial and its derivative is computationally expensive, one can apply an additional approximation step and quadratically approximate the cost function around the shifted solution trajectory from the previous sampling time, for which the expected value is equivalent to the cost at the mean. Similarly, $\Theta_i$ can be fixed using the previous solution when evaluating the state constraints~\eqref{eq:chanceConstrFinal}, such that the spline can be evaluated separately from the optimization procedure, as done in~\cite{Liniger2015}.
\section{Simulation}
\label{sc:simulation}
We finally evaluate the proposed control approach in simulations of a race. The race car is simulated using system~\eqref{eq:contSys} with $g_c$ resulting from a random perturbation of all parameters of the nominal dynamics $f_c$ by up to $\pm 15\%$ of their original value. We compare two GP-based approaches, one using the full GP $d(x_i,u_i)$ with all available data points and one using a dynamic sparse approximation $\tilde{d}(x_i,u_i)$, against a baseline NMPC controller, which makes use of only the nominal part of the model $f_c$, as well as against a reference controller using the true system model, i.e.\ with knowledge of $g_c$.
\subsection{Simulation Setup}
\label{sc:simulation_setup}
We generate controllers using formulation~\eqref{eq:final_formulation}, both for the full GP and the dynamic sparse approximation with 10 inducing inputs. The inducing points are placed along the previous solution trajectory of the MPC problem with exponentially decaying density, putting additional emphasis on the current and near-future states of the car. The prediction horizon is chosen as $N = 30$ and we formulate the chance constraints~\eqref{eq:chanceConstrFinal} with $\chi^2_2(p) = 1$.
To guarantee feasibility of the optimization problem, we implement the chance constraint using a linear quadratic soft constraint formulation. Specifically, we use slack variables $s_i \geq 0$, which incur additional costs $l_s(s_i) = \Vert s_i \Vert^2_{q_s} + c_s s_i$. For sufficiently large $c_s$ the soft constrained formulation is exact, if feasible~\cite{Kerrigan2000}.
To reduce conservatism of the controllers, constraints are only tightened for the first 15 prediction steps and are applied to the mean for the remainder of the prediction horizon, similar to the method used in~\cite{Carrau2016}.\par
The system is simulated for one lap of a race, starting with zero initial velocity from a point on the centerline under white noise of power spectral density $Q_w = \frac{1}{T_s}\diag([0.001;0.001;0.1])$. The resulting measurements from one lap with the baseline controller are used to generate 350 data points for both GP-based controllers. Hyperparameters and process noise level were found through likelihood optimization, see e.g.~\cite{Rasmussen2006}. \par
To exemplify the learned deviations from the nominal system, Figure~\ref{fg:GPprediction} shows the encountered dynamics error in the yaw-rate and the predicted error during a lap with the sparse GP-based controller. Overall, the learned dynamics are in good correspondence with the true model and the uncertainty predicted by the GP matches the residual model uncertainty and process noise well. Note that the apparent volatility in the plot does not correspond to overfitting, but instead is due to fast changes in the input and matches the validation data.\par
Solvers were generated using FORCES Pro~\cite{FORCESPro} with a sampling time of $T_s = 30 \text{ ms}$ and the maximum number of solver iterations was limited to 75, which is sufficient to guarantee a solution of the required accuracy. All simulations were carried out on a laptop computer with a 2.6 GHz i7-5600 CPU and 12GB RAM\@. \par
\begin{figure}
\center
\setlength\figureheight{3cm}
\setlength\figurewidth{7cm}
\input{figures/ResultPredictionGP10.tex}
\caption{Prediction of the dynamic sparse GP with 10 inducing inputs during a race lap. Shown as black dots are the errors on the yaw rate under process noise as encountered at each time step. The blue line shows the dynamics error predicted by the GP\@. The shaded region indicates the 2-$\sigma$ confidence interval, including noise.}\label{fg:GPprediction}
\end{figure}
\subsection{Results}
\label{ssc:results}
To quantify performance of the proposed controllers we compare the lap time $T_l$ and the average squared slack of the realized states $\overline{s_0^2}$ corresponding to state-constraint violations. We furthermore state average solve times $\overline{T}_{\!c}$ of the NMPC problem and its $99.9$th percentile $T_c^{99.9}$ over the simulation run. To demonstrate the learning performance we also evaluate the average 2-norm error in the system dynamics $\overline{\Vert e \Vert}$, i.e.\ the difference between the mean state after one prediction step and the realized state, $e(k\!+\!1) = \mu^x_1 - x(k\!+\!1)$. \par
\begin{figure}
\center
\setlength\figureheight{7cm}
\setlength\figurewidth{6.5cm}
\input{figures/ResultTrajectories_noNoise.tex}
\caption{Resulting trajectories on the race track for simulations without process noise with baseline, reference and sparse GP-based controller.} \label{fg:res_no_noise}
\end{figure}
For direct comparison, we first evaluate controller performance in simulations without process noise. As evident in Figure~\ref{fg:res_no_noise}, the baseline controller is visibly suboptimal and unable to guarantee constraint satisfaction, even in the absence of process noise. The reference controller and the sparse GP-based controller (GP-10) perform similarly. Table~\ref{tb:res}(a) summarizes the results of the simulations without process noise. We can see that the full GP controller (GP-Full) matches the performance of the reference controller. It also displays only small constraint violations, while the reference controller exhibits some corner-cutting behavior leading to constraint violations. This is due to unmodeled discretization error, also evident in the dynamics error of the reference controller. The discretization error is partly learned by the GPs, leading to a lower error than even the reference controller. Overall, the sparse GP controller demonstrates performance close to that of the full GP controller, both in terms of lap time and constraint satisfaction, and is able to significantly outperform the baseline controller.\par
\begin{table}[h]
\center
\begin{minipage}{\linewidth} \caption{Simulation results}\label{tb:res}
\begin{subtable}{\linewidth} \caption{without process noise \hspace*{\fill}} \vspace{-0.1cm}
\begin{tabular}{cccccc} \toprule
Controller & $T_l$ [s] & $\overline{s_0^2}$ [$10^{-3}$] & $\overline{\Vert e \Vert}$ [-] & $\overline{T}_{\!c}$ [ms] & $T_c^{99.9}$ [ms]\\ \midrule
Reference & 8.64 & 4.50 & 0.18 & 9.4 & 19.1\\
Baseline & 9.45 & 4.77 & 1.20 & 10.8 & 20.6\\
GP-Full & 8.67 & 0.95 & 0.09 & 105.2 & 199.23\\
GP-10$^a$ & 8.76 & 1.77 & 0.16 & 12.3 & 26.9 \\ \bottomrule
\end{tabular}
\vspace{0.3cm}
\end{subtable}
\begin{subtable}{\linewidth} \caption{with process noise \hspace*{\fill}} \vspace{-0.1cm}\label{tb:res_noise}
\begin{tabular}{cccccc} \toprule
Controller & $T_l$ [s] & $\overline{s_0^2}$ [$10^{-3}$] & $\overline{\Vert e \Vert}$ [-] & $\overline{T}_{\!c}$ [ms] & $T_c^{99.9}$ [ms] \\ \midrule
Reference & 8.76 & 2.88 & 0.33 & 9.7 & 20.8 \\
Baseline$^b$ & 9.55 & 65.11 & 1.20 & 10.1 & 23.9\\
GP-Full & 8.80 & 0.68 & 0.23 & 102.0 & 199.4\\
GP-10$^a$ & 8.90 & 1.20 & 0.28 & 12.1 & 25.6 \\ \bottomrule
\end{tabular}
\vspace{0.2cm} \\
\footnotesize{${}^a$Requires an additional $\approx 2.5$ ms for sparse approximation.} \\
\footnotesize{${}^b$Eight outliers removed.} \vspace*{-0.3cm}
\end{subtable}
\end{minipage}
\end{table}
Table~\ref{tb:res}(b) shows the simulation results averaged over different process noise realizations. The values are averaged over 200 runs, except for $T^{99.9}_c$, which is the $99.9$th percentile of all solve times. Qualitatively, the observations for the noise-free case carry over to the simulations in the presence of process noise. Most strikingly, the baseline NMPC controller displays severe constraint violations under noise; in eight cases this even causes the car to leave the track completely. These runs were removed as outliers in Table~\ref{tb:res}(b). All other formulations tolerate the process noise well and achieve similar performance as in the noise-free case. The reference controller achieves slightly faster lap times than the GP-based formulations, which, however, come at the expense of higher constraint violations. By shaping the allowed probability of violation in the chance constraints~\eqref{eq:chanceConstrFinal}, the GP-based formulations allow for a trade-off between aggressive racing and safety. \par
The simulations underline the real-time capabilities of the sparse GP-based controller. While the full GP formulation has excessive computational requirements relative to the sampling time of $T_s = 30 \text{ ms}$, the dynamic sparse formulation is solved in similar time as the baseline formulation. It does, however, require the successive update of the sparse GP formulation, which in our implementation took an additional $2.5 \text{ ms}$ on average. Note that this computation can be done directly after the previous MPC solution, whereas the MPC problem is solved after receiving a state measurement at each sample time step. The computation for the sparse approximation thus does not affect the time until an input is applied to the system, which is why we state both times separately. With $99.9\%$ of solve times below $25.6 \text{ ms}$, a computed input can be applied within the sampling time of $T_s = 30\text{ ms}$, leaving enough time for the subsequent precomputation of the sparse approximation. \par
The results demonstrate that the presented GP-based controller can significantly improve performance while maintaining safety, approaching the performance of the reference controller using the true model. They furthermore demonstrate that the controller is real-time implementable and able to tolerate process noise much better than the initial baseline controller. Overall, this indicates fitness for a hardware implementation.
\section{Conclusion}
\label{sc:conclusion}
In this paper we addressed the challenge of automatically controlling miniature race cars with an MPC approach under model inaccuracies, which can lead to dramatic failures, especially in a high performance racing environment. The proposed GP-based control approach is able to learn from model mismatch, adapt the dynamics model used for control and subsequently improve controller performance. By considering the residual model uncertainty, we can furthermore enhance constraint satisfaction and thereby safety of the vehicle. Using a dynamic sparse approximation of the GP we demonstrated the real-time capability of the resulting controller and finally showed in simulations that the GP-based approaches can significantly improve lap time and safety after learning from just one example lap.
\appendices
\section{Uncertainty propagation for nonlinear systems}
\label{app:EKF_GP}
Let $\mu^x_i$ and $\Sigma^x_i$ denote the mean and variance of $x_i$, respectively. Using the law of iterated expectation and the law of total variance we have
\begin{align*}
\mu^x_{i+1} &= \E_{x_i}\left( \E_{d|x_i}\left( x_{i+1} \right) \right) \\
&= \E_{x_i} \left( f(x_i,u_i) + B_d \mu^d(x_i,u_i) \right) \\
\Sigma^x_{i+1} &= \E_{x_i}\left( \var_{d|x_i}\left( x_{i+1} \right) \right) + \var_{x_i} \left( \E_{d|x_i} \left( x_{i+1} \right)\right) \\
&= \E_{x_i}\left( B_d \Sigma^d(x_i,u_i) B_d^T \right) \\
&+ \var_{x_i} \left( f(x_i,u_i) + B_d \mu^d(x_i,u_i) \right) \eqd
\end{align*}
With first-order expansions of $f$, $\mu^d$ and $\Sigma^d$ around $x_i = \mu^x_i$, these can be approximated as~\cite{Candela2003}
\begin{align*}
\mu^x_{i+1} &\approx f(\mu^x_i,u_i) + B_d \mu^d(\mu^x_i,u_i) \eqc \\
\Sigma^x_{i+1} &\approx B_d \Sigma^d(\mu^x_i,u_i) B_d^T \\ &+ \nabla_{\!x} \tilde{f}(\mu^x_i,u_i) \Sigma^x_i {\left(\nabla_{\!x} \tilde{f}(\mu^x_i,u_i)\right)}^T
\end{align*}
with $\tilde{f}(\mu^x_i,u_i) = f(\mu^x_i,u_i) + B_d \mu^d(\mu^x_i,u_i)$.
\bibliographystyle{IEEEtran}